Networking | The Internet

IEEE Seeks Consensus on Ethernet Transfer Speed Standard

New submitter h2okies writes "CNET's News.com reports that the IEEE begins work today on a new standard for Ethernet data-transfer speeds. 'The standard, to be produced by the Institute of Electrical and Electronics Engineers, will likely reach data-transfer speeds between 400 gigabits per second and 1 terabit per second. For comparison, that latter speed would be enough to copy 20 full-length Blu-ray movies in a second.' The IEEE also notes that the bandwidth needs of the internet continue to double every year. Of what consequence will this new standard be if the last mile is still stuck on beep & creep?"
  • Hype! (Score:5, Funny)

    by fm6 ( 162816 ) on Monday August 20, 2012 @03:07PM (#41058105) Homepage Journal

    Ethernet transfers never use more than a fraction of available bandwidth. So it's 2 Blu-ray discs per second, 4 tops!

    • I would be happy just to have speeds shown as 'real world' results, rather than 'theoretical' limits. What good are these ratings if people in the real world never actually see them?

      • by fm6 ( 162816 )

        There's a good reason and a bad reason. The good reason is that theoretical limits are objective and reproducible; real world limits depend upon a host of factors.

        The bad reason is that the most impressive-sounding statistic is the one that sells.

          • Yes, but since protocol overhead will always reduce the throughput by a specific amount, they could simply factor that out. Wireless is a good example of what you are saying, since it varies so much with location, interference, etc., but even there the rating should be based on the best achievable 'real world' values rather than an unattainable theoretical limit.

          • by fm6 ( 162816 )

            overhead will always reduce the throughput by a specific amount,

            No it won't. Overhead depends on a variety of factors: cable quality, traffic levels, software.

            • I'm talking about protocol overhead.

              For example, all things being equal, a computer connected to a hub via a stock ethernet cable with a guaranteed link speed up and down should produce a result that's generally in the same area each time (hence the 'real world').

              It's not a difficult concept. We're not asking for a rating for every conceivable configuration, but best case real world numbers. WiFi theoreticals are nowhere near their real world numbers.

              • But running what? Are you measuring the speed of delivering Ethernet frames? Or of IPv4 packets? Or IPv6 packets? Or of the payload carried by TCP on either? (A rough per-layer breakdown is sketched below.)
              • by fm6 ( 162816 )

                I suppose you could probably get a consistent value if you were very specific about the physical setup and the benchmark. But it wouldn't be that much different from the theoretical limit, and wouldn't tell you jack about real-world use cases.

                Benchmarking is a complicated, controversial branch of computing. Any time you try to "prove" that one piece of hardware or software is faster than another, somebody who's selling competing technologies will show you benchmarks to "prove" that you're wrong. You're not

              • But when you're looking at layer 1, you can get those speeds in real world scenarios.
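
To put rough numbers on the per-layer question: here's a minimal sketch of how much of the line rate survives at each layer, assuming a standard 1500-byte MTU, untagged frames, and IPv4 + TCP headers with no options. All of these are assumptions; VLAN tags, IPv6, or jumbo frames shift the numbers.

# Rough per-layer goodput on an Ethernet link. Standard framing assumed:
# 7 B preamble + 1 B SFD + 14 B MAC header + 4 B FCS + 12 B inter-frame gap
# = 38 B of per-frame overhead on the wire, plus 20 B IPv4 and 20 B TCP
# headers (no options, no VLAN tag).
LINK_RATE_GBPS = 1.0                  # nominal line rate (gigabit Ethernet)
MTU = 1500                            # standard Ethernet payload in bytes

WIRE_OVERHEAD = 7 + 1 + 14 + 4 + 12   # preamble, SFD, MAC header, FCS, gap
IPV4_HDR = 20
TCP_HDR = 20

frame_on_wire = MTU + WIRE_OVERHEAD   # 1538 bytes per maximum-size frame

for label, payload in [
    ("Ethernet payload", MTU),
    ("IPv4 payload",     MTU - IPV4_HDR),
    ("TCP payload",      MTU - IPV4_HDR - TCP_HDR),
]:
    goodput_mbps = 1000 * LINK_RATE_GBPS * payload / frame_on_wire
    print(f"{label:17s}: {goodput_mbps:6.1f} Mbit/s of a 1 Gbit/s link")

Which puts a maximum-size-frame TCP stream at roughly 949 Mbit/s on gigabit: close to the theoretical limit, and close to the ~95% figure mentioned further down the thread.
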
      • by dgatwood ( 11270 )

        I would be happy just to have speeds shown as 'real world' results, rather than 'theoretical' limits. What good are these ratings if people in the real world never actually see them?

        Sadly, for most of us, the real-world speed of our 1 Tbps Ethernet connection will be the speed of the 1.5 Mbps DSL line that feeds it. All the fast LANs in the world won't help you if WAN speed improvement is blocked by telecoms that don't want to spend any money on their infrastructure.

    • It's pretty easy to max out a 100 Mbit Ethernet link. Gigabit is also doable with a bit of work. It's a bit harder to max out a 10G port, but it can be done with multiple queues and large packets. Once you hit 10G you really need to be using multiple queues spread across multiple CPUs and offloading as much as possible to hardware.

      • Gigabit isn't difficult at all. I max out my gigabit network between my main computer and my NAS quite easily. Well, using 95% of it anyhow -- over Windows shares, no less. I'm sure I could do better with a protocol that uses less overhead and better windowing. (A quick way to test the raw link is sketched at the end of this thread.)

      • by AaronW ( 33736 )

        I have no problem saturating 10G links, but then again I'm working on multi-core CPUs with 10-32 cores optimized for networking (the 10G interfaces are built into the CPU). I have a PCIe NIC on my desk with four 10GbE ports on it (along with a 32-core CPU).

        It's also neat when you can say you're running Linux on your NIC card (it can even run Debian).

        • What card is this?

          • by AaronW ( 33736 )

            Search for CN6880-4NIC10E [google.com]. It has a Cavium OCTEON CN6880 [cavium.com] 32-core CPU on it with dual DDR3 interfaces. It would take some work to make it run Debian (it requires running the root filesystem over NFS over PCIe or 10GbE). All of the changes to support Linux are in the process of being pushed upstream to the Linux kernel Git repository, and hopefully sometime in the future I will get enough time to start pushing my U-Boot bootloader changes upstream as well.

            All of the toolchain support is in the mainline GCC and bi
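
For anyone who wants to see how close their own link gets to those numbers, here's a minimal single-stream throughput check in the spirit of iperf. It's only an illustrative sketch under stated assumptions: the port number and 1 MiB chunk size are arbitrary choices, and a real tool such as iperf3 handles timing, parallel streams, and reporting far better.

# Minimal single-stream TCP throughput check, in the spirit of iperf
# (illustrative only; use a real tool such as iperf3 for serious numbers).
# Start "python3 tput.py --server" on one host, then run
# "python3 tput.py <server-ip>" on the other.
import argparse
import socket
import time

CHUNK = 1 << 20       # 1 MiB per send/recv call (arbitrary choice)
DURATION = 10         # seconds the client transmits for
PORT = 5001           # arbitrary port for this sketch

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            total = 0
            start = time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            elapsed = time.monotonic() - start
    print(f"received {total * 8 / elapsed / 1e9:.3f} Gbit/s from {addr[0]}")

def client(host):
    payload = bytes(CHUNK)
    with socket.create_connection((host, PORT)) as sock:
        deadline = time.monotonic() + DURATION
        sent = 0
        while time.monotonic() < deadline:
            sock.sendall(payload)
            sent += len(payload)
    print(f"sent {sent * 8 / DURATION / 1e9:.3f} Gbit/s")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--server", action="store_true", help="run as receiver")
    parser.add_argument("host", nargs="?", help="server address (client mode)")
    args = parser.parse_args()
    if not args.server and not args.host:
        parser.error("give a host to connect to, or --server")
    server() if args.server else client(args.host)

On a quiet gigabit LAN a single stream typically lands near the ~950 Mbit/s ceiling from the framing math above; pushing a 10G port to line rate usually takes multiple streams, multiple queues, and hardware offload, as noted earlier in the thread.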

  • by VMaN ( 164134 ) on Monday August 20, 2012 @03:07PM (#41058117) Homepage

    I think someone got their bits and bytes mixed up...

    • by JavaBear ( 9872 )

      That always happens.
      With a little overhead, 1 Tbit/s is at most 100 GiB a second: two Blu-rays.

    • by Anonymous Coward

      1 Tbit = 1000 Gbit = 125 GByte = 20 x 6.25 GByte.

      A 1080p feature film could be 6.25GB.

      I don't see the mix up. (Though I do agree that's a theoretical maximum with no accounting for overhead or other factors that reduce the effective performance).

    • It's much worse than that. Somebody's reading comprehension isn't quite up to par.

      FTFA: "enough to copy two-and-a-half full-length Blu-ray movies in a second."

      • Whooops!!

        Correction: "Updated 10:05 a.m. PT August 20 to correct the 1Tbps data-transfer speed in terms of Blu-ray disc copying times."

    • I think someone got their bits and bytes mixed up...

      They never said how big the Blu-rays actually were. (The common cases are worked out at the end of this thread.)

    • I think someone got their bits and bytes mixed up...

      Stop picking on Billy Van:

      http://www.youtube.com/watch?v=EntiJhQ9z_U [youtube.com]
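
Since the whole thread hinges on the arithmetic, here's the back-of-the-envelope version. The sizes are assumptions (25 GB single-layer disc, 50 GB dual-layer disc, and the ~6.25 GB 1080p file the AC above used), and no protocol overhead is included.

# Back-of-the-envelope check of the "Blu-rays per second" claim.
# Assumed sizes: 25 GB single-layer disc, 50 GB dual-layer disc,
# 6.25 GB for a compressed 1080p file. Protocol overhead ignored.
link_bits_per_s = 1e12                    # 1 Tbit/s
link_bytes_per_s = link_bits_per_s / 8    # 125 GB/s

for label, size_gb in [("6.25 GB 1080p file",      6.25),
                       ("25 GB single-layer disc", 25.0),
                       ("50 GB dual-layer disc",   50.0)]:
    print(f"{label}: {link_bytes_per_s / (size_gb * 1e9):.1f} per second")

So the article's corrected figure of about two and a half per second corresponds to dual-layer discs, and the original "20 movies" claim only works if a "movie" is a ~6 GB file rather than a full disc.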

  • The MPAA will be putting the kibosh on that.

  • by jandrese ( 485 ) <kensama@vt.edu> on Monday August 20, 2012 @03:17PM (#41058261) Homepage Journal
    How important is 400G to the last mile? You might as well ask how important a new high-bypass turbine engine for jumbo jets will be to my motorcycle. It's for a totally different market. We're just barely getting to the point where it starts to make sense for early adopters to get 10G Ethernet on their ridiculously tricked-out boxes (and industry has been using it for backhaul for some time now), and 1G Ethernet is still gross overkill for the majority of users. We have at least gotten to the point where 10Mb Ethernet is too slow, however.
    • by Shatrat ( 855151 ) on Monday August 20, 2012 @03:42PM (#41058619)

      Consequences to me in long haul fiber optic transport? Massive.
      Depending on how they implement 400G and Terabit, it may affect the transport systems I deploy today, given that those speeds will likely require gridless DWDM, which is currently just on the roadmap for most vendors (rough spectrum math is sketched just after this comment).
      Then, once it does come out, if our infrastructure is ready for it we will probably be able to deploy a Terabit link for the same price as 3 or 4 100G links. By that time 100G will start feeling a little tight anyway, if we keep up the 50%-a-year growth rate.

      There are no consequences to the last mile, for the same reason 100G has no consequences in the last mile.
      Even 10G I only see used in the last mile to large customers like wireless backhaul or healthcare.
      It's a silly summary but still an important topic.
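
A rough illustration of why 400G and Terabit are expected to need a gridless (flexible-grid) DWDM layer, as mentioned above. The sketch assumes a net spectral efficiency of about 4 bit/s/Hz, in the ballpark of dual-polarization 16QAM coherent systems, and the classic 50 GHz fixed-grid slot; both figures are assumptions, not anything from the article or the comment.

# Why 400G / Terabit superchannels push past the fixed 50 GHz DWDM grid.
# Assumed net spectral efficiency: ~4 bit/s/Hz (DP-16QAM-class coherent);
# 50 GHz is the classic ITU fixed-grid channel spacing.
SPECTRAL_EFFICIENCY = 4.0     # bit/s per Hz (assumption)
FIXED_SLOT_GHZ = 50

for rate_gbps in (100, 400, 1000):
    needed_ghz = rate_gbps / SPECTRAL_EFFICIENCY   # Gbit/s over bit/s/Hz -> GHz
    slots = needed_ghz / FIXED_SLOT_GHZ
    print(f"{rate_gbps:4d}G needs ~{needed_ghz:.0f} GHz of spectrum "
          f"(~{slots:.1f} fixed 50 GHz slots)")

Under these assumptions 100G fits comfortably inside a single fixed slot, which is why it deploys on today's grids, while 400G and Terabit spill across several slots, which is where flexible-grid superchannels come in.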

    • I'm sure they could put 400G last-mile to good use.

      But yeah, for most of us, not so much, at least not this half-decade.

    • I bought CAT6A bulk for my apartment, and have a star layout with wall panels in every room. The termination is probably not up to spec, so I expect slightly lower speeds. Then again, it's only a home network.

      The point of going CAT6A was to avoid (or at least delay) upgrade.

      So far, CAT6A equipment is nowhere to be found in my price-range. And laptop hard disks are still the number one bottleneck. Going all SSD on OS disks and 7200rpm on the NAS.

  • When will the standard become final?

    If it will become final by Christmas, I'll give you a number I can live with.

    If it won't become final for 12 months after that, I'll give you a higher number.

  • I will accept nothing less than 1 zillion bits per second.

    • Pff, I will accept nothing less than a closed timelike curve between my neural implant and every server in existence. Information delivered straight to my brain just before I ask for it.

      • by Anonymous Coward

        Well, at least there is plenty of room for it.

  • Of what consequence will this new standard be if the last mile is still stuck on beep & creep?

    We're gonna need a faster station wagon!

  • http://www.computerworld.com/s/article/9151159/Facebook_sees_need_for_Terabit_Ethernet [computerworld.com] Companies have been asking for 400G/1T Ethernet for years now, and they are just now forming a group to figure it out?
  • by Above ( 100351 ) on Monday August 20, 2012 @03:53PM (#41058751)

    Last time around there was a question about 40GE vs. 100GE. The server guys largely (though not exclusively) pushed for a 40GE standard for a number of reasons (cost, time to market, cabling issues, and bus throughput of the machines), and the network guys pushed to stay with 100GE. Some (pre-standard?) 40GE made it out the door first, but it's not a big enough jump (you can just LAG 4x10GE more cheaply), so there is no real point. 100GE is starting to gain traction, since doing a 10x10GE LAG causes reliability and management issues. (One concrete limitation of LAG is sketched at the end of this thread.)

    This diversion probably delayed 100GE getting to market by 12-24 months, and the vast majority of folks, even server folks, now think 40GE was a mistake.

    Why is the IEEE even asking this question again? The results are going to be basically the same, for basically the same reasons. 1TbE should be the next jump, and they should get working on it pronto.

    • by mvar ( 1386987 )

      Why is the IEEE even asking this question again? The results are going to be basically the same, for basically the same reasons. 1TbE should be the next jump, and they should get working on it pronto.

      Because the companies that make the hardware are going to sell more modules :-P
      I can't understand why the author even mentions laptops and PCs in this article. First make sure you can utilize the existing 1 Gbps technology, then see how to implement faster interfaces. Right now the bottleneck in home Ethernet is slow hard drives and cheap "gigabit" NICs that underperform.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      The confusion between 40G Ethernet and 100G Ethernet is vast. But the actual reason for the standard has nothing to do with time-to-market or technological limitations beyond 40G. The 40G Ethernet standard is designed to run Ethernet over telco OC-768 links. This standard allows vendors to support OC-768 with the same hardware they use in a 100 Gbps Ethernet port.
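
One concrete way to see why a 10x10GE LAG is not the same thing as a single 100GE port: link-aggregation hashing pins each flow to one member link, so no single flow can ever exceed a member's 10G rate. The hash below is purely illustrative; real switches hash on some combination of MAC, IP, and port fields.

# Why a 10x10GE LAG is not a 100GE port: the LAG hash pins each flow to one
# member link, so a single flow can never exceed that member's 10G rate.
# The hash below is illustrative only; real gear hashes MAC/IP/port tuples.
import hashlib

MEMBERS = 10            # 10 x 10GE bundle
MEMBER_GBPS = 10

def member_for_flow(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % MEMBERS

# Ten distinct flows spread (unevenly) across the bundle...
flows = [("10.0.0.1", "10.0.1.1", 40000 + i, 5001) for i in range(10)]
print("flow -> member link:", [member_for_flow(*f) for f in flows])

# ...but a single large flow always lands on the same member.
elephant = ("10.0.0.1", "10.0.1.1", 40000, 5001)
print("elephant flow stays on member", member_for_flow(*elephant),
      f"and tops out at {MEMBER_GBPS} Gbit/s")

Uneven distribution of flows across members is arguably part of the management headache mentioned above as well.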

  • For comparison, that latter speed would be enough to copy 20 full-length Blu-ray movies in a second.'

    Someone doesn't understand the difference between bits and bytes.

  • "The IEEE also reports on how the speed needs of the internet continue to double every year. Of what consequence will this new standard be if the last mile is still stuck on beep & creep?"

    None whatsoever, since Ethernet is a LAN protocol, not a WAN protocol. It will be used in data centers that require big pipes between servers, and it may compete with Fibre Channel for access to storage.

  • by Anonymous Coward

    I have been waiting 5+ years for 10 gigabit Ethernet over copper to fall in price the way 100 Mbps Fast Ethernet and 1 gigabit Ethernet did, but it hasn't happened. InfiniBand adoption has grown rapidly in the last few years. 12x FDR InfiniBand promises ~160 gigabit speeds, comparable to 16-lane PCI Express version 3 (rough numbers below). Maybe the IEEE should come up with cheap 2.5 gigabit Ethernet, and give up on higher-speed copper networking.

    As for long-distance optical, I want them to cram as many lasers into a single-mode fiber as ec
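
The "comparable" claim in the parent roughly checks out under the standard encoding figures (FDR InfiniBand signals at 14.0625 Gbit/s per lane with 64b/66b encoding; PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding; both are the published rates, and protocol overhead is ignored):

# Rough usable bandwidth of 12x FDR InfiniBand vs. a PCIe 3.0 x16 slot.
# Standard figures assumed: FDR = 14.0625 Gbit/s per lane with 64b/66b
# encoding, PCIe 3.0 = 8 GT/s per lane with 128b/130b encoding.
fdr_lanes, fdr_lane_raw = 12, 14.0625          # Gbit/s signalling per lane
fdr_data = fdr_lanes * fdr_lane_raw * (64 / 66)
print(f"12x FDR InfiniBand: ~{fdr_data:.0f} Gbit/s of data")   # ~164 Gbit/s

pcie_lanes, pcie_lane_raw = 16, 8.0            # GT/s per lane
pcie_data = pcie_lanes * pcie_lane_raw * (128 / 130)
print(f"PCIe 3.0 x16:       ~{pcie_data:.0f} Gbit/s of data")  # ~126 Gbit/s

Close enough to call comparable, with the PCIe 3.0 x16 slot actually being the slightly tighter pipe of the two.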

"If it ain't broke, don't fix it." - Bert Lantz

Working...