
IEEE Releases 802.3ba Standard

An anonymous reader writes "IEEE announced the ratification of IEEE 802.3ba, a new standard governing 40Gbps and 100Gbps Ethernet operation. An amendment to the IEEE 802.3 Ethernet standard, and the first standard ever to simultaneously specify two new Ethernet speeds, IEEE 802.3ba paves the way for the next generation of high-rate server connectivity and core switching. The new standard will act as the catalyst needed for unlocking innovation across the greater Ethernet ecosystem. IEEE 802.3ba is expected to trigger further expansion of the 40 Gigabit and 100 Gigabit Ethernet family of technologies by driving new development efforts, as well as providing new aggregation speeds that will enable broader 10Gbps Ethernet network deployments."
  • You'll still be stuck on 3Mb/512kb DSL.

  • I know who can use this type of network speed: the guys trying to make a quadrillion-flop computer [slashdot.org]. What good is all that CPU horsepower if it can't be used to serve up, um, web pages?
  • Much welcomed tech (Score:5, Interesting)

    by mnmn ( 145599 ) on Wednesday June 23, 2010 @03:44PM (#32670472) Homepage
    It's interesting how this will increase the adoption of iSCSI storage, yet the original reason to go to iSCSI will be lost since fiber cables will have to be laid.

    Either way, 1Gbit Ethernet is beginning to feel like a bottleneck now that storage and other bottlenecks are being removed.

    It'll take some time between ratification and cheap D-Link switches...
    • by afidel ( 530433 ) on Wednesday June 23, 2010 @03:49PM (#32670592)
      Nah, thanks to DCE most storage will stay FC using FCoE. What this will do is eventually allow us to get to ridiculous port counts in top-of-rack and end-of-row switches and uplink all that capacity without requiring a 6" diameter bundle of trunking cables. It will also allow 100Gb to be usable at metro distances, since it only requires 4 pairs instead of 10.
    • Re: (Score:2, Informative)

      The original goal of iSCSI wasn't to avoid using fiber; it was to avoid using Fibre Channel, which requires the creation of a second network, dedicated to storage, that is managed separately from the standard data network.
    • Re: (Score:1, Interesting)

      by whit3 ( 318913 )

      It's interesting how this will increase the adoption of iSCSI storage, yet the original reason to go to iSCSI will be lost since fiber cables will have to be laid.

      That seems a tad disingenuous. The real reason for iSCSI was a Microsoft price structure that made a network file service very expensive unless it went in through the 'disk-on-SCSI-bus' back door.

      Linux and iSCSI were a way around the high cost of an MS server/client system. None of the Linux-only or Macintosh network systems were so encumbered, and they worked quite well without any iSCSI.

      • by bertok ( 226922 ) on Wednesday June 23, 2010 @06:27PM (#32672082)

        It's interesting how this will increase the adoption of iSCSI storage, yet the original reason to go to iSCSI will be lost since fiber cables will have to be laid.

        That seems a tad disingenuous. The real reason for iSCSI was a Microsoft price structure that made a network file service very expensive unless it went in through the 'disk-on-SCSI-bus' back door.

        Linux and iSCSI were a way around the high cost of an MS server/client system. None of the Linux-only or Macintosh network systems were so encumbered, and they worked quite well without any iSCSI.

        WTF are you talking about? Why was this modded up? Is it just because he's saying something negative about Microsoft?

        I've worked in Microsoft Windows server environments for a decade, and I've never heard of SCSI-specific MS licensing, or any kind of special licensing at all for file servers.

        While it's true that a Linux server in general is cheaper from a licensing standpoint (hard to compete with free), that has nothing to do with iSCSI, SCSI, or FC.

        The reason iSCSI is popular is that it's simpler to set up, it halves the number of ports and switches required for a fully redundant server environment (minimum 2 ports and 2 switches vs. 4 and 4), it has real authentication instead of the worthless "zones" crap in the FC world, it provides user-friendly names instead of numeric IDs, it supports encryption, 10Gb Ethernet can outperform even 8Gb FC, and even old 1GbE switches can perform adequately if port trunking is used properly.

        What this all boils down to is that iSCSI is both better and cheaper than FC. Once popular SAN arrays from big vendors start to appear with 10GbE iSCSI as standard instead of an expensive "option", FC will start to die a rapid and well-deserved death.

        • Hey man, don't ruin this amateur Linux admin's fantasy with real-world experience, that's just cruel!

          MS licensing is obscene, that's for sure, but they've never tied anything to the hard disk. It's all installs, users, and CPUs, with variations for each category.

      • Re: (Score:3, Informative)

        by mnmn ( 145599 )
        I do not believe you've actually used iSCSI, at all.

        The performance numbers are very different, and so are the technologies: Microsoft file sharing is file-level and iSCSI is block-level. That means that with an iSCSI card, the machine can treat volumes as local disks and install any OS on them.

        Secondly, you're confusing iSCSI with NFS. NFS was freely available even back on Windows NT4. However, it was not created to counter Microsoft; it was ALREADY there.

        iSCSI until recently has been the only technology that provides
    • by swb ( 14022 )

      I think 10 gig Ethernet has been an option for a while now. I'm almost positive one of the sales droids spouted something about EqualLogic shipping 10 gig iSCSI SANs.

      AFAIK, most small-to-midsize organizations running iSCSI SANs also do virtualization and thus don't have a ton of hosts to connect, so the fiber part is less of a pain than it might seem: they still get the "IP" part of iSCSI and can leverage cheap, still-useful 1 gig connectivity elsewhere.

      Plus 10 gig can do copper. But there won't be a

    • It's interesting how this will increase the adoption of iSCSI storage, yet the original reason to go to iSCSI will be lost since fiber cables will have to be laid.

      The spec includes 40Gbit and 100Gbit over copper via twinaxial cables, so you do have to make new runs of cable but you don't have to take the fiber hit when you do. Cat 6 definitely won't cover it though, I'm afraid.

    • by jon3k ( 691256 )
      FWIW, you can do 10Gb on copper.
  • What I remember most fondly about CompuServe on my 300 baud modem and Commodore 64 was the lack of ads ...
    • by MrEricSir ( 398214 ) on Wednesday June 23, 2010 @03:51PM (#32670618) Homepage

      Yes, but the porn was low-res and slow to download. So it's a double-edged sword.

      • C-64 porn (Score:4, Funny)

        by Tetsujin ( 103070 ) on Wednesday June 23, 2010 @05:55PM (#32671850) Homepage Journal

        Yes, but the porn was low-res and slow to download. So it's a double-edged sword.

        Still, I think you're underrating the merits of the slow reveal... I mean, as the image file was loaded byte by byte onto the computer's memory, filling the display with that lustworthy graphical data, gradually revealing more and more, until you had a naked woman on your screen in 320x200 glory, 1bpp plus 4 bit colors, foreground and background, per 8x8 character cell... The five minute wait for the elusive delights to be laid plain was like a striptease...

        And when I say 5-minute wait, that's how long it took to load an image from disk. Modem would take longer. :)

    • And yet in 5-10 years, even 100 Gb/s will probably not be fast enough.

      • by Surt ( 22457 ) on Wednesday June 23, 2010 @04:41PM (#32671230) Homepage Journal

        For end users, 100Gb/s is almost 'enough'. It's short (by about 2.5x) of the speed needed to stream uncompressed video at the highest resolution anyone is likely to seriously consider, at 240Hz. Once you hit that point, you just remote your applications to wherever the data is and forget about moving data ever again, assuming, of course, that the data is close enough to you to avoid any latency issues for interactivity.

        • What about stereo? Multiple users?

          • by Surt ( 22457 )

            Stereo is covered by the 240hz. But the multiple users .. yeah, for a family of 4, I suppose you might want to multiply by ... 4.

            • yeah, for a family of 4, I suppose you might want to multiply by ... 4.

              I think you're still fine. It would be fair to use something like REDCODE Raw [wikipedia.org] as an upper limit for home viewership at 10GB per minute. It actually costs more to process uncompressed video these days, so nobody would put money into such silicon.

              Presumably for deployed stereo they'll come up with a 'joint-stereo'-like algorithm that doesn't duplicate the data-rate.

              And a good LDS family can use a bonded pair.

        • uncompressed video? 240 frames/second?

          I'll assume you're joking and making fun of the crazy videophiles who think something like this is necessary. So well played, good sir.

          • by Surt ( 22457 )

            240Hz is needed for stereo 3D at 120Hz. 120Hz is the upper limit of detectability for about 98% of the population. 60Hz is choppy for almost 30%.

            My point was just to figure out roughly where you could guarantee that even the videophiles would be unlikely to complain about the quality. 120Hz is generally well received by videophiles, and unfortunately you do have to multiply whatever rate you pick by 2 for stereo 3D implementations.


            • My point was just to figure out roughly where you could guarantee that even the videophiles would be unlikely to complain about the quality. 120Hz is generally well received by videophiles, and unfortunately you do have to multiply whatever rate you pick by 2 for stereo 3D implementations.

              Being a videophile isn't about actual perception, it's about being superior. It's a dick-measuring contest with specifications. Give them a maximum perceivable specification, and they'll imagine their way out of it.

            • by bbn ( 172659 )

              Please post your math...

              240 hz * 1920 px * 1080 px * 24 bit = 11,943,936,000 bit/s = 12 Gbit/s.

              Seems to me that you could watch multiple 1080p uncompressed videos at 240 hz at 100 Gbit/s.

              • by Surt ( 22457 )

                240Hz * 7,680 * 4,320 (Super Hi-Vision: http://en.wikipedia.org/wiki/Super_Hi-Vision [wikipedia.org]) * 64 bit (not 24, you don't want to have to lose accuracy in blend) = 509 Gbit/s. I was actually off by a factor of 4 because I thought the 8K format was 4x2, not 8x4, before I double-checked. So, sorry, it's actually worse than I thought!
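
                Both calculations are easy to sanity-check; here is a minimal Python sketch of the same arithmetic (the 24- and 64-bits-per-pixel figures are the posters' own assumptions, not values fixed by any standard):

                    # Raw bandwidth of uncompressed video: width * height * bits-per-pixel * refresh rate.
                    def uncompressed_gbps(width, height, bits_per_pixel, hz):
                        return width * height * bits_per_pixel * hz / 1e9

                    # bbn's 1080p case: ~11.94 Gbit/s, i.e. roughly the 12 Gbit/s stated above.
                    print(uncompressed_gbps(1920, 1080, 24, 240))

                    # Surt's 8K Super Hi-Vision case: ~509.6 Gbit/s.
                    print(uncompressed_gbps(7680, 4320, 64, 240))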

        • Even with Gigabit and 10:1 wavelet compression you're pretty much dandy. 10Gb, I think, would be good enough with adequate compression.

          • by Surt ( 22457 )

            Compression is horrible for a lot of content, so videophiles will insist on uncompressed for some applications.

        • It's never safe to assume that there is a "good enough" ceiling of bandwidth, for personal or enterprise use.

          I know you specifically went for "end users", but honestly? This stuff is also geared at corporate. Think about it. Do you really need SC/ST fiber when you can replace it with Ethernet? Eventually, as speeds increase along with better storage, more people can host servers themselves. What is enterprise now becomes home user. Easy examples: home theatre PCs, p2p/bittorrent, more comp

          • by Surt ( 22457 )

            My point was merely that there is a concretely definable point at which, barring latency issues, you no longer need to send data to the end user, and can instead remote the output losslessly and do the processing wherever the data lives. This is at most one order of magnitude past 100Gb/s.

      • by afidel ( 530433 )
        10Gb Ethernet is 8 years old and it's more than fast enough for all but a niche of applications. Heck, even with high consolidation ratios, most VMware servers deployed today don't need a 10Gb Ethernet port. It's more useful in channelized form, a la HP Flex10 or the Palo adapter in Cisco's UCS systems, where you can break out specific chunks of bandwidth for various purposes.
    • Re:More ads faster! (Score:4, Informative)

      by geekoid ( 135745 ) <{moc.oohay} {ta} {dnaltropnidad}> on Wednesday June 23, 2010 @03:52PM (#32670636) Homepage Journal

      I would rather have the ads and my 15Mb line than my old 1200 baud connection to CompuServe. We only use 300 baud for internal stuff.

      In 10 minutes I can download some porn, whack off, and be asleep. In those days it was hours just to get a 5-second clip.

      What... TMI?

      • Re: (Score:3, Funny)

        by value_added ( 719364 )

        In 10 minutes I can download some porn, whack off, and be asleep. In those days it was hours just to get a 5-second clip.

        Hmm. I could write that the above is an example of how user contributions on a site like Slashdot can offer recommendations to the average reader that are both informative and practical. On the other hand, I could write something to the effect that what you wrote provides more information than most of us asked for, or want.

        I suspect both of those are too subtle, so I'll

  • Seriously? (Score:5, Funny)

    by Pojut ( 1027544 ) on Wednesday June 23, 2010 @03:47PM (#32670544) Homepage

    I just finally upgraded all of the connections in my house to Gigabit Ethernet, you fucking clod you!

  • I can't help but wonder what you could actually use 100Gbit/s for. I mean, to the best of my knowledge (which is not all that vast, I admit), you'd be hard-pressed to find a storage unit that can handle these sorts of speeds.

    • Re:Disc speeds (Score:4, Interesting)

      by afidel ( 530433 ) on Wednesday June 23, 2010 @03:52PM (#32670634)
      100Gb isn't for server-to-server or server-to-storage connections today; it's for network aggregation (switch-to-switch ISLs). 40Gb is there for server-to-storage in some high-end configurations.
      • by dave562 ( 969951 )

        Another thing to consider is that "100Gb" is also a measure of total switch bandwidth. While a switch might be able to handle 100Gb, it won't be able to handle 100Gb on every port at the same time. Even on a 12-port switch that is less than 10Gb per port. That is still a lot of bandwidth, but you can obviously predict how it will degrade as the port count increases.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      European Internet exchange points (IXPs) such as LINX and AMS-IX are eagerly awaiting 100GE. There are only so many 10GE interfaces you can aggregate together between large chassis-based switches.

    • Interlinks. Router to switch, switch to switch, ISP to ISP, etc.

    • Re: (Score:3, Insightful)

      by Surt ( 22457 )

      SSDs are going to hit 6 Gbit/s in the next year or so. Multiply by 17 devices on a SAN and you're done.

      • by bertok ( 226922 )

        SSDs are going to hit 6 Gbit/s in the next year or so. Multiply by 17 devices on a SAN and you're done.

        Those are consumer grade devices. Many SSDs are already well above 10Gbit speeds, and I fully expect 20Gbit in a single PCI-e card this year or early next year. Just 5 of those could saturate 100Gbit!
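
        A rough Python sketch of that saturation math (the per-device rates are the ones claimed in this thread, not measured figures):

            import math

            LINK_GBPS = 100  # one 100GbE link

            # Devices needed to saturate the link at each claimed per-device rate.
            for name, gbps in [("6 Gbit/s consumer SSD", 6),
                               ("20 Gbit/s PCIe SSD", 20)]:
                print(f"{name}: {math.ceil(LINK_GBPS / gbps)} devices")

            # 6 Gbit/s consumer SSD: 17 devices
            # 20 Gbit/s PCIe SSD: 5 devices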

        • Re: (Score:3, Insightful)

          by Surt ( 22457 )

          True, though many a small business has a SAN built on consumer grade devices. My point was exactly that the low end will be pushing up against this limit all too soon.

      • by guruevi ( 827432 )

        The interconnect will be 6Gbit/s, and the highest interconnects I've seen commercially used are bonded 4 * 10Gbit/s (40Gbit/s), mainly for redundancy and latency. At 50MB/s (400Mbit/s) per device you'll need at least 100 of them to fill the bandwidth - and enterprise SSDs don't sell for $200/32GB, try $2000.

        • by Surt ( 22457 )

          I'm not sure who you're replying to ... I didn't say anything about pricing. And even consumer level SSDs are already at 250MB/s, not 50.

    • "You couldn't" is the simple answer, unless perhaps you run the network at CERN. This is Tier 1 territory at the moment; eventually your corporate backbone, but probably never the desktop. It just means you can run 10x10Gb networks through a single cable. So unless you have the massive needs of, say, a large Internet video hosting site, you'll probably never need anything like this. In time I would expect it to filter down to medium-sized business, by which time, no doubt, 1Tb/s will have been standardised, if n
    • Re: (Score:3, Insightful)

      by Kjella ( 173770 )

      Delivering 100 Mbit/s Internet to 1000 people before over-subscription seems like a nice application. Unless you're in the US in which case it probably covers New York.

    • I can't help but wonder what you could actually use 100Gbit/s for. I mean, to the best of my knowledge (which is not all that vast, I admit), you'd be hard-pressed to find a storage unit that can handle these sorts of speeds.

      Just depends on what you consider a "storage unit," and what you are willing to pay for it.

      If "storage unit" means a hard drive, then no.

      If "storage unit" means a big box in a data center with room for hundreds of drives, then yes you will be able to find interesting uses for this speed ri

    • Look up Storage Area Network and Trunking.

      There is never, ever such a thing as "too much bandwidth". You're just thinking too small, that's all.

    • you'd be hard-pressed to find a storage unit that can handle these sorts of speeds.

      If it were needed "NOW" it would be getting manufactured and sold NOW. It's not. It's just now getting standardized, so the hardware can be developed and come out at a reasonable price a few years in the future, when it will in fact be needed.

      We've had 10GBit ethernet for quite some time now, and yet the cards still cost $1,000 a pop. So if you could find a use for that much speed (and many do) you might find the cost proh

  • About Time! (Score:3, Funny)

    by Above ( 100351 ) on Wednesday June 23, 2010 @03:53PM (#32670652)

    I've been waiting to connect to my 8M Cable modem with 100GE for a while now. Finally, no more bottleneck!

    • by vlm ( 69642 )

      I've been waiting to connect to my 8M Cable modem with 100GE for a while now. Finally, no more bottleneck!

      The inter-router links that connect your CMTS back at the headend might, eventually, be 100GE. 100GE would be about 12K customers at full blast. With reasonable oversubscription ratios, figure the headend for a small city, or "a major portion" of a large city.
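
      The back-of-the-envelope version, as a small Python sketch (the 20:1 oversubscription ratio is an illustrative guess, not a figure from the comment):

          LINK_MBPS = 100_000   # 100GE uplink, in Mbit/s
          MODEM_MBPS = 8        # one 8Mb cable modem

          full_blast = LINK_MBPS // MODEM_MBPS
          print(full_blast)            # 12500 customers with every modem saturated

          OVERSUB = 20                 # hypothetical 20:1 oversubscription ratio
          print(full_blast * OVERSUB)  # 250000 customers -- small-city scale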

  • That's:

    9102 full 3.5" floppy disks (1.44MB)
    18 full CDs (700MB)
    1 full DVD (8.54GB)

    Every second, with room to spare (I just counted complete transfers).

    Of course, I'm still waiting on 10Gb to be affordable for LAN use and barely get 10Mb to the WAN, so I'm sure the various **AAs aren't afraid of this for now.

    • by HTH NE1 ( 675604 )

      That's:

      9102 full 3.5" floppy disks (1.44MB)
      18 full CDs (700MB)
      1 full DVD (8.54GB)

      Every second, with room to spare (I just counted complete transfers).

      CD and DVD capacities and transfer rates are measured in metric units, and 1.44 "MB" floppies are a combination of one metric and one binary measure (1.44 "MB" * 1024000 bytes/"MB"). Still, 8 bits per byte, so 100 Gb/s is 12.5 GB/s.

      Using the correct units, I get:

      1 DVD
      17 CDs
      8477 floppies

      Consider that a 1.44 "MB" floppy is defined using two different definitions for a kilobyte: a 1000 B/KB factor and a 1024 B/KiB factor.

      (1.44 "MB"/floppy * 1024000 bytes/"MB" == 1474560 B/floppy; / 1,000,000,000 bytes/GB == .00147456 GB/f

      • CD and DVD capacities and transfer rates are measured in metric units, and 1.44 "MB" floppies are a combination of one metric and one binary measure (1.44 "MB" * 1024000 bytes/"MB"). Still, 8 bits per byte, so 100 Gb/s is 12.5 GB/s.

        Using the correct units, I get:

        1 DVD
        17 CDs
        8477 floppies

        Consider that a 1.44 "MB" floppy is defined using two different definitions for a kilobyte: a 1000 B/KB factor and a 1024 B/KiB factor.

        (1.44 "MB"/floppy * 1024000 bytes/"MB" == 1474560 B/floppy; / 1,000,000,000 bytes/GB == .00147456 GB/floppy; 100 Gb/s * 1B/8b == 12.5 GB/s; 12.5 GB/s / .00147456 GB/floppy > 8477 floppies/sec).

        I used the following capacities in my calculations:

        3.5" Floppy: 1,474,560 bytes (11,796,480 bits)
        80 minute CD-R: 360,000 sectors at 2,048 bytes each (Mode 1) = 737,280,000 bytes (5,898,240,000 bits)
        DVD+R DL: 4,173,824 sectors at 2,048 bytes = 8,547,991,552 bytes (68,383,932,416 bits)

        Our discrepancies in numbers seem to come from the fact that I did my math lazily using Google Calculator and queries such as "100 gigabits / X bytes". Google uses the binary meaning of 107,374,182,400 bits as the value for 100
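
        Those counts are straightforward to reproduce; here is a short Python sketch using the capacities listed above:

            LINK_BYTES_PER_SEC = 100e9 / 8   # 100 Gb/s = 12.5 GB/s (decimal units)

            media = {
                '3.5" floppy': 1474560,          # 1.44 "MB" * 1024000 bytes
                "80 min CD-R": 360000 * 2048,    # 737,280,000 bytes (Mode 1)
                "DVD+R DL": 4173824 * 2048,      # 8,547,991,552 bytes
            }

            # Complete transfers per second at 100 Gb/s.
            for name, size in media.items():
                print(f"{name}: {LINK_BYTES_PER_SEC / size:.2f} per second")

            # -> ~8477.1 floppies, ~16.95 CDs, ~1.46 DVDs per second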

    • Yeah, but how many Libraries of Congress per Fortnight? (LoC/Fn)

      Go for it, math boy! Show us what you got!

  • by elFarto the 2nd ( 709099 ) on Wednesday June 23, 2010 @04:10PM (#32670918)
    The MTU is still 1500 bytes though :(
  • But are we talking about 100Gb/s over copper or fiber?

    -Rick

    • You can run it a very short distance with fat copper cables, but almost everyone will use fiber.

    • by Shimbo ( 100005 ) on Wednesday June 23, 2010 @04:50PM (#32671350)

      But are we talking about 100Gb/s over copper or fiber?

      -Rick

      Fibre and short-haul (~10m) copper, at least for the current standard. Historically, there's usually a lag of several years between a new Ethernet standard and a 100m copper version.

      I'm a bit sceptical of folks who say there'll never be a copper version, because I've heard that tale often enough before. I confidently predict it will be the Year of Linux on the Desktop before it's the Year of Fibre to the Desktop.

      • by afidel ( 530433 )
        The copper solution requires twinax; might as well run fiber, as it's easier to deal with at length and can actually fit into the existing raceways (twinax is huge). There's not enough bandwidth and S/N margin in even Cat6A to do 100Gb at 100m; you need Cat7A, which was only approved late last year and requires a full plant re-work. Who's going to do that when an OM3 fiber installation should be good all the way to 1000Gb?
        • Re: (Score:3, Informative)

          by Bigjeff5 ( 1143585 )

          Twinax isn't too big; it's the bundle of 10 twinax runs you need for 100Gbit that's huge.

          I'm a little confused, though. Cat6 is capable of 10GbE, so why not bundles of 4 and 10 Cat6 for the standard as well, instead of just twinax? I recognize you'd need a special port setup, but that would still be significantly smaller than twinax. They would then be capable of 100m, would they not?

        • by Ster ( 556540 )

          The copper solution requires twinax; might as well run fiber, as it's easier to deal with at length and can actually fit into the existing raceways (twinax is huge).

          I think you're thinking of CX4 [wikipedia.org], which is indeed huge. 10Gb TwinAx [wikipedia.org] comes in SFP+ [wikipedia.org], which is the same port that you use for 10Gb fiber.

          • by afidel ( 530433 )
            OM3 has a typical jacket diameter of 2mm vs. 6-8mm for twinax and 8-9mm for Cat7A; Cat5e is typically closer to 4-5mm. When you have hundreds of runs in a raceway, it makes a big difference.
  • by Citizen of Earth ( 569446 ) on Wednesday June 23, 2010 @05:02PM (#32671464)
    USB3, HDMI, DVI, Ethernet, DisplayPort, FireWire, eSATA, proprietary. There should be one kind of cable that can be used for all of these purposes. We have the technology. Consumers will thank you.
    • Re: (Score:3, Informative)

      They actually tried this with FireWire (IEEE-1394) in the consumer electronics industry back in 2000-ish, but then the whole HDCP thing came up, and that was that.

      The idea is that you'd have a home theater receiver that just had a crapload of firewire ports on the back, and all your stuff would plug in via that, including speakers. Never happened though.

      • I dunno. HDMI 1.4 now sports Ethernet and audio return channels. About the only thing absent is USB for low/high-speed data (keyboards or mice / disk drives; 100 Mb/s Ethernet is a little slow for disks).

        So, I suppose it will factor out to three cable types: HDMI for "media" connections that are video-centric, Ethernet for long-distance data and networking connections, and USB for local data and peripherals. Maybe add 1394 (FireWire) for video capture and control, though GbE and even 100 Mb/s Ethernet coul

      • OSI Network model. Separate the physical layer from the application layers and everything in between.

      • They actually tried this with FireWire (IEEE-1394) in the consumer electronics industry back in 2000-ish, but then the whole HDCP thing came up, and that was that.

        It probably would have worked if Apple had allowed it to succeed. They always fuck that kind of shit up.

    • Light Peak (Score:3, Insightful)

      USB3, HDMI, DVI, Ethernet, DisplayPort, FireWire, eSATA, proprietary. There should be one kind of cable that can be used for all of these purposes. We have the technology. Consumers will thank you.

      Are you here from Intel marketing?

      <wp:Light_Peak>

      Oh, heck, that's still not working. fine:

      http://en.wikipedia.org/wiki/Light_Peak [wikipedia.org]

    • by Nemyst ( 1383049 )
      Wait, you want ONE cable to rule them all? You do realize the average consumer already has enough problems understanding how YPbPr cables work (green in green, blue in blue, red in red, no not the audio!)? Making all things unified would just mean more money for BestBuy's installation services. Either that or lots of people would be watching their music and hearing their network.
    • Re: (Score:3, Insightful)

      by evilviper ( 135110 )

      USB3, HDMI, DVI, Ethernet, DisplayPort, FireWire, eSATA, proprietary. There should be one kind of cable that can be used for all of these purposes.

      HDMI/DVI/DisplayPort are for raw video data. They have NOTHING in common with the USB mouse/keyboard on your desk. It makes no sense to combine them.

      There's some good reasons for the differences. For instance, even if I could hook up my internet access to the same port as my hard drives, I never would... One needs low-overhead, realtime and no security, while th
