
IEEE Sets New Ethernet Standard That Brings 5X the Speed Without Cable Ripping (networkworld.com) 157

Reader coondoggie writes: As expected, the IEEE has ratified a new Ethernet specification -- IEEE P802.3bz -- that defines 2.5GBASE-T and 5GBASE-T, boosting the top speed of traditional Ethernet fivefold without requiring current cabling to be torn out. The Ethernet Alliance wrote that the IEEE 802.3bz Standard for Ethernet Amendment, which sets Media Access Control Parameters, Physical Layers and Management Parameters for 2.5Gbps and 5Gbps Operation, lets access-layer bandwidth evolve incrementally beyond 1Gbps, and that it will help address emerging needs in a variety of settings and applications, including enterprise and wireless networks. Indeed, the wireless component may be the most significant implication of the standard, as 2.5G and 5G Ethernet will allow connectivity to 802.11ac Wave 2 access points, considered by many to be the real driving force behind bringing up the speed of traditional NBase-T products.
This discussion has been archived. No new comments can be posted.

  • until it's done.
    • That means you have huge guts.

  • ...does it just require new plugs and jacks?

    • by Anonymous Coward on Tuesday September 27, 2016 @12:36PM (#52970597)

      "Our new router courageously has no jacks!"

    • I don't see why.
      If the cable is the same the jacks use all the cables. At least when I put the head on the cables I connected them all.

      • I don't see why. If the cable is the same the jacks use all the cables. At least when I put the head on the cables I connected them all.

        It's a matter of frequency tolerance and the cable's ability to carry the requisite signals. CAT5 doesn't have the tolerance for 1Gbps transmission while CAT5e does, and CAT6 and CAT7 do even better, as their signal tolerances are significantly improved and able to push higher frequencies.

        Most likely, when they say that you don't need to tear out the cables, they're referring to CAT6 cable installations. If you have CAT5 you'll definitely need to upgrade the cable; you will *likely* need to upgrade if

        • To add to what you said, it very much depends on length, and also on exactly how the termination is done (untwisted length, etc). Ten feet of CAT5 introduces less noise than 100 meters of CAT6.

          Also, gigabit was actually designed to work over cat5, but barely.

        • They don't explain in TFA what speeds will be possible on what kinds of cable, but they explicitly say that Cat5e and Cat6 will both be able to carry the new speeds.

          • by joib ( 70841 )
            It's 2.5 Gbps over 100m Cat5e, and 5 Gbps over 100m Cat6.
            • It's 2.5 Gbps over 100m Cat5e, and 5 Gbps over 100m Cat6.

              So you won't be able to go 5 Gbps over Cat5e over short distances? I'm skeptical.

      • Because the jack connects the cable to the hardware. Is RJ-45 capable of achieving the same speeds as cat5e or cat6 with this new standard?

    • Not sure if serious, but it's just new signaling. Why would you need to change out the jacks? RJ-45 has been sufficient for quite a long while.

      • It's too thick. Clearly they need to migrate it to USB-C.
      • Cables are made to one standard, jacks to another. The cable itself might be capable of multi-Gb data transmission, but is RJ-45 hardware up to the same task?

      • by skids ( 119237 )

        Older jacks may be terminated with too much untwisted wire at the end, and the traces to the pins might not be as crosstalk-free as they could be. YMMV.

        For 1G the answer has always been to only change the jack out if there are problems with the connection, because usually it's not needed. For this, who knows?

        But even though connectors "rated" for higher speeds are a bit on the pricey side, that cost pales in comparison to running new cable... that's a lot more manpower.

        The main drive for this is terminating wifi

    • Seeing as you can already push 10Gb-E down a Cat6 or 6e cable with regular RJ45 plugs on it, I'd say that they're keeping 2.5Gb-E and 5Gb-E backwards compatible, using Cat5e (for short runs), Cat6 (recommended), or 6e (more better) with RJ45 termination.
      https://en.wikipedia.org/wiki/... [wikipedia.org]

  • by freeze128 ( 544774 ) on Tuesday September 27, 2016 @12:36PM (#52970601)
    This will certainly save a lot of money for enterprises. I expect it will be the RARE company that will actually need 5Gbps per workstation. Most can probably get by on 100Mbps.
    • Re:Beautiful (Score:5, Insightful)

      by Princeofcups ( 150855 ) <john@princeofcups.com> on Tuesday September 27, 2016 @12:42PM (#52970629) Homepage

      Applications and operating systems will bloat appropriately to use the new bandwidth.

      • by darkain ( 749283 )

        Or quite the opposite: once the resources become available, new tools will emerge that can use them. 100Mbps is "fine" if all you're doing is casual web browsing and email, but on a home or corporate network with file sharing involved, this starts to eat up quite a bit more bandwidth. Add in 1080p/4k video streams, and that is even more. Now what about removing the most failure-prone component in the PC, the local storage, and replacing it with a network booting environment with all backend storage sitting on a nice

        • Good God, man! Where do you work? You watch 4K videos in the office? Your boss wants that TPS report before his meeting with the board at 1:00PM. You better turn off that video and open up Excel!
          • McCoy: Good God, man!

            Kirk: I don't care how you do it, Bones, just fix the damned video.

            McCoy: I'm a doctor, not a damned cable monkey!

            Spock: Fascinating. This router has no jacks.

            Chekov: It's a courageous router. Inwented in Russia.

            Uhura: This is not a federation signal. I can't make anything out of it, sir.

            Sulu: Faraday shields up. It's good to be Takei, bitches.

          • by darkain ( 749283 )

            Actually, I'm a part-time photographer who routinely works on PSD files so large they have to be saved as PSB. On a single gigabit link, these files take 20-60 seconds to save, so yeah, the increased bandwidth is much appreciated!

      • Applications and operating systems will bloat appropriately to use the new bandwidth.

        Oh hogwash. End user requirements will bloat to use the new bandwidth. 15 years ago we had 10/100 and were happy with it. But end users didn't have 4k cameras available in a $250 handheld package, we didn't have 25GB Blu-ray streams being sent over network players, and games didn't come in 50GB downloads complete with their own torrenting client. My internet connection outpaces all wireless in my house and is 5 times faster than my wired ethernet from back in the day.

        Bloat and operating systems are consuming

    • You can get by on 100Mbps, but it stinks. I am at a remote location and limited to 100Mbps. When I get a 10-gig file to transform, I spend most of my time downloading it, fixing it, and sending it back.
      A transfer that normally takes 15 minutes would take about 1.5 minutes at 1Gbps, or 18 seconds at 5Gbps.
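The parent's arithmetic can be sanity-checked with a quick back-of-envelope script (a rough sketch that ignores protocol overhead, so real transfers run a bit slower):

```python
# Back-of-envelope transfer times for a 10 GB file, ignoring protocol
# overhead (real transfers will be somewhat slower).
file_size_bits = 10 * 8 * 10**9  # 10 GB expressed in bits

for label, gbps in [("100 Mbps", 0.1), ("1 Gbps", 1.0), ("5 Gbps", 5.0)]:
    seconds = file_size_bits / (gbps * 10**9)
    print(f"{label:>8}: {seconds:6.0f} s  (~{seconds / 60:.1f} min)")
# 100 Mbps -> 800 s (~13.3 min), 1 Gbps -> 80 s, 5 Gbps -> 16 s
```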

      • If you frequently make a change to a 10GB file, check out rsync. It transfers only the bytes that changed, rather than the whole file. Basic usage:

        rsync -av local/file.bin user@123.1.1.8:/home/remote/file.bin

        Where the user has ssh access.

        • rsync sorta breaks down when you're dealing with large amounts of data because it has to scan all of it both remotely and locally. True, it doesn't transfer much, but it can take an awfully long time to figure out what actually needs transferring.

          ZFS (the filesystem... which probably doesn't need to be pointed out on /.) on the other hand knows which blocks have changed and would probably work better. I've only tinkered with it in VM environments but I would like to give it a spin as an offsite backup sync solution.

          • ZFS and any fs with a copy-on-write feature should introduce a hinting API for rsync. That way, a fs that knows what's changed can let rsync know.

          • Of course, if that "large amount of data" is spread across multiple files, rsync doesn't have to read the unchanged files. It can see by file modification time and size that they match the remote copy.

            You mentioned ZFS, and offsite backup. For our business grade offsite backup and hot spare, we use LVM (logical volume manager). If you have a very large file, particularly a drive image, you'll get significantly better performance by creating it as a logical volume rather than as a file* on another filesyst

            • PS:

              > I've only tinkered with it in VM environments but I would like to give it a spin as an offsite backup sync solution.

              In all my years on Slashdot I've never done this, but since you said you would like to give something like this a spin:

              We've spent many years developing a pretty bad ass offsite backup solution based on this concept. One reason it's bad ass is that I found some cool ways to make it very efficient (cheap). You can boot up your backups live in our DC and SSH to them (or however you like

    • in my experience, a 1 Mbps cap (per workstation) would increase productivity 100 fold.

      it's good enough to move documents around, send emails and use whatever internal tool is necessary. but painful enough to render fakebook useless.

    • Re:Beautiful (Score:4, Interesting)

      by jedidiah ( 1196 ) on Tuesday September 27, 2016 @12:52PM (#52970711) Homepage

      You don't just have the average speed to consider. You also have to consider peak demand. Many relatively mundane individuals may benefit from much higher peak capacity. They may not need it constantly, but it will be terribly useful when they can take advantage of it.

    • by NotAPK ( 4529127 )

      Depends on what you're doing. I find multi-monitor RDP over 1000Mbps to be much more pleasant than over 100Mbit.

      • by AmiMoJo ( 196126 )

        File transfers too. 100Mb only gets you about 12MB/sec, rather slow for running network applications or shifting even moderate size files around.

        Gigabit has been standard on laptops and desktop motherboards for years now, and the switches are only slightly more expensive than 100Mb. It's crazy to even consider 100Mb when buying equipment these days.

    • by Shimbo ( 100005 )

      This will certainly save a lot of money for enterprises. I expect it will be the RARE company that will actually need 5Gbps per workstation. Most can probably get by on 100Mbps.

      As the summary says, getting 5Gbps to a WAP and sharing it between N laptops is probably more important. It might take a bit longer for 5Gbps interfaces to become the standard on-board Ethernet.

      • by bjwest ( 14070 )

        As the summary says, getting 5Gbps to a WAP and sharing it between N laptops is probably more important. It might take a bit longer for 5Gbps interfaces to become the standard on-board Ethernet.

        Damn! If only there were some way to get Ethernet on a desktop other than with an on-board adapter.

    • Indeed, and 640KB should be fine too...

    • Assuming that the company is rational and is using a switch to connect the workstations instead of a hub. Getting 20 to 30 people sharing a hub is not fun. Especially around 9:00 AM when a lot of them come in, turn on the computer, and find out that there's a large set of patches to be applied to their computer. Network slows down to a crawl and everybody goes on an hour coffee break.

      • by TheSync ( 5291 )

        using a switch to connect the workstations instead of a hub.

        Can you even buy a hub these days?

        You can get a 5 port 100 Mbps Ethernet switch for $15...

        • This was 10 years ago, and in a large organization, so it wasn't the kind of hub you would have at home (sorry, since this is /., some of us would have one at home). It was a large, rack-mountable hub. Probably from Cisco, since the networking people there liked Cisco products.

    • If users benefit from SSDs they'll benefit from 10GbE. There is a real appeal to storing the application once on a large storage array and launching the application remotely over the network. It's a happy middle ground between a thin client and a fat workstation.

    • by swb ( 14022 )

      You mean it will cost a lot of money.

      Vendors will end up playing games where the features you want won't be available unless you buy into their new product lines featuring 802.3bz ports at increased prices. Dumb, unmanaged 1 gig at today's managed 1 gig prices or managed L2/L3 802.3bz at the price you paid 5 years ago for 1 gig.

      Server and desktop vendors will have a new upcharge option for 802.3bz ports that will allow them to hold the line on 10 gig port prices, and stupidly, many people will go for it th

    • This will certainly save a lot of money for enterprises. I expect it will be the RARE company that will actually need 5Gbps per workstation. Most can probably get by on 100Mbps.

      Anyone that upgrades to VOIP will need a 1Gbps network, preferably with CAT6; CAT5e is okay but CAT6 really enables it.

    • I expect it will be the RARE company that will actually need 5Gbps per workstation. Most can probably get by on 100Mbps.

      The faster the network is, the more you use it. That leads to more files stored on the network and actually getting backed up. Yeah, you can back up files from workstations, but how inconvenient... and often, how fragile.

    • Gigabit Ethernet maxes out at between 70 and 100 Megabytes per second, depending on your file sharing protocol. When Gb-E was first introduced this was faster than local disk, so it meant that workstations could get data to and from the server faster than they could from local storage. This was a good thing.

      Now that even a cheap 1TB consumer hard drive (not to mention an SSD) can push more than double this data transfer rate, working off a server is (relative to local storage) getting slower and slower.

      There
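That ceiling follows directly from the line rate; a one-liner makes the arithmetic explicit (the 70-100 MB/s figure above reflects file-sharing protocol overhead on top of this):

```python
# Gigabit Ethernet wire rate in MB/s, before file-sharing protocol overhead.
line_rate_bps = 1_000_000_000          # 1 Gbps
wire_rate_mb_s = line_rate_bps / 8 / 1_000_000
print(wire_rate_mb_s)  # 125.0 -- protocol overhead brings real transfers to ~70-100 MB/s
```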

    • If 5Gbps networks become the norm I can imagine something of a return to the VAX and green screen days. Imagine a terminal with ports for display, keyboard, mouse, whatever, and the processing done on a central server somewhere in the building. The terminal could be a small box to turn the RJ-45 into a bunch of USB-C ports for all the peripherals one would need.

      Perhaps that is going a bit too far since even at 5Gbps, and allowing for some level of video processing/compression/whatever in the box that migh

    • If you work in IT 100 Mbps is unacceptable, especially if you move big data such as backups and OS images.

  • Bob Metcalfe, 1976
  • by Chmarr ( 18662 ) on Tuesday September 27, 2016 @12:55PM (#52970727)

    Wasn't clear from TFA if this would work on Cat 5e, or if Cat 6 is required.

  • by enriquevagu ( 1026480 ) on Tuesday September 27, 2016 @01:20PM (#52970893)

    This new standard is very interesting: it employs the same coding and spectral density as 10GBase-T (6.25 bps/Hz), but it scales the occupied bandwidth (Hz) to the cable category: Cat.5e (100 MHz) can provide 2.5Gbps and Cat.6 (250 MHz) can provide 5 Gbps.

    Interestingly, before this standard there was no practical use for Cat.6 cabling: any speed you could obtain using Cat.6 cable (1Gbps) could also be obtained using Cat.5e, and if you wanted something faster (10Gbps) you needed Cat.6A (500 MHz BW). This newly ratified standard finally gets some use from those extra MHz you have in Cat.6, if you have it installed. It will be interesting to see whether 802.3bz ports will be able to measure link bandwidth and adapt speed accordingly to 2.5/5Gbps.
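The parent's figures line up if you assume four wire pairs, each carrying about 6.25 bps/Hz over the occupied signal bandwidth (a rough sketch; the 200 MHz assumed here for 5GBASE-T sits below Cat.6's 250 MHz rating, leaving some margin):

```python
# Rough throughput check for 2.5G/5GBASE-T: 4 wire pairs, each carrying
# ~6.25 bits/s per Hz of occupied signal bandwidth (same line coding as
# 10GBASE-T). The 200 MHz assumed for 5GBASE-T is below Cat6's 250 MHz
# rating, which leaves some headroom.
PAIRS = 4
BITS_PER_HZ = 6.25  # per pair

for name, occupied_mhz in [("2.5GBASE-T / Cat5e", 100), ("5GBASE-T / Cat6", 200)]:
    gbps = PAIRS * BITS_PER_HZ * occupied_mhz * 1e6 / 1e9
    print(f"{name}: {gbps:g} Gbps")
# 2.5GBASE-T / Cat5e: 2.5 Gbps, 5GBASE-T / Cat6: 5 Gbps
```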

    • by joib ( 70841 )
      IIRC Cat6 is quite common these days because it's essentially the same price as Cat5e. Cat6a, OTOH, is a lot more expensive.
  • For the price of 10G and 40G, so I can get rid of those pesky (and expensive) Fibre Channel links to my storage!!!!

    That has been in development for ages now...

    Latency notwithstanding, that's what InfiniBand is for anyway!

  • Since 1990 we've been promised that IDF closets would be a thing of the past.

    Everyone would use wifi and VOIP by the year 2000. We all have fast cell phones so why can't our offices use the same?

    • For the simple reason that there isn't enough bandwidth within the available spectrum. It's called physics.

  • by AaronW ( 33736 ) on Tuesday September 27, 2016 @01:49PM (#52971069) Homepage

    I have been working with 2.5G for around a year now using a 2.5G physical interface chip from Aquantia that seamlessly handles everything from 100Mbps to 10Gbps including 1G, 2.5G and 5G. If the cable isn't too long I've run 10G over cat 5. Hopefully the prices will drop quickly once more companies support this standard since I just bought the cheapest 2.5G switch I could find, 8 ports for around $1200 for development purposes. It also interoperates fine with standard 1G equipment.

    What's also nice about Aquantia is that, unlike many PHY chip vendors, their PHY SDK is free as in beer and is fully GPL and BSD compatible, though it will need to be re-written for the Linux kernel to follow the guidelines. I re-wrote it for U-Boot though I won't be able to push it upstream for a while yet. The chip I'm using even supports MACsec [wikipedia.org] in hardware. There were two different 2.5G proposals, one from Broadcom and one from Aquantia. Aquantia's is the one that ultimately got accepted as the standard.

  • This is cool, but ultimately irrelevant until someone forces the ISPs to admit there is no bandwidth shortage and do some high-tier interchange upgrades. The current venal ISPs have spent millions convincing the FCC and customers that there is a shortage of bandwidth so they can vastly overcharge for the 'available' bandwidth. As long as the cable companies won't compete and aren't interested in resolving the situation, most of us are stuck in a hell where 25Mbps is the best we can get on a good day.

  • I'm kind of struggling for what this is good for besides giving switch vendors a reason to push needless IDF upgrades and technology vendors yet another upcharge option.

    1 gig Ethernet is already overkill for just about every desktop purpose and still has some useful life left in many data center applications, especially for lower performance areas, even in network storage.

    The only place it becomes somewhat weak is in heavy use AC wireless deployments where it can be truly taxed, but most often even these de

    • I'm kind of struggling for what this is good for besides giving switch vendors a reason to push needless IDF upgrades and technology vendors yet another upcharge option.

      It's actually cost-saving since you can use the same cabling.

      1 gig Ethernet is already overkill for just about every desktop purpose

      Yes, but that's not what it's for. It's for feeding these fancy new wireless access points, without having to do wiring. Customers will already have to purchase APs, they're not going to balk at buying a switch to feed them. It's not really interesting to the home user unless they've got multiple users doing remote storage, or if it turns out to be close to the price of 1GbE... or at least, notably cheaper than 10GbE. Then most of the few niche ho

      • by bongey ( 974911 )

        Just a ploy to keep 10G artificially higher. 10G switches are nearly half the price of similar 2.5G/5G switches, with nearly quadruple the performance.
        *$1100 Netgear M4200 2.5/5G switch, 8 RJ45, 2 SFP+, 90Gbps capacity, 66Mpps (no number on non-blocking) http://www.downloads.netgear.c... [netgear.com]
        *$595 Ubiquiti ES-16-XG, 4xRJ45 10G, 12xSFP+, 320Gbps capacity, 160Gbps non-blocking, 238Mpps http://www.balticnetworks.com/... [balticnetworks.com]

        Also, you can import 300m (1000ft) of Cat7 1200MHz 23AWG directly from Germany for about 300 bucks.

        So f

        • by swb ( 14022 )

          Not a great comparison, the 10 gig switch is mostly SFP ports which are only useful for short run twinax or with fiber optic SFP modules for anything beyond twinax lengths. 10g copper SFP modules don't exist. Useful in a rack with servers with SFP NICs or if you want to fuck around with fiber, but in my mind that rates them as less useful than base-T which has much simpler and cheaper cabling demands.

          I see a lot of twinax/optical deployments as converged core server + iSCSI storage but mostly in new clust
