Networking IT Technology

10GbE: What the Heck Took So Long?

Posted by Soulskill
from the i-blame-the-schools dept.
storagedude writes "10 Gigabit Ethernet may finally be catching on, some six years later than many predicted. So why did it take so long? Henry Newman offers a few reasons: 10GbE and PCIe 2 were a very promising combination when they appeared in 2007, but the Great Recession hit soon after and IT departments were dumping hardware rather than buying more. The final missing piece is finally arriving: 10GbE support on motherboards. 'What 10 GbE needs to become a commodity is exactly what 1 GbE got and what Fibre Channel failed to get: support on every motherboard,' writes Newman. 'The current landscape looks promising. 10 GbE is starting to appear on motherboards from every major server vendor, and I suspect that in just a few years, we'll start to see it on home PC boards, with the price dropping from the double digits to single digits, and then even down to cents.'"
This discussion has been archived. No new comments can be posted.

  • Cost (Score:3, Insightful)

    by Anonymous Coward on Friday June 07, 2013 @04:43PM (#43941099)

    10GbE motherboards are still pointless while 10GbE routers and switches are still way too expensive.

  • Re:The real reason (Score:5, Insightful)

    by redmid17 (1217076) on Friday June 07, 2013 @04:44PM (#43941117)
    Biggest reason I can remember from when we were looking at upgrading SAN and LAN equipment in our data center was the price/performance point. We didn't need 10 GbE performance yet, and the price was pretty far above what we were using. That was 3 years ago though, so I'd have to poke around some of the newer equipment to see if we have any boxes with it. I just took a gander through the HP and Dell offerings and it's not even an option on anything but the top-tier equipment. I think that pretty much explains the situation by itself.
  • Commodity (Score:4, Insightful)

    by rijrunner (263757) on Friday June 07, 2013 @04:55PM (#43941163)

    Of course its growth was going to be lower.

    The primary use of 10GbE is virtualization. The number of network cards is a function of the number of chassis, not the number of hosts. Numerically, one 10GbE port is not just ten 1GbE cards: you can split the 10GbE between a lot of hosts. Depending on their load and use, you can easily double, triple, or even quadruple that, making one 10GbE card the equivalent of the 1GbE cards on 40 servers. Instead of buying 40 servers and associated cards, you're buying one larger chassis with larger pipes. In a large farm environment, it makes sense.
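The consolidation argument can be sketched in a few lines. The host counts here are illustrative assumptions, not measurements; the point is just that the per-host share of one 10GbE port stays usable well past ten hosts.

```python
# Sketch: one shared 10GbE port vs. one 1GbE NIC per physical server.
# Host counts below are illustrative, not from any real deployment.
def per_host_bandwidth_gbps(link_gbps, hosts):
    """Average bandwidth per host if the link is shared evenly."""
    return link_gbps / hosts

for hosts in (10, 20, 40):
    share = per_host_bandwidth_gbps(10, hosts)
    print(f"{hosts} VMs on one 10GbE port: {share:.2f} Gb/s each")
```

Even at 40 hosts each one averages 250 Mb/s, which is plenty when most hosts idle most of the time.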

    Throw in the fact that a network is only as fast as its narrowest choke point, and there is no reason to put a 10 GbE card behind a 7 Mbps DSL connection.

    What 10GbE needs to become a commodity is a) an end to any data caps, b) data to put down that pipe, and c) a pipe that can handle it.

    Show me fiber to my door, and then it will be a commodity.
     

  • by AdamHaun (43173) on Friday June 07, 2013 @04:58PM (#43941191) Journal

    Ten gigabits per second is 1,250 megabytes per second. High-end consumer SSDs are advertising ~500 MB/sec. A single PCIe 2.0 lane is 500 MB/sec. Then there's your upstream internet connection, which won't be more than 12.5 MB/sec (100 megabits/sec), much less a hundred times that. I guess you could feed 10GbE from DDR3 RAM through a multi-lane PCIe connection, assuming your DMA and bus bridging are fast enough...
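The arithmetic in that comparison checks out, and can be verified in a few lines (a sketch using decimal units and ignoring protocol overhead):

```python
# Back-of-the-envelope check of the figures above (decimal units,
# no protocol overhead).
def bits_to_megabytes_per_sec(bits_per_sec):
    """Convert a line rate in bits/s to MB/s (1 MB = 1e6 bytes)."""
    return bits_per_sec / 8 / 1e6

ten_gbe = bits_to_megabytes_per_sec(10e9)   # 10 Gb/s link
pcie2_lane = 500.0                          # PCIe 2.0: ~500 MB/s per lane
uplink = bits_to_megabytes_per_sec(100e6)   # 100 Mb/s internet connection

print(f"10GbE ceiling:         {ten_gbe:.0f} MB/s")          # 1250 MB/s
print(f"PCIe 2.0 lanes needed: {ten_gbe / pcie2_lane:.1f}")  # 2.5
print(f"100 Mb/s uplink:       {uplink:.1f} MB/s")           # 12.5 MB/s
```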

    I'm sure a data center could make use of 10GbE, but I don't think consumer hardware will benefit even a few years from now. Seems like an obvious place to save some money in a motherboard design.

  • by D1G1T (1136467) on Friday June 07, 2013 @04:59PM (#43941197)
    What you describe exists. It's not uncommonly used for IP cameras outside the 100m limit of TP Ethernet (on perimeter fences, etc.). The problem with fibre is that it's a bitch to terminate compared to copper, and therefore quite a bit more expensive to install on a large scale. Fibre still only makes sense when you need the long cable runs.
  • Re:Meanwhile (Score:2, Insightful)

    by homey of my owney (975234) on Friday June 07, 2013 @05:20PM (#43941387)
    What the hell do you do at home that would require a 10GbE network?
  • Re:Meanwhile (Score:4, Insightful)

    by fuzzyfuzzyfungus (1223518) on Friday June 07, 2013 @05:22PM (#43941405) Journal

    It's also not needed for most work environments.

    It is extremely convenient when doing large building and/or campus networking, though...

    Sure, it makes very little sense to do 10Gb to the drop (barring fairly unusual workstation use cases); but if all those 1GbE clients actually start leaning on the network (and with everybody's documents on the fileserver, OS and application deployment over the network, etc., you don't even need a terribly impressive internet connection for this to happen), having a 1Gb link to a 48-port (sometimes more, if stacked) switch becomes a bit of an issue.

    Same principle applies, over shorter distances, with datacenter cabling.
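The squeeze in that 48-port scenario can be put in numbers. A minimal sketch; the port count comes from the comment, while the 10GbE-uplink alternative is an assumption for comparison:

```python
# Oversubscription ratio: total possible client demand vs. uplink capacity.
# 48 ports at 1 Gb/s is from the comment; the 10GbE uplink is hypothetical.
def oversubscription(ports, port_gbps, uplink_gbps):
    """How many times the clients could collectively exceed the uplink."""
    return ports * port_gbps / uplink_gbps

print(oversubscription(48, 1, 1))   # 48:1 behind a single 1GbE uplink
print(oversubscription(48, 1, 10))  # 4.8:1 behind a 10GbE uplink
```

Going from 48:1 to roughly 5:1 is the difference between a link that falls over under load and one that merely shares.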

  • by Guspaz (556486) on Friday June 07, 2013 @05:29PM (#43941475)

    You're looking at things backwards. If you've got a 500 MB/s SSD, then you shouldn't look at 10GigE and say "that's twice as fast as I need, it's useless". You should look at the existing GigE and say "my SSD is four times faster, one gigabit is too slow"...

    Even a cheap commodity magnetic hard disk can saturate a gigabit network today. The fact that lots of computers use solid state drives only makes that problem worse. When transferring files between computers on a typical home network these days, I think the one-gigabit-per-second network limitation is going to be the bottleneck for many people.
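A quick sketch of why the drive is no longer the slow part. The 150 MB/s HDD and 500 MB/s SSD figures are typical ballpark numbers for the period, not benchmarks:

```python
# Time to copy a file over GigE vs. the drive's own speed. Transfer time
# is set by the slower of the drive and the link. Drive speeds here are
# rough era-typical assumptions, not measurements.
GIGE_MBPS = 1e9 / 8 / 1e6  # ~125 MB/s wire ceiling, before overhead

def copy_seconds(size_mb, drive_mbps, link_mbps=GIGE_MBPS):
    """Seconds to move size_mb through the slower of drive and link."""
    return size_mb / min(drive_mbps, link_mbps)

print(copy_seconds(10_000, 150))  # 10 GB from an HDD: link-bound
print(copy_seconds(10_000, 500))  # 10 GB from an SSD: still link-bound
```

Both cases take the same 80 seconds, because the gigabit wire, not the drive, sets the pace.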

  • Re:Meanwhile (Score:5, Insightful)

    by lightknight (213164) on Friday June 07, 2013 @05:34PM (#43941517) Homepage

    Oh for crying out loud. Where do you people get off with this kind of thinking? How are you even allowed in technology fields with a mind like that?

    It's not needed...technology is about advancing because it's WANTED. It's not run by committee, and it's not run by determination of some group need, because if it were, we'd still be living in caves and worshiping rocks, because fire isn't needed by anyone.

    And the reason, reading between the lines, for it taking so long to be adopted is that everyone has become a cheapskate when it comes to technology. The idea of a separate NIC to handle network traffic is a lost cause, as is a dedicated sound card, and now the video card. Why? Because you're trying to justify, to a group of people who refuse to educate themselves, why it would be in their own best interest to pay a little more.

    I applaud the people behind 10GbE, and hope they have enough resources and energy to bang out 100GbE. This is progress we can measure, easily, and it should be rewarded.

  • by AdamHaun (43173) on Friday June 07, 2013 @06:07PM (#43941805) Journal

    You're looking at things backwards. If you've got a 500 MB/s SSD, then you shouldn't look at 10GigE and say "that's twice as fast as I need, it's useless". You should look at the existing GigE and say "my SSD is four times faster, one gigabit is too slow"...

    If I want to copy tons of large, sequentially-read files every day, maybe. (Assuming that 500 MB/sec actually hits the wire instead of bottlenecking in the network stack.) But I'm not sure why I would do that. If I have a file server, my big files are already there. If I have a media server, I can already stream because even raw Blu-ray is less than 100 Mbps. If I'm working on huge datasets, it's faster to store them locally. If I really need to transfer tons of data back and forth all the time, I'm probably not a typical home network user. ;-)

  • Re:Meanwhile (Score:4, Insightful)

    by LordLimecat (1103839) on Friday June 07, 2013 @11:05PM (#43943807)

    That's why you have a few 10GbE uplinks on the access switch; that way everyone generally gets 1 Gbit at all times.
