10GbE: What the Heck Took So Long?
storagedude writes "10 Gigabit Ethernet may finally be catching on, some six years later than many predicted. So why did it take so long? Henry Newman offers a few reasons: 10GbE and PCIe 2 were a very promising combination when they appeared in 2007, but the Great Recession hit soon after and IT departments were dumping hardware rather than buying more. The last missing piece is now arriving: 10GbE support on motherboards. 'What 10 GbE needs to become a commodity is exactly what 1 GbE got and what Fibre Channel failed to get: support on every motherboard,' writes Newman. 'The current landscape looks promising. 10 GbE is starting to appear on motherboards from every major server vendor, and I suspect that in just a few years, we'll start to see it on home PC boards, with the price dropping from the double digits to single digits, and then even down to cents.'"
Meanwhile (Score:2, Informative)
Everyone's still running off of ancient Cat3 wiring laid down when telephones were still analog.
Re: (Score:2)
Sounds like my home network may jump from 1Gb to 10Gb sooner than I expected, but it's still behind 3Mb DSL as my only non-Comcast option. Yay?
Re: (Score:3)
stream HD video to multiple nodes from a network file server
Re: (Score:2)
Depending on the level of compression, a full HD (1080p) stream requires between 400 KBytes/sec and ~2 MBytes/sec of bandwidth. That is, approximately 4-20 Mbits/sec.
Needless to say, even 100MBit ethernet has no problem with a couple of those, let alone existing 1-gigabit ethernets.
At 2160p (which is what people call 4K, for 3840x2160), perhaps ~1 MByte/sec to ~5 MBytes/sec depending on the level of compression and the complexity of the video. That is, roughly 40 Mbits/sec at the top end. Despite ha
Ummm (Score:2)
Ok #1, who does that? I mean that is not a very "home user" application in general. However, #2: gig is plenty for that. 1920x1080 24/30fps AVCHD PH video is 24Mbps max. Blu-rays can in theory be 50Mbps (between audio and video), mostly for MPEG-2, though in practice it is usually more like 25Mbps AVC. Youtube is 6Mbps for 1920x1080.
So even with the max Blu-ray rate you are good for two streams. Realistically you can do 4 streams at most data rates. Even when 4k stuff starts to happen, it'll be fine to do one
Re: (Score:2)
Ok #1 who does that? I mean that is not a very "home user" application in general.
you mean you have never heard of a home with people watching television in different rooms? at my home there are times when everyone is watching a different show/movie and using network resources with their portable devices such as laptops and tablets, so it is not that hard to swamp a 1 gig nic.
Re: (Score:2)
If people are running these TVs in different rooms, you only need the really fast uplink from the media center to the switch; it's probably a ton cheaper to simply have 2 NICs in the server. 10GbE runs about $200 per port on the switch, and a ton more for NICs.
Re: (Score:3)
Who said it does? If you need 1.1Gbit/s of sustained traffic, then a 1Gbit/s link will not be sufficient. The next step upwards is 10Gbit/s.
Re: (Score:3)
No, the next step is to get a second NIC.
Re: (Score:2)
Until sustained R/W speed on disk passed 10MB/s, I had no use for 1 GbE either! But it looks like it won't be too long before the network is back to being the bottleneck on network file copies/backups again, even on simple non-RAID volumes.
satellite tv has to compress att does even fios hi (Score:2)
satellite TV has to compress, AT&T does, and even FiOS hits the wall in QAM space.
Current high-cost item: The 10Gb switch (Score:2)
Also, for flexibility, you want SFP+ ports and adapters for each port. None of those are cheap.
Re: (Score:3)
That's been true at this point at each jump in speeds (well, other than the details of the connection). The Ethernet chip-on-motherboard heralds the price fall on the switch - at the scale the 10GbE chips will soon be made, their price will fall (and thus the price of port-specific electronics in the switch will fall too), and then reasonably-priced unmanaged switches from low-end vendors follow soon after.
Re:Meanwhile (Score:5, Insightful)
Oh for crying out loud. Where do you people get off with this kind of thinking? How are you even allowed in technology fields with a mind like that?
It's not needed...technology is about advancing because it's WANTED. It's not run by committee, and it's not run by determination of some group need, because if it were, we'd still be living in caves and worshiping rocks, because fire isn't needed by anyone.
And the reason, reading between the lines, for it taking so long to be adopted, is because everyone has become cheapskates when it comes to technology. The idea of a separate NIC to handle network traffic is a lost cause, as is a dedicated sound card, and now video card. Why? Because you're trying to justify to a group of people who refuse to educate themselves why it would be in their own best interest to pay a little more.
I applaud the people behind 10GbE, and hope they have enough resources / energy to bang out 100GbE. This is progress we can measure, easily, and it should be rewarded.
Re: (Score:2)
Wants cost a lot more money than needs do. I WANT a Ferrari, but what I need is a car to get me to and from work. Which one costs more? When the Ferrari becomes as inexpensive as a Ford, let me know; I'll buy two.
Re: (Score:2)
wiki: In measurements made between January and June 2011, the United States ranked 26th globally in terms of the speed of its broadband Internet connections, with an average measured speed of 4.93 Mbit/s
And I call BS on people having usable 30Mb connections. How many people actually get their rated speeds most o
Re: (Score:2)
Asus RT-N16 is by no means new tech, is less than $100, and has gigabit ports.
Dunno how fast it can actually route, since as a midwestern American, my 12Mbps connection is considered "fast," and the absolute maximum I can get is 18Mbps.
Re: (Score:2)
About two years ago I was contracting to Clearwire to turn up data centers. 40 rack caged colo space with DC power and four half rack sized Cisco routers, rock and roll shit. Once we got all the thousands of cables in the right places and all the configs in the equipment I was able to turn up BGP on two 10Gb circuits. 20Gb of raw fucking Internet all to myself for the next week... and only a Thinkpad with a 1Gb port to connect it to. But I will say that I could torrent the shit out of stuff! Good time
Re: (Score:2)
Here at work we have 100Gb links... in the lab. Our internet connection is better than my home (slow DSL), but not as nice as many friends' cable links, and not nearly as nice as the sonic.net 1Gb links available a mile across town from my house. :-(
Re: (Score:2)
You are confusing GB and Gb. Today's hard drives, even the spinning disks of metallic dust kind can write and read faster than 1Gb. I had a NAS attached to my machine via 1Gb, but found it was too slow, so I moved my RAID internal (sorta). Internal RAID card with cables that run to an external enclosure. Speed went from 50-60GB/sec to 1.2TB/sec.
Re: (Score:2)
Lol, and I just totally botched that one up myself. I meant 50-60MB/sec to 1.2GB/sec.
I just retested, I only get 928MB/sec write, but 2391MB (2.3GB)/sec read. Even 10Gb ethernet would bottleneck that, but at least at a more reasonable point. Then I could move my RAID array to an external storage machine.
Re: (Score:2)
I'm not a cheapskate. I see no reason to invest in hardware that doesn't provide a tangible benefit to me.
That's not the same as saying "No one needs 10gigE." It means *I* don't. If I could get FiOS or Google Fiber, I'd be a lot more interested. But I'm already stuck at about 20Mbps, and none of my systems have storage that can write at the 1,250MB/s that 10GigE makes possible. Adopting it requires new routers and new Ethernet cards.
I'm not saying it's a bad standard. I'm not saying some people can't use
Re: (Score:3)
People don't want 10GbE, they want wireless.
Corporate environments are generally also not wanting to spend $200 per switch port for all access switches for no other reason than that they can boast about it on slashdot.
Re: (Score:2)
Do you have a 10 kV, 100 A electrical service to your home?
If I did, would I be able to recharge my Leaf in a reasonable amount of time? Seriously, where I work, we move a lot of radar data. Everything now is gigabit. Do we need 10Gb? No, but if it were available at a reasonable price it would allow us to change the operating parameters of the radars and do new things with them. So, while we don't need it, we could use it, but the price needs to come down first, and the only way that will happen is if it becomes common.
Re: (Score:2)
100A service is actually 24kVA.
My home has 150A service, and this is typical for this area. This would only be considered excessive for a small apartment.
New construction is generally done with 200A service.
A more apt analogy would be "do you have a 120/208V, 200A 80kVA three-phase service to your home?" :)
Re:Meanwhile (Score:4, Insightful)
It's also not needed for most work environments.
It is extremely convenient when doing large building and/or campus networking, though...
Sure, it makes very little sense to do 10Gb to the drop (barring fairly unusual workstation use cases); but if all those 1GbE clients actually start leaning on the network (and with everybody's documents on the fileserver, OS and application deployment over the network, etc., etc., you don't even need a terribly impressive internet connection for this to happen), having a 1Gb link to a 48-port (sometimes more, if stacked) switch becomes a bit of an issue.
Same principle applies, over shorter distances, with datacenter cabling.
Re:Meanwhile (Score:4, Insightful)
That's why you have a few 10GbE uplinks on the access switch; that way everyone generally gets 1Gbit at all times.
Am I on Slashdot? (Score:2)
My God, the luddites have taken over Slashdot tonight.
When I have 10Gb at home, I'll:
* Boot every PC from a remote server. No need even for local swap.
* Have a much better time doing backups. When I get a new computer, the first thing I do is boot a liveOS and run:
gzip -c /dev/sda | nc home.server 12345
and on the other end store the image (a possible receiving command is sketched after this list). That way it can in theory go back for factory service if needed. The bottleneck is completely the network here, and it's slow even at gigabit speeds.
* Nev
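For the curious, the receiving end can be as simple as the following sketch; the output filename is made up, and some netcat builds want -p before the port:
nc -l 12345 > new-box-sda.img.gz     # older/traditional netcat: nc -l -p 12345 > new-box-sda.img.gz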
Re: (Score:2)
" Boot every PC from a remote server. No need even for local swap"
10GbE won't be as fast as a nice cheap SSD, but not even an SSD can keep up with an avalanche of data requests from multiple systems unless that remote server is pretty damn beefy by home standards. Managing simultaneous uploads and downloads from multiple systems to a single home location won't be much fun. Simpler to keep your OS local, and trivial as far as cost.
* Have a much better time doing backups.
True, provided your storage can hand
Re: (Score:2)
What's wrong with building out your home network like a datacenter? :) I'm perfectly happy with GigE. It handles the servers, iSCSI to the SAN, and an isolated branch for the desktops. It's the uplink speeds we have to work on.. I could upgrade to 10GigE, but when will the uplinks even get close? I'm putting my change or
Re:Am I on Slashdot? (Score:5, Interesting)
10GbE won't be as fast as a nice cheap SSD
It doesn't matter, remote storage is faster than gigabit. I don't need to hit 10 to get a benefit.
but not even an SSD can keep up with an avalanche of data requests from multiple systems unless that remote server is pretty damn beefy by home standards.
What? That's the whole point of fileservers. They need to meet the usage, of course, but that's an always increasing spec.
Simpler to keep your OS local, and trivial as far as cost.
Consolidating is always cheaper (per unit of storage) and it's easier to back up and manage, keep on UPS power, etc.
You'll need a heck of a RAID array for that, but it's buildable. Or, you could just stick with GigE, since that still tops out at 125MB/s and that pushes local (non-SSD) storage.
eh, my current central storage is 5 hard drives in a ZFS raidz2 with one SSD split up for L2ARC (cache) and ZIL (write cache). The entirety of the setup difficulty is:
cd /etc/yum.repos.d
wget url-to-repo
yum install zfs
(reboot, or modprobe zfs)
zpool create home raidz2 sda sdb sdc sdd sde cache sdf6 log sdf7
Oh, I had to plug in 6 SATA cables. Typical throughput is about 340MB/s. The only reason they're not all SSD's is because SSD's are expensive and unreliable. If it wasn't a home machine, the ZIL would be on a mirror of SSD's.
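If anyone copies this, a couple of optional sanity checks after creation (using the pool name "home" from above):
zpool status home        # confirm the raidz2 layout plus the cache and log devices
zpool iostat -v home 5   # watch per-device throughput every 5 seconds during a big copy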
Correct me if I'm wrong, but my understanding is that this requires a lot more than just a high-speed connection. High-end connection + craptastic router = terrible latency when dealing with high load.
Switch, not router. There are problems with current buffer management techniques that effectively mean more headroom translates into latency improvements. Google 'bufferbloat'. Things like CoDel will make this better when the pipes are more full, but they're not widely deployed yet.
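For anyone who wants to experiment now, a rough sketch on a Linux gateway; the interface name eth0 is an assumption and fq_codel needs kernel 3.5 or newer:
tc qdisc replace dev eth0 root fq_codel    # swap the default qdisc for fq_codel
tc qdisc show dev eth0                     # verify it took effect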
"I have my home wired up like a datacenter. Everyone else should want a huge amount of network capacity and capability so that it makes my already extravagant costs slightly cheaper."
JHFCOAS - this is Slashdot. What we're doing now is what will be sold in a box for $200 at WalMart in five years. I'm amazed to find tech geeks who don't even know that normal people have been buying inexpensive Buffalo and WD NAS solutions at the office supply store since 2008. And with all this shit going on about the NSA, you can bet people are going to be pulling some of their stuff back out of the cloud.
Re: (Score:2)
On the typical home network, this may very well be true. I can fill a 1Gbit/s link at home if I really want to. But for real data transfers, the network link does not tend to be the bottleneck.
On production servers, things look different. I have worked in a place where 1Gbit/s links were a very problematic limitation for some of our servers. We would have loved to have 10Gbit/s on board.
Re: (Score:2)
Parent may be flamebait (atm) but I found it terribly funny... I just installed an analog phone system from the mid-80's. The phone rings, we answer it. I didn't feel the need to buy a new shiny to do that. Spent $25 (DSL splitters) to set up a 3 line and 13 handset system and only used half the system :)
Personally I don't have a real need for this, but seems the logical progression. I know lots of you do move mountains of stuff locally at home and/or work.
It's still expensive as hell (Score:2)
And if you've ever looked at a NIC, you can see why. You get a modern gig server-class NIC and it has this tiny little ASIC on it that does everything and draws less than a watt. Heck, it'll probably drive two ports, if the hardware is on it. Then you get a 10gig NIC and it has a much larger ASIC with a big heatsink on it, and perhaps another chip as well. Guess what? That extra silicon costs extra money, as well as all the other related shit. And it just gets more and more expensive as you want more ports, l
Re: (Score:3)
The fact that you can use existing commodity Cat 6 cables for up to 55m with 10GigE will help a lot too. Yes, Cat 6a cables are required for the full 100m distance, and yes, Cat 6a cables are themselves cheap ($4 for that 6 feet you mention costing $80 with Twinax), but for lengths under 55m, the cabling that you've already got will continue working at the higher speeds. I think that will be a big factor, especially for consumers, where cables longer than 10 or 15 metres are incredibly rare anyhow.
Right now
Cost (Score:3, Insightful)
10GE Motherboards are still pointless when 10G routers & switches are still way too expensive.
Re: (Score:2)
It's a case of demand. There's no demand for those routers and switches because motherboards don't have 10GbE ports on them. Motherboards don't have 10GbE on them because there's no cheap routers or switches. Something has to give eventually and the motherboard probably makes the most sense to give in first.
Re: (Score:2)
Chicken and Egg, Bob. If I have a bunch of devices held back only by a few switches that can easily be replaced, the switches, when they drop a little in price, are getting replaced.
It's totally different when I need to rip out every single component, down to the wiring in the walls, to upgrade the network.
Re: (Score:2)
10GE Motherboards are still pointless when 10G routers & switches are still way too expensive.
Absolutely true. You can get a single-port 10Gb card that uses Cat6 cabling for less than $300, but the cheapest switch with more than eight 10Gb ports is around $8000. You can piece together a switch with 6-8 10Gb ports (using modules) for around $4000.
So, the reality is that you will pay 1x-3x the cost of the 10Gb NIC for a port to plug it into. Although that is less than the relative cost per port for high-end 1Gb managed switches, that's because the cost of a 1Gb NIC is basically pennies.
Re: (Score:2)
Well, if you are ok with going totally no-frills, you can get a 8*10G switch for under 800€ from netgear:
http://direkt.jacob-computer.de/_artnr_1491948.html?ref=103 [jacob-computer.de]
Re: (Score:2)
Well, if you are ok with going totally no-frills, you can get a 8*10G switch for under 800€ from netgear:
Yes, I am, and thanks for the pointer.
I have two dual port 10Gb cards in the machine that is my SAN so that I can connect to 4 servers back-to-back, and the switch would allow me to replace them with single port cards and still have failover. I could then sell the two dual port cards for almost the cost of the switch.
My idea of the perfect cable (Score:2, Interesting)
Four strands, two copper, two fiber.
The two fiber strands enable redundancy (ring topology all the way to the end-point);
The two copper strands for being able to provide power to devices.
That's it. That's all that's needed.
Re: (Score:2)
Also a lot of big box stores. Lowe's stores are mostly fiber and they just did a huge upgrade from 10Mb/s to 1Gb/s last year.
Side note distance story:
Had a trouble ticket for a Home Depot where we found that one of the printers up front was wired all the way back to the data center in the opposite corner, about 550 feet. Our temporary fix was to drop the port down to 10Mb/s until we could get a lift in to run a line to the IDF by the printer.
Re: (Score:2)
For what? What's the application? Way too expensive to run to my IP Phone or Desktop PC (could just use fiber or copper, why both?). Unnecessary in the datacenter (we don't need PoE). What's the use case?
Purists demand that One Cable Rule Them All. This naturally leads to a One True Cable that is wildly overengineered and expensive for the keyboards and mice and IP phones of the world, while still failing to support common, but in some way unusually demanding, edge scenarios.
Re: (Score:2)
But you have to admit that Cat5e with RJ45s on the end sure came close to that "one true cable" for a long time.
Re: (Score:2)
. . . along with fusion splicers dropping from thousands to under $200
Commodity (Score:4, Insightful)
Of course its growth was going to be lower.
The primary use of 10GbE is virtualization. The use of network cards is a function of the number of chassis, not the number of hosts. Numerically, 10GbE is not 10 1GbE cards. You can split the 10GbE between a lot of hosts. You can easily double, triple, or even quadruple that, making that one 10 GbE card the equivalent of 1 GbE cards on 40 servers, depending on their load and use. Instead of buying 40 servers and associated cards, you're buying one larger chassis with larger pipes. In a large farm environment, it makes sense.
Throw in the fact that a network is only as fast as its narrowest choke point, and there is no reason to put a 10 GbE card behind a 7Mb DSL connection.
What 10GbE needs to become a commodity is a) end of any data caps, b) data to put down that pipe, and c) a pipe that can handle it.
Show me fiber to my door and then, it will be a commodity.
Re: (Score:2)
Most people use issued DSL or Cable modems for networking. Commodity use is directly tied to broadband. And those modems shipped based on the tech supported by the ISP. Switching to 1GbE on the switch side tracks to when companies implemented DOCSIS 2.0. When they move to DOCSIS 3.0, then you'll see an upgrade in networking layer in residential use.
Re: (Score:2)
And yet, there was apparently a reason to put GbE cards behind that same 7Mbit DSL connection, or else we'd still be on 100BaseTx.
The bottlenecks are elsewhere (Score:4, Insightful)
Ten gigabits per second is 1,250 megabytes per second. High-end consumer SSDs are advertising ~500 MB/sec. A single PCIe 2.0 lane is 500 MB/sec. Then there's your upstream internet connection, which won't be more than 12.5 MB/sec (100 megabits/sec), much less a hundred times that. I guess you could feed 10GbE from DDR3 RAM through a multi-lane PCIe connection, assuming your DMA and bus bridging are fast enough...
I'm sure a data center could make use of 10GbE, but I don't think consumer hardware will benefit even a few years from now. Seems like an obvious place to save some money in a motherboard design.
Re:The bottlenecks are elsewhere (Score:5, Insightful)
You're looking at things backwards. If you've got a 500 MB/s SSD, then you shouldn't look at 10GigE and say "that's twice as fast as I need, it's useless". You should look at the existing GigE and say "my SSD is four times faster, one gigabit is too slow"...
Even a cheap commodity magnetic hard disk can saturate a gigabit network today. The fact that lots of computers use solid state drives only made that problem worse. Transferring files between computers on a typical home network these days, I think the one gigabit per second network limitation is going to be the bottleneck for many people.
Re:The bottlenecks are elsewhere (Score:4, Insightful)
You're looking at things backwards. If you've got a 500 MB/s SSD, then you shouldn't look at 10GigE and say "that's twice as fast as I need, it's useless". You should look at the existing GigE and say "my SSD is four times faster, one gigabit is too slow"...
If I want to copy tons of large, sequentially-read files every day, maybe. (Assuming that 500 MB/sec actually hits the wire instead of bottlenecking in the network stack.) But I'm not sure why I would do that. If I have a file server, my big files are already there. If I have a media server, I can already stream because even raw Blu-ray is less than 100 Mbps. If I'm working on huge datasets, it's faster to store them locally. If I really need to transfer tons of data back and forth all the time, I'm probably not a typical home network user. ;-)
Re: (Score:3)
Transferring files between computers on a typical home network these days, I think the one gigabit per second network limitation is going to be the bottleneck for many people.
Real world calling, most home networks have gone wireless and most use laptops, tablets or other portable devices that don't get plugged in more than they need to. Even if you have a family server or one of the kids is a gamer with a desktop it still won't go any faster. The GigE cap is only if you need to move huge amounts of data between two wired - or at least plugged in for the occasion - boxes in the same house, which is quite rare. That anybody feels speed is a limitation is rarer still, cables are mo
Re: (Score:2)
Actually, I'd say "two bonded/teamed/aggregated GbE NICs is good enough". That's half the throughput of your SSD, but you're probably not maxing out your SSD constantly, and you've got headroom for plenty of local disk I/O while you're at it. You could go for 4 bonded GbE NICs, and that'll cost far less than even a single 10GbE port.
If we're talking about a SAN, sure, you probably want (multiple) 10GbE po
Re: (Score:2)
Ten gigabits per second is 1,250 megabytes per second. High-end consumer SSDs are advertising ~500 MB/sec. A single PCIe 2.0 lane is 500 MB/sec. Then there's your upstream internet connection, which won't be more than 12.5 MB/sec (100 megabits/sec), much less a hundred times that. I guess you could feed 10GbE from DDR3 RAM through a multi-lane PCIe connection, assuming your DMA and bus bridging are fast enough...
More importantly, you can't make an IP stack consume or generate 10Gbit on any hardware I know of, even if the application is e.g. a TCP echo client or server where the payload gets minimal processing. The only use case is forwarding, in dedicated hardware, over 1Gbit links. 10Gbit is router technology, until CPUs are 5-10 times faster than today, i.e. forever.
Re: (Score:2)
Netflix OpenConnect pushes 20GBit+ on a FreeBSD-9 base with nginx and SSDs. Over TCP. To internet connected destinations.
Please re-evaluate your statement.
Re: The bottlenecks are elsewhere (Score:2)
You are so wrong it isn't even funny.
We're running app stacks at full line rate on 40GbE using today's hardware. A dual-socket Sandy Bridge server (e.g. an HP DL380) has no problem driving that kind of bandwidth. Look up Intel DPDK or 6WINDGate if you want to learn a thing or two.
It's real, it works, and we're getting ready to start 100GbE testing.
Also it is a matter of what you need (Score:3)
For many things you do, you find 1gbit is enough. More doesn't really gain you anything. It is enough to stream even 4k compressed video, enough such that opening and saving most files is as fast as local access, enough that the speed of a webpage loading is not based on that link but something else.
Every time we go up an order of magnitude, the next one will matter less. There will be fewer things that are bandwidth limited and as such less people that will care about the upgrade.
As you say, 10gbit, or eve
Re: (Score:3)
I'm sure a data center could make use of 10GbE, but I don't think consumer hardware will benefit even a few years from now.
10GbE would mean you could move your storage off your local machine to your NAS, since those remote disks would be as fast as the average local disk. There are a lot of uses for this, like saving money by only having programs/data on one set of disks, but still having very fast access.
No, not every home user could benefit from this, but not every home user benefits from 1GbE, either.
Re: (Score:2)
So storage access is already 5x faster than 1GbE.
Sounds to me like 10GbE is already overdue.
For the cluster I develop for at work we have a 40Gb InfiniBand LAN. For serious IT I'd skip 10GbE now and go to IB.
Re: (Score:2)
Seems like an obvious place to save some money in a motherboard design.
Savings are only available right now. 10Mbit chips are actually more expensive now than 10/100. Older style cards even more so. It's all about economies of scale. Given enough years 10Gbit may become the standard and it may be too expensive to produce slower boards.
Re: (Score:2)
Given enough years 10Gbit may become the standard and it may be too expensive to produce slower boards.
That was kinda my point, although I didn't say it very well. I think 10GbE will become common on home systems when it's about the same price as 1GbE.
Re: (Score:2)
And memristors are right around the corner and can run at main memory speeds.
That will be great, but I think "right around the corner" is a little ambitious. It takes a long time to implement a new memory technology at the scale needed for PC hard drives. I'd expect memristor USB drives long before SSDs.
How long until 4GB/s cheap SSDs?
My guess? Never. Shrinking flash makes reliability harder (fewer electrons on the floating gates). And manufacturers are already pushing TLC SSDs for density. Both of those affect read and write speeds. And again, you have to look at the overall picture. SATA3 is 600 MB/sec, so for
Re: (Score:2)
They will be selling both DDR3 and SSDs by 2015.
Re: (Score:3)
One 6Gb SAS drive (the de facto local and network standard in 2013 datacenters) can do 300-600 MB/s per port (a good deal faster than older 6Gb SAS drives). It's pretty easy to saturate a 10Gb ethernet connection under the right conditions with the standard 2 ports found on an HP, DELL or IBM low end x86 solution.
dropping to cents (Score:2)
Don't count on the price of 10gigE dropping to cents. Unlike gigE, 10gigE has really very little 'enterprise' competition technologies. Fibre channel, infiniband, etc. - if you want more than gigE speeds, it's going to cost you. Those were costly technologies then - but back then, they offered significantly more performance (and thus value) than gigE. With 10gigE, there is no financial incentive to drop costs.
Re: (Score:2)
So in other words, this is probably 4-5 years off from chipset implementation - for Intel boards. This leaves all the other boards implementing Broadcom, Realtek, etc. out in the rain...
Still waiting for 1G (Score:2)
Most of my customers are still running 100Base-T and see little reason to upgrade since their networks primarily exist to distribute Internet access. What took so long? Nobody seems to really want it. Slashdot crowd notwithstanding.
Limited use (Score:2)
I would argue that part of the issue is that 10GigE connections have limited use. Not that they're not useful, but at this point, with the amount of data we're moving around, most people aren't going to see a huge benefit over existing solutions. It's a little like why desktop computer sales have slowed in general: what people have now is kind of working "well enough".
Of course, part of the problem is that a lot of what people are doing now is over the Internet, which means that you're bottlenecked by yo
Expensive (Score:3)
The best reason I can think of not to buy a 10-gigabit Ethernet card is simple: The cheapest ones go for $351 on Newegg. Want an Ethernet switch to go with that? That will be $1036.
So once again, the answer is simple, and it has to do with a dollar sign.
Gigabit equipment got really cheap fairly quickly, but not so much for the 10-gigabit equipment.
PCs do not define IT (Score:2)
Please stop talking like your desktop defines IT. 10 Gb ethernet has been around for years for Sun/Oracle servers, IBM servers, Cisco switches, storage arrays, etc. Hell, I could even get 10Gb for my Mac. It hadn't made it into the PC world yet due to office wiring to the desk still being Cat 5. It's hard enough to get 1 Gb connections for the general user.
So nice and fast (if you can afford it) (Score:3)
We have some of these at work where we do have the need for moving massive volumes of data around. We can get about 99.6% of theoretical throughput in actual use, thanks to the hardware offloading and large frame support. Besides being 10x faster to start with, that's way above any efficiency we get from the 1 GbE ports, though I expect if 10 GbE went commodity you'd lose all the hardware support and be back in the 80-90% range.
Note that to sustain a data feed to one of these you need at least two SATA 6 Gbps SSDs in RAID0. On the receiving end we're not writing to disk, or you'd need ~3-4 RAIDed.
In our case we're feeding 4 10GbE ports on the same machine and using a 10 SSD RAID0 to supply the data with some headroom (we don't care if we lose the data if one fails, these aren't the master copies). We're just using software RAID, but thanks to all the DMA and offloading the CPU usage is quite low.
Now do I need this at home? Well, SSD speeds are far above the ~85 MB/sec 1GbE delivers, but so far the cost hasn't made it worth it. If I'm copying a gigabyte it takes 12 seconds, which I can live with.
Re: (Score:2)
Windows is involved. Going Linux to Linux we can get faster, and Linux is the receiver, but this product requires Windows to be the sender.
It's quite possible we are doing it wrong, but even supposed network experts were unable to get the Windows boxes to go faster with commodity 1GbE motherboard ports. I'm the one who got 10 GbE working Windows -> Linux and that was pretty effortless (turn on large frame on both ends, turn on all offloading on both ends).
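For reference, roughly what that tuning looks like on the Linux end; eth0 is an assumed interface name and the exact offload flags vary by NIC and driver:
ip link set dev eth0 mtu 9000            # jumbo ("large") frames; must match the other end and the switch
ethtool -K eth0 tso on gso on gro on     # typical segmentation/receive offloads
ethtool -k eth0                          # list what the NIC actually supports
On the Windows side the equivalent knobs typically live under the NIC driver's advanced properties.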
Router / switch costs (Score:2)
Cisco and most other vendors have made 10Gb ports too expensive and/or don't have a backplane that can effectively support 10Gb across all the ports. This is pretty ridiculous given how cheap processors have gotten. Even when they do support it, the licensing and maintenance costs can be crazy.
For that reason we're currently deploying several 1Gb connections to our VM servers through various switches (depending on costs per port, reliability needed and location).
I've been hoping that late 2013 is when 10Gb
Cost and complexity (Score:3)
I think the main reason is cost. I have been working with 10GbE for several years writing drivers for PHYs and MACs. I've worked with a number of PHYs and 10GbE is a lot more complex. For example, the SFP+ cables and modules each have a serial EEPROM that contains parameters needed to program the PHY. It's not just a simple RJ45 CAT5/CAT6 cable. As someone who has worked on 10GbE drivers, there's a lot more complexity: with some PHYs I have to query the serial EEPROM to make changes based on things such as cable length and whether the cable is active or passive, copper or optical. Copper runs are also usually limited to much shorter distances unless active cabling is used.
In terms of cost, a 1 meter copper cable is around $43 from www.cablesondemand.com. A 12 meter cable is $189. It's not like gigabit where you just plug in a CAT5 cable and go.
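On Linux you can peek at that module EEPROM yourself; a sketch, assuming an interface named eth0 and a driver that supports it:
ethtool -m eth0     # dump the SFP+ module EEPROM (vendor, cable type, length, copper vs. optical)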
Diminishing returns (Score:2)
Most people I know barely benefit from gigabit Ethernet. Most people I know are not running Exchange servers and huge file sharing projects on their LANs but hosting their data on their local PC and using their network for E-mail and the web.
While 10-100Mbit made a huge difference to peoples' networking abilities, and going from that to gigabit helps with smooth transfers of larger files, there's still a lot of people running 100Mbit and quite happy with it because modern switches are pretty good at what t
Re: (Score:2)
Define "large capacities"? Most notebooks sold at a thousand bucks or more use SSDs for primary storage now (certainly all the ultrabooks and tablets do), and even the $700 Dell notebook that got recently has a 32 gig SSD for caching (Intel SRT).
Re: (Score:3, Informative)
it's trivial to enable LACP to bond several 1 gbps links. no new equipment, no new cabling. that would have slowed down my 10 gbps deployment.
10x1Gb != 1x10Gb. Your LACP bond still limits a single stream to a single link. Even with multiple streams, you would need a lot of them for the hashing to spread traffic across all the links.
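For what it's worth, a minimal Linux LACP bond sketch with iproute2; the interface names and hash policy are assumptions, and even layer3+4 hashing still pins any single TCP stream to one slave link:
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up     # the switch ports must be configured as an LACP group as well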
Re: (Score:2)
That is very true indeed. And I have experienced a real world use case, where that was a severe limitation. Since then TCP has been extended to take advantage of multiple links, but that feature is not yet widely supported. In performance tests, it has been shown possible to push 50Gbit/s over a single TCP connection run across a bundle of 6x10Gbit/s.
Re: (Score:2)
The main reason why 10GbE took time to arrive is simple: connectors are not the good old RJ45 used for 10Mb, 100Mb and 1GbE. The RJ45 connector is small, cheap and backward compatible. The 10GbE connectors are deep, expensive and not RJ45-compatible, hence cannot be used as a 1GbE port.
I use 10Gb over Cat6 in my home (to connect my servers to the SAN). It's really easy to find 10GbE with RJ45 connectors, like this card [newegg.com].
Re: (Score:2)
No, he's talking about SFP+ connectors. All of the 10GbE equipment I work with has SFP+ cages to accept various modules for optical or copper connections. They look just like GBIC transceivers.
Re: (Score:2)
I don't know how any of you people get streaming to work over wireless.