IEEE Seeks Data On Ethernet Bandwidth Needs
itwbennett writes "The IEEE has formed a group to assess demand for a faster form of Ethernet, taking the first step toward what could become a Terabit Ethernet standard. 'We all contacted people privately' around 2005 to gauge the need for a faster specification, said John D'Ambrosia, chairman of the new ad hoc group. 'We only got, like, seven data points.' Disagreement about speeds complicated the process of developing the current standard, called 802.3ba. Though carriers and aggregation switch vendors agreed the IEEE should pursue a 100Gbps speed, server vendors said they wouldn't need adapters that fast until years later. They wanted a 40Gbps standard, and it emerged later that there was also some demand for 40Gbps among switch makers, D'Ambrosia said. 'I don't want to get blindsided by not understanding bandwidth trends again.'"
Build it (Score:2)
& they will come
Re: (Score:2)
Re: (Score:2, Interesting)
Did they? Because I remember finding 10Mb/s networks too slow in the mid '90s. Switched 10Mb/s networks made that a bit better, but often there was still a bottleneck. On the other hand, I've only found 100Mb/s too slow on a few occasions - maybe once per year. I've used GigE, but I've never come close to saturating it.
Like the grandparent said, it's a question of diminishing returns. 1Mb/s is fast enough for pretty much any text-based data. 10Mb/s is fine for still images, unless they're really hu
Re:Build it (Score:5, Informative)
Much of the talk is about the operator and hub level, not end users. As a result, terabit Ethernet makes sense with the numbers you present, provided a specific hub serves enough clients.
Essentially it's a case of making internal ISP networks simpler to build.
Re: (Score:1)
Dammit. I need terabit ethernet between my computer and my server at my home. I demand near-instant access to my information at all times.
GigE is such a drag.
Re:Build it (Score:4, Informative)
depends what you're using it for, doesn't it?
gig-e is still slow. sure it might be fine for a single desktop port, but...
hook it up to a SAN, and before you know it you're running into the limits of a few gig-e ports bound into an etherchannel.
storage requirements are going to continue to grow. HD video / audio is going to continue to become more widespread. if you're dealing with limited numbers of cables to carry data for large (and increasing) numbers of users, there's no escaping the need for more bandwidth.
Re: (Score:1)
When I use DLNA to stream HD content to 3 TVs (one in the kitchen, one in the living room and 1 or 2 in the kids' rooms) and use N-spec wifi at the same time, the DLNA lags sometimes. By my calculations there should be some bandwidth left over, but not much. The lagging is probably caused by unexpected overheads and GbE switches performing at "GbE in theory" speeds, but with the world moving towards a phase where every
Re: (Score:2)
1 Gig is slow even for a desktop! (Score:2)
I use 1G Ethernet at home and have it topped out as far as speed goes.
I get up to 125MBps writes and 119MBps reads over CIFS using Win7-64 and a Linux-based Samba server, and it's nowhere near fast enough to keep up with many apps.
I'd *like* to go 'diskless' on my Win7 box so all my files would be backed up on my server, but have to make do with only storing my home dir (docs, basically).
Even with that, a roaming profile can take about 5-10 minutes per GB to save when you logoff or logon (logon is faster if
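A back-of-envelope check of those numbers (assumptions noted in the comments) suggests why the wire itself isn't what's slow here:

```python
# Back-of-envelope check (illustrative, using the numbers in the comment above):
# a roaming profile saving at 5-10 minutes per GB implies an effective rate far
# below what the GigE link itself can deliver.
GIB = 1024**3  # bytes

for minutes in (5, 10):
    rate_mb_s = (GIB / (minutes * 60)) / 1e6
    print(f"{minutes} min/GB  ->  ~{rate_mb_s:.1f} MB/s effective")

print("Link itself measured at ~119 MB/s, so the wire is not the bottleneck;")
print("per-file round trips and protocol overhead likely dominate.")
```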
Re: (Score:2)
For instance, if you are doing server virtualization, cheap multicore CPUs and cheap RAM means that it isn't at all implausible or uncommon to have numerous VMs all living in a single 2U, with the bandwidth demands of whatever it is that they are doing, plus the bandwidth demands brought about by the fact that there isn't any room for disks in there, so all their storage I/O is happening over iSCSI.
Re: (Score:2)
At least until it becomes very much cheaper, anything faster than gigabit is mostly about reducing the cable mess in high density situations. For instance, if you are doing server virtualization, cheap multicore CPUs and cheap RAM means that it isn't at all implausible or uncommon to have numerous VMs all living in a single 2U, with the bandwidth demands of whatever it is that they are doing, plus the bandwidth demands brought about by the fact that there isn't any room for disks in there, so all their storage I/O is happening over iSCSI. You end up with every expansion slot filled with 4 port gigE cards and a real rat's nest.
Try an ESXi cluster of a blade chassis of 16 servers, each running 10 or more VMs. The switch cross connects back to the core start to become a problem even at 10 GbE.
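To put rough numbers on that blade scenario, here's an illustrative estimate; the per-VM storage I/O figure is an assumption, not something from the thread:

```python
# Illustrative aggregate-bandwidth estimate for the blade scenario above.
# The per-VM storage I/O figure is an assumption, not from the comment.
blades = 16
vms_per_blade = 10
per_vm_io_mb_s = 20          # assumed average sustained storage I/O per VM (MB/s)

aggregate_mb_s = blades * vms_per_blade * per_vm_io_mb_s
aggregate_gbit_s = aggregate_mb_s * 8 / 1000

print(f"~{aggregate_mb_s} MB/s aggregate  (~{aggregate_gbit_s:.0f} Gbit/s)")
# ~3200 MB/s (~26 Gbit/s) of storage traffic alone already needs several
# 10GbE uplinks before any VM-to-VM or client traffic is counted.
```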
Re: (Score:3)
Except you'll not be seeing anywhere close to 54Mb/s actual throughput. You'll see around 20Mb/s, barely double the 10Mb/s Ethernet network that you deemed too slow in the mid-90's. Proves your point though that you're unlikely to need more in a home setup. Server data centres are a different story...
Re: (Score:3)
Except that modern wireless access points and NICs do 54 Mbps on multiple channels.
Unfortunately, 802.11n is a marketing term, and can mean either 2.4 GHz, 5 GHz or both. Because consumers are cheapskates with little or no technical understanding and WAY too much faith in marketing, the trend is towards not offering 5 GHz band anymore, to save costs.
Hint: If equipment says a/b/g/n, it will support both, and you'll likely get 150 Mbps speeds (120 in reality). If lucky, you may even get 300 (230 in reality
Re: (Score:2)
Re: (Score:2)
150 is not 120 in reality. not even theoretically.
Re: (Score:1)
Wifi-g actually doesn't provide 54Mb/s of effective BW, more like around 27Mb/s. Just FYI.
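A quick illustration of that nominal-vs-effective gap, assuming the roughly 50% protocol efficiency implied above (the real factor varies with frame size, contention and range):

```python
# Rough illustration of nominal vs. effective 802.11g throughput, assuming the
# ~50% MAC/protocol efficiency implied by the comment above (the exact factor
# varies with frame size, contention and distance).
nominal_mbit = 54
mac_efficiency = 0.5          # assumed

effective_mbit = nominal_mbit * mac_efficiency
effective_mbyte = effective_mbit / 8
print(f"~{effective_mbit:.0f} Mbit/s effective (~{effective_mbyte:.1f} MB/s)")
# ~27 Mbit/s, i.e. roughly 3.4 MB/s of actual payload.
```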
Re: (Score:1)
Re: (Score:2)
On the other hand, I've only found 100Mb/s too slow on a few occasions - maybe once per year. I've used GigE, but I've never come close to saturating it.
This isn't about or for home users, or even small office users. It's about network operators.
In my small operation (under 100 servers, three 1Gb internet connections) I have several places where I completely saturate 1Gb and have, for cost reasons, trunked it (10 GbE is still very expensive when you look at having to replace/upgrade core switching to support it). Switch cross connects and SANs are the biggest offenders. Trunking sucks (anything that requires more complexity and configuration is always wors
Re: (Score:2)
This isn't about or for home users, or even small office users. It's about network operators.
Which is exactly the point. That's what diminishing returns means here. 10Mb/s is too slow for 90% of users. 100Mb/s is too slow for 10% of users. 1Gb/s is too slow for 1% of users. 10Gb/s is too slow for 0.001% of users. Each speed bump increases the number of people for whom it's fast enough. If you're designing a new 100Gb/s interconnect, it's going to be for such a small group of people (compared to the total set of computer users) that defining something backwards compatible with 100Mb/s Ethernet may not be worthwhile.
Re: (Score:2)
it's going to be for such a small group of people (compared to the total set of computer users) that defining something backwards compatible with 100Mb/s Ethernet may not be worthwhile.
I'm not sure what you think I was responding to or talking about. You're bringing up a point that I completely agree with, but also one that I wasn't discussing in my post.
Re: (Score:2)
Re: (Score:2)
... and his point is that you're looking at the bottom.
You need to look further up, where 1000s of those users are trying to cram data through your links. It adds up.
Re: (Score:2)
Well, I think the ATM crowd should be allowed to say "I told you so" since "trunking" under ATM is fantastically simple since there are no reordering problems and as such no need for hashing and balancing.
ATM crowd, please step to the stage for due credit... ...crickets...
Oh right, everyone went for the technology they understood instead of the better one. Par for the course.
Re: (Score:2)
ATM crowd, please step to the stage for due credit... ...crickets...
Oh right, everyone went for the technology they understood instead of the better one. Par for the course.
There is no possible rebuttal to this.
Re: (Score:2)
I don't think you guys were listening (Score:1)
Re: (Score:2)
AIUI the real issue is that 40 and 100 gigabit Ethernet are really just a low-level system (and, as I understand it, more efficient than packet-level link aggregation techniques) for aggregating 10 gigabit links. If you want 40 gigabit you need 4 fiber pairs (or 4 wavelengths in a WDM system); if you want 100 gigabit you need 10 fiber pairs (or 10 wavelengths in a WDM system).
40G/100G is the first time in the history of Ethernet that the top speed hasn't been able to be run through a single fiber transceiver. Do you really want to be using up 10 fiber pairs when 4 would be sufficient?
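For illustration only, here is a minimal Python sketch of the round-robin lane-striping idea described above; the real 802.3ba PCS stripes 64b/66b blocks and adds per-lane alignment markers, so treat this as a cartoon of the concept rather than the actual mechanism:

```python
# Minimal sketch of the multi-lane idea behind 40G/100G Ethernet: the data
# stream is striped round-robin across several lower-speed lanes and
# reassembled at the far end.
from typing import List

def stripe(blocks: List[bytes], lanes: int) -> List[List[bytes]]:
    """Distribute blocks round-robin across `lanes` lanes."""
    out = [[] for _ in range(lanes)]
    for i, blk in enumerate(blocks):
        out[i % lanes].append(blk)
    return out

def reassemble(lane_data: List[List[bytes]]) -> List[bytes]:
    """Interleave the lanes back into the original block order."""
    blocks = []
    for i in range(max(len(lane) for lane in lane_data)):
        for lane in lane_data:
            if i < len(lane):
                blocks.append(lane[i])
    return blocks

blocks = [bytes([b]) * 8 for b in range(10)]
lanes = stripe(blocks, 4)                 # e.g. 4 x 10G lanes for 40G
assert reassemble(lanes) == blocks
```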
Re: (Score:3)
Do you really want to be using up 10 fiber pairs when 4 would be sufficient?
I would when 4 is no longer sufficient.
The cost of the cable is minor compared to the cost of laying it, so I can't help thinking 100Gb makes more sense overall.
Re: (Score:2)
"Laying" in this context typically means buried cable, in other words medium-to-long or longer-distance runs. Even cable that costs tens of dollars per foot costs much more than that per foot once you factor in the heavy equipment needed to dig the trench and the manpower to physically lay the cable.
Re: (Score:1)
Typical Slashdotters, they know nothing when it comes to getting laid.
Re: (Score:2)
Cable pulls might work well in a building through conduits, but it gets kind of difficult to pull a cable several miles...
Re:I don't think you guys were listening (Score:4, Interesting)
When you can, you 'plan for expandability' by pulling as many strands of fiber in a single bundle as they'll let you get away with. The cost of each strand is comparatively small. The cost of pulling a bundle, whether it be two strands or 128 strands, is comparatively huge. You then just leave the ones you don't immediately need dark until you do need them.
For very nasty runs (undersea cables, backbones of large landmasses, etc.) I'm told that there is some emphasis on designing new transmitter/receiver systems that can squeeze more bandwidth out of the strands you already have (when the alternative is laying another fiber bundle across the Pacific Ocean, almost arbitrarily expensive endpoint hardware starts to look like a solid plan...) Such matters are well beyond my personal experience, though.
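A rough, entirely hypothetical cost comparison of the "pull extra strands now" logic from the two comments above (all dollar figures are assumptions):

```python
# Illustrative cost comparison (all figures assumed, not from the comments):
# the marginal cost of extra strands is small next to the cost of the trench.
run_feet = 5280                     # a one-mile buried run
cable_cost_per_foot = 20            # assumed cable cost, $/ft
install_cost_per_foot = 80          # assumed trenching/labour cost, $/ft
extra_strand_cost_per_foot = 0.50   # assumed marginal cost per extra strand, $/ft

base = run_feet * (cable_cost_per_foot + install_cost_per_foot)
extra_24_strands = run_feet * extra_strand_cost_per_foot * 24

print(f"Base run:            ${base:,.0f}")
print(f"+24 spare strands:   ${extra_24_strands:,.0f}  "
      f"({extra_24_strands / base:.1%} of the base cost)")
# Re-trenching later to add capacity would cost roughly the full base amount again.
```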
Re: (Score:2)
I'm told that there is some emphasis on designing new transmitter/receiver systems that can squeeze more bandwidth out of the strands you already have
Yeah this was in the news today [zdnet.com.au]. It talks about 100Gbps per wavelength and 16Tbps in total.
Re: (Score:2)
Ferrets.
Re: (Score:2)
True enough, but there's a lot of cable already installed, and the cost of requiring new cable as opposed to being able to use the currently installed one is VERY high indeed, and the replacement cost goes up even more if the new cable is thicker than the one it is replacing, since that can lead to needing new buried pipes because the new cables won't fit through the old pipes.
And I don't see a compelling reason. A single current-day single-mode optical fiber is capable of transmitting 15 Tbit/s over 100 miles
Re: (Score:2)
True. Servers and datacenters will need more bandwidth earlier. But on the flip side, they're also willing to pay bigger bucks, so overall it evens out.
A $100 network card is right out for the home market, or at least very expensive, given that that's 10% of what an entire typical computer costs today. In contrast, a $1000 network card for a heavy-duty server can be entirely acceptable.
Re: (Score:2)
Lay 10 fibers in preparation for 100gb, then team two 40gb using 8 of those 10. As 22nm yields go up and the tech leaks into networking, prices will drop dramatically. Heck, Intel claims cheap consumer-grade 10gb NICs will be made with 22nm and we should see integrated 10gb cropping up in 2012.
In ~3 years, we should see 10gb NICs where 1gb used to be.
Re: (Score:2)
I just did a round of purchases and all of our new servers included integrated 10gb. These were all Supermicro based with integrated Intel 10gb. You can pick up XFP transceivers for around $250 each, and I think the chassis cost around $2500 bare bones. Switches were pricey as hell though.
Re: (Score:2)
These things have been in use for a long time in telecom and they are still pretty expensive.
If you're using multi-mode fiber in a small LAN then you can use cheaper components, but multimode fiber won't be as future proof if they ever move up to the terabit speeds mentioned by tfa.
Re: (Score:2)
No, but it will make signal processing for copper based 10gb cheap enough to put 10gb NICs into $80 motherboards just like how 1gb became a commodity.
Re: (Score:1)
Rather than 10, make it 12 or even 16; 10 makes it future-proof, while 12-16 gives businesses an opportunity for other channels. The cost will go down as they implement it anyway.
Re: (Score:2)
If you are laying new fiber from scratch I would agree laying plenty of spare is a good idea, given that the amount we can cram down one fiber seems to be plateauing somewhat (it hasn't completely stopped increasing, but I'm pretty sure that 40/100 gigabit is the first time a new speed of Ethernet has been unable to run down a single fiber at release)
OTOH a lot of places will be using fiber laid years ago. Back in the days when gigabit (which can easily run on one fiber pair) was the new hotness even four p
Bandwidth trends? (Score:1)
When was the last time someone significantly increased hardwired bandwidth?
I gotta stop drinking red wine, and then posting on
Re: (Score:2)
It might be a misinterpretation, but it's the most common usage in the world today.
Yeah, because being commonly believed makes something true *facepalm*
When was the last time someone significantly increased hardwired bandwidth?
I guess Firewire, USB, HDMI, DisplayPort, Thunderbolt, etc. If you're talking switches then I think there are 10Gbps ones available, but they aren't necessary for most home users and businesses yet - anything much above 10Gbps and you're going faster than most storage devices can currently handle anyway, and for most people right now, 1Gbps should be acceptable for backups and file transfers.
I don't give a crap about increasing local ethern
Re: (Score:1)
anything much above 10Gbps and you're going faster than most storage devices can currently handle anyway,
Not true for long. Infiniband EDR 12x is 300Gbit/sec. It's only a matter of time before that speed hits the desktop. The fastest single internal device you can buy [fusionio.com] currently goes 6Gbit/sec. You'd need a cluster linked via Infiniband to reach 300Gbit, probably around 9 nodes with 6 cards per node. It's definitely attainable.
Re: (Score:2)
That fusion IO thing is actually 6GByte/s, which is 48Gbit/s (unless they made a mistake with capitals on that page), but it's not exactly small business/consumer grade stuff! If you set up a RAID array then you're obviously going to be able to handle higher bandwidths, but such a setup is really superfluous and overcomplicated for the majority of PC users.
Re: (Score:1)
If you are talking about SMB / consumer level stuff take a good look at solid state.
The last generation of the OCZ Vertex can saturate a couple of gigabit links, especially since it is saturating the 3Gb SATA link that is connecting it to the PC; it would take a mere 3-4 of these to saturate a 10Gb link. Mind you, this is consumer level and last generation at that. Two of the newer generation (running on SATA 3 vs SATA 2) would easily saturate a 10Gb Ethernet link
All of this assumes the machine with these beasts
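Rough arithmetic behind the "3-4 drives" claim, using approximate line rates and ignoring protocol overhead on both sides:

```python
# Rough numbers behind the claim above: how many SATA-2 SSDs does it take to
# fill a 10GbE link? Figures are approximate line rates only.
sata2_mb_s = 300        # ~3 Gb/s SATA-2, ~300 MB/s usable after 8b/10b coding
tengig_mb_s = 1250      # 10 Gbit/s = 1250 MB/s raw

drives_needed = tengig_mb_s / sata2_mb_s
print(f"~{drives_needed:.1f} SATA-2 SSDs to fill a 10GbE link")   # ~4.2
```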
Re: (Score:2)
Re: (Score:2)
The assorted 802.11 standards are substantially slower even in theory, and their quoted bandwidth numbers are usually absurdly inflated.
Re: (Score:2)
no you won't.
not unless you have an airport in your lap as well. And it will be the 450 megabit shared between every device, rather than switched 100 meg per port.
Besides, if you were in any way cluey, you would have used cat5e, and be pushing gigabit.
Re: (Score:1)
Who cares about our "needs"?
I believe that developing a "next-generation" standard costs time and money. They probably want to avoid investing millions to develop a technology that people won't buy quickly (perhaps due to the high price that the products would have at the beginning).
Re: (Score:1)
Re: (Score:2)
Basically anyone using a real computer with a real operating system. Toys and their vendors need not apply.
They should have asked Meat Loaf (Score:1)
You and me we're goin' nowhere slowly
And we've gotta get away from the past
There's nothin' wrong with goin' nowhere, baby
But we should be goin' nowhere fast
640 k... (Score:3)
Re: (Score:2)
I've got my personal server with a 3TB RAID 5 array at home, and when it's backup time, my Gbps Ethernet card is white hot. My scenario is not about copying stuff around for the sake of it, but just backing up my own stuff. I just do photography, and one photo can take up to 100 MB (1 x RAW (5616 x 3744 x 14bits), 1 x color/distortion corrected (5616 x 3744 x 16bits), 1+ x edited (5616 x 3744 x 16bits), 2+ x JPGs at different resolutions). It is mostly OK because I launch the backups just before going to bed, but w
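For a sense of scale, a quick calculation using the ~100 MB-per-photo figure above; the size of a day's shoot is an assumed example:

```python
# Quick check of why a nightly photo backup can keep a GigE link busy, using
# the ~100 MB-per-photo figure from the comment above. The shoot size is an
# assumed example.
photo_mb = 100
photos_per_shoot = 500              # assumed size of one day's shoot
gige_mb_s = 110                     # realistic GigE throughput over SMB/NFS

total_mb = photo_mb * photos_per_shoot
seconds = total_mb / gige_mb_s
print(f"{total_mb/1000:.0f} GB  ->  ~{seconds/60:.0f} minutes at {gige_mb_s} MB/s")
# ~50 GB takes roughly 8 minutes at GigE speeds; at 10GbE it would be well
# under a minute (disks permitting).
```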
Re: (Score:2)
So your desktops are all 100Mbps (which, you're right, is more than adequate for general use).
So the switch they plug into has to have a 1Gb backbone (usually one per 12-16 clients for office-type stuff, or else you hit bottlenecks when everyone is online - but for everyone to have "true" 100Mb, you need a 1Gb line per 8-or-so clients).
Those 1Gb backbones (usually multiple) then have to daisy-chain throughout your site (and thus if your total combined usage is over 1Gb in any one direction, you're stuffed) O
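The oversubscription arithmetic behind those client-per-uplink figures works out roughly like this:

```python
# Oversubscription arithmetic behind the comment above: how many 100 Mb/s
# clients can one 1 Gb/s uplink carry without becoming the bottleneck?
uplink_mbit = 1000
client_mbit = 100

for clients in (8, 12, 16):
    demand = clients * client_mbit
    ratio = demand / uplink_mbit
    print(f"{clients:>2} clients -> {demand} Mb/s demand, "
          f"{ratio:.1f}:1 oversubscription")
# 8 clients fit with headroom (0.8:1); 12-16 clients rely on not everyone
# transferring at once (1.2:1 to 1.6:1).
```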
Re: (Score:3)
If you're running two bonded 1Gb connections from a database server to serve 25 users in a school and it's not fast enough, I can only think of two possible explanations:
1. It's a university rather than a school, and it's a big dataset being used for reasonably high-tech research.
2. Your problem is not the network.
Re: (Score:1)
Re: (Score:2)
That's more-or-less what I meant by "your problem is not the network" ;)
Re: (Score:2)
He said "school" and "central database"; couple that with a server running two bonded NICs and 25 users, and there is one logical conclusion: it's a shared MS Access file, and it's gotten pretty big.
Those things can easily hit 2 gigs or so if not compacted. 25 users all trying to hit it via CIFS/SMB sounds like loads of bandwidth to me.
Re: (Score:2)
If he's got 25 people opening a 2GB Access database simultaneously, I refer you to explanation 2. The network is not the problem.
Re: (Score:3)
The first bit sounds more like a design issue than a problem with network speed. If you're really saturating your uplinks in this way and heavily utilising the network infrastructure, I suspect you might want something a bit more robust than the setup you have described.
"A 24-port 10/100 with 2 port 10Gb will be a killer product when it emerges, is standardised, and cheap enough. Hell, I could use it NOW."
To be honest, the price difference between a 24x10/100 + 2x10Gb and a 24x10/100/1000 + 2x10Gb would pr
Re: (Score:2)
A 24-port 10/100 with 2 port 10Gb will be a killer product when it emerges, is standardised, and cheap enough. Hell, I could use it NOW.
The future is here! 10GBASE-T was standardized over 5 years ago, and fiber variants before that. Every major manufacturer's midrange fixed-config edge switch lineup has a 24/48 port 10/100/1000 switch with dual 10Gb uplinks.
Just a few examples:
http://www.cisco.com/en/US/products/ps6406/index.html [cisco.com]
http://www.extremenetworks.com/products/summit-x350.aspx [extremenetworks.com]
http://www.brocade.com/products/all/switches/product-details/fastiron-gs-series/index.page [brocade.com]
http://h30094.www3.hp.com/product.asp?sku=3981100&mfg_part=J914 [hp.com]
Re: (Score:1)
Re: (Score:2)
Honestly I don't understand your use case. If you are really working with data volumes that large then I/O is almost certainly your problem. You would be better off sharing a terminal server (on whatever OS you like) or each having your own VM that you remote into in some way. That way the machine can be attached to a SAN with Fibre Channel or iSCSI on bonded Ethernet with more channels than is practical to run to your desk. Also that SAN can have a metric shit tonne of cache, and loads of spindles.
There i
Re: (Score:2)
At the office, where basically everything but the OS is done from network storage, for backup and easy-availability-from-any-PC purposes, 100mb is OK; but for working on larger files y
Re: (Score:2)
Some of us have internet connections that are faster than 100mb.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
I have GigE at home and I use it. 100M can't keep up with even a crappy hard disk.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Using Win7 at home, I hit 110MB/sec over SMB2.0/IPv6 on my integrated Intel NIC. Best part is the 1.5% cpu that 110MB/sec uses.
At my last job, we had ~200 computers that did nightly back-ups of the primary user's profile. We had quite a few back-ups that were over 2GB 7-zipped. Quite often, we had to restore these back-ups to their computers because they deleted a file or something. A lot of man hours were saved using gigabit. Our workshop had its own 96-port gig switch with dual 10gb uplinks to the network's
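An illustrative restore-time comparison for those ~2GB backups, at realistic Fast Ethernet versus gigabit throughput:

```python
# Illustrative restore-time comparison for the ~2 GB backups mentioned above,
# at realistic Fast Ethernet vs. gigabit throughput.
backup_gb = 2
rates_mb_s = {"100 Mb/s (Fast Ethernet)": 11, "1 Gb/s (GigE)": 110}

for label, rate in rates_mb_s.items():
    seconds = backup_gb * 1000 / rate
    print(f"{label:<26} ~{seconds/60:.1f} min per restore")
# Roughly 3 minutes vs. 18 seconds per machine; multiplied across frequent
# restores, the gigabit edge ports pay for themselves in staff time.
```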
Yes, it's needed (Score:2)
It may not be needed this instant, but there's no such thing as too much bandwidth. Just off the top of my head, I can think of a whole bunch of reasons one would want terabit Ethernet:
- For High Performance Computing and Database Replication -- both of these can result in systems that have performance that is almost entirely limited by the network, or very careful (expensive) programming is required to work around the network. Think about Google's replication bandwidth requirements between data centers! Cloud
Re: (Score:2)
Terabit?? Terabit?! Gimme Zettabit Ethernet, give me sex...tillion bits per second, baby!
you probably mean titillions per second ;)
Say no to 40Gbps (Score:2)
Come on guys. Powers of 10! You can't be going and moving from my powers of 10 wired Ethernet speeds, how will I do the simple math!
1 -> 10 -> 100 -> 1000 -> 10000
Easy maths! Say no to 40Gbps.
pimp my ethernet (Score:2)
What we should have had all along is a system by which Ethernet could dynamically adjust its speed in smaller increments to match the existing wiring capacity, both in terms of the bit signaling rate on a pair of wires and in how many pairs are used (e.g. if I use 16 pairs from 4 parallel Cat 7 cables, it should boost the speed as much as it can and use them all in parallel). Of course actual devices can have limits, too, and the standard should specify the minimums (like at least 4 pairs required, additional pa
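Purely as an illustration of the idea being described (this is not how real Ethernet autonegotiation works), a hypothetical negotiation might look like:

```python
# Hypothetical sketch of the negotiation the commenter is describing: pick the
# highest per-pair signaling rate both ends support, then aggregate across
# however many pairs the cabling provides. Illustration only.
from typing import List

def negotiate(pairs_available: int,
              local_rates_mbit: List[int],
              remote_rates_mbit: List[int],
              min_pairs: int = 4) -> int:
    """Return the aggregate link rate in Mb/s, or 0 if the link can't come up."""
    if pairs_available < min_pairs:
        return 0
    common = set(local_rates_mbit) & set(remote_rates_mbit)
    if not common:
        return 0
    per_pair = max(common)
    return per_pair * pairs_available

# 16 pairs from 4 parallel Cat 7 runs, both ends supporting 250/500/1000 Mb/s
# per pair, would come up at 16 Gb/s under this (hypothetical) scheme.
print(negotiate(16, [250, 500, 1000], [250, 500, 1000]))   # 16000
```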
Stop making new things (Score:1)
Re: (Score:2)
Just wait until Thunderbolt hits 40/100Gb. I could see stacked switches using TB for cheap uplinks.
Target requirement (Score:2)
The user should notice no delay or lag anywhere, performing any task. This goes not only for bandwidth but operating systems and applications.
Obviously there are physical limitations and ultimately, there are compromises to be made but the above should be a design goal always.
why is terabit needed ... (Score:1)
... when the ISPs have barely even scratched the surface of getting megabit to the home.
What the IEEE needs to work on is technology that makes it easier to bring a few hundred megabit to the home. Whoever it was that said no one needed any more than 640kbits to the home was an idiot.
Spoilt Kids! (Score:3)
Re: (Score:2)
You got packets? In our day we had bits. Bits of lead. And there was no routing, you had to go to every single computer and ask its operator "hey, is this your bit?"