IEEE Seeks Consensus on Ethernet Transfer Speed Standard 92
New submitter h2okies writes "CNET's News.com reports that the IEEE will start today to form the new standards for Ethernet and data transfer. 'The standard, to be produced by the Institute of Electrical and Electronics Engineers, will likely reach data-transfer speeds between 400 gigabits per second and 1 terabit per second. For comparison, that latter speed would be enough to copy 20 full-length Blu-ray movies in a second.' The IEEE also reports on how the speed needs of the internet continue to double every year. Of what consequence will this new standard be if the last mile is still stuck on beep & creep?"
Hype! (Score:5, Funny)
Ethernet transfers never use more than a fraction of available bandwidth. So it's 2 blu-ray discs per second, 4 tops!
Re: (Score:3)
I have had the results of SQL queries nearly max out my Ethernet connection (96Mbps)
I told you it wasn't good to use a CROSS JOIN across all of your Access tables.
Re: (Score:1)
You are wrong. The whole point of databases is to handle such data intensive workloads.
Re: (Score:2, Informative)
Re: (Score:3, Insightful)
Unfortunately I have met several programmers who do exactly that. Usually recent refugees from homemade .csv land.
Then they go on an epic bender of why SQL is not webscale and we need to use nosql solutions etc etc.
I realize this sounds like a daily WTF post but I've also seen people implement sorting in the app instead of letting the DB do it. Madness.
Re:Hype! (Score:5, Funny)
Unfortunately I have met several programmers who do exactly that. Usually recent refugees from homemade .csv land.
Then they go on an epic bender of why SQL is not webscale and we need to use nosql solutions etc etc.
I realize this sounds like a daily WTF post but I've also seen people implement sorting in the app instead of letting the DB do it. Madness.
Why would I trust the lousy SQL server app to properly implement a superior bubble sort algorithm?
Re: (Score:2, Offtopic)
Re: (Score:1)
Re: (Score:2)
Not everything can be done in the database, even sorting. I've had client requirements that a column be sorted by the 4th character unless the field only had a 2-character prefix instead of a 3-character prefix, and some values did not have a prefix at all, and it got worse from there.
The arcane SQL that would have been required for that would have been nearly impossible to deal with. The good news is that I was eventually given permission to tell the client to fuck off (though I had to be slightly nicer ab
Re: (Score:2)
Re: (Score:2)
Not everything can be done in the database, even sorting. I've had client requirements that a column be sorted by the 4th character unless the field only had a 2-character prefix instead of a 3-character prefix, and some values did not have a prefix at all, and it got worse from there.
Been there, done that, did not enjoy it at all. Well, not that exact situation. The solution I chose, because I had DBA access, was to create a big fixed-width synthetic sort key and index on it. You put all that icky if/then and case/end stuff into an app that squirts out a big sort key that'll always sort correctly based on the crazy rules. Often this is an excuse for implementing some kind of MVC wrapper where you put the key generator in the model, or an excuse to make triggers in the database if it ca
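The synthetic-sort-key trick described above can be sketched roughly like this. The prefix rules here are hypothetical stand-ins for the "crazy rules" in the parent comments, and `sort_key` is an invented name, not anything from a real schema:

```python
def sort_key(value: str, width: int = 32) -> str:
    """Build a fixed-width synthetic sort key.

    Hypothetical rules, loosely modeled on the scenario above:
    skip a 3-letter prefix if present, else a 2-letter prefix,
    else sort on the whole value.
    """
    if len(value) >= 4 and value[:3].isalpha():
        body = value[3:]      # 3-char prefix: sort from the 4th character
    elif len(value) >= 3 and value[:2].isalpha():
        body = value[2:]      # 2-char prefix: sort from the 3rd character
    else:
        body = value          # no prefix: sort the whole value
    # Pad to a fixed width so lexicographic order stays stable,
    # and so the key can be stored and indexed as a plain column.
    return body.ljust(width)

rows = ["ABC100", "XY042", "007", "ABD050"]
print(sorted(rows, key=sort_key))  # ['007', 'XY042', 'ABD050', 'ABC100']
```

In the database you would persist the key (via the app or a trigger) and put an ordinary index on that column, so the messy conditional logic runs once per write instead of once per sort.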
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I've got two webservers and a database server connected via gigabit. It routinely blasts past 100mbps in DB traffic.
Right now it averages over half a terabyte of data a day, and most of that is in certain peak hours.
Re: (Score:2)
I would be happy just to have speeds shown at 'real world' results, rather than 'theoretical' limits. What good are these ratings if people in the real world never actually see them?
Re: (Score:2)
There's a good reason and a bad reason. The good reason is that theoretical limits are objective and reproducible; real world limits depend upon a host of factors.
The bad reason is that the most impressive-sounding statistic is the one that sells.
Re: (Score:2)
Yes, but knowing that overhead will always reduce the throughput by a specific amount, they could simply exclude that. Wireless is a good example of what you are saying, as it would vary so much depending on location, interference, etc., but it should be based on the best possible 'real world' values rather than a non-achievable theoretical limit.
Re: (Score:2)
overhead will always reduce the throughput by a specific amount,
No it won't. Overhead depends on a variety of factors: cable quality, traffic levels, software.
Re: (Score:3)
I'm talking about protocol overhead.
For example, all things being equal, a computer connected to a hub via a stock ethernet cable with a guaranteed link speed up and down should produce a result that's generally in the same area each time (hence the 'real world').
It's not a difficult concept. We're not asking for a rating for every conceivable configuration, but best case real world numbers. WiFi theoreticals are nowhere near their real world numbers.
Re: (Score:3)
Re: (Score:2)
I suppose you could probably get a consistent value if you were very specific about the physical setup and the benchmark. But it wouldn't be that much different from the theoretical limit, and wouldn't tell you jack about real-world use cases.
Benchmarking is a complicated, controversial branch of computing. Any time you try to "prove" that one piece of hardware or software is faster than another, somebody who's selling competing technologies will show you benchmarks to "prove" that you're wrong. You're not
Re: (Score:2)
Re: (Score:2)
Sadly, for most of us, the real-world speed of our 1 Tbps Ethernet connection will be the speed of the 1.5 Mbps DSL line that feeds it. All the fast LANs in the world won't help you if WAN speed improvement is blocked by telecoms that don't want to spend any money on their infrastructure.
Re: (Score:2)
Yes, and for a few esoteric markets like high-performance computing, that matters. For the rest of the world, it doesn't.
Businesses with lots of computers may say they want terabit Ethernet, but computers aren't built for them. They're built for consumers. When the average consumer has at best double-digit Mbps service to their home, there's not much impetus for computer manufacturers to build in hardware that goes much over gigabit. So when businesses realize that the only way they're getting terabit-
Re: (Score:2)
This is the old chicken-and-egg situation when it comes to parts, so the sooner the standard is released, the sooner products will show up that support it. Now, one thing that many businesses want is clients with no local storage, or perhaps even no processing in the local "terminal", with a central server providing EVERYTHING. The only way to make this sort of thing not seem like crap compared to a reasonable workstation would be to have enough bandwidth
Re: (Score:3)
The last thing most businesses want are dumb terminals. It doesn't matter how fast the link is. A thin-client business is a business that ceases to function if the very-expensive server goes down. The few businesses I've seen that want thin clients are mostly in retail, and their networking needs are usuall
not so much hype (Score:3)
It's pretty easy to max out a 100Mbit ethernet link. Gigabit is also doable with a bit of work. It's a bit harder to max out a 10G port but it can be done with multiple queues and large packets. Once you hit 10G you really need to be using multiple queues spread across multiple CPUs and offloading as much as possible to hardware.
Re: (Score:2)
Gigabit isn't difficult at all. I max out my gigabit network between my main computer and my NAS quite easily. Well, using 95% of it anyhow -- over Windows shares, no less. I'm sure I could do better with a protocol that has less overhead and better windowing.
Re: (Score:3)
I have no problem saturating 10G links, but then again I'm working on multi-core CPUs with 10-32 cores optimized for networking (the 10G interfaces are built-in to the CPU). I have a PCIe NIC card on my desk with 4 10Gbe ports on it (along with a 32-core CPU).
It's also neat when you can say you're running Linux on your NIC card (it can even run Debian).
Re: (Score:2)
What card is this?
Re: (Score:2)
Search for CN6880-4NIC10E [google.com]. It has a Cavium OCTEON CN6880 [cavium.com] 32-core CPU on it with dual DDR3 interfaces. It would take some work to make it run Debian (requires running the root filesystem over NFS over PCIe or 10Gbe). All of the changes to support Linux are in the process of being pushed upstream to the Linux kernel GIT repository and hopefully sometime in the future I will get enough time to start pushing my U-Boot bootloader changes upstream as well.
All of the toolchain support is in the mainline GCC and bi
20 bluray per tbit? (Score:5, Insightful)
I think someone got their bits and bytes mixed up...
Re: (Score:3)
That always happens.
With a little overhead, 1 Tbit/s is at most 100 GiB a second. Two Blu-rays.
Re: (Score:3)
1 Terabit a second on the wire translates to about 100 Gigabytes a second of actual data transfer. Most modern encoding schemes and encapsulation protocols average 10 bits to represent an octet.
Re: (Score:3)
Ethernet's specs account for encoding overhead. That means 1Gb/s is 1Gb/s minus protocol overhead, not 800Mb/s minus protocol overhead.
Re: (Score:2)
True... the phrase I was searching for was 'protocol overhead'. I did a quick google and found the following:
http://sd.wareonearth.com/~phil/net/overhead/ [wareonearth.com]
Since we are talking ethernet, here are the numbers:
Ethernet: 1500/(38+1500) = 97.5293 %
TCP: (1500-52)/(38+1500) = 94.1482 %
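Plugging the same frame sizes into a quick script reproduces those figures. Here the 38 bytes stand for per-frame Ethernet overhead (preamble, header, FCS, inter-frame gap) and 52 bytes for TCP/IP headers with timestamps, as assumed on the linked page:

```python
# Per-frame efficiency of a full-size 1500-byte Ethernet payload.
# Overhead byte counts are taken from the parent's source, not measured.
PAYLOAD, FRAMING, TCPIP = 1500, 38, 52

eth_eff = PAYLOAD / (PAYLOAD + FRAMING)            # raw Ethernet efficiency
tcp_eff = (PAYLOAD - TCPIP) / (PAYLOAD + FRAMING)  # TCP payload efficiency

print(f"Ethernet: {eth_eff:.4%}")  # 97.5293%
print(f"TCP:      {tcp_eff:.4%}")  # 94.1482%
```

Note these are best-case numbers for full-size frames; smaller packets push the ratio down fast because the fixed 38 bytes of framing stay the same.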
Re: (Score:1)
1 Tbit = 1000 Gbit = 125 GByte = 20 x 6.25 GByte.
A 1080p feature film could be 6.25GB.
I don't see the mix up. (Though I do agree that's a theoretical maximum with no accounting for overhead or other factors that reduce the effective performance).
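As a rough sanity check of the arithmetic above (decimal units, no protocol overhead; the 50 GB dual-layer disc capacity is an assumption, not from TFA):

```python
# Raw line rate to "discs per second", ignoring all overhead.
line_rate_bits = 1_000_000_000_000      # 1 Tbit/s
bytes_per_sec = line_rate_bits / 8      # 125 GB/s

film = 6.25e9   # a single compressed 1080p film, per the parent
disc = 50e9     # assumed full dual-layer Blu-ray disc

print(bytes_per_sec / film)   # 20.0 films per second
print(bytes_per_sec / disc)   # 2.5  full discs per second
```

So both camps can be right: 20 per second if "Blu-ray movie" means a 6.25 GB film, about 2.5 per second if it means a full 50 GB disc.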
Re: (Score:2)
but we're talking DVD9's with room to spare now, not blu-rays.
Re: (Score:2)
It's much worse than that. Somebody's reading comprehension isn't quite up to par
FTFA: "enough to copy two-and-a-half full-length Blu-ray movies in a second."
Re: (Score:2)
Whooops!!
Correction: "Updated 10:05 a.m. PT August 20 to correct the 1Tbps data-transfer speed in terms of Blu-ray disc copying times."
Re: (Score:2)
I think someone got their bits and bytes mixed up...
They never said how big the blurays actually were.
Re: (Score:2)
A standard bluray is 120 x 1.2mm, right?
Re: (Score:1)
More importantly, what's the conversion between Blurays and Blue Whales?
Re: (Score:1)
I think someone got their bits and bytes mixed up...
Stop picking on Billy Van:
http://www.youtube.com/watch?v=EntiJhQ9z_U [youtube.com]
copy 20 full-length Blu-ray movies in a second (Score:1)
The MPAA will be putting the kabosh on that.
Consequence for the last mile? None for ages. (Score:4, Insightful)
Re:Consequence for the last mile? None for ages. (Score:5, Insightful)
Consequences to me in long haul fiber optic transport? Massive.
Depending on how they implement 400G and Terabit it may affect the transport systems I deploy today, given that those speeds will likely require gridless DWDM which is currently just on the roadmap for most vendors.
Then, once it does come out, if our infrastructure is ready for it we will probably be able to deploy a Terabit link for the same price as 3 or 4 100G links. By that time 100G will start feeling a little tight anyway if we keep up the 50% a year growth rate.
There are no consequences to the last mile, for the same reason 100G has no consequences in the last mile.
Even 10G I only see used in the last mile to large customers like wireless backhaul or healthcare.
It's a silly summary but still an important topic.
Re:Consequence for the last mile? None for ages. (Score:4, Interesting)
Ask CERN (Score:1)
I'm sure they could put 400G last-mile to good use.
But yeah, for most of us, not so much, at least not this half-decade.
Re: (Score:1)
I bought CAT6A bulk for my apartment, and have a star-layout with wall panels in every room. The termination is probably not up to spec, so I expect a little lower speeds. Then again, it's only a home network.
The point of going CAT6A was to avoid (or at least delay) upgrade.
So far, CAT6A equipment is nowhere to be found in my price-range. And laptop hard disks are still the number one bottleneck. Going all SSD on OS disks and 7200rpm on the NAS.
Re: (Score:1)
"the IEEE will staht today"
there, fix that for ya.
That depends (Score:1)
When will the standard become final?
If it will become final by Christmas, I'll give you a number I can live with.
If it won't become final for 12 months after that, I'll give you a higher number.
not worth it in most cases (Score:2)
Before you drop serious dough on a 10G switch, consider whether you'll actually be able to use the speed. That's roughly a gigabyte per second. You'd need a reasonably serious RAID to get anywhere close to that unless your data is all in RAM. You'd also need a fairly beefy PCI subsystem and likely 8+ CPU cores just to keep up with the I/O.
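A back-of-envelope version of that argument, assuming roughly 150 MB/s of sequential throughput per spinning disk (a ballpark figure, not from the article):

```python
# What a saturated 10 Gbit/s port demands of the storage subsystem.
link_bytes_per_sec = 10e9 / 8    # 1.25 GB/s of payload at line rate
hdd_seq = 150e6                  # assumed sequential MB/s per HDD spindle

drives_needed = link_bytes_per_sec / hdd_seq
print(round(drives_needed, 1))   # ~8.3 drives striped just to keep up
```

Eight-plus spindles striped together just to feed one port is why, outside of all-RAM or SSD-cached workloads, the link is rarely the bottleneck.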
For backplane routing it makes sense because it's just forwarding lots of I/O aggregated from lots of other places. For most servers it's overkill.
Re: (Score:3)
Nah. My NAS (low end) maxes out my 1Gbps connection easily, and they claim I can team two 1Gbps connections together and it will fill them up. Based on the CPU usage and I/O, I'd say that it could do much more than that if it had better connectivity options. It's not unreasonable to need 10Gbps connections, although yes, to actually use all the bandwidth between any two connections would be more difficult. Most enterprise SANs and some NASs use RAM and SSDs as caching mechanisms and can easily saturate
Re: (Score:3)
Re: (Score:2)
Perhaps more importantly when you combine storage, VM migration, and network traffic onto a pair of interfaces 10Gb is often barely enough and really requires some type of QoS so that migration traffic doesn't starve network or more importantly storage traffic.
Wussies (Score:2, Funny)
I will accept nothing less than 1 zillion bits per second.
Re: (Score:2)
pff, i will accept nothing less than a closed timelike curve between my neural implant and every server in existence. information delivered straight to my brain just before i ask for it
Re: (Score:1)
Well, at least there is plenty of room for it.
last mile (Score:2)
Of what consequence will this new standard matter if the last mile is still stuck on beep & creep?
We're gonna need a faster station wagon!
Re: (Score:2)
We could add a second station wagon and allow for full-duplex communications.
Re: (Score:2)
Well now you got me thinking about racks of Backblazes in a cargo container.
Little slow/late? (Score:1)
We did this last time, and wasted a bunch of time. (Score:5, Insightful)
Last time around there was a question of 40GE vs. 100GE. Largely (although not exclusively) the server guys pushed a 40GE standard for a number of reasons (cost, time to market, cabling issues, and bus throughput of the machines), and the network guys pushed to stay with 100GE. Some (pre-standard?) 40GE made it out the door first, but it's not a big enough jump (a 4x10GE LAG is cheaper), so there is no real point. 100GE is starting to gain traction since a 10x10GE LAG causes reliability and management issues.
This diversion probably delayed 100GE getting to market by 12-24 months, and the vast majority of folks, even server folks, now think 40GE was a mistake.
Why is the IEEE even asking this question again? The results are going to be basically the same, for basically the same reasons. 1TbE should be the next jump, and they should get working on it pronto.
Re: (Score:3)
Why is the IEEE even asking this question again? The results are going to be basically the same, for basically the same reasons. 1TbE should be the next jump, and they should get working on it pronto.
Because the companies that make the hardware are going to sell more modules :-P
I can't understand why the author even mentions laptops and PCs in this article. First make sure you can utilize the existing 1 Gbps technology, then see how to implement faster interfaces. Right now the bottleneck on home Ethernet is slow hard drives and cheap "gigabit" NICs that underperform.
Re: (Score:2, Informative)
The confusion between 40G ethernet and 100G ethernet is vast. But the actual reason for the standard has nothing to do with time-to-market or technological limitations beyond 40G. The 40G ethernet standard is designed to run ethernet over telco OC768 links. This standard allows vendors to support OC768 with the same hardware they use in a 100Gbps ethernet port.
dodgy calculations (Score:2)
For comparison, that latter speed would be enough to copy 20 full-length Blu-ray movies in a second.'
someone doesn't understand the difference between bits and bytes.
LAN not WAN (Score:2)
"The IEEE also reports on how the speed needs of the internet continue to double every year. Of what consequence will this new standard be if the last mile is still stuck on beep & creep?"
None whatsoever, since Ethernet is a LAN protocol, not a WAN protocol. It will be used in data centers that require big pipes between servers, and possibly compete with Fibre Channel for access to storage.
Re: (Score:1)
... ethernet is a LAN protocol, not WAN.
Not anymore. http://google.com/search?q=ethernet+wan [google.com]
2 Feb 2006 – Undoubtedly, Ethernet has become the technology of choice for wide-area network (WAN) connectivity for both the enterprise and the carrier. [electronicdesign.com]
Infiniband is advancing over Ethernet (Score:1)
I have been waiting 5+ years for 10 gigabit ethernet copper to fall in price like 100 mbps fast ethernet and 1 gigabit ethernet did, but it hasn't happened. Infiniband adoption has grown rapidly in the last few years. 12x FDR infiniband promises ~160 gigabit speeds, comparable to 16x PCI Express version 3. Maybe the IEEE should come up with cheap 2.5 gigabit ethernet, and give up on higher speed copper networking.
As for long distance optical, I want them to cram as many lasers into a single node fiber as ec