IEEE Seeks Consensus on Ethernet Transfer Speed Standard
New submitter h2okies writes "CNET's News.com reports that the IEEE will start today to form the new standards for Ethernet and data transfer. 'The standard, to be produced by the Institute of Electrical and Electronics Engineers, will likely reach data-transfer speeds between 400 gigabits per second and 1 terabit per second. For comparison, that latter speed would be enough to copy 20 full-length Blu-ray movies in a second.' The IEEE also reports on how the speed needs of the internet continue to double every year. Of what consequence will this new standard be if the last mile is still stuck on beep & creep?"
20 bluray per tbit? (Score:5, Insightful)
I think someone got their bits and bytes mixed up...
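The arithmetic behind this objection is easy to check. Assuming a 50 GB dual-layer Blu-ray disc (capacities vary, so this is an assumption):

```python
# Sanity-check the "20 Blu-rays per second" claim at 1 Tbit/s.
# Assumes a 50 GB dual-layer Blu-ray disc.

link_bits_per_s = 1e12            # 1 terabit per second
bluray_bytes = 50e9               # 50 GB per disc
bluray_bits = bluray_bytes * 8    # 400 Gbit per disc

discs_per_second = link_bits_per_s / bluray_bits
print(discs_per_second)           # 2.5 -- not 20

# 20 discs/s at 50 GB each works out to 8 Tbit/s, i.e. 1 TByte/s:
# the claim only holds if you read "terabit" as "terabyte".
```

So the summary is off by exactly a factor of 8, which is what a bits-for-bytes swap looks like.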
Consequence for the last mile? None for ages. (Score:4, Insightful)
Re:Consequence for the last mile? None for ages. (Score:5, Insightful)
Consequences to me in long haul fiber optic transport? Massive.
Depending on how they implement 400G and Terabit, it may affect the transport systems I deploy today, given that those speeds will likely require gridless DWDM, which is currently just on the roadmap for most vendors.
Then, once it does come out, if our infrastructure is ready for it, we will probably be able to deploy a Terabit link for the same price as 3 or 4 100G links. By that time 100G will start feeling a little tight anyway, if we keep up the 50%-a-year growth rate.
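The timeline implied by that growth rate can be worked out directly. At the commenter's 50% a year, demand grows tenfold (100G to 1T) in about 5.7 years; at the summary's doubling-every-year figure, it takes only about 3.3:

```python
import math

# Years for traffic to grow 10x (e.g. from filling a 100G link to
# needing a 1T link) at two growth rates: the commenter's 50%/yr
# and the summary's doubling-every-year.

years_at_50pct = math.log(10) / math.log(1.5)
years_at_doubling = math.log2(10)

print(round(years_at_50pct, 1))     # ~5.7 years
print(round(years_at_doubling, 1))  # ~3.3 years
```

Either way, the window before 100G "feels tight" is well inside a single standards cycle.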
There are no consequences to the last mile, for the same reason 100G has no consequences in the last mile.
Even 10G I only see used in the last mile for large customers like wireless backhaul or healthcare.
It's a silly summary but still an important topic.
Re:Hype! (Score:3, Insightful)
Unfortunately I have met several programmers who do exactly that. Usually recent refugees from homemade .csv land.
Then they go on an epic bender about why SQL is not webscale and we need to use NoSQL solutions, etc.
I realize this sounds like a Daily WTF post, but I've also seen people implement sorting in the app instead of letting the DB do it. Madness.
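For the record, the point being mocked looks like this; a minimal sqlite3 sketch with a made-up `users` table:

```python
import sqlite3

# "Let the DB do it": ORDER BY in SQL vs. sorting in application code.
# The table and rows here are invented purely for illustration.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, signup_date TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("carol", "2012-03-01"), ("alice", "2012-01-15"),
                  ("bob", "2012-02-10")])

# Anti-pattern: pull every row across the wire, then sort in the app.
rows = conn.execute("SELECT name, signup_date FROM users").fetchall()
app_sorted = sorted(rows, key=lambda r: r[1])

# Better: let the database sort (and, on a real table, use an index).
db_sorted = conn.execute(
    "SELECT name, signup_date FROM users ORDER BY signup_date").fetchall()

assert app_sorted == db_sorted
print(db_sorted[0][0])   # alice
```

The results match on a toy table, but on anything large the in-app version ships the whole result set to the client and forfeits any index the database could have used for the sort.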
We did this last time, and wasted a bunch of time. (Score:5, Insightful)
Last time around there was a question about 40GE vs. 100GE. Largely (although not exclusively) the server guys pushed a 40GE standard for a number of reasons (cost, time to market, cabling issues, and the bus throughput of the machines), and the network guys pushed to stay with 100GE. Some (pre-standard?) 40GE made it out the door first, but it's basically not a big enough jump (you can just LAG 4x10GE more cheaply), so there is no real point. 100GE is starting to gain traction, as doing a 10x10GE LAG causes reliability and management issues.
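Part of why a big LAG is a poor substitute for one fat pipe: link aggregation pins each flow to a single member link by hashing its headers, so no one flow can exceed a member's 10G, and the hash rarely loads the members evenly. A toy sketch with made-up flow IDs:

```python
import random
from collections import Counter

# Toy model of LAG flow hashing: each flow hashes onto one member of
# a 10x10GE bundle. A single flow is capped at that member's 10G, and
# the per-link load is typically uneven. Flow IDs are invented.

members = 10                       # a 10x10GE LAG
flows = [random.getrandbits(32) for _ in range(200)]

load = Counter(flow % members for flow in flows)
print(sorted(load.values()))       # rarely a perfectly even 20 per link
```

Add the operational overhead of monitoring ten physical links per logical one, and the appeal of a single 100GE port is obvious.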
This diversion probably delayed 100GE getting to market by 12-24 months, and the vast majority of folks, even server folks, now think 40GE was a mistake.
Why is the IEEE even asking this question again? The results are going to be basically the same, for basically the same reasons. 1TbE should be the next jump, and they should get working on it pronto.