The Road To Terabit Ethernet
stinkymountain writes "Pre-standard 40 Gigabit and 100 Gigabit Ethernet products — server network interface cards, switch uplinks and switches — are expected to hit the market later this year. Standards-compliant products are expected to ship in the second half of next year, not long after the expected June 2010 ratification of the 802.3ba standard. Despite the global economic slowdown, global revenue for 10G fixed Ethernet switches doubled in 2008, according to Infonetics. There is pent-up demand for 40 Gigabit and 100 Gigabit Ethernet, says John D'Ambrosia, chair of the 802.3ba task force in the IEEE and a senior research scientist at Force10 Networks. 'There are a number of people already who are using link aggregation to try and create pipes of that capacity,' he says. 'It's not the cleanest way to do things...(but) people already need that capacity.' D'Ambrosia says even though 40/100G Ethernet products haven't arrived yet, he's already thinking ahead to terabit Ethernet standards and products by 2015. 'We are going to see a call for a higher speed much sooner than we saw the call for this generation' of 10/40/100G Ethernet, he says."
Re:Ethernet or Token Ring (Score:5, Informative)
When Token Ring died, it was because 100 Mbps Ethernet was cheaper than 16 Mbps Token Ring. I was there. Token Ring couldn't keep up; case closed.
Re:Ethernet or Token Ring (Score:4, Informative)
Many companies should have stuck with Token Ring; then there wouldn't be this slowdown during backups, updates, etc. In the end Ethernet is just slow because of the number of users on the network, and people yell for a bigger number in front of "bit" instead of changing technology.
Who modded this informative?
Either you're a moron or you're in love with Token Ring. Token Ring doesn't magically create bandwidth out of thin air. Even with Token Ring, the network has a finite, fixed maximum speed and a finite, fixed maximum bandwidth.
If you're moving terabytes on the network for big backups, there is less idle capacity on that segment of the network for other traffic, regardless of the network technology (fiber, ethernet, token ring, ATM, MPLS etc.).
Token Ring does prevent COLLISIONS, but so do full-duplex Ethernet switches. It may be easier to implement QoS and traffic shaping on Token Ring, but that is a completely different story.
Re:More data forces the need for more bandwidth (Score:4, Informative)
Corning's bendable fiber. There ya go.
http://www.xchangemag.com/hotnews/77h23134942.html
Re:Physics? (Score:2, Informative)
You have a fundamentally flawed understanding of how waveforms propagate over a medium.
Electrons in a physical signal, such as one transmitted over Ethernet, do NOT move at the speed of light or anywhere near it. If they did, the current would be enormous and would fry everything it touched.
The signal propagates as a wave. The electrons themselves only shift slightly, because the currents involved are very small; I won't go into the details of the electrons' random thermal motion versus their net drift.
The waveform moves much faster than the electrons do, and its speed is set by the properties of the medium - for copper, essentially the cable's dielectric and hence its velocity factor. (In optical fiber there is no electron movement at all.)
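To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python; the wire gauge and current are illustrative assumptions, not measurements:

import math

n = 8.5e28                     # free electrons per m^3 in copper
q = 1.602e-19                  # electron charge, coulombs
d = 0.511e-3                   # assumed 24 AWG conductor diameter, metres
area = math.pi * (d / 2) ** 2  # cross-sectional area, m^2
current = 0.01                 # assumed signal current, amperes

drift = current / (n * q * area)   # drift velocity: v = I / (n*q*A)
signal = 0.64 * 3.0e8              # wave speed at a typical ~0.64 velocity factor

print(f"electron drift speed: {drift:.1e} m/s")   # a few micrometres per second
print(f"signal (wave) speed : {signal:.1e} m/s")  # roughly 1.9e8 m/s

The electrons crawl along at micrometres per second while the wave covers most of the distance to the Moon in the same second, which is the whole point of the post above.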
Re:Physics? (Score:5, Informative)
In practical applications, the latency is greater for two reasons. The most obvious is that we are not laying cables in a straight line. If I ping the machine I have on the other side of the park from here, the data goes via London, a few hundred kilometres out of the way. If you use satellite relays, then the signal is bouncing up and down between the surface and the satellite's orbit at least once, adding to the distance.
The second reason is switching time. The signal travelling along the wire is very quick, but even on a single-segment network the data has to be processed by two network cards, encoded going out and decoded coming in, transferred to and from userspace processes' address spaces, and so on. Things like InfiniBand lower this latency by allowing userspace code to write directly to the card, which removes some but not all of the overhead. If you are using fibre, then the conversion between an electrical signal and a sequence of photons, and back again, adds still more latency. In a switched or routed network (like, for example, the Internet), this has to be done several times because (outside of labs) we can't route packets without turning them back into electronic signals. Most routers will queue a few packets while making decisions, and at the very least they typically read the entire packet off the line before routing it, which, again, adds a bit of latency.
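To get a feel for the relative sizes of those contributions, here's a rough one-way latency budget; every input below is an illustrative assumption:

distance_km = 500        # assumed one-way fibre path length
link_gbps = 10           # assumed line rate
frame_bytes = 1500       # a full Ethernet frame
hops = 5                 # assumed store-and-forward devices in the path

propagation = distance_km * 1e3 / 2.0e8              # ~2e8 m/s in fibre
serialization = frame_bytes * 8 / (link_gbps * 1e9)  # per store-and-forward hop

print(f"propagation  : {propagation * 1e3:.2f} ms")
print(f"serialization: {hops * serialization * 1e6:.1f} us over {hops} hops")
print(f"one-way total: {(propagation + hops * serialization) * 1e3:.2f} ms (before NIC/OS overhead)")

With these numbers the flight time dominates, but per-hop processing and queueing (not modelled here) are exactly the overheads the post describes.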
In terms of throughput, there is no theoretical limit. If you can send one bit per photon, you can double the throughput by doubling the number of photons (i.e. just use two fibres). The limit is set by cost, rather than by physics. There are a few physical limits which affect this. For a given signal bandwidth, the Nyquist rate bounds the number of symbols per second you can send across a link, and Shannon's theorem ties the achievable bit rate to that bandwidth and the signal-to-noise ratio. The symbol limit is easy to misread, however, because the number of symbols does not directly correspond to a number of bits. Early modems used two tones and got speed increases by switching between them faster. Later ones used a number of different tones, and so transmitted the same number of symbols per second but more bits. The same is done with fibre, for example using polarised photons or photons of different wavelengths to provide different virtual channels within a single fibre. These can be detected separately and distinguished from each other. If, for example, you send photons of four different wavelengths, you can send two bits per photon instead of one. If you use 16 different wavelengths, you can send four bits per photon.
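A couple of quick numbers to make the symbols-versus-bits distinction concrete; the bandwidth and SNR below are made-up illustrative values:

import math

bandwidth_hz = 1e9             # assumed channel bandwidth
snr_db = 30                    # assumed signal-to-noise ratio
snr = 10 ** (snr_db / 10)

capacity = bandwidth_hz * math.log2(1 + snr)   # Shannon-Hartley capacity, bit/s
print(f"Shannon capacity: {capacity / 1e9:.1f} Gbit/s")

# Bits per symbol grow only logarithmically with the size of the symbol alphabet:
for levels in (2, 4, 16, 64):
    print(f"{levels:3d} distinguishable symbols -> {math.log2(levels):.0f} bits per symbol")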
When it comes to radio transmission, there are some even more interesting effects. If you've tried receiving analogue TV between hills, you will have seen a ghosting effect because your signal comes via two different paths. It turns out that, with two different transmitters, you can distinguish between them even if they are transmitting on the same frequency, by measuring the different paths each takes. This is particularly interesting for things like WiFi, because in urban environments (where you have the most people trying to use the same radio bandwidth at once), you get more possible return paths (due to more objects that bounce the signals), and so (given enough processing power), you can discern more individual transmitters, giving more usable bandwidth. There are lots of tricks like this - probably a great many that no one has thought of yet - that can provide greater throughput in exchange for more signal processing power.
Re:Physics? (Score:3, Informative)
Most of our network cards use Intel chipsets. What OS are you using? I've never been able to fully saturate gigabit or even 100 Mbit Ethernet under Windows. Even with simple protocols like FTP it usually tops out at 80-90% of link capacity, and more complicated ones (SMB/Windows file sharing comes to mind) reduce it even further. Transfers between two Linux machines are another matter: I've been able to saturate 100 Mbit networks easily using a variety of protocols and to achieve the aforementioned 950 Mbit/s transfers.
I just fished through all of my Cacti graphs and found a sustained (25-minute) period of 980 Mbit/s transfer between two of my switches. The link between them is regular Cat5e with a run of about 60 yards. So I'm not sure you can attribute your issues to the physical layer, although anything is possible.
Re:Physics? (Score:3, Informative)
In theory you could imagine a system that transmits information faster than the speed of light. Take a set of perfectly rigid, weightless, incompressible marbles and place them in a gutter one light year long, made of a similarly perfect inelastic material. Put a compression spring at the receiving end, and push the marbles in from the transmitting end in a pattern. The very moment you push a marble in a bit and let it relax again, the far end, one light year away, would see that exact same movement, thus transferring information faster than the speed of light.
It is, however, obviously impossible to make such perfect materials, so we're bound to sub-c communications.
40 Gigabit Ethernet explained (Score:5, Informative)
The 40 Gbps rate lines up with existing SONET OC-768 optics, so that's why we're making a stop at 40 Gbps instead of going straight to 100 Gbps: existing technology is being reused to get a useful product to market faster.
Incidentally, 10 Gigabit Ethernet (in its WAN PHY form) is similarly based on OC-192 technology, so that variant actually runs at 9.953280 Gbps.
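The odd-looking figure falls straight out of the SONET rate ladder, which is just multiples of the 51.84 Mbit/s OC-1 base rate:

OC1_MBPS = 51.84
for n in (3, 12, 48, 192, 768):
    print(f"OC-{n:<3}: {n * OC1_MBPS / 1000:.6f} Gbit/s")
# OC-192 -> 9.953280 Gbit/s; OC-768 -> 39.813120 Gbit/s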
Fiber in the future... (Score:5, Informative)
Personally, much like how BNC is still hanging around in a few spots, I think 15 years for 'more than half' would be optimistic. On the other hand, I have actually dealt with installed fiber-to-the-desktop systems, so I have a bit of experience here.
Fiber patch cords aren't as easily damaged any more, especially the plastic multimode stuff. There's also nothing preventing vendors from producing patch cords armored up to the diameter of today's Cat6 cables. That's a LOT of armor. ;)
Another option would be to steal a bit of PoE technology: have the computer's Ethernet port supply PoE, which feeds a media converter in the wall. Other options include fiber bundled with a couple of small-gauge copper wires to power the media converter in the jack, wiring AC to the jack, etc...
Why I see fiber eventually winning, even to the desktop:
1. Cost - Copper keeps going up in price, while fiber remains stable or even drops, relatively. Even today bulk gigabit+ capable fiber can be obtained cheaper than bulk cat6 cable. What currently kills fiber to the desktop is generally connector cost, combined with higher adapter cost because they're 'special purpose'. Still, laser tech keeps getting cheaper. Many motherboards today have optical connectors on them for the audio. Network adapter is a different matter, but the potential is there. Cat6 connectors are a bit harder to terminate and are also a bit more expensive. Thus far, the higher speed copper ones I've read about have been even harder. So that advantage copper has is going away.
2. Speed - Gigabit Cat5e/6 costs more than old-style Cat5, which costs more than phone-quality Cat3. To get past gigabit speeds, they're looking at having to add wires and change the connector so it's no longer RJ45 compatible. That, to me, breaks the backwards compatibility that has let twisted pair win for so long.
3. Range - In a large building, the difference between fiber and copper can be the difference between having one network room and having eight or more network closets full of powered equipment. If fiber were a bit cheaper, I'd run high-count multifiber cables to the closets and merely have a patch panel inside to distribute the lines out to the various jacks.
4. Weight & Bulk - Cat6+ is getting heavier and heavier - computer density is still increasing today. With the increase in weight and bulk, existing building cable trays and runs are becoming overloaded. Adding more is an expensive proposition, and I estimate that I can fit two times as many fiber cords into a given cable tray, at half the weight over copper runs. Even more if you put in patch closets so that you run many pair.
5. Emissions - fiber neither emits nor is affected by EMI.
6. Future proofing - copper is pushing its limits, while fiber installed today would likely need only minimal modifications to support terabit speeds in the future.
What applications do you think will require this kind of bandwidth? HD video with moderate compression should easily fit into a gigabit.
Well, how about HD 3D video? 120-150 Hz refresh rates combined with shutter glasses to display the kind of 3D video that movie theaters are showing?
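A quick back-of-the-envelope check, with every parameter below being an illustrative assumption:

width, height = 1920, 1080
fps = 120                  # high-refresh 3D as suggested above
bits_per_pixel = 24
compression_ratio = 50     # assumed 'moderate' compression

raw = width * height * fps * bits_per_pixel
print(f"raw        : {raw / 1e9:.2f} Gbit/s")                      # ~6 Gbit/s
print(f"compressed : {raw / compression_ratio / 1e6:.0f} Mbit/s")  # ~120 Mbit/s

With these assumptions, even a doubled-up 3D stream fits comfortably into a gigabit once compressed; it's the uncompressed case that doesn't.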
Still, for most business uses, I tend to say that even 10 Mbit connections are more than enough for most users. Seriously, we still occasionally find a 10 Mbit hub with some users on it. That's why speed is only one of fiber's advantages; cost, range, and bulk are bigger ones - range and bulk because, well, they drive up cost.
What I think fiber to the desktop needs is the equivalent of 10BASE-T: an open, low-cost standard that is cheap and easy to use. Right now you have a dozen proprietary connectors - some are tougher, some are cheaper, etc. We need the equivalent of the RJ-45.
For fiber, I'd consider a standard specifying optional small-gauge metallic wires for power delivery, to compete with PoE - one of the things keeping copper alive. Being pure power, it could be injected cheaply and effectively just about anywhere. Just keep the voltage low enough not to hurt anyone; de-powering such a system could be a nightmare.
Re:More data forces the need for more bandwidth (Score:3, Informative)
Most of our buildings have been wired with fiber to the desktop for the last 4-5 years. The biggest breakdown is the stupid transceivers: their power supplies go wonky, and we can't get just the power supply - we have to swap out the entire unit. New machines are coming in with fiber cards, but we still have older machines with copper Ethernet only.
Re:Physics? (Score:4, Informative)
The Shannon-Hartley theorem is not the relevant limit here. The hard limit for copper is the cutoff frequency; for optical systems, other technical challenges come into play.
Any given copper cable has an associated cutoff frequency. Past this frequency it is almost impossible to get significant amounts of energy through the cable, and the roll-off is very steep.
For most types of coaxial cable, the cutoff frequency is on the order of 1 GHz to 8 GHz. Since the bandwidth a link needs grows with its data rate and quickly exceeds what the cable can carry, copper wiring will top out at something on the order of a few GHz for most practical applications. UTP cable, as used in existing Ethernet, performs worse than coaxial cable; for practical purposes, we have probably used all of its available bandwidth for 1 Gb Ethernet. UTP has a cutoff frequency on the order of 300 to 500 MHz, if memory serves. As such, the 1 Gb Ethernet specification resorts to using all four pairs to achieve the rated 1 Gb speed.
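For reference, the way 1000BASE-T squeezes a gigabit out of that limited bandwidth is by splitting the work across all four pairs at a modest symbol rate (standard figures, quoted here from memory):

pairs = 4
symbol_rate = 125e6     # symbols per second on each pair
bits_per_symbol = 2     # PAM-5 carries 2 data bits per symbol after coding
print(f"{pairs * symbol_rate * bits_per_symbol / 1e6:.0f} Mbit/s")   # 1000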
To increase bandwidth further, either microwave or optical waveguides can be used. Microwave waveguides are not practical for personal computer use. This leaves optical fiber, which is an optical waveguide.
Optical fiber has enormous usable bandwidth, on the order of 500 Tb/s. Its performance is limited primarily by cost and by technical issues in the receivers and transmitters. It is difficult to build the tunable light sources required to make use of the vast amount of optical spectrum, and separating the different wavelengths at the receiver is also a major issue. There are optical dispersion problems in the cable itself, but these are easier to deal with than building a precision, wide-band, tunable laser.
In general, the technologies for optical frequencies are not as well developed as the electrical technologies used for microwave, broadcast, and copper transmission. It is much harder to use all of the available bandwidth at optical frequencies than at copper frequencies. However, since the theoretical bandwidth at optical frequencies is huge, much higher communication speeds are possible with optics.
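To give a feel for the scale, here's a rough WDM capacity sketch; the channel grid and per-channel rate are illustrative assumptions:

c = 3.0e8                                  # speed of light, m/s
lam_short, lam_long = 1530e-9, 1565e-9     # approximate C-band edges, metres
band_hz = c / lam_short - c / lam_long     # roughly 4.4 THz of optical bandwidth

grid_hz = 50e9            # assumed DWDM channel spacing
per_channel_gbps = 100    # assumed per-channel data rate
channels = int(band_hz // grid_hz)

print(f"optical bandwidth: {band_hz / 1e12:.1f} THz")
print(f"{channels} channels x {per_channel_gbps} Gbit/s ~ {channels * per_channel_gbps / 1000:.1f} Tbit/s per fibre")

And that is a single amplifier band on a single fibre, which is why the post above treats the transmitters and receivers, not the glass, as the bottleneck.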
Re:More data forces the need for more bandwidth (Score:2, Informative)