Networking

The Road To Terabit Ethernet

stinkymountain writes "Pre-standard 40 Gigabit and 100 Gigabit Ethernet products — server network interface cards, switch uplinks and switches — are expected to hit the market later this year. Standards-compliant products are expected to ship in the second half of next year, not long after the expected June 2010 ratification of the 802.3ba standard. Despite the global economic slowdown, global revenue for 10G fixed Ethernet switches doubled in 2008, according to Infonetics. There is pent-up demand for 40 Gigabit and 100 Gigabit Ethernet, says John D'Ambrosia, chair of the 802.3ba task force in the IEEE and a senior research scientist at Force10 Networks. 'There are a number of people already who are using link aggregation to try and create pipes of that capacity,' he says. 'It's not the cleanest way to do things...(but) people already need that capacity.' D'Ambrosia says even though 40/100G Ethernet products haven't arrived yet, he's already thinking ahead to terabit Ethernet standards and products by 2015. 'We are going to see a call for a higher speed much sooner than we saw the call for this generation' of 10/40/100G Ethernet, he says."
    • Re: (Score:3, Funny)

      Don't you mean 5 Mebibit per second?

      That gives you 640KB per second.

    • A joke, yes, but what applications will this next generation of bandwidth be for? Is there (or will there be) really a big demand for high-definition, full-screen video streamed through the Net? Or is this bandwidth advance more about increasing the number of devices attached to the Net, so that we can have pervasive surveillance, er, sensor networks?
      • Initially this is to connect disks to database engines and to push entire virtual machines onto servers to handle demand spikes and things like that. Later to handle the upstream end of pushing multiple HD video streams out from servers towards large numbers of clients.

      • by takev ( 214836 )

        3D high definition voxel video.
        That will probably eat a bit of bandwidth.

        By that time the minimum resolution will be 2048 x 2048 x 2048 voxels at 8 bytes per voxel for RGBA, at 128 frames per second. That adds up to 8,796,093,022,208 bytes, or 8 TiB/sec. Maybe we can get a 1:1000 compression ratio, so that will end up around 8 GB/sec.
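
        A quick sanity check of that arithmetic (a sketch in Python; the resolution, voxel depth, and frame rate are the poster's assumptions):

            # Back-of-the-envelope check of the voxel video numbers above.
            voxels = 2048 ** 3           # 2048 x 2048 x 2048 grid
            bytes_per_voxel = 8          # RGBA at 16 bits per channel
            fps = 128

            raw = voxels * bytes_per_voxel * fps
            print(raw)         # 8796093022208 bytes/s, i.e. 8 TiB/s
            print(raw / 1000)  # ~8.8e9 bytes/s after the assumed 1:1000 compression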

    • you get 10mbit/s??? Lucky!

  • Physics? (Score:3, Interesting)

    by happy_place ( 632005 ) on Wednesday April 22, 2009 @08:56AM (#27673593) Homepage
    Does anyone know what the physical limitations of high-speed Ethernet are? I mean at some point doesn't it become impossible to move electrons or modulate data any faster?
    • Re:Physics? (Score:5, Funny)

      by Shakrai ( 717556 ) on Wednesday April 22, 2009 @08:59AM (#27673629) Journal

      I mean at some point doesn't it become impossible to move electrons or modulate data any faster?

      Nah, at that point you just place the whole ethernet infrastructure within a subspace field, modulate the deflector dish a little bit and you'll be off and running.

      • by rossdee ( 243626 )

        You left out the 'tachyon pulse' part. That's the way to get your data moving faster than light.

    • Re: (Score:3, Interesting)

      That's more a problem with copper wiring. Cat5e seems to have problems reaching above 700-800 Mbit/s; I assume Cat6 does better, but I wouldn't expect to see 10Gbit or even close.

      At this point we haven't really started to see limitations on how fast a fiber optic connection can be switched, although I wouldn't doubt there being a theoretical limit.

      • Re: (Score:3, Interesting)

        by Shakrai ( 717556 )

        Umm, I've reached 950 Mbit/s on Cat5e before. Fairly long (70-80 yards) runs of it too. Are you sure you aren't running into a limitation of your hardware? I've seen a lot of PCs with crummy NICs or slow PCI buses that can't reach full gigabit speeds no matter how good the cabling is.

        • Most of the time even a regular PCI bus will be able to saturate gigabit Ethernet. At the moment I just have two systems that are gigabit capable, both on PCIe and both with onboard Marvell Yukon chips. Which chips are you using?
          • Re: (Score:3, Informative)

            by Shakrai ( 717556 )

            Most of our network cards use Intel chipsets. What OS are you using? I've never been able to fully saturate gigabit or even 100mbit Ethernet under Windows. Even when using simple protocols like FTP it usually tops out at 80-90% of network capacity. Using more complicated ones (SMB/Windows file sharing comes to mind) will reduce it even further. Transfers between two Linux machines are another matter. I've been able to saturate 100mbit networks easily using a variety of protocols and achieve the aforementioned 950 Mbit/s on gigabit.

            • When I worked for the 10 Gigabit Ethernet Consortium at the UNH IOL (www.iol.unh.edu), I had to do a 10 gig demo once. Not naming vendors, but I wasn't able to get it above 1.1 gigabit/second with commodity PC hardware. We had pattern generators, but those don't count in the real world.

              For 1 gigabit, the best line utilization I ever got was about 97%, using two Linux boxes, netcat, and piping /dev/random into /dev/null across it. I'm not a math guy, so I can't say what the theoretical max is.

            • Have you tried disabling the QoS bandwidth reserve thing? IIRC, Windows reserves a portion of the network throughput for QoS stuff (something like 10-20%). I forget how to do it exactly... I think you have to go into the Group Policy Editor or something. Although I do know for a fact that if you use nLite to make an XP install you can remove it entirely.

    • Does anyone know what the physical limitations of high-speed Ethernet are? I mean at some point doesn't it become impossible to move electrons or modulate data any faster?

      Very roughly, at one terahertz you can transmit one terabit. Now what's the frequency of an X-ray laser expressed in hertz?

      • by Bandman ( 86149 )

        I think the grandparent was talking about using copper wire as a conduit.

        I agree, optical will go much, much, much faster. And later, when we have vacuum-optic cables...well, all that much faster, I suppose.

        It's those pesky optical logic gates that are holding us back.

    • Re: (Score:3, Insightful)

      Does anyone know what the physical limitations of high-speed Ethernet are? I mean at some point doesn't it become impossible to move electrons or modulate data any faster?

      The speed of light limits ping times over a set distance. Upgrading to terabit speed doesn't make the end nodes farther apart; it widens the pipe between them. So, no, I don't see a theoretical limit to how wide the pipe can be. At some point you'd need a really thick cable, I suppose, which could become impractical.

      There are other bottlenecks, too, such as the speed of the systems' internal buses or storage devices.

      • by fwr ( 69372 )
        Electricity does not flow through cables at the speed of light. It's more like 2/3 the speed of light...
    • Re:Physics? (Score:5, Interesting)

      by empiricistrob ( 638862 ) on Wednesday April 22, 2009 @09:13AM (#27673767)
      That's a bit hard to say. But here's a way of thinking about it:

      The Shannon-Hartley theorem [wikipedia.org] states that the channel capacity (i.e. the data bandwidth, measured in bits per second) is related to the channel bandwidth (measured in hertz). If we assume a very pessimistic signal-to-noise ratio of 1:1, the SH theorem says that the cable's bandwidth in hertz will be the same as the cable's bandwidth in bps.

      So if we want a cable capable of transmitting information at 1 Tbps, the cable will need a bandwidth of roughly 1000 GHz. That means it would be impossible to carry that amount of information using even microwaves. We're talking about infrared light at minimum. Or in other words -- we're talking about fiber optics, not Cat5.
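
      A minimal sketch of that Shannon-Hartley arithmetic (the 1:1 figure is the pessimistic assumption above; the 30 dB case is an illustrative assumption of mine):

          import math

          def shannon_capacity(bandwidth_hz, snr_linear):
              # Channel capacity in bits/s per the Shannon-Hartley theorem:
              # C = B * log2(1 + S/N)
              return bandwidth_hz * math.log2(1 + snr_linear)

          # Pessimistic S/N of 1:1, as assumed above: capacity equals bandwidth.
          print(shannon_capacity(1e12, 1.0))             # 1e12 bits/s from 1 THz

          # At a healthier S/N of 30 dB (1000:1), the same bit rate needs
          # roughly a tenth of the analog bandwidth.
          print(shannon_capacity(1e11, 10 ** (30 / 10))) # ~9.97e11 bits/s from 100 GHz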
      • Comment removed (Score:4, Informative)

        by account_deleted ( 4530225 ) on Wednesday April 22, 2009 @09:26AM (#27673897)
        Comment removed based on user account deletion
        • by evanbd ( 210358 )
          They've been doing that for some time. GigE [wikipedia.org] uses a 5-level trellis code, and even 100Mb used 3 signaling levels.
      • That all depends. While networking has traditionally been a serial connection, there's nothing stopping multi-mode connections and in fact multi-mode already has some implementations.

        Spread your 1 Tbit connection across 10 lines and you only need 100 Gbit/s per line.


      • We're talking about at minimum infrared light. Or in other words -- we're talking about fiber optics, not cat5.

        Except for the fact that:

        1. Twisted pair ethernet uses electrical signal modulation, not photon modulation.
        2. There are 4 pairs of wire in twisted pair ethernet cable.
        3. The S/N ratio is a LOT higher than 1:1

      • Re:Physics? (Score:4, Informative)

        by Cassini2 ( 956052 ) on Wednesday April 22, 2009 @11:50AM (#27675365)

        The Shannon-Hartley theorem is not the relevant limit. The hard limit for copper is the cutoff frequency, and for optical systems other technical challenges come into play.

        Any given copper wire has an associated cutoff frequency. Past this frequency, it is almost impossible to get significant amounts of energy to pass through the cable. The cutoff is very steep.

        For most types of coaxial cable, the cutoff frequency is on the order of 1 GHz to 8 GHz. Since a working communications link generally demands bandwidth right up against the cable's limit, copper wiring will top out at something on the order of a few GHz for most practical applications. UTP cable, as used in existing Ethernet, performs worse than coaxial cable; for practical purposes, we have probably used all of its available bandwidth for 1 Gb Ethernet. UTP has a cutoff frequency on the order of 300 to 500 MHz, if memory serves. As such, the 1 Gb Ethernet specification resorts to using all four pairs to achieve its rated speed.
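
        As a concrete illustration of that four-pair trick, the standard 1000BASE-T signaling numbers multiply out like this (a sketch; the PAM-5 figures come from the 802.3ab spec, not this post):

            # How 1000BASE-T reaches 1 Gbit/s over bandwidth-limited UTP:
            pairs = 4             # all four pairs, each used in both directions at once
            symbol_rate = 125e6   # 125 Mbaud per pair
            bits_per_symbol = 2   # PAM-5 carries 2 data bits per symbol; the fifth
                                  # level supports the trellis coding
            print(pairs * symbol_rate * bits_per_symbol / 1e9)  # 1.0 (Gbit/s)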

        To increase bandwidth further, either microwave or optical waveguides can be used. Microwave waveguides are not practical for personal computer use. This leaves optical fiber, which is an optical waveguide.

        Optical fiber has enormous bandwidth, on the order of 500 Tb/s. Its performance is primarily limited by cost and by technical challenges in the receivers and transmitters. It is difficult to generate the variable-frequency light sources required to make use of the vast amount of light spectrum. Separating the light sources at the receiver is also a major issue. There are optical dispersion problems relating to the cable, but these are easier to deal with than the problems of creating a precision wide-band variable-frequency laser.

        In general, the technologies at optical frequencies are not as well developed as the electrical technologies for microwave, broadcast, and copper communications transmission. It is much more difficult to use all the available bandwidth at optical frequencies than at copper frequencies. However, since the theoretical bandwidth available optically is huge, much higher communication speeds are possible with optical.

      • So if we want a cable capable of transmitting information at 1tbps, the cable will need a bandwidth of roughly 1000 GHz. That means that it would be impossible to carry that amount of information using even microwaves. We're talking about at minimum infrared light.

        As others already pointed out, with a better S/N ratio you can lower the frequency bandwidth, although impedance mismatches become more of a problem. If you try to send a 250 GHz signal through the cable, the wavelength is about 1 mm, which means

      • How about using 2 wires to send 2 bits simultaneously? Doesn't that solve the problem?

    • Re: (Score:3, Interesting)

      by setagllib ( 753300 )

      We can always keep adding more bandwidth - in the extreme case (as in TFS) by trunking together more of the same links. But latency is not really improving. Ethernet itself is very high-latency compared to e.g. InfiniBand. But fundamental limits of latency are impossible to overcome, and the best you can do is get closer and closer, perhaps asymptotically so. Between our planet and another, any latency in hardware is going to be a rounding error compared to the latency in the electromagnetic waves themselves.

    • Re: (Score:3, Insightful)

      by BlueParrot ( 965239 )

      Does anyone know what the physical limitations of high-speed Ethernet are? I mean at some point doesn't it become impossible to move electrons or modulate data any faster?

      Depends what you mean by speed. The lowest possible latency is limited by the speed of light.

      Bandwidth is limited by the number of "tubes" you can run and how much data you can push down each tube. In principle there's nothing stopping you from doing something crazy like encoding your data on DNA strands that you dissolve in a soup and pu

    • Re:Physics? (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Wednesday April 22, 2009 @09:54AM (#27674177) Journal
      Electrons move very slowly (at least, compared to the rate the data travels) along the wire. The effect is roughly analogous to a Newton's Cradle [wikipedia.org], where the balls move slowly but the shock wave when they are together travels at the speed of sound, so the data (one bit - the ball has impacted on one end) travels at the speed of sound. Using light or electrons, the speed of the signal is close to the speed of light. This gives 67ms as the theoretical minimum signal time (134ms ping time, since pings are round trips) anywhere in the world, given by dividing half the circumference of the earth by the speed of light.
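
      The 67/134 ms figures, sketched out:

          # Minimum signal time between two antipodal points, straight line at c.
          circumference_km = 40_075
          c_km_per_s = 299_792

          one_way_s = (circumference_km / 2) / c_km_per_s
          print(one_way_s * 1000)      # ~66.8 ms one way
          print(one_way_s * 2 * 1000)  # ~133.7 ms minimum round-trip ping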

      In practical applications, the latency is greater for two reasons. The most obvious is that we are not laying cables in a straight line. If I ping the machine I have on the other side of the park from here, the data goes via London, a few hundred kilometres out of the way. If you use satellite relays, then the signal is bouncing up and down between the surface and the satellite's orbit at least once, adding to the distance.

      The second reason is switching time. The signal travelling along the wire is very quick, but even on a single-segment network the data has to be processed by two network cards, encoded going out and decoded coming in, transferred to and from userspace processes' address spaces and so on. Things like InfiniBand lower this latency by allowing userspace code to write directly to the card, which removes some but not all of the overhead. If you are using fibre then the transformation between an electrical signal and a sequence of photons, and then back again, adds still more latency. In a switched or routed network (like, for example, the Internet), this has to be done several times because (outside of labs) we can't route packets without turning them back into electronic signals. Most routers will queue a few packets while making decisions, and at the very least they typically read the entire packet off the line before routing it, which, again, adds a bit of latency.

      In terms of throughput, there is no theoretical limit. If you can send one bit per photon, you can double the throughput by doubling the number of photons (i.e. just use two fibres). The limit is set by cost, rather than by physics. There are a few physical limits which affect this. The Nyquist rate gives an upper bound on the number of symbols per second you can send across a link of a given analog bandwidth (and Shannon's limit bounds the bits per second once the signal-to-noise ratio is accounted for). The symbol rate alone is quite misleading, however, because the number of symbols does not directly correlate to a number of bits. Early modems used two tones and got speed increases by switching between the two faster. Later ones used a number of different tones and so transmitted the same number of symbols per second but more bits. The same is done with fibre, for example using polarised photons or photons of different wavelengths to provide different virtual channels within a single fibre. These can be detected separately and distinguished from each other. If, for example, you send photons of four different wavelengths, you can send two bits per photon instead of one. If you use 16 different wavelengths, you can send four bits per photon.
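
      The wavelengths-to-bits relationship at the end is just a base-2 logarithm; a quick sketch:

          import math

          # Distinguishable wavelength channels -> bits encodable per photon.
          for wavelengths in (2, 4, 16, 256):
              print(wavelengths, "wavelengths ->", math.log2(wavelengths), "bits/photon")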

      When it comes to radio transmission, there are some even more interesting effects. If you've tried receiving analogue TV between hills, you will have seen a ghosting effect because your signal comes via two different paths. It turns out that, with two different transmitters, you can distinguish between them even if they are transmitting on the same frequency, by measuring the different paths each takes. This is particularly interesting for things like WiFi, because in urban environments (where you have the most people trying to use the same radio bandwidth at once), you get more possible return paths (due to more objects that bounce the signals), and so (given enough processing power), you can discern more individual transmitters, giving more usable bandwidth. There are lots of tricks like this - probably a great many that no one has thought of yet - that can provide greater throughput in exchange for more signal processing power.

      • If, for example, you send photons of four different wavelengths, you can send two bits per photon instead of one. If you use 16 different wavelengths, you can send four bits per photon.

        How about this: What if you used a monochromatic light source in a single mode, polarization-maintaining fiber and had your "bits" be the polarization phase? If you divided the phase (0-2*Pi) into 256 chunks, you could send a byte per photon! With more sensitive equipment, you could get even more. Anyone heard of this?

        • Yes, there are techniques which do this. Sending polarized light down a fibre is actually very difficult. Remember that the light bounces around over the length and may bounce a different number of times for two subsequent photons, and each bounce slightly affects the polarity. Frequency is easier to work with for two reasons. The first is that it is not affected by travelling through the fibre. The second is that it's easy to separate out the individual channels. You can send a photon beam into a prism and split the wavelengths apart.
        • by Alef ( 605149 )
          I'm not a physicist, but if you're talking about single photons I'm fairly certain that is impossible. To determine the polarization of the photon, you'll essentially have to "test" one direction, and by doing that you destroy its state.
    • The electrons actually move quite slowly regardless of how fast you send stuff down a cord.
  • Gee, that's great. (Score:5, Insightful)

    by Doodoo Browntrout ( 1476405 ) on Wednesday April 22, 2009 @09:13AM (#27673773)
    I'd settle for gigabit speeds from the gigabit hardware I have now.
    • by Eil ( 82413 )

      Then you're not involved in HPC applications that demand an extremely fast physical layer (e.g., clustering).

  • Bah (Score:2, Funny)

    by BigBlueOx ( 1201587 )
    All *we* had was an acoustic coupler. And an Ohio Scientific. S-100 bus. 8k of memory IF you were lucky. AND we read the bits as they came over the phone AND typed them in ourselves.

    And you tell that to the kids today and they won't believe you. Bah. Spit.
  • Why? (Score:3, Insightful)

    by quenda ( 644621 ) on Wednesday April 22, 2009 @09:29AM (#27673943)

    Why?

    • Re: (Score:3, Funny)

      Cloud computing.

      • by julesh ( 229690 )

        Cloud computing

        Doesn't really answer the question. To elaborate: most current computer systems are incapable of maxing out Gigabit ethernet. For any nontrivial application you're going to be loading data from disk, and unless you have very fast disks you're not going to hit 1Gbps.

        Now I can see 10 or even 100Gbps being sensible for high performance computing applications. But terabit? Isn't that a little OTT?

        • Terabit, at the moment, is entirely for enterprise, where the number of hard drives connected to the bigger machines runs to three or four digits. However, 10G is at least useful for higher-end consumers.

          8 hard drives in a RAID6 array managing full speed (approx. 20 Mbytes/sec per drive?) hits 120 megabytes/sec, already reaching gigabit's limits. Add two more arrays and 10G becomes useful. While I personally can't see much of a use for that beyond, say, murdering load times in modern games (and m
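
          The parent's arithmetic, sketched out (the 20 MB/s per-drive figure is the parent's own assumption):

              def raid6_sequential_mb_s(drives, per_drive_mb_s):
                  # Two drives' worth of each stripe is parity, so sequential
                  # transfers stream (n - 2) drives' worth of data.
                  return (drives - 2) * per_drive_mb_s

              print(raid6_sequential_mb_s(8, 20))      # 120 MB/s vs. gigabit's ~125 MB/s ceiling
              print(3 * raid6_sequential_mb_s(8, 20))  # 360 MB/s for three arrays -> wants 10G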

    • Porn

  • by wowbagger ( 69688 ) on Wednesday April 22, 2009 @09:45AM (#27674097) Homepage Journal

    An open letter to any hardware vendor considering making chips for these higher speed protocols:

    Please add the timestamp counters needed to support the IEEE 1588 Precision Time Protocol [nist.gov]. These counters don't add much in the way of complexity when added to the NIC, but they are VERY complex to add after the fact.

    Being able to synchronize the clocks of 2 hosts to 5 ns or less may seem esoteric right now, but at these sorts of transfer speeds you are going to have a significant number of users (Test and Measurement folks like me, scientists at places like CERN and Fermilab, grid computing) who will need that kind of time sync.
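
    For the curious, the core of IEEE 1588 is a four-timestamp exchange, and the arithmetic shows why timestamps taken in hardware, right at the wire, matter so much. A minimal sketch (the function and example values are mine, not from the spec):

        def ptp_offset_and_delay(t1, t2, t3, t4):
            # t1: master sends Sync       t2: slave receives Sync
            # t3: slave sends Delay_Req   t4: master receives Delay_Req
            # Assumes a symmetric path; any asymmetry (or software
            # timestamping jitter) shows up directly as offset error.
            offset = ((t2 - t1) - (t4 - t3)) / 2
            delay = ((t2 - t1) + (t4 - t3)) / 2
            return offset, delay

        # Slave clock running 100 ns fast over a 500 ns one-way path:
        print(ptp_offset_and_delay(0, 600, 1000, 1400))  # (100.0, 500.0)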

    • An open letter to any hardware vendor considering making chips for these higher speed protocols:

      Please add the timestamp counters needed to support the IEEE 1588 Precision Time Protocol [nist.gov]. These counters don't add much in the way of complexity when added to the NIC, but they are VERY complex to add after the fact.

      Being able to synchronize the clocks of 2 hosts to 5 ns or less may seem esoteric right now, but at these sorts of transfer speeds you are going to have a significant number of users (Test and Measurement folks like me, scientists at places like CERN and Fermilab, grid computing) who will need that kind of time sync.

      http://www.ieee802.org/1/pages/802.1as.html [ieee802.org]

      There you go.

  • by egghat ( 73643 ) on Wednesday April 22, 2009 @09:55AM (#27674193) Homepage

    Because the PCIe bus is way too slow for transporting terabits.

    Or am I wrong?

    bye egghat

    • You're wrong. Even the original PCIe specification supports around 2000 MBytes/sec of data (20 Gbits/sec of raw signaling) on an x8 link. You get double that with PCIe 2.0, and there's always the option to go with x16. All together, the maximum theoretical throughput currently available on PCIe is around 80 Gbits/sec per card.

      PCIe 3.0 will be introduced years before terabit Ethernet, doubling theoretical throughput again.
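
      Roughly, per direction (a sketch; the transfer rates and 8b/10b overhead are the standard PCIe 1.x/2.0 figures, and real-world throughput is lower still due to protocol overhead):

          def pcie_gbit_s(lanes, gt_per_s, encoding=8 / 10):
              # Usable data rate after 8b/10b line coding (PCIe 1.x/2.0).
              return lanes * gt_per_s * encoding

          print(pcie_gbit_s(8, 2.5))   # PCIe 1.x x8:  16 Gbit/s (~2 GB/s)
          print(pcie_gbit_s(16, 5.0))  # PCIe 2.0 x16: 64 Gbit/s (~8 GB/s)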

      • by egghat ( 73643 )

        But that's not terabit/s. Even with PCIe3.

        Or am I missing something?

        Bye egghat

        • If you read the summary, you'll notice that the intermediate goal is 40 and 100 Gigabit Ethernet, which is mostly achievable with current technology. As faster Ethernet standards are developed, faster bus speeds will follow.
    • The PCIe bus is not the only problem. Even at 10Gb Ethernet, the CPU load of processing an interrupt for each packet arriving on the network becomes significant. At 1 Tb/s, your network would be substantially faster than the peak hard drive interface bandwidth (3 Gb/s SATA) in your average desktop computer. At 1 Tb/s, you could fill a 1 TB hard drive in 8 seconds flat (see the sketch below)!

      For the moment, these high speed technologies will be primarily used in the network core and in cabling closets, where the aggregate traffic of many slower links comes together.
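
      The 8-second figure checks out:

          # 1 Tb/s in bytes/s, then the time to move 10^12 bytes:
          bytes_per_s = 1e12 / 8     # 125 GB/s
          print(1e12 / bytes_per_s)  # 8.0 seconds to fill a 1 TB drive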

  • The other day I had a small business as a client and was amazed to discover that they were running a significant network through a 10Mbps hub. Being able to upgrade that to a (rather affordable) gigabit switch was quite satisfying.
  • By 2015... (Score:5, Funny)

    by castironpigeon ( 1056188 ) on Wednesday April 22, 2009 @10:09AM (#27674315)
    ...we'll be able to use our monthly bandwidth allowance in under one second. Hooray?
  • by IGnatius T Foobar ( 4328 ) on Wednesday April 22, 2009 @10:17AM (#27674395) Homepage Journal
    In case anyone was wondering "40? Why 40 Gigabit?" here's the answer: 40 Gigabit Ethernet reuses existing OC-768 technology. So it's not exactly 40 Gbps; it's 39.813120 Gbps. The idea is that Ethernet encapsulation and framing are being applied to existing components that are electrically (and optically) OC-768. (For the nitpickers out there, yes, I know there's more to it than that, but let's not get bogged down in details.)

    So that's why we're making a stop at 40 Gbps instead of going straight to 100 Gbps. Existing technology is being reused to get a useful product to market faster.

    Incidentally, 10 Gigabit Ethernet is similarly based on OC-192 technology, so it's actually 9.953280 Gbps.
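
    The SONET arithmetic behind those odd-looking rates (OC-n is n times the 51.84 Mbit/s OC-1 rate):

        # SONET OC-n line rates are n times the OC-1 rate of 51.84 Mbit/s:
        OC1_MBIT_S = 51.84
        print(192 * OC1_MBIT_S / 1000)  # OC-192: 9.95328 Gbit/s (the 10 GbE WAN PHY rate)
        print(768 * OC1_MBIT_S / 1000)  # OC-768: 39.81312 Gbit/s (the 40G figure above)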
  • I thought the heading was "The Road to Terabithia" :)
  • by Namarrgon ( 105036 ) on Wednesday April 22, 2009 @09:00PM (#27681719) Homepage

    Didn't anyone else think 'Bridge to Terabit Ether'?

    What a missed headline opportunity.
