
10 Terabit Ethernet By 2010

Eric Frost writes "From Directions Magazine: 'Because it is now impossible to sell networking unless it is called Ethernet (regardless of the actual protocols used), it is likely that 1 Terabit Ethernet and even 10 Terabit Ethernet (using 100 wavelengths, each driven by a 100 gigabit per second transmitter/receiver pair) may soon be announced. Only a protocol name change is needed. And the name change is merely an acknowledgment that Ethernet protocols can tunnel through other protocols (and vice versa).'"
  • Article Text (Score:4, Informative)

    by Anonymous Coward on Wednesday August 27, 2003 @03:28PM (#6807424)
    10 Terabit Ethernet: from 10 Gigabit Ethernet, to 100 Gigabit Ethernet, to 1 Terabit Ethernet
    By: Steve Gilheany
    (Aug 27, 2003)

    Ethernet Timeline

    * 10 Megabit Ethernet 1990*
    * 100 Megabit Ethernet 1995
    * 1 Gigabit Ethernet 1998
    * 10 Gigabit Ethernet 2002
    * 100 Gigabit Ethernet 2006**
    * 1 Terabit Ethernet 2008**
    * 10 Terabit Ethernet 2010**

    * Invented 1976, 10BaseT 1990
    ** projected

    Every kind of networking is coming together: LANs (Local Area Networks), SANs (Storage / System Area Networks), telephony, cable TV, inter-city optical fiber links, etc., but if you don't call it Ethernet you cannot sell it. Your networking must also include a reference to IP (Internet Protocol) to be marketable.

    Above 10 Gigabit Ethernet lies 100 Gigabit Ethernet. The fastest commercial bit rate on a fiber transmitter/receiver pair is 80 Gigabits per second. Each Ethernet speed increase must be an order of magnitude (a factor of 10) to be worth the effort to incorporate a change, and 100 Gigabit Ethernet has not been commercially possible with a simple bit-multiplexing solution, but NTT has solved this problem and has the first 100 Gigabit per second chip with which to begin a 100 Gigabit system [http://www.ntt.co.jp/news/news02e/0212/021204.html]. Currently, Nortel Networks offers DWDM (Dense Wavelength Division Multiplexing) in which 160 of the 40 Gigabit transmitter/receiver pairs are used to transmit 160 wavelengths (infrared colors) on the same fiber, yielding a composite, multi-channel bandwidth of 6.4 terabits per second.

    Because it is now impossible to sell networking unless it is called Ethernet (regardless of the actual protocols used), it is likely that 1 Terabit Ethernet and even 10 Terabit Ethernet (using 100 wavelengths, each driven by a 100 gigabit per second transmitter/receiver pair) may soon be announced. Only a protocol name change is needed. And the name change is merely an acknowledgment that Ethernet protocols can tunnel through other protocols (such as DWDM) (and vice versa). In fact, Atrica has been advertising such a multiplexed version of 100 Gigabit Ethernet since 2001 [http://www.atrica.com/products/a_8000.html]. Now that NTT has announced a reliable 100 Gigabit per second transmitter/receiver pair, the progression may be 1 wavelength for 100 Gigabit Ethernet, 10 wavelengths (10 x 100 Gigabits per second) of CWDM (Coarse Wavelength Division Multiplexing) for 1 Terabit Ethernet, and 100 wavelengths (100 x 100 Gigabits per second) of DWDM for 10 Terabit Ethernet in the near future.
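    [Editor's note: a quick arithmetic check of the aggregate WDM figures quoted above; the channel counts and per-channel rates are the ones in the text, the helper function is just illustrative.]

```python
# Back-of-the-envelope check of the aggregate WDM bandwidths quoted above.

def aggregate_tbps(channels, gbps_per_channel):
    """Total capacity in terabits per second for a WDM system."""
    return channels * gbps_per_channel / 1000.0

# Nortel-style DWDM: 160 wavelengths x 40 Gb/s each
print(aggregate_tbps(160, 40))   # 6.4 Tb/s, matching the figure in the article

# Projected "1 Terabit Ethernet": 10 wavelengths x 100 Gb/s (CWDM)
print(aggregate_tbps(10, 100))   # 1.0 Tb/s

# Projected "10 Terabit Ethernet": 100 wavelengths x 100 Gb/s (DWDM)
print(aggregate_tbps(100, 100))  # 10.0 Tb/s
```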

    iSCSI (Internet SCSI) over Ethernet is replacing *SCSI (Small Computer Systems Interface; in 1979 it was SASI, the Shugart Associates Systems Interface), *FC (Fibre Channel), and even *ATA (IBM PC AT Attachment), aka (also known as) *IDE (Integrated Drive Electronics) (*see [http://www.pcguide.com]). Ethernet is replacing ATM (Asynchronous Transfer Mode), Sonet (Synchronous Optical NETwork), POTS (Plain Old Telephone Service, which is being replaced with Gigabit Ethernet to the home in Grant County, Washington, USA) [see references from Cisco Systems 1, 2, 3, or 4] [www.wwp.com], *PCI (Peripheral Component Interconnect local bus), Infiniband, and every other protocol because, as described above, if you don't call it Ethernet you cannot sell it. Everything, in every type of communications, must now also include a reference to IP (Internet Protocol) for the same reason.

    At the same time that the transmitter/receiver pairs are getting faster and DWDM is adding channels, the capacity of fibers is increasing, as is the transmission distance available without repeaters. Omni-Guide [http://www.omni-guide.com/; then click on enter] is working on fibers that "could substantially reduce or even eliminate the need for amplifiers in optical networks. Secondly it will offer a bandwidth capacity that could potentially be several orders of magnitude greater than conventional single-mode optical fibers." Eliminating
  • by Brahmastra ( 685988 ) on Wednesday August 27, 2003 @03:34PM (#6807484)
    Bandwidth doesn't necessarily help you play games with very little delay. For quick responses in games, you need low one-way latency. A network may be capable of throwing out 1000 zillion bytes/second, but if it takes too long to send out the first packet, the game isn't going to work very well. One-way latency is far more important than bandwidth when the goal is to send out many small packets as soon as possible. High bandwidth would greatly speed up large downloads, but for faster response in games, etc., lower latency is what you need.
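    [Editor's note: a minimal sketch of the parent's point; the 50 ms latency, packet size and link speeds below are assumed example numbers, not measurements.]

```python
def transfer_time_ms(size_bytes, latency_ms, bandwidth_bps):
    """One-way delivery time: propagation latency plus serialization time."""
    return latency_ms + (size_bytes * 8) / bandwidth_bps * 1000.0

GAME_PACKET = 100            # bytes, a small game-state update (assumed size)
DOWNLOAD = 700 * 1024**2     # bytes, a CD-sized download

for bw in (1e9, 10e12):      # 1 Gb/s vs. 10 Tb/s
    print(f"{bw/1e9:>6.0f} Gb/s:",
          f"game packet {transfer_time_ms(GAME_PACKET, 50, bw):.3f} ms,",
          f"download {transfer_time_ms(DOWNLOAD, 50, bw)/1000:.1f} s")

# The game packet takes ~50 ms either way (latency-bound); only the large
# download benefits from the extra bandwidth.
```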
  • by mao che minh ( 611166 ) * on Wednesday August 27, 2003 @03:34PM (#6807493) Journal
    iSCSI basically takes native SCSI commands, wraps them up (encapsulates them), and sends them over the wire. In other words, you could use a SCSI scanner over a network without having to resort to PC Anywhere or something.
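    [Editor's note: very loosely, the idea looks like the toy sketch below. This is not the real iSCSI PDU layout (the actual protocol defines a 48-byte Basic Header Segment, a login phase, etc.); it only illustrates wrapping a SCSI command block in a header and shipping it over TCP. The target hostname is hypothetical.]

```python
import struct

def wrap_scsi_command(cdb: bytes, lun: int, data_length: int) -> bytes:
    """Toy 'encapsulation': prepend a made-up 8-byte header to a raw SCSI CDB.
    Real iSCSI uses a 48-byte Basic Header Segment; this just shows the idea."""
    header = struct.pack("!BBHI", 0x01, lun, len(cdb), data_length)
    return header + cdb

# Example: a 6-byte SCSI INQUIRY CDB (opcode 0x12), asking for 36 bytes back.
inquiry_cdb = bytes([0x12, 0x00, 0x00, 0x00, 0x24, 0x00])
pdu = wrap_scsi_command(inquiry_cdb, lun=0, data_length=36)
print(pdu.hex())

# The encapsulated command would then go over an ordinary TCP connection to a
# target (port 3260 is the standard iSCSI port), e.g.:
#   import socket
#   sock = socket.create_connection(("iscsi-target.example", 3260))
#   sock.sendall(pdu)
```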
  • by Kjella ( 173770 ) on Wednesday August 27, 2003 @03:41PM (#6807549) Homepage
    You'd probably not do a thing. But I know the internal network lines at my Uni (left this summer) are glowing, pushing 1 Gbit; the main backbone is now 10 Gbit, I think. And keeping the internal network fast (not to mention looking the other way) keeps the external connection from getting squished. If 10 Tbit is available in 2010, they'll probably go for something like that. It doesn't take many students' homes to create huge amounts of traffic...

    Kjella
  • by cybergibbons ( 554352 ) on Wednesday August 27, 2003 @03:41PM (#6807553) Homepage

    These high speed DWDM systems talked about in this article aren't designed to be used for LANs or home internet connections - they are meant for high speed backbones that span huge distances (such as across the US or Australia).

    They carry multiple 10 Gb/s or 40 Gb/s channels on one fibre pair, and these individual channels can be added or removed as necessary and can be treated independently. That said, 10 Gb/s is still a lot, and generally it needs to be broken down into more manageable sections, such as gigabit copper Ethernet or maybe even 100 Mb/s.

    It may seem like overkill, but at the core of most networks, there is a distinct lack of bandwidth. Maybe the VOD and video calling predicted 10 years back won't happen on these networks, but more applications are requiring these huge amounts of bandwidth.

    An example of this sort of system being rolled out is the Marconi Solstis system in Australia [fibre-systems.com]. A very small part of that system was designed by me :)

  • by luckyguesser ( 699385 ) on Wednesday August 27, 2003 @03:43PM (#6807567)
    I agree that it is rather useless. For instance, consider hard drive speeds. I did a little searching and found that the fastest hard drive on the planet ( http://radified.com/Benches/hdtach_x15_36lp_wme.htm ) has an average speed of 420 Mbit/s.

    So, it seems to me that for massive data transfer, we should be worrying more about the beginning and end of the line rather than the middle.

    Not that I think improving network transfer speeds is bad...
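    [Editor's note: to put the parent's figure in perspective, a rough comparison; the 420 Mbit/s disk rate is the number cited above, while the 10 GB file size is an arbitrary example.]

```python
DISK_MBPS = 420          # sustained read rate of the drive cited above, Mbit/s
FILE_GB = 10             # an arbitrary 10 GB transfer for illustration

file_bits = FILE_GB * 8e9
disk_time = file_bits / (DISK_MBPS * 1e6)

for link in (100e6, 1e9, 10e12):   # 100 Mb/s, 1 Gb/s, 10 Tb/s
    wire_time = file_bits / link
    bottleneck = "disk" if disk_time > wire_time else "network"
    print(f"{link/1e9:>8.1f} Gb/s link: wire {wire_time:6.2f} s, "
          f"disk {disk_time:6.2f} s -> bottleneck: {bottleneck}")

# Anything at 1 Gb/s or above already outruns this disk, so a faster link
# alone doesn't speed up the transfer.
```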
  • by chill ( 34294 ) on Wednesday August 27, 2003 @03:47PM (#6807597) Journal
    Lucent was selling their all-optical DWDM switches (Lambda Series) last year. The LambdaXtreme is a 40 Gbps DWDM unit that uses micro-mirrors (MEMS) for switching. Data is not converted to electricity, but stays as photons the entire route. It is capable of sending data through optical fiber for 1,000 km *without regeneration*, and for 4,000 km *without regeneration* at reduced (10 Gbps) speeds.

    They sold a pair of units (and you have to buy at least 2 or they are useless) to Time-Warner. There is one on the East Coast and one on the West and it forms a major part of their cross-country backbone.

    8-10 of the units were sold to Korea (South) for use in wiring up their national rail systems. I also believe NTT DoCoMo (Japan) bought a couple.

    This is all last year. Since I'm no longer with that company (layoffs), I no longer get all the product updates. These units were in my product group for install, service and support.
  • by joib ( 70841 ) on Wednesday August 27, 2003 @03:59PM (#6807705)
    Uh oh, guess how Myrinet, Quadrics and Scali achieve their indeed impressively low latency? By having special user-space MPI libraries that access the hardware directly, without the kernel. And of course, they don't use the IP protocol, but some proprietary protocol designed specifically for cluster use (as simple as possible, e.g. no routing).

    So, unfortunately, the technology used for cluster interconnects is totally non-general-purpose. Actually it's more or less useless unless you have an MPI application.
  • by MrPerfekt ( 414248 ) on Wednesday August 27, 2003 @04:02PM (#6807731) Homepage Journal
    I'll stop trolling here after I get this out: stop thinking this has anything to do with your top-of-the-line, supergeekin' Athlon.

    This technology is mainly meant for backbones, be it at the campus level or as a longer-haul backbone. Obviously, your desktop will not need to transfer anywhere near that much data within the next, say, 25 years. If you were using your head while you were reading the (albeit poorly written) article, I wouldn't have to troll. :(

  • by NanoGator ( 522640 ) on Wednesday August 27, 2003 @04:12PM (#6807829) Homepage Journal
    "Pretty cool for LANs, but otherwise rather useless."

    Useless? My company could use it right about now. We've got a video system moving massive amounts of imagery through several machines. There's encoding, decoding, image processing, and all kinds of fun stuff going on. Our Ethernet backbone is the bottleneck. We're running at a gigabit and it barely keeps up. We've had to severely compress the video to keep up. With 10 terabits (maybe even 1 terabit) we'd be able to do it all uncompressed. That'd be slick.

    Does this help you at all? No. That doesn't mean it's useless, or that the need doesn't exist. Consider what computing will be like in 2010. You may not have a 10 terabit card, but I guarantee you that somewhere between you and Slashdot there'll be a 10 terabit line.
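    [Editor's note: rough numbers for the parent's scenario; the resolution, bit depth and frame rate below are assumptions for illustration, not the poster's actual setup.]

```python
def stream_gbps(width, height, bits_per_pixel, fps):
    """Raw, uncompressed video bandwidth in Gbit/s."""
    return width * height * bits_per_pixel * fps / 1e9

one_stream = stream_gbps(1920, 1080, 24, 30)       # ~1.5 Gb/s per stream
print(f"one uncompressed stream: {one_stream:.2f} Gb/s")

for link_gbps in (1, 1000, 10000):                 # GigE, 1 Tb/s, 10 Tb/s
    print(f"{link_gbps:>6} Gb/s link carries ~{int(link_gbps // one_stream)} streams")

# A single uncompressed stream already saturates gigabit Ethernet, which is
# why the parent has to compress; a terabit-class link would carry hundreds.
```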
  • by ozzee ( 612196 ) on Wednesday August 27, 2003 @04:18PM (#6807894)

    10Tb/s means

    5 million 2Mb/sec compressed video streams

    Copy a 250GB drive in about 1/5 sec

    ~2,900 streams of 24bit x 1600*1200pix x 75Hz uncompressed

    1.5k byte packets at ~830 million/sec

    25 billion x 50 byte packets per sec

    port scan all ports on all IPv4 addresses in a few hours

    Every US resident downloads Metallica's new track in 30 minutes off my http server

    And this will all be available at Fry's for a $50 NIC and a $30 cable? When? I'll hold off buying any new network HW 'til then :^)

    Seriously, there are some significant implications here. For one, you won't need a monitor connected directly to the "fast video card" to get the next fancy 3D graphics features. Memory bandwidth and network bandwidth will be similar, meaning that clustered NUMA systems will be interesting. Some of the design decisions we make today exist because keeping the user physically close to the computer was critical to a good experience; that constraint will disappear.
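    [Editor's note: the figures in the list above are easy to re-derive; the packet sizes and per-stream rates are the ones quoted, and on-wire framing overhead is ignored for simplicity.]

```python
LINK = 10e12    # 10 Tb/s

print(LINK / 2e6)                          # ~5 million 2 Mb/s video streams
print(250e9 * 8 / LINK)                    # copy a 250 GB drive: ~0.2 s
print(LINK / (1600 * 1200 * 24 * 75))      # ~2,900 uncompressed 1600x1200 @ 75 Hz streams
print(LINK / (1500 * 8) / 1e6)             # ~830 million 1.5 kB packets per second
print(LINK / (50 * 8) / 1e9)               # ~25 billion 50-byte packets per second

# Port-scanning all 65,536 ports on all 2^32 addresses with 64-byte probes:
probes = 2**16 * 2**32
print(probes * 64 * 8 / LINK / 3600)       # ~4 hours, ignoring framing overhead
```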

  • Re:Cabling? (Score:2, Informative)

    by stratjakt ( 596332 ) on Wednesday August 27, 2003 @04:21PM (#6807919) Journal
    You can always pull new cable through the walls if you aren't afraid of a little work. There are millions of tricks and tools and snakes and whatnot; electricians pull new wire all the time with minimal damage to the walls (minimal as in 5 secs with a putty knife to fix it).

    Hell, you should be able to tie the new cable to the old and yank it through, removing the old as the new replaces it.

    Unless you're going to be a dork and staple the Cat5 every few feet.

    BTW, that 1000-foot roll won't go as far as you think it will.

    The phrase "future proof" is kind of stupid. By the time your 4 Cat5s per room are completely obsolete, your house will be in need of serious overhaul anyway, like a new roof, new floors, definitely a paint job.

    So messing the joint up a tad with new cable won't be a big deal.
  • Re:Cabling? (Score:3, Informative)

    by b1t r0t ( 216468 ) on Wednesday August 27, 2003 @04:33PM (#6808044)
    (Disclaimer: IANAElectrician)

    As I understand it, low-voltage cables like Ethernet and telephone wires do not need conduits. What they do need, however, is to be plenum-grade if they go into a "forced air space" like an air conditioning duct. It's also probably a bad idea to bring them through a hole in the ceiling of your wiring closet like I did :) in my install. But I don't have a good replacement idea other than a bunch of holes drilled from the top of the wall and brought out through a box on the wall.

    You can get boxless wall-plates (also for low-voltage use only) that have bendy clips that go around the sheetrock. In a single-story house that means drill down from the attic (and hope there aren't cross-studs in the wall) and fish the wire down to the hole you've cut.

    The main thing you can do to "future-proof" your installation is to put in enough wire! It's worse to have to add wires later than to leave spare wires unused for a few years. You can get modular wall plates (at Home Depot, even!) that can take up to six modules, so put four to six Cat5e and an RG-6 everywhere you can. Just cable-tie multiple cables together before you start so you only have one big cable to deal with.

    And keep in mind that this can be your telephone wiring too. Just put an RJ-11 jack in the plate instead of an RJ-45, and cram a regular RJ-11 plug into the jack on your wiring patch panel.

    Since you're doing a fresh install, you could get big-ass clips to hold the wire bundle against the stud. Make them vertical to reduce interference with your horizontal power wiring. And make sure that your wiring closet can be in an air-conditioned area.

    And forget about fiber: while there is essentially only one kind of copper for networking these days (unshielded twisted pair), there are at least two kinds of fiber (single-mode and multi-mode), and having the wrong one is just as useless as having no fiber at all. And fiber doesn't like tight bend angles either.

  • Re:Good stuff (Score:3, Informative)

    by afidel ( 530433 ) on Wednesday August 27, 2003 @04:43PM (#6808139)
    You forget that Ethernet equipment is superior and cheaper. For instance, you can use a pair of Cisco 6509s with 10Gb blades and redundant NICs (even pair them up if you like with channel bonding) and get a SAN infrastructure that beats anything FC in throughput and reliability, and it won't cost more than a decent 2Gb FC setup. If TCP processing is actually an issue, then use offloading NICs (though they rarely beat Linux's software implementation in speed).
  • Re:Good stuff (Score:1, Informative)

    by Anonymous Coward on Wednesday August 27, 2003 @06:19PM (#6808895)
    I would direct your attention at the very least to Adaptec's iSCSI offerings which completely offload the iSCSI and TCP/IP processing, presenting an HBA interface to the OS instead of a NIC.

    IMHO that has been the largest hurdle for NAS/iSCSI as Gb speeds can waste a CPU with interrupts even if you can offload checksums.

    As for latency, I'll take the interoperability of Ethernet for a little more lag, which is pretty negligible for the types of applications that you'd use this kind of disk for. The use of Layer 3 switching removes a lot of the old router-based latencies.

    Just think... no more having to certify all of the damned firmware levels and combinations thereof on the HBAs, FC Switches, Disk Controllers. Actually get reliability without having to procure software to hardcode paths through the fabric. No more having to force LIPs to bring more disk online.

    When they created the FC protocols, it was like they completely forgot or ignored all that they had learned with Ethernet. The two protocols use the same damned physical media and signaling, but we'll throw everything else out.
  • Re:Good stuff (Score:3, Informative)

    by jdray ( 645332 ) on Wednesday August 27, 2003 @07:01PM (#6809176) Homepage Journal
    I may be wrong, but I think you missed the point. The way I read it, the parent was trying to say that our desktop machines will come down to nothing more than a box with a processor and motherboard, the latter of which has three ports: one Ethernet, one USB and one video. The whole thing would be small enough to hang off the back of an LCD monitor. It would make for a very manageable infrastructure.
  • 100 Gb/s + is bogus (Score:3, Informative)

    by Orthogonal Jones ( 633685 ) on Wednesday August 27, 2003 @10:31PM (#6810299)

    The article mentions DWDM systems with 100 Gb/s per wavelength. That's bogus.

    I am an optical engineer at a 40 Gb/s startup. The jump from 10 Gb/s to 40 Gb/s is huge. Many signal degradations (chromatic dispersion, polarization mode dispersion, nonlinearity, ...) become a LOT worse when you jump from 10 to 40 Gb/s. The jump to 100 Gb/s would incur an even greater penalty.
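    [Editor's note: a rough rule of thumb behind the parent's point. Both the chromatic-dispersion-limited and PMD-limited distances scale roughly as the inverse square of the bit rate; the 80 km baseline below is an assumed round number for uncompensated standard single-mode fiber at 10 Gb/s, not a figure from the poster.]

```python
def relative_reach(baseline_gbps, target_gbps, baseline_km=80.0):
    """Dispersion-limited reach scales roughly as 1/(bit rate)^2.
    baseline_km is an assumed reference distance at the baseline rate."""
    return baseline_km * (baseline_gbps / target_gbps) ** 2

for rate in (10, 40, 100):
    print(f"{rate:>4} Gb/s: ~{relative_reach(10, rate):6.1f} km "
          "before dispersion compensation is needed")

# Going from 10 to 40 Gb/s cuts the uncompensated reach by 16x;
# 10 to 100 Gb/s cuts it by 100x.
```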

    Compensating for chromatic dispersion, PMD, et al. requires optical components which DO NOT follow Moore's law. These components are handmade specialty devices.

    While a business case can be made for 40 Gb/s, the jump to 100 Gb/s is commercially pointless. If you are building a DWDM system anyway, just spread the same data across more 10 Gb/s channels.

    What the hell is "Directions", anyway? It sounds like sci-fi fluff meant to entice VCs.

"Summit meetings tend to be like panda matings. The expectations are always high, and the results usually disappointing." -- Robert Orben

Working...