
10 Terabit Ethernet By 2010

Eric Frost writes "From Directions Magazine: 'Because it is now impossible to sell networking unless it is called Ethernet (regardless of the actual protocols used), it is likely that 1 Terabit Ethernet and even 10 Terabit Ethernet (using 100 wavelengths used by 100 gigabit per second transmitter / receiver pairs) may soon be announced. Only a protocol name change is needed. And the name change is merely the acknowledgment that Ethernet protocols can tunnel through other protocols (and vice versa).'"
This discussion has been archived. No new comments can be posted.

  • Good stuff (Score:5, Interesting)

    by mao che minh ( 611166 ) * on Wednesday August 27, 2003 @03:22PM (#6807383) Journal

    From the article: "iSCSI (Internet SCSI) over Ethernet is replacing: *SCSI (Small Computer Systems Interface..."

    iSCSI is far superior to standard SCSI for obvious reasons, and its widespread adoption will really spark a massive gain in the SAN (Storage Area Network) market. The technology is there; now we just need more major vendors of SCSI devices (especially storage and image-filing systems) to support iSCSI natively, and applications that take advantage of it. Combined with practical solutions from vendors of network storage software like Veritas, we could see some major spending in IT. And more money being spent on IT is always a good thing.

    I don't keep up much with the progress of the Ethernet technologies at hand, so is it realistic to suppose that the practical implementation/creation of 100 Gigabit Ethernet, 1 Terabit Ethernet, and 10 Terabit Ethernet will be separated by merely two years each?

    "Because it is now impossible to sell networking unless it is called Ethernet". Incorrect. You can easily sell network gear that is tagged with the "WiFi" designation.

    • Re:Good stuff (Score:5, Interesting)

      by prrole ( 639100 ) on Wednesday August 27, 2003 @03:57PM (#6807690) Homepage
      iSCSI is NOT far superior to SCSI, or Fibre Channel. iSCSI has massive issues related to deterministic latency and the computational cost of processing TCP/IP at gigabit speeds. You may see some growth in the use of iSCSI in the workgroup segment, but I don't see iSCSI replacing FC/SCSI in the near future for mission-critical computing.
      • Re:Good stuff (Score:3, Informative)

        by afidel ( 530433 )
        You forget that Ethernet equipment is superior and cheaper. For instance, you can use a pair of Cisco 6509s with 10Gb blades and use redundant NICs (even pair them up if you like with channel bonding) and get a SAN infrastructure that beats anything FC in throughput and reliability, and it won't cost more than a decent 2Gb FC setup. If TCP processing is actually an issue, then use offloading NICs (though they rarely beat Linux's software implementation in speed).
    • Re:Good stuff (Score:2, Insightful)

      by Anonymous Coward
      There is, however, an inherent and show-stopping problem with all the over-Ethernet type storage schemes (yes, I am looking at you, NetApp, ya lying sons-of-....). That problem is that the market currently does not have a feasible TOE (TCP/IP offload engine) card to actually give us any performance.

      Right now you're wasting your time putting more than a single 1 Gbps ethernet card into an Intel server for anything other than redundancy as the servers can't even drive that.

      Until we have the protocol handle
    • Re:Good stuff (Score:5, Interesting)

      by 4of12 ( 97621 ) on Wednesday August 27, 2003 @04:09PM (#6807793) Homepage Journal

      iSCSI

      A really nice development.

      Yet another big advantage of iSCSI is the ability to keep the

      • large,
      • noisy,
      • power-hungry,
      • heat-generating,
      • insecure
      disks out of workstations in workers' offices and down the hall in a
      • sound-proof,
      • secure,
      • air-conditioned,
      • UPS'd server room with
      • mirrored images,
      • archival backups.

      Next thing you know, GPUs will come with on-board Ethernet controllers and USB plugs for keyboard and mouse, and be built in to the back of an LCD monitor.

      • Re:Good stuff (Score:3, Interesting)

        Good point. Most users are total morons and think they need more power and storage than their work warrants. However, only time will truly tell if iSCSI will really be the "winner" in all of this. I couldn't find the link for an open source project that I'd seen before that would actually export SCSI, USB and Firewire over IP, so here's this [aol.com]
    • Re:Good stuff (Score:3, Interesting)

      by crow ( 16139 )
      EMC is now supporting iSCSI in their high-end Symmetrix DMX storage systems. It's just a matter of time before all storage vendors offer this. It's also just a matter of time before network cards that can talk native iSCSI are available (allowing you to boot from iSCSI, among other things).

      [Note: I work for EMC and am friends with the iSCSI developers, so my views are a bit biased.]
    • Re:Good stuff (Score:5, Interesting)

      by Austerity Empowers ( 669817 ) on Wednesday August 27, 2003 @04:18PM (#6807888)
      I don't keep up much with the progress of the Ethernet technologies at hand, so is it realistic to suppose that the practical implementation/creation of 100 Gigabit Ethernet, 1 Terabit Ethernet, and 10 Terabit Ethernet will be separated by merely two years each?

      I think not. 10 GbE hasn't exactly taken the world by storm, and it's been around for over a year now. I know of products that have 10 GbE ports, but I have not witnessed an abundance of demand. To put it nicely, the author of this article is being just a little facetious in his claims.

      In reality if you read the article it's hard to even take him seriously. To say that Nortel's DWDM system is ethernet is like calling your 56k modem ethernet. Yeah, so you can pass ethernet frames on it. It's not standard, it's not documented anywhere in IEEE 802.anything (esp with regards to conformance), so it's NOT ethernet. Just passing ethernet frames does not make you an ethernet device. I'm honestly not really sure what the author's point is except that he seems to think 1) ethernet is increasingly popular, 2) everyone should want to carry ethernet frames, and 3) people want bigger and bigger pipes. The first 2 are true, the third is less true now than it was 3 years ago.

      So the answer is: it wouldn't surprise me if we see 10 Terabit links by 2010, but I doubt very, very much that we'll see a 10 Terabit Ethernet port on a single-chassis Ethernet switch with 100 Terabits of switching capacity. I could be wrong, I hope I am, but it doesn't seem reasonable.

  • by raehl ( 609729 ) * <raehl311@@@yahoo...com> on Wednesday August 27, 2003 @03:23PM (#6807385) Homepage
    Is there going to be storage that can read/write that fast by 2010 too?
    • You won't do anything to your desktop; however (with the right switches and routers), you can have 100,000 100Mbit desktops running at full speed.
    • Play games with very little delay? That's what I would do if I and everyone else had a huge amount of bandwidth
      • Unfortunately, bandwidth plays but a minuscule part in online gaming. Latency (delay) is what makes or breaks gameplay.
        • by default luser ( 529332 ) on Wednesday August 27, 2003 @04:37PM (#6808091) Journal
          Actually, bandwidth and route have everything to do with latency.

          The efficiency of the routers / backbones you encounter is always a factor, and if one router in the chain takes forever to respond, it's going to kill your latency.

          Your packet has a certain size, and the time it takes to completely transmit that packet and complete the ack is your latency. Distance and bandwidth are the prime factors.

          Sure, your packets travel fast on a fiber backbone, but if your last-mile connection is several orders of magnitude slower (broadband or dialup), it's going to cause a significant increase in your latency.

          Even high bandwidth cannot save you from real distance. You try to play a game on the other side of the US, you're going to add a sizeable delay even with those high-bandwidth backbones. Gaming with a server on another continent? It becomes largely unplayable.
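
          (A minimal Python sketch of the arithmetic above: one-way delay is roughly serialization time, packet size over link bandwidth, plus propagation time over the distance. The packet size, link speeds and distance are just illustrative assumptions, and per-hop router queueing is ignored.)

            # Rough one-way latency model: serialization + propagation.
            def one_way_latency_ms(packet_bytes, link_bps, distance_km):
                serialization = packet_bytes * 8 / link_bps        # seconds on the wire
                propagation = (distance_km * 1000) / 2e8           # ~2/3 c in fiber
                return (serialization + propagation) * 1000

            # 64-byte game packet, coast-to-coast (~4000 km):
            print(one_way_latency_ms(64, 56e3, 4000))   # dialup last mile: ~29 ms
            print(one_way_latency_ms(64, 1e9, 4000))    # gigabit end to end: ~20 ms
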
      • by Brahmastra ( 685988 ) on Wednesday August 27, 2003 @03:34PM (#6807484)
        Bandwidth doesn't necessarily help play games with very little delay. For quick responses in games, you need low one-way latency. A network may be capable of throwing out 1000 zillion bytes/second, but if it takes too long to send out the first packet, the game isn't going to work very well. One-way latency is way more important than bandwidth when the goal is to send out many small packets as soon as possible. High bandwidth would greatly speed up large downloads, but for faster response in games, etc, lower latency is what you need.
        • Yes and no. While latency is obviously a big factor in your online experience, having unlimited bandwidth means that you could afford to send to the clients every single position for every actor (including orientation etc.) and moveable object instead of having to rely excessively on client-side compensation and prediction.

          While the perceived lag would remain pretty much the same, you'd be sure that the client-represented world would be much closer to the 'server world' than it is now.
          • having unlimited bandwidth means that you could afford to send to the clients every single position for every actor (including orientation etc.) and moveable object instead of having to rely excessively on client-side compensation and prediction.

            and then the aimbots and see-through-wall hacks become even more effective, as they can track every single player on the screen at all times.

            Most client-side compensation and prediction is latency compensation anyway.
        • "For quick responses in games, you need low one-way latency. A network may be capable of throwing out 1000 zillion bytes/second, but if it takes too long to send out the first packet, the game isn't going to work very well. "

          What if I send out all my moves at once? My brain has on board branch prediction.

        • I agree entirely with this. What I find games are doing is that they are sending a lot more data than is actually needed. You can think of it as sending exact data instead of relative data: instead of saying where you are, say how you've moved; instead of what keys are down, what keys have changed. I think this has been done because if the latency is high, extra bandwidth won't slow down the transfers much at all. But in the future as we shrink the latency, we'll end up just streaming gigs and gigs of
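
          (A toy Python sketch of the absolute-versus-relative idea described above; the field names and values are made up purely for illustration.)

            import json

            # Absolute update: the full state every tick, even if little changed.
            absolute = {"player": 7, "x": 102.5, "y": 34.0, "keys": ["W", "MOUSE1"]}

            # Relative (delta) update: only what changed since the last tick.
            delta = {"player": 7, "dx": 0.5, "keys_down": ["MOUSE1"], "keys_up": []}

            print(len(json.dumps(absolute)), "bytes vs", len(json.dumps(delta)), "bytes")
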
    • Hell (Score:5, Interesting)

      by Dachannien ( 617929 ) on Wednesday August 27, 2003 @03:27PM (#6807418)
      ...is there going to be a bus on desktop machines that can read or write that fast?

      Probably not. But I could definitely see it being useful for top-end server systems with hugely parallel storage and memory access.

      • Latency is the killer there, not really bandwidth.
      • ...is there going to be a bus on desktop machines that can read or write that fast?

        I certainly hope so, or there's no way in the world I'll be able to play "Unreal Tournament 2010" with internet multiplayer.
      • Routing (Score:3, Interesting)

        by phorm ( 591458 )
        One would think that if they have a device that could route such traffic, then it must have some sort of bus/hardware capable of handling it. Somewhere along the line this traffic has to hit a node-point, right?

        Now really, I don't see much point in directing 10Tb ethernet to one machine anyhow. But it would be great for large node-points. If you think about 100Mbps, generally no single machine is going to use that much in a normal network. However, many machines will, and sometimes quite easily in large sit
    • by mirko ( 198274 ) on Wednesday August 27, 2003 @03:28PM (#6807425) Journal
      I guess there will only be one computer by that time: a virtual computer distributed over millions of physical nodes, so the storage might be each of these nodes' memory... Like Freenet, but also aimed at distributing workload.
    • by Abm0raz ( 668337 ) on Wednesday August 27, 2003 @03:29PM (#6807440) Journal
      Will it be meant for actual LAN usage? I think it's being designed more for back-bone-like reasons. I can't even get my drives to transfer large amounts of data back and forth in a reasonable time, so I can't see the need unless we go to entirely solid state drives.

      but ... imagine a beowulf cluster interconnected by lines of these ... ;)

      -Ab
    • by rmarll ( 161697 ) on Wednesday August 27, 2003 @03:38PM (#6807524) Journal
      People/institutions with large storage arrays.
      LAN parties are, in a lot of ways, hindered by bandwidth. We have a monthly thing in town here that is pushing the limits of the 100Mb switches and GE backbone.
      Watching multiple streams of HDTV video from the media server in your basement.
      Networking processors from different workstations to provide a little more processing power.

      And most importantly.

      Haptic porn.
    • by Kjella ( 173770 ) on Wednesday August 27, 2003 @03:41PM (#6807549) Homepage
      You'd probably not do a thing. But I know the internal network lines at my Uni (I left this summer) are glowing, pushing 1Gbit; the main backbone is now 10Gbit, I think. And keeping the internal network fast (not to mention, looking the other way) keeps the external connection from getting squished. If 10Tbit is available in 2010, they'll probably go for something like that. It doesn't take many students' homes to create huge amounts of traffic...

      Kjella
    • Who else thought of this when seeing this headline:
      Ten billion people coming your way
      Ten in 2010, Ten in 2010

      -- bad religion

      And if they're all on the Internet, we will *definitely* need 10 terabits... and if Big Brother is watching that much traffic, I'm sure there will be SCSI/RAID tech that can write that fast.

    • Better question... (Score:5, Interesting)

      by siskbc ( 598067 ) on Wednesday August 27, 2003 @04:15PM (#6807863) Homepage
      Will there be a computer with a bus that can transmit data that fast? To hell with read/write - I'll concede it's all in memory. I don't think computers will be able to do (10^13)/64 bus cycles per second by 2010, assuming Moore's Law is loosely adhered to. As I calculate it, 7 years at 1.5 years per doubling cycle leaves 4.8 doubling cycles. Assuming a top speed of 3 GHz and a 64-bit architecture, I get 1E13 bits / (64 bits/clock) / ((2^4.8)*3E9 clocks) = 1.87, or 87% overcapacity.

      And that assumes that transfer occurs at chip speed, which it doesn't. Assuming a modest clock multiplier of 8 between system bus and chip, that's a 15x overcapacity, even if the entire computer were used to transmit.
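
      (A minimal Python sketch reproducing the parent's back-of-the-envelope estimate; the 3 GHz clock, 64-bit bus and 1.5-year doubling time are the parent's assumptions, and the parent rounds the doubling count up to 4.8.)

        link_bps   = 1e13          # 10 Terabit Ethernet
        bus_bits   = 64            # bits moved per bus clock
        clock_2003 = 3e9           # 3 GHz today
        doublings  = 7 / 1.5       # ~4.7 Moore's-Law doublings by 2010

        bus_bps_2010 = bus_bits * clock_2003 * 2 ** doublings
        print(link_bps / bus_bps_2010)   # ~2x: the link outruns the whole bus
        # (rounding the doublings to 4.8 gives ~1.87, the parent's 87% figure)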

  • by Brahmastra ( 685988 ) on Wednesday August 27, 2003 @03:25PM (#6807405)
    Bandwidth is good, but what about latency? Ethernet has traditionally suffered from high latencies and doesn't work very well for High-Performance-Computing clusters. Myrinet and other ridiculously overpriced networking hardware work much better for clustering. I hope terabit Ethernet does something about Ethernet latency so that efficient clustering becomes a little cheaper.
    • by joib ( 70841 )
      Uh oh, guess how Myrinet, Quadrics and Scali achieve their indeed impressively low latency? By having special user-space MPI libraries that access the hardware directly, without the kernel. And of course, they don't use the IP protocol, but some proprietary protocol designed specifically for cluster use (as simple as possible, e.g. no routing).

      So, unfortunately, the technology used for cluster interconnects is totally non-general-purpose. Actually it's more or less useless unless you have an MPI application.
  • Salad (Score:3, Insightful)

    by Anonymous Coward on Wednesday August 27, 2003 @03:27PM (#6807410)
    What is that article actually supposed to be about? Seems like a scrambled mess of acronymic buzzwords with no actual content to me.
  • LAN or Internet? (Score:3, Insightful)

    by blixel ( 158224 ) on Wednesday August 27, 2003 @03:28PM (#6807422)
    The article is already slashdotted so I can't read it. So what is it referring to? 10Tb LAN speeds? If so - who cares? My existing 100Mb (200Mb switched full duplex) LAN is hardly the weakest link.
    • Re:LAN or Internet? (Score:3, Interesting)

      by garcia ( 6573 ) *
      who cares?

      How about those interested in clustering and not interested in paying for expensive solutions (that now exist because of high latency in ethernet)?

      How about those that are interested in having a network other than their home network where 100 or 1000Mb is just not enough?

      The home market isn't the ONLY market available for networking you know. Especially with FL thinking about taxing it ;)
    • Re:LAN or Internet? (Score:2, Interesting)

      by El_Ge_Ex ( 218107 )
      The article is already slashdotted so I can't read it.

      At those speeds, does the /. effect still exist? Is it the server that becomes the sole limiting factor?

      -B
    • The only time I see utilization above 10% is when I'm backing up systems across the wire.

      This might be good for SAN's. But I'd be looking at iSCSI for that.

      We haven't even deployed gigabit Ethernet yet.

      I shudder to think of the size of the files that will need that much bandwidth for decent performance.
      • You may shudder... I just think of pr0n.
      • Yeah, and just think, the latest Microsoft Worm will be able to spread in a few seconds, instead of a few hours. Seriously though, this reminds me of all the talk from the EDO and RDRAM days. "Huge Bandwidth Increases" means almost nothing to most users. I'd personally like to see decreased latencies. Just because you can theoretically transfer 10Tb/sec doesn't mean that you'll get it in the first second (in fact, you won't). In fact, if latencies are anything like now, you'd be lucky to get 100Mb in t
      • Size of a 1 page MS Word file over time
        1 TerB = IINnnnnNnnnnNnnnnNnnnnNnnnnNnnnnN##Nnnnn
        lameness IInnnnnnnnnnnnnnnnnnnnnnnnnnnnnn##nnnnnn
        lameness IInnnnnnnnnnnnnnnnnnnnnnnnnnnnn##nnnnnnn
        500 GB = IINnnnnNnnnnNnnnnNnnnnNnnnnNn##NnnnnNnnn
        lameness IInnnnnnnnnnnnnnnnnnnnnnnnnn##nnnnnnnnnn
        lameness IInnnnnnnnnnnnnnnnnnnnnn###nnnnnnnnnnnnn
        1 GigB = IINnnnnNnnnnNnnnnNnnn###nnnNnnnNnnnnNnnn
        lameness IInnnnnnnnnnnnnnn###nnnnnnnnnnnnnnnnnnnn
        lameness IInnnnnnnnnnn####nnnnnnnnnnnnnnnnnnnnnnn
        500 M

    • When you're wiring up your home so that you can have high-quality, practically uncompressed high definition video coming from a central video server such that every room can be watching a different stream simultaneously, while some may be actively editing data and rerendering, you're going to want the fastest, fattest pipe you can get.

      And who knows what bandwidth-hungry LAN application you're going to want to do in the future. Have you any idea how long it takes to render a cup of tea, Earl Grey, hot in s
      • When you're wiring up your home so that you can have high-quality, practically uncompressed high definition video coming from a central video server such that every room can be watching a different stream simultaneously, while some may be actively editing data and rerendering, you're going to want the fastest, fattest pipe you can get.

        True enough. But it's not going to matter much having all that bandwidth in your house when you're still poking about the Internet at pathetically slow speeds by comparison
  • Article Text (Score:4, Informative)

    by Anonymous Coward on Wednesday August 27, 2003 @03:28PM (#6807424)
    10 Terabit Ethernet: from 10 Gigabit Ethernet, to 100 Gigabit Ethernet, to 1 Terabit Ethernet
    By: Steve Gilheany
    (Aug 27, 2003)

    Ethernet Timeline

    * 10 Megabit Ethernet 1990*
    * 100 Megabit Ethernet 1995
    * 1 Gigabit Ethernet 1998
    * 10 Gigabit Ethernet 2002
    * 100 Gigabit Ethernet 2006**
    * 1 Terabit Ethernet 2008**
    * 10 Terabit Ethernet 2010**

    * Invented 1976, 10BaseT 1990
    ** projected

    Every kind of networking is coming together: LANs (Local Area Networks), SANs (Storage / System Area Networks), telephony, cable TV, inter-city optical fiber links, etc., but if you don't call it Ethernet you cannot sell it. Your networking must also include a reference to IP (Internet Protocol) to be marketable.

    Above 10 Gigabit Ethernet lies 100 Gigabit Ethernet. The fastest commercial bit rate on a fiber transmitter/receiver pair is 80 Gigabits per second. Each Ethernet speed increase must be an order of magnitude (a factor of 10) to be worth the effort to incorporate a change, and 100 Gigabit Ethernet has not been commercially possible with a simple bit multiplexing solution, but NTT has solved this problem and has the first 100 Gigabit per second chip to begin a 10 Gigabit system [http://www.ntt.co.jp/news/news02e/0212/021204.html]. Currently, Nortel Networks offers DWDM (Dense Wavelength Division Multiplexing) where 160 of the 40 Gigabit transmitter/receiver pairs are used to transmit 160 wavelengths (infrared colors) on the same fiber yielding a composite, multi-channel, bandwidth of 6.4 terabits per second. Because it is now impossible to sell networking unless it is called Ethernet (regardless of the actual protocols used), it is likely that 1 Terabit Ethernet and even 10 Terabit Ethernet (using 100 wavelengths used by 100 gigabit per second transmitter / receiver pairs) may soon be announced. Only a protocol name change is needed. And the name change is merely the acknowledgment that Ethernet protocols can tunnel through other protocols (such as DWDM) (and vice versa). In fact, Atrica has been advertising such a multiplexed version of 100 Gigabit Ethernet since 2001. [http://www.atrica.com/products/a_8000.html] Now that NTT has announced a reliable 100 Gigabit per second transmitter/receiver pair, the progression may be 1 wavelength for 100 Gigabit Ethernet, 10 wavelengths (10 x 100 Gigabits per second) CWDM (Coarse Wavelength Division Multiplexing) for 1 Terabit Ethernet, and 100 wavelengths (100 x 100 Gigabits per second) DWDM for 10 Terabit per second Ethernet in the near future.

    iSCSI (Internet SCSI) over Ethernet is replacing: *SCSI (Small Computer Systems Interface, in 1979 it was Shugart Associates Systems Interface: *SASI), *FC (Fibre Channel), and even *ATA (IBM PC AT Attachment) aka (also known as) *IDE (Integrated Drive Electronics) *see [http://www.pcguide.com], Ethernet is replacing ATM (Asynchronous Transfer Mode), Sonet (Synchronous Optical NETwork), POTS (Plain Old Telephone Service, which is being replaced with Gigabit Ethernet to the home in Grant County, Washington, USA ) [see references from Cisco Systems 1, 2, 3, or 4] [www.wwp.com], *PCI (Peripheral Component Interconnect local bus), Infiniband, and every other protocol, because, as described above, if you don't call it Ethernet you cannot sell it. Everything, in every type of, communications must now also include a reference to IP (Internet Protocol) for the same reason.

    At the same time that the transmitter / receiver pairs are getting faster, and DWDM is adding channels, the capacity of fibers is increasing, as is the transmission distance available without repeaters. Omni-Guide [http://www.omni-guide.com/; then click on enter] is working on fibers that "could substantially reduce or even eliminate the need for amplifiers in optical networks. Secondly it will offer a bandwidth capacity that could potentially be several orders of magnitude greater than conventional single-mode optical fibers." Eliminating
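
    (A minimal Python sketch of the multiplexing arithmetic the article leans on: aggregate bandwidth is simply wavelengths times per-wavelength rate, using the channel counts quoted above.)

      def aggregate_bps(wavelengths, per_channel_gbps):
          return wavelengths * per_channel_gbps * 1e9

      print(aggregate_bps(160, 40))    # Nortel DWDM example: 6.4 Tb/s
      print(aggregate_bps(1, 100))     # 100 Gigabit Ethernet: 100 Gb/s
      print(aggregate_bps(10, 100))    # CWDM "1 Terabit Ethernet": 1 Tb/s
      print(aggregate_bps(100, 100))   # DWDM "10 Terabit Ethernet": 10 Tb/s
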
  • Name Change (Score:3, Funny)

    by Nept ( 21497 ) on Wednesday August 27, 2003 @03:28PM (#6807428) Journal
    Tethernet?

  • Meh, the article already appears to be slashdotted, but from a first read I have to wonder if I am missing something here with iSCSI. Isn't this simply a different protocol for something Firewire already takes care of, especially with faster iterations? Firewire is already a subset of SCSI, but with hot plug-and-play, and you can also run TCP/IP over Firewire.

    • by mao che minh ( 611166 ) * on Wednesday August 27, 2003 @03:34PM (#6807493) Journal
      iSCSI basically takes native SCSI commands, wraps them up (encapsulates them), and sends them over the wire. In other words, you could use a SCSI scanner over a network without having to resort to PC Anywhere or something.
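
      (A toy Python sketch of the encapsulation idea: a raw SCSI command block framed and carried as ordinary TCP payload. The header here is made up for illustration and is not the real iSCSI PDU format.)

        import struct

        # A SCSI READ(10) command descriptor block (CDB): opcode, LBA, length...
        scsi_cdb = bytes([0x28, 0, 0, 0, 0x10, 0, 0, 0, 8, 0])

        # Wrap it in a trivial made-up header (length + opcode) for transport.
        frame = struct.pack("!IB", len(scsi_cdb), 0x01) + scsi_cdb

        # The frame would then travel as plain TCP payload to the target,
        # which unwraps it and hands the CDB to the disk, scanner, etc.
        print(frame.hex())
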
        iSCSI basically takes native SCSI commands, wraps them up (encapsulates them), and sends them over the wire. In other words, you could use a SCSI scanner over a network without having to resort to PC Anywhere or something.

        I believe the same concept is possible with Firewire. In fact, the Firewire protocol allows for use completely independently of any computer or CPU.

  • by tambo ( 310170 ) on Wednesday August 27, 2003 @03:30PM (#6807442)
    Pretty cool for LANs, but otherwise rather useless.

    We already have gigabit Ethernet - which (even rounding down somewhat to account for checksum and overhead and such) should be capable of transferring around 100 megabytes of data per second. How many of us have ever seen even 10% of this in practice for a general Internet connection? I'm lucky if I can pull one megabyte per second from an Internet site that doesn't happen to be, y'know, next door.

    - David Stein
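
    (A minimal Python sketch of the rounding above, assuming standard Ethernet framing plus TCP/IP headers on 1500-byte frames.)

      line_rate_bps = 1e9
      payload = 1460                      # TCP payload per 1500-byte frame
      on_wire = 1500 + 38                 # + preamble, MAC header, FCS, gap
      goodput_MBps = line_rate_bps / 8 * payload / on_wire / 1e6
      print(goodput_MBps)                 # ~118 MB/s, so "around 100 MB/s" is fair
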
    • I agree that it is rather useless. For instance, consider hard drive speeds. I did a little searching and found that the fastest hard drive on the planet ( http://radified.com/Benches/hdtach_x15_36lp_wme.htm ) has an average speed of 420Mbps.

      So, it seems to me that for massive data transfer, we should be worrying more about the beginning and end of the line rather than the middle.

      Not that I think improving network transfer speeds is bad...
      • The technologies they are talking about are traditionally MAN/WAN-based. You'd see this sort of stuff in a POP or colo center. Consider this simplified view:

        Each machine would plug into a gigabit switch. Let's say there are 24 of these per switch. The gigabit switch would then uplink to a core switch at 4Gb. There could be anywhere from 1 to 300 of these. The core switch would have multiple 10Gb or faster uplinks to various ISPs for peering.

        When all is said and done you have a tonne of aggregate bandwidth be
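
        (A rough Python sketch of the aggregation arithmetic in that simplified view, using the port and switch counts assumed above.)

          access_switches  = 300
          ports_per_switch = 24          # 1 Gb/s each
          uplink_gbps      = 4           # per access switch, to the core

          edge_capacity = access_switches * ports_per_switch * 1      # Gb/s
          core_ingress  = access_switches * uplink_gbps               # Gb/s
          print(edge_capacity, core_ingress, ports_per_switch / uplink_gbps)
          # 7200 Gb/s at the edge, 1200 Gb/s into the core, 6:1 oversubscription
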
    • "Pretty cool for LANs, but otherwise rather useless."

      Useless? My company could use it right about now. We've got a video system moving massive amounts of imagery through several machines. There's encoding, decoding, image processing, and all kinds of fun stuff going on. Our Ethernet backbone is the bottleneck. We're running at a gigabit and it barely keeps up. We've had to severely compress the video to keep up. With 10 terabits (maybe even 1 terabit) we'd be able to do it all uncompressed. That'd b
  • Imagine how much pr0n....er....um...I mean valuable business data you could get with this!
  • by mnmn ( 145599 ) on Wednesday August 27, 2003 @03:32PM (#6807465) Homepage
    Gigabit Ethernet and 10 Gigabit Ethernet both have it in their specs to accommodate 100 Megabit and 10 Megabit Ethernet. Therefore 10Tb Ethernet will be called 10000000/1000000/100000/10000/1000/100/10Base T for the OTHER technologies included. The chip will be bigger unless it's fancy FPGAing with the FPGA code downloaded from the driver.

    So to sell it as Ethernet they have to make it backward compatible as such. Or, to make things cheaper, they will have to settle on a different name and sell cheaper 10Tb-only cards. Cheaper 10Tb cards will sell more than compatible ones.
  • by travdaddy ( 527149 ) <travo&linuxmail,org> on Wednesday August 27, 2003 @03:38PM (#6807523)
    My prediction for the year 2010... I'll still be on a 56k modem. :-(
  • by Mr. Neutron ( 3115 ) on Wednesday August 27, 2003 @03:41PM (#6807547) Homepage Journal
    ...to prevent the Slashdot Effect?
  • by cybergibbons ( 554352 ) on Wednesday August 27, 2003 @03:41PM (#6807553) Homepage

    These high speed DWDM systems talked about in this article aren't designed to be used for LANs or home internet connections - they are meant for high speed backbones that span huge distances (such as across the US or Australia).

    They carry multiple 10Gb/s or 40Gb/s channels on one fibre pair - and these individual channels can be added or removed as necessary, and can be treated independently. That said, 10Gb/s is still a lot, and generally that needs to be broken down into more manageable sections, such as gigabit copper Ethernet or maybe even 100Mb/s.

    It may seem like overkill, but at the core of most networks, there is a distinct lack of bandwidth. Maybe the VOD and video calling predicted 10 years back won't happen on these networks, but more applications are requiring these huge amounts of bandwidth.

    An example of this sort of system being rolled out is the Marconi Solstis system in Australia [fibre-systems.com]. A very small part of that system was designed by me :)

  • packetengines (Score:4, Interesting)

    by joeldg ( 518249 ) on Wednesday August 27, 2003 @03:43PM (#6807566) Homepage
    I am sure packetengines (http://www.scyld.com/network/yellowfin.html) is all over this.
    These guys had gigabit routers four years ago when I was helping to set up the AFN (ashlandfiber.com)

    Cool to see.. mo'faster is mo'betta

  • by MarcoAtWork ( 28889 ) on Wednesday August 27, 2003 @03:44PM (#6807572)
    I am an EE major, and when I was going to university in the late 80s/early 90s everybody was going on about how fiber was the future and how we'd run out of capacity on copper RealSoonNow: who'd have thought about 10 TERABIT Ethernet back then! (heck, I was happy as a clam when my lab upgraded from coax to baseT so the jokers couldn't bring down my box by unscrewing their terminators...)
    • If you had been using thick coax (10base5) instead of thin coax (10base2), you wouldn't need a terminator. Of course you'd need an external transceiver and a vampire tap, and somebody who knows how to install the tap.

  • by chill ( 34294 ) on Wednesday August 27, 2003 @03:47PM (#6807597) Journal
    Lucent was selling their all-optical DWDM switches (Lambda Series) last year. The LambdaXtreme is a 40 Gbps DWDM unit that uses micro-mirrors (MEMS) for switching. Data is not converted to electricity, but stays as photons the entire route. It is capable of sending data through optical fibers for 1,000 km *without regeneration*, and for 4,000 km *without regeneration* at reduced (10 Gbps) speeds.

    They sold a pair of units (and you have to buy at least 2 or they are useless) to Time-Warner. There is one on the East Coast and one on the West and it forms a major part of their cross-country backbone.

    8-10 of the units were sold to Korea (South) for use in wiring up their national rail systems. I also believe NTT DoCoMo (Japan) bought a couple.

    This is all last year. Since I'm no longer with that company (layoffs), I no longer get all the product updates. These units were in my product group for install, service and support.
  • by Rev.LoveJoy ( 136856 ) on Wednesday August 27, 2003 @03:50PM (#6807622) Homepage Journal
    that even then, Kevin Tolley will still be ranting away in the pages of Network World about how much better Token Ring really is...

    Cheers,
    -- RLJ

  • Well, I think the reason things can't be sold without the Ethernet label on them has got to be the increase in the popularity of networking.

    Go driving around a neighborhood with Kismet and you'll see what I mean. There are tons of people with wireless networks in their homes. Now every Joe user in the world can set up their own network in their home. Now, Joe doesn't know the difference between ethernet and cat5. But what is the main thing that he sees on all of his packaging? ETHERNET.
  • Because it is now impossible to sell networking unless it is called Ethernet...
    I see. So that's why all of those public access points are springing up with wireless Ethernet access points.
    • I see. So that's why all of those public access points are springing up with wireless
      Ethernet access points.
      Was this intended to be sarcastic? I hear 802.11x referred to as "Wireless Ethernet" all the time.

      Actually, to me, Ethernet = "contention-based, shared-medium computer network." In which case, 802.11x is very properly called Ethernet. Switched 100bT, on the other hand...

    • Wouldn't it be more appropriate if a UTP Ethernet be called "wired Ethernet"?
      "Ethernet" by itself should be more than descriptive enough for a wireless network...
    • The main meme of Ethernet is carrier-sense multiple access (CSMA), i.e., talk if it is quiet, don't if it isn't, asynchronously. This is in direct opposition to synchronous protocols of all kinds (including TDMA), CDMA, or token passing.

      So, OK, Wi-Fi (CSMA/CA) is not really Ethernet (CSMA/CD), but it has the same asynchronous spirit (CSMA)...
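
      (A toy Python sketch of the "talk if it is quiet" idea: stations sense the medium, transmit if idle, and back off randomly after a collision. This is a cartoon of CSMA, not a model of either 802.3 or 802.11.)

        import random

        stations = [{"id": i, "wait": 0} for i in range(3)]
        for slot in range(20):
            ready = [s for s in stations if s["wait"] == 0]      # sense: medium idle?
            if len(ready) == 1:
                print(f"slot {slot}: station {ready[0]['id']} transmits")
                ready[0]["wait"] = random.randint(1, 3)          # time until next frame
            elif len(ready) > 1:
                print(f"slot {slot}: collision, random backoff")
                for s in ready:
                    s["wait"] = random.randint(1, 4)             # back off and retry
            for s in stations:
                s["wait"] = max(0, s["wait"] - 1)
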
  • Ok, so here's my situation.

    Just bought a house. Got a sweet deal with the builder where I sign a waiver (if you kill yourself it's not our fault), and I'm able to go in and put network cable in the walls. This will probably happen in a month or so (they just poured the foundation two weeks ago).

    I was seriously going to go in there and put 2-4 Cat5e ports in each room, and I've already bought a 1000ft spool of the stuff for the occasion. Unfortunately due to building codes (so they say), they will not
    • I can't imagine standard copper cabling will get much faster. The maximum feasible speed over 8 wires is pretty much 1 Gigabit/s, and that's with Cat6. Wiring your new house with Cat5e would be silly if you're really serious about "keeping up with the times", although I highly doubt the consumer will be using GigE any time remotely soon.

      If you want to really future-proof your house, go multi-mode fiber. It will cost more than a lot of your appliances, but the geek factor would be sky-high.
    • It won't make your copper obsolete, 'cause it ain't meant to replace it...
      Think internet backbones...
      Most PCs would probably have a hell of a time even coming close to pushing 10 terabits/sec
    • Re:Cabling? (Score:2, Informative)

      by stratjakt ( 596332 )
      You can always pull new cable through the walls if you aren't afraid of a little work. There are millions of tricks and tools and snakes and whatnot; electricians pull new wire all the time with minimal damage to the walls (minimal as in 5 secs with a putty knife to fix it).

      Hell you should be able to tie the new cable to the old, and yank it through, removing the old as the new replaces it.

      Unless you're going to be a dork and staple the Cat5 every few feet.

      BTW, that 1000-foot roll won't go as far as you t
    • Re:Cabling? (Score:3, Informative)

      by b1t r0t ( 216468 )
      (Disclaimer: IANAElectrician)

      As I understand it, low-voltage cables like Ethernet and telephone wires do not need conduits. What they do need, however, is to be plenum-grade if they go into a "forced air space" like an air conditioning duct. It's also probably a bad idea to bring them through a hole in the ceiling of your wiring closet like I did :) in my install. But I don't have a good replacement idea other than a bunch of holes drilled from the top of the wall and brought out through a box on the wa

    • I was a building/wiring guy for a while, and plenty of building codes require using conduits of the right type, but I've never heard of one that required you NOT to use any conduit at all. Codes vary in details widely around the country (and even within the same state), but if you want to use a conduit you might want to make a phone call to the local inspection office or even an electrical company and ask them about the relevant local building codes.

      It's not exactly unknown for a builder to BS a customer and after
    • You got the right number of ports. If you can put 2 on one wall and 2 on the opposite wall do it. I can't tell you how many times I've needed one of my cat5e drops to be on the opposite wall or needed a 2nd one. On the same covers put 2 coax lines also for digital cable/satellite. Put that on each plate that has the cat5e drops. Make everything run into a closet on the top floor of the house. Make sure you label the runs well also. Do this for each room. You may be able to cut down to just one drop
  • Imagine a beowulf cluster of nodes connected by this!
  • by Sir Rhosys ( 84459 ) on Wednesday August 27, 2003 @04:01PM (#6807726)
    Just read this in ESR's Art of Unix Programming [catb.org] and thought it was applicable:
    "Robert Metcalf [the inventor of Ethernet] says that if something comes along to replace Ethernet, it will be called "Ethernet", so therefore Ethernet will never die. Unix has already undergone several such transformations."


    -- Ken Thompson
    Here is the page in the manuscript with the quote [catb.org].

    My apologies for both the recursive quoting and name dropping.
  • by MrPerfekt ( 414248 ) on Wednesday August 27, 2003 @04:02PM (#6807731) Homepage Journal
    I'll stop trolling here after I get this out: stop thinking this has anything to do with your top-of-the-line, supergeekin' Athlon.

    This technology is mainly meant for backbones, be it on a campus level or as a longer-haul backbone. Obviously, your desktop will not need to transfer anywhere near that much data within the next, say, 25 years. If you were using your head while you were reading the (albeit poorly written) article, I wouldn't have to troll. :(

    • Yes, in 1995 no one thought there was a call for 100BT to the desktop either. And in 2002, no one thought there was a need for GigE to the desktop either.

      (I write this after I just did a 500 Mbps ftp transfer of a 7GB video file over GigE...)
        In 1995 there was a call for 100BT links; trust me, I was there, and we were doing it earlier than that. In 2002 GigE over copper was there, and my personal first installation of GigE to the desktop was in 1998. So far the PC could arguably handle them: GigE pushed the PCI bus to breaking, and 10GigE will do the same to PCI-X for a few years to come. Now granted, I wholeheartedly agree that until a lot of issues are worked out you won't be seeing fiber to the normal desktop.

        As an aside, I think the funniest part
  • by El ( 94934 )
    My manager keeps correcting me: "It's not Ethernet, it's 802.3!" Uh, then why is the driver name "eth0"??? Is "eth" a contraction of "Eight oh two dot THree"?
  • RIAA (Score:2, Interesting)

    by dlosey ( 688472 )
    Can you imagine trying to stop mp3 transfers with this technology?
    A 5MB mp3 would take 0.000004 seconds. A whole CD would take a whopping 0.00056 seconds.
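
    (A minimal Python sketch of that arithmetic, assuming a 5MB MP3 and a ~700MB CD at the full 10 Tb/s line rate.)

      def seconds_at(payload_bytes, link_bps=10e12):
          return payload_bytes * 8 / link_bps

      print(seconds_at(5e6))      # 5 MB mp3:  0.000004 s
      print(seconds_at(700e6))    # 700 MB CD: 0.00056 s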

  • by ozzee ( 612196 ) on Wednesday August 27, 2003 @04:18PM (#6807894)

    10Tb/s means

    5 million 2Mb/sec compressed video streams

    Copy a 250GB drive in 1/4sec

    23 thousand streams of 24bit x 1600*1200pix x 75hz uncompressed

    1.5k byte packets at 670 million/sec

    2 billion x 50 byte packets per sec

    port scan all ports on all IPv4 addresses in 20 minutes

    Every US resident downloads Metallica's new track in 30 minutes off my http server

    And this will all be available at Fry's for a $50 NIC and $30 cable? When? I'll hold off buying any new network HW 'till then :^)

    Seriously, there are some significant implications here. For one, you won't need a monitor connected directly to the "fast video card" to get the next fancy 3D graphics features. Memory bandwidth and network bandwidth will be similar, meaning that clustered NUMA systems will be interesting. Some of the design decisions we deal with today, made because getting the person close to the computer was critical to the experience, will disappear.

  • Two things strike me:
    (1) All the IPv6 naysayers will have to come around to the better (IPv6) Protocol; and,

    (2) I can download my porn faster.
    And those are two very good things!
  • by greymond ( 539980 ) on Wednesday August 27, 2003 @04:36PM (#6808083) Homepage Journal
    From the article, they have a snippet at the top that goes like this - I've added the years in between on my own:

    10 Megabit Ethernet 1990*
    (5 years)
    * 100 Megabit Ethernet 1995
    (3 years)
    * 1 Gigabit Ethernet 1998
    (4 years)
    * 10 Gigabit Ethernet 2002
    (4 years)
    * 100 Gigabit Ethernet 2006**
    (2 years)
    * 1 Terabit Ethernet 2008**
    (2 years)
    * 10 Terabit Ethernet 2010**

    I think this would be more accurate though:

    * 100 Gigabit Ethernet 2006**
    (3 years)
    * 1 Terabit Ethernet 2009**
    (3 years)
    * 10 Terabit Ethernet 2012**

    Basically, I don't see the technology being developed any faster than every 3-4 years because, as it stands, the home mainstream still operates on DSL connections of ~10Mb and home networks run at 100Mb. As far as the business world goes, the majority of companies I have had the opportunity of working at run only 100Mb networks, with IT "thinking/testing" about going to 1Gb.

    In short - there is NO demand for 10Gb networks currently, and especially NO demand for 100Gb, let alone a freakin' terabit pipe. Although those things are "nice" and very "cool", there is not a big enough demand/NEED for this kind of transfer - YET.

    You could also use the analogy of the current PC market. There is not a big demand for new systems right now because, even for business use, a P4 1.6GHz with 512MB of memory runs everything work- and game-related fine. As soon as something comes out that REQUIRES/needs more power, THEN you will see a rise in PC sales.
  • 100 Gb/s + is bogus (Score:3, Informative)

    by Orthogonal Jones ( 633685 ) on Wednesday August 27, 2003 @10:31PM (#6810299)

    The article mentions DWDM systems with 100 Gb/s per wavelength. That's bogus.

    I am an optical engineer at a 40 Gb/s startup. The jump from 10 Gb/s to 40 Gb/s is huge. Many signal degradations (chromatic dispersion, polarization mode dispersion, nonlinearity, ...) become a LOT worse when you jump from 10 to 40 Gb/s. The jump to 100 Gb/s would incur an even greater penalty.

    Compensating for chromatic dispersion, PMD, et al. requires optical components which DO NOT follow Moore's Law. These components are handmade specialty devices.

    While a business case can be made for 40 Gb/s, the jump to 100 Gb/s is commercially pointless. If you are building a DWDM system anyway, just spread the same data across more 10 Gb/s channels.

    What the hell is "Directions", anyway? It sounds like sci-fi fluff meant to entice VCs.
