
10 Terabit Ethernet By 2010

Eric Frost writes "From Directions Magazine: 'Because it is now impossible to sell networking unless it is called Ethernet (regardless of the actual protocols used), it is likely that 1 Terabit Ethernet and even 10 Terabit Ethernet (using 100 wavelengths used by 100 gigabit per second transmitter / receiver pairs) may soon be announced. Only a protocol name change is needed. And the name change is merely the acknowledgment that Ethernet protocols can tunnel through other protocols (and vice versa).'"
This discussion has been archived. No new comments can be posted.


  • Good stuff (Score:5, Interesting)

    by mao che minh ( 611166 ) * on Wednesday August 27, 2003 @03:22PM (#6807383) Journal

    From the article: "iSCSI (Internet SCSI) over Ethernet is replacing: *SCSI (Small Computer Systems Interface..."

    iSCSI is far superior to standard SCSI for obvious reasons, and its widespread adoption will really spark a massive gain in the SAN (Storage Area Network) market. The technology is there; now we just need more major vendors of SCSI devices (especially storage and image filing systems) to make more SCSI devices that support iSCSI natively, and applications that take advantage of it. Combined with practical solutions from vendors of network storage software like Veritas, we could see some major spending in IT. And more money being spent on IT is always a good thing.

    I don't keep up much with the progress of the Ethernet technologies at hand, so is it realistic to suppose that the practical implementation/creation of 100 Gigabit Ethernet, 1 Terabit Ethernet, and 10 Terabit Ethernet will be separated by merely two years each?

    "Because it is now impossible to sell networking unless it is called Ethernet". Incorrect. You can easily sell network gear that is tagged with the "WiFi" designation.

  • by raehl ( 609729 ) * <(moc.oohay) (ta) (113lhear)> on Wednesday August 27, 2003 @03:23PM (#6807385) Homepage
    Is there going to be storage that can read/write that fast by 2010 too?
  • Hell (Score:5, Interesting)

    by Dachannien ( 617929 ) on Wednesday August 27, 2003 @03:27PM (#6807418)
    ...is there going to be a bus on desktop machines that can read or write that fast?

    Probably not. But I could definitely see it being useful for top-end server systems with hugely parallel storage and memory access.

  • by mirko ( 198274 ) on Wednesday August 27, 2003 @03:28PM (#6807425) Journal
    I guess by then there will be only one computer: a virtual computer distributed over millions of physical nodes, so the storage might be each of these nodes' memory... Like Freenet, but also aimed at distributing workload.
  • by Abm0raz ( 668337 ) on Wednesday August 27, 2003 @03:29PM (#6807440) Journal
    Will it be meant for actual LAN usage? I think it's being designed more for back-bone-like reasons. I can't even get my drives to transfer large amounts of data back and forth in a reasonable time, so I can't see the need unless we go to entirely solid state drives.

    but ... imagine a beowulf cluster interconnected by lines of these ... ;)

    -Ab
  • Re:LAN or Internet? (Score:3, Interesting)

    by garcia ( 6573 ) * on Wednesday August 27, 2003 @03:31PM (#6807457)
    who cares?

    How about those interested in clustering and not interested in paying for expensive solutions (that now exist because of high latency in ethernet)?

    How about those that are interested in having a network other than their home network where 100 or 1000Mb is just not enough?

    The home market isn't the ONLY market available for networking you know. Especially with FL thinking about taxing it ;)
  • Re:LAN or Internet? (Score:2, Interesting)

    by El_Ge_Ex ( 218107 ) on Wednesday August 27, 2003 @03:32PM (#6807468) Journal
    The article is already slashdotted so I can't read it.

    At those speeds, does the /. effect still exist? Is it the server that becomes the sole limiting factor?

    -B
  • by DeathPenguin ( 449875 ) * on Wednesday August 27, 2003 @03:37PM (#6807514)
    Interestingly enough, if you did, it wouldn't be a very big success, because the internal PCI or PCI-X bus in the system would bottleneck the interconnects. The NICs would need on-board processors to scale with their enormous bandwidth potential, so that they could solve problems like matching checksums and other packet management tasks and not have to pound on the system bus so hard.

    It wasn't long ago that we really started exploiting video chipsets for rendering graphics, either...
  • packetengines (Score:4, Interesting)

    by joeldg ( 518249 ) on Wednesday August 27, 2003 @03:43PM (#6807566) Homepage
    I am sure packetengines (http://www.scyld.com/network/yellowfin.html) is all over this.
    These guys had gigabit routers four years ago when I was helping to set up the AFN (ashlandfiber.com)

    Cool to see.. mo'faster is mo'betta

  • by GigsVT ( 208848 ) on Wednesday August 27, 2003 @03:50PM (#6807625) Journal
    250 and 300GB SATA disks are already pushing sustained transfers over 50 MB/sec, at 7200 rpm. That's enough to max out most gigabit cards, except for the higher end ones.

    As long as the areal density keeps increasing, we will see slow but steady increases in speed too.

    If anything, networking has been the stagnant factor lately. Gigabit over copper has been out for years now, and the hardware for it is still overpriced, and mostly made by a few manufacturers.
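    The drive-vs-NIC comparison above can be checked with quick arithmetic (using the comment's own illustrative figure of 50 MB/s sustained):

    ```python
    # Quick check of the drive-vs-gigabit comparison above (illustrative figures).
    disk_mb_per_s = 50               # sustained sequential throughput, MB/s
    disk_mbps = disk_mb_per_s * 8    # megabits per second

    gigabit = 1000                   # gigabit Ethernet line rate, Mbit/s
    drives_to_saturate = gigabit / disk_mbps

    print(f"One drive: {disk_mbps} Mbit/s")
    print(f"Drives needed to fill a gigabit link: {drives_to_saturate:.1f}")
    ```

    So a single 2003-era drive fills a bit under half a gigabit link; it takes two or three of them striped to saturate it.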
  • by HTH NE1 ( 675604 ) on Wednesday August 27, 2003 @03:51PM (#6807631)
    When you're wiring up your home so that you can have high-quality, practically uncompressed high definition video coming from a central video server such that every room can be watching a different stream simultaneously, while some may be actively editing data and rerendering, you're going to want the fastest, fattest pipe you can get.

    And who knows what bandwidth-hungry LAN application you're going to want to do in the future. Have you any idea how long it takes to render a cup of tea, Earl Grey, hot in spacetime over a 100 Mbit/sec connection? I can tell you one thing: it's not going to be hot.

    More bandwidth than you'll ever need is always better than not enough. Especially when you aren't leasing it from an outside party!
  • by Brahmastra ( 685988 ) on Wednesday August 27, 2003 @03:55PM (#6807671)
    100 ms latency would affect a 10Mbps network and a 10 Tbps network almost equally if a clustered application is using very small packets to communicate. Only if the application uses very large packets will the bandwidth overcome the latency. At small packet sizes the latency will largely overshadow the bandwidth. And considering that a lot of scientific applications use small payloads, latency is very important. If Ethernet wants acceptance in the High-Performance-Computing-Clusters world, something has to be done about the latency.
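    A minimal sketch of the point above, modeling each message as costing latency plus serialization time (the 100 ms figure and 4 KB message size are taken as illustrative assumptions):

    ```python
    # With a fixed per-message latency, small messages see nearly the same
    # effective throughput on a slow network and on a very fast one.
    def effective_bps(msg_bits, latency_s, link_bps):
        """Effective throughput when each message costs latency + serialization."""
        return msg_bits / (latency_s + msg_bits / link_bps)

    latency = 0.100           # 100 ms per message, as in the comment
    small = 4 * 1024 * 8      # 4 KB message, in bits

    slow = effective_bps(small, latency, 10e6)    # 10 Mbps link
    fast = effective_bps(small, latency, 10e12)   # 10 Tbps link

    print(f"10 Mbps link: {slow / 1e3:.1f} kbit/s effective")
    print(f"10 Tbps link: {fast / 1e3:.1f} kbit/s effective")
    ```

    A million-fold increase in line rate buys only a few percent of effective throughput at this message size; latency dominates completely.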
  • by ivan256 ( 17499 ) * on Wednesday August 27, 2003 @03:56PM (#6807680)
    Now take the same server, and instead of transfering a 1GB file, send a 4k message for a DSM page update, or a filesystem locking operation (4k is generous). Which network is effectively faster then? Transferring large files is far from the only use of networks. Latency *is* important, and ethernet latencies have not gotten the exponential speed boosts that the bandwidth has.

    Clustering and LAN file servers are two common uses for networks that won't benefit much from increasing bandwidth beyond 2 Gbps, compared to how much they would benefit from lower latencies.
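    The contrast above (bulk file vs. small lock message) can be put in numbers. This sketch assumes a 50 microsecond one-way latency, purely for illustration, and compares the comment's 2 Gbps figure against a 10 Gbps link:

    ```python
    # Bulk transfers benefit from bandwidth; small messages mostly don't.
    LATENCY = 50e-6  # assumed one-way latency, seconds

    def xfer_time(size_bytes, link_bps):
        """Time to deliver one message: latency plus serialization."""
        return LATENCY + size_bytes * 8 / link_bps

    speedups = {}
    for size, label in [(1e9, "1 GB file"), (4096, "4 KB message")]:
        t2 = xfer_time(size, 2e9)     # 2 Gbps
        t10 = xfer_time(size, 10e9)   # 10 Gbps
        speedups[label] = t2 / t10
        print(f"{label}: 2 Gbps {t2 * 1e3:.3f} ms, 10 Gbps {t10 * 1e3:.3f} ms, "
              f"speedup {t2 / t10:.2f}x")
    ```

    The 5x bandwidth increase gives the file transfer nearly a 5x speedup, but the 4 KB message only gains about 25%, because its time is mostly latency.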
  • Re:Good stuff (Score:5, Interesting)

    by prrole ( 639100 ) on Wednesday August 27, 2003 @03:57PM (#6807690) Homepage
    iSCSI is NOT far superior to SCSI, or Fibre Channel. iSCSI has massive issues related to deterministic latency, and the computational cost of processing TCP/IP at gigabit speeds. You may see some growth of iSCSI in the workgroup segment, but I don't see iSCSI replacing FC/SCSI in the near future for mission critical computing.
  • by netringer ( 319831 ) <.maaddr-slashdot. .at. .yahoo.com.> on Wednesday August 27, 2003 @04:07PM (#6807771) Journal
    Was this intended to be sarcastic? I hear 802.11x referred to as "Wireless Ethernet" all the time.
    OK, Sorry. What I meant was it seems to sell pretty well being called Wi-Fi.
  • Re:Good stuff (Score:5, Interesting)

    by 4of12 ( 97621 ) on Wednesday August 27, 2003 @04:09PM (#6807793) Homepage Journal

    iSCSI

    A really nice development.

    Yet more big advantages to iSCSI are the ability to keep the

    • large,
    • noisy,
    • power-hungry,
    • heat-generating,
    • insecure

    disks out of workstations in workers' offices, and down the hall in a

    • sound-proof,
    • secure,
    • air-conditioned,
    • UPS'd

    server room with

    • mirrored images,
    • archival backups.

    Next thing you know, GPUs will come with on-board Ethernet controllers and USB plugs for keyboard and mouse, and be built in to the back of an LCD monitor.

  • RIAA (Score:2, Interesting)

    by dlosey ( 688472 ) on Wednesday August 27, 2003 @04:10PM (#6807800)
    Can you imagine trying to stop mp3 transfers with this technology?
    A 5MB mp3 would take 0.000004 seconds. A whole CD would take a whopping 0.00056 seconds.
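    The times quoted above check out at a raw 10 Tbit/s line rate (assuming a 700 MB CD and ignoring protocol overhead):

    ```python
    # The arithmetic behind the transfer times quoted above.
    LINK_BPS = 10e12        # 10 terabits per second, raw line rate

    mp3_bits = 5 * 8e6      # 5 MB MP3, in bits
    cd_bits = 700 * 8e6     # 700 MB CD, in bits

    mp3_time = mp3_bits / LINK_BPS
    cd_time = cd_bits / LINK_BPS
    print(f"MP3: {mp3_time:.6f} s")
    print(f"CD:  {cd_time:.5f} s")
    ```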

  • Routing (Score:3, Interesting)

    by phorm ( 591458 ) on Wednesday August 27, 2003 @04:12PM (#6807824) Journal
    One would think that if they have a device that could route such traffic, then it must have some sort of bus/hardware capable of handling it. Somewhere along the line this traffic has to hit a node-point, right?

    Now really, I don't see much point in directing 10Tb Ethernet to one machine anyhow. But it would be great for large node-points. If you think about 100Mbps, generally no single machine is going to use that much in a normal network. However, many machines together will, and sometimes quite easily in large situations.

    For huge networks, or ISP's, 10Tb would be the way to go.
  • Re:Good stuff (Score:3, Interesting)

    by crow ( 16139 ) on Wednesday August 27, 2003 @04:13PM (#6807831) Homepage Journal
    EMC is now supporting iSCSI in their high-end Symmetrix DMX storage systems. It's just a matter of time before all storage vendors offer this. It's also just a matter of time before network cards that can talk native iSCSI are available (allowing you to boot from iSCSI, among other things).

    [Note: I work for EMC and am friends with the iSCSI developers, so my views are a bit biased.]
  • Better question... (Score:5, Interesting)

    by siskbc ( 598067 ) on Wednesday August 27, 2003 @04:15PM (#6807863) Homepage
    Will there be a computer with a bus that can transmit data that fast? To hell with read/write - I'll concede it's all in memory. I don't think computers will be able to do (10^13)/64 bus cycles by 2010, assuming Moore's law is loosely adhered to. As I calculate it, 7 years at 1.5 years/doubling cycle leaves 4.8 doubling cycles. Assuming a top speed of 3 GHz and 64 bit architecture, I get 1E13bits/(64bits/clock)/((2^4.8)*3E9clocks) = 1.87, or 87% overcapacity.

    And that assumes that transfer occurs at chip speed, which it doesn't. Assuming a modest clock multiplier of 8 between system bus and chip, that's a 15x overcapacity, even if the entire computer were used to transmit.
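    Redoing the back-of-the-envelope calculation above with the same assumptions (note that 7 years at 1.5 years per doubling is closer to 4.67 doublings than 4.8, which nudges the result to roughly 2x rather than 1.87x):

    ```python
    # Link rate vs. projected 2010 bus throughput, per the comment's assumptions.
    link_bps = 1e13            # 10 Tbit/s
    bus_width = 64             # bits transferred per bus cycle
    clock_2003 = 3e9           # 3 GHz top clock in 2003
    years, doubling_period = 7, 1.5

    doublings = years / doubling_period          # ~4.67
    clock_2010 = clock_2003 * 2 ** doublings
    over = link_bps / (bus_width * clock_2010)
    print(f"{doublings:.2f} doublings -> {over:.2f}x over projected bus capacity")
    ```

    Either way the conclusion stands: even a whole 2010-era machine doing nothing but moving bits falls short of the line rate by about a factor of two.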

  • Re:Good stuff (Score:5, Interesting)

    by Austerity Empowers ( 669817 ) on Wednesday August 27, 2003 @04:18PM (#6807888)
    I don't keep up much with the progress of the Ethernet technologies at hand, so is it realistic to suppose that the practical implementation/creation of 100 Gigabit Ethernet, 1 Terabit Ethernet, and 10 Terabit Ethernet will be seperated by merely two years each?

    I think not. 10 GbE hasn't exactly taken the world by storm, and it's been around for over a year now. I know of products that have 10 GbE ports, but I have not witnessed an abundance of demand. To be nice, the author of this article is being just a little facetious in his claims.

    In reality if you read the article it's hard to even take him seriously. To say that Nortel's DWDM system is ethernet is like calling your 56k modem ethernet. Yeah, so you can pass ethernet frames on it. It's not standard, it's not documented anywhere in IEEE 802.anything (esp with regards to conformance), so it's NOT ethernet. Just passing ethernet frames does not make you an ethernet device. I'm honestly not really sure what the author's point is except that he seems to think 1) ethernet is increasingly popular, 2) everyone should want to carry ethernet frames, and 3) people want bigger and bigger pipes. The first 2 are true, the third is less true now than it was 3 years ago.

    So the answer is, it wouldn't surprise me if we see 10 Terabit links by 2010, I doubt very, very much that we'll see a 10 Terabit ethernet port on a single chassis ethernet switch with 100 Terabits of switching capacity. I could be wrong, I hope I am, but it doesn't seem reasonable.

  • by default luser ( 529332 ) on Wednesday August 27, 2003 @04:37PM (#6808091) Journal
    Actually, bandwidth and route have everything to do with latency.

    The efficiency of the routers / backbones you encounter is always a factor, and if one router in the chain takes forever to respond, it's going to kill your latency.

    Your packet has a certain size, and the time it takes to completely transmit that packet and complete the ack is your latency. Distance and bandwidth are the prime factors.

    Sure, your packets travel fast on a fiber backbone, but if your last mile connection is several orders of magnitude slower (broadband or dialup), it's going to cause a significant increase in your latency.

    Even high bandwidth cannot save you from real distance. You try to play a game on the other side of the US, you're going to add a sizeable delay even with those high-bandwidth backbones. Gaming with a server on another continent? It becomes largely unplayable.
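    The distance argument above is just the speed of light in glass. A quick sketch, taking roughly 4,000 km coast-to-coast and 8,000 km to Europe as assumed distances and ignoring router hops entirely:

    ```python
    # Propagation delay alone: bandwidth cannot beat distance.
    # Light in fiber travels at roughly two-thirds of c.
    C_FIBER_KM_S = 200_000   # ~200,000 km/s in glass

    def rtt_ms(km):
        """Round-trip propagation time in milliseconds, routers ignored."""
        return 2 * km / C_FIBER_KM_S * 1000

    print(f"Coast-to-coast US (~4,000 km): {rtt_ms(4000):.0f} ms minimum")
    print(f"US to Europe (~8,000 km):      {rtt_ms(8000):.0f} ms minimum")
    ```

    That floor exists no matter how fat the pipe is, which is exactly why cross-continent gaming stays laggy on any backbone.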
  • by silas_moeckel ( 234313 ) <silas.dsminc-corp@com> on Wednesday August 27, 2003 @04:50PM (#6808208) Homepage
    In 1995 there was a call for 100BT links; trust me, I was there, and we were doing it earlier than that. In 2002 GigE over copper was there, and my personal first installation of GigE to the desktop was in 1998. So far the PC could arguably handle them: GigE pushed the PCI bus to breaking, and 10GigE will do the same to PCI-X for a few years to come. Now granted, I wholeheartedly agree that until a lot of issues are worked out, you won't be seeing fiber to the normal desktop.

    As an aside, I think the funniest part of current desktop GigE is that the switches support full speed, but I know of only one card that can get even close to those speeds.
  • by Feynman ( 170746 ) on Wednesday August 27, 2003 @05:05PM (#6808345)
    From the article:
    Only a protocol name change is needed. And the name change is merely the acknowledgment that Ethernet protocols can tunnel through other protocols (such as DWDM) (and vice versa).

    It's even simpler than this, in a way. "Ethernet" denotes a protocol. But in Ethernet parlance, "DWDM" is a Physical Medium Dependent (PMD) sublayer. 10 Gb/s Ethernet (802.3ae) already includes a WDM PMD, 10GBASE-LX4.

  • Re:Good stuff (Score:3, Interesting)

    by Trolling4Dollars ( 627073 ) on Wednesday August 27, 2003 @05:13PM (#6808410) Journal
    Good point. Most users are total morons and think they need more power and storage than their work warrants. However, only time will truly tell if iSCSI will really be the "winner" in all of this. I couldn't find the link for an open source project that I'd seen before that would actually export SCSI, USB and Firewire over IP, so here's this [aol.com]
