
Corporate Data Centers As Ethernet's Next Frontier

alphadogg writes with a story that's about the possibilities for the next generation(s) of Ethernet, stuff far beyond 10base-T: "Ethernet has conquered much of the network world and is now headed deep into the data center to handle everything from storage to LAN to high-performance computing applications. Cisco, IBM and other big names are behind standards efforts, and while there is some dispute over exactly what to call this technology, vendors seem to be moving ahead with it, and it's already showing up in pre-standard products. 'I don't see any show-stoppers here — it's just time,' says one network equipment-maker rep. 'This is just another evolutionary step. Ethernet worked great for mundane or typical applications — now we're getting to time-sensitive applications and we need to have a little bit more congestion control in there.'"
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Tuesday October 21, 2008 @11:38AM (#25454457)

    And to make it easy we could call it "TokenRing".

    Or maybe a token passing bus! Maybe call it "ARCnet".

    Seriously, if there are problems with Ethernet ... for the usage you envision ... don't try to change Ethernet.

    You take the parts you want from Ethernet and you create a NEW standard with the other features you want.

    But you leave Ethernet as Ethernet. That way there is no confusion.

  • Network Vendors (Score:4, Interesting)

    by maz2331 ( 1104901 ) on Tuesday October 21, 2008 @11:39AM (#25454481)

    This seems like a total kludge being put together by networking equipment vendors to find a way to differentiate their products and return to the days where a 10 Base-T hub was $1000.

    Network gear is now mostly a commodity, except at the super high end.

    The vendors hate that - so they are trying to push the host's functionality into the LAN gear instead. They don't want to provide "dumb pipes" any longer, they want to monkey around with the traffic and protocols, and provide virtual servers and such in their boxes.

    Really, it's just an attempt to make the servers the commodity and their gear the expensive part.

    Except... you already can implement this yourself with existing equipment and software, with much better control and no vendor lock-in. For low-end solutions, a Linux cluster works great behind an UltraMonkey front end. For higher-end ones, well, that's what IBM z-series mainframes are for.

    What a great solution in search of a problem.

  • by afidel ( 530433 ) on Tuesday October 21, 2008 @11:44AM (#25454573)
    No, they want Ethernet as a transport to contain a lot of the features of TCP, so that they can lay their own protocols on top of it while being able to assume it's a reliable transport. That will increase the cost of Ethernet by pushing that intelligence down the stack. Basically, the cost of Ethernet ports is plummeting compared to things like Fibre Channel due to economies of scale, so cash-strapped datacenters are trying to use it for everything. But it's not ideally designed to handle those other protocols, so the other technology areas are trying to mold Ethernet to meet their needs. The way I see it, if the industry does what is right there will be no 100Gbit Fibre Channel; there will be 100Gbit FCoE adapters.
  • by pecosdave ( 536896 ) on Tuesday October 21, 2008 @11:48AM (#25454641) Homepage Journal

    There's a draft of FireWire that hasn't hit yet that uses standard Ethernet cables. The port is supposed to be able to use either FireWire OR Ethernet, switching between them depending on what it's plugged into. This sounds ideal to me.

  • by sirwired ( 27582 ) on Tuesday October 21, 2008 @12:02PM (#25454815)

    Fibre Channel over Ethernet has real promise, but these new requirements are a real stumbling block.

    What many of the posters here may not realize is that storage traffic (and the "standard" SCSI it uses) is extremely intolerant of dropped frames. A link that in the TCP/IP world would be spectacular is wholly unsuited for block-level storage, which is too latency-sensitive to have time to recover from dropped data.

    Since the most common cause of dropped frames within a data center is congestion, FCoE requires your Ethernet to implement frame-based flow control, which prevents the congestion from occurring, along with the resulting frame loss.

    SirWired
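
    For the curious, the frame-based flow control sirwired describes is the IEEE 802.3x PAUSE mechanism. A minimal sketch of what such a frame looks like on the wire (field layout per 802.3 Annex 31B; the source MAC below is a made-up placeholder, and the FCS is omitted since the NIC appends it):

    ```python
    import struct

    # 802.3x PAUSE frame, built as raw bytes. A congested port sends this
    # to ask its link partner to stop transmitting for `quanta` units of
    # 512 bit-times.
    PAUSE_DST = bytes.fromhex("0180c2000001")   # reserved multicast address
    SRC_MAC   = bytes.fromhex("020000000001")   # placeholder station MAC
    ETHERTYPE = 0x8808                          # MAC Control
    OPCODE    = 0x0001                          # PAUSE

    def build_pause_frame(quanta: int) -> bytes:
        """quanta: pause time in units of 512 bit-times (0..65535)."""
        header = PAUSE_DST + SRC_MAC + struct.pack("!H", ETHERTYPE)
        payload = struct.pack("!HH", OPCODE, quanta)
        # Pad to the 60-byte minimum frame size (before the FCS).
        frame = header + payload
        return frame + b"\x00" * (60 - len(frame))

    frame = build_pause_frame(0xFFFF)   # "pause for the maximum time"
    ```

    Note this pauses *all* traffic on the link; the per-priority variant the standards bodies are working on is what lets storage traffic be paused without stalling everything else.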

  • This is FUD... (Score:4, Interesting)

    by volxdragon ( 1297215 ) on Tuesday October 21, 2008 @12:09PM (#25454907)

    Ethernet already has flow control at the link level - they're called PAUSE frames (and since all modern switches give you dedicated links to end workstations and have some amount of hardware buffering, collisions/overruns aren't an issue). Now, since the world really runs on IP (doing raw Ethernet would only ever work in the most local of LAN applications, which is rather pointless in most deployments), and IP has TOS bits (which every real modern router can classify, queue, and throttle per-queue, all in the hardware fast path with no additional latency), I'm failing to see what they're proposing to solve, since the problem is already solved. 1G/10G switches are used all over data centers and in HPC situations today (and have been for years)...
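
    The TOS-bit marking described above is something applications can set themselves. A minimal sketch, assuming Linux (the `IP_TOS` socket option) and using the DSCP Expedited Forwarding class as an example:

    ```python
    import socket

    # Mark a UDP socket's traffic with a DSCP class so routers that
    # classify on the TOS/DSCP field can queue it ahead of bulk traffic.
    # DSCP EF (46) is the usual "low latency" class; the DSCP value
    # occupies the top six bits of the TOS byte.
    DSCP_EF = 46
    TOS_EF = DSCP_EF << 2

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
    # Every datagram sent on this socket now carries the EF marking;
    # whether routers honor it is a policy question, not a protocol one.
    ```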

  • Re:Hmm... (Score:4, Interesting)

    by pla ( 258480 ) on Tuesday October 21, 2008 @12:10PM (#25454929) Journal
    Are there any foreseeable applications for the consumer world?

    Connect your new keyboard and mouse via ethernet.
    Connect your new HDD* via ethernet.
    Connect your new video card via ethernet.
    Connect your new scanner via ethernet.
    Connect your new CD/DVD/BR via ethernet.
    Connect your new printer* via ethernet.
    Connect your new webcam* via ethernet.

    No more USB cables with a million different connector types. No more PATA or SATA cables. No more serial or parallel cables. No more trying to figure out where to plug a given device in on a motherboard or looking for spare PCI/whatever slots - Just one type of cable and they all plug into a switch-like section of the motherboard.

    Now, some devices (video cards as the most obvious) will still require extra power, but most devices could probably manage with a variant on PoE, meaning the inside of your case goes from a rat's nest of assorted cable types to a half-dozen or so tidy round cables.

    * Yes, you can already get network enabled versions of these, but they count as a real full-fledged network endpoint, not as a slave device "local" to a particular computer.
  • by mikael ( 484 ) on Tuesday October 21, 2008 @12:30PM (#25455261)

    For a high-performance system with a large number of nodes, the actual network to connect everything together can cost more than the CPUs and servers themselves. To get high performance from this network, everything has to be tied together so tightly that it is considered a component in itself: the network fabric. Also, the actual communication through the network cables is the slowest part of the system. So this price/performance ratio is what customers will be considering when buying a system.

    The vendors want to keep the cheap network hardware (cables, connectors, switches) because the consumer market has driven costs down to commodity prices. But Ethernet uses the cheapest method of shared communication - packet collision detection ("keep shouting until someone hears you"). I've read some research papers which say they can get up to 90% efficiency now.

    High-performance network architectures (FDDI, token ring) are a bit more civilized - they pass a token around, and only the machine holding the token can send any data, so no packets are ever lost. Other methods give each pair of devices a unique time-slot on a multi-slice basis. Or there are crossbar switch architectures, like telephone exchanges, that allow multiple connections to exist at the same time.

    So the vendors want it both ways - the cheap commodity prices of Ethernet hardware, combined with the high efficiency of existing high-end network hardware.

    The changes that they want only really affect the link layer of TCP/IP, where collision detection is currently being performed instead of token passing or synchronised time-slicing.
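
    The collision handling described above can be sketched: classic CSMA/CD resolves collisions with truncated binary exponential backoff. A toy model, not tied to any real NIC (the slot counts and attempt limit are the standard 802.3 values):

    ```python
    import random

    # Truncated binary exponential backoff, the "keep shouting until
    # someone hears you" recovery scheme. After the n-th collision a
    # station waits a random number of slot times drawn uniformly from
    # 0 .. 2^min(n, 10) - 1; after 16 failed attempts the frame is dropped.
    def backoff_slots(attempt: int) -> int:
        if attempt > 16:
            raise RuntimeError("excessive collisions, frame dropped")
        k = min(attempt, 10)
        return random.randrange(2 ** k)
    ```

    The randomness is the whole trick: two colliding stations almost certainly pick different waits on the retry, so the contention resolves itself without any central arbiter - cheap, but with no delivery guarantee, which is exactly what token passing trades hardware cost to avoid.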

  • by ToasterMonkey ( 467067 ) on Tuesday October 21, 2008 @12:50PM (#25455617) Homepage

    Fibre Channel over Ethernet has real promise, but these new requirements are a real stumbling block.

    Something to note is that the Ethernet in FCoE is really not the same Ethernet we use today. The acronym really confuses things. The article offers some better names for the new Ethernet standard, "Converged Enhanced Ethernet (CEE)", "Data Center Ethernet (DCE)." It really is the convergence of Fibre Channel and Ethernet, NOT Fibre Channel glued to the back of Ether. Think of it more like a gigantic leap for Ethernet (and IP/TCP eventually, as functionality is pushed down a few layers), not so much a downgrade of Fibre. Also, this mostly applies to 10Gig Ether, which is already pretty damned different from previous forms of Ethernet.

    These [ieee802.org] are the new Ethernet standards.

    I think it's necessary to explain all this because while most posters don't know dick about storage and think existing Ethernet is good enough for everything, a good number of them might also be SAN admins that shrug it off without knowing that the "Ethernet" in the acronym has changed. HBAs aren't going anywhere, they will just be running more IP traffic now =) Also, if iSCSI is still around (gag me with a spoon if it is) it will at least have a better foundation to stand on. Damn, I really hope FCoE ends its misery though.

  • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Tuesday October 21, 2008 @12:53PM (#25455657)

    It is possible to have IP on some other network, like token ring or FDDI, both of which actually achieve higher throughput than ethernet for a given bandwidth.

    Nope, both of which have higher overhead than full-duplex ethernet. They have better throughput than half-duplex ethernet, which is about as useful as being better than avian carriers. Half-duplex ethernet should just be banned entirely. Maybe that would make Linksys wake up.
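
    The overhead claim is easy to check with back-of-envelope numbers: full-duplex Ethernet pays a fixed per-frame tax of preamble, SFD, inter-frame gap, header, and FCS. A small sketch using the standard 802.3 framing constants (no jumbo frames assumed):

    ```python
    # Per-frame overhead on the wire: 7B preamble + 1B SFD + 12B minimum
    # inter-frame gap + 14B header + 4B FCS = 38 bytes per frame, with a
    # 46-byte minimum payload (frames are padded up to it).
    def ethernet_efficiency(payload: int) -> float:
        PREAMBLE_SFD, IFG, HEADER_FCS = 8, 12, 18
        wire_bytes = PREAMBLE_SFD + IFG + HEADER_FCS + max(payload, 46)
        return payload / wire_bytes

    # Full 1500-byte payloads: ~97.5% of raw bandwidth carries data.
    # Minimum 46-byte payloads: ~54.8%.
    ```

    So for large frames, full-duplex Ethernet's fixed overhead is already small; the efficiency arguments for token passing only really applied to shared (half-duplex) media.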

  • by afidel ( 530433 ) on Tuesday October 21, 2008 @01:23PM (#25456177)
    Since 10Gbit Ethernet has the collision domain defined as the two endpoints there IS no longer a collision domain on the wire, just a virtual one in an oversubscribed switch. This isn't about guaranteeing transmission over the internet, it's about guaranteeing reliability in a LAN/MAN/WAN Ethernet network. The idea is you will have one set of wires, one physical protocol with several personalities sitting on top. The biggies are TCP/IP and FCOE but there are other things like remote DMA that can greatly benefit from a reliable network layer transport.
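
    The "several personalities on one wire" idea boils down to demultiplexing by EtherType. A hypothetical sketch (the EtherType values are the registered ones; the `classify` helper is made up for illustration):

    ```python
    import struct

    # Each protocol personality on the converged wire is identified by
    # the 2-byte EtherType field after the destination and source MACs.
    ETHERTYPES = {
        0x0800: "IPv4",
        0x86DD: "IPv6",
        0x8906: "FCoE",   # Fibre Channel over Ethernet
        0x8914: "FIP",    # FCoE Initialization Protocol
    }

    def classify(frame: bytes) -> str:
        (ethertype,) = struct.unpack_from("!H", frame, 12)
        return ETHERTYPES.get(ethertype, f"unknown 0x{ethertype:04x}")
    ```

    A converged adapter does essentially this in hardware, steering FCoE frames to the storage stack and IP frames to the ordinary network stack over the same physical port.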
  • Re:Hmm... (Score:4, Interesting)

    by mollymoo ( 202721 ) on Tuesday October 21, 2008 @02:39PM (#25457343) Journal
    Why is having the tab on the bottom better?
  • Re:Hmm... (Score:3, Interesting)

    by killmofasta ( 460565 ) on Saturday October 25, 2008 @06:04AM (#25508307)
    Actually, not at all.
    The design of RJ jacks, specifically RJ-45, tries to mitigate this as much as possible.

    If it's properly crimped, wiggling the cable, or even pulling it, should have no effect on either the cable contact or the jack contact.

    i.e. it has no importance in real life, by design.
