Corporate Data Centers As Ethernet's Next Frontier 152

alphadogg writes with a story that's about the possibilities for the next generation(s) of Ethernet, stuff far beyond 10base-T: "Ethernet has conquered much of the network world and is now headed deep into the data center to handle everything from storage to LAN to high-performance computing applications. Cisco, IBM and other big names are behind standards efforts, and while there is some dispute over exactly what to call this technology, vendors seem to be moving ahead with it, and it's already showing up in pre-standard products. 'I don't see any show-stoppers here — it's just time,' says one network equipment-maker rep. 'This is just another evolutionary step. Ethernet worked great for mundane or typical applications — now we're getting to time-sensitive applications and we need to have a little bit more congestion control in there.'"
  • by LWATCDR ( 28044 ) on Tuesday October 21, 2008 @11:39AM (#25454479) Homepage Journal

    No.
    Ethernet uses collision detection and resending to manage packets.
    Well, it used to anyway. I am not sure about Giga-E.
    The way Ethernet used to work is that a sender would listen to see if the line was clear, then send a packet while listening at the same time. If the packet was damaged by a collision, the sender would wait a random amount of time and then try to resend.
    This system really bugged a lot of people, but it was cheap and it worked.
    This is actually the physical layer, not TCP/IP.
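
    The listen/collide/back-off scheme described above can be sketched in a few lines. This is an illustrative sketch, not driver code; the constants (cap at 10 doublings, give up after 16 attempts) are the ones from classic 10 Mbit/s 802.3:

    ```python
    import random

    SLOT_TIME_BITS = 512  # one slot = 512 bit times in classic Ethernet

    def backoff_slots(collisions):
        """Binary exponential backoff: after the n-th consecutive collision,
        wait a random number of slot times in [0, 2^min(n,10) - 1].
        After 16 failed attempts the frame is abandoned."""
        if collisions >= 16:
            raise RuntimeError("excessive collisions: frame dropped")
        k = min(collisions, 10)
        return random.randint(0, 2 ** k - 1)
    ```

    The randomness is the whole trick: two stations that just collided are unlikely to pick the same delay, so the retries spread out instead of colliding forever.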

  • Re:Ethernet.. (Score:2, Informative)

    by JeepFanatic ( 993244 ) on Tuesday October 21, 2008 @11:47AM (#25454633)

    I always thought it had more to do with "aether", the substance once believed to fill space and act as the medium for light.

    Definition from wikipedia - http://en.wikipedia.org/wiki/Aether [wikipedia.org]

  • by Ancient_Hacker ( 751168 ) on Tuesday October 21, 2008 @11:49AM (#25454649)

    Ethernet is more of a generic name than a specific thing. It's more like "soup" than it is like "VHS".

    Ethernet started as a daisy-chained garden-hose-size coax cable with vampire taps. Then RG-58 with BNC connectors, then twisted pairs to a hub, then switching hubs, then wireless... Not much stayed the same, not speed, media, topology,... except maybe carrier-sense. It's basically a comforting name, with the Ethernet-of-the-day varying at the chef's whim.

    Keeping the name while tossing out the last remaining bit of commonality is a bit bizarre.

  • by Paralizer ( 792155 ) on Tuesday October 21, 2008 @11:56AM (#25454741) Homepage
    Yes, that is the case strictly at layer 1 of the OSI model. However, at layer 2 we have the switch. By segmenting the collision domain and creating one per port rather than one for the entire unit, we no longer have collisions, and CSMA/CD is no longer needed. Unfortunately, wireless still uses CSMA/CA, which operates similarly to what you described, except that it requests silence on the 'wire' before trying to send rather than retransmitting after a collision occurs. Switches are still part of Ethernet since they operate at layer 2. TCP/IP doesn't come into play until layer 3, where IP addresses appear.

    Ethernet itself is not reliable, which is why we use TCP in TCP/IP as the transport protocol, so we know if we need to retransmit undelivered packets. I can't imagine how they would go about 'fixing' Ethernet though, as the GP pointed out. If you pass something along between a series of switches/routers/nodes, there must be the possibility that something fubars and you lose that information in transit, be it noise on the wire or a node that runs out of memory and cannot process it.
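
    The per-port collision-domain point is easy to see in a toy model of what a layer-2 switch does: learn source MACs, forward frames for known destinations out a single port, and flood unknowns. The class and method names below are made up for illustration; real switch silicon obviously works very differently:

    ```python
    class LearningSwitch:
        """Toy sketch of layer-2 MAC learning: each port is its own
        collision domain, and the switch learns which MAC lives where."""

        def __init__(self, num_ports):
            self.table = {}            # MAC address -> port number
            self.num_ports = num_ports

        def handle_frame(self, src_mac, dst_mac, in_port):
            self.table[src_mac] = in_port          # learn the sender's port
            out = self.table.get(dst_mac)
            if out is None or out == in_port:
                # unknown destination: flood to every other port
                return [p for p in range(self.num_ports) if p != in_port]
            return [out]                           # known: forward to one port
    ```

    Because each frame is delivered only to the port(s) that need it, two pairs of hosts can talk simultaneously without ever interfering, which is exactly why CSMA/CD became dead weight.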
  • by sirwired ( 27582 ) on Tuesday October 21, 2008 @12:09PM (#25454923)

    Nope, the CSMA/CD part of Ethernet is gone also. Specs for a GigE hub exist in the standards, but nobody ever implemented them. (Switching got to be too cheap for anybody to bother.)

    Obviously it didn't even get specc'd out with 10Gb Ethernet.

    Oh, the frame format is still more-or-less the same from Classic Ethernet. Not identical, but still pretty close.

    SirWired

  • USB (Score:4, Informative)

    by psbrogna ( 611644 ) on Tuesday October 21, 2008 @12:31PM (#25455279)
    Given the direction SATA and USB are going, the rate at which their bandwidth has increased relative to traditional CATx Ethernet, and the relatively lower cost of interconnect devices, is Ethernet really the best? If we're going to make significant wiring changes in server rooms, I'd prefer to do it once and standardize on the cheapest, fastest "2-wire" solution that makes sense.
  • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Tuesday October 21, 2008 @12:48PM (#25455585)

    So technically you could argue that there was not really a "collision" but the computer that didn't get its packet through is still told there was one so that it retransmits.

    No. Switches don't notify the source that the packet was dropped. TCP's retransmit works without explicit notification.

    Collisions are a thing of the past. Dropped packets aren't.
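
    That is the key point: the sender infers loss from silence, not from any notification by the network. A stop-and-wait sketch of timeout-driven retransmission, with hypothetical send/recv_ack callables standing in for the socket layer:

    ```python
    def send_reliably(send, recv_ack, data, rto=1.0, max_tries=5):
        """Stop-and-wait sketch: no switch ever reports a dropped frame;
        the sender simply retransmits when no ACK arrives within the
        retransmission timeout (RTO)."""
        for _ in range(max_tries):
            send(data)                  # (re)transmit the segment
            if recv_ack(timeout=rto):   # True if an ACK arrived in time
                return True
            # silence within the RTO is treated as loss; loop and resend
        return False
    ```

    Real TCP is far more elaborate (adaptive RTO, sliding windows, fast retransmit), but the principle is the same: missing ACKs, not explicit drop notices, drive recovery.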

  • Re:Hmm... (Score:3, Informative)

    by Shatrat ( 855151 ) on Tuesday October 21, 2008 @12:50PM (#25455607)
    I think the biggest reason there are hundreds of different USB connectors is that standardized plugs don't help sell 30-dollar Apple- or Sony-branded AC/DC adapters.
    I really loved my old Motorola phone with the mini-USB connector; now my LG phone doesn't share its connector with any other device I own.
  • by erc ( 38443 ) <ercNO@SPAMpobox.com> on Tuesday October 21, 2008 @12:50PM (#25455609) Homepage
    However, with a switch at least one packet always gets through.

    Wrong. There is no "collision detection", the only way to tell that you had a collision is if the packet didn't get there. If two devices transmit at the same time, you get a mangled packet that won't pass CRC and gets dropped.

    What they're really looking for is token ring, which was (and still is) a superior protocol. With Ethernet, as you increase bandwidth utilization beyond 10%, you get so many collisions that your throughput goes through the floor, while with token ring the degradation is much more gradual. For utilization above 10%, token ring is far superior to Ethernet.

    Why Ethernet was adopted over token ring has more to do with religious issues and who can scream the loudest and bully their way through technical issues with emotion than it has to do with technical superiority.
  • Re:Hmm... (Score:3, Informative)

    by TheRaven64 ( 641858 ) on Tuesday October 21, 2008 @12:52PM (#25455639) Journal

    * Yes, you can already get network enabled versions of these, but they count as a real full-fledged network endpoint, not as a slave device "local" to a particular computer.

    You can with protocols like iSCSI or ATAoE. A lot of enterprise gear uses iSCSI, which makes a remote device appear like a local SCSI device, and some consumer-grade NAS devices running Linux can act as ATAoE devices, which does the same thing but with ATA instead of SCSI and over raw Ethernet frames rather than over IP.

  • by LWATCDR ( 28044 ) on Tuesday October 21, 2008 @01:00PM (#25455771) Homepage Journal

    I think it had a lot more to do with cost.
    Ethernet was available first and had more hardware suppliers so the cost went down.
    Token ring was really popular with IBM. It was almost a standard for IBM systems. I have a few Micro Channel Token Ring adapters if you need them :)
    FDDI is essentially token ring over fiber.

  • by pecosdave ( 536896 ) on Tuesday October 21, 2008 @01:15PM (#25455997) Homepage Journal

    If anyone is interested, there's info here [ieee.org] on that.

  • by erc ( 38443 ) <ercNO@SPAMpobox.com> on Tuesday October 21, 2008 @01:26PM (#25456231) Homepage

    Haha, thanks for the offer but I don't have anything that will take MCA boards ... that was an interesting attempt by IBM to retake the PC market they lost. Oh, well.

    As to Ethernet being first, I thought it was the other way around?

  • by afidel ( 530433 ) on Tuesday October 21, 2008 @01:38PM (#25456407)
    Huh, in Ethernet, which is CSMA/CD, you listen to the wire before starting to transmit. This doesn't avoid all possible garbled packets, but it does avoid most if things are working to spec. Also, because VTT goes higher than normal transmit levels during a collision, there IS detection. The reason Ethernet won over token ring is that IBM charged a hefty fee per port for token ring. There were also real-world reliability problems with token ring, as early designs used a physical ring or string layout instead of Ethernet's star topology (later token ring switches did allow a physical star, but that was well past the point where Ethernet had won the mass market).
  • by vux984 ( 928602 ) on Tuesday October 21, 2008 @01:56PM (#25456673)

    Collisions occur when there is more than one sender on a collision domain; they don't have to be sending to the same host. Imagine you have four computers on a hub. Computer A sends a message to B while C simultaneously sends a message to D -- this is a collision.

    We are really just talking about how collisions occur on a switch. Technically, they CAN'T occur on a full duplex switched network. The collision domain is the switch port and the PC port, and both can talk at once (full duplex).

    Hypothetically though, if you set aside buffering, a 'collision like' conflict occurs when multiple PCs try to talk to a single port, except that one gets through and the rest are 'blocked' which is what I was trying to say. Of course, due to buffering, this is 'handled' and the conflict is actually pushed back to when the buffer overflows instead.

    And yes, switches do have outbound buffers for each port so that if two sources try to send to the same host they can be done in sequence rather than causing an outbound collision on the destination port's collision domain. I am not sure what happens if this buffer becomes full, I had always assumed the switch would just begin dropping the packets (as indicated by this Cisco document).

    http://www.webopedia.com/TERM/B/backpressure.html [webopedia.com]

    Dropping packets is one option. The other is to use 'back pressure' to signal the PC to back off a bit. This can be done by sending 'fake collisions' (on half-duplex links) or via 802.3x flow control 'pause' frames. Many switches support these modes, including those from Intel and Cisco.

    It's often better to just drop the packets and let TCP deal with it, but in some cases you can get better performance by using flow control/back pressure features.
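
    The two policies the parent describes (tail drop versus pausing the sender) can be sketched as a toy per-port output queue. The class below is illustrative only, loosely modeling the choice an 802.3x-capable switch makes when a buffer fills:

    ```python
    from collections import deque

    class OutputPort:
        """Toy per-port output queue: when the buffer fills, either drop
        the frame (tail drop) or ask the sender to pause, roughly what
        802.3x flow control does."""

        def __init__(self, depth, use_pause=False):
            self.q = deque()
            self.depth = depth
            self.use_pause = use_pause

        def enqueue(self, frame):
            if len(self.q) < self.depth:
                self.q.append(frame)
                return "queued"
            # buffer full: signal back pressure or silently drop
            return "pause" if self.use_pause else "dropped"
    ```

    The trade-off the parent mentions falls out of this: tail drop pushes recovery up to TCP, while pause frames stall the sender's whole link, which can hurt unrelated traffic sharing that link.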

  • by Ungrounded Lightning ( 62228 ) on Tuesday October 21, 2008 @10:54PM (#25463729) Journal

    Nope, both of which have higher overhead than full-duplex ethernet. They have better throughput than half-duplex ethernet, which is about as useful as being better than avian carriers. Half-duplex ethernet should just be banned entirely. Maybe that would make Linksys wake up.

    Half-duplex ethernet corresponds to the way things work on a shared peer-to-peer radio channel. Like WiFi. (Which uses the Ethernet MAC and collision/backoff algorithms - though I think the collision detection is inferred rather than explicit.) (WiMAX, however, is a full-duplex protocol with central stations monopolizing an outbound channel and assigning timeslots for replies from remote stations on an inbound channel.)

    Of course that DOES qualify as an "avian carrier", doesn't it?
