Technology

First Gigabit Ethernet Chip Demo

An anonymous reader writes "Broadcom demonstrated the world's first Gigabit Ethernet transceiver chip for existing CAT5 copper cabling yesterday at NetWorld+Interop '99. The chip comes in a 256-pin TBGA package, and Broadcom has begun delivering initial samples at $75/chip. No word on when full production starts or when to expect hubs, switches, or NICs based on the chip."
  • by Anonymous Coward
    Actually, Gig ethernet has a raw throughput of 1200Mbit per second. With packet overhead it provides 1000Mbit at the application level. The 400-500Mbit numbers you are hearing are the limitations of servers. The fastest single computer I've seen on gig ethernet is a 21264 running Linux that was able to do 490Mbit transfers.

    The stupid E450s at work can't even hit 200Mbit/s over gig ethernet, though. But gig ethernet and the switches really can push that much bandwidth.
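    Back-of-the-envelope (my numbers, assuming standard 1500-byte payloads and counting all the framing overhead), a quick Python sketch of where the bits go:

        # Rough goodput estimate for Gigabit Ethernet with standard frames.
        # Assumes 1500-byte payloads, 38 bytes of per-frame overhead
        # (preamble 8 + header 14 + FCS 4 + inter-frame gap 12) and 40 bytes
        # of TCP/IP headers. A sketch, not a measurement.
        LINE_RATE = 1_000_000_000             # bits/s on the wire
        PAYLOAD, FRAMING, TCPIP = 1500, 38, 40

        wire_bytes = PAYLOAD + FRAMING        # 1538 bytes consumed per frame
        frames_per_sec = LINE_RATE / (wire_bytes * 8)
        goodput = frames_per_sec * (PAYLOAD - TCPIP) * 8
        print(f"{frames_per_sec:,.0f} frames/s, {goodput / 1e6:.0f} Mbit/s TCP goodput")
        # -> ~81,274 frames/s and ~949 Mbit/s: framing overhead alone can't
        #    explain 400-500Mbit results, so the hosts are the bottleneck.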
  • by Anonymous Coward
    Why aren't we using coaxial cable for this stuff? It's better shielded, it's cheaper (premade cat5 cables often sell for >$1/foot), and it runs longer distances. 10base2 traditionally runs in a bus configuration, which is not suitable for larger networks, but I can't think of any reason a coax hub couldn't be invented.

    100BaseTX is supposed to go 100 meters from hub to host, but I can't get a reliable signal with anything beyond about 80 feet. I tried 3 different cables (all homemade, but reliable at shorter distances) and every type of hub and NIC I could get my hands on. Is there some secret to it?

    I wish everyone would just bite the bullet and make everything fiber based. The prices will drop a lot when it is more mass produced, and we won't have to mess with a semi-reliable medium.
  • by Anonymous Coward
    This is what I understand about ATM and Ethernet:

    - ATM is a connection-oriented network: you request a channel from starting point to end point and, if granted by the network, you get a communications channel with guaranteed speed, delays, etc. (this is called QoS).
    - Ethernet is a packet (frame) technology, and it is connectionless.
    - Ethernet sits at Layer 2 (per ISO).
    - ATM is a full networking technology at Layer 4, so you could compare it (functionally) to Ethernet+IP.
    - ATM is thus always "switched", so it does not need shared media access techniques (collision detection, etc.).
    - Ethernet (also Gigabit) comes in "switched" and "shared" forms.
    - With shared Ethernet you cannot guarantee QoS, and it saturates at about 65% of linespeed.
    - With switched (end-to-end!!!) Ethernet you can, to some extent, guarantee QoS.
    - To put it simply: an ATM switch offers as much functionality (if not more) as today's modern (linespeed) Ethernet switch-routers. The price per port is comparable!
    - ATM's misfortune is that all of today's applications use the IP stack, so all vendors are seeking a way to use ATM as a transport medium for IP instead of using it natively (MPOA, LANE, CLIP...). They are mixing a packet network and a connection-oriented network.

    A real-world comparison would be the postal network versus the phone (fax) network.

    Please correct the above list.

    Peter

  • Simple.

    1000Base-T uses all 4 *pairs* in the cable. You could use coax but you would need 4 cables per connection. Sure it would go further than TP but it would make fibre look cheap....

    The reason you're having problems with 100Base-T is probably dodgy components. Proper Cat-5 cables and installations should have been tested and certified to work. Using a cable tester would give you some indication of where the faults lie.
    Since it is the same range as 10/100baseT, it IS a big deal for upgrading any current LAN. In many cases, the cost of new cards and switches is nothing compared to the cost of running new cable. I can think of a use right now where we have 100baseT running between floors. I'd really like to up the bandwidth in the near future, but I don't want the hassle and paperwork of getting fiber run.

    For new installation, fiber may still be the best bet because of the long runs it supports.

  • Just "for reference", FreeBSD currently supports gigabit adapters based on the Tigon chipset (eg. those from 3Com, Alteon, etc.). Initial benchmarks show a useable TCP bandwidth over 500Mbps and UDP bandwidth over 800Mbps. For a server feeding a gigabit-capable switch, this is a useful improvement over 100Mbps ethernet.

    Given that current pricing on fibre-based cards is well under $500, expect to be seeing Gb ethernet making a real play very soon.

    Isn't that just the part connected to the physical wires of the network? The core chipset doing DMA and other things could still inflate that $75 up to today's price levels :)
    10b2 is transceiver style... a single pair both transmits and receives. 10bT is different in the sense that there are separate pairs of wires for transmitting and receiving.

    I won't say it's impossible, just not possible yet. :-)

    By the way, I have heard that the realistic throughput of gigabit ethernet is closer to 600 or 700Mbps (Communications Systems Design trade mag). While not truly gigabit, it is still pretty effing cool if you ask me. :-)
  • by Chris Gori ( 1825 ) on Wednesday May 12, 1999 @12:15PM (#1895508) Homepage
    Of course, >99% of Gigabit Ethernet is full-duplex so there are no collisions. This is partly because, up until now, all GigE was fiber-based, and most fiber topologies have one strand for transmit, one for receive (i.e. no way to collide). If you did build half-duplex, I believe the collision domain would be like 20 meters, not very useful.

    The reason Broadcom's chip is so complicated is that for full-duplex GigE on copper, all 4 pairs in the cable are used, _in both directions_. This means the chip uses fancy DSP techniques to subtract what it is transmitting from what it is receiving (see the toy sketch at the end of this comment). It handles FEXT and NEXT (far-end and near-end cross-talk) as well.

    The only other concern is that the end-station just cannot create 1518-byte frames fast enough, so there is a proposal for jumbo frames to let endstations lower their rate of frame generation. Still, with decent TCP stacks you can see 500-800 Mbps right now, which is a good bit better than 100Mbps.

    The ATM fixed 53-byte cell is an interesting idea, since it allows for uniform memory allocation per cell (versus ethernet, where you don't know if you need 64 bytes or all the way up to 1518), but in hardware the usual Ethernet implementation uses linked lists of buffers (say of 128B), which has good efficiency.

    Problems like these can always be solved, and Ethernet is cheap and standard.

    Death to ATM! :-)
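    To give a feel for the DSP involved, here is a toy LMS echo canceller in Python (all names and parameters are mine, purely illustrative): the receiver adaptively estimates its own echo and subtracts it from what it hears on the pair.

        # Toy LMS echo canceller: subtract an adaptive estimate of our own
        # transmit signal from the received signal. Illustrates the idea only;
        # a real 1000Base-T PHY also handles FEXT/NEXT and much more.
        import numpy as np

        rng = np.random.default_rng(0)
        n, taps, mu = 5000, 8, 0.01
        tx = rng.choice([-1.0, 1.0], n)          # our transmitted symbols
        far = rng.choice([-1.0, 1.0], n)         # far-end signal we want
        echo_path = np.array([0.5, 0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])

        w = np.zeros(taps)                       # adaptive filter weights
        out = np.zeros(n)
        for i in range(taps, n):
            x = tx[i - taps:i][::-1]             # recent transmit history
            rx = far[i] + echo_path @ x          # line = far end + our echo
            out[i] = rx - w @ x                  # subtract estimated echo
            w += mu * out[i] * x                 # LMS weight update

        # After convergence out[i] tracks far[i]; the echo is cancelled.
        print("residual error power:", np.mean((out[-1000:] - far[-1000:]) ** 2))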
  • Wrong order of magnitude. Theoretically, 32-bit, 33 MHz PCI _could_ do it, but it won't happen at the max. This is why new systems are coming out with multiple PCI buses, and why Alpha (even Apple) systems make a case for a 64-bit PCI bus, so that ethernet traffic doesn't kill your drive or graphics bandwidth. You'll need expensive switches to get the max out of the pipe.
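    The back-of-the-envelope (nominal peak rates, my figures):

        # Rough PCI headroom check against full-duplex Gigabit Ethernet.
        # Nominal peak bus rates; real-world PCI efficiency is much lower.
        pci32_33 = 32 * 33_000_000             # ~1.06 Gbit/s peak
        pci64_66 = 64 * 66_000_000             # ~4.22 Gbit/s peak
        gige_fdx = 2 * 1_000_000_000           # 2 Gbit/s, both directions

        for name, bps in (("32-bit/33MHz PCI", pci32_33),
                          ("64-bit/66MHz PCI", pci64_66)):
            print(f"{name}: {bps / 1e9:.2f} Gbit/s peak "
                  f"vs {gige_fdx / 1e9:.1f} Gbit/s full-duplex GigE")
        # 32/33 PCI barely covers one direction even at theoretical peak,
        # and that's before disk and graphics traffic share the bus.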
  • "the services that it provides that most people need can be increasingly provided via IP QoS w/ overkill ethernet"

    Doesn't seem very convincing to me. If people are still trying to solve QoS issues with overkill capacity, then that seems like little more than a kludge.

    I should be able to do video-conferencing no problem on 10Mbit/s Ethernet (the bandwidth is there), but if the image breaks down whenever there's a burst of activity from the department file server that's a pretty fragile solution.

    Overkill capacity is only overkill until someone builds a faster file server, or until you are unlucky and someone accesses a large cached file from a fast Linux machine, saturating your 'overkill' net.

    Sounds to me like the difference between a real RTOS and a timesharing system [oreilly.com]. Try asking a developer who uses QNX [qnx.com] and a 68k [mot.com] for a real-time app whether they would like to switch to Windows NT [realtime-info.be] and an Alpha 21264-500 [microway.com] with overkill processing power. All the processing power/bandwidth isn't going to help you if one app decides to monopolise it.

    By the way, I know that people keep trying to build RT systems out of NT. I can't imagine why. I even worked on a project that did that, and it was painful.

  • not really. the transceiver is media specific.

    e.g. the DEC 21143 chip is used on Base2, BaseT and BaseFX cards: same chip, different transceivers. The 21143 takes care of most of the ethernet framing side of things, and the transceiver takes care of communicating the bits across the media, be it fibre, copper...

    so I guess this transceiver is pretty much tied to BaseT.

    (although it's not very clear from the article whether the chip is just a transceiver or capable of higher-level ethernet 802 stuff).
  • "even if $75 is just a single chip, the card can't be that bad.. we're going to choke the net to death."

    256 pins on a chip may take up significant card real estate. It will be a busy board! I would imagine it could really make use of a 100MHz PCI bus too! I might expect a new generation of motherboards to appear in the future. Get your credit cards ready...
  • token passing and ATM networks are superior to collision detection, but it's a price performance thing. Ethernet is an open standard and it's CHEAP (and fast).

    Not sure where you got the 75% figure from, but from my experience once ethernet hits 50% utilization it is hosed (time for a switch).
  • I remember a test about six months ago (in some ZD magazine) that pitted Linux, NetWare, Solaris and NT against each other with Gb ethernet adapters. NT maxed out at ~350Mb/sec, while all the others were getting above 800Mb/sec.

    It's interesting to note that ~350Mb/sec is also the figure quoted for NT performance with 4x100Mb adapters (which is great for that test, but crummy for Gb ethernet).

  • 1 Gigabit/s, bah humbug! What I want is 10 Gigabit/s like the folks at Lucent/Bell Labs are playing around with. No sh*t folks, 10!

    They'll be showing this off this week. You can get the info about a LAN [bell-labs.com] and a multiplexer [bell-labs.com].

    This makes our workplace upgrade to a 100Base-T switch look sort of feeble. :)
  • This is only ten times the bandwidth of 100Mb Ethernet, and those are available cheap as daisies, so this is no huge stretch.

    And it's for LANs, Local Area Networks, not modems to connect to ISPs.

    Seems to be some confusion here, as if suddenly homes will be getting billion-bit-a-second net connections.

    --
  • Got a link for those budget NICs?

    The most inexpensive ones I've found are $850.

    Thx.
  • If you're looking for info on optoelectric computing, check out the research at the University of Colorado [colorado.edu].

    ~afniv
    "Man könnte froh sein, wenn die Luft so rein wäre wie das Bier"

  • Maybe because fiber sucks and is much more expensive. You can still buy 100Mbps fibre stuff; check out the prices.

    I would guess that most of the early 100Mbps short range fiber installations have been pulled in favor of cat5. 1000BaseT opens a huge market - I would suspect that prices will soon fall to around $1000 per switched port, which is affordable if you need it.
    --

  • I would guess that most unixy slashdot types don't have much experience with Token Ring, which is too bad, because it's actually very solid technology.

    Even standard 16Mbps Token Ring "feels" faster than your typical office 100Mbps non-switched ethernet segment. It's good to hear that the technology isn't being junked.
    --
  • Sure, it's only 800Mbit/sec instead of 1000 (or is it 1200? I forget), but that's AFTER the headers...

    And having nice features, like NO MTU, is quite nice. I remember sending 40MB packets over our switches straight to a frame buffer... now THAT was sharp animation...

    And seeing a network of Suns (with my company's SBus cards and switches!) and Crays that had, in 3 years of running, NEVER dropped a single IP packet.

    Sure, the cable (in the copper implementation) is over $300 for a 3-meter one, and you need two of those for a full connection, but it sure was a well-engineered technology.

    I only wish I had stayed there long enough to see the HIPPI64 stuff...
  • According to the Gigabit Ethernet Alliance web page, 802.3ab has a maximum cable length of 100 meters. That happens to be the same as 10/100BaseT, so I don't call that "short range."
  • So you're sending the HDTV signals non-compressed? That doesn't seem like a particularly bright thing to do.
  • Do you have any concept of how much it would cost to pull out all the cat5 and replace it with fiber in any significantly large organization that would consider gigabit Ethernet?
  • Agreed, we've been installing a lot of 100BT and have not had the problems described. Get a real cat5 tester and test the cable. Are you using cat5 rated patch panels? If everything checks out, it's your NIC and/or the hub/switch.
  • "Most Ethernets seem to start having collision problems above 50% utilization. The worst case was a Sun implementation of Ethernet where the wait time after a collision was fixed at the minimum, resulting in the NICs dropping into lockstep when collisions started and network utilization dropping to 0%."

    Then the Sun implementation was not to spec. True Ethernet devices are supposed to wait a random, or pseudo-random, time before trying to retransmit after a collision. I'd say their implementation was broken.

    Also, if you are talking full-duplex connections, there is no such thing as a collision anymore. Since you can transmit and receive at the same time, you can't have a collision. Buffers can back up on the switches and you could drop traffic, but you wouldn't have a "collision." "Good" switches can also do flow control, so that their buffers will not overflow. See the 802.3x spec.

    Looking at the Gigabit Ethernet Alliance [gigabit-ethernet.org] FAQ, it looks like they extend the carrier time and slot time to 512 bytes from 512 bits. In effect, this makes the minimum packet length 512B instead of 64B (the official minimum packet length is still 64B, but it gets padded out to 512B) for non-full-duplex devices. So, unless you have full-duplex connections you are not gaining that much. However, it looks like they try to make up for this with the packet-burst feature, which allows stations to send out multiple consecutive packets without giving up control of the line (supposedly up to the 512B slot time).
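    Quick arithmetic on what that extension costs for small frames (a sketch based on the FAQ's 512-byte slot):

        # Half-duplex GigE pads small frames out to the 512-byte slot time,
        # so streams of minimum-size frames waste most of each slot.
        SLOT = 512                              # bytes, extended slot time
        for frame in (64, 256, 512, 1518):
            on_wire = max(frame, SLOT)          # carrier extension pads up
            print(f"{frame:5d}B frame -> {frame / on_wire:6.1%} of the slot is data")
        # 64B frames use only 12.5% of the slot; full-duplex links skip
        # CSMA/CD entirely, which is why they dominate in practice.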
  • I don't know about heat or power, but I know that the powers that be look really closely at the bottom line.

    CAT-5 is cheap (by comparison to optics) especially if it is already strung throughout the building. It can be cut to size, and you don't need to hire an optics-aware tech to handle it.
    The installed base of CAT-5 wiring is significant and represents a large investment, so, as you'd say, using it is a major benefit.

    Also there's all the supporting hardware. Even if you make the investment in fiber, and string it throughout your building, you've got to tie it all together, and that requires special (non-copper) equipment. Fiber-handling equipment of any kind - be it routers, switches, couplers, whatever - is again more expensive than the CAT-5 counterpart.

    Even a new installation is still likely to shy away from fiber, because this is still new tech, and nobody wants a lemon, or a token-ring, or a BetaMax.

    Management will always look at the market inertia, and initial and running costs when making these decisions, and the sensible choice in those terms is copper ethernet.
  • Now THIS we can use around the office.
    The hell with Win2k, GB-ether is nice for all those big CAD drawings, DB shuffles, Quake games!

    Now, about that Internet backbone bandwidth:
    If I have GB-ether on my desk, and so does everyone else (even if $75 is just a single chip, the card can't be that bad), we're going to choke the net to death.

    Never mind that a single PC can't pump data out that fast; a cubefarm of them can. We need significant backbone bandwidth improvements and faster routers.

    Where are those danged pure-optical chips?
  • As far as I know (correct me if I'm wrong, please), coax doesn't need a hub; it only comes in a shared-bus style. I like coax personally; I have my home network using coax. It's cheap, and I wasn't worried about running it through my attic (which is a frightening place for me; my poor 10BaseT cable would be scared to death). Fiber would be cool... but I think the main problem with fiber right now is that the strands need to be made of very high-quality fibers which are expensive to produce (even in mass production). It would be cool to have a multiplexed fiber network, though. Every bank of computers could be a different frequency. You could add more nodes (with a new frequency) without a loss in overall bandwidth, unless you were maxing out the server's PCI throughput.
  • The limiting factor in GigE performance is not the bus in your machine. It's the CPU on the receiver. At least if you are talking about TCP performance. It takes a fair amount of horsepower to put the individual ethernet packets back together into the reliable data stream that your program gets.

    Of course, by the time your PC has a 133MHz 64-bit PCI bus, it will also have a 1.5GHz CPU which can handle it. The fastest TCP performance I have seen over GigE using standard packets is ~350Mbps for a single TCP stream (memory to memory of course. Good luck making FTP go that fast.) I managed to get almost 500Mbps between the same multi-processor boxes by using three TCP streams, each running at about 160Mbps.
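    To put the CPU load in perspective, here is the packet-rate arithmetic (the cycles-per-packet figure is a made-up, ballpark assumption):

        # Per-packet CPU budget at gigabit rates, standard vs jumbo frames.
        # CYCLES_PER_PKT is an illustrative guess, not a measured number.
        LINE_RATE = 1_000_000_000                 # bits/s
        CYCLES_PER_PKT = 10_000                   # assumed per-packet cost

        for frame in (1518, 9000):                # standard vs jumbo frames
            pps = LINE_RATE / ((frame + 38) * 8)  # +38B framing overhead
            mhz = pps * CYCLES_PER_PKT / 1e6
            print(f"{frame}B frames: {pps:,.0f} pkts/s, "
                  f"~{mhz:,.0f} MHz just for packet handling")
        # Jumbo frames cut the packet rate roughly 6x, which is the whole
        # argument for them on gigabit endstations.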
  • The problem with Ethernet at high utilizations is collisions. Only one station on the Ethernet can transmit at a time, but due to signal delay the station can't tell when it starts to transmit whether another station is starting to transmit at the same time. When two try to talk at once you get a collision and they both have to stop, wait a short, random amount of time and try again. The more of your bandwidth you're using, the greater the chances of this happening. If it happens too much, utilization goes down because all the stations spend more of their time waiting for their last collision to clear than in actually transmitting. ATM, being effectively a point-to-point network at the physical level, doesn't suffer from collisions.

    Most Ethernets seem to start having collision problems above 50% utilization. The worst case was a Sun implementation of Ethernet where the wait time after a collision was fixed at the minimum, resulting in the NICs dropping into lockstep when collisions started and network utilization dropping to 0%.
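    A toy slotted simulation makes the lockstep failure concrete (parameters are mine, illustrative only):

        # Toy slotted model of collision backoff. With random backoff,
        # colliding stations spread out and recover; with a fixed minimum
        # wait they retry in lockstep and collide forever.
        import random

        def run(stations=10, slots=10_000, fixed_backoff=False):
            ready = [0] * stations       # slot when each station next sends
            ok = 0
            for t in range(slots):
                senders = [s for s in range(stations) if ready[s] <= t]
                if len(senders) == 1:    # exactly one sender: success
                    ok += 1
                    ready[senders[0]] = t + 1
                elif senders:            # collision: everyone backs off
                    for s in senders:
                        ready[s] = t + (1 if fixed_backoff
                                        else random.randint(1, 16))
            return ok / slots            # fraction of useful slots

        print("random backoff:", run())                     # healthy share
        print("fixed  backoff:", run(fixed_backoff=True))   # ~0, lockstep
        # Matches the Sun anecdote above: fixed minimum backoff drives
        # utilization to zero once collisions start.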

  • Yes, Sun was horribly out of spec. Another case of a company deciding that a feature was hurting performance without bothering to check why that feature was put in there in the first place. See also the removal of "performance-robbing" slow start and exponential backoff in TCP stacks.

    As for full-duplex, good switches eliminate collisions, but only as long as all nodes are connected directly to a switch. Throw in some vanilla hubs and a bunch of older network cards and you're back to collisions (albeit with a drastically reduced collision domain). Dropped traffic also has an effect on throughput even though you aren't generating collisions; all those retransmits eat up bandwidth too.

  • Try *ONE*. Different transceiver styles for different media, and all you are *really* doing is propagating RF on the thing anyway.

  • After reading this, I wonder why one would choose the new gig-over-copper technology rather than the existing gigabit ethernet over fiber. Since 1000BaseT uses all four wire pairs at a high frequency, there is significant power consumption and heat. Am I way wrong in thinking that the only benefit of 1000T over fiber is that existing cat-5 wires can be used? Or are we expecting a big price difference for NIC cards and switches?
  • I would add that I think the IP protocol fits much better over simple ethernet than over the complex, fixed-cell, connection-oriented ATM protocol. The new gigabit ethernet layer 3 switches that route in hardware and forward packets at wire speed should limit the use of ATM to very specific applications.
  • All vendor/consortium links I'm afraid...

  • by chris.dag ( 22141 ) on Wednesday May 12, 1999 @10:50AM (#1895545) Homepage
    Sorry, no raw performance numbers :) I'm also not a networking guru...

    Back when I was looking into things, I found a LISA '98 presentation by Curtis Preston called "Using Gigabit Ethernet to Backup 6 Terabytes" -- in his presentation he referred to gigabit ethernet as really being "200Base-T" based on the results he saw. Much depends on your TCP stack and support for jumbo frames, etc.

    The ATM vs. gigabit ethernet debate totally depends on what "situation" you are talking about. ATM has a lot of advantages and seems to be the fastest shipping bandwidth available now (OC-48, etc.). It also has nifty billing/accounting/guaranteed-bandwidth abilities and can easily handle both delay-sensitive (isochronous) data like streaming media and more traditional computer network traffic.

    I guess it all comes down to how you want to use it -- I chose gigabit ethernet for my DNA-crunching AlphaServers because I knew I was going to have a small number of hosts carrying IP traffic only: no need for extensive WAN or MAN interconnects or thousands of circuits, no need to deal with isochronous data alongside computer traffic, and no real urgent need for the accounting/management features of ATM.

    The biggest reason for my choice of gigabit over ATM was in-house experience -- my group of biogeeks and the corporate IS people have tons of ethernet experience and no real ATM experience. This is why I think gigabit is going to _really_ take off in the LAN/intranet space: being able to use your ethernet-aware people AND your existing Cat.5 copper wiring is very, very attractive.

    just my $.02

  • "ATM, being effectively a point-to-point network at the physical level, doesn't suffer from collisions."

    Suffering isn't really a great term for what is going on. Ethernet uses a shared medium (coax or twisted pair), so it uses collisions as an arbitration mechanism. On a lightly loaded circuit, there is no arbitration overhead. The mechanism you describe for collisions is correct, and I imagine that when you compare performance to ATM including its arbitration (routing, circuit set-up; I'm not too sure about ATM), Ethernet's performance looks a bit better.

    Yes, on a heavily loaded (60% - 70%) Ethernet circuit stations have to wait a bit longer for access to the medium, but with a light load there is no arbitration delay (start transmitting, check for collisions later).

    A heavily loaded ethernet circuit can easily be divided in two (or more) with a bridge (switch), yielding two lightly loaded circuits and better performance overall.

    Jeff
  • Yes it is. You still need the other supporting chips (which my company will happily sell :).

    I don't see the big deal. This is Gigabit Ethernet over COPPER and is short range. Gigabit Ethernet over fibre has been around a while.
  • I was reading some articles on this topic and found some interesting discussions about gigabit ethernet vs. ATM transmission.

    In a nutshell, one article suggested that ethernet began to perform less and less efficiently the more data was pushed through the pipe (up to about 65 to 75% overall utilization), due to the differing packet sizes or something like that.

    The article then went on to say that for large pipes of data, the fixed cell size of an ATM packet allowed it to scale "much" better than an ethernet network, up to something like 95% utilization.

    Maybe gigabit ethernet wouldn't be as good a decision if ATM worked better in this situation. Can anybody explain the difference to me? If you happen to have performance numbers, please post them.

  • That's really cool, and you're right: for your particular application, ATM is going to kick the bejeezus out of any frame-based tech.

    And that's the real point, isn't it? For some applications, ATM makes good sense. I wouldn't implement anything but ATM if I needed to run concurrent media streams alongside my data. But for me, I just need a way for rows and rows of servers to get to the backbone with as little contention as possible. Pretty much all my frames end up as ether (at the server), so why try to change things?

    Once again:

    What do you want to do?
    What software (network, layer2, whatever) does that best?
    On what hardware does that software work the best?


    ^how to build an infrastructure^
  • but, wow check this out:

    I can get full-duplex 1000Base-whatever to 200+ hosts through a single switch backplane. At that point the bandwidth available in the system makes the situation moot.

    yah, pricing... just what _are_ lightstreams going for these days?

    And have you tried it across _your_ WAN links? I don't know about you, but I don't consider 12Mbit of WAN connectivity to be exactly cheap, regardless of your transport... And just what do you mean by WAN? Those sissy little 500m fiber runs on campuses? Or my 23,000mi of multichannel OC192?

    oh wait, I forgot that the frame types are also 100% compatible with commodity hardware... :)
  • Another nice feature of Token Ring is the large frame sizes it supports. I don't know about 100 or 155Mbps Token Ring, but old 16Mbps Token Ring supports packets up to around 18K. Larger packets greatly improve bulk data transfer. Of course, ATM can do the same thing, with AAL5 supporting PDUs up to 64K with RFC 1577 or 18K with LANE.
  • On the surface, most people would want to keep their existing wires, and not have to replace them. This is, of course, a reasonable concern, and of significant importance if it meant re-wiring a building.

    However, the time when every PC needs a gigabit connection is still a while off. 10Mbit is still sufficient for the vast majority of networked PCs. Most network performance problems stem from frequent collisions, which in turn result from too many users hooked up to one port of a switch. Only a few will truly benefit from 100Mbit.

    Most small to medium size companies can easily run all their servers off single 100Mbit ethernet connections, and have no problems at all. 1Gbit ethernet cards are really only needed in enterprise servers, and it will probably stay that way for some time (at least 5 years). Most PCs can't even saturate 100Mbits while providing or storing/displaying meaningful data.

    I really don't see the problem with laying new lines between a handful of servers and switches which are normally placed in a central server room.
  • Gigabit ethernet still may not be fast enough for the new HDTV data requirements. Uncompressed HDTV video at 1080i, 60 fields per second, needs at least 1.6 Gbit/s to transfer properly. LANs set up in broadcast and post-production studios are looking at ATM/Fibre Channel.
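    The bit-rate arithmetic, roughly (assuming 10-bit 4:2:2 sampling, which is my assumption):

        # Rough uncompressed 1080i bit-rate estimate, active picture only.
        # Assumes 10-bit 4:2:2 sampling, i.e. 20 bits/pixel on average.
        WIDTH, HEIGHT, FPS = 1920, 1080, 30     # 60 interlaced fields/s
        BITS_PER_PIXEL = 20                     # 10-bit luma + shared chroma

        active = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
        print(f"active video: {active / 1e9:.2f} Gbit/s")   # ~1.24 Gbit/s
        # Add blanking intervals and you land around 1.5 Gbit/s, in the
        # ballpark of the figure above and well past a single GigE link.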

    Just my $.02
