Mixing Gigabit, Copper, and Linux

iampgray writes: "With copper-based gigabit cards selling for less than $36 these days, what kind of performance can you expect -- especially in the often-overlooked Linux market? We set out to test exactly what you can expect from copper-based gigabit solutions, from desktop-class cards through cluster-targeted products. Name brands and off-brands were put through the wringer. How'd they fare? Interesting results to say the least."
  • by Anonymous Coward
    Between 2 Tyan/AMD 1.2GHz machines I was able to pull 800 megabits per second on old copper.
  • Huh. (Score:2, Interesting)

    by autopr0n ( 534291 )
    Are we even to the point when a normal PC could handle Gigabyte? And if so, why not use optical? I mean, saying I've got a fiber optic home network is a lot cooler than saying I've got a gigabyte eth home network. I mean, to a geek (to anyone else, that would just be lame... er...)

    How much more expensive is the optical stuff for GigE? I'm mostly using optical audio connections for my home stereo, and that's not too much money.
    • Re:Huh. (Score:2, Informative)


      Gigabit, not GigaByte. Gigabit = 1000000000 bits. GigaByte = 8000000000 bits. A 1 GBps connection is 8 times faster than a 1 Gbps connection.
    • Re:Huh. (Score:4, Informative)

      by Gojira Shipi-Taro ( 465802 ) on Sunday April 14, 2002 @04:51PM (#3340282) Homepage
      Well... copper is cheaper than fiber for the moment. I'd hate to think what my 50 meter run from my router to the second floor of my townhouse would cost if it was fiber.

      I use optical runs for my audio as well, but those are all under a meter, for the most part, and around $30 or so apiece. Not too much money for the purpose, but I don't think I'd enjoy paying for a 50 meter run. Never mind the cost of devices with optical interfaces.

      That said, I guess the only reason I'd consider GB copper is that it's no more expensive than 100Base-T...
      • Wow, you are paying way too much. A 12ft optical cable I had used to connect my PC to my sound system broke a couple of days ago, and I thought it was gonna cost me $40 to replace it. Radio Shack sold 'em for $44, but Sears had a 12-footer for just $20.
      • Get a cheaper brand of cable; something tells me you really won't be able to hear the difference. Regardless, the kind of fibre used for Ethernet is not nearly so expensive. I can get 12-strand (6 single mode, 6 multi mode) fibre at around $1.00-$1.50/foot. That has enough for 6 different connections, three of them single mode (which costs more). For a short run of premade multi-mode fibre with the ends on it I'd think you shouldn't pay much more than $1/foot, and perhaps less. At a length of 50 metres it should be around $0.50/foot.

        If Ethernet fibre were as expensive as you suggest, the university I work at would be bankrupt. Just last week I laid about 50 30-metre patch cables in a closet. This is not to mention the thousands already in place and the millions of metres of fibre that connect the buildings together.
    • Re:Huh. (Score:5, Interesting)

      by WolfWithoutAClause ( 162946 ) on Sunday April 14, 2002 @04:55PM (#3340296) Homepage
      > Are we even to the point when a normal PC could handle Gigabyte?

      Yes. Some memory parts run at 333 MHz and are 4 bytes wide, and instruction throughput (as opposed to the clock rate) is over 1 GIPS, I think. So a PC can just about knock out a gigabyte/s if it has to, but it hasn't got much time to think about anything else.

      But this article is talking about gigaBITs/s. That's 8x less data. So a PC can handle that too.
    • Re:Huh. (Score:3, Insightful)

      Are we even to the point when a normal PC could handle Gigabyte? And if so, why not use optical?

      A 32-bit 33 MHz PCI bus can support one (1) gigabit ethernet card at full capacity (card's bandwidth is about 100 Mbytes/sec, PCI 32/33 is 133 Mbytes/sec).

      If you want to stick multiple cards in (e.g. for a small hypercube-style cluster), buy motherboards that support 64/33 or 64/66 (I was drooling over the dual-processor 64-bit-PCI AMD boards a little while back).

      Gigabit ethernet over copper has the advantage of running over your existing cabling (i.e. cat-5 is fine). This avoids having to muck about with fiber, as fiber is a PITA to maintain yourself (getting optically perfect connections for the fiber jacks is picky).

      The way gigabit ethernet is encoded on cat-5 cable is both sneaky and elegant.
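
      For anyone who wants to sanity-check those numbers, here's a minimal back-of-the-envelope sketch (idealized peak figures only; the real-world arbitration and descriptor-fetch overhead is covered in the reply below):

      /* Idealized peak numbers: what a 32-bit/33 MHz PCI bus offers vs. what
       * one gigabit NIC needs in a single direction.  Real buses lose a good
       * chunk of this to arbitration and descriptor fetches. */
      #include <stdio.h>

      int main(void)
      {
          const double pci_peak   = 33.33e6 * 4;   /* 33 MHz x 4 bytes ~= 133 MB/s */
          const double gige_bytes = 1.0e9 / 8;     /* 1000BASE-T, one direction    */

          printf("PCI 32/33 peak:      %.0f MB/s\n", pci_peak / 1e6);
          printf("GigE, one direction: %.0f MB/s\n", gige_bytes / 1e6);
          printf("Headroom:            %.0f MB/s\n", (pci_peak - gige_bytes) / 1e6);
          return 0;
      }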
      • Re:Huh. (Score:2, Informative)

        by megabeck42 ( 45659 )
        >> Gigabit ethernet over copper has the advantage of running over your existing cabling (i.e. cat-5 is fine). This avoids having to muck about with fiber, as fiber is a PITA to maintain yourself (getting optically perfect connections for the fiber jacks is picky).

        Actually, the Siecor UniCam series works really, really well. They use an index-of-refraction-matching gel inside the factory-polished terminators. All you have to do is cut and crimp. They work great. I haven't ever had to do any splicing, though -- but given how well the Siecor stuff works, I can't see it being a remarkable problem.
      • Re:Huh. (Score:2, Insightful)

        by WhaDaYaKnow ( 563683 )
        A 32-bit 33 MHz PCI bus can support one (1) gigabit ethernet card at full capacity (card's bandwidth is about 100 Mbytes/sec, PCI 32/33 is 133 Mbytes/sec).

        Almost. 133 MBytes/sec = 1064 Mbit/sec. This means it could keep up only in theory, if all the bandwidth on the PCI bus were available for data. But that number includes the overhead of setting up transfers and arbitrating among devices that want to transfer, and those operations are fairly expensive.

        Also, the PCI device needs to obtain descriptors etc. (which indicate where to put the Ethernet packets in RAM) over the same bus, costing more valuable cycles.

        If you did have only the one device on the PCI bus (which is very unlikely), with a good chipset, you'd probably get over 100MB/sec, but not much more. So you'd never be able to actually get full Gb Ethernet. (As the test results show, things are MUCH worse than this, but that is probably caused by multiple devices on the PCI bus.)

        Talking about chipsets: a long time ago we had a board with an OPTi chipset. They ran out of silicon when designing the chip, so they couldn't implement the Bus Master FIFO, so they decided to just abort every BM cycle after each 32-bit transfer, yielding a max transfer rate of 4MB/sec!! For weeks, I couldn't figure out why my network driver wouldn't send packets faster than 30 Mbit/sec, until my boss flew to California (where OPTi was located) to find out what we were doing wrong.

        Back to the tests: for some reason they failed to mention the chipsets used on the motherboards, which really is VERY important if you want to use a gigabit Ethernet card in a 32/33 PCI system. The fact that the Dell has 5 PCI slots probably means that it has an integrated PCI-to-PCI bridge (not many chipsets support 5 PCI slots, unless one or more of them do not support Bus Mastering), which would certainly not improve things.

        I think this is important to mention, because most systems today, at least desktops, will only have 32/33 PCI, and as the test results show, with a presumably shitty chipset, you only get marginally better performance than 100Mb Ethernet...
    • Coolness aside, the market for gigabit ethernet is a lot larger for the corporate user than it is for the home user. One of the primary drivers for the fast adoption of gig-e in corporate environments is the ability to use the existing copper infrastructure by using an additional 2 pairs (copper gig-e uses 4 pairs).

      The problem with fiber vs copper isn't really the cost of the medium, it is the cost of laying the infrastructure. If I remember correctly, the cost of the cable is about 1/10 of the total cost.

      Part of the reason gig-e has become so cheap so quickly is that it has been able to ride the ethernet adoption curve, making the MACs and copper transceivers cheap because of the huge volumes. These volumes will never be reached by fiber.

      -tpg

  • I checked out the cards, and yes you can get them cheap, but what about switches? You figure they're still uber-pricey too, right?

    Nope... apparently Pricewatch.com has D-Link 8-port 10/100/1000baseT auto-detect switches listed for under $150! (I've been most happy with my D-Link DI-804 Router/firewall/switch for $79.)

    Is this the normal "cheaper as tech gets more widespread and easier to manufacture," and do you think maybe Apple making gigabit ethernet a standard feature had something to do with it? :)

    • Are you sure all 8 ports support gigabit? Most likely it has a gigabit uplink port and the rest are 10/100. The switches with all ports supporting gigabit are expensive.
    • by Christopher Thomas ( 11717 ) on Sunday April 14, 2002 @04:53PM (#3340286)
      apparently Pricewatch.com has D-Link 8-port 10/100/1000baseT auto-detect switches listed for under $150!

      These are for 8x100-base-T with a gigabit uplink. I researched this a while ago, when speccing out my dream network ;).

      The cheapest full-gigabit switch D-link sells is about $1500.
      • One could go "token ring" style with 2 cards per machine and the network being a ring not star.
        As long as it is a small network and no machine goes off line it should be ok.

        Perhaps an expert knows if you could have two virtual IP's per card, a simple "Y" splitter plus two cross-over cables running into each machine via the splitter?? That might be a cheaper ring configuration.

        Cheers,
        -B
        • One could go "token ring" style with 2 cards per machine

          That would give really below-average performance, even if you did get it to work. It would be a nightmare getting all the routing set up properly so the computers would pass packets around the ring correctly. And once you got it working, speed would suck. Every packet would have to be evaluated in software and then routed to the next interface if necessary. This would be slow as hell, neutralising the speed advantage.

          Perhaps an expert knows if you could have two virtual IP's per card, a simple "Y" splitter plus two cross-over cables running into each machine via the splitter??

          That wouldn't work at all.

      • Dell had an eight-port Gb switch for about $500 not too long ago. I think they are loss-leading their networking stuff to get you to pay up for their more expensive servers and storage. Here [dell.com] is a link so you can spec it yourself. You have to drop the support option to 1 year and shipping is extra, but it's an 8-port gigabit switch over copper.
    • You wouldn't be confusing a D-Link 9-port switch (8-port 10/100 plus 1 gigabit port) for $150 with a D-Link 8-port gigabit switch for $600, would you? If you aren't, then the site that's selling the 8-port gigabit switch for that price is going to change its prices (upwards) _very_ soon, when they realise their mistake.

      FP.
    • Having a couple hundred thousand Macs with gigabit ethernet probably did have something to do with it. Many vendors now offer gigabit ethernet NICs in their professional series systems which means for network infrastructure folks there's a high demand for equipment leading to increased production and a lower cost. Apple's been selling gigabit ethernet standard for almost two years now which amounts to lots of gigabit ethernet cards floating around in Macland.
    • by ncc74656 ( 45571 ) <scott@alfter.us> on Sunday April 14, 2002 @05:53PM (#3340551) Homepage Journal
      Nope... apparently Pricewatch.com has D-Link 8-port 10/100/1000baseT auto-detect switches listed for under $150!

      D-Link's site is nearly impossible to navigate (maybe it requires JavaScript, which I've shut off), but the Pricewatch description of the DES-1009G indicates that Gigabit Ethernet is only available on one port as an uplink connection; the rest of the switch is your run-of-the-mill 10/100 job. The DGS-1008T is D-Link's 8-port unmanaged 10/100/1000 switch; the cheapest entry on Pricewatch for that is $595.

      BTW, I have the entire site downloaded. Maybe I'm insane to even think about mirroring a /.'ed article on my home cable-modem link, but here it is [dyndns.org]. I've converted all the charts to PNG so they'll load slightly faster, and I got rid of most of the godawful "super-31337" yellow-on-black text to improve readability. You can also choose this link [dyndns.org] to download the entire page (images and all) in one shot.

  • Hoboy. (Score:4, Funny)

    by Soko ( 17987 ) on Sunday April 14, 2002 @04:38PM (#3340215) Homepage
    How'd they fare?

    Not terribly well against the /. effect.

    Interesting results to say the least.

    Lessee, a story about increasing bandwidth on a server /.ed to oblivion? That's not interesting, that's anti-climactic -- I know what happens before I get to the story. Oh well...

    Soko
  • by BeBoxer ( 14448 ) on Sunday April 14, 2002 @04:40PM (#3340232)
    Could you post a summary? That must be about the fastest /.-ing I've seen. What'd that take, about 5 minutes?
  • Obligatory Mac Plug (Score:4, Informative)

    by Lally Singh ( 3427 ) on Sunday April 14, 2002 @04:45PM (#3340253) Journal
    Just FYI, Macintosh 1000BaseT ethernet controllers go directly to the memory controller [apple.com], bypassing PCI altogether.
  • Clusters (Score:3, Informative)

    by jhunsake ( 81920 ) on Sunday April 14, 2002 @04:46PM (#3340255) Journal
    Stay away from cards that don't have PXE and cards in which the driver won't compile into the kernel (as opposed to a module) if you plan to do easy installations or mount root off the network. In other words, stay away from Netgear and some 3Com cards (I haven't tested others), and play it safe with Intel.
    • Do Intel's desktop cards support PXE (or rather, have the correct support so as not to be lumped in with Netgear's cards)? Bleh... when I first got into networking I bought some Netgear cards because I'd had such great success with their switches/hubs -- NEVER AGAIN; this is the company that accidentally set up their PCI ID (or whatever it is that allows Win9x to autodetect and load drivers for devices) incorrectly as ANOTHER card's, thus allowing Windows to load the WRONG driver for the card -- nightmare!

      I've had really good experiences with Intel NIC's, and in fact have two Pro 100/S Server Adapters and two Pro/1000 T Server Adapters (the forefather to the newer 'server' class models) for use in my systems-- Intel's driver support is absolutely amazing, and incredibly stable/friendly. The fact that they offer up alternate platform drivers is just another bonus.
  • Gigabit and Linux (Score:5, Informative)

    by GigsVT ( 208848 ) on Sunday April 14, 2002 @04:50PM (#3340277) Journal
    Well, check out the docs first off. It's hard to get much out of GBit, since most of the utilities don't open their sockets with properly sized buffers/window/whatever.

    I set up optical gigabit for some NAS type things at work, and out of the box, GBit performed maybe 30% better than 100 Mbit. We are talking about 110Mbit peaks, compared to 80Mbit peaks with 100Mbit switched.

    Setting the MTU to 6144 (max that I could set it to with the ns83820.o) I started to get peaks around 300Mbit/sec.

    I tried recompiling the module for higher limits, since in the source it has:

    #define RX_BUF_SIZE 6144 /* 8192 */

    But if I put in 8192, or 9000 like I wanted it to be, it would crash or lock up.

    Anyway, it's not trivial to get good performance out of GBit, and definitely don't expect anywhere near 10X gain.
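
    For the record, here is a minimal sketch of the buffer sizing the parent is talking about -- asking for bigger per-socket send/receive buffers before pushing data. The 512 KB figure is just an example, not a recommendation; the kernel silently clamps requests to /proc/sys/net/core/wmem_max and rmem_max, so check what you actually got:

    /* Request larger socket buffers; print what the kernel actually granted. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int want = 512 * 1024;          /* example value only */
        int got = 0;
        socklen_t len = sizeof(got);

        if (s < 0) { perror("socket"); return 1; }
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &want, sizeof(want)) < 0)
            perror("SO_SNDBUF");
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want)) < 0)
            perror("SO_RCVBUF");
        if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &got, &len) == 0)
            printf("receive buffer actually granted: %d bytes\n", got);
        return 0;
    }

    The MTU itself can be raised with plain ifconfig (e.g. ifconfig eth0 mtu 6144), subject to whatever the driver will accept, as the parent found with ns83820.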
    • One can make decent gains with a good managed switch and by optimizing the workstations. I'm not really ready to invest in gig-e as of yet, since the result depends so much on internal transfer speeds. By the time general PC I/O speeds are up to par, 10gig-e will be the cost of gig-e today -- a much better scenario. For now, we simply ensure that certain workstations have precedence over the general PC population. HyperTransport... yummy!
    • It's all about the card; cheap cards will perform terribly. Like I posted a bit lower, the D-Link cards are TERRIBLE, but give a shot to an Intel Pro 1000T -- these are the best cards on the market for gigabit ethernet over copper. 3Com is also good, but with my D-Link cards I was getting HALF the bandwidth that I would get with my Pro 1000T.

  • The same site lists several 8-port switches for gigabit copper. Those with ONE 1000Mbit port and 8 10/100 ports are low cost ($150), but those with 8 1000Mbit ports are a bit more (about $600). Add the cost of the switch to your cards and it's probably not cost effective for the HO yet. I'm happy with my 100BaseT network; my 8-port switching hub was less than $40. I AM using CAT 5E so I can upgrade to 1000BaseT someday, just not today!
    • I'd be happy with an 8 + 2 switch from someone -- i.e. 8 10/100 ports with 2 10/100/1000 ports for my main file-sharing boxen (and I imagine these would be a hit at LAN parties, so the server running the game could have a gigabit connection to the switch, allowing most of the 10/100 connections to saturate it with updates, and vice versa). The trend of hardware makers (Netgear has done this too; the FS309T is an 8-port 10/100 with a SINGLE 10/100/1000 copper gigabit port) to make these 8 + 1 solutions just sucks, since you can't really test the faster speed of gigabit with JUST one port.

      Of course, in a perfect world, I'd agree that 8-port gigabit switches at $200 or less would be about near perfect, especially if higher port counts weren't priced unrealistically high.
  • by Jah-Wren Ryel ( 80510 ) on Sunday April 14, 2002 @05:02PM (#3340330)
    The cards are well priced for home use, and CAT5E cabling is cheap too. The problem with gigabit ethernet is not the cards, it is the lack of switches or even plain hubs at an affordable price point. There are lots of switches out there with a single gigabit port, but even those are a couple of hundred dollars. If you want multiple gigabit ports, you are looking at more than $600 for the bottom rung products.
    • The cards are well priced for home use, and CAT5E cabling is cheap too. The problem with gigabit ethernet is not the cards, it is the lack of switches or even plain hubs at an affordable price point...

      Funny, this gets modded up as Informative, while my earlier post listing inexpensive 8-port gigabit switches [slashdot.org] languishes unmoderated.

      Slashdot moderation, yet another mystery of the universe. Even after reading the guidelines twice, I can't figure out how other people manage to interpret them [slashdot.org] the way they do.

    • The problem with Gigabit Ethernet is not the cards, it is the lack of switches or even plain hubs at an affordable price point.
      To quote from Ethernet: The Definitive Guide [oreilly.com] (February 2000):

      Currently, all Gigabit Ethernet equipment is based on the full-duplex mode of operation. To date, none of the vendors have plans to develop equipment based on half-duplex Gigabit Ethernet operation.

      Which would explain why there are no Gigabit Ethernet hubs available (hubs aka repeaters are half-duplex devices). Carrier extension and frame bursting are not needed in full-duplex mode, which would make the design of full-duplex devices simpler, I guess.

      On a side note, in the article, they used Ethernet Jumbo Frames which were not part of the official IEEE standard as of the writing of the book.

    • When I hear the word hub, I think of a half-duplex shared medium. Although the gig-e spec (802.3z) contains support for half duplex, I don't know of any vendor that has implemented and tested it, especially in a hub product.

      -tpg
    • If you want multiple gigabit ports, you are looking at more than $600 for the bottom rung products.
      Hey, don't knock it. Two years ago a gigabit switch would run you $5,000-$10,000! Things are definitely getting more reasonable. Maybe not for the average (non-geek :-) home, but for business, it's getting very accessible.
    • I'm looking at upgrading the Linux server to act as a giga switch. We've got two Macs and another Linux workstation. That means four $40 Ark cards (both Macs already have giga-NICs) and changing over to a $65 SiS745-based motherboard. SiS claims they have a concurrent [sis.com] line (1.2GB/second bus total) to each of six PCI masters. The server we're using now does file and print serving for only two people. I think it will be able to handle giga-switching quite well.

      • I got a Farallon 4-port GigE switch for $210, and don't have to use some kind of bullshit bridging and fuck with an in-place server just to get switching capabilities.

        Consider looking for such a thing.
  • and I took Networking from him last semester. He did a preliminary demo for the class, and I think that on the 32 bit PCI Gigabit cards, the effective throughput was around 250Mbps. Of course, the PCI bus was the limitation.

    A 64 bit PCI card was getting significantly higher throughput. I don't remember the exact numbers, but it was much closer to 1000Mbps (maybe 800?).
  • by redelm ( 54142 ) on Sunday April 14, 2002 @05:16PM (#3340389) Homepage
    I bought a pair of DLink DGE500T's about 6 months ago, just to see what I could wring out of them.

    I got about 32 MByte/s one-way with `ttcp` [UDP] between a 1.2GHz K7 and 2*500 Celeron (BP-6) through a plain crossover cable.

    Not bad, but only 25% of wirespeed (125 MByte/sec). I figured the main limit was the PCI bus, which would only burst at 133 MByte/s, and I strongly suspected that the bursts were too short to achieve anything like this speed. I have yet to play with the PCI latency timer.

    One thing for sure -- it isn't the CPU speed or Linux network stacks. The K7 will run both ends of ttcp through the localhost loopback at 570 MByte/s, and the BP6 around 200 MB/s.

    • Don't be so sure about it not being the network stack. Localhost connections are often special cased because all the mechanisms for packet loss/reordering recovery are unnecessary over the loopback device.
    • By default the MTU size on a gigabit card is way too low. The efficiency sucks because of that. Increase the MTU size and see how much better things become.
    • by cheese_wallet ( 88279 ) on Sunday April 14, 2002 @06:24PM (#3340668) Journal
      There are basically two types of latency in PCI. The first latency is the amount of time it takes for a target to return a requested word. This is 16 33MHz clocks, according to the PCI 2.2 spec.

      The second type of latency is the amount of time it takes a target to return a second word in a Burst transaction. This is 8 33MHz clocks, according to the PCI 2.2 spec.

      The setting you are playing with in BIOS is probably the first latency... which is basically a setting in the PCI master, deciding how long to wait for data from a target before deciding to change the transaction to a delayed read. A delayed read basically frees the bus, and the master will check back with the peripheral at a later time to see if it has the data ready yet or not.

      delayed reads slow down access to that peripheral, because no other transaction is allowed to take place with that peripheral until that delayed read is finished.

      Older PCI cards didn't have the 16-clock limit on returning the first word of data, and they usually took longer. On new systems that try to be PCI 2.2 compliant, to prevent a bunch of delayed reads from taking place, you have the option of increasing the latency timer in the BIOS, so that it won't time out exactly on the 16-clock boundary, thereby speeding up access to that peripheral, at the cost of hogging the bus.

      So anyway, adjusting the latency timer isn't likely to have an effect on newer peripherals... unless you make it too short, causing a bunch of delayed reads, and then your system will slow down.

      --Scott
      • This sounds interesting. I definitely want the GigE to hog the bus -- there's no other way. The only other really active device on the PCI bus should be the EIDE, and they should have at least two 512-byte buffers, so they could wait ~500 PCI clocks (20 MB/s disk).

        But I thought there was a register (oddly named latency) that governed how long a busmaster could burst when someone else wanted the bus.

        • Actually, the EIDE ports on modern chipsets are not connected to the PCI bus anymore. They use proprietary high-speed buses (V-Link, HyperTransport, etc.), which is a good thing, because one Ultra-ATA133 port already has the same bandwidth as the whole PCI(-32/33) bus.
  • by sstammer ( 235235 ) on Sunday April 14, 2002 @05:49PM (#3340524)
    There was another review of GigE performance in the IEEE Network Magazine last year [google.com].
  • AGP NICs (Score:2, Interesting)

    by Rolo Tomasi ( 538414 )
    Are there any NICs using the AGP? Not many boards have 64bit PCI yet, let alone PCI-X, but every board has an AGP slot. This would be great for cheap 1U cluster nodes, with an appropriate riser card of course.
    • AGP = Accelerated Graphics Port. Useless for anything except video cards at this point.

      • Well, AFAIK AGP is PCI on steroids: it uses the same signals as PCI; the difference is the timing and the extra signal lines that allow the AGP card to read and write directly to system memory, so graphics cards would be able to store big textures directly in main RAM (which of course is a few orders of magnitude slower than the graphics card's RAM, so this functionality isn't used at all). That's why an AGP card shows up in the PCI device listing, and why there are AGP and PCI graphics cards using the same chipsets.

        But what the hell do I know.

        • I think the biggest difference is that AGP specifies that there /must/ be an IO-MMU to translate between bus addresses (which the AGP card specifies in requests) and physical addresses (i.e. of RAM). That is, the AGP host bridge can make system memory appear wherever it wants from the POV of the AGP card.

          Note that PCI too can access memory "directly" (i.e. it can initiate a transfer) and that the PCI chipsets used in Alphas (21174 and up) and Suns have IO-MMUs. However, there is no requirement for PCI host bridges to be able to translate memory accesses.

          IO-MMUs are useful on 64-bit machines. Without them, data held outside of the 4GB of address space that PCI can reference must first be copied into address space accessible to the PCI card (bounce buffering) -- which is bad for performance. With an IO-MMU you can map the PCI address space to wherever you want in system address space.

          Good IO-MMUs (eg the DEC 21x7x) can map very specific ranges of bus addresses to multiple ranges of system address space (ie hardware scatter gather).
  • Black background, yellow text, and purple links? Sheesh, a few tags and this page would have sent me into a convulsion.
  • Would it be feasible to hook up a small network of machines with gigabit ethernet so I can transfer large files from one to the other? I do CAD/video editing work, and plan to have some different machines for doing different stuff (a Mac, an SGI box, and a few PCs). Would it also be possible to use my crap PC (233 MHz Pentium MMX) as a router so I can share an internet connection over all the machines -- like maybe the 233 has a gigabit card connected to the hub with the other machines and a 10/100 card connected to the internet connection?

    The gigabit network would be used exclusively for transferring files (> 1 GB uncompressed video) between machines. Would it just be easier (and cheaper) to do some hard drive swappage when needed?
    • I recently had to solve this kind of problem for my employer. We deal with large databases (usually 2-5GB each) and often have to move them from machine to machine for various things. We decided to go with removable hard drives rather than GBit ethernet for a couple of reasons. The first was cost -- it cost us about $500 to outfit all of our servers and the necessary drives with removable kits. These aren't hot-swappable, but in our case that was no big deal, because it's no big deal to take down a server for 3 minutes to swap in a drive with new data. I believe when I figured out our cost to go with GBit ethernet, we were looking at around $1k for all the cards and just under $4k for the necessary switches. Quite a difference in cost. Obviously, if our servers didn't have the option of going down, we probably would have gone with GBit ethernet. But since they do, going with the removable hard drives was very cost effective and has thus far worked out great.

      Obviously you are working on a bit smaller of a scale, but you would still have to consider the cost of a GBit switch, and I don't think I've seen one any cheaper than $500 (but granted, I haven't looked hard in 3-4 months). In your situation, just based on my recent experiences, I'd most definitely go with removable drives.
  • Stuff about Gbit.... (Score:4, Interesting)

    by NerveGas ( 168686 ) on Sunday April 14, 2002 @06:46PM (#3340751)
    First, you can't just stick a gigabit card in a machine and expect it to work at full capacity. Ethernet was not really designed for gigabit speeds, but we've managed to squeeze it out -- barely.

    With 10 mbit cards, having the card generate an interrupt with every incoming frame wasn't too bad. And on 100-mbit, it's still manageable -- but at a full gigabit, it really, really starts to bog down the machine. Some cards get around that by using interrupt coalescing, where they buffer up a few frames before they trigger an interrupt. That has a drawback, though: it increases latency. The trade-off has to be made at some point, and not choosing the RIGHT point can affect either throughput or latency.
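
    Where a driver exposes its coalescing knobs through the ethtool ioctl interface (many early gigabit drivers don't, and older 2.4-era headers may need a u32 typedef before linux/ethtool.h), they can be tuned along these lines -- a rough sketch only, with purely illustrative values:

    /* Sketch: read and adjust interrupt-coalescing settings via SIOCETHTOOL.
     * Requires a driver that implements ETHTOOL_GCOALESCE/ETHTOOL_SCOALESCE;
     * "eth0" and the values below are examples only. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/types.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct ifreq ifr;
        struct ethtool_coalesce ec;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) { perror("socket"); return 1; }
        memset(&ifr, 0, sizeof(ifr));
        memset(&ec, 0, sizeof(ec));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ec;

        ec.cmd = ETHTOOL_GCOALESCE;             /* read current settings */
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
            perror("ETHTOOL_GCOALESCE (driver may not support it)");
            return 1;
        }
        ec.cmd = ETHTOOL_SCOALESCE;             /* trade latency for fewer IRQs */
        ec.rx_coalesce_usecs = 100;             /* fire after 100 us at most...  */
        ec.rx_max_coalesced_frames = 16;        /* ...or after 16 frames         */
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            perror("ETHTOOL_SCOALESCE");
        return 0;
    }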

    Furthermore, to get the full benefit out of your card, you generally need to enable jumbo-frames on both the card and the switch - and of course, your switch has to support that feature.

    To make matters even worse, you can't always pump out (or receive) a full gigabit in any other than testing situations. Say you're receiving a large incoming file via FTP, NFS, or the protocol of your choice. Can your machine *really* write data to the disk at over 100 megabytes per second? And if it can, can it really handle both receiving a gigabit from the card, processing it, and writing the gigabit out to the disk? Unless you've got a very large amount of money in the machine, it probably won't.

    steve
    • Why not trigger the interrupt immediately, but continue to buffer frames and let the CPU grab all the frames in one go when it gets around to servicing the interrupt?

      Actually, with a sophisticated card, driver, and OS, the range of systems which could pull in a full gigabit/sec would be vastly increased. It requires some careful thought and programming.

      I sat down and sketched out a sample of how things might go, and realized I was missing some important details. One detail being the fact that it takes time to copy the data from the card to memory, and the CPU can be doing other things in that time. So, it requires more than 10 minutes' thought, but I'm sure given a day or two, and access to relevant documentation about how various bits work, I could sketch out a driver design that made near-optimal use of the available hardware, and wouldn't be that hard to implement in hardware either.

  • Seems like every graph I look at these days in research papers uses the same styles and colors (Microsoft Excel defaults).

    Too bad the open-source community doesn't have a better alternative. I've tried Grace... the learning curve was a little steep. Guppi is not ready, nor is KChart. The best I've found so far is Octave, an open-source Matlab clone [octave.org]. That's because it provides an interface to gnuplot, and Matlab is very familiar to me.

  • Gigabit optical network cards are only a little over $100 now, are full duplex, and are faster than copper in most cases. We've just installed 4 dual Athlon 2000MP Linux boxes with gigabit optical cards -- pretty damn fast, as you can imagine.
  • by GlobalEcho ( 26240 ) on Sunday April 14, 2002 @09:43PM (#3341301)
    The authors of the study write:

    the results obtained in this study clearly show that peak performance is not a complete indicator of peak performance


    Wow. That makes any analysis tough, when performance measures fail to satisfy the Reflexive Property!

    Brian
  • I was wondering whether it was possible to pipe uncompressed video through a gigabit ethernet--say 720p? I ask because I still have this dream that I will be able to buy (make?) a component for my living room entertainment system that logs into my main computer as a user and plays back media files (both audio and video) on my fancy living room equipment. I think that system would be much more elegant than what I have now (analog RCA cables for audio and S-video running into my living room--it's an ugly hack).

    The much nicer interface would be to have a living room box join my ethernet LAN. The box would just receive uncompressed audio and video from the computer over gig ethernet. That way, all the decompressing would be done by the fancy CPU in my bedroom, and the box would not become obsolete when new/more CPU-intensive codecs came out. (Because the alternative is, of course, to have the box do the decompression, but I don't like that.) Somebody please make one of these (or explain why it would be a bad idea).

    • Well, let's just start with the concept, and you can work it from there. First off, it'll most likely saturate the gigabit Ethernet; your best bet is to go with MPEG decoding, but enough about that. First, figure out how many pixels you are going to have. This is going to be a number like 1024x768, or 1600x1200. Multiply the two numbers together, then multiply THAT number by how much color depth you're planning on having (16, 24, 32 bit) and by the frame rate. That gives you your bandwidth requirements. 1600x1200 @ 24-bit color @ 60 fps is something like 2.8 gigabits per second. Now you know why those video cards cost so much :-D MPEG-2 encode/decode in either software or hardware isn't a whole lot; most every digital video system uses it, including your DVD player. You could probably hack something together using FireWire, but wait till this summer when Apple updates FireWire to 800Mbps/gigabit speeds. I might suggest an Apple Cube. It's slow, but it's silent, and does FireWire, I think. It also has Ethernet onboard, but not gigabit. Shrug.
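
      For reference, here is the parent's arithmetic worked out as a quick sketch (it also answers the grandparent's 720p question: at 24 bits and 60 fps, uncompressed 720p already overflows a single gigabit link; at 30 fps it would just fit, before protocol overhead):

      /* Raw video bandwidth = width x height x bits-per-pixel x frames-per-second. */
      #include <stdio.h>

      static double gbps(int w, int h, int bpp, int fps)
      {
          return (double)w * h * bpp * fps / 1e9;
      }

      int main(void)
      {
          printf("720p (1280x720,  24bpp, 60fps): %.2f Gbit/s\n", gbps(1280, 720, 24, 60));
          printf("720p (1280x720,  24bpp, 30fps): %.2f Gbit/s\n", gbps(1280, 720, 24, 30));
          printf("UXGA (1600x1200, 24bpp, 60fps): %.2f Gbit/s\n", gbps(1600, 1200, 24, 60));
          return 0;
      }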
  • by IGnatius T Foobar ( 4328 ) on Sunday April 14, 2002 @09:52PM (#3341319) Homepage Journal
    Gigabit Ethernet comes in really handy on Linux when you add 802.1q VLAN tagging.

    For those of you who don't know how this works, here's a bit of a primer: basically, you set the port on your big data center grade switch to "trunk" and then you enable 802.1q on your Linux box. Then you don't just have one Ethernet interface with one address --- you have up to 4096 virtual ones, each on its own VLAN and each with an IP address that's valid on that VLAN. So you'd have eth0.1, eth0.2, eth0.3, etc... each talking to the machines on that VLAN.

    Once you've got that running, you can do all sorts of neat stuff, including:
    • A router! You're on every VLAN anyway, so why not? It's not nearly as fast as a hardware-based Layer 3 switching module, but it's several orders of magnitude cheaper.
    • Really complex firewalls. You could put different parts of your organization (or whatever) on different VLAN's and then use your nifty Linux box to dictate what kind of policy is used to route between them.
    • If you're in a big building with multiple tenants, each with their own VLAN on a shared network, you can reduce the number of Internet access NAT/firewall boxes. Instead of one for each tenant, you've got a single one.
    • How about a VPN gateway that can place the caller directly on his or her department's own VLAN instead of having to route to it?


    As you can see, it's limited only by your imagination. And with that much stuff potentially running through the box, you're going to need that 1 Gbps of speed. Happy hacking!
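
    For the curious, here's a rough sketch of what the vconfig utility does when you create one of those eth0.N interfaces -- an ADD_VLAN_CMD request through the SIOCSIFVLAN ioctl. The kernel needs 8021q support compiled in or loaded, the header layout has shifted between 2.4 releases (vconfig ships its own copy of the struct), and "eth0"/VLAN 10 are example values:

    /* Create a tagged sub-interface (eth0.10) the way vconfig does.
     * Assumes 8021q support in the kernel and root privileges. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/if_vlan.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct vlan_ioctl_args req;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) { perror("socket"); return 1; }
        memset(&req, 0, sizeof(req));
        req.cmd = ADD_VLAN_CMD;
        strncpy(req.device1, "eth0", sizeof(req.device1) - 1);
        req.u.VID = 10;                          /* creates eth0.10 on VLAN 10 */

        if (ioctl(fd, SIOCSIFVLAN, &req) < 0) {
            perror("SIOCSIFVLAN");               /* no 8021q support, or not root */
            return 1;
        }
        printf("created eth0.10\n");
        return 0;
    }

    In practice you'd just run "vconfig add eth0 10" (which wraps this same call) and then give eth0.10 an address with ifconfig.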
    • I've tested it with a Cisco switch (specifically, a Catalyst 4006). It does work. If you don't want a port to have access to "every" VLAN then you have to restrict it at the switch. Otherwise, anyone with root on the Linux box plugged into a trunk port can simply define additional eth0.xxx interfaces on whatever VLAN they want.
  • So I also ran a netpipe test to see what it thought of my NICs.

    It gives you a NetPIPE.out. According to the man page, they contain: "time to transfer the block, bits per second, bits in block, bytes in block, and variance."

    First of all, the manpage is wrong: the second column gives a number much closer to megabits per second, and after numerical verification, I found that it's counting megabits of 1024*1024 bits, not 1000*1000 bits.

    In NIC-talk, when we say gigabit, we mean 1,000,000,000 bits, not 1000*1024*1024 bits.

    So when benchmarking your gigabit network card with netpipe, please remember that you're looking at speed results in 1024*1024-bit megabits, so your gigabit NIC is really only about 953.7 of them, which immediately gives a much better insight into the speed achieved by the SysKonnect card.
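
    A one-liner to make the conversion concrete (the 10^9 figure is the nominal 1000BASE-T line rate):

    /* NetPIPE's "megabits" are 1024*1024 bits, so a 10^9 bit/s link reports ~953.7. */
    #include <stdio.h>

    int main(void)
    {
        printf("%.1f NetPIPE megabits/s\n", 1e9 / (1024.0 * 1024.0));
        return 0;
    }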
  • What happens to the throughput when two links are active on one system?

    Pretend I have 3 cheap Athlon based systems in one building. Assume one is acting as a server, and the other two are clients that aren't talking to each other. Because these are the cheaper cards, I only expect 300Mbps when one client is active. What happens when both clients are active?

    Ideally, throughput would be no worse than 150Mbps/per card. I suspect it would be much worse.

    If multiple cards did work well, then you could buy 6 cards to directly connect 3 machines. Much cheaper than 3 cards and a GigE switch.

    I think I'll have to wait until even cheap machines have 64bit/66Mhz PCI busses. I know I'll have to wait until I get all my machines into one building.

  • There must be something wrong with the graphs for the e1000 packet size vs. throughput plot [uni.edu]; I believe the axes are reversed.

    Also, Intel acknowledges that their e1000 adapters have driver issues under Linux. This text is from: ftp://aiedownload.intel.com/df-support/2897/ENG/readme.txt [intel.com]

    Known Issues
    ============
    Driver Hangs Under Heavy Traffic Loads


    Intel is aware that previously released e1000 drivers may hang under very
    specific types of heavy traffic loads. This version includes a workaround
    that resets the adapter automatically if a hang condition is detected. This
    workaround ensures network traffic flow is not affected when a hang occurs.

    This is for driver version 4.1.7, released 3/23/2002 (i.e. quite new). Older versions had even bigger problems. This might explain why the Intel adapter does so badly in this test. I wish Intel would get a clue and release all card specs and GPL the existing driver so that a true (stable) open source driver could be written and included in the Linux kernel. I think the hardware is OK, but the drivers suck.

  • As many have said, Gigabit switches are priced WAY out of proportion to the price of Gigabit NICs.

    So how about filling up a cheap PC with cheap NICs and using it as a switch?

    Granted as others have said, the PCI bus is a limiting factor. But it will certainly blow away any 100mbit switch.

    Another possibility is to put two Gigabit NICs in every machine, and run a daisy chain or even a ring type network.

    Sounds like a fun project!

  • I needed gigabit bandwidth at work because I am moving 100GB files.

    I went reading about it on the net, on sites like www.3wire.com for example, and to make a long story short, fiber optic yields the best results (obviously) but is way too expensive. Next are some 1000T copper cards that almost do the job, but then again, after getting 5 different cards, I can tell you right away that there can be a BIG difference from one board to another.

    The best cards I've got so far performance-wise are the Intel Pro 1000T-based adapters; with no optimization, running netcps card to card, I'd get twice as much speed as with the D-Link counterpart (DGE500T). They are a bit more expensive, but if you want more than a 3x increase over 100Mbit, you need something a tad more expensive.

    The other thing is you see cards with 70MB/second bandwidth tests on some websites, with jumbo packets turned on. You need a jumbo-frame-capable switch (read: expensive) to be able to turn that on. The cheapest gigabit switch I've found that could take an awful lot of load without costing me an arm was the Netgear GS508T, but if you are used to a managed switch, that one isn't.

    Also, you might be tempted to get a gigabit card as an uplink on a switch with, let's say, 8 ports @ 100Mbit; that way you won't waste bandwidth to the server and the 8 of them can crunch it. Well, good idea on paper, but don't get the D-Link DES-1009G. I had to return 2 of them, and the firmware on that thing truly SUCKS. You can't just leave it there and forget it; you need to cycle the power sometimes so it can "read properly" on the ports whether 100 or 10, half or full duplex. It's miserable and poor performing. It's cheap though :). If you can afford to do power cycling once in a while and it's not a business server with critical uptime, it's not all bad... (like for a little renderfarm).

    For the Intel Pro cards, I got both the workstation and server ones, the server one being 64-bit PCI.

    There's one thing you want to consider: if you use gigabit Ethernet, you also need to be able to feed it. 50MB/second on the wire requires a drive able to deliver 50MB a second to the card, and requires a PCI bus able to take the load as well (remember, it's 50MB x 2 of bandwidth on the bus, which on PCI 32/33MHz saturates at 133 in theory but around 100 in the real world).

"Life sucks, but death doesn't put out at all...." -- Thomas J. Kopp

Working...