Is There a Place for a $500 Ethernet Card?

prostoalex writes "ComputerWorld magazine runs a story on Level 5 Networks, which emerged from stealth startup status with its own brand of network cards and software called EtherFabric. The company claims the cards reduce the load on server CPUs and improve communication between servers. And it's not vaporware: 'The EtherFabric software shipping Monday runs on the Linux kernel 2.4 and 2.6, with support for Windows and Unix coming in the first half of next year. High volume pricing is $295 for a two-port, 1Gb-per-port EtherFabric network interface card and software, while low volume quantities start from $495.'"
  • by Anonymous Coward on Monday June 20, 2005 @09:04PM (#12868735)
    Yes, there is a place for a $500 ethernet card, far, far away from this guy. [fiftythree.org]
  • by bananahead ( 829691 ) * on Monday June 20, 2005 @09:04PM (#12868739) Journal
    This sounds very similar to the 'smart card' concept back in the late 80's and early 90's. Intel had the 586-driven smart-cards, and I believe 3Com had them as well. They were intended to offload the CPU by putting parts of the stack on the card. They failed because the performance gain and CPU offload numbers were never enough to justify the price difference.

    I wonder what has changed? I have never known the CPU to get dragged down by network traffic, but maybe in the network server markets it is different. However, with Ethernet chipsets being designed into the motherboard and integrated into the tight circle of RAM and CPU, it isn't clear there is a need for this.

    How long before the network control is put into the CPU? It is going to be tough to beat that type of performance.

    • by bhtooefr ( 649901 ) <bhtooefr@bhtoo[ ].org ['efr' in gap]> on Monday June 20, 2005 @09:25PM (#12868856) Homepage Journal
      The only time I've heard of that was with Ethel, an Apple II ethernet card. It used a PIC that ran the TCP/IP stack, and fed stuff to the A2.

      Of course, the A2 is perfectly capable of running its own TCP/IP stack - Uther doesn't do any of that, IIRC, and nor does the LANceGS (although it seems that the LANce can only do pings on the //e - either that, or it just costs too much).
    • by bluelip ( 123578 ) on Monday June 20, 2005 @09:27PM (#12868863) Homepage Journal
      I've noticed a slowdown in computer response when using gig cards and moving lotsa' data. I thought the bottleneck may have moved to the file systems. That didn't seem to be the case, as pumping dummy data through the NIC also caused issues.

      I didn't pursue it far enough to see where the actual problem was. These cards may help, but my money is on a faster CPU.
      • by __aaclcg7560 ( 824291 ) on Monday June 20, 2005 @09:55PM (#12869037)
        Jerry Pournelle had a column in the February 2005 issue of Dr. Dobb's Journal about Gigabit hardware. If you have a Gigabit PCI card, expect to see a doubling of speed over a 100Mb PCI card. If the motherboard has a built-in Gigabit port, you can see five to six times the speed of a 100Mb PCI card or port. PCI cards are limited by the PCI bus, but built-in ports have direct access.
        • Built-in ports have direct access... depending on the chipset/motherboard.

          I've seen some 'built-in' broadcom gig-e ports that were on the PCI bus, even though they were technically built into the board. Horrible performance.
        • by Anonymous Coward on Monday June 20, 2005 @10:26PM (#12869198)
          Plain old 32-bit PCI has a bandwidth of 32 bits * 33 MHz = 1.056 Gbps. A 64-bit bus would obviously be twice that. Unless there is a lot of other traffic on the PCI bus, I suspect the limitation is the driver, not the bus.

          Peripherals like that built into the motherboard are generally on a PCI bus segment anyway. You can see by looking at the device manager in Windows or by using lspci in Linux. In both cases you will see a vendor ID, bus number, and slot number.
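
          As a sanity check on the bus math above, here is a back-of-the-envelope sketch in Python (pure arithmetic, no hardware access; the bus widths and clocks listed are just the standard parallel-PCI variants):

            # Theoretical parallel-PCI peak vs. gigabit Ethernet line rate.
            GIGE = 1_000_000_000  # bits per second on the wire

            def pci_peak(width_bits, clock_hz):
                """Peak PCI bandwidth, ignoring arbitration/turnaround overhead."""
                return width_bits * clock_hz

            for width, clock in [(32, 33_000_000), (64, 33_000_000), (64, 66_000_000)]:
                bw = pci_peak(width, clock)
                print(f"PCI {width}-bit @ {clock / 1e6:.0f} MHz: "
                      f"{bw / 1e9:.3f} Gbit/s peak ({bw / GIGE:.1f}x GigE line rate)")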

        • Um, no. (Score:5, Interesting)

          by holysin ( 549880 ) on Monday June 20, 2005 @11:20PM (#12869454) Homepage
          If you have a machine (say, one running Linux kernel 2.4.20-30.9smp) with a built-in gig port (identified as eth0: Tigon3 [partno(BCM95704A6) rev 2003 PHY(5704)] (PCI:66MHz:64-bit) 10/100/1000BaseT) connected to a decent gigabit switch, and another machine (same card, same OS) with a gigabit card, those two machines will achieve 940Mbps talking to each other (results via iperf: 0.0-10.0 sec, 1.09 GBytes, 940 Mbits/sec).

          However, if you plug in a Windows box (2000 or XP, didn't have a 2003 handy) with either an add-on card OR built-in gig (2000 vs. XP), you get a rather less impressive figure of 550-630 Mbits/sec. Coincidentally, you'll get the same basic number if you run two instances of iperf on the same computer... This tells me the bottleneck isn't the PCI bus, it's the OS. If you can prove me wrong please do so...
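
          A rough way to reproduce the "two iperf instances on one box" comparison without iperf is a loopback socket test. A minimal Python sketch (port, buffer size and duration are arbitrary choices; loopback exercises only the OS stack, never the NIC or the PCI bus, so the numbers are purely illustrative):

            # One thread receives, the main thread blasts zero-filled buffers for a
            # few seconds, then we report Mbit/s the way iperf does.
            import socket, threading, time

            PORT, CHUNK, SECS = 5001, 64 * 1024, 5.0   # arbitrary test parameters
            received = []

            def receiver(srv):
                conn, _ = srv.accept()
                total = 0
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    total += len(data)
                received.append(total)

            srv = socket.socket()
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("127.0.0.1", PORT))
            srv.listen(1)
            t = threading.Thread(target=receiver, args=(srv,))
            t.start()

            cli = socket.socket()
            cli.connect(("127.0.0.1", PORT))
            buf = b"\0" * CHUNK
            end = time.time() + SECS
            while time.time() < end:
                cli.sendall(buf)
            cli.close()
            t.join()
            srv.close()
            print(f"{received[0] * 8 / SECS / 1e6:.0f} Mbit/s over loopback")
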
        • by klui ( 457783 ) on Monday June 20, 2005 @11:54PM (#12869609)
          It would depend on the implementation. Not all mobos with built-in ports have "direct access." Some of them go through a shared bus or worse, the PCI bus.

          Intel's implementation for the 865P/875P chipset goes through the memory hub directly http://www.intel.com/design/chipsets/schematics/25281202.pdf [intel.com], while the i845 chipset has the ethernet interface connected to the ICH4 controller hub that is shared among other devices like the PCI bus http://www.intel.com/design/chipsets/datashts/25192401.pdf [intel.com]. VIA's PT894/PT880 ethernet connection goes through a "VIA Connectivity" bus much like the Intel 845 http://www.via.com.tw/en/products/chipsets/p4-series/pt894pro [via.com.tw] and http://www.via.com.tw/en/products/chipsets/p4-series/pt880 [via.com.tw]. There were also some value motherboards that, although I recall they used good/decent chipsets, had their built-in gigabit ethernet ports hung off the PCI bus. I cannot recall what these were, but I read about them on AnandTech several years ago.
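
          On a Linux box you can check where a given "onboard" port actually sits without digging up the chipset schematic: /sys/class/net/<iface>/device is a symlink into the PCI device tree, so resolving it shows which bus the port hangs off. A small sketch (the interface name is an assumption):

            # Resolve a network interface's sysfs symlink to its PCI path.
            import os, sys

            iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"   # assumed name
            dev = f"/sys/class/net/{iface}/device"
            if os.path.exists(dev):
                # Typically prints something like /sys/devices/pci0000:00/0000:00:19.0
                print(os.path.realpath(dev))
            else:
                print(f"{iface}: no PCI device link (virtual interface?)")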

      • I have found that the CPU load due to disk I/O is usually a lot worse than the CPU load from the network card. A nice test is to take a computer with both SCSI and IDE drives and try moving data off each of the drives over the network. The IDE drive will cause the system to slow down noticeably, while the SCSI drive will barely (or not at all) affect the system.
      • No.

        Realistically there are bottlenecks all over the place, and of these, two prevent nearly any computer from reaching 1G.

        1. Interrupt handling bottleneck. Even with interrupt mitigation, your typical pps value for a single-CPU P4 is under 100 kpps. It falls to under 60 kpps when using Intel dual CPUs (dunno about AMD or Via) or SMT, due to the overly deep pipeline on the P4. That is way less than 1G for small packets (see the worked numbers below).

        2. IO bottleneck. Many motherboards have IO-to-memory speeds which are realistica
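
        To put the pps figures in point 1 in perspective, a worked calculation (standard Ethernet framing overheads; the ~100 kpps figure is the parent's claim, not something measured here):

          # Packets/second needed to fill 1 Gbit/s with minimum-size frames:
          # 64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 84 bytes.
          LINE_RATE = 1_000_000_000                  # bits/s
          ON_WIRE = (64 + 8 + 12) * 8                # bits per minimum-size frame
          need = LINE_RATE / ON_WIRE                 # ~1.49 Mpps
          have = 100_000                             # parent's ~100 kpps per CPU
          print(f"line rate needs ~{need / 1e6:.2f} Mpps; "
                f"{have / 1e3:.0f} kpps covers about {100 * have / need:.0f}% of that")
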
    • Back when our company was running ATM on the backbone and in our HQ offices, all of the FORE HE155 NICs were "smart". That was also due, in part, to the particular nature of ATM. The NICs handled their own routing in addition to participating in LANE and PNNI services. Very little of the network load was actually handled by the servers themselves. It was really very nice, and the NICs themselves were more than $600 a pop.

      Load on our servers from network processing increased easily by 20% when we moved to an
      • by Phil Karn ( 14620 ) <karn@@@ka9q...net> on Monday June 20, 2005 @10:52PM (#12869325) Homepage
        And how long ago was that? What kind of servers had loads increase by 20% when you dumped the "smart" NICs? How much faster have general purpose CPUs gotten since then? And whose unusually inefficient TCP/IP stack and/or Ethernet driver were you running?

        "Smart" network cards are one of those bad ideas that keep coming back from the grave, because computer science seems to lose its collective memory every decade or so.

        Fifteen years ago, Van Jacobson gave a wonderful presentation at SIGCOMM 1990 on just why they were such a bad idea. The reason is very simple. A modern, well-tuned and optimized TCP/IP stack can process a packet with only about 20 instructions on average. Very few "smart" controller cards have host interfaces that can be spoken to with so few instructions! The switch to and from kernel context will usually cost you more than TCP/IP.

        Not only that, but the coprocessor on the "smart" controller card inevitably ends up being considerably slower than the host CPU, because typical host CPUs are made in much larger quantities, enjoy large economies of scale, and are updated frequently. So you often have the absurd situation of a blazingly fast and modern host CPU twiddling its thumbs waiting for some piss-poor slow CPU on a "smart" controller to execute a protocol handler that could have been done on the host with fewer instructions than it takes just to move a packet to or from the "smart" card.

        And if that weren't enough, rarely do these "smart" network controllers come with open source firmware. Usually the company that makes them obsoletes them quickly (because they don't sell well) and/or goes out of business, and you're left with a very expensive paperweight.

        Since his talk, Ethernet interfaces have totally obsoleted "smart" network cards. They now come with lots of internal buffering to avoid losing packets when interrupt latencies are high, and they take relatively few instructions per byte of user data moved. What more could you want?

        • by adolf ( 21054 ) *
          Ya know, that's the same sort of argument I've been using to promote software RAID vs. hardware RAID.

          I've learned this: Nobody cares. People will blindly spend hundreds, sometimes thousands, of dollars on specialized gear to offload their precious CPUs.

          When it is explained to them that better system performance can be had for less money by simply buying a faster CPU, they throw up their hands and blindly reassert that dedicated hardware must be better, by simple virtue of the fact that it is dedicated.
        • by 0rbit4l ( 669001 ) on Tuesday June 21, 2005 @12:32AM (#12869764)
          The reason is very simple. A modern, well-tuned and optimized TCP/IP stack can process a packet with only about 20 instructions on average.
          No, Van Jacobson showed that the fast-path receive could be done in 30 instructions. This doesn't include TCP/IP checksum calculation (a "smart"-NIC feature you ignorantly deride), nor is it even remotely realistic for a modern TCP/IP stack that supports modern RFCs. You're not including timeout code, connection establishment, state updates, or reassembly on the receive side, and you conveniently ignore segmentation issues on the send side. If "smart" NICs are such a bad idea, then I guess Intel and Sun are really up a creek - Intel currently supports TCP segmentation offload (pushing the packet segmentation task from the TCP stack onto the hardware), and is moving to push the entire TCP stack to a dedicated processor + NIC combo.
          Since his talk, Ethernet interfaces have totally obsoleted "smart" network cards.
          You couldn't be more wrong. Since the 90s, the boundary of what the NIC should do and what the OS should do has been repeatedly re-examined, and industry leaders in networking have successfully deployed products that big-iron servers rely on.
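
          For the curious: on Linux you can see which of these offloads a given NIC/driver pair actually advertises with ethtool's -k ("show offload settings") option. A thin, report-only wrapper, for illustration (the interface name is an assumption, and ethtool must be installed):

            # Print the checksum/segmentation/scatter-gather lines from "ethtool -k".
            import subprocess, sys

            iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"   # assumed name
            out = subprocess.run(["ethtool", "-k", iface],
                                 capture_output=True, text=True, check=True).stdout
            for line in out.splitlines():
                if any(k in line for k in ("checksum", "segmentation", "scatter-gather")):
                    print(line.strip())
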
    • If you use an nForce motherboard under Linux, you have to set it to high throughput or low CPU use.

      It causes stuttering audio and other funky stuff on one of the settings. That was the nForce 1, I believe, on RH8 and RH9 perhaps. I have not tested in a long while though. The issue showed up on Windows at one point too. It could be poor drivers.
    • by SuperBanana ( 662181 ) on Monday June 20, 2005 @09:49PM (#12868993)
      Intel had the 586-driven smart-cards, and I believe 3Com had them as well. They were intended to offload the CPU by putting parts of the stack on the card.

      You're probably thinking of the i960-based cards, though Intel's PRO series adapters (not i960 based) do something similar (TCP checksumming is now built into the chipset, and most OS drivers now know how to take advantage of it). That processor, and its variants, were used in everything from network cards to RAID controllers.

      They failed because the performance gain and CPU offload numbers were never enough to justify the price difference.

      Ding ding ding. I forget who said it (maybe Alan Cox, but I'm REALLY not sure about that), but the opinion was along the lines that it would always be more beneficial to throw the money at a faster processor (or a second processor, etc.), because you'd get a performance boost everywhere. $300 buys quite a bit of extra CPU horsepower these days, and there's no need for the hassles of custom drivers and such. Nowadays CPUs are just so damn fast, it's also not really necessary.

      • Ding ding ding. I forget who said it (maybe Alan Cox, but I'm REALLY not sure about that), but the opinion was along the lines that it would always be more beneficial to throw the money at a faster processor (or a second processor, etc.), because you'd get a performance boost everywhere.

        Interrupts are the one place where it's not remotely true. A faster processor will allow your system to handle significantly more interrupts. The whole interrupt model needs to be thrown out and replaced with something mu

    • by X ( 1235 )
      I wonder what has changed? I have never known the CPU to get dragged down by network traffic, but maybe in the network server markets it is different

      The thing that has changed is that the frequency at which frames arrive has gone up. Unless you can use jumbo frames (and even then, if the payloads are small), GigE is delivering the same-sized frames as fast ethernet, just 10x faster. This tends to create a hell of a lot more interrupts for the processor to handle (a condition made worse by the deeper pipeli
      • But while the frame rate has increased, so has the CPU. In 1990, we were looking at 200 MHz CPUs max, and the frame rate was 10Mb less collision loss, which was at least 50%. Now we have 3.5GHz systems dealing with ethernet over 1Gb fiber with the same collision rate problem. If you do the math, we haven't really made the problem any worse.
      • Any NIC worth its salt has DMA, which drastically cuts down on the number of interrupts.
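
        One crude way to see how many interrupts a NIC is actually raising (and how much DMA batching or coalescing helps) is to sample /proc/interrupts over an interval. A Linux-only sketch; the device label is an assumption:

          # Interrupts/second for any /proc/interrupts line mentioning the NIC.
          import time

          TAG = "eth0"                               # assumed label in /proc/interrupts

          def irq_total(tag):
              total = 0
              with open("/proc/interrupts") as f:
                  for line in f:
                      if tag in line:
                          # numeric columns after the IRQ number are per-CPU counts
                          total += sum(int(x) for x in line.split()[1:] if x.isdigit())
              return total

          before = irq_total(TAG)
          time.sleep(5)                              # generate traffic during this window
          after = irq_total(TAG)
          print(f"{TAG}: {(after - before) / 5:.0f} interrupts/s")
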
    • There are a lot of Cards that do this now. You can get a 3com 3c905c, which does at least partial offloading, for about 20 bucks.
      • by Fweeky ( 41046 ) on Monday June 20, 2005 @10:29PM (#12869212) Homepage
        I expect you can get an Intel 1000/Pro for around $30; full TCP/IP checksum offloads in both directions, interrupt moderation, jumbo frames, and Intel even write their own open source drivers.

        Heh, my on-board Realtek GigE chip has checksum offloads too, but even with them on, 300Mbps would have me up to 70% system/interrupt CPU load (and I hear the checksumming is a bit.. broken); I barely scrape 30% with a PRO/1000GT.
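
        For anyone wanting to reproduce that kind of system/interrupt CPU figure, the aggregate line of /proc/stat gives the split directly. A rough sketch (Linux-only; fields follow the standard user/nice/system/idle/iowait/irq/softirq order):

          # Share of CPU time spent in system + irq + softirq over a 5-second window.
          import time

          def cpu_times():
              with open("/proc/stat") as f:
                  return [int(x) for x in f.readline().split()[1:8]]

          a = cpu_times()
          time.sleep(5)                              # run the transfer during this window
          b = cpu_times()
          d = [y - x for x, y in zip(a, b)]
          share = (d[2] + d[5] + d[6]) / sum(d)      # system, irq, softirq
          print(f"system+interrupt CPU share: {100 * share:.1f}%")
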
    • It's not exactly uncommon when you get into high-load, non-PC-derived devices.

      Take for instance the APX-8000..

      This beast has a dialup port density that will serve an entire small town.

      The ethernet controller has its own Intel RISC processor... though the versions I had used the older cast of that CPU, which looks like a Pentium die cast. (Newer ones are the size of a pinky.)

      Looks like they salvaged parts from the ascend/lucent max series to build one. (the early units were interesting)

      In any event,
    • Remember how hardware used to be so much more expensive (transistor prices were orders of magnitude higher) in the 80s? It is now economical to distribute load throughout the system. Price/performance is favorable - we have so many extra cycles it makes sense to no longer centralize them. High bandwidth between systems is much in demand, so this product addresses the lower end of that need.

      Multiport (4+) gigabit network traffic can generate significant load on a machine. This can be multiplied with b
    • The biggest thing that has changed is the speed.

      10BaseT (maybe even 100BaseT) isn't slamming the bus with interrupts at the rates possible for Gigabit Ethernet. Throw enough interrupts at the CPU and it ends up running the network interrupt handler at the cost of pushing whatever it was working on off the CPU. Even modest gains can make a big difference in performance.

      If there's a group of people believing it would be more cost effective to buy "super" network cards than to rewrite the
    • by jamesh ( 87723 )
      If you've ever had one of the recent worms come in behind a linux router, then you'll see how network traffic can make a cpu stop.

      I made a boo boo in a firewall rule and opened up an unpatched mssql server to the internet (*hangs head in shame*). Within 30 seconds it had caught one of the mssql worms and had stopped the linux router dead. Pulling the network plug from the mssql server caused the linux router to come instantly back to life. With TCP and all its flow control goodness it's probably not a prob
    • by EtherMonkey ( 705611 ) on Monday June 20, 2005 @11:40PM (#12869546)
      Agreed, but it's been even more recent than the early 90s. The late 90s also had their run of so-called "Intelligent" network cards.

      I worked for a large HP/Intel VAR at the time, and we were selling $500 Intel Intelligent Server NICs like they were Big Macs. Then one day one of our biggest customers called in a fit. It seems his manager had asked him to do a quick comparison between a smart Intel NIC and a regular Intel NIC, so he could tell his bean-counters to get stuffed. It turns out that we were NOT ABLE TO FIND ANY SYSTEM OR TRAFFIC CONFIGURATION that would result in higher throughput, lower CPU utilization or lower memory utilization when using the smart NIC.

      In other words, the standard $100 Intel NIC (PILA8465B, as I recall) beat the piss out of the much more expensive Intel intelligent NIC with on-board co-processor.

      Within 3 months we stopped getting any orders for the smart NICs. Within 6 months Intel retaliated by disabling server features (adapter fault tolerance, VLAN tagging, load balancing and Cisco Fast EtherChannel support) on the basic NIC, in an effort to save the smart NIC. When this didn't work, they modified the driver so the server features would only work with a re-released version of the "dumb" NIC at a higher price (the only difference between the cheapest and most expensive versions was an ID string burned into a PAL on the NIC).

      Similar experiences with earlier cards from Intel, IBM, and others. In every instance I tested, a plain old NIC (not junk, but the basic model from a reputable manufacturer) always outperformed the NICs with on-board brains and/or co-processors.

      Maybe this Level 5 NIC has some new voodoo engineering, but I'd have to see real-world testing to believe it. Especially from a company that is apparently intentionally playing off Level 3 Communications' name recognition for its own benefit.
  • by grub ( 11606 ) <slashdot@grub.net> on Monday June 20, 2005 @09:05PM (#12868742) Homepage Journal

    Is There a Place for a $500 Ethernet Card?

    Of course there is, assuming the card performs as advertised. Sheer conjecture: the card likely has a lot of the smarts onboard. Maybe it has some of the TCP and IP stuff on board too (checksumming, etc.). Compare that to a crapbox $10.95 RealTek[a] card, which generates interrupts like mad because it has no smarts, and you'd probably be very surprised. (Think of comparing a decent hardware modem to a software-based WinModem.)

    [a] I had a sales-drone at Computer Boulevard here in Winnipeg just RAVE about RealTek cards. I said I really wanted 3 Intel or 3COM cards for a new work proxy server and he said 'Why? RealTeks are way cheaper and run at the same speed!' Retard.

    • Amazing. You'd think he'd be more than happy to sell more expensive items to you. Maybe he's trying to build your trust so he can pull a sleight-of-hand later? Or he really thinks RealTek cards are better. Ugh.
    • From the article I couldn't tell how this is different from TCP/IP Offload Engines (TOE). It looks like there are a bunch of companies coming up with TOE implementations; is this going to be a product that fails once the TOE standards come out?
      • Uh - what TOE standards...

        The standard is TCP - it has been around for 20 years or so, and is not likely to change much.

        What you might be thinking about are these abominations called RDMA protocols coming out of the RDMAC (Remote DMA - think about it and shudder) - the idea with strict TOE is that external to the box, you can't tell it is running TOE or just a BSD networking stack (or your favorite flavor of TCP anyway)

      • This is different from a TCP/IP offload engine in one critical feature: it is designed so that the network interface is run directly out of user-space, without involving the kernel. It may still do the network stack in software, but it does it in-process. This means that there is no copying from/to userspace and no context switch for every read or write. When you're handling gigabit-speed traffic, this becomes a big issue. Just like video players today use special OS features to open a video port direct
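
        The per-call cost being avoided there is easy to feel even without special hardware: reading the same number of bytes in small versus large chunks is dominated by per-syscall overhead, which is the kind of per-read cost a user-space stack tries to eliminate. A rough Unix-only sketch with illustrative sizes (not a claim about this product):

          # Same number of bytes from /dev/zero, very different syscall counts.
          import os, time

          TOTAL = 64 * 1024 * 1024                   # 64 MiB

          def read_all(chunk):
              fd = os.open("/dev/zero", os.O_RDONLY)
              t0 = time.perf_counter()
              remaining = TOTAL
              while remaining > 0:
                  remaining -= len(os.read(fd, min(chunk, remaining)))
              os.close(fd)
              return time.perf_counter() - t0

          for chunk in (512, 64 * 1024):
              secs = read_all(chunk)
              print(f"{chunk:>6}-byte reads: {secs:.3f} s ({TOTAL // chunk} syscalls)")
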
    • Back in the early-mid 80's (and probably even before then) IBM mainframes using SNA instead of TCPIP used special networking processors that handled all of that "networking stuff" so that the mainframe CPU (which really was a "unit" and not just a single chip) could just concentrate on running its jobs and not be interrupted by the communications end of things. Everything old is new again. Same situation, just smaller and faster (CPU and helper communications card take up 1U in a rack instead of 1 whole c
    • The fact is that RealTek and the likes are very much responsible for the proliferation of ethernet on inexpensive desktop PCs.

      Sure, you may not like them very much because their equipment is cheap, but in order to put $300 PCs into homes, you're going to need to cut corners somewhere, and thankfully we were able to support broadband in the process.

      Although I'd rather have a 3com in a server, I honestly don't see the huge benefits of having premium network gear anymore. The divide between the Premium and
    • I walked into Computer Boulevard about three weeks ago looking for a replacement hardware modem (unfortunately I don't live in Winnipeg anymore, but about 250 km north east, no cable or even DSL here.)

      So the sales guy asks me if I need help and I tell him I want a 56k hardware modem. So he ushers me to the modem section to let me browse. Sure enough, there's nothing there but winmodems. He comes back a couple minutes later and asks if I found what I was looking for. I said no, and that what I was looking f
      • >I don't think I should know more than the clerk at the store.

        I don't really think that's a valid complaint. In a perfect world, yes, but in retail? Not really.

        Running Linux is like owning a foreign car and expecting GM/Ford guys to fix it just as easily. It's one of the real liabilities of not running the monopoly/de facto standard OS. As a Linux user, you should know what you're buying. I mean, users often get criticized for being ignorant of their systems, but you want the same ignorance and expect
  • by Anonymous Coward on Monday June 20, 2005 @09:06PM (#12868751)
    right inside my computer :)
  • by Anonymous Coward
    Short answer;

    Where there are PHB's, there is overpriced hardware.
  • yes, there is (Score:4, Informative)

    by commo1 ( 709770 ) on Monday June 20, 2005 @09:08PM (#12868765)
    Million dollar PCs (sans gold-plating) and (quite seriously) mission-critical blade servers, customer ip routers, etc.... I have clients that pay upwards of $600 canadian now (though that's for quad cards with ample on-board processing to off-load from cpu horsepower).
    • But at $500, and running linux, it's like having a router in my PC. I could be wrong, but this allows me to make my media center PC a firewall and router... all kinds of stuff.

      Not that I own a media center PC - but I suspect this type of thing will catch on with the geeks if they can login to it.
  • They mention latency in the press release without saying what it is. I couldn't find it on the site. Maybe the tech docs have it. They compare it to Myrinet without saying what the latency is. It could be great. It could be crap.
  • by Sv-Manowar ( 772313 ) on Monday June 20, 2005 @09:12PM (#12868784) Homepage Journal
    This isn't exactly a new concept. Intel has been selling ethernet chips with built-in SSL accelerators for quite some time, and the advantage of offloading duties from software to hardware (see Intel EtherExpress vs. RealTek-style cards) is obvious. Whether these cards offload enough of the normal duties of a typical cluster node to be worthwhile should be interesting to see, as there are a wide variety of cluster load types, and obviously these cards will have a niche to fit into alongside their competitors in the diverse set of demands around cluster networks. As for the price tag, I seem to remember gigabit cards being extremely expensive a few years back, and it's probably pretty competitive with where they're aiming this product, alongside Myrinet and InfiniBand.
  • Knock-Offs (Score:5, Insightful)

    by randomErr ( 172078 ) <ervin.kosch@nOspAm.gmail.com> on Monday June 20, 2005 @09:13PM (#12868792) Journal
    I give Realtek 6 months tops to make their own knock-off of the card for $24.95.
  • by Famanoran ( 568910 ) on Monday June 20, 2005 @09:16PM (#12868808)
    But not necessarily where the vendors think it is.

    Back when I was working at a startup developing anti-DDoS technology, one of the biggest problems we faced when implementing GigE was the load on the PCI bus. (This was before we started using PCI-X.)

    It depends on exactly how customisable the network card software is, but if you could plonk a couple of those into whatever system you wanted - and if the cards themselves could do, say, signature detection of various flood types, or basic analysis of traffic trends - then there is a very definite market.

    I realise the core issue is not addressed (if your physical pipe is full, then you're fucked), but it takes the load of dropping the malicious packets off the host CPU so it can attempt to service whatever valid traffic actually gets through.

    And then there is IP fragmentation. Bad fragments? Perhaps a dodgy fragmentation implementation in the stack? (You know which OS I mean.) Let's just drop that before the host sees it and crashes.

    I don't know, I can't find any real information describing what they do, but I can certainly see uses for this.

  • by sysadmn ( 29788 )
    Wonder how it differs from Sun's Network Coprocessor [sun.com], which used an onboard 16 MHz M68000 to offload TCP packet processing from the mighty 40 MHz SPARC processors in an SS690. Sounds like Level 5's product (not Level 3, as the intro implies) also includes "improved" networking protocols that are supposed to be compatible.
  • Linux before Windows (Score:3, Interesting)

    by mepperpint ( 790350 ) on Monday June 20, 2005 @09:18PM (#12868812)
    It's nice to see a piece of hardware that ships with linux drivers and promises Windows support later. So frequently applications and hardware are first supported under Windows and occasionally ported to other platforms.
  • by Ingolfke ( 515826 ) on Monday June 20, 2005 @09:23PM (#12868842) Journal
    The name Level 5 refers to the network protocol stack where level 5 delivers data from the network to the application, according to Karr. The company isn't concerned about any potential confusion with Internet Protocol telecom Level 3 Communications Inc. On the contrary, he quipped, "It's working in our favor. People say, 'Yes, we've heard of you. You're a big company.'"

    As lawyers at Level 3 begin salivating at the thought of all of the potential lawsuits.
    • "As lawyers at Level 3 begin salivating at thought of all of the potential lawsuits."

      Actually, salivation does not come till Level 8. However, at Level 3 you do get "Detect Potential Lawsuit." Don't worry, once you hit Level 3 you'll be a Level 8 in no time.
  • by crusty_architect ( 642208 ) on Monday June 20, 2005 @09:24PM (#12868846) Homepage
    We use Filers for storage at Gigabit speeds. Compared to our SAN/FC environments, we see much higher CPU utilisation on our Sol 8 boxes, especially when attempting to get to Gigabit speeds.
  • by jellomizer ( 103300 ) * on Monday June 20, 2005 @09:27PM (#12868862)
    For a $500 network card you have to have a good reason that you will need it. I am sure there are applications that will utilize it, but for the price it may not be worth it. With sub-$500 computers coming of age, it is probably cheaper just to split all your services onto smaller boxen and have a load-balancing switch/router. Computers are cheap today; $500 for a network card is steep and will only fill a niche market. Perhaps if the price were in the $50 range it would be more widely accepted. But with good-enough systems at 1k, an additional 500 could be used for a faster CPU rather than a faster network CPU.
    • I highly doubt they're aiming these cards at the general public. The kind of folks who worry about this kind of performance aren't buying $500 computers, they're buying $5,000 + computers, and trying to tweak every ounce of performance out of them. I'm willing to bet my employer is going to look pretty seriously at these cards for some of our heavy-use systems.

      Sometimes you can't "split all your services onto smaller boxen and have a load balancing switch/router". Not everything on the network is a web ser
    • Your PC is not the target market. Clusters, large datacenters and applications that require communications as close to instantaneous as possible are. $500 is a drop in the bucket with the potential of a huge payback for those installations

      Not everything is about Slashdotters home computers.
  • How about they sell it for $100 when you buy the $600/year remote network admin package? Premium packages can include live security monitoring by companies like Counterpane. If you've got a need for network performance like that offered by these cards, you've got a need for a sysadmin, including security. Or you've just got too much money.
  • Level 5 Networks, which emerged from stealth startup status with its own brand of network cards and software

    I just saw a story on slashdot today that related to this [slashdot.org].
  • by ChozCunningham ( 698051 ) <slashdot@org.chozcunningham@com> on Monday June 20, 2005 @09:52PM (#12869012) Homepage
    Only one place. Amongst gamers.

    "A $500 LAN Card? Oh my God, Stevie, thats almost as much as my GeForce9900XTLSI+ cost!" Said the kid with the Lone Gunmen T-Shirt.*

    "That's nothing, This 8-Track-ROM player off of ThinkGeekcost almost a cool grand" Stevie said, as the other nerds bowed around his glowing and chromed Frag Machine.

    *Lone Gunmen T-Shirts [thinkgeek.com] coming soon. 8-Track-ROM [thinkgeek.com]'s, too.

  • by sporktoast ( 246027 ) on Monday June 20, 2005 @09:53PM (#12869025) Homepage

    Sure [amazon.com] there [amazon.com] is [amazon.com].
  • In a word, no (Score:4, Interesting)

    by bofkentucky ( 555107 ) <bofkentucky@NOSpAm.gmail.com> on Monday June 20, 2005 @09:55PM (#12869035) Homepage Journal
    Take Sun: some of their new server kit this year is going to ship 10Gbit/s ethernet on the board, which, according to their docs, is going to take 3 USIV procs (6 cores) to keep the bus saturated. But when you are looking at 8- to 64-way server boxes, who cares about those 3 procs, especially when in 24-30 months it will take less than one proc to handle that load (quad cores + Moore's Law), and eventually one thread will have the horsepower.

    Surely those smart dudes at Via, AMD, Intel, Samsung, Nat Semi, and/or Motorola aren't going to:
    A) FUD this to death if it really works
    B) File patent suit until doomsday to keep it locked up
    C) Buy them out
    D) Let them wither on the vine and then buy the IP.
  • It's called a TOE card, and companies have been producing them commercially for a few years now.
  • I've had a hard time learning that there is a lot more to computers than a thousand dollar workstation or laptop.

    There are a LOT of > USD 10K servers bought every year. If a USD 500 NIC can improve the total performance of such a server by 5%, then yeah it's worth it.
  • The place where things like TCP offload and RDMA support really matter is the high-end space. The major limitation on building really large clusters is the interconnect. If you look at the top 500 supercomputer list, you'll find that the top computers all use something better than gigabit ethernet (mostly InfiniBand and maybe Myrinet). The reason for using InfiniBand is that the communication latency is around 5us. For comparison, a standard GigE card has a latency of around 70us.

    That being said, ht [ammasso.com]

  • IPSEC (Score:4, Insightful)

    by Omnifarious ( 11933 ) <eric-slashNO@SPAMomnifarious.org> on Monday June 20, 2005 @10:06PM (#12869096) Homepage Journal

    If this card can do most of the work of IPSEC for me, it'd be a big win.

    My main concern though is that with two ports, how can I be absolutely certain the packet has to go through my firewall rules before it can go anywhere?

    Of course, the extra ports could be an advantage. If it could handle all the rules for you, then it might even be capable of functioning as a layer 4 switch and sending out a new IP packet before completely receiving said packet.

    But, I'd want all the software on that card to be Open Source.

    • Re:IPSEC (Score:3, Informative)

      by dbIII ( 701233 )

      If this card can do most of the work of IPSEC for me, it'd be a big win.

      SnapGear/CyberGuard make little embedded firewall/VPN/router cards that can do this: they plug into a PCI slot and pretend to be a network card as far as the host computer's OS is concerned - but that's a completely different thing from what is talked about in the article.

      I'd want all the software on that card to be Open Source

      They run Linux, and the IPSEC implementation is open source as well.

  • One word... (Score:3, Insightful)

    by jcdick1 ( 254644 ) on Monday June 20, 2005 @10:11PM (#12869113)
    Virtualization.

    These are the kinds of NICs that would be put into a datacenter that is leaning heavily toward VMware GSX or ESX servers. Any bit of offload of the CPU in sharing the NICs is a good thing.
  • by Chmarr ( 18662 ) on Tuesday June 21, 2005 @01:16AM (#12869909)
    From the article:
    The name Level 5 refers to the network protocol stack where level 5 delivers data from the network to the application, according to Karr. The company isn't concerned about any potential confusion with Internet Protocol telecom Level 3 Communications Inc. On the contrary, he quipped, "It's working in our favor. People say, 'Yes, we've heard of you. You're a big company.'"

    Congratulations, guys... you just admitted to causing actual trademark confusion... have fun in the courtroom.
  • Accelerating Ethernet in hardware, while remaining 100% compatible with the standard protocols on the wire, isn't all that new. Just over 2 years ago, I worked on a TOE (TCP offload engine) card at Adaptec.

    http://www.adaptec.com/worldwide/product/prodfamilymatrix.html?cat=%2fTechnology%2fNAC+Cards%2fNAC+Cards [adaptec.com]

    It was a complete TCP stack in hardware (with the exception of startup/teardown, which still was intentionally done in software, for purposes of security/accounting).

    Once the TCP connection was established, the packets were completely handled in hardware, and the resulting TCP payload data was DMA'ed directly to the application's memory when a read request was made. Same thing in the other direction, for a write request. Very fast!

    I'm not sure of the exact numbers, but we reduced CPU utilization to around 10%-20% of what it was with a non-accelerated card, and were able to saturate the wire in both directions using only a 1.0GHz CPU. That is something that is normally difficult to do, given the common rule of thumb that you need 1MHz of CPU speed to handle every 1Mbit of data on the wire.
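
    Taking that rule of thumb at face value, the arithmetic for gigabit and 10-gigabit wire speeds looks like this (a sketch of the stated heuristic only, not a measurement):

      # "1 MHz of CPU per 1 Mbit/s on the wire", applied to 1 GbE and 10 GbE.
      RULE_MHZ_PER_MBIT = 1.0
      for name, mbit in (("1 GbE", 1_000), ("10 GbE", 10_000)):
          need_mhz = mbit * RULE_MHZ_PER_MBIT
          print(f"{name}: ~{need_mhz:.0f} MHz of CPU per saturated direction "
                f"({need_mhz / 1000:.1f} GHz)")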

    To make a long story short, it didn't sell, and I (among many others) was laid off.

    The reason was mostly about price/performance: who would pay that much for just a gigabit ethernet card? The money that was spent on a TOE-accelerated network card would be better spent on a faster CPU in general, or a more specialized interconnect such as InfiniBand.

    When 10Gb Ethernet becomes a reality, we will once again need TOE-accelerated network cards (since there are no 10GHz CPUs today, as we seem to have hit a wall at around 4GHz). I'd keep my eye on Chelsio [chelsio.com]: of the Ethernet TOE vendors still standing, they seem to have a good product.

    BTW, did you know that 10Gb Ethernet is basically "InfiniBand lite"? Take InfiniBand, drop the special upper-layer protocols so that it's just raw packets on the wire, treat that with the same semantics as Ethernet, and you have 10GbE. I can predict that Ethernet and InfiniBand will conceptually merge, sometime in the future. Maybe Ethernet will become a subset of InfiniBand, like SATA is a subset of SAS....
  • by Ancient_Hacker ( 751168 ) on Tuesday June 21, 2005 @06:54AM (#12870893)
    This is yet another round of the GCMOH. Anytime there are idle hardware engineers, they find something that can be moved off the main CPU into hardware (or, these days, almost always another processor). This is almost always a bad idea:
    • Erecting yet another edifice brings on the huge and unavoidable overheads of yet another different CPU instruction set, yet another real-time scheduler, another code base, another set of performance and timing bottlenecks. Another group of programmers. Another set of in-circuit emulators, debugging tools, and system kernel. Another cycle of testing, bug fixes, updates.
    • It sets up a split in the programming team-- there's now much more reason for finger-pointing and argument and mistrust.
    • The extra money would usually buy you another CPU and lots of RAM, resources that would benefit every part of the system, not just the network I/O.
    • The separate I/O processor usually requires the geekiest and least communicative of the programmers-- not a good thing. The manuals for the I/O card are likely to be very brief and sketchy, and rarely up to date.
    • The I/O processor is almost always at least one generation of silicon technology older than the CPU, so even though the glossy brochures just drip with Speeeed! and Vrooom!-y adjectives, it's not that speedy in comparison to the CPU.
    For examples, see the $4000 graphics co-processor that IBM tried to sell for the PC (IIRC the CPU could outdo it), the various disk-compression cards for the PC, the serial ports on the Mac IIvx (very expensive and not noticeably better), and the P-code chip for the PDP-11/03. All very expensive, with blasé performance per dollar.
