
Is There a Place for a $500 Ethernet Card?

prostoalex writes "ComputerWorld magazine runs a story on Level 5 Networks, which emerged from stealth startup status with its own brand of network cards and software called EtherFabric. The company claims the cards reduce the load on servers' CPUs and improve communication between servers. And it's not vaporware: 'The EtherFabric software shipping Monday runs on the Linux kernel 2.4 and 2.6, with support for Windows and Unix coming in the first half of next year. High volume pricing is $295 for a two-port, 1Gb-per-port EtherFabric network interface card and software, while low volume quantities start from $495.'"
  • by bananahead ( 829691 ) * on Monday June 20, 2005 @10:04PM (#12868739) Journal
    This sounds very similar to the 'smart card' concept back in the late 80's and early 90's. Intel had the 586-driven smart-cards, and I believe 3Com had them as well. They were intended to offload the CPU by putting parts of the stack on the card. They failed because the performance gain and CPU offload numbers were never enough to justify the price difference.

    I wonder what has changed? I have never known the CPU to get dragged down by network traffic, but maybe it is different in the network server market. However, with Ethernet chipsets being designed onto the motherboard and integrated into the tight circle of RAM and CPU, it isn't clear there is a need for this.

    How long before the network control is put into the CPU? It is going to be tough to beat that type of performance.

  • by grub ( 11606 ) <slashdot@grub.net> on Monday June 20, 2005 @10:05PM (#12868742) Homepage Journal

    Is There a Place for a $500 Ethernet Card?

    Of course there is, assuming the card performs as advertised. Sheer conjecture: the card likely has a lot of the smarts onboard. Maybe it has some of the TCP and IP stuff on board too (checksum, etc). Compare that to a crapbox $10.95 RealTek[a] card which generates interrupts like mad because it has no smarts, and you'd probably be very surprised. (Think of comparing a decent hardware modem to a software-based WinModem.) A quick way to check what a given card offloads under Linux is sketched below.

    [a] I had a sales-drone at Computer Boulevard here in Winnipeg just RAVE about RealTek cards. I said I really wanted 3 Intel or 3COM cards for a new work proxy server and he said 'Why? RealTeks are way cheaper and run at the same speed!' Retard.
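
    For the curious, here is a minimal sketch of how to ask a Linux 2.4/2.6 kernel whether a card does TX checksum offload, using the standard SIOCETHTOOL ioctl ("eth0" is just a placeholder device name; drivers that predate ethtool support will simply return an error):

        /* checkoffload.c - query TX checksum offload via the ethtool ioctl */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <sys/socket.h>
        #include <net/if.h>
        #include <linux/ethtool.h>
        #include <linux/sockios.h>

        int main(int argc, char **argv)
        {
            const char *dev = argc > 1 ? argv[1] : "eth0";
            struct ethtool_value ev;
            struct ifreq ifr;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            ev.cmd = ETHTOOL_GTXCSUM;          /* "does the card checksum on TX?" */
            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);
            ifr.ifr_data = (char *)&ev;

            if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
                printf("%s: TX checksum offload is %s\n", dev, ev.data ? "on" : "off");
            else
                perror("SIOCETHTOOL");          /* old driver or no such device */
            close(fd);
            return 0;
        }

    A smart card reports offload on; the $10.95 special typically can't, which is exactly why it hammers the host with work on every packet.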

  • by kc32 ( 879357 ) on Monday June 20, 2005 @10:09PM (#12868767)
    Is there, if your internet connection is anything less than fiber, which is about 99.9% of all connections? Not to mention the fact that not many computers can actually handle that much data at once anyway.
  • by Sv-Manowar ( 772313 ) on Monday June 20, 2005 @10:12PM (#12868784) Homepage Journal
    This isn't exactly an entirely new concept. Intel has been selling ethernet chips with built-in SSL accelerators for quite some time, and the advantage of offloading duties from software to hardware (see Intel EtherExpress vs. RealTek-style cards) is obvious. Whether these cards offload enough of the normal duties of a typical cluster node to be worthwhile will be interesting to see; there is a wide variety of cluster load types, and these cards will have to find a niche alongside their competitors in the diverse set of demands around cluster networks. As for the price tag, I seem to remember gigabit cards being extremely expensive a few years back, and it's probably pretty competitive with where they're aiming this product, alongside Myrinet and InfiniBand.
  • Knock-Offs (Score:5, Insightful)

    by randomErr ( 172078 ) <.ervin.kosch. .at. .gmail.com.> on Monday June 20, 2005 @10:13PM (#12868792) Journal
    I give Realtek 6 months tops to make their own knock-off of the card for $24.95.
  • by Anonymous Coward on Monday June 20, 2005 @10:15PM (#12868802)
    The Pentagon.
  • by Famanoran ( 568910 ) on Monday June 20, 2005 @10:16PM (#12868808)
    But not necessarily where the vendors think it is.

    Back when I was working at a startup developing anti-DDoS technology, one of the biggest problems we faced when implementing GigE was the load on the PCI bus. (This was before we started using PCI-X.)

    It depends on exactly how customisable the network card software is, but if you could plonk a couple of those into whatever system you wanted - and if the cards themselves could do, say, signature detection of various flood types, or basic analysis of traffic trends - then there is a very definite market.

    I realise the core issue is not addressed (if your physical pipe is full, then you're fucked), but it takes the load of dropping the malicious packets off the host CPU so it can attempt to service whatever valid traffic actually gets through.

    And then there is IP fragmentation. Bad fragments? Perhaps a dodgy fragmentation implementation in the stack? (You know which OS I mean.) Let's just drop that before the host sees it and crashes.

    I don't know, I can't find any real information describing what they do, but I can certainly see uses for this.

  • if your internet connection is anything less than fiber, which is about 99.9% of all connections?

    The other 0.1%, obviously.
  • by ScentCone ( 795499 ) on Monday June 20, 2005 @10:22PM (#12868839)
    if your internet connection is anything less than fiber, which is about 99.9% of all connections? Not to mention the fact that not many computers can actually handle that much data at once anyway

    Listen, when I've got 30 web servers banging away on a single database server, I want each web server in and out as quickly as possible. Every bit of the handshake, query, and results is going to wrap up that much faster if things are faster, period. When you're dealing with a huge data-driven e-commerce site, where every page renders around a hundred or more queries, and there are dozens or hundreds of concurrent page views, this stuff really counts in the aggregate.

    If you sell one more widget per day, all year long, because your web presentation layer is just a little snappier, that's sure as hell going to pay for a $500 NIC. (Some rough numbers below.)
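
    To put rough numbers on that aggregate argument - every figure here is an assumption for illustration, not a measurement of this card:

        /* aggregate.c - back-of-envelope: small per-query savings add up */
        #include <stdio.h>

        int main(void)
        {
            const double queries_per_page = 100;    /* from the comment above     */
            const double saved_per_query  = 50e-6;  /* 50 us shaved per round     */
                                                    /* trip (assumed)             */
            const double pages_per_sec    = 200;    /* across the farm (assumed)  */

            double saved_per_page = queries_per_page * saved_per_query;
            printf("saved per page view : %.1f ms\n", saved_per_page * 1e3);
            printf("freed up per second : %.0f ms\n",
                   saved_per_page * pages_per_sec * 1e3);
            return 0;
        }

    Whether those assumed microseconds are realistic is the whole question, of course, but it shows why "in and out as quickly as possible" compounds.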
  • by Ingolfke ( 515826 ) on Monday June 20, 2005 @10:23PM (#12868842) Journal
    The name Level 5 refers to the network protocol stack where level 5 delivers data from the network to the application, according to Karr. The company isn't concerned about any potential confusion with Internet Protocol telecom Level 3 Communications Inc. On the contrary, he quipped, "It's working in our favor. People say, 'Yes, we've heard of you. You're a big company.'"

    As lawyers at Level 3 begin salivating at the thought of all of the potential lawsuits.
  • by crusty_architect ( 642208 ) on Monday June 20, 2005 @10:24PM (#12868846) Homepage
    We use filers for storage at gigabit speeds. Compared to our SAN/FC environments, we see much higher CPU utilisation on our Sol 8 boxes, especially when attempting to get to gigabit speeds.
  • by jellomizer ( 103300 ) * on Monday June 20, 2005 @10:27PM (#12868862)
    At $500 for a network card, you need a good reason to buy one. I am sure there are applications that will utilize it, but for the price it may not be worth it. With sub-$500 computers coming of age, it is probably cheaper just to split all your services onto smaller boxen and use a load-balancing switch/router. Computers are cheap today; a $500 network card is steep and will only fill a niche market. Perhaps if the price were in the $50 range it would be more widely accepted. But with good-enough systems at $1k, an additional $500 could be used for a faster CPU rather than a faster network CPU.
  • by Desert Raven ( 52125 ) on Monday June 20, 2005 @10:37PM (#12868909)
    I highly doubt they're aiming these cards at the general public. The kind of folks who worry about this kind of performance aren't buying $500 computers, they're buying $5,000 + computers, and trying to tweak every ounce of performance out of them. I'm willing to bet my employer is going to look pretty seriously at these cards for some of our heavy-use systems.

    Sometimes you can't "split all your services onto smaller boxen and have a load balancing switch/router". Not everything on the network is a web server.
  • by njcoder ( 657816 ) on Monday June 20, 2005 @10:39PM (#12868920)
    " When you're dealing with a huge data-driven e-commerce site, where every page renders around a hundred or more queries, "

    Each page renders a hundred or more queries? Sounds like you're better off investing in a better design than better hardware.

  • IPSEC (Score:4, Insightful)

    by Omnifarious ( 11933 ) <eric-slash@omnif ... g minus language> on Monday June 20, 2005 @11:06PM (#12869096) Homepage Journal

    If this card can do most of the work of IPSEC for me, it'd be a big win.

    My main concern though is that with two ports, how can I be absolutely certain the packet has to go through my firewall rules before it can go anywhere?

    Of course, the extra ports could be an advantage. If it could handle all the rules for you, then it might even be capable of functioning as a layer 4 switch, sending out a new IP packet before completely receiving said packet.

    But, I'd want all the software on that card to be Open Source.

  • One word... (Score:3, Insightful)

    by jcdick1 ( 254644 ) on Monday June 20, 2005 @11:11PM (#12869113)
    Virtualization.

    These are the kinds of NICs that would be put into a datacenter that is leaning heavily toward VMware GSX or ESX servers. Any bit of offload of the CPU in sharing the NICs is a good thing.
  • by ProfaneBaby ( 821276 ) on Monday June 20, 2005 @11:16PM (#12869145)
    Built-in ports have direct access... depending on the chipset/motherboard.

    I've seen some 'built-in' broadcom gig-e ports that were on the PCI bus, even though they were technically built into the board. Horrible performance.
  • by Phil Karn ( 14620 ) <karn.ka9q@net> on Monday June 20, 2005 @11:52PM (#12869325) Homepage
    And how long ago was that? What kind of servers had loads increase by 20% when you dumped the "smart" NICs? How much faster have general purpose CPUs gotten since then? And whose unusually inefficient TCP/IP stack and/or Ethernet driver were you running?

    "Smart" network cards are one of those bad ideas that keep coming back from the grave, because computer science seems to lose its collective memory every decade or so.

    Fifteen years ago, Van Jacobson gave a wonderful presentation at SIGCOMM 1990 on just why they were such a bad idea. The reason is very simple. A modern, well-tuned and optimized TCP/IP stack can process a packet with only about 20 instructions on average. Very few "smart" controller cards have host interfaces that can be spoken to with so few instructions! The switch to and from kernel context will usually cost you more than TCP/IP.

    Not only that, but the coprocessor on the "smart" controller card inevitably ends up being considerably slower than the host CPU, because typical host CPUs are made in much larger quantities, enjoy large economies of scale, and are updated frequently. So you often have the absurd situation of a blazingly fast and modern host CPU twiddling its thumbs waiting for some piss-poor slow CPU on a "smart" controller to execute a protocol handler that could have been done on the host with fewer instructions than it takes just to move a packet to or from the "smart" card.

    And if that weren't enough, rarely do these "smart" network controllers come with open source firmware. Usually the company that makes them obsoletes them quickly (because they don't sell well) and/or goes out of business, and you're left with a very expensive paperweight.

    Since his talk, Ethernet interfaces have totally obsoleted "smart" network cards. They now come with lots of internal buffering to avoid losing packets when interrupt latencies are high, and they take relatively few instructions per byte of user data moved. What more could you want? (You can even inspect that interrupt-holding behavior on Linux; a sketch follows.)
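
    That buffering is tunable on Linux: a driver's interrupt-coalescing settings say how long (or for how many frames) the card holds off raising an interrupt. Here is a minimal sketch of reading them through the ethtool ioctl - many 2.4/2.6-era drivers don't implement this and just return an error, and "eth0" is a placeholder:

        /* coalesce.c - read a NIC's interrupt-coalescing parameters */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <sys/socket.h>
        #include <net/if.h>
        #include <linux/ethtool.h>
        #include <linux/sockios.h>

        int main(int argc, char **argv)
        {
            const char *dev = argc > 1 ? argv[1] : "eth0";
            struct ethtool_coalesce ec;
            struct ifreq ifr;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            memset(&ec, 0, sizeof(ec));
            ec.cmd = ETHTOOL_GCOALESCE;       /* "how long do you hold interrupts?" */
            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);
            ifr.ifr_data = (char *)&ec;

            if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
                printf("%s: rx-usecs=%u rx-frames=%u\n",
                       dev, ec.rx_coalesce_usecs, ec.rx_max_coalesced_frames);
            else
                perror("SIOCETHTOOL");        /* many drivers don't implement this */
            close(fd);
            return 0;
        }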

  • "Smaller boxes" is relative. Google's cluster nodes are dual Xeons with terabyte+ HDs. For Google it is small, for anyone else, that is powerful computer you're going to be paying alot for. If you're buying one of those computers you're probably going to look at one of these cards, and that is exactly the market they're looking for.
  • by Harry Balls ( 799916 ) on Tuesday June 21, 2005 @12:03AM (#12869384)
    I looked at their benchmark web page http://www.level5networks.com/prod_etherfabricperf.htm [level5networks.com] where they claim that a typical PC with "conventional" ethernet burns 83.5% of its CPU on communication overhead, while only 16.5% remains for the application.
    But they don't say which CPU was used - probably an 850 MHz Pentium III or something similarly outdated.

    Fact is, on a current 3.x GHz Pentium IV or an equivalent Athlon or Opteron, the communication overhead is in the one-digit range, percentage-wise.

    A famous computer science quote is:
    "Lies, damned lies and benchmarks"
    and another one is
    "Don't trust any statistics that you haven't forged yourself."
  • by truesaer ( 135079 ) on Tuesday June 21, 2005 @12:42AM (#12869556) Homepage
    Not only that, but you can set up your internal network to operate at gigabit speeds, can't you? There is more to the network than the connection to the public internet, even for those who don't have a fiber connection.
  • by jimsingh ( 314245 ) on Tuesday June 21, 2005 @12:48AM (#12869585)
    As CPUs get faster an interrupt costs you more in terms of lost CPU time. So, reducing the number of interrupts is more important now than ever before.

    My 100 Mbps ethernet card generates about 5k interrupts/second when transferring data at about 30 Mbps. Gigabit cards are engineered to hold interrupts until a few packets of data come in, so that a DMA can move larger chunks of data. If this NIC reduces the use of interrupts even further (say by off-loading computation or even the entire TCP/IP stack, and thus allowing even larger DMA transfers), the impact could be substantial.

    Unfortunately, my knowledge of computer innards stops here, so I can't calculate how much CPU time 5000 interrupts actually take, how the new PCI-Express bus changes interrupt processing, or how much of a benefit it would be to have, say, only 1000 interrupts/second instead of 5000. (The sketch below puts some assumed numbers on it.)
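
    A back-of-envelope answer, with loudly assumed inputs: if each interrupt costs on the order of 10,000 CPU cycles end to end (handler entry/exit plus cache and pipeline disruption - an assumption, not a measurement), then on a 3 GHz CPU:

        /* irqcost.c - what fraction of a CPU do N interrupts/sec consume? */
        #include <stdio.h>

        int main(void)
        {
            const double cpu_hz         = 3.0e9;  /* 3 GHz host CPU (assumed)     */
            const double cycles_per_irq = 1e4;    /* per-interrupt cost (assumed) */
            const double rates[] = { 1000, 5000, 50000 };

            for (int i = 0; i < 3; i++)
                printf("%6.0f irq/s -> %5.2f%% of one CPU\n",
                       rates[i], rates[i] * cycles_per_irq / cpu_hz * 100);
            return 0;
        }

    By that estimate, 5000 interrupts/second is under 2% of one CPU, and dropping to 1000/second saves around 1.3 percentage points - real, but modest, which is why the assumed per-interrupt cost matters so much.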
  • by klui ( 457783 ) on Tuesday June 21, 2005 @12:54AM (#12869609)
    It would depend on the implementation. Not all mobos with built-in ports have "direct access." Some of them go through a shared bus or worse, the PCI bus.

    Intel's implementation for the 865P/875P chipset goes through the memory hub directly http://www.intel.com/design/chipsets/schematics/25281202.pdf [intel.com], while the i845 chipset has the ethernet interface connected to the ICH4 controller hub, which is shared among other devices like the PCI bus http://www.intel.com/design/chipsets/datashts/25192401.pdf [intel.com]. VIA's PT894/PT880 ethernet connection goes through a "VIA Connectivity" bus, much like the Intel 845 http://www.via.com.tw/en/products/chipsets/p4-series/pt894pro [via.com.tw] and http://www.via.com.tw/en/products/chipsets/p4-series/pt880 [via.com.tw]. There were also some value motherboards that, as I recall, used good/decent chipsets but whose designers decided to hang the built-in gigabit ethernet ports off the PCI bus. I cannot recall which they were, but I read about them on AnandTech several years ago. (One way to check where your own NIC sits is sketched below.)
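
    On a 2.6 kernel with sysfs mounted, the device symlink spells out the bus topology, so you can see whether a "built-in" port really hangs off the legacy PCI bus. A minimal sketch ("eth0" is a placeholder):

        /* nicbus.c - show which bus path a network interface hangs off */
        #include <stdio.h>
        #include <unistd.h>
        #include <limits.h>

        int main(int argc, char **argv)
        {
            const char *dev = argc > 1 ? argv[1] : "eth0";
            char path[PATH_MAX], target[PATH_MAX];
            ssize_t n;

            snprintf(path, sizeof(path), "/sys/class/net/%s/device", dev);
            n = readlink(path, target, sizeof(target) - 1);
            if (n < 0) { perror("readlink"); return 1; }
            target[n] = '\0';

            /* a path running through a PCI-to-PCI bridge means the "built-in" */
            /* port is really just a PCI device in disguise                    */
            printf("%s -> %s\n", dev, target);
            return 0;
        }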

  • by evilviper ( 135110 ) on Tuesday June 21, 2005 @01:00AM (#12869632) Journal
    Ding ding ding. I forget who said it (maybe Alan Cox, but I'm REALLY not sure about that), but the opinion was along the lines that it would always be more beneficial to throw the money at a faster processor (or a second processor, etc.), because you'd get a performance boost everywhere.

    Interrupts are the one place where that's not remotely true. A faster processor won't let your system handle significantly more interrupts. The whole interrupt model needs to be thrown out and replaced with something much better.

    And while I'm at it, there are many other cases where it's not true. Wherever you have a significant bottleneck, hardware acceleration helps tremendously. Tasks like encryption and (HighDef) video playback can max out the highest-end systems available, while a $50 card can handle those tasks easily.

    I don't think purpose-built hardware everywhere is the answer, but I do think having an FPGA/ASIC as a standard computer component could make for incredible speed improvements in most/all of the tasks that are hard for CPUs to perform.
  • by adolf ( 21054 ) * <flodadolf@gmail.com> on Tuesday June 21, 2005 @01:16AM (#12869703) Journal
    Ya know, that's the same sort of argument I've been using to promote software RAID vs. hardware RAID.

    I've learned this: Nobody cares. People will blindly spend hundreds, sometimes thousands, of dollars on specialized gear to offload their precious CPUs.

    When it is explained to them that better system performance can be had for less money by simply buying a faster CPU, they throw up their hands and blindly reassert that dedicated hardware must be better, by simple virtue of the fact that it is dedicated. That such reasoning is plainly a crock of shit seems to escape them.

    Van Jacobson be damned, people are an illogical bunch. They're always doing stuff for all the wrong reasons, and trying to solve problems with solutions that are only vaguely related.

    That all said, if the card manages to improve ethernet latency even a little bit, it might be worthwhile in some circumstances. I'm thinking of applications like CobraNet for professional audio (where latency is always critically important), or perhaps clustering.

    I mean, can you imagine a Beowulf cluster with these?

  • by arivanov ( 12034 ) on Tuesday June 21, 2005 @01:36AM (#12869777) Homepage
    No.

    Realistically there are bottlenecks all over the place, and of them, these two prevent nearly any computer from reaching 1G.

    1. Interrupt handling bottleneck. Even with interrupt mitigation, your typical pps value for a single-CPU P4 is under 100 kpps. It falls to under 60 kpps when using Intel dual CPUs (dunno about AMD or VIA) or SMT, due to the overly deep pipeline on the P4. That is way less than 1G for small packets.

    2. IO bottleneck. Many motherboards have IO-to-memory speeds which are realistically way under 1G in total, usually around 600-700 Mbit.

    No card can help with these two problems. (The small-packet math below shows just how far short 100 kpps falls.)
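
    Here is that small-packet arithmetic, using standard Ethernet framing overheads; the only number taken from above is the 100 kpps ceiling:

        /* pps.c - gigabit line rate vs. what a 100 kpps host can absorb */
        #include <stdio.h>

        int main(void)
        {
            const double link_bps   = 1e9;
            const double wire_bytes = 64 + 8 + 12;   /* min frame + preamble + IFG */
            const double host_pps   = 100e3;         /* ceiling from the comment   */

            printf("line rate at 64-byte frames: %.0f pps\n",
                   link_bps / (wire_bytes * 8));     /* ~1.49 Mpps                 */
            printf("100 kpps host absorbs only : %.1f Mbit/s of such traffic\n",
                   host_pps * 64 * 8 / 1e6);         /* ~51 Mbit/s                 */
            return 0;
        }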
  • by gad_zuki! ( 70830 ) on Tuesday June 21, 2005 @02:53AM (#12870028)
    >I don't think I should know more than the clerk at the store.

    I don't really think that's a valid complaint. In a perfect world, yes, but in retail? Not really.

    Running Linux is like owning a foreign car and expecting the GM/Ford guys to fix it just as easily. It's one of the real liabilities of not running the monopoly/de facto standard OS. As a Linux user, you should know what you're buying. I mean, users often get criticized for being ignorant of their systems, but you want the same ignorance, and expect retailers to spend all this extra time and training on what is really a minority OS they might get a tiny amount of sales from?

    I *always* expect the salesman to be next to useless; that's why I do a little research and buy what I need. The retail sales position is there to push product, not to solve problems. It blows my mind when I see friends and family chat up the salesman and get semi-sweet-talked into something that's supposedly good for them, but actually costs them an extra couple hundred dollars, or has things they don't need, or is missing things they'll want in the future, all because they wouldn't spend 10 minutes on the internet researching the purchase or reading reviews.
  • by Sj0 ( 472011 ) on Tuesday June 21, 2005 @02:57AM (#12870035) Journal
    It's just like people who see gold ends on something and figure it's the automatic winner in any contest.

    Never mind that using gold connectors and non-gold connectors together causes corrosion. :P
  • by Ancient_Hacker ( 751168 ) on Tuesday June 21, 2005 @07:54AM (#12870893)
    This is yet another round of the GCMOH. Anytime there are idle hardware engineers, they find something that can be moved off the main CPU into hardware (or, these days, almost always onto another processor). This is almost always a bad idea:
    • Erecting yet another edifice brings on the huge and unavoidable overheads of yet another different CPU instruction set, yet another real-time scheduler, another code base, another set of performance and timing bottlenecks. Another group of programmers. Another set of in-circuit emulators, debugging tools, and system kernel. Another cycle of testing, bug fixes, updates.
    • It sets up a split in the programming team-- there's now much more reason for finger-pointing and argument and mistrust.
    • The extra money would usually buy you another CPU and lots of RAM, resources that would benefit every part of the system, not just the network I/O.
    • The separate I/O processor usually requires the geekiest and least communicative of the programmers-- not a good thing. The manuals for the I/O card are likely to be very brief and sketchy, and rarely up to date.
    • The I/O processor is almost always at least one generation of silicon technology older than the CPU, so even though the glossy brochures just drip with Speeeed! and Vrooom!-y adjectives, it's not that speedy in comparison to the CPU.
    For examples, see the $4000 graphics co-processor that IBM tried to sell for the PC (IIRC the CPU could outdo it). The various disk-compression cards for the PC. Also see the serial ports on the Mac IIvx (very expensive and not noticeably better). Don't forget the P-code chip for the PDP-11/03. All very expensive, with blasé performance/$.

Say "twenty-three-skiddoo" to logout.

Working...