Networking Software IT Hardware Linux

Is There a Place for a $500 Ethernet Card? 423

prostoalex writes "ComputerWorld magazine runs a story on Level 5 Networks, which emerged from stealth startup status with its own brand of network cards and software called EtherFabric. The company claims it reduces the load on servers' CPUs and improves communications between servers. And it's not vaporware: 'The EtherFabric software shipping Monday runs on the Linux kernel 2.4 and 2.6, with support for Windows and Unix coming in the first half of next year. High volume pricing is $295 for a two-port, 1Gb-per-port EtherFabric network interface card and software, while low volume quantities start from $495.'"
This discussion has been archived. No new comments can be posted.

  • Linux before Windows (Score:3, Interesting)

    by mepperpint ( 790350 ) on Monday June 20, 2005 @10:18PM (#12868812)
It's nice to see a piece of hardware that ships with Linux drivers and promises Windows support later. So frequently, applications and hardware are supported under Windows first and only occasionally ported to other platforms.
  • by justsomecomputerguy ( 545196 ) on Monday June 20, 2005 @10:23PM (#12868843) Homepage
Back in the early-to-mid '80s (and probably even before then), IBM mainframes using SNA instead of TCP/IP used special networking processors that handled all of that "networking stuff" so that the mainframe CPU (which really was a "unit" and not just a single chip) could concentrate on running its jobs and not be interrupted by the communications end of things. Everything old is new again. Same situation, just smaller and faster (CPU and helper communications card take up 1U in a rack instead of one whole corner of the head-end room).
  • by bluelip ( 123578 ) on Monday June 20, 2005 @10:27PM (#12868863) Homepage Journal
I've noticed a slowdown in computer response when using gig cards and moving lotsa' data. I thought the bottleneck might have moved to the file systems. That didn't seem to be the case, as pumping dummy data through the NIC also caused issues.

    I didn't pursue it far enough to see where the actual problem was. These cards may help, but my money is on a faster CPU.
  • by SuperBanana ( 662181 ) on Monday June 20, 2005 @10:49PM (#12868993)
Intel had the 586-driven smart cards, and I believe 3Com had them as well. They were intended to offload the CPU by putting parts of the stack on the card.

    You're probably thinking of the i960-based cards, though Intel's PRO series adapters (not i960-based) do something similar (TCP checksumming is now built into the chipset, and most OS drivers know how to take advantage of it). That processor, and its variants, were used in everything from network cards to RAID controllers.

    They failed because the performance gain and CPU offload numbers were never enough to justify the price difference.

Ding ding ding. I forget who said it (maybe Alan Cox, but I'm REALLY not sure about that), but the opinion was along the lines that it would always be more beneficial to throw the money at a faster processor (or a second processor, etc.), because you'd get a performance boost everywhere. $300 buys quite a bit of extra CPU horsepower these days, and there's no need for the hassles of custom drivers and such. Nowadays CPUs are just so damn fast, it's also not really necessary.
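
    For the curious: on Linux you can see whether a NIC's checksum offloads are switched on from userspace via the SIOCETHTOOL ioctl, which is what the ethtool utility wraps. A minimal sketch, assuming an interface named "eth0", with error handling trimmed:

        /* ckoff.c -- query RX/TX checksum offload state via SIOCETHTOOL. */
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <sys/socket.h>
        #include <net/if.h>
        #include <linux/ethtool.h>
        #include <linux/sockios.h>

        static int query(int fd, struct ifreq *ifr, __u32 cmd)
        {
            struct ethtool_value ev = { .cmd = cmd };
            ifr->ifr_data = (char *)&ev;
            if (ioctl(fd, SIOCETHTOOL, ifr) < 0)
                return -1;                   /* driver doesn't support it */
            return ev.data;                  /* 1 = offload on, 0 = off */
        }

        int main(void)
        {
            struct ifreq ifr;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket works */

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
            printf("rx csum offload: %d\n", query(fd, &ifr, ETHTOOL_GRXCSUM));
            printf("tx csum offload: %d\n", query(fd, &ifr, ETHTOOL_GTXCSUM));
            return 0;
        }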

  • In a word, no (Score:4, Interesting)

    by bofkentucky ( 555107 ) <bofkentucky&gmail,com> on Monday June 20, 2005 @10:55PM (#12869035) Homepage Journal
Take Sun: some of their new server kit this year is going to ship with 10Gbit/s Ethernet on the board, which, according to their docs, is going to take three USIV procs (6 cores) to keep the bus saturated. But when you are looking at 8- to 64-way server boxes, who cares about those three procs, especially when in 24-30 months it will take less than one proc to handle that load (quad cores plus Moore's Law), and eventually a single thread will have the horsepower.

Surely those smart dudes at Via, AMD, Intel, Samsung, Nat Semi, and/or Motorola aren't going to:
    A) FUD this to death if it really works
    B) File patent suit until doomsday to keep it locked up
    C) Buy them out
    D) Let them wither on the vine and then buy the IP.
  • by __aaclcg7560 ( 824291 ) on Monday June 20, 2005 @10:55PM (#12869037)
Jerry Pournelle had a column in the February 2005 issue of Dr. Dobb's Journal about Gigabit hardware. If you have a Gigabit PCI card, expect to see a doubling of speed over a 100Mb PCI card. If the motherboard has a built-in Gigabit port, you can see five to six times the speed of a 100Mb PCI card or port. PCI cards are limited by the PCI bus, but built-in ports have direct access to the chipset.
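
    Back-of-the-envelope numbers support this: classic 32-bit/33MHz PCI tops out at 33M x 4 bytes = ~133 MB/s theoretical, shared by every device on the bus, and real-world sustained rates are well below that. A single gigabit port wants 125 MB/s each way, so a GigE NIC on plain PCI is bus-bound almost by definition, while a port hanging off the chipset (or on PCI-X/PCIe) doesn't have to fight for that bus.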
  • by Fweeky ( 41046 ) on Monday June 20, 2005 @11:29PM (#12869212) Homepage
I expect you can get an Intel PRO/1000 for around $30; full TCP/IP checksum offload in both directions, interrupt moderation, jumbo frames, and Intel even writes its own open source drivers.

    Heh, my on-board Realtek GigE chip has checksum offloads too, but even with them on, 300Mbps would have me up to 70% system/interrupt CPU load (and I hear the checksumming is a bit... broken); I barely scrape 30% with a PRO/1000 GT.
  • by jamesh ( 87723 ) on Monday June 20, 2005 @11:47PM (#12869306)
If you've ever had one of the recent worms come in behind a Linux router, then you'll have seen how network traffic can make a CPU stop.

    I made a boo-boo in a firewall rule and opened up an unpatched MSSQL server to the Internet (*hangs head in shame*). Within 30 seconds it had caught one of the MSSQL worms, which stopped the Linux router dead. Pulling the network plug from the MSSQL server brought the Linux router instantly back to life. With TCP and all its flow-control goodness it's probably not a problem, but when something is sending UDP or ICMP packets at you as fast as it can, you'll really see the difference.
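
    If you want to reproduce this safely on a test network, a dumb UDP source is a few lines of C. The address and port below are placeholders; point it at a scratch box you own and watch the receiver's CPU/interrupt load climb:

        /* udpblast.c -- send UDP datagrams as fast as the kernel accepts them. */
        #include <arpa/inet.h>
        #include <string.h>
        #include <sys/socket.h>

        int main(void)
        {
            char payload[1400];                  /* stays under a 1500-byte MTU */
            struct sockaddr_in dst;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            memset(payload, 'x', sizeof(payload));
            memset(&dst, 0, sizeof(dst));
            dst.sin_family = AF_INET;
            dst.sin_port = htons(9);             /* the discard port */
            inet_pton(AF_INET, "192.168.0.99", &dst.sin_addr);

            for (;;)                             /* no flow control -- that's the point */
                sendto(fd, payload, sizeof(payload), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
        }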
  • Looking into EPSRC (Score:2, Interesting)

    by mislam ( 755292 ) on Monday June 20, 2005 @11:50PM (#12869315) Homepage
I would wait before jumping to any conclusions and see what happens when EPSRC adopts this for their 512-node cluster. If it really improves performance by 10%, it certainly should be something to look into. Now, is there a market for it on the end user's desktop? I don't think so. A $9 Realtek card would be just fine. :-)
  • by njcoder ( 657816 ) on Tuesday June 21, 2005 @12:05AM (#12869391)
ASP on IIS 5 years ago, OK, that makes sense too.

Not trying to knock your design; if it works, it works. Since you're working on another flavor of it, let me give you my opinion on what I would have done differently. I've worked on webapps like this in Java; I'm not sure if you could do the same in PHP or ASP.NET. For something like this I would go with Java, from my experience with it and with PHP. Whatever you build it in, this advice might be helpful.

Your merchant info probably doesn't get updated every day, so you can cache it at the application level and let the cache refresh itself in a smart way when there are updates. You can do the same on a per-session basis with shopper info.

    Sounds like your tax logic can be streamlined a bit as well. You might want to think about having a separate process that does the tax and keeps a cache of all the information, so that you don't have to hit the database for each item like it sounds you're doing now.

    Most of your logging probably doesn't have to be done in real time to the database, or in the database at all. There are ways to link your application logic to the web server's logging mechanism; the web server usually does it in a smarter way, and then you aggregate that info on a regular basis. If that doesn't work for you, try asynchronous logging: start up a separate thread that writes to the log, so the user doesn't have to wait for the logging to finish (there's a sketch of this at the end of this comment). Caching the logs locally and aggregating them every minute or so on a heavy site should also increase network and DB performance, since a few larger writes are faster than a lot of smaller ones.

    Even with everything you mentioned, I don't see how you can have a hundred or more queries per page. I'm thinking at most 5-10 queries per page will get you all of what you need to display products, cross-reference products/specials, and a bunch of other stuff. Your checkout pages might need a little more, because you'll want to make sure you get fresh data from the database even though a good caching method shouldn't require it; it doesn't hurt to play it safe when the money actually changes hands. Your data model might need some going over as well.

    You might want to add some more ram to handle the extra caching and there are many open source distributed caching tools that make it easy. OSCache is good for Java apps, memcached is in C but can be used with other languages including PHP and Java and I think ASP?

Since you're thinking of doing another version of it, you might want to consider these things, and probably more; it's hard to say concretely without knowing much more. You can probably cut down on your hardware too. A site like hotornot.com, which is, granted, a lot simpler, can serve up about 20 million pages a day across, I think, 50 servers. If you have a strong DB with 30 front-end webservers (assuming you got them all 5 years ago and they're standard-issue webservers), I would expect a well-designed, complex e-commerce site to be able to handle around 6-9 million page views a day with good response times, easily. That's trying to be conservative too, considering what you said.

Getting these NICs for 30-something servers is going to run you between $10-15k, depending on where the volume discount kicks in. Like I said, I don't know your whole system, but spending that money so your pages don't hit the database 100 or more times per page should do more for you than these cards. That's my long answer, so you don't think I was just being flippant.
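
    The asynchronous-logging idea above, in miniature. This is only a sketch, in C with pthreads (in Java you'd wrap a queue and a worker thread the same way); the log file name is a placeholder:

        /* asynclog.c -- request threads enqueue log lines and return at once;
           a single writer thread drains the queue to disk. */
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        struct entry { char *msg; struct entry *next; };

        static struct entry *head, *tail;
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

        void log_async(const char *msg)          /* called from request threads */
        {
            struct entry *e = malloc(sizeof(*e));
            e->msg = strdup(msg);
            e->next = NULL;
            pthread_mutex_lock(&lock);
            if (tail) tail->next = e; else head = e;
            tail = e;
            pthread_cond_signal(&nonempty);
            pthread_mutex_unlock(&lock);         /* caller never waits on disk */
        }

        static void *writer(void *arg)           /* the one consumer thread */
        {
            FILE *fp = fopen("app.log", "a");
            for (;;) {
                pthread_mutex_lock(&lock);
                while (!head)
                    pthread_cond_wait(&nonempty, &lock);
                struct entry *e = head;
                head = e->next;
                if (!head) tail = NULL;
                pthread_mutex_unlock(&lock);
                fprintf(fp, "%s\n", e->msg);     /* slow I/O happens off-request */
                fflush(fp);
                free(e->msg);
                free(e);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t t;
            pthread_create(&t, NULL, writer, NULL);
            log_async("request 1 handled");
            log_async("request 2 handled");
            sleep(1);                            /* give the writer a moment */
            return 0;
        }

    Build with -lpthread. The point is simply that the request path only pays for a malloc and a mutex, not for the disk write.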

  • Um, no. (Score:5, Interesting)

    by holysin ( 549880 ) on Tuesday June 21, 2005 @12:20AM (#12869454) Homepage
If you have a machine (say, one running Linux kernel 2.4.20-30.9smp) with a built-in gig port (eth0 identified as "Tigon3 [partno(BCM95704A6) rev 2003 PHY(5704)] (PCI:66MHz:64-bit) 10/100/1000BaseT") connected to a decent gigabit switch, and another machine (same card, same OS) with a gigabit card, those two machines will achieve 940Mbps talking to each other (results via iperf: 0.0-10.0 sec, 1.09 GBytes, 940 Mbits/sec).

However, if you plug in a Windows box (2000 or XP; didn't have a 2003 handy) with either an add-on card or built-in gig (2000 vs. XP), you get a rather less impressive figure of 550-630 Mbps. Coincidentally, you'll get the same basic number if you run two instances of iperf on the same computer... This tells me the bottleneck isn't the PCI bus, it's the OS. If you can prove me wrong, please do so...
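
    For anyone who wants to reproduce numbers like these without iperf, the sending side is easy to hand-roll. A sketch; the address and port are placeholders, and anything that swallows bytes (an iperf server, netcat) works on the other end:

        /* blast.c -- measure one-way TCP throughput to a byte sink. */
        #include <arpa/inet.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <sys/time.h>
        #include <unistd.h>

        int main(void)
        {
            static char buf[64 * 1024];
            long long sent = 0, target = 1LL << 30;      /* push 1 GByte */
            struct sockaddr_in dst;
            struct timeval t0, t1;
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            memset(buf, 'x', sizeof(buf));
            memset(&dst, 0, sizeof(dst));
            dst.sin_family = AF_INET;
            dst.sin_port = htons(5001);                  /* iperf's default port */
            inet_pton(AF_INET, "192.168.0.2", &dst.sin_addr);
            if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0)
                return 1;

            gettimeofday(&t0, NULL);
            while (sent < target) {
                ssize_t n = write(fd, buf, sizeof(buf));
                if (n <= 0)
                    break;
                sent += n;
            }
            gettimeofday(&t1, NULL);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
            printf("%.0f Mbits/sec\n", sent * 8 / secs / 1e6);
            return 0;
        }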
  • by postbigbang ( 761081 ) on Tuesday June 21, 2005 @12:38AM (#12869531)
    All GBE cards are FC on the MAC layer. Get over it.

    Here's where the problems come in:

    1) buses suck. PCI-X is fast; a faster bus clock is better still
2) the problems for GBE NICs are, in no special order: dropping crap packets; cleaning up dirty cache (a huge problem for clusters, where this product is poised); session/protocol relationship management; buffering up misrouted UDP; managing evil ports (setting them up and tearing them down); managing proxies and workarounds (a little SIP, anyone? Burping up your IPsec?)
    3) making the driver aware of what's going on so sessions don't vomit
    4) not bothering the freaking CPU chipset every few milliseconds with useless crap

    So, if they do any of these things, bless them and send me the bill. Because (save TOE cards) all of them hassle the drivers and chipsets to no end with stuff that could easily be offloaded. And to those who say, just put more cheapo load-balanced hardware on the job: you're a chump and deserve to have stuff blown to bits when you multiply failure points with junko doorstop hardware boxes with all the brains of a goose.
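
    Point 4 is exactly what interrupt coalescing/moderation is for, and decent GBE chips already expose the knobs. On Linux they live behind the same SIOCETHTOOL interface as the checksum query sketched earlier in the thread. Another sketch, again assuming an interface named "eth0":

        /* coalesce.c -- read a NIC's interrupt-moderation settings. */
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <sys/socket.h>
        #include <net/if.h>
        #include <linux/ethtool.h>
        #include <linux/sockios.h>

        int main(void)
        {
            struct ethtool_coalesce ec = { .cmd = ETHTOOL_GCOALESCE };
            struct ifreq ifr;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
            ifr.ifr_data = (char *)&ec;

            if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
                printf("irq at most every %u us or %u frames\n",
                       ec.rx_coalesce_usecs, ec.rx_max_coalesced_frames);
            return 0;
        }

    With ETHTOOL_SCOALESCE the same structure sets the values, which is how a driver is told not to interrupt the host for every single frame.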
  • by EtherMonkey ( 705611 ) on Tuesday June 21, 2005 @12:40AM (#12869546)
Agreed, but it's been even more recent than the early '90s. The late '90s also had their run of so-called "intelligent" network cards.

I worked for a large HP/Intel VAR at the time, and we were selling $500 Intel intelligent server NICs like they were Big Macs. Then one day one of our biggest customers called in a fit. It seems his manager had asked him to do a quick comparison between a smart Intel NIC and a regular Intel NIC, so he could tell his bean-counters to get stuffed. It turned out that we were NOT ABLE TO FIND ANY SYSTEM OR TRAFFIC CONFIGURATION that would result in higher throughput, lower CPU utilization, or lower memory utilization when using the smart NIC.

    In other words, the standard $100 Intel NIC (PILA8465B, as I recall) beat the piss out of the much more expensive Intel intelligent NIC with on-board co-processor.

Within 3 months we stopped getting any orders for the smart NICs. Within 6 months Intel retaliated by disabling server features (adapter fault tolerance, VLAN tagging, load balancing, and Cisco Fast EtherChannel support) on the basic NIC, in an effort to save the smart NIC. When this didn't work, they modified the driver so the server features would only work with a re-released version of the "dumb" NIC at a higher price (the only difference between the cheapest and most expensive versions was an ID string burned into a PAL on the NIC).

I had similar experiences with earlier cards from Intel, IBM, and others. In every instance I tested, a plain old NIC (not junk, but the basic model from a reputable manufacturer) always outperformed the NICs with on-board brains and/or co-processors.

Maybe this Level 5 NIC has some new voodoo engineering, but I'd have to see real-world testing to believe it. Especially from a company that apparently is intentionally playing off Level 3 Communications' name recognition for its own benefit.
  • by Swaffs ( 470184 ) <swaff@fFORTRANudo.org minus language> on Tuesday June 21, 2005 @12:54AM (#12869607) Homepage
I walked into Computer Boulevard about three weeks ago looking for a replacement hardware modem (unfortunately I don't live in Winnipeg anymore, but about 250 km northeast; no cable or even DSL here).

    So the sales guy asks me if I need help and I tell him I want a 56k hardware modem. So he ushers me to the modem section to let me browse. Sure enough, there's nothing there but winmodems. He comes back a couple minutes later and asks if I found what I was looking for. I said no, and that what I was looking for was a hardware modem, because I'm running linux. I told him they probably only had it in OEM.

    So he goes and gets the big book of parts and finds it, and I tell him that yes, that's exactly what I came for. Well off he goes and comes back several minutes later to tell me that those are in the back, which is why I couldn't find it. Gee, thanks, that's why I mentioned it was OEM, since they don't usually put OEM parts on the shelf. So he asks if I want one, and I say yes, and off he goes only to come back and report that they're out of stock.

I went over to Computer Avenue and was met by someone (unfortunately I didn't get his name) who knew what he was talking about, and I was quickly walking away with what I needed, albeit at a $10 higher price than I would have paid at CBIT if they'd had one.

    The sad part about this is that I'm not a computer professional, just a user, and one that's not even that familiar with hardware. I'm all isolated from the computer world and out of the loop up here on MTS's when-they-feel-like-it internet access. I don't think I should know more than the clerk at the store.
  • Wow!! (Score:2, Interesting)

    by EightBits ( 61345 ) on Tuesday June 21, 2005 @01:36AM (#12869780)
    I can't believe what I'm seeing here. The majority think this is a bad thing, it seems. I disagree.

I have a few Sun E450s in my shop and I am going to be moving them to gigabit Ethernet soon. A gigabit Ethernet card from Sun costs considerably more than this, so this is an option as long as it will run on SPARC hardware with Solaris drivers. Sorry, but the Intel e1000 just isn't going to cut it here.

I'd like to see that article about 20 instructions (I assume machine-language instructions) handling an entire packet. That may be the case on CISC CPUs, but I just don't see it happening on RISC CPUs. I am not saying it's impossible, but I would definitely have to see it to believe it, and I am genuinely interested in reading this article. Please let me know where I can find it.

I don't think some of you understand the difference between intelligent network chips and network chips with a CPU core inside them. Take a look at Cisco's solutions: their line is moving hastily towards ASICs. The idea here is that specialized hardware designed to perform a task will ALWAYS be faster at handling that task than a CPU running at the same clock. Cisco is proving this with Layer 3 switching vs. routing. It's not clear what Level 5's solution is, but I'm willing to bet this NIC is an ASIC optimized for the purpose of handling TCP/IP traffic from an Ethernet network. I have a hard time seeing how any CPU could beat out an ASIC in this field today.

Also of note is memory and bus bandwidth. I have seen some comments about CPU usage and how it's negligible and whatnot. While I don't believe that either (I pay a lot for those cycles; I want to use as many for data processing as possible), I do believe that the CPU handling the TCP/IP stack takes up bus bandwidth as well as memory bandwidth. If this is all handled on the card, both are reduced. Bus and memory bandwidth is already lagging way behind CPU speed as it is; it is my number one system performance limiter right now. The more I can eliminate it, the more productive I can be. Someone already mentioned large numbers of packets; this is a good argument as well. When dealing with large numbers of small packets, CPU usage on a CPU-based TCP/IP stack increases, as opposed to a smaller number of larger packets. So in some cases it depends on your network and its configuration.

Also, consider that maybe I do only get 10 more cycles per second from using this card. Is the card worth it? With CPU cycles at a premium, everyone here trying to purchase as many as possible, and never a single idle CPU in any of my servers, I have to give a resounding YES! Ten cycles per second per CPU, times the number of CPUs and the seconds the NIC is in place, is a LOT of cycles, and most certainly worth $500 over the lifetime of a $30,000 machine. If they can prove it does what they claim on my hardware, count me in.
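
    The small-packet point is easy to put numbers on: a full gigabit of minimum-size 64-byte frames is about 1.49 million frames/second once preamble and inter-frame gap are counted (84 bytes on the wire each), versus roughly 81,000 frames/second at 1500 bytes. If every frame costs an interrupt plus per-packet stack work, the small-packet case hits the CPU nearly twenty times as hard at the same wire rate.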
  • by tob ( 7310 ) on Tuesday June 21, 2005 @02:44AM (#12870008)
100k packets per second is actually enough for 1Gbps throughput: 1 Ethernet packet = 1500 bytes = 12k bits, and 100k such packets are 1.2Gbits.

In practice (I did some testing 2 years ago, on very modern hardware for the time) you can do 70 MB/s NFS on an untuned Linux box. By tuning buffer sizes and using jumbo frames (9kB Ethernet packets instead of 1.5kB) you can get 110MB/s, which is amazing if you realize what kind of overhead is in that transfer (NFS, RPC, TCP, IP, Ethernet).
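
    The buffer-size tuning is just a couple of setsockopt calls; a sketch, with illustrative sizes only. Jumbo frames additionally need the interface MTU raised (e.g. "ifconfig eth0 mtu 9000") at both ends and on the switch:

        /* buftune.c -- enlarge a socket's send/receive buffers before use. */
        #include <stdio.h>
        #include <sys/socket.h>

        int main(void)
        {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int sz = 1 << 20;                        /* ask for 1MB each way */
            socklen_t len = sizeof(sz);

            setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sz, sizeof(sz));
            setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz));

            getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &sz, &len);
            printf("kernel granted %d bytes\n", sz); /* capped by sysctl limits */
            return 0;
        }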

    Tobias
