Networking

SoHo NAS With Good Network Throughput?

An anonymous reader writes "I work at a small business where we need to move around large datasets regularly (move onto a test machine, test, move onto the NAS for storage, move back to a test machine, lather-rinse-repeat). The network is mostly OS X and Linux with one Windows machine (for compatibility testing). Our datasets are typically multiple gigabytes, so network speed is as important as storage size. I'm looking for a preferably off-the-shelf solution that can sustain a significant portion of a GigE link; maxing out at 6MB/s is useless. I've been looking at SoHo NASes that support RAID, such as Drobo, NetGear (formerly Infrant), and BuffaloTech (who unfortunately doesn't even list whether they support OS X). They all claim to come with a GigE interface, but what sort of network throughput can they really sustain? Most of the numbers I can find on the vendors' websites only talk about drive throughput, not network throughput, so I'm hoping some of you with real-world experience can shed some light here."
  • by LWATCDR ( 28044 ) on Tuesday December 16, 2008 @06:59PM (#26138959) Homepage Journal

    FreeNAS or OpenFiler on a PC with a RAID controller and GigE should work. It might even be cheaper than a NAS box.
    As for OS X support: OS X speaks Windows (SMB) networking out of the box, so odds are very good that anything that supports Windows will work with OS X too.
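
    For instance, mounting a share from the OS X command line looks like this (a minimal sketch; "nas" and "datasets" are placeholder host/share names):

        mkdir -p /Volumes/datasets
        mount -t smbfs //user@nas/datasets /Volumes/datasets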

  • Cmon people... (Score:5, Informative)

    by Creepy Crawler ( 680178 ) on Tuesday December 16, 2008 @07:00PM (#26138961)

    You might as well build it yourself.

    Go get a low-end Core 2, a motherboard, a good amount of RAM, and four 1TB disks. Install Ubuntu on them with LVM and encryption, run the hardening packages, then install Samba, NFS, and Webmin.

    You now have a NAS that you built and control 100% (see the sketch below). You can also duplicate it and use DRBD, which I guarantee NO SOHO hardware comes near. You can even put WINE on there and Ming on your Windows machines to run Windows programs remotely... the ideas are endless.
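
    A minimal sketch of that software stack on a stock Ubuntu server install (the share name and path are placeholders; Webmin isn't in Ubuntu's repositories, so it comes from the .deb on webmin.com):

        # Storage and file-serving packages
        sudo apt-get update
        sudo apt-get install lvm2 mdadm samba nfs-kernel-server

        # Export a share over Samba: append to /etc/samba/smb.conf
        #   [datasets]
        #     path = /srv/datasets
        #     read only = no
        sudo mkdir -p /srv/datasets
        sudo /etc/init.d/samba restart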

  • Re:SMB (Score:5, Informative)

    by Anthony_Cargile ( 1336739 ) on Tuesday December 16, 2008 @07:03PM (#26139007) Homepage
    One more thing: "gigabit Ethernet" in practice usually means anywhere between 200 and 800Mbps on a fairly busy network, which should let you move several gigabytes in, say, two to five minutes tops. Your throughput depends on many other factors, so yours may be higher or lower than mine, but that range is typical with proper switching and routing equipment.
  • dedicated PC (Score:5, Informative)

    by spire3661 ( 1038968 ) on Tuesday December 16, 2008 @07:03PM (#26139013) Journal
    In terms of cost/benefit ratio, nothing beats a stripped-down PC with a lot of drives stuffed into it or into an external eSATA enclosure. I run an HP MV2020 NAS and a Linksys NAS200, and neither can hold a candle to a PC in throughput. I've heard of some commercial systems out there, but they cost a small fortune. Just my $.02.
  • ReadyNAS (Score:3, Informative)

    by IceCreamGuy ( 904648 ) on Tuesday December 16, 2008 @07:07PM (#26139059) Homepage
    We have a ReadyNAS 1100. It's all right, but I wouldn't call it stellar: I get around 80Mb/sec to it over the network, the management interface is IE-only (as far as I can tell, since it has problems with FF and Chrome), and it has odd delays when opening shares and browsing directories. On the plus side, it has out-of-the-box NFS support and a small 1U size.
  • OpenSolaris / ZFS (Score:3, Informative)

    by msdschris ( 875574 ) on Tuesday December 16, 2008 @07:12PM (#26139133)
    Build it yourself and install OpenSolaris. ZFS rocks.
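
    For example, a striped-mirror pool shared over NFS takes only a few commands (a sketch; the disk device names are placeholders):

        # Two mirrored pairs striped together (RAID10-style), then an NFS-exported filesystem
        zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
        zfs create tank/datasets
        zfs set sharenfs=on tank/datasets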
  • by Gizzmonic ( 412910 ) on Tuesday December 16, 2008 @07:13PM (#26139141) Homepage Journal

    I have a Terastation 2 (by Buffalo), and I am plugged into 100Mbps Ethernet at work, so I can't speak to GigE throughput, but I can tell you that the Terastation's Mac support is very half-assed. I couldn't get AFP/AppleTalk to work at all, and while SMB is rock solid for large files, it chokes on directories with huge numbers of small files (not sure whether that's a limitation of the Finder or the Terastation's fault, though). I had a user's backup program run amok and generate millions of tiny .tmp files over the course of about a month, and I was unable to delete them from OS X, even after waiting days. I had to use Windows Explorer, which was slow but eventually worked.

    The built-in webpage used for administration is pretty terrible too. It works best with IE 6 on Windows, but even with that, sometimes the columns don't line up properly. If you misclick, you could end up changing the wrong shared folder.

    On the plus side, the Terastation 2 is pretty cheap. I'd give it about a B minus in terms of what I need it to do.

  • NAS Charts (Score:2, Informative)

    by Anonymous Coward on Tuesday December 16, 2008 @07:13PM (#26139151)

    www.smallnetbuilder.com maintains a NAS Chart; I find it quite complete and current. (http://www.smallnetbuilder.com/component/option,com_nas/Itemid,190/)

  • by sco_robinso ( 749990 ) on Tuesday December 16, 2008 @07:18PM (#26139209)
    They have the most comprehensive NAS benchmarks around (that I've stumbled across, at least), plus lots of good tests showing the effect of things like jumbo frames. Very good overall.

    I frequent the site a bit, and there are a couple of tricks to getting good performance out of a NAS, or LAN throughput in general:

    1. Use jumbo frames, period (see the sketch below).
    2. Use PCI-e NICs; onboard or plain-PCI NICs often can't deliver the speeds GigE offers. You can find simple Intel PCI-e NICs for under $20.
    3. Drives make a big difference, obviously.

    www.smallnetbuilder.com -- Good site.
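
    A sketch of enabling jumbo frames on a Linux box ("eth0" and "nas" are placeholders; every NIC and switch in the path must support ~9K frames, or this will hurt rather than help):

        # Raise the MTU, then verify with a large, non-fragmenting ping
        sudo ifconfig eth0 mtu 9000
        ping -M do -s 8972 nas    # 8972 = 9000 minus 28 bytes of IP/ICMP headers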
  • by nhtshot ( 198470 ) on Tuesday December 16, 2008 @07:23PM (#26139265)

    My situation is similar to yours. I bought and tested several off the shelf solutions and was continuously disappointed.

    My solution was an off-the-shelf AMD PC filled with HDDs and Linux software RAID.

    It's MUCH faster (90MB/sec) than any of the NAS solutions I tested.

    With Christmas specials abounding right now, HDDs are cheap. Use independent controllers for each port and a reasonable CPU. Also make sure the GigE interface is PCI-E.

  • by Overzeetop ( 214511 ) on Tuesday December 16, 2008 @07:26PM (#26139301) Journal

    Okay, unRAID is not particularly fast compared to an optimized system, but it's expandable, has redundancy, is web-managed, plays nice with Windows, sets up in about 20 minutes, and costs $0 for a three-disk license and $69(?) for a 6-disk license.

    My totally unoptimized box, on an utterly unoptimized GbE network (stock cards and settings, with 100Mb and 1000Mb nodes and unmanaged switches), just transferred an 8.3GB file in a hair under three minutes, from a single cheap SATA drive to a Vista box with an old EIDE drive. Now, 380Mb/s is not blazingly fast, but remember that it took almost no effort.

    http://lime-technology.com/ [lime-technology.com]

    No connection except as a happy customer with a 4TB media server where assembling the case took longer than getting the software running. If only my Vista Media Center install had been this easy.

  • by m0e ( 55482 ) on Tuesday December 16, 2008 @07:37PM (#26139421)

    Disk will always be the bottleneck. Since disk is your slowest component, you will always be disk-I/O bound, so in effect there's no real reason to worry about network throughput from the NIC; NICs are efficient enough these days to just about never get bogged down. What you do want to look at on the network side is your physical topology -- make sure you have a nice switch with good backplane throughput.

    About disks:

    Your average fibre channel drive will top out at 300 IO/s, because few people sell drives that can write to the spindle any faster (cost-prohibitive for several reasons). Cache helps this out greatly. SATA is slightly slower, at 240-270 IO/s depending on manufacturer and type.

    Your throughput will depend totally upon what type of IO is hitting your NAS and how you have it all configured (RAID type, cache size, etc). If you have a lot of random IO, your total throughput will be low once you've saturated your cache. Reads will always be worse than writes even though prefetching helps.

    If you're working with multi-gigabyte datasets, you'll want to increase the number of spindles (i.e., the number of disks) as high as your budget allows and make sure you have gobs of cache. If you RAID it, which type you use depends on how much integrity you need (we use a lot of RAID 10 with lots of spindles for many of our databases). That will speed you up far more than worrying about the NIC's throughput; don't worry about the NIC until you start topping a significant portion of your bandwidth -- say, 60MB/sec sustained over the wire.

    This doesn't get fun until you start having to architect petabytes worth of disk. ;)

  • by AngelofDeath-02 ( 550129 ) on Tuesday December 16, 2008 @07:47PM (#26139533)

    Sadly, my off-the-shelf PC is woefully insufficient... I get 24MB/s max from a RAID over gigabit.

    The pc was originally an AMD 1800+ with SDRAM.

    There are 8 drives total: one boot drive (80GB IDE), plus seven 250GB Seagates, all IDE.

    Originally each data drive was on its own channel; I used a RAID controller (acting as plain IDE, no RAID) to get enough ports. The seven 250s are set up as a Linux software RAID5. Individually, hdparm rates them at 60MB/s, and the whole array at 70MB/s, but for whatever reason file transfers from the array to the boot drive topped out at 20-30MB/s. The gigabit card was also on the same PCI bus. Copying from the boot drive over the network, however, ran at 50MB/s.

    Thinking it might somehow be a PCI bus limitation, I moved the array to a newer motherboard with a 2200+ and DDR memory. Being short on IDE controllers now, all 7 drives are plugged into the RAID controller in a master/slave setup. I get similar performance (the average is now 23MB/s vs. 20), and the gigabit Ethernet controller is onboard.

    I can't figure it out -_- I also don't have the money for a second RAID controller (to put each drive on its own channel) or to rebuild the PC with a PCI-E bus and SATA... so for now, that 20MB/s will have to be sufficient.
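
    One way to narrow this down is to benchmark each stage separately (a sketch; the device name and mount point are placeholders, and the dd test writes a scratch file you can delete afterwards):

        # Raw sequential read from the array, bypassing filesystem overhead
        sudo hdparm -t /dev/md0

        # Sequential write through the filesystem, flushed to disk at the end
        dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=2048 conv=fdatasync

        # If both numbers beat 24MB/s, the bottleneck is the network path, not the disks.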

  • Re:Cmon people... (Score:2, Informative)

    by Anonymous Coward on Tuesday December 16, 2008 @07:47PM (#26139539)

    Yeah, it's not like a Mac runs UNIX or has a FreeBSD userland with a full ports tree or anything.

  • by Anonymous Coward on Tuesday December 16, 2008 @07:53PM (#26139597)

    It sounds like you do this as your day job, working with big expensive NAS and SAN equipment. Yes, in those environments you'll be disk-bound long before you're NIC-bound. Sadly, SOHO equipment is far, far worse: by and large, its throughput ranges from sad to atrocious. See SmallNetBuilder's NAS Charts for some benchmarks that will make you weep.

  • Re:SMB (Score:5, Informative)

    by drsmithy ( 35869 ) <drsmithy&gmail,com> on Tuesday December 16, 2008 @07:54PM (#26139603)

    A custom-built box, as many commenters suggested, seemed a tad inappropriate to me as he asked for an NAS device, not a server. Installing Ubuntu or whatever on it seems like more of a performance hit than a properly optimized "off the shelf" NAS box, since they most likely don't run Dbus, GNOME, Hald, bluetooth or any other desktop software atop the basic kernel and networking services.

    While this is true, for noticeably less than you'll pay for a NAS appliance you can build a PC with vastly more CPU power and RAM, which will more than make up for any extra stuff that might be running.

    You need to spend a LOT on an "appliance" type storage system to get something that has higher performance and/or better features than a "server". Particularly with cache, storage vendors across the board are offensively stingy (16 gigs of high-quality ECC RAM costs maybe $800, but you'll be lucky if your $100k SAN comes with half that amount).

    Personally, I'd recommend the OP look at server/NAS-style "appliances" like Dell's NF500. They're the only sort of "cheap" turnkey devices he'll find that will deliver the performance he seems to want, and they'll probably only cost a grand or two more than DIY.

  • Re:Not Buffalo (Score:3, Informative)

    by dave562 ( 969951 ) on Tuesday December 16, 2008 @07:56PM (#26139615) Journal
    I agree with the suggestion to avoid Buffalo. Someone else in this thread said their UI is good; my experience was just the opposite. The UI sucked, and getting the thing integrated into Active Directory was a nightmare. The setup appears straightforward -- specify the domain name, specify a domain username/password combo -- but the reality turned out to be decidedly different and required numerous calls to tech support, firmware updates, and a lot of headaches.
  • by kwabbles ( 259554 ) on Tuesday December 16, 2008 @08:10PM (#26139797)

    For example:

    Best home network NAS?
    http://ask.slashdot.org/article.pl?sid=07/11/21/141244&from=rss [slashdot.org]

    What NAS to buy?
    http://ask.slashdot.org/article.pl?sid=08/06/30/1411229 [slashdot.org]

    Building a Fully Encrypted NAS On OpenBSD
    http://hardware.slashdot.org/article.pl?sid=07/07/16/002203 [slashdot.org]

    Does ZFS Obsolete Expensive NAS/SANs?
    http://ask.slashdot.org/article.pl?sid=07/05/30/0135218 [slashdot.org]

    What the hell? Is this the new quarterly NAS discussion?

  • by jbwiv ( 266761 ) on Tuesday December 16, 2008 @08:11PM (#26139805)
    The problem with that is power consumption: build your own and you'll burn a lot of power unnecessarily, because it's overkill. Contrast that with the ReadyNAS Duo, which I own; it pulls around 30-40W on average. Much better, and greener.
  • Re:SMB (Score:5, Informative)

    by LoRdTAW ( 99712 ) on Tuesday December 16, 2008 @08:13PM (#26139841)

    A NAS is pretty much a server that is dedicated to storage.

    If he wants to roll his own, I would suggest either a light install of Ubuntu server or FreeNAS: http://www.freenas.org/ [freenas.org]. FreeNAS is based on the stripped-down FreeBSD core that m0n0wall uses; it is very small and is managed through a simple, easy-to-use web interface. I can't speak to gigabit performance, as I only set it up once, for a friend on 100Mbit. He had the Linksys NAS box and it was dog slow: on 100Mb it couldn't push more than 3-4MB/sec. I could get 8-9MB/sec using FreeNAS on an Athlon 1.3GHz with 128MB RAM and two 500GB SATA drives in RAID 1 (mirroring). He also added a USB 2.0 card to hook up another 500GB drive. It pretty much saturates his 100Mbit connection.

    And here is my related question to others here:
    I have fought with Samba on Ubuntu 8.04 server and I can't get it going faster than 10-11MB/sec when copying to/from Windows XP. Even with the tcp_nodelay setting and a few others, it just barely breaks 11MB/sec, while I can get 25-30MB/sec copying from one Windows PC to another. And the server hardware isn't puny: dual 2.4GHz P4 Xeons, 4GB RAM, dual PCI-X Intel gigabit NICs, and a PCI-X SATA controller. Anyone have any suggestions? NFS runs at the same speed, and when downloading from the Apache server I get 5-6MB/sec. Something is wrong somewhere but I can't tell what. I have changed kernels and played with conf files, but nothing works. Someone once told me Samba will always be slow, but I don't believe that's true.

  • Re:Cmon people... (Score:5, Informative)

    by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday December 16, 2008 @08:14PM (#26139847) Journal

    Your network connection is the limiting factor here. On large sequential reads, modern SATA drives with a mobo's onboard controller can easily maintain the 100MB/s or so it takes to max out your gigE connection.

    I second this.

    A good way to test your network connection is with netcat and pv. Both are packaged by all major Linux distros.

    On one machine run "nc -ulp 5000 > /dev/null". This sets up a UDP listener on port 5000 and directs anything that is sent to it to /dev/null. Use UDP for this to avoid the overhead of TCP.

    On the other machine, run "pv < /dev/zero | nc -u listenerhost 5000", where "listenerhost" is the hostname or IP address of the listening machine. That will fire an unending stream of zero-filled packets across the network to the listener, and pv will print an ongoing report of the speed at which the zeros are flowing.

    Let it run for a while and watch the performance. If the numbers you're getting aren't over 100MB/s -- and they often won't be, on a typical Gig-E network -- then don't worry about disk performance until you get that issue fixed. The theoretical limit on a Gig-E network is around 119MB/s.

    Do the same thing without the "-u" options to test TCP performance. It'll be lower, but should still be knocking on 100MB/s. To get it closer to the UDP numbers, you may want to look into turning on jumbo frames.

    pv is also highly useful for testing disk performance if you're building your own NAS (highly recommended -- a Linux box with 3-4 10K RPM SATA drives configured as a software RAID0 array will generally kick the ass of anything but very high-end gear, and it's nearly always better than hardware RAID0, too).
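
    For example, a quick disk-side check with pv might look like this (a sketch; the array device and mount point are placeholders):

        # Sequential read speed straight off the array
        sudo pv /dev/md0 > /dev/null

        # Sequential write speed through the filesystem
        dd if=/dev/zero bs=1M count=4096 | pv > /mnt/raid/testfile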

  • Re:Cmon people... (Score:5, Informative)

    by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday December 16, 2008 @08:15PM (#26139859) Journal

    pv < /dev/zero | nc -ulistenerhost 5000

    Slashdot ate the space after "-u". That should be "pv < /dev/zero | nc -u listenerhost 5000".

  • by crispytwo ( 1144275 ) on Tuesday December 16, 2008 @08:16PM (#26139871)

    They don't do too badly on transfer speed and are quite reliable. They use less power and aren't noisy like other NAS systems (especially roll-your-own boxes).

    Linux is their OS and if you need to add some functionality, you can get in and do it, but it works well out of the box.

    RAID 5 or 6 with the 508

    I've done the Windows SMB-server route, and it sucks for maintenance -- you're back at roll-your-own: patching and babysitting. I've built many a Linux box for this and, though they work, I have better things to do with my time. I really appreciate buying a few HDs, sticking them into a box, and having a system that can store data, transfer data, back itself up, etc. in a matter of minutes.

    Oh yes, compatible... via CIFS with most systems. NFS with Mac and Linux if you are so inclined. rsync for backup.

  • by bu1137 ( 979245 ) on Tuesday December 16, 2008 @08:22PM (#26139921)
    Get yourself an AMD 64 X2 4850BE (2.5GHz, 45W), a mainboard with the AMD 780G chipset, an efficient power supply, and two Western Digital Green Power drives. That'll eat about 40 watts at idle.
  • by gelfling ( 6534 ) on Tuesday December 16, 2008 @08:22PM (#26139923) Homepage Journal

    They are a little on the high end, cost-wise, for consumer boxes, but they are very reliable, the firmware actually works WELL, they support NTFS, and their network interfaces perform up to spec. And they support Macs.

    They make everything from 1-bay SATA units up to 4-bay 1U rackmounts with hot-swap drives, dual 1Gb NICs, and dual power supplies.

    www.synology.com

  • BuffaloTech sucks. (Score:1, Informative)

    by Anonymous Coward on Tuesday December 16, 2008 @08:45PM (#26140139)

    I had their TeraStation a few years ago. I bought it from Newegg, whose site (at the time) said that the TeraStation came with a 2 year warranty. 1 year after I bought it, it started acting funny. I called up BuffaloTech, only to be informed (after a near 2 hour wait on hold), that the TeraStation warranty is in fact only ONE YEAR and that they DO NOT repair TeraStations out of warranty. Yes, you heard that right... they won't even repair it and bill you. The jackass had the audacity to tell me that I should buy another one to get my data off. I told him to fuck off. I plugged the hard drives into my linux box and got the data off myself. Assholes.

    P.S.--Newegg saved the day. At first, they told me to go fly a kite. After asking them very nicely to ask their manager, they said OK and issued me a RMA number. I got my money back minus a small restocking fee (which is reasonable considering I didn't have the original box anymore). Newegg FTW.

  • by ceoyoyo ( 59147 ) on Tuesday December 16, 2008 @08:50PM (#26140173)

    Most NAS devices, particularly the consumer ones, cheap out on the processor. You might have great hard drive throughput, maybe even a nice fast network interface, but the poor little processor just can't keep up.

    If you want speed, definitely throw a PC at it.

  • by Score Whore ( 32328 ) on Tuesday December 16, 2008 @08:50PM (#26140189)

    You have seven drives in a software RAID5. Any time you do a write, the entire stripe has to be available to recompute parity; if you aren't doing full-stripe writes, that often means reading data back in from some of the drives. A normal PCI slot gives you 132MB/s max -- possibly a limitation, but it's higher than gigabit speeds, so you may not care much. Your RAID controller may not exactly be lightning, either. But personally, I'd suspect the number of columns in your RAID5.

    Also, as a little learning experiment: take a drive and make two partitions of a few gigs each, one at the beginning of the drive and one at the end, then benchmark the two (see the sketch below). In case you're not really that interested: the laws of physics make the bits at the outer edge of the platter go by about twice as fast as at the inner edge, so a disk that rates 60MB/s on the outer edge in a sequential benchmark will drop to around 35MB/s on the inner edge. On average, the majority of your disk isn't as fast as simple sequential tests suggest.
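
    A sketch of that experiment (assumes a blank test disk at /dev/sdb that you can repartition; hdparm -t does an uncached sequential read):

        # Partition 1 at the start of the disk (outer tracks), partition 2 at the end (inner tracks)
        sudo fdisk /dev/sdb        # create sdb1 at the front and sdb2 at the back, a few GB each

        sudo hdparm -t /dev/sdb1   # outer-edge throughput
        sudo hdparm -t /dev/sdb2   # expect roughly half of the above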

  • Re:SMB (Score:3, Informative)

    by bkeeler ( 29897 ) on Tuesday December 16, 2008 @09:09PM (#26140357)

    run "ethtool eth0" and have a look at the output. It's possible that it's autonegotiated a stupid setting like half-duplex or some lower speed.

    Do the same with the windows box; that information is the properties dialog for the network device.
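
    For example (a sketch; "eth0" is a placeholder):

        # Check the negotiated speed and duplex
        ethtool eth0 | grep -E 'Speed|Duplex'

        # If the link came up wrong, restart autonegotiation rather than forcing settings
        sudo ethtool -r eth0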

  • Re:SMB (Score:1, Informative)

    by Anonymous Coward on Tuesday December 16, 2008 @09:22PM (#26140447)

    A custom-built box, as many commenters suggested, seemed a tad inappropriate to me as he asked for an NAS device, not a server. Installing Ubuntu or whatever on it seems like more of a performance hit than a properly optimized "off the shelf" NAS box, since they most likely don't run Dbus, GNOME, Hald, bluetooth or any other desktop software atop the basic kernel and networking services.

    Most "Properly optimized" NAS units run a cut back Linux system anyway. There's no magic in them.

    You can also install Ubuntu Server (or any other distro for that matter) without Dbus, GNOME, Hald, bluetooth, etc. I know it's an amazing concept for some to be able to install an OS in a custom way, rather than vendor-enforced configs. That would be the "free as in freedom" part these Linux kids keep rabbiting on about.

  • Re:SMB (Score:3, Informative)

    by Bearhouse ( 1034238 ) on Tuesday December 16, 2008 @09:25PM (#26140477)

    I have fought with SAMBA on Ubuntu 8.04 server and I cant get it going faster than 10-11MB/sec when copying to/from Windows XP. ...Someone once told me SAMBA will always be slow but I don't believe that to be true.

    Well, for SAMBA tuning, try (pdf):

    http://tinyurl.com/5rfjvu [tinyurl.com]

    Alternatively, if you don't need all the Win network support that SAMBA provides, you can install ext2ifs on the XP boxes and enjoy easy and fast access to your *nix volumes. Works well for me. Caution: Security issues...

    http://www.fs-driver.org/index.html [fs-driver.org]

  • by aaarrrgggh ( 9205 ) on Tuesday December 16, 2008 @09:36PM (#26140569)

    The Buffalo Terastation uses a software RAID, which slows it considerably, with the side benefit of being nearly impossible to recover if it crashes.

    It does support SMB, NFS, and AFP out of the box, though.

    These boxes are cheap crap, and have a very limited useful lifespan. Our company lost a good deal of information when ours crapped out after 366 days. (Yes, we had backups, No they weren't perfect. They happened to be with me halfway around the globe at the time...)

    It really seems like the product offerings in this space have limited usability, poor reliability, and imperfect implementations, and are grossly overpriced. Doing it over again, I would go for a build-it-yourself box, hands down.

  • by PiSkyHi ( 1049584 ) on Tuesday December 16, 2008 @09:41PM (#26140621)

    There's a paradox here: those who know how to build a NAS would never buy a ready-made one.

    That's also why ready-made NASes don't have the features required -- only the people who don't know what to look for buy them.

  • Re:SMB (Score:1, Informative)

    by Anonymous Coward on Tuesday December 16, 2008 @09:45PM (#26140643)

    Depending on your budget, Linksys/NetGear is cheaper, but trades away quality, as you mentioned, due to lower-end hardware. For a little more (which goes a long way), a smaller Cisco NAS would suffice as an out-of-the-box solution that doesn't sacrifice speed and throughput (depending on the model, of course).

    A custom-built box, as many commenters suggested, seemed a tad inappropriate to me as he asked for an NAS device, not a server. Installing Ubuntu or whatever on it seems like more of a performance hit than a properly optimized "off the shelf" NAS box, since they most likely don't run Dbus, GNOME, Hald, bluetooth or any other desktop software atop the basic kernel and networking services.

    That's why, when you run a server, you don't install GNOME or anything like that on it...

  • by blincoln ( 592401 ) on Tuesday December 16, 2008 @09:46PM (#26140651) Homepage Journal

    I'd be interested to know if anyone wants to make a case that AFP is necessary, but my personal opinion is that it's only worth using if you're running an OSX server.

    Our Mac people at work claim that the only way for the OS X file search utility to work correctly is via AFP. The third-party server software they use as an AFP server on Windows maintains a server-side index, which I imagine is why, although I don't know how much of that is a requirement with OS X as opposed to their specific configuration.

  • Re:Cmon people... (Score:2, Informative)

    by xthor ( 625227 ) <xthor&xthorsworld,com> on Tuesday December 16, 2008 @10:01PM (#26140783) Homepage

    A good way to test your network connection is with netcat and pv. Both are packaged by all major Linux distros.

    Another cool network speed test, although not included with any distro, is netio [www.ars.de]. It's cross-platform, with versions for Linux and Windows in the archive. I just wish I could find a version for my MacBook (or knew enough to get it to compile under Fink).

  • Forget SOHO boxes (Score:3, Informative)

    by TopSpin ( 753 ) * on Tuesday December 16, 2008 @10:16PM (#26140895) Journal

    What you're expecting is really beyond the capability of common SOHO NAS equipment. These devices lack the RAM and CPU to approach the capacity of GB Ethernet.

    Unless you're willing to roll your own, you should consider a better class of gear and spend your time arguing for the funds to pay for it (a NetApp S550, perhaps). If you are willing to roll your own, you can get there for $1-2k using all-new hardware.

    Beware reusing older hardware; many GbE NICs can't approach saturation, due either to PCI bus contention or to low-end, low-cost implementations. Yes, in some cases older hardware can get there, but it will require careful configuration and tuning.

    You want a PCI-E bus, a decent 'server'-class NIC, recent SATA disks, a modern CPU (practically any C2D is sufficient), and enough RAM (2-4GB). Personally, I stick to Intel motherboard chipsets and limit myself to the SATA ports provided by Intel (as opposed to the third-party ports from Jaton, Silicon Image, et al.). Linux, md RAID 10 (see the sketch below). It will saturate a GbE port all day long, provided your switch can handle it...

    You're serving desktops, so jumbo frames are probably impractical (some legacy hardware on that LAN won't tolerate them). If your managed (?) switch can provide VLANs, you can multihome your critical workstations and use jumbo frames there. This gets you more performance with less CPU load, for 'free'.
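
    A sketch of that md RAID 10 setup (device names and mount point are placeholders; assumes four identical SATA disks):

        # Create a 4-disk md RAID 10 array, then put a filesystem on it
        sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sda /dev/sdb /dev/sdc /dev/sdd
        sudo mkfs.ext3 /dev/md0
        sudo mount /dev/md0 /srv/datasets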

  • Re:SMB (Score:3, Informative)

    by MarcQuadra ( 129430 ) on Tuesday December 16, 2008 @11:11PM (#26141261)

    8-9 MB/Sec? Really?

    I was getting 45-60MB/sec (basically drive speed) on an old dual-CPU 1GHz Pentium III running Linux and Samba with no GUI.

    Try throwing a low-end dual-core Core 2 (like an E5200) on an Intel board with a recent ICH chipset. Choose some -quality- drives, like WD RE3s, and a good network switch, like an SMC 8508-T, if you don't have something already. Load Ubuntu from the mini.iso: no GUI, only Ubuntu Server and Samba.

  • by ischorr ( 657205 ) on Wednesday December 17, 2008 @03:57AM (#26142725)

    OS X doesn't support CHANGING CIFS (SMB) permissions, so that's a concern. It can at least change NFS permissions, if only from the CLI or other Unix-permissions-aware apps.

  • Re: not anymore (Score:3, Informative)

    by phoenix321 ( 734987 ) * on Wednesday December 17, 2008 @05:32AM (#26143137)

    I have numbers to back this up: a D-Link DNS-323 with 2x 500GB 5400rpm Samsung drives in RAID 1. I don't know the exact drive model, but I selected them for low noise, low energy consumption, and low heat output, so they're absolutely not high performers. Yet in regular day-to-day operation, the gigabit adapter manages a steady throughput of 15 percent of 1000Mbit, push and pull, from/to medium-performance Windows workstations.

    This NAS has been on the market well over a year, and it took several firmware revisions before other problems were worked out -- but raw speed above 100Mbit was never an issue. I don't have any really high-performance client workstations, so I can't say whether that steady 150Mbit throughput is limited by the client or by the NAS itself, but it's certainly enough to max out any and all WiFi links, which is enough for most applications except full-disk backups, which take hours in any case.

    I researched for a while before buying and got pretty much what other users described. I suggest you do the same so you can avoid the bad apples in the crowd of NAS units.

  • Re:SMB (Score:3, Informative)

    by diskis ( 221264 ) on Wednesday December 17, 2008 @06:54AM (#26143447)

    That's not Samba's fault. It's the TCP window size on XP that is the problem.
    I have a cheap server at home running Ubuntu and Samba, with older drives that max out at 35-40MB/s.
    Clients running OS X, Linux or Vista get the full ~30MB/s, but XP clients seem to max out at 10-15MB/s. After tweaking the TCP window size, I've gotten them up to 20-25MB/s (see the sketch below).
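
    For reference, the XP-era tweak is a registry change (a sketch; 256960 is one commonly suggested window size, Tcp1323Opts=3 enables window scaling and timestamps, and a reboot is required -- back up the key first):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
        "TcpWindowSize"=dword:0003ebc0
        "Tcp1323Opts"=dword:00000003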

  • Synology DS408 (Score:2, Informative)

    by ImdatS ( 958642 ) on Wednesday December 17, 2008 @06:58AM (#26143461) Homepage

    I used two LaCie NAS units for a while, but their throughput was only 10MB/s (the one running XP as its OS) and 6MB/s (the one running Linux).

    Then I switched to Synology DS408. Mine has 4x Seagate 1.5TB HDs, RAID 5, so I have around 4TB of space.

    The network throughput maxes out at around 60MB/s(!), though that ceiling might be due to my not-so-good switch. It's all on a Gbps network.

    I've used it only from Mac OS X (iMac, MBP, MBA, MB) over AFP. I haven't tested performance with SMB or NFS, but it should be about as fast as AFP (probably even faster).

    One thing that really convinced me of Synology was their support. The Seagate 1.5TB HDs have some problems (make sure you buy ones with firmware >= SD1A), so I had a lot of issues at the beginning and thought it was a problem with the NAS; I even thought I had lost data. When I contacted Synology, they offered to log on to the NAS and try recovery and a local check -- for free. In the end they found the problem with the Seagate HDs and proposed the solution, and I am now even happier than before.

    And no, I'm not working at Synology...

  • by cciRRus ( 889392 ) on Wednesday December 17, 2008 @08:55AM (#26143941)
    Instead of FreeNAS, I've tried OpenFiler [openfiler.com]. I managed to configure an iSCSI target with DRBD as the datastore for my VMware ESX 3.5 server.

    OpenFiler is neat and easy to use. Check it out too.
  • Re:SMB (Score:3, Informative)

    by joib ( 70841 ) on Wednesday December 17, 2008 @08:14PM (#26153079)

    Unfortunately, "Using Samba" is almost 10 years old by now, and some of its tuning advice may no longer apply. In particular, newer Linux kernels (2.6.17+) have full TCP autotuning, but explicitly specifying buffer sizes (the socket options SO_RCVBUF and SO_SNDBUF) disables that autotuning. So a value that was good 10 years ago (8192) might be pretty far from optimal these days (see the sketch below).
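
    In other words, on a modern kernel the fix is often to delete the old advice rather than add more (a sketch of the relevant fragment of smb.conf):

        # /etc/samba/smb.conf -- in the [global] section.
        # Old guides suggest this, but on 2.6.17+ kernels the explicit
        # buffer sizes DISABLE TCP autotuning:
        #   socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
        # Drop the buffer sizes and keep only:
        socket options = TCP_NODELAY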
