SoHo NAS With Good Network Throughput?

An anonymous reader writes "I work at a small business where we need to move around large datasets regularly (move onto test machine, test, move onto NAS for storage, move back to test machine, lather-rinse-repeat). The network is mostly OS X and Linux with one Windows machine (for compatibility testing). Our datasets are typically multiple GB in size, so network speed is as important as storage size. I'm looking for a preferably off-the-shelf solution that can handle a significant portion of a GigE link; maxing out at 6 MB/s is useless. I've been looking at SoHo NASes that support RAID, such as Drobo, NetGear (formerly Infrant), and BuffaloTech (who unfortunately doesn't even list whether they support OS X). They all claim they come with a GigE interface, but what sort of network throughput can they really sustain? Most of the numbers I can find on the websites only talk about drive throughput, not network, so I'm hoping some of you with real-world experience can shed some light here."
This discussion has been archived. No new comments can be posted.

  • by LWATCDR ( 28044 ) on Tuesday December 16, 2008 @05:59PM (#26138959) Homepage Journal

    FreeNAS or OpenFiler on a PC with a RAID controller and GigE should work. It might even be cheaper than a NAS box.
    As for OS X support: I thought OS X supported Windows networking out of the box. Odds are very good that if it supports Windows, OS X will work.

    • by nhtshot ( 198470 ) on Tuesday December 16, 2008 @06:23PM (#26139265)

      My situation is similar to yours. I bought and tested several off the shelf solutions and was continuously disappointed.

      My solution was an off-the-shelf AMD PC filled with HDDs and Linux software RAID.

      It's MUCH faster (90MB/sec) than any of the NAS solutions I tested.

      With Christmas specials abounding right now, HDDs are cheap. Use independent controllers for each port and a reasonable CPU. Also make sure that the GigE interface is PCI-E.

      • Re: (Score:2, Informative)

        Sadly, my off-the-shelf PC is woefully insufficient ... I get 24MB/s max from a RAID over gigabit ...

        The PC was originally an AMD 1800+ with SDRAM.

        There are 8 drives total: one boot drive (80 gig IDE) and

        7 250 gig Seagates, all IDE. Originally they were all on a separate controller, and I used a RAID controller to do it (acting as IDE, no RAID, in this case). The 7 250-gig drives are set up in a software RAID5 configuration in Linux. Individually hdparm rates them at 60MB/s, and the whole RAID at 70MB/s, but for whatever rea

        • by Score Whore ( 32328 ) on Tuesday December 16, 2008 @07:50PM (#26140189)

          You have seven drives in a software raid5. Anytime you do a write, the entire stripe has to be available to recompute parity. If you aren't doing full stripe writes, that will often mean having to read data in from a portion of the drives. A normal PCI slot will give you 132 MB/s max. Possibly that is a limitation, but it's higher than gigabit speeds so you may not care that much. Also your raid controller may not exactly be lightning. But I'd personally suspect the number of columns in your RAID5.

          Also, as a little learning experiment, take a drive and make two partitions of a few gig each. Put one of them at the beginning of the drive and the other at the end of the drive. Benchmark the speed of those two partitions. In case you're not really that interested: the laws of physics make the bits at the outer edge of the platter go by about twice as quickly as those at the inner edge. So if you are doing a sequential benchmark you'll find that a disk that rates 60MB/s on the outer edge will drop to 35MB/s on the inner edge. So on average, you'll find that the majority of your disk isn't as fast as simple sequential tests suggest.
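          If you want to try it, here's a rough sketch with stock Linux tools (assuming the drive under test is /dev/sdb -- a made-up name -- and holds nothing you care about):

            # one small partition at the outer edge, one at the inner edge
            parted -s /dev/sdb mklabel gpt
            parted -s /dev/sdb mkpart outer 1MiB 5GiB
            parted -s /dev/sdb mkpart inner 95% 100%

            # hdparm -t does an uncached sequential read; run each a few times and average
            hdparm -t /dev/sdb1
            hdparm -t /dev/sdb2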

          • This is misleading. (Score:4, Interesting)

            by Jane Q. Public ( 1010737 ) on Tuesday December 16, 2008 @09:30PM (#26140979)
            While it is true that the outside of the disk is spinning faster than the inner portion, in a modern HDD there are also several times more sectors in those outer rings. So while strictly speaking the read times might be faster, the seek times are not, and may even be slower. The sectors might even be interleaved, making any such comparison almost meaningless.

            However, as you say, benchmarking is the only way to really tell. Highly recommended.
      • by ceoyoyo ( 59147 ) on Tuesday December 16, 2008 @07:50PM (#26140173)

        Most NAS devices, particularly the consumer ones, cheap out on the processor. You might have great hard drive throughput, maybe even a nice fast network interface, but the poor little processor just can't keep up.

        If you want speed, definitely throw a PC at it.

    • I thought OS X supported Windows networking out of the box. Odds are very good that if it supports Windows, OS X will work.

      Yes, OSX supports SMB via Samba, which means it has solid support for Windows file sharing. You can run AFP on Linux or Windows, but frankly it's not really worth it. I'd be interested to know if anyone wants to make a case that AFP is necessary, but my personal opinion is that it's only worth using if you're running an OSX server.

      • by blincoln ( 592401 ) on Tuesday December 16, 2008 @08:46PM (#26140651) Homepage Journal

        I'd be interested to know if anyone wants to make a case that AFP is necessary, but my personal opinion is that it's only worth using if you're running an OSX server.

        Our Mac people at work claim that the only way for the OS X file search utility to work correctly is via AFP. The third-party server software they use as an AFP server on Windows maintains a server-side index, which I imagine is why, although I don't know how much of that is a requirement with OS X as opposed to their specific configuration.

    • by syntax ( 2932 )

      I haven't directly diagnosed this issue since 10.3, but it still might be an issue:

      OSX does support SMB pretty well (they actually use the samba suite under the hood for client and server). There's a catch though. In MacOS (classic and X), there are two parts to the file: the "data fork" (what you would normally think of as the file), and the "resource fork" (contains meta data, and executable code for "classic" programs). Over SMB, the resource forks are stored as a separate file; example.txt's resource

    • by jbwiv ( 266761 ) on Tuesday December 16, 2008 @07:11PM (#26139805)
      The problem with that is power consumption. Build your own, and you'll be burning a lot of power unnecessarily because it's overkill. Contrast that with the ReadyNAS Duo, which I own; it pulls on average around 30-40W. Much better and greener.
      • Re: (Score:2, Informative)

        by bu1137 ( 979245 )
        Get yourself an AMD 64 X2 4850BE (2.5 GHz, 45W), a mainboard with an AMD 780G chipset, an efficient power supply, and two Western Digital Green Power drives. That'll eat about 40 watts at idle.
      • Re: (Score:3, Interesting)

        by edmudama ( 155475 )

        The OP stated they have a business need for moving gigabytes of data quickly around the office. Spending the extra money on a real server's power consumption would save them thousands of dollars a day worth of their time.

        Even the cost of the power is pretty minimal for this... Figure 500 watts for 24 hours is 12 kWh. At worst you pay $0.20/kWh, which is a hair over $2/day, assuming 24 hours/day usage. My linux PC NAS in the basement saturates gigE and is under 100 watts active power consumption, or about

    • by Luthair ( 847766 )
      One consideration with a PC is that they will likely require significantly more power than a NAS box, even if you buy a low power system.
    • Re: (Score:3, Informative)

      by ischorr ( 657205 )

      OS X doesn't support the ability to CHANGE CIFS (SMB) permissions, so that's a concern. It can at least change NFS permissions, if only from the CLI or other Unix permissions-aware apps.

  • Cmon people... (Score:5, Informative)

    by Creepy Crawler ( 680178 ) on Tuesday December 16, 2008 @06:00PM (#26138961)

    You might as well build it yourself.

    Go get a low-end Core 2, a mobo, a good amount of RAM, and four 1TB disks. Install Ubuntu on them with LVM and encryption. Run the hardening packages, install Samba, install NFS, and install Webmin.

    You now have a 100% controlled NAS that you built. You can also duplicate it and use DRBD, which I can guarantee NO SOHO hardware comes near. You can also put WINE on there and Ming on your Windows machines for remote Windows programs... The ideas are endless.
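    Roughly, the file-sharing half of that recipe on a stock Ubuntu box boils down to something like this (package names from the standard repos; the share path /srv/datasets is a made-up example):

      sudo apt-get install samba nfs-kernel-server

      # SMB share -- append to /etc/samba/smb.conf, then restart Samba
      #   [datasets]
      #      path = /srv/datasets
      #      read only = no
      sudo /etc/init.d/samba restart

      # NFS export of the same directory -- append to /etc/exports, then re-export
      #   /srv/datasets 192.168.1.0/24(rw,async,no_subtree_check)
      sudo exportfs -ra

    Webmin isn't packaged in the standard Ubuntu repos; it installs from the .deb on webmin.com.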

    • by emmons ( 94632 )

      I'll second this with a couple notes:

      Encryption isn't so important unless you're worried about someone coming in and physically stealing your hardware, but it will complicate setup a bit and will slow down IO a bit (depending on CPU speed).

      Webmin is great for this type of thing.

      Your network connection is the limiting factor here. On large sequential reads, modern SATA drives with a mobo's onboard controller can easily maintain the 100MB/s or so it takes to max out your gigE connection.

      Spend your money on s

      • ---Encryption isn't so important unless you're worried about someone coming in and physically stealing your hardware, but it will complicate setup a bit and will slow down IO a bit (depending on CPU speed).

        Yeah, it is a hit on I/O, but if we're using a Core2Duo, there's a bit of CPU available... And you can sell it as "All your data is encrypted on disk, as per Sarbanes-Oxley/HIPAA/governmental-org standards." It's not terribly important, but it's a selling point.

        ---Webmin is great for this type of thing.

        V

      • Re:Cmon people... (Score:5, Informative)

        by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday December 16, 2008 @07:14PM (#26139847) Journal

        Your network connection is the limiting factor here. On large sequential reads, modern SATA drives with a mobo's onboard controller can easily maintain the 100MB/s or so it takes to max out your gigE connection.

        I second this.

        A good way to test your network connection is with netcat and pv. Both are packaged by all major Linux distros.

        On one machine run "nc -ulp 5000 > /dev/null". This sets up a UDP listener on port 5000 and directs anything that is sent to it to /dev/null. Use UDP for this to avoid the overhead of TCP.

        On the other machine, run "pv < /dev/zero | nc -u listenerhost 5000", where "listenerhost" is the hostname or IP address of the listening machine. That will fire an unending stream of zero-filled packets across the network to the listener, and pv will print out an ongoing report on the speed at which the zeros are flowing.

        Let it run for a while and watch the performance. If the numbers you're getting aren't over 100 MB/s -- and they often won't be, on a typical Gig-E network -- then don't worry about disk performance until you get that issue fixed. The theoretical limit on a Gig-E network is around 119 MBps.

        Do the same thing without the "-u" options to test TCP performance. It'll be lower, but should still be knocking on 100 MBps. To get it closer to the UDP performance, you may want to look into turning on jumbo frames.

        pv is also highly useful for testing disk performance, if you're building your own NAS (highly recommended -- a Linux box with 3-4 10K RPM SATA drives configured as a software RAID0 array will generally kick the ass of anything other than very high-end stuff. It's nearly always better than hardware RAID0, too).
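        For the disk side, a quick sketch (assuming the array shows up as /dev/md0 and is mounted at /mnt/raid -- both hypothetical names):

          # raw sequential read speed of the array
          pv /dev/md0 > /dev/null

          # write speed through the filesystem
          dd if=/dev/zero bs=1M count=8192 | pv > /mnt/raid/testfile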

    • by StaticEngine ( 135635 ) on Tuesday December 16, 2008 @07:10PM (#26139795) Homepage

      See, the problem with responses like this is that they ignore the request of the original poster, and, while being valid instructions for a home-build, they are only a good solution if the OP's time has zero value. Your instructions involve eight steps: order (multiple) parts, wait for delivery, assemble, learn how to and then install an OS, learn how to and then install three other packages. The OP is looking for three steps: order one thing, wait for delivery, plug in and use.

      Your post has value to the DIY crowd, certainly. But for someone looking for a product recommendation, it totally missed the boat.

  • None (Score:5, Interesting)

    by afidel ( 530433 ) on Tuesday December 16, 2008 @06:01PM (#26138987)
    If you want decent throughput build it yourself. Seriously. I have a coworker that bought 5 different NAS devices to do a bakeoff for a small skunkworks office and they all sucked for throughput. We ended up buying a $1K NAS that still wasn't great but sure beat all the SOHO ones. Numbers were ~8MB/s max on the fastest SOHO unit vs 25MB/s on the midrange one.
    • That mirrors my experience with a ReadyNAS NV+ - reliable, not particularly cheap, slow. For my purposes (just a backup of terabytes of photographic images) it's fine. For anything needing throughput, I'd roll my own.

      I'd avoid Drobo as well. Although cute and brainless, it's really not a NAS (it has to be hooked up via FireWire or, Bog forbid, USB2). The software is proprietary and they use a non-standard RAID format.
      • The software is proprietary and they use a non-standard RAID format.

        Sounds exactly like ReadyNAS.

        Not that it matters, as a user -- doesn't it just present itself as a mass storage device, no software needed on the host box?

  • dedicated PC (Score:5, Informative)

    by spire3661 ( 1038968 ) on Tuesday December 16, 2008 @06:03PM (#26139013) Journal
    In terms of cost/benefit ratio, nothing beats a stripped-down PC with a lot of drives stuffed in it or in an external eSATA enclosure. I run an HP NAS MV2020 and a Linksys NAS200, and they both can't hold a candle to a PC in throughput. I've heard of some commercial systems out there, but they cost a small fortune. Just my $.02.
    • I've had a similar experience. When I decided I wanted to make myself a small home file server, I just took an old 3GHz P4 and put it into a new case with a big hard drive cage. Add a SATA card and a couple of terabyte drives, and I've got a nice Gigabit NAS setup with a much faster processor / RAM than anything you're going to get in a consumer-level NAS. Best of all, I only had to spend money on the hard drives and the SATA card. Everything else was already lying around.
  • ReadyNAS (Score:3, Informative)

    by IceCreamGuy ( 904648 ) on Tuesday December 16, 2008 @06:07PM (#26139059) Homepage
    We have a ReadyNAS 1100, it's alright, but I wouldn't call it stellar. I get around 80Mb/sec to it over the network, but the management interface is IE only (as far as I can tell, since it has problems with FF and Chrome), and it has these odd delays when opening shares and browsing directories. Some of the nice features are the out-of-the-box NFS support and small, 1U size.
  • Dlink DNS-323 (Score:3, Insightful)

    by speeDDemon (nw) ( 643987 ) on Tuesday December 16, 2008 @06:08PM (#26139073) Homepage
    I have evaluated a few different products (I have a retail store) and so far I have been very happy with the DLINK DNS-323 [dlink.com.au]
    Disclaimer: I have no affiliation with DLINK other than I stock some of their goods
    • by Burdell ( 228580 )

      I had a DNS-323 and never could get what I would consider good throughput with it (why bother with gigabit when it can barely fill 100 megabit). I ended up building a cheap PC out of spare parts and a few new things for not a lot more than the DNS-323, and it performs much better.

  • They have neat solutions, but their throughput is horrible. They support GigE, but the CPUs they use in their boxes are so underpowered they never achieve anything reasonably higher than 100-base-T (if that).

    I'd post links, but typing "Buffalo NAS throughput" in google comes up with multiple hits of reviews complaining about throughput.

    • I second this. The Buffalo units have a reasonably good UI and are easy to manage, but they are hideously slow.

    • Re: (Score:3, Informative)

      by dave562 ( 969951 )
      I agree with the suggestion to avoid Buffalo. Someone else responded to this thread and said that their UI is good. My experience was just the opposite. The UI sucked, and trying to get the thing integrated into Active Directory was a nightmare. The setup appears to be straightforward: specify the domain name, specify a domain username/password combo. The reality of the situation turned out to be decidedly different and required numerous calls to tech support, firmware updates, and a lot of headaches.
  • OpenSolaris / ZFS (Score:3, Informative)

    by msdschris ( 875574 ) on Tuesday December 16, 2008 @06:12PM (#26139133)
    Build it yourself and install OpenSolaris. ZFS rocks.
    • Consider Solaris + ZFS too, especially now that Solaris 10 u6(?) can install to a ZFS root partition (HINT: use the text installer -- options 3 or 4, if memory serves).

      Solaris is free as in beer, even if it isn't open source. Plus you get the benefit of some of the proprietary drivers if you have older hardware. Plus, Solaris proper won't leave you in the lurch when things change in OpenSolaris and you can't do updates or run some programs. [Admittedly this problem seems to be mostly resolved, but for mostly produ
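      For what it's worth, a raidz pool with compression and an NFS share is only a few commands once it's installed (the c1t*d0 disk names below are placeholders for whatever format reports on your box):

        # three-disk raidz pool named "tank"
        zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

        # compressed filesystem, shared over NFS
        zfs create tank/datasets
        zfs set compression=on tank/datasets
        zfs set sharenfs=on tank/datasets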

  • I have a Terastation 2 (by Buffalo) and I am plugged into 100Mbps Ethernet at work, so I can't tell you about the throughput, but I can tell you that the Terastation Mac stuff is very half-assed. I couldn't get AFP/AppleTalk to work at all, and while SMB is rock solid for large files, it cannot handle large numbers of small files. It chokes on directories with huge numbers of files (not sure if that's a limitation of the Finder or the Terastation's fault, though). I had a user's backup program run amok and

    • I second regarding Buffalo's half-assedness.

      Our organization has been using a Terastation for a few years now. While it's generally a solid product for basic usage, it becomes difficult to work with when attempting any particularly complex configuration. And don't ask the Buffalo support staff for help; they don't know anything about the backend of their product.

      If you're looking for flexibility, I'd recommend ditching the NAS idea entirely and going for a basic file server.
  • Great little Debian server, really bad performance as a NAS. Even with Debian on there.

    I like the idea of the QNAP Turbo stations - effectively a modernised NSLU2 with 256 MB of RAM and a 500MHz chip, but then I want another server rather than an actual NAS...

  • NAS Charts (Score:2, Informative)

    by Anonymous Coward

    www.smallnetbuilder.com maintains a NAS Chart; I find it quite complete and up to date. (http://www.smallnetbuilder.com/component/option,com_nas/Itemid,190/)

  • by sco_robinso ( 749990 ) on Tuesday December 16, 2008 @06:18PM (#26139209)
    They have the most comprehensive NAS benchmarks around (that I've stumbled across, at least). Also, lots of good tests showing various things like jumbo frames, etc. Very good overall.

    I frequent the site a bit, and there are a couple of tricks to getting good performance out of a NAS, or LAN throughput in general:

    1. Use Jumbo Frames, period (quick sketch at the end of this comment).
    2. Use PCI-E NICs; onboard or PCI just can't deliver the speeds offered by GigE. You can find simple Intel PCI-E NICs for under $20.
    3. Drives make a big difference, obviously.

    www.smallnetbuilder.com -- Good site.
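    In case it saves anyone a search, turning on jumbo frames on a Linux box is a one-liner (eth0 and the 9000-byte MTU are just examples; every NIC and switch in the path has to be set to the same MTU or you'll see odd stalls):

      ip link set dev eth0 mtu 9000
      ip link show eth0        # confirm the new MTU took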
    • I've benchmarked onboard and PCI NICs and get over 850Mb/s throughputs with iperf and netperf. Sure you could get another 50-100Mb/s with PCI-e, but that's practically a rounding error.
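      For reference, the usual iperf run behind numbers like that is just the following (192.168.1.10 stands in for the server's address):

        iperf -s                        # on the server/NAS end
        iperf -c 192.168.1.10 -t 30     # on the client: 30-second TCP test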

    • I notice someone else pointed out Tom's Hardware for a review. Tim Higgins was part of smallnetbuilder and Tom's Networking. I know he is still reviewing for smallnetbuilder, but I'm not sure about the Tom's part.

  • by syousef ( 465911 ) on Tuesday December 16, 2008 @06:19PM (#26139227) Journal

    If your testing is highly automated, I can't help you as I don't have a lot of experience with high speed networking.

    If your testing is reasonably manual, consider storing your data set on removable hard drives which are manually plugged into one computer, data is copied, then disconnected and moved to the other. A USB 2 interface will give you the most compatibility given the wide variety of hardware you're using, but perhaps there may even be hardware that does hot plugging E-SATA properly if you're willing to pay a premium.

    Remember, for really high bandwidth physical media being shipped from one location to another is still a solution which should be considered.

  • An off-the-shelf NAS device will be not only slow but also full of various bogus bugs, for which you'll need to wait for the vendor to issue firmware updates...

    Just build it yourself - build a PC. You have plenty of options:

    1. If you have a rack somewhere, buy a low-end 2U rack server with enclosures for SATA disks and a decent RAID controller.

    Or:

    2. Build yourself a PC in a tower enclosure. Get some Core 2 Duo mobo (cheapest), a mediocre amount of RAM - SMB and NFS and AppleTalk servers with a Linux operating system will

    • You don't want software RAID, and the cheapest mobo sucks -- way out-of-date chipset. Also, you don't want onboard video, as it takes up system RAM and chipset I/O even if you aren't using the system for much.

      You will want 2GB or more of RAM + dual GigE or more in teaming + some kind of RAID card; a good PCI-E x4 or better one is about $250+.

      • Actually, you don't want any RAID card, because it limits your upgrade and recovery options. Any modern CPU is not going to have any problems doing the memcopy and XORing required for RAID.
        You do want as much memory as you can afford, especially since memory is cheap now.
        My little home server has 8GB of memory, so it can sink huge write transfers very quickly. It uses 3 laptop SATA HDDs in RAID5, so it can take its sweet time writing the data to the HDDs later, because it effectively has an 8GB disk cache.

      • What if it's for a small, say six-person, office? 4 (5400 RPM) SATA drives are perfectly fine... if they're document and spreadsheet workers. The situation is completely different if these were video editors or CAD/CAM software types. Then you need a 10 Gbps server, hardware RAID10, say 15 drives, quad-core, and max RAM.

  • Your best performance is likely to come by rolling your own. Off the shelf SOHO devices are built for convenience, not throughput.

    Grab a PC (need not be anything top-of-the-line), a good server NIC, a decent hardware RAID card (you can usually get a good price on a Dell PERC SATA RAID on ebay), and a few SATA hard drives. Install something like FreeNAS or NexentaStor (or, if you want to go all the way, FreeBSD or Linux and Samba).

  • by Overzeetop ( 214511 ) on Tuesday December 16, 2008 @06:26PM (#26139301) Journal

    Okay, unRAID is not particularly fast compared to an optimized system, but it's expandable, has redundancy, is web-managed, plays nice with Windows, sets up in about 20 minutes, and costs $0 for a three-disk license and $69(?) for a 6-disk license.

    My totally unoptimized box on an utterly unoptimized Gb network (stock cards and settings, with 100 and 1000 nodes) and unmanaged switches just transferred an 8.3GB file in a hair under three minutes, from a single, cheap SATA drive to a Vista box with an old EIDE drive. Now 380Mb/s is not blazingly fast, but remember that it took almost no effort.

    http://lime-technology.com/ [lime-technology.com]

    No connection except as a happy customer with a 4TB media server that took longer to assemble the case than to get the SW running. If only my Vista Media Center install had been this easy.

  • by anegg ( 1390659 ) on Tuesday December 16, 2008 @06:35PM (#26139405)

    If you use a single-disk NAS solution and you are doing sequential reads through your files and file system, your throughput can't be greater than the read/write speed of a single disk, which is nowhere near GigE (1000 Mbps is about 125 MB/second ignoring network protocol overhead). So you will need RAID (multiple disks) in your NAS, and you will want to use striped RAID (RAID 0) for performance. This means that you will not have any redundancy, unless you go with the very expensive striped mirrors or mirrored stripes (1+0/0+1). RAID 5 gives you redundancy, and isn't bad for reads, but will not be that great for writes.

    As you compare/contrast NAS device performance, be sure that you understand the disk architecture in each case and make oranges-to-oranges comparisons (i.e., how does each one compare with the RAID architecture that you are interested in using - NAS devices that support RAID typically offer several RAID architectures). Also be sure that the numbers that you see are based on the kind of disk activity you will be using. It doesn't do much good to get a solution that is great at random small-file reads (due to heavy use of cache and read-ahead) but ends up running out of steam when faced with steady sequential reads through the entire file system, where the cache is drained and read-ahead can't stay ahead.

    Once you get past the NAS device's disk architecture, you should consider the file sharing protocol. Supposedly (I have no authoritative testing results) CIFS/SMB (Windows file sharing) has a 10% to 15% performance penalty compared to NFS (Unix file sharing). I have no idea how Apple's native file sharing protocol (AFP) compares, but (I think) OS X can do all three, so you have some freedom to select the best one for the devices that you are using. Of course, since there are multiple implementations of each file sharing protocol and the underlying TCP stacks, there are no hard and fast conclusions that you can draw about which specific implementation is better without testing. One vendor's NFS may suck, and hence another vendor's good CIFS/SMB may beat its pants off, even if the NFS protocol is theoretically faster than the CIFS/SMB protocol.

    Whichever file sharing protocol you choose, it's very possible it will default to operation over TCP rather than UDP. If so, you should pay attention to how you tune your file sharing protocol READ/WRITE transaction sizes (if you can), and how you tune your TCP stack (window sizes) to get the best performance possible. If you use an implementation over UDP, you still have to pay attention to how you set your READ/WRITE buffer sizes and how your system deals with IP fragmentation if the UDP PDU size exceeds what fits in a single IP packet due to the READ/WRITE sizes you set.
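    On a Linux client or server, the window-size part of that tuning is a handful of sysctls; the 16 MB figures below are common starting points rather than anything authoritative:

      sysctl -w net.core.rmem_max=16777216
      sysctl -w net.core.wmem_max=16777216
      # min / default / max auto-tuned buffer per TCP socket
      sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
      sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"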

    Finally, make sure that your network infrastructure is capable of supporting the data transfer rates you envision. Not all gigabit switches have full wire-speed non-blocking performance on all ports simultaneously, and the ones that do are very expensive. You don't necessarily need full non-blocking backplanes based on your scenario, but make sure that whatever switch you do use has enough backplane capacity to handle your file transfers and any other simultaneous activity you will have going through the same switch.

    • your throughput can't be greater than the read/write speed of a single disk, which is nowhere near GigE (1000 Mbps is about 125 MB/second ignoring network protocol overhead)

      Most bog-standard SATA drives should be able to push over 100 MBps on a sequential read. I'd go RAID0 to help with the case when files are a bit fragmented and other activity is going on, but under ideal conditions a single drive should be able to very nearly saturate a Gig-E TCP stream.

  • by m0e ( 55482 ) on Tuesday December 16, 2008 @06:37PM (#26139421)

    Disk will always be the bottleneck. Since disk is your slowest spot, you will always be disk I/O bound. So in effect there's no real reason to worry about network throughput from the NIC. NICs are efficient enough these days to just about never get bogged down. What you want to look at on the network side is your physical topology -- make sure you have a nice switch with good backplane throughput.

    About disks:

    Your average fibre channel drive will top out at 300 IO/s because few people sell drives that can write any faster to the spindle (cost prohibitive for several reasons). Cache helps this out greatly. SATA is slightly slower at between 240-270 IO/s depending on manufacturer and type.

    Your throughput will depend totally upon what type of IO is hitting your NAS and how you have it all configured (RAID type, cache size, etc). If you have a lot of random IO, your total throughput will be low once you've saturated your cache. Reads will always be worse than writes even though prefetching helps.

    If you're working with multi-gigabyte datasets, you'll want to increase the number of spindles (i.e. the number of disks) to as high as you can go within your budget and make sure you have gobs of cache. If you decide to RAID it, which type you use will depend on how much integrity you need (we use a lot of RAID 10 with lots of spindles for many of our databases). That will speed you up significantly more than worrying about the NIC's throughput. Don't worry about that until you start topping a significant portion of your bandwidth -- for example, say 60MB/sec sustained over the wire.

    This doesn't get fun until you start having to architect petabytes worth of disk. ;)

    • Re: (Score:2, Informative)

      by Anonymous Coward

      It sounds like you do this as your day job, working with big expensive NAS and SAN equipment. Yes, in those environments you'll be disk I/O bound long before you're NIC bound. Sadly, the SOHO equipment is far, far worse. By and large, their throughput ranges from sad to atrocious. See SmallNetBuilder's NAS Charts for some benchmarks that will make you weep.

    • by thesupraman ( 179040 ) on Tuesday December 16, 2008 @08:08PM (#26140345)

      Ah, wrong.

      This guy is talking about SOHO-type NAS boxes; their CPU and network throughput are the bottleneck.

      If he was talking about a 'real' NAS, then that is very different (although it is still trivial to get a NAS that can saturate GBit for many workloads).

      Our 16/32-drive RAID6 SATA arrays easily sustain 400MB/sec locally for moderately non-random workloads - there are workloads for which this of course does not apply, but since he is apparently moving around GByte lumps, it would not be his case.

      SOHO NAS devices normally run out of grunt at around 6MB/sec-ish, even for long linear reads; some do better, at up to 25.

      I am thinking your workload is TPC-type database loads; don't assume everyone's is (we have a mix of video files and software development, very different...). TPC-type disk loads are a corner case.

      We also love ATA-over-Ethernet, but that is DEFINITELY not what he is looking for.

  • by M0b1u5 ( 569472 ) on Tuesday December 16, 2008 @06:45PM (#26139523) Homepage

    Never underestimate the bandwidth of a guy carrying a bundle of removable hard drives around the office.

    Or a station wagon loaded with hard drives.

    Nothing can beat them.

    • Nothing can beat them.

      Maybe a C-5 Galaxy full of hookers carrying hard drives.
      But that might be cost prohibitive for small office use.

  • by opk ( 149665 ) on Tuesday December 16, 2008 @06:47PM (#26139541) Journal

    I've got a Thecus N2100, and the performance as a NAS isn't great. The CPU isn't powerful enough to take advantage of the GigE interface. For what you want, I'd get something more powerful, which probably means an x86 box. For anyone who just wants a home server that doesn't consume too much electricity, so it can be left on all the time, a small ARM-based box is great. I'm running Debian on it and it's really useful.

  • I bought a Buffalo NAS about three years ago; I bought it because of the 1000base-T interface and low cost. I persevered with it for about three months, and then demanded and got a full refund from the retailer.

  • Get a small PC case, such as one of the many small cube cases that come as a barebones. Put in a dual-core chip and 2GB of RAM. Then you can install something like Openfiler, which will give you a nice web interface and the ability to do NFS, CIFS, FTP, and iSCSI. Alternatively, install Solaris or OpenSolaris and use ZFS, and you get the ability to compress files at the filesystem level and also do a raidz with 3 drives for reliability and speed.

    Either way, you can bond two Ethernet interfaces together for 2Gbit, whic
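    For the curious, the Linux side of that bonding looks roughly like this (interface names and address are examples, and the switch has to be configured for 802.3ad aggregation too; note that a single TCP stream still tops out at one link's worth):

      modprobe bonding mode=802.3ad miimon=100   # loading the driver creates bond0
      ifenslave bond0 eth0 eth1
      ip addr add 192.168.1.5/24 dev bond0
      ip link set bond0 up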

  • I have an Infrant ReadyNAS+ and it is not fast. It has a TON of features (most of which I don't use) but transfer speeds are pegged at approx 7% to 8% network utilization through a gigE switch even with jumbo frames on and an upgraded stick of ram for the NAS cache. I get the same transfer rates with 3 different computers of various types including an older laptop and a very fast gaming machine, and my transfer rates are fairly close to what others report, which tells me the bottleneck is the NAS device.

  • I have a Linksys NSLU2 (running Debian Lenny) and it maxes out at about 4 MB/sec, the dinky amount of RAM means almost no FS or other buffering is possible, and the limp CPU (266 MHz) just can't push the IO fast enough.
    A barebones PC is probably $200 + drives, slap OpenFiler or a real distro on it, and share out 1 big /share filesystem via Samba.

  • by kwabbles ( 259554 ) on Tuesday December 16, 2008 @07:10PM (#26139797)

    For example:

    Best home network NAS?
    http://ask.slashdot.org/article.pl?sid=07/11/21/141244&from=rss [slashdot.org]

    What NAS to buy?
    http://ask.slashdot.org/article.pl?sid=08/06/30/1411229 [slashdot.org]

    Building a Fully Encrypted NAS On OpenBSD
    http://hardware.slashdot.org/article.pl?sid=07/07/16/002203 [slashdot.org]

    Does ZFS Obsolete Expensive NAS/SANs?
    http://ask.slashdot.org/article.pl?sid=07/05/30/0135218 [slashdot.org]

    What the hell? Is this the new quarterly NAS discussion?

    • by sootman ( 158191 ) on Wednesday December 17, 2008 @12:13AM (#26142043) Homepage Journal

      What the hell? Is this the new quarterly NAS discussion?

      Yes, I hope it is. Maybe not quarterly, but I have no problem "revisiting the classics" periodically. Technology marches on, best practices come and go, so it is useful to cover the same ground every so often. Seven years ago the coolest story ever was covered here: build a Terabyte fileserver for less than $5,000!!! [slashdot.org] (Note to visitors from the future: it is late 2008 and you can buy an external terabyte hard drive for a little over $100. Call it $125. That same five grand could buy you FORTY terabytes today. You probably got a 1TB USB jump drive in your cereal this morning.)

      Plus, not everyone has been around as long as you and I. Won't somebody please think of the n00bs?!? :-)

  • by raw-sewage ( 679226 ) on Tuesday December 16, 2008 @07:15PM (#26139861)

    How many gigabytes are "multiple" gigabytes? Seriously, moving around five GB is much easier than 50 GB and enormously easier than 500 GB.

    Another thing to consider: how many consumers are there? A "consumer" is any process that requests the data. If this post is a disguised version of "how do I serve all my DVD rips to all the computers in my house" then you probably won't ever have too many consumers to worry about. On the other hand, I work for an algorithmic trading company; we store enormous data sets (real-time market data) that range anywhere from a few hundred MB to upwards of 20 GB per day. The problem is that the traders are constantly doing analysis, so they may kick off hundreds of programs that each read several files at a time (in parallel via threads).

    From what I've gathered, when such a high volume of data is requested from a network store, the problem isn't the network, it's the disks themselves. I.e., with a single sequential transfer, it's quite easy to max out your network connection: disk I/O will almost always be faster. But with multiple concurrent reads, the disks can't keep up. And note that this problem is compounded when using something like RAID5 or RAID6, because not only does your data have to be read, but the parity info as well.

    So the goal is actually to get many smaller disks, as opposed to fewer huge disks. The idea is to get as many spindles as possible.

    If, however, your needs are more modest (e.g. serving DVD rips to your household), then it's pretty easy (and IMO fun) to build your own NAS. Just get:

    • a case that can hold a lot of disks
    • a fairly recent motherboard
    • the cheapest CPU supported by the motherboard (your load is virtually all I/O; very little CPU is needed with modern I/O chipsets)
    • some RAM
    • a high quality, high capacity power supply
    • the disks themselves
    • and your favorite free operating system of choice

    You might also want to peruse the Ars Technica Forums [arstechnica.com]. I've seen a number of informative NAS-related threads there.

    One more note: lots of people jump immediately to the high performance, and high cost RAID controllers. I personally prefer Linux software RAID. I've had no problems with the software itself; my only problem is getting enough SATA ports. It's hard to find a non-server grade (i.e. cheap commodity) motherboard with more than six or eight SATA ports. It's even harder to find non-PCI SATA add-on cards. You don't want SATA on your PCI bus; maybe one disk is fine, but that bus is simply too slow for multiple modern SATA drives. It's not too hard to find two port PCI express SATA cards; but if you want to run a lot of disks, two ports/card isn't useful. I've only seen a couple [newegg.com] of four-port non-RAID PCIe SATA cards [newegg.com]. There's one eight port gem [newegg.com], but it requires PCI-X, which, again, is hard to find on non-server grade boards.
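    For completeness, building the kind of md array described above is only a few commands (device names, RAID level, and mount point are just an example):

      # four-disk software RAID5
      mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]
      mkfs.ext3 /dev/md0
      mkdir -p /srv/datasets && mount /dev/md0 /srv/datasets

      # watch the initial resync
      cat /proc/mdstat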

  • They don't do too badly for xfer speed and are quite reliable. They seem to use less power and aren't noisy like other NAS systems (especially the RYO).

    Linux is their OS and if you need to add some functionality, you can get in and do it, but it works well out of the box.

    RAID 5 or 6 with the 508

    I've done the Windows SMB and it sucks for maintenance and you're back at RYO - patch and crotch rub. I've built many a linux box for this and, though they work, I have better things to do with my time. I really appr

  • by gelfling ( 6534 ) on Tuesday December 16, 2008 @07:22PM (#26139923) Homepage Journal

    They are a little on the high end, cost-wise, for consumer boxes, but they are very reliable, the firmware actually works WELL, they support NTFS, and their network interfaces function up to spec. And they support Mac.

    They make units from 1-bay SATA up to 4-bay, 1U, hot-swappable, dual-1Gb, dual-power-supply rackmounts.

    www.synology.com

  • I'm going to suggest that you skip the NAS and just get a large-capacity eSata or firewire drive. Plug it into your current test machine, do your thing, unplug it and move along to the next machine. This approach sidesteps any limitations of your LAN, host machine, RAID cards, or NICs.

  • by aaarrrgggh ( 9205 ) on Tuesday December 16, 2008 @08:36PM (#26140569)

    The Buffalo Terastation uses a software RAID, which slows it considerably, with the side benefit of being nearly impossible to recover if it crashes.

    It does support SMB, NFS, and AFS out of the box though.

    These boxes are cheap crap, and have a very limited useful lifespan. Our company lost a good deal of information when ours crapped out after 366 days. (Yes, we had backups, No they weren't perfect. They happened to be with me halfway around the globe at the time...)

    Really seems like the product offerings in this space have limited usability, poor reliability, and imperfect implementations, and are grossly overpriced. Doing it over again, I would go for a build-it-yourself box, hands down.

  • Forget SOHO boxes (Score:3, Informative)

    by TopSpin ( 753 ) * on Tuesday December 16, 2008 @09:16PM (#26140895) Journal

    What you're expecting is really beyond the capability of common SOHO NAS equipment. These devices lack the RAM and CPU to approach the capacity of GB Ethernet.

    Unless you're willing to roll your own, you should consider a better class of gear and spend your time arguing for the funds to pay for it (a NetApp S550, perhaps.) If you are willing to roll your own, you can get there for $1-2k using all new hardware.

    Beware reusing older hardware; many GB NICs can't approach GBE saturation, either due to PCI bus contention or low end, low cost implementation. Yes, in some cases older hardware can get there, but this will require careful configuration and tuning.

    You want a PCI-E bus, a decent 'server' class NIC, recent SATA disks, a modern CPU (practically any C2D is sufficient) and enough RAM (2-4 GB). Personally I stick to Intel-based motherboard chipsets and limit myself to the SATA ports provided by Intel (as opposed to the third-party ones provided by Jaton, Silicon Image, et al.). Linux, md RAID 10. It will saturate a GbE port all day long, provided your switch can handle it...

    You're serving desktops so jumbo frames are probably impractical (because some legacy hardware on that LAN will not tolerate it.) If your managed (?) switch can provide VLANs you can multihome your critical workstations and use jumbo frames. This will get you more performance with less CPU load for 'free'.

  • by FirstOne ( 193462 ) on Wednesday December 17, 2008 @09:11AM (#26144571) Homepage

    If the NAS supports the non-routable NetBEUI protocol:

    Install the optional NetBEUI protocol stack located on the XP install disc (the same add-on will also work on Vista).

    Don't forget to disable (uncheck) the "QoS Packet Scheduler"; it will limit you to 20-25% of max link speed.

    Lastly, one must also disable NetBIOS over TCP/IP -- if it connects first, you won't see any performance boost. (The option is located in the TCP/IP Advanced/WINS dialog.)

    The older, non-routable NetBEUI protocol stack in the NT/W2K days was roughly 10x more CPU-efficient per byte than NetBIOS over TCP/IP.

    In XP/Vista environments it's still 5x more CPU-efficient than NetBIOS over TCP/IP.

"Conversion, fastidious Goddess, loves blood better than brick, and feasts most subtly on the human will." -- Virginia Woolf, "Mrs. Dalloway"

Working...