SoHo NAS With Good Network Throughput?
An anonymous reader writes "I work at a small business where we need to move around large datasets regularly (move onto test machine, test, move onto NAS for storage, move back to test machine, lather-rinse-repeat). The network is mostly OS X and Linux with one Windows machine (for compatibility testing). The size of our datasets is typically in the multiple GB, so network speed is as important as storage size. I'm looking for a preferably off-the-shelf solution that can handle a significant portion of a GigE link; maxing out at 6MB/s is useless. I've been looking at SoHo NASes that support RAID, such as Drobo, NetGear (formerly Infrant), and BuffaloTech (who unfortunately doesn't even list whether they support OS X). They all claim they come with a GigE interface, but what sort of network throughput can they really sustain? Most of the numbers I can find on the websites only talk about drive throughput, not network, so I'm hoping some of you with real-world experience can shed some light here."
You could roll your own. (Score:5, Informative)
FreeNAS or OpenFiler on a PC with a raid controller and GigE should work. It might even be cheaper than a NAS box.
As to OS X support: I thought OS X supported Windows networking out of the box. Odds are very good that if it supports Windows, OS X will work.
Re:You could roll your own. (Score:5, Informative)
My situation is similar to yours. I bought and tested several off the shelf solutions and was continuously disappointed.
My solution was an off the shelf AMD PC filled with HDD's and linux software raid.
It's MUCH faster (90MB/sec) than any of the NAS solutions I tested.
With Christmas specials abounding right now, HDDs are cheap. Use independent controllers for each port and a reasonable CPU. Also make sure that the GigE interface is PCI-E.
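For anyone going this route, the array setup itself is only a couple of commands. A minimal sketch, assuming seven member partitions /dev/sdb1 through /dev/sdh1 (substitute your own device names):

    # create a 7-disk software RAID5 array
    mdadm --create /dev/md0 --level=5 --raid-devices=7 /dev/sd[bcdefgh]1
    # watch the initial resync progress
    cat /proc/mdstat
    # rough sequential read check once the resync settles
    hdparm -t /dev/md0

Put a filesystem on /dev/md0, export it over Samba or NFS, and you have your NAS.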
Re: (Score:2, Informative)
Sadly, my off the shelf pc is woefully insufficient ... I get 24MB/s max from a raid over gigabit ...
The pc was originally an AMD 1800+ with SDRAM.
there are 8 drives total, one boot (80 gig IDE)
7 250 gig Seagates, all IDE. Originally they were all on separate controllers, and I used a raid controller to do it (acting as plain IDE, no raid, in this case). The 7 250 gigs are set up in a software raid5 configuration in linux. Individually hdparm rates them as 60MB/s, and the whole raid as 70MB/s, but for whatever rea
Re:You could roll your own. (Score:4, Informative)
You have seven drives in a software raid5. Anytime you do a write, the entire stripe has to be available to recompute parity. If you aren't doing full stripe writes, that will often mean having to read data in from a portion of the drives. A normal PCI slot will give you 132 MB/s max. Possibly that is a limitation, but it's higher than gigabit speeds so you may not care that much. Also your raid controller may not exactly be lightning. But I'd personally suspect the number of columns in your RAID5.
Also, as a little learning experiment, take a drive and make two partitions of a few gig each. Put one of them at the beginning of the drive and put the other at the end of the drive. Benchmark the speed of those two partitions. In case you're not really that interested, the laws of physics make the bits at the outer edge of the platter go by about twice as quickly as at the inner edge. So if you are doing a sequential benchmark you'll find that a disk that rates 60MB/s on the outer edge will drop to 35MB/s on the inner edge. So on average, you'll find that the majority of your disk isn't as fast as simple sequential tests suggest.
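If you want to try that experiment, a rough sketch (assuming the test disk is /dev/sdb with a small partition at each end of the platter; adjust device names for your system):

    # sdb1 = outer edge (start of disk), sdb2 = inner edge (end of disk)
    dd if=/dev/sdb1 of=/dev/null bs=1M count=2048
    dd if=/dev/sdb2 of=/dev/null bs=1M count=2048

dd reports throughput when each read finishes, so the outer/inner difference is easy to see.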
This is misleading. (Score:4, Interesting)
However, as you say, benchmarking is the only way to really tell. Highly recommended.
Re:To this whole chain of comments, I would like (Score:4, Insightful)
... to say that software RAID is almost invariably a poor solution. It is woefully slow compared to even a slow hardware RAID implementation.
Spend a few bucks and get the right hardware. It is not expensive these days.
This may have been true years ago, but it's not anymore. Modern CPUs can handle parity computations without a problem. As long as your controllers can support the throughput needed, there is no need for hardware RAID. After all, we have ZFS.
Storage is undergoing a massive paradigm shift and folks like EMC are being caught with their pants down. Their spindle cost and price per GB is just too high.
Re:To this whole chain of comments, I would like (Score:4, Interesting)
The only shops that actually look at cost/GB as a measuring stick are small shops, or shops with very specific needs.
Large corporations, government and high tech companies are usually more concerned with management costs, retention, migration and so forth.
This is simply not true. There are plenty of commodity storage requirements that do not require Fibre Channel or even NetApp level NAS. On the other end of the spectrum, cost/GB might not be a huge factor, but the cost of getting necessary IOPS is certainly a factor.
I work on Wall St. and we have multiple PB of storage. We have tons of EMC. However, things like the Sun X4500 and similar products from HP are changing the game. Couple that with being able to do 48 ports of line-rate 10GigE in a 1 RMU stackable, per priority pause coming into use, and Data Center Ethernet down the road and you have many reasons to seriously reconsider the scope of your fibre channel deployment.
Re:To this whole chain of comments, I would like (Score:4, Insightful)
Wrong. Go do your homework.
Re:You could roll your own. (Score:4, Informative)
Most NAS devices, particularly the consumer ones, cheap out on the processor. You might have great hard drive throughput, maybe even a nice fast network interface, but the poor little processor just can't keep up.
If you want speed, definitely throw a PC at it.
Re: (Score:2)
I thought OS/X supported Windows networks out of the box. Odds are very good that if it supports Windows OS/X will work.
Yes, OSX supports SMB via Samba, which means it has solid support for Windows file sharing. You can run AFP on Linux or Windows, but frankly it's not really worth it. I'd be interested to know if anyone wants to make a case that AFP is necessary, but my personal opinion is that it's only worth using if you're running an OSX server.
Re:You could roll your own. (Score:4, Informative)
I'd be interested to know if anyone wants to make a case that AFP is necessary, but my personal opinion is that it's only worth using if you're running an OSX server.
Our Mac people at work claim that the only way for the OS X file search utility to work correctly is via AFP. The third-party server software they use as an AFP server on Windows maintains a server-side index, which I imagine is why, although I don't know how much of that is a requirement with OS X as opposed to their specific configuration.
Re: (Score:2)
I haven't directly diagnosed this issue since 10.3, but it still might be an issue:
OSX does support SMB pretty well (they actually use the samba suite under the hood for client and server). There's a catch though. In MacOS (classic and X), there are two parts to a file: the "data fork" (what you would normally think of as the file), and the "resource fork" (contains metadata, and executable code for "classic" programs). Over SMB, the resource forks are stored as a separate file; example.txt's resource
Re:You could roll your own. (Score:4, Informative)
Re: (Score:2, Informative)
Re: (Score:3, Interesting)
The OP stated they have a business need for moving gigabytes of data quickly around the office. Spending the extra money on a real server's power consumption would save them thousands of dollars a day worth of their time.
Even the cost of the power is pretty minimal for this... Figure 500 watts for 24 hours is 12 kWh. At worst you pay $0.20/kWh, which is a hair over $2/day, assuming 24 hours/day usage. My linux PC NAS in the basement saturates gigE and is under 100 watts active power consumption, or about
Re: (Score:2)
Re: (Score:3, Informative)
OS X doesn't support the ability to CHANGE CIFS (SMB) permissions, so that's a concern. It can at least change NFS permissions, if only from the CLI or other Unix permissions-aware apps.
Cmon people... (Score:5, Informative)
You might as well build it yourself.
Go get a lowbie Core2, mobo, a good amount of RAM, and 4 1TB disks. Install Ubuntu on it with LVM and encryption. Run the hardening packages, install Samba, install NFS, and install Webmin.
You now have a 100% controlled NAS that you built. You can also duplicate it and use DRBD, which I can guarantee NO SOHO hardware comes near. You can also put WINE on there and Xming on your Windows machines for remote Windows programs... The ideas are endless.
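For the Samba piece of that, the share definition is only a handful of lines. A minimal smb.conf sketch (the path and group name here are made up; adjust for your setup):

    [global]
       workgroup = WORKGROUP
       server string = office NAS

    [data]
       path = /srv/data
       read only = no
       guest ok = no
       valid users = @staff

Restart smbd after editing, point the OS X and Windows boxes at the share, and export the same directory in /etc/exports for NFS clients.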
Re: (Score:2)
I'll second this with a couple notes:
Encryption isn't so important unless you're worried about someone coming in and physically stealing your hardware, but it will complicate setup a bit and will slow down IO a bit (depending on CPU speed).
Webmin is great for this type of thing.
Your network connection is the limiting factor here. On large sequential reads, modern SATA drives with a mobo's onboard controller can easily maintain the 100MB/s or so it takes to max out your gigE connection.
Spend your money on s
Re: (Score:2)
---Encryption isn't so important unless you're worried about someone coming in and physically stealing your hardware, but it will complicate setup a bit and will slow down IO a bit (depending on CPU speed).
Yeah, it is a hit on I/O, but if we're using a Core2Duo, there's a bit of CPU available.. And you can sell it as "All your data is encrypted on disk as per Sarbanes-Oxley/HIPAA/governmental org standard." It's not terribly important, but a selling point.
---Webmin is great for this type of thing.
V
Re:Cmon people... (Score:5, Informative)
Your network connection is the limiting factor here. On large sequential reads, modern SATA drives with a mobo's onboard controller can easily maintain the 100MB/s or so it takes to max out your gigE connection.
I second this.
A good way to test your network connection is with netcat and pv. Both are packaged by all major Linux distros.
On one machine run "nc -ulp 5000 > /dev/null". This sets up a UDP listener on port 5000 and directs anything that is sent to it to /dev/null. Use UDP for this to avoid the overhead of TCP.
On the other machine, run "pv < /dev/zero | nc -ulistenerhost 5000", where "listenerhost" is the hostname or IP address of the listening machine. That will fire an unending stream of zero-filled packets across the network to the listener, and pv will print out an ongoing report on the speed at which the zeros are flowing.
Let it run for a while and watch the performance. If the numbers you're getting aren't over 100 MB/s -- and they often won't be, on a typical Gig-E network -- then don't worry about disk performance until you get that issue fixed. The theoretical limit on a Gig-E network is around 119 MBps.
Do the same thing without the "-u" options to test TCP performance. It'll be lower, but should still be knocking on 100 MBps. To get it closer to the UDP performance, you may want to look into turning on jumbo frames.
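Turning on jumbo frames on Linux is a one-liner, e.g. "ifconfig eth0 mtu 9000" (or "ip link set eth0 mtu 9000" with newer tools) -- just note that every NIC and switch in the path has to support the larger MTU, or you'll get weird stalls instead of speedups.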
pv is also highly useful for testing disk performance, if you're building your own NAS (highly recommended -- a Linux box with 3-4 10K RPM SATA drives configured as a software RAID0 array will generally kick the ass of anything other than very high end stuff. It's nearly always better than hardware RAID0, too).
Re:Cmon people... (Score:5, Informative)
pv < /dev/zero | nc -ulistenerhost 5000
Slashdot at the space after "-u". That should be "pv < /dev/zero | nc -u listenerhost 5000".
Re: (Score:3, Funny)
slashdot apparently also ate the 'e' in 'ate' :-)
OP: "off the shelf" (Score:5, Insightful)
See, the problem with responses like this is that they ignore the request of the original poster and, while being valid instructions for a home-built box, they are only a good solution if the OP's time has zero value. Your instructions involve eight steps: order (multiple) parts, wait for delivery, assemble, learn how and then install the OS, learn how and then install three other packages. The OP is looking for three steps: order one thing, wait for delivery, plug in and use.
Your post has value to the DIY crowd, certainly. But for someone looking for a product recommendation, it totally missed the boat.
Re:Cmon people... (Score:5, Insightful)
If they have a windows machine for "compatibility testing" and the rest of the units are Macs, you know damn well this guy couldn't "build his own"!
For what it's worth, I have worked in a place that almost exactly matches that description -- ton of macs, some leftover Windows PCs (rarely if ever used), and I ran Linux.
Everyone in that office could have built their own, if they had a reason to.
It is possible to actually like a Mac and not be technically illiterate / incapable of assembling a PC.
Re: (Score:2, Informative)
Yeah, it's not like a mac runs UNIX or has a freeBSD userland with a full ports tree or anything.
Re: (Score:2)
Yeah, it is a hit on I/O, but if we're using a Core2Duo, there's a bit of CPU available.. And you can sell it as "All your data is encrypted on disk as per Sarbanes-Oxley/HIPAA/governmental org standard." It's not terribly important, but a selling point.
Many businesses like to be paid lip service about "security", so feed it to them. Honestly, encryption would only prevent gaining access to data if it's physically stolen, so it would only make sense on laptops, not servers in a locked room. But it still looks good a
None (Score:5, Interesting)
Re: (Score:2)
I'd avoid Drobo as well. Although cute and brainless, it's really not a NAS (it has to be hooked to FireWire or, Bog forbid, USB2). Software is proprietary and they use a non-standard RAID format.
Re: (Score:2)
Software is proprietary and they use a non standard RAID format.
Sounds exactly like ReadyNAS.
Not that it matters, as a user -- doesn't it just present itself as a mass storage device, no software needed on the host box?
dedicated PC (Score:5, Informative)
Re: (Score:2)
ReadyNAS (Score:3, Informative)
Re: (Score:2)
Re: (Score:2)
Dlink DNS-323 (Score:3, Insightful)
Disclaimer: I have no affiliation with DLINK other than I stock some of their goods
Re: (Score:2)
I had a DNS-323 and never could get what I would consider good throughput with it (why bother with gigabit when it can barely fill 100 megabit). I ended up building a cheap PC out of spare parts and a few new things for not a lot more than the DNS-323, and it performs much better.
Not Buffalo (Score:2)
They have neat solutions, but their throughput is horrible. They support GigE, but the CPUs they use in their boxes are so underpowered they never achieve anything reasonably higher than 100Base-T speeds (if that).
I'd post links, but typing "Buffalo NAS throughput" in google comes up with multiple hits of reviews complaining about throughput.
Re: (Score:2)
I second this. The Buffalo units have a reasonably good UI and are easy to manage, but they are hideously slow.
Re: (Score:3, Informative)
OpenSolaris / ZFS (Score:3, Informative)
Solaris and ZFS (Score:2)
Consider Solaris + ZFS too, especially now that Solaris 10 u6(?) can install to a ZFS root partition (HINT: use the text installer - options 3 or 4 if memory serves).
Solaris is free as in beer, even if it isn't open source. Plus you get the benefit of some of the proprietary drivers if you have older hardware. Plus, Solaris proper won't leave you in the lurch when things change in OpenSolaris and you can't do updates or run some programs. [Admittedly this problem seems to be mostly resolved, but for mostly produ
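For the curious, getting a redundant, NFS-shared, compressed pool going on Solaris is about four commands. A sketch with made-up disk names (your c-t-d numbers will differ):

    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
    zfs create tank/share
    zfs set compression=on tank/share
    zfs set sharenfs=on tank/share

That's single-parity redundancy across three disks plus transparent compression, with no RAID card anywhere.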
Buffalo Tech Mac compatibility (Score:2, Informative)
I have a Terastation 2 (by Buffalo) and I am plugged into 100Mbps ethernet at work, so I can't tell you about the throughput, but I can tell you that the Terastation Mac stuff is very half-assed. I couldn't get AFP/Appletalk to work at all and while SMB is rock solid for large files, it cannot handle large amounts of small files. It chokes on directories with huge amounts of files (not sure if that's a limitation of the Finder or the Terastation's fault, though). I had a user's backup program run amok and
Re: (Score:2)
Our organization has been using a Terastation for a few years now. While it's generally a solid product for basic usage, it becomes difficult to work with when attempting any particularly complex configuration. And don't ask the Buffalo support staff for help; they don't know anything about the backend of their product.
If you're looking for flexibility, I'd recommend ditching the NAS idea entirely and going for a basic file server.
I got a linksys NSLU2 (Score:2)
Great little debian server, really bad performance as a NAS. Even with Debian on there.
I like the idea of the QNAP Turbo stations - effectively a modernised NSLU2 with 256 MB of RAM and a 500MHz chip, but then I want another server rather than an actual NAS...
NAS Charts (Score:2, Informative)
www.smallnetbuilder.com maintains a NAS Chart, which I find quite complete and current. (http://www.smallnetbuilder.com/component/option,com_nas/Itemid,190/)
Go to SmallNetBuilder.com (Score:4, Informative)
I frequent the site a bit, and there are a couple of tricks to getting good performance out of a NAS, or LAN throughput in general.
1. Use Jumbo Frames, period.
2. Use PCI-e NICs; onboard or plain PCI often just can't deliver the speeds offered by GigE. You can find simple Intel PCI-e NICs for under $20.
3. Drives make a big difference, obviously.
www.smallnetbuilder.com -- Good site.
Re: (Score:2)
I've benchmarked onboard and PCI NICs and get over 850Mb/s throughputs with iperf and netperf. Sure you could get another 50-100Mb/s with PCI-e, but that's practically a rounding error.
Re: (Score:2)
I notice someone else pointed out Tom's Hardware for a review. Tim Higgins was part of smallnetbuilder and Tom's Networking. I know he is still reviewing for smallnetbuilder, but I'm not sure about the Tom's part.
How automated is your testing? (Score:3, Insightful)
If your testing is highly automated, I can't help you as I don't have a lot of experience with high speed networking.
If your testing is reasonably manual, consider storing your data set on removable hard drives which are manually plugged into one computer, data is copied, then disconnected and moved to the other. A USB 2 interface will give you the most compatibility given the wide variety of hardware you're using, but perhaps there may even be hardware that does hot plugging E-SATA properly if you're willing to pay a premium.
Remember, for really high bandwidth physical media being shipped from one location to another is still a solution which should be considered.
Build it yourself (Score:2)
An off-the-shelf NAS device will be not only slow but also full of various bugs, for which you'll need to wait for the vendor to issue firmware updates...
Just build it yourself - build a PC. You have plenty of options:
1. If you have a rack somewhere, buy a low-end 2U rack server with enclosures for SATA disks and a decent RAID controller.
Or:
2. Build yourself a PC in a tower enclosure. Get some Core 2 Duo mobo (cheapest), a mediocre amount of RAM - SMB, NFS, and AppleTalk servers with a Linux operating system will
You don't want software raid and the (cheapest) MB (Score:2)
You don't want software raid, and the cheapest MB will have a way-out-of-date chipset. Also you don't want onboard video, as it takes up system RAM and chipset I/O even if you aren't using the system for much.
You will want 2GB or more RAM + dual gig-E or more in teaming + some kind of raid card; a good PCI-e x4 or better one is about $250+.
Yes you want software RAID and lots of memory. (Score:2, Insightful)
Actually, you don't want any RAID card, because it limits your upgrade and recovery options. Any modern CPU is not going to have any problems doing the memcopy and XORing required for RAID.
You do want as much memory as you can afford, especially since memory is cheap now.
My little home server has 8GB of memory, so it can sink huge write transfers very quickly. It uses 3 laptop SATA HDDs in RAID5, so it can take its sweet time writing the data to disk later, because it effectively has 8GB of disk cache.
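If you want the page cache to soak up bursts like that, the writeback knobs are plain sysctls. A sketch (the values are illustrative, not gospel):

    # start background writeback early, but allow dirty data up to 40% of RAM
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=40

With 8GB of RAM, that lets a multi-GB copy land in memory at wire speed while the RAID5 drains it at its own pace.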
Re: (Score:2)
What if it's for a small, say <=6 person office? 4 (5400RPM) SATA drives are perfectly fine.. if they're document and spreadsheet workers. The situation is completely different if these were video editors or CAD/CAM software types. Then you need a 10Gbps server, hardware raid10, say 15 drives, quad-core, and max RAM.
Roll your own (Score:2, Insightful)
Your best performance is likely to come by rolling your own. Off the shelf SOHO devices are built for convenience, not throughput.
Grab a PC (need not be anything top-of-the-line), a good server NIC, a decent hardware RAID card (you can usually get a good price on a Dell PERC SATA RAID on ebay), and a few SATA hard drives. Install something like FreeNAS or NexentaStor (or, if you want to go all the way, FreeBSD or Linux and Samba).
UnRaid: when build-from-scratch isn't fast enough (Score:4, Informative)
Okay, unRaid is not particularly fast compared to an optimized system, but it's expandable, has redundancy, is web managed, plays nice with windows, sets up in about 20 minutes, and costs $0 for a three-disk license and $69(?) for a 6-disk license.
My total unoptimized box on an utterly unoptimized Gb network (stock cards, settings, with 100 and 1000 nodes) and unmanaged switches just transferred an 8.3GB file in a hair under three minutes. From a single, cheap SATA drive to a Vista box with an old EIDE drive. Now 380Mb/s is not blazingly fast, but remember that it took almost no effort.
http://lime-technology.com/ [lime-technology.com]
No connection except as a happy customer with a 4TB media server where it took longer to assemble the case than to get the SW running. If only my Vista Media Center install had been this easy.
NAS disk architecture (Score:5, Interesting)
If you use a single-disk NAS solution and you are doing sequential reads through your files and file system, your throughput can't be greater than the read/write speed of a single disk, which is nowhere near GigE (1000 Mbps is about 125 MB/second, ignoring network protocol overhead). So you will need RAID (multiple disks) in your NAS, and you will want to use striped RAID (RAID 0) for performance. This means that you will not have any redundancy, unless you go with the very expensive striped mirrors or mirrored stripes (1+0/0+1). RAID 5 gives you redundancy, and isn't bad for reads, but will not be that great for writes.
As you compare/contrast NAS device performance, be sure that you understand the disk architecture in each case and make apples-to-apples comparisons (i.e., how does each one compare with the RAID architecture that you are interested in using - NAS devices that support RAID typically offer several RAID architectures). Also be sure that the numbers that you see are based on the kind of disk activity you will be using. It doesn't do much good to get a solution that is great at random small file reads (due to heavy use of cache and read-ahead) but ends up running out of steam when faced with steady sequential reads through the entire file system, where cache is drained and read-ahead can't stay ahead.
Once you get past the NAS device's disk architecture, you should consider the file sharing protocol. Supposedly (I have no authoritative testing results) CIFS/SMB (Windows file sharing) has a 10% to 15% performance penalty compared to NFS (Unix file sharing). I have no idea how Apple's native file sharing protocol (AFP) compares, but (I think) OS X can do all three, so you have some freedom to select the best one for the devices that you are using. Of course, since there are multiple implementations of each file sharing protocol and the underlying TCP stacks, there are no hard and fast conclusions that you can draw about which specific implementation is better without testing. One vendor's NFS may suck, and hence another vendors good CIFS/SMB may beat its pants off, even if the NFS protocol is theoretically faster than the CIFS/SMB protocol.
Whichever file sharing protocol you choose, it's very possible it will default to operation over TCP rather than UDP. If so, you should pay attention to how you tune your file sharing protocol's READ/WRITE transaction sizes (if you can), and how you tune your TCP stack (window sizes) to get the best performance possible. If you use an implementation over UDP, you still have to pay attention to how you set your READ/WRITE buffer sizes and how your system deals with IP fragmentation if the UDP PDU size exceeds what fits in a single IP packet due to the READ/WRITE sizes you set.
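Concretely, on a Linux NFS client that tuning boils down to mount options plus a couple of sysctls. A sketch (the server name and sizes are illustrative starting points, not tuned values):

    # larger NFS READ/WRITE transaction sizes, over TCP
    mount -o tcp,rsize=32768,wsize=32768 nas:/export/data /mnt/data

    # raise the TCP window (socket buffer) ceilings on both ends
    sysctl -w net.core.rmem_max=4194304
    sysctl -w net.core.wmem_max=4194304

Benchmark before and after each change; the right numbers depend on your hardware and network.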
Finally, make sure that your network infrastructure is capable of supporting the data transfer rates you envision. Not all gigabit switches have full wire-speed non-blocking performance on all ports simultaneously, and the ones that do are very expensive. You don't necessarily need full non-blocking backplanes based on your scenario, but make sure that whatever switch you do use has enough backplane capacity to handle your file transfers and any other simultaneous activity you will have going through the same switch.
Re: (Score:2)
your throughput can't be greater than the read/write speed of a single disk, which is nowhere near GigE (1000 Mbps is about 125 MB/second ignoring network protocol overhead)
Most bog-standard SATA drives should be able to push over 100 MBps on a sequential read. I'd go RAID0 to help with the case when files are a bit fragmented and other activity is going on, but under ideal conditions a single drive should be able to very nearly saturate a Gig-E TCP stream.
Network won't be your bottleneck. (Score:5, Informative)
Disk will always be. Since disk is your slowest spot you will always be disk I/O bound. So in effect there's no real reason to worry about network throughput from the NIC. NICs are efficient enough these days to just about never get bogged down. What you would want to look at for the network side would be your physical topology -- make sure you have a nice switch with nice backplane throughput.
About disks:
Your average fibre channel drive will top out at 300 IO/s because few people sell drives that can write any faster to the spindle (cost prohibitive for several reasons). Cache helps this out greatly. SATA is slightly slower at between 240-270 IO/s depending on manufacturer and type.
Your throughput will depend totally upon what type of IO is hitting your NAS and how you have it all configured (RAID type, cache size, etc). If you have a lot of random IO, your total throughput will be low once you've saturated your cache. Reads will always be worse than writes even though prefetching helps.
If you're working with multi-gigabyte datasets, you'll want to increase the number of spindles (i.e. the number of disks) as high as you can go within your budget, and make sure you have gobs of cache. If you decide to RAID it, which type you use will depend on how much integrity you need (we use a lot of RAID 10 with lots of spindles for many of our databases). That will speed you up significantly more than worrying about the NIC's throughput. Don't worry about that until you start topping a significant portion of your bandwidth -- for example, say 60MB/sec sustained over the wire.
This doesn't get fun until you start having to architect petabytes worth of disk. ;)
Re: (Score:2, Informative)
It sounds like you do this as your day job, working with big expensive NAS and SAN equipment. Yes, in those environments you'll be disk I/O bound long before you're NIC-bound. Sadly, the SOHO equipment is far, far worse. By and large, their throughput ranges from sad to atrocious. See SmallNetBuilder's NAS Charts for some benchmarks that will make you weep.
Yes it will (well, crap SOHO cpu/network). (Score:4, Insightful)
Ah, wrong.
This guy is talking about SOHO-type NAS boxes, where CPU and network throughput are the bottleneck.
If he was talking about 'real' NAS, then that is very different (although it is still trivial to get a NAS that can saturate GBit for many workloads).
Our 16/32 drive Raid6 SATA raid arrays easily sustain 400MB/sec locally for moderately non-random workloads - there are workloads for which this of course does not apply, but since he is apparently moving around GByte lumps, it would not be his case.
SOHO NAS devices normally run out of grunt at around 6MB/secish, even for long linear reads, some do better at up to 25.
I am thinking your workload is TPC-type database loads; don't assume everyone's is (we have a mix of video files and software development, very different..). TPC-type disk loads are a corner case.
We also love ATAOE but that is DEFINITELY not what he is looking for.
Never underestimate the bandwidth.... (Score:3, Insightful)
Never underestimate the bandwidth of a guy carrying a bundle of removable hard drives around the office.
Or a station wagon loaded with hard drives.
Nothing can beat them.
Re: (Score:2)
Nothing can beat them.
Maybe a C-5 Galaxy full of hookers carrying hard drives.
But that might be cost prohibitive for small office use.
Thecus N2100 (Score:3)
I've got a Thecus N2100 and the performance as a NAS isn't great. The CPU isn't powerful enough to take advantage of the gigE interface. For what you want, I'd get something more powerful, which probably means an x86 box. For anyone who just wants a home server that doesn't consume too much electricity so it can be left on all the time, a small ARM-based box is great. I'm running Debian on it and it's really useful.
Would not recommend the Buffalo (Score:2)
I bought a Buffalo NAS about three years ago; I bought it because of the 1000base-T interface and low cost. I persevered with it for about three months, and then demanded and got a full refund from the retailer.
tiny cube pc,few drives, and openfiler or solaris (Score:2)
Get a small PC case, such as one of the many small cube cases that come as a barebones. Put in a dual-core chip and 2GB RAM. Then you can install something like Openfiler, which will give you a nice web interface and the ability to do NFS, CIFS, FTP, and iSCSI. Alternatively, install Solaris or OpenSolaris and use ZFS, and have the ability to compress files at the filesystem level and also do a raidz with 3 drives for reliability and speed.
either way you can bond two ethernet interfaces together for 2Gbit whic
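On Linux the bonding part looks something like this (a sketch; assumes the bonding module is available and, for 802.3ad mode, a switch that speaks LACP):

    modprobe bonding mode=802.3ad miimon=100
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1

Keep in mind a single TCP stream still rides one physical link; aggregation mostly helps when several clients hit the box at once.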
Features vs. speed (Score:2)
I have an Infrant ReadyNAS+ and it is not fast. It has a TON of features (most of which I don't use) but transfer speeds are pegged at approx 7% to 8% network utilization through a gigE switch even with jumbo frames on and an upgraded stick of ram for the NAS cache. I get the same transfer rates with 3 different computers of various types including an older laptop and a very fast gaming machine, and my transfer rates are fairly close to what others report, which tells me the bottleneck is the NAS device.
Build yer own (Score:2)
I have a Linksys NSLU2 (running Debian Lenny) and it maxes out at about 4 MB/sec; the dinky amount of RAM means almost no FS or other buffering is possible, and the limp CPU (266 MHz) just can't push the IO fast enough.
A barebones PC is probably $200 + drives; slap OpenFiler or a real distro on it, and share out 1 big /share filesystem via Samba.
Already been extensively discussed... (Score:5, Informative)
For example:
Best home network NAS?
http://ask.slashdot.org/article.pl?sid=07/11/21/141244&from=rss [slashdot.org]
What NAS to buy?
http://ask.slashdot.org/article.pl?sid=08/06/30/1411229 [slashdot.org]
Building a Fully Encrypted NAS On OpenBSD
http://hardware.slashdot.org/article.pl?sid=07/07/16/002203 [slashdot.org]
Does ZFS Obsolete Expensive NAS/SANs?
http://ask.slashdot.org/article.pl?sid=07/05/30/0135218 [slashdot.org]
What the hell? Is this the new quarterly NAS discussion?
Re:Already been extensively discussed... (Score:4, Insightful)
What the hell? Is this the new quarterly NAS discussion?
Yes, I hope it is. Maybe not quarterly, but I have no problem "revisiting the classics" periodically. Technology marches on, best practices come and go, so it is useful to cover the same ground every so often. Seven years ago the coolest story ever was covered here: build a Terabyte fileserver for less than $5,000!!! [slashdot.org] (Note to visitors from the future: it is late 2008 and you can buy an external terabyte hard drive for a little over $100. Call it $125. That same five grand could buy you FORTY terabytes today. You probably got a 1TB USB jump drive in your cereal this morning.)
Plus, not everyone has been around as long as you and I. Won't somebody please think of the n00bs?!? :-)
Understand your performance requirements (Score:4, Interesting)
How many gigabytes are "multiple" gigabytes? Seriously, moving around five GB is much easier than 50 GB and enormously easier than 500 GB.
Another thing to consider: how many consumers are there? A "consumer" is any process that requests the data. If this post is a disguised version of "how do I serve all my DVD rips to all the computers in my house" then you probably won't ever have too many consumers to worry about. On the other hand, I work for an algorithmic trading company; we store enormous data sets (real-time market data) that range anywhere from a few hundred MB to upwards of 20 GB per day. The problem is that the traders are constantly doing analysis, so they may kick off hundreds of programs that each read several files at a time (in parallel via threads).
From what I've gathered, when such a high volume of data is requested from a network store, the problem isn't the network, it's the disks themselves. I.e., with a single sequential transfer, it's quite easy to max out your network connection: disk I/O will almost always be faster. But with multiple concurrent reads, the disks can't keep up. And note that this problem is compounded when using something like RAID5 or RAID6, because not only does your data have to be read, but the parity info as well.
So the object is actually to get many smaller disks, as opposed to fewer huge disks. The idea is to get as many spindles as possible.
If, however, your needs are more modest (e.g. serving DVD rips to your household), then it's pretty easy (and IMO fun) to build your own NAS. Just get:
You might also want to peruse the Ars Technica Forums [arstechnica.com]. I've seen a number of informative NAS-related threads there.
One more note: lots of people jump immediately to the high performance, and high cost RAID controllers. I personally prefer Linux software RAID. I've had no problems with the software itself; my only problem is getting enough SATA ports. It's hard to find a non-server grade (i.e. cheap commodity) motherboard with more than six or eight SATA ports. It's even harder to find non-PCI SATA add-on cards. You don't want SATA on your PCI bus; maybe one disk is fine, but that bus is simply too slow for multiple modern SATA drives. It's not too hard to find two port PCI express SATA cards; but if you want to run a lot of disks, two ports/card isn't useful. I've only seen a couple [newegg.com] of four-port non-RAID PCIe SATA cards [newegg.com]. There's one eight port gem [newegg.com], but it requires PCI-X, which, again, is hard to find on non-server grade boards.
I like the Synology boxes (CS508 or RS407) (Score:2, Informative)
They don't do too badly for xfer speed and are quite reliable. They seem to use less power and aren't noisy like other NAS systems (especially the RYO).
Linux is their OS and if you need to add some functionality, you can get in and do it, but it works well out of the box.
RAID 5 or 6 with the 508
I've done the Windows SMB and it sucks for maintenance and you're back at RYO - patch and crotch rub. I've built many a linux box for this and, though they work, I have better things to do with my time. I really appr
Synology boxes are GREAT (Score:3, Informative)
They are a little on the high end, cost wise for consumer boxes but they are very reliable, the firmware actually works WELL, they support NTFS and their network interfaces function up to spec. And they support Mac.
They make units from 1 bay SATA up to 4 bay 1U hot swappable dual 1Gb dual power supply rackmounts.
www.synology.com
along the lines of sneaker net (Score:2)
I'm going to suggest that you skip the NAS and just get a large-capacity eSata or firewire drive. Plug it into your current test machine, do your thing, unplug it and move along to the next machine. This approach sidesteps any limitations of your LAN, host machine, RAID cards, or NICs.
Skip Software RAID boxes (Score:4, Informative)
The Buffalo Terastation uses a software RAID, which slows it considerably, with the side benefit of being nearly impossible to recover if it crashes.
It does support SMB, NFS, and AFP out of the box though.
These boxes are cheap crap, and have a very limited useful lifespan. Our company lost a good deal of information when ours crapped out after 366 days. (Yes, we had backups, No they weren't perfect. They happened to be with me halfway around the globe at the time...)
Really seems like the product offerings in this space have limited usability, poor reliability, and imperfect implementations, and are grossly overpriced. Doing it over again, I would go for a build-it-yourself box, hands down.
Forget SOHO boxes (Score:3, Informative)
What you're expecting is really beyond the capability of common SOHO NAS equipment. These devices lack the RAM and CPU to approach the capacity of GB Ethernet.
Unless you're willing to roll your own, you should consider a better class of gear and spend your time arguing for the funds to pay for it (a NetApp S550, perhaps.) If you are willing to roll your own, you can get there for $1-2k using all new hardware.
Beware reusing older hardware; many GB NICs can't approach GBE saturation, either due to PCI bus contention or low end, low cost implementation. Yes, in some cases older hardware can get there, but this will require careful configuration and tuning.
You want a PCI-E bus, a decent 'server' class NIC, recent SATA disks, a modern CPU (practically any C2D is sufficient) and enough RAM (2-4 GB). Personally I stick to Intel-based MB chipsets and limit myself to the SATA ports provided by Intel (as opposed to the third-party ones provided by Jaton, Silicon Image, et al.) Linux, md raid 10. Will saturate a GBE port all day long, provided your switch can handle it...
You're serving desktops so jumbo frames are probably impractical (because some legacy hardware on that LAN will not tolerate it.) If your managed (?) switch can provide VLANs you can multihome your critical workstations and use jumbo frames. This will get you more performance with less CPU load for 'free'.
If supported by NAS.. NetBEUI.. (Score:3, Interesting)
If the NAS supports the non-routable NetBEUI protocol, it's worth a try.
Install the optional "NetBEUI" protocol stack located on the XP install disk (the same add-on will also work on Vista).
Don't forget to disable (uncheck) the "QOS Packet Scheduler"; it will limit you to 20-25% of max link speed.
Lastly, one must also disable NetBIOS over TCP/IP; if it connects first you won't see any performance boost. (Option located in the TCP/IP Advanced/WINS dialog.)
The older/non-routable NetBEUI protocol stack in the NT/W2K days was roughly 10x more CPU efficient per byte than NetBIOS over TCP/IP.
In XP/Vista environments it's still 5x more CPU efficient than NetBIOS over TCP/IP.
Re:SMB (Score:5, Informative)
Re: (Score:2, Insightful)
Saying "Gigabit ethernet" means nothing. For instance, Intel SS4000-E comes with "dual gigabit ethernet ports". Wow. This must mean that it supports up to 2Gbps, right?
Wrong.
First, the two ports don't support link aggregation, they're independent. Second, instead of a real-world performance of about 50-70 MB/sec on a gigabit link, this unit gives you... wait for it... 5 to 10 MB/sec.
That's right, no typo there. Its CPU is so sleazy that that's all it can manage on small files. Large files get you up to 15MB
Re: (Score:3, Interesting)
I concur with this. Anything that says "GigE" only means that it's offering an interface that is compliant to the specification, not that it can pass 1000Mb/s.
A few days ago, I went digging for some information on switches. I'm a big Cisco fan, and I have specs on everything that I use. I know which of my switches can handle more traffic than others. That's kind of important.
Someone (to remain nameless) bought a GigE "switch". A name brand, but consumer grad
Re:SMB (Score:5, Funny)
The user believed he had increased performance, because his switch said "GigE" on it
Does his Cat 6 say "Monster Cable"?
Re: not anymore (Score:3, Informative)
I have numbers to back it up: D-Link DNS-323, 2x 500GB 5400RPM Samsung drives in RAID-1 configuration. I don't know the exact model, but I certainly selected these for low noise, low energy consumption and low heat output. So they're by no means high performers, but in regular, day-to-day operations the Gigabit adapter manages a steady throughput of 15 percent of 1000Mbit, push and pull, from/to medium-performance Windows workstations.
This NAS unit is on the market for well over a year and it took severa
Re: (Score:2)
Well it looks like SMB is your best bet for compatibility. For a budget, just go with a small Linksys or Cisco device, as you can specify the hard drive and the network around it governs the speed.
This isn't really true. For *lots* of low-end NAS devices, the performance limitation is their puny CPUs, that can barely shift bits fast enough to saturate a 100M link.
Re: (Score:2)
A custom-built box, as many commenters suggested, seemed a tad inappropriate to me as he asked for an NAS device, not a server. Installing Ubuntu or whatever on it seems like more of a performance hit tha
Re:SMB (Score:5, Informative)
A custom-built box, as many commenters suggested, seemed a tad inappropriate to me as he asked for an NAS device, not a server. Installing Ubuntu or whatever on it seems like more of a performance hit than a properly optimized "off the shelf" NAS box, since they most likely don't run Dbus, GNOME, Hald, bluetooth or any other desktop software atop the basic kernel and networking services.
While this is true, for noticeably less than you'll pay for a NAS appliance, you can build a PC with vastly more CPU power and RAM (in particular, storage vendors - even with high-end, full-blown SAN solutions - are offensively stingy with cache), which will more than make up for any extra stuff that might be running.
You need to spend a LOT on an "appliance" type storage system to get something that has higher performance and/or better features than a "server". Particularly with cache, storage vendors across the board are offensively stingy (16 gigs of high-quality ECC RAM costs maybe $800, but you'll be lucky if your $100k SAN comes with half that amount).
Personally I would recommend the OP looks at Server/NAS-style "appliances" like Dell's NF500. They're the only sort of "cheap" turnkey devices he'll find that will deliver the performance he seems to want, and will probably only cost a grand or two more than DIY.
Re:SMB (Score:5, Informative)
A NAS is pretty much a server that is dedicated to storage.
If he wants to roll his own I would suggest either a light install of Ubuntu Server or FreeNAS: http://www.freenas.org/ [freenas.org]. FreeNAS is based on the stripped-down FreeBSD core that m0n0wall uses. It is very small and is managed using a simple and easy-to-use web interface. I don't know about gigabit performance as I only set it up once, for a friend, using 100Mbit. He had the Linksys NAS box and it was dog slow. On 100Mb it couldn't push more than 3-4 MB/sec. I could get 8-9 MB/sec using FreeNAS on an Athlon 1.3GHz with 128MB RAM and two SATA 500GB drives in RAID 1 (mirroring). He also added a USB 2.0 card to hook up another 500GB drive. It pretty much saturates his 100Mbit connection.
And here is my related question to others here:
I have fought with SAMBA on Ubuntu 8.04 server and I can't get it going faster than 10-11MB/sec when copying to/from Windows XP. Even with the tcp_nodelay setting and a few others it just barely breaks 11MB/sec. I can get 25-30MB/sec when copying from one Windows PC to another. And the server hardware isn't puny: dual P4 2.4GHz Xeons, 4GB RAM, dual PCI-X Intel gigabit and a PCI-X SATA controller. Anyone have any suggestions? NFS also runs at the same speed, and when downloading from the Apache server I get 5-6MB/sec. Something is wrong somewhere but I can't tell what. I have changed kernels and played with conf files but nothing works. Someone once told me SAMBA will always be slow but I don't believe that to be true.
Re: (Score:3, Informative)
run "ethtool eth0" and have a look at the output. It's possible that it's autonegotiated a stupid setting like half-duplex or some lower speed.
Do the same with the windows box; that information is the properties dialog for the network device.
Re: (Score:3, Informative)
I have fought with SAMBA on Ubuntu 8.04 server and I can't get it going faster than 10-11MB/sec when copying to/from Windows XP. ...Someone once told me SAMBA will always be slow but I don't believe that to be true.
Well, for SAMBA tuning, try (pdf):
http://tinyurl.com/5rfjvu [tinyurl.com]
Alternatively, if you don't need all the Win network support that SAMBA provides, you can install ext2ifs on the XP boxes and enjoy easy and fast access to your *nix volumes. Works well for me. Caution: Security issues...
http://www.fs-driver.org/index.html [fs-driver.org]
Re: (Score:3, Informative)
That's not Samba's fault. It's the TCP window size on XP that is the problem.
I have at home a cheap server running Ubuntu and Samba with older drives that max out at 35-40 MB/s.
Clients using OS X, Linux or Vista get the full ~30 MB/s, but XP clients seem to max out at 10-15MB/s. After tweaking the TCP window size, I've gotten the speed up to 20-25MB/s.
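For reference, the XP-side tweak is a registry change; a sketch using reg.exe (256960 is the commonly suggested window size, a multiple of the 1460-byte MSS, and Tcp1323Opts=3 enables window scaling and timestamps -- back up the registry first, and reboot after):

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpWindowSize /t REG_DWORD /d 256960 /f
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v Tcp1323Opts /t REG_DWORD /d 3 /f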
Re: (Score:3, Informative)
Unfortunately "Using Samba" is almost 10 years old by now, and some of the tuning advice might not be applicable any more. In particular, newer versions of the Linux kernel (2.6.17+) have full TCP autotuning, but explicitly specifying buffer sizes (socket options SO_RCVBUF and SO_SNDBUF) will disable this autotuning. So using some value that was good 10 years ago (8192) might be pretty far from optimal these days.
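In practice that means checking smb.conf for an old-style socket options line and trimming it. A sketch of the relevant bit:

    [global]
       # fine to keep:
       socket options = TCP_NODELAY
       # drop explicit buffer sizes on 2.6.17+ kernels; they defeat TCP autotuning:
       # socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192

Restart Samba and re-run the copy test after changing it.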
Re: (Score:3, Informative)
8-9 MB/Sec? Really?
I was getting 45-60MB/Sec (basically drive speed) on an old dual-cpu 1Ghz Pentium 3. I had Linux and Samba and no GUI running on it.
Try throwing a low-end dual Core 2 (like an E5200) in an Intel board with a recent ICH chipset. Choose some -quality- drives, like WD RE3s, and a good network switch, like an SMC 8508-T if you don't have something already. Load Ubuntu from the mini.iso, no GUI, only Ubuntu Server and Samba.
Openfiler is good too (Score:3, Informative)
OpenFiler is neat and easy to use. Check it out too.
Re: (Score:2)
While that might be true in certain circumstances, the article does say "I work at a small business where we need to move around large datasets regularly...network speed is as important as storage size."
This is for a business, not a home fileserver to share pictures and videos of the family vacation. If network speed really is a top priority then nothing will beat a custo
Re: (Score:3, Insightful)
Well it looks like SMB is your best bet for compatibility.
OS X doesn't support NFS? Linux doesn't support AFP?
Besides which, don't the better NAS boxes support pretty much everything, all at once?
Re: (Score:2)
OS X doesn't support NFS? Linux doesn't support AFP?
Not as well as they (respectively) support SMB.
Re: (Score:3, Insightful)
I hate to point this out, but 5G in 15 minutes is about 5 megabytes per second.
GigE peak theoretical throughput is like 125MB/s.
Consumer grade hard drives can average throughput in the 60MB/s range.
If this is the fastest NAS solution they tested and CNET is thrilled with their blazing 5MB/s sustained throughput to the NAS - I don't want one.
I'm going to have to suggest going with a cheapo 2.8GHz HyperThreaded P4 based 'server' w/ GigE, 1G of RAM and a few SATA drives on a RAID controller. Use whatever OS y
Re: (Score:2)
Yes, they're Linux-based. I've just installed an ssh server on my N3200, now I have a very cheap hackable Linux box with space for 3 disks.
There's a Wiki [onbeat.dk] for more information about hacking Thecus products.
Btw, they're using netatalk for Mac-Support, which appears to work really well.
Re: (Score:2)
Note that you have to unzip the files there ONCE to get the .mod-file to upload to the device (which is really a .tar.gz-file). Took me a while to figure that out.
Re: (Score:2)
I also have a Drobo, and had a Drobo Share. In my situation the Drobo Share didn't make sense to keep.
The Drobo v2 can transfer at over 30MB/s across its firewire. I didn't test the USB, but I imagine it is not too much less than that.
The Drobo Share gets maybe up to 15MB/s, which is half of what the Drobo is capable of. If you want to use a Drobo, use a stripped down machine with a bit of RAM and a streamlined OS just for sharing. That way you can use the throughput of the Drobo more effectively.
Of cou
Re: (Score:2, Insightful)
Stability is SUPERB --- the system has NEVER crashed --- the only downtime is when the power goes out or I go on vacation.
Speed is satisfactory --- everyone seems happy with the network. It just works.
C