Super Fast NVMe RAID Comes To Threadripper (zdnet.com) 59
Adrian Kingsley-Hughes, writing for ZDNet: A week later than planned, AMD has released a free driver update for the X399 platform to support NVMe RAID. The driver allows X399 motherboards to combine multiple NVMe SSDs together into a RAID 0, 1, or 10 array, which will greatly enhance disk performance or data integrity. Benchmarking carried out by AMD shows that the platform allows for a throughput of 21.2GB/s from six 512GB Samsung 960 Pro NVMe SSDs in RAID0. But there are a couple of caveats. The first is that X399 motherboards will require BIOS updates before they will support NVMe RAID, so when it will be available for your system will depend on your motherboard vendor. The second -- and perhaps more important -- is that currently the NVMe RAID driver is in beta, and as such things may go wrong, so you might want to test this before rolling it out onto systems you rely on.
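For a rough sanity check on AMD's number: 21.2GB/s across six drives works out to about 3.5GB/s per drive, which is essentially the 960 Pro's rated sequential read speed, so the RAID 0 scaling is close to linear. A back-of-the-envelope sketch (the per-drive spec is Samsung's published figure; the rest is simple arithmetic):

```python
# Rough scaling check on AMD's quoted 21.2 GB/s figure for six drives in RAID 0.
# The ~3.5 GB/s per-drive sequential-read number is Samsung's published 960 Pro spec.
aggregate_gb_s = 21.2
drives = 6
per_drive = aggregate_gb_s / drives
print(f"Per-drive throughput: {per_drive:.2f} GB/s")          # ~3.53 GB/s
print(f"Scaling vs. 3.5 GB/s spec: {per_drive / 3.5:.0%}")    # ~101%
```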
Cool! (Score:4, Insightful)
While it's annoying that I had to push the Threadripper upgrade further down the line, at least AMD is polishing the hell out of it until I'm ready to buy. Or Zen2 will be a thing by then and everything starts anew :D.
Re: (Score:2)
Re: (Score:2)
Threadripper looks nice, but I don't see the need for RAID in consumer desktop (especially gaming) systems in the age of the Samsung 960 Pro. There won't be any benefit in load times for any consumer applications I can think of anyway; video game load times became GPU bottlenecked long ago.
Re: (Score:2)
I'm using RAID1 anyway.
Threadripper isn't particularly useful for gaming anyway. In my case, this will go into my homelab server.
Re:Sweet (Score:4, Interesting)
That's a mighty powerful FREE program, but whew...it eats up resources.
It appears Resolve now also works on Linux for the free version. I'd really like to play with that.
Multi-node Ceph is better; can do updates with reboots (Score:2)
Multi-node Ceph is better: you can do updates, with reboots, with no storage downtime.
Re: (Score:2)
Different solution space. Object Storage should be built on stable filesystems, like ZFS.
Re: (Score:2)
Indeed - and if I didn't use ZFS, I'd use good old MD-RAID. I don't like to be beholden to non-portable RAID, whether it's BIOS-based or a hardware controller.
On the other hand, portability is a bit less of an issue when the drives are bolted down to the motherboard, and my recent Ryzen 7 build doesn't really have enough lanes to fully take advantage of a second M.2 drive. I'd have loved a Threadripper, but that's a bit too luxurious for my budget, and a Ryzen 7 1700 is still a huge upgrade from an FX-6100.
Re: (Score:2)
You can use it as a boot drive. You may be given greater flexibility for the sector sizing; huge sectors are nice for video. Some may find it easier to configure. It may not be superior to do it in UEFI, but neither is it particularly inferior.
Re: (Score:2)
Don't lock yourself into anything. Use the hardware you have to its fullest and have a backup regimen. Hardware goes tits up? Replace and restore.
Intel boards share chipset PCIe lanes with SATA (Score:2)
Intel boards share chipset PCIe lanes with SATA, all downlinked from the CPU over DMI.
Threadripper has NVMe on CPU PCIe lanes.
Re: (Score:2)
It was worse on my motherboard. If you used one of the NVMe connectors you lost 4 SATA ports. If you used two of the NVMe connectors, you lost all 6. With both in RAID, it was also bottlenecked by the DMI->CPU connection and limited to 4GB/s (in theory), and in practice it was closer to 3.4GB/s. I had multiple Samsung 950 PROs in a RAID-0, and it wasn't worth the hassle, so I replaced both with a larger single Samsung 960 PRO.
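For reference, that "roughly 4GB/s in theory" ceiling lines up with what a DMI 3.0 link (electrically a PCIe 3.0 x4 link) can carry once encoding overhead is taken into account; a quick back-of-the-envelope sketch:

```python
# DMI 3.0 is electrically a PCIe 3.0 x4 link: 8 GT/s per lane with 128b/130b encoding.
gt_per_s = 8.0            # giga-transfers per second, per lane
lanes = 4
encoding = 128 / 130      # usable payload fraction in PCIe 3.0
dmi_gb_s = gt_per_s * lanes * encoding / 8   # bits -> bytes
print(f"Theoretical DMI 3.0 ceiling: {dmi_gb_s:.2f} GB/s")    # ~3.94 GB/s
```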
Excellent (Score:2)
I assume they're talking about bootable nvme fakeraid, which I think is underrated. Still, I've had regular trouble getting operating systems to work with it. I don't know why that is. It's enthusiast level stuff, not bleeding edge supercomputer stuff.
Re: (Score:2)
If you have the bandwidth you can do the same with your OS based RAID instead of BIOS based. Whether you load your couple MB of boot code from a RAID or a single drive won't matter.
Re: (Score:2)
You are absolutely right, but isn't this the kind of thing that would bug any enthusiast?
Re: (Score:2)
The trick is Windows (especially modern versions) absolutely hates it whenever it isn't in charge of booting itself.
On any PCIe, CPU or chipset, or just chipset PCIe? (Score:2)
Does this work on any PCIe lanes, CPU or chipset, or just chipset PCIe?
Still better than Intel, which is Intel-disks-only plus DMI-fed chipset lanes only, and also needs a $$ RAID key.
Where is the RAID 5 offload support (Score:2)
Pretty Please
Re: (Score:2)
and wa la -> Raid...
Wa la??
Re: (Score:2)
Re: (Score:2)
Sadly, the first Google hit for "wa la" is "Voilà".
Re: (Score:3)
Hardware RAID is obsolete. Even the big boys like NetApp use SW RAID with SIMD instructions on standard CPUs these days.
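To illustrate why parity RAID maps so naturally onto a general-purpose CPU: the parity block is just a byte-wise XOR across the stripe's data blocks, which is exactly the kind of loop SIMD units chew through (Linux md, for instance, benchmarks its XOR routines at boot). A toy sketch of the idea, not how any particular product implements it:

```python
import os

def xor_parity(blocks):
    """Byte-wise XOR of equal-length blocks; this is RAID 5's parity calculation."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data_blocks = [os.urandom(4096) for _ in range(5)]      # five 4 KiB data blocks
parity = xor_parity(data_blocks)

# A "lost" block is recovered by XOR-ing the parity with the surviving blocks.
recovered = xor_parity(data_blocks[1:] + [parity])
assert recovered == data_blocks[0]
print("block 0 reconstructed from parity and survivors")
```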
Re: (Score:2)
Yeah, this is just for non-technical benchmark lovers who want a nice easy "click to win" button. Still damn impressive though.
A single NVMe SSD is about as fast as DDR2 RAM in terms of raw transfer speed. This RAID array is about as fast as DDR4-3200 RAM, which is the fastest RAM that the CPU will take. Think about that for a moment - terabytes of solid state permanent storage that is as fast as the fastest RAM.
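Putting rough numbers on that comparison (theoretical peaks, not sustained workloads): DDR4-3200 moves 3,200 MT/s over a 64-bit channel, so the quoted array throughput sits a bit under one memory channel's peak, while quad-channel Threadripper has about four times that. A quick sketch of the arithmetic:

```python
# Theoretical peak bandwidth of DDR4-3200 next to AMD's quoted RAID 0 figure.
mt_per_s = 3200              # mega-transfers per second
bytes_per_transfer = 8       # 64-bit memory channel
channel_gb_s = mt_per_s * bytes_per_transfer / 1000       # 25.6 GB/s per channel
print(f"DDR4-3200, one channel:    {channel_gb_s:.1f} GB/s")
print(f"DDR4-3200, four channels:  {channel_gb_s * 4:.1f} GB/s")   # Threadripper is quad-channel
print("Six-drive NVMe RAID 0:     21.2 GB/s")
```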
Re: (Score:2)
Yup, the Threadripper machine I'm planning will definitely have RAIDed NVME SSDs but probably only RAID1. With RAID5 the write performance would suffer (as well as wear) and I don't need the capacity - yet.
Re: (Score:2)
Combining RAID 5 with SSD is ignorant.
Re: (Score:2)
Combining RAID 5 with SSD is ignorant.
Why is that? RAID 5 increases write amplification, but SSDs are always advertised as having plenty of write endurance.
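For context on where that amplification comes from: a RAID 5 small write follows a read-modify-write path, so one logical write becomes two reads (old data, old parity) plus two writes (new data, new parity). A minimal sketch of the parity-update identity behind that:

```python
# RAID 5 small-write ("read-modify-write") parity update:
#   new_parity = old_parity XOR old_data XOR new_data
# so one logical write costs 2 reads (old data, old parity) + 2 writes (new data, new parity).
def update_parity(old_parity, old_data, new_data):
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

old_data   = bytes([0b10101010] * 4)
other_data = bytes([0b11110000] * 4)   # the untouched data block in the stripe
old_parity = bytes(a ^ b for a, b in zip(old_data, other_data))
new_data   = bytes([0b00001111] * 4)

new_parity = update_parity(old_parity, old_data, new_data)
# The shortcut matches recomputing parity over the whole stripe:
assert new_parity == bytes(a ^ b for a, b in zip(new_data, other_data))
print("parity update consistent:", new_parity.hex())
```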
Re: (Score:2)
RAID5 is no good these days, use at least RAIDZ2 if your data is important or triple mirrors.
Re: (Score:2)
Why? What the heck is the use case? Dedicated hardware that can do it fast enough is going to be expensive, complicate the platform, and destroy some of the advantages of NVMe (fewer abstraction layers). The main reason for NVMe in hardware RAID 0 is as a large scratch disk that's cheaper than RAM but a lot faster than thrashing to disk. RAID 1 lets you keep a server running after a disk fails until the maintenance window, when you can bring it down to put in a new disk. However, I don't think a lot of admi
Re: (Score:2)
It's called the CPU. It has had I/O offloading in hardware for a while now this decade, with practically zero latency to process compared to something dangling off the PCI bus.
BIOS fake RAID sucks and needs a driver to hide it (Score:5, Informative)
BIOS fake RAID sucks and needs a driver to hide the disks from the OS.
You are better off, at least on Linux, with OS-level software RAID, or with a hardware RAID card that only shows the OS the RAIDed disk and does not need a driver to hide the backing disks.
Re: (Score:2)
OK, before you go off on the usual rant against "fake RAID", ask yourself what alternative you're advocating. We're talking about NVMe SSD's here - the kind that insert directly into a PCIe or M.2 slot. They are not SSD's with a SAS or SATA interface, so they cannot be attached to a hardware RAID controller.
Personally I'm very happy to have BIOS support for using these devices in a RAID configuration, and it doesn't bother me at all that "OMG - A DRIVER IS REQUIRED!".
Re: (Score:1)
Re: (Score:2)
Nice! Four M.2 SSD's in a PCIe x16 slot [highpoint-tech.com].
Re: (Score:2)
OMG - A DRIVER IS REQUIRED!
Well, without a driver the OS sees it as two disks and not one RAIDed disk.
Re: (Score:1)
1) M.2 can be SATA, and adapters exist. https://www.newegg.com/Product... [newegg.com]
2) I don't see any technical reason a RAID controller can't connect NVMe disks.
Re: (Score:2)
First off, you bottleneck on the SATA portion of the chipset at 6Gbps. SATA over M.2 operates by bridging the SATA controller's data stream over a PCIe link. Secondly, NVMe is an entirely different block layer, which the kernel expects to be directed at the drive controller. A controller in the middle is going to at least double latency and would have to be redesigned for the different protocol. Possible, but still I ask: why?
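On the bandwidth point, the SATA ceiling is easy to put a number on: the 6Gb/s line rate uses 8b/10b encoding, so usable throughput tops out around 600 MB/s, a fraction of what a single NVMe drive's PCIe 3.0 x4 link moves. A quick sketch:

```python
# SATA III runs at a 6 Gb/s line rate with 8b/10b encoding, so only 80% of the
# raw bits are payload; usable bandwidth therefore tops out near 600 MB/s.
sata_line_rate_gbit = 6.0
encoding_efficiency = 8 / 10
sata_payload_mb_s = sata_line_rate_gbit * encoding_efficiency * 1000 / 8
print(f"SATA III usable bandwidth: ~{sata_payload_mb_s:.0f} MB/s")   # ~600 MB/s
```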
Re: (Score:1)
I was just responding to the claim that there's no M.2 RAID controller. I agree, that would be a silly thing to do.
Re: (Score:2)
You're wrong and 10 years out of date.
It is not fake RAID, since the CPU has been doing the I/O since 2009, with hardly any latency at all compared to going through a bus and being limited by its speed and the slow latency of the overhead.
CPU I/O RAID is superior in almost every way, with the exception of battery backup in case of a power failure. It isn't 2003 anymore.
"Driver" is such a weird name when in fact... (Score:2)
And yes, of course you can boot from a RAID configured via "mdadm", if that is what you really need.
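For anyone curious what that looks like in practice, here is a minimal sketch of building such an array with mdadm on Linux, driven from Python purely for illustration; the device paths are hypothetical, the config file location varies by distro, and it needs root:

```python
# Minimal sketch: create a RAID 1 with Linux md (mdadm) instead of firmware fake RAID.
# Assumes mdadm is installed, the script runs as root, and the two NVMe namespaces
# below are stand-in paths; substitute your own, empty devices.
import subprocess

devices = ["/dev/nvme0n1", "/dev/nvme1n1"]   # hypothetical member devices

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=1", f"--raid-devices={len(devices)}", *devices],
    check=True,
)

# Persist the array definition so the initramfs can assemble it at boot
# (path shown is the Debian/Ubuntu convention; other distros use /etc/mdadm.conf).
scan = subprocess.run(["mdadm", "--detail", "--scan"],
                      capture_output=True, text=True, check=True)
with open("/etc/mdadm/mdadm.conf", "a") as conf:
    conf.write(scan.stdout)
```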
Re: (Score:2)
Intel's latest RAID solution is "Virtual RAID on Chip" - it's basically a software solution assisted by a few tweaks to the PCI lanes. To take full advantage, you have to use a separate card to connect the NVMe drives to actual CPU PCIe lanes, since the motherboard slots are all connected through DMI, and you are STILL hobbled if you don't add the expensive upgrade "key".
Intel's VROC $$$$ scam vs. AMD's Free solution (Score:2)
With performance being equal, no special cards needed on X399 motherboards to use it (since Intel's X299 onboard M.2 slots connect through DMI/chipset PCIe lanes), and no expensive VROC "key" to enable features, this is a no-brainer.
Goodbye Intel.
Hardware RAID (Score:2)
I have trouble trusting hardware RAID. Like OS-based RAID, it uses software (firmware to be precise) that can have bugs, but it is much less tested than OS-based RAID.
Moreover, disaster recovery requires having the same hardware/firmware ready for replacement; otherwise you risk your hardware RAID being inaccessible from a replacement machine.
/. ads (Score:2)
RAID-0 isn't RAID, because there's no redundancy.
SSDs aren't really inexpensive.
So what are you left with? ADs! Yes, thanks for the ads /.
Incidentally, why would anybody want to use something labeled as "beta" for RAID-0?