Western Digital Plots a Path To 140 TB Hard Drives Using Vertical Lasers and 14-Platter Designs (tomshardware.com)
Western Digital this week laid out a roadmap that stretches its 3.5-inch hard drive platform to 14 platters and pairs it with a new vertical-emitting laser for heat-assisted magnetic recording, a combination the company says will push individual drive capacities beyond 140 TB in the 2030s.
The vertical laser, developed over six years and already working in WD's labs, emits light straight down onto the disk rather than from the edge, delivering more thermal energy while occupying less vertical space -- enabling areal densities up to 10 TB per platter, up from today's 4 TB, and room for additional platters in the same enclosure. WD's first commercial HAMR drives arrive in late 2026 at 40-44 TB on an 11-platter design, ramping into volume production in 2027. A 12-platter platform follows in 2028 at 60 TB, and WD expects to hit 100 TB by around 2030.
Taking a while (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: Taking a while (Score:2)
HAMR is already a thing, in all of the cheap high capacity disks. What's new is the perpendicular laser which allows heating a smaller spot, and more platters.
Re: (Score:2)
I feel like that's too big (Score:5, Funny)
that's going to take _forever_ to scan for corruption
Re: (Score:1)
scan: scrub in progress since Tue Jan 1 12:34:56 2030
1.02T scanned out of 100.0T at 100M/s, 7y134d11h45m to go
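For what it's worth, the scrub time in the joke output is off by orders of magnitude; at the 100 MB/s rate it shows, scanning 100 TB takes days, not years. A quick sketch (figures taken from the mock output above):

```python
# Scrub time for a 100 TB pool at a sustained 100 MB/s.
# (Both figures come from the mock zpool output above.)
drive_bytes = 100e12   # 100 TB, decimal as drive vendors count
scan_rate = 100e6      # 100 MB/s sustained

days = drive_bytes / scan_rate / 86400
print(f"{days:.1f} days")  # about 11.6 days, not 7+ years
```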
Re: (Score:3)
The purpose of this type of storage is for use cases where the touch rate is very low. For data that is updated/accessed once a year in very big chunks, these work.
Re:I feel like that's too big (Score:4, Funny)
You really should do the math. It's about 11 days, which is still a long time. The purpose of this type of storage is for use cases where the touch rate is very low. For data that is updated/accessed once a year in very big chunks, these work.
I think he meant the time it would take to scan one such disk full of justice department evidence for cases of corruption.
Re:I feel like that's too big (Score:5, Interesting)
Well let's see...
Today they offer 32TB drives on SATA 6Gbps... If that is 'acceptable', then reading the entire drive takes at least 18 hours or so in theory. On the same interface, a 140TB drive would be limited to roughly 78 hours...
But wait, there's been talk about spinning platters being upgraded to NVMe interfaces, largely because "why accommodate spinning drives with a separate interface". If this comes in, and drives could credibly ship with NVMe over PCIe 6 in that timeframe, then the total drive read time could be reduced to about 90 minutes, in theory.
So in theory, such a drive with a credible storage interface could push this into a more reasonable time period. Historically, spinning platters' seek performance made NVMe overkill, but streaming performance with that many platters and that density may remedy the 'drive too big' problem. Of course, in the *consumer* market this means that systems would have to start accommodating that sort of connectivity... Or perhaps in that timeframe the drives would just use USB to connect, and could credibly connect at 120Gbps, which would mean about 180 minutes to read the full drive...
In short, it's time to move to PCIe connectivity to tame these capacities...
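The interface arithmetic above can be sketched roughly as follows; the throughput figures are assumed round numbers for sustained sequential reads, ignoring protocol overhead and seek time:

```python
# Back-of-the-envelope full-drive read time for a 140 TB drive on
# different interfaces. Rates are illustrative assumptions, not specs.
capacity_bytes = 140e12

interfaces = {
    "SATA 6Gbps (~600 MB/s)": 600e6,
    "SAS 24Gbps (~2.4 GB/s)": 2.4e9,
    "PCIe 6.0 x4 (~25 GB/s)": 25e9,
}

for name, rate in interfaces.items():
    hours = capacity_bytes / rate / 3600
    print(f"{name}: {hours:.1f} h")
```

At the assumed SATA rate this lands in the mid-60-hour range; on an assumed PCIe 6.0 x4 link it drops to roughly an hour and a half, consistent with the "about 90 minutes" estimate above.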
Re: (Score:2)
Well they assert 140TB in 14 platters, so 10 TB/platter suggests that it would be about a 3x increase in density over current state of the art
Further, there's the number of platters: the array of drive heads could read from 14 platters concurrently over a single drive interface, if you could get the drive to read everything back in whatever order is best for it.
So if your number is accurate for current platters, then for the hypothetical indicated, you would at least be in PCIe 5 x4 territory, for streaming perf
Re: (Score:2)
True, the linear versus track density isn't known, and I've been a bit optimistic on that....
Of course, now they have dual-actuator drives, though currently that just splits the lower and upper platters to be served by independent actuators. One could imagine a quad-actuator design, with two more actuators tracking the same platters to double the potential throughput, as well as investment in making the heads capable of concurrent operation.
While my example numbers may be optimistic by pretending the interface technology wou
Re: (Score:2)
Further, there's the number of platters: the array of drive heads could read from 14 platters concurrently over a single drive interface, if you could get the drive to read everything back in whatever order is best for it.
Even though all heads move together as part of a single head stack assembly, only one head seeks and reads at one time. We theoretically speak about disk cylinders, but the tracks on each platter are not exactly lined up in perfect cylinders. So, reading from the next platter still requires a short seek.
Re: (Score:2)
Even though all heads move together as part of a single head stack assembly, only one head seeks and reads at one time.
Anyone know if there have been any attempts, production or otherwise, of having multiple actuators that can work simultaneously and independently? Or why we don't see more of that?
Re: (Score:1)
Historically some hard drives did have multiple arms to speed up access. Taken to its logical conclusion, very fast drives were "fixed head": each cylinder had its own set of read/write heads, so no arm movement was necessary. Maybe that'll make a comeback.
Social changes (Score:4, Interesting)
I was surprised to discover that you can purchase a 30TB hard drive for about half a grand.
That's 30,000 gigabytes, or at roughly 1GB per hour, about 30,000 hours of recorded video. How much of a person's life could be recorded on this?
There are about 8,800 hours in a year, but you're asleep for a third of that, so call it 6,000 waking hours. You can get 5 years of continuous video of your life on a device the size of a paperback book. If you compress the video of your mundane activities, such as driving to/from work or waiting in line (record single frames every second during these times, or drop to lower resolution with higher-resolution key frames), you might get away with 4,000 hours of continuous video per year. Probably less.
So this new disk could conceivably make a continuous record of 30 years of someone's life: all the interactions, all the people, all the information you see, all the places you've been.
(And probably more, probably more like 50 years. And if cloud storage is easily available everywhere, you wouldn't even need the appliance on you.)
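The arithmetic above can be sketched as follows; the big assumption is the ~1 GB/hour bitrate for heavily compressed, low-resolution video:

```python
# Lifetime-recording arithmetic for the 140 TB drive, assuming
# ~1 GB/hour of video, so capacity in GB equals hours of footage.
drive_gb = 140_000
waking_hours_per_year = 6_000      # ~16 waking hours/day
compressed_hours_per_year = 4_000  # after skipping mundane stretches

years_raw = drive_gb / waking_hours_per_year
years_compressed = drive_gb / compressed_hours_per_year
print(f"{years_raw:.0f} years raw, {years_compressed:.0f} years compressed")
```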
This will inevitably lead to some interesting social changes.
For example, 50 years of video with an AI assistant to search through it and answer your questions ("have I met that person before?") would be quite useful.
Also, the AI could train itself on your video and behaviour. The AI could then simulate you once you're gone.
Lots of possibilities here...
Re: (Score:2)
Ya know, that piqued my interest in something related to recording every minute of your waking life in video, and I think this is more reasonable, useful, and feasible today.
1. Record audio 24x7. I'm not even going to bother running the numbers - it's a hell of a lot smaller, and the next step means we don't need to calculate out that far.
2. Transcribe speech-to-text.
That's it. That could be done on a nearly any phone today. Could chunk the audio into 1hr segments, or daily, or whatever works for you. I'd p
Re: (Score:2)
1GB/hour is a very low quality recording; even streaming services use several GB per hour. If you want to go 4K, UHD Blu-ray discs run about 100GB for 2-3 hours. So your 30TB drive now only holds maybe 900 hours of recording at UHD Blu-ray quality.
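The bitrate sensitivity can be sketched like this; the ~40 GB/h figure approximates UHD Blu-ray (~100 GB per 2.5-hour film), and the others are rough assumptions:

```python
# Recording hours on a 30 TB drive at different assumed bitrates.
drive_gb = 30_000

gb_per_hour = {
    "low-quality (1 GB/h)": 1,
    "streaming 1080p (~3 GB/h)": 3,
    "UHD Blu-ray (~40 GB/h)": 40,
}

for name, rate in gb_per_hour.items():
    print(f"{name}: {drive_gb // rate:,} hours")
```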
Re: (Score:2)
My 28TB drive is getting full. Laserdisc raw captures are in the range of 150-300GB compressed, per disc. VHS tapes are about 200GB/hour with HiFi audio.
Re: (Score:2)
Use a modern filesystem like BTRFS or ZFS that takes care of that while mounted.
Re: (Score:2)
pool: vpool
state: ONLINE
scan: scrub repaired 0 in 213503982334601 days 06:58:54 with 0 errors on Wed Aug 2 09:21:17 2023
config:
NAME STATE READ WRITE CKSUM
vpool ONLINE 0 0 0
c5t6d0 ONLINE 0 0 0 (trimming)
c5t7d0 ONLINE 0 0 0 (trimming)
Good point -- a focus on errors? (Score:2)
That brings up a good point. Drives that large need an interface that is at least 2-10 times as fast as what we have now, or rebuild times can stretch into months. Or we need to move to wider parity RAID (I've used RAID-Z3 before on some vdevs for backup destinations). Or we start moving to MinIO-like systems where each server has eight drives, and it takes knocking a number of servers and drives offline before data is unavailable. Maybe even dedicated compute clusters that take six drives, use t
Re: (Score:2)
And require *very* small blocks in the scandisk interface.
Laser Lifetime (Score:5, Insightful)
Re: (Score:3)
Will you be able to write the whole disk before it fails ?
Re: Laser Lifetime (Score:2)
Well, it's not a Seagate... so probably
Re: (Score:2)
Easily many times over. I see you've not worked in a data intensive environment before. Try dealing with something like raw camera footage and you'll be easily working with many MANY TB of data for a single project, often in excess of 1TB of footage per hour.
As it stands video editing rigs often have massive RAID arrays of very large drives for this purpose. A friend of mine is in video production and his workstation already has more than 100TB of spinning disk space (and 10TB of SSDs) just to get him throu
Re: (Score:2)
Reading should be no problem. You heat the surface to be able to write narrower tracks, but you can read them as usual. The only question is whether the drive will then behave like a printer, telling you "I have no yellow, I cannot print your black and white document!"
For improved data loss (Score:2)
I welcome this new improved data loss mechanism; now instead of losing a terabyte or two, you can lose 140TB all at once.
And that's because backing up a 140TB drive is a bitch. Backing up a datacenter full of them is worse.
Re: (Score:3)
You back it up to another 140TB drive and hope both don't fail. And then back it up to a third 140TB drive just in case.
Re: (Score:2)
And a fourth copy to put in the fireproof safe.
And a fifth copy in case there's a fire in the fireproof safe.
Re: (Score:2)
No, no, you go to erasure coding across disks, then you're down to well under two copies. Unless you want to be able to read it immediately.
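The storage-overhead point works out like this; a minimal sketch, with the k+m split values chosen purely for illustration:

```python
# Storage overhead of erasure coding versus whole-drive replication.
# With k data shards and m parity shards, any k of the k+m survive,
# so the effective overhead is (k+m)/k "copies" -- well under the
# 2x of mirroring (and the 3x+ of the multi-copy scheme above).

def ec_overhead(k: int, m: int) -> float:
    """Effective number of copies stored for a k+m erasure code."""
    return (k + m) / k

for k, m in [(4, 2), (10, 4), (12, 3)]:
    print(f"{k}+{m}: {ec_overhead(k, m):.2f} copies, "
          f"tolerates {m} drive failures")
```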
Re: (Score:2)
You back it up to another 140TB drive and hope both don't fail. And then back it up to a third 140TB drive just in case.
MUST have an offsite backup as well!
Speedtest.net says I'm getting 11.39Mbps right now. So that should only take... 27,314 hours to upload (or over 37 months)!
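That arithmetic does check out; a quick sketch using the figures from the comment above:

```python
# Upload time for a full 140 TB drive over an 11.39 Mbps uplink.
drive_bits = 140e12 * 8   # 140 TB in bits
uplink_bps = 11.39e6      # 11.39 Mbps measured

hours = drive_bits / uplink_bps / 3600
months = hours / (30 * 24)
print(f"{hours:,.0f} hours (~{months:.0f} months)")  # ~27,314 hours
```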
costs (Score:1)
Spinning rust storage is still really good for archival. Although syncing to a new array at SATA3 speed
We'll see about that (Score:2)
I like my good old reliable HDDs, but HDD technology has seemed pretty much stagnant for more than a decade. Today a 2TB HDD costs pretty much the same as a 2010 2TB HDD: same speed, same everything.
Meanwhile I've heard a lot of promises, but none of them hit the shelves.
Plus, SSDs got more robust and were getting close to the same $/TB as HDDs; this trend only stopped due to the recent RAM/flash shortages (thanks, AI big techs).
I'd love to see HDDs to make such a comeback, but I'm not holding my breath.
Re: (Score:3)
HDDs have also become much more difficult to find at reasonable prices lately. The one I bought in December is now nearly twice as expensive.
I also ordered a decently-priced USB external drive from Amazon in January which didn't say anything about being out-of-stock but is now saying to expect delivery in March. I'm guessing they'll cancel the order at some point because it will probably cost them more than I paid.
Re: We'll see about that (Score:2)
Lack of demand is going to limit the research and technology advancement.
We could improve HDDs if someone wants to put up the money and wait a long while. I guess blame capitalism, in that it doesn't solve everything, at least not on consumer demand alone.
Re: (Score:2)
Yeah it has, but... (Score:2)
At some point it can only be so cheap. The cost of churning out boards, packaging the things, and handling warranties and such puts a hard floor on the cost. They're making their billions in the same way Change Bank makes theirs.
Re: (Score:3)
You're correct. Who wants a 2TB HDD to be faster and cheaper when you can buy an SSD that's 100x faster? The technology is for bulk storage and that's what they've been investing in. You couldn't get a 22TB HDD in 2010 for a reasonable price. You can now. HDDs are never going to make a comeback for primary storage, but for bulk storage they're king.
Re: We'll see about that (Score:2)
Frickin' lasers on their heads?! (Score:2)
Seemed stupid for sharks, but maybe it's better for hard drives?
We need a new form factor for HDDs... (Score:4, Interesting)
One thing I'm wondering about is changing form factors. For SSDs, the current NVMe form factors and such make sense. However, HDDs need a new form factor. 2.5" HDDs are pretty much abandoned, with the last capacity increase, to 6 TB, happening a year or two ago. 3.5" HDDs really need more height to allow more platters to be stacked, or more room to place them.
Maybe we need to bring back the 5.25" full-height form factor? It obviously would not work with 1-3 rack-unit systems, so the drives would have to go in an external rack and be hooked up via SAS, FC, or some other protocol. Or maybe start completely clean with a format that is future-proof and can grow in whatever dimensions are needed.
Re: (Score:2)
There are other well-established form factors [wikipedia.org]
Re: (Score:2)
Will platters get so thin they can flex?? since they get more solid spinning...if they are larger in diameter... could we get a variation on the 5.25 floppy? lasers?
So... a huge stack of "floppy" platters and using lasers to write like a CDR?
Just in time for RAID7? (Score:2)
RAID5 became dangerous because of the size of disks. RAID6 is pretty close to having the same issue. I wonder how close to reality a RAID7 is?
Re: (Score:2)
Re: (Score:2)
No, I don't mean some proprietary system built into a single file system, I mean RAID7. There is no RAID7 at this point, the point I'm making is that it's clear we'll need a successor to RAID6. If ZFS has some non-standard way to do it, great, but not everyone uses ZFS, and even among those that do, not everyone wants their file system to be implementing RAID stacks.
How Thick? (Score:3)
How thick is that disk with 14 platters?
Impact on cost of smaller drives (Score:2)
Re: (Score:3)
If this is going to lower the price of 5 TB drives, bring it on: I have a good use for such drives, but not really so for 100+ TB drives.
No, it will not. A drive housing or the pivot arms of a R/W head do not care whether the drive platters are high-coercivity exotic materials, or whether the heads are ePMR or HAMR.
More production lines that made housings and pivots and motors for 5TB drives will be converted to handle 100TB ones, and more of the remaining normal housings and pivots and motors will be used for 15TB-and-up drives.
Expect low capacity drives to slowly dry up.
Plan accordingly
Um, let someone else test them (Score:2)
Re: (Score:2)
Um, let someone else test them. They don't call it "bleeding edge" for nothing.
They are being tested as we speak. Engineering samples are being sent to the main hyperscalers (Google, Microsoft, Amazon, Facebook), smaller hyperscalers (Oracle, IBM, OVH, Deutsche Telekom, Telefonica-Movistar), and the AI datacenter crowd for pre-validation and feedback.
These drives are aimed squarely at hyperscalers first and foremost; we consumers are an afterthought. We will see consumer versions of these drives long after the hyperscalers have adopted them en masse.
I'd settle for more affordable 8TB drives (Score:1)
Re: (Score:2)
You might have better results with a burn-in test: dd writing zeros to the entire drive, followed by a SMART long test. Helps to weed out shipping damage.
Re: (Score:2)
What he said!
Seriously, my drives have a strong tendency to do one of two things: Die within a few weeks or last until it doesn't make sense to keep them in the drive enclosure because they are so small (relatively)!
I'd just be happy with MTBF high... (Score:2)
I'd be happy with an MTBF high enough on these drives that RAID isn't needed, or at minimum only "first gen" RAID, like single-parity RAID 5, RAID 1, etc. No need for 3-4 levels of parity, or for splitting drives into sections, with RAID across each section and each set of sections.