Data Storage

Western Digital Plots a Path To 140 TB Hard Drives Using Vertical Lasers and 14-Platter Designs (tomshardware.com) 62

Western Digital this week laid out a roadmap that stretches its 3.5-inch hard drive platform to 14 platters and pairs it with a new vertical-emitting laser for heat-assisted magnetic recording, a combination the company says will push individual drive capacities beyond 140 TB in the 2030s.

The vertical laser, developed over six years and already working in WD's labs, emits light straight down onto the disk rather than from the edge, delivering more thermal energy while occupying less vertical space -- enabling areal densities up to 10 TB per platter, up from today's 4 TB, and room for additional platters in the same enclosure. WD's first commercial HAMR drives arrive in late 2026 at 40-44 TB on an 11-platter design, ramping into volume production in 2027. A 12-platter platform follows in 2028 at 60 TB, and WD expects to hit 100 TB by around 2030.
  • Wasn't Seagate supposed to put these out pre-2010?
  • by suutar ( 1860506 ) on Thursday February 05, 2026 @01:20PM (#65970598)

    that's going to take _forever_ to scan for corruption

    • by Anonymous Coward

      scan: scrub in progress since Tue Jan 1 12:34:56 2030
      1.02T scanned out of 100.0T at 100M/s, 7y134d11h45m to go

      • You really should do the math. 11 days, which is still a long time.

        The purpose of this type of storage is for use cases where the touch rate is very long. For data that is updated/accessed once a year in very big chunks, these work.
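For anyone who wants to check that figure, the arithmetic is simple (a sketch in Python, assuming the 100 TB capacity and 100 MB/s scrub rate quoted in the joke above):

```python
# Scrub-time sanity check: 100 TB at a sustained 100 MB/s.
TB = 10**12          # drive vendors use decimal terabytes
capacity_bytes = 100 * TB
rate_bytes_per_s = 100 * 10**6

seconds = capacity_bytes / rate_bytes_per_s   # 1,000,000 s
days = seconds / 86_400

print(f"{days:.1f} days")  # → 11.6 days, not 7+ years
```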
        • by ffkom ( 3519199 ) on Thursday February 05, 2026 @03:59PM (#65970906)

          You really should do the math. 11 days, which is still a long time. The purpose of this type of storage is for use cases where the touch rate is very long. For data that is updated/accessed once a year in very big chunks, these work.

          I think he meant the time it would take to scan one such disk full of justice department evidence for cases of corruption.

    • by Junta ( 36770 ) on Thursday February 05, 2026 @02:17PM (#65970724)

      Well let's see...

      Today they offer 32TB drives on SATA 6Gbps... If that is 'acceptable', then reading the entire drive takes at least 18 hours or so in theory. On the same interface, a 140TB drive would take at least 78 hours...

      But wait, there's been talk of moving spinning platters to NVMe interfaces, largely because "why accommodate spinning drives with a separate interface?" If this comes in, and drives could credibly have NVMe with PCIe 6 in that timeframe, then the total drive read time could be reduced to about 90 minutes, in theory.

      So in theory, such a drive with a credible storage interface could push this into a more reasonable time period. Historically, spinning platters' seek performance made NVMe overkill, but streaming performance with that many platters at that density may remedy the 'drive too big' problem. Of course, in the *consumer* market this means that systems would have to start accommodating that sort of connectivity... Or, in that timeframe, perhaps the drives would just use USB to connect, and could credibly connect at 120Gbps, which would mean about 180 minutes to read the full drive...

      In short, it's time to move to PCIe connectivity to tame these capacities...
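The back-of-envelope numbers above can be sketched like this (Python; the interface throughputs are rough assumed effective rates, not guaranteed sustained figures, and real media throughput would likely be the bottleneck):

```python
# Full-drive read time for a hypothetical 140 TB drive at various
# assumed interface speeds (effective rates, not raw line rates).
TB = 10**12

def hours_to_read(capacity_tb, rate_bytes_per_s):
    return capacity_tb * TB / rate_bytes_per_s / 3600

interfaces = {
    "SATA 6 Gb/s (~500 MB/s effective)": 500e6,
    "NVMe PCIe 6.0 x4 (~25 GB/s, optimistic)": 25e9,
    "USB at 120 Gb/s (~12 GB/s effective, optimistic)": 12e9,
}

for name, rate in interfaces.items():
    print(f"{name}: {hours_to_read(140, rate):.1f} h")
# ≈ 77.8 h, 1.6 h, and 3.2 h respectively
```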

      • Social changes (Score:4, Interesting)

        by Okian Warrior ( 537106 ) on Thursday February 05, 2026 @03:01PM (#65970800) Homepage Journal

        I was surprised to discover that you can purchase a 30TB hard drive for about half a grand.

        That's 30,000 gigabytes, or about 30,000 hours of recorded video. How much of a person's life could be recorded on this?

        There are about 8,760 hours in a year, but you're asleep for a third of that, so call it 6,000 hours. You can get 5 years of continuous video of your life on a device the size of a paperback book. If you compress the video of your mundane activities, such as driving to/from work or waiting in line (only record single frames each second during those times, or use lower resolution with higher-resolution key frames), you might get away with 4,000 hours of continuous video per year. Probably less.
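A quick sketch of that arithmetic (Python; assumes decimal units, 1 GB/hour video, and 16 waking hours per day):

```python
# Back-of-envelope lifelogging math: hours of video on a 30 TB drive.
TB = 10**12
GB = 10**9

drive_tb = 30
gb_per_hour = 1                      # a very low bitrate assumption
waking_hours_per_year = 365 * 16     # ~5,840 waking hours

total_hours = drive_tb * TB / (gb_per_hour * GB)
years = total_hours / waking_hours_per_year
print(f"{total_hours:,.0f} h ≈ {years:.1f} years")  # 30,000 h ≈ 5.1 years
```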

        So this new disk could conceivably make a continuous record of 30 years of someone's life - all the interactions, all the people, all the information you see, all the places you've been.

        (And probably more, probably more like 50 years. And if cloud storage is easily available everywhere, you wouldn't even need the appliance on you.)

        This will inevitably lead to some interesting social changes.

        For example, 50 Years of video using an AI assistant to search through and answer your questions (have I met that person before?) would be quite useful.

        Also, the AI could train itself on your video and behaviour. The AI could then simulate you once you're gone.

        Lots of possibilities here...

        • by unrtst ( 777550 )

          Ya know, that piqued my interest in something related to recording every minute of your waking life in video, and I think this is more reasonable, useful, and feasible today.

          1. Record audio 24x7. I'm not even going to bother running the numbers - it's a hell of a lot smaller, and the next step means we don't need to calculate out that far.

          2. Transcribe speech-to-text.

          That's it. That could be done on a nearly any phone today. Could chunk the audio into 1hr segments, or daily, or whatever works for you. I'd p

        • by tlhIngan ( 30335 )

          That's 30,000 gigabytes, or about 30,000 hours of recorded video. How much of a person's life could be recorded on this?

          1GB/hour is a very low quality recording; even streaming services use several GB per hour. If you want to go 4K, UHD Blu-Ray discs pack about 2-3 hours into 100GB, so your 30TB drive now only holds maybe 900 hours of recording at 4K UHD Blu-Ray quality.
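The sensitivity to bitrate is easy to tabulate (a sketch; the GB/hour figures are rough assumptions, not measurements):

```python
# How assumed bitrate changes the "hours on a 30 TB drive" figure.
TB, GB = 10**12, 10**9

def hours_on_drive(drive_tb, gb_per_hour):
    return drive_tb * TB / (gb_per_hour * GB)

print(hours_on_drive(30, 1))    # 30,000 h at 1 GB/h (very low quality)
print(hours_on_drive(30, 5))    # 6,000 h at a rough streaming bitrate
print(hours_on_drive(30, 33))   # ~900 h at a UHD Blu-Ray-like ~33 GB/h
```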

        • by AmiMoJo ( 196126 )

          My 28TB drive is getting full. Laserdisc raw captures are in the range of 150-300GB compressed, per disc. VHS tapes are about 200GB/hour with HiFi audio.

    • by sjames ( 1099 )

      Use a modern filesystem like BTRFS or ZFS that takes care of that while mounted.

    • root@xxxxx:/usr # zpool status vpool
      pool: vpool
      state: ONLINE
      scan: scrub repaired 0 in 213503982334601 days 06:58:54 with 0 errors on Wed Aug 2 09:21:17 2023
      config:

      NAME STATE READ WRITE CKSUM
      vpool ONLINE 0 0 0
      c5t6d0 ONLINE 0 0 0 (trimming)
      c5t7d0 ONLINE 0 0 0 (trimming)
    • That brings up a good point. Drives that large need an interface that is at least 2-10 times as fast as what we have now, or otherwise rebuild times can go into months. Or we need to move to wider parity RAID (I've used RAID-Z3 before on some vdevs for backup destinations.) Or, we start moving to MinIO-like systems where each server has eight drives, and it takes knocking a numbers of servers and drives offline before data is unavailable. Maybe even dedicated compute clusters that take six drives, use t

    • by allo ( 1728082 )

      And require *very* small blocks in the scandisk interface.

  • Laser Lifetime (Score:5, Insightful)

    by weirdow ( 9298 ) on Thursday February 05, 2026 @01:25PM (#65970612) Homepage
    One has to wonder how long the lasers will last, and when they finally fail, will the drive still be usable as a Read Only drive.
    • Will you be able to write the whole disk before it fails?

      • Well, it's not a Seagate... so probably

      • Easily many times over. I see you've not worked in a data intensive environment before. Try dealing with something like raw camera footage and you'll be easily working with many MANY TB of data for a single project, often in excess of 1TB of footage per hour.

        As it stands video editing rigs often have massive RAID arrays of very large drives for this purpose. A friend of mine is in video production and his workstation already has more than 100TB of spinning disk space (and 10TB of SSDs) just to get him throu

    • by allo ( 1728082 )

      Reading should be no problem. You heat the surface to be able to write narrower tracks, but you can read them as usual. The only question is whether the drive will then behave like a printer telling you "I have no yellow, I cannot print your black and white document!"

  • I welcome this new improved data loss mechanism; now instead of losing a terabyte or two, you can lose 140TB all at once.

    And that's because backing up a 140TB drive is a bitch. Backing up a datacenter full of them is worse.

    • by 0123456 ( 636235 )

      You back it up to another 140TB drive and hope both don't fail. And then back it up to a third 140TB drive just in case.

      • And a fourth copy to put in the fireproof safe.

        And a fifth copy in case there's a fire in the fireproof safe.

        • No, no, you go to erasure coding across disks, then you're down to well under two copies. Unless you want to be able to read it immediately.
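For the curious, the overhead arithmetic for k-data/m-parity erasure coding goes like this (a sketch; the 10+4 and 8+3 layouts are just illustrative choices):

```python
# Storage overhead of k+m erasure coding vs. full replication.
# A 10+4 layout survives any 4 lost shards at 1.4x overhead,
# versus 3.0x for keeping three full copies.
def overhead(k, m):
    """Bytes stored per byte of user data for k data + m parity shards."""
    return (k + m) / k

print(overhead(10, 4))  # 1.4
print(overhead(8, 3))   # 1.375
```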

      • by unrtst ( 777550 )

        You back it up to another 140TB drive and hope both don't fail. And then back it up to a third 140TB drive just in case.

        MUST have an offsite backup as well!

        Speedtest.net says I'm getting 11.39Mbps right now. So that should only take... 27,314 hours to upload (or over 37 months)!
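The upload arithmetic checks out (a sketch in Python, using the quoted 11.39 Mbps uplink and a 140 TB drive):

```python
# Offsite-backup upload time for 140 TB at a consumer uplink.
TB = 10**12
bits = 140 * TB * 8
uplink_bps = 11.39e6          # the Speedtest figure quoted above

hours = bits / uplink_bps / 3600
months = hours / (30 * 24)
print(f"{hours:,.0f} h ≈ {months:.0f} months")  # ~27,314 h ≈ 38 months
```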

  • The consumer versions of the 30TB finally came out halfway through last year. I started to see prices going up and bit the bullet in early December. I could buy a set of 4x30TB for $2,500 at the beginning of December (79TB usable with RAIDZ/ZFS). I'm so glad I did, because those jumped to over $950 each in January! ... although it looks like they're back down to ~$700 now on Newegg? .. and they have 32TB too?!

    Spinning rust storage is still really good for archival. Although syncing to a new array at SATA3 speed
  • I like my good old reliable HDDs, but HDD technology has seemed pretty much stagnant for more than a decade. Today's 2TB HDD costs pretty much the same as a 2010 2TB HDD: same speed, same everything.
    Meanwhile I've heard a lot of promises, but none of them hit the shelves.
    Plus, SSDs got more robust and were getting close to the same $/TB as HDDs; that trend only stopped due to the recent RAM/flash shortages (thanks, AI big techs).
    I'd love to see HDDs make such a comeback, but I'm not holding my breath.

    • by 0123456 ( 636235 )

      HDDs have also become much more difficult to find at reasonable prices lately. The one I bought in December is now nearly twice as expensive.

      I also ordered a decently-priced USB external drive from Amazon in January which didn't say anything about being out-of-stock but is now saying to expect delivery in March. I'm guessing they'll cancel the order at some point because it will probably cost them more than I paid.

    • Lack of demand is going to limit research and technology advancement.
      We could improve HDDs if someone wants to put up the money and wait a long while. I guess blame capitalism, in that it doesn't solve everything, at least not on consumer demand alone.

    • SSD tech stagnated just like HDD. It never lived up to its hype.
      • At some point it can only be so cheap. The cost of churning out boards, packaging the things, and handling warranties and such puts a hard floor on the cost. They're making their billions in the same way Change Bank makes theirs.

    • You're correct. Who wants a 2TB HDD to be faster and cheaper when you can buy an SSD that's 100x faster? The technology is for bulk storage and that's what they've been investing in. You couldn't get a 22TB HDD in 2010 for a reasonable price. You can now. HDDs are never going to make a comeback for primary storage, but for bulk storage they're king.

        • You'd buy, say, an 8TB spinning hard drive for backups if it is cheaper than a 2TB SSD. (On Amazon right now they are very close in price.) 100 times faster is pointless if you don't need it.
  • Seemed stupid for sharks, but maybe it's better for hard drives?

  • by ctilsie242 ( 4841247 ) on Thursday February 05, 2026 @01:53PM (#65970676)

    One thing I'm wondering about is changing form factors. For SSDs, the current NVMe form factors and such make sense. However, HDDs need a new form factor. 2.5" HDDs are pretty much abandoned, with the last capacity increase, to 6 TB, happening a year or two ago. 3.5" HDDs really need more height to allow more disks to be stacked or more room to place platters.

    Maybe we need to bring back the 5.25" full height form factor? It obviously would not work with the 1-3 rack unit systems, so the drives would have to go in an external rack and be hooked up via SAS, FC, or some other protocol. Or maybe start clean completely and have a format that is future-resistant and can grow in whatever dimensions are needed.

  • RAID5 became dangerous because of the size of disks. RAID6 is pretty close to having the same issue. I wonder how close to reality a RAID7 is?

    • I think you mean triple parity, which is not RAID7, but there is RAIDZ3 in ZFS. Typically, once you get to a certain amount of storage, you are better off using an object store with 3 copies in multiple physical locations (validated with checksums) rather than counting on one machine to store everything (and not having it all die at once).
      • No, I don't mean some proprietary system built into a single file system, I mean RAID7. There is no RAID7 at this point; the point I'm making is that it's clear we'll need a successor to RAID6. If ZFS has some non-standard way to do it, great, but not everyone uses ZFS, and even among those who do, not everyone wants their file system to be implementing RAID stacks.
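To see why extra parity matters as rebuild windows stretch, here is a crude sketch: model each surviving drive's failure during the rebuild as independent with some probability (the 2% per-rebuild failure rate and 12-drive array are hypothetical numbers, and real failures are correlated, so treat this as optimistic):

```python
# Crude model: chance of losing data during a rebuild window,
# assuming independent drive failures (optimistic in practice).
from math import comb

def p_loss(n_remaining, p_fail_during_rebuild, parity_left):
    """P(more than `parity_left` of the remaining drives fail)."""
    p = p_fail_during_rebuild
    return sum(comb(n_remaining, k) * p**k * (1 - p)**(n_remaining - k)
               for k in range(parity_left + 1, n_remaining + 1))

# 12-drive array, one drive already dead, 2% chance each survivor
# fails during a long rebuild (hypothetical figures).
print(p_loss(11, 0.02, 1))  # one parity left (dual parity after one loss)
print(p_loss(11, 0.02, 2))  # two parities left (triple parity after one loss)
```

The extra parity drops the modeled loss probability by roughly an order of magnitude, which is the intuition behind wanting something past RAID6 as rebuilds lengthen.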

  • by SlashbotAgent ( 6477336 ) on Thursday February 05, 2026 @02:36PM (#65970770)

    How thick is that disk with 14 platters?

  • If this is going to lower the price of 5 TB drives, bring it on: I have a good use for such drives, but not really so for 100+ TB drives.
    • If this is going to lower the price of 5 TB drives, bring it on: I have a good use for such drives, but not really so for 100+ TB drives.

      No, it will not. A drive housing or the pivot arms of a RW head do not care if the drive platters are high-coercivity exotic materials, or if the heads are ePMR or HAMR.

      More production lines that made housings and pivots and motors for 5TB drives will be converted to handle 100TB ones, and more of the remaining normal housings and pivots and motors will be used for 15TB-and-up drives.

      Expect low capacity drives to slowly dry up.

      Plan accordingly.

  • They don't call it "bleeding edge" for nothing.
    • Um, let someone else test them. They don't call it "bleeding edge" for nothing.

      They are being tested as we speak. Engineering samples are being sent to the main hyperscalers (Google, Microsoft, Amazon, Facebook), smaller hyperscalers (Oracle, IBM, OVH, Deutsche Telekom, Telefonica-Movistar) and the AI datacenter crowd for pre-validation and feedback.

      These drives are aimed squarely at hyperscalers first and foremost; we consumers are an afterthought. We will see consumer versions of these drives long after the hyperscalers have adopted them en masse.

  • A lower price, RELIABLE 8TB drive would be nice. I bought two Seagate Ironwolf drives for my father for his 2-bay NAS, which he uses for backup (very low use). One failed within three months and it took almost as long to get them to replace it. Ideally, some 8TB SSD drives suitable for NAS use that don't cost £600 a piece would be good too...
    • by Wolfrider ( 856 )

      You might have better results with a burn-in test: dd write zeros to the entire drive, followed by a SMART long scan. Helps to weed out shipping damage.

      • What he said!

        Seriously, my drives have a strong tendency to do one of two things: Die within a few weeks or last until it doesn't make sense to keep them in the drive enclosure because they are so small (relatively)!

  • I'd be happy with MTBF high enough on these drives that RAID isn't needed, or at the minimum, "first gen" RAID, like single parity RAID 5, RAID 1, etc. No needing to have 3-4 levels of parity, or splitting drives up into sections, each section and each set of sections with RAID.
