Data Storage

Western Digital Aims For 100TB Hard Drives by 2030 (tomshardware.com) 62

Western Digital plans to introduce its first heat-assisted magnetic recording (HAMR) drives in late 2026, with 36TB conventional magnetic recording (CMR) and 44TB shingled UltraSMR variants. Volume production won't begin until the first half of 2027, following qualification by cloud data center providers in late 2026.

The company projects that HAMR technology, combined with OptiNAND, increased platter count, and mechanical improvements, will enable drives reaching 80TB CMR and 100TB UltraSMR capacities around 2030 -- a departure from Western Digital's previous commitment to microwave-assisted magnetic recording (MAMR) in 2017, which evolved into the energy-assisted perpendicular magnetic recording (ePMR) technology used in current drives.


Comments Filter:
  • Whoa.... (Score:5, Insightful)

    by iAmWaySmarterThanYou ( 10095012 ) on Friday February 14, 2025 @03:09PM (#65167003)

    I used to be storage guy back in the day dealing with petabytes when many people weren't sure of what gigabytes were. (Shitty job btw, never be storage guy).

    As drives got larger over the years it was great for cost and data center space but transfer rates and raid rebuild times became a problem. It's verrrrrry cool they can jam 100TB on a standard size drive soon and there are numerous real world uses for these but JFC you don't want to be the guy responsible when an array collapses before the rebuild completes or get yelled at by non-technical C level who think he's technical about why it takes so fucking long to back up, copy, transfer, or restore from these things.

    The tub gets bigger n bigger but the faucet and drain remain the same size.
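The tub/faucet problem can be put in rough numbers. A minimal sketch, assuming a hypothetical sustained sequential rate of 250 MB/s (current large drives are in that ballpark; the exact figure is an assumption, not from the article):

```python
# Back-of-the-envelope: how long a full-capacity rewrite (e.g. a RAID
# rebuild) takes as drives grow but sustained throughput barely moves.
# The 250 MB/s rate below is an assumed round number.

def rebuild_hours(capacity_tb: float, mb_per_s: float = 250.0) -> float:
    """Hours to sequentially rewrite the whole drive at a fixed rate."""
    total_mb = capacity_tb * 1_000_000  # 1 TB = 10^6 MB (decimal, as vendors count)
    return total_mb / mb_per_s / 3600

for tb in (5, 20, 100):
    print(f"{tb:>4} TB -> {rebuild_hours(tb):6.1f} hours minimum")
# 100 TB works out to roughly 4.6 days, and that is the best case:
# it ignores seeks, parity math, and serving live traffic during the rebuild.
```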

    • As drives got larger over the years it was great for cost and data center space but transfer rates and raid rebuild times became a problem. It's verrrrrry cool they can jam 100TB on a standard size drive soon and there are numerous real world uses for these but JFC you don't want to be the guy responsible when an array collapses before the rebuild completes or get yelled at by non-technical C level who think he's technical about why it takes so fucking long to back up, copy, transfer, or restore from these things.

      The tub gets bigger n bigger but the faucet and drain remain the same size.

      Assuming the physical form factor/platter count of the disk remains constant and the disk spins at the same speed, I/O bandwidth is a function of areal density. Higher-density drives generally have commensurately higher bandwidth.

      • Re: (Score:3, Insightful)

        They're all on the same bus. Be it SATA or SCSI or whatever. Usually SATA but shrug. There have been no external connector/bus improvements since SATA II.

        So that's where I was going with this. You get a tub upgraded to the size of an Olympic pool but you still fill and drain it with the same tiny pipes.

        Don't get me wrong. I really honestly do think 100TB is awesome but the connection to the system really needs an upgrade. Maybe these things are supposed to be usb5 or direct connection PCIE 5.0 or some

        • >"They're all on the same bus. Be it SATA or SCSI or whatever. Usually SATA but shrug. There have been no external connector/bus improvements since SATA II. So that's where I was going with this. You get a tub upgraded to the size of an Olympic pool but you still fill and drain it with the same tiny pipes."

          SAS is currently 12Gb/s. SAS4 will be 22.5Gb/s.
          SATA version 3 is old, but 6Gb/s. I believe that is 732MB/s.

          The reality is that no existing spinning rust comes anywhere near a sustained 6Gb/s.
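A quick sketch of the gap between link rate and media rate. The 8b/10b encoding factor applies to SATA3 and SAS3; the 280 MB/s "fast HDD" sustained figure is an assumed round number for illustration:

```python
# Interface line rate vs. what the platters can actually deliver.
# SATA3 and SAS3 use 8b/10b encoding, so usable payload = line_rate * 8/10.

def payload_mb_s(line_gbit: float, efficiency: float = 0.8) -> float:
    """Usable MB/s after encoding overhead (8b/10b gives 0.8)."""
    return line_gbit * 1000 * efficiency / 8

sata3 = payload_mb_s(6)    # 600 MB/s usable on a 6 Gb/s link
sas3 = payload_mb_s(12)    # 1200 MB/s usable on a 12 Gb/s link
hdd_sustained = 280        # assumed generous sequential rate for a current HDD

print(f"SATA3 payload: {sata3:.0f} MB/s, SAS3: {sas3:.0f} MB/s")
print(f"A fast HDD uses about {hdd_sustained / sata3:.0%} of a SATA3 link")
```

So for spinning rust the bottleneck is the media, not the connector, which is the counterpoint to the "same tiny pipes" argument above.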

          • I'm talking data centers and RAID systems. You're also talking wire speeds, not real speeds.

            Anyway back to my original point: it takes a lot longer to rebuild a 100TB drive than a 5TB drive. We can easily agree on this. When you're waiting on a rebuild, 20x longer than current numbers (anywhere from a few hours to possibly days on some systems) is terrifying.

          • Re:Whoa.... (Score:5, Informative)

            by Temkin ( 112574 ) on Friday February 14, 2025 @08:15PM (#65167653)

            SAS4 will be 22.5Gb/s

            Is 22.5Gb/s... SAS4 kit has been available for purchase for at least 2 years. But... I have yet to see a 3.5" SAS4 drive. So all the large bulk storage switched enclosures are giving people SAS3 speeds anyway. I did get to play with a 48 drive SAS4 SSD array for a couple weeks. It was nice, but not "48 NVMe drives with hookers & blow" nice.

            SATA version 3 is old, but 6Gb/s. I believe that is 732MB/s.

            SATA3 is dead. There are working groups trying to spec out SAS5 now. It will probably be futile, but they are trying. Nobody is working on SATA at this point. There will never be a SATA4 spec, not ever. They all still sell SATA3, and there are lots of SATA boot drives getting sold to enterprise accounts, but... All the engineers & mid-level execs are asking "Why?" SAS3 & 4 get you thousands of devices, multipath, and cable lengths of 5 meters or so. SATA gets you... pig Latin when you could have had the full SCSI command set (IOW full conversational Latin...)

            If I had to guess, there will be some kind of switched PCIe over fiber technology that will solve the signal integrity at distance problem, and end SAS5 before it's born.

            I don't think we know yet what the areal density, spindle speed, or number of platters of the huge HAMR drives will be. It is quite possible that an 80TB drive might be much faster than a current non-HAMR 20TB drive. I doubt it will be 4 times faster, though.

            Spindle speed is the big unsolvable problem. You've hit the limit of the materials used. Rotational latency caps out: you have to wait for the platters to spin the sector under the heads. So it's going to be something between 3600 & 7200 RPM for 3.5", and perhaps as much as 10 - 15k RPM for 2.5". I haven't heard of any progress on improving this. This is actually one of the reasons I got excited and then disappointed by the Seagate multi-actuator drives. I had hoped the controller would act as a "multi-thread" capability, but the FW just presents the other actuator as a second drive, which then becomes a "two-fer" point of failure...
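The rotational-latency cap is simple arithmetic: on average the head waits half a revolution for the target sector. A minimal sketch of why spindle speed puts a hard floor under access time:

```python
# Average rotational latency: half a revolution, on average, before the
# target sector passes under the head. latency = 60 / RPM / 2 seconds.

def avg_rotational_latency_ms(rpm: int) -> float:
    """Average wait in milliseconds for the sector to rotate under the head."""
    return 60.0 / rpm / 2 * 1000

for rpm in (3600, 5400, 7200, 10_000, 15_000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):5.2f} ms average")
# 7200 RPM gives ~4.2 ms; even 15k RPM only gets you down to 2 ms,
# which is why no capacity increase helps random-access latency.
```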

            T

            • >"I had hoped the controller would act as a "multi-thread" capability, but the FW just presents the other actuator as a second drive, which then becomes a "two-fer" point of failure..."

              Yeah, it is stupid. I couldn't believe it. Let's totally destroy all logic and reason surrounding drives, failure modes, all drive tools/drivers/utilities/etc. What a mess.

              If you want more heads, then why not just double-wide on a single arm and read/write two tracks at once? Or double-deep on one arm and spin faster a

              • by uncqual ( 836337 )

                WRT two+ heads on one arm (or even one "big" head with multiple read/write units): if one head is tracking its track, is it guaranteed the other is right on its track, even with varying temps and therefore expansion of disks/heads/arms, all of which are made of different materials and are of different structures? The tracks are very close together, so positioning needs to be pretty precise.

                The two synced arms would add to the drive cost and probably somewhat reduce its reliability. About all spinning rust has

          • by cusco ( 717999 )

            In a well-designed array

            You may not have realized who you're replying to: it's the guy whose every post reveals the comprehension of technology (and of the world at large) that a not particularly bright 12-year-old might possess. He probably ended up as "the storage guy" because his supervisors needed someone to do the grunt work that more competent staff were too busy to deal with.

        • by tlhIngan ( 30335 )

          They're all on the same bus. Be it SATA or SCSI or whatever. Usually SATA but shrug. There have been no external connector/bus improvements since SATA II.

          SATA 3 is a thing: 600MB/s usable. SSDs abandoned it because they ended up saturating SATA3. Hard drives right now are barely breaking 200MB/sec off the media and are perfectly happy at SATA2 speeds. These large drives could easily start saturating SATA3 to/from the media.

          Still takes days to rebuild, but with these large arrays it's time to have RAID6 at a minimu

        • by AmiMoJo ( 196126 )

          These days the slow transfer speed of mechanical drives is somewhat mitigated by things like tiered storage. Huawei has developed tapes that include some flash memory storage too.

          HDDs will probably adopt U.2 or whatever the consumer version is called, once they start to push past the limits of SATA 3 bandwidth. Or maybe not, they may just largely abandon the consumer market for internal drives and only offer them in USB, as the consumer market is mostly SSDs now.

    • by Kisai ( 213879 )

      OptiNAND is such a shitshow, "Here you go, nice super-large HDD" NAND dies "Now you have a 3lb paperweight"

    • RAID is so last century; proper large-scale storage uses software-defined erasure coding.
    • Go parallel or go home.

    • As an individual user, I agree. It takes too long to replace the data on a huge drive. It's especially critical when it comes to copying data out of a drive that is beginning to fail.

  • My first computer was an 80286 with two 8MB hard drives! I built my first computer around 1995 as a teenager. It was a Pentium 100MHz, 16MB RAM, a 1MB dedicated video card, and a 1GB hard drive. I used to think "I will never fill this 1GB drive!" It honestly felt like unlimited storage at the time. Today my current system has a 1TB solid state drive and is attached to a 16TB NAS server. A 100TB drive in 5 years doesn't seem all that far fetched. It may not be in your common off the shelf machine by tha
    • It was a Pentium 100Mhz, 16MB RAM, 1MB dedicated video card and a 1GB hard drive.

      My first "affordable" upgrade after my 386 40MHz was a Pentium 90MHz and a 128 MB disk, a just-under-top-of-the-line PC for fl 2000,-.

      I was never a data hoarder and didn't need expensive large disks, and nowadays I use 2x 500 GB in Btrfs RAID 1 on my laptops with an about 300 GB volume for miscellaneous data (mostly occupied by game and media downloads). It's just about the right size that when it's close to filling up I feel that I have too much junk on it and need to clean it up.

      The two data server

    • 8MB luuuuuxury!

      Why when I was a lad, uphill both ways, sawn in half wit o bread knife etc

      My first machine was a BBC with a 5.25" disk drive. Hundreds of kilobytes! I could fit more on a D120 cassette tape though. My next teenage purchase was a P133 with 72M of RAM (bought off a mate; his dad was in IT and had spares, apparently) with a glorious 700M. And an Nvidia Riva128 graphics card with gloriously janky OpenGL drivers.

      I later added a 9 GB drive for Linux (dead rat 5.2) and a fancy 16x read 4x write

      • 5.25 inch hard drive? We should have been so lucky! Half a punch card, an abacus and a clip round the earhole and we would have thought we'd died and gone to heaven! ...back downt pit int evening an straight up chimney of mainframe come sunrise I'll have you know…

        • Abacus and paper punch cards? Look at you, Mr Fancy. My first computer used stone tablets for memory. Write speed was as fast as we could chisel. The upside was read speed was much faster, and no one has come up with a more durable storage that could last centuries.
    • OK I'll join in. My first PC was the original IBM PC with 8088 processor, 64K of RAM and a pair of 360KB floppy disks. I think that was 1985 or 1986.

      My first hard drive was a 5.25" 20MB drive in an external SCSI enclosure for my Macintosh Plus.

      Fast forward and today I'm on a 64 GB RAM Windows PC, booting from a 500 GB SSD with main storage being a 4TB NVMe SSD, and another 1TB NVMe. In my garage I have an old HP server with a 6TB spinning rust array (RAID 5 I think, but maybe RAID 6 - not sure).
    • by antdude ( 79039 )

      My IBM PS/2 model 30 286 10 Mhz PC came with a 30 MB HDD! I also bought Stacker software, without its hardware addon to double it but it only gave me like 1.5X more and of course slower speed. :P

  • Yeah, you lose 100TB when it breaks. How dumb is that? How about developing better optical devices for large storage?
    • Much as I mourn the end of that tech, it appears they couldn't keep up.

    • by jedidiah ( 1196 )

      If it's spinning rust, it's not going to break suddenly and die unexpectedly. You will probably have enough time to copy your data before the drive dies assuming you didn't already have another copy or two already.

      • by kmoser ( 1469707 )
        Maybe...and maybe not. Spinning rust drives can fail catastrophically, too. If you're banking on them failing slowly enough for you to salvage data, the real problem is that you didn't make regular backups to begin with.
      • > If it's spinning rust, it's not going to break suddenly and die unexpectedly. You will probably have enough time to copy your data before the drive dies assuming you didn't already have another copy or two already.

        Nope; working in IT, I've seen that HDDs usually die suddenly. Only rarely do they go gracefully, and you'll only detect that if you're running SMART tests frequently.

        It's particularly a problem when it's an important PC that nobody bothered to mention to IT actually exists, and of course its 1997 HDD went bel

    • by thegarbz ( 1787294 ) on Friday February 14, 2025 @07:02PM (#65167535)

      But what is that 100TB? One of the features of the modern world is that you end up storing the same amount of content in larger files. Think of your porn collection.

      Back in the days of 30GB HDDs you were jerking to 35MB real media files.
      In the days of 256GB HDDs we started spanking the monkey to 350-600MB DIVX files.
      In the days of eh 2TB HDDs we were gooning at 2-3GB H.264 4K movies.
      And currently in the days of 16TB HDDs we're shaking hands with the milkman to 20GB 8K VR porn.

      As the size of the HDD increases, we don't necessarily loose any more porn as a result.

      • by shanen ( 462549 )

        Not exactly the joke I was looking for, but filing it under obligatory, even with the typo/s.

        Reminds me of a period when I was working for Internet startups and one of the owners said the whole thing was based on porn. Also not funny that he was murdered not long after that conversation. Rumor said it was an attempted marijuana purchase that went bad...

        Story also reminds me of my earliest hard disks around 10 MB.

  • Some notes (Score:4, Interesting)

    by Okian Warrior ( 537106 ) on Friday February 14, 2025 @03:53PM (#65167115) Homepage Journal

    An hour of video is roughly 1GB, so one drive can store roughly 100,000 hours of video.

    Hypothetically, a person wearing a system containing one of these drives could record roughly 4 hours a day for 60 years. Assuming that most of the (awake) time we spend in our daily lives is unremarkable and not worth recording, one could make the claim that one of these drives could record a single person's entire lifetime. All the interactions you have, everything that everyone else says (including all the school lessons you receive), everyone you meet, all the books and articles and papers you read - everything significant in your lifetime could be recorded on one drive.
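The arithmetic behind this lifetime-recording claim, using the post's own assumptions of 4 hours/day, ~1 GB/hour, and a 60-year span:

```python
# Sanity check on the lifetime-recording claim. The 1 GB/hour rate and
# 4 hours/day schedule are the post's assumptions, not measured figures.

hours_per_day = 4
years = 60
hours = hours_per_day * 365 * years   # total hours of footage
tb_needed = hours * 1 / 1000          # at 1 GB/hour, in TB (decimal)

print(f"{hours:,} hours of footage ≈ {tb_needed:.1f} TB")
# 87,600 hours ≈ 87.6 TB, which indeed fits on a single 100 TB drive
# (at that bitrate; see the replies below about what 1 GB/hour buys you).
```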

    Add an AI indexing system and you can have a quick index of your entire life.

    (And apropos this system, you could replay traumatic events, and this would help you get over the trauma. Or play the events to a trusted medical professional and get advice, and so on.)

    Secondly, one problem with app installation (on Linux; I don't know how bad it is on other systems) is access to shared libraries of various revisions and dates. Compatibility has become a nightmare, and we now have to deal with multiple installation systems as well (apt-get, pip-install, CPAN, and so on). I've been in installation hell several times on my Linux system, trying to get some bespoke configuration of library versions just to get some standard installed application to run. It's not fun.

    With large amounts of storage and fast internet, we might as well build apps with statically-compiled libraries (flatpaks and such) and just not have to worry about library versioning. This would also make supply chain compromises a little harder, since when a bad library is discovered it only affects certain compiled apps (which will be recompiled), rather than having been blindly downloaded by all users during a system update.

    I can see a lot of uses for large hard drives.

    • Yeah, I see AI being a big user of huge drives in the future.
      I suppose, in the future, Netflix could just send you an encrypted drive with all their shows in the mail so they don't have to pay for bandwidth. Could be a great deal for them.
      A third big reason for these things is that a lot of paranoid people are getting footage from their 12 security cameras 24/7 and need someplace to store all that useless data.

      For me, someone who isn't very interested in AI, security cameras or tv, I don't see many uses, bu

    • by Kisai ( 213879 )

      Nope. Not if it's 4K.

      The minimum acceptable quality is 16-25 Mbit/s for 1080p and 32-100 Mbit/s for 4K.

      I can fill up a 20TB hard drive with just 450 hours of 4K video.
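Cross-checking that figure at the top of the commenter's stated 4K range (100 Mbit/s; the bitrate is their assumption):

```python
# How many hours of 4K at 100 Mbit/s fit on a 20 TB drive?
# bitrate -> GB/hour -> hours, all in decimal units as drive vendors count.

bitrate_mbit = 100
gb_per_hour = bitrate_mbit / 8 * 3600 / 1000   # 12.5 MB/s * 3600 s = 45 GB/h
hours = 20_000 / gb_per_hour                   # 20 TB = 20,000 GB

print(f"{gb_per_hour:.0f} GB/h -> {hours:.0f} hours on 20 TB")
# ~444 hours, which matches the commenter's "just 450 hours" figure.
```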

      • by dgatwood ( 11270 )

        An hour of video is roughly 1GB,

        I can fill up a 20TB hard drive with just 450 hours of 4K video.

        Yeah, 1995 called and wants its 720p resolution back. That's about a gigabyte per hour. Even 1080p is higher than that at typical quality settings.

        Apple ProRes 4444 XQ video is 764 gigabytes per hour [vashivisuals.com], and Canon RAW 4k (12-bit) is 1036 GB per hour.

        And on the extreme end, one hour of 8k ProRes 4444 XQ video at 120 fps takes a whopping 14.57 TB [multiversemediagroup.com].

        So in the worst case, the GP was more than four orders of magnitude low with that estimate. :-)

    • Nice idea, recording one's whole life. Unfortunately, as an HDD it's not suited to being worn, nor will it last more than a good few years in constant use.

      Put a couple in a NAS that you upgrade/replace as maintenance requires, and use a constantly-connected 4G camera with an SD card cache for when you have no signal; that might do it.

  • I recently put together a security system running Blue Iris and bought a bunch of really cheap used HGST drives in the 10 to 12 TB range. They're great for footage that will likely never be looked at, but the few times I've wanted to move a large file, the transfer speed was really disappointing. Obviously it is hard to improve this due to the physical characteristics of drives, but it seems like little to no advancement has happened in a long time in regard to read/write speeds.

  • I'm kind of surprised the usual "But we all use SSDs these days!" comments haven't started, but from my point of view this is fantastic anyway because I've started using HDDs as back-up media. You can get a SATA hotswap bay that'll fit in a standard half height 5.25" bay for about $30. Just insert the disk, copy your files, and then remove it.

    This is an application this kind of disk size seems built for and I suspect will still be relevant for in 5 years, as I suspect while demands on disk space will contin

  • " Finally, I can backup my ZFS pool! "

    " ...whaddayamean it's gonna cost $4k per drive--!? "

  • And... are they ok (as in not affected) if they are switched off and disconnected sitting on a shelf... in a decent environment, dryish, not too hot or cold, not dusty.
    • Depends on the innards as well as the quality of the components in the ancillary systems.

      So enterprise-grade drives would have a higher MTBF than consumer ones, as they do today, but no drive is going to like sitting on a shelf for a decade (which is the minimum useful archive life) without periodic exercise; more than a couple of years each stretch is pushing it.

      To actually get a real answer you will need to wait for these to be released and take a look at the datasheet, but remember MTBF is only an approximation an

  • As with every "great advancement" of the hard drive, the same questions apply every time. They can keep stacking the internal components as much as they want, but the real question is the durability and long-term storage accuracy of the device. Will the data on it last 50 years on a shelf? Will there be any component failure in the future? Capacitor rot, storage disk bit loss, etc. Are solid state drives more susceptible to this than hard disk drives? Everybody turns to China to get its things made, but Ch
    • > Will the data on it last 50 years on a shelf?

      Yes; in fact, as these are heat-dot drives with very advanced platters featuring uniform placement of uniformly shaped particles, they will have an SNR high enough to retain data with a longevity that today's drives can only dream of.

      But as you say, all the other components of what is essentially a computer with a motor and heads to read platters, all of those have shelf lives too. Plus you have the issues with the way such components are connected, via lead free

  • Haven't they been talking about HAMR and the like for over 5 years now?! Seems like more promises, but we haven't seen these big storage increases materialise.
