Data Storage

Unpowered SSDs in Your Drawer Are Slowly Losing Data (xda-developers.com) 79

An anonymous reader shares a report: Solid-state drives sitting unpowered in drawers or storage can lose data over time because voltage gradually leaks from their NAND flash cells. Consumer-grade drives using QLC NAND retain data for about a year without power, while TLC NAND lasts up to three years. More expensive MLC and SLC NAND can hold data for five and ten years respectively. The voltage loss can result in missing data or completely unusable drives.

Hard drives remain more resistant to power loss despite their susceptibility to bit rot. Most users relying on SSDs for primary storage in regularly powered computers face little risk since drives typically stay unpowered for only a few months at most. The concern mainly affects creative professionals and researchers who need long-term archival storage.



  • by ffkom ( 3519199 ) on Tuesday November 25, 2025 @11:34AM (#65817147)
    That the few extra electrons sitting isolated on some "floating gate" in a Flash RAM cell have the tendency to tunnel away over extended periods of time - and more so when exposed to more than the usual background gamma radiation - has been known since the early days of Flash RAM. If you want to archive data on some medium that you can put away for decades, magneto-optical disks are your best bet. I never lost a single bit from the 128MB MO-disks I wrote in the early 1990s.

    But if you want to use NVMe bars as long-term storage, you'll need to set up some rack where they can be powered occasionally - and depending on their ability to test-read and re-write their content on their own, you might also need to re-write it intentionally.
    • by reanjr ( 588767 ) on Tuesday November 25, 2025 @11:46AM (#65817155) Homepage

      I put mine into a machine once a year and dd the entire disk to /dev/null. So far, so good.
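
      For anyone wanting to script the same yearly read pass, a minimal sketch - the device name /dev/sdX is a placeholder, and a scratch file stands in for the real SSD here so the sketch is safe to run:

      ```shell
      # A read-only pass over the whole device gives the controller a chance
      # to spot weak cells. /tmp/fake_ssd.img stands in for the real /dev/sdX.
      dev=/tmp/fake_ssd.img
      dd if=/dev/zero of="$dev" bs=1M count=8 status=none   # scratch "disk"
      dd if="$dev" of=/dev/null bs=1M status=none && echo "read pass OK"
      ```

      Against a real drive you would point dev at the block device instead and drop the scratch-file setup line.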

      • by piojo ( 995934 ) on Tuesday November 25, 2025 @11:57AM (#65817169)

        So far, so good.

        Just wait until you accidentally dd /dev/null to the entire disk.

        (Good thing that wouldn't actually work, until our AI overlords can correct "/dev/null" to "/dev/zero")

      • Seriously? Why not run extended SMART tests? An extended test cycles through all the active areas *AND* gathers and stores valuable diagnostic information, spitting out a lovely report at the end of it.

        You sound like the kind of person who threw away the manual to their car and simply changes the oil every 6 months, hoping for the best.

        • Are there tools that work on all drives? My understanding is none of the SMART tools for Linux are reliable on all drives. They might report the wrong metrics. My approach doesn't rely on reporting metrics, but on the drive automatically moving data that's on failing cells.

          • I have never once come across a drive, HDD or SSD, that doesn't correctly respond to "smartctl -t long". Some drives may not respond with diagnostic data as expected, because this is a vendor-specific dataset without a standard implementation, but all drives ship with some mechanism for quick and extended tests, and they all respond to the same SMART command.
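
            As a sketch, the maintenance pass being described - assuming smartmontools is installed and /dev/sdX stands in for your drive:

            ```shell
            smartctl -t long /dev/sdX      # start the extended self-test; it runs in the drive's background
            smartctl -c /dev/sdX           # capabilities page, includes the estimated test duration
            smartctl -l selftest /dev/sdX  # read the self-test log once the test has finished
            ```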

            That being said, your past experience may not be relevant here. Unlike HDDs, SSD controllers are actually governed by a standard of what SMART metrics to

    • Is that longevity of MO-disks partly based on how they were just built differently (less dense) back then? Kind of like how SLC SSDs store data for 10 years while QLC lasts far less time. My constant worry nowadays is that, in the pursuit of increased density, storage of every type has become fragile and more susceptible to errors/loss, especially when you leave it unpowered over time.
      • by ffkom ( 3519199 ) on Tuesday November 25, 2025 @11:58AM (#65817175)

        Is that longevity of MO-disks partly based on how they were just built differently (less dense) back then?

        No, the more dense MO-disks are just as reliable. The important point is that MO-media are far enough below their Curie-temperature at usual ambient conditions that they are "magnetically hard", so even sizable magnetic fields would not damage the information on the media. This is unlike the classic magnetic media, which is relatively easy to erase with modest magnetic fields. Would be interesting to know if modern "heat assisted magnetic recording" hard drives are of similar resilience while at ambient temperature.

        • I continue to use burned DVDs for backing up the critical stuff. Not perfect, of course, but not electromechanically-failure prone like a hard disk drive, not "terms of service" failure prone like cloud storage, and not "the charge magically held in the gate leaked away" failure prone. I have optical discs over 25 years old which are still perfectly readable.
          • by swillden ( 191260 ) <shawn-ds@willden.org> on Tuesday November 25, 2025 @05:11PM (#65817707) Journal

            I continue to use burned DVDs for backing up the critical stuff. Not perfect, of course, but not electromechanically-failure prone like a hard disk drive, not "terms of service" failure prone like cloud storage, and not "the charge magically held in the gate leaked away" failure prone. I have optical discs over 25 years old which are still perfectly readable.

            DVD-R? DVD+R? DVD+RW? Single or dual layer? Gold metallic layer? Silver metallic layer? How are they stored?

            Depending on how you answer those questions, your 25 year-old media may be past due and you've just gotten lucky, may be just entering the timeframe where it may die, or may have decades of reliable life left.

            DVD-R single layer disks with a gold metallic layer are good for 50-100 years. Other recordable DVD options are less durable, some as little as 5-10 years.

          • I do the same, two copies on different burners with different software to different brands of media. Problem is the media is getting really hard to find, the few remaining types out there are branded but made God-knows-where under who-knows-what conditions.
            • I do the same, two copies on different burners with different software to different brands of media. Problem is the media is getting really hard to find, the few remaining types out there are branded but made God-knows-where under who-knows-what conditions.

              And unless you're on a desktop, there are only some Japanese-market laptops (h/t to /. for that?) that come with DVD drives. Yes, you could still use an external one via USB or whatever, but they stopped putting them in new machines about half a decade ago in lots of other markets. That being said, I have used old DVDs from the library and they work great 10+ years later, depending on what I am watching.

              How do you handle the different burners and brands of media if you're only making two copies? Do you do off

              • My desktop is just under 20 years old and still runs fine, meaning it sits at single-digit CPU utilisation most of the time and runs all the software I need it to. The burner I originally put in it is an LG, the second is something from a PC that was being scrapped. Software is CDBurner XP and something else (it's been a long time since I set them up). Storage is one copy locally and the second copy at a friend's place. It's pretty straightforward, just whatever works.
        • Would be interesting to know if modern "heat assisted magnetic recording" hard drives are of similar resilience while at ambient temperature.

          I'm guessing the answer is yes. The need for heat assistance implies high coercivity at anything close to room temperature. HAMR write temperatures go beyond 400C, far above room temperatures.

    • by AmiMoJo ( 196126 ) on Tuesday November 25, 2025 @12:14PM (#65817195) Homepage Journal

      What's changed is that in the early days flash memory was one bit per cell. Now most consumer-grade stuff is multi-level, so instead of a single threshold voltage that separates a 1 from a 0, there are multiple thresholds that each represent a different binary code.

      SSDs sometimes have to re-read blocks with different voltage thresholds to get good data, and make use of error correction on top.

      Presumably age related degradation is worse for multi-level flash.

      • SSDs sometimes have to re-read blocks with different voltage thresholds to get good data, and make use of error correction on top.

        NAND-style Flash pretty much REQUIRES a SECDED scheme as even simple reads can corrupt the storage over time. You're pretty much guaranteed a bit-flip at some point. But modern NAND controllers pretty much handle the correction and re-write "behind the scenes" by copying the whole block somewhere else (after correction) and re-mapping.

    • Depending on the need, archival-quality Blu-ray discs are not bad either. As to the material, it is not unlike writing modern stone tablets.

  • by Anonymous Coward

    Unpowered flash memory self-erases. Magnetic media demagnetizes and corrupts. Optical media has a reflective layer that corrodes causing bit-rot. Paper decomposes and ink fades. If you want information to last, carve it in stone and bury it in the desert.

    • by ffkom ( 3519199 )

      If you want information to last, carve it in stone and bury it in the desert.

      Or pay some celebrity to throw a hissy fit in public about _that_information_ being exposed, and use the Streisand effect to have the information stored, forever, in an almost uncountable number of copies.

    • Data tapes from the 1960s are still readable if they were stored properly. Finding working 7-track drives is a different matter.

      • by davidwr ( 791652 )

        Paper tapes from the 1750s [wikipedia.org] are still readable if they were stored properly. Finding looms from that era are a different matter (but you could make a modern reproduction).

      • by Anonymous Coward
        I've heard a million excuses for why old tapes degrade (physically - the rust falls off) including that there was a transition period when whale oil was phased out and the new formulas weren't any good, but the point remains that plenty of old magnetic tapes fell apart and it's not because the tape makers intentionally made bad tapes. Ask Brian Eno who lost a lot of his masters when the rust fell off his tapes.
    • M-Disc is built with the intention of lasting a thousand years. The data layer is a stone-like material.

  • by mckwant ( 65143 ) on Tuesday November 25, 2025 @12:09PM (#65817185)

    Good grief, friend, who sets your HW budget?

    • Exactly!
      "The concern mainly affects creative professionals and researchers who need long-term archival storage"
      Who's paying these researchers? And why would they ever think SSD is "archival" ???
    • by EvilSS ( 557649 )
      I have a bunch of older SSDs sitting in a bin in my hardware horde...er...closet. Occasionally find a use for them but for the most part they were replaced by larger drives over time. Luckily I don't care about whatever data is on them, anything important is backed up.
    • I have got about 6 unused SSDs in a drawer. Low capacity, from 64GB to 128GB. Some are 15 years old now. I use them fairly rarely, to do things like write new OS boot images. I don't store data long term on them. But they all still work - no write or read errors. SMART data looks fine, too.

  • There is a JEDEC spec for both consumer and data center SSDs. The consumer spec requires 1 year of power-off retention at 0% health remaining. The data center spec is only 90 days.

    As the SSD wears, the retention goes down. Years ago, a tech magazine (remember those?) did a torture test on an old MLC Samsung 3D SSD. It was healthy with no errors at well over 10X the rated endurance. Unfortunately, its data retention was measured in days. This is why it is so important to watch SMART for wear and replace drives that are worn out. They might still seem error-free, but at some point they are not really storage anymore.

    Other factors are temperature (cooler is better). Some have mentioned radiation, which probably is an issue if you are putting these in aircraft. Bottom line, if you really want the data to survive, replicate and keep it moving with good data integrity checks.
  • Does powering the drive refresh every bit? Or would you want to do something to ensure everything got read?
    • The mechanisms are proprietary, but generally it works as a background refresh. So probably mostly when idle, though it likely won't run continuously, as that would wear the drive out and even just scanning would cause constant high power usage. It would be nice to be able to monitor when a refresh starts, how far along it is, resume it, etc.
    • To be sure, cat the device to /dev/null
    • Best to use ZFS and scrub the disks on a schedule. A ZFS scrub will read every block of data and compare it to its checksum. If an error is found it will fix it using data from its mirror or raid copy.
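
      A sketch of that scrub cycle, assuming a pool named tank (the name is a placeholder):

      ```shell
      zpool scrub tank    # read and checksum-verify every allocated block
      zpool status tank   # progress, plus any errors found and repaired
      # to put it on a schedule, e.g. monthly via cron:
      # 0 3 1 * * /sbin/zpool scrub tank
      ```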
    • It doesn't refresh the bits- it reduces the voltage delta that causes voltage to leak from the gates.
      This charge leakage also happens when it's powered, but much slower.
  • I just hope the firmware is stored on SLC, because it sure likely isn't stored on a ROM chip. I have several SSDs that have been powered off for more than a year. They are kept as spares with no data on them, but I don't want them to just stop working.

    • Yeah, I'm wondering if they store the firmware on a separate BIOS-like flash chip (retention typically 20-40-100 years) or on the actual main flash chip to avoid that cost. Even if the drive is empty and the firmware itself survives, the metadata (mapping tables etc.) could be corrupted and hence the drive won't work anyway. I'm not sure if the drive will necessarily be able to reinitialize those metadata tables after corruption (because reinitializing them also reduces hope of data recovery). But AF
  • by allo ( 1728082 ) on Tuesday November 25, 2025 @01:13PM (#65817315)

    Does one need to rewrite all cells to "recharge" them or is it enough to connect the SSD for a day, a week, or another timespan?

    • by kubajz ( 964091 )

      Good question, I wanted to ask the same... and this being Slashdot, I am hoping for a nice trustworthy answer :)

    • The rewrite happens automatically in the background while the disk is idle - while proprietary, I would guess all recent SSDs (at least TLC/QLC) do this. But note that it has not always been so - for instance, for the 840 EVO this was added in a firmware update, when retention started to become a problem (it was one of the early TLC disks, so likely it had not been much of an issue before). What is not known is how long to keep the disk on... when will the disk start the process, how long does it take, ca
      • by allo ( 1728082 )

        The question is, if I put my old SSD into a USB enclosure, for how long do I need to connect it every few months to be sure the data is safe?

        One could probably try to determine a minimum amount of time based on flash speed, but how fast does the controller work? Does it work when the SSD is idle but in principle powered, or does it only do it between real writes? Are the writes about the speed of the interface (for example SATA) or can one expect them to be faster because the flash itself is faster?

        Without

    • by ForTheVeryLastTime ( 1777610 ) on Tuesday November 25, 2025 @04:46PM (#65817667)

      The cells must be erased and rewritten, as no-one has (yet) made a NAND cell with refresh functionality similar to that of DRAM.

      Powering on an SSD will cause the controller to start managing the flash memory. Or not, as the case may be, if the controller is simple and cheap. More expensive controllers do indeed move data around to avoid data loss, but in doing so they consume valuable write cycles.

      In other words, plugging a USB stick into a charger will certainly not do you any good, but powering on a high-end SSD might.

      DO. NOT. TRUST. FLASH. MEMORY. WITH. YOUR. DATA.

      (See also: BIOS chips that suddenly fail, seemingly for no reason, bricking devices like motherboards and controllers.)

    • by gweihir ( 88907 )

      SSDs do automatic scrubbing, i.e. a sort-of self-refresh when powered. They do it only for cells that have gotten weak. No idea how long that takes though.

      You should be able to force a full test cycle by either reading the full SSD or by running a long SMART self-test.

      • by allo ( 1728082 )

        SMART long is a good idea. I guess many SSD manufacturers check for weak blocks during a SMART test as well.
        On the other hand, isn't a SMART test defined to do only read operations?

        • by gweihir ( 88907 )

          SMART tests are not really defined. But reading a cell should automatically trigger a refresh if the read was weak or did require ECC. This is not a visible "write", more like housekeeping.

    • Normally SSDs run data cleanup routines when idle, so simply having the drive powered on would do the job. But if you want to do this as periodic maintenance, then use the time to do something useful: run an extended SMART diagnostic. It's guaranteed to touch the entire drive and spit out a nice report at the end of it.

    • A read is supposed to be fine. At read time the firmware *should* rewrite the cell if the read is weak.

      The firmware also *should* go out and patrol the cells when idle and it has power.

      You can dd if=/dev/sdX of=/dev/null bs=2M once a year if your firmware behaves.

      If your drive is offline you could
      dd if=/dev/sdX of=/dev/sdX bs=2M iflag=fullblock status=progress

      to be sure (leave out conv=sync,noerror for an in-place rewrite - on a read error it would pad the block with zeros and write those zeros back over your data), though write endurance is finite.

      If you're running zfs you can 'zpool scrub poolname' to force validation of all the wri

      • by allo ( 1728082 )

        I don't think write endurance is a problem for desktop-use drives of this decade. Most of them are specified with a TBW that you will never reach, especially not if it's a drive that is unpowered most of the time.

    • In my case, only a complete refresh worked. It's not like SSDs have some kind of internal clock to keep track of when cells were last refreshed (assuming the drive even does its own maintenance, which it clearly did not).

      Continuously powered, but several years old. [ninechime.com]

      After a refresh. [ninechime.com]

      Notice how temps went down a lot after the refresh, too. Clearly, the drive was struggling a lot to read cells.

      • by allo ( 1728082 )

        I think btrfs has a mode to completely rewrite data, which may come in handy. I am not sure if this really rewrites all metadata, though. What good is having all the file data when the metadata is missing?

    • Also, can you stick it in a USB charger, or does it have to be a complete USB host device?
      • by allo ( 1728082 )

        I'd think that plugging a USB drive into a USB charger would not do much at all. Any smart drive would not power on if the data pins are dead, would it?

  • Gives a whole new meaning to "use it or lose it".

  • I was waiting to see a post along the lines of

    " I do a full-disk byte dump and print it out. Then all I have to do is scan the paper copy back in later."

    • by HiThere ( 15173 )

      Not a good idea, but you could try paper tape.

    • by gweihir ( 88907 )

      Depending on the data, that is a perfectly valid approach. Example: Root CA secret key. Make sure to use pigmented, non-acidic ink or laser.

  • I have a bunch of old data stored on Kodak Gold CDRs from the 1990s. Kodak claimed 100 year archive life -- although I guess this was just a "best guess" based on accelerated aging tests.

    Perhaps I should check them and make sure that bit-rot hasn't set in.

    Otherwise I don't bother with backups, they're far too stressful. I mean... if you're backing stuff up you've got to choose the right media, keep a copy off-site and have a restore strategy in place. If you don't backup then none of this is a worry an

    • by caseih ( 160668 )

      I always thought most CD-Rs have a pretty short shelf life, and recordable DVDs too. They use a dye that changes color when the laser heats it to store data, instead of a pit, and that dye degrades.

      • by gweihir ( 88907 )

        It depends on the quality of the dye layer, the quality of the coating and other factors. For DVD recordables, same thing. The exception is DVD-RAM which use phase-change and can theoretically be archive-grade. But everything has to work for that. I tried with some and apparently disk and drive need to be matched for it to work well. At the time I tried, there were no current drives and disks with that information available.

    • Have you tried to read any of those old CDRs back? I tried reading some 20 year old CDRs and got a 100% failure rate. Not premium CDRs, not stored in a sealed environment, but reasonably environmentally controlled (60-95 deg F, humidity 20-50%- no rapid changes in either).

    • by gweihir ( 88907 )

      It is a matter of luck. Better not depend on them. Some DVD-RAM were archive grade, but only when written with the right drives.

  • HTWingNut on YouTube has been running a long-term experiment to answer this question. The sample size is small but it's pretty interesting:
    https://www.youtube.com/watch?... [youtube.com]

    • by HiThere ( 15173 )

      My sample size was small (just a couple), but it convinced me not to trust SSDs for backup even though everyone on Slashdot said I should trust them. What I'm afraid of is that portable USB drives will start being made with SSDs rather than spinning rust without bothering to tell me.

      • SSDs work for backup with a level of redundancy. Like the tires on a car, they can get worn. They also can degrade just with time. Copy, verify the copy, store the copies separately. I think the tests done confirm that they wear.

        I trust capacitors over spinning rust because I'm better at limiting electrical damage than mechanical. It is unrealistic to assume you can eliminate failures- you can only reduce their likelihood.

        • by gweihir ( 88907 )

          No. These tests confirm that SSDs need to do data scrubbing to be reliable and hence need to have power for longer-term storage. The wear is a secondary effect.

      • by gweihir ( 88907 )

        My sample size was small (just a couple), but it convinced me not to trust SSDs for backup even though everyone on Slashdot said I should trust them.

        I most certainly never said such a stupid thing. SSDs are NOT long-term storage and neither are USB-sticks.

        • by HiThere ( 15173 )

          I'm not saying any particular person said that, and the question to Slashdot was asked over 2 decades ago. But I was assured that SSDs were "now reliable as an archival store", despite my informal test failure. (I had backed up something to them, and stuck them in a drawer for perhaps a year. They became unreadable.)

          • by gweihir ( 88907 )

            I do not doubt that. We have some large-ego-small-insight "tech" people here, same as any tech forum. These then state total insightless nonsense with confidence. People like that are unable to tell when to fact-check, but have total confidence in their knowledge. And they are always around in some form.

            Come to think of it, modern LLM communication is modelled on these idiots, because they can convince people. People like that also do well in sales, religion and politics.

            Funny thing: I was asked about the s

  • Flash and DRAM are both ways of storing and reading back charge on a capacitor. If you don't read and rewrite, the charge dissipates. DRAM needs rewriting far more often; Flash holds its charge much longer. Either way, if you don't periodically read and rewrite, the stored data goes away.

  • Anybody who did minimal research has known that for ages. SSDs do, and need to do, scrubbing, i.e. data refresh, and for that they need power. If you want longer-term unpowered storage, use HDDs (but better stay away from the SMR trash). For long-term storage use archive-grade tape or paper. Or stone tablets if it is low volume.
