Microsoft Tests Fix For Bug That Defrags SSD Drives Too Often (bleepingcomputer.com) 95

An anonymous reader shares a report: Windows 10 May 2020 Update, otherwise known as version 2004, was released in May with at least ten known issues. Microsoft later expanded the list of problems and acknowledged that this feature update is also plagued by a bug that breaks the Optimize Drives tool. After upgrading to Windows 10 version 2004, users observed that Optimize Drives (also known as the defragmentation tool) is not correctly recording the last time a drive was optimized. As a result, when you open the tool, you will see that your SSD says it 'Needs Optimization' even though you've manually optimized the drives already or automatic maintenance ran that morning. Since the last optimization times are forgotten, Windows 10's built-in maintenance tool started optimizing SSDs much more often, including every time Windows is restarted. With Windows 10 Build 19042.487 (20H2) for Insiders, Microsoft has finally resolved the problems with Optimize Drives.
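As a rough sketch of the failure mode (illustrative only; the names and the monthly cadence below are hypothetical, not Microsoft's actual code): the scheduler decides whether a volume "needs optimization" from the recorded last-run time, so if that timestamp is never persisted, every boot looks like the drive was never optimized.

import datetime

OPTIMIZE_INTERVAL = datetime.timedelta(days=30)   # hypothetical monthly cadence

class Volume:
    def __init__(self, name):
        self.name = name
        self.last_optimized = None   # the 2004 bug: this timestamp is never persisted

def needs_optimization(volume, now=None):
    now = now or datetime.datetime.now()
    if volume.last_optimized is None:   # no record at all -> "Needs Optimization"
        return True
    return now - volume.last_optimized > OPTIMIZE_INTERVAL

vol = Volume("C:")
print(needs_optimization(vol))   # True on every boot if the last-run time is lost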
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Indy1 ( 99447 ) on Thursday August 27, 2020 @11:57AM (#60446958)

    SSDs are NOT rotating media that depend on data being sequentially recorded!

    You don't defrag RAM, so why the hell would you defrag an SSD?!?!?!?!

    Idiots...

    • Yeah, I'm confused. What's the benefit of defragging SSDs vs the wear & tear this causes?
      • Re: (Score:2, Informative)

        There isn't one. Defragging makes no sense for SSDs. Moreover, doing it frequently will reduce the life of the drive. Moremoreover, it probably wouldn't do what defragging was originally intended to do anyway, because modern SSDs have built-in levelling technology to even out the load, so what appears as contiguous data at a certain level of your software stack might well not be physically contiguous on the SSD anyway. Again, not that it matters in the slightest.

      • by Dutch Gun ( 899105 ) on Thursday August 27, 2020 @12:53PM (#60447170)

        I was curious about this, so I did a bit of searching, resulting in this interesting blog post: https://www.hanselman.com/blog... [hanselman.com]

        As it turns out, there is a reason for Windows to occasionally defrag SSDs. Here are the meaty quotes from some developers on the Windows storage team:

        Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

        As far as Retrim is concerned, this command should run on the schedule specified in the dfrgui UI. Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peak resource usage, this queue may only grow to a maximum number of trim requests. If the queue is of max size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer. The Retrim is done at a granularity that should avoid hitting the maximum TRIM request queue size where TRIMs are dropped.
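        To make the queueing behaviour described in that quote concrete, here is a minimal sketch of my own (illustrative only, not the actual NTFS code; the cap and names are made up): a bounded queue of TRIM requests that silently drops entries when full, and a periodic retrim pass that re-covers all free space so dropped requests are eventually honoured.

        from collections import deque

        MAX_PENDING_TRIMS = 8   # hypothetical cap on queued TRIM requests

        class TrimQueue:
            def __init__(self):
                self.pending = deque()
                self.dropped = 0

            def on_space_freed(self, extent):
                # Freed extents are queued asynchronously; if the queue is full,
                # the request is dropped and left for a later retrim pass.
                if len(self.pending) >= MAX_PENDING_TRIMS:
                    self.dropped += 1
                    return
                self.pending.append(extent)

            def flush(self, device_trim):
                while self.pending:
                    device_trim(self.pending.popleft())

        def retrim(free_extents, device_trim, granularity=64):
            # Periodic pass: re-issue TRIM over all free space in chunks small
            # enough that the device-side queue is never overwhelmed.
            for start, length in free_extents:
                for offset in range(0, length, granularity):
                    device_trim((start + offset, min(granularity, length - offset)))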

    • by ThatGype ( 5884680 ) on Thursday August 27, 2020 @12:01PM (#60446972)
      You can't defrag RAM, but at least you can download more [downloadram.net].
      • Cool, seems to be legit and virus free, just how I like my RAM! Thanks for the link!
      • by Bengie ( 1121981 )
        RAM does get fragmented but the OS already deals with it. Virtual memory fragmentation is a bigger issue because the only way to defrag is to copy the data around to different virtual addresses, which has to be done by the userland code. Physical memory is easier to defrag because the OS can page out enough virtually mapped memory, and page it back in contiguously. This usually isn't an issue until about 80% full memory. It's also mostly a userland issue because fragmentation is a function of different sized m
    • > You don't defrag RAM,

      You mean like a Garbage Collector [wikipedia.org]? /s

      Although modern OSes use ASLR [wikipedia.org] (Address Space Layout Randomization) for security reasons.

      --
      Ignoring memory management doesn't make it go away; now you have two problems.
      -- With apologies to JWZ

      • To me there is a clear difference between randomizing RAM addresses for security and defragging it. But for an SSD, the question still remains: why would you defrag it? I can see nothing but issues, especially for multi-bit SSDs like TLC and QLC.
        • Defragging makes me feel superior to all those idiots who DON'T defrag.
        • You still defrag RAM with compacting GCs, though.
        • by Baloroth ( 2370816 ) on Thursday August 27, 2020 @02:15PM (#60447394)

          One reason is that flash memory can be erased or programmed, but not arbitrarily re-written. Erasing bits is typically done at the block level. So, for example, say you have 4K blocks and two 4K files. If you want to change only one of the files, but both are spread across both blocks, you need to copy both files to clean blocks (changing the bits you want in the file you want) and erase the old blocks. If each file is stored in its own separate block, you only need to write one of the files to a new block (and only erase that block). So you've doubled your effective writing speed by proper defragging, and doubled your effective lifetime too.

          You can also get better reading speed if the files are defragged. Since defragged files will be stored in contiguous address space, you can perform block reads to read entire files, while if the files are split up among multiple blocks, you need to perform many reads of smaller pages. Worst case scenario, you could imagine a file striped so that words are stored in alternating addresses, at which point half your data transmission will be I/O request overhead, rather than actual data. This isn't purely theoretical, either: looking at actual benchmarks, there's about a factor of 10 between sequential read/writes and 4K random ones. You don't have the physical latency issue with drive head seeking, but random read/writes still add protocol overhead.

          Of course the SSD controller should handle some of this without Windows getting involved, but Windows understands the file structure in a way that the SSD cannot, so it can perform more intelligent defragging.
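          To put rough numbers on the write-amplification argument above, here is a toy model (mine, not how any real controller works; the block and page sizes are made up): count how many erase blocks must be rewritten to update one file when two files are interleaved across blocks versus each file sitting in its own block.

          # Toy model: an erase block must be rewritten in full if any page in it changes.
          ERASE_BLOCK_PAGES = 8   # hypothetical pages per erase block

          def blocks_to_rewrite(file_pages, pages_per_block=ERASE_BLOCK_PAGES):
              # file_pages: logical page numbers the file occupies
              return {page // pages_per_block for page in file_pages}

          # An 8-page file interleaved with another file across two erase blocks:
          file_a = list(range(0, 16, 2))
          print(len(blocks_to_rewrite(file_a)))   # 2 blocks touched to update file_a alone

          # The same file stored contiguously in its own erase block:
          file_a = list(range(0, 8))
          print(len(blocks_to_rewrite(file_a)))   # 1 block touched: half the writes, half the wear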

      • > You don't defrag RAM,

        You mean like a Garbage Collector [wikipedia.org]? /s

        Ehh, you're stretching. There's nothing inherent in the idea of garbage collection that suggests it does anything that in any way resembles defragging. Admittedly, some implementations (so-called "moving" garbage collectors) do engage in a process akin to defragging memory after they get done clearing out unused data, but the benefit they get is different from what you get when defragging a spinning disk.

        With spinning disks, defragging ensures that the data you want can be retrieved with a single seek opera

    • by nuckfuts ( 690967 ) on Thursday August 27, 2020 @12:19PM (#60447058)

      There are legitimate reasons for running defrag on an SSD, they're just different reasons than for HDDs. Here's a blog post that provides some details:

      https://www.hanselman.com/blog/TheRealAndCompleteStoryDoesWindowsDefragmentYourSSD.aspx [hanselman.com]

      • Re: (Score:3, Informative)

        Your link says that Microsoft defrags SSDs because "the filesystem" (presumably NTFS) has metadata limitations.

        That doesn't make much sense and the blogger seems to be using words he doesn't understand.

        The blogger is a self-described "failed stand-up comic" and appears to have no tech expertise.

        I googled and found a few other articles that also claim "defrag is good", but they all point back to the same ex-comedian blog as their authoritative source.

        • I think that part about being a failed stand-up comic is a joke. He's a very prominent member of the tech community and has worked for Microsoft for many years. I'm surprised someone who's been a Slashdot member as long as you isn't aware of who he is.

        • by raymorris ( 2726007 ) on Thursday August 27, 2020 @01:14PM (#60447242) Journal

          The SSD presents itself to the OS as an ordered collection of sectors. "Sector 1" for the OS may not be physically the first sector, but there is some sector called "sector 1", followed by sector 2, etc. Since the OS doesn't know about the re-mapping to physical sectors, ignore that for now. It doesn't matter for the remainder of this post. For this purpose, there are a bunch of numbered sectors.

          Suppose you have a 1MB file. The filesystem would record that the file lives in sectors 58472-59496.

          Suppose you have a 2 GB file. It may not be possible to store that in contiguous sectors, because there might not be a contiguous 2 GB region free. In which case NTFS (or any filesystem) records that the file is in:

          sectors 1000-6780
          sectors 8600-9164
          sectors 1200-1276
          sectors 13858-14847
          sectors 18647-208574 ...

          Eventually you could have one large file spread over thousands of places. That's inefficient, and at some point there is a limit to how many fragments NTFS can handle, something like 1024 segments per file or whatever.
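          A minimal sketch of the bookkeeping I'm describing (illustrative only; the real NTFS on-disk format and its limits are different, and the cap below is made up): the file's allocation is just a list of (first sector, length) runs, and the metadata can only hold so many runs before extending the file fails.

          MAX_RUNS_PER_FILE = 1024   # hypothetical cap, a stand-in for the real metadata limit

          class FileAllocation:
              def __init__(self):
                  self.runs = []   # list of (first_sector, sector_count) extents

              def append_extent(self, first_sector, sector_count):
                  # Extending into the sector right after the last run doesn't add a fragment...
                  if self.runs and self.runs[-1][0] + self.runs[-1][1] == first_sector:
                      start, count = self.runs[-1]
                      self.runs[-1] = (start, count + sector_count)
                      return
                  # ...but landing anywhere else does, and the run list can fill up.
                  if len(self.runs) >= MAX_RUNS_PER_FILE:
                      raise OSError("file too fragmented to extend")
                  self.runs.append((first_sector, sector_count))

          f = FileAllocation()
          f.append_extent(1000, 5781)   # sectors 1000-6780
          f.append_extent(8600, 565)    # sectors 8600-9164
          print(len(f.runs))            # 2 fragments so far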

          • The file system has to record which sectors the file is in anyway. It doesn't matter what the numbers are.
            • It matters how MANY different segments there are.

              I tell my programmers that there are three numbers useful for programming:

              Zero
              One
              As many as you want

              Which means, for any property or configuration it is generally best that:
              That property isn't allowed (zero)
              The item has exactly one of those
              It's a list - however many you want

              And lean toward "as many as you want" - if you think customers will want alerts sent to their email, some customer is going to want those alerts to go to two people, or three or four.

              Micro

              • by Kaenneth ( 82978 )

                Hard limits limit the impact of denial of service attacks. So no, you can't send an e-mail alert to the entire internet by stuffing mass addresses into the exchange server.

                • by Anonymous Coward
                  In the Microsoft case hard limits are encoded in software to drive people to pay for more expensive editions. Like SSAS (SQL Server Analysis Services), for example: SSAS Standard Edition can't use more than 16GB of RAM, even on a server with a couple of TB, so if you want to use more than 16GB you have to fork out heaps more dollars for the Enterprise Edition.
          • by Zuriel ( 1760072 )

            Except there's probably no correlation between logical sector numbers and physical storage, so by writing that entire file to a single contiguous set of logical sectors it's just going to be scattered across a different random bunch of flash blocks.

            The solution is to fix your filesystem so that it doesn't have a meltdown when there's more than an arbitrary amount of fragmentation. Or possibly throw details about the physical storage up to the OS level so that it isn't just randomly shuffling data around.

            • Again, this has nothing to do with the mapping to physical sectors.

              Or as I said originally:
              "Since the OS doesn't know about the re-mapping to physical sectors, ignore that for now. It doesn't matter for the remainder of this post."

              To understand the issue with the filesystem, forget about physical - the filesystem doesn't know anything about physical sectors. This is a filesystem issue. It has nothing to do with physical sectors. It has to do with the length of the sector allocation list, which allocates logic

              • Re: (Score:3, Insightful)

                by Knuckles ( 8964 )

                So fix the filesystem?

              • by Zuriel ( 1760072 ) on Thursday August 27, 2020 @04:49PM (#60447784)
                The disconnect between physical and logical sectors is what makes this a filesystem issue, so it's important not to gloss over that part. "Defragging" isn't actually defragmenting the data on an SSD, it's just working around filesystem shortcomings.
              • I think you are missing the point entirely. There is a layer of abstraction between the physical storage media (the flash device) and the OS. Middleware, as it were. The OS has no idea what it is doing and the one thing we know for sure is it isn't doing what the programmers seem to be assuming it is. Indeed different drives have different firmware. Even two drives of the same make and model may have different firmware versions that behave quite differently. The real fix here if there is one would be to increa
                • The reason it's defragging the FILESYSTEM is because the FILESYSTEM doesn't care about the physical transistors. The data structures in the FILESYSTEM are fragmented. That has absolutely nothing whatsoever to do with physical sectors. You could have the data stored on /dev/null and you'd still have the same fragmentation in the filesystem.

                  • Btw if you happen to have a Linux system handy, Linux shows the analogous data structure in /etc/lvm/backup or you can see it by running:
                    lvdisplay -m

                    Try creating several LVs, deleting a couple at random, then creating one that takes up the entire space, deleting that and making small ones, etc. If you run "lvdisplay -m" or "pvdisplay -m" in between you'll see your VOLUMES are getting fragmented. Which has jack to do with sectors.

            • Except there's probably no correlation between logical sector numbers and physical storage

              Except that is completely irrelevant since this is how the filesystems currently work.

              There are processes underway to improve this: Filesystems dedicated to SSD storage and controllers, OS level changes, and controllers that are able to communicate in a common language between the storage and the OS rather than converting the old "sector" based language into what the NAND flash understands. This will be necessary for the next level of speed increases.

              • That, and the fact that defragging the filesystem (not the physical drive) has absolutely nothing whatsoever to do with physical sectors. :)

                The reason to defrag a filesystem is so you can have this:

                ubuntu.iso
                logical blocks 846833-1026877
                fedora.iso
                logical blocks 2538393-57296629

                Instead of this:

                ubuntu.iso
                logical blocks 1-6
                logical blocks 100-212

            • Here is an extract from a directory entry:

              ubuntu.iso
              modified 2020-08-25 10:14
              logical blocks 846833-1026877
              fedora.iso
              modified 2020-08-25 08:28
              logical blocks 2538393-57296629

              Note the directory entry is logical blocks.
              There is no mention of physical blocks.

              Same directory entry, fragmented:

              ubuntu.iso

        • The blogger is quoting "developers on the Windows storage team", not simply proffering his own opinions.

          And you seem to have completely missed the discussion of TRIM / RETRIM for SSDs:

          Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peak resource usage, this queue may only grow to a maximum number of trim requests. If the queue is of max size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer.

        • You conveniently left out the part where the blogger contacted and then directly quoted developers from the Microsoft Windows storage team. That's the reason people are treating it as authoritative, not whatever his past career happened to be, nor his technical expertise.

          Unless you've got some more authoritative source, or are claiming the quotes are an outright fabrication, this seems about the best we have information-wise.

        • That doesn't make much sense

          Sigh. I suppose it doesn't make much sense to you that FAT32 can only store 4GB files, either. Maybe you should read up on filesystems, metadata and limitations. Also read up on how fragmentation affects how filesystems need to record the allocation of data in files.

          Things start to make sense if you put effort into understanding them.

      • There are legitimate reasons for running defrag on an SSD, they're just different reasons than for HDDs. Here's a blog post that provides some details:

        https://www.hanselman.com/blog/TheRealAndCompleteStoryDoesWindowsDefragmentYourSSD.aspx [hanselman.com]

        This blog seems to be from 5 years ago, and a lot has changed in 5 years.

        It does seem reasonable to me, but I am not current on this subject. (I just want to be...)

    • You don't defrag RAM, so why the hell would you defrag an SSD?!?!?!?!

      As I recall, when you run "defrag" it actually performs a TRIM (and says that is what it is doing).

    • by DRJlaw ( 946416 )

      SSDs are NOT rotating media that depend on data being sequentially recorded!

      You don't defrag RAM, so why the hell would you defrag an SSD?!?!?!?!

      Idiots...

      Well then it's a good thing that Microsoft doesn't do that. TFA took one legitimate bug -- that the "Optimize Drives" application isn't recording an updated "Last analyzed or optimized date" -- and extrapolated it to over-optimizing SSDs.

      The problem with TFA is that the "Optimize Drives" application doesn't automatically run optimizations on SSDs, ever

    • Re: (Score:3, Informative)

      by bobbied ( 2522392 )

      SSDs are NOT rotating media that depend on data being sequentially recorded!

      You don't defrag RAM, so why the hell would you defrag an SSD?!?!?!?!

      Idiots...

      Microsoft doesn't "defrag" SSDs, they run "trim" on them, which simply tells the drive which sectors are indeed empty and, even though they have been written, don't need to be maintained. I don't think it will even let you run a defrag process on an SSD anymore. Trim allows the wear leveling algorithms more latitude when managing which sectors to write to next, because they have better information about what sectors the drive needs to keep and what sectors contain data nobody needs anymore.

      This isn't some huge hairy dea
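      For illustration (my own toy model, not any real controller's firmware): before a controller can erase a block it has to copy out every page it still thinks is valid. Without TRIM, deleted-but-unreported pages look valid and get copied; with TRIM they can be skipped, which is exactly the extra latitude for wear leveling.

      # Toy view of one erase block. Each page is "live" (real file data), "stale"
      # (deleted by the filesystem, but the drive was never told), or "trimmed"
      # (the drive knows it is garbage).
      def pages_to_copy(block_pages):
          # Pages the controller must relocate before erasing the block.
          return [p for p, state in block_pages.items() if state in ("live", "stale")]

      block = {0: "live", 1: "stale", 2: "stale", 3: "live"}
      print(len(pages_to_copy(block)))           # 4 pages copied without TRIM information

      block = {0: "live", 1: "trimmed", 2: "trimmed", 3: "live"}
      print(len(pages_to_copy(block)))           # only 2 pages copied once TRIM has run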

      • Re: (Score:3, Informative)

        by EvilSS ( 557649 )
        Yep, it runs a trim operation, not a defrag. The article is a tempest in a teapot. You can run defrag on an SSD, but only from the command line. I just tried it for kicks and it is actually defragging my NVMe drive, not just doing a trim. Weird that they allow it, but I guess they figure if you go to the effort of opening an elevated command prompt and running it manually, you must have some damn reason for it, stupid or not.
        • Hmm.. After some thought, I did come up with a couple of reasons to defrag, but it's a pretty rare set of use cases...

          If you were going to repartition a drive it might be a good idea to consolidate the data in the lower part of the partition. Defrag will do a pretty good job of this for you. Also, if you are planning to make a byte-by-byte copy of your SSD onto a spinning drive, it might be nice... However, in both these cases, there are likely better ways to do this.

          Like you point out.. I suppose if

          • by EvilSS ( 557649 )
            Yea, turns out that there is a reason, and Windows does do it: If you have volume snapshots enabled, it will defrag once a month due to performance degradation as the SSD becomes fragmented.
            • Seriously.
              During a month, a drive, regardless of what tech, does not get fragmented.
              By what absurd usage pattern would that even be remotely possible?

            • If you were going to repartition a drive it might be a good idea to consolidate the data in the lower part of the partition.

            Not really, and in fact the opposite may be true. The idea of consolidation is somewhat misleading since SSDs are an array of chips, and put very simply it doesn't matter if two "sectors" from the point of view of the filesystem are on adjacent chips or not.

              If you were going to repartition a drive it might be a good idea to consolidate the data in the lower part of the partition.

              Not really, and in fact the opposite may be true. The idea of consolidation is somewhat misleading since SSDs are an array of chips, and put very simply it doesn't matter if two "sectors" from the point of view of the filesystem are on adjacent chips or not.

              I'm not sure you understand how SSDs work here. They internally remap blocks of data all over the place in an effort to "wear level" how often sectors get written to. However, this is transparent to the operating system. Sector 1 is always addressed as sector 1, even if the drive controller is remapping it now and then.

              When you are adjusting partitions on a hard drive, you are moving what sectors hold what partition's information. This is by sector from the operating system's view of the data, and if

      • Comment removed based on user account deletion
      • Microsoft doesn't "defrag" SSDs, they run "trim" on them,

        Not according to the developers on the Windows Storage team [hanselman.com]. It appears that NTFS has limits that require occasional defragmentation:

        "Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. Itâ(TM)s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hi

        • If you use shadow copy, I guess you need to do that. Who among us uses shadow copy on an SSD? Where I have in the past, it was for doing quick snapshot backups of persistence data for some applications on a server (just stop the application, snap the copy, start the application and backup from the shadow copy at your leisure) but that's about it.

          So again.. No huge whoop here.. Maybe a bit annoying, but nothing to get up in arms about... Nothing that justifies such an article on Slashdot with its mislead

    • by klashn ( 1323433 )
      Thank you for bringing up this point... I can always count on Slashdot! (I came here to say the same thing... defragging is not a thing on an SSD)
    • Article summary is stupid. Actual article may be stupid too, if it agrees with the summary.

      The Optimize Drives tool will TRIM solid-state drives and thin-provisioned virtual disks. It only defrags traditional HDs.

      It is perfectly safe to optimize frequently on SSDs, VMware virtual disks, and Hyper-V virtual disks. (I don't know if other hypervisors identify their disks properly, so I can't speak for them.)

      In fact, since frequent trimming allows the SSD to manage its wear leveling more effectively, this is

    • by vadim_t ( 324782 )

      This is pedantic, but in fact RAM fragmentation is a thing, and defragmenting it is also a thing.

      Especially in 32 bit systems, if you allocate a lot of RAM, you can end up with unusable holes that cut into your memory budget. Allocate 1 MB, allocate 512K, allocate 2MB, free the previous 512K. Now if you try to allocate 1MB, it'll be placed at the end, with the 512K hole in the middle being unusable.

      Enough of that kind of thing and you can actually run out of address space. It's possible for an application t
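      To see that scenario in numbers, here's a quick simulation (a toy first-fit allocator of my own, not how any real malloc is implemented): after freeing the 512K allocation, the later 1MB request can't fit in the hole, so the address space keeps growing.

      KB, MB = 1024, 1024 * 1024

      class AddressSpace:
          # First-fit allocator over a flat address range, for illustration only.
          def __init__(self):
              self.allocated = []   # sorted list of (start, size)
              self.end = 0          # high-water mark of the address space

          def alloc(self, size):
              cursor = 0
              for start, length in self.allocated:
                  if start - cursor >= size:   # found a hole big enough
                      break
                  cursor = start + length
              self.allocated.append((cursor, size))
              self.allocated.sort()
              self.end = max(self.end, cursor + size)
              return cursor

          def free(self, addr):
              self.allocated = [(s, l) for s, l in self.allocated if s != addr]

      mem = AddressSpace()
      a = mem.alloc(1 * MB)
      b = mem.alloc(512 * KB)
      c = mem.alloc(2 * MB)
      mem.free(b)                     # leaves a 512K hole between a and c
      d = mem.alloc(1 * MB)           # too big for the hole, so it goes after c
      print(mem.end)                  # 4.5 MB of address space for 4 MB of live data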

      • Now if you try to allocate 1MB, it'll be placed at the end, with the 512K hole in the middle being unusable.

        Windows never worked that way. If a new file request always looked for the first area on the disk big enough to hold the file without breaking it up, we wouldn't have needed to defrag. Instead, the file system will write the first part of the 1MB file into the 512K hole, then put the remainder into the next unused sector that it finds. Way back when, it was always suggested to put your OS files first on the HD because they rarely, if ever, needed to be rewritten. This meant the sectors with the lowest ad

        • by vadim_t ( 324782 )

          I'm not talking about the filesystem, but about RAM. As in the inner workings of malloc, which needs contiguous blocks of memory.

    • Actually, RAM fragmentation is a problem. Some patterns of heap allocation result in an application's data being scattered over widely separated addresses, which can slow down memory access.

      • You are confused, heap fragmentation does not equal RAM fragmentation. Two different concepts.

        • I am not sure what you were trying to say there. There is a load of storage, and it is allocated in lots of little bits. The storage is fragmented if the bits are scattered, so they are more difficult to access. In terms of RAM access speed, this relates to locality of reference and subsequent cache performance. I agree I may be misusing the term "fragmentation" to refer to that effect.

    • It doesn't defrag the SSD. It just calls TRIM.

      The tool is called Defragment and Optimize Drives.

      On an SSD there is no defragment option, just optimize, and that only calls TRIM.

    • by bobs666 ( 146801 )
      It's the QDOS file system from 50 years ago that even has fragments that need defragging. 1980s UNIX systems never had MS-style fragments, so defragging was never needed.
    • Comment removed based on user account deletion
    • by Sloppy ( 14984 )

      It probably makes sense for many filesystems. Suppose your filesystem uses extents, and your file is stored at blocks 1-3, 5-7, and 9-11. So you defrag, and now your file is stored in a single extent at blocks 10-18. Now you're using slightly less space for the extent data and it gets handled infinitesimally faster too.

      But if you're using FAT (or even worse, whatever the filesystem was called on Commodore floppy drives (*) ), the blocks used by a file are stored as a linked list so it wouldn't help.

      (*) That
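      To make the comparison concrete, here's a rough sketch (my own illustration, not the real on-disk formats): an extent-based filesystem describes the fragmented file above with three (start, length) runs and the defragged one with a single run, while a FAT-style chain stores one next-block pointer per block either way, so defragging doesn't shrink its metadata.

      def extents(blocks):
          # Collapse a sorted list of block numbers into (start, length) runs.
          runs = []
          for b in blocks:
              if runs and runs[-1][0] + runs[-1][1] == b:
                  runs[-1] = (runs[-1][0], runs[-1][1] + 1)
              else:
                  runs.append((b, 1))
          return runs

      def fat_chain(blocks):
          # FAT-style: every block records the number of the next block in the file.
          return {blocks[i]: blocks[i + 1] for i in range(len(blocks) - 1)}

      fragmented = [1, 2, 3, 5, 6, 7, 9, 10, 11]
      defragged = list(range(10, 19))

      print(len(extents(fragmented)), len(extents(defragged)))       # 3 runs vs 1 run
      print(len(fat_chain(fragmented)), len(fat_chain(defragged)))   # 8 entries either way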

      SSDs are NOT rotating media that depend on data being sequentially recorded!
      You don't defrag RAM, so why the hell would you defrag an SSD?!?!?!?!
      Idiots...

      Occam's Razor, what's more likely:
      a) Microsoft, who by and large competes well in the job market employing top talent and spends $19bn on R&D every year, decided to re-write their disk defragmentation tool to specifically identify Solid State Drives, apply a different algorithm to them and a different cadence for optimisation, despite missing the "obvious" thing that you don't need to defrag SSDs,

      or...

      b) They know more about this than you do, and set up the system like this for a reason (let the killin

  • by ITRambo ( 1467509 ) on Thursday August 27, 2020 @12:08PM (#60446998)
    It's amazing how Microsoft simply breaks so many little things when they put out feature updates. How can something that worked perfectly for over four years suddenly stop working and possibly cause excessive wear on an SSD? Microsoft quality has really gone downhill over the past couple of years. But, it's okay! They reshuffled the Windows deck chairs again a couple of weeks ago. It'll all be great again, someday, maybe. Nah, not likely. They still don't use real people to fully test, like they did pre-2015.
    • Microsoft quality has really gone downhill over the past couple of years.

      And that's saying something.

    • Yeah, also the nature of the bug: the last patch stops a tool that needs to know when it last ran from recording when it last ran? That seems like basic functionality that was missed in testing. "Hey, the last patch to our shopping cart software removed the ability for people to add things to the shopping cart." No biggie, just ship it.
    • by EvilSS ( 557649 )

      How can something that worked perfectly for over four years suddenly stop working and possibly cause excessive wear on an SSD?

      The answer is that it can't, that's how. Windows does NOT defrag SSDs, despite whatever the asshole who wrote that article wants to try to imply for clicks. When you open the Optimize Drives tool and optimize an SSD, it runs a trim, not a defrag. You can trim an SSD until the cows come home each day and not cause any wear. The entire article is garbage.

      • by EvilSS ( 557649 )
        So it turns out I'm the asshole and my reply is the garbage. Although the article author could have pointed out WHY Windows actually will defrag an SSD. Anyway, I found the original article and a blog from an MS engineer explaining it: https://www.bleepingcomputer.c... [bleepingcomputer.com]

        Also if you watch the video in the article, it does show the tool running a defrag, although if you run optimize directly from Optimize Drives instead of Security and Maintenance it will only run a TRIM, not a defrag.
  • by grep -v '.*' * ( 780312 ) on Thursday August 27, 2020 @12:16PM (#60447040)
    WOW -- it's like they should have caught that in testing before actually rolling it out.

    Oh, wait -- that's Windows, not Windows Enterprise. So I guess that they did, huh?

    And you know they meant TRIM and not defrag -- the editors are just lazy. (Or they're correct, in which case this really IS a doozie caught in public testing. You sure wouldn't roll that out to the people you care about ... aka the for-profit "people" that truly pay the bills.)
  • by Anonymous Coward

    Oh, they're fixing a bug that has to do with SSD Drives?

    When they're done fixing the bug with Solid State SSD Drives I hope they reward themselves by purchasing an All-Terrain ATV Vehicle, they're so much fun.

    If they don't have enough cash in their wallet, they can go to the nearest Automatic ATM Machine and make a withdrawal by entering their Personal PIN Number.

    • Whoa, slow your RPMs there, buddy!

      (Although in the case of SSD, I think people might get a bit touchy if you start referring to them as "SS Drives".)

  • by Anonymous Coward

    Why is Linux so far behind Microsoft that there is no defrag GUI and many distros don't even install the command-line defrag tool by default? Shame on you Mr. Torvalds!

    • Because Linux filesystems don't normally need to be defragged. You're running the garbage ext3 or 4? That's your problem.

  • After reading the article (I know...I'll turn in my Slashdot membership), this seems to be an issue with the copy-on-write mechanism used by Volume Snapshot which manages the snapshots created for System Restore. ZFS also uses copy-on-write for managing volume snapshots so is there anyone here well-versed in how ZFS operates to know if it also needs to perform some kind of defragmentation on SSDs in order to properly manage its snapshots? Everything I've read seems to indicate that there is no ZFS defrag
  • by EvilSS ( 557649 ) on Thursday August 27, 2020 @01:09PM (#60447230)
    So today is one time I wish there was an edit button on here. Did some digging and yea... Woops. Anyway, despite my prior replies, it turns out Windows WILL defrag a SSD under a specific circumstance:

    Actually Scott and Vadim are both wrong. Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

    As far as Retrim is concerned, this command should run on the schedule specified in the dfrgui UI. Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peak resource usage, this queue may only grow to a maximum number of trim requests. If the queue is of max size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer. The Retrim is done at a granularity that should avoid hitting the maximum TRIM request queue size where TRIMs are dropped.

    There is a good technical run down here: https://www.hanselman.com/blog... [hanselman.com]

    and the BleepingComputer article on the bug when it was found lays out some of the info from that article (which really should have been reiterated in TFA as well): https://www.bleepingcomputer.c... [bleepingcomputer.com]

    • I'd classify all those third-party blog sites as clickbait. Why not get the information from the engineers who wrote the damn thing rather than rumourmongers who live by click and trace?

  • So what is the real-world danger?
    Will Win10 notify the user when it REALLY needs to optimize, or notify of a file error?
    Seems like adding wear to the SSD shortens its useful life for folks who keep the same hardware for long, long periods, which environmentalists encourage folks to do to minimize waste.

  • I don't get this. Manufacturers are saying that defragging isn't necessary at all. SSDs are random access, so it doesn't matter where the data is on the device. I'm an old phart and I had a hard time getting used to this after decades of hard drives, but it appears to be true. Defragging an SSD does nothing but reduce its lifespan.

    So why is Microsoft doing it at all? Planned obsolescence?

    • So why is Microsoft doing it at all?

      Why ask questions which have been answered some 10 times before you even posted?

  • Use a modern file system -- like all those provided with Linux -- and avoid all this archaic defragging nonsense. It's just one more irritating feature of Microsoft operating systems.

    • So I suppose you can provide a link on how to natively install Windows on these better filesystems? Or are you under the delusion ... sorry, wrong word ... completely deranged enough to think that anyone on this entire planet even remotely includes the presence of a background optimisation task that runs once a month in the criteria for choosing an OS?

      irritating feature

      For something that has zero user impact whatsoever, you may want to either read up on the word irritating in the dictionary or take a serious look at your life, wh

  • by rossz ( 67331 )

    Why the hell are they defragging an SSD? That is not only unnecessary, it reduces the lifespan of the drive.

    Is there an option to turn that shit off?

  • Windows 10 Enterprise LTSC 2019 here. I must be missing something. When I go to 'Optimize' my SSD Windows says it's 'Trimming' NOT 'Defragmenting'(?) I've never seen my OS defragment my SSD. Trimming is useful in preventing the SSD from having to perform extra erase functions on cells before performing a write function. Was this a bug introduced in a new version of Windows 10?
  • because it feels good.

    then again, i like watching file copy progress bars.
  • It even introduced an assortment of pretty obscure bugs that aren't really acknowledged anywhere. Like now my login/lock screen blanks every 30 seconds or so, just turns black for a second and no input is recorded. It's reeeally annoying when trying to enter passwords. Also it happens every time the clock updates. Just plain weird, and there are no solutions anywhere.
