Windows Memory Manager To Introduce Compression

jones_supa writes: Even though the RTM version of Windows 10 is already out the door, Microsoft will keep releasing beta builds of the operating system to Windows Insiders. The first one will be build 10525, which introduces some color personalization options, but also interesting improvements to memory management. A new concept, the compression store, is an in-memory collection of compressed pages: when memory pressure gets high enough, stale pages will be compressed instead of swapped out. The compression store will live in the System process's working set. As usual, Microsoft will be receiving comments on the new features via the Feedback app.
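
To illustrate the general idea, here is a toy sketch in C of a compression store: under memory pressure, a stale page is compressed into an in-RAM store, and decompressed again on a soft fault. This is purely illustrative (made-up names, zlib standing in for whatever Microsoft actually uses), not Windows internals. Build with: cc store.c -lz

    /* Toy compression store: purely illustrative, not Windows internals. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    #define PAGE_SIZE 4096

    struct stored_page {           /* one entry in the compression store */
        unsigned char *data;       /* compressed bytes */
        uLongf len;
    };

    /* Under memory pressure: compress a stale page into the store
     * instead of writing it out to the pagefile. */
    static struct stored_page store_page(const unsigned char *page) {
        struct stored_page s;
        uLongf cap = compressBound(PAGE_SIZE);
        s.data = malloc(cap);
        s.len = cap;
        compress(s.data, &s.len, page, PAGE_SIZE);
        s.data = realloc(s.data, s.len);    /* keep only what's needed */
        return s;
    }

    /* On a soft page fault: decompress back into a real page. */
    static void load_page(const struct stored_page *s, unsigned char *page) {
        uLongf len = PAGE_SIZE;
        uncompress(page, &len, s->data, s->len);
    }

    int main(void) {
        unsigned char page[PAGE_SIZE], back[PAGE_SIZE];
        memset(page, 'A', PAGE_SIZE);       /* a highly compressible page */

        struct stored_page s = store_page(page);
        printf("%d -> %lu bytes in the store\n", PAGE_SIZE, (unsigned long)s.len);

        load_page(&s, back);
        printf("round trip %s\n", memcmp(page, back, PAGE_SIZE) ? "failed" : "ok");
        free(s.data);
        return 0;
    }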
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Tuesday August 18, 2015 @03:40PM (#50342359) Homepage Journal

    Welcome to 2014 [wikipedia.org]

    • by SoftwareArtist ( 1472499 ) on Tuesday August 18, 2015 @03:42PM (#50342389)

      You mean Welcome to 1990 [thefreedictionary.com]. Everything old is new again.

      • Yeah, I had RAM Doubler for Macintosh, too. But this is actually included with the OS. The sibling commenter pointing out that OSX wins by a year wins that competition, though.

        I was actually imagining that some crusty old fart would crop up to tell us you could do it in VMS or something but so far nope

        • by lgw ( 121541 )

          We did it on our mainframe platform, but that was mid-90s, so others were first.

        • by shess ( 31691 )

          Yeah, I had RAM Doubler for Macintosh, too. But this is actually included with the OS. The sibling commenter pointing out that OSX wins by a year wins that competition, though.

          I was actually imagining that some crusty old fart would crop up to tell us you could do it in VMS or something but so far nope

          NeXTSTEP 3 (I think) had this in the early '90s. Back then, the rationale was that compressing pages on the way to disk reduced I/O load.

          Like: http://www.nextcomputers.org/N... [nextcomputers.org]

        • There are two big reasons for the "Hell no". First, in the '70s and '80s computers were all about money. If you could afford it, you did it. If you did not have enough memory, you either re-wrote or paid for more. I'm sure we could have reserved some core to hold the LZ libraries, but that would consume more space than any piece of code we ran. Space conservation would be the second reason.

          I'd say the same for the first *Nix systems as well. If you had to worry about compressing in memory you were doing

          • by Cyberax ( 705495 )
            Oh, please. Simple compression/decompression code fits into 1 KB, with another KB or so for the dictionary and decompression buffers. So with a typical 50% compression ratio, the break-even point is around 8 KB. By the mid-'80s, computers had around 1 MB of RAM.
            • by murdocj ( 543661 )

              Wasn't the Mac limited to 128K because that was all anyone would ever need? For that matter, most people were running DOS on PCs, and that had the 640K limit.

              • Wasn't the Mac limited to 128K because that was all anyone would ever need?

                Lisa ("Mac XL", essentially a prototype Mac): 512K-1MB, 2x HDD expansion only.

                Original Mac: 128K, expandable to 512K by soldering, no expansion bus. Followed by a 512K version with no other changes.

                All Macs thereafter: 1+ MB, expansion bus

      • by Cafe Alpha ( 891670 ) on Tuesday August 18, 2015 @04:52PM (#50342955) Journal

        I vaguely remember that it (or a similar product) was analyzed and actually did nothing.

        There was code in it, but all that code was bypassed. One imagines that the programmer couldn't get it working but had to ship something, and his bosses couldn't actually tell whether the driver DID anything.

        • by Karlt1 ( 231423 ) on Tuesday August 18, 2015 @04:59PM (#50343007)

          The Macintosh version actually did a few things. Mostly to help alleviate Classic Mac OS's piss poor memory management where you had to pre-allocate a contiguous chunk of memory to each process -- manually.

          • The Macintosh version actually did a few things. Mostly to help alleviate Classic Mac OS's piss poor memory management where you had to pre-allocate a contiguous chunk of memory to each process -- manually.

            this is a necessary step on any hardware that doesn't have virtual memory, regardless of operating system

            • Yeah but the problem was that classic MacOS did have virtual memory as of System 7.

            • by DrYak ( 748999 ) on Tuesday August 18, 2015 @06:42PM (#50343507) Homepage

              this is a necessary step on any hardware that doesn't have virtual memory, regardless of operating system

              That doesn't have virtual memory AND uses a flat memory model (i.e., a single huge contiguous address space).
              If the OS needs to move memory around (paging, etc.), the only solution is to change every pointer that points into the moved region; hence the complicated handles-and-pointers scheme on the Mac's Classic System, on 68k PalmOS, etc.

              Meanwhile, the PC's 286 also lacked virtual memory (that only came later, with the 386), but used (and abused) protected mode's segmented memory as a "poor man's virtual memory".
              Protected-mode memory was accessed through a segment: a "handle" pointing to where the chunk actually sits in memory. (A bit more complex than the real-mode segments of the 8088/8086, which were just spaced 16 bytes apart.)
              The software doesn't know much; it just uses the handle it was assigned. If the OS needs to move memory around, it simply maps the segment to a different address. The software doesn't notice and keeps using the same handle as before.

              I'm not saying the 286 architecture was better, just explaining a bit why Intel chose to stick with segments in protected mode.
              (In fact the 68k architecture was better: being a 32/16-bit hybrid, it could handle pointers to any position in a flat memory space, whereas the 286 was pure 16-bit and required a mumbo-jumbo of segments to address anything bigger than 64K.)
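
              For the curious, the double-indirection trick looks roughly like this in C. Toy code with made-up names, not the actual Mac Toolbox API or the 286's hardware mechanism; it just shows why the app's handle survives a move:

                /* Toy handle-based memory: the app holds a Handle (a pointer
                 * into a master table); the "OS" may move the block and only
                 * update the master pointer. Made-up names, not a real API. */
                #include <stdio.h>
                #include <stdlib.h>
                #include <string.h>

                typedef void **Handle;

                static void *master[16];          /* OS-owned master pointers */

                static Handle new_handle(size_t n) {
                    for (int i = 0; i < 16; i++)
                        if (!master[i]) {
                            master[i] = malloc(n);
                            return &master[i];    /* the app keeps this forever */
                        }
                    return NULL;
                }

                /* "Compaction": the block moves, every handle stays valid. */
                static void move_block(Handle h, size_t n) {
                    void *moved = malloc(n);
                    memcpy(moved, *h, n);
                    free(*h);
                    *h = moved;
                }

                int main(void) {
                    Handle h = new_handle(32);
                    strcpy(*h, "hello");          /* dereference twice to reach data */
                    move_block(h, 32);            /* the OS relocates the block... */
                    printf("%s\n", (char *)*h);   /* ...and the handle still works */
                    return 0;
                }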

            • by sodul ( 833177 )

              Why do I have to do that with Java as well?

            • this is a necessary step on any hardware that doesn't have virtual memory, regardless of operating system

              What's preventing applications from allocating non-contiguous blocks of memory?

        • Yeah, SoftRAM [wikipedia.org] was sued and found liable because it did nothing (worse, it slowed the system down). Other products did at least try, but the increase in apparent RAM came at a great performance cost, which sort of defeats the point.

      • Re: (Score:3, Interesting)

        by Ravaldy ( 2621787 )

        Guys, guys, guys, guys!!!

        Come on. Can't companies list new features without being called out on it?

        It's silly to dismiss completely different implementations of the same concept as "DONE BEFORE!". Compression is old and has been, can be, and will be used for many different strategies in the future.

        New uses for old concepts are an ongoing thing and should not be regarded as unoriginal. By those standards, flight was never a big achievement, since birds have been flying for millions of years.

        • This isn't a new use for an old concept. It's precisely the same use implemented in essentially the same way: modify the virtual memory system so pages get kept in memory in compressed form, rather than being written out to disk.

          I'm not saying it's not a good idea, or that Microsoft shouldn't be doing it. But they're one of the last to arrive at the party. OS X and Linux both already have this feature, and it's been available through third party products for decades.

      • You are joking, right? RAM Doubler was a scam. Machines were not fast enough to compress on the fly.

      • You mean Welcome to 1990. Everything old is new again.

        As well as it should be. We now have fast multicore CPUs that (should) have the spare capacity to handle such background tasks without degradation of performance.

      • by higuita ( 129722 )

        Do you know that RAM Doubler was fake? It just increased the reported size, and reverse engineering showed that it didn't even have any compression code! :)

    • OSX in 2013. (Score:5, Informative)

      by Henriok ( 6762 ) on Tuesday August 18, 2015 @03:46PM (#50342421)
      Welcome to 2013! [arstechnica.com] as that was when compressed memory was introduced in OS X.
      • Welcome to 2012! [gmane.org] as that was when compressed memory was introduced in Linux.

        • Re:OSX in 2013. (Score:5, Insightful)

          by Lothsahn ( 221388 ) <Lothsahn@@@SPAM_ ... tardsgooglmailcm> on Tuesday August 18, 2015 @04:32PM (#50342789)
          Awesome! I didn't even know this was in Linux. This would be really useful on my desktop downstairs!

          ...proceeds to Google "zswap linux ubuntu"
          http://askubuntu.com/questions/361320/how-can-i-enable-zswap

          Oh, so it's not enabled by default in my distro?

          According to the kernel documentation, zswap can be enabled by setting zswap.enabled=1 at boot time. Zswap is still an experimental technology.

          Oh, great, it's experimental.

          It has been enabled and disabled at various times throughout release cycles. – Ken Sharp

          Wonderful! If I turn it on, it may suddenly turn itself off when I get a kernel update for 14.04.

          You know, I often hear "Linux already has that", but then it doesn't work right, isn't enabled by default on basically any distro, or is configured such that 99% of Linux users aren't using it. Saying you have something when it's experimental, not enabled by default, gets enabled and disabled across updates, and isn't easily available to the vast majority of your users is silly.
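
          For what it's worth, you can at least check whether zswap is actually active by reading its sysfs knob; a quick C sketch, assuming the usual /sys/module layout:

            /* Print zswap status; path as documented in the kernel tree,
             * adjust if your distro differs. */
            #include <stdio.h>

            int main(void) {
                const char *p = "/sys/module/zswap/parameters/enabled";
                FILE *f = fopen(p, "r");
                if (!f) {                    /* no zswap in this kernel, or no sysfs */
                    perror(p);
                    return 1;
                }
                printf("zswap enabled: %c\n", fgetc(f));   /* 'Y' or 'N' */
                fclose(f);
                return 0;
            }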
          • ...proceeds to Google "zswap linux ubuntu"

            No. What you want is zram [wikipedia.org], not zswap.

            zram tries to compress pages in RAM, without swapping them to disk. I've only recently enabled this on one of my Debian Jessie boxes (an Intel Core 4 Duo with a motherboard that has a weird memory configuration which in practical terms limits it to 4GB of RAM); however, my experience with the equivalent subsystem on OS X has been fantastic. Pages may still later be swapped to disk, but on OS X at least the system aims for a 2:1 compression ratio, holding successfully co

            • by MSG ( 12810 )

              What you want is zram, not zswap. ... zswap is about compressing the swap file

              Not according to the documentation.

              https://www.kernel.org/doc/Doc... [kernel.org]

              [Zswap] takes pages that are in the process of being swapped out and attempts to compress them into a dynamically allocated RAM-based memory pool.

            • More precisely (Score:4, Informative)

              by DrYak ( 748999 ) on Tuesday August 18, 2015 @07:02PM (#50343595) Homepage

              To be more precise:

              - ZRAM creates a compressed block device, a bit like a regular ramdisk except that it's compressed with LZO on the fly.
              It can be used for anything that a block device can be.
              Traditionally that has been compressed in-memory swap, but it could be used for anything else (you could put a temporary file system on it).
              Swap-on-ZRAM effectively doubles the amount of RAM: allocate 256 MiB for ZRAM and get probably ~512 MiB of swap on it, i.e. you can hold an extra 256 MiB in RAM.

              The drawback is that the swap subsystem has no concept of ZRAM and can't intelligently fall back to the hard disk. You just get some swap on ZRAM and some on the HDD, and all the swap areas are filled according to their priority.
              Thus you can end up with poorly compressible data on ZRAM, or with older, seldom-used data on ZRAM while more heavily used data gets swapped to the HDD.

              - Zswap: puts an extra compression stage in the swap path between RAM and disk. Instead of swapping pages straight out to disk-based swap, swapped-out pages are first compressed into an in-RAM compressed store; once that store is full, the least-used compressed pages are sent to disk. As the swap system is fully aware of this (it's an actual extra layer in it), it will correctly elect to write the least recently used part of the compressed store to disk.

              Another advantage is that Zswap can use any compression algorithm supported by the kernel. That includes LZ4, which is blindingly fast, and swapping is usually I/O-bound anyway.
              That means the CPU load doesn't suffer much, and in fact swap performance improves thanks to the saved bandwidth.

              - Zcache: like Zswap, but instead of being an extra layer only inside the swap mechanism, Zcache can add a similar intermediate store to other subsystems too (the file cache, etc.).
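
              If anyone wants to try swap-on-ZRAM, here's a rough sketch as a C program. It assumes the zram module is already loaded with one device (modprobe zram), needs root, and shells out for mkswap; treat it as an illustration of the steps, not a polished tool:

                /* Rough sketch: put a 256 MiB swap area on /dev/zram0.
                 * Assumes modprobe zram was already done; run as root. */
                #include <stdio.h>
                #include <stdlib.h>
                #include <sys/swap.h>

                int main(void) {
                    FILE *f = fopen("/sys/block/zram0/disksize", "w");
                    if (!f) { perror("zram0 disksize"); return 1; }
                    fprintf(f, "%llu", 256ULL << 20);   /* uncompressed capacity */
                    fclose(f);

                    if (system("mkswap /dev/zram0"))    /* write the swap signature */
                        return 1;
                    if (swapon("/dev/zram0", 0)) {      /* enable it */
                        perror("swapon");
                        return 1;
                    }
                    puts("swap-on-zram active; check /proc/swaps");
                    return 0;
                }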

          • Some Android devices ship with zram enabled. It may not be easy for you to use, but it is usable.
          • Re:OSX in 2013. (Score:5, Informative)

            by MSG ( 12810 ) on Tuesday August 18, 2015 @06:18PM (#50343383)

            That document is several years old now.

            Oh, so it's not enabled by default in my distro?

            It appears to be enabled currently in Ubuntu, Fedora, RHEL, and CentOS.

            Oh, great, it's experimental.

            It was marked experimental in 2013. In the context of a discussion about a feature that hasn't even been introduced in Windows, it's fair to note that Linux developers have been working on such a feature, and made it generally available several years earlier.

            Wonderful! If I turn it on, it may suddenly turn itself off when I get a kernel update for 14.04.

            It was disabled in Ubuntu while they tried to diagnose instability in a PPC kernel. The feature was not related to the instability.

            If you don't like Ubuntu's method of kernel maintenance, by all means, use a different distribution. However, the practices of one company should not be considered a defect in *Linux*.

            Saying you have something when it's experimental, not enabled by default, gets enabled and disabled across updates, and isn't easily available to the vast majority of your users is silly.

            It would be, perhaps, but you have all of your facts wrong.

            • MSG:

              Thanks for the additional information. None of this is readily available in the first links for Ubuntu, zswap, or Linux, and the items I quoted are either current documentation or statements from 6 months ago, so I expected them to be accurate. In addition, the current kernel documentation for zswap STILL lists it as experimental:
              https://www.kernel.org/doc/Doc... [kernel.org]

              That said, given this info, many of my earlier points were incorrect. I just enabled it for my downstairs desktop. It's still not
    • Welcome to 1996 [wikipedia.org].

    • by hvdh ( 1447205 )

      zRAM's previous name was compcache, and it has been available for Linux since 2008.
      https://code.google.com/p/comp... [google.com]
      In 2014, zRAM simply became part of the mainline Linux kernel tree.

  • by sillivalley ( 411349 ) <sillivalley@nospaM.comcast.net> on Tuesday August 18, 2015 @03:49PM (#50342461)
    Gee, an Apple product did this in the '90s, compressing memory segments assigned to processes not currently executing.
    (See, for example, https://www.usenix.org/legacy/... [usenix.org].)

    The same product was Apple's first to use pre-emptive multitasking.

    The product? Newton.
    • Gee, an Apple product did this in the '90s, compressing memory segments assigned to processes not currently executing.

      So did a Microsoft product called DoubleMem. This is really old tech, and Microsoft has even done it before. They even got in legal trouble over it, since they stole the code from the original creators (no, not Apple), the makers of Stacker.

  • Doesn't this make ECC memory even more necessary?
    Since compression is the process of removing redundant information, any bit flip could kill an entire compressed unit.
    • by Jeremi ( 14640 )

      A single bit flip can have catastrophic results without compression too, if it's the wrong bit.

      • Yes, but it can do more damage to compressed memory. Let's say your compressed memory says "repeat the number 126 seven times". Now that value 126 gets corrupted and becomes, say, 94. When the memory is decompressed, you get 94 seven times: the error is expanded sevenfold.
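
        A toy run-length decoder makes the amplification concrete (illustrative C only, not any real pager's format):

          /* One flipped bit in the compressed form corrupts seven
           * output bytes once the run is expanded. */
          #include <stdio.h>

          /* decode (count, value) pairs; returns bytes written */
          static int rle_decode(const unsigned char *in, int pairs,
                                unsigned char *out) {
              int n = 0;
              for (int i = 0; i < pairs; i++)
                  for (int r = 0; r < in[2 * i]; r++)
                      out[n++] = in[2 * i + 1];
              return n;
          }

          int main(void) {
              unsigned char comp[] = { 7, 126 };  /* "repeat 126 seven times" */
              unsigned char out[64];

              int n = rle_decode(comp, 1, out);
              printf("clean:   ");
              for (int i = 0; i < n; i++) printf("%d ", out[i]);

              comp[1] ^= 0x20;                    /* one bit flip: 126 -> 94 */
              n = rle_decode(comp, 1, out);
              printf("\nflipped: ");
              for (int i = 0; i < n; i++) printf("%d ", out[i]);
              putchar('\n');
              return 0;
          }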
  • by 140Mandak262Jamuna ( 970587 ) on Tuesday August 18, 2015 @04:01PM (#50342561) Journal

    , which introduces some color personalization options, ...

    You no longer have to put up with the plain blue screen of death. Now you have the option of speckled, sparkled, opalescent, translucent, scintillating, coruscant, fluorescent, effervescent blue screens of death.

  • SoftRAM *shudders* (Score:4, Informative)

    by Anonymous Coward on Tuesday August 18, 2015 @04:06PM (#50342587)

    This seems eerily similar to what SoftRAM was trying to do in the mid-'90s. Anyone remember this? "Double Your Memory!" was its claim, and in fact the tagline on the box cover. This was back when RAM cost a fortune and everyone needed more than they had in order to run Windows 95. The company made a killing... at first.

    https://en.wikipedia.org/wiki/SoftRAM

    I actually worked for them, and I saw the whole thing happen from start to finish. It was quite a wild ride. Mark Russinovich and Andrew Schulman took particular offense at the software and set about publicly dissecting it, working feverishly to prove that it didn't work. They thought the whole thing was a scam. I personally witnessed tests that indicated it was doing exactly what it said it did; however, it was difficult to prove any worthwhile effect under realistic working conditions. The primary problem seemed to be that the program needed to reserve a chunk of memory to do its thing, and then had to make intelligent decisions about what to put in there. If it guessed wrong (i.e., it compressed something that the user was going to close anyway, and the user opened a new program instead of retrieving the compressed one), the memory was wasted and overall performance (of opening the new application) was diminished. The reduction in overall memory at the outset may have been putting a strain on the system which the codec was unable to outperform. To aggravate things, the software also performed a few well-documented registry tricks to optimize the pagefile settings, which led critics to claim that this was indeed all it was doing.

    The proof I saw: for example, if you made a spreadsheet with millions of 1s in each cell, then made a cell calculating the total of all the cells, with SoftRAM the calculation would take a quarter of a second. Without SoftRAM, a ton of the data got swapped to disk and the calculation took like 30 seconds. However, as soon as you put realistic data into the spreadsheet, the improvement basically disappeared, because the data wasn't compressible enough with the algorithms they were using. They actually hired a very famous compression expert at the time, who liked to talk a lot and bill them at something like $350/hour, and it didn't seem to help at all.

    Eventually the company lost a class action suit and had to refund millions back to customers. They were never able to recover, despite using their wealth to acquire and improve various products. A few of the products they put out were good, like the Mac RAM management tool (though it pre-existed, and really, the company ruined its design and marketing); others (like BigDisk, which faked your system into believing multiple disks were one volume) had problems and could be extremely dangerous if used incorrectly.

    Ahh, good times.

    • by GerbilSoft ( 761537 ) on Tuesday August 18, 2015 @04:23PM (#50342707)
      SoftRAM's problem was that it didn't actually do what it claimed to [drdobbs.com]. It adjusted some parameters that improved swapping performance on Windows 3.1, but on Windows 95 it was effectively a nop, and could actually cause problems due to non-reentrant code.
    • This seems eerily similar to what SoftRAM was trying to do in the mid-'90s.

      My question is, with mid-level machines coming with 16 GB of RAM, why would I need compression at all? What the hell is Windows doing that it needs more than 16 GB? Can't the NSA write more efficient spyware?

      • My question is, with mid-level machines coming with 16 GB of RAM, why would I need compression at all?

        Because not all machines are mid-level. With a lot of smaller machines, especially phones, tablets, and detachable laptops, the 1-2 GB that comes soldered on when you buy it is all you get.

    • by gl4ss ( 559668 )

      if you actually worked there, how come you don't know that third parties decompiled it and saw that the released binary did jack shit of the sort? it just made the swap bigger, something that could be done without it.

      I mean, the program was supposed to compress RAM, but nobody could prove that it did; what could be proven was that it did nothing of the sort.

      it's possible that you were witnessing different software than what they actually shipped. but it might be that the actual software they shipped was fas

  • My concern with any memory management strategy under Windows is that even the current, disk-based virtual memory system is horrible at determining the "memory pressure" statistic. Under Windows 7, when I have a memory-intensive operation running, I'll hear the disk grinding away paging the whole time, while the system monitor shows physical memory usage at 60%. Even if the other 40% is disk cache, I'm pretty sure the foreground process should take precedence.

    The other frustrating scenario is in sleep mode:

    • by Nkwe ( 604125 )

      My suspicion there is a feature which gets the machine hibernated while sleeping, to recover in the case of a power outage. The feature pretty much kills the usefulness of sleep, though, if every wake is a wake from hibernate.

      Assuming your machine is configured properly: when you sleep, as you suspected, memory is written out to disk as insurance in case power is lost. When you come out of sleep (assuming you didn't lose power), Windows resumes from sleep without reading everything back in from disk. If you did lose power, Windows resumes from hibernate and reads memory back in from disk.

  • As usual, Microsoft will be receiving comments on the new features via the Feedback app.

    After our offices relocated, we started having strange, unexplained auto-reboots of Windows 7 systems. Seemingly random: different machines on different days, and it didn't matter whether overnight jobs were running or not. But every other day one machine would have rebooted overnight. It took enormous amounts of digging, but the clue was that it was always between midnight and 12:30 AM. We finally localized it to a service called "Windows Experience". Apparently it was introduced when Vista came along to pop u

  • The first one will be build 10525, which introduces some color personalization options

    Will I finally be able to have active/inactive windows coloured differently enough that I can tell which is which at a glance? That's been missing since Vista (unless you're willing to disable Aero).

  • by PopeRatzo ( 965947 ) on Tuesday August 18, 2015 @05:16PM (#50343113) Journal

    Does this mean I have to put HIMEM.SYS and EMM386.EXE back into my config.sys file? I think I still remember some of the MS-DOS edit commands.

  • What will this do to mitigate buffer overflows, stack exploits, and other memory management bugs?
  • by Theovon ( 109752 ) on Tuesday August 18, 2015 @07:52PM (#50343827)

    I have a Mac and have therefore had compressed swap for some time now. Theoretically, it's much faster than plain swap, even if you have an SSD. But there's a tradeoff. When swapping, the disk is busy but the CPU is free to do other work, although things bog down a lot when thrashing happens. When doing compressed swap, the memory manager hogs the CPU, which means it's not free to run other programs, and the system slows down. And thrashing still happens; it's just that your laptop heats up more when it's happening, and things don't get any less sluggish.

    Of course, the biggest problem is Safari. I'll get Safari Web Content processes taking up 10 GB or more. There's obviously some kind of runaway memory leak going on. Whenever my system bogs down, it's Safari that's taking up too much RAM. Quit Safari, and the system becomes responsive again.
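
    If you want to gauge the CPU side of that compression tradeoff yourself, here's a crude micro-benchmark; a sketch only, using zlib and a synthetic half-compressible page rather than anything Apple or Microsoft actually ships. Build with: cc bench.c -lz

      /* Crude micro-benchmark: how long does it take to compress one
       * 4 KiB "page" with zlib? */
      #include <stdio.h>
      #include <time.h>
      #include <zlib.h>

      int main(void) {
          unsigned char page[4096], out[8192];
          /* half-compressible fake page: repeated text + counter bytes */
          for (int i = 0; i < 4096; i++)
              page[i] = (i % 2) ? "lorem ipsum"[i % 11] : (unsigned char)i;

          struct timespec t0, t1;
          int iters = 1000;
          uLongf outlen = 0;

          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (int i = 0; i < iters; i++) {
              outlen = sizeof(out);
              compress(out, &outlen, page, sizeof(page));  /* zlib one-shot */
          }
          clock_gettime(CLOCK_MONOTONIC, &t1);

          double us = ((t1.tv_sec - t0.tv_sec) * 1e9 +
                       (t1.tv_nsec - t0.tv_nsec)) / 1e3 / iters;
          printf("%.1f us/page, 4096 -> %lu bytes\n", us, (unsigned long)outlen);
          return 0;
      }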

    • With 4 or more cores in every computer it's pretty rare for the CPU to be a bottleneck these days. In fact it's been rare for the CPU to be a bottleneck for the last 20 years.

  • For a beta version, it would need to be at least "feature complete".
