Windows 8 To Reduce Memory Footprint

bheer writes "Microsoft's Windows 8 blog has a good post about the work being done to reduce Windows 8's memory footprint. The OS will use multiple approaches to do this, including combining RAM pages, re-architecting old bits of code and adding new APIs for more granular memory management. Interestingly, it will also let services start on a trigger and stop when needed instead of running all the time."
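For readers curious what "start on a trigger" looks like in practice: Windows already exposes a service trigger API (ChangeServiceConfig2 with SERVICE_CONFIG_TRIGGER_INFO, introduced in Windows 7), and the post suggests Windows 8 leans on it more heavily. A minimal C sketch follows; the service name "MySvc" is made up, and the GUID literal is the documented "first IP address available" subtype, which you should double-check against your SDK headers.

    #include <windows.h>
    #include <winsvc.h>

    /* Sketch: reconfigure an existing service so it starts when the machine gets
       its first IP address, instead of starting at boot and sitting idle. */
    int configure_trigger_start(void)
    {
        /* Subtype GUID for "first IP address available"
           (NETWORK_MANAGER_FIRST_IP_ADDRESS_ARRIVAL_GUID); value copied here for
           self-containment - verify against winsvc.h. */
        GUID firstIpArrival = { 0x4f27f2de, 0x14e2, 0x430b,
            { 0xa5, 0x49, 0x7c, 0xd4, 0x8c, 0xbc, 0x82, 0x45 } };

        SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CONNECT);
        if (!scm)
            return (int)GetLastError();
        SC_HANDLE svc = OpenServiceW(scm, L"MySvc", SERVICE_CHANGE_CONFIG);
        if (!svc) {
            CloseServiceHandle(scm);
            return (int)GetLastError();
        }

        SERVICE_TRIGGER trigger = {0};
        trigger.dwTriggerType   = SERVICE_TRIGGER_TYPE_IP_ADDRESS_AVAILABILITY;
        trigger.dwAction        = SERVICE_TRIGGER_ACTION_SERVICE_START;
        trigger.pTriggerSubtype = &firstIpArrival;

        SERVICE_TRIGGER_INFO info = {0};
        info.cTriggers = 1;
        info.pTriggers = &trigger;

        BOOL ok = ChangeServiceConfig2W(svc, SERVICE_CONFIG_TRIGGER_INFO, &info);

        CloseServiceHandle(svc);
        CloseServiceHandle(scm);
        return ok ? 0 : (int)GetLastError();
    }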
  • Hope so... (Score:5, Interesting)

    by IANAAC ( 692242 ) on Saturday October 08, 2011 @09:43AM (#37647436)
    It's been a long time since I've dealt with Windows other than XP in a VM, and even that is rare.

    My old Asus netbook recently died, so I was forced to go out and buy another. I bought an Aspire One loaded with W7S. I really wanted to like W7. Really. I liked the interface. But damn, it was really slow and memory-hungry. With no programs running, it was using about 560-580 MB of memory, compared to 260-270 MB for Ubuntu 11.04 in the same idle state.

    I really couldn't have more than two programs running in W7 without hitting 900 MB of memory use. Granted, they were big programs - Thunderbird and Firefox, both latest versions. But contrast that with Ubuntu, where I ran TB, FF, Pidgin, Hotot, Tomboy, LibreOffice and Rhythmbox all at the same time and never got above the 850 MB mark in memory use (at least not yet).

    This release of Ubuntu has its own set of problems (Compiz, anyone?), but I much prefer it to W7. If MS can get Windows' memory usage down, I'd be more inclined to use the latest version.

  • by Junta ( 36770 ) on Saturday October 08, 2011 @09:55AM (#37647494)

    Too many people view 'free' memory as a good thing and would complain if I/O cache grew at the expense of 'free' memory. There is, however, a measure to soothe their worries. In Linux, for example, memory used as disposable cache is clearly delineated (though some cached memory can't be dropped, and it's hard to tell how much *that* is, which is a problem); I assume the same is true in Windows. If free memory comes under pressure, the cache is safely dropped and the memory behaves as if it had been 'free' all along - it just did something nominally useful in the meantime. A user bitching about memory use can be pointed to the second line of free's output and told to get over it.
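    For anyone who wants to check this themselves rather than take the second line of free on faith, the same information is available from /proc/meminfo. A small C sketch (Linux-specific; field names as in current kernels, and MemAvailable requires 3.14 or later):

        #include <stdio.h>

        /* Print how much memory is merely 'not free' (reclaimable cache)
           versus how much is genuinely unavailable. */
        int main(void)
        {
            FILE *f = fopen("/proc/meminfo", "r");
            char line[128];
            long kb, free_kb = 0, avail_kb = 0, cached_kb = 0;

            if (!f)
                return 1;
            while (fgets(line, sizeof line, f)) {
                if (sscanf(line, "MemFree: %ld kB", &kb) == 1)           free_kb = kb;
                else if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1) avail_kb = kb;
                else if (sscanf(line, "Cached: %ld kB", &kb) == 1)       cached_kb = kb;
            }
            fclose(f);

            printf("free:      %ld MB  (the number people panic about)\n", free_kb / 1024);
            printf("cached:    %ld MB  (dropped automatically under pressure)\n", cached_kb / 1024);
            printf("available: %ld MB  (the number that actually matters)\n", avail_kb / 1024);
            return 0;
        }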

    Saying the browser memory-usage scenario is OK, though, is problematic. Sure, caching content is great, but if your cache lives in your RSS and other processes on the system have no way to make your disposable content drop out for the sake of memory they absolutely need in order to operate, that's a problem. If a web page you haven't visited in 4 hours has a cached rendering taking up 64 MB and another process dies because it needed to alloc 40 MB, that's not good (values pulled out of my ass for illustrative purposes). Incidentally, this is also an issue in virtualization: a guest's cached pages are indistinguishable from any other content as far as the hypervisor is concerned, so various weird hacks go into place for the guest to coordinate this with the host.
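    For what it's worth, Linux does give a process a way to flag its own caches as disposable: madvise(). A minimal sketch of the idea (the 64 MB 'rendered page' buffer mirrors the made-up example above; MADV_FREE needs a reasonably recent kernel):

        #define _DEFAULT_SOURCE
        #include <stddef.h>
        #include <sys/mman.h>

        #define CACHE_BYTES (64UL * 1024 * 1024)   /* the hypothetical 64 MB cached rendering */

        /* Keep a rendered-page cache in anonymous memory, but tell the kernel it may
           silently reclaim it under memory pressure. The caller must be prepared for
           the contents to have vanished the next time it looks. */
        void *make_disposable_cache(void)
        {
            void *buf = mmap(NULL, CACHE_BYTES, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED)
                return NULL;

            /* ... render the page into buf ... */

            madvise(buf, CACHE_BYTES, MADV_FREE);   /* reclaimable, but kept if there's no pressure */
            return buf;
        }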

  • And this matters? (Score:5, Interesting)

    by grasshoppa ( 657393 ) on Saturday October 08, 2011 @10:24AM (#37647650) Homepage

    With Windows 7, memory has become less of an issue for me. I just don't care that much; I have 4 gigs, and stuff starts right up when I click on it. As a user, that's all I care about. I could obsess about how much memory is being used at all times, I guess, but what does that metric even mean? I currently have FO:NV, mstsc, 10 tabs in IE and ~20 in Chrome open, and everything is still snappy. What does it matter that the system is showing high RAM utilization?

    What I'd like to see them focus on instead is the file system, and making searches work at least as well as they did in XP. Vista utterly broke file searching (which is amazing in and of itself), and while W7 brought back some of the functionality, it's still a crapshoot.

  • Re:Hope so... (Score:5, Interesting)

    by Runaway1956 ( 1322357 ) on Saturday October 08, 2011 @10:45AM (#37647796) Homepage Journal

    I find it difficult to believe that people are buying new machines with less than 4 gig of RAM. Memory was cheap by the time Win7 came out - cheap enough to load a new machine with 4 gig, anyway. Maybe I'm something of an asshole, but anyone who invests hundreds of dollars in a new machine and decides to go cheap on the memory deserves to have a shitty-running machine. I don't care if it's an Apple fanboy, a Windows drone, or a Linux nut. BUY MEMORY, or don't complain about performance!

    Now, if you had said that you installed all the memory that the mainboard would support, and you were getting 60% to 80% usage before you even started any programs, THEN I would agree that there was a problem, I would sympathize with you, and I would be willing to look for the problem.

    A couple of guys have commented on how much memory their browsers use. Well, I've seen FF using around 1.5 gig, while at the same time, Chromium was using in excess of a gig of memory. As someone else commented - the memory is there, why not use it? It's better than waiting for "virtual memory" to thrash the hell out of my hard disks!

  • by NotSoHeavyD3 ( 1400425 ) on Saturday October 08, 2011 @11:37AM (#37648104) Journal
    I still remember at one place I worked they had me clean up a memory leak. Unfortunately, I couldn't get it past QA because they didn't understand caching. Basically, the deal is that when you free memory it goes back to the memory pool for the process, and then the pool decides when to release it to the OS (which may be never). So when I freed my memory in the debug build, the pool immediately returned it to the OS. When QA ran the release build, the pool held on to the memory and reused it.

    I even showed them how, if you ran the operation several times in a row, you could see the app use more and more memory before the fix, while after the fix it would plateau (because it was just re-using the memory it had already allocated). They totally didn't understand; I might as well have explained it to the pavement outside the building. In the end it just got marked as unfixable. After that, if I saw any memory leaks while coding, I fixed them as part of other bugs and didn't mention it to QA.
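    A toy C illustration of that plateau (all the sizes are arbitrary): free() hands blocks back to the process's allocator, which usually keeps them around for reuse rather than returning them to the OS, so the working set reported by Task Manager or ps stays flat across repeated runs even though nothing is leaking.

        #include <stdlib.h>
        #include <string.h>

        enum { NBLOCKS = 20000, BLOCK = 4096 };   /* ~80 MB of small heap blocks */

        int main(void)
        {
            static char *blocks[NBLOCKS];

            for (int run = 0; run < 5; run++) {
                for (int i = 0; i < NBLOCKS; i++) {
                    blocks[i] = malloc(BLOCK);
                    if (!blocks[i])
                        return 1;
                    memset(blocks[i], 0xAB, BLOCK);   /* touch the pages so they're committed */
                }
                for (int i = 0; i < NBLOCKS; i++)
                    free(blocks[i]);                  /* back to the allocator's pool, often not the OS */

                /* Watched from outside, the process typically grows on the first run and
                   then plateaus: later runs reuse the pooled blocks instead of asking the
                   OS for more memory. */
            }
            return 0;
        }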
  • by mikael ( 484 ) on Saturday October 08, 2011 @12:37PM (#37648400)

    In the old days, there were two ways for a programmer to optimize code: for speed, or for size. You couldn't afford not to design your code, otherwise you would immediately run into memory and performance issues. 16-bit compilers wouldn't allow you to allocate more than 64K at a time, and after your application loaded there would be less than 128K free anyway.

    So you would have to take care to plan ahead where and when you were going to use memory. Was the variable going to be a persistent data block allocated when the module first started (an I/O cache block), something just loaded and then discarded (configuration parameters), or something kept until the user no longer wanted it (a data file)? For every variable, you would have to decide whether it was 8-bit, 16-bit or 32-bit, signed or unsigned, and assign it accordingly.

    Floating point was expensive and you would use fixed-point integers whenever possible, at least until the 80486 came out.
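    For anyone who never had to write that style of code, here is a small sketch of 16.16 fixed-point arithmetic, written with modern stdint types for clarity (period code would have done the wide multiply by hand or in assembly):

        #include <stdint.h>

        typedef int32_t fix16;                 /* 16.16 fixed point: 16 integer bits, 16 fraction bits */

        #define FIX_ONE        (1 << 16)
        #define INT_TO_FIX(i)  ((fix16)((i) << 16))
        #define FIX_TO_INT(f)  ((int)((f) >> 16))

        /* Multiply via a 64-bit intermediate so the fraction bits don't overflow. */
        static fix16 fix_mul(fix16 a, fix16 b)
        {
            return (fix16)(((int64_t)a * b) >> 16);
        }

        /* Divide by pre-shifting the numerator to keep 16 fraction bits of precision. */
        static fix16 fix_div(fix16 a, fix16 b)
        {
            return (fix16)((((int64_t)a) << 16) / b);
        }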

    A 2D FFT on a large image (512x512) was implemented by loading each row of pixels separately from disk, applying the transform, and writing that row out again. This would then be repeated for each column.
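    A sketch of that out-of-core pattern, with the 1-D transform left as a stub (fft_1d, the complex type and the raw row-major file layout are all assumptions for illustration):

        #include <stdio.h>

        #define N 512                        /* N x N image, stored row-major as complex pairs */

        typedef struct { float re, im; } cplx;

        void fft_1d(cplx *v, int n);         /* assumed in-place 1-D FFT, defined elsewhere */

        /* Row pass: only one 512-element row is ever held in memory at a time. */
        static void fft_rows(FILE *f)
        {
            cplx row[N];
            for (int r = 0; r < N; r++) {
                fseek(f, (long)r * N * sizeof(cplx), SEEK_SET);
                fread(row, sizeof(cplx), N, f);
                fft_1d(row, N);
                fseek(f, (long)r * N * sizeof(cplx), SEEK_SET);
                fwrite(row, sizeof(cplx), N, f);
            }
        }

        /* Column pass: gather one column with strided reads, transform it, scatter it back. */
        static void fft_cols(FILE *f)
        {
            cplx col[N];
            for (int c = 0; c < N; c++) {
                for (int r = 0; r < N; r++) {
                    fseek(f, ((long)r * N + c) * sizeof(cplx), SEEK_SET);
                    fread(&col[r], sizeof(cplx), 1, f);
                }
                fft_1d(col, N);
                for (int r = 0; r < N; r++) {
                    fseek(f, ((long)r * N + c) * sizeof(cplx), SEEK_SET);
                    fwrite(&col[r], sizeof(cplx), 1, f);
                }
            }
        }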

    Even if you did get everything planned out, there was still the chance you would run out of memory. Then you would have to go back and prune every variable for size. Do name strings really need 128 bytes? Do attribute flags really need to be 16-bit? Do coordinates need to be 16-bit?
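    A small before-and-after sketch of that kind of pruning (field names and limits invented for illustration):

        #include <stdint.h>

        /* Before: every field sized generously - 134 bytes per record. */
        struct record_fat {
            char     name[128];
            uint16_t flags;
            int16_t  x, y;
        };

        /* After pruning: 35 bytes per record, so nearly 4x as many records fit in RAM. */
        struct record_lean {
            char     name[32];   /* do names really need 128 bytes? */
            uint8_t  flags;      /* 8 flag bits turned out to be enough */
            int8_t   x, y;       /* coordinates fit in -128..127 here */
        };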

    These days, there are two ways to write code: for the shortest project completion time, or for reusable code. Either deadlines are so tight that everyone just throws in code on top of each other, or there is actually time to design and plan ahead.

    No one really bothers anymore with whether structure or class variables are 8-bit, 16-bit or 32-bit, whether an array should have an upper limit of 32, 128 or 1024, or whether result codes should be returned to indicate whether the memory was allocated. Just defining variables as 'int' is good enough, and C++ container classes take care of the dynamic allocation of arrays.

  • Re:Hope so... (Score:5, Interesting)

    by IANAAC ( 692242 ) on Saturday October 08, 2011 @12:52PM (#37648470)

    The Russinovich/Ionescu book "Windows Internals" has some pretty good sections that talk about how Windows memory management really works, if you're curious about it -- it would likely be enlightening about some of the misunderstandings that people have about Windows.

    I will fully admit to not knowing the internals of memory management. But I can say without a doubt that W7 definitely takes more of a performance hit than Ubuntu with the same programs. Thunderbird and Firefox bring the machine to a crawl in W7, while they don't in Ubuntu. Memory management is the only reasonable cause I can think of. It's certainly not the processor - it's an Atom 570 dual core running at 1.66GHz. Add a third largish program (Media Monkey in my case) and W7 becomes unresponsive - REALLY unresponsive.

    I don't get this behavior at all under Ubuntu, even with more programs running. Granted, Ubuntu makes it slightly easier for me to see how memory is being used - probably because I'm a bit more familiar with it - by showing me buffers/cache. So as a layperson, I come to the conclusion that it's memory management.
