Ars Analysis Calls Windows 7 Memory Usage Claims "Scaremongering" 334

Via newsycombinator comes a reaction at Ars Technica to the recently reported claims of excessive memory use on machines running Windows 7. From the article: "I installed the XPnet performance monitoring tool and waited for it to upload my data to see what it might be complaining about. The cause of the problem was immediately apparent. It's no secret that Windows 7, just like Windows Vista before it, includes aggressive disk caching. The SuperFetch technology causes Windows to preload certain data if the OS detects that it is used regularly, even if there is no specific need for it at any given moment. Though SuperFetch is a little less aggressive in Windows 7, it will still use a substantial amount of memory—but with an important proviso. The OS will only use memory for cache when there is no other demand for that memory."
  • So (Score:5, Informative)

    by sopssa ( 1498795 ) * <sopssa@email.com> on Saturday February 20, 2010 @03:30PM (#31212712) Journal

    Though SuperFetch is a little less aggressive in Windows 7, it will still use a substantial amount of memory—but with an important proviso. The OS will only use memory for cache when there is no other demand for that memory.

    I really wonder when people will get this. In the earlier thread I saw people commenting that Windows 95 didn't need so much memory and so on...

    To state it again. This is not RAM memory you need, use or have purpose for. IF you do need it, it is zeroed-out and freed to the application in like 30ms (one frame in usual FPS games).

    If you have fast memory, do use it to its full extent.

    • 30ms? (Score:3, Informative)

      by Colin Smith ( 2679 )

      Really? 30ms. Shit that's slow.

      This IS RAM we're talking about here, right? y'know nanosecond stuff, 10^-9 not 10^-3 seconds.

      • Re: (Score:2, Interesting)

        by sopssa ( 1498795 ) *

        This is in the case where you need to free something like 1GB of normal RAM. Apart from things like Photoshop and games, most apps don't need that much. But anyway, it has to be zeroed out for security, so that apps cannot randomly read data left behind by other apps.

        Besides the technical points, 30ms to free that RAM is really low. You won't even notice it.

        • Re:30ms? (Score:5, Informative)

          by ashridah ( 72567 ) on Saturday February 20, 2010 @04:34PM (#31213294)

          You might want to grab a copy of Process Explorer sometime and look at the stats it reports. You'll notice that Windows actually spends idle time pre-zeroing RAM, so in most cases this work is already done, in more than sufficient amounts. If your system is slammed I could see having to zero pages just before use, but even then it's something that could be done while waiting for other I/O operations to complete (since your system is slammed anyway :) )

          My laptop has 2.8 million pages zeroed at the moment (it has 8GB, and I don't have much running right now, so there's not a lot to cache).

      • Re: (Score:2, Informative)

        This IS RAM we're talking about here, right? y'know nanosecond stuff, 10^-9 not 10^-3 seconds.

        Yes, and nanoseconds (10^-9) multiplied by the number of memory locations to clear (10^6 when you're talking multi-MB chunks of memory) gets us right back into the millisecond (10^-3) range, which is just a blink of an eye for us humans, btw.
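
        As a rough, hedged illustration of that arithmetic, here is a minimal C timing sketch; the result is entirely machine dependent, the 1 GB size is arbitrary, and tens to a couple of hundred milliseconds is typical on hardware of this era:

        /* Time how long zeroing 1 GB of already-faulted-in memory takes. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        int main(void)
        {
            const size_t size = 1024u * 1024 * 1024;   /* 1 GB */
            char *buf = malloc(size);
            if (buf == NULL) return 1;

            memset(buf, 1, size);        /* fault the pages in first */

            clock_t start = clock();
            memset(buf, 0, size);        /* the zeroing pass we care about */
            clock_t end = clock();

            printf("Zeroing 1 GB took %.1f ms\n",
                   1000.0 * (double)(end - start) / CLOCKS_PER_SEC);
            free(buf);
            return 0;
        }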

    • Re: (Score:2, Informative)

      by Amnenth ( 698898 )

      People aren't taking the time to learn the meaning of the 'available' memory stat in the Task Manager.

      Based on my experience, it's usually very close to the total you get when adding 'free' memory and 'cached' memory together.

      'Available' in this case means that, as parent suggests, Windows will free it for use as soon as it's needed.
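
      A minimal sketch of what those numbers correspond to, using the standard Win32 GlobalMemoryStatusEx call; the "available" figure it reports is roughly free plus standby (cached) memory, which is the point being made here:

      /* Print total vs. available physical memory, like Task Manager does. */
      #include <windows.h>
      #include <stdio.h>

      int main(void)
      {
          MEMORYSTATUSEX ms;
          ms.dwLength = sizeof(ms);

          if (GlobalMemoryStatusEx(&ms)) {
              printf("Total physical     : %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
              printf("Available physical : %llu MB\n", ms.ullAvailPhys / (1024 * 1024));
              printf("Memory load        : %lu%%\n", ms.dwMemoryLoad);
          }
          return 0;
      }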

      • Re:So (Score:5, Informative)

        by ls671 ( 1122017 ) * on Saturday February 20, 2010 @03:53PM (#31212938) Homepage

        Yep, same for Linux. My Linux boxes use ALL the available memory even if I do not run many applications on them. The leftover memory SHOULD be used as buffers/cache. If Windows 7 seems to use more memory from a newbie's point of view, it might be because it does things the way it should, better than previous versions. I can't tell for sure since I have never tried Win 7.

        See the 4 GB Linux machine below: it only has ~49 MB of "absolutely free" memory and uses ~449 MB of swap.

        In reality, it has ~2842 MB of "available memory", since it uses ~2792 MB of buffer/cache.

        Using buffer/cache makes the system orders of magnitude faster. If programs need that memory, the OS will give it to them and use less buffer/cache.

        free
                     total       used       free     shared    buffers     cached
        Mem:       4133252    4083380      49872          0      26852    2766248
        -/+ buffers/cache:    1290280    2842972
        Swap:      1999800     449244    1550556
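
        A minimal Linux-only sketch of that same arithmetic, reading /proc/meminfo directly (newer kernels also export a MemAvailable line, which is a better estimate than free + buffers + cached):

        /* Derive the "really available" figure the way the parent does. */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/proc/meminfo", "r");
            if (f == NULL) return 1;

            long free_kb = 0, buffers_kb = 0, cached_kb = 0;
            char line[256];

            while (fgets(line, sizeof(line), f)) {
                sscanf(line, "MemFree: %ld", &free_kb);
                sscanf(line, "Buffers: %ld", &buffers_kb);
                sscanf(line, "Cached: %ld", &cached_kb);
            }
            fclose(f);

            printf("Roughly available: %ld MB (free %ld + buffers %ld + cached %ld)\n",
                   (free_kb + buffers_kb + cached_kb) / 1024,
                   free_kb / 1024, buffers_kb / 1024, cached_kb / 1024);
            return 0;
        }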

    • Re:So (Score:5, Insightful)

      by Dutch Gun ( 899105 ) on Saturday February 20, 2010 @03:55PM (#31212952)

      More to the point, the company that wrote this little monitoring tool badly misunderstood basic principles of how the operating system works. At this point, I think we can move on and completely disregard any conclusion they came to. It demonstrated either profound ignorance or a deliberate attempt to mislead people, in what turned out to be a slashvertisement for their products and company.

      From the article:

      One might almost think that this whole exercise was simply a cynical ploy. Allegations of Microsoft bloatware are, of course, nothing new, and oblique references to the old canard that what Intel gives, Microsoft takes away does nothing to dispel the impression that this is another case of Microsoft bashing.

      What a surprise. Fortunately, people didn't let them get away with it even in the previous article. Microsoft deserves plenty of what Slashdot slings its way, but let's try sticking to the facts.

    • "The OS will only use memory for cache when there is no other demand for that memory"

      Ok, I'm not going to bother reading the smart people. I'm going to go straight to my point.

      If you are using nearly all available RAM for disk cache, then EVERY REQUEST FOR RAM WILL REQUIRE A CACHE DUMP.

      It's like this:

      If you have 4GB RAM and are using, say, 1.5GB for applications and system, and you use 2.2GB RAM for cache, then you are left with approximately 300MB for any new demand. So any demand in excess is going to make your

      • You seem to think that it's not a read-only cache. SuperFetch caches disk blocks as they appear on the disk. "Dumping" them simply means no longer considering them valid cache; there is no need to write them out.
      • Re:So (Score:5, Insightful)

        by Blakey Rat ( 99501 ) on Saturday February 20, 2010 @04:40PM (#31213348)

        The vast, vast, vast, vast majority of that cached memory consists of read-only caches (like DLL caching and SuperFetch) which don't need to be "dumped". Some small, very small, portion of it is read/write disk cache, but that portion is never going to be dumped unless you're *completely* out of memory otherwise. And at that point it's basically a "last resort" failure mode.

        You're as bad as the guys who wrote that article in the first place. If you don't know how Windows works, please don't talk about it.

        • Re:So (Score:5, Informative)

          by Rockoon ( 1252108 ) on Saturday February 20, 2010 @04:45PM (#31213390)

          You're as bad as the guys who wrote that article in the first place. If you don't know how Windows works, please don't talk about it.

          Hell, it's not just Windows. All operating systems do this... and to be quite frank, programmers of all kinds should understand caching techniques well. So the GP is neither a Windows guru nor a decent programmer. The odds are very good that he's just an I-use-software geek, rather than someone who knows anything about computers.

    • To state it again. This is not RAM memory you need, use or have purpose for. IF you do need it, it is zeroed-out and freed to the application in like 30ms (one frame in usual FPS games).

      It's more like 100ms on an average PC, but yes, you are correct.

      But since background stuff will be happening too, maybe 120ms...

      If 120ms isn't an acceptable delay, then you need an OS where programs are geared for low disk IO usage, and low memory usage. That will prevent any software from interfering with any other software, giving very fast and consistent performance.

      Selection of software matters a lot. For example, the difference between My Uninstaller [nirsoft.net] and Add/Remove in XP is huge. You wouldn't notice on a fast

      • To state it again. This is not RAM memory you need, use or have purpose for. IF you do need it, it is zeroed-out and freed to the application in like 30ms (one frame in usual FPS games).

        It's more like 100ms on an average PC, but yes, you are correct.

        Err.. what is more like 100ms? Where are you getting these numbers from?

        Superfetch is a crutch. A handy one, but it shouldn't actually be necessary to use it to get great startup performance for your favourite apps.

        By design, your favorite apps would be precisely the ones to benefit from SuperFetch.

        • By design, your favorite apps would be precisely the ones to benefit from SuperFetch.

          Exactly. I still can't believe all the complaints people have had against Superfetch ever since Vista came out. The whole purpose of it is that it monitors your computer usage and keeps copies of commonly used DLLs and applications in memory. So when you want to start one of your favorite apps, the files it needs are already in memory and it can start instantly. Without Superfetch you'd be reading the files off the hard drive

    • Re:So (Score:5, Insightful)

      by Jah-Wren Ryel ( 80510 ) on Saturday February 20, 2010 @04:45PM (#31213398)

      To state it again. This is not RAM memory you need, use or have purpose for. IF you do need it, it is zeroed-out and freed to the application in like 30ms (one frame in usual FPS games).

      The problem with previous versions of Windows (I haven't used anything newer than XP) is in how the OS decides that you do not "need, use or have a purpose for" certain types of memory.

      The pathological, and yet all too common, case with XP is the OS's decision that text pages should be dumped in favor of disk cache far too soon. The result is that if you have multiple apps open, a few of which you haven't touched for roughly 10 minutes, and then go to copy a couple of gigabytes of files around, the text pages for those 'idle' applications are flushed out and the disk cache is loaded with parts of those copied files (which you are unlikely to ever need). When you click on the iconbar to bring one of those formerly idle apps back to the foreground, the system grinds away for a long time (obviously machine dependent, but never instant and frequently way beyond the point of annoying) as it reloads those text pages from disk before the application even starts to redraw itself, much less becomes fully interactive again.

      The worst part about that behavior is that, to the best of my knowledge, there are no knobs to tweak it. I can't specify how long a text page needs to be idle before it should be a candidate for flushing, or even specify that it should be pinned down permanently so that it is never paged out. I once went looking to see if there was a way to do it from within the application code itself - something like mlock()/mlockall() in posix - and I couldn't find an equivalent, which may just be a reflection of my own inexperience with the Windows API but I figured I would throw that out there anyway.
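
      For reference, a minimal sketch of the POSIX mechanism mentioned above: mlockall() pins the calling process's pages so they are never candidates for page-out (on Linux this needs CAP_IPC_LOCK or a sufficiently large RLIMIT_MEMLOCK):

      #include <stdio.h>
      #include <sys/mman.h>

      int main(void)
      {
          /* Lock everything currently mapped plus anything mapped later. */
          if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
              perror("mlockall");
              return 1;
          }

          /* ... the process's text and data now stay resident ... */

          munlockall();
          return 0;
      }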

      • Re:So (Score:5, Informative)

        by Foolhardy ( 664051 ) <csmith32@gmai l . com> on Saturday February 20, 2010 @09:21PM (#31215360)

        I once went looking to see if there was a way to do it from within the application code itself - something like mlock()/mlockall() in posix - and I couldn't find an equivalent, which may just be a reflection of my own inexperience with the Windows API but I figured I would throw that out there anyway.

        The function you're looking for is VirtualLock [microsoft.com]. You may also look into increasing the process's minimum working set with SetProcessWorkingSetSize [microsoft.com]. This requires SeIncreaseBasePriorityPrivilege.

        A process that is scanning through a file is supposed to use the FILE_FLAG_SEQUENTIAL_SCAN hint so that the cached pages are recycled first, but that doesn't always happen. It also doesn't help that csrss will ask the kernel to minimize a process's working set when its main window is minimized.
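
        A minimal, hedged sketch of those two calls together; the sizes here are purely illustrative, and raising the working-set minimum may require the extra privilege mentioned above:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            SIZE_T lock_bytes = 64 * 1024 * 1024;   /* 64 MB we want kept resident */

            /* Raise the working-set floor so the locked pages fit. */
            if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                          lock_bytes + 16 * 1024 * 1024,
                                          lock_bytes + 64 * 1024 * 1024)) {
                fprintf(stderr, "SetProcessWorkingSetSize: %lu\n", GetLastError());
            }

            void *buf = VirtualAlloc(NULL, lock_bytes,
                                     MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (buf == NULL) return 1;

            /* VirtualLock keeps these pages out of the pagefile entirely. */
            if (!VirtualLock(buf, lock_bytes)) {
                fprintf(stderr, "VirtualLock: %lu\n", GetLastError());
                return 1;
            }

            /* ... use buf; it stays in physical memory ... */

            VirtualUnlock(buf, lock_bytes);
            VirtualFree(buf, 0, MEM_RELEASE);
            return 0;
        }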

        • Re:So (Score:5, Informative)

          by m_pll ( 527654 ) on Saturday February 20, 2010 @10:41PM (#31215752)

          Starting with Vista, working sets of GUI processes are no longer emptied when the main window is minimized.

          For the standby cache recycle problem, Superfetch can help a lot. First of all, it can detect when apps do things like read lots of files sequentially without using FILE_FLAG_SEQUENTIAL_SCAN (or when they do this through a mapped view) and deprioritize these pages so they don't affect normal standby memory. And if useful pages still end up being recycled (e.g. because some app temporarily consumed lots of memory), Superfetch can re-populate them from disk later.

  • by Hadlock ( 143607 ) on Saturday February 20, 2010 @03:43PM (#31212848) Homepage Journal

    I think it's just a sign of the times. I regularly bump up against my 2GB ram limit (once a day) if I have GIMP/Photoshop open, 3 or 4 Chrome windows open with 10-20 tabs each (many of those being youtube videos), usually a videogame in the background (Windowed No Border mode at full or almost full screen resolution rules), along with whatever else I'm doing, a paused VLC video, steam, and any other background apps + whatever I'm working on currently. This isn't a problem in Win7, it's a problem of Leaving a Bunch of Shit open all the time.

    • Re: (Score:3, Funny)

      by Courageous ( 228506 )

      *shrug*

      Windows 7, 64 bit here. 8GB RAM. Intel X25-M SSD. Seems like nothing I do can make my computer "sluggish".

      C//

      • by Hadlock ( 143607 )

        I haven't had any issues with sluggishness, it's just that Chrome tends to die when I hit the 2GB cap. It's speedy as all get out up until that point.

      • Windows 7, 64 bit here. 8GB RAM. Intel X25-M SSD. Seems like nothing I do can make my computer "sluggish".

        Same here, down to the SSD. I (obviously) still have only 60MB "free" memory but 6240-6250MB available.

    • Re: (Score:2, Offtopic)

      Just curious, what's the point of having so many open tabs with YouTube videos? Do you keep having to refer back to them?

      • I often find myself with a long list of tabs open, presenting a history of my travels during searches and allowing me to quickly backtrack to various points. YouTube or other Flash content isn't unusual in many tabs. It's just easier than closing this or that tab only to find the one you closed had a potential link or piece of information you now need. When all is said and done I just X the window and start fresh, but up until that point you could potentially have a vast number of pages of all sorts of content open
        • by jon3k ( 691256 )
          I highly, HIGHLY recommend a Flash-blocking add-on like FlashBlock [mozilla.org] for Firefox. There will be a play button where all the embedded Flash videos would be, and it won't load them until you click play. You can of course whitelist sites that you'd like to load all Flash from. But now you don't have to have those 10 pages in tabs, each with 2, 3 or more Flash ads or graphics eating up CPU cycles.
      • by Hadlock ( 143607 )

        Sometimes it's a catchy song I want to listen to again later, but probably don't want to favorite. Or I did a search and found more than one interesting, tangential video I might want to watch later, but don't have time to now. Other times I simply forget to close them. Sometimes I leave them open to link to later in a blog or email/facebook etc. Maybe if there was some sort of intermediate between "youtube favorites" and "web browser history", I would replace my current system.

    • by ls671 ( 1122017 ) *

      > I regularly bump up against my 2GB ram limit

      I ALWAYS bump up against my 4GB ram limit and it is perfectly normal ! ;-)))

      The shit you "leave open" will be swapped out, so there is no problem there either....

      I currently have 355 processes running on my system; 51 bash shells, 33 httpd, 65 rotatelogs, 3 XVNC server, 3 VMWare hosts, etc...

      And it all runs as smooth as a baby as long as you do not try to use ALL the processes at once ;-))

      http://slashdot.org/comments.pl?sid=1557492&cid=31212938&art_po [slashdot.org]

  • by mschuyler ( 197441 ) on Saturday February 20, 2010 @03:53PM (#31212928) Homepage Journal

    Seems to use 1.3 gig no matter what I do. It's got 4 gig total. Boots in less than 2 minutes. It makes mistakes a lot faster than the old machine, which was just shy of a brick when I switched.

  • Linux does that (Score:5, Informative)

    by Animats ( 122034 ) on Saturday February 20, 2010 @03:59PM (#31212994) Homepage

    Linux uses available memory for cache, and rather aggressively. All available memory can be filled with cached file blocks. This happens routinely on systems which have big randomly-accessed files open, like databases.

    There's nothing wrong with this, except that, once in a while, Linux hits a race condition in prune_one_dentry, causing an "oops" crash, when there's an unblockable need for a memory page and something is locking the file block cache.

    This is one of the Great Unsolved Mysteries of Linux. Linus wrote about it in 2001 [indiana.edu] ("I'll try to think about it some more, but I'd love to have more reports to go on to try to find a pattern.. "). As of 2009, this area is still giving trouble. [google.com] The locking in this area is very complex. [lwn.net]

  • by davmoo ( 63521 ) on Saturday February 20, 2010 @04:16PM (#31213128)

    if everyone is so afraid of their computer memory being used to the fullest, why do these people install so much of it?

    I've got 8GB of ram in the machine I'm on at the moment, and I want the OS and applications to use it to the fullest and most efficient extent possible at all times. I didn't install a 64-bit OS and 8GB of ram so that I can see 6GB free at all times.

    • by Overzeetop ( 214511 ) on Saturday February 20, 2010 @04:35PM (#31213300) Journal

      if everyone is so afraid of their computer memory being used to the fullest, why do these people install so much of it?

      Most users remember back to at least the 90s. You had to install enough RAM to do what you needed (load the OS and program(s) - WinNT had the audacity to require 8MB to run well). There was no caching of any useful sort, so your free memory was really a measure of how many programs you could load. Programs like Photoshop added scratch files to overcome the physical RAM limits, but at a horrible performance penalty should you actually have to use them. "Free RAM" became synonymous with "how many things you could do or open simultaneously."

      All modern operating systems have moved on, but people haven't been educated about this. They remember how bad it was when they ran out of memory, and panic when the OS reports it's almost full. Honestly, it would be far better if MS had reported the cached memory differently. I don't really care how much memory is used as SuperFetch cache most of the time - I'm more concerned with the total active usage. My netbook "only" has 2GB, but I do some 24/96 audio recording with it, and will occasionally work with Photoshop images, so I am concerned if I have less than 500-600MB free when I open a session, as I'm likely to exceed the physical RAM. I can read the data, so it's not a big deal, but others freak out about it.

    • But it makes me feel so awesome to see 6GB free. I'm all like, "Damn, I have a lot of RAM!" When the RAM is all full, my system monitoring graphs don't look as cool. I also like seeing my CPU utilization showing 4 cores, each idling around 1%, and having multiple terabytes of free space on my hard drives. All those graphs get ruined if you actually use your computer for stuff.

  • Too much? (Score:3, Insightful)

    by beej ( 82035 ) on Saturday February 20, 2010 @04:25PM (#31213196) Homepage Journal

    If Windows refused to use the RAM you had installed, now that would be an issue. But fully using RAM? This, on its own, is not something to complain about.

  • The last article specifically said RAM was nearly exhausted and there was excessive paging to disk. No one cares if RAM is full or not, if it's unused it's wasted anyway. The concern is having 85% memory utilization and then paging memory out to the pagefile.
  • In theory (Score:2, Insightful)

    by Dunbal ( 464142 ) *

    A good OS uses all the RAM, and allocates available free blocks of RAM to the programs as required.

    However, using the greater part of a gigabyte plus paging to the hard drive just to display the desktop and run the low-level functions is inexcusable and points to either a) a memory leak, b) the OS doing something legitimate you are unaware of, like indexing files, etc., c) the OS doing something illegitimate, like sending the contents of your hard drive to someone in Redmond, the NSA/FBI or the RIAA/MPAA, or

  • Isn't this how RAM is supposed to be utilized anyway? I don't want my OS flushing out RAM if I'm just going to use the same data later on.
  • by Rashkae ( 59673 ) on Saturday February 20, 2010 @05:11PM (#31213606) Homepage

    Both articles miss some very big and important points. Back in the day of Windows 2000 and XP, the Task Manager chart reported the memory commit charge. Basically, that was the amount of memory applications (and Windows) requested to be allocated. This does not mean that much memory was actually used, but with the exception of very badly written/buggy programs, it should be close. As a rule of thumb, if you look at that and see that your commit is significantly larger than your RAM, you know you're probably in trouble and will be very reliant on swap.

    Windows Vista and 7 report something completely different. The chart shows RAM used minus cache, an almost useless metric, and it does not indicate how much 'total' memory, real and virtual, is allocated. If you look at the screenshot in the Ars article, you will see that the commit charge is over 3GB. That's a lot of memory, and it doesn't include cache!

    At the end of the day, however, a bare-bones Windows XP would require about 120MB of memory, whereas Windows 7 is around 1GB. That sounds like a big difference, but we are talking several years of new features and eye candy. Ultimately, when you drill down, it means that Windows 7 requires $20 more worth of memory. An insignificant issue, so long as you keep that in mind when designing a system for Vista / Windows 7 (i.e., make sure that any computer or device destined for those OSes has at least 2GB of RAM).

    • by m_pll ( 527654 ) on Saturday February 20, 2010 @06:39PM (#31214168)

      Back in the day of Windows 2000 and XP, the Task Manager chart reported the memory commit charge. Basically, that was the amount of memory applications (and Windows) requested to be allocated. This does not mean that much memory was actually used, but with the exception of very badly written/buggy programs, it should be close

      Not necessarily. Many programs commit large chunks of memory in case they need it later but only use a small portion initially. This simplifies program logic because you don't have to free and reallocate the buffer when you need more space, deal with potential reallocation failures, etc. Or a program might want to specify a larger-than-default stack commit size to make sure it doesn't hit a stack overflow if it tries to extend the stack while the system is temporarily out of commit (most services and other system-critical processes do that). Or it might map a copy-on-write view of a file, in which case commit is charged for the entire view but no extra physical memory is used until the program actually writes to the pages. Etc., etc. The end result of this is that you can't really say anything conclusive about physical memory usage by looking at commit charge.

      Commit charge is a virtual memory metric. It's great for detecting memory leaks and deciding how big your pagefile needs to be, but not so great for understanding physical memory usage. Often it might seem like there is a correlation between commit charge and physical memory, but you can also find systems that are very low on available RAM yet have plenty of available commit, and vice versa.

      Task manager now shows used physical memory (defined as Total - Available). Available memory is the most straightforward way to understand whether your system needs more memory or not, and this is why in Vista/Win7 it was chosen as the main indicator of "memory usage".
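
      A minimal sketch of the reserve/commit distinction described above; the sizes are arbitrary, and the pause at the end only exists so the counters can be compared in Task Manager or Process Explorer:

      #include <windows.h>
      #include <stdio.h>

      int main(void)
      {
          const SIZE_T one_gb = 1024u * 1024 * 1024;

          /* Reserve address space only: no commit charge, no physical memory. */
          char *p = VirtualAlloc(NULL, one_gb, MEM_RESERVE, PAGE_NOACCESS);
          if (p == NULL) return 1;

          /* Commit the range: commit charge rises by ~1 GB immediately,
             but physical pages are assigned only when they are touched. */
          if (VirtualAlloc(p, one_gb, MEM_COMMIT, PAGE_READWRITE) == NULL) return 1;

          /* Touch just 16 MB: the working set grows by ~16 MB even though
             the full 1 GB was charged against commit. */
          for (SIZE_T i = 0; i < 16 * 1024 * 1024; i += 4096)
              p[i] = 1;

          printf("Committed 1 GB, touched 16 MB - compare commit vs. working set now.\n");
          Sleep(60000);   /* pause so the counters can be inspected */

          VirtualFree(p, 0, MEM_RELEASE);
          return 0;
      }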

      • Re: (Score:3, Interesting)

        by Rashkae ( 59673 )

        Still fails badly though, because the task manager will show lots of available memory when a lot of caching is being done, depending on how the system is tuned (sometimes known as swappiness in Linux; I'm not entirely sure where to find the tuning parameters in Windows). It's not very hard to find systems that report more than 30% of memory available but are considerably slowed down by swap activity. Of course, the only way to really prove that is with some kind of swap monitor that looks for excessive swap activity.

        • Re: (Score:3, Interesting)

          by Eskarel ( 565631 )

          That's why you have metrics for page faults. If you don't have any page faults, you're not swapping. If you have some, you're probably slightly over-utilized or doing something odd. If you have a lot, you have a problem.
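
          A minimal sketch of reading one such counter per process via GetProcessMemoryInfo (link against psapi); note that PageFaultCount lumps soft and hard faults together, so hard faults - the ones that actually hit the disk - are better watched in Resource Monitor or the performance counters:

          #include <windows.h>
          #include <psapi.h>
          #include <stdio.h>

          int main(void)
          {
              PROCESS_MEMORY_COUNTERS pmc;
              pmc.cb = sizeof(pmc);

              if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
                  printf("Page faults so far: %lu\n", pmc.PageFaultCount);
                  printf("Working set       : %lu KB\n",
                         (unsigned long)(pmc.WorkingSetSize / 1024));
                  printf("Peak working set  : %lu KB\n",
                         (unsigned long)(pmc.PeakWorkingSetSize / 1024));
              }
              return 0;
          }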

  • No simple answers (Score:3, Insightful)

    by Ancient_Hacker ( 751168 ) on Saturday February 20, 2010 @05:26PM (#31213714)

    The issue is not a simple one.

    On the one hand, it's potentially a good idea to use "unneeded" RAM to pre-fetch disk data that might be useful in the future.

    But a whole lot of apps were written without that "feature" in the OS, so a lot of them already pre-fetch data themselves. Now you have twice the amount of RAM tied up, for no benefit.

    And nobody can predict the future, so the pre-fetching is speculative at best, and has no way to compensate for other tasks the user may double-click into competition with the ones being pre-fetched.

    Worse yet, a lot of apps look at the amount of free RAM when they decide whether to prune their working set or make other cache/purge decisions.

    If all of a sudden the OS reports less free RAM, all those apps are going to make the wrong decision.

  • by noidentity ( 188756 ) on Saturday February 20, 2010 @06:08PM (#31213964)
    As someone commented on the last story about this a couple of days ago, if you don't want all your memory to be actually used, pull some of it out and put it in your desk drawer. What, you do want it all used? Well, that's what Windows 7 is doing, using all of it all the time, rather than leaving some of it unused much of the time. Oh, you only want it used for certain purposes? Why? If it's not being used for anything at the moment, using it for something is clearly better than leaving it idle. And that's what Windows 7 (and Linux) do! If a more important use for it comes along, the OS repurposes it for that.
  • by Low Ranked Craig ( 1327799 ) on Saturday February 20, 2010 @06:57PM (#31214336)
    All good operating systems do this. My Mac, for instance, has "inactive memory", which is not exactly the same thing as on Windows, but close enough. If your memory is free, it's not doing anything for you. End of story.
