
86% of Windows 7 PCs Maxing Out Memory

CWmike writes "Citing data from Devil Mountain Software's community-based Exo.performance.network (XPnet), Craig Barth, the company's chief technology officer, said that new metrics reveal an unsettling trend. On average, 86% of Windows 7 machines in the XPnet pool are regularly consuming 90%-95% of their available RAM, resulting in slow-downs as the systems were forced to increasingly turn to disk-based virtual memory to handle tasks. The 86% mark for Windows 7 is more than twice the average number of Windows XP machines that run at the memory 'saturation' point, and this comes despite more RAM being available on most Windows 7 machines. 'This is alarming,' Barth said of Windows 7 machines' resource consumption. 'For the OS to be pushing the hardware limits this quickly is amazing. Windows 7 is not the lean, mean version of Vista that you may think it is.'"

  • by sopssa ( 1498795 ) * <sopssa@email.com> on Thursday February 18, 2010 @08:57AM (#31183000) Journal

    RAM is wasted when it isn't in use. The fact that the task manager in Windows says your RAM is 95% used tells you nothing, and no, it won't "result in slow-downs as the systems were forced to increasingly turn to disk-based virtual memory to handle tasks". I'm actually really surprised, and not in a good way, that the "chief technology officer" of the company doesn't know this.

    The memory managers in recent OSes try to utilize all the available RAM (as they should) to speed things up elsewhere. It makes a lot of sense to cache things from the hard drive during low-usage periods, in such a way that it doesn't interfere with other work. When the things that are used most often are already cached in RAM, loading them is a lot faster. This doesn't cover only files, icons and such, but everything the OS could use or do that takes time.

    If there's a sudden need for more RAM, the cached data can be "dropped" in no time. It doesn't matter whether usage averages 25% or 95%; what matters is that overall performance is better when you utilize all the resources you can to speed things up in general.
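
    For anyone who wants to see the distinction for themselves, here is a minimal sketch, assuming Python with the third-party psutil package: the "available" figure already counts cache the OS can reclaim instantly, so a high "percent used" number by itself says nothing about swapping.

        # Sketch: look at memory the way the OS does, not just "percent used".
        # Requires the third-party psutil package (pip install psutil).
        import psutil

        vm = psutil.virtual_memory()
        print(f"total:     {vm.total // 2**20:6d} MB")
        print(f"used:      {vm.used // 2**20:6d} MB  ({vm.percent}% reported in use)")
        print(f"available: {vm.available // 2**20:6d} MB  <- includes cache the OS can drop instantly")

        # Actual memory pressure shows up as swap traffic, not as a high RAM percentage.
        sw = psutil.swap_memory()
        print(f"swap used: {sw.used // 2**20:6d} MB of {sw.total // 2**20} MB")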

  • by A beautiful mind ( 821714 ) on Thursday February 18, 2010 @09:01AM (#31183030)
    If it is filesystem cache, then it's not wasted or "maxed out". If it is application/system memory, then it is indeed a problem.
  • by Mr Thinly Sliced ( 73041 ) on Thursday February 18, 2010 @09:03AM (#31183036) Journal

    My understanding was that memory used for disk caching doesn't show up in task manager as "used".

    It's been a while since I booted win7 though, so I might be mistaken.

    Certainly under Linux, RAM used as disk cache is counted as effectively "free".

    It wouldn't surprise me if Win7 has a heavier memory footprint though - as more applications move to .NET and web browsers use lots of Flash / Silverlight etc., all of these things have a RAM cost.
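
    A quick sketch of where Linux keeps these numbers, assuming Python reading /proc/meminfo: the page cache is reported under Buffers/Cached, separate from truly idle memory, which is why it can be treated as effectively free.

        # Sketch: Linux reports the page cache separately in /proc/meminfo, so it can
        # be treated as reclaimable rather than lumped in with application memory.
        meminfo = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                meminfo[key] = int(rest.split()[0])  # values are in kB

        reclaimable = meminfo["Buffers"] + meminfo["Cached"]
        print("MemTotal:        ", meminfo["MemTotal"], "kB")
        print("MemFree:         ", meminfo["MemFree"], "kB  (truly idle)")
        print("Buffers + Cached:", reclaimable, "kB  (page cache, can be dropped)")
        print("Effectively free:", meminfo["MemFree"] + reclaimable, "kB")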

  • by dr.newton ( 648217 ) on Thursday February 18, 2010 @09:15AM (#31183134) Homepage

    From TFA:

    "On average, 86% of Windows 7 machines in the XPnet pool are regularly consuming 90%-95% of their available RAM, resulting in slow-downs as the systems were forced to increasingly turn to disk-based virtual memory to handle tasks."

  • by jibjibjib ( 889679 ) on Thursday February 18, 2010 @09:15AM (#31183138) Journal
    Using more RAM doesn't use more energy. Either your RAM is powered on, or it's not. And if it's powered on it maintains its contents, no matter whether the OS has actually written anything useful to it.
  • by Sockatume ( 732728 ) on Thursday February 18, 2010 @09:16AM (#31183160)

    If they'd measured pagefaults, they could've reported pagefaults. They didn't. RAM usage appears to be the total basis for the article, so his concern is a genuine one. We don't know enough about the study at this stage to dismiss it.

  • by snemarch ( 1086057 ) on Thursday February 18, 2010 @09:16AM (#31183164)

    It shows up as part of the memory commit bar - which is what regular users will look at, and then go off screaming about "OMG IT USES ALL MY SYSTEM MEMORY!1!!! one one". It's also deducted from the "free" count, since technically it isn't free (it can be freed quickly, but has to be zeroed out before it can be handed off to a new app - security and all).

    The Win7 task manager does show a "cached" stat, though, so your effectively free memory is "free"+"cached". And if you want more comprehensive memory stats, you should look at perfmon.msc or SysInternals' Process Explorer.

    I wonder if TFA has actually measured that disk swapping happens (easy with procexp or perfmon), or if they're just shouting their heads off without understanding what's going on... it's well known that SuperFetch utilizes otherwise unused memory for disk caching, and does so proactively :)
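
    For reference, the numbers the parent is talking about can be read straight from the Win32 API. A rough sketch using Python's ctypes and the documented GetPerformanceInfo call in psapi.dll (all counts are in pages, so multiply by the reported page size):

        # Sketch: read the system-wide memory counters behind Task Manager's numbers.
        # "SystemCache" is the file cache / standby memory that shows up as "Cached" -
        # reclaimable, not "used up".
        import ctypes
        from ctypes import wintypes

        class PERFORMANCE_INFORMATION(ctypes.Structure):
            _fields_ = [
                ("cb", wintypes.DWORD),
                ("CommitTotal", ctypes.c_size_t),
                ("CommitLimit", ctypes.c_size_t),
                ("CommitPeak", ctypes.c_size_t),
                ("PhysicalTotal", ctypes.c_size_t),
                ("PhysicalAvailable", ctypes.c_size_t),
                ("SystemCache", ctypes.c_size_t),
                ("KernelTotal", ctypes.c_size_t),
                ("KernelPaged", ctypes.c_size_t),
                ("KernelNonpaged", ctypes.c_size_t),
                ("PageSize", ctypes.c_size_t),
                ("HandleCount", wintypes.DWORD),
                ("ProcessCount", wintypes.DWORD),
                ("ThreadCount", wintypes.DWORD),
            ]

        def pages_to_mb(pages, page_size):
            return pages * page_size // 2**20

        pi = PERFORMANCE_INFORMATION()
        pi.cb = ctypes.sizeof(pi)
        if not ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb):
            raise ctypes.WinError()

        print("physical total:    ", pages_to_mb(pi.PhysicalTotal, pi.PageSize), "MB")
        print("physical available:", pages_to_mb(pi.PhysicalAvailable, pi.PageSize), "MB")
        print("system cache:      ", pages_to_mb(pi.SystemCache, pi.PageSize), "MB")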

  • by Anonymous Coward on Thursday February 18, 2010 @09:26AM (#31183272)

    You cannot study virtual memory performance without considering how many page faults occur.

    It is perfectly reasonable to use RAM as a filesystem cache, which is why Linux has adopted this approach. The effect is that almost all of the physical RAM is always in use. The cost is that pages are more likely to be wrongly swapped out - however, in typical cases, this increased cost is tiny in relation to the huge reduction in the number of disk accesses.

  • Re:Slow Slow 7 (Score:0, Informative)

    by Anonymous Coward on Thursday February 18, 2010 @09:36AM (#31183388)

    You're a retard.

    Congrats.

  • by lcarnevale ( 1691570 ) on Thursday February 18, 2010 @09:43AM (#31183460)
    HDD caching and swap are two completely different things. HDD caching loads things from the disk into RAM to speed things up. Swap uses the HDD as extra RAM when the system doesn't have any memory left to use. So what I think you wanted to turn off was swap, not HDD caching.
  • by Moryath ( 553296 ) on Thursday February 18, 2010 @09:46AM (#31183508)

    You obviously don't understand memory access design. It's all about feeding the CPU. There are two sorts of relationships we can use to make this work: temporal and sequential.

    Hard drives are the largest-capacity storage (well unless you want to go to tape). But they're slow. Even the fastest high-RPM SCSI or SATA drives are SLOW compared to what's above them. This is mitigated, somewhat, by putting some cache memory on the drive's controller board itself. Still, having to "hit" the hard drive for information is, as you say, a slowdown. Same goes for "external" storage (Optical media, USB media, etc).

    So you try to keep as much information as possible in RAM (next step up). Hitting RAM is less expensive than hitting the H/D in terms of a performance hit. In the original days of computing (up until the 486DX line for Intel CPUs), RAM and CPU operated on a 1:1 clock speed match, so that was that.

    Once you factor in the "clock multiplier" of later CPUs, even the fastest RAM available today can't keep from "starving" the CPU. So we add in cache - L3, L2, and L1. The 486 implemented 8KB (yeah, a whole 8K, wow!) in order to keep itself from starving. L3 is the "slowest", but largest; L2 is faster still, but smaller; and L1 is the smallest of all, but the fastest, because it is literally on the same die as the CPU. That distinction is important, and in general you'll find that a "slower" CPU with more L1 cache will benchmark better than a "faster" CPU with less.

    The CPU looks for what it wants as follows:
    - I want something. Is it in L1? Nope.
    - Is it in L2? Nope.
    - Is it in L3? Nope.
    - Is it in RAM? Nope.
    - Is it in the H/D Cache? (helps avoid spin-up and seek times) Nope.
    - Crap, it's on the H/D. Big performance hit.

    Everything except for the L1 check, technically, was a performance hit. The reason for pre-caching things (based on temporal and sequential relationships) is all about predicting what will be needed next and getting it into the fastest available place.

    Yes, I suppose you can run an entire system where it all goes into "RAM", and you'll see it as "more responsive" simply because you never have to touch the hard drive. But turning off HDD caching is a BAD idea. It makes cache misses that much more expensive because then, instead of having even the chance of finding what you needed in RAM or in the HD's onboard cache, you have to wait for the H/D to spin up and seek to the right sector.
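
    As a back-of-the-envelope illustration of why that lookup order matters, here is a tiny sketch; the latency figures are rough, illustrative ballpark numbers, not measurements of any particular machine:

        # Sketch: rough cost of finding data at each level of the hierarchy.
        # The latencies are illustrative ballpark figures, not benchmarks.
        LATENCY_NS = {
            "L1 cache": 1,
            "L2 cache": 5,
            "L3 cache": 20,
            "RAM": 100,
            "HDD onboard cache": 500_000,     # avoids the seek, not the trip to the drive
            "HDD (seek + read)": 10_000_000,
        }

        def lookup_cost(found_at):
            """Sum the cost of checking each level until the data is found."""
            total = 0
            for level, ns in LATENCY_NS.items():
                total += ns
                if level == found_at:
                    return total
            raise ValueError(f"unknown level: {found_at}")

        for level in LATENCY_NS:
            print(f"found in {level:18}: ~{lookup_cost(level):>12,} ns")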

  • by theaveng ( 1243528 ) on Thursday February 18, 2010 @09:47AM (#31183512)

    >>>The only system where it makes sense to disable swap space is a system with no HDD at all.

    Or a system where, when you click on an icon, it opens the window *instantly*, not 1-2 seconds later (because it's retrieving data off the HDD). If you don't know what I'm talking about, I respectfully suggest going here and trying this OS on a bootable CD. You'll be amazed. http://puppylinux.com/ [puppylinux.com]

    A computer that uses no hard drive caching, and runs completely in RAM, is faaaaaast. Of course with Windows, that means you'd need somewhere around 20 gigabytes of memory. Windows won't work otherwise.

    >>>Do you really think the OS is trying it's hardest to make your computer go slower?

    Apparently you've forgotten the dark days of Windows 95 or 98, where you would click on an icon, and have to wait 1-2 minutes for the OS to finish thrashing your HDD, due to not having enough RAM to run properly. (Or more recently: Vista or WIN7 on a 256 megabyte machine.)

  • by jimicus ( 737525 ) on Thursday February 18, 2010 @09:51AM (#31183566)

    Maybe the part about HDD caching slowing things down?

    I could be wrong there, since I'm not an expert, but I remember the dark, dark days when my computer would spend 2-3 minutes just to redraw a Word document. Why? Because it was using the HDD like memory, instead of using the actual memory. It seems to me that this problem, while minimized, has never completely gone away.

    Anyway telling me "you're wrong" doesn't enlighten either me, or the other readers. Please elucidate.

    There's a lot of variables, but in simple terms the theory goes that something which you have recently accessed (be it an application, a document or whatever) you are likely to want again in the near future. Hence it's worth keeping a copy in memory on the offchance.

    On the other hand, you really don't want to be swapping. So if a program needs more physical memory than what you have immediately available, it makes more sense to allocate memory which was recently holding cached data and just reduce the cache size than it does to start swapping, which is what any sane OS will do.

    If there's any real intelligence involved in this, the OS will re-allocate an area which hasn't been used in a while.

    The cache would only cause a problem in the way you describe it if the OS did not dynamically resize cache to account for other demands on system RAM.

    I can't explain the differences between yours and your brother's computer but I can tell you that OEM builds of Windows tend to have so much garbage loaded at boot that they often need serious work before they're genuinely usable. Some of the builds I've seen, it is a wonder they boot at all.

  • by lagfest ( 959022 ) on Thursday February 18, 2010 @10:07AM (#31183778)

    My current* windows 7 stats say:
    Total: 2046MB
    Used: 1.26GB
    Cache: 634MB
    Available: 743MB
    Free: 132MB

    So used RAM does not include cache. And 'available' is a nice way of telling grandma that RAM used for cache is actually available to apps.

    * not a snapshot, i can't type that fast :)

  • by mystikkman ( 1487801 ) on Thursday February 18, 2010 @10:14AM (#31183838)

    True, the technical details for interested geeks are here http://blogs.technet.com/askperf/archive/2007/03/29/windows-vista-superfetch-readyboost.aspx [technet.com]

  • by lordlod ( 458156 ) on Thursday February 18, 2010 @10:19AM (#31183906)

    You'll excuse my ignorance, but from college I remember that usually you have 0-2V represent 0 and 3-5V represent 1. Does a 0 have a corresponding increase in amperage so that it levels out and uses the same amount of power?

    It seems you must have missed the complex electronics portion of your college.

    5V TTL circuits use 0-0.8V Low and 2.2-5V High (on input), in between the high and low states is undefined. Regardless, modern RAM is almost certainly a 1.8V device externally and internally even less.

    Modern RAM (DRAM) works by storing each bit of memory as a floating charge in a capacitor. When you read the bit, the charge is released and read as either a one or a zero. I wouldn't want to make assumptions about whether a high voltage corresponds to a one or a zero; they would choose whichever they felt worked best. This also includes whatever state they initialise the RAM to on power-on.

    In the ideal no-friction world, floating charge = no current = no power. In our world floating charges leak slowly and have to be topped up, so there is a degree of power being used depending on the high or low state of a given bit. That said, the refresh rate probably has more impact than the value.

    In all honesty though, you are barking up the wrong tree. A far greater power issue in the modern computer is the increased power required by all the high-speed external connections. Transmission-line effects mean that as the speeds of links like Ethernet have increased, the power required to shovel the bits down the line has risen sharply with the speed of transmission. (EMI also becomes a serious issue: fun game, wrap a GPS antenna in an Ethernet cable, then plug it in.) So to really save power you should start by unplugging your Gigabit link and hooking up some environmentally friendly 10BASE2 goodness.

  • Re:Anti-Virus progs (Score:2, Informative)

    by Hierophant7 ( 962972 ) on Thursday February 18, 2010 @10:26AM (#31183998)
    That makes sense. It also makes sense that roughly 86% of users would enable a full scan every day.
  • by Z00L00K ( 682162 ) on Thursday February 18, 2010 @10:28AM (#31184018) Homepage Journal

    Just because RAM is available doesn't mean that the OS should hog it. You may want to use that RAM for something different. It may be legal to use "excess" RAM for buffers, but then those buffers must be freed fast whenever necessary.

    If you use large amounts of RAM for buffers, then either you free the least-used buffers and hand them to the application, and you get memory fragmentation - which can be bad for some applications - or you just kill a block of buffers, which may happen to be buffers that are heavily trafficked and have to be re-loaded somewhere else, and you take a performance penalty.

    And caching a lot means more caching overhead. Not everything makes sense to cache. What about the case where you run a database, caching the database file through the OS first and then having the database engine cache the same thing again? A complete waste of performance and resources.

    So OS caching works fine for many applications, but not for every application. Some applications also tune themselves by looking at available memory and determining how best to allocate resources. It is of course possible to figure out how much the OS can free up for application use, but it's also hard to calculate a usage budget. And if it goes wrong, you will end up swapping to disk.
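
    On the database double-caching point: an application that manages its own cache can hint the kernel not to keep a second copy. A hedged sketch, Linux/POSIX only (Windows has a rough equivalent in opening files with FILE_FLAG_NO_BUFFERING, not shown here):

        # Sketch: a program that caches file data itself (as database engines do) can
        # tell the kernel to drop its page-cache copy after reading, so the same bytes
        # aren't cached twice. Linux/POSIX only; requires Python 3.3+.
        import os

        def read_and_release(path, chunk_size=1 << 20):
            chunks = []
            fd = os.open(path, os.O_RDONLY)
            try:
                while True:
                    chunk = os.read(fd, chunk_size)
                    if not chunk:
                        break
                    chunks.append(chunk)
                # Hint: we keep our own copy, the kernel's page cache need not.
                os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
            finally:
                os.close(fd)
            return b"".join(chunks)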

  • by tepples ( 727027 ) <tepples.gmail@com> on Thursday February 18, 2010 @10:31AM (#31184052) Homepage Journal

    If you're seeing an actual slowdown in performance, fine, worry about it.

    User base increases over time. Even on an intranet server, your company will probably add users when it grows. As your user base increases, you will see slowdowns. If you can catch slowdowns before they happen, you will be more prepared for the DDOS attack that comes when your site gets mentioned in a Slashdot article or when a bunch of new employees go through orientation.

    100% CPU usage is a good thing: it means there's a process that's not IO bound.

    Or it could mean that you need to optimize the process that uses the most CPU time so that it becomes I/O bound. All other things equal, once all your processes are network bound, you can serve more users.

  • by Anonymous Coward on Thursday February 18, 2010 @10:37AM (#31184126)

    Definitely the part about HDD caching slowing things down. Even in the DOS age it was well known that hdd caching utilities (I forgot the names, too long ago)

    It is smartdrv [wikipedia.org].

  • by Lumpy ( 12016 ) on Thursday February 18, 2010 @10:45AM (#31184232) Homepage

    but really there's no reason not to do so on any XP machine with 2 gigs of RAM or Vista/Win 7 machine with 4 gigs of RAM.

    Yes, if you do typical, non-taxing office tasks on a PC. If you do anything big, it's nowhere near enough.

    If you edit HD video or very large photo arrays, you bump up against 4GB without effort. I hit the 16GB mark on a regular basis.

  • by afidel ( 530433 ) on Thursday February 18, 2010 @10:49AM (#31184300)
    Actually, what you really need to do is measure hard page faults per second - i.e., those pages that actually go to the secondary backing store (disk) - which for some reason isn't available as a default stat counter.
  • by sopssa ( 1498795 ) * <sopssa@email.com> on Thursday February 18, 2010 @10:53AM (#31184360) Journal

    If the system is gratuitously using 95% of the RAM nearly all the time, then it’s a completely different scenario. Everything I try to open that wasn’t cached already will force the system to dump some memory to the swap file to make room for the new application.

    Uh, no. The point here is that the RAM is filled with data that speeds things up, but that can be instantly freed if needed. It doesn't need to be put in the swap file.

  • by TheLink ( 130905 ) on Thursday February 18, 2010 @11:02AM (#31184488) Journal
    Yeah I don't know why they don't set up the counter by default.

    Anyway, to set it up yourself:

    Start perfmon.msc.
    Then add counters:
    Go to Memory and add "Pages Output/sec".

    I'm not an authority on virtual memory, but from what I know:
    Page Faults/sec is not usually relevant for this - the virtual memory system will generate page faults even when it's not swapping to/from disk; that's just part of how virtual memory works.
    Pages Input/sec can spike when you launch programs (the OS starts paging in the stuff it needs) - it's no indication of running out of memory.
    Pages Output/sec, on the other hand, rises when the OS is low on memory and needs to take stuff in RAM and write it OUT to disk so that it can reuse that RAM for something else. This is the one you want to monitor.
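
    If you'd rather script it than click through perfmon, the same counter can be sampled from the command line with typeperf, which ships with Windows. A small sketch (counter names as in an English-locale install):

        # Sketch: sample the "hard" paging counter from a script instead of the
        # perfmon GUI. Windows only; typeperf ships with the OS. Sustained non-zero
        # Pages Output/sec is the real sign of memory pressure.
        import subprocess

        subprocess.run([
            "typeperf",
            r"\Memory\Pages Output/sec",  # pages written out to free up RAM
            "-si", "1",                   # sample every second
            "-sc", "10",                  # take ten samples, then exit
        ], check=True)
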
  • by Eivind ( 15695 ) <eivindorama@gmail.com> on Thursday February 18, 2010 @11:11AM (#31184634) Homepage

    Actually, even that is inaccurate.

    You see, it can make sense for the OS to swap out some not-recently-used pages of a program to free up more memory for caching. For example, say you're playing a game but you've got Firefox open. It could make sense to page out the entirety of Firefox, so as to have more physical RAM free for caching game content.

    Life ain't so simple in a virtual world :-)

  • by Bakkster ( 1529253 ) <Bakkster@man.gmail@com> on Thursday February 18, 2010 @11:55AM (#31185358)

    But more page faults don't always correlate with more slowdowns. An OS with better page-allocation prediction will run faster (from the user's perspective) with the same number of page faults. It's only a problem if the page faults are on paged-out data that the user is requesting at that moment.

    Continuing the Firefox example: there might be one page of memory for each tab you have open. A smart OS will keep the pages holding the main Firefox program and the current tab in RAM and page the others out first. Then when tasks switch, the paged-out Firefox pages are reloaded while the user is still looking at the first tab. There are page faults, but the user experiences fewer delays.

    Basically, there is no meaningful conclusion we can draw just from peak RAM utilization and page fault numbers (either average or peak). To do that, we would need to measure the number of page faults for pages that had been written out to disk and whose reload required the user to wait before continuing.

    More importantly, to claim that Windows 7 is 'bloated' just because it uses more RAM and has more page faults is erroneous without additional evidence.

  • by TheLink ( 130905 ) on Thursday February 18, 2010 @12:13PM (#31185666) Journal

    If you are running the sidebar you may like to look at this:

    http://seclists.org/bugtraq/2007/Sep/134 [seclists.org]

    See the discussion and also the pdf http://www.portcullis-security.com/uplds/Next_Generation_malware.pdf [portcullis-security.com]

    I'm sticking to perfmon.msc, task manager, resource manager and Process Explorer, depending on the circumstances.

  • by Foofoobar ( 318279 ) on Thursday February 18, 2010 @12:15PM (#31185692)
    First I said ANY GIVEN RESOURCE... not just RAM. Second, disk cache is not as fast as RAM and should not be relied upon like RAM. It is merely a safety net if you run out of RAM... like when you get a usage spike.
  • by ShadowRangerRIT ( 1301549 ) on Thursday February 18, 2010 @12:16PM (#31185700)

    If you have a reasonable amount of RAM there's no reason to leave it turned on.

    If the swapping algorithm is so bad that it swaps unnecessarily, then yes, turning off swap will help. But a good swapping algorithm remains useful even if you have 16 GB of RAM. Large sections of many processes are basically "run once, then ignore" or even "never run". Most processes have a decent amount of startup code that is never referenced after the first half second of execution, or load multi-megabyte shared libraries into process memory space to get two short functions (or, similarly, contain code that only runs under exceptional circumstances). If you disable swap, you're denying access to that memory not only to other programs (which we'll assume for the sake of argument don't run out of RAM themselves, since you use the phrase "reasonable amount of RAM"), but also to I/O caching.

    If you've got a program regularly crawling part of your directory structure, or you're writing frequently to files, the RAM freed by swapping out the junk parts of each program could be used for a productive purpose. Delaying the write means you can write a contiguous block all at once, allowing higher priority I/O to go through without forcing the low priority I/O program to block, and it can also mean reduced fragmentation on the disk.

    Similarly, a predictive caching algorithm can improve responsiveness with that otherwise wasted RAM you've decided *must* be kept in memory. If you always start a program around 6 PM when you get home, the system can recognize this from the metrics and preload it. If you don't run it, oh well, the RAM was used just as effectively as if it held unused, unswappable contents. If you do use it, your program starts nigh instantaneously. If the program itself has a specific performance profile, where specific data files are predictably read, the computer can cache them using the RAM freed by swap, reducing loading times and increasing responsiveness. That might seem wasteful (spinning up the hard disk when it isn't needed), but in other cases it saves energy; if you're seeding a torrent, and the computer has enough memory, it may cache the whole file in memory; voila, no matter what piece a client asks for, the disk doesn't need to spin up.

    That said, no swap algorithm is perfect. And if you've got 4+ GB of RAM and all you're doing is running a browser, an office suite and maybe a game, the difference will be small (or non-existent if all your programs and all reasonable file system caching can fit in memory). This doesn't mean swap is useless; all it means is that it's not perfect.

  • by ShadowRangerRIT ( 1301549 ) on Thursday February 18, 2010 @12:23PM (#31185814)
    One other minor note: Windows's use of pre-emptive paging makes for a much faster hybrid sleep and/or hibernation. If your page file is larger than main memory, and you're not paging excessively, most of your memory is probably already paged out. Thus, the hibernate file only needs to have the unique data written to it; on a laptop with 4 GB of mostly used RAM and a relatively slow hard disk, it could take two minutes to hibernate the machine (hope your battery lasts). Every bit of memory paged out preemptively means less time to hibernate. My home machine is set up for hybrid sleep, and has 4 GB of RAM. Time from issuing the sleep command to hibernation is about 3-5 seconds, and that only works because of the page file.
  • by snemarch ( 1086057 ) on Thursday February 18, 2010 @12:33PM (#31185964)

    Things *have* changed since XP :) - by default its cache system is pretty conservative. You can get around that by setting LargeSystemCache=1 (except if you're using ATI drivers - nastier-than-normal BSODs galore!). Then 32-bit and the kernel/user address-space split become the limiting factors.

    With more physical RAM and a larger address space, the cache has more room to play - and it doesn't hurt that Win7 (like Vista) is less conservative than XP was :)
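
    For reference, that switch lives in the registry. A small read-only sketch to check the current setting, assuming Python on Windows:

        # Sketch: check (read-only) the LargeSystemCache tweak mentioned above.
        import winreg

        KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            value, _ = winreg.QueryValueEx(key, "LargeSystemCache")
            # 0 = favor process working sets (client default), 1 = favor the system cache
            print("LargeSystemCache =", value)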

  • by Foolhardy ( 664051 ) <[csmith32] [at] [gmail.com]> on Thursday February 18, 2010 @01:40PM (#31186902)

    But "page out" means something in RAM is going to disk - if I ever want it back in RAM, I'll have to wait.

    On Windows it doesn't necessarily mean that. Writing a page to disk != needing to read it back from disk later.

    Each process has a working set. Pages in the working set are mapped actively into the process's VM with page tables. The memory manager aggressively trims these pages from the working set and puts them into standby memory. A page in standby is not mapped for reading (and more importantly for writing) anywhere in the system. Part of putting the page into standby involves writing a copy to disk. This will show up as a page written.

    From standby, the page can be used one of two ways:

    1. Transitioned back. If one of the processes that originally had the page mapped touches the page, it will cause a soft page fault in which the page is simply put back in the process's page directory. There's no need to retrieve it from disk since it still has the same data from before. The disk copy is discarded. This will show up as a transition fault in the performance monitor.
    2. Reused for something else. Standby pages are counted as "Available" because they can be immediately re-used for another purpose without accessing the disk. The memory copy of the page is discarded and the page is re-used for something else. No disk activity is needed at this time since there is already a copy on disk. When one of the original owners of the page wants the data back and the page is no longer on standby, it has to be retrieved from disk. This will show up as a hard page fault in the performance monitor.

    The nice thing about this model is that disk activity isn't needed to either reuse pages or bring them back at the time of the demand. It helps avoid the ugly condition of paging one process out while paging another in at the same time, causing disk thrashing.

    Since Vista, the memory manager will preemptively re-load pages that have been bumped out of standby back into standby if there is free, unused memory available. Also since Vista, each page of memory has a priority from 0-7 that determines which pages are preferred to keep in RAM. In all versions of NT-based Windows, memory mapping is managed very similarly to page-file management and uses many of the same counters (including standby memory, transition and hard faults, and pages in/out). Memory mapping is used by lots of components internally and for loading executable images and libraries. Also, file caching is logically based in many ways on memory mapping, although the counters are different in many cases.
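
    To put that soft-versus-hard distinction into numbers, here is a small sketch that samples the relevant counters with the built-in typeperf tool (English-locale counter names assumed):

        # Sketch: watch soft faults (served from standby RAM, cheap) next to hard
        # faults (required a disk read). Windows only; uses the built-in typeperf.
        import subprocess

        subprocess.run([
            "typeperf",
            r"\Memory\Transition Faults/sec",  # pages pulled straight back from standby
            r"\Memory\Page Reads/sec",         # faults that actually hit the disk
            "-si", "1",
            "-sc", "10",
        ], check=True)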
