86% of Windows 7 PCs Maxing Out Memory 613
CWmike writes "Citing data from Devil Mountain Software's community-based Exo.performance.network (XPnet), Craig Barth, the company's chief technology officer, said that new metrics reveal an unsettling trend. On average, 86% of Windows 7 machines in the XPnet pool are regularly consuming 90%-95% of their available RAM, resulting in slow-downs as the systems were forced to increasingly turn to disk-based virtual memory to handle tasks. The 86% mark for Windows 7 is more than twice the average number of Windows XP machines that run at the memory 'saturation' point, and this comes despite more RAM being available on most Windows 7 machines. 'This is alarming,' Barth said of Windows 7 machines' resource consumption. 'For the OS to be pushing the hardware limits this quickly is amazing. Windows 7 is not the lean, mean version of Vista that you may think it is.'"
When do people get this (Score:5, Informative)
RAM is wasted when it isn't in use. The fact that Task Manager in Windows says your RAM is 95% used tells you nothing, and no, it won't "result in slow-downs as the systems were forced to increasingly turn to disk-based virtual memory to handle tasks". I'm actually really surprised, and not in a good way, that the "chief technology officer" of the company doesn't know this.
The memory managers in recent OSes try to utilize all the available RAM (as they should) to speed things up elsewhere. It makes a lot of sense to cache things from the hard drive during periods of low usage, and in such a way that it doesn't interfere with other performance. When the things that are used most often are already cached in RAM, loading them works a lot faster. This isn't limited to files, icons and such; it covers everything the OS could use or do that takes time.
If there's a sudden need for more RAM, the cached data can be "dropped" in no time. It doesn't matter whether usage averages 25% or 95%; what matters is that overall performance is better when you utilize all the resources you can to speed things up in general.
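The "dropped in no time" behavior can be sketched with a toy model (purely hypothetical, nothing like a real kernel's implementation): cached pages count toward the "used" number a task manager reports, but they are reclaimed instantly when an application asks for memory.

```python
# Toy model: physical RAM holds application pages plus reclaimable cache pages.
# Illustrative only -- real memory managers are vastly more involved.

class ToyRam:
    def __init__(self, total_pages):
        self.total = total_pages
        self.app = 0      # pages pinned by applications
        self.cache = 0    # disk-cache pages, reclaimable at any time

    def fill_cache(self):
        # The OS opportunistically uses every otherwise-idle page for caching.
        self.cache = self.total - self.app

    def alloc(self, pages):
        free = self.total - self.app - self.cache
        if pages > free:
            # Drop cache pages "in no time" instead of swapping.
            self.cache -= min(self.cache, pages - free)
        if pages > self.total - self.app - self.cache:
            raise MemoryError("would have to swap")
        self.app += pages

    def percent_used(self):
        # What a naive task-manager reading would report.
        return 100 * (self.app + self.cache) // self.total

ram = ToyRam(total_pages=100)
ram.alloc(30)
ram.fill_cache()
print(ram.percent_used())  # 100 -- looks "saturated"
ram.alloc(50)              # succeeds instantly; cache shrinks, no swapping
print(ram.cache)           # 20 -- cache gave back 50 pages on the spot
```

The point of the sketch: the "95% used" figure says nothing about memory pressure, because most of it may be reclaimable cache.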
Depends on what kind of memory (Score:4, Informative)
Re:When do people get this (Score:5, Informative)
My understanding was that memory used for disk caching doesn't show up in task manager as "used".
It's been a while since I booted Win7 though, so I might be mistaken.
Certainly under Linux, RAM used as disk cache is counted as effectively "free".
It wouldn't surprise me that Win7 has a heavier memory footprint though - as more applications move to .NET and web browsers use lots of Flash / Silverlight etc. - all of these things have a RAM cost.
Re:When do people get this (Score:4, Informative)
From TFA:
"On average, 86% of Windows 7 machines in the XPnet pool are regularly consuming 90%-95% of their available RAM, resulting in slow-downs as the systems were forced to increasingly turn to disk-based virtual memory to handle tasks."
Re:When do people get this (Score:5, Informative)
Re:When do people get this (Score:5, Informative)
If they'd measured pagefaults, they could've reported pagefaults. They didn't. RAM usage appears to be the total basis for the article, so his concern is a genuine one. We don't know enough about the study at this stage to dismiss it.
Re:When do people get this (Score:5, Informative)
It shows up as part of the memory commit bar - which is what regular users will look at, and then go off screaming about "OMG IT USES ALL MY SYSTEM MEMORY!1!!! one one". It's also deducted from the "free" count, since technically it isn't free (it can be freed quickly, but has to be zeroed out before it can be handed off to a new app - security and all).
The Win7 task manager does show a "cached" stat, though, so your effectively free memory is "free"+"cached". And if you want more comprehensive memory stats, you should look at perfmon.msc or SysInternals' Process Explorer.
I wonder if TFA actually measured that disk swapping happens (easy with procexp or perfmon), or is just shouting its head off without understanding what's going on... it's well-known that SuperFetch utilizes otherwise unused memory for disk caching, and does so proactively :)
Parent is +1 informative (Score:5, Informative)
You cannot study virtual memory performance without considering how many page faults occur.
It is perfectly reasonable to use RAM as a filesystem cache, which is why Linux has adopted this approach. The effect is that almost all of the physical RAM is always in use. The cost is that pages are more likely to be wrongly swapped out - however, in typical cases, this increased cost is tiny in relation to the huge reduction in the number of disk accesses.
Re:Slow Slow 7 (Score:0, Informative)
You're a retard.
Congrats.
Re:When do people get this (Score:4, Informative)
Re:When do people get this (Score:5, Informative)
You obviously don't understand memory access design. It's all about feeding the CPU. There are two sorts of locality we can exploit to make this work: temporal and spatial.
Hard drives are the largest-capacity storage (well unless you want to go to tape). But they're slow. Even the fastest high-RPM SCSI or SATA drives are SLOW compared to what's above them. This is mitigated, somewhat, by putting some cache memory on the drive's controller board itself. Still, having to "hit" the hard drive for information is, as you say, a slowdown. Same goes for "external" storage (Optical media, USB media, etc).
So you try to keep as much information as possible in RAM (the next step up). Hitting RAM is less expensive than hitting the H/D in terms of a performance hit. In the early days of computing (up until the 486DX line for Intel CPUs), RAM and CPU ran at a 1:1 clock-speed match, so that was that.
Once you factor in the "clock multiplier" of later CPUs, even the fastest RAM available today can't keep from "starving" the CPU. So we add in cache - L3, L2, and L1. The 486 implemented 8KB (yeah, a whole 8K, wow!) in order to keep itself from starving. L3 is the slowest but largest, L2 is faster still but smaller, and L1 is the smallest of all but the fastest, because it is literally on the same die as the CPU. That distinction is important, and in general you'll find that a "slower" CPU with more L1 cache will benchmark better than a "faster" CPU with less.
The CPU looks for what it wants as follows:
- I want something. Is it in L1? Nope.
- Is it in L2? Nope.
- Is it in L3? Nope.
- Is it in RAM? Nope.
- Is it in the H/D Cache? (helps avoid spin-up and seek times) Nope.
- Crap, it's on the H/D. Big performance hit.
Everything except the L1 check, technically, was a performance hit. The reason for pre-caching things (based on temporal and spatial relationships) is all about predicting what will be needed next and getting it into the fastest available place.
Yes, I suppose you can run an entire system where it all goes into "RAM", and you'll see it as "more responsive" simply because you never have to touch the hard drive. But turning off HDD caching is a BAD idea. It makes cache misses that much more expensive because then, instead of having even the chance of finding what you needed in RAM or in the HD's onboard cache, you have to wait for the H/D to spin up and seek to the right sector.
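The lookup chain above can be sketched as a walk down the hierarchy, where every miss still costs a probe. The latencies below are invented round numbers for illustration, not measurements of any real hardware:

```python
# Memory hierarchy walk: each level is probed in turn until we hit.
# Latencies (in ns) are made-up illustrative figures.

HIERARCHY = [
    ("L1 cache",          1),
    ("L2 cache",          4),
    ("L3 cache",         15),
    ("RAM",              80),
    ("HDD cache",   500_000),   # avoids spin-up/seek, still slow
    ("HDD",      10_000_000),   # seek + rotational latency dominate
]

def access_latency(hit_level):
    """Total cost of probing each level until we hit at `hit_level`."""
    total = 0
    for name, cost in HIERARCHY:
        total += cost           # every miss still pays for the probe
        if name == hit_level:
            return total
    raise ValueError(hit_level)

print(access_latency("L1 cache"))  # 1
print(access_latency("RAM"))       # 1 + 4 + 15 + 80 = 100
print(access_latency("HDD"))       # dominated entirely by the disk
```

Even with these toy numbers, a single trip to the platter costs five orders of magnitude more than an L1 hit, which is the whole argument for caching aggressively at every level above it.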
Re:When do people get this (Score:2, Informative)
>>>The only system where it makes sense to disable swap space is a system with no HDD at all.
Or a system where, when you click on an icon, it opens the window *instantly*, not 1-2 seconds later (because it's retrieving data off the HDD). If you don't know what I'm talking about, I respectfully suggest going here and trying this OS from a bootable CD. You'll be amazed. http://puppylinux.com/ [puppylinux.com]
A computer that uses no hard drive caching, and runs completely in RAM, is faaaaaast. Of course with Windows, that means you'd need somewhere around 20 gigabytes of memory. Windows won't work otherwise.
>>>Do you really think the OS is trying it's hardest to make your computer go slower?
Apparently you've forgotten the dark days of Windows 95 or 98, where you would click on an icon, and have to wait 1-2 minutes for the OS to finish thrashing your HDD, due to not having enough RAM to run properly. (Or more recently: Vista or WIN7 on a 256 megabyte machine.)
Re:When do people get this (Score:4, Informative)
Maybe the part about HDD caching slowing things down?
I could be wrong there, since I'm not an expert, but I remember the dark, dark days when my computer would spend 2-3 minutes just to redraw a Word document. Why? Because it was using the HDD like memory, instead of using the actual memory. It seems to me that this problem, while minimized, has never completely gone away.
Anyway telling me "you're wrong" doesn't enlighten either me, or the other readers. Please elucidate.
There's a lot of variables, but in simple terms the theory goes that something which you have recently accessed (be it an application, a document or whatever) you are likely to want again in the near future. Hence it's worth keeping a copy in memory on the offchance.
On the other hand, you really don't want to be swapping. So if a program needs more physical memory than what you have immediately available, it makes more sense to allocate memory which was recently holding cached data and just reduce the cache size than it does to start swapping, which is what any sane OS will do.
If there's any real intelligence involved in this, the OS will re-allocate an area which hasn't been used in a while.
The cache would only cause a problem in the way you describe it if the OS did not dynamically resize cache to account for other demands on system RAM.
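The "re-allocate an area which hasn't been used in a while" heuristic is classic LRU (least recently used). A minimal sketch using Python's OrderedDict, purely as an illustration of the eviction policy:

```python
from collections import OrderedDict

# Minimal LRU cache: when room is needed for something else,
# the entry that has gone longest without an access is reclaimed first.

class LruCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def touch(self, key, value=None):
        # An access (or insert) moves the entry to the "recently used" end.
        if key in self.entries:
            self.entries.move_to_end(key)
        else:
            self.entries[key] = value
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict the LRU entry

cache = LruCache(capacity=3)
for f in ["libc", "icons", "fonts"]:
    cache.touch(f)
cache.touch("libc")      # reused: "icons" is now the oldest
cache.touch("browser")   # over capacity: "icons" gets evicted
print(list(cache.entries))  # ['fonts', 'libc', 'browser']
```

A real OS layers priorities and usage statistics on top of this, but the core trade-off is the same: sacrifice the data least likely to be needed again.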
I can't explain the differences between yours and your brother's computer but I can tell you that OEM builds of Windows tend to have so much garbage loaded at boot that they often need serious work before they're genuinely usable. Some of the builds I've seen, it is a wonder they boot at all.
Re:When do people get this (Score:3, Informative)
My current* windows 7 stats say:
Total: 2046MB
Used: 1.26GB
Cache: 634MB
Available: 743MB
Free: 132MB
So used RAM does not include cache. And 'available' is a nice way of telling grandma that RAM used for cache is actually available to apps.
* not a snapshot, I can't type that fast :)
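Those quoted figures roughly add up, allowing for Task Manager's rounding (the original mixes MB and GB; 1.26 GB is about 1290 MB):

```python
# Sanity-check the quoted Task Manager numbers (all in MB; 1.26 GB ~= 1290 MB).
# Task Manager rounds its display, so the sums are approximate, not exact.
total, used, cache, available, free = 2046, 1290, 634, 743, 132

# "Used" excludes cache, so used + cache + free should roughly equal total.
print(used + cache + free)   # 2056 -- within rounding of the 2046 total

# "Available" is roughly free + cache (Windows counts standby + free pages).
print(free + cache)          # 766 -- close to the reported 743
```

So the numbers are consistent with cache being excluded from "used" and folded into "available", as the parent says.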
Re:It's called SuperFetch (Score:2, Informative)
True, the technical details for interested geeks are here http://blogs.technet.com/askperf/archive/2007/03/29/windows-vista-superfetch-readyboost.aspx [technet.com]
Re:When do people get this (Score:2, Informative)
You'll excuse my ignorance, but from college I remember that usually you have 0-2V represent 0 and 3-5V represent 1. Does a 0 have a corresponding increase in amperage so that it levels out and uses the same amount of power?
It seems you must have missed the complex electronics portion of your college.
5V TTL circuits use 0-0.8V for low and 2.0-5V for high (on input); in between the high and low states is undefined. Regardless, modern RAM is almost certainly a 1.8V device externally, and even less internally.
Modern RAM (DRAM) works by each bit of memory being a floating charge in a capacitor. When you read the bit the charge is released and read as either a one or zero. I wouldn't want to make assumptions about if a high voltage corresponded to a one or zero, they would choose whichever they felt worked best. This also includes whatever state they initialise the RAM to on power on.
In the ideal no-friction world, floating charge = no current = no power. In our world floating charges leak slowly and have to be topped up, so there is a degree of power being used depending on the high or low state of a given bit. That said, the refresh rate probably has more impact than the value.
In all honesty though, you are barking up the wrong tree. A far greater power issue in the modern computer is the increased power required by all the high-speed external connections. Transmission-line effects mean that as the speeds of links like Ethernet have increased, the power required to shovel the bits down the line has risen steeply with the speed of transmission. (EMI also becomes a serious issue: fun game, wrap a GPS antenna in an Ethernet cable, then plug it in.) So to really save power you should start by unplugging your Gigabit link and hooking up some environmentally friendly 10BASE2 goodness.
Re:Anti-Virus progs (Score:2, Informative)
Re:When do people get this (Score:3, Informative)
Just because RAM is available doesn't mean the OS should hog it. You may want to use that RAM for something else. It may be legitimate to use "excess" RAM for buffers, but then those buffers must be freed quickly whenever necessary.
If you use large amounts of RAM for buffers, you will either free the least-used buffers and hand them to the application, which gives you memory fragmentation - bad for some applications - or you will just kill a block of buffers that may happen to be heavily trafficked and have to be reloaded somewhere else, and then you take a performance penalty.
And caching a lot means more caching overhead. Not everything makes sense to cache. What about the case where you run a database, caching the database file once through the OS and then having the database engine cache the same thing again? A complete waste of performance and resources.
So OS-level caching works fine for many applications, but not for every application. Some applications also tune themselves by looking at available memory and determining how best to allocate resources. It is of course possible to figure out how much the OS can free for application use, but it's also hard to calculate a usage budget. And if it goes wrong, you will end up swapping to disk.
User base increases over time. (Score:3, Informative)
If you're seeing an actual slowdown in performance, fine, worry about it.
User base increases over time. Even on an intranet server, your company will probably add users when it grows. As your user base increases, you will see slowdowns. If you can catch slowdowns before they happen, you will be more prepared for the DDOS attack that comes when your site gets mentioned in a Slashdot article or when a bunch of new employees go through orientation.
100% CPU usage is a good thing: it means there's a process that's not IO bound.
Or it could mean that you need to optimize the process that uses the most CPU time so that it becomes I/O bound. All other things equal, once all your processes are network bound, you can serve more users.
Re:When do people get this (Score:1, Informative)
Definitely the part about HDD caching slowing things down. Even in the DOS age it was well known that hdd caching utilities (I forgot the names, too long ago)
It is smartdrv [wikipedia.org].
Re:When do people get this (Score:3, Informative)
but really there's no reason not to do so on any XP machine with 2 gigs of RAM or Vista/Win 7 machine with 4 gigs of RAM.
Yes, if you do typical, non-taxing office tasks on a PC. If you do anything big, it's nowhere near enough.
If you edit HD video or very large photo arrays, you bump up against 4 gigs without effort. I hit the 16 gig mark on a regular basis.
Re:When do people get this (Score:5, Informative)
Re:When do people get this (Score:3, Informative)
If the system is gratuitously using 95% of the RAM nearly all the time, then it’s a completely different scenario. Everything I try to open that wasn’t cached already will force the system to dump some memory to the swap file to make room for the new application.
Uh, no. The point here is that the RAM is utilized with data that speeds things up, but that can be instantly freed if needed. It doesn't need to put that in the swap file.
Re:When do people get this (Score:5, Informative)
Anyway, to set it up yourself:
Start perfmon.msc
Then add counters:
Go to Memory, add "Pages Output/sec".
I'm not an authority on virtual memory but from what I know:
Page Faults/sec is not usually relevant for this - virtual memory will generate page faults even when it's not swapping to or from disk; it's part of how virtual memory works.
Pages Input/sec can spike when you launch programs (the O/S starts paging in the stuff it needs) - it's no indication of running out of memory.
Pages Output/sec, on the other hand, rises when the O/S is low on memory and needs to take stuff in RAM and write it OUT to disk so that it can reuse that RAM for something else. This is the one you want to monitor.
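One way to see why page outputs are the pressure signal: a toy demand pager (a hypothetical model, not the NT algorithm) where page-ins happen on any first access, but a page-out only happens when RAM is actually full.

```python
# Toy demand pager: page-ins are routine first-access behavior;
# page-outs only occur under memory pressure. Illustrative model only.

class ToyPager:
    def __init__(self, frames):
        self.frames = frames
        self.resident = []      # pages currently in RAM (FIFO order)
        self.pages_in = 0
        self.pages_out = 0

    def access(self, page):
        if page in self.resident:
            return              # already in RAM, no disk traffic
        self.pages_in += 1      # reading a page in is normal
        if len(self.resident) >= self.frames:
            self.resident.pop(0)
            self.pages_out += 1 # evicting to disk means we're out of room
        self.resident.append(page)

p = ToyPager(frames=3)
for page in ["a", "b", "c"]:
    p.access(page)              # plenty of room: inputs only
print(p.pages_in, p.pages_out)  # 3 0
p.access("d")                   # RAM is full: an output finally occurs
print(p.pages_in, p.pages_out)  # 4 1
```

Inputs climb during perfectly healthy operation; outputs stay at zero until memory actually runs out, which is exactly why that's the counter worth watching.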
Re:When do people get this (Score:3, Informative)
Actually, even that is inaccurate.
You see, it can make sense for the OS to swap out some not-recently-used pages of a program to free up more memory for caching. For example, say you're playing a game, but you've got Firefox open. It could make sense to page out the entirety of Firefox, so as to have more physical RAM free for caching game content.
Life ain't so simple in a virtual world :-)
Re:When do people get this (Score:3, Informative)
But more page faults don't always correlate with more slowdowns. An OS with better page-allocation prediction will run faster (from the user's perspective) with the same number of page faults. It's only a problem when a page fault lands on data the user is requesting at that moment.
Continuing the Firefox example: think of one page of memory for each web page you want to view. A smart OS will keep the pages holding the main Firefox program and the current tab in RAM, and page the others out first. Then when tasks switch, the swapped-out Firefox pages are reloaded while the user is still looking at the first page. There are page faults, but the user experiences fewer delays.
Basically, there is no meaningful conclusion we can draw just from peak RAM utilization and page-fault numbers (either average or peak). To do that, we would need to measure the page faults that actually forced the user to wait - pages that had been pushed out to disk and had to be read back before the user could continue.
More importantly, claiming that Windows 7 is "bloated" just because it uses more RAM and takes more page faults is erroneous without additional evidence.
Re:Available memory != Free memory (Score:2, Informative)
If you are running the sidebar you may like to look at this:
http://seclists.org/bugtraq/2007/Sep/134 [seclists.org]
See the discussion and also the pdf http://www.portcullis-security.com/uplds/Next_Generation_malware.pdf [portcullis-security.com]
I'm sticking to perfmon.msc, task manager, resource manager and Process Explorer, depending on the circumstances.
Re:When do people get this (Score:3, Informative)
Re:When do people get this (Score:3, Informative)
If you have a reasonable amount of RAM there's no reason to leave it turned on.
If the swapping algorithm is so bad that it swaps unnecessarily, then yes, turning off swap will help. But a good swapping algorithm remains useful even if you have 16 GB of RAM. Large sections of many processes are basically "run once, then ignore" or even "never run". Most processes have a decent amount of startup code that is never referenced after the first half second of execution, or load multi-megabyte shared libraries into process memory space to get two short functions (or, similarly, contain code that only runs under exceptional circumstances). If you disable swap, you're denying that memory not only to other programs (which we'll assume for the sake of argument don't run out of RAM themselves, since you use the phrase "reasonable amount of RAM"), but also to I/O caching.
If you've got a program regularly crawling part of your directory structure, or you're writing frequently to files, the RAM freed by swapping out the junk parts of each program could be used for a productive purpose. Delaying the write means you can write a contiguous block all at once, allowing higher priority I/O to go through without forcing the low priority I/O program to block, and it can also mean reduced fragmentation on the disk.
Similarly, a predictive caching algorithm can improve responsiveness with that otherwise wasted RAM you've decided *must* be kept in memory. If you always start a program around 6 PM when you get home, the system can recognize this from the metrics and preload it. If you don't run it, oh well, the RAM was used just as effectively as if it held unused, unswappable contents. If you do use it, your program starts nigh instantaneously. If the program itself has a specific performance profile, where specific data files are predictably read, the computer can cache them using the RAM freed by swap, reducing loading times and increasing responsiveness. That might seem wasteful (spinning up the hard disk when it isn't needed), but in other cases it saves energy; if you're seeding a torrent, and the computer has enough memory, it may cache the whole file in memory; voila, no matter what piece a client asks for, the disk doesn't need to spin up.
That said, no swap algorithm is perfect. And if you've got 4+ GB of RAM and all you're doing is running a browser, an office suite and maybe a game, the difference will be small (or non-existent if all your programs and all reasonable file system caching can fit in memory). This doesn't mean swap is useless; all it means is that it's not perfect.
Re:When do people get this (Score:3, Informative)
Re:When do people get this (Score:2, Informative)
Things [i]have[/i] changed since XP :) - by default its cache system is pretty conservative. You can get around that by setting LargeSystemCache=1 (except if you're using ATI drivers - nastier-than-normal BSODs galore!). Then 32-bit and the kernel/user address-space split become the limiting factor.
With more physical RAM and a larger address space, the cache has more room to play - and it doesn't hurt that Win7 (like Vista) is less conservative than XP was :)
Re:When do people get this (Score:3, Informative)
On Windows it doesn't necessarily mean that. Writing a page to disk != needing to read it back from disk later.
Each process has a working set. Pages in the working set are mapped actively into the process's VM with page tables. The memory manager aggressively trims these pages from the working set and puts them into standby memory. A page in standby is not mapped for reading (and more importantly for writing) anywhere in the system. Part of putting the page into standby involves writing a copy to disk. This will show up as a page written.
From standby, the page can be used one of two ways: it can be soft-faulted back into the process's working set with no disk read (the data is still sitting in RAM), or it can be repurposed for some other use, since a copy has already been written to disk.
The nice thing about this model is that disk activity isn't needed to either reuse pages or bring them back at the time of the demand. It helps avoid the ugly condition of paging one process out while paging another in at the same time, causing disk thrashing.
Since Vista, the memory manager will preemptively reload pages that were bumped out of standby back into standby when there is free, unused memory available. Also since Vista, each page of memory has a priority from 0-7 that determines which pages are preferred for keeping in RAM. In all versions of NT-based Windows, memory mapping is managed very similarly to the page file and uses many of the same counters (including standby memory, transition and hard faults, pages in/out). Memory mapping is used by lots of components internally and for loading executable images and libraries. File caching, too, is logically based in many ways on memory mapping, although the counters differ in many cases.
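The trim/standby/soft-fault flow described above can be sketched as follows (a deliberate simplification of the NT model, not its actual implementation):

```python
# Simplified model of NT-style working-set trimming: trimmed pages move to
# a standby list (with a copy already written to disk), and a later access
# "soft-faults" them back into the working set without any disk read.

class Process:
    def __init__(self):
        self.working_set = set()
        self.standby = set()
        self.disk_reads = 0
        self.disk_writes = 0

    def trim(self, page):
        # Trimming writes a copy to disk ("page written") but the data
        # itself stays in RAM on the standby list.
        self.working_set.discard(page)
        self.standby.add(page)
        self.disk_writes += 1

    def access(self, page):
        if page in self.working_set:
            return "hit"
        if page in self.standby:
            # Soft fault: the page is still in RAM, no disk read needed.
            self.standby.discard(page)
            self.working_set.add(page)
            return "soft fault"
        # Hard fault: the page has to come back from disk.
        self.disk_reads += 1
        self.working_set.add(page)
        return "hard fault"

proc = Process()
proc.access("code")            # first touch: hard fault, one disk read
proc.trim("code")              # trimmed to standby, one page written
print(proc.access("code"))     # soft fault
print(proc.disk_reads)         # 1 -- still just the original read
```

This is why a "page written" counter climbing doesn't by itself mean the page will ever need to be read back: as long as the page is reused from standby, no further disk activity occurs.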