Windows Memory Manager To Introduce Compression
jones_supa writes: Even though the RTM version of Windows 10 is already out the door, Microsoft will keep releasing beta builds of the operating system to Windows Insiders. The first one will be build 10525, which introduces some color personalization options, but also interesting improvements to memory management. A new concept called the compression store is an in-memory collection of compressed pages: when memory pressure gets high enough, stale pages will be compressed instead of being swapped out. The compression store will live in the System process's working set. As usual, Microsoft will be receiving comments on the new features via the Feedback app.
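For readers who want a more concrete picture of the idea, here is a minimal toy sketch of a compression store. It assumes nothing about Microsoft's actual implementation: the compressor (zlib), the page size, and the pressure check are stand-ins, and a real memory manager of course operates on physical pages rather than Python objects.

import zlib

PAGE_SIZE = 4096   # arbitrary stand-in for the real page size

class CompressionStore:
    """Holds stale pages in compressed form in RAM instead of writing them to the pagefile."""
    def __init__(self):
        self.pages = {}                      # page number -> compressed bytes

    def compress_page(self, page_no, data):
        self.pages[page_no] = zlib.compress(data)

    def decompress_page(self, page_no):
        return zlib.decompress(self.pages.pop(page_no))

def handle_memory_pressure(working_set, store, pressure_high):
    """When pressure is high, compress stale pages rather than swapping them out."""
    if not pressure_high:
        return
    for page_no, (data, stale) in list(working_set.items()):
        if stale:
            store.compress_page(page_no, data)
            del working_set[page_no]         # leaves the working set but stays in RAM

# Toy usage: page 0 is stale and gets compressed; a later fault on it would
# call store.decompress_page(0) and put the data back into the working set.
working_set = {0: (b"\x00" * PAGE_SIZE, True), 1: (b"hot data", False)}
store = CompressionStore()
handle_memory_pressure(working_set, store, pressure_high=True)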
Congratulations, Microsoft! (Score:5, Informative)
Welcome to 2014 [wikipedia.org]
Re:Congratulations, Microsoft! (Score:5, Insightful)
You mean Welcome to 1990 [thefreedictionary.com]. Everything old is new again.
Re: (Score:3)
Yeah, I had RAM Doubler for Macintosh, too. But this is actually included with the OS. The sibling commenter pointing out that OS X beat it by a year wins that competition, though.
I was actually imagining that some crusty old fart would crop up to tell us you could do it in VMS or something but so far nope
Re: (Score:2)
We did it on our mainframe platform, but that was mid-90s, so others were first.
Re: (Score:2)
Yeah, I had RAM Doubler for Macintosh, too. But this is actually included with the OS. The sibling commenter pointing out that OS X beat it by a year wins that competition, though.
I was actually imagining that some crusty old fart would crop up to tell us you could do it in VMS or something but so far nope
NeXTSTEP 3 (I think) had this in the early 90s. Then the rationale was that compressing pages on the way to disk reduced I/O load.
Like: http://www.nextcomputers.org/N... [nextcomputers.org]
In mainframe? Hell no! (Score:2)
There are two big reasons for the "Hell no". First, in the 70s and 80s computers were all about money. If you could afford it, you did it. If you did not have enough memory, you either re-wrote your code or paid for more. I'm sure we could have reserved some core to hold the LZ libraries, but that would have consumed more space than any piece of code we ran. Space conservation would be the second reason.
I'd say the same for the first *Nix systems as well. If you had to worry about compressing in memory you were doing
Re: (Score:2)
Wasn't the Mac limited to 128 KB because that was all anyone would ever need? For that matter, most people were running DOS on PCs, and that had the 640 KB limit.
Re: (Score:2)
Wasn't the Mac limited to 128Kb because that was all anyone would ever need?
Lisa ("Mac XL", prototype mac essentially) 512k-1MB, 2x HDD expansion only.
Original mac, 128k expandable to 512k by soldering, no expansion bus. Followed by 512k version with no other changes.
All macs thereafter: 1+MB, expansion bus
Re:Congratulations, Microsoft! (Score:5, Interesting)
I vaguely remembered that that (or a similar product) was analyzed and it actually did nothing.
There was code in it, but all that code was bypassed. One imagines that the programmer couldn't get it working but had to ship something - and his bosses couldn't actually tell if the driver DID anything.
Re:Congratulations, Microsoft! (Score:4, Interesting)
The Macintosh version actually did a few things. Mostly to help alleviate Classic Mac OS's piss poor memory management where you had to pre-allocate a contiguous chunk of memory to each process -- manually.
Re: (Score:2)
The Macintosh version actually did a few things. Mostly to help alleviate Classic Mac OS's piss poor memory management where you had to pre-allocate a contiguous chunk of memory to each process -- manually.
this is a necessary step on any hardware that doesn't have virtual memory, regardless of operating system
Re: Congratulations, Microsoft! (Score:2)
Yeah but the problem was that classic MacOS did have virtual memory as of System 7.
Virtual Memory AND Flat memory model (Score:4, Informative)
this is a necessary step on any hardware that doesn't have virtual memory, regardless of operating system
That doesn't have virtual memory AND uses a flat memory model (i.e. where there's a single huge contiguous address space).
If the OS needs to move memory around (paging, etc.), the only solution is to change the pointers, which then need to point elsewhere in memory; hence the complicated handles and pointers on the Mac's Classic System, on 68k PalmOS, etc.
Meanwhile, the PC's 286 also lacked virtual memory (that only came later with the 386), but used (and abused) protected mode's segmented memory as a "poor man's virtual memory".
Protected mode memory was accessed through a segment: a "handle" pointing to where the chunk actually sits in memory. (A bit more complex than the real mode segments of the 8088/8086, which were just spaced 16 bytes apart.)
The software doesn't know much; it only uses the handle it was assigned. If the OS needs to move memory around, it just maps the segment to a different address. The software doesn't notice and keeps using the same handle as before.
I'm not saying that the 286 architecture was better, just explaining a bit of why Intel chose to stick with segments in protected mode.
(In fact the 68k architecture was better, being a 32/16-bit hybrid able to point anywhere in a flat memory space with a single pointer, whereas the 286 was pure 16-bit and required a mumbo-jumbo of segments to handle anything bigger than 64k.)
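To make the handle indirection concrete, here is a rough sketch of the general mechanism; it is not the actual Classic Mac OS toolbox or the 286 descriptor tables, and every name in it is made up for illustration.

# Toy model of handle-based memory: the application keeps a stable handle,
# and only the "memory manager" knows (and may change) where the block really lives.
class MemoryManager:
    def __init__(self):
        self.table = {}          # handle -> buffer (stand-in for a physical address)
        self.next_handle = 1

    def alloc(self, size):
        handle = self.next_handle
        self.next_handle += 1
        self.table[handle] = bytearray(size)
        return handle            # the application only ever sees this

    def deref(self, handle):
        return self.table[handle]            # resolved again at every access

    def compact(self):
        # The manager may move blocks around (defragment, page, etc.); handles
        # stay valid because only its own table entries change.
        for handle, buf in self.table.items():
            self.table[handle] = bytearray(buf)   # the "moved" copy

mm = MemoryManager()
h = mm.alloc(64)
mm.deref(h)[0] = 42
mm.compact()                     # memory shuffled behind the application's back
assert mm.deref(h)[0] == 42      # the handle still resolves correctly

With raw pointers in a flat address space there is no such table to patch up, which is why a flat-memory OS without virtual memory cannot transparently relocate or compress a block that the application addresses directly.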
Re: (Score:2)
Why do I have to do that with Java as well?
Re: (Score:2)
this is a necessary step on any hardware that doesn't have virtual memory, regardless of operating system
What's preventing applications from allocating non-contiguous blocks of memory?
Re: (Score:3)
Yeah, SoftRAM [wikipedia.org] was sued and declared guilty because it did nothing (worse, it slowed down the system). Other products did at least try, but the increase in apparent RAM came at a great performance cost, which sort of defeats the point.
Re: (Score:3, Interesting)
Guys, guys, guys, guys!!!
Come on. Companies can't list new features without being called out on it?
It's silly to point at completely different implementations of the same concept and shout "DONE BEFORE!". Compression is old and has been, could be, and will be used for many different strategies in the future.
New uses for old concepts are an ongoing thing and should not be regarded as unoriginal. By those standards flight was never a big achievement, since birds have been flying for millions of years.
Re: (Score:2)
This isn't a new use for an old concept. It's precisely the same use implemented in essentially the same way: modify the virtual memory system so pages get kept in memory in compressed form, rather than being written out to disk.
I'm not saying it's not a good idea, or that Microsoft shouldn't be doing it. But they're one of the last to arrive at the party. OS X and Linux both already have this feature, and it's been available through third party products for decades.
Re: Congratulations, Microsoft! (Score:2)
You are joking, right? RAM doubler was a scam. Machines were not fast enough to compress on the fly.
Re: (Score:2)
As well as it should be. We now have fast multicore CPUs that (should) have the spare capacity to handle such background tasks without degradation of performance.
Re: (Score:2)
Do you know that RAM Doubler was fake? It just increased the reported size, and reverse engineering showed that it didn't even have any compression code! :)
OSX in 2013. (Score:5, Informative)
Re: (Score:3)
Welcome to 2012! [gmane.org] as it was when compressed memory was introduced in Linux.
Re:OSX in 2013. (Score:5, Insightful)
http://askubuntu.com/questions/361320/how-can-i-enable-zswap
Oh, so it's not enabled by default in my distro?
According to the kernel documentation, zswap can be enabled by setting zswap.enabled=1 at boot time. Zswap is still an experimental technology
Oh, great, it's experimental.
It has been enabled and disabled at various times throughout release cycles. – Ken Sharp
Wonderful! If I turn it on, it may suddenly turn itself off when I get a kernel update for 14.04.
You know, I often hear "Linux already has that", but it doesn't work right, isn't enabled by default on basically all distros, or isn't configured by default, so 99% of Linux users aren't using it. Saying you have something when it's experimental, not enabled by default, gets enabled and disabled with updates, and isn't easily available to the vast majority of your users is silly.
Re: (Score:3)
No. What you want is zram [wikipedia.org], not zswap.
zram tries to compress pages in RAM, without swapping them to disk. I've only recently enabled this on one of my Debian Jessie boxes (an Intel Core 4 Duo with a motherboard that has a weird memory configuration that in practical terms limits it to 4GB of RAM), however my experience with the equivalent subsystem on OS X has been fantastic. Pages may still later be swapped to disk, but on OS X at least the system aims for a 2:1 compression ratio, holding successfully co
Re: (Score:2)
What you want is zram, not zswap. ... zswap is about compressing the swap file
Not according to the documentation.
https://www.kernel.org/doc/Doc... [kernel.org]
[Zswap] takes pages that are in the process of being swapped out and attempts to compress them into a dynamically allocated RAM-based memory pool.
More precisely (Score:4, Informative)
To be more precise:
- ZRAM creates a block device that's compressed. A bit like a regular ramdisk, except that it is compressed with LZO on the fly.
It can be used for anything a block device can be.
Traditionally that has been compressed swap in memory, but it could be used for anything else (you could put a temporary file system on it).
Swap-on-ZRAM effectively stretches the amount of RAM: dedicate 256MiB to ZRAM and, at roughly 2:1 compression, you get ~512MiB of swap on it, i.e. you can hold an extra ~256MiB of data in RAM.
The drawback is that the swap system has no concept of ZRAM and can't intelligently fall back to the hard disk. You just set up swap partitions on ZRAM and on HDD, and all the swap areas are filled according to their priority.
Thus you can end up with poorly compressible data on ZRAM, or with older, seldom-used data on ZRAM while the more heavily used data is swapped to HDD.
- Zswap: puts an extra compression stage in the swap system between RAM and the disk swap. Instead of swapping pages straight out to disk-based swap, swapped-out pages are first compressed and put in a compressed store in RAM; then, once this store is full, the least-used compressed pages are sent to disk. As the swapping system is fully aware of this (it's an actual extra layer in it), it will correctly elect to write the least recently used part of the compressed store to disk (see the rough sketch below).
Another advantage is that Zswap can use any compression algorithm supported by the kernel. That includes LZ4, which is blindingly fast, so the process is usually I/O-bound rather than CPU-bound.
That means the CPU load doesn't suffer much, and in fact swap performance improves thanks to the saved bandwidth.
- Zcache: like Zswap, but instead of being an extra layer added only inside the swap mechanism, Zcache can add similar intermediate stores to other subsystems too (file cache, etc.).
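As a rough illustration of the Zswap behaviour described above (a compressed in-RAM pool in front of the disk swap, with the least recently used entries spilling to disk), here is a toy sketch. zlib stands in for whichever compressor (LZO, LZ4) is configured, and the structures have nothing to do with the real kernel code.

# Toy zswap: swapped-out pages first land in a compressed in-RAM pool; when the
# pool exceeds its budget, the least recently used entries spill to "disk".
import zlib
from collections import OrderedDict

class ToyZswap:
    def __init__(self, pool_budget):
        self.pool = OrderedDict()        # page number -> compressed bytes, oldest first
        self.pool_budget = pool_budget   # bytes of RAM the pool is allowed to use
        self.pool_used = 0
        self.disk = {}                   # stand-in for the real swap device

    def swap_out(self, page_no, data):
        blob = zlib.compress(data)
        self.pool[page_no] = blob
        self.pool_used += len(blob)
        while self.pool_used > self.pool_budget:
            old_page, old_blob = self.pool.popitem(last=False)   # evict LRU entry
            self.pool_used -= len(old_blob)
            self.disk[old_page] = old_blob       # only now does the disk get touched

    def swap_in(self, page_no):
        blob = self.pool.pop(page_no, None)      # fast path: still compressed in RAM
        if blob is not None:
            self.pool_used -= len(blob)
        else:
            blob = self.disk.pop(page_no)        # slow path: read back from swap
        return zlib.decompress(blob)

Because the eviction decision sits inside the swap path, the pool always pushes its oldest entries to disk first, which is exactly the property plain swap-on-ZRAM lacks.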
Re: (Score:2)
I have been duly corrected!
Yaz
Re:OSX in 2013. (Score:5, Informative)
http://askubuntu.com/questions... [askubuntu.com]
That document is several years old now.
Oh, so it's not enabled by default in my distro?
It appears to be enabled currently in Ubuntu, Fedora, RHEL, and CentOS.
Oh, great, it's experimental.
It was marked experimental in 2013. In the context of a discussion about a feature that hasn't even been introduced in Windows, it's fair to note that Linux developers have been working on such a feature, and made it generally available several years earlier.
Wonderful! If I turn it on, it may suddenly turn itself off when I get a kernel update for 14.04.
It was disabled in Ubuntu while they tried to diagnose instability in a PPC kernel. The feature was not related to the instability.
If you don't like Ubuntu's method of kernel maintenance, by all means, use a different distribution. However, the practices of one company should not be considered a defect in *Linux*.
Saying you have something when it's experimental, not enabled by default, enables and disables with updates, and not easily available to the vast majority of your users is silly.
It would be, perhaps, but you have all of your facts wrong.
Re: (Score:3)
Thanks for the additional information. None of this is readily available in the first links for Ubuntu, zswap, or Linux, and the items I quoted are either current documentation or statements from 6 months ago--so I expected them to be accurate. In addition, the current kernel documentation of zswap STILL lists it as experimental:
https://www.kernel.org/doc/Doc... [kernel.org]
That said, given this info, many of my earlier points were incorrect. I just enabled it on my downstairs desktop. It's still not
Re:OSX in 2013. (Score:4, Insightful)
Fed Troll +1
Re: (Score:2)
Says the clueless guy.
Re: (Score:2)
Actually, that clarifies that the zram feature did not make it to the Linux kernel until 2014, meaning that OSX had it prior to Linux. Yes, I understand some had a feature like it earlier, but full-blown reliable enough implementation to make it to the RTM release of an OS was OSX 10.9 (2013), Linux kernel 3.18 (2014) and finally Windows 10 in what appears to be late 2015, or maybe 2016 given their track record. Guess better late than never.
One further note: in OS X 10.9, the memory compression subsystem was on by default (and can only be turned off from the command line). I know of no Linux distro which ships with zram or zswap enabled by default (although if there is one, hopefully someone will jump in and let me know!). I doubt that either is particularly widely used on Linux, even though the facility is extremely useful and can enhance performance by quite a bit.
Yaz
Re: (Score:2)
Actually, that clarifies that the zram feature did not make it to the Linux kernel until 2014, meaning that OSX had it prior to Linux.
zswap, which is similar to the OS X feature, was merged into the Linux kernel mainline in kernel version 3.11, released on September 2, 2013. OS X 10.9 "Mavericks" was released on October 22, 2013.
But compressed memory isn't new. It's old tech, and quibbling about which of the many implementations was released first is silly as it ignores decades of such products.
Re: (Score:2)
Given that Apple shipped this on 22 October, less than 2 months after the feature was first mainlined in the kernel, I think it is safe to say (on that basis alone) that the Mac had it first.
Re: OSX in 2013. (Score:2)
Fedora updates went out in mid-September which is all but guaranteed to be weeks behind gentoo. What's the point though?
Re: OSX in 2013. (Score:2)
I implemented this in 2005 as a coursework exercise at university.
Software patents (Score:2)
But you probably couldn't put it into large-scale production until the RAM Doubler patents ran out.
Re: Software patents (Score:2)
haha.
Re: OSX in 2013. (Score:2)
Ram doubler? You bought that? I have a bridge to sell...
Re: (Score:2)
Welcome to 1996 [wikipedia.org].
Re: (Score:2)
zRAM's previous name was compcache, and that has been available for Linux since 2008.
https://code.google.com/p/comp... [google.com]
In 2014, zRAM simply became a part of the mainline Linux kernel tree.
Deja Vu all over again... (Score:5, Interesting)
(see, for example, https://www.usenix.org/legacy/... [usenix.org])
The same product was Apple's first to use pre-emptive multitasking.
The product? Newton.
Re: (Score:2)
Gee, an Apple product did this in the 90's, compressing memory segments assigned to processes not currently executing.
So did a Microsoft product called DoubleMem. This is really old tech, and Microsoft has even done it before. They even got in legal trouble over it, since they stole the code from the original creators (no, not Apple) Stacker.
Re: (Score:2)
Oh please... Apple has stolen plenty of ideas from others over the years... Just ask Xerox PARC in Palo Alto...
You think preemptive multi-tasking started with Apple? HA!
https://en.wikipedia.org/wiki/... [wikipedia.org]
Why would anyone think that?? Even Microsoft Windows had preemptive multitasking half a decade before Apple.
Re: (Score:2)
Every system that has interrupts has pre-emptive multitasking. Even DOS had the "terminate and stay resident" system call. It's just that old systems didn't have the resources to multitask complex programs so the methods to do so were undeveloped. We're seeing the same thing now with multithreading, which is still all-manual and thus a constant source of trouble.
ECC (Score:2)
Since compression is the process of reducing redundant information, any bit flip could kill the entire compressed unit.
Re: (Score:3)
A single bit flip can have catastrophic results without compression too, if it's the wrong bit.
Yay! Color options (Score:5, Funny)
, which introduces some color personalization options, ...
You no longer have to put up with the blue screen of death. Now you have the option to have speckled, sparkled, opalescent, translucent, scintillating, coruscant, fluorescent, effervescent blue screens of death.
Re: (Score:2)
I wouldn't be surprised... the Windows 10 BSOD is already silly enough
http://www.tenforums.com/attac... [tenforums.com]
Re: (Score:2)
That's already what the Windows 8 SFOD (Sad Face Of Death) looks like.
SoftRAM *shudders* (Score:4, Informative)
This seems eerily similar to what SoftRAM was trying to do in the mid-90s. Anyone remember this? "Double Your Memory!" was its claim, and in fact the tagline on the box cover. This was back when RAM cost a fortune and everyone needed more than they had in order to run Windows 95. The company made a killing... at first.
https://en.wikipedia.org/wiki/SoftRAM
I actually worked for them, and I saw the whole thing happen from start to finish. It was quite a wild ride. Mark Russinovich and Andrew Schulman took particular offense to the software and set about publicly dissecting it and working feverishly to prove that it didn't work. They thought the whole thing was a scam. I personally witnessed tests that indicated it was doing exactly what it said it did - however, it was difficult to prove any worthwhile effect under realistic working conditions. It seemed that the primary problem was that the program needed to reserve a chunk of memory to do its thing, then it had to make intelligent decisions about what to put in there. If it was wrong (i.e., it compressed something that the user was going to close anyway, and the user opened a new program instead of retrieving the compressed one), the memory was wasted and overall performance (of opening the new application) was diminished. The reduction in overall memory at the outset may have been putting a strain on the system which the codec was unable to outperform. To aggravate things, the software also performed a few well-documented registry tricks to optimize the pagefile settings, which led critics to claim that that was indeed all it was doing.
The proof I saw, for example: if you made a spreadsheet with millions of 1s in each cell, then made a cell calculating the total of all the cells, with SoftRAM the calculation would take a quarter of a second. Without SoftRAM, a ton of the data got swapped to disk and the calculation took like 30 seconds. However, as soon as you put realistic data into the spreadsheet, the improvement basically disappeared because it wasn't compressible enough with the algorithms they were using. They actually hired a very famous compression expert at the time, who liked to talk a lot and bill them at something like $350/hour, and it didn't seem to help at all.
Eventually the company lost a class action suit and had to refund millions back to customers. They were never able to recover, despite using their wealth to acquire and improve various products. A few of the products they put out were good, like the Mac RAM management tool (though it pre-existed, and really, the company ruined the design and marketing for it); others (like BigDisk, which fooled your system into believing multiple disks were one volume) had problems and could be extremely dangerous if used incorrectly.
Ahh, good times.
Re:SoftRAM *shudders* (Score:5, Informative)
Re: (Score:2)
My question is, with mid-level machines coming with 16gig of RAM, why would I need compression at all? What the hell is Windows doing that it needs more than 16gig? Can't the NSA write more efficient spyware?
Not all machines are mid-level (Score:2)
My question is, with mid-level machines coming with 16gig of RAM, why would I need compression at all?
Because not all machines are mid-level. With a lot of smaller machines, especially phones, tablets, and detachable laptops, the 1-2 GB that comes soldered on when you buy it is all you get.
Re: (Score:2)
If you actually worked there, how come you don't know that 3rd parties decompiled it to see that the released binary did jack shit of the sort? It just made the swap bigger, something that could be done without it.
I mean, the program was supposed to compress RAM, but nobody could prove that it did that, while it could be proven that it did nothing of the sort.
it's possible that you were witnessing different software than what they actually shipped. but it might be that the actual software they shipped was fas
Now if only the memory pressure metric worked (Score:2)
My concern with any memory management strategy under Windows is that even the current, disk-based virtual memory system is horrible at determining the "memory pressure" statistic. Under Windows 7, when I have a memory-intensive operation running, I'll hear the disk grinding away paging the whole time, while the system monitor shows physical memory usage at 60%. Even if the other 40% is disk cache, I'm pretty sure the foreground process should take precedence.
The other frustrating scenario is in sleep mode:
Re: (Score:2)
My suspicion is that there's a feature which gets the machine hibernated while sleeping, to recover in the case of a power outage. The feature pretty much kills the usefulness of sleep, though, if every wake is a wake from hibernate.
Assuming your machine is configured properly: when you sleep, as you suspected, memory is written out to disk as insurance in case power is lost. When you come out of sleep (assuming you didn't lose power), Windows resumes from sleep without reading everything back in from disk. If you did lose power, Windows resumes from hibernate and reads memory back in from disk.
Phoning home (Score:2)
As usual, Microsoft will be receiving comments on the new features via the Feedback app.
After our offices relocated we started having a strange, unexplained auto-reboot of Windows 7 systems. Seemingly random, different machines on different days; whether overnight jobs were running or not did not matter. But every other day one machine would have rebooted overnight. Took enormous amounts of digging, but the clue was that it was always between 12 midnight and 12:30 AM. Finally localized it to some service called "Windows Experience". Apparently it was introduced when Vista came along to pop u
Color options like different-coloured windows? (Score:2)
The first one will be build 10525, which introduces some color personalization options
Will I finally be able to have active/inactive windows coloured differently enough that I can tell which is which at a glance? That's been missing since Vista (unless you're willing to disable Aero)
Windows Memory Manager (Score:5, Funny)
Does this mean I have to put HIMEM.SYS and EMM386.EXE back into my config.sys file? I think I still remember some of the MS-DOS edit commands.
Windows Memory Manager and security .. (Score:2)
Compressed swap isn't all it's cracked up to be (Score:5, Informative)
I have a Mac and have therefore had compressed swap for some time now. Theoretically, it's much faster than swap, even if you have an SSD. But there's a tradeoff. When swapping, the disk is busy, but the CPU is free to do other work, although things bog down a lot when thrashing happens. When doing compressed swap, the memory management hogs the CPU, which means it's not free to run other programs, and the system slows down. And thrashing still happens. It's just that your laptop heats up more when it's happening, and things don't get any less sluggish.
Of course, the biggest problem is Safari. I'll get Safari Web Content processes taking up 10GB or more. There's obviously some kind of run-away memory leak going on. Always when my system bogs down, it's Safari that's taking up too much RAM. Quit Safari, and the system becomes responsive again.
Re: (Score:2)
With 4 or more cores in every computer it's pretty rare for the CPU to be a bottleneck these days. In fact it's been rare for the CPU to be a bottleneck for the last 20 years.
So Win10 is still in "alpha"... (Score:2)
For a beta version, it would need to be at least "feature complete".
Re:Great (Score:5, Insightful)
Except swapping to disk
Re: (Score:2)
Exactly. Besides, it's not as if RAM I/O is the bottleneck in most scenarios. RAM is slower than cache, but many, many times faster than HDD. HDD can achieve about 200MB/sec (according to the speeds I've seen out of ddrescue, with SATA3 7200RPM 3TB 3.5" disks) in bulk transfer, although it's a good bit slower in random access. RAM is much faster than that and not penalized by random access (no seek times). The CPU can spend the billion or so cycles it has any time there's a non-trivial hard disk access to o
Re: (Score:2)
You can also do the compression on another core, potentially having minimal impact in terms of performance for workloads that don't utilise all of the CPU.
Huge improvements when the workload includes disk IO.
Re: (Score:2)
Swapping to disk-backed virtual memory is more IO-intensive. I suppose if you had a moderate compression algorithm and preferentially compressed pages, at least for a certain amount of time, rather than pushing them to swap, it might be worth whatever extra CPU and memory-channel time it took to compress and decompress the page.
Re: (Score:2)
Bear in mind that, much like disk compression, there's often a time where the CPU is not the bottleneck and therefore has spare cycles to spend on things like compression. Of course, RAM is so much faster than disk access that the bottlenecks from RAM I/O won't be that significant by themselves (and even if they would be, the data to compress comes from and then goes back to RAM, so that bottleneck persists) but any time you have something that isn't CPU-bound, you can free up some working memory by compres
Re: (Score:2)
CPUs, especially on modern machines that typically have around four cores, are very rarely at 100% utilization in real-world scenarios.
On a laptop/tablet/phone the battery usage is going to be the larger consideration than straight CPU utilization.
Re: (Score:3)
Still, writing to hard disk, or even to SSD or flash, is going to eat just as much battery as compressing stale pages, if not more.
A lot of this depends on the algorithms used. I don't think they would be using a high compression ratio, because that would eat a lot of cycles, but there's probably a sweet spot of compression ratio vs CPU and bus utilization that would be more efficient.
Backlight time (Score:2)
On a laptop or tablet, the backlight probably draws far more juice than the CPU. So if the CPU can complete a task more quickly by not hitting the HDD or eMMC as often, the backlight won't need to be on as long, which saves power. I wonder whether this is a simple enough task to be put on the little cores in ARM's big.LITTLE configuration.
Re: (Score:2)
CPUs, especially on modern machines that typically have around four cores, are very rarely at 100% utilization in real-world scenarios.
On a laptop/tablet/phone the battery usage is going to be the larger consideration than straight CPU utilization.
using compressed ram instead of spinning up a drive to swap is a huge win
swapping on a small system is truly miserable because you're swapping on your one and only block device, shared with all other access
Re: (Score:2)
If you were to design your whole system to this, a la AS400/iSeries/System i, you wouldn't need compression. Single level storage - a single address space for everything, and let the dedicated I/O controller sort out what needs to be in memory at any one time.
Or why not try it with hybrid disk? Use the disc's solid-state portion as a kind of reserved swap space.
Or, just put 8+GB of RAM in your machine and do away with pagefiles altogether. Seriously, I didn't notice any performance impact with Premiere Pro
Soldered-in RAM (Score:2)
Single level storage - a single address space for everything, and let the dedicated I/O controller sort out what needs to be in memory at any one time.
In a single-level storage [wikipedia.org] model, the main RAM acts as a cache for mmap'd disks. The compressed part of RAM would then act as an additional cache level, which reduces the number of capacity misses that need to reach the disk.
Or, just put 8+GB of RAM in your machine
That's fine on a recent desktop, not so fine on an older desktop with few slots or on a compact or detachable laptop with soldered-in RAM.
Re: (Score:2)
It also may not be all that practical on mobile devices. Operating systems like Windows are running on a lot more than just desktops these days.
Re: (Score:2)
haha that's funny.
windows 10 for arm-mobile is not windows 10 for x86 desktop.
they're just branding it. they're trying the same trick they tried with windows 8 and windows phone 8: lying by the boatloads. fact: wp8 browser is not the same as win8 browser despite them hyping it up. the kernel is different: windows ce derivative.
and you know how you can run WINDOWS TEN!!!!! on raspberry pi2? yeah, for varying definitions of windows 10.. windows 10 for IoT isn't exactly the windows 10 ms is spamming me to ins
Re: (Score:2)
1. There are these things called firmware and APIs, to allow access to the functions that hardware can provide, even functions not originally envisaged by the manufacturers. An SSHD manufacturer could provide firmware with an API to allow it to be used as reserved memory cache.
2. 24GB of RAM on a workstation, and it still wants to swap? That's either lazy work practices, sloppy application software, or you need to consider a minicomputer for your work. I used to run a minicomputer with 48 MB of main memory
Re: (Score:2)
That's disk compression not memory compression.
What is it when a Solid State Drive (SSD) is used?
Re: (Score:2)
This has nothing to do with compressing data in RAM, which is what the article is about.
Re: (Score:2)
The article doesn't explain which compression algorithm is being used. It may very well be the zip compression format. .NET has an in-memory compressor that uses the zip compressor.
http://www.codeproject.com/Articles/14204/Better-Than-Zip-Algorithm-For-Compressing-In-Memor [codeproject.com]
Re: (Score:2)
That was a disk compression scheme, while this is virtual memory compression, more like RAM Doubler from Connectix, which worked quite well (on Macs, anyway). It's possible Microsoft acquired some of that IP when they bought Virtual PC.
Re: (Score:2)
I think you mean things like MemoryDoubler and stuff that was around at that time.
DriveSpace was purely disk-based. There were products that compressed pages in-RAM, and have been since the DOS days, and still are, and are present on every OS if you look hard enough.
DoubleGuard saved my bacon (Score:2)
I'm not afraid. DriveSpace had the DoubleGuard feature, which patched MS-DOS to add canaries [wikipedia.org] around critical file system data structures in RAM. This saved my bacon a few times when I was developing graphics code and accidentally introduced undefined behavior.
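For anyone who never ran into the term: a canary is just a known byte pattern written next to a critical structure and re-checked later; if the pattern has changed, something overwrote memory it shouldn't have. A toy sketch of the idea, unrelated to DriveSpace's actual code:

# Toy canary guard: surround a critical buffer with known sentinel bytes and
# verify them before trusting the buffer again.
CANARY = b"\xDE\xAD\xBE\xEF"

def guard(data):
    return bytearray(CANARY + data + CANARY)

def check(guarded):
    if not (guarded.startswith(CANARY) and guarded.endswith(CANARY)):
        raise RuntimeError("memory corruption detected near guarded structure")
    return bytes(guarded[len(CANARY):-len(CANARY)])

buf = guard(b"critical filesystem metadata")
buf[0] = 0x00                      # a buggy write clobbers the leading canary
try:
    check(buf)
except RuntimeError as err:
    print(err)                     # caught before the corruption spreads further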
Re: (Score:2)
If your Win10 device starts to melt, don't worry. That's just the CPU compressing/decompressing as fast as it can.
<joke> PV=RT don't worry the decompression will absorb the heat generated compressing it. </joke>
Memo: The O/S wars are over. (Score:2)
When you've been using Linux since 1996 like me
What makes people like you think that the O/S should be held responsible for buggy applications? When I read that sort of emotional nonsense about operating systems it just makes me think the author hasn't got a clue about anything outside the intellectual cage he has built for himself.
Inertia is a powerful force. It's not always easy to learn something new
The problem with most of the warriors in the O/S wars is they divorced windows 20yrs ago but are still bitter about the split. If you had been paying attention you would realise that windows is not the same O/S you split up w
Included apps (Score:2)
What makes people like you think that the O/S should be held responsible for buggy applications?
If applications are included with the default install of an operating system distribution, such as Edge with Windows 10, then of course the distributor is responsible for them. And if the bug is in the standard library provided with a compiler, it's the fault of the compiler publisher, which is often also the operating system publisher.
ObZram: Once RAM compression becomes commonplace, a memory allocator that zeroes out recently freed memory will be more space-efficient. A standard library that does not do t
Re: (Score:2)
If you had been paying attention you would realise that windows is not the same O/S you split up with in the 90's, it grew up when XP was released, you would be wise to follow its example, nothing says "bitter" as clearly as someone trying to belittle the people who learn to like/love their ex.
I started with Linux back in the very early 0.98 kernel days with a Soft Landing distro that installed from many floppies and needed nearly everything configured by hand. Things are much different now.
Windows too, indeed has evolved. XP wasn't all that bad, and that was the last Windows I used for many years until a month or so ago I finally upgraded hardware, and decided to leave Windows 8.1 on the machine alongside the Linux Mint installation I set up for dual boot. I learned that Windows had evolved into
Re: (Score:2)
My normal advice would still be, that RAM today is so cheap, that you should always have enough to avoid paging
Can you get 8 GB in a 10" laptop, or are you stuck with, say, the 2 GB in an ASUS Transformer Book T100?
Re: (Score:2)
There are a few small devices that can be upgraded or come with a decent amount of memory. Acer's E3-112-C1T9 is a celery-based 11.6" that comes with 4 gigs of RAM and a 500 gig hard drive for $260ish. There's also the ES1-111M-C40S with a 32 gig eMMC drive for $145 on Newegg right now. Specs say both can have their stock memory upgraded to 8 gigs. One stick in single-channel mode, tho.
I got an Asus T205XA for $130 that's good enough and it's smaller and lighter than either of those Acers. Also has a m