24 Cores and the Mouse Won't Move: Engineer Diagnoses Windows 10 Bug (wordpress.com) 352
Longtime Slashdot reader ewhac writes: Bruce Dawson recently posted a deep-dive into an annoyance that Windows 10 was inflicting on him -- namely, every time he built Chrome, his extremely beefy 24-core (48-thread) rig would begin stuttering, with the mouse frequently becoming stuck for a little over one second. This would be unsurprising if all cores were pegged at 100%, but overall CPU usage was barely hitting 50%. So he started digging out the debugging tools and doing performance traces on Windows itself. He eventually discovered that the function NtGdiCloseProcess(), responsible for Windows process exit and teardown, appears to serialize through a single lock, with each pass through taking about 200 microseconds. So if you have a job that creates and destroys a lot of processes very quickly (like building a large application such as Chrome), you're going to get hit in the face with this. Moreover, the problem gets worse the more cores you have. The issue apparently doesn't exist in Windows 7. Microsoft has been informed of the issue and they are allegedly investigating.
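For readers who want a feel for the workload involved, here is a minimal sketch (not from the article) of the kind of process churn a heavily parallel build produces: many worker threads each spawning and immediately reaping short-lived processes. cmd.exe is used here as a stand-in for a compiler step, and the thread and iteration counts are arbitrary.

```cpp
// Hypothetical reproducer sketch: hammer process creation and teardown from
// many threads at once, roughly what a 48-way parallel build does.
#include <windows.h>
#include <thread>
#include <vector>

static void churn(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi = {};
        char cmd[] = "cmd.exe /c exit";   // stand-in for a short-lived compiler step
        if (CreateProcessA(nullptr, cmd, nullptr, nullptr, FALSE,
                           CREATE_NO_WINDOW, nullptr, nullptr, &si, &pi)) {
            WaitForSingleObject(pi.hProcess, INFINITE);  // let it run to completion
            CloseHandle(pi.hThread);   // then tear it down; process exit is where the
            CloseHandle(pi.hProcess);  // article says the ~200 us serialized lock is taken
        }
    }
}

int main() {
    std::vector<std::thread> workers;
    for (int t = 0; t < 48; ++t)           // one worker per hardware thread
        workers.emplace_back(churn, 200);
    for (auto& w : workers) w.join();
}
```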
The lock cycles were avg 200 us each (Score:5, Informative)
Not 200S each, which is off by a factor of one million. But, hey.
Re: (Score:2, Informative)
5 thousandths of a second
Re: (Score:3)
er no...
2 us is 200 millionths or 0.0002 seconds.
in fractions that would be 2/10,000ths; 1/5000th
It is important to note that 5 thousandths is NOT the same as 1/5000th; 5/1000ths would be 0.005 seconds; which is out by a factor of 25.
But simply expressing it as a fraction isn't american enough. It should be like their wrench sizes... so 200 us is about 7/32768ths second.
Re: (Score:2)
A millisecond (ms) is 1 thousandth of a second (0.001) or 1/1,000
A microsecond (us) is 1 millionth of a second (0.000001) or 1/1,000,000
Re: (Score:2)
ah i see i -did- have a typo; as per the thread topic, the subject is 200 microseconds, which is the number i had in my head and worked through with all the maths.
However i wrote down '2 us' instead of '200 us' when i started my post. That was just a typo -- i knew it was 200 us.
Re: (Score:2)
Use Unicode for micro, yo insensitive clod!
We're on Slashdot, not Soylent; there's no Unicode support here. You don't expect those who wrote Slashcode to be able to enable a feature that's been ready for a decade, do you?
Re: (Score:2)
Slashcode and /. actually support Unicode just fine. They implement a whitelist of allowed Unicode codepoints because there was a LOT of abuse of Unicode to basically screw up the webpage. From excessive decorations of characters that cause any web browser to render 10000 pixels up and down the page unreadable to messing with the page lay
Re: (Score:2)
Yes. A whitelist that barely lets most of ASCII through. That hardly counts as support.
Re: (Score:3)
The world should stick to metric.
200uS is one five thousandth of a second
Actually, it's one five-thousandth of a siemens; case matters. I guess this whole newfangled upper and lower case thing is too hard for those writing their posts on an ASR-33.
Re: (Score:2)
5 thousandths of a second
That would be 5 milliseconds. 1/1000th of a second is 1 millisecond.
This is 200 millionths of a second, or 1/5th of a thousandth of a second.
This is also why engineers prefer engineering notation, so 200us or 200x10^-6. I wish more calculators supported engineering units.
Re: (Score:2)
Re: The lock cycles were avg 200 us each (Score:3)
Re: (Score:2)
What are you talking about? UTF-8 is over 20 years old. HTML is even older. It's one thing not to use the newest emoji, but saying you won't use encodings that haven't changed in 20+ years because they might change in the future isn't a great reason.
Yet here we are, in the perfect example of an encoding not working / not being supported.
ASCII for life.
Re: (Score:2)
What are you talking about? UTF-8 is over 20 years old. HTML is even older. It's one thing not to use the newest emoji, but saying you won't use encodings that haven't changed in 20+ years because they might change in the future isn't a great reason.
Especially when Unicode guarantees that these characters are _not_ going to change.
The issue apparently doesn't exist in Windows 7. (Score:2)
Windows has always been unresponsive to user input (Score:5, Informative)
We just don't have priority...
Re: (Score:2)
Yeah, I spend way too much time watching the Windows wheel spin around for no apparent reason other than the OS's inability to use more than one core.
I don't get it. (Score:5, Interesting)
Unless moving the cursor also depends on terminating a bunch of processes, and hangs until that task is finished, wouldn't the inefficiency imposed on the build process be expected to keep the GUI more responsive, by preventing it from occupying as much CPU time as it otherwise would?
Am I just confused? Does keeping the desktop and cursor drawn actually involve lots of time sensitive process killing? Does this indeed not make sense?
Re:I don't get it. (Score:4, Informative)
The Windows GUI interface actually uses a separate process to update the mouse on the screen. Due to various historical reasons (compatibility with old applications, mostly), it was required to recycle this process every time the mouse moved, as the process could get a memory leak (which couldn't be fixed properly, in order to preserve compatibility with the aforementioned applications). Therefore, every time the coordinates of the mouse change, the process has to be killed and replaced, therefore putting it through the same lock that this build process is hogging. Combine that with the 200 second delay to get through the lock, and the responsiveness is easily explained.
It's worth it to keep compatibility with the "After Dark" flying toasters screensaver, though.
That design & implementation is so bad (Score:3)
It's not even wrong (to quote a famous scientist about a really ill-formed idea).
At this point with multi-core computers, the GUI and mouse etc should be on a completely separate core that is managed somewhat separately than all of the others.
Re: (Score:2, Insightful)
I didn't believe that number for the first microsecond. Where was your brain? Stuck on "easily explained"?
From the original:
Even Microsoft would notice 24 cores sharing a 200 s group hug.
If the question had been
Re: (Score:2)
The slashdot summary originally said "200S" instead of "about 200 microseconds"
It was silently changed without an update message saying so.
I too was very confused when I first read it, both because a capital S with no space isn't any standard notation I know of, and because the only interpretation, 200 seconds, made no sense at all.
Re: (Score:2)
What an incredibly bad design!
Re: (Score:2)
Yes but I guess your mother and father did their best trying to make you a smart boy(?).
Re: (Score:2)
+1 Funny
http://www.tothepc.com/pic/fak... [tothepc.com]
Re: (Score:3)
You are not confused. A sane kernel does not have this issue. A sane GUI stays responsive even with this issue. Unfortunately, Win10 does not have either.
Re: (Score:2)
You've never had the UI go unresponsive in X11 under heavy load?
Re: (Score:3)
You've never had the UI go unresponsive in X11 under heavy load?
FTFA it appears to go unresponsive without a heavy load - the cores are unloaded. So, no, I've never had an unloaded Linux/BSD machine get unresponsive with X11.
Re: (Score:2)
I've had Linux go unresponsive without a heavy load - back in the bad old days of a decade and a half ago, untarring the Linux kernel itself would stall out the machine. The CPU was busy, but not so much - it was pure I/O locking up the kernel. So for the 5 minutes or so it took for the kernel to untar back in those days (this was when you didn't g
Re: (Score:2)
Let's be honest though, only the old commercial unix machines could do this in the 90s (IRIX, Solaris are two good examples). I don't use a GUI on my linux machines, so I don't know how well written the GUI is there. Now, neither Apple nor MS is capable of making a responsive GUI.
Re: (Score:3)
No mainstream operating system has responsive GUIs under heavy load, especially not under heavy I/O load. GNU/Linux goes down very rapidly, Android is sluggish out of the box, and OS X has its spinning beachball of death. They are designed incorrectly.
As a test, you may surf to this [haschek.at] page to see how your system handles an embedded zip bomb. (Warning: Don't click this link unless you're willing to kill your browser session or even hard-reset your machine.)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Short answer: context switching.
I'm sure others can pipe in here with more detailed explanations because I am not that familiar with the Windows kernel, but the basic gist of the problem is that calls to this function (NtGdiCloseProcess) cause it to acquire a global kernel lock which blocks thread execution...for ~200 microseconds, usually. The problem in this scenario is that around 5,768 calls to this function are being serialized onto the Ready Thread call stack which, combined, are delaying all other pr
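Plugging the numbers from the trace into a back-of-the-envelope calculation shows why the hangs last "a little over one second" (rough arithmetic only, using the ~5,768 serialized exits and the ~200 microseconds per pass quoted above):

```cpp
// Rough arithmetic: serialized process exits times lock hold time per exit.
#include <cstdio>

int main() {
    const double exits   = 5768;   // serialized NtGdiCloseProcess calls seen in the trace
    const double lock_us = 200;    // ~200 microseconds per pass through the lock
    std::printf("total stall ~ %.2f s\n", exits * lock_us / 1e6);  // prints ~1.15 s
}
```

About 1.15 seconds of forced serialization, which lines up with the mouse freezes of just over a second described in the summary.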
Re: (Score:2)
In summary, the delays in responsiveness and interactivity are being caused by context switches, which is the usual culprit. It has nothing to do with the speed and number of CPUs because it is not a CPU resource problem. It is purely a kernel scheduling issue.
It has a bit to do with the CPUs: the reporter had a machine with 24 cores that actually managed to create and destroy 5,000 processes per second. My 4-core machine would have created and destroyed fewer than 1,000 processes per second, so no problem.
Re:I don't get it. (Score:5, Interesting)
It's easy to criticize from the outside, but the Linux kernel has historically had kernel locks that created similar problems, such as the "big kernel lock", removed ca 2011 (ie: not ancient history).
https://kernelnewbies.org/BigK... [kernelnewbies.org]
As noted in the article, this particular locking problem appeared in Windows 10 and wasn't present in Windows 7, so the balancing acts between the fine-grained locking mechanisms, thread performance, and backwards-compatibility are clearly challenging to maintain. Not excusing; just observing. Windows has never been known for its ability to support massive numbers of parallel threads, so it is not surprising that previously overlooked problems can appear or become exacerbated in these situations. Many people, even here on Slashdot, laud Microsoft for the generally excellent backward-compatibility in Windows, and criticize the Linux kernel for being generally horrible at it. But here you go, a pretty nice example to illustrate that backwards-compatibility has a cost.
Re:I don't get it. (Score:5, Informative)
Yes, you are making excuses; that's exactly what a tu quoque fallacy is.
The big lock was removed in 2011; Microsoft produced a regression on an already bad design much more recently. That's a sign of incompetence, period.
Also, the Linux kernel has bad backwards compatibility, which is why things like drivers and such should be upstreamed as much as possible and built in the main tree, but Linux userland still happily runs old Unix software, so you are overstating that case as well.
Re: (Score:2)
The serialization happens in the kernel, which means that hardware events are not being processed and transmitted to the mouse driver, which in turn isn't informing the process responsible for drawing the cursor.
Re: (Score:2)
It doesn't work that way. Windows is designed as a message-based operating system. Any change in position of the mouse cursor is posted as messages to the queue. Whoever gets to read those queued messages is whichever program currently owns that screen real estate. When the ownership of the window/screen changes, the change is instantaneous as far as the message queue is concerned (although the actual screen drawing process you see might have a noticeable delay due to hardware/buffer constraints).
As far as "
Re: (Score:2)
The graphics subsystem was outside the NT kernel until NT 4.0. NT 3.51 was as close to a good true multiuser operating system as Microsoft is likely to ever come.
Marketing's response (Score:2)
Marketing - "How do we monetize this...."
Engineers - "You mean after we fix it?"
Marketing just begins laughing - "Only if it gets more money than leaving it in and marketing it as a feature"
I remember BeOS (Score:5, Insightful)
Re: (Score:2)
Amiga.
Re: I remember BeOS (Score:4, Interesting)
The Amiga could scroll a "screen" vertically with zero tearing (and very little effort), because it was just updating a memory pointer during a horizontal retrace interval. Ditto, for updating the mouse pointer (it was just a sprite). Both worked even when the app (or OS) died because it was serviced semi-independently of the OS as a whole during the vertical retrace interrupt.
Intuition-rendered windows were another matter entirely... I think window gadgets & outlines were rendered in the vertical retrace interrupt, but contents & outside-erasures depended on the app and/or os running properly.
Likewise, the mouse pointer was only robust when it was a 320x200/400 sprite... apps like DeluxePaint & WordPerfect (which needed more precision on a 640x200/400 screen than sprites could provide) that used XOR'ed software-rendered overlays could still crash (though if you clicked outside of the crashed app's window, the sprite-rendered pointer returned)
AmigaDOS was groundbreaking, but it still had some serious issues of its own. Like an event queue that used single-bit flags, allowing users to click BOTH 'ok' AND 'cancel' if the app stalled/crashed with a dialog on-screen.
Re: (Score:2)
The Amiga could scroll a "screen" vertically with zero tearing (and very little effort), because it was just updating a memory pointer during a horizontal retrace interval.
Yes but no. If you had used CygnusEd [wikipedia.org] (a text editor), you'd know what it's like to have frame-perfect "kinetic" smooth scrolling even under CPU load. And scrolling text in a window is a little bit more complex than just updating a pointer.
Re: (Score:2)
Until Windows Vista, sprites were used for the mouse pointer on PCs too, including in Windows. VGA cards of the day supported hardware acceleration in the form of a single sprite used for the mouse pointer. Most Amiga graphics cards, which used the same chips, also supported that single sprite for the mouse, but the Picasso96 driver did also support a "soft pointer".
Having recently booted up my old Amiga system, one thing that struck me was that everything freezes when you open the drop-down menus. I had a fi
Re: (Score:2)
Re: (Score:2)
I don't have window tearing issues on this laptop I'm using.
Windows 8.1. Even with two of the monitors connected via a docking station connected to a single USB 3 port.
Re:I remember BeOS (Score:5, Interesting)
I was accomplishing this on 486DX2 hardware using OS/2 in ~1994, and by 1995 on a P120.
Several years ago I stopped by a buddy's retail establishment. He was transitioning his network to Ubuntu on more modern hardware (with OS/2 in a VM), but still had an old and crusty OS/2 machine (probably a K6-2, but maybe a DX4) on the bench by the back door.
This was the last time I ever saw such a thing in the wild.
It was remarkably snappy doing normal, productive things -- scanning documents, browsing web pages, writing and viewing proposals -- just like it was when it was built. (And what window tearing?)
Sometimes I think that the more abstraction layers we add, the slower things get. I think this, coupled with programmer laziness (and/or pay based on lines of code), makes human-interactive things continue to behave just as slowly as they have for ~20 years.
Do we even use accelerated 2D desktop graphics anymore, or are we completely back to the bad old days of every application drawing into a dumb framebuffer?
Re: (Score:2)
"I once preached peaceful coexistence with Windows. You may laugh at my expense - I deserve it."
-- Jean-Louis Gassee, CEO Be, Inc.
Re: (Score:2)
Furthermore, in BeOS user input was king: no matter what shit the OS was doing, mouseclicks and keypresses trumped all else. Boy did BeOS run smooth (from my perspective). Sure, some files got copied 100 milliseconds later - nobody gives a fuck!
Re: (Score:2)
And the K6 wasn't released until 1997, with an initial clock speed of 166 MHz. It wasn't until 1998 that it could achieve clock speeds of 300 MHz. It was late 1998/early 1999 before the 400 MHz K6-2 was released, which was also around the time you could max out the FSB with 512 MB of PC100 SDRAM (the pre-DDR era). So yeah, I think that while the GP's point is valid, his dates are a bit off.
Re: (Score:2)
s/1996/1993/
and you're right. By 1996 it was a bit small.
Re: (Score:2)
Yeah, I'm going to bet it was regarded as an awesome HDD in 1992. I remember thinking the SGIs were computers from about 5 years in the future. It was more like 2-3.
Re: (Score:2)
Re: (Score:2)
??? What do you mean?
Only 24 Cores? (Score:5, Funny)
2 Cores for DRM
2 Cores for DRM Protection
2 Cores for Telemetry
2 Cores for Telemetry Protection
2 Cores for Genuine Advantage
2 Cores for Genuine Advantage Protection
2 Cores for Driver Signing Validation
2 Cores for Driver Signing Validation Protection
2 Cores for Cortana
2 Cores for Cortana Telemetry
2 Cores for Cortana Telemetry Protection
1 Core for the Base OS
1 Core, at 25% for user processes
So it's not just me (Score:2, Interesting)
In my basement office I have six computers I use regularly. Two are running MacOSX, one is running Ubuntu, two are Windows XP, and one is Windows 10. I just went around the room and checked uptimes. All of them were up for more than 3 months, except the Windows 10 computer. This one computer is supposed to be pretty fast compared to the rest but it gets bogged down where I feel compelled to reboot it. It also has the nasty habit of demanding to reboot when I'm trying to get work done, but that's a diff
Re: (Score:2)
So Windows 10 is the only one that is actually patched? <Ducks> :)
Re: (Score:2)
You have something of a point there about updates. I'll update the computers when it is convenient for me, like when I'm forced to reboot due to a power outage. I just checked and it looks like an update is waiting for a reboot on one of my Macs. It seems only Windows 10 has "critical" monthly updates that require a reboot.
Sure, XP probably has security problems where it should not be on the internet but due to their age I don't go web surfing with them a lot, and they are behind a firewall, so risks are mini
Re: (Score:2)
> Serious question, is there something I should be doing different to keep these XP machines from becoming a security problem?
Don't have them be a member of your domain, have unique passwords on them, don't access the internet from them, don't check email on them, and don't allow internet access on them.
There is still the possibility that they'll be compromised in a lateral traversal attack, but this minimizes the probability that they'll be the initial attack vector.
Re: (Score:2)
Re:So it's not just me (Score:4, Informative)
People may ask why I run Windows XP. It's because I have some old software that I like and it won't run on my newer Windows 10 computer.
It's why people virtualize old PCs now. You run your old PC in a window.
One of the Windows XP computers claims to have been on for over 15 years.
32 bits of milliseconds is about 49 days. Windows XP is a 32-bit system, and a common way to measure how long it's been up is by issuing a system call which returns the number of milliseconds since system startup.
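The counter being described is presumably GetTickCount(), which returns an unsigned 32-bit millisecond count; the ~49-day wraparound falls straight out of the arithmetic (sketch):

```cpp
// 32-bit milliseconds wrap after roughly 49.7 days, so any "uptime" far beyond
// that read from such a counter has wrapped many times over.
#include <cstdio>

int main() {
    const double wrap_ms = 4294967296.0;   // 2^32 milliseconds
    std::printf("wrap after %.1f days\n", wrap_ms / 1000 / 60 / 60 / 24);  // ~49.7
    // GetTickCount() returns a DWORD like this; GetTickCount64() (Vista and
    // later) returns a 64-bit count and effectively never wraps, but XP-era
    // code is stuck with the 32-bit version.
}
```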
Happens on Win7 w/Chrome startup (Score:2)
This has happened when starting Chrome ever since I first tried Chrome.
Tried limiting Chrome to 3/6 cores and even then the mouse goes jerky.
It may not be the exact same cause, but it is the exact same symptom.
Process Lasso (Score:2)
Why a toy OS on that system? (Score:3)
Why turn an expensive system into a limited toy?
If you need to run MS-compatible stuff, MS Win7 and various MS server systems are available.
Probably out of resources (Score:2)
Fork Bomb ! (Score:4, Insightful)
Obligatory (Score:2)
Re:Obligatory: Windows Source Code leaked (Score:2)
I was going to say something like this: https://www.neowin.net/forum/t... [neowin.net]
Since when.... (Score:2)
Since when does moving the mouse involve closing a process? Oh wait! Microsoft Windows.
Re: (Score:3)
Not a program, a process.
Re:Not just when closing a program (Score:5, Insightful)
That could be related to the hardware acceleration. If I had to guess, the Windows desktop would need to wait for the game to release its GPU resources and load its own into GPU memory.
Back in the XP days, going from game to desktop was very quick, but going from desktop back to game was very slow. When Vista came along, the OS started using the GPU to accelerate the desktop. Made it slow both ways.
Re: (Score:2)
Made it slow both ways
And more expensive. Oh, the irony!
Re: Not just when closing a program (Score:4, Interesting)
There's also the matter that until somewhat recently, most lower-end GPUs were designed to accelerate lower resolution and/or shallower bit depths than the max the card could use for Windows. For example, the card might have allowed up to 2560x1600 @ 24/32bpp, but only supported hardware 3D acceleration up to 1280x800@15/16bpp. Even when resolution finally caught up, bit depth w/acceleration was stuck at 16bpp until well into the Windows 7 era. This is why so many computers with semi-ok gaming specs still couldn't do Aero Glass transparency when Windows 7 came out... they couldn't hardware-accelerate 32-bit color.
The problem still semi-persists among many phones & tablets. If an Android device seems to get blurry for a moment during transitions, it's not your imagination... Android is dropping to lower-res/fewer colors to accelerate the transition, then going back to high-res/color dumb framebuffer mode when it's done (and text suddenly becomes sharp & clear a moment later)
Re: Not just when closing a program (Score:5, Informative)
Android is dropping to lower-res/fewer colors to accelerate the transition
Or did it swap the high-res texture for a low-res one to save memory while it was not in use?
GPUs have been able to do 32-bit acceleration for a long time.
Semi-OK gaming video cards that didn't support DirectX 9 couldn't run Vista Aero because it used the DirectX 9 API, required hardware-based Pixel Shader 2.0 (not emulated in the driver) and at least 128MB of RAM. Not because of bit depth.
Re: (Score:2)
GPU speeds things up quite a bit. Even with Intel graphics. Throwing everything on the CPU makes things like a browser a chop-chop fest.
Re: Not just when closing a program (Score:2)
Re:Windows... (Score:5, Insightful)
More specifically, why are OSes not designed, and computing hardware not designed, so that the GUI cannot be slowed down by other slow processes, process switching, or I/O / virtual memory thrashing.
The most brain-dead design-avoidable situation in the computing universe is where my computer is thrashing due to some resource over-use, and the UI is inoperable so I can't fix the problem e.g. by killing processes/programs. DOH!
The UI and user input devices should be a completely separate set of processes and memory from the rest of application processing. It should operate as a service, through data pipelines, to the rest of the applications. It should be completely separate, in terms of resource management. Or failing that, certain aspects of the GUI, such as program kill controls, should be highly prioritized over pretty much everything else.
Again, slow and over-used everything else should not slow the UI and user input processes. This is basic.
Re:Windows... (Score:4, Insightful)
I'm glad you've volunteered to help with their concurrency programming. Good luck; it's not easy.
Usually at some point, access to shared resources needs to be controlled. There are easy ways to do it and there are hard ways. Easy isn't fast, but it's predictable and less error-prone.
Re: Windows... (Score:5, Insightful)
Re: (Score:3)
It is how I got into UNIX.
Re: (Score:2)
Re:Windows... (Score:4, Informative)
Again, slow and over-used everything else should not slow the UI and user input processes. This is basic.
The oversimplified, but short answer is that there is no such thing as a multiprocess CPU. All CPUs can execute on only a single thread per cycle. The kernel exists to allow multiple processes to be resident and to provide the illusion of multiple thread execution. In other words, the essential function of the kernel is scheduling, and in doing this the kernel has to make decisions about process priority that impact responsiveness and resource utilization in often diametrically opposite fashions. To gain responsiveness, a process that is further down the execution queue has to preempt processes further up the queue, delaying their execution. This has a negative impact on overall thread performance as your CPU will be mostly underutilized if there is a lot of preempting going on. If the kernel inhibits (or prohibits) preempting, it can more efficiently utilize your CPU, allowing many threads to get as much CPU time as possible, but this will have a very negative impact on responsiveness.
UI and user input processes are just processes to the kernel. You can, of course, just give UI and user input processes the highest possible priority at all times, but this is not automatically the best thing to do in every circumstance. For example, you probably don't want your audio stream in the background to stutter or stop playing just because you started moving the mouse. And if you are flushing a file to disk, you probably want that operation to complete atomically, rather than be interrupted by a pop-up dialog, because corrupted filesystems tend to make users pretty unhappy.
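For what it's worth, "just boost the UI" is a one-liner on Windows; the trade-offs described above are exactly why it isn't done unconditionally. A sketch (the specific priority values are arbitrary choices, not a recommendation):

```cpp
// Sketch: raise the current (UI) process and thread priority on Windows.
// Doing this blindly starves background work, which is the trade-off the
// parent comment is describing.
#include <windows.h>

void BoostUiPriority() {
    SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
}
```

Note also that no priority boost helps when every thread ends up queued behind the same kernel lock, which is the situation in TFA.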
Re: (Score:3)
Re: (Score:2)
I don't really know enough about Windows process scheduling, but the basic concept of compartmentalising various process types isn't new.
The level of control in minicomputer and mainframe operating systems is astonishing when viewed from the Windows/Linux/Apple world.
My favourite OS for control of user processes was OS400/IBM i. 90* priority levels, batch/spool/compile processes were automatically lower priority than interactive sessions, etc, etc
* there were actually 100 levels but the highest 10 were rese
Re: (Score:3)
Because the GUI and 3D graphics were considered bolt-ons to an existing OS kernel. Not all systems may have 3D acceleration. Some servers even avoid having a desktop as that is considered a security risk.
When the GUI is in use, the user input processing becomes the dominant process; what event happened, which widgets have been changed. A desktop with a good number of windows might have 1000+ widgets, all of which have icon images for various states. TrueType and Unicode fonts are converted into images as we
Re: (Score:3)
...and that is all completely avoidable.
This is the result of bad design, not any inherent limitation in the hardware or lack of DMA use. The PCI bus is involved in all cases (DMA doesn't transfer things magically).
Not only is there no need to keep copied data in memory (or even swap out other processes to increase the disk cache like Windows is fond of doing), but you can even turn off caches for copy processes to avoid trashing them.
Furthermore, rules can be created to govern when something is worthy of c
Re: (Score:2)
More specifically, why are OSes not designed, and computing hardware not designed, so that the GUI cannot be slowed down by other slow processes, process switching, or I/O / virtual memory thrashing.
That is EXACTLY how BeOS was designed! In the 1990s! It's pathetic, really, that in 2017 no other OS has figured out this shit yet. Not even Linux, in spite of all the development going on there. I guess with Linux the excuse is that it's a server OS, not a desktop one.
Re: (Score:2)
There are many cases where the back end is more important than the GUI processing. This is an RT-class problem; most of the products that have this issue just dedicate hardware to the back-end processing and separate the GUI using message passing, DDS, or some other way of clearly marking the boundaries.
Having a single hardware set with processors doing both GUI and high-end back-end processing ends up having issues. Graphics chips are a result of taking the back-end software and pushing it to hardware to spe
OS and process scheduling (Score:5, Insightful)
More specifically, why are OSes not designed, and computing hardware not designed, so that the GUI cannot be slowed down by other slow processes, process switching, or I/O / virtual memory thrashing.
That's why OSes such as Linux (not even over-optimized for responsiveness) have an entire zoo of CPU schedulers and IO schedulers
(with BFQ being the latest popular IO scheduler for responsiveness),
and Linux specifically has the non-POSIX "cgroups" extension that enables it to arrange the various processes into a tree hierarchy, with each node supporting its own scheduling tactics between its children (see demos of 256 GCC compiler jobs launched in parallel with the GUI still responsive).
(That's also part of the reason why modern complex managers like systemd are getting popular: they have modules to handle all this - sessions, seats, etc. - concepts that POSIX lacks.)
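As a rough illustration of the cgroups idea mentioned above (assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup, the cpu controller enabled for the parent group, and enough privileges to write there; the path and weight below are made-up examples, not a recipe):

```cpp
// Sketch: put the current process into its own cgroup with a low CPU weight,
// so a flood of build jobs started from it can't starve everything else.
// Assumes a cgroup v2 mount at /sys/fs/cgroup and permission to write there.
#include <fstream>
#include <string>
#include <sys/stat.h>
#include <unistd.h>

int main() {
    const std::string cg = "/sys/fs/cgroup/buildjobs";
    mkdir(cg.c_str(), 0755);                          // creating the dir creates the cgroup

    std::ofstream(cg + "/cpu.weight") << 50;          // below the default weight of 100
    std::ofstream(cg + "/cgroup.procs") << getpid();  // move this process (children inherit it)

    // exec the build from here; its parallel compiler jobs now share the
    // group's CPU weight instead of competing one-on-one with the desktop.
    execlp("make", "make", "-j48", (char*)nullptr);
}
```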
BeOS was an OS whose entire purpose was exactly that: no matter what, keep the UI responsive and avoid media stuttering.
(Well, initially running on an architecture with less expensive context switches did also help a lot.)
The UI and user input devices should be a completely separate set of processes and memory than the rest of application processing.
Actually, in most OSes, they already are.
It should operate as a service, through data pipelines, to the rest of the applications.
That's a tiny bit less obvious. Some graphical tool-kits run their UI in the main thread.
Some software would need to have the processing moved into a background thread or process.
Web apps are an obvious counter-example, where the UI is an entirely different process (and, depending on where the server is executed, even a different machine).
Or failing that, certain aspects of GUI, such as program kill controls, should be highly prioritized over pretty much everything else.
Again, slow and over-used everything else should not slow the UI and user input processes.
And then you'd complain that any complex calculation (compression of a video) takes ages, because the process is constantly being interrupted to give time to your GUI and mouse (i.e., to the various drivers and daemons and libraries processing USB and/or Bluetooth) even if they don't need it.
Balancing responsiveness (i.e., constantly interrupting everything just to be sure that everyone gets their share of CPU cycles and IO) and performance (running as uninterrupted as possible so the task finishes as fast as possible) is a complex dark art.
But, yeah, Windows is significantly worse at this compared to everyone else.
Which also explains why you'll never see any deployment of Windows on the TOP500 (it's nearly Linux all the way, with a few exceptions like BSD - i.e., other Unix-type OSes), and Azure is the only known cloud running it.
It's also why Linux is popular in most embedded systems (modems, routers, smartphones, tons of IoT gizmos, smart TVs, etc.) - basically, nearly anything with a CPU that is not a desktop computer is likely to run some Unix-like kernel such as Linux.
It's not that Linux is magic, it's that Windows is *THAT* awfully bad at anything.
Firewalling a GPU (Score:3)
Is there such a thing as firewalling or sandboxing a GPU for that?
Yes and no.
Yes, there's a possibility to firewall against hardware.
- that's what IOMMU is for on modern processors.
So hardware with DMA (Direct Memory Access, i.e. devices that can directly read RAM, e.g. FireWire, InfiniBand, 10-Gigabit Ethernet, Thunderbolt, etc.) can be isolated and cannot be used to dump the whole PC memory (see the earlier FireWire DMA attack on Windows).
- modern GPU processors have even implemented their own MMU layer for additional fencing - so that 3D game that you've downloaded (or even We
Re: (Score:2)
MS has always had pretty bad engineering. This is just one place where it really shows.
Re: (Score:2)
Windows doesn't do fork. That is a technical choice.
Complaining that using fork on Windows is slow is funny (shows that the user thinks everything is Unix). Responding that it is due to bad engineering is hilarious and very sad at the same time. It shows that you are arrogant enough to think you understand the issue while completely failing to do so.
Re: (Score:3)
Windows not forking is not bad engineering? Funny. Architecture falls under engineering as well, you know.
Re: (Score:3)
Bad engineering is *always* the fault of bad management, no exceptions. It's management's job to manage the engineers, and identify and fire the bad ones, while facilitating the rest and ensuring their time is productive. Engineers have no power; only management does, so management gets 100% of the blame for the outcome.
Re: (Score:3)
Fork is heavily optimised on *NIX operating systems, because it's the primary way of creating new processes. Unfortunately, it's also a completely brain-dead one. It originates from old systems that had the running process in one form of storage and switched by writing it out to another. Fork made sense then, because you'd create the new process by writing out your current state and still have a copy of it in online storage for free. On modern systems, you need to mark all of the memory copy-on-write, c
Re: (Score:2)
Exactly.
Re: (Score:3)
posix_spawn still does the fork/exec. It is a library function after all, not a system call.
That's true on FreeBSD, and I think it's true on Linux. It's not true on NetBSD, XNU or Solaris, and I don't think it's true on AIX either. posix_spawn was designed to be possible to implement as a library routine, but (particularly in the presence of threads) it's much more efficient to implement it in the kernel.
And it doesn't take as long as you seem to think to mark things COW.
On a system designed for it, no it's not terribly expensive (though the IPIs required for synchronising the page tables across multiple cores actually add quite a bit to the cost on modern hardw
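For contrast with the fork/exec pattern discussed above, the posix_spawn route looks roughly like this (error handling trimmed; as noted, whether it is a true kernel primitive or a fork/vfork wrapper underneath depends on the OS):

```cpp
// Sketch: launch a child without an explicit fork/exec pair in user code.
#include <spawn.h>
#include <sys/wait.h>
#include <cstdio>

extern char **environ;

int main() {
    pid_t pid;
    char *argv[] = { (char *)"echo", (char *)"hello from the child", nullptr };

    // posix_spawnp searches PATH for the executable; no copy-on-write of the
    // parent's address space is needed in user code.
    if (posix_spawnp(&pid, "echo", nullptr, nullptr, argv, environ) != 0) {
        std::perror("posix_spawnp");
        return 1;
    }
    int status;
    waitpid(pid, &status, 0);   // reap the child
    return 0;
}
```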
Re: (Score:3)
I was kicked off the high school varsity team because I couldn't answer a pop quiz on operating system architecture.