
Multicore Requires OS Rework, Windows Expert Says

alphadogg writes "With chip makers continuing to increase the number of cores they include on each new generation of their processors, perhaps it's time to rethink the basic architecture of today's operating systems, suggested Dave Probert, a kernel architect within the Windows core operating systems division at Microsoft. The current approach to harnessing the power of multicore processors is complicated and not entirely successful, he argued. The key may not be in throwing more energy into refining techniques such as parallel programming, but rather in rethinking the basic abstractions that make up the operating systems model. Today's computers don't get enough performance out of their multicore chips, Probert said. 'Why should you ever, with all this parallel hardware, ever be waiting for your computer?' he asked. Probert made his presentation at the University of Illinois at Urbana-Champaign's Universal Parallel Computing Research Center."
  • This is new?! (Score:5, Insightful)

    by DavidRawling ( 864446 ) on Sunday March 21, 2010 @07:50PM (#31561518)
    Oh please, this has been coming for years now. Why has it taken so long for the OS designers to get with the program? We've had multi-CPU servers for literally decades.
    • by bondsbw ( 888959 )
      Just because OS designers milk every cycle from every CPU, doesn't mean web browser designers will.
      • Re:This is new?! (Score:5, Insightful)

        by PhunkySchtuff ( 208108 ) <kai@automatic[ ]om.au ['a.c' in gap]> on Sunday March 21, 2010 @08:07PM (#31561662) Homepage

        Since when have OS designers optimised their code to milk every cycle from the available CPUs? They haven't, they just wait for hardware to get faster to keep up with the code.

        • Re:This is new?! (Score:5, Insightful)

          by Cryacin ( 657549 ) on Sunday March 21, 2010 @08:26PM (#31561812)
          For that matter, since when have software vendors been willing to pay architects/designers/engineers etc to optimise their software to milk every cycle from the available CPUs and provide useful output with the minimum of effort? They don't, they just wait for hardware to get faster to keep up with code.

          The only company that I have personally been exposed to that gives half a hoot about efficient performance is Google. It annoys me beyond belief that other companies think it's acceptable to make the user wait for minutes whilst the system recalculates data derived from a large data set, and doing those calculations multiple times just because a binding gets invoked.
          • Re:This is new?! (Score:5, Insightful)

            by fuzzyfuzzyfungus ( 1223518 ) on Sunday March 21, 2010 @09:20PM (#31562282) Journal
            I doubt that it's just google. I suspect the following:

There are (in broad strokes, and excluding the embedded market) two basic axes on which you have to place a company or a company's software offering in order to predict its attitude with respect to efficiency.

One is problem scale. If a program is a once-off, or an obscure niche thing, or just isn't expected to have to cope with very large data sets, putting a lot of effort into making it efficient will likely not be a priority. If the program is extremely widely distributed, or is expected to cope with massive datasets, efficiency is much more likely to be considered important (if widely distributed, the cost of efficient engineering per unit falls dramatically; if expected to cope with massive datasets, the amount of hardware cost and energy cost avoided becomes significant. Tuning a process that eats 50% of a desktop CPU into one that eats 40% probably isn't worth it. Tuning a process that runs on 50,000 servers into one that runs on 40,000 easily could be).

The second is location: If a company is running their software on their own hardware, and selling access to whatever service it provides (search engine, webmail, whatever), their software's efficiency or inefficiency imposes a direct cost on them. Their customers are paying so much per mailbox, or so much per search query, so they have an incentive to use as little computer power as possible to deliver that product. If a company is selling boxed software, to be run on customer machines, their efficiency incentives are indirect. This doesn't mean "nonexistent" (a game that only runs on $2,000 enthusiast boxes is going to lose money, nobody would release such a thing. Among enthusiasts, browser JS benchmarks are a point of contention); but it generally does mean "secondary to other considerations". Customers, as a rule, are more likely to use slow software with the features they want, or slow software that released first and they became accustomed to, than fast software that is missing features or requires substantial adjustment on their part. Shockingly enough, software developers act on this fact.

On these axes, you would strongly suspect that Google would be efficiency-oriented. Their software runs on a grand scale, and most of it runs on their own servers, with the rest competing against various desktop incumbents, or not actually all that dramatically efficient (nothing wrong with Google Earth or Sketchup, but nothing especially heroic, either). However, you would expect roughly the same of any entity similarly placed on those axes.
          • Re: (Score:3, Interesting)

            by PixelSlut ( 620954 )

Google? I'm a big Google fan (and despite the rest of my comment, also a big Android fan and totally love my Nexus One)... but if Google was so hardcore into efficiency, why the hell did they develop a new runtime for Android that's based on Java?

Google didn't seem like the best company to praise for efficiency. I would have picked some sort of video game company like id Software (yeah, I realize this is an apples-and-oranges comparison though).

            • Re:This is new?! (Score:5, Insightful)

              by Mr. Freeman ( 933986 ) on Sunday March 21, 2010 @10:43PM (#31562908)
              Because Google ain't crunching data sets on fucking mobile phones. They're optimizing their servers and the applications that run on those servers because Google is so damn big that a fraction of a percent increase in efficiency translates into huge amounts of money saved through less wasted CPU time. Mobile phones aren't a part of google.

If your phone runs a little less efficiently then no one gives a damn. They want to make their phones easy to program for, which generally conflicts with efficiency.
            • Re: (Score:3, Informative)

              by LtGordon ( 1421725 )

              ... but if Google was so hardcore into efficiency, why the hell did they develop a new runtime for their Android that's based on Java?

              Because the Java gets executed on the user's hardware. Google cares about efficiency insofar as it affects their own hardware requirements.

            • Re:This is new?! (Score:5, Insightful)

              by IamTheRealMike ( 537420 ) on Monday March 22, 2010 @02:35AM (#31564186)

              Why Java for Android? This is a good question. There are several reasons (that the Android team have discussed).

              One is that ARM native code is bigger, size-wise, than Dalvik VM bytecode. So it takes up more memory. Unlike the iPhone, Android was designed from the start to multi-task between lots of different (user installed) apps. It's quite feasible to rapidly switch between apps with no delay on Android, and that means keeping multiple running programs in RAM simultaneously. So trading off some CPU time for memory is potentially a good design. Now that said, Java has some design issues that make it more profligate with heap memory than it maybe needs to be (eg utf16 for strings) so I don't have a good feel for whether the savings are cancelled out or not, but it's a justification given by the Android team.

              Another is that Java is dramatically easier to program than a C-like language. I mean, incredibly monstrously easier. One problem with languages like C++ or Objective-C is that lots of people think they understand them but very few programmers really do. Case in point - I have an Apple-mad friend who ironically programs C# servers on Windows for his day job. But he figured he'd learn iPad development. I warned him that unmanaged development was a PITA but he wasn't convinced, so I showed him a page that discussed reference counting in ObjC (retain/release). He read it and said "well that seems simple enough" - doh. Another one bites the dust. I walked him through cycle leaks, ref leaks on error paths (no smart pointers in objc!), and some basic thread safety issues. By the end he realized that what looked simple really wasn't at all.
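A minimal C++ sketch of the cycle-leak problem described above (the Node type is invented for illustration; the original example was Objective-C retain/release, but shared_ptr has exactly the same failure mode):

```cpp
// Two objects that hold strong references to each other keep their
// reference counts above zero forever, so neither destructor ever runs.
#include <memory>

struct Node {
    std::shared_ptr<Node> peer;   // strong reference: participates in the cycle
    // std::weak_ptr<Node> peer;  // the fix: a weak reference breaks the cycle
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->peer = b;
    b->peer = a;   // cycle: a and b now keep each other alive
    // When a and b go out of scope, each object's count is still 1,
    // so both leak -- exactly the kind of bug retain/release makes easy.
}
```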

              By going with Java, Android devs skip that pain. I'm fluent in C++ and Java, and have used both regularly in the past year. Java is reliably easier to write correct code in. I don't think it's unreasonable to base your OS on it. Microsoft has moved a lot of Windows development to .NET over the last few years for the same reasons.

              Fortunately, being based on Java doesn't mean Android is inherently inefficient. Large parts of the runtime are written in C++, and you can write parts of your own app in native code too (eg for 3D graphics). You need to use Java to use most of the OS APIs but you really shouldn't be experiencing perf problems with things like gui layout - if you are, that's a hint you need to simplify your app rather than try to micro-optimize.

              • Re:This is new?! (Score:4, Informative)

                by julesh ( 229690 ) on Monday March 22, 2010 @04:04AM (#31564518)

                One is that ARM native code is bigger, size-wise, than Dalvik VM bytecode.

                Citation needed. Dalvik is better than baseline Java bytecode, agreed. But so is ARM native code. [http://portal.acm.org/citation.cfm?id=377837&dl=GUIDE&coll=GUIDE&CFID=82959920&CFTOKEN=24064384 - "[...] the code efficiency of Java turns out to be inferior to that of ARM Thumb"]. I can find no direct comparison of ARM Thumb and Dalvik, so I can't tell you which produces the smaller code size.

                So it takes up more memory.

                Even if your first statement is true, this doesn't necessarily follow. VMs add overhead, usually using up somewhat more runtime memory to execute, particularly if a JIT is used (the current version of Dalvik doesn't have one, but the next one apparently will).

        • Re:This is new?! (Score:5, Insightful)

          by jc42 ( 318812 ) on Sunday March 21, 2010 @09:19PM (#31562278) Homepage Journal

          Since when have OS designers optimised their code to milk every cycle from the available CPUs?

          This isn't just an OS-level problem. It's a failure among programmers of all sorts.

I've been involved in software development since the late 1970s, and from the start I've heard the argument "We don't have to worry about code speed or size, because today's machines are so fast and have so much memory." This was just as common back when machines were 1,000 times slower and had 10,000 times less memory than today.

          It's the reason for Henry Petroski's famous remark that "The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry."

          Programmers respond to faster cpu speed and more memory by making their software use more cpu cycles and more memory. They always have, and there's no sign that this is going to change. Being efficient is hard, and you don't get rewarded for it, because managers can't measure it. So it's better to add flashy eye candy and more features, which people can see.

          If we want efficient code, we have to figure out ways to reward the programmers that write it. I don't see any sign that people anywhere are interested in doing this. Anyone have suggestions for how it might be done?

          • Re:This is new?! (Score:5, Insightful)

            by Brian Gordon ( 987471 ) on Sunday March 21, 2010 @09:34PM (#31562396)

Maybe it's not a question of whether the code is efficient. Maybe it's a question of how much you're asking the code to do. It's no surprise that hardware struggles to make gains against performance demands when software developers are adding on nonsense like compositing window managers and sidebar widgets. I'm enjoying Moore's law without any cancellation... just run a sane environment. Qt or GTK, not both, if you're running an X desktop. Nothing other than IM in the system tray. No "upgrade fever" that makes people itch for Windows Media Player 14 when older versions work fine and mplayer and winamp work better.

          • by pslam ( 97660 ) on Sunday March 21, 2010 @10:13PM (#31562700) Homepage Journal

            If we want efficient code, we have to figure out ways to reward the programmers that write it. I don't see any sign that people anywhere are interested in doing this. Anyone have suggestions for how it might be done?

            It's happening, from a source people didn't expect: portable devices. Battery life is becoming a primary feature of portable devices, and a large fraction of that comes from software efficiency. Take your average cell phone: it's probably got a half dozen cores running in it. One in the wifi, one in the baseband, maybe one doing voice codec, another doing audio decode, one (or more) doing video decode and/or 3d, and some others hiding away doing odds and ends.

            The portable devices industry has been doing multi-core for ages. It's how your average cell phone manages immense power savings: you can power on/off those cores as necessary, switch their frequencies, and so on. They have engineers who understand how to do this. They're rewarded for getting it right: the reward is it lives on battery longer, and it's measurable.

            Yes, you can get lazy and say 'next generation CPUs will be more efficient', but you'll be beaten by your competitors for battery life. Or, you fit a bigger battery and you lose in form factor.

            The world is going mobile, and that'll be the push we need to get software efficient again.

          • Re: (Score:3, Insightful)

            by tsotha ( 720379 )

            It's not a failure among programmers at all - it's a business decision. The main reason software is less efficient is the costs are so heavily tilted toward software development instead of hardware. For the vast majority of business applications companies are using generalized frameworks to trade CPU cycles and memory for development time.

            Even in terms of development style, it just isn't worth it to optimize your code if it's going to substantially increase development time. People are expensive. Time

    • Re: (Score:3, Insightful)

      by Sir_Sri ( 199544 )

Ya, but those cases, as he reasonably explains, tend to get specialized development (say, scientific computing) or separate processes, or, while he doesn't explain it, a lot of server stuff is embarrassingly (or close to) parallel.

I can sort of see them not having a multi-processor OS just waiting for the consumer desktop - server processors are basically cache with some processor attached, whereas desktop processors are architected differently, and who knew for sure what the multicore world would look like in

      • Re: (Score:3, Interesting)

        by drsmithy ( 35869 )

        It doesn't sound easily backwards compatible (but I might be wrong there), and there's a certain simplicity to 'reserve one core for the OS, application developers can manage the rest of them themselves' sort of model like consoles.

        Those curious about what life would be like with application developers managing system resources, should try firing up an old copy of Windows 3.1 or MacOS and running 10 or so applications at the same time.

        I can only assume TFA is an atrociously bad summary of what he's actua

    • Re:This is new?! (Score:5, Insightful)

      by Jeremi ( 14640 ) on Sunday March 21, 2010 @08:19PM (#31561730) Homepage

      Why has it taken so long for the OS designers to get with the program?

      Coming up with a new OS paradigm is hard, but doable.

Coming up with a viable new OS that uses that paradigm is much harder, because even once the new OS is working perfectly, you still have to somehow make it compatible with the zillions of existing applications that people depend on. If you can't do that, your shiny new OS will be viewed as an interesting experiment for the propeller-head set, but it won't ever get the critical mass of users necessary to build up its own application base.

      So far, I think Apple has had the most successful transition strategy: Come up with the great new OS, bundle the old OS with it, inside an emulator/sandbox, and after a few years, quietly deprecate (and then drop) the old OS. Repeat as necessary.

      • Re: (Score:3, Informative)

        by nine-times ( 778537 )

        I don't know if you had to support Mac users during the years of transition, but it wasn't quite as easy as you made it sound. It was pretty smooth for such a drastic change, but I wouldn't want to repeat it any more than necessary.

      • Re: (Score:3, Informative)

        by dudpixel ( 1429789 )

        Come up with the great new OS...

        hang on, this "new" OS you're referring to is basically UNIX (BSD). It was invented before Windows. Sure apple has modified it and put a shiny new layer on top (that works exceptionally smoothly, mind you), but if you wanna get technical, they didn't come up with a new OS, they improved an old one.

      • Re: (Score:3, Insightful)

        by steelfood ( 895457 )

MS did the same during the transition to 32-bit. They included a 16-bit DOS emulator and had it run transparently. They did the same for the transition to 64-bit. It was so successful and so transparent that a lot of IT professionals didn't even know it was happening in the background.

        Unlike Apple though, they never removed it. Sure, it resulted in a major security hole, but it also let legacy custom business apps run far longer than they otherwise would have been able to.

        I suspect if they were ever to make

    • Re:This is new?! (Score:4, Informative)

      by Bengie ( 1121981 ) on Sunday March 21, 2010 @08:44PM (#31561966)

Developing server apps to run in parallel is easy; client software is hard. Many times, the cost of syncing threads is greater than the work you get from them, so you leave it single-threaded. The question is: how do you design a framework/API that is very thread-friendly, makes sure everything runs in the expected order, and is still easy for bad programmers to take advantage of?

The biggest issue with developing async-threaded programs is logical dependencies that don't allow one part to be loaded/processed before another. If, from square one, you develop an app to take advantage of extra threads, it may be less efficient, but more responsive. Most programmers I talk to have issues trying to understand the interweaving logic of multi-threaded programming.

I guess it's up to MS to make an easy-to-use, idiot-proof threaded framework for crappy programmers to use.
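As a rough illustration of the trade-off (plain standard C++ with invented names, not anything from Microsoft's frameworks): coarse-grained tasks keep the sync cost small relative to the work, whereas one thread per tiny work item loses.

```cpp
// Spawning and joining a thread costs on the order of tens of microseconds,
// so parallelising tiny work items is slower than doing them inline.
// A handful of coarse chunks handed to std::async, joined once at the end,
// is the kind of "thread friendly" shape a framework should encourage.
#include <cstddef>
#include <future>
#include <numeric>
#include <vector>

long long sum_range(const std::vector<int>& v, std::size_t lo, std::size_t hi) {
    return std::accumulate(v.begin() + lo, v.begin() + hi, 0LL);
}

int main() {
    std::vector<int> data(10'000'000, 1);

    const std::size_t parts = 4;              // coarse chunks, not per-element
    std::vector<std::future<long long>> futures;
    for (std::size_t i = 0; i < parts; ++i) {
        std::size_t lo = i * data.size() / parts;
        std::size_t hi = (i + 1) * data.size() / parts;
        futures.push_back(std::async(std::launch::async, sum_range,
                                     std::cref(data), lo, hi));
    }
    long long total = 0;
    for (auto& f : futures) total += f.get();  // the only sync point
    return total == 10'000'000 ? 0 : 1;
}
```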

    • Re:This is new?! (Score:5, Informative)

      by stevew ( 4845 ) on Sunday March 21, 2010 @09:25PM (#31562338) Journal

Well - I can tell you that Dave Probert saw his first multi-processor about 28 years ago at Burroughs corporation. It was a dual-processor B1855. I had the pleasure of working with the guy way back then. From what I recall he then went on to work at FPS, which made array processors that you could add onto other machines (I think VAXen... but I could be wrong there).

Anyway - he has been around A LONG time.

  • waiting (Score:5, Insightful)

    by mirix ( 1649853 ) on Sunday March 21, 2010 @07:51PM (#31561520)

    'Why should you ever, with all this parallel hardware, ever be waiting for your computer?'

    Because I/O is always going to be slow.

    • Re:waiting (Score:5, Insightful)

      by DavidRawling ( 864446 ) on Sunday March 21, 2010 @08:04PM (#31561638)

      Well, with the rise of the SSD, that's no longer as much of a problem. Case in point - I built a system on the weekend with a 40GB Intel SSD. Pretty much the cheapest "known-good" SSD I could get my hands on (ie TRIM support, good controller) at AUD $172, roughly the price of a 1.5TB spinning rust store - and the system only needs 22GB including apps.

      Windows boots from end of POST in about 5 seconds. 5 seconds is not even enough for the TV to turn on (it's a Media Center box). Logon is instant. App start is nigh-on instant (I've never seen Explorer appear seemingly before the Win+E key is released). This is the fastest box I've ever seen, and it's the most basic "value" processor Intel offer - the i3-530, on a cheap Asrock board with cheap RAM (true, there's a slightly cheaper "bargain basement" CPU in the G6950 or something). The whole PC cost AUD800 from a reputable supplier, and I could have bought for $650 if I'd wanted to wait in line for an hour or get abused at the cheaper places.

      Now, Intel are aiming to saturate SATA-3 (600MBps) with the next generation(s) of SSD, or so I'm told. Based on what I've seen - it's achievable, at reasonable cost, and it's not only true for sequential read access. So if the IO bottleneck disappears - because the SSD can do 30K, 50K, 100K IO operations per second? Yeah, I think it's reasonable to ask why we wait for the computer.

      Not that I think a redesign is necessary for the current architectures - Windows, BSD, Linux all scale nicely to at least 8 or 16 logical CPUs in the server world, so the 4, 6 or 8 on the desktop isn't a huge problem. But in 5 years when we have 32 CPUs on the desktop? Maybe. Or maybe we'll just be using the same apps that only need 1 CPU most of the time, and using the other 20 CPUs for real-time stuff (Real voice control? Motion control and recognition?)

      • Re:waiting (Score:5, Interesting)

        by Courageous ( 228506 ) on Sunday March 21, 2010 @10:34PM (#31562844)

        Well, with the rise of the SSD, that's no longer as much of a problem.

        ORLY!

        Let's do some math shall we? Take a simple 4 core Nehalem running at 2.66Ghz. Let's conservatively assume that it can complete a mere *1* double precision floating point number per clock cycle, per core. So. How big is a double? 64 bits, or 8 bytes. Now, that's 2.66 billion * 4 = 10.64 BILLION doubles per second, which is 85 GB/s.
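Running the same back-of-the-envelope numbers against the saturated SATA-3 figure quoted upthread (a sketch, same assumptions as the parent):

```cpp
// 1 double per clock per core on a 4-core 2.66GHz Nehalem vs 600 MB/s SATA-3.
#include <cstdio>

int main() {
    double clock_hz = 2.66e9;                     // Nehalem clock
    int    cores    = 4;
    double bytes    = 8;                          // one double
    double cpu_bw   = clock_hz * cores * bytes;   // ~85.1e9 B/s
    double ssd_bw   = 600e6;                      // saturated SATA-3
    std::printf("CPU consumes %.1f GB/s, SSD delivers %.1f GB/s: ~%.0fx gap\n",
                cpu_bw / 1e9, ssd_bw / 1e9, cpu_bw / ssd_bw);
}
```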

        The trick to understanding computing is that all computing really *is* at its heart a throughput problem.

        Do you see the asymmetry in throughput b/t the Nehalem and your SSD?

        C//

        • Re: (Score:3, Insightful)

          by node 3 ( 115640 )

The question wasn't, "why should your CPU have to wait", it was, "why should *you* have to wait". At speeds approaching 3Gb/s, I think it's fair to say, as the person you replied to actually did say, "well, with the rise of the SSD, that's no longer as much of a problem."

          The trick to understanding computing is that all computing really *is* at its heart a throughput problem.

          The trick to understanding computers is to realize that all computing really is, at its heart, a human problem. It really doesn't matter if the CPU has to wait a trillion cycles in between receiving each byte of data, if the computer respon

    • Re: (Score:2, Insightful)

      by Jimbookis ( 517778 )
Nature abhors a vacuum. It seems that no matter how much compute power you have, something will always want to snaffle it up. I have a dual Pentium D at work running WinXP and 3GB of RAM. The proprietary 8051 compiler toolset is god-awful slow (and pegs one of the CPUs) compiling even just a few thousand lines of code (tens of seconds with lots of GUI seizures) because I think for some reason the compiler and IDE are running a crapload of inefficient Python in the backend. Don't even get me started on ho
But that's exactly the way changing OS architectures and APIs can help. Right now the default behavior is to start a worker thread of some type that blocks on IO requests and then reports back. Most apps in the wild don't even bloody do this and just have a few threads do everything and some even have the main app loop block on IO. (Let's all pretend we don't see our app windows grey out several times a day!)
      We've argued for decades this was a programmer issue but that sort of pedantic criticism has accomp
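For what it's worth, here's a minimal, toolkit-free C++ sketch of that worker-thread-reports-back pattern (names invented; any real GUI framework would wrap this differently):

```cpp
// The blocking I/O runs off the main loop and "reports back" through a
// future, so the UI thread never greys out while it waits.
#include <chrono>
#include <future>
#include <string>
#include <thread>

std::string slow_io() {                       // stands in for a blocking read
    std::this_thread::sleep_for(std::chrono::seconds(2));
    return "file contents";
}

int main() {
    auto result = std::async(std::launch::async, slow_io);

    // The "main loop": keeps servicing events while the I/O is in flight.
    while (result.wait_for(std::chrono::milliseconds(16)) !=
           std::future_status::ready) {
        // pump events, repaint, stay responsive...
    }
    std::string data = result.get();          // report back; nothing blocked
    return data.empty() ? 1 : 0;
}
```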
'Why should you ever, with all this parallel hardware, ever be waiting for your computer?' he asked.

    Because it might be waiting for I/O.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

'Why should you ever, with all this parallel hardware, ever be waiting for your computer?' he asked.

      Because it might be waiting for I/O.

      That's no reason for the entire GUI to freeze on Windows when you insert a CD.

  • by indrora ( 1541419 ) on Sunday March 21, 2010 @07:54PM (#31561542)

The problem is that most (if not all) peripheral hardware is not parallel in many senses. Hardware in today's computers is serial: You access one device, then another, then another. There are some cases (such as a few good emulators) which use multi-threaded emulation (sound in one thread, graphics in another), but fundamentally the biggest performance killer is the final IRQs that get called to process data. The structure of modern-day computers must change to take advantage of multicore systems.

    • by Anonymous Coward

      The Problem with Threads [berkeley.edu] (UC Berkeley's Prof Edward Lee)
      How to Solve the Parallel Programming Crisis [blogspot.com]
      Half a Century of Crappy Computing [blogspot.com]

      The computer industry will have to wake up to reality sooner or later. We must reinvent the computer; there is no getting around this. The old paradigms from the 20th century do not work anymore because they were not designed for parallel processing.

      • by lennier ( 44736 )

        ++.

        In the 1980s there was lots of academic interest in parallel computing. Unfortunately a lot of it seemed to be driven merely by the quest for speed- once single CPUs got fast enough in the early 90s and everyone went 'whee C is good enough also objects are neat!', a whole generation of parallel language work was lost to the new&shiny.

        It's depressing.

  • Grand Central? (Score:3, Insightful)

    by volfreak ( 555528 ) on Sunday March 21, 2010 @07:55PM (#31561554)
Isn't this the reason for Apple to have rolled out Grand Central Dispatch in Snow Leopard? If so, it seems it's not THAT hard to do - at least not that hard for a non-Windows OS.
  • Why should you ever, with all this parallel hardware, ever be waiting for your computer?

    I dunno - maybe because optimal multiprocessor scheduling is an NP-complete problem? Or because concurrent computations require coordination at certain points, which is an issue that doesn't exist with single-threaded systems, and it's therefore wishful thinking to assume you'll get linear scaling as you add more cores?
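The usual way to quantify that last point is Amdahl's law; a quick sketch (standard formula, numbers picked for illustration):

```cpp
// If a fraction p of the work parallelises and the rest is serial
// (coordination, locks, I/O), speedup on n cores is 1 / ((1 - p) + p / n)
// -- far from linear unless p is very close to 1.
#include <cstdio>

double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    int core_counts[] = {2, 4, 8, 64};
    for (int n : core_counts) {
        std::printf("p=0.90, %2d cores: %.2fx speedup\n", n, amdahl(0.90, n));
    }
    // Even with 90% of the program parallel, 64 cores give only ~8.8x.
}
```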

  • by syousef ( 465911 ) on Sunday March 21, 2010 @07:56PM (#31561558) Journal

    ...the implementation sucks.

Why, for example, does Windows Explorer decide to freeze ALL network connections when a single URN isn't quickly resolved? Why is it that when my USB drive wakes up, all Explorer windows freeze? If you are trying to tell me there's no way using the current abstractions to implement this, I say you're mad. For that matter, when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is? You're left piecing together what has and hasn't been moved. File requests make up a good deal of what we're waiting for. It's not the bus or the drives that are usually the limitation. It's the shitty coding. I can live with a hit at startup. I can live with delays if I have to eat into swap. But I'm sick and tired of basic functionality being missing or broken.

    • by Threni ( 635302 ) on Sunday March 21, 2010 @08:23PM (#31561772)

      Windows explorer sucks. It always just abandons copies after a fail - even if you're moving thousands of files over a network. Yes, you're left wondering which files did/didn't make it. It's actually easier to sometimes copy all the files you want to shift locally, then move the copy, so that you can resume after a fail. It's laughable you have to do this, however.

      But it's not a concurrency issue, and neither, really, are the first 2 problems you mention. They're also down to Windows Explorer sucking.

      • Re: (Score:3, Insightful)

        Windows Explorer no longer kills network transfers after a failure as of Windows Vista.

Maybe some of the people complaining about Windows should stop using a version that's 9 years old (XP). Red Hat 7.2 isn't particularly great by today's standards either.

    • by Kenz0r ( 900338 ) on Sunday March 21, 2010 @08:42PM (#31561950) Homepage
      I wish I could mod you higher than +5, you just summed up some of the things that bother me most about the OS that is somehow still the most popular desktop OS in the world.

To anyone using Windows (XP, Vista or 7) right now, go ahead and open up an Explorer window, and type in ftp:// followed by any URL.
Even when it's a name that obviously won't resolve, or an IP on your very own local network of a machine that just doesn't exist, this'll hang your Explorer window for a couple of solid seconds. If you're a truly patient person, try doing that with a name that does resolve, like ftp://microsoft.com [microsoft.com]. Better yet, try stopping it... say goodbye to your explorer.exe.

      This is one of the worst user experiences possible, all for a mundane task like using ftp. And this has been present in Windows for what, a decade?
      • Re: (Score:3, Interesting)

        by hitmark ( 640295 )

There is an option, at least as far back as XP, that allows Explorer windows to run as their own tasks. Why it's not enabled by default I have no clue (except that I have seen some issues with custom icons when doing so).

    • Re: (Score:3, Informative)

      by drsmithy ( 35869 )

      For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem is.

      You can as of Vista.

    • Re: (Score:3, Informative)

      by duguk ( 589689 )

      For that matter when a copy or move fails in Explorer, why can't I simply resume it once I've fixed whatever the problem

Try TotalCopy [ranvik.net], which adds a copy/move option to the right-click menu; or TeraCopy [codesector.com], a commercial (free version available, supports Win7) complete replacement for the sucky Windows copy system.

USB/network freezes and file copying aren't a fault of CPU cores; like you say, Windows is just a sucky OS. Multicore stuff gets complicated, but this isn't going to be a panacea for Microsoft; it's another marketing opportunity.

  • Dumb programmers (Score:3, Insightful)

    by Sarten-X ( 1102295 ) on Sunday March 21, 2010 @07:57PM (#31561578) Homepage
    You wait because some programmer thought it was more important to have animated menus than a fast algorithm. You wait because someone was told "computers have lots of disk space." You wait because the engineers never tested their database on a large enough scale. You wait because programmers today are taught to write everything themselves, and to simply expect new hardware to make their mistakes irrelevant.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

Not true. You wait because management fast-tracks stuff out the door without giving developers enough time to code things properly, and management ignores developer concerns in order to get something out there now that will make money at the expense of the end user. I have been coding a long time and have seen this over and over. Management doesn't care about customers or letting developers code things correctly - they only care about $$$$$$$

  • by pydev ( 1683904 ) on Sunday March 21, 2010 @08:09PM (#31561670)

    Microsoft should go back and read some of the literature on parallel computing from 20-30 years ago. Machines with many cores are nothing new. And Microsoft could have designed for it if they hadn't been busy re-implementing a bloated version of VMS.

This is a very weak talk to give at a university. Rather than talking about 'parallel programming' and adding an "It Sucks" button, I would expect a discussion on CSP http://en.wikipedia.org/wiki/Communicating_sequential_processes [wikipedia.org] or perhaps hard real-time scheduling to guarantee responsiveness. This is the indoctrination you get when you work for Microsoft: you start spruiking low-level marketing mumbo-jumbo to a very technical audience.
  • ... for NFS to give up on a disconnected server... By the original design and the continuing default settings, the stuck processes are neither killable nor interruptible. You can reboot the whole system, but you can't kill one process.

    Hurray for the OS designers!

  • I have a more basic question.

    With computers past and present -- Atari 8-bit, Atari ST, iPhone -- with "instant on", why does Windows not have this yet? This goes back to the lost decade [slashdot.org]. What has Microsoft been doing since XP was released?

    • Re: (Score:3, Informative)

      by radish ( 98371 )

      iPhone isn't even slightly "instant on" - it takes at least a minute to boot an iPhone from off. What you're seeing most of the time is "screen off" mode. Unsurprisingly, switching the screen on & cranking up the CPU clock doesn't take much time. Likewise, waking my Windows box up from sleep doesn't take very long either. Comparing modern OS software running on modern hardware I see little difference in boot times, or wake time from sleep - which would indicate that if MS are being lazy then so are Appl

  • Duh (Score:4, Funny)

    by Waffle Iron ( 339739 ) on Sunday March 21, 2010 @08:38PM (#31561910)

'Why should you ever, with all this parallel hardware, ever be waiting for your computer?'

    For a lot of problems, for the same reason that some guy who just married 8 brides will still have to wait for his baby.

  • by Animats ( 122034 ) on Sunday March 21, 2010 @08:40PM (#31561934) Homepage

A big problem is the event-driven model of most user interfaces. Almost anything that needs to be done is placed on a serial event queue, which is then processed one event at a time. This prevents race conditions within the GUI, but at a high cost. Both the Mac and Windows started that way, and to a considerable extent, they still work that way. So any event which takes more time than expected stalls the whole event queue. There are attempts to fix this by having "background" processing for events known to be slow, but you have to know which ones are going to be slow in advance. Intermittently slow operations, like a DNS lookup or something which infrequently requires disk I/O, tend to be bottlenecks.
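A toy sketch of that serial-queue problem and the "background" workaround (no real GUI toolkit; all names invented):

```cpp
// Every handler runs on one queue, so a slow handler stalls all later events.
// The workaround is to run the slow part elsewhere and post a completion
// event back to the queue.
#include <chrono>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::queue<std::function<void()>> events;   // the serial event queue
std::mutex events_mu;

void post(std::function<void()> e) {        // may be called from any thread
    std::lock_guard<std::mutex> lk(events_mu);
    events.push(std::move(e));
}

void on_click() {
    // Bad: doing a DNS lookup right here stalls every event behind it.
    // Better: run it off the queue and "report back" as a new event.
    std::thread([] {
        std::string addr = "93.184.216.34";  // pretend slow DNS result
        post([addr] { /* update the widget with addr on the GUI thread */ });
    }).detach();
}

int main() {
    post(on_click);
    // Crude event loop: pop and run events one at a time (the serial part).
    for (int i = 0; i < 1000; ++i) {
        std::function<void()> e;
        {
            std::lock_guard<std::mutex> lk(events_mu);
            if (!events.empty()) { e = std::move(events.front()); events.pop(); }
        }
        if (e) e();
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
```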

    Most languages still handle concurrency very badly. C and C++ are clueless about concurrency. Java and C# know a little about it. Erlang and Go take it more seriously, but are intended for server-side processing. So GUI programmers don't get much help from the language.

    In particular, in C and C++, there's locking, but there's no way within the language to even talk about which locks protect which data. Thus, concurrency can't be analyzed automatically. This has become a huge mess in C/C++, as more attributes ("mutable", "volatile", per-thread storage, etc.) have been bolted on to give some hints to the compiler. There's still race condition trouble between compilers and CPUs with long look-ahead and programs with heavy concurrency.
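A small example of the point about locks and data (standard C++; the Account type is invented; Clang's thread-safety attributes can annotate this, but they are a vendor extension, not the language):

```cpp
// The association between a lock and the data it guards lives only in a
// comment, so nothing stops the unlocked access below from compiling.
#include <mutex>
#include <thread>

struct Account {
    std::mutex mu;        // convention: mu protects balance
    long balance = 0;     // the compiler has no idea mu relates to this

    void deposit(long amount) {
        std::lock_guard<std::mutex> lk(mu);
        balance += amount;            // correct: holds mu
    }
    long peek() { return balance; }   // data race: compiles without complaint
};

int main() {
    Account acct;
    std::thread t([&] { for (int i = 0; i < 1000; ++i) acct.deposit(1); });
    long racy = acct.peek();          // undefined behaviour, silently accepted
    t.join();
    return racy >= 0 ? 0 : 1;
}
```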

    We need better hard-compiled languages that don't punt on concurrency issues. C++ could potentially have been fixed, but the C++ committee is in denial about the problem; they're still in template la-la land, adding features few need and fewer will use correctly, rather than trying to do something about reliability issues. C# is only slightly better; Microsoft Research did some work on "Polyphonic C#" [psu.edu], but nobody seems to use that. Yes, there are lots of obscure academic languages that address concurrency. Few are used in the real world.

    Game programmers have more of a clue in this area. They're used to designing software that has to keep the GUI not only updated but visually consistent, even if there are delays in getting data from some external source. Game developers think a lot about systems which look consistent at all times, and come gracefully into synchronization with outside data sources as the data catches up. Modern MMORPGs do far better at handling lag than browsers do. Game developers, though, assume they own most of the available compute resources; they're not trying to minimize CPU consumption so that other work can run. (Nor do they worry too much about not running down the battery, the other big constraint today.)

    Incidentally, modern tools for hardware design know far more about timing and concurrency than anything in the programming world. It's quite possible to deal with concurrency effectively. But you pay $100,000 per year per seat for the software tools used in modern CPU design.

    • by shutdown -p now ( 807394 ) on Sunday March 21, 2010 @09:22PM (#31562306) Journal

      This has become a huge mess in C/C++, as more attributes ("mutable", "volatile", per-thread storage, etc.) have been bolted on to give some hints to the compiler.

      An interesting comment overall, but what relevance does "mutable" have to multi-threaded programming? It is just a way to say that a particular field in a class is never const, even when the object itself is as a whole. There are no optimizations the compiler could possibly derive from that (in fact, if anything, it might make some optimizations non-applicable).

      Same goes for "volatile", actually. It forces the code generator to avoid caching values in registers etc, and always do direct memory reads & writes on every access to a given lvalue, but this won't prevent one core from not seeing a write done by another core - you need memory barriers for that, and ISO C++ "volatile" doesn't guarantee any (nor do any existing C++ implementations).
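For what it's worth, C++11 later added std::atomic, which supplies exactly the inter-core ordering that volatile doesn't; a minimal sketch:

```cpp
// volatile forces the loads/stores to happen, but only std::atomic gives the
// ordering guarantee: the release store "publishes" data to the acquire load.
#include <atomic>
#include <thread>

int payload = 0;                       // ordinary data being handed off
std::atomic<bool> ready{false};        // a volatile bool would NOT be enough

void producer() {
    payload = 42;                                   // 1: write the data
    ready.store(true, std::memory_order_release);   // 2: publish it
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // wait for publish
    // Guaranteed to see payload == 42 here; with volatile, it isn't.
}

int main() {
    std::thread a(producer), b(consumer);
    a.join(); b.join();
}
```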

      Microsoft Research did some work on "Polyphonic C#" [psu.edu], but nobody seems to use that.

      It's a research language, not intended for production use. Microsoft Research does quite a few of those - e.g. Spec# [microsoft.com] (DbC), or C-omega [microsoft.com] (this is what Polyphonic C# evolved into), or Axum [microsoft.com] (the most recent take at concurrency, Erlang-style).

      Those projects are used to "cook" some idea to see if it's feasible, what approach is the best, and how it is taken by programmers. Eventually, features from those languages end up integrated into the mainstream ones - C# and VB. For example, X# became LINQ in .NET 3.5, and Spec# became Code Contracts in .NET 4.0. So, give it time.

      • Re: (Score:3, Informative)

        by Animats ( 122034 )

        An interesting comment overall, but what relevance does "mutable" have to multi-threaded programming?

        A "const" object can be accessed simultaneously from multiple threads without locking, other than against deletion. A "mutable const" object cannot; while it is "logically const", its internal representation may change (it might be cached or compressed) and thus requires locking.

        Failure to realize this results in programs with race conditions.
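A minimal sketch of that "logically const" case (invented Document class): the cache and its lock are both declared mutable so a const method can still take the lock.

```cpp
// The cached value can change inside a const method, so the object needs its
// own lock before multiple threads may share it.
#include <cstddef>
#include <mutex>
#include <string>
#include <utility>

class Document {
public:
    explicit Document(std::string text) : text_(std::move(text)) {}

    // const, but not safe to call concurrently without the lock:
    // the first caller populates the cache behind the scenes.
    std::size_t wordCount() const {
        std::lock_guard<std::mutex> lk(cache_mu_);
        if (!counted_) {
            std::size_t n = 0;
            bool in_word = false;
            for (char c : text_) {
                bool is_space = (c == ' ' || c == '\n' || c == '\t');
                if (!is_space && !in_word) ++n;
                in_word = !is_space;
            }
            count_ = n;
            counted_ = true;
        }
        return count_;
    }

private:
    std::string text_;
    mutable std::mutex cache_mu_;   // mutable so a const method can lock it
    mutable bool counted_ = false;  // the "internal representation may change"
    mutable std::size_t count_ = 0;
};

int main() {
    const Document doc("the quick brown fox");
    return doc.wordCount() == 4 ? 0 : 1;
}
```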

    • by thesuperbigfrog ( 715362 ) on Monday March 22, 2010 @11:06AM (#31568862)

      Most languages still handle concurrency very badly. C and C++ are clueless about concurrency. Java and C# know a little about it. Erlang and Go take it more seriously, but are intended for server-side processing. So GUI programmers don't get much help from the language.

      In particular, in C and C++, there's locking, but there's no way within the language to even talk about which locks protect which data. Thus, concurrency can't be analyzed automatically. This has become a huge mess in C/C++, as more attributes ("mutable", "volatile", per-thread storage, etc.) have been bolted on to give some hints to the compiler. There's still race condition trouble between compilers and CPUs with long look-ahead and programs with heavy concurrency.

      We need better hard-compiled languages that don't punt on concurrency issues. C++ could potentially have been fixed, but the C++ committee is in denial about the problem; they're still in template la-la land, adding features few need and fewer will use correctly, rather than trying to do something about reliability issues. C# is only slightly better; Microsoft Research did some work on "Polyphonic C#" [psu.edu], but nobody seems to use that. Yes, there are lots of obscure academic languages that address concurrency. Few are used in the real world.

Ada 2005's task model is a real-world, production-quality approach to including concurrency in a hard-compiled language. Ada isn't exactly known for its GUI libraries (there is GtkAda), but it could be used as a foundation for an improved concurrent GUI paradigm.

      This book [google.com] covers the subject quite well.

  • by gig ( 78408 ) on Sunday March 21, 2010 @09:31PM (#31562378)

    I love how Microsoft can come along in 2010 and with a straight face say it's about time they took multiprocessing seriously. Or say it's about time we started putting HTML5 features into our browser. And we're finally going to support the ISO audio video standard from 2002. And by the way, it's about time we let you know that our answer to the 2007 iPhone will be shipping in 2011. And look how great it is that we just got 10% of our platform modernized off the 2001 XP version! And our office suite is just about ready to discover that the World Wide Web exists. It's like they are in a time warp.

    I know they have product managers instead of product designers, and so have to crib design from the rest of the industry, necessitating them to be years behind, but on engineering stuff like multiprocessing, you expect them to at least have read the memo from Intel in 2005 about single cores not scaling and how the future was going to be 128 core chips before you know it.

    I guess when you recognize that Windows Vista was really Windows 2003 and Windows 7 is really Windows 2005 then it makes some sense. It really is time for them to start taking multiprocessing seriously.

    I am so glad I stopped using their products in 1999.

  • by Grenamier ( 12799 ) on Sunday March 21, 2010 @09:53PM (#31562536)

    The part of the article where Probert discusses the operating system becoming something like a hypervisor reminds me of the Cache Kernel from a Stanford University paper back in 1994. http://www-dsg.stanford.edu/papers/cachekernel/main.html [stanford.edu]

The way I understand it, the cache kernel in kernel mode doesn't really have built-in policy for traditional OS tasks like scheduling or resource management. It just serves as a cache for loading and unloading things like address spaces and threads and making them active. The policy for working with these things comes from separate application kernels in user mode and kernel objects that are loaded by the cache kernel.

    There's also a 1997 MIT paper on exokernels (http://pdos.csail.mit.edu/papers/exo-sosp97/exo-sosp97.html). The idea is separating the responsibility of management from the responsibility of protection. The exokernel knows how to protect resources and the application knows how to make them sing. In the paper, they build a webserver on this architecture and it performs very well.

Both of these papers have research operating systems that demonstrate specialized "native" applications running alongside unmodified UNIX applications running on UNIX emulators. That would suggest rebuilding an operating system in one of these styles wouldn't entail throwing out all the existing software or immediately forcing a new programming model on developers who aren't ready.

Microsoft used to talk about "personalities" in NT. It had subsystems for OS/2 1.x, Win16, and Win32 that would allow apps from OS/2 (character mode), Windows 3.1 and Windows NT to run as peers on top of the NT kernel. Perhaps someday the subsystems will come back, some as OS personalities running traditional apps, and some as whole applications with resource management policy in their own right. Notepad might just run on the Win32 subsystem, but Photoshop might be interested in managing its own memory as well as disk space.

    The mid-90s were fun for OS research, weren't they? :)

  • by Low Ranked Craig ( 1327799 ) on Monday March 22, 2010 @02:15AM (#31564118)
    Please move along
  • by macraig ( 621737 ) <mark@a@craig.gmail@com> on Monday March 22, 2010 @02:35AM (#31564182)

    What's wrong with at least some operating systems doesn't even have anything to do with multiple cores per se. They're simply designing the OS and its UI incorrectly, assigning the wrong priorities to events. No event should EVER supersede the ability of a user to interact and intercede with the operating system (and applications). Nothing should EVER happen to prevent a user being able to move the mouse, access the start menu, etc., yet this still happens in both Windows and Linux distributions. That's a fucked-up set of priorities, when the user sitting in front of the damned box - who probably paid for it - gets second billing when it comes to CPU cycles.

    It doesn't matter if there's one CPU core or a hundred. It's the fundamental design priorities that are screwed up. Hell should freeze over before a user is denied the ability to interact, intercede, or override, regardless how many cores are present. Apparently hell has already frozen over and I just didn't get the memo?
