
Multicore Requires OS Rework, Windows Expert Says

alphadogg writes "With chip makers continuing to increase the number of cores they include on each new generation of their processors, perhaps it's time to rethink the basic architecture of today's operating systems, suggested Dave Probert, a kernel architect within the Windows core operating systems division at Microsoft. The current approach to harnessing the power of multicore processors is complicated and not entirely successful, he argued. The key may not be in throwing more energy into refining techniques such as parallel programming, but rather rethinking the basic abstractions that make up the operating systems model. Today's computers don't get enough performance out of their multicore chips, Probert said. 'Why should you ever, with all this parallel hardware, ever be waiting for your computer?' he asked. Probert made his presentation at the University of Illinois at Urbana-Champaign's Universal Parallel Computing Research Center."
This discussion has been archived. No new comments can be posted.

  • This is new?! (Score:5, Insightful)

    by DavidRawling ( 864446 ) on Sunday March 21, 2010 @07:50PM (#31561518)
    Oh please, this has been coming for years now. Why has it taken so long for the OS designers to get with the program? We've had multi-CPU servers for literally decades.
  • waiting (Score:5, Insightful)

    by mirix ( 1649853 ) on Sunday March 21, 2010 @07:51PM (#31561520)

    'Why should you ever, with all this parallel hardware, ever be waiting for your computer?'

    Because I/O is always going to be slow.

  • by indrora ( 1541419 ) on Sunday March 21, 2010 @07:54PM (#31561542)

    The problem is that most (if not all) peripheral hardware is not parallel in many senses. Hardware in today's computers is serial: you access one device, then another, then another. There are some cases (such as a few good emulators) which use multi-threaded emulation (sound in one thread, graphics in another), but fundamentally the biggest performance killer is the final IRQs that get called to process data. The structure of modern-day computers must change to take advantage of multicore systems.

  • Grand Central? (Score:3, Insightful)

    by volfreak ( 555528 ) on Sunday March 21, 2010 @07:55PM (#31561554)
    Isn't this the reason for Apple to have rolled out Grand Central Dispatch in Snow Leopard? If so, it seems it's not THAT hard to do - at least not that hard for a non-Windows OS.
  • Re:Because (Score:3, Insightful)

    by Anonymous Coward on Sunday March 21, 2010 @07:56PM (#31561568)

    'Why should you ever, with all this parallel hardware, ever be waiting for your computer?' he asked.

    Because it might be waiting for I/O.

    That's no reason for the entire GUI to freeze on Windows when you insert a CD.

  • Dumb programmers (Score:3, Insightful)

    by Sarten-X ( 1102295 ) on Sunday March 21, 2010 @07:57PM (#31561578) Homepage
    You wait because some programmer thought it was more important to have animated menus than a fast algorithm. You wait because someone was told "computers have lots of disk space." You wait because the engineers never tested their database on a large enough scale. You wait because programmers today are taught to write everything themselves, and to simply expect new hardware to make their mistakes irrelevant.
  • by Anonymous Coward on Sunday March 21, 2010 @07:58PM (#31561590)

    I noticed the same on my Mac. With a set of eight CPU graph meters in the menu bar, they're almost always evenly pitched anywhere from idle to 100%, with a few notable exceptions like Second Life, some Photoshop filters, and Firefox of all things.

    When booted into Win, more often than not I have two cores pegged high, and the others idle. Getting even use out of all cores is the exception, not the rule.

  • Re:Grand Central? (Score:1, Insightful)

    by larry bagina ( 561269 ) on Sunday March 21, 2010 @08:03PM (#31561632) Journal
    With .NET it should be trivial. Seems more like an education/cultural problem than a technical one.
  • Re:waiting (Score:5, Insightful)

    by DavidRawling ( 864446 ) on Sunday March 21, 2010 @08:04PM (#31561638)

    Well, with the rise of the SSD, that's no longer as much of a problem. Case in point - I built a system on the weekend with a 40GB Intel SSD. Pretty much the cheapest "known-good" SSD I could get my hands on (ie TRIM support, good controller) at AUD $172, roughly the price of a 1.5TB spinning rust store - and the system only needs 22GB including apps.

    Windows boots from end of POST in about 5 seconds. 5 seconds is not even enough for the TV to turn on (it's a Media Center box). Logon is instant. App start is nigh-on instant (I've never seen Explorer appear seemingly before the Win+E key is released). This is the fastest box I've ever seen, and it's the most basic "value" processor Intel offer - the i3-530, on a cheap Asrock board with cheap RAM (true, there's a slightly cheaper "bargain basement" CPU in the G6950 or something). The whole PC cost AUD800 from a reputable supplier, and I could have bought for $650 if I'd wanted to wait in line for an hour or get abused at the cheaper places.

    Now, Intel are aiming to saturate SATA-3 (600MBps) with the next generation(s) of SSD, or so I'm told. Based on what I've seen - it's achievable, at reasonable cost, and it's not only true for sequential read access. So if the IO bottleneck disappears - because the SSD can do 30K, 50K, 100K IO operations per second? Yeah, I think it's reasonable to ask why we wait for the computer.

    Not that I think a redesign is necessary for the current architectures - Windows, BSD, Linux all scale nicely to at least 8 or 16 logical CPUs in the server world, so the 4, 6 or 8 on the desktop isn't a huge problem. But in 5 years when we have 32 CPUs on the desktop? Maybe. Or maybe we'll just be using the same apps that only need 1 CPU most of the time, and using the other 20 CPUs for real-time stuff (Real voice control? Motion control and recognition?)

  • Re:This is new?! (Score:3, Insightful)

    by Sir_Sri ( 199544 ) on Sunday March 21, 2010 @08:06PM (#31561650)

    Yeah, but those cases, as he reasonably explains, tend to get specialized development (say, scientific computing) or separate processes - or, while he doesn't explain it, a lot of server stuff is embarrassingly (or close to) parallel.

    I can sort of see them not having a multi-processor OS just waiting for the consumer desktop: server processors are basically cache with some processor attached, whereas desktop processors are architected differently, and who knew for sure what the multicore world would look like in detail (or, more relevantly, what it will look like with 4, 8, 16 or however many cores). How will those cores be connected? How symmetric/asymmetric will they be? Right now OSes are built around two big asymmetric processors (CPU and GPU) and several smaller specialized ones (networking, sound, etc). Some of those architecture things *could* be fairly fundamental to the design you want to use, and there's no point investing huge development time trying to build software for hardware which doesn't exist and may never exist.

    I'm not sure about his proposed architecture. It doesn't sound easily backwards compatible (but I might be wrong there), and there's a certain simplicity to a 'reserve one core for the OS, application developers manage the rest themselves' sort of model, like consoles.

  • Re:This is new?! (Score:5, Insightful)

    by PhunkySchtuff ( 208108 ) <kai&automatica,com,au> on Sunday March 21, 2010 @08:07PM (#31561662) Homepage

    Since when have OS designers optimised their code to milk every cycle from the available CPUs? They haven't, they just wait for hardware to get faster to keep up with the code.

  • by Anonymous Coward on Sunday March 21, 2010 @08:10PM (#31561676)

    Not true. You wait because management fast-tracks stuff out the door without giving developers enough time to code things properly, and ignores developer concerns in order to get something out there now that will make money at the expense of the end user. I have been coding a long time and have seen this over and over. Management doesn't care about customers or let developers code things correctly - they only care about $$$$$$$

  • Re:waiting (Score:2, Insightful)

    by Jimbookis ( 517778 ) on Sunday March 21, 2010 @08:17PM (#31561714)
    Nature abhors a vacuum. It seems that no matter how much compute power you have, something will always want to snaffle it up. I have a dual Pentium D at work running WinXP with 3GB of RAM. The proprietary 8051 compiler toolset is god-awful slow (and pegs one of the CPUs) compiling even just a few thousand lines of code (tens of seconds, with lots of GUI seizures), because I think the compiler and IDE are running a crapload of inefficient Python in the backend. Don't even get me started on how long it takes to upload the frickin' binary to the target over JTAG. My debug cycles take far too long. My point is that compilation of my code base should literally be done in the blink of an eye, but the developers saw fit to use a framework that depends on brute CPU power to do relatively simple stuff. A colleague writes VB.NET apps too, and sometimes it's like being back in 1989, watching .NET draw all the elements of the GUI on the screen when you open it or change tabs. Fsck knows how this has come to pass in 2010 and why it's acceptable. So really, blame the programmers for making your beast of a PC slow and keeping you waiting around. This notion of massive language abstraction, wanting to use scripting languages ('coz it's easier, apparently), and just-in-time this and that is what is slowing computers down. And hard disks.
  • Re:This is new?! (Score:5, Insightful)

    by Jeremi ( 14640 ) on Sunday March 21, 2010 @08:19PM (#31561730) Homepage

    Why has it taken so long for the OS designers to get with the program?

    Coming up with a new OS paradigm is hard, but doable.

    Coming up with a viable new OS that uses that paradigm is much harder; because even once the new OS is working perfectly, you still have to somehow make it compatible with the zillions of existing applications that people depend on. If you can't do that, your shiny new OS will be viewed as an interesting experiment for the propeller-head set, but it won't ever get the critical mass of users necessary to build up its own application base.

    So far, I think Apple has had the most successful transition strategy: Come up with the great new OS, bundle the old OS with it, inside an emulator/sandbox, and after a few years, quietly deprecate (and then drop) the old OS. Repeat as necessary.

  • by Anonymous Coward on Sunday March 21, 2010 @08:21PM (#31561752)

    And if you knew what it did, you'd know it isn't going to help.

  • by Anonymous Coward on Sunday March 21, 2010 @08:21PM (#31561754)

    It's called Grand Central Dispatch. [wikipedia.org]

    Despite having a name and a Wikipedia page, it's not doing a good enough job.

  • by Threni ( 635302 ) on Sunday March 21, 2010 @08:23PM (#31561772)

    Windows explorer sucks. It always just abandons copies after a fail - even if you're moving thousands of files over a network. Yes, you're left wondering which files did/didn't make it. It's actually easier to sometimes copy all the files you want to shift locally, then move the copy, so that you can resume after a fail. It's laughable you have to do this, however.

    But it's not a concurrency issue, and neither, really, are the first 2 problems you mention. They're also down to Windows Explorer sucking.

  • by macemoneta ( 154740 ) on Sunday March 21, 2010 @08:25PM (#31561798) Homepage

    The largest single system image I'm aware of runs Linux on a 4096-processor SGI machine with 17TB of RAM [google.com]. Maybe he means that Windows needs rework?

  • by GIL_Dude ( 850471 ) on Sunday March 21, 2010 @08:25PM (#31561802) Homepage
    Are you running a 9-year-old version of OS X too, or are you comparing a two-generation-old Windows version to a nice new Mac version? It really sounds like you are comparing apples (snicker) to oranges. After all, both Vista and Windows 7 have no problem running for a long, long time between reboots and don't get slow during that time.
  • Re:This is new?! (Score:5, Insightful)

    by Cryacin ( 657549 ) on Sunday March 21, 2010 @08:26PM (#31561812)
    For that matter, since when have software vendors been willing to pay architects/designers/engineers etc to optimise their software to milk every cycle from the available CPUs and provide useful output with the minimum of effort? They don't, they just wait for hardware to get faster to keep up with code.

    The only company that I have personally been exposed to that gives half a hoot about efficient performance is Google. It annoys me beyond belief that other companies think it's acceptable to make the user wait for minutes whilst the system recalculates data derived from a large data set, and doing those calculations multiple times just because a binding gets invoked.
  • by Sc4Freak ( 1479423 ) on Sunday March 21, 2010 @08:27PM (#31561822)

    I'm not sure I get it - GCD just looks like a threadpool library. Windows has had a built-in threadpool API [microsoft.com] that's been available since Windows 2000, and it seems to do pretty much the same thing as GCD.
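    A minimal sketch of the thread-pool style the parent describes, assuming the legacy Win32 QueueUserWorkItem call (available since Windows 2000); the work function and inputs are made up for illustration:

        // Sketch: hand work items to the OS-managed thread pool and let it
        // decide which threads (and cores) run them.
        #include <windows.h>
        #include <cstdio>

        static DWORD WINAPI SquareWork(LPVOID param) {
            int n = *static_cast<int*>(param);
            std::printf("square(%d) = %d on thread %lu\n", n, n * n, GetCurrentThreadId());
            return 0;
        }

        int main() {
            static int inputs[] = {1, 2, 3, 4};
            for (int& n : inputs)
                QueueUserWorkItem(SquareWork, &n, WT_EXECUTEDEFAULT);  // queue to the pool
            Sleep(1000);  // crude wait so the pool threads can finish before exit
            return 0;
        }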

  • Re:Grand Central? (Score:2, Insightful)

    by jonwil ( 467024 ) on Sunday March 21, 2010 @08:31PM (#31561850)

    The overhead of systems like .NET is part of WHY we have a problem with excessive CPU usage in the first place.

  • by Grem135 ( 1440305 ) on Sunday March 21, 2010 @08:32PM (#31561862)
    Wow, another Mac fanboy comparing his nice shiny new Mac to an outdated and twice-replaced operating system. I bet he will say his iPad will outperform a netbook too. Though the netbook can multitask, run virtually any Windows app, has Wi-Fi, you can connect an external DVD drive and (gasp) it can be a color e-book reader, just like the iPad!!
  • by Kenz0r ( 900338 ) on Sunday March 21, 2010 @08:42PM (#31561950) Homepage
    I wish I could mod you higher than +5, you just summed up some of the things that bother me most about the OS that is somehow still the most popular desktop OS in the world.

    To anyone using Windows (XP, Vista or 7) right now, go ahead and open up an Explorer window, and type in ftp:// [ftp] followed by any URL.
    Even when it's a name that obviously won't resolve, or an IP on your own local network for a machine that doesn't exist, this'll hang your Explorer window for a couple of solid seconds. If you're a truly patient person, try doing that with a name that does resolve, like ftp://microsoft.com [microsoft.com]. Better yet, try stopping it... say goodbye to your explorer.exe.

    This is one of the worst user experiences possible, all for a mundane task like using ftp. And this has been present in Windows for what, a decade?
  • Re:Grand Central? (Score:4, Insightful)

    by jasmusic ( 786052 ) on Sunday March 21, 2010 @09:00PM (#31562098)
    I'm thinking you don't have much experience with .NET. In my projects it has always run comparably to natively compiled code when I write my code with the mindset of a C++ programmer and not a VB one.
  • by hitmark ( 640295 ) on Sunday March 21, 2010 @09:07PM (#31562170) Journal

    or basically replaces Windows with something else.

  • by shutdown -p now ( 807394 ) on Sunday March 21, 2010 @09:08PM (#31562180) Journal

    The trick with GCD is that it is somewhat more high-level than a simple thread pool - it operates in terms of tasks, not threads. The difference is that tasks have explicit dependencies on other tasks - this lets the scheduler be smarter about allocating cores.
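    A rough sketch of that task-oriented style, assuming libdispatch's plain C function-pointer interface (the dispatch_*_f calls) rather than Apple's block syntax; the load_part/combine tasks are hypothetical:

        // Sketch: four independent "load" tasks plus one "combine" task that is
        // declared to run after the whole group, not after any particular thread.
        #include <dispatch/dispatch.h>
        #include <cstdio>
        #include <cstdlib>

        static void load_part(void* ctx) {
            std::printf("loading part %ld\n", reinterpret_cast<long>(ctx));
        }

        static void combine(void*) {
            std::printf("all parts loaded, combining\n");
            std::exit(0);
        }

        int main() {
            dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
            dispatch_group_t group = dispatch_group_create();

            for (long i = 0; i < 4; ++i)            // submit tasks; GCD picks the cores
                dispatch_group_async_f(group, q, reinterpret_cast<void*>(i), load_part);

            dispatch_group_notify_f(group, q, nullptr, combine);  // runs once the group drains
            dispatch_main();                        // park main; tasks run on pool threads
        }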

  • by judeancodersfront ( 1760122 ) on Sunday March 21, 2010 @09:10PM (#31562210)
    The author is talking about a complete OS redesign, not a new threading system.
  • Re:This is new?! (Score:5, Insightful)

    by jc42 ( 318812 ) on Sunday March 21, 2010 @09:19PM (#31562278) Homepage Journal

    Since when have OS designers optimised their code to milk every cycle from the available CPUs?

    This isn't just an OS-level problem. It's a failure among programmers of all sorts.

    I've been involved in software development since the late 1970s, and from the start I've heard the argument "We don't have to worry about code speed or size, because today's machines are so fast and have so much memory." This was just as common back when machines were 1,000 times slower and had 10,000 times less memory than today.

    It's the reason for Henry Petroski's famous remark that "The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry."

    Programmers respond to faster cpu speed and more memory by making their software use more cpu cycles and more memory. They always have, and there's no sign that this is going to change. Being efficient is hard, and you don't get rewarded for it, because managers can't measure it. So it's better to add flashy eye candy and more features, which people can see.

    If we want efficient code, we have to figure out ways to reward the programmers that write it. I don't see any sign that people anywhere are interested in doing this. Anyone have suggestions for how it might be done?

  • Re:This is new?! (Score:5, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Sunday March 21, 2010 @09:20PM (#31562282) Journal
    I doubt that it's just google. I suspect the following:

    There are (in broad strokes, and excluding the embedded market) two basic axes on which you have to place a company or a company's software offering in order to predict its attitude with respect to efficiency.

    One is problem scale. If a program is a once-off, or an obscure niche thing, or just isn't expected to have to cope with very large data sets, putting a lot of effort into making it efficient will likely not be a priority. If the program is extremely widely distributed, or is expected to cope with massive datasets, efficiency is much more likely to be considered important (if widely distributed, the cost of efficient engineering per unit falls dramatically; if expected to cope with massive datasets, the amount of hardware cost and energy cost avoided becomes significant. Tuning a process that eats 50% of a desktop CPU into one that eats 40% probably isn't worth it. Tuning a process that runs on 50,000 servers into one that runs on 40,000 easily could be).

    The second is location: if a company is running their software on their own hardware, and selling access to whatever service it provides (search engine, webmail, whatever), their software's efficiency or inefficiency imposes a direct cost on them. Their customers are paying so much per mailbox, or so much per search query, so they have an incentive to use as little computing power as possible to deliver that product. If a company is selling boxed software, to be run on customer machines, their efficiency incentives are indirect. This doesn't mean "nonexistent" (a game that only runs on $2,000 enthusiast boxes is going to lose money, nobody would release such a thing; among enthusiasts, browser JS benchmarks are a point of contention), but it generally does mean "secondary to other considerations". Customers, as a rule, are more likely to use slow software with the features they want, or slow software that released first and they became accustomed to, than fast software that is missing features or requires substantial adjustment on their part. Shockingly enough, software developers act on this fact.

    On these axes, you would strongly suspect that Google would be efficiency oriented. Their software runs on a grand scale, and most of it runs on their own servers, with the rest competing against various desktop incumbents, or not actually all that dramatically efficient (nothing wrong with Google Earth or SketchUp, but nothing especially heroic, either). However, you would expect roughly the same of any entity similarly placed on those axes.
  • by Jane Q. Public ( 1010737 ) on Sunday March 21, 2010 @09:29PM (#31562356)
    That is simply not true. In fact, that is what Grand Central Dispatch (Snow Leopard, OS X 10.6) is all about. The OS handles the threads, not the programmer.

    Not only does it work, it is the wave of the future. Eventually, all machines and OSes will work that way because no programmer wants to jump through outrageous hoops to deal with 128 cores. Or even 4.
  • by gig ( 78408 ) on Sunday March 21, 2010 @09:31PM (#31562378)

    I love how Microsoft can come along in 2010 and with a straight face say it's about time they took multiprocessing seriously. Or say it's about time we started putting HTML5 features into our browser. And we're finally going to support the ISO audio video standard from 2002. And by the way, it's about time we let you know that our answer to the 2007 iPhone will be shipping in 2011. And look how great it is that we just got 10% of our platform modernized off the 2001 XP version! And our office suite is just about ready to discover that the World Wide Web exists. It's like they are in a time warp.

    I know they have product managers instead of product designers, and so have to crib design from the rest of the industry, which leaves them years behind; but on engineering stuff like multiprocessing, you'd expect them to at least have read the memo from Intel in 2005 about single cores not scaling and how the future was going to be 128-core chips before you know it.

    I guess when you recognize that Windows Vista was really Windows 2003 and Windows 7 is really Windows 2005 then it makes some sense. It really is time for them to start taking multiprocessing seriously.

    I am so glad I stopped using their products in 1999.

  • Re:This is new?! (Score:5, Insightful)

    by Brian Gordon ( 987471 ) on Sunday March 21, 2010 @09:34PM (#31562396)

    Maybe it's not a question of whether the code is efficient. Maybe it's a question of how much you're asking the code to do. It's no surprise that hardware struggles to make gains against performance demands when software developers are adding on nonsense like compositing window managers and sidebar widgets. I'm enjoying Moore's law without any cancellation... just run a sane environment. Qt or GTK, not both, if you're running an X desktop. Nothing other than IM in the system tray. No "upgrade fever" that makes people itch for Windows Media Player 14 when older versions work fine and mplayer and Winamp work better.

  • by hitmark ( 640295 ) on Sunday March 21, 2010 @09:49PM (#31562508) Journal

    so basically a big pile of C64s wired to a single keyboard and screen, via a BIG KVM switch?

  • Re:waiting (Score:3, Insightful)

    by BikeHelmet ( 1437881 ) on Sunday March 21, 2010 @09:52PM (#31562530) Journal

    Seems pretty good to me.

    If true.

  • Re:This is new?! (Score:3, Insightful)

    by jo_ham ( 604554 ) <joham999@noSpaM.gmail.com> on Sunday March 21, 2010 @09:59PM (#31562586)

    The beachball of rumination is there to remind you to book your holiday to the coast.

    It used to be a feature of 10.2 and earlier - I only see it occasionally in later versions, but it is still there.

  • Re:This is new?! (Score:3, Insightful)

    by skids ( 119237 ) on Sunday March 21, 2010 @10:12PM (#31562690) Homepage

    No glory in it either. Even when you're doing it for free, nobody seems to care if you produce an optimization.

    Plus, there are many more coders who have limited depth of understanding of OS interfacing, than there are coders who would go in after them to optimize. Heck, forget multicore -- how many applications fail to use vector units?

    Sometimes optimizations get dropped from code as too difficult to maintain. Rarely, enough of them get collected in one spot to make a library out of them. Even more rarely, those libraries actually get used.

    And it will stay that way until the consumer starts showing a preference for performance over features.

  • Re:This is new?! (Score:3, Insightful)

    by not already in use ( 972294 ) on Sunday March 21, 2010 @10:12PM (#31562692)

    An iPhone 3GS with a 600MHz CPU outperforms a Nexus One with a 1000MHz CPU.

    The reason the 3GS "outperforms" the N1 is because the N1 has more than twice the pixels of a 3GS. If the N1 had to drive the iPhone's resolution, it would wipe the floor with the iPhone's ass, all while supporting user app multitasking.

  • Re:This is new?! (Score:2, Insightful)

    by Anonymous Coward on Sunday March 21, 2010 @10:29PM (#31562818)

    Grand Central is not a novel concept, similar libraries like OpenMP have been around for years on *nix/Windows.

    Also, why are you being an iPhone shill in a discussion about multicore processing? The iPhone OS doesn't even really support multitasking, and runs on a mobile device with a single CPU.
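    For the OpenMP comparison above, a minimal sketch of what that style looks like in practice, assuming a compiler with OpenMP support (e.g. built with -fopenmp); the summing loop is just an illustrative workload:

        // Sketch: one pragma spreads the loop across however many cores exist;
        // the reduction clause merges the per-thread partial sums.
        #include <omp.h>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<double> data(1 << 20, 1.0);
            double sum = 0.0;

            #pragma omp parallel for reduction(+:sum)
            for (long i = 0; i < static_cast<long>(data.size()); ++i)
                sum += data[i];

            std::printf("sum = %.1f using up to %d threads\n", sum, omp_get_max_threads());
        }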

  • Re:This is new?! (Score:5, Insightful)

    by Mr. Freeman ( 933986 ) on Sunday March 21, 2010 @10:43PM (#31562908)
    Because Google ain't crunching data sets on fucking mobile phones. They're optimizing their servers and the applications that run on those servers, because Google is so damn big that a fraction of a percent increase in efficiency translates into huge amounts of money saved through less wasted CPU time. Mobile phones aren't a part of Google.

    If your phone runs a little less efficiently, then no one gives a damn. They want to make their phones easy to program for, which generally conflicts with efficiency.
  • Re:This is new?! (Score:1, Insightful)

    by Anonymous Coward on Sunday March 21, 2010 @10:59PM (#31563014)

    If we want efficient code, we have to figure out ways to reward the programmers that write it. I don't see any sign that people anywhere are interested in doing this. Anyone have suggestions for how it might be done?

    Simple. When testing the program, put the programmer's nuts in a vise. Give the vise a quarter-turn for every second you spend waiting for the program to respond to your input.

  • Re:This is new?! (Score:4, Insightful)

    by jc42 ( 318812 ) on Sunday March 21, 2010 @11:53PM (#31563356) Homepage Journal

    Hey, if you liked programming for a one-byte machine, maybe you should join the quantum computer research effort. They're just now looking forward to the creation of their first 8-bit "computer" in the very near future. ;-)

    Of course, you can do a bit more computing with 8 Q-bits than you can with 8 of the more mundane bits that the rest of us are using.

  • by Nadaka ( 224565 ) on Monday March 22, 2010 @12:43AM (#31563720)

    You may not have to write your code around threading, but you then have to write it around Grand Central Dispatch. Having GCD available is going to do absolutely nothing for a program that was not written with GCD in mind. It's changing one set of problems/features for another. Writing multi-threaded software isn't exceptionally hard; I have done a lot of it. It may take a lot less code with GCD, but you also give up control. Even using GCD with code blocks, you still have to deal with the problems that can be a pain in the ass: things like concurrency, blocking and munging data.
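    A small sketch of that caveat, again assuming libdispatch's C interface: GCD will happily schedule the tasks, but the shared counter is only safe because the programmer chose to funnel it through a serial queue (the queue label and task names are made up):

        // Sketch: concurrent tasks all push their updates through one serial
        // queue, so the increments never race; GCD itself does not enforce this.
        #include <dispatch/dispatch.h>
        #include <cstdio>
        #include <cstdlib>

        static long counter = 0;

        static void bump(void*) { ++counter; }   // safe only because the queue is serial

        static void report(void*) {
            std::printf("counter = %ld\n", counter);
            std::exit(0);
        }

        int main() {
            dispatch_queue_t serial = dispatch_queue_create("example.counter", NULL);
            dispatch_group_t group = dispatch_group_create();

            for (int i = 0; i < 10000; ++i)
                dispatch_group_async_f(group, serial, nullptr, bump);

            dispatch_group_notify_f(group, serial, nullptr, report);
            dispatch_main();
        }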

  • Re:waiting (Score:3, Insightful)

    by node 3 ( 115640 ) on Monday March 22, 2010 @12:57AM (#31563784)

    The question wasn't "why should your CPU have to wait", it was "why should *you* have to wait". At speeds approaching 3Gb/s, I think it's fair to say, as the person you replied to actually did say, "well, with the rise of the SSD, that's no longer as much of a problem."

    The trick to understanding computing is that all computing really *is* at its heart a throughput problem.

    The trick to understanding computers is to realize that all computing really is, at its heart, a human problem. It really doesn't matter if the CPU has to wait a trillion cycles in between receiving each byte of data, if the computer responds in an apparently instantaneous manner for the person using it, everything is working just fine.

    I only care abstractly about how long my CPU has to wait. I do care directly about how much I have to wait.

  • by RzUpAnmsCwrds ( 262647 ) on Monday March 22, 2010 @01:04AM (#31563830)

    Windows Explorer no longer kills network transfers after a failure as of Windows Vista.

    Maybe some of the people complaining about Windows should stop using a version that's 9 years old (XP). Red Hat 7.2 isn't particularly great by today's standards either.

  • Re:This is new?! (Score:3, Insightful)

    by tsotha ( 720379 ) on Monday March 22, 2010 @01:42AM (#31563988)

    It's not a failure among programmers at all - it's a business decision. The main reason software is less efficient is the costs are so heavily tilted toward software development instead of hardware. For the vast majority of business applications companies are using generalized frameworks to trade CPU cycles and memory for development time.

    Even in terms of development style, it just isn't worth it to optimize your code if it's going to substantially increase development time. People are expensive. Time is expensive. Hardware is not.

    Now, if you're Microsoft, or Blizzard, or Google, then the equation changes, since your code is running on millions of CPUs. But that's not the normal case. If I'm writing a web service so the accounting software at headquarters can tell how many widgets are in my company's warehouse, it really doesn't matter how inefficient the code is (within reason) as long as it works reliably and is easy to maintain. What my boss really wants is for me to finish as quickly as possible and move on to the next task.

  • by Anonymous Coward on Monday March 22, 2010 @01:56AM (#31564042)

    There is no information in the article. He asks, if we have multi-core CPUs, why would we ever be waiting for something to happen on our PC? Well, cuz the slowest thing in the PC is the hard drive... and most people only have one. Prioritized I/Os don't make that much of a difference, because even if my disk was idle and a low-priority I/O (Vista has these BTW, 2 I/O priorities) moves the disk head, and then my higher-priority I/O comes in, it's been delayed because the head has to move back.

    Disk I/O is the biggest bottleneck, and redesigning Linux or Windows isn't going to solve that problem. L1/L2/L3...Lx cache -> memory -> disk: as long as we have that hierarchy and there is such a disparity between the layers, no OS rewrite will help.

    That's why massively parallel systems are so radically different from general-purpose computers and usually only useful on specific tasks (e.g. SIMD implementations etc.)

    Research in highly parallel IO is addressing the issue, but won't help home/small business.

    No wonder this dude's peers at MS disagree w/him.

    You will have to design your apps and supporting hardware to take advantage of whatever parallelism you have available, and break up your workload intelligently.

    Yeah you'll have to pay people who understand how to do concurrent programming properly.

  • Re:This is new?! (Score:3, Insightful)

    by mjwx ( 966435 ) on Monday March 22, 2010 @02:00AM (#31564060)

    The reason the 3gs "outperforms" the N1 is because the N1 has more than twice the pixels of a 3GS. If the N1 had to drive the iphones resolution, it would wipe the floor with the iphones ass, all while supporting user app multitasking.

    What many people are forgetting is that the N1 has no GPU, it requires the CPU to do all the rendering, which makes the rendering a little slower.

    We are better off comparing it to the Motorola Milestone (Droid in the US) which has a GPU.

  • by exomondo ( 1725132 ) on Monday March 22, 2010 @02:06AM (#31564086)

    The issue is who does the thread management, the programmer or the OS?

    The issue is working out how to break up inherently serial problems into smaller parallel ones. Threading is not difficult; the difficulty comes in parallelising the problem, and that must be done regardless of who does the thread management. A sketch of that decomposition step follows below.
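    A minimal sketch of that decomposition step, assuming standard C++ with std::async; the chunked sum is just a stand-in for a problem whose iterations happen to be independent:

        // Sketch: the hard part is choosing the split; once the chunks are
        // independent, any thread mechanism (here std::async) can run them.
        #include <future>
        #include <numeric>
        #include <vector>
        #include <cstdio>

        int main() {
            std::vector<int> data(1000000, 1);
            const int chunks = 4;
            const std::size_t step = data.size() / chunks;

            std::vector<std::future<long long>> parts;
            for (int c = 0; c < chunks; ++c) {
                auto first = data.begin() + c * step;
                auto last  = (c == chunks - 1) ? data.end() : first + step;
                parts.push_back(std::async(std::launch::async, [first, last] {
                    return std::accumulate(first, last, 0LL);   // each chunk is independent
                }));
            }

            long long total = 0;
            for (auto& p : parts)
                total += p.get();   // recombining the results is the serial part
            std::printf("total = %lld\n", total);
        }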

  • by Anonymous Coward on Monday March 22, 2010 @02:25AM (#31564150)

    App programmers do not deserve all the blame. The tools for multithreaded development are primitive and difficult to use correctly. It is difficult and expensive to make good reliable MT software.

    Many years ago, if I wanted a list of data I would manually malloc, memset and free it. There were lots of bugs because of the tedious management of memory. Now in C++ I use std::vector, or in Python I write a_list = [], and POOF! I don't need to keep track of ANY details.

    The state of multithreading libraries and tools must evolve to the point where normal (i.e. not VERY skilled or creative) developers can handle them without much thinking. This may require a paradigm shift similar to the object-oriented one in the eighties.

    Objects and exceptions and stack unwinding seem almost obvious to us, but some people had to pave the way for their use. We need some skilled computer scientists to work with skilled library developers to make a new paradigm for developing MT apps. When these as-yet-undiscovered (or unpopularized) paradigms have been developed, we had better hope that the big players have the incentive and capacity to implement them in their current systems.

    I for one welcome our multithreaded overlords... when they get here.
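    One later development points in the direction this comment hopes for (offered only as a sketch; it was not available in 2010): C++17's parallel algorithms let ordinary code opt into multithreading with a single execution-policy argument, much as std::vector hid manual memory management:

        // Sketch: the execution-policy argument is the only change from the
        // ordinary serial call; the library decides how to split the work.
        #include <algorithm>
        #include <execution>
        #include <vector>
        #include <cstdio>

        int main() {
            std::vector<double> v(1 << 22, 2.0);

            std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
                           [](double x) { return x * x; });

            std::printf("v[0] squared = %.1f\n", v[0]);
        }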

  • by macraig ( 621737 ) <mark@a@craig.gmail@com> on Monday March 22, 2010 @02:35AM (#31564182)

    What's wrong with at least some operating systems doesn't even have anything to do with multiple cores per se. They're simply designing the OS and its UI incorrectly, assigning the wrong priorities to events. No event should EVER supersede the ability of a user to interact and intercede with the operating system (and applications). Nothing should EVER happen to prevent a user being able to move the mouse, access the start menu, etc., yet this still happens in both Windows and Linux distributions. That's a fucked-up set of priorities, when the user sitting in front of the damned box - who probably paid for it - gets second billing when it comes to CPU cycles.

    It doesn't matter if there's one CPU core or a hundred. It's the fundamental design priorities that are screwed up. Hell should freeze over before a user is denied the ability to interact, intercede, or override, regardless how many cores are present. Apparently hell has already frozen over and I just didn't get the memo?

  • Re:This is new?! (Score:5, Insightful)

    by IamTheRealMike ( 537420 ) on Monday March 22, 2010 @02:35AM (#31564186)

    Why Java for Android? This is a good question. There are several reasons (that the Android team have discussed).

    One is that ARM native code is bigger, size-wise, than Dalvik VM bytecode. So it takes up more memory. Unlike the iPhone, Android was designed from the start to multi-task between lots of different (user installed) apps. It's quite feasible to rapidly switch between apps with no delay on Android, and that means keeping multiple running programs in RAM simultaneously. So trading off some CPU time for memory is potentially a good design. Now that said, Java has some design issues that make it more profligate with heap memory than it maybe needs to be (eg utf16 for strings) so I don't have a good feel for whether the savings are cancelled out or not, but it's a justification given by the Android team.

    Another is that Java is dramatically easier to program than a C-like language. I mean, incredibly monstrously easier. One problem with languages like C++ or Objective-C is that lots of people think they understand them but very few programmers really do. Case in point - I have an Apple-mad friend who ironically programs C# servers on Windows for his day job. But he figured he'd learn iPad development. I warned him that unmanaged development was a PITA but he wasn't convinced, so I showed him a page that discussed reference counting in ObjC (retain/release). He read it and said "well that seems simple enough" - doh. Another one bites the dust. I walked him through cycle leaks, ref leaks on error paths (no smart pointers in objc!), and some basic thread safety issues. By the end he realized that what looked simple really wasn't at all.

    By going with Java, Android devs skip that pain. I'm fluent in C++ and Java, and have used both regularly in the past year. Java is reliably easier to write correct code in. I don't think it's unreasonable to base your OS on it. Microsoft has moved a lot of Windows development to .NET over the last few years for the same reasons.

    Fortunately, being based on Java doesn't mean Android is inherently inefficient. Large parts of the runtime are written in C++, and you can write parts of your own app in native code too (eg for 3D graphics). You need to use Java to use most of the OS APIs but you really shouldn't be experiencing perf problems with things like gui layout - if you are, that's a hint you need to simplify your app rather than try to micro-optimize.
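    An illustrative sketch of the reference-count cycle leak walked through above, transposed into C++ shared_ptr terms rather than Objective-C retain/release (the Node type is hypothetical):

        // Sketch: two reference-counted objects that own each other are never
        // freed, because each keeps the other's count above zero.
        #include <memory>
        #include <cstdio>

        struct Node {
            std::shared_ptr<Node> other;   // a weak reference here would break the cycle
            ~Node() { std::puts("Node destroyed"); }
        };

        int main() {
            auto a = std::make_shared<Node>();
            auto b = std::make_shared<Node>();
            a->other = b;
            b->other = a;     // cycle formed
            // Neither destructor runs when a and b go out of scope; nothing is printed.
        }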

  • Re:Duh (Score:4, Insightful)

    by keeboo ( 724305 ) on Monday March 22, 2010 @03:40AM (#31564442)

    'Why should you ever, with all this parallel hardware, ever be waiting for your computer?'

    For a lot of problems, for the same reason that some guy who just married 8 brides will still have to wait for his baby.

    Of course, he'll be able to get 8 babies at once, assuming none of the processes crash during the computation.

    That improves bandwidth, but not latency: almost 1 baby/month, but 9 months of latency.
    The guy could try interleaving the pregnancies, in order to get the illusion of lower latency.

  • Re:This is new?! (Score:2, Insightful)

    by VulpesFoxnik ( 1493687 ) on Monday March 22, 2010 @06:53AM (#31565082)

    I think it's more of an architecture problem. The x86 is a horrible creature with an inefficient language. What x86 does in a huge set of instructions, ARM can often do in two-thirds as many. All more MHz does is eat more power and generate more heat. The answer is smarter digital languages.

  • Re:This is new?! (Score:3, Insightful)

    by steelfood ( 895457 ) on Monday March 22, 2010 @02:39PM (#31573020)

    MS did the same during the transition to 32-bit. They included a 16-bit DOS emulator and had it run transparently. They did the same for the transition to 64-bit. It was so successful and so transparent that a lot of IT professionals didn't even know it was happening in the background.

    Unlike Apple though, they never removed it. Sure, it resulted in a major security hole, but it also let legacy custom business apps run far longer than they otherwise would have been able to.

    I suspect if they were ever to make another large transition, they'd do the same thing they've been doing for years.
