IBM's Chief Architect Says Software is at Dead End

j2xs writes "In an InformationWeek article entitled 'Where's the Software to Catch Up to Multicore Computing?' the Chief Architect at IBM gives some fairly compelling reasons why your favorite software will soon be rendered deadly slow because of new hardware architectures. Software, she says, just doesn't understand how to do work in parallel to take advantage of the 16, 64, or 128 cores on new processors. Intel just stated in an SD Times article that 100% of its server processors will be multicore by the end of 2007. We will never, ever return to single-processor computers. Architect Catherine Crawford goes on to discuss some of the ways developers can harness the 'tiny supercomputers' we'll all have soon, and some of the applications we can apply this brute force to."
  • by Salvance ( 1014001 ) * on Tuesday January 30, 2007 @12:53PM (#17815262) Homepage Journal
    The argument that software will get slower assumes that most consumer software will continue to have additional CPU requirements without being written to use multiple cores. This doesn't make sense. The average consumer uses an Office product, e-mail, and a browser. None of these use anywhere close to 100% of the CPU for very long even on a Pentium 3, let alone on a 2GHz+ core in a multi-core processor.

    Workstation computing will suffer some until software vendors catch up, but this is already happening (e.g. most CAD, animation, and video-processing vendors are starting to ship multi-core-optimized software). Sure, some apps will continue to be single-threaded, but eventually, who would buy them? Software vendors aren't dumb.

    Games will probably speed up significantly as well. Imagine the possibilities of a game engine where each AI character gets 100% of its own core. Game designers aren't going to sit around designing games that run on single-core engines; they always push the boundaries and will continue to do so.
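    A minimal sketch of that one-core-per-agent idea with POSIX threads (agent_think and its body are invented for illustration; a real engine would reuse a thread pool rather than spawn threads every frame):

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_AGENTS 4

    /* Hypothetical AI agent: each one plans its next move independently. */
    static void *agent_think(void *arg)
    {
        int id = *(int *)arg;
        /* pathfinding, decision making, etc. would go here */
        printf("agent %d finished thinking\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t workers[NUM_AGENTS];
        int ids[NUM_AGENTS];

        /* One thread per agent; the OS spreads them across available cores. */
        for (int i = 0; i < NUM_AGENTS; i++) {
            ids[i] = i;
            pthread_create(&workers[i], NULL, agent_think, &ids[i]);
        }
        /* Wait for every agent before rendering the frame. */
        for (int i = 0; i < NUM_AGENTS; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }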
  • by maynard ( 3337 ) on Tuesday January 30, 2007 @12:55PM (#17815292) Journal
    A New Kind of Science [wikipedia.org]. Converting a range of standard CS algorithms into cellular automata networks is the very solution our brains use: a combination of message passing and feedback loops. If we want our computers to scale in parallel, we might want to look at how biology has solved the problem. A lot of people laughed at Wolfram when he initially published that book. I think he might yet have the last laugh.
  • There are so many things wrong with this statement that it's not even funny. You can't just recompile a program for multiprocessing. When we write computer programs, it is generally expected that later computations will depend on earlier ones. Naïvely parallelizing the operations gives bad results. For example:

    int a = 10;
    int b = 20;
    int c = 30;
     
    a = b + c;
    c = a + b;
    If you run the additions in a serial fashion, you get a = 50 and c = 70. If you run them in parallel, you get a = 50 and c = 30. Whoops.

    Of course, this being a simplified example, it is possible to parallelize the instructions above. It requires more memory, but we can do the following instead:

    int a = 10;
    int b = 20;
    int c = 30;
    int d;
    int e;
     
    d = b + c;
    e = b + c + b;
    If the first addition is run on a separate core from the second, we'll get a net increase in performance even though we used more computational cycles to compute the results. There's just one catch. CPU designers have known about these micro-optimizations for decades, and have been designing microprocessors with an ability called "superscalar execution" for almost as long. Superscalar designs make use of CPU execution units not currently occupied by other instructions, extracting a simple form of instruction-level parallelism from a single core. In this way, the CPU can chew on a larger number of instructions in parallel than it could if it fully serialized the execution, which makes explicit multithreading mostly redundant for these situations.
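    To make that concrete, here is the renamed version forced onto two explicit threads (a toy POSIX-threads sketch; in real code the thread overhead would dwarf two additions):

    #include <pthread.h>
    #include <stdio.h>

    static int b = 20, c = 30;
    static int d, e;

    static void *add1(void *arg) { (void)arg; d = b + c;     return NULL; }
    static void *add2(void *arg) { (void)arg; e = b + c + b; return NULL; }

    int main(void)
    {
        pthread_t t1, t2;

        /* The renaming removed the dependency, so execution order no longer matters. */
        pthread_create(&t1, NULL, add1, NULL);
        pthread_create(&t2, NULL, add2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("d = %d, e = %d\n", d, e);   /* d = 50, e = 70, every time */
        return 0;
    }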
  • I keep thinking about this as well. Really, sitting down and trying to write code that runs optimally on multiple processors is a huge headache, and, frankly, judging by the code I've seen in my life, most coders aren't up to it... It would be far better to put a VM or a specialty compiler between the code and the system, one that is capable of taking regular code and making it more multi-core friendly.

    Sure it'll add overhead, but the number of cores we're working with at a time is going to keep changing, and the only way not to write immediately obsolete code is to have an intermediate control layer that is smart enough to translate.
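    Something like OpenMP already gestures at that intermediate-layer idea: you write an ordinary loop, annotate it, and the compiler and runtime decide how to split it across however many cores exist. A sketch, assuming the iterations are independent:

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double data[N];

        /* The pragma is the whole interface: the runtime picks a thread
           count to fit the machine, so the code isn't tied to any core count. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            data[i] = i * 0.5;

        printf("ran with up to %d threads\n", omp_get_max_threads());
        return 0;
    }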
  • And if thread management is good/fast enough, then there's no reason things like GUI widgets can't run on their own thread/core. (I doubt the spell checker in Firefox runs in its own thread, though.)
  • by Pxtl ( 151020 ) on Tuesday January 30, 2007 @01:03PM (#17815420) Homepage
    Even in the worst-case scenario, minimally-threaded workstation software still allows for manual multitasking - if the render loop of your 3D-modelling app is only using a small amount of the available processor, then at least the other cores remain available for continuing to work in the main app.

    The real problem is that procedural languages are fugly for working on this stuff. Even the "modern" commercial languages like Java/DotNet are still somewhat cumbersome in the world of threading, compared to other languages where the threading metaphors sit deeper in the logic (or more mutable languages, like Lisp, where creating new core metaphors is trivial).
  • Forgive my ignorance, but can't the OS just make each new app run on its own core? That would probably give us some overall apparent-speed-of-computer increases, without having to completely modify all existing stuff.
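    For what it's worth, the OS scheduler already spreads separate processes across cores, and on Linux a process can even be pinned to one core by hand - a minimal sketch using sched_setaffinity:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        /* Pin this process to core 1; the scheduler handles everything else. */
        CPU_ZERO(&set);
        CPU_SET(1, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            perror("sched_setaffinity");
        else
            printf("now running on core 1 only\n");
        return 0;
    }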
  • by theStorminMormon ( 883615 ) <theStorminMormon@@@gmail...com> on Tuesday January 30, 2007 @01:09PM (#17815500) Homepage Journal
    How does current virtualization software fare with multi-core architectures? I mean, if the hype is to be even somewhat believed, even SMBs will be able to afford off-the-shelf "supercomputers". Of course, relative to the real supercomputers, these machines will be slow. But relative to actual requirements, they should be, well, supercomputers.

    Suddenly the old "everyone's moving to thin clients/mainframes/dumb terminals/etc." story (as recently as today: http://it.slashdot.org/article.pl?sid=07/01/30/1340210 [slashdot.org]) becomes interesting again. If virtualization software works, then we don't need to wait for a golden age of multi-threaded software development. SMBs (and large companies too) will be able to deploy dumb terminals linked to multi-core monsters, install virtualization software to get as many servers as they need, offload individual instances of programs to the various cores as is natural, and voilà: now you can actually realize all those TCO savings the thin-client crowd has been raving about all these years.

    This transition to multi-core is what we need because, as far as I know, actually getting individual programs to run nicely on multiple cores is (with notable exceptions) something we're not really ready for yet in terms of development.

    -stormin
  • by Overzeetop ( 214511 ) on Tuesday January 30, 2007 @01:12PM (#17815542) Journal
    You are correct. And given the multitude of things that modern OSes need to do to "help us", we need these cores. I wish my laptop could be upgraded to a multi-core system, as there are too many background tasks that bog down the system. Having a processor (or two) dedicated to them would significantly increase the responsiveness of my system.

    A decade ago I had a dual PP200 that was one of the nicest machines I had ever run. I ran some unruly apps at the time, and having an extra idle processor to cut those processes off at the knee without rebooting was a nice benefit. Nothing was multi-threaded back then, but having two processors was still valuable.
  • by MysteriousPreacher ( 702266 ) on Tuesday January 30, 2007 @01:16PM (#17815608) Journal
    I'm just dipping my toe in to the world of programming so bear with me if this is silly.

    Couldn't programs inherit the benefits of a multi-core system if the APIs they call are written to distribute the work across the cores? I know this probably isn't optimal, but there must be some benefit from it.

    I could take an old library (QuickDraw, for example) and totally rewrite it to take advantage of a new architecture, as long as it accepts the old calls and returns the expected results. This is probably an over-simplification, though.
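    For instance, a made-up library call sum_array() could keep its old serial-looking signature while the implementation quietly splits the work across two threads (POSIX-threads sketch):

    #include <pthread.h>
    #include <stdio.h>

    struct half { const int *p; int n; long sum; };

    static void *sum_half(void *arg)
    {
        struct half *h = arg;
        h->sum = 0;
        for (int i = 0; i < h->n; i++)
            h->sum += h->p[i];
        return NULL;
    }

    /* Same interface a serial library would expose; the threading is hidden. */
    long sum_array(const int *p, int n)
    {
        struct half lo = { p, n / 2, 0 }, hi = { p + n / 2, n - n / 2, 0 };
        pthread_t t;

        pthread_create(&t, NULL, sum_half, &lo);    /* first half elsewhere */
        sum_half(&hi);                              /* second half right here */
        pthread_join(t, NULL);
        return lo.sum + hi.sum;
    }

    int main(void)
    {
        int data[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        printf("%ld\n", sum_array(data, 8));        /* 36, same as serial */
        return 0;
    }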
  • by Pojut ( 1027544 ) on Tuesday January 30, 2007 @01:18PM (#17815630) Homepage
    Well, programming languages come and go... of course, some of the "classics" are still in limited use (COBOL, Pascal, C), but for the most part programming languages go the way of the dodo eventually.

    I would imagine that if these new multi-multi-core processors are released into the wild in large numbers, new programming languages will be developed that let things be done more efficiently and more easily. Or perhaps a hybrid language: one half for writing the processes that run on individual cores, while the other half acts as a "hub". Or better yet, say you have 16 cores and one "central" core that acts like a post office: it doesn't actually create any of the mail, it just makes sure it gets delivered to the correct place (sketched below).

    There is no way that the hardware would advance without the programming ability to back it up.
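    That "post office" core is essentially a dispatcher in front of a work queue: the workers never talk to each other, they just collect mail from the hub. A bare-bones POSIX-threads sketch (fixed-size mailbox, purely illustrative):

    #include <pthread.h>
    #include <stdio.h>

    #define WORKERS 4
    #define JOBS    16

    static int mailbox[JOBS];
    static int head, tail;
    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;

    /* Workers never share state with each other; they only take mail. */
    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (head == tail)
                pthread_cond_wait(&ready, &lock);
            int job = mailbox[head++];
            pthread_mutex_unlock(&lock);
            if (job < 0)
                return NULL;                /* shutdown note from the hub */
            printf("worker handled job %d\n", job);
        }
    }

    int main(void)
    {
        pthread_t pool[WORKERS];

        for (int i = 0; i < WORKERS; i++)
            pthread_create(&pool[i], NULL, worker, NULL);

        /* The "post office": deliver the mail, then one shutdown note each. */
        pthread_mutex_lock(&lock);
        for (int j = 0; j < JOBS - WORKERS; j++)
            mailbox[tail++] = j;
        for (int i = 0; i < WORKERS; i++)
            mailbox[tail++] = -1;
        pthread_cond_broadcast(&ready);
        pthread_mutex_unlock(&lock);

        for (int i = 0; i < WORKERS; i++)
            pthread_join(pool[i], NULL);
        return 0;
    }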
  • by kscguru ( 551278 ) on Tuesday January 30, 2007 @02:07PM (#17816588)
    Instead of an IBM executive, how about David Patterson? Hint: he wrote The Book on computer architecture.

    Berkeley tech report (inc. Patterson as author) [berkeley.edu]

    Brief summary (I heard the same talk when he spoke at PARC): computational problems are divisible into thirteen categories that range from matrix multiplication to finite-state automata. Most existing research (academia and industry) into parallelism focuses on the roughly seven categories that are most easily parallelized - think supercomputer clusters. Most apps that you or I use fall into the graph-traversal or finite-state categories (think compilers, apps with an event loop, etc.), which have seen essentially no such research. Patterson even suspects that finite state machines are inherently serial and CANNOT be parallelized.

    So ... the apps that we already use can't really get faster on parallel cores without major, fundamental advances in computer science that don't seem to be approaching. Which means we'll be using our current apps for a LONG time.

    Additional note: IBM (and the other chip manufacturers) have a vested interest in telling everyone that parallelism is the future. They can't make faster chips anymore, so they can only compete on sheer number of cores.

  • Perl 6's chance?.. (Score:2, Interesting)

    by Noiser ( 18478 ) on Tuesday January 30, 2007 @02:13PM (#17816650) Homepage
    Perl 6 (as it is designed) introduces a new concept of "junctions", which are a bit like arrays, but can be used in clever ways. One useful way is:

    if ($fruit eq ("apple"|"orange"|"pear")) {
            print "sweet!";
    }

    But another intriguing way of using the junction will be parallel loops -

    for ("apple"|"orange"|"pear") {
            eat($_);
    } ... and instead of doing three consecutive eat's -

    eat("apple");
    eat("orange");
    eat("pear");

    the interpreter will run three threads, each running the eat() function.

    As with the whole of Perl 6, this design is not finalized. Maybe it won't be like that at all. And of course the threading is all unsafe and stuff.

    But getting threads from a vanilla for loop, instead of setting up threads with clever functions, modules, etc., is something new. If it happens and unsuspecting programmers just use it, hey - that'd be something special.
  • by Lazerf4rt ( 969888 ) on Tuesday January 30, 2007 @02:32PM (#17816978)

    TFA is nothing more than a press release announcing the plan to develop a supercomputer in Los Alamos, New Mexico. Yeah, it'll be made by IBM and based on Cell (and Opteron). In an attempt to make it more interesting, the article seems to struggle to make another point... and the point is difficult to discern from its river of vague generalities, lame statistics and other banalities. Best I can fathom is that the writer (IBM's chief architect) simply hopes that new, multicore-centered development tools will somehow emerge as a result of the computer's development:

    We are inviting industry partners to define the components (APIs, tools, etc.) of the programming methodology so that the multicore systems are accessible to those partners as well.

    Fair enough. The Slashdot summary is a horrible spin on TFA, and the "-dept." tagline attached to it is just embarrassing. Too bad there wasn't more information content here, because multicore processing is indeed the future, and this could have been a much more interesting read.

  • by maxwell demon ( 590494 ) on Tuesday January 30, 2007 @03:57PM (#17818272) Journal
    But in what way are threads that don't share anything different from several processes that communicate (e.g. through sockets or through pipes)?
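    For comparison, the processes-with-pipes version of share-nothing message passing looks like this (a minimal sketch; one practical difference is that the OS enforces the isolation that share-nothing threads only promise by convention):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                       /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {              /* child shares nothing but the pipe */
            char buf[32];
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            _exit(0);
        }
        write(fd[1], "hello", 5);       /* parent: message passing, no shared state */
        wait(NULL);
        return 0;
    }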
  • by Anonymous Brave Guy ( 457657 ) on Tuesday January 30, 2007 @05:59PM (#17820148)

    I commend you for broadening your programming horizons by learning Haskell. I also caution against drinking the kool-aid too much. A lot of the support for Haskell is in the academic community, and their priorities are very different to those of industrial software developers.

    When you start to combine Haskell with real world problems, a lot of that natural elegance starts to look very artificial. Then you introduce concepts that mimic imperative programming with shared state (albeit based on more mathematically rigorous underlying models) to overcome that. At that point, you're in the same bind as many of the newer languages today that are basically imperative in nature but starting to borrow from functional programming: you can do lots of the neat tricks, but when you dig deeper the expressive power isn't as much as you hoped.

    In other words, purely functional programming is very limited until you build in support for real world issues like I/O using monads or whatever. Once you do that, you're not really working with purely functional code (in the sense of having no side-effects) anymore, but rather with code that represents and reasons with side-effects explicitly. Whether the best way to write such code is in a declarative, functional style, or an imperative one, or something else, is a question yet to be answered, because right now only the functional programming language community is making a serious effort to do it.
