IBM's Chief Architect Says Software is at Dead End
j2xs writes "In an InformationWeek article entitled 'Where's the Software to Catch Up to Multicore Computing?' the Chief Architect at IBM gives some fairly compelling reasons why your favorite software will soon be rendered deadly slow because of new hardware architectures. Software, she says, just doesn't understand how to do work in parallel to take advantage of 16, 64, 128 cores on new processors. Intel just stated in an SD Times article that 100% of its server processors will be multicore by the end of 2007. We will never, ever return to single processor computers. Architect Catherine Crawford goes on to discuss some of the ways developers can harness the 'tiny supercomputers' we'll all have soon, and some of the applications we can apply this brute force to."
You hit the nail right on the head (Score:5, Interesting)
Workstation computing will suffer some until software vendors catch up, but this is already happening (e.g. most CAD, Animation, Video Processing are starting to come out with multi-core optimized software). Sure, some apps will continue to be single-threaded, but eventually, who would buy them? Software vendors aren't dumb.
Games will probably speed up significantly as well. Imagine the possibilities of having a game engine where each AI character utilizes 100% of a single core. Game designers aren't going to sit around designing games that run on single-core engines; they always push the boundaries and will continue to do so.
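As a concrete sketch of the "one AI per core" idea (all names here are hypothetical, not from any real engine), each character's planning step is independent of the others, so a pool can fan them out across workers:

```python
from concurrent.futures import ThreadPoolExecutor

class AICharacter:
    """Hypothetical AI agent whose think() step is independent per character."""
    def __init__(self, name):
        self.name = name

    def think(self, world_state):
        # Stand-in for pathfinding/planning work; no shared mutable state,
        # so every character can be updated concurrently.
        return f"{self.name} moves toward {world_state['target']}"

def update_all(characters, world_state, workers=4):
    # Each think() call goes to its own worker; with native threads or
    # processes these could occupy separate cores.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda c: c.think(world_state), characters))

plans = update_all([AICharacter(n) for n in ("orc", "elf")], {"target": "castle"})
```

A real engine would use OS threads or processes pinned to cores rather than a Python thread pool, but the shape of the work distribution is the same.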
Stephen Wolfram has a solution (Score:5, Interesting)
Re:Compilers need to be better. (Score:3, Interesting)
It is possible to parallelize the instructions above (a simplified example, of course). It will require more memory, but we can do the following instead: if the first addition runs on a different core than the second, we get a net increase in performance even though we used more computational cycles to compute the results. There's just one catch. CPU designers have known about these micro-optimizations for decades, and have been designing microprocessors with a capability called "superscalar execution" for almost as long. Superscalar designs make use of CPU execution units not currently utilized by other instructions in order to offer a simplistic form of parallel execution within a single core. In this way, the CPU can chew on more instructions in parallel than it could if it fully serialized execution, which makes full multithreading mostly redundant for these situations.
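The parent's code isn't quoted here, so as a stand-in, here is the shape of the argument in Python: two additions with no shared operands can run at the same time (whether by superscalar hardware or by explicit threads), while a chained pair stays serial no matter what.

```python
from concurrent.futures import ThreadPoolExecutor

def add(x, y):
    return x + y

# Independent additions: neither result feeds the other, so they may run
# on separate execution units (or cores) simultaneously.
with ThreadPoolExecutor(max_workers=2) as pool:
    fa = pool.submit(add, 1, 2)
    fb = pool.submit(add, 3, 4)
    total = fa.result() + fb.result()   # the join is the only serial point

# Dependent additions: d needs c first, so this chain remains serial
# no matter how many cores exist.
c = add(1, 2)
d = add(c, 4)
```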
Re:Yeah, if you only run one program at a time.. (Score:4, Interesting)
Sure it'll add overhead, but the number of cores we're working with at a time is going to keep changing, and the only way to avoid writing immediately obsolete code is to have an intermediate control layer that is smart enough to translate.
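A minimal sketch of such a layer (hypothetical function names, Python's standard library): discover the core count at runtime and size the worker pool accordingly, so the same program scales whether tomorrow's machine has 2 cores or 128.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def run_scaled(task, items):
    """Dispatch `task` over `items` with one worker per available core.

    The core count is looked up at runtime rather than hard-coded, so
    this code is not obsolete on a machine with 64 or 128 cores.
    """
    workers = os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(task, items))

squares = run_scaled(lambda n: n * n, range(8))
```

A production-grade layer would also account for work-item granularity and memory locality; this only shows the "don't bake in the core count" principle.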
Re:Yeah, if you only run one program at a time.. (Score:3, Interesting)
Re:You hit the nail right on the head (Score:5, Interesting)
The real problem is that procedural languages are fugly for working on this stuff. Even "modern" commercial languages like Java/.NET are still somewhat cumbersome in the world of threading, compared to languages where the threading metaphors sit deeper in the logic (or more mutable languages, like Lisp, where creating new core metaphors is trivial).
Can't the OS just bump apps to their own procs? (Score:4, Interesting)
Re:But Developers do? (Score:4, Interesting)
Suddenly the old "everyone's moving to thin-clients/mainframes/dumb terminals/etc" story (as recently as today: http://it.slashdot.org/article.pl?sid=07/01/30/13
This transition to multi-core is what we need because, as far as I know, actually getting individual programs to run nicely on multiple cores is (with notable exceptions) something we're not really ready for yet in terms of development.
-stormin
Re:Can't the OS just bump apps to their own procs? (Score:5, Interesting)
A decade ago I had a dual PP200 that was one of the nicest machines I had ever run. I ran some unruly apps at the time, and having an extra idle processor to cut those processes off at the knees without rebooting was a nice benefit. Nothing was multi-threaded back then, but having two processors was still valuable.
Re:Yeah, if you only run one program at a time.. (Score:4, Interesting)
Couldn't the programs inherit the benefits of a multi-core system if the APIs they call are written to distribute the work to the cores? I know this probably isn't optimal but there must be some benefits from this.
I could take an old library (QuickDraw for example) and totally rewrite it to take advantage of a new architecture as long as it accepts the old calls and returns the expected results. This is probably an over-simplification though.
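A toy version of that idea (hypothetical names; this is not QuickDraw's actual API): keep the old signature and semantics, but spread the work across a pool behind it, so old callers benefit without changing a line.

```python
from concurrent.futures import ThreadPoolExecutor

def fill(rect):
    # Stand-in for the per-rectangle rasterization work.
    w, h = rect
    return w * h

def blit_rects(rects):
    """Old, serial library entry point: rectangles processed one by one."""
    return [fill(r) for r in rects]

def blit_rects_parallel(rects):
    """Same calls in, same results out - but the per-rectangle work is
    distributed across a worker pool."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fill, rects))

# Drop-in replacement: identical results for identical inputs.
assert blit_rects([(2, 3), (4, 5)]) == blit_rects_parallel([(2, 3), (4, 5)])
```

The catch the parent hints at is real: this only works when the library's operations are independent of each other, and when the old API doesn't promise strict in-order side effects.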
Re:You hit the nail right on the head (Score:3, Interesting)
I would imagine that if these new multi-multi-core procs are released into the wild in mass numbers, new programming languages will be developed that let things be done more efficiently and more easily. Or perhaps a hybrid language: one half of the language is for writing processes for individual cores, while the other half acts as a "hub". Or even better, say you have 16 cores, plus one "central" core that acts like a post office: it doesn't actually create any of the mail, it just makes sure it gets delivered to the correct place.
There is no way that the hardware would advance without the programming ability to back it up.
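The "post office core" above can be sketched with a plain work queue (all names hypothetical): one dispatcher hands letters out, N workers do the actual delivery, and the dispatcher itself produces nothing.

```python
import queue
import threading

def post_office(jobs, workers=4):
    """One central inbox, N worker 'cores'. The dispatcher only routes
    work; it never creates any of the mail itself."""
    inbox = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            job = inbox.get()
            if job is None:          # poison pill: no more mail today
                break
            with lock:
                results.append(job())

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for job in jobs:
        inbox.put(job)               # deliver each letter to whoever is free
    for _ in threads:
        inbox.put(None)              # one pill per worker
    for t in threads:
        t.join()
    return sorted(results)           # completion order is nondeterministic

totals = post_office([lambda i=i: i * 10 for i in range(5)])
```

This is essentially the work-stealing/dispatcher pattern that runtimes for many-core chips (Cell included) build in at a much lower level.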
How about the David Patterson perspective? (Score:4, Interesting)
Berkeley tech report (inc. Patterson as author) [berkeley.edu]
Brief summary (I heard the same talk when he spoke at PARC): computational problems are divisible into thirteen categories that range from matrix multiplication to finite state automata. Most existing research (academic and industrial) into parallelism tends to focus on the seven or so categories that are most easily parallelized - think supercomputer clusters. Most apps that you or I use fall into the graph-traversal or finite-state categories (think compilers, apps with an event loop, etc.), areas in which there is essentially no research. Patterson even suspects that finite state machines are inherently serial and CANNOT be parallelized.
So ... the apps that we already use can't really get faster on parallel cores without major, fundamental advances in computer science that don't seem to be approaching. Which means we'll be using our current apps for a LONG time.
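Patterson's suspicion is easy to illustrate with a toy machine: each transition consumes the state the previous symbol produced, so the loop is one long dependency chain with no obvious place to split the input between cores.

```python
def run_fsm(transitions, start, inputs):
    """Tiny FSM driver. The state after symbol k is the *input* for
    symbol k+1 - exactly the serial chain that resists parallelization."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

# Example machine: tracks whether a bit string has an odd number of 1s.
parity = {("even", "0"): "even", ("even", "1"): "odd",
          ("odd", "0"): "odd", ("odd", "1"): "even"}
final = run_fsm(parity, "even", "1101")
```

(This sketches why the naive loop is serial; whether cleverer formulations can recover parallelism is exactly the open research question the talk pointed at.)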
Additional note: IBM (and other chip manufacturers) have a vested interest in telling everyone that parallelism is the future. They can't make faster chips anymore, they can only compete on sheer number of cores.
Perl 6's chance?.. (Score:2, Interesting)
Perl 6 has junctions - several values superimposed in a single scalar:
if ($fruit eq ("apple"|"orange"|"pear")) {
print "sweet!";
}
But another intriguing use of the junction will be parallel loops -
for ("apple"|"orange"|"pear") {
    eat($_);
}
which behaves like
eat("apple");
eat("orange");
eat("pear");
except that the interpreter may run 3 threads, each calling the eat() function.
As with the whole of Perl 6, this design is not finalized. Maybe it won't be like that at all. And of course the threading is all unsafe and so on.
But having threads in a vanilla for loop, instead of setting up threads with clever functions, modules, etc., is something new. If it happens and unsuspecting programmers just use it, hey - that'd be something special.
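For readers without a Perl 6 interpreter handy, roughly the same "runtime fans the loop out for you" effect can be sketched in Python, with a thread pool standing in for the interpreter's autothreading:

```python
from concurrent.futures import ThreadPoolExecutor

eaten = []

def eat(fruit):
    # list.append is atomic in CPython, so this is safe from workers.
    eaten.append(f"ate {fruit}")
    return fruit

# The programmer writes what looks like a plain loop over values; the
# runtime decides to spread the body across workers - the spirit of the
# junction example above.
with ThreadPoolExecutor(max_workers=3) as pool:
    done = list(pool.map(eat, ["apple", "orange", "pear"]))
```

The difference, of course, is that here the parallelism is opted into explicitly, whereas the junction proposal would make it implicit in the language.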
Re:Clearing things up a bit (Score:4, Interesting)
TFA is nothing more than a press release announcing the plan to develop a supercomputer in Los Alamos, New Mexico. Yeah, it'll be made by IBM and based on Cell (and Opteron). In an attempt to make it more interesting, the article seems to struggle to make another point... and the point is difficult to discern from its river of vague generalities, lame statistics and other banalities. Best I can fathom is that the writer (IBM's chief architect) simply hopes that new, multicore-centered development tools will somehow emerge as a result of the computer's development.
Fair enough. The Slashdot summary is a horrible spin on TFA, and the "-dept." tagline attached to it is just embarrassing. Too bad there wasn't more information content here, because multicore processing is indeed the future, and this could have been a much more interesting read.
Re:Yeah, if you only run one program at a time.. (Score:3, Interesting)
Re:Purely Functional Programming... (Score:3, Interesting)
I commend you for broadening your programming horizons by learning Haskell. I also caution against drinking the kool-aid too much. A lot of the support for Haskell is in the academic community, and their priorities are very different from those of industrial software developers.
When you start to combine Haskell with real world problems, a lot of that natural elegance starts to look very artificial. Then you introduce concepts that mimic imperative programming with shared state (albeit based on more mathematically rigorous underlying models) to overcome that. At that point, you're in the same bind as many of the newer languages today that are basically imperative in nature but starting to borrow from functional programming: you can do lots of the neat tricks, but when you dig deeper the expressive power isn't as much as you hoped.
In other words, purely functional programming is very limited until you build in support for real world issues like I/O using monads or whatever. Once you do that, you're not really working with purely functional code (in the sense of having no side-effects) anymore, but rather with code that represents and reasons with side-effects explicitly. Whether the best way to write such code is in a declarative, functional style, or an imperative one, or something else, is a question yet to be answered, because right now only the functional programming language community is making a serious effort to do it.
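A rough Python analogue of "representing side-effects explicitly" (a sketch of the idea only, not how Haskell's IO monad actually works): pure code builds *descriptions* of effects as ordinary values, and a single impure interpreter at the edge performs them.

```python
def pure_greet(name):
    """Pure: returns a description of an effect. Nothing happens yet,
    so this function is trivially safe to call from any thread."""
    return [("print", f"Hello, {name}!")]

def run_effects(effects, out):
    """The impure edge: the one place that actually performs effects.
    `out` stands in for the real world (a terminal, a file, ...)."""
    for kind, payload in effects:
        if kind == "print":
            out.append(payload)

log = []
run_effects(pure_greet("world"), log)
```

The payoff for parallelism is the one the parent describes: since the pure part has no side-effects, a runtime is free to evaluate it in any order, on any core; only the interpreter at the edge has to be sequenced.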