Supercomputing

Is Parallelism the New New Thing? 174

astwon sends us to a blog post by parallel computing pioneer Bill McColl speculating that, with the cooling of Web 2.0, parallelism may be a hot new area for entrepreneurs and investors. (Take with the requisite grains of salt, as he is the founder of a Silicon Valley company in this area.) McColl suggests a few other upcoming "new things," such as SaaS as an appliance and massive memory systems. Worth a read.
This discussion has been archived. No new comments can be posted.

  • by geekoid ( 135745 ) <dadinportland&yahoo,com> on Friday March 28, 2008 @10:29AM (#22893412) Homepage Journal
    32G chips and put them in 4 slots on my 64 Bit PC before talking about 'massive memory'
  • 1% of programmers (Score:5, Insightful)

    by LotsOfPhil ( 982823 ) on Friday March 28, 2008 @10:34AM (#22893454)

    Only around 1% of the world's software developers have any experience of parallel programming.

    This seems far, far too low. Admittedly I work in a place that does "parallel programming," but it still seems awfully low.
  • by nguy ( 1207026 ) on Friday March 28, 2008 @10:41AM (#22893528)
    the guy has a "startup in stealth mode" called parallel computing. Of course he wants to generate buzz.

    Decade after decade, people keep trying to sell silver bullets for parallel computing: the perfect language, the perfect network, the perfect os, etc. Nothing ever wins big. Instead, there is a diversity of solutions for a diversity of problems, and progress is slow but steady.
  • No (Score:1, Insightful)

    by Anonymous Coward on Friday March 28, 2008 @10:41AM (#22893532)
    Like most topics in computer science, it's a "New Old Thing".
  • by Alzheimers ( 467217 ) on Friday March 28, 2008 @10:42AM (#22893536)
A guy who's made it his life's work to study Parallel Computing has come forth to say that he thinks Parallelism is the next big thing?

    Shock! And Awe!
  • by Rosco P. Coltrane ( 209368 ) on Friday March 28, 2008 @10:42AM (#22893540)
Having been in the computer industry for too long, I reckon the "next hot thing" usually means the "latest fad" that the entrepreneurs involved hope will turn into the "next get-rich-quick scheme".

Because really, does anybody believe Web-Two-Oh was anything but the regular web's natural evolution with a fancy name tacked on?
  • by Midnight Thunder ( 17205 ) on Friday March 28, 2008 @10:49AM (#22893620) Homepage Journal
Now that we are seeing more and more multi-core CPUs and multi-CPU computers, I can definitely see parallelism becoming more important, for tasks that can be handled this way. You have to remember that in certain cases trying to parallelise a task can end up being less efficient, so what you parallelise will depend on the task at hand. Things like games, media applications and scientific applications are likely candidates, since they are either doing lots of different things at once or have tasks that can be split up into smaller units that don't depend on each other's outcome. Server applications can be, to a certain extent, depending on whether they are contending for the same resources or not (an FTP server accessing the disk, vs. a time server which does no file I/O).

One thing that should also be noted is that in certain cases you will need to accept increased memory usage, since you want to avoid tasks locking on resources that they don't really need to synchronise until the end of the work unit. In such cases it may be cheaper to duplicate resources, do the work, and then resynchronise at the end (a small sketch of this follows at the end of this comment). Like everything, it depends on the size and duration of the work unit.

Even if your application is not doing enough to warrant running its tasks in parallel, the operating system could benefit, so that applications don't suffer from sharing resources that don't need to be shared.
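    A minimal sketch of that duplicate-then-resynchronise pattern, assuming a toy summation workload and plain java.util.concurrent (class and method names here are illustrative, not from any particular library):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class DuplicateThenMerge {
            // Each worker sums its own slice into a private accumulator (no locks),
            // and the partial results are only combined ("resynchronised") once at the end.
            public static long parallelSum(int[] data, int workers) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(workers);
                List<Future<Long>> partials = new ArrayList<>();
                int chunk = (data.length + workers - 1) / workers;
                for (int w = 0; w < workers; w++) {
                    final int start = w * chunk;
                    final int end = Math.min(data.length, start + chunk);
                    partials.add(pool.submit(() -> {
                        long local = 0;                     // duplicated, thread-private state
                        for (int i = start; i < end; i++) local += data[i];
                        return local;
                    }));
                }
                long total = 0;
                for (Future<Long> f : partials) total += f.get();   // resynchronise here
                pool.shutdown();
                return total;
            }

            public static void main(String[] args) throws Exception {
                int[] data = new int[1_000_000];
                java.util.Arrays.fill(data, 1);
                System.out.println(parallelSum(data, 4));   // prints 1000000
            }
        }

    The extra memory cost is one accumulator per worker, which is usually far cheaper than having every iteration contend for a single shared, locked total.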
  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Friday March 28, 2008 @10:50AM (#22893628) Journal
    Oh, now that's not entirely fair.

    Web 2.0 was a single name for an amorphous collection of technologies and philosophies. It was even worse than AJAX.

    Parallelism is a pretty simple, well-defined problem, and an old one. That doesn't mean it can't turn into a buzzword, but I'm not convinced Web 2.0 can be anything but a buzzword.
  • by postbigbang ( 761081 ) on Friday March 28, 2008 @10:56AM (#22893712)
    Your number is a bit insulting.

Consider that parallel computing means keeping extra monolithic cores busy. There are a number of programmers who need the discipline to know how to spawn, use, and tear down threads to keep them busy. But there are a helluva lot of them who plainly don't need to know. What we lack are reasonable compilers that allow the hardware layer to be sufficiently abstracted so that code can adapt to the hardware infrastructure appropriately. If that doesn't happen, then code becomes machine-specific rather than task-specific/fulfilling. Once an app is written, it should be able to take advantage of parallelism, and the plumbing should take care of the substrate, be that substrate a core-duo, or 64 cores and more, perhaps on separate systems with the obvious latencies that distance might inject.

Those layers are only nominally addressed in operating systems, which should be the core arbiter of how parallelism is manifested to an application instance. It's up to the kernel makers to figure out how to take advantage of it and make that advantage useful to application writers, who should be at least knowledgeable about how to twig those features in the operating system. But app writers shouldn't have to know all of the underlying differences to write flexible code.
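    Today the closest approximation of that plumbing is fairly modest: code can at least ask the runtime what the substrate looks like instead of hard-coding it. A rough sketch (illustrative names, standard java.util.concurrent only):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class SubstrateAgnostic {
            public static void main(String[] args) {
                // Size the worker pool to whatever hardware we happen to be running on,
                // so the same code fans out across 2 cores or 64 without modification.
                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores);
                for (int i = 0; i < cores; i++) {
                    final int id = i;
                    pool.execute(() -> System.out.println("worker " + id + " running"));
                }
                pool.shutdown();
            }
        }

    It's still a long way from a compiler or OS that handles the substrate transparently, but it gestures in that direction.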
  • by sm62704 ( 957197 ) on Friday March 28, 2008 @10:59AM (#22893740) Journal
    Sorry guys, web 2.0 was never cool and never will be.
  • by Otter ( 3800 ) on Friday March 28, 2008 @11:01AM (#22893754) Journal
    Admittedly I work in a place that does "parallel programming," but it still seems awfully low.

    I think your experience is wildly skewed toward the high end of programming skill. The percentage of working programmers who can't iterate over an array is probably in the 15-20% range, even without getting into whether "web programmers" are included in that statistic. I'd be astonished if the number with parallel experience is significantly above 1%.

  • In an ideal world where programmers know how to do the -basics- of their job, software architects would be an obsolete job

    Not really. The structure of a large system still has to be defined by someone. The key difference is that the architect would get a lot more feedback from his team, and could possibly even farm out high-level pieces of the design to be further architected by other developers.

I'm a .NET and Java dev, and those languages, while they could be better, make parallel programming relatively simple (at least, it is definitely within the grasp of a code monkey). Yet I have never, ever, ever seen any async/parallel code in anything but my own code, everywhere I've worked (and I do consulting, so I've worked in a lot of places in a short amount of time).

That's because parallel code is the job of the J2EE or IIS server. Programmers develop modules that are loaded by the app server and run asynchronously. Granted, things are divided along the lines of one connection == one thread (though some app servers use poll/select for more granular control), but that's good enough that systems like Sun's T1 "Niagara" processor can churn through a web load WAY faster than the single-threaded monstrosities we've been using to date.

    And before you try to tell me that "application servers don't count", ask any parallel computing researcher if he thinks the threaded programming model is a good idea. You'll almost always get a resounding, "NO". Most of the parallel computing folks I've spoken with rave on and on about lambda and the inherent parallel nature of lambda functions.

The thing is, once parallelism takes off (if one can reasonably argue that it hasn't already), coding will be more about creating parallelizable modules than about creating threads. The theory will be that the platform running the modules knows more about its resources and how to balance them than the code does. So by exposing areas where parallelism can occur, a language exposes the opportunity to split execution across many processors.

Let's take an ideal example: say you create a raytracing function to cast a ray. That function is inherently parallel in nature. All you need is to map a list of rays to the function and let the platform work out how to balance that function across many processing units (see the sketch at the end of this comment).

    A slightly less ideal (yet still parallelizable) example is collision detection in a video game. Collision detection is a matrix of objects that can interact. In general, that means that you want to test two objects against each other to see if they have collided or not. If they have, trigger an event to update the state of those objects. (e.g. explode, reverse direction, etc.) Once again, you can have a collision function that takes two items and works out if they have collided or not. A very parallelizable situation. Even firing any resulting events can be done in parallel, as long as the platform is careful not to dispatch multiple events on the same object in parallel. (That creates an out-of-order code issue which can be a bit tricky to resolve.)

    Long story short: Parallelism is hard; expect the solution to be as invisible as possible.
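    As a concrete illustration of the ray example, here is a rough sketch with hypothetical types (Ray, Color and castRay are made up for the illustration), using Java parallel streams as a stand-in for "the platform" that balances the work:

        import java.util.List;
        import java.util.stream.Collectors;

        public class RayMapper {

            // Hypothetical stand-ins for a real ray tracer's types.
            record Ray(double dx, double dy, double dz) {}
            record Color(double r, double g, double b) {}

            // A pure function of its input: no shared state, so every ray is independent.
            static Color castRay(Ray ray) {
                double shade = Math.abs(ray.dz());      // toy shading rule
                return new Color(shade, shade, shade);
            }

            static List<Color> render(List<Ray> rays) {
                // "Map the list of rays onto the function" and let the runtime decide
                // how to spread the calls across the available processing units.
                return rays.parallelStream()
                           .map(RayMapper::castRay)
                           .collect(Collectors.toList());
            }
        }

    The collision-detection case works the same way, except the platform also has to be careful not to dispatch two events on the same object concurrently.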
  • by Anne Thwacks ( 531696 ) on Friday March 28, 2008 @11:16AM (#22893942)
I'd believe 1% as having "some experience". Some experience is what you put on your CV when you know what the buzzword means.

    If you ask how many can "regularly achieve significant performance through use of multiple threads", then 0.1% is far too high. If you mean "can exchange data between a userland thread and an ISR in compliance with the needs of reliable parallel execution", then it's a safe bet that less than 0.1% are mentally up to the challenge.

    /. readers are not typical of the programming community. These days people who can drag-and-drop call themselves programmers. People who can spell "l337" are one!

  • What am I missing? (Score:4, Insightful)

    by James Lewis ( 641198 ) on Friday March 28, 2008 @11:31AM (#22894094)
Now that multi-core computers have been out for a while, I keep hearing buzz around the idea of parallel computing, as if it were something new. We've had threads, processes, multi-CPU machines, grid computing, etc., for a long time now. Parallelism has been in use on single-processor machines for a long time. Multi-core machines might make it more attractive to thread certain applications that were traditionally single-threaded, but that's the only major development I can see. The biggest problem in parallel computing is the complexity it adds, so hopefully developments will be made in that area, but it's an area that's been researched for a long time now.
  • by Sancho ( 17056 ) on Friday March 28, 2008 @11:32AM (#22894110) Homepage
    I agree that parallelism is more easily (and transparently) used at the OS level, but that doesn't mean that we don't need to start moving that way in applications, too. As we move towards a point where extracting more speed out of the silicon becomes harder and harder, we're going to see more and more need for parallelism. For a lot of applications, it's going to be irrelevant, but for anything at all CPU intensive (working with video, games, etc.) it's going to eventually be the way of the future.
  • Race conditions (Score:3, Insightful)

    by microbox ( 704317 ) on Friday March 28, 2008 @11:40AM (#22894220)
    Being able to simply say "the order in which these tasks are made doesn't matter" lets you run a lot of tasks in parallel right there.

Mucking around with language design and implementation highlights some of the deep problems that the "parallelize-everything" crowd often don't know about.

In your example, the loop can only be efficiently parallelized if it doesn't have any side effects. If any variables outside the scope of the loop are written to, then they are exposed to race conditions in both the read and the write direction. It's still possible to parallelize the code if the machine code doesn't take advantage of the memory model of your architecture, and no other thread is modifying those variables, and it doesn't matter in which order the variables are modified, and modifications are atomic. As an implementation detail, memory reads and writes will be significantly slower, and synchronization of atomic operations is also not without cost. So much so, in fact, that you'd probably be better off just running the loop on a single thread. Your code will be more predictable, regardless.

At the moment, it seems that effective parallelization requires some planning and effort on the part of the programmer. The bugs can be extraordinarily subtle, impossible to debug and difficult to reason about. Expect single-threaded programming models to be around for a long time.
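    To make the side-effect point concrete, a small sketch (toy workload; nothing here comes from the code being discussed): the first loop writes to a variable outside its own scope and is racy, while the second has no side effects, so the runtime is free to split it up.

        import java.util.stream.IntStream;

        public class LoopSideEffects {
            static long racyTotal = 0;

            public static void main(String[] args) {
                // Broken: each iteration does a read-modify-write on shared state,
                // so parallel updates get lost and the result changes from run to run.
                IntStream.range(0, 1_000_000).parallel().forEach(i -> racyTotal += i);

                // Safe: no side effects; each thread reduces its own chunk and the
                // partial sums are combined, so ordering no longer matters.
                long total = IntStream.range(0, 1_000_000).parallel().asLongStream().sum();

                System.out.println("racy    = " + racyTotal);   // usually wrong
                System.out.println("correct = " + total);       // always 499999500000
            }
        }

    The racy version will often even look right on a lightly loaded machine, which is exactly the kind of subtle, hard-to-reproduce bug described above.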
  • by MOBE2001 ( 263700 ) on Friday March 28, 2008 @11:54AM (#22894378) Homepage Journal
    Needless to say I was a bit early on that prediction.

    The reason is that all academic researchers jumped on the multithreading bandwagon as the basis for parallel computing. Unfortunately for them, they could never get it to work. They've been at it for over twenty years and they still can't make it work. Twenty years is an eternity in this business. You would think that after all this time, it would have occurred to at least one of those smart scientists that maybe, just maybe, multithreading is not the answer to parallel computing. Nope: they're still trying to fit that square peg into the round hole.

    Both AMD and Intel have invested heavily into the multithreading model. Big mistake. Multiple billion dollar mistake. To find out why threads are not part of the future of parallel computing read Nightmare on Core Street [blogspot.com]. It's time for the computer industry to wake up and realize that the analytical engine is long gone. This is the 21st century. It's time to move on and change to a new model of computing.
  • Re:About time (Score:4, Insightful)

    by jc42 ( 318812 ) on Friday March 28, 2008 @01:53PM (#22896038) Homepage Journal
When I was in graduate school in the mid '90s, I thought Parallelism would be the next big thing.

    When I was in grad school back in the 1970s, people thought parallelism would be the next big thing, and it had some interesting technical challenges, so I got into it as much as was possible back then. Then I got out into the Real World [TM], where such ideas just got blank looks and "Let's move on" replies.

Somewhat later, in the 1980s, I worked on projects at several companies that thought parallelism was the next big thing. That time around, I got my hands on a number of machines with hundreds of processors and gigabytes (Wow!) of memory, so I could actually try out some of the ideas from the previous decade. The main things I learned were that 1) many of the concepts were viable, and 2) debugging in an environment where nothing is reproducible is hard. And I moved on, mostly to networking projects where you could actually do loosely-coupled multiprocessing (though management still gave you blank looks if you started talking in such terms).

    Now we're getting personal computers with more than one processor. It's been a bit of a wait, but we even have management saying it's the "new^N thing". And debugging parallel code is still hard.

    I'll predict that 1) We'll see a lot of commercial parallelized apps now, and 2) those apps will never be debugged, giving us flakiness that outshines the worst of what Microsoft sold back in the 1980s and 1990s. We'll still have the rush to market; developers will still be held to nonsensical schedules; debugging will still be treated as an "after-market" service; and we developers will still be looking for debugging tools that work for more than toy examples (and work with a customer's app that has been running for months when a show-stopper bug pops up).

There's a reason that, despite the existence of multi-processor machines for several decades, we still have very little truly parallel code that works. Debugging the stuff is hard, mostly because the bugs can rarely be reproduced.

    (Of course, this won't prevent the flood of magical snake-oil tools that promise to solve the problem. There's a lot of money to be made there. ;-)

  • by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Friday March 28, 2008 @02:01PM (#22896154) Homepage

    Just because the majority of developers can't doesn't mean it has "failed"
When you want to get parallelism into the mainstream, that is pretty much the exact definition of "fail".
  • Re:TRIPS (Score:3, Insightful)

    by olddotter ( 638430 ) on Friday March 28, 2008 @02:21PM (#22896404) Homepage
    Unless we are going to multi-cores because increasing clock rate is getting hard to do.

    I don't think "supercomputer" approaches (which I admit I think of as the same as "massively parallel scientific computing") are applicable to most applications today. Exceptions might be video, picture, and audio editing; and maybe certain types of database operations. If you have other examples I'd be most interested in hearing about them.
  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Friday March 28, 2008 @08:27PM (#22901198) Homepage Journal
    There are hundreds of libraries, yes. That's part of the problem. Too many different ways of slicing the same pie, leading to solutions that are not efficient when trying to get them to play nice with each other.

    The next problem is that parallelism is not, as a rule, CPU-bound but network-bound. All the libraries in the world won't work when the network clogs and chokes.

    The third problem is that coders are taught serial methods. Parallel thinking is very different from serial thinking. You run into problems that do not exist in the serial world, even on a timeslicing system like Linux. True parallelism, like true clockless computing, is a nightmare to do well. You can't just shove another library in and hope things'll work.

    The fourth problem is the level of connectedness. Globus is a great toolkit for some things, but you wouldn't use it for programming a vector computer or - most likely - even a Beowulf cluster. It's a gridding solution and a damn good one, but grids are all it will do well. On the flip-side, solutions like bproc and MOSIX are superb mechanisms for optimally using a fairly tight cluster, but you'd never sanely use them on a grid. The latencies would make the very features that make those solutions useful in a cluster useless on a WAN-based grid.

    I'm not sure I'm keen on Java on any parallel solution other than gridding. It's too slow, its threading model is still in its infancy, and the sandboxing makes RDMA an absolute nightmare to do safely. In fact, the very definition of sandboxing is that external entities can't go around poking bits of data into memory, which is the entire essence of RDMA - CPUless networking.

    Regardless, there are some things that C++ and Java simply cannot do well that other, parallel-specific languages like Pi-Occam can do with extreme ease and safety. It is possible to prove an Occam program is safe. You cannot do likewise with a C++ or Java program.
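    For readers who haven't seen the Occam style: it is built around synchronous channels rather than shared memory. A very loose Java approximation of a single channel (illustrative only; none of Occam's compile-time safety proofs carry over to this sketch):

        import java.util.concurrent.SynchronousQueue;

        public class ChannelSketch {
            public static void main(String[] args) throws InterruptedException {
                // A rendezvous channel: sender and receiver block until both are ready,
                // roughly like an Occam channel (c ! x on one end, c ? y on the other).
                SynchronousQueue<Integer> channel = new SynchronousQueue<>();

                Thread producer = new Thread(() -> {
                    try {
                        for (int i = 0; i < 5; i++) channel.put(i);                 // c ! i
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });

                Thread consumer = new Thread(() -> {
                    try {
                        for (int i = 0; i < 5; i++)
                            System.out.println("received " + channel.take());       // c ? v
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });

                producer.start();
                consumer.start();
                producer.join();
                consumer.join();
            }
        }

    The Java version conveys the communication style, but nothing stops another thread from sharing memory behind the channel's back, which is precisely why the safety argument doesn't transfer.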

Parallelism isn't just about more threads on one CPU. In a totally generalized parallel scenario, there may be any number of threads - a few tens of thousands would not be unusual - running on systems that may be SMP, multi-core, multi-threaded, vectored, clockless, or any combination of the above, where those systems may be on a tightly-coupled or loosely-coupled cluster, and where the cluster may be homogeneous or heterogeneous, SSI or multi-imaged, where memory may be local, NUMA or distributed, and where these systems/clusters may be gridded over wide-area networks that may or may not be reliable or operational at any given time, and where threads, processes and entire operating systems may migrate from system to system without user intervention or awareness on the part of the application.

The number of true parallel experts in the world is probably less than a dozen. No, I'm not one of them. I'm good, I understand the problem space better than the average coder, but I've talked to some of the experts out there and they're as far beyond any traditional programmer as a traditional programmer is beyond a chipmunk. A network engineer might consider themselves OK if they can set up OSPF optimally across a traditional star network of star networks. Any traditional routing protocol over a mesh without getting flaps, while maintaining a reasonable level of fault tolerance, would be considered tough. A butterfly network, a toroidal network or a hypercube would leave said network engineer a gibbering wreck. Modern supercomputers do not take up buildings. Modern supercomputers take up a few rooms. The interconnects take up entire buildings. And the air conditioning on top systems can be measured in football stadia.

    OpenMOSIX is largely dead, because it was impossible to reconcile those who wanted load-balancing with those who wanted HPC. It's not that they can't be reconciled in theory, it's that the mindsets are too different to cram into one brain.

    If one solution could solve parallelism, the Transputer would be the only processor in use today and Intel would
