Supercomputing

Is Parallelism the New New Thing?

astwon sends us to a blog post by parallel computing pioneer Bill McColl speculating that, with the cooling of Web 2.0, parallelism may be a hot new area for entrepreneurs and investors. (Take with requisite salt grains as he is the founder of a Silicon Valley company in this area.) McColl suggests a few other upcoming "new things," such as SaaS as an appliance and massive memory systems. Worth a read.
  • by olddotter ( 638430 ) on Friday March 28, 2008 @09:28AM (#22893390) Homepage
    When I was in graduate school in the mid '90's I thought Parallelism would be the next big thing. Needless to say I was a bit early on that prediction. Finally maybe those graduate classes and grant work will pay off. :-)
    • Needless to say I was a bit early on that prediction.

      The reason is that all academic researchers jumped on the multithreading bandwagon as the basis for parallel computing. Unfortunately for them, they could never get it to work. They've been at it for over twenty years and they still can't make it work. Twenty years is an eternity in this business. You would think that after all this time, it would have occurred to at least one of those smart scientists that maybe, just maybe, multithreading is not the ans
      • Re: (Score:2, Informative)

        by Anonymous Coward
        Unfortunately for them, they could never get it to work.

        Hyperbole much? Parallel systems such as MPI have been the staple of high performance computing since the mid 90's, and there are plenty of developers (including myself) who can write multi-threaded code without breaking into a sweat, and get it right.

        At what point did parallel and concurrent programming "fail"? I really must have missed that memo.
      • "Threads are inherently non-deterministic"

        A direct quote from the article. I guess, in a sense, this is true. It is true that it is impossible to ensure that a lock can be obtained. On a hardware level, no less.

        On the other hand, this statement is a load of bullshit.

      • Multi-process and multithreaded applications abound in the real world. Pull your head from your anus and look around you.

        Now, are they going to make things faster in exactly the same way as clock increases? Hell no.
        Do they take some thought to implement? Hell yes.

        However they are here and have been here for decades.

        This is unless you are referring to a very, very narrow definition of parallel processing that is more applicable in scientific than business or home forums - breaking down set, tricky, mathema
      • by jo42 ( 227475 )

        change to a new model of computing
        Such as ... ?
    • When I was in graduate school in the mid '90's I thought Parallelism would be the next big thing. Needless to say I was a bit early on that prediction.

      Had a company in the early 90s dedicated to heterogeneous parallel computing in what we now call genomics and proteomics. Despite the ongoing boom in DNA sequencing and analysis, it was hard at the time to interest either end-users or (especially) investors in distributed processing. Most worried that it was overkill, or that the computations would somehow be out of their control. How times change...

    • by rbanffy ( 584143 )
      Parallelism is the new new thing since at least around the ILLIAC IV...
    • Re:About time (Score:4, Insightful)

      by jc42 ( 318812 ) on Friday March 28, 2008 @12:53PM (#22896038) Homepage Journal
      When I was in graduate school in the mid '90's I thought Parallelism would be the next big thing.

      When I was in grad school back in the 1970s, people thought parallelism would be the next big thing, and it had some interesting technical challenges, so I got into it as much as was possible back then. Then I got out into the Real World [TM], where such ideas just got blank looks and "Let's move on" replies.

      Somewhat later, in the 1980s, I worked on projects at several companies who thought that parallelism was the next big thing. That time around, I got my hands on a number of machines with hundreds of processors and gigabytes (Wow!) of memory, so I could actually try out some of the ideas from the previous decade. The main things that I learned were that 1) many of the concepts were viable, and 2) debugging in an environment where nothing is reproducible is hard. And I moved on, mostly to networking projects where you could actually do loosely-coupled multiprocessing (though management still gave you blank looks if you started talking in such terms).

      Now we're getting personal computers with more than one processor. It's been a bit of a wait, but we even have management saying it's the "new^N thing". And debugging parallel code is still hard.

      I'll predict that 1) We'll see a lot of commercial parallelized apps now, and 2) those apps will never be debugged, giving us flakiness that outshines the worst of what Microsoft sold back in the 1980s and 1990s. We'll still have the rush to market; developers will still be held to nonsensical schedules; debugging will still be treated as an "after-market" service; and we developers will still be looking for debugging tools that work for more than toy examples (and work with a customer's app that has been running for months when a show-stopper bug pops up).

      There's a reason that, despite the existence of multiprocessor machines for several decades, we still have very little truly parallel code that works. Debugging the stuff is hard, mostly because bugs can rarely be reproduced.

      (Of course, this won't prevent the flood of magical snake-oil tools that promise to solve the problem. There's a lot of money to be made there. ;-)

    • Re: (Score:3, Interesting)

      by ChrisA90278 ( 905188 )
      I was a systems programmer on the CDC 6400. This was at one time the world's fastest supercomputer and was released in 1964 (hence the model number.) I'm not that old. The machine was old when I worked on it. I did patches and custom modifications to the operating system. We worked in assembly language so I did have to know the gory details of the hardware.

      It was a highly parallel machine. At every level. It could execute 10 instructions at once. There was a 10-way path to main memory so we could do 10
  • by geekoid ( 135745 ) <{moc.oohay} {ta} {dnaltropnidad}> on Friday March 28, 2008 @09:29AM (#22893412) Homepage Journal
    32G chips and put them in 4 slots on my 64 Bit PC before talking about 'massive memory'
    • 32G chips and put them in 4 slots on my 64 Bit PC before talking about 'massive memory'

      Can't you already do that with a server motherboard? Even if you're looking for a PC, Skulltrail supports gobs of RAM and 8 cores.
      On the server side, Intel is coming out (soon) with Dunnington [wikipedia.org], which will be a 6-core single-die CPU with a monster cache... AND you can put 4 of them on a motherboard, giving you a 24-core machine. Then, you can also get custom workstations (Tyan?) that support multiple motherboards in a single box with a high speed interconnect. This is only going to get better when CSI/QPI [wikipedia.org] g

      • Re: (Score:3, Insightful)

        by Sancho ( 17056 )
        I agree that parallelism is more easily (and transparently) used at the OS level, but that doesn't mean that we don't need to start moving that way in applications, too. As we move towards a point where extracting more speed out of the silicon becomes harder and harder, we're going to see more and more need for parallelism. For a lot of applications, it's going to be irrelevant, but for anything at all CPU intensive (working with video, games, etc.) it's going to eventually be the way of the future.
    • by adisakp ( 705706 )
      32G chips and put them in 4 slots on my 64 Bit PC before talking about 'massive memory'

      "Massive Memory" refers to memory density where it's appropriate to use memory for applications that used to require a hard drive and then hard drives are used for long term storage only. This isn't meant for your daily PC. The current state of technology typically uses specialized rack-mount clusters of dozens or hundreds thin blades where each blade might have 16-64G of memory on standard 4 GB ECC Dimms with 8 memo
  • TRIPS (Score:4, Interesting)

    by Bombula ( 670389 ) on Friday March 28, 2008 @09:33AM (#22893446)
    Someone in another recent thread mentioned the TRIPS architecture [wikipedia.org]. It's quite interesting reading.
    • Re: (Score:3, Interesting)

      TRIPS and EDGE are interesting approaches to parallel processing. The thing is, data interdependency means that raw execution speed will remain the trump card for computing. That is to say: you cannot parallelize an algorithm that requires step-by-step completion. EDGE simply identifies, at the compilation level, which parts of a program can be parallelized, based on interdependencies, and then it creates the program based on this in the form of 'hyperblocks'.

      If each subsequent step is dependent on the pr
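      (A rough illustration of the data-dependency point above, sketched in Java rather than anything TRIPS/EDGE-specific; the array sizes, constants and class name are invented for the example.)

```java
// Hypothetical sketch: why a loop-carried dependency blocks parallelism while
// independent iterations do not. Not TRIPS/EDGE code, just the underlying idea.
public class DependencyDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        double[] a = new double[n];
        double[] b = new double[n];

        // Loop-carried dependence: each step needs the previous result,
        // so the iterations must run in order and cannot be split across cores.
        a[0] = 1.0;
        for (int i = 1; i < n; i++) {
            a[i] = a[i - 1] * 1.0000001;
        }

        // Independent iterations: each element depends only on its own index,
        // so a compiler or runtime is free to distribute this work.
        for (int i = 0; i < n; i++) {
            b[i] = Math.sqrt(i);
        }
    }
}
```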
  • 1% of programmers (Score:5, Insightful)

    by LotsOfPhil ( 982823 ) on Friday March 28, 2008 @09:34AM (#22893454)

    Only around 1% of the world's software developers have any experience of parallel programming.

    This seems far, far too low. Admittedly I work in a place that does "parallel programming," but it still seems awfully low.
    • Sigh, replying to myself. The source for the 1% figure is a blog of someone at Intel:

      A more pressing near-term problem is the relative lack of experienced parallel programmers in industry. The estimates vary depending on whom you ask, but a reasonable estimate is that 1% (yes, that's 1 in 100) of programmers, at best, have "some" experience with parallel programming. The number that are experienced enough to identify performance pitfalls across a range of parallel programming styles is probably an order of

    • Perhaps they meant it as in "specifically designing to scale up", as opposed to a developer who just uses a thread to do some background processing.

      One thing that's always saddened me is that most embarrassingly parallel problems like web and database development are still in the dark ages with this. They have so much potential, but almost nobody seems to care. To date the only frameworks I know of that allow fully asynchronous efficient processing of requests and database queries (so that you don't nee

      • Re: (Score:3, Interesting)

        by AKAImBatman ( 238306 )

        To date the only frameworks I know of that allow fully asynchronous efficient processing of requests and database queries (so that you don't need to spawn a new thread/process for every request) is ASP.NET, WCF (SOAPish stuff in .NET), and some of .NET's database APIs.

        How do you see that as different from what Java J2EE does? Most J2EE servers these days use pools of threads to handle requests. These threads are then utilized based on poll/select APIs so that one thread can handle many requests depending on

        • How do you see that as different from what Java J2EE does? Most J2EE servers these days use pools of threads to handle requests. These threads are then utilized based on poll/select APIs so that one thread can handle many requests depending on the availability of data. Database connections are similarly pooled and reused, though any optimization on the blocking level would need to be done by the JDBC driver.

          I've never used Java/J2EE before so I couldn't say.

          .NET uses a per-operation callback to notify the app of I/O completion - it doesn't expose whatever internal mechanism it uses to achieve async, so the server can choose the most efficient method (IOCP, epoll, kqueue, how many threads, etc.). If you use SQL Server, DB queries can be done async too. If you do it right, in many cases you can keep from ever blocking on I/O.
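          (For readers unfamiliar with the pooling model being debated here: a minimal Java sketch of "a bounded pool of workers serving many requests", not J2EE or ASP.NET internals; the port number and response are arbitrary.)

```java
// Minimal sketch of a fixed worker pool serving many connections, rather than
// spawning one new thread per request.
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(8); // bounds concurrency
        ServerSocket server = new ServerSocket(8080);
        while (true) {
            final Socket client = server.accept();  // accept on one thread...
            pool.submit(new Runnable() {            // ...hand the work to the pool
                public void run() {
                    try {
                        OutputStream out = client.getOutputStream();
                        out.write("HTTP/1.0 200 OK\r\n\r\nhello\r\n".getBytes());
                        out.flush();
                    } catch (IOException ignored) {
                    } finally {
                        try { client.close(); } catch (IOException e) { }
                    }
                }
            });
        }
    }
}
```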

      • Was just at a MATLAB seminar yesterday Phrosty, and it has reasonably transparent support for embarrassingly parallel and data-parallel operations. For other stuff, it just falls back to MPI (using some lightweight wrappers around it which mainly just take care of counting how large the arrays you're sending are).

        I think it might be worth a slashdot poll to see how many programmers have experience working with threads or MPI. My guess is a lot higher than 1%, especially given how prevalent Java is these days
    • Re: (Score:3, Interesting)

      by Shados ( 741919 )
      I'd be sceptical of the source of the information too (Intel's blog as you posted), but that doesn't seem that low to me...

      The entire hierarchy system in the IT fields has to deal with the painfully obvious fact that less than 1% of programmers know what they're doing: that is, in an ideal scenario, everyone would know what they're doing, and you'd have a FEW hardcore computer scientists to handle the nutso theoretical scenarios (most parallel programming for example can be done with only basic CS knowledge
      • Re: (Score:3, Insightful)

        by postbigbang ( 761081 )
        Your number is a bit insulting.

        Consider that parallel computing means keeping extra monolithic cores busy. There are a number of programmers that need the discipline to know how to spawn, use, and tear down threads to keep them busy. But there are a helluva lot of them that plainly don't need to know. What we lack are reasonable compilers that allow the hardware layer to be sufficiently abstracted so that code can adapt to hardware infrastructure appropriately. If that doesn't happen, then code becomes mach
      • Re: (Score:3, Insightful)

        by AKAImBatman ( 238306 )

        In an ideal world where programmers know how to do the -basics- of their job, software architects would be an obsolete job

        Not really. The structure of a large system still has to be defined by someone. The key difference is that the architect would get a lot more feedback from his team, and could possibly even farm out high-level pieces of the design to be further architected by other developers.

        I'm a .NET and Java dev, and those languages, while they could be better, make parallel programming relatively simp

        • by Shados ( 741919 )

          Not really. The structure of a large system still has to be defined by someone. The key difference is that the architect would get a lot more feedback from his team, and could possibly even farm out high-level pieces of the design to be further architected by other developers.

          Note I said software architect, and I specifically stated I was not talking about system architects. Big difference between the two.

          And yes application servers DO count. But during one connection, there's a LOT of things that can be sp

          • Note I said software architect, and I specifically stated I was not talking about system architects. Big difference between the two.

            Not really. A software architect represents a division of labor between the guys who build the hardware and the guys who build the software. The software architect obviously deals with the software aspect of the system (and when I say system, I mean a complete, large-scale application) and is thus responsible for how the code will be organized and constructed. You simply can't

    • by Otter ( 3800 ) on Friday March 28, 2008 @10:01AM (#22893754) Journal
      Admittedly I work in a place that does "parallel programming," but it still seems awfully low.

      I think your experience is wildly skewed toward the high end of programming skill. The percentage of working programmers who can't iterate over an array is probably in the 15-20% range, even without getting into whether "web programmers" are included in that statistic. I'd be astonished if the number with parallel experience is significantly above 1%.

      • by Anne Thwacks ( 531696 ) on Friday March 28, 2008 @10:16AM (#22893942)
        I'd believe 1% as having "some experience". Some experience is what you put on your CV when you know what the buzzword means.

        If you ask how many can "regularly achieve significant performance through use of multiple threads" then 0.1% is far too high. If you mean "can exchange data between a userland thread and an ISR in compliance with the needs of reliable parallel execution" then it's a safe bet that less than 0.1% are mentally up to the challenge. /. readers are not typical of the programming community. These days people who can drag-and-drop call themselves programmers. Poeple who can spell "l337" are one!

        • by Otter ( 3800 )
          If you ask how many can "regularly achieve significant performance through use of multiple threads" then 0.1% is far too high.

          I was thinking more along the lines of "learned something about parallelism in a CS class and remember having done so, although not necessarily what it was".

        • If you ask how many can "regularly achieve significant performance through use of multiple threads" then 0.1% is far too high.

          Add MPI or distributed parallel processing, and I would guess that number drops even lower. This is not a trivial topic, and for many years, just wasn't necessary. During the clock speed wars it was a non issue for most software, but with clock speeds capping out and core counts on the rise, I think we will see a large shift to this way of doing things in years to come. Most programmers follow the way of necessity, and that is what it will become.

        • These days people who can drag-and-drop call themselves programmers. Poeple who can spell "l337" are one!
          Well, "l337" is a language isn't it? At least they know how to use one language. Some poeple on the other hand don't know even how to spell in English :^P
      • The percentage of working programmers who can't iterate over an array is probably in the 15-20% range, even without getting into whether "web programmers" are included in that statistic.

        As a "web programmer", I find that statistic really hard to swallow. I've never met any professional developer who couldn't iterate over an array. Do you have anything, even anecdotal evidence to support that?

        Unless by "web programmer" you're including anyone who ever took a class on Dreamweaver.

        • Re: (Score:3, Funny)

          by Otter ( 3800 )
          Do you have anything, even anecdotal evidence to support that?

          Conveniently, the DailyWTF steps in [thedailywtf.com] to provide some anecdotal evidence:

          When I showed my lead the old code and the new, he responded, ah, that must have been Jed Code; yeah, he really hated anything that had to use arrays or loops, he couldn't see the point of them ... I think each month he would uncomment the next month and redeploy the application

          In fact, I've encountered quite a few programmers (whom I don't hire, so don't blame me) who don't

  • Parallelism is just a means to a business feature - performance. If clients want it, then there will be capital for it. If they don't, then it won't matter to them.
  • by nguy ( 1207026 ) on Friday March 28, 2008 @09:41AM (#22893528)
    the guy has a "startup in stealth mode" called parallel computing. Of course he wants to generate buzz.

    Decade after decade, people keep trying to sell silver bullets for parallel computing: the perfect language, the perfect network, the perfect os, etc. Nothing ever wins big. Instead, there is a diversity of solutions for a diversity of problems, and progress is slow but steady.
    • It looks to me more like progress is completely stalled.

      I mean, yes, there are all kinds of solutions. Most of them are completely unused, and we're back to threads and locks. Nothing's going to be perfect, but I'll buy the "no silver bullet" when we actually have wide adoption of anything -- even multiple things -- other than threads and locks.
    • by jd ( 1658 )
      There are a whole host of factors that are involved in the deaths of startups and the deaths of parallel technologies. I'm going to have to be careful, for NDA reasons, but here are some I have personally witnessed. These are not from the same company and I've not included anything that could identify the companies concerned. This list is intended for the purposes of showing why companies in general are often so unsuccessful, as there's no way I happened to witness wholly unique circumstances in each such c
  • by Alzheimers ( 467217 ) on Friday March 28, 2008 @09:42AM (#22893536)
    A guy who's made it his life's work to study Parallel Computing has come forth to say, he thinks Parallelism is the next big thing?

    Shock! And Awe!
  • by Rosco P. Coltrane ( 209368 ) on Friday March 28, 2008 @09:42AM (#22893540)
    Having been in the computer industry for too long, I reckon the "next hot thing" usually means the "latest fad" that many of the entrepreneurs involved hope will turn into the "next get-rich-quick scheme".

    Because really, does anybody believe Web-Two-Oh was anything but the regular web's natural evolution with a fancy name tacked on?
    • Re: (Score:3, Insightful)

      Oh, now that's not entirely fair.

      Web 2.0 was a single name for an amorphous collection of technologies and philosophies. It was even worse than AJAX.

      Parallelism is a pretty simple, well-defined problem, and an old one. That doesn't mean it can't turn into a buzzword, but I'm not convinced Web 2.0 can be anything but a buzzword.
    • Yep, the phrase "the next big thing in tech" is something uttered by people who are no longer allowed to work on Wall Street.

      Here's a clue for you: "Ultra cheap computers" is the next big thing in tech, or haven't you heard about the impending financial crisis that is about to consume the world's economies? That's right kiddies, no shiny new computers for your Christmas... just new ISOs from Linux

      Meh, can't blame him for trying to drum up business I guess
      • by samkass ( 174571 )
        I thought that multi-touch interfaces and embedded computing were the next big thing!

        Seriously-- we've had enough computing power for the average desktop tasks for a long time. Instead of putting 8 CPUs on a die and bottling up all the processing power on the desktop, put 8 CPUs in 8 separate different domain-specific embedded devices sitting around you...
        • Actually I've already commented on this. By making a PC that supports maybe 8 plug-in systems-on-a-card (blade style as was pointed out) and one main CPU to supervise the processors/boards and some RAID storage with digital storage on the cards, you can have the equivalent of 8 PCs running on your one desktop with one interface, near zero bottlenecks. A real click and rip situation.

          You should also be able to choose the number of processors you wish to have running so others can power down when not in use. Th
        • I thought that multi-touch interfaces... were the next big thing!

          Multi-touch and parallelism are both the "next big thing," because multiple touches are touches in parallel!

          ; )

    • by esocid ( 946821 )
      It may be that, but it's too early to tell, since it seems like not many programmers can write parallel algorithms as opposed to their sequential counterparts. It could also be due to the lack of libraries and standards. It is lame that this guy is making it sound like a pyramid scheme, but he has an investment in it. And I agree about that lame web 2.0 shit. Buzzwords don't mean jack.
    • Because really, anybody believes Web-Two-Oh was anything but the regular web's natural evolution with a fancy name tacked on?

      Web 2.0 is a definite set of "things" or "approaches" that "allow" you to (or possibly you "allow" them to) combine other "things" or "technologies" into a "newer" "-ish" mixture of "patterns" of "operation" of the "collection" of "computing" "resources" that "create" "value" beyond what "may" (or "may not") have been previously "achievable"

      Got it?

  • by Midnight Thunder ( 17205 ) on Friday March 28, 2008 @09:49AM (#22893620) Homepage Journal
    Now that we are seeing more and more in the way of multi-core CPUs and multi-CPU computers, I can definitely see parallelism becoming more important, for tasks that can be handled this way. You have to remember that in certain cases trying to parallelise a task can end up being less efficient, so what you parallelise will depend on the task in hand. Things like games, media applications and scientific applications are usually likely candidates, since they are either doing lots of different things at once or have tasks that can be split up into smaller units that don't depend on the outcome of the others. Server applications can too, to a certain extent, depending on whether they are contending for the same resources or not (an FTP server accessing the disk, vs. a time server which does no file I/O).

    One thing that should also be noted is that in certain cases you will need to accept increased memory usage, since you want to avoid tasks locking on resources that they don't really need to synchronise until the end of the work unit. In this case it may be cheaper to duplicate resources, do the work and then resynchronise at the end. Like everything, it depends on the size and duration of the work unit.

    Even if your application is not doing enough to warrant running its tasks in parallel, the operating system could benefit, so that applications don't suffer on sharing resources that don't need to be shared.
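    (A minimal Java sketch of the "duplicate, work, then resynchronise at the end" idea from the comment above; the data set and workload are invented for illustration.)

```java
// Each task gets its own private partial sum (no locks while working);
// the results are only combined once, at the end.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DuplicateAndMerge {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        final double[] data = new double[4_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i * 0.5;

        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Double>> partials = new ArrayList<Future<Double>>();

        int chunk = data.length / workers;
        for (int w = 0; w < workers; w++) {
            final int from = w * chunk;
            final int to = (w == workers - 1) ? data.length : from + chunk;
            partials.add(pool.submit(new Callable<Double>() {
                public Double call() {
                    double local = 0.0;             // private copy: no sharing, no locking
                    for (int i = from; i < to; i++) local += data[i];
                    return local;
                }
            }));
        }

        double total = 0.0;
        for (Future<Double> f : partials) total += f.get();  // resynchronise once, at the end
        pool.shutdown();
        System.out.println("sum = " + total);
    }
}
```

    Each worker touches only its own slice and its own accumulator, so there is nothing to lock until the single merge at the end.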
    • Re: (Score:3, Interesting)

      by Shados ( 741919 )
      Well, actually... I think more things can be parallelised than one would think at first glance... The very existence of the foreach loop kind of shows this... Looking at most code I have to work with, 90% of such loops simply do "I'm iterating through the entire list/collection/array and processing only the current element", some simple aggregates (the kind where you can split the task and aggregate the result at the end), etc. Virtually all applications have those, and call them often.

      Being able to simply say
      • Race conditions (Score:3, Insightful)

        by microbox ( 704317 )
        Being able to simply say "the order in which these tasks are made doesn't matter" lets you run a lot of tasks in parallel right there.

        Mucking around with language design and implementation highlights some of the deep problems that the "parallelise-everything" crowd often don't know about.

        In your example, the loop can only be efficiently parallelised if it doesn't have any side-effects. If any variables are written to out of scope of the loop, then they are exposed to race conditions in both the read and wri
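        (A small, hypothetical Java sketch of exactly that hazard: two threads writing a variable declared outside the loop body. The plain counter usually loses updates; the atomic one does not. Names and counts are made up.)

```java
// Demonstrates a read-modify-write race on a shared variable versus an
// atomic counter that tolerates concurrent updates.
import java.util.concurrent.atomic.AtomicLong;

public class RaceDemo {
    static long unsafeCount = 0;                        // shared, written without coordination
    static final AtomicLong safeCount = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 1_000_000; i++) {
                    unsafeCount++;                      // racy read-modify-write
                    safeCount.incrementAndGet();        // atomic, no lost updates
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println("unsafe: " + unsafeCount);     // very likely < 2,000,000
        System.out.println("safe:   " + safeCount.get()); // always 2,000,000
    }
}
```

        Run as shown, the unsafe total usually comes up short of 2,000,000, which is precisely the nondeterminism being argued about above.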
        • by Shados ( 741919 )
          Of course, the work needs to be thread safe in that example. That's why I said "lets you run a lot of tasks in parallel right there".

          My point was, if you go through a typical everyday business app (since those are probably the most common kind developed these days, or close), you'll find that this situation is more common than not. A LOT of loops contain operations without side effects and without shared resources. Being able to easily handle those nets you a large gain on the spot.

          Basica
          • Basically, I'm advocating the 80/20 way. Make all of the "obvious" and "simple" scenarios easy, and you'll already get a large performance boost with little effort (it's already being done... my parallel loop example can easily be done in virtually all languages, and is already used in many places, and it works quite well).

            In a number of cases this means getting the API that you use optimised and doing less improvements yourself. For example your average Java developer could probably push a large part of the
    • My gripe with this multi-core, let's-program-parallel fad is: where are these massively parallel machines?

      Two cores? Big whoop! Four cores? Haven't seen one, and our computing center keeps current with the latest generation desktop machines.

      But factors of (almost) two or (almost) four speedup are no big deal in the grand scheme of things. Wake me up when they are talking about 10 cores, or 100 cores.

      But that is the problem. Our resident computer architecture dude tells us that maybe 10 or 16 cores

  • Tied in with parallelism is the issue of doing something useful with the billions of transistors in a modern computer. During any microsecond, how many of them are doing useful work as opposed to just generating heat?
  • by Nursie ( 632944 ) on Friday March 28, 2008 @09:50AM (#22893630)
    Oh yes, here it is [slashdot.org].

    And the conclusion?

    It's been around for years numbnuts, in commercial and server applications, middle tiers, databases and a million and one other things worked on by serious software developers (i.e. not web programming dweebs).

    Parallelism has been around for ages and has been used commercially for a couple of decades. Get over it.
    • by esocid ( 946821 )
      But think outside the box man. This is going to revolutionize the web 2.0 experience with its metadata socialized infrastructure.

      Sorry, I totally got sick of using even that many buzz words. I'll stop now.
      • Re: (Score:3, Funny)

        But, at the end of the day (where the rubber meets the road) this will utilize the core competencies of solutions that specialize in the new ||ism forefront.
  • Please no (Score:3, Funny)

    by Wiseman1024 ( 993899 ) on Friday March 28, 2008 @09:51AM (#22893650)
    Not parallelism... Why do MBA idiots have to fill everything with their crap? Now they'll start creating buzzwords, reading stupid web logs (called "blogs"), filling magazines with acronyms...

    Coming soon: professional object-oriented XML-based AJAX-powered scalable five-nines high-availability multi-tier enterprise turnkey business solutions that convert visitors into customers, optimize cash flows, discover business logic and opportunities, and create synergy between their stupidity and their bank accounts - parallelized.
  • by 1sockchuck ( 826398 ) on Friday March 28, 2008 @09:53AM (#22893670) Homepage
    This sure looks like a growth area for qualified developers. An audience poll at the Gartner Data Center conference in Las Vegas in November found that just 17 percent of attendees [datacenterknowledge.com] felt their developers are prepared for coding multi-core applications, compared to 64 percent who say they will need to train or hire developers for parallel processing. "We believe a minority of developers have the skills to write parallel code," said Gartner analyst Carl Claunch. I take the Gartner stuff with a grain of salt, but the audience poll was interesting.

    McColl's blog is pretty interesting. He only recently started writing regularly again. High Scalability [highscalability.com] is another worthwhile resource in this area.
  • by david.emery ( 127135 ) on Friday March 28, 2008 @09:57AM (#22893720)
    So all-of-a-sudden people have discovered parallelism? Gee, one of the really interesting things about Ada in the late 80s was its use on multiprocessor systems such as those produced by Sequent and Encore. There was a lot of work on the language itself (that went into Ada95) and on compiler technologies to support 'safe parallelism'. "Safe" here means 'correct implementation' against the language standard, considering things like cache consistency as parts of programs get implemented in different CPUs, each with its own cache.

    Here are a couple of lessons learned from that Ada experience:
    1. Sometimes you want synchronization, and sometimes you want avoidance. Ada83 Tasking/Rendezvous provided synchronization, but was hard to use for avoidance. Ada95 added protected objects to handle avoidance.
    2. In Ada83, aliasing by default was forbidden, which made it a lot easier for the compiler to reason about things like cache consistency. Ada95 added more pragmas, etc, to provide additional control on aliasing and atomic operations.
    3. A lot of the early experience with concurrency and parallelism in Ada learned (usually the hard way) that there's a 'sweet spot' in the number of concurrent actions. Too many, and the machine bogs down in scheduling and synchronization. Too few, and you don't keep all of the processors busy. One of the interesting things that Karl Nyberg worked on in his Sun T1000 contest review was the tuning necessary to keep as many cores as possible running. (http://www.grebyn.com/t1000/ [grebyn.com] ) (Disclosure: I don't work for Grebyn, but I do have an account on grebyn.com as a legacy of the old days when they were in the ISP business in the '80s, and Karl is an old friend of very long standing....)
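    As a hedged illustration of lesson 3 (in Java rather than Ada, with invented numbers): cap the number of concurrent workers near the processor count instead of spawning one task per work item, so the machine stays busy without drowning in scheduling and synchronization overhead.

```java
// Size the worker pool to the hardware: too few threads leave cores idle,
// thousands of threads waste time context-switching.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SweetSpot {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < 10_000; i++) {
            final int item = i;
            pool.submit(new Runnable() {
                public void run() {
                    // ...a bounded chunk of work for 'item' (placeholder computation)...
                    Math.cbrt(item);
                }
            });
        }
        pool.shutdown();
    }
}
```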

    All this reminds me of a story from Tracy Kidder's Soul of a New Machine http://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine [wikipedia.org]. There was an article in the trade press pointing to an IBM minicomputer, with the title "IBM legitimizes minicomputers". Data General proposed (or ran, I forget which) an ad that built on that article, saying "The bastards say, 'welcome' ".

    dave

    • by non ( 130182 )
      although many aspects of Brian Wilson's 'A Quick Critique of Java' [ski-epic.com] are no longer particularly valid, and indeed i find myself programming more and more frequently in a language i too once scorned, the section on multi-threaded programming surely still has relevance. i certainly second his opinion that multi-threaded applications tend to crash more often, and have in fact turned down offers of employment with companies whose code failed to run on my workstation due to the fact that it wasn't exactly thread-s
  • by sm62704 ( 957197 ) on Friday March 28, 2008 @09:59AM (#22893740) Journal
    Sorry guys, web 2.0 was never cool and never will be.
    • by msimm ( 580077 )
      Tell that to your CEO as your IT department scrambles to install wikis and forums and blogs that will never really be used.
  • Will "Parallelism" be the next new thing? Well... no. That's like asking if "for loops" are going to be the next big thing. It's a tool, to be used when appropriate and not used when appropriate. It's going to be very hard to convert "Parallelism" into a magic VC pixie dust.

    I say this as someone who has recently been tuning his career, experience, and personal projects towards learning more about parallel programming in practice, and I still don't see this as a "next big thing". It's just another in a long
  • So I started writing without reading TFA. Then I read TFA to be sure that I was not in fact missing something. I'm not, and you're not either... The assertions of the article are absurd - someone must have felt that they needed to put something on the web for google to index.

    While not related to parallelism I especially like "SaaS as an Appliance. One area within SaaS that is growing quickly is the opportunity to deliver a SaaS product as an appliance."

    So you mean to tell me that the next big thing is insta
  • Former venture capitalist Jeff Nolan has been agonizing this week over "what's next?" and "where the venture capitalists will put all that money?"

    It really sounds like he is shilling.

    But seriously, first it was DP and COBOL.
    Then expert systems.
    Then relational databases.
    Then object orientation.
    Then webification.
    Then XMLification.
    Then web 2.0.

    And probably a few I missed.

    I think this is just another fad.

    • Interesting definition of "fad" you have there. Are you seriously claiming that relational databases went away after the initial burst of enthusiasm? Or OO? The web? XML?
      • by plopez ( 54068 )
        no, just the hype and the propaganda that they would solve all our problems. And venture capital firms pouring money into them.
  • I'm not trolling. Sun, Intel, Nvidia, and ATI/AMD (for obvious reasons) have been investing in this area for years. I think that constitutes Silicon Valley investments (except for those crazy Canadians at ATI).
     
  • What am I missing? (Score:4, Insightful)

    by James Lewis ( 641198 ) on Friday March 28, 2008 @10:31AM (#22894094)
    Now that multi-core computers have been out I keep hearing buzz around the idea of parallel computing, as if it is something new. We've had threads, processes, multi-CPU machines, grid computing, etc etc for a long time now. Parallelism has been in use on single processor machines for a long time. Multi-core machines might make it more attractive to thread certain applications that were traditionally single-threaded, but that's the only major development I can see. The biggest problem in parallel computing is the complexity it adds, so hopefully developments will be made in that area, but it's an area that's been researched for a long time now.
    • Parallelism has been done where it is easy (web servers and similar) and where there was no other choice (scientific computing, etc.), but it has not been done well in mainstream software.

      Historically, software development has been lazy (with a few notable exceptions) and sat back relying on new silicon (EEs, Moore's Law, higher clock rates) to improve performance. But in the future that may change. Breaking your software up into parallel tasks may be required to get performance benefits from new silicon.
  • You think that nobody has a real interest in parallel computing? Intel's put their money on it already - they've allotted $20 million between UC Berkeley [berkeley.edu] and University of Illinois [uiuc.edu] to research parallel computing, both in hardware and software.

    I am a EECS student at Cal right now and I have heard talks by the UC Berkeley PARLab [berkeley.edu] professors (Krste Asanovic and David Patterson, the man who brought us RAID and RISC), and all of them say that the computing industry is going to radically change unless we figure

  • We know three kinds of parallelism that work: clusters, shared memory multiprocessors, and graphics processors. Many other ideas have been tried, from hypercubes to SIMD machines, but none have been big successes. The most exotic parallel machine ever to reach volume production is the Cell, and that's not looking like a big win.

    Graphics processors are the biggest recent success. They're still very difficult to program. We need new languages. C and C++ have the built-in assumption that all pointers poi

    • by ceoyoyo ( 59147 )
      Every major desktop, notebook or workstation processor in the last ten years has had SIMD units built in. It's been very successful. Graphics processors are basically massive SIMD machines with a limited instruction set and restrictive architecture. They're not hard to program, but they are limited.

      You've got two choices if you want to run lots of stuff in parallel. You can do it very easily and live with the restrictions, a la GPUs or simple coarse grained cluster stuff. Or you can have a lot more fle
      • by Animats ( 122034 )

        The Cell isn't really an exotic parallel machine. It's a regular multiprocessor/multicore machine (like a ten year old desktop Mac) except that some of those processors are special purpose.

        No, it's a non-shared-memory multiprocessor with limited memory (256K) per CPU. It belongs to roughly the same family as the nCube, although the Cell has a block DMA-like path to main memory rather than relying entirely on CPU to CPU data paths like the nCube.

        It's typically used like a DSP farm; data is pumped throu

  • by DrJokepu ( 918326 ) on Friday March 28, 2008 @10:47AM (#22894294)
    You see, the majority of the programmers out there don't know much about parallelism. They don't understand what synchronization, mutexes or semaphores are. And the thing is that these concepts are quite complex. They require a much steeper learning curve than hacking a "Web 2.0" application together with PHP, Javascript and maybe MySQL. So if now everybody starts writing multithreaded or otherwise parallel programs, that's going to result in an endless chain of race conditions, mysterious crashes and so on. Remember, race conditions have already killed people [wikipedia.org].
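    (For anyone wondering what those primitives actually buy you, here is a minimal, hypothetical Java sketch of the two the parent names: a mutex protecting shared state, and a semaphore limiting concurrent access to a scarce resource. The class and field names are made up for the example.)

```java
// A mutex (here, a synchronized block) serializes updates to shared state;
// a semaphore caps how many threads may hold a scarce resource at once.
import java.util.concurrent.Semaphore;

public class Primitives {
    private final Object lock = new Object();                 // mutex
    private int balance = 0;

    private final Semaphore connections = new Semaphore(4);   // at most 4 concurrent users

    public void deposit(int amount) {
        synchronized (lock) {                                 // one thread mutates balance at a time
            balance += amount;
        }
    }

    public void useConnection() throws InterruptedException {
        connections.acquire();                                // blocks once 4 threads are inside
        try {
            // ...talk to the database...
        } finally {
            connections.release();                            // always give the permit back
        }
    }
}
```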
    • by pohl ( 872 )
      Thank you for that wikipedia link. That is a tragic story. I don't understand why you think that "we're all doomed", though. Being in the minority of programmers who understand a given technique is potentially lucrative, so those people are not doomed. And there's no need whatsoever to eke out every last drop of 16-core performance out of a machine that needs to make sure that a beam-spreader is rotated into position prior to activating the high-power X-Ray emitter -- so the obvious solution in that
  • Electricity, Automobiles, and the telephone!

    Seriously, didn't we decide that parallel programming was the next big thing when Sutter wrote a big article in Dr. Dobb's a couple of years ago?

    Welcome to the party pal, we've been here a while already!
  • TFA is mainly about SaaS - Software as a Service (yes, I had to look it up). GMail, Google Calendar etc. in other words.

    Honestly I think that parallelism and SaaS are pretty much on the opposite sides of the spectrum. Your typical SaaS application requires no parallelism whatsoever since they are typically low-impact programs. The only real improvement over ordinary software is that you don't have to install it, don't have to maintain it and that you can access it anytime from anywhere.

    A typical SaaS provid
  • http://vinay.howtolivewiki.com/blog/hexayurt/supercomputer-applications-for-the-developing-world-375 [howtolivewiki.com]

    We've seen unambiguously that **GIGANTIC** data sets have their own value. Google's optimization of their algorithms clearly uses enormous amounts of observed user behavior. Translation efforts with terabyte source cannons. Image integration algorithms like that thing that Microsoft were demonstrating recently... gigantic data sets have power because statistics draw relationships out of the real world, rather
  • The HPC Cluster people have thought about this stuff for a while. One approach that I have thought about is described in the article:Cluster Programming: You Can't Always Get What You Want [clustermonkey.net] There are two follow-on articles as well Cluster Programming: The Ignorance is Bliss Approach" [clustermonkey.net] and Cluster Programming: Explicit Implications of Cluster Computing [clustermonkey.net].

    Of course if you really want to know how I feel about this: How The GPL Can Save Your Ass [linux-mag.com]

    enjoy

  • We *already* live in a parallel computing environment. Almost every computer has a large number of processes and threads running simultaneously. This *is* parallelism.

    Granted, yes, certain products could benefit by extreme threading, i.e. like PostgreSQL breaking the hierarchy of query steps into separate threads and running them in parallel, like doing a more exhaustive search for the query planner using multiple threads, and stuff like that, but there is always going to be the competition between performa
  • Parallelism was the New New Thing when the Inmos Transputer rolled out in 1984 - a CPU explicitly designed to allow multiple CPUs to co-operate, and with a hardware scheduler to provide on-chip parallelism. Then we had the GAPP (Geometric Arithmetic Parallel Processor), the Connection Machine, and lots of other weird architectures whose names I cannot recall. I designed one myself in the late eighties, and we took it to breadboard stage (essentially Hyperthreading writ very large).

    Forgive me for not gett
  • I am still trying to figure out what people are talking about with web 2.0.

    With CompuServe (1969), BBSs (1970s), Usenet, e-mail & The Source (1979), and The WELL & Q-Link (1985), we have had online communities this whole time. With IRC we have been chatting since 1988.
    And almost all of this existed over the IP-based Internet starting around 1983, and starting in 1993 it became HTTP/browser-based. I have been using all of these since soon after their inception.

    So what is new about Web 2.0? I can't see
  • Very old to me. And not fully solved.
  • I know I'm personally deploying my Web 2.0 social-mashup-o-sphere site in z/VM running on z10.

    Sure, it can be a bit tiresome to edit my blogroll in XEDIT, but the parallelism... woooosh! My AJAX just *flies* out of that VTAM, Beowulf clusters ain't got *nothing* on me!
  • I do NOT look forward to our parallel overlords.

    When the only type of tool we have is massively parallel systems, what kind of problems do you think we will apply that tool to, and what kind of problems do you think we will start ignoring? I would rather see us tackle both kinds of problems.

    I suspect, however, we will end up with "the contact lens effect", where someone loses their contact lens in a dark alley, then looks for it under a streetlight "because the light is better over here".

    -- Terry
