Auto-Parallelizing Compiler From Codeplay

Max Romantschuk writes "Parallelization of code can be a very tricky thing. We've all heard of the challenges with Cell, and with dual and quad core processors this is becoming an ever more important issue to deal with. The Inquirer writes about a new auto-parallelizing compiler called Sieve from Codeplay: 'What Sieve is is a C++ compiler that will take a section of code and parallelize it for you with a minimum hassle. All you really need to do is take the code you want to run across multiple CPUs and put beginning and end tags on the parts you want to run in parallel.' There is more info on Sieve available on Codeplay's site."
This discussion has been archived. No new comments can be posted.

  • Reentrant? (Score:5, Interesting)

    by Psychotria ( 953670 ) on Friday March 09, 2007 @11:33PM (#18297084)
    Forgive me if I'm wrong (I've not coded parallel things before), but if the code is re-entrant, does that go a long way towards running it in parallel? Obviously there are other factors involved here, like memory access, but re-entrant programming already has to think about that. I'm not sure what the difference is... please enlighten me :-)
    • Re:Reentrant? (Score:5, Informative)

      by Anonymous Coward on Friday March 09, 2007 @11:47PM (#18297186)
      Reentrancy is a factor, because it's a class of dependencies, but there are many other dependencies.

      Consider a for loop: for (int i = 0; i < 100; i++) doSomething(i);

      Can this be parallelized? Perhaps the author meant it like it's written there: First doSomething(0), then doSomething(1), then ... Or maybe he doesn't care about the order and doSomething just needs to run once for each i in 0..99. The art of automatic parallelization is to find overspecifications like the ordered loop where order isn't really necessary. If nothing in doSomething depends on the outcome of doSomething with a different i, they can be run in parallel and in any order. Suppose each doSomething involves a lengthy calculation and an output at the end. Then they can't simply run in parallel, because the output is a dependency: As written, the output from doSomething(0) comes before doSomething(1) and so on. But the compiler could still run the lengthy calculation in parallel and synchronize only the fast output at the end. The more of these opportunities for parallelism the compiler can find, the better it is.
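      To make that concrete, here is a minimal sketch of the "parallel calculation, serialized output" split described above. It uses OpenMP rather than Sieve, purely for illustration, and lengthyCalculation() is a made-up stand-in:

      #include <cstdio>
      #include <vector>

      // Hypothetical lengthy, independent calculation for index i.
      static double lengthyCalculation(int i) {
          double x = 0.0;
          for (int k = 0; k < 1000000; ++k)
              x += (i + 1) * 0.000001;
          return x;
      }

      int main() {
          const int N = 100;
          std::vector<double> results(N);

          // The independent calculations may run in any order, in parallel.
          #pragma omp parallel for
          for (int i = 0; i < N; ++i)
              results[i] = lengthyCalculation(i);

          // The output is an ordering dependency, so it stays serial.
          for (int i = 0; i < N; ++i)
              std::printf("doSomething(%d) = %f\n", i, results[i]);
          return 0;
      }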
      • Re:Reentrant? (Score:5, Interesting)

        by 644bd346996 ( 1012333 ) on Saturday March 10, 2007 @12:08AM (#18297288)
        In the case of the for loop, that is really a symptom of the fact that C-style languages don't have syntax for saying "do this to each of these". So one must manually iterate over the elements. Java does have the for-each syntax, but it is just an abbreviation of the "for i from 0 to x" loop.

        Practically all for loops written are independent of order, so they could be trivially implemented using MapReduce. That one change would parallelize a lot of code, with no tricky compiler optimizations.
        • Interesting! I'd never heard of "map and reduce" before, but it seems like just the sort of idiom that would be useful for the program I'm getting ready to write.

          Would it make sense to have an interface for it like this in C:

          void map(int (*op)(void*), void** data, int len);

          Where the implementation could be either a for loop:

          void map(int (*op)(void*), void** data, int len)
          {
              int i;
              for (i = 0; i < len; i++) {
                  (*op)(data[i]);
              }
          }

          or an actual parallel implementation, such as one using pthreads or r
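          For what it's worth, a rough sketch of such a pthreads-based map() with the same signature (the fixed N_THREADS and the chunking strategy are my own choices, and error handling is omitted):

          #include <pthread.h>

          struct map_chunk {
              int (*op)(void*);
              void** data;
              int begin;
              int end;
          };

          static void* map_worker(void* arg) {
              struct map_chunk* c = (struct map_chunk*)arg;
              for (int i = c->begin; i < c->end; i++)
                  (*c->op)(c->data[i]);
              return 0;
          }

          #define N_THREADS 4

          void map(int (*op)(void*), void** data, int len)
          {
              pthread_t threads[N_THREADS];
              struct map_chunk chunks[N_THREADS];

              // Split the index range into roughly equal chunks, one per thread.
              for (int t = 0; t < N_THREADS; t++) {
                  chunks[t].op = op;
                  chunks[t].data = data;
                  chunks[t].begin = t * len / N_THREADS;
                  chunks[t].end = (t + 1) * len / N_THREADS;
                  pthread_create(&threads[t], 0, map_worker, &chunks[t]);
              }
              // Wait for all chunks to finish before returning.
              for (int t = 0; t < N_THREADS; t++)
                  pthread_join(threads[t], 0);
          }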

        • Java does have the for-each syntax, but it is just an abbreviation of the "for i from 0 to x" loop.

          I would be very surprised if nobody's tried to work around this problem with AOP and annotations. It would be trivial to code parallelization with a method interceptor.

          On a multiprocessor machine the JVM will assign threads to individual CPUs. Your interceptor would populate an array of N threads, assign each thread a number, wrap the annotated method in a callback with a finally {lock.notify()} at the end, sy
          • by nuzak ( 959558 )
            > Or you could apply data parallelization, like parallel Fortran

            Fortress does all loops in parallel by default. You have to explicitly tell it when you want a serial loop.
            • Fortress does all loops in parallel by default. You have to explicitly tell it when you want a serial loop.

              Fortress does, but Fortran-90 didn't. It was strictly a SISD language and had to be extended in HPF with javadoc-style hacks as I described above so that the compiler knew how to link against the MPI libraries and produce an executable that could take advantage of data parallelism on SIMD/MIMD architectures. The whole thing was one big hack. I think starting with Fortran 95 they began to incorporate fe
        • Practically all for loops written are independent of order, so they could be trivially implemented using MapReduce.

          That is a very large, very unsupported assumption to be making. It certainly doesn't hold true for the code I've seen in the wild and have written myself.
        • by HiThere ( 15173 )
          Unfortunately, in many of the languages that I'm aware of that implemented foreach, the language itself specifies sequential execution. Some of these are recent languages, too! E.g., D (Digital Mars D) is a language so new that its 1.0 release was this January. It specifies that foreach is executed sequentially. So, I believe, does Python. This has always seemed to me like lack of foresight, but I didn't write the languages, or even the specs, so I don't have enough room to complain seriously. (I did
          • by nuzak ( 959558 )
            I think I replied to the wrong message (I can't claim it was a typo since I quoted it after all). But anyway, Fortress does all loops in parallel by default, you have to explicitly tell it when you want serial execution.

            Python cares so little about parallelism that it still uses a single monolithic lock around the interpreter, so you can't even make reasonable use of threads except for I/O waits. But no imperative language can just throw in something as drastic as auto-parallelism without rewriting basic
        • C++ does have such statements: http://cppreference.com/cppalgorithm/index.html [cppreference.com]. Check out "for_each" for an example.
          • Yes, the STL does have for_each and transform. But they use several iterators and a functor. This means that they are a) ugly and b) not parallel.
        • Actually, C++'s STL containers do have such a utility: std::for_each (part of the standard <algorithm> header).
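          For reference, the serial idiom looks like this (C++98 style, purely illustrative - nothing here runs in parallel):

          #include <algorithm>
          #include <cstdio>
          #include <vector>

          struct PrintSquare {
              void operator()(int x) const { std::printf("%d\n", x * x); }
          };

          int main() {
              std::vector<int> v;
              for (int i = 0; i < 10; ++i)
                  v.push_back(i);
              // Applies the functor to each element, in order, on one thread.
              std::for_each(v.begin(), v.end(), PrintSquare());
              return 0;
          }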
    • Re:Reentrant? (Score:5, Informative)

      by jd ( 1658 ) <imipak@ y a hoo.com> on Saturday March 10, 2007 @01:09AM (#18297560) Homepage Journal
      Simple version: Parallel code need not be re-entrant, but all re-entrant code can safely be run in parallel.

      More complex version: There are four ways to run a program. These are "Single Instruction, Single Data" (ie: a single-threaded program), "Single Instruction, Multi Data" (SETI@Home would be an example of this), "Multi Instruction, Single Data" (a good way to program genetic algorithms) and "Multi Instruction, Multi Data" (traditional, hard-core parallelism).

      SIMD would need to be re-entrant to be parallel, otherwise you can't be running the same instructions. (Duh. :) SIMD is fashionable, but is limited to those cases where you are operating on the data in parallel. If you want to experiment with dynamic methods (heuristics, genetic algorithms, self-learning networks), or want to apply multiple algorithms to the same data (eg: data-mining, using a range of specialist algorithms), then you're going to be running a vast number of completely different routines that may have no components in common. If so, you wouldn't care if they were re-entrant or not.

      In practice, you're likely to use a blend of SIMD, MISD and MIMD in any "real-world" program. People who write "pure" code of one type or another usually end up with something that is ugly, hard to maintain and feels wrong for the problem. On the other hand, it usually requires the fewest messaging and other communication libraries, as you're only doing one type of communication. You can also optimize the hell out of the network, which is very likely to saturate with many problems.

  • FPP (Score:5, Funny)

    by DigitAl56K ( 805623 ) on Friday March 09, 2007 @11:35PM (#18297096)
    Frtprallps
    is arle ot
    • Sad... (Score:3, Funny)

      by jd ( 1658 )
      I mean, it was only running on two threads AND showed clear signs of excess barrier operations at the end of every character. From here on out, I expect first parallel posts to run over at least four threads and not be sequentially-coherent. The world is moving towards async! Don't let first posts suffer with past limitations!
  • by baldass_newbie ( 136609 ) on Friday March 09, 2007 @11:37PM (#18297110) Homepage Journal
    I loved 'Clocks'. Oh wait, Codeplay...not Coldplay.
    Nevermind.

    Oh look. A duck.
    • You have confused Auto-Parallelizing Compiler From Codeplay

      For
      Auto-Compiling Code from ParallelPlay
      or
      Auto-Play Parallelizer from Complay
      or something...

  • openMP (Score:2, Informative)

    by Anonymous Coward
    And what is the difference between this and OpenMP?
    • Just a vendor's incompatible me-too implementation. I'm sure there are some semantic differences and maybe some new features, but it's the same thing. This product may also be aimed more at multicore desktops than SMP big iron like openmp is. I'm partial to MPI (Specifically, OcamlMPI) myself. 20 cores at 4.3 cpu-days and it was trivial to achieve >99% CPU utilization on all nodes.
      • by init100 ( 915886 )

        This product may also be aimed more at multicore desktops than SMP big iron like openmp is.

        And the difference would be?

    • Re: (Score:2, Informative)

      And what is the difference between this and OpenMP?

      On page 7 of The Codeplay Sieve C++ Parallel Programming System, 2006 [codeplay.com] you'll find a section that describes the "advantages" of Codeplay over OpenMP, but nothing terribly exciting. Codeplay does indeed let you automate parallelization further, but at the same time it is limited to a narrower set of optimizations compared to OpenMP.

    • by Xyrus ( 755017 )
      OpenMP is targeted at shared-memory, multi-processor systems. For example, OpenMP could be used on dual- and quad-core systems. For distributed systems, such as supercomputing clusters, you need the capability to pass messages quickly and efficiently between nodes (different machines). In this case, you use a message passing library (MPI being the most common).

      This sounds like it can do both, as well as determine what parts of the code can be parallelized.

      ~X~
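      For contrast with the pragma-based OpenMP examples elsewhere in the thread, the message-passing style looks roughly like this (a bare-bones MPI sketch, nothing Sieve-specific):

      #include <mpi.h>
      #include <cstdio>

      int main(int argc, char** argv) {
          MPI_Init(&argc, &argv);

          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          if (rank == 0) {
              // Node 0 collects one number from every other node.
              for (int src = 1; src < size; ++src) {
                  int value = 0;
                  MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
                  std::printf("got %d from node %d\n", value, src);
              }
          } else {
              // Every other node does some work and sends the result back.
              int value = rank * rank;
              MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
          }

          MPI_Finalize();
          return 0;
      }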
  • Interesting, but.. (Score:5, Insightful)

    by DigitAl56K ( 805623 ) on Friday March 09, 2007 @11:49PM (#18297198)
    The compiler will put out code for x86, Ageia PhysX and Cell/PS3. There were three tests talked about today, CRC, Julia Ray Tracing and Matrix Multiply. All were run on 8 cores (2S Xeon 5300 CPUs) and showed 739, 789 and 660% speedups respectively.

    That's great - but do the algorithms involved here naturally lend themselves to the parallelization techniques the compiler uses? Are there algorithms that are very poor choices for parallelization? For example, can you effectively parallelize a sort? Wouldn't each thread have to avoid exchanging data elements any other thread was working on, and therefore cause massive synchronization issues? A solution might be to divide the data set by the number of threads and then after each set was sorted merge them in order - but that requires more code tweaking than the summary implies. So I wonder how different this is from Open/MT?
    • Gah! "OpenMP".
    • by Anonymous Coward on Saturday March 10, 2007 @12:11AM (#18297306)
      For example, can you effectively parallelize a sort? Wouldn't each thread have to avoid exchanging data elements any other thread was working on, and therefore cause massive synchronization issues?

      Yes you can, take a look at merge sort [wikipedia.org] (or quicksort, same idea). You split up the large data set into smaller ones, sort those and recombine. That's perfect for parallelization -- you just need a mechanism for passing out the original elements and then recombining them.

      So if you had to sort 1B elements maybe you get 100 computers and give them each 1/100th of the data set. That's manageable for one computer to sort easily. Then just develop a service that hands you the next element from each machine, and you pull off the lowest one.
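      A sketch of that idea on a single multi-core box (the chunking and the use of OpenMP are my own choices here, not anything Sieve provides): sort the chunks in parallel, then do a serial k-way merge that always pulls the smallest head element.

      #include <algorithm>
      #include <functional>
      #include <queue>
      #include <utility>
      #include <vector>

      // Sorts 'data' by sorting 'chunks' pieces in parallel, then merging.
      void chunked_sort(std::vector<int>& data, int chunks) {
          const int n = (int)data.size();
          std::vector<int> result;
          result.reserve(n);

          // 1. Sort each chunk independently -- this part parallelizes cleanly.
          #pragma omp parallel for
          for (int c = 0; c < chunks; ++c) {
              int begin = (int)((long long)n * c / chunks);
              int end   = (int)((long long)n * (c + 1) / chunks);
              std::sort(data.begin() + begin, data.begin() + end);
          }

          // 2. Serial k-way merge: repeatedly take the smallest head element.
          //    pair<value, chunk index>; greater<> makes the heap a min-heap.
          typedef std::pair<int, int> Head;
          std::priority_queue<Head, std::vector<Head>, std::greater<Head> > heads;
          std::vector<int> pos(chunks), lim(chunks);
          for (int c = 0; c < chunks; ++c) {
              pos[c] = (int)((long long)n * c / chunks);
              lim[c] = (int)((long long)n * (c + 1) / chunks);
              if (pos[c] < lim[c])
                  heads.push(Head(data[pos[c]], c));
          }
          while (!heads.empty()) {
              Head h = heads.top();
              heads.pop();
              result.push_back(h.first);
              int c = h.second;
              if (++pos[c] < lim[c])
                  heads.push(Head(data[pos[c]], c));
          }
          data.swap(result);
      }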
      • by DigitAl56K ( 805623 ) on Saturday March 10, 2007 @01:06AM (#18297542)
        If you read my post, this is exactly what I suggested. The actual point was that it requires more than simply putting "beginning and end tags" on the code, i.e. it is not automatic.

        I would also ask this of CodePlay: If your compiler is automatic, why do we need to add beginning and end tags? :)
        • Is there any modern programming language which doesn't provide a sort function in its standard library? Because if you use that, the vendor can simply provide a parallelized version, and you don't have to care whether the vendor parallelized that function manually, the compiler parallelized it automatically, or even a mixture of both.
    • Re: (Score:2, Interesting)

      by SSCGWLB ( 956147 )
      You have a good point; both matrix multiply and ray tracing are embarrassingly parallel [wikipedia.org] problems. They lend themselves to this type of optimization.

      Consider two NxN matrices, A and B, multiplied together to make a matrix C. Each element of C (Cij) is the dot product of row i of A and column j of B - that is, the sum over k of Aik*Bkj. This is an almost trivial parallelization problem, commonly one of the first coding exercises in a parallel processing class.
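      Roughly like this (an OpenMP illustration, not how Sieve expresses it): each output row can go to a different thread because no element of C depends on any other.

      #include <vector>

      // C = A * B for square N x N matrices stored row-major in flat vectors.
      void matmul(const std::vector<double>& A, const std::vector<double>& B,
                  std::vector<double>& C, int N) {
          // Every C[i*N+j] depends only on A and B, never on another element
          // of C, so the outer loop can be split freely across threads.
          #pragma omp parallel for
          for (int i = 0; i < N; ++i) {
              for (int j = 0; j < N; ++j) {
                  double sum = 0.0;
                  for (int k = 0; k < N; ++k)
                      sum += A[i * N + k] * B[k * N + j];
                  C[i * N + j] = sum;
              }
          }
      }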

      IMHO, this is interesting but has a long way to go before its useful for anything b
  • snake oil (Score:5, Insightful)

    by oohshiny ( 998054 ) on Friday March 09, 2007 @11:53PM (#18297216)
    I think anybody who is claiming to get decent automatic parallelization out of C/C++ is selling snake oil. Even if a strict reading of the C/C++ standard ends up letting you do something useful, in my experience, real C/C++ programmers make so many assumptions that you can't parallelize their programs without breaking them.
    • "All you really need to do is take the code you want to run across multiple CPUs and put beginning and end tags on the parts you want to run in parallel"

      The compiler isn't going to know if you're doing something stupid or not.
      In other words: use at your own risk.

      The old adage of "garbage in, garbage out" still applies.
      • Re: (Score:3, Insightful)

        "All you really need to do is take the code you want to run across multiple CPUs and put beginning and end tags on the parts you want to run in parallel" The compiler isn't going to know if you're doing something stupid or not. In other words: use at your own risk. The old adage of "garbage in, garbage out" still applies.

        But how are you supposed to know exactly how something is going to run under this? Even with a good understanding of what you're trying to do and (hopefully) what exactly the compiler is
        • But how are you supposed to know exactly how something is going to run under this?

          The semantics of that construct are well-defined.

          Of course from the short description it's not entirely clear to me if the compiler actually implements that semantics, or simply relies on you to honor it (e.g. is it possible to call a non-sieve function from within a sieve function or block? In that case, the compiler cannot reasonably implement the semantics). There's a precedent for the second type of semantics: restrict.

          But

    • Re: (Score:2, Informative)

      by ariels ( 6608 )
      TFA specifically mentions that you need to mark up your code with sieves:
      1. A sieve is defined as a block of code contained within a sieve {} marker and any functions that are marked with sieve.
      2. Inside a sieve, all side-effects are delayed until the end of the sieve.
      3. Side effects are defined as modifications of data that are declared outside the sieve.

      The compiler can use this information to decide what parts of the code can safely be parallelized. Adding the "sieve" keyword can change the semantics of the code, a
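      To illustrate what "side effects delayed until the end of the sieve" means, here is plain C++ mimicking that semantics by hand (this is not Codeplay's actual syntax, just a sketch of the idea):

      #include <cstdio>
      #include <vector>

      int total = 0;   // declared outside the "sieve", so writes to it are side effects

      int main() {
          const int N = 100;

          // Body of what would be the sieve block: side effects are queued locally...
          std::vector<int> delayed(N);
          for (int i = 0; i < N; ++i)
              delayed[i] = i * i;      // independent work, safe to reorder or parallelize

          // ...and applied only at the "end of the sieve", in the original order.
          for (int i = 0; i < N; ++i)
              total += delayed[i];

          std::printf("%d\n", total);
          return 0;
      }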

      • TFA specifically mentions that you need to mark up your code with sieves:

        TFA claims it's auto-parallelizing and says:

        What Sieve is is a C++ compiler that will take a section of code and parallelize it for you with a minimum hassle. All you really need to do is take the code you want to run across multiple CPUs and put beginning and end tags on the parts you want to run in parallel.

        If "sieves" have all the restrictions you mention, then both the claim that it's "auto-parallelizing" and the description in the

    • Even if it's something as simple as a parallel for loop and synchronized variables it would help immensely.

      sync int total = 0;
      par (i = 0; i < 100; i++) {
          int j = 0;       // Thread local, of course
          static int k;    // Implied sync.
          DoSomething(i, &j);
          k++;
          total += j;
      }                    // implied wait for thread completion

      It'll even compile on old compilers with a "#define par for" and "#define sync".

      It's long past time for this.

  • Prefer OpenMP (Score:5, Informative)

    by drerwk ( 695572 ) on Friday March 09, 2007 @11:57PM (#18297238) Homepage
    I have some small amount of experience with OpenMP http://openmp.org/ [openmp.org], which allows one to modify C++ or Fortran code using pragmas to direct the compiler regarding parallelization of the code. The Codeplay white paper made this sound much like it implements one of the dozen or so OpenMP patterns. I am fairly skeptical that Codeplay has any advantage over OpenMP, but the white paper lists some purported advantages. I will not copy them here and take the fun out of reading them for yourself. I will list OpenMP advantages.
    1. OpenMP is supported by Sun, Intel, IBM, $MS(?) etc., and implemented in gcc 4.2.
    2. OpenMP has been used successfully for about 10 years now, and is on release 2.5 of the spec.
    3. It is Open - the white paper for Codeplay mentions it being protected by patents. (boo hiss)
    4. Did I mention that it is supported in gcc 4.2, which I built on my PowerBook last week, and that it is very cool?

    So maybe Codeplay is a nice system. Maybe they even have users and can offer support. But if you are looking to make your C++ code run multi-threaded with the least amount of effort I've seen ( It is still effort! ) take a look at OpenMP. In my simple tests it was pretty easy to make use of OpenMP, and I am looking forward to trying it on a rather more complicated application.
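    As a taste of what those pragmas look like, here is about the smallest useful example (a reduction; my own toy code, not from the white paper):

    #include <cstdio>

    int main() {
        const int N = 1000000;
        double sum = 0.0;

        // One pragma is all it takes: each thread keeps a private partial sum,
        // and OpenMP combines them when the loop ends.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; ++i)
            sum += 1.0 / (i + 1);

        std::printf("harmonic(%d) = %f\n", N, sum);
        return 0;
    }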

    • Re:Prefer OpenMP (Score:5, Informative)

      by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Saturday March 10, 2007 @12:06AM (#18297282) Homepage
      Don't forget the other end of the development spectrum - Visual C++ 2005 has built-in OpenMP support too.
      • by drerwk ( 695572 )
        The Powerbook reference should have identified me as an acolyte of Steve! Add to that I am stuck in my day job using VS 2003 for some management reason I forget at the moment. But that aside, do you use OpenMP in VS2005, or know of people who are doing so? I just went over the full 2.5 spec so I can map out some strategy for trying OpenMP with our existing software. It is almost 1M lines, and was not designed with Multi-threading in mind. I do think there will be places I can use OpenMP, and the ability to
        • I use it sometimes, for simple things. Most of the time I do my own threading though - VC2005 requires you to distribute a vcomp.dll with your app which is a bit of a turnoff.
          • Why is distributing a small DLL a turn off? Visual Studio comes with a built in installation package creator - which is really how you wanna distribute apps anyway. I mean, unless you're sending out viruses and zombies.
    • Re:Prefer OpenMP (Score:5, Interesting)

      by jd ( 1658 ) <imipak@ y a hoo.com> on Saturday March 10, 2007 @01:37AM (#18297656) Homepage Journal
      Personally, I would agree with you. I have to say I am not fond of OpenMP - I grew up on Occam, and these days Occam-Pi blows anything done in C out of the water. (You can write threads which can auto-migrate over a cluster, for example. Even OpenMOSIX won't work at a finer granularity than entire processes, and most compile-time parallelism is wholly static after the initial execution.)

      On the other hand, OpenMP is a far more solid, robust, established, reputable, reliable solution than Codeplay. The patent in Codeplay is also bothersome - there aren't many ways to produce an auto-parallelizing compiler and they've mostly been done. This means the patent either violates prior art (most likely), or is such "black magic" that no other compiler-writing company could expect to reproduce the results and would be buying the technology anyway. It also means they can't ship to Europe, because Europe doesn't allow software patents and has a reputation of reverse-engineering such code (think "ARC-4") or just pirating/using it anyway (think: Pretty Good Privacy version 2, international version, which had patented code in it).

    • I was just going to ask "who cares, OpenMP does this already" -- now I know that I don't care. It's not nearly as interesting as the work done out of NASA Greenbelt on a project called ACE (which actually is a genuinely automatic parallel compiler that targets clusters rather than CPUs --- really kickass concept). My very limited experience with OpenMP is that I prefer the MPI approach. That said, I don't think MPI or OpenMP are really the right answer -- it takes a language that was designed from the ground u
      • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Saturday March 10, 2007 @06:51AM (#18298624) Homepage Journal

        Intel's compiler (icc), available for Linux [intel.com], Windows [intel.com], and FreeBSD [freshports.org], extends OpenMP to clusters [intel.com].

        You can build your OpenMP code and it will run on clusters automatically. Intel's additional pragmas allow you to control which things you want parallelized over multiple machines vs. multiple CPUs (the former being fairly expensive to set up and keep in sync).

        I've also seen messages on gcc's mailing list that talk about extending gcc's OpenMP implementation (moved from GOMP [gnu.org] to mainstream in gcc-4.2 [gnu.org]) to clusters the same way.

        Nothing in OpenMP [openmp.org] prevents a particular implementation from offering multi-machine parallelization. Intel's is just the first compiler to get there...

        The beauty of it all is that OpenMP is just compiler pragmas [wikipedia.org] — you can always build the same code with them off (or with a non-supporting compiler), and it will still run serially.
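        That graceful degradation can even be observed from inside the program. The _OPENMP macro is defined by conforming compilers when OpenMP is switched on; the snippet below is just an illustration:

        #include <cstdio>
        #ifdef _OPENMP
        #include <omp.h>
        #endif

        int main() {
        #ifdef _OPENMP
            // Built with OpenMP enabled (e.g. gcc -fopenmp): pragmas are honored.
            std::printf("OpenMP enabled, up to %d threads\n", omp_get_max_threads());
        #else
            // Built without it: the pragmas are ignored and the code runs serially.
            std::printf("OpenMP not enabled, running serially\n");
        #endif

            // Either way, this loop compiles and produces the same results.
            double data[1000];
            #pragma omp parallel for
            for (int i = 0; i < 1000; ++i)
                data[i] = i * 0.5;
            std::printf("data[999] = %f\n", data[999]);
            return 0;
        }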

        • by init100 ( 915886 )

          You can build your OpenMP code and it will run on clusters automatically.

          Won't that require some runtime support, like mpirun in MPI (that takes care of rsh/ssh-ing to each node and starting the processes)?

          • Re: (Score:3, Insightful)

            by mi ( 197448 )

            Won't that require some runtime support, like mpirun in MPI (that takes care of rsh/ssh-ing to each node and starting the processes)?

            Well, yes, of course. You also need the actual hardware too :-)

            This is beyond the scope of the discussion, really — all clusters require a fair amount of work to setup and maintain. But we are talking about coding for them here...

            • by init100 ( 915886 )

              You also need the actual hardware too

              At work, I have (I work at a supercomputing center).

        • the beauty of ace is that it didn't need any of that stuff. it just worked (tm).
    • This is good stuff. Did you use any special flags to compile it? How about posting a nice walkthrough somewhere? Unfortunately Xcode ships with 4.0.1 and Fink with 4.1.something.

      Regards,
      Athanasios
      • by drerwk ( 695572 )
        Advance warning: the work is hardly complete. I've been keeping some notes so that I could post a walk through. And I have not done the step of pointing XCode at the 4.2 that I built. See: http://alphakilo.com/openmp-on-os-x-using-gcc-42/ [alphakilo.com]
        Mostly I seem to have been lucky that 4.2 compiled as is on my Powerbook, because it did not do so on my Dual G5 which is of course where I would like to use OpenMP. I'll have to figure out what is on my PB that is not on my G5. And I have an 8 core linux box at work that
        • Thanks very much, I'll try to do that in my MacBook during the week, and will let you know.

          best regards

          Athanasios

  • Yup (Score:4, Interesting)

    by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Saturday March 10, 2007 @12:03AM (#18297260) Homepage

    For the majority of apps, OpenMP [wikipedia.org] is enough. That is what this looks like - a proprietary OpenMP. It might make it easier than creating and managing your own threads but calling it "auto" parallelizing when you need to mark what to execute in parallel is a bit of a stretch.

    For apps that need more, it is probably a big enough requirement that someone knowledgeable is already on the coding team. Which isn't to say that a compiler/lang/lib lowering the "experience required" bar wouldn't be welcomed, just that I wish these people would work on solving some new problems instead of re-tackling old ones.

    The main purpose of these extensions seems to be finding a way to restrict the noob developer enough that they won't be able to abuse threading like some apps love to do. That is a very good thing in my book! (Think Freenet [freenetproject.org], where 200-600 threads is normal.)

  • by Anonymous Coward
    Yep, it's in there.

    And it works, too.
  • So, anything within the loop (using your example) cannot depend on i-1 being known? So, for the loop:

    for (i = 0; i < 100; i++) doSomething (i);

    doSomething() cannot know or infer i-1. Is that right? So doSomething() really has to regard i as (almost) random. So the loop becomes:

    for (i = 0; i < 100; i++) doSomething (uniqueRand (i / RANDMAX * 100));

    No wonder it's so complicated and hard to debug ;-)
  • Been done... (Score:4, Interesting)

    by TheRealMindChild ( 743925 ) on Saturday March 10, 2007 @12:17AM (#18297342) Homepage Journal
    I have my 'Mips Pro Auto Parallellizing Option 7.2.1' cd sitting right next to my Irix 6.5 machine... and I know it's YEARS old
    • Re:Been done... (Score:5, Interesting)

      by adrianmonk ( 890071 ) on Saturday March 10, 2007 @03:12AM (#18298016)

      I have my 'Mips Pro Auto Parallellizing Option 7.2.1' cd sitting right next to my Irix 6.5 machine... and I know it's YEARS old

      Oh, are we having a contest for who can name the earliest auto-parallelizing C compiler? If so, I nominate the vc compiler on the Convex [wikipedia.org] computers. The Convex C-1 was released in 1985 and I believe had a vectorizing compiler from the start, which would make sense since it had a single, big-ass vector processor (one instruction, crap loads of operands -- can't remember how many, but it was something like 64 separate values being added to another 64 separate values in one single instruction).

      I personally remember watching somebody compile something with it. It was really neat to watch -- required no special pragmas or anything, just plain old regular C code, and it would produce an annotated copy of your file telling you which lines were fully vectorized, partly vectorized, etc. You could, of course, tweak the code to make it easier for the compiler to vectorize it, but even when you did, it was still plain old C code.

      • I coded on a Convex and I used this option. It worked for basic loops but I liked coding in Convex assembly. It was a lot like PDP assembly.
    • by Temkin ( 112574 )

      I have my 'Mips Pro Auto Parallellizing Option 7.2.1' cd sitting right next to my Irix 6.5 machine... and I know it's YEARS old

      I was thinking the same thing. I remember playing with the "-xautopar" option in Sun's compiler way back in 1996. I just checked and it's still there. The problem was always making sure loops didn't have dependencies on the loop counter, and only had one exit.
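      Something like this is the classic spoiler (illustrative only): the running sum is a loop-carried dependency and the early return gives the loop a second exit, so a simple auto-parallelizer has to leave it serial.

      #include <cstddef>

      // Returns the index at which the running sum first exceeds 'limit', or -1.
      int first_over_limit(const double* data, std::size_t n, double limit) {
          double running = 0.0;
          for (std::size_t i = 0; i < n; ++i) {
              running += data[i];      // depends on the previous iteration
              if (running > limit)
                  return (int)i;       // second exit out of the loop
          }
          return -1;
      }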

      Sun gives away the whole Sun Studio compiler suite nowadays, complete with OpenMP and all the profiling tools. They have a

  • From what I have seen, this system parallelizes only the part of the code inside a sieve instead of the whole program. How is this better than the others? Can someone please enlighten me on that?
  • I'm no parallelization expert, but it seems to me that a compiler that reliably gives you a scaling factor above 80% would be a huge deal. Is it really possible to achieve those kinds of results across the board? Or is this a bunch of bull?
    • by init100 ( 915886 )

      Nothing new here, move along.

      Jokes aside, this is bull. It requires the coder to mark sections that he wants to run in parallel, making the "automatic" part a bit of a stretch. And then, there already is a system that does this, and has done it for 10 years. It's called OpenMP [wikipedia.org], and features wide industry support in compilers such as the upcoming gcc 4.2, MS Visual Studio .NET 2005, the Intel compiler suite and compilers from traditional SMP vendors such as IBM and SGI.

  • by Anonymous Coward
    Let's see if I can teach any old dogs some new trix.

    Here is a quote from the SmartVariables white-paper:

    "The GPL open-source SmartVariables technology works well as a replacement for both MPI and PVM based systems, simplifying such applications. Systems built with SmartVariables don't need to worry about explicit message passing. New tasks can be invoked by using Web-Service modules. Programs always work directly with named-data, in parallel. Tasks are easily sub-divided and farmed out to additional web-ser
    • It may or may not be easier to program, but will it perform well enough? Does it support high-speed low-latency interconnects like Myrinet or Infiniband? Will it perform well enough to make up for the high price of such interconnects? Gigabit Ethernet performance is not enough on such systems, as latency is a major factor, and the latency of Ethernet is typically high compared to HPC interconnects.

    • New? Ten inches from my head I have a two-volume collection edited by Shapiro called 'Concurrent Prolog', published in 1986, which uses the concept of named streams to communicate between concurrent processes, which to a large extent are treated as variables in the language. I could find an earlier reference, but it would be more work than turning 60 degrees.

      Try harder.
  • by mrnick ( 108356 ) on Saturday March 10, 2007 @12:50AM (#18297496) Homepage
    I read the article, the information at the company's web site and even white papers written on the compiler. And although I did see one reference to "Multiple computers across a network (e.g. a "grid")" there was no other mention of it.

    When I think of parallelizing software, after getting over my humorous mental image of a virus that paralyzes users, what comes to mind is clustering. When I think of clustering, the train of thought leads me to Beowulf and MPI or its predecessor PVM. Yet I can find no information that supports the concept of clustering in any manner.

    Again, I did see a reference to "Multiple computers across a network (e.g. a "grid")", but according to Wikipedia, grid computing is defined as follows: "A grid uses the resources of many separate computers connected by a network (usually the Internet) to solve large-scale computation problems. Most use idle time on many thousands of computers throughout the world."

    Well, that sounds like the distributed SETI project and the like, which would seem even more ambitious than a compiler that would help write MPI code for Beowulf clusters.

    From all the examples, this looks like a good compiler for writing code that will run more efficiently on multi-core and multi-processor systems, but it would not help you in writing parallel code for clustering.

    Though this brings up a concept that many people forget, even people I would consider rather knowledgeable on the subject of clustering: if you have an 8-computer cluster with a dual-core Intel CPU in each node and you write parallel code for it using MPI, you are benefiting from 8 cores in parallel. Many people who write parallel code forget about multi-threading. To benefit from all 16 cores in the cluster I just described, the code would have to be written both multi-threaded and parallel.

    One of the main professors involved in a clustering project at my university told me that in their test environment they were using 8 Dell systems with dual-core Intel CPUs, so in total they had the power of 16 cores. Since he has his Ph.D. and all, I didn't feel the need to correct him and explain that unless his code was both parallel and multi-threaded he was only getting the benefit of 8 cores. I knew he was not multi-threading because they were not even writing the code in MPI; rather, they were using Python and batching processes to the cluster. From my knowledge, Python cannot write multi-threaded applications. Even if it can, I know they were not (from looking at their code).

    Sometimes it's the simplest things that confuse the brightest of us....

    Nick Powers
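    For the record, the hybrid style described above usually looks something like this toy MPI + OpenMP sketch (work() is a made-up stand-in): MPI splits the problem across the nodes, and OpenMP splits each node's share across its cores.

    #include <mpi.h>
    #include <cstdio>

    // Toy per-element computation, standing in for the real work.
    static double work(long i) { return (double)i * 0.5; }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank, nodes;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nodes);

        // MPI splits the index range across the nodes...
        const long total = 16000000;
        long begin = total * rank / nodes;
        long end   = total * (rank + 1) / nodes;

        // ...and OpenMP splits each node's share across its cores.
        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (long i = begin; i < end; ++i)
            local += work(i);

        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            std::printf("result = %f\n", global);

        MPI_Finalize();
        return 0;
    }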

    • Assuming their cluster management system knows that each node is dual core, can you explain why they couldn't run two processes on each node?
    • Sounds like multi-threading AND NOT Parallelizing

      Both multi-threading and message passing systems are parallel systems, they are just different subsets of the parallel computing paradigm. You cannot really claim with any authority that multithreading isn't parallel computing, and that only message passing is.

      Multithreading is used on a shared memory multiprocessor, and message passing is used on distributed memory multiprocessors. They are just two different ways of implementing parallel code, and none of them is more parallel than the other.

      Well, that sounds like the distributed SETI project and the like, which would seem even more ambitious than a compiler that would help write MPI code for Beowulf clusters.

      Actua

    • Mod parent down

      Anything where more than a single thread or process is executing simultaneously is parallel. Anything running on multiple computers at the same time can also be called parallel, but is more precisely (and commonly) referred to as clustered or distributed computing. These are the commonly agreed upon meanings.

      Since he has his Ph. D. and all I didn't feel the need to correct him and explain that unless his code was both parallel and multi-threaded he was only getting the benefit of 8 cores.

      Or yo

  • The trick to taking advantage of future processors like the ones architecture futurists such as David Patterson envision when they talk about "manycore" chips is to make parallel programming easy. Making the programmer puzzle out the parallelism for himself isn't the way to do that. We already know pre-emptive threading is too difficult for most; putting pervasively parallel programming (PPP) in human hands would be even worse. A proper approach to PPP involves inventing a new language, not adding warts to
  • This is a feature of WCF - Windows Communication Foundation in .NET 3.0 (part of Win V). WCF is designed for next gen CPUs with large numbers of cores. It spawns worker threads for you as needed and synchronises these calls for you automatically. You have the option of manually creating and synchronising threads, but out of the box it does it all for you behind the scenes. Just imagine coding for a machine with 1024 cores! It's obvious that writing software as we've done in the past where you manually spaw
    • by init100 ( 915886 )

      This is a feature of WCF - Windows Communication Foundation in .NET 3.0 (part of Win V). WCF is designed for next gen CPUs with large numbers of cores. It spawns worker threads for you as needed and synchronises these calls for you automatically. You have the option of manually creating and synchronising threads, but out of the box it does it all for you behind the scenes.

      So WCF takes care of parallelizing your compute-intensive tasks for you? Sorry, but I don't believe you. It might spawn threads for communication-related tasks, but those aren't really compute-intensive anyway.

  • by Anonymous Coward
    Deterministic concurrency is a great aid for debugging - no more race conditions, no more heisenbugs, no more visibly different program behaviour on 1 core, 2-core, hyper-threading, Quad Core, 8 Core, and whatever the Intel and AMD road maps bring out in the future. Looks good for the sanity of all those programmers who have ever had problems manifest only on one machine after testing!

    This Sieve programming seems also to make it easier to target the PS3, which has gotten a bad rap as being notoriously diffi
  • This looks similar to RapidMind [rapidmind.net], which is a software development platform that, among other things, "Enables applications to run in a data-parallel way." (I'm not affiliated with them.)
  • by Anonymous Coward
    So, I fail to see what's new about this. As has been mentioned before, OpenMP auto-parallelizes for SMP systems quite well, as long as you know what you're doing. Like anything done in parallel, if you don't figure out where your data and algorithm dependencies are you'll hose your program. If Sieve does some sort of dependency analysis, that would be interesting, but I doubt it would catch all problems. In fact, I imagine it's provably impossible to auto-parallelize in the general case -- it will likel
  • Nothing new (Score:2, Informative)

    by UtilityFog ( 654576 )
    Cilk [mit.edu] has been around for years, indeed it won the ICFP 1998 programming contest. [mit.edu]
  • "What Sieve is is a C++ compiler that will take a section of code and parallelize it for you with a minimum hassle."

    What does the compiler do, taunt you with harsh language while it compiles your code?
  • How about an auto-commenting compiler. All you have to do is put tags in it where you want comments and it automatically comments them.

    Seriously, Sieve sounds so trivial and meaningless, it's the ultimate silicon valley startup. How about something more valuable like an auto-vectorizing compiler that really works.
    • by woolio ( 927141 )
      How about an auto-commenting compiler. All you have to do is put tags in it where you want comments and it automatically comments them.

      Do you really know the difference between a compiler and a development environment?

      What "comments" is a compiler going to derive from your code? Something like "EAX will contain the result of the last add." is NOT going to be a useful comment in most situations. What do you expect the *compiler* to be able to tell you about *your* code?

      And I believe it is widely held tha
  • CPUs already implement "auto-parallelization" of sorts. It's called "out of order" execution - code specifically written with this in mind can perform quite a bit faster - all you have to do is create separate independent sections of code.
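    A small illustration of the idea (the compiler and CPU do the actual scheduling; the programmer's only job is to break the dependency chain, here by using two independent accumulators - note that reassociating floating-point adds can change rounding slightly):

    #include <cstddef>

    // One long dependency chain: every add must wait for the previous one.
    double sum_serial(const double* a, std::size_t n) {
        double s = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            s += a[i];
        return s;
    }

    // Two independent chains: the out-of-order core can keep both in flight.
    double sum_two_chains(const double* a, std::size_t n) {
        double s0 = 0.0, s1 = 0.0;
        std::size_t i = 0;
        for (; i + 1 < n; i += 2) {
            s0 += a[i];
            s1 += a[i + 1];
        }
        if (i < n)
            s0 += a[i];
        return s0 + s1;
    }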
