Wintel, Universities Team On Parallel Programming

kamlapati writes in with a followup from the news last month that Microsoft and Intel are funding a laboratory for research into parallel computing at UC Berkeley. The new development is the imminent delivery of the FPGA-based Berkeley Emulation Engine version 3 (BEE3) that will allow researchers to emulate systems with up to 1,000 cores in order to explore approaches to parallel programming. A Microsoft researcher called BEE3 "a Swiss Army knife of computer research tools."
  • This is getting to be ridiculous. There's no way that anyone could juggle 1000 cores in their head and make a synchronous-threaded program. Put the money into quantum computing research and we'll have proper parallel computing.
    • Re:1000 cores? (Score:5, Informative)

      by rthille ( 8526 ) <web-slashdotNO@SPAMrangat.org> on Friday March 14, 2008 @12:23PM (#22752710) Homepage Journal
      The point of the Berkeley program is to come up with toolsets so you don't have to "juggle 1000 cores in your head". Instead, you describe, using the toolset, the problem in a way which is decomposable, and the tools spread the work over the 1000+ cores. No more worrying if you incremented that semaphore correctly because you're operating at a much higher level.
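
      As a rough illustration of that idea (a generic Python sketch, not ParLab's actual toolset; the function analyze and the record list are invented for the example), the programmer states what is independent and a library decides how it lands on however many cores exist:

      # Hypothetical sketch: describe the work as a pure function over independent
      # inputs and let a worker pool spread it across the available cores.
      from multiprocessing import Pool, cpu_count

      def analyze(record):
          # Placeholder computation; any side-effect-free function works here.
          return sum(ord(c) for c in record) % 97

      if __name__ == "__main__":
          records = ["record-%d" % i for i in range(10000)]
          with Pool(processes=cpu_count()) as pool:
              # The pool, not the programmer, decides how the items are split
              # across the cores -- 4 today, or the 1000 a machine like BEE3 emulates.
              results = pool.map(analyze, records)
          print(len(results), "results, and no semaphore touched by hand")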
      • Instead, you describe, using the toolset, the problem in a way which is decomposable, and the tools spread the work over the 1000+ cores.

        One day soon, the computer industry will realize that, 150 years after Charles Babbage invented his idea of a general purpose sequential computer, it is time to move on and change to a new computing model. The industry will be dragged kicking and screaming into the 21st century. Threads were not originally intended to be the basis of a parallel software model but only a me
        • I don't think anyone except you said anything about threads. You may have just described exactly what the GP was describing -- point is, why should you have to break them down into individual programs yourself?

          Personally, I like Erlang, but the point is the same -- come up with a toolset and/or programming paradigm which makes scaling to thousands of cores easy and natural.

          The only problem I have yet to see addressed is how to properly test a threaded app, as it's non-deterministic.
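
          To make that last point concrete, here is a minimal Python sketch (names and numbers invented) of why a threaded test can pass on one run and fail on the next: an unsynchronized read-modify-write whose outcome depends on scheduling.

          import threading

          counter = 0

          def current_value():
              return counter

          def bump(n):
              global counter
              for _ in range(n):
                  value = current_value()   # read
                  counter = value + 1       # write; a switch in between loses updates

          threads = [threading.Thread(target=bump, args=(200000,)) for _ in range(4)]
          for t in threads:
              t.start()
          for t in threads:
              t.join()

          # 800000 only if no update was ever lost; the actual value can vary from
          # run to run, interpreter to interpreter, machine to machine.
          print(counter)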
          • Re: (Score:2, Insightful)

            by MOBE2001 ( 263700 )
            I don't think anyone except you said anything about threads. You may have just described exactly what the GP was describing -- point is, why should you have to break them down into individual programs yourself?

            This is precisely what is wrong with the current approach. The idea that the problem should be addressed from the top down has been tried for decades and has failed miserably. The idea that we should continue to write implicitly sequential programs and have some tool extract the parallelism by dividing the program into concurrent threads is completely ass-backwards, IMO.
            • The idea that we should continue to write implicitly sequential programs and have some tool extract the parallelism by dividing the program into concurrent threads is completely ass-backwards, IMO.

              Maybe so, but it's certainly not what I was suggesting.

              Rather, I'm suggesting that we should have tools which make it easy to write a parallel model, even if individual tasks are sequential -- after all, they are ultimately executed in sequence on each core.

              One reason is that it uses a coarse-grain approach to

      • No more worrying if you incremented that semaphore correctly because you're operating at a much higher level.

        You only need to "worry" about that if you insist on programming your multi-core machine in low-level C. Better solutions have existed for decades, people just don't use them. How is the BEE3 going to change that?
    • Re:1000 cores? (Score:4, Insightful)

      by SeekerDarksteel ( 896422 ) on Friday March 14, 2008 @12:43PM (#22752898)
      1) Quantum computing != parallel computing.

      2) A significant number of applications can and do run on 1000+ cores. Sure, most are scientific apps rather than consumer apps, but there is a market for it nevertheless. Go tell a high performance computing guy that there's no need for 1k cores on a single chip and watch him collapse laughing at you.
      • by Anonymous Coward on Friday March 14, 2008 @01:17PM (#22753258)
        640 Cores should be enough for anybody.
        • 640 Cores should be enough for anybody.
          No, 640K cores! We're still a ways off.

          I keep wondering when we're going to put processing closer to the memory again. As in, put a couple of SPUs right on the memory chips. At least an FPGA with a couple of thousand gates; that would be very general purpose.

          • by Raenex ( 947668 )

            I keep wondering when we're going to put processing closer to the memory again.
            Isn't that essentially what the L1 and L2 caches do?
            • by jimicus ( 737525 )
              Yes, but they're very expensive and they only put the most recently needed instructions/data close to the core.

              Granted, in 90% of day to day uses that's all you need. But the other 10% would probably love to see RAM running synchronously with the CPU.
      • Go tell a high performance computing guy that there's no need for 1k cores on a single chip and watch him collapse laughing at you.
        It seems pretty ridiculous, unless you can also cram in enough pins for 1k memory channels. Even current 4-core chips are enough to make memory the bottleneck on some workloads, and AIUI scientific computing often tends to be that sort of workload.
      • 2) A significant number of applications can and do run on 1000+ cores.
        If you're on gentoo, they'll all be gcc ;)

        (just teasing)
    • Just think what SETI@Home could do with a 1000 core processor. Or for the more practical and useful to our real world, Folding@Home.
      • by gdgib ( 1256446 )
        Actually, if you check out the BEE2 website ( http://bee2.eecs.berkeley.edu/ [berkeley.edu], BEE2 being the precursor to BEE3) you'll notice a Casper (http://casper.berkeley.edu/ [berkeley.edu]) logo in the upper left. That is the SETI folks!

        Except that instead of running SETI@home, they used heavily FPGA-optimized designs. Since most radio astronomy only requires a few bits of precision (2-8), modern CPUs or GPUs are incredibly wasteful for them. So instead they use heavily optimized fixed-point math circuitry. By using FPGAs they
    • by mikael ( 484 )
      And people said the same with 128+ variable CPU stack frames combined with RISC instruction sets. Nobody is going to be able to do that kind of juggling in their head, so "register scoreboarding" was built into the compilers.

      You could try and have a process running on each core, but even on a university server, you will only have a few hundred processes running, so giving every user a single core is still going to underutilize 80% of those cores. And even then many of those processes are hourly cron jobs or
    • Because all of those supercomputers with 1000 CPUs are never used by anyone for anything because no one can figure them out. If you use the same techniques on a 1000 core processor, it will work just the same. I expect that a 1000 core processor will be very NUMA, so you'll probably just resort to MPI anyway.

      OK, so I know there's no "wrong" mod, but don't mod it insightful.
    • by Fry-kun ( 619632 )
      Actually, functional programming is the answer. Unlike with procedural code, you don't need to think about what you want the CPUs to do, but about the result you want them to achieve - in other words, "threads" are not necessary for your code to utilize multiple processors.
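
      A small Python contrast of the two styles (the shade function is made up for the example): the declarative form says what the result is, not which CPU computes which element, so a runtime handed a pure function is free to evaluate it in any order, or in parallel.

      from concurrent.futures import ProcessPoolExecutor

      def shade(pixel):              # pure: the result depends only on the argument
          return (pixel * 31) % 255

      if __name__ == "__main__":
          pixels = list(range(10000))

          # Imperative: an explicit, step-by-step sequential recipe.
          sequential = []
          for p in pixels:
              sequential.append(shade(p))

          # Declarative: "the result is shade applied to every pixel"; the pool
          # may use however many cores it has, in whatever order it likes.
          with ProcessPoolExecutor() as pool:
              parallel = list(pool.map(shade, pixels))

          assert sequential == parallel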
    • This is getting to be ridiculous. There's no way that anyone could juggle 1000 cores in their head and make a synchronous-threaded program.

      Why would you need to? Either your program is multithreaded or it isn't; and if it is, it either is or isn't properly synchronized. The number of cores is completely irrelevant; a broken multithreaded program will fail randomly on a single-core machine too, and a single-threaded program on a 1000-core machine won't run into any issues either.

  • How nice of Microsoft to help train the next generation of Google engineers.
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Friday March 14, 2008 @12:21PM (#22752692)
    It's a little disingenuous to claim that programmers are "stuck" with a serial programming model. The fact of the matter is that multi-threaded programming is a common paradigm which takes advantage of multiple cores just fine. Additionally, many algorithms cannot be parallelized.

    Even languages like Erlang which bring parallelization right to the front of the language are still stuck running serial operations serially. There is sometimes no way around doing something sequentially.

    Now, can we blow a few cycles on a few cores trying to predict which operations will get executed next? Yeah, sure, but that's not a programming problem, it's a hardware design problem.
    • by gdgib ( 1256446 ) on Friday March 14, 2008 @12:50PM (#22752966)
      ParLab is so not about branch predictors and out-of-order execution. As you say, that's a hardware design problem and a solved one at that. Boring.

      While I'll agree that not all programmers are stuck with the serial programming model, threads aren't exactly a great solution (http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.html [berkeley.edu]). They're heavyweight and inefficient compared to running most algorithms on e.g. bare hardware or even an FPGA. Plus they deal badly with embarrassing parallelism (http://en.wikipedia.org/wiki/Embarrassingly_parallel [wikipedia.org]). And finally, they are HARD to use: the programmer must explicitly manage the parallelism by creating, synchronizing and destroying threads.

      Setting aside those problems which exhibit no parallelism (for which there is really no solution but a faster CPU), there are many classes of problems which would benefit enormously from better programming models that are more efficiently tied to the operating system and hardware, rather than going through an OS-level threading package.
      • Re: (Score:2, Interesting)

        by asills ( 230118 )
        Threads are harder just like memory management in C++ is harder than Java and .NET.

        It's the people who really can't program that are having significant trouble with parallelization in modern applications. That's not to say that in the future I won't love to be able to express a solution and have it automatically parallelized, but for the time being creating applications that take advantage of multiple cores well (server apps, not client apps) is not that difficult if you know what you're doing.

        Though, like
        • by gdgib ( 1256446 )
          I find it amusing that the original post was by "BadAnalogyGuy" and you just used one.

          Why do Java and .NET have better memory management? So that it's easier for people to work with memory. Why does ParLab exist? So that it will be easier to work with parallelism.

          Part of the goal here is to make it so that, like with memory management, someone who knows what they're doing (i.e. a hardcore manage-their-own-memory assembly or C++ programmer) can write a large parallel library, and someone who doesn't (
          • by gdgib ( 1256446 )
            Bugger. I read the GP again, and he might've been making exactly the same point I just did. Oops.
            • by asills ( 230118 )
              I won't hold it against you ;)

              Yes, we're making the same point, though I'm also pointing out that the doom and gloom that is always presented wrt parallelism in current programming languages isn't so. It's only so for those that don't know what they're doing.
              • by Unoti ( 731964 )
                Isn't that generally true for anything you can possibly think of with programming languages, not just parallelism? Once a language has memory, loops, conditionals, and iterative execution -- couldn't we say that every single other language feature is just for "those who don't know what they're doing"? If the language makes things easier then it makes it easier for everyone, both the elite and the unwashed masses.
        • by jgrahn ( 181062 )

          Threads are harder just like memory management in C++ is harder than [in] Java and .NET.

          I find memory management is trivial in most real-life C++ code. And management of non-memory resources is easier than in Java.

      • by 0xABADC0DA ( 867955 ) on Friday March 14, 2008 @01:55PM (#22753642)

        Setting aside those problems which exhibit no parallelism (for whom there is no solution but a faster CPU really), there are many classes of problems which would benefit enormously from better programming models, which are more efficiently tied to the operating system and hardware rather than going through an OS level threading package.
        The programming models we have are just fine. The vast majority of program time is spent in a loop of some kind, but languages which could easily parallelize loops don't. There is no reason why 'foreach' or 'collect' cannot use other processors (whereas 'inorder' or 'inject' would always be sequential). So our programming models are not the problem. The real problem is trying to use them with a 40-year-old operating system design.

        Current operating systems could run code in parallel if, instead of scheduling a thread a timeslice on one processor, they scheduled it a timeslice across multiple processors. Take an array of 1000 strings and a regex to match them against: if the program is allocated 10 processors, it can do a simple interrupt and have them start working on 100 strings each. Because the processors are already allocated, you avoid the overhead of switching memory spaces and of scheduling, making this kind of fine-grained parallelism feasible.

        But the problem here is that most programs will use one or two processors most of the time and all the available processors at other times. And if your parallel operation had to synchronize at some point then you'd have all your other allocated processors doing nothing while waiting for one to finish with its current work. So there is a huge amount of wasted time by allocating a thread to more than one processor.

        A solution to the unused processor problem is to have a single memory space, and as a consequence only run typesafe code -- an operating system like JavaOS or Singularity or JXOS. This lets any processor be interrupted quickly to run any process's code in parallel, so CPUs can be dynamically assigned to different threads. Even small loops can be effectively run across many CPUs, and there is no waste from the heavyweight allocations and clunkiness that is caused ultimately by the separate memory spaces needed to protect C-style programs from each other. This is why it is the operating system, not the programming models, that is the main problem.
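
        For comparison, here is the 1000-strings example done today in user space with an ordinary worker pool (a Python sketch with an arbitrary regex and made-up data, not the OS-level scheduling proposed above):

        import re
        from concurrent.futures import ProcessPoolExecutor

        PATTERN = re.compile(r"error\s+\d+")     # arbitrary pattern for the sketch

        def match_chunk(chunk):
            return [bool(PATTERN.search(s)) for s in chunk]

        if __name__ == "__main__":
            strings = ["line %d: error %d" % (i, i) if i % 3 == 0 else "line %d: ok" % i
                       for i in range(1000)]
            workers = 10
            size = len(strings) // workers       # 100 strings per worker
            chunks = [strings[i:i + size] for i in range(0, len(strings), size)]

            with ProcessPoolExecutor(max_workers=workers) as pool:
                hits = [h for part in pool.map(match_chunk, chunks) for h in part]

            print(sum(hits), "matches out of", len(hits))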
        • by octogen ( 540500 )
          If an operating system knew which parts of a program could be executed in parallel, then it already COULD run ONE thread on MULTIPLE processors. But even if it knew, when you have multiple tasks or threads running in parallel, you want every one of them to stay on the same CPU as long as possible, because scheduling every thread on every processor pollutes the caches and slows down the entire system.
        • by brysgo ( 1257382 )

          There is no reason why 'foreach' or 'collect' cannot use other processors

          While it sounds like a good idea, 'foreach' loops often collect the value for each element into a single variable to get a sum, or some similar aggregation of the contents of the array. If the iterations were to run at the same time, the second iteration of the loop would not have the data from the first to append to.

          I do believe that parallel processing could be used to improve the speed of 3d rendering and particle simulations though, and that is reason enough to be optimistic about it.
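
          The standard answer for the sum case (a hedged Python sketch, since the 'foreach' above is pseudocode) is that addition is associative, so each worker can total its own slice and the partial totals are combined afterwards; a loop whose iteration really needs the value produced by the previous one does stay sequential, which is exactly the dependence described above.

          from concurrent.futures import ProcessPoolExecutor

          def partial_sum(chunk):
              total = 0
              for x in chunk:
                  total += x * x          # placeholder per-element work
              return total

          if __name__ == "__main__":
              data = list(range(1000000))
              n_chunks = 8
              size = len(data) // n_chunks
              chunks = [data[i:i + size] for i in range(0, len(data), size)]

              with ProcessPoolExecutor(max_workers=n_chunks) as pool:
                  # Each worker reduces its own slice; the partial results are
                  # combined at the end, so no iteration waits on the previous one.
                  print(sum(pool.map(partial_sum, chunks)))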

    • There is sometimes no way around doing something sequentially.
      Like I always say (sometimes): "A process blocked on I/O is blocked, no matter how many processors you throw at it."
      • Like I always say (sometimes): "A process blocked on I/O is blocked, no matter how many processors you throw at it."

        But if it is blocked because the target of the I/O operation, say, a local database server, is busy calculating the data to be returned, throwing more processors at it might indeed cause it to unblock sooner.

    • There is sometimes no way around doing something sequentially.

      Yup. And as Amdahl's Law [wikipedia.org] (paraphrased) puts it: the amount of speed increase you can achieve with parallelization is always constrained by the parts of the process that can't be parallelized.
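
      Plugging numbers into the law makes that ceiling concrete (a quick Python check; the 95% parallel fraction is just an example):

      def amdahl_speedup(p, n):
          # p = fraction of the work that parallelizes, n = number of cores.
          return 1.0 / ((1.0 - p) + p / n)

      for cores in (4, 16, 1000):
          print(cores, round(amdahl_speedup(0.95, cores), 2))
      # 4 -> 3.48, 16 -> 9.14, 1000 -> 19.63; the limit is 20x no matter how many
      # cores you add, because the 5% that never parallelizes sets the bound.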
    • by robizzle ( 975423 ) on Friday March 14, 2008 @01:13PM (#22753200)
      You are right, programmers aren't currently "stuck" with a serial programming model; however, looking into the future it is pretty clear that the hardware is developing faster than the programming models. Systems with dozens and dozens of cores aren't far off but we really don't have a good way to take advantage of all the cores.

      In the very near future, we could potentially have systems with hundreds of cores that sit idle all the time because none of the software takes advantage of much more than 5-10 cores. Of course, this would never actually happen, because once the hardware manufacturers notice this to be a problem, they will stop increasing the number of cores and try to make some other changes that would result in increased performance to the end user. There will always be a bottleneck -- either the software paradigms or the hardware and right now it looks like in the near future it will be the software.

      Yes, there are some algorithms that, no matter what you do, have to be executed sequentially. However, there is a huge truckload of algorithms that can be rewritten, with little added complexity, to take advantage of parallel computing. Furthermore, there is a slew of algorithms that could be rewritten to be parallelized with a slight loss in efficiency but a net gain in performance. This third type of algorithm is, I think, the most interesting for researchers: even though parallelizing the algorithm may introduce redundant calculations or added work, the increased number of workers outweighs this.

      In other words, what is more efficient: 1 core that performs 20,000 instructions in 1 second or 5 cores that each perform 7,000 instructions, in parallel, in 0.35 seconds. Perhaps surprisingly to you, the single core is more efficient (20,000 instructions instead of 7,000*5 = 35,000 instructions) -- BUT, if we have the extra 4 cores sitting around doing nothing anyways, we may as well introduce inefficiency and finish the algorithm about 2.9 times faster.
      • Until you can take

        (1..100).each { |e| e.function() }

        And turn it into

        (1..50).each { |e| e.function() }
        (51..100).each { |e| e.function() }

        and have these run on two or more processors faster than the original runs on one, you'll never get much use out of the extra ones. Our current operating systems can't take a small loop of, say, 100 iterations, divide it up across processors, and have it run faster than just doing it on one. That's the problem. Just a rough guess, but I bet .function would have to take wel
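
        A quick way to test that bet (a Python sketch; the sleep stands in for whatever e.function() actually does): time the plain loop against the two-way split and vary the per-call cost until the parallel version wins.

        import time
        from concurrent.futures import ProcessPoolExecutor

        WORK = 0.0001          # per-call cost; try 0.0, then 0.01, to see the crossover

        def function(e):
            time.sleep(WORK)   # stand-in for the real per-element work
            return e

        def run_half(half):
            return [function(e) for e in half]

        def timed(label, fn):
            start = time.perf_counter()
            fn()
            print(label, round(time.perf_counter() - start, 4), "s")

        def parallel():
            with ProcessPoolExecutor(max_workers=2) as pool:
                list(pool.map(run_half, [range(1, 51), range(51, 101)]))

        if __name__ == "__main__":
            timed("serial  ", lambda: [function(e) for e in range(1, 101)])
            # With tiny WORK the pool's startup and hand-off overhead dominates;
            # only when each call is expensive does the split pay off.
            timed("parallel", parallel)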
    • Re: (Score:3, Insightful)

      My favorite massively parallel programming system is LINDA [wikipedia.org], and the Java distributed equivalent, Javaspaces [sun.com]. The idea is basically a job jar. For instance, a 3D ray tracer would put each output pixel in the job jar, and worker threads grab a pixel and trace it. (Naturally, the pixel coords can be generated algorithmically rather than actually stored). Even though the time to trace a pixel varies widely, all workers are kept at capacity. Watching it raytrace a scene in a fraction of a second is like wat
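
      A bare-bones version of that job-jar pattern in Python (a plain thread-safe queue standing in for a tuple space; the "tracer" is a placeholder): workers pull pixel coordinates until the jar is empty, so uneven per-pixel cost never leaves a worker idle.

      import queue
      import threading

      WIDTH, HEIGHT = 64, 48
      jobs = queue.Queue()
      framebuffer = {}

      def trace(x, y):
          return (x * y) % 256          # placeholder for the real ray tracer

      def worker():
          while True:
              try:
                  x, y = jobs.get_nowait()
              except queue.Empty:
                  return                # jar is empty, this worker is done
              framebuffer[(x, y)] = trace(x, y)

      for y in range(HEIGHT):
          for x in range(WIDTH):
              jobs.put((x, y))          # fill the job jar before starting workers

      workers = [threading.Thread(target=worker) for _ in range(8)]
      for w in workers:
          w.start()
      for w in workers:
          w.join()

      print(len(framebuffer), "pixels traced")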
      • I know I built a scheme exactly like this in Perl, once. And Erlang is built around message-passing, which could be used to implement something like this.

        And I don't think it needs to be a bottleneck unless it needs to be sequential. All you do is, have multiple LINDAs (or queues, or jars, or whatever), and have multiple sources to each.

        For example: Suppose you wanted that raytracer to do some simple anti-aliasing which took into account the surrounding pixels. The antialiasing "jar" could be fed by all of
    • We should be "stuck with a serial programming model". If your program runs too slow on a single 1 GHz CPU, lack of multicore techniques is the last thing you should be concerned about. The first thing you ought to do is optimize your damn code! There are very few applications that are CPU-bound, and in those that are, only one or two inner loops need parallelizing. The overwhelming majority of slow code is slow because you wrote it badly. So fix the software before blaming the hardware!
      • There are two things to think about here:

        First, how much effort will it take to optimize it, versus throwing another core at it? Or computer? Not always an option, but take, say, Ruby on Rails -- it wouldn't scale to 1000 cores, but it might scale to 1000 separate machines. And yes, it could probably run four or five times faster -- and thus require four or five times less hardware -- but at what cost? Ruby itself is slow, and there are certain aspects of it which will always be slow.

        But, you see, the advan
      • by jgrahn ( 181062 )

        We should be "stuck with a serial programming model". If your program runs too slow on a single 1 GHz CPU, lack of multicore techniques is the last thing you should be concerned about. The first thing you ought to do is optimize your damn code!

        I believe SMP (when did we decide to start calling it "multicore"?) is mostly useful (in everyday computing!) for multiple-user machines.

        Right now I'm running a number of important jobs on a four-CPU Linux server. At the same time, one guy has a process hogging 1

    • by bit01 ( 644603 )

      Additionally, many algorithms cannot be parallelized.

      Conventional wisdom, but it's just not true. Maybe you meant to say automatically parallelizable; however, that's much the same as saying that it is necessary to

      I work in parallel programming and I have never seen a real world problem/algorithm that was not parallelizable. Maybe there's a few obscure ones out there but I've never seen them. Anybody want to suggest even one?

      In any case almost all PC's these days are already highly parallel; display c

      • Maybe there's a few obscure ones out there but I've never seen them. Anybody want to suggest even one?

        Reading a sector of data from a hard disk.

        But that's pretty obscure, I'll grant you that.
        • by bit01 ( 644603 )

          Reading a sector of data from a hard disk.

          But that's embarrassingly parallelizable; it's why hard disks have multiple heads and multiple platters, to read/write in parallel and increase throughput. RAIDs make it even more parallel. Within limits, and for normal volumes of hard disk data (which are much larger than a sector; small data is held in memory and caches), this will increase throughput in proportion to the number of heads.

          In any case that's a hardware limitation, nothing to do with an algorithm as such.

          • In any case that's a hardware limitation, nothing to do with an algorithm as such.

            Then you admit that there are operations upon which software must wait. There is no way for a program that relies on the read data to act upon it until it arrives in the CPU registers. How are you going to parallelize that?

            I'm not aware of any atomic operation in real world programming that takes any more than a tiny fraction of second and thus cause real world impact

            These things add up. You would be the first to admit, I'm sure
    • The fact of the matter is that multi-threaded programming is a common paradigm which takes advantage of multiple cores just fine.

      Multi-threaded programming is cumbersome. There have been better ways of doing parallel programming for a long time.

      Additionally, many algorithms cannot be parallelized.

      Whether algorithms can be parallelized doesn't matter. What matters is whether there are parallel algorithms that solve problems faster than serial algorithms, and in most cases there are.

      Even languages like Erlan
  • Why not 1024, or 1000 cores will be enough ...
  • by elwinc ( 663074 ) on Friday March 14, 2008 @12:26PM (#22752734)
    Actually, this is old news. There's a month old discussion thread [realworldtech.com] on RWT Discussion forum. Berkeley proposes the "thirteen dwarfs" - 13 kinds of test algorithms they consider valuable to parallelize. Linus doesn't think the 13 dwarfs correspond well to everyday computing loads. My 2 cents: Intel & others are spending hundreds of millions of bucks per year trying to speed up single-thread style computing, so it's not a bad idea to put a few more million/year into thousand thread computing.
  • I remember working on the NOW system http://now.cs.berkeley.edu/ [berkeley.edu] . It's a distributed system, and they have parallel programming languages such as Split-C or Titanium (parallel Java), and it supports MPI. I guess a network of those BEE3s would be called a bee hive?
  • Real Information (Score:5, Informative)

    by gdgib ( 1256446 ) on Friday March 14, 2008 @12:32PM (#22752790)
    The real websites are:
    ParLab (what's being funded): http://parlab.eecs.berkeley.edu/ [berkeley.edu]
    RAMP (the people who are building the architectural simulators for ParLab): http://ramp.eecs.berkeley.edu/ [berkeley.edu]
    BEE2 (the precursor to the not-quite-so-microsoft BEE3): http://bee2.eecs.berkeley.edu/ [berkeley.edu]

    The funding being announced here is for ParLab, whose mission is to "solve the parallel programming problem". Basically they want to design new architectures, operating systems and languages. And before you get all "we tried that and it didn't work", there are some genuinely new ideas here and the wherewithal to make them work. ParLab grew out of the Berkeley View report (http://view.eecs.berkeley.edu/ [berkeley.edu]), which was the work of a very large group of people to standardize on the same language and figure out what the problems in parallel computing were. This included everyone from architecture to applications (e.g. the music department).

    RAMP is a multi-university group working to build architectural simulators in FPGAs. In fact you can go download one such system right now called RAMP Blue (http://ramp.eecs.berkeley.edu/index.php?downloads [berkeley.edu]). With ParLab starting up there will be another project RAMP Gold which will build a similar simulator but specifically designed for the architectures ParLab will be experimenting with.

    As a side note, keep in mind when you read articles like this that statements like the "Microsoft BEE3" are amusing when you take into account that "B.E.E." stands for Berkeley Emulation Engine. Microsoft did a lot of the work and did a good job of it, but still...

  • 1000 core machines? Imagine a beowulf cluster of those!
  • Intel and other chip vendors are pushing the manycore vision as The True Path Forward. This is disingenuous, since it's merely the easy path forward for said chip vendors. Everyone agrees "morecore" will be common in the future, but 1k cores? Definitely not clear. Is it even meaningful to call it shared-memory programming if you have 1k cores? It's not as if 1k cores can ever sanely share particular data, at least not if it's ever written. And what's the value of 1k cores all sharing the same RO data?
  • Cheap Bastards. (Score:4, Interesting)

    by cyc ( 127520 ) on Friday March 14, 2008 @01:19PM (#22753272)

    Rick Merritt, who wrote the lead article, also posted an opinion piece in EE Times [eetimes.com] lambasting Wintel for their lackluster funding efforts in parallel programming. I thoroughly agree with this guy. To quote:

    Wintel should not just tease multiple researchers with a $10 million grant awarded to one institution. They need to significantly up the ante and fund multiple efforts.

    Ten million is a drop in the bucket of the R&D budgets at Intel and Microsoft. You have to wonder about who is piloting the ship in Redmond these days when the company can afford a $44 billion bid for Yahoo to try to bolster its position in Web search but only spends $10 million to attack a needed breakthrough to save its core Windows business.
  • Use your GPU (Score:5, Interesting)

    by TheSync ( 5291 ) * on Friday March 14, 2008 @01:20PM (#22753286) Journal
    If you have a GeForce 8800 GT, you already have a 112 processor parallel computer that you can program using CUDA [nvidia.com].
    • by gdgib ( 1256446 )
      Ironically the folks in ParLab (*ahem*) just arranged a field trip to meet with some of the people who do that kind of work. We read slashdot too you know...

      But try putting that GeForce in your cell phone. And don't come crying to us when your ass catches on fire from the hot cell phone in your back pocket. Or for that matter when your pants fall down from carrying the battery around.

      ParLab (http://parlab.eecs.berkeley.edu/ [berkeley.edu]) is interested in MOBILE computing as well as your desktop.
      • by TheSync ( 5291 ) *
        Ironically the folks in ParLab (*ahem*) just arranged a field trip to meet with some of the people who do that kind of work.

        Super...I'm hoping that GPUs can provide a cheap way for newcomers to learn parallel programming, and it appears that the GPU makers are really waking up to general purpose uses of GPUs.

        (I learned parallel programming on a Connection Machine :)
  • Microsoft has actually released a library which I would imagine is related to this work. PLINQ lets you very easily and declaratively multithread tasks. http://msdn2.microsoft.com/en-us/magazine/cc163329.aspx [microsoft.com]
  • by fpgaprogrammer ( 1086859 ) on Friday March 14, 2008 @01:57PM (#22753658) Homepage
    The BEE boards are being trumpeted as a multicore experimentation environment, but the FPGA itself is a powerful computational engine in its own right. FPGAs have to overcome the inertia of their history as verification tools for ASIC designs if they are to grow into algorithm executors in their own right.

    There's a growing community of FPGA programmers making accelerators for supercomputing applications. DRC (www.drccomputing.com) and XtremeData (www.xtremedatainc.com) both make co-processors for Opteron sockets with HyperTransport connections, and Cray uses these FPGA accelerators in their latest machines. There is even an active open standards body (www.openfpga.org).

    FPGAs and multicore BOTH suffer from the lack of a good programming model. Any good programming model for multicore chips will also be a good programming model for FPGA devices. The underlying similarity here is the need to place dataflow graphs into a lattice of cells (be they fine-grained cells like FPGA CLBs or coarse-grained cells like a multicore processor). I can make a convincing argument that spreadsheets will be both the programming model and killer-app for future parallel computers: think scales with cells.

    I've kept a blog on this stuff if you still care: fpgacomputing.blogspot.com
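
    A toy of the spreadsheet-as-dataflow idea (the cells and formulas below are invented, and this is only an illustrative Python sketch): every cell whose inputs are already computed forms one wave that could be evaluated in parallel, which is exactly the kind of graph you would place onto FPGA cells or cores.

    from concurrent.futures import ThreadPoolExecutor

    # cell -> (dependencies, formula); a made-up five-cell sheet.
    SHEET = {
        "a1": ((), lambda: 2),
        "a2": ((), lambda: 5),
        "b1": (("a1", "a2"), lambda a1, a2: a1 + a2),
        "b2": (("a1",), lambda a1: a1 * 10),
        "c1": (("b1", "b2"), lambda b1, b2: b1 * b2),
    }

    def evaluate(sheet):
        values, remaining = {}, dict(sheet)
        with ThreadPoolExecutor() as pool:
            while remaining:
                # All cells whose inputs are ready form one parallel wave.
                ready = [c for c, (deps, _) in remaining.items()
                         if all(d in values for d in deps)]
                futures = {c: pool.submit(remaining[c][1],
                                          *(values[d] for d in remaining[c][0]))
                           for c in ready}
                for c, fut in futures.items():
                    values[c] = fut.result()
                    del remaining[c]
        return values

    print(evaluate(SHEET))   # {'a1': 2, 'a2': 5, 'b1': 7, 'b2': 20, 'c1': 140}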
  • by technienerd ( 1121385 ) on Friday March 14, 2008 @03:04PM (#22754340)
    I'm about to start a graduate degree in this area so I'm a little biased. However, I think a lot of problems can be solved in parallel. For example, maybe LZW compression as it's implemented in the "zip" format might not be easily parallelizable, but that doesn't prevent us from developing a compression algorithm with parallelism in mind. I did some undergraduate research in parallel search algorithms and I know for a fact that there are many, many ways you can parallelize search. Frankly, saying that you can't parallelize algorithms is a bit closed-minded. Many problems don't inherently require serial solutions; it's just that current algorithms handle them that way. Rather than trying to implement existing algorithms on a massively parallel processor, you want to re-tackle the problem under a new model, a model of an arbitrary number of processors. You build around the idea of data-parallelism rather than task-parallelism. Many, many things are possible under this model and I think it's naive to think otherwise. You don't need to think, how do I juggle 1000 threads around; you think, how do I take a problem, break it up into arbitrarily many chunks and distribute those chunks to an arbitrary number of processors, and how do I do all that scheduling efficiently? This model doesn't work for interactive tasks, mind you (where you're waiting for user input), but I'm very confident a model can be developed that can.
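
    As a small sketch of that data-parallel habit (generic Python; the data and the search target are placeholders): express the work over arbitrarily many chunks first, and let however many processors exist consume them.

    from concurrent.futures import ProcessPoolExecutor

    def search_chunk(args):
        start, chunk, target = args
        return [start + i for i, item in enumerate(chunk) if item == target]

    def parallel_search(data, target, n_chunks=64, workers=None):
        # The number of chunks is a property of the problem, not of the machine;
        # the pool maps them onto whatever processors are actually there.
        size = max(1, len(data) // n_chunks)
        tasks = [(i, data[i:i + size], target) for i in range(0, len(data), size)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return [idx for hits in pool.map(search_chunk, tasks) for idx in hits]

    if __name__ == "__main__":
        data = [i % 1000 for i in range(1000000)]
        print(len(parallel_search(data, target=7)))   # 1000 occurrences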
    • by zymano ( 581466 )
      itanium handles it. Good optimistic thread by you.
      • Itanium? The processor? A processor can only do so much on its own. Ultimately I think we need to reinvent sorting, searching, encryption, compression, FFT, etc and teach undergrads how to design these algorithms to run on n processors where n is some arbitrary number. We need libraries that do this stuff for us as well. We also need programming languages and platforms like RapidMind to keep us in the "data-parallel" mindset. I've attended two talks by Professor David Patterson from UC Berkeley and in each
  • Oh sure, Code Pink protests the military's presence in Berkeley, but leaves Microsoft free to enter the flagship university. What a bunch of commie pussies!
  • I really think that Intel needs to skip doing quad-core and whatever processors, and jump directly to doing a kilocore processor. Such a processor would have 1024 cores. It would be the pride of any self-respecting geek to own such a computer. Then they could improve on it by gradually going to two kilocores, four kilocores, etc. In a number of years, when the average computer processor has 250 gigacores, we'll laugh and poke fun at the good ol' days when 640 kilocores were enough for anyone.
