Supercomputing Hardware

Supercomputer Built With 8 GPUs

FnH writes "Researchers at the University of Antwerp in Belgium have created a new supercomputer with standard gaming hardware. The system uses four NVIDIA GeForce 9800 GX2 graphics cards, costs less than €4,000 to build, and delivers roughly the same performance as a supercomputer cluster consisting of hundreds of PCs. This new system is used by the ASTRA research group, part of the Vision Lab of the University of Antwerp, to develop new computational methods for tomography. The guys explain that the eight NVIDIA GPUs deliver the same performance for their work as more than 300 Intel Core 2 Duo 2.4GHz processors. On a normal desktop PC their tomography tasks would take several weeks, but on this NVIDIA-based supercomputer they take only a couple of hours. The NVIDIA graphics cards do the job very efficiently and consume a lot less power than a supercomputer cluster."
  • I guess... (Score:4, Funny)

    by LordVader717 ( 888547 ) on Saturday May 31, 2008 @01:26PM (#23610779)
    They didn't have enough dough for 9.
  • Re-birth of Amiga? (Score:5, Interesting)

    by Yvan256 ( 722131 ) on Saturday May 31, 2008 @01:30PM (#23610811) Homepage Journal
    Am I the only one seeing those alternative uses of GPUs as some kind of re-birth of the Amiga design?
    • by Quarters ( 18322 ) on Saturday May 31, 2008 @02:03PM (#23611113)
      The Amiga design was, essentially, dedicated chips for dedicated tasks. The CPU was a Motorola 68XXX chip. Agnus handled RAM access requests from the CPU and the other custom chips. Denise handled video operations. Paula handled audio. This CPU + coprocessor setup is roughly analogous to a modern x86 PC with a CPU, northbridge chip, GPU, and dedicated audio chip. At the time the Amiga's design was revolutionary because PCs and Macs were using a single CPU to handle all operations. Both Macs and PCs have come a long way since then. 'Modern' PCs have had the "Amiga design" since about the time the AGP bus became prevalent.

      nVidia's CUDA framework for performing general purpose operations on a GPU is something totally different. I don't think the Amiga custom chips could be repurposed in such a fashion.

      • by porpnorber ( 851345 ) on Saturday May 31, 2008 @05:28PM (#23612551)
        I think the parent was seeing the same situation a little differently. You ever code up Conway's Life for the blitter? Whoosh! Now CUDA does floating point where the Amiga could only do binary operations, and the GPU has a lot more control onboard, but the analogy is not unsound. After all, CPUs themselves didn't even do floating point in the old days (though of course they did do narrow integer arithmetic).
      • Re: (Score:2, Interesting)

        by Anonymous Coward

        'Modern' PCs have had the "Amiga design" since about the time the AGP bus became prevalent.

        Not really. The Amiga also had perfect synchronization between the different components. When you configured soundchip and graphics chip for a particular sample rate and screen resolution, you would know exactly how many samples would be played for the duration of one frame. And you had synchronization to the point where you could know which of the samples were played while a particular line was being sent through the

    • It's more like the re-birth of the ILLIAC III [wikipedia.org].
    • What this makes me think of is the marketing campaign [falconfly.de] that 3dfx used back when they launched their Voodoo 5 series ("So powerful it's kind of ridiculous").

      Most of the TV spot starts by explaining how scientists could save humanity with GFLOPS-grade chips. But then, humorously, the spot announces that they decided to play games instead (often with hilarious effects on the various "dreams of a better humanity" that the first half of the spot showed).

      In a funny twist, it's the exact opposite that happened.
  • by arrenlex ( 994824 ) on Saturday May 31, 2008 @01:34PM (#23610843)
    This article makes it seem like it is possible to use the GPUs as general purpose CPUs. Is that the case? If so, why doesn't NVIDIA or especially AMD/ATI start putting its GPUs on motherboards? At a ratio of 8:300, a single high-end GPU seems to be able to do the work of dozens of high-end CPUs. They'd utterly wipe out the competition. Why haven't they put something like this out yet?
    • by kcbanner ( 929309 ) * on Saturday May 31, 2008 @01:37PM (#23610881) Homepage Journal
      They are useful for applications that can be massively parallelized. Your average program can't break off into 128 threads, that takes a little bit of extra skill on the coder's part. If, for example, someone could port gcc to run on the GPU, think of how happy those Gentoo folks would be :) (make -j128)!
    • No; if you read all the way to the end, you can see where they discuss the few specific "general" programs that currently support this kind of thing. Namely, Folding@home (on ATI cards) and maybe Photoshop in the future. The tomography software they use is likely their own code, is graphics-heavy, and is tailored for this set-up.
    • It is possible to solve non-graphics problems on graphics cards nowadays, but the hardware is still very specialized. You don't want the GPU to run your OS or your web browser or any of that; when a SIMD (single instruction, multiple data) problem arises, a decent computer scientist should recognize it and use the tools he has available.
      Also, this stuff isn't as mature as normal C programming, so issues that don't always exist in software that's distributed to the general public will crop up because not everyone's video card will support everything that's going on in the program.
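      For the curious, here is a minimal sketch of what such a SIMD-friendly, data-parallel kernel looks like in NVIDIA's CUDA. It's illustrative only; the kernel and variable names are made up, not from the article:

        #include <cuda_runtime.h>
        #include <cstdio>

        // Every thread handles exactly one array element: the same instruction
        // runs across thousands of threads, each on different data.
        __global__ void scale_add(const float *x, const float *y, float *out,
                                  float a, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
            if (i < n)
                out[i] = a * x[i] + y[i];
        }

        int main()
        {
            const int n = 1 << 20;
            size_t bytes = n * sizeof(float);
            float *hx = new float[n], *hy = new float[n], *hout = new float[n];
            for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

            float *dx, *dy, *dout;
            cudaMalloc((void **)&dx, bytes);
            cudaMalloc((void **)&dy, bytes);
            cudaMalloc((void **)&dout, bytes);
            cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

            // Launch enough 256-thread blocks to cover all n elements.
            scale_add<<<(n + 255) / 256, 256>>>(dx, dy, dout, 3.0f, n);
            cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);

            printf("out[0] = %f\n", hout[0]);  // expect 5.0
            cudaFree(dx); cudaFree(dy); cudaFree(dout);
            delete[] hx; delete[] hy; delete[] hout;
            return 0;
        }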
    • by gweihir ( 88907 )
      This article makes it seem like it is possible to use the GPUs as general purpose CPUs. Is that the case? If so, why doesn't NVIDIA or especially AMD/ATI start putting its GPUs on motherboards? At a ratio of 8:300, a single high-end GPU seems to be able to do the work of dozens of high-end CPUs. They'd utterly wipe out the competition. Why haven't they put something like this out yet?

      Simple: This is not a supercomputer at all, just special-purpose hardware running a very special problem. For general computa
    • GPUs are massively parallel and very poor at processing branches. This doesn't make for good speed with most programs. For a company to put out a product that they claim is much faster than the competition, they would have to be very selective with their examples and create new benchmarks that took advantage of their product. They might even have to create new programs to take advantage of the power, like a modified GIMP.

      In all likelihood, if they tried too hard to advertise their speed advantage, they w

    • This article makes it seem like it is possible to use the GPUs as general purpose CPUs. Is that the case?

      As well as the issues that others have mentioned, there's also the problem of accuracy with GPUs.

      AFAIK, in many (all?) ordinary consumer graphics cards, minor mistakes by the GPU are tolerated because they'll typically result in (at worst) minor or unnoticeable glitches in the display. I assume that this is because, to get the best performance, designers push the hardware beyond levels that would be acceptable otherwise.

      Clearly if you're using them for other mathematical operations, or to partly replace

    • An ATI or nVidia GPU as main processor is completely out of the question. These things aren't even able to *call* a function (everything is inlined at compile time), not to mention that everything is run by a SIMD engine behind the scenes, which means that all the processing units must actually run the same code at the same time.

      Intel on the other hand are touting their future Larrabee as being completely compatible with the x86 instruction set. The whole thing, according to them, should behave like a big many
    • The major difference between CPUs and GPUs is that CPUs have to handle the effects of different branch conditions. One of the optimizations that CPUs have been designed for is the combination of pipelining and dual-path evaluation. Because the CPU is pipelining instructions, it has to evaluate both outcomes of each conditional instruction in parallel with the actual condition, and then select the actual outcome once it is known. Calculating the condition first, followed by the outcome, would red
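      On the GPU side the contrast is stark: threads execute in SIMD groups, so when threads in the same group take different sides of a branch, the hardware simply runs both paths one after the other. A hypothetical CUDA kernel, just to illustrate the effect (not from the article):

        // Threads in one warp execute in lockstep. If some threads satisfy the
        // condition and others don't, both branches are executed serially with
        // the inactive threads masked off, so a divergent branch costs roughly
        // the time of path A *plus* path B.
        __global__ void divergent(const float *in, float *out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;

            if (in[i] > 0.0f)                 // threads may disagree here
                out[i] = sqrtf(in[i]);        // path A
            else
                out[i] = -in[i] * in[i];      // path B
        }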
  • by sticks_us ( 150624 ) on Saturday May 31, 2008 @01:36PM (#23610863) Homepage
    Ok, probably a paid NVIDIA ad placement, but check TFA anyway (and even if you don't read it, you gotta love the case). It looks like heat generation is one of the biggest problems--sweet.

    I like this too:

    The medical researchers ran some benchmarks and found that in some cases their 4000EUR desktop superPC outperforms CalcUA, a 256-node supercomputer with dual AMD Opteron 250 2.4GHz chips that cost the University of Antwerp 3.5 million euro in March 2005...

    ...and at 4000EUR, that comes to what (rolls dice, consults sundial) about $20000 American?
    • Re: (Score:2, Informative)

      by livingboy ( 444688 )
      Instead of dice you could use KCalc. 1 EUR is about 1.55 USD, so instead of 20,000 it cost only about 6,200 USD.
      • Re: (Score:3, Funny)

        by Anonymous Coward
        WHOOOOOSH - over your head it went.
    • Re: (Score:2, Informative)

      by krilli ( 303497 )
      The FASTRA uses aircooling and with the sidepanel removed the GPUs run at 55 degrees Celsius in idle, 86 degrees Celsius under full load and 100 degrees Celsius under full load with the shaders 20% overclocked. They have to run the system with the left side panel removed as the graphics cards would otherwise overheat but they're looking for a solution for their heat problem.

      Looking for a solution?
      Geeks everywhere have used the old "box fan aimed at the case" solution since time immemorial.

      If you wanna get real fancy, you can pull/push air through a water cooled radiator.
      Example: http://www.gmilburn.ca/ac/geoff_ac.html [gmilburn.ca]

    • and at 4000EUR, that comes to what (rolls dice, consults sundial) about $20000 American?
      That made me try to extrapolate the 2002-2008 trend of the exchange rate to see when that would become true (provided the trend continues). I get 2014 and 2045 with linear extrapolations, which are gross approximations, and 2023 with an exponential extrapolation. Does anyone know how exchange rates should be expected to behave with respect to time?
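      For what it's worth, the two fits amount to this (a sketch: r(t) is the USD-per-EUR rate, R = 5 is the rate at which €4,000 really would be about $20,000, and the constants come from whatever 2002-2008 data you feed in):

        % Linear fit: solve a + b t* = R for the year t* when the rate hits R.
        r(t) = a + b\,t, \qquad t^{*} = \frac{R - a}{b}

        % Exponential fit: constant relative growth k instead of constant slope b.
        r(t) = r_0\,e^{k t}, \qquad t^{*} = \frac{1}{k}\,\ln\frac{R}{r_0}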
      • Re: (Score:2, Insightful)

        by maxume ( 22995 )
        Unpredictably.

        (the big shift over the last 6 years is mostly due to wanton printing of money in the US and rather tight central banking in Europe [with a healthy dose of Chinese currency rate fixing thrown in]. The trend isn't all that likely to continue, as a weakening dollar is great for American businesses operating in Europe and horrible for European businesses operating in America, which creates [increasing amounts of] counter-pressure to the relatively loose government policy in the US, or saying it t
  • Tomography (Score:5, Informative)

    by ProfessionalCookie ( 673314 ) on Saturday May 31, 2008 @01:36PM (#23610865) Journal
    noun: a technique for displaying a representation of a cross section through a human body or other solid object using X-rays or ultrasound.


    In other news Graphics cards are good at . . . graphics.

    • Re:Tomography (Score:5, Insightful)

      by jergh ( 230325 ) on Saturday May 31, 2008 @02:01PM (#23611093)
      What they are doing is reconstruction: basically analyzing the raw data from a tomographic scanner and generating a representation which can then be visualized. So it's more numerical methods than graphics.

      And BTW, even rendering the reconstructed results is not that simple, as current graphics cards are optimized for geometry, not volumetric data.
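      To make "numerical methods rather than graphics" concrete: the heart of a simple reconstruction method such as back-projection is one big accumulation loop per pixel, which is exactly the kind of work that spreads across thousands of GPU threads. A rough CUDA-flavoured sketch (hypothetical names and a simple parallel-beam geometry, not the ASTRA group's actual code):

        // Hypothetical back-projection kernel: one thread reconstructs one pixel
        // by summing, over all projection angles, the detector sample that the
        // pixel projects onto (nearest-neighbour, no filtering).
        // Launched with a 2D grid, e.g.:
        //   backproject<<<dim3((nx+15)/16, (ny+15)/16), dim3(16,16)>>>(...);
        __global__ void backproject(const float *sino,   // [n_angles * n_det] sinogram
                                    float *image,        // [nx * ny] output slice
                                    const float *cosv, const float *sinv,
                                    int n_angles, int n_det, int nx, int ny)
        {
            int px = blockIdx.x * blockDim.x + threadIdx.x;
            int py = blockIdx.y * blockDim.y + threadIdx.y;
            if (px >= nx || py >= ny) return;

            float x = px - nx / 2.0f;      // pixel coordinates relative to centre
            float y = py - ny / 2.0f;
            float acc = 0.0f;

            for (int a = 0; a < n_angles; ++a) {
                // Detector position this pixel projects onto at angle a.
                float s = x * cosv[a] + y * sinv[a] + n_det / 2.0f;
                int si = (int)s;
                if (si >= 0 && si < n_det)
                    acc += sino[a * n_det + si];
            }
            image[py * nx + px] = acc;
        }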
    • Re: (Score:2, Informative)

      by imrehg ( 1187617 )

      In other news Graphics cards are good at . . . graphics.

      It's not the graphics part that makes it so compute-intensive.... it's all the mathematics behind it; once that's done, the presentation could be done on any ol' computer....

      So, if you mean by "graphics" that they are good at difficult geometrical calculations (like in games, for example), then you are right.... because that's what it is, a truck-load of geometry...

      From Wikipedia:
      Tomography: "[...] Digital geometry processing is used to generate a three-dimensional image of the inside of an object from a l

      • if you mean by "graphics" that they are good at difficult geometrical calculations (like in games, for example)
        I can't imagine any other definition of 3d graphics.

        Cheers ;)

  • coincidence (Score:2, Insightful)

    by DaveGod ( 703167 )

    I can't imagine that it is a coincidence that this comes along just as Nvidia are crowing about CUDA, or that the resulting machine looks like a gamer's dream rig.

    While there is ample crossover between hardware enthusiasts and academia, anyone solely with computation in mind probably wouldn't be selecting neon fans or aftermarket coolers, or spending that much time on presentable wiring.

  • by bobdotorg ( 598873 ) on Saturday May 31, 2008 @01:39PM (#23610891)
    ... 3D Realms announced this as the minimum platform requirements to run Duke Nukem Forever.
  • Finally... (Score:5, Funny)

    by ferrellcat ( 691126 ) on Saturday May 31, 2008 @01:40PM (#23610901)
    Something that can play Crysis!
  • by poeidon1 ( 767457 ) on Saturday May 31, 2008 @01:41PM (#23610917) Homepage
    This is an example of an acceleration architecture. Anyone who has used FPGAs knows that. Of course, making sensational news is all too common a thing on /.
  • Killer Slant (Score:2, Insightful)

    The guys explain the eight NVIDIA GPUs deliver the same performance for their work as more than 300 Intel Core 2 Duo 2.4GHz processors.

    Pardon the italics, but I was impacted by the killer slant of this posting.

    For specific kinds of calculations, sure, GPGPU supercomputing is superior. I would question what software optimization they had applied to the 300 CPU system. Apparently, none. Let's not sensationalize quite so much, shall we?
  • by gweihir ( 88907 ) on Saturday May 31, 2008 @01:44PM (#23610957)
    It is also not difficult to find other tasks where, e.g., FPGAs perform vastly better than general-purpose CPUs. That does not make an FPGA a "Supercomputer". Stop the BS, please.
    • Re: (Score:3, Interesting)

      by emilper ( 826945 )
      aren't most supercomputers designed to perform some very specific tasks? You don't buy a supercomputer to run the Super edition of Excel.
  • Brick of GPUs (Score:5, Interesting)

    by Rufus211 ( 221883 ) <rufus-slashdot@h ... g ['ack' in gap]> on Saturday May 31, 2008 @01:48PM (#23610995) Homepage
    I love this picture: http://fastra.ua.ac.be/en/images.html [ua.ac.be]

    Between the massive brick of GPUs and the massive CPU heatsink/fan, you can't see the mobo at all.
    • Re: (Score:3, Funny)

      by Fumus ( 1258966 )
      They spent 4000 EUR for the computer, but use two boxes in order to situate the monitor higher. I guess they spent everything they had on the computer.
      • On a more serious note, they shouldn't have elevated their monitor. Generally, the top of a computer monitor should be at the user's eye level, and from the picture it doesn't seem to be. This causes more strain on the user's neck, as he/she needs to look up rather than down.

        The way our neck bones are structured makes looking up more strenuous than looking down. Hence, it is more comfortable to look downwards than upwards.
  • by bockelboy ( 824282 ) on Saturday May 31, 2008 @01:54PM (#23611043)
    Wave of the Future? Yes*. Revolution in computing? Not quite.

    The GPGPU scheme is, after all, a re-invention of the vector processing of old. Vector processors died out, however, because there were too few users to support. Now that there's a commercially viable reason to make these processors (PS3 and video games), they are interesting again.

    The researchers took a specialized piece of hardware, rewrote their code for it, and found it was faster than their original code on generic hardware. The problems here are that you have to rewrite your code (High Energy Physics codebases are about a GB, compiled... other sciences are similar) and you have to have a problem which will run well on this scheme. Have a discrete problem? Too bad. Have a gigantic, tightly coupled problem which requires lots of inter-GPU communication? Too bad.

    Have a tomography problem which requires only 1GB of RAM? Here you go...

    The standard supercomputer isn't going away for a long, long time. Now, as before, a one-size-fits-all approach is silly. You'll start to see sites complement their clusters and large-SMP machines with GPU power as scientists start to understand and take advantage of them. Just remember, there are 10-20 years of legacy code which will need to be ported... it's going to be a slow process.
    • by 77Punker ( 673758 ) <spencr04@highpoint . e du> on Saturday May 31, 2008 @02:03PM (#23611109)
      Fortunately, Nvidia provides a CUDA version of the basic linear algebra subprograms, so even if your software is hard to port, you can speed it up considerably if it does some big matrix operations, which can easily take a long time on a CPU.
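      For example, moving one big matrix multiply onto the GPU through CUBLAS can be just a handful of calls. A minimal sketch (this uses the cublas_v2 interface, so the exact header and call names depend on your CUBLAS version; note CUBLAS expects column-major storage):

        #include <cuda_runtime.h>
        #include <cublas_v2.h>
        #include <vector>
        #include <cstdio>

        // Offload a single matrix multiply C = A * B (n x n, single precision)
        // to the GPU via CUBLAS.
        int main()
        {
            const int n = 1024;
            std::vector<float> A(n * n, 1.0f), B(n * n, 2.0f), C(n * n, 0.0f);

            float *dA, *dB, *dC;
            cudaMalloc((void **)&dA, n * n * sizeof(float));
            cudaMalloc((void **)&dB, n * n * sizeof(float));
            cudaMalloc((void **)&dC, n * n * sizeof(float));
            cudaMemcpy(dA, A.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
            cudaMemcpy(dB, B.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

            cublasHandle_t handle;
            cublasCreate(&handle);
            const float alpha = 1.0f, beta = 0.0f;
            // C = alpha * A * B + beta * C
            cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                        &alpha, dA, n, dB, n, &beta, dC, n);

            cudaMemcpy(C.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
            printf("C[0] = %f\n", C[0]);  // expect 2048 = 2 * n

            cublasDestroy(handle);
            cudaFree(dA); cudaFree(dB); cudaFree(dC);
            return 0;
        }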
    • Re: (Score:3, Interesting)

      The GPGPU scheme is, after all, a re-invention of the vector processing of old. Vector processors died out, however, because there were too few users to support. Now that there's a commercially viable reason to make these processors (PS3 and video games), they are interesting again.

      Since when have "vector processors died out"? The "Earth Simulator", for example, used the NEC SX-6 CPU; currently the SX-9 is sold. Vector processors never died out and were in use for what they are best at. The GPU and the Cell

  • Vector Computing (Score:2, Interesting)

    by alewar ( 784204 )
    They are comparing their system against normal computers; it'd be interesting to see a benchmark against a vector computer like, e.g., the NEC SX-9.
  • The price ! (Score:3, Funny)

    by this great guy ( 922511 ) on Saturday May 31, 2008 @02:48PM (#23611427)
    The system uses four NVIDIA GeForce 9800 GX2 graphics cards, costs less than 4,000 EUR to build

    What's more crazy: calling something this inexpensive a supercomputer, or 4 video cards costing a freaking 4,000 EUR?
  • Nvidia offers an external GPU solution specifically for "deskside supercomputing", the Tesla D870 [nvidia.com]. It has only 2 cores at 1.35GHz each; apart from it being a bit more expensive, I wonder how it compares (you can connect several to a PC).
  • They are using graphical processors to process graphics. Truly revolutionary. Who woulda thunkit?
    • Makes me slightly annoyed whilst I sit here watching my windows stutter around the screen when I switch desktops since all of my graphics are being processed by my CPU.

      Thanks a lot ATI :(

      (PS: This is in Enlightenment not Compiz)
  • by Chris Snook ( 872473 ) on Saturday May 31, 2008 @03:56PM (#23611977)
    I'm extremely curious to know where the performance bottleneck is in this system. Is it memory bandwidth? PCIe bandwidth? Raw GPU power? Depending on which it is, it may or may not be very easy to improve upon the price/performance ratio of this setup. Given that the work parallelizes very easily, if you could build two machines that are each 2/3 as powerful and each cost 1/2 as much, that's a huge win for other people trying to build similar systems.
  • The guys in Antwerp have probably got themselves the greater number crunching power, but reconstruction of tomographic images has been done using similar multi-core hardware. See the following (pdf alert) from the University of Erlangen, which uses a cluster of PS3s for a great use of commodity consumer hardware http://www.google.co.uk/url?sa=t&ct=res&cd=1&url=http%3A%2F%2Fwww.imp.uni-erlangen.de%2FIEEE%2520MIC2007%2FKnaup_Poster_M19-291.pdf&ei=t_FBSKnZKoie1gbh2Y23Bg&usg=AFQjCNG7vNGmMM2h [google.co.uk]
