Supercomputing Hardware

Supercomputer Built With 8 GPUs

FnH writes "Researchers at the University of Antwerp in Belgium have created a new supercomputer with standard gaming hardware. The system uses four NVIDIA GeForce 9800 GX2 graphics cards, costs less than €4,000 to build, and delivers roughly the same performance as a supercomputer cluster consisting of hundreds of PCs. This new system is used by the ASTRA research group, part of the Vision Lab of the University of Antwerp, to develop new computational methods for tomography. The guys explain that the eight NVIDIA GPUs deliver the same performance for their work as more than 300 Intel Core 2 Duo 2.4GHz processors. On a normal desktop PC their tomography tasks would take several weeks, but on this NVIDIA-based supercomputer they take only a couple of hours. The NVIDIA graphics cards do the job very efficiently and consume a lot less power than a supercomputer cluster."
This discussion has been archived. No new comments can be posted.

  • by cromar ( 1103585 ) on Saturday May 31, 2008 @01:33PM (#23610839)
    I am guessing it has something to do with floating point calculations vs. integer calculations, but if I read the article, this wouldn't be Slashdot, would it? Think about it. We have GPUs to perform vector maths, flops, etc. because the CPU is not all that great at that sort of thing typically. A general purpose CPU is not necessarily going to be the fastest if your problem domain is more suited to an "inferior" chip; general purpose CPUs are not designed to be the fastest chip in every situation.
  • by symbolset ( 646467 ) on Saturday May 31, 2008 @01:35PM (#23610859) Journal

    By what benchmark is eight of the NVIDIA GPUs in the 9800 GX2 more powerful than 300 2.4 GHz C2Ds?

    By the benchmark that they solve the particular problem of this specific application in 1/300th of the time?

  • coincidence (Score:2, Insightful)

    by DaveGod ( 703167 ) on Saturday May 31, 2008 @01:37PM (#23610871)

    I can't imagine that it is a coincidence that this comes along just as Nvidia are crowing about CUDA, or that the resulting machine looks like a gamer's dream rig.

    While there is ample crossover between hardware enthusiasts and academia, anyone solely with the computation interest in mind probably wouldn't be selecting neon fans, aftermarket coolers, or spending that much time on presentable wiring.

  • by kcbanner ( 929309 ) * on Saturday May 31, 2008 @01:37PM (#23610881) Homepage Journal
    They are useful for applications that can be massively parallelized. Your average program can't break off into 128 threads, that takes a little bit of extra skill on the coder's part. If, for example, someone could port gcc to run on the GPU, think of how happy those Gentoo folks would be :) (make -j128)!
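    The split-into-independent-chunks pattern the comment describes can be sketched in a few lines of plain Python (the function names here are hypothetical, purely for illustration); a GPU applies the same idea with hundreds of hardware threads instead of a small thread pool:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def process_chunk(chunk):
        # Stand-in for per-element work with no dependencies between
        # elements -- the "embarrassingly parallel" case a GPU excels at.
        return [x * x for x in chunk]

    def parallel_map(data, workers=4):
        # Split the input into one slice per worker; each slice is
        # processed independently, then the partial results are rejoined.
        size = (len(data) + workers - 1) // workers
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(process_chunk, chunks))
        return [y for chunk in results for y in chunk]

    print(parallel_map(list(range(8))))  # -> [0, 1, 4, 9, 16, 25, 36, 49]
    ```

    The "extra skill on the coder's part" is mostly in the splitting step: work only parallelizes cleanly when the chunks genuinely don't depend on each other.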
  • by poeidon1 ( 767457 ) on Saturday May 31, 2008 @01:41PM (#23610917) Homepage
    This is an example of an acceleration architecture. Anyone who has used FPGAs knows that. Of course, making sensational news is all too common a thing on /.
  • Killer Slant (Score:2, Insightful)

    by FurtiveGlancer ( 1274746 ) <.AdHocTechGuy. .at. .aol.com.> on Saturday May 31, 2008 @01:43PM (#23610941) Journal

    The guys explain the eight NVIDIA GPUs deliver the same performance for their work as more than 300 Intel Core 2 Duo 2.4GHz processors.

    Pardon the italics, but I was impacted by the killer slant of this posting.

    For specific kinds of calculations, sure, GPGPU supercomputing is superior. I would question what software optimization they had applied to the 300 CPU system. Apparently, none. Let's not sensationalize quite so much, shall we?
  • Re:nVidia Tesla (Score:1, Insightful)

    by Anonymous Coward on Saturday May 31, 2008 @01:44PM (#23610963)
    A Tesla system would cost a lot more.
  • Re:Tomography (Score:5, Insightful)

    by jergh ( 230325 ) on Saturday May 31, 2008 @02:01PM (#23611093)
    What they are doing is reconstruction: basically analyzing the raw data from a tomographic scanner and generating a representation which can then be visualized. So it's more numerical methods than graphics.

    And BTW, even rendering the reconstructed results is not that simple, as current graphics cards are optimized for geometry, not volumetric data.
  • by Jaime2 ( 824950 ) on Saturday May 31, 2008 @02:16PM (#23611199)
    I think the GP (and myself) were objecting to the use of the fairly general word "power" and the use of this one problem as a "power benchmark". While it is obviously true that 8 GPUs are as fast as 300 C2Ds for this problem, this system isn't as fast as a supercomputer for most problems. All this does is point out that the recent trend of building supercomputers out of inexpensive general purpose CPUs may not be a good idea for all applications.
  • by gumbi west ( 610122 ) on Saturday May 31, 2008 @02:19PM (#23611215) Journal
    When you get into inverting matrices, or doing matrix-vector multiplication, the algorithm parallelizes very easily, but I always wonder where the full matrices live. I.e., they could easily be tens of GB of matrix, so the CPU would seem to have to be heavily involved as well.
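    The row-wise independence the comment relies on is easy to see in a toy sketch (plain Python, hypothetical function name, tiny block size for illustration): each block of rows of y = A·x can be computed on its own, so only the active block of A needs to be resident in device memory at once.

    ```python
    def matvec_blocked(A, x, block=2):
        # Compute y = A @ x one block of rows at a time.  Each row's dot
        # product is independent of every other row's, so blocks can be
        # streamed through a GPU (or spread over cores) without the whole
        # tens-of-GB matrix ever being resident on the device -- the CPU
        # side only has to shuttle blocks in and results out.
        y = []
        for start in range(0, len(A), block):
            for row in A[start:start + block]:
                y.append(sum(a * b for a, b in zip(row, x)))
        return y

    A = [[1, 0], [0, 1], [2, 3]]
    x = [4, 5]
    print(matvec_blocked(A, x))  # -> [4, 5, 23]
    ```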
  • by dreamchaser ( 49529 ) on Saturday May 31, 2008 @02:23PM (#23611239) Homepage Journal
    Because for 95%+ of the problems a general purpose computer tackles, GPUs would suck. It's only in very special cases that GPUs outperform CPUs. Thus, your idea is a poor one.
  • by pablomme ( 1270790 ) on Saturday May 31, 2008 @02:33PM (#23611319)
    As far as I know, GPUs are amazingly fast at matrix operations and other things allowing vectorized evaluation. I guess these tomography applications must make massive use of these. After all, tomography is in essence image processing.
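    A heavily simplified sketch of why tomographic reconstruction vectorizes so well (this is a toy two-view unfiltered back-projection with a hypothetical function name, not the ASTRA group's actual method, which uses many angles and filtering): every pixel is updated by the same dependency-free formula, which is exactly the shape of work a GPU is built for.

    ```python
    def backproject(row_sums, col_sums):
        # Smear each projection value back across the pixels it passed
        # through and average the two views.  Every output pixel is an
        # independent arithmetic expression over the inputs -- ideal for
        # one-GPU-thread-per-pixel execution.
        n = len(row_sums)
        return [[(row_sums[i] + col_sums[j]) / (2 * n) for j in range(n)]
                for i in range(n)]

    # Projections of the 2x2 image [[1, 2], [3, 4]]:
    # row sums [3, 7], column sums [4, 6].
    print(backproject([3, 7], [4, 6]))  # -> [[1.75, 2.25], [2.75, 3.25]]
    ```

    The result is a blurred version of the original image; real reconstruction sharpens it with filtering and many more projection angles, but the per-pixel kernel stays just as parallel.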
  • by symbolset ( 646467 ) on Saturday May 31, 2008 @02:40PM (#23611379) Journal

    All this does is point out that the recent trend of building supercomputers out of inexpensive general purpose CPUs may not be a good idea for all applications.

    And... a screwdriver is not always a prybar. A tool's a tool - they have preferred usage but if your requirement is specific and you're creative enough, you can do some fine work outside of the tool's intended purpose. Like this guy. Kudos to him.

    Perhaps some more creative people finding this information will now discover if their specific requirements can be met by this interesting configuration. That will save them large quantities of cash or possibly enable some facility that was not previously available because supercomputers cost a grip-o-cash.

    Of course for general purpose supercomputing you would want to use modified PS3s [wired.com].

  • by mangu ( 126918 ) on Saturday May 31, 2008 @02:43PM (#23611397)

    They are useful for applications that can be massively parallelized

    Precisely. But that happens to be one of the areas where more performance is still needed.


    You don't need a super-duper CPU for text editing, that's for sure. For most of the tasks people do on computers, we have had enough CPU power for the last 15 years or more. But where we still need more CPU happens to be mostly in tasks that ARE massively parallel, for instance, physics simulations, of which you will find several examples on the nVidia site [nvidia.com].


    I'm following this technology with much interest, and I think I will have a major upgrade in my home computer soon. My old FX-5200 card has been more than enough for my gaming needs, but now I have a new reason for upgrading.

  • by osu-neko ( 2604 ) on Saturday May 31, 2008 @03:19PM (#23611667)

    Designed in America, manufactured in Asia, purchased in Europe.

    20th century thinking. Welcome to globalization. The product was designed, manufactured, and purchased on Earth.

  • by Enderandrew ( 866215 ) <enderandrew&gmail,com> on Saturday May 31, 2008 @03:30PM (#23611755) Homepage Journal
    Please, please, please do the math.

    8 GPUs are being compared to 300 CPUs. So a single GPU for this purpose isn't 300 times as powerful as the CPU.

    Each GPU is doing the work of roughly 37 CPUs (300/8 ≈ 37.5). This isn't news or unbelievable. GPUs are dedicated to performing certain types of tasks far better than a CPU.
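    The comment's arithmetic checks out directly (figures taken from the summary):

    ```python
    cpus, gpus = 300, 8        # CPU-equivalents claimed vs GPUs used
    per_gpu = cpus / gpus      # CPU-equivalents per single GPU
    print(per_gpu)             # -> 37.5, i.e. ~37x per GPU, not 300x
    ```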
  • by maxume ( 22995 ) on Saturday May 31, 2008 @03:37PM (#23611815)
    Unpredictably.

    (the big shift over the last 6 years is mostly due to wanton printing of money in the US and rather tight central banking in Europe [with a healthy dose of Chinese currency rate fixing thrown in]. The trend isn't all that likely to continue, as a weakening dollar is great for American businesses operating in Europe and horrible for European businesses operating in America, which creates [increasing amounts of] counter-pressure to the relatively loose government policy in the US, or saying it the other way around, counter-pressure to the relatively tight government policy in the EU.)
  • by tdelaney ( 458893 ) on Saturday May 31, 2008 @05:57PM (#23612723)
    Sure - but at 4000 euros, you can afford to do a one-off purchase and write custom software for a limited application. The point of this is that if your application suits it, this is a very cheap way to get supercomputer performance without paying for your own supercomputer (cluster) or time on an existing one.
