
Australia's CSIRO To Launch CPU-GPU Supercomputer

bennyboy64 contributes this excerpt from CRN Australia: "The CSIRO will this week launch a new supercomputer which uses a cluster of GPUs [pictures] to gain a processing capacity that competes with supercomputers over twice its size. The supercomputer is one of the world's first to combine traditional CPUs with the more powerful GPUs. It features 100 Intel Xeon CPU chips and 50 Tesla GPU chips, connected to an 80 Terabyte Hitachi Data Systems network attached storage unit. CSIRO science applications have already seen 10-100x speedups on NVIDIA GPUs."

  • by bluesatin ( 1350681 ) on Monday November 23, 2009 @05:51AM (#30200330)

    Can someone explain exactly what the benefits and drawbacks of using GPUs for processing are?

    It would also be nice if someone could give a quick rundown of what sorts of applications GPUs are good at.

  • by Anonymous Coward on Monday November 23, 2009 @05:53AM (#30200338)

    The article didn't seem to mention cost, power usage, heat, or anything remotely relevant. It's just a nice, happy fluff piece for NVIDIA, whom I do adore, but these articles on Slashdot don't have as much technical substance as they used to.

  • by Sockatume ( 732728 ) on Monday November 23, 2009 @06:17AM (#30200418)

    Okay, that's not quite true: most tasks do fine just piddling about on the CPU, but demanding tasks would be better off running on something faster and more specialised. The barrier is that it's harder to write GPGPU code (see the sketch just below).
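
    For illustration, here is a minimal sketch of what that GPGPU code looks like next to the ordinary CPU loop. All names and sizes are invented for the example, and real code would add error checking:

        // Element-wise addition, first as a plain CPU loop, then as a CUDA
        // kernel plus the host-side bookkeeping (device allocations, copies,
        // launch geometry) that the CPU version never needs.
        #include <cuda_runtime.h>
        #include <cstdio>
        #include <cstdlib>

        // CPU version: one loop, no ceremony.
        void add_cpu(const float* a, const float* b, float* c, int n) {
            for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
        }

        // GPU version: each thread handles one array element.
        __global__ void add_gpu(const float* a, const float* b, float* c, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) c[i] = a[i] + b[i];
        }

        int main() {
            const int n = 1 << 20;
            const size_t bytes = n * sizeof(float);

            // Host buffers.
            float* ha = (float*)malloc(bytes);
            float* hb = (float*)malloc(bytes);
            float* hc = (float*)malloc(bytes);
            for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

            // Device buffers and explicit copies.
            float *da, *db, *dc;
            cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
            cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

            const int threads = 256;
            const int blocks = (n + threads - 1) / threads;
            add_gpu<<<blocks, threads>>>(da, db, dc, n);
            cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

            printf("c[0] = %f\n", hc[0]);  // expect 3.0

            cudaFree(da); cudaFree(db); cudaFree(dc);
            free(ha); free(hb); free(hc);
            return 0;
        }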

  • by Anonymous Coward on Monday November 23, 2009 @06:18AM (#30200420)

    I can take a stab. GPUs traditionally render graphics, so they're good at processing vectors and other maths-heavy work. Now think of a simulation of a bunch of atoms: for computational efficiency, the forces between the atoms are often approximated with Newtonian laws of motion, which matters especially when you're dealing with tens of thousands of atoms. This is called Molecular Dynamics (MD). So the maths used in graphics-intensive computer games is the same maths used in classical MD (a toy kernel along these lines is sketched below). The problem hitherto is that MD software has never really been compiled for GPU architectures, just Athlons and Pentiums.

    I should mention that I use the CSIRO CPU cluster. It's quite good already, but I'm still waiting weeks to simulate a microsecond of 10,000 atoms using 32 processors. My new side project will be trying it out on the GPUs. 100x faster, they reckon; that'll be a game changer for me.
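
    (For a concrete feel of why MD maps so well, here is a toy CUDA kernel, invented for illustration and not CSIRO's actual code: each thread accumulates the total force on one atom over all other atoms. Real MD codes add neighbour lists, cutoffs and periodic boundaries, and the pairwise law below is just an inverse-square placeholder; it would be launched from a host harness like the vector-add sketch above.)

        // Naive O(N^2) all-pairs force kernel: one thread per atom.
        __global__ void pair_forces(const float4* pos,   // x,y,z position; w = mass/charge
                                    float4* force,       // accumulated force per atom
                                    int n_atoms)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n_atoms) return;

            float4 pi = pos[i];
            float3 f = make_float3(0.0f, 0.0f, 0.0f);

            for (int j = 0; j < n_atoms; ++j) {
                if (j == i) continue;
                float4 pj = pos[j];
                float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
                float r2 = dx*dx + dy*dy + dz*dz + 1e-6f;   // softening avoids divide-by-zero
                float inv_r = rsqrtf(r2);
                // Placeholder inverse-square interaction; a real MD code would
                // evaluate Lennard-Jones / Coulomb terms here.
                float s = pi.w * pj.w * inv_r * inv_r * inv_r;
                f.x += s * dx;  f.y += s * dy;  f.z += s * dz;
            }
            force[i] = make_float4(f.x, f.y, f.z, 0.0f);
        }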

  • by Anonymous Coward on Monday November 23, 2009 @06:58AM (#30200526)

    The main drawback of using GPUs for scientific applications is their poor support for double precision floating point operations.

    Using single-precision mathematics makes sense for games, where it doesn't matter if a triangle is a few millimetres out of place or the shade of a pixel is slightly wrong. In a lot of scientific applications, however, these small errors can build up and completely invalidate the results (a small demonstration follows at the end of this comment).

    There are rumours that Nvidia's next-generation Fermi GPU will support double-precision mathematics at the same speed as single precision. If that is the case, they will be incredibly popular within the scientific community, and I would expect the Top500 supercomputer list to become dominated by machines built around GPUs rather than traditional CPUs. (Of course, this really depends on the Fermi GPU's FLOPS-per-watt performance, which is impossible to gauge before they are released.)
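
    (A small demonstration of that build-up, purely for illustration: summing 0.1 ten million times in single vs double precision. It compiles with any C compiler, or with nvcc as host code, and the same rounding behaviour applies inside a kernel.)

        #include <stdio.h>

        int main(void) {
            float  sum_f = 0.0f;
            double sum_d = 0.0;
            for (int i = 0; i < 10000000; ++i) {
                sum_f += 0.1f;   /* each add rounds to ~7 significant digits */
                sum_d += 0.1;    /* ~16 significant digits */
            }
            /* The exact answer is 1,000,000.  With strict single precision the
               float total comes out around 1.09 million, roughly 9% too high
               (the exact figure depends on compiler and flags), while the
               double total stays within a whisker of 1,000,000. */
            printf("float : %f\n", sum_f);
            printf("double: %f\n", sum_d);
            return 0;
        }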

  • by sonamchauhan ( 587356 ) on Monday November 23, 2009 @07:03AM (#30200552) Journal

    Hmmm... is this setup a realisation of this release from Nvidia in March?

    Nvidia Touts New GPU Supercomputer
    http://gigaom.com/2009/05/04/nvidia-touts-new-gpu-supercomputer/ [gigaom.com]

    Another 'standalone' GPGPU supercomputer, without the InfiniBand switch:
    University of Antwerp makes 4000EUR NVIDIA supercomputer
    http://www.dvhardware.net/article27538.html [dvhardware.net]

  • by Anonymous Coward on Monday November 23, 2009 @07:57AM (#30200726)

    Note that doubles take, er, double the bandwidth of the equivalent single-precision values. GPUs are, AFAIK, pretty bandwidth-hungry, so even if the operations are supported natively you will feel the hurt of the extra bandwidth. Bandwidth is only a problem for the actual inputs and outputs of the program; the intermediate values should live in GPU registers. But then again, double registers take twice as much space as single registers, so in the same register file you get half as many, which limits how complex a shader you can run. I'd say doubles will mean at best half the performance, even if they are supported natively (quick back-of-envelope below).

    I don't know how Fermi GPUs handle IEEE compliance. GPUs usually have really relaxed compliance, since their target applications don't require it; I'm talking about how denormals, overflows, underflows and so on are treated. Comparing single-precision FLOPS to double-precision FLOPS is already quite unfair; if we are also comparing relaxed IEEE handling of infinities, denormals and the like against a fully compliant implementation, it becomes even worse. (And I repeat: I don't know the actual compliance of either of the two machines being compared.)
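
    A quick back-of-envelope along those lines (illustrative numbers only, not measurements of either machine):

        Streaming 16M elements in and back out of memory once:
            float  : 16M x 4 bytes x 2 (read + write) = 128 MB moved
            double : 16M x 8 bytes x 2 (read + write) = 256 MB moved
        On a card with ~100 GB/s of usable memory bandwidth that is roughly
        1.3 ms vs 2.6 ms per pass: a factor of two before the double-precision
        ALUs even enter the picture, which is why a purely memory-bound kernel
        tends to halve its throughput when it switches from single to double.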

