
Australia's CSIRO To Launch CPU-GPU Supercomputer

bennyboy64 contributes this excerpt from CRN Australia: "The CSIRO will this week launch a new supercomputer which uses a cluster of GPUs [pictures] to gain a processing capacity that competes with supercomputers over twice its size. The supercomputer is one of the world's first to combine traditional CPUs with the more powerful GPUs. It features 100 Intel Xeon CPU chips and 50 Tesla GPU chips, connected to an 80 Terabyte Hitachi Data Systems network attached storage unit. CSIRO science applications have already seen 10-100x speedups on NVIDIA GPUs."
  • Can someone explain exactly what the benefits/drawbacks of using GPUs for processing are?

    It would also be nice if someone could give a quick rundown of what sorts of applications GPUs are good at.

    • by SanguineV ( 1197225 ) on Monday November 23, 2009 @05:02AM (#30200366) Homepage

      Can someone explain exactly what the benefits/drawbacks of using GPUs for processing are?

      GPUs are massively parallel, with hundreds of cores handling tens of thousands of threads. The drawbacks are that they have limited instruction sets and don't support a lot of the arbitrary jumping, memory loading, etc. that CPUs do.

      It would also be nice if someone could give a quick rundown of what sorts of applications GPUs are good at.

      Anything that is massively parallelisable and processing intensive. The usual bottleneck with GPU programming on normal computers is the overhead of loading data from RAM to GPU RAM. Remove this bottleneck in a custom system and you can get enormous speedups in parallel applications once you compile the code down to GPU instructions.

      Greater detail I will leave to the experts...
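
      For anyone curious what that looks like in practice, here is a rough CUDA-style sketch (the kernel and array names are made up and it is illustrative only, not a tuned implementation): every GPU thread applies the same simple operation to one element of a big array, and the copies between host RAM and GPU RAM are exactly the overhead mentioned above.

      #include <cuda_runtime.h>
      #include <stdio.h>
      #include <stdlib.h>

      // Each thread scales one element: the same instruction applied
      // across a huge array in parallel.
      __global__ void scale(float *data, float factor, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) data[i] *= factor;
      }

      int main(void) {
          const int n = 1 << 20;
          size_t bytes = n * sizeof(float);
          float *host = (float *)malloc(bytes);
          for (int i = 0; i < n; i++) host[i] = 1.0f;

          float *dev;
          cudaMalloc(&dev, bytes);
          // The host-to-GPU copy is the usual bottleneck the parent mentions.
          cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

          scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

          // Copy the result back; in a real application you would keep data
          // on the GPU as long as possible to avoid paying this cost repeatedly.
          cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
          printf("host[0] = %f\n", host[0]);

          cudaFree(dev);
          free(host);
          return 0;
      }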

      • SanguineV (1197225) wrote, "GPUs are massively parallel, with hundreds of cores handling tens of thousands of threads. The drawbacks are that they have limited instruction sets and don't support a lot of the arbitrary jumping, memory loading, etc. that CPUs do."

        In other words, the GPU is a single-instruction-multiple-data (SIMD) device. It is well matched to simple, regular computations like those that occur in digital signal processing, image processing, computer-generated graphics, etc.

        The modern-day GPU is the

      • Re: (Score:3, Interesting)

        by Anonymous Coward

        The main drawback of using GPUs for scientific applications is their poor support for double precision floating point operations.

        Using single precision mathematics makes sense for games, where it doesn't matter if a triangle is a few millimetres out of place or the shade of a pixel is slightly wrong. However, in a lot of scientific applications these small errors can build up and completely invalidate results.

        There are rumours that Nvidia's next-generation Fermi GPU will support double precision mathematics

        • Re: (Score:1, Interesting)

          by Anonymous Coward

          Note that doubles take... er... double the bandwidth of the equivalent single-precision values. GPUs are, afaik, pretty bandwidth intensive... so even if the operations are supported natively you will notice the hurt of the increased bandwidth. Note that bandwidth is only a problem for the actual inputs/outputs of the program... the intermediate values should be in GPU registers... but then again... double registers take twice as much space as single registers, so with the same space you get half of them, supporting less
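
          A rough back-of-the-envelope sketch of that point, using the ~100GB/sec figure mentioned elsewhere in this thread as an assumed round number rather than a measured spec: the same memory bus simply delivers half as many doubles per second as singles.

          #include <stdio.h>

          int main(void) {
              double bytes_per_sec = 100e9;  // assumed ~100 GB/s of GPU memory bandwidth
              // 4-byte singles vs 8-byte doubles over the same bus.
              printf("singles/sec: %.2e\n", bytes_per_sec / sizeof(float));
              printf("doubles/sec: %.2e\n", bytes_per_sec / sizeof(double));
              return 0;
          }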

          • Most of them are somewhat relaxed IEEE on doubles anyway. They don't do the full 80-bit long double; they typically only do the 64-bit double. There are times when having those 80-bit calculations is important, especially when you start running into huge data sets.
        • by TheKidWho ( 705796 ) on Monday November 23, 2009 @08:25AM (#30201140)

          The next-gen Fermi is supposed to do ~600 double-precision GFLOPS. It also has ECC memory, a threading unit built in, and a lot more cache.

          http://en.wikipedia.org/wiki/GeForce_300_Series [wikipedia.org]

        • Many GPUs do in fact support double precision; it's not IEEE-standard double-precision floating point yet, but that's going to be a feature of the next generation or two. My source is ATi: anything marked with a superscript of '1' does not support double-precision maths, everything else does. ATi StreamSDK requirements [amd.com]
      • GPUs are massively parallel, with hundreds of cores handling tens of thousands of threads

        eh? Massively parallel yes. The rest?

        It's more to do with a single instruction performing the same operation on multiple bits of data at the same time, AKA vector processors. Great for physics/graphics processing where you want to perform the same process on lots of bits of data.

        • by blueg3 ( 192743 )

          Sort of. NVIDIA's definition of a "thread" is different from a CPU thread -- it's more similar to the instructions executed on a single piece of data in a SIMD system. You're not required to write data-parallel code for the GPU, but certainly data-parallel code is the easiest to write and visualize.

          On NVIDIA chips, at least, there are a number of independent processors. The processors execute vector instructions (though all the vector instructions can be conditionally executed, so that, e.g., they only affec

    • by EdZ ( 755139 )
      Benefits: blindingly fast when running massively parallel computations (think several hundred thousand threads).
      Drawbacks: trying to program something to take advantage of all that power requires you to scale up to several thousand threads. Not always that easy.
    • GPUs are fast but limited to very specific kinds of instructions. If you can write your code using those instructions, it will run much quicker than it would on a general-purpose processor. They're also ahead of the curve on things like parallelisation compared to desktop chips: the idea of writing graphics code for a 12-pipe GPU was mundane half a decade ago, while there's still scant support for multiple cores in desktop CPUs.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      I can take a stab: GPUs traditionally render graphics, so they're good at processing vectors and mathsy things. Now think of a simulation of a bunch of atoms. The forces between the atoms are often approximated with Newtonian laws of motion for computational efficiency, which is especially important when dealing with tens of thousands of atoms; this is called Molecular Dynamics (MD). So the maths used for graphics-intensive computer games is the same as for classical MD. The problem hitherto is that MD software has

      • On that subject, I just read a great paper on methane hydrates (trapping methane in ice) which wouldn't have been possible without some truly enormous computing horsepower. Studies over a microsecond timescale (which is an eternity for molecules in motion) were needed because of the rarity of the events they were trying to model. Good luck: you're opening up a whole new generation of computational chemistry.

    • GPUs are good if (Score:5, Informative)

      by Sycraft-fu ( 314770 ) on Monday November 23, 2009 @05:43AM (#30200498)

      1) Your problem is one that is more or less infinitely parallel in nature. Their method of operation is a whole bunch of parallel pathways; as such, your problem needs to be one that can be broken down into very small parts that can execute in parallel. A single GPU these days can have hundreds of parallel shaders (the GTX 285 has 240, for example).

      2) Your problem needs to be fairly linear, without a whole lot of branching. Modern GPUs can handle branching, but they take a heavy penalty doing it. They are designed for processing data streams where you just crunch numbers, not a lot of if-then kind of logic. So your problem should be fairly linear to run well (see the sketch at the end of this comment).

      3) Your problem needs to be solvable using single precision floating point math. This is changing: new GPUs are getting double precision capability and better integer handling, but almost all of the ones on the market now are only fast with 32-bit FP. So your problem needs to use that kind of math.

      4) Your problem needs to be able to be broken down into pieces that can fit in the memory on a GPU board. This varies; it is typically 512MB-1GB for consumer boards and as much as 4GB for Teslas. Regardless, your problem needs to fit in there for the most part. The memory on a GPU is very fast, 100GB/sec or more of bandwidth for high end ones. The communication back to the system via PCIe is usually an order of magnitude slower. So while you certainly can move data to main memory and to disk, it needs to be done sparingly. For the most part, you need to be cranking on stuff that is in the GPU's memory.

      Now, the more your problem meets those criteria, the better a candidate it is for acceleration by GPUs. If your problem is fairly small, very parallel, very linear and all single precision, well you will see absolutely massive gains over a CPU. It can be 100x or so. These are indeed the kind of gains you see in computer graphics, which is not surprising given that's what GPUs are made for. If your problem is very single threaded, has tons of branching, requires hundreds of gigs of data and such, well then you might find offloading to a GPU slower than trying it on a CPU. The system might spend more time just getting the data moved around than doing any real work.

      The good news is, there's an awful lot of problems that nicely meet the criteria for running on GPUs. They may not be perfectly ideal, but they still run plenty fast. After all, if a GPU is ideally 100x a CPU, and your code can only use it to 10% efficiency, well hell you are still doing 10x what you did on a CPU.

      So what kind of things are like this? Well graphics would be the most obvious one. That's where the design comes from. You do math on lots of matrices of 32-bit numbers. This doesn't just apply to consumer game graphics though, material shaders in professional 3D programs work the same way. Indeed, you'll find those can be accelerated with GPUs. Audio is another area that is a real good candidate. Most audio processing is the same kind of thing. You have large streams of numbers representing amplitude samples. You need to do various simple math functions on them to add reverb or compress the dynamics or whatever. I don't know of any audio processing that uses GPUs, but they'd do well for it. Protein folding is another great candidate. Folding@Home runs WAY faster on GPUs than CPUs.

      At this point, GPGPU stuff is still really in its infancy. We should start to see more and more of it as more people these days have GPUs that are useful for GPGPU apps (pretty much DX10 or better hardware, nVidia 8000 series or higher and ATi 3000 series or higher). Also, better APIs are starting to come out for it: nVidia's CUDA is popular, but proprietary to their cards; MS has introduced GPGPU support in DirectX; and OpenCL has come out and is being supported. As such, you should see more apps slowly start to be developed.

      GPUs certainly aren't good at everything; I mean, if they were, we'd just make CPUs like GPUs and call it good. However, there is a large set of problems they are better than the CPU at solving.
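
      To make point 2 concrete, here is a small hedged CUDA sketch (made-up kernels, purely illustrative): threads that share a warp but take different branches get serialized, so branchy code throws away a lot of the parallelism, while a branch-free version keeps every thread doing the same arithmetic.

      #include <cuda_runtime.h>

      // Divergent version: neighbouring threads take different paths,
      // so the hardware runs the two branches one after the other.
      __global__ void divergent(float *data, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          if (i % 2 == 0)
              data[i] = data[i] * 2.0f;   // even threads take this path
          else
              data[i] = data[i] + 1.0f;   // odd threads take this one
      }

      // Branch-free version: every thread does the same arithmetic,
      // which is the "just crunch numbers" style GPUs are built for.
      __global__ void uniform(float *data, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;
          float even = (i % 2 == 0) ? 1.0f : 0.0f;
          data[i] = even * (data[i] * 2.0f) + (1.0f - even) * (data[i] + 1.0f);
      }

      int main(void) {
          const int n = 1 << 20;
          float *dev;
          cudaMalloc(&dev, n * sizeof(float));
          cudaMemset(dev, 0, n * sizeof(float));
          divergent<<<(n + 255) / 256, 256>>>(dev, n);
          uniform<<<(n + 255) / 256, 256>>>(dev, n);
          cudaDeviceSynchronize();
          cudaFree(dev);
          return 0;
      }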

      • Re: (Score:3, Insightful)

        TFA doesn't talk about specific applications, but I bet the CSIRO want this machine for modelling. Climate modelling is a big deal here in Australia: predicting where the water will and will not be. This time of year bush fires are a major threat. I bet that with the right model and the right data you could predict the risk of fire at high resolution and in real time.

        • No, this is to hide the space ships from public view. You're on Slashdot, so you've obviously watched Stargate. We have these giant space-faring ships, but we can't let the public know about them. They're obviously going to cover the sky with giant LCD monitors, so amateur astronomers can't see what's going on. Let's just hope they remember to scrape off the logo.

          What? That's not what they meant by "launch"? Oh.

        • modelling how to split the beer atom?
        • by afidel ( 530433 )
          The problem of resolution is normally one of data, not modeling power. The reason forecasts aren't much good past 7-10 days is that the gaps between data collection stations lead to too much future randomness, which no amount of additional processing power will eliminate. There ARE other fields that can take advantage of every bit of processing power you can find: molecular chemistry and proteomics, among others.
      • You touched on it, but I think you missed the #1 biggest winner for high-end GPUs as it pertains to most GPGPU stuff: convolution.

        It is not an exaggeration to call these things super-convolvers, excelling at doing large-scale pairwise multiply-and-adds on arrays of floats, which can be leveraged to do more specific things like large matrix multiplication in what amounts to (in practice) sub-linear time. A great many different problem sets can be expressed as a series of convolutions, including neural netwo
        • matrix multiplication in what amounts to (in practice) sub-linear time.

          What? The GPU matrix multiplications are generally done in the straightforward O(n^3) fashion - you may divide this by something proportional to the number of cores available, but what you mean by "sub-linear" I can't imagine.
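
          For reference, here is a minimal sketch of the straightforward kernel being discussed (illustrative only, nothing like a tuned BLAS routine): each of the n^2 output elements is an n-term dot product, so the total work is still O(n^3); the cores merely divide it up.

          #include <cuda_runtime.h>

          // Naive dense matrix multiply, C = A * B, all n x n, row-major.
          // One thread per output element; each does an n-term dot product.
          __global__ void matmul_naive(const float *A, const float *B, float *C, int n) {
              int row = blockIdx.y * blockDim.y + threadIdx.y;
              int col = blockIdx.x * blockDim.x + threadIdx.x;
              if (row < n && col < n) {
                  float sum = 0.0f;
                  for (int k = 0; k < n; k++)
                      sum += A[row * n + k] * B[k * n + col];
                  C[row * n + col] = sum;
              }
          }

          int main(void) {
              const int n = 512;
              size_t bytes = (size_t)n * n * sizeof(float);
              float *A, *B, *C;
              cudaMalloc(&A, bytes); cudaMalloc(&B, bytes); cudaMalloc(&C, bytes);
              cudaMemset(A, 0, bytes); cudaMemset(B, 0, bytes);
              dim3 block(16, 16);
              dim3 grid((n + 15) / 16, (n + 15) / 16);
              matmul_naive<<<grid, block>>>(A, B, C, n);
              cudaDeviceSynchronize();
              cudaFree(A); cudaFree(B); cudaFree(C);
              return 0;
          }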

  • Cool but... (Score:1, Funny)

    by Anonymous Coward

    ..can it Run CRySiS?

  • A supercomputing cluster is already used for highly parallelized problems. Using hardware that handles those kinds of problems at far greater speed than a typical CPU is a no-brainer. I think the part of the story that would be really interesting to the /. crowd is exactly what kinds of problems they're using this cluster to speed up. GPUs aren't too keen on problems involving data that is hard to cache, and as far as I know, the instruction set is somewhat limited to doing lots of little, parallel
  • Wow, the world of technology is spiking. I remember only a few years ago there was only one massive supercomputer; now every university will have one. What next, link every supercomputer and have a supercomputer cloud, or should I say nebula now? :p The rise of the machines... let me take this time to welcome our new overlords.
    • by u38cg ( 607297 )
      They should link them all together to form one supercomputer. It would need some kind of hardcore name, though, like something out of an Anglo-Saxon epic perhaps.
  • by sonamchauhan ( 587356 ) <sonamc@NOsPam.gmail.com> on Monday November 23, 2009 @06:03AM (#30200552) Journal

    Hmmm.... is this setup a realisation of this release from Nvidia in March?

    Nvidia Touts New GPU Supercomputer
    http://gigaom.com/2009/05/04/nvidia-touts-new-gpu-supercomputer/ [gigaom.com]

    Another 'standalone' GPGPU supercomputer, without the Infiniband switch:
    University of Antwerp makes 4000EUR NVIDIA supercomputer
    http://www.dvhardware.net/article27538.html [dvhardware.net]

  • FINALLY! (Score:3, Funny)

    by TheDarkMaster ( 1292526 ) on Monday November 23, 2009 @06:29AM (#30200612)
    Finally, a machine good enough to run Crysis at full specs at 1680x1050 (well, I hope so)
    • by ozbird ( 127571 )
      Yes, but can it run Duke Nukem Forever?
    • My current computer already runs Crysis at full specs at 1680x1050 you insensitive clod!

      • Mine runs it at 2048x1152.

        I just have trouble controlling my character at 7fps.

        • by Barny ( 103770 )

          Time to upgrade; my 12-month-old hardware runs it at 1920x1200 with all detail on max at 60fps.

          This meme is getting old very fast.

          • I did just upgrade... my monitor! :P

            Are you referring to the insensitive clod meme? Yeah, it's not funny anymore.

          • Your 12-month-old hardware is probably a GTX285/275 SLI setup or a similar ATI one. Most users don't have such luxury :P

            • by Barny ( 103770 )

              Very very close, GTX280 SLI :)

              Most users also don't need to run at 1920x1200, and most users can now afford a GTX275 for their basic gaming machine without too much of a stretch :)

              • This config in Brazil is a little... difficult. You pay maybe $400 for one GTX280; here you are forced to pay around $755 for the exact same card. For an SLI setup the cost goes to $1510... and that's only the cards.
                • by Barny ( 103770 )

                  Likely because 280s were phased out a while back by 285s, which should be cheaper to get a hold of.

          • But... but I do not have the necessary North Korean nuclear reactor to power this... thing :)
  • by Anonymous Coward

    Does it use wood screws?

  • We [harvard.edu] have one of those already; I imagine a lot of schools do. Ours is only an 18-node cluster so the numbers are much smaller, but the story here is that this is relatively big, not that it's some new thing.
    • Tsubame at Tokyo Tech has also had GPUs for well over a year now, and though I'm not sure about the numbers, we're talking large scale (high on the Top500 list).
    • by dlapine ( 131282 )

      We've had the Lincoln cluster [illinois.edu] online and offering processing time since February 2009: 196 compute nodes (dual quad-core) and 96 Tesla units. That being said, congrats to the Aussies for bringing a powerful new system online.

      Someone later in the thread asked if these GPU units would actually be useful for scientific computing. We think so. Our users and researchers here have developed GPU implementations of both NAMD, a parallel molecular dynamics simulator [uiuc.edu], and the MIMD Lattice Computation (MILC) Collaboration code [indiana.edu].

      • by Barny ( 103770 )

        Mod parent up, although limit the mod to around 4; there were no cars in his analogy!

        I can see the CSIRO getting more cool toys in the not-too-distant future, what with their payout from their 802.11n patent win. Great to see them putting the funds to use (although I am betting this baby was on order long before that money flows into their coffers).

  • From an open source point of view, this is a mistake, since we (as open source people) should favor AMD GPUs. Moreover, for the last two years AMD GPUs have seemed faster than Nvidia ones. So despite this bad news, open source people must hold their bearing: favor AMD GPUs regardless.
  • What API would be the best approach for writing future-proof GPU code?
    I'm willing to sacrifice some bleeding-edge performance now for ease of maintainability.

    Other GPU possibilities
    * OpenCL
    * GPGPU
    * CUDA
    * DirectCompute
    * FireStream
    * Larrabee
    * Close to Metal
    * BrookGPU
    * Lib Sh

    Cheers
  • ... a beowulf cluster of those! ;)

    (Sorry, it had to be said)
  • http://www.top500.org/system/10186 [top500.org] The machine quoted in TFA is quoting single-precision figures. Currently the ATI boards trounce the Nvidia boards in double precision. The next GPU cluster down the list is Nvidia-based, at #56: http://www.top500.org/site/690 [top500.org]
  • ... reading a story some time ago about the use of GPU clusters by organizations on national security watch lists to circumvent ITAR controls.
