
Australia's CSIRO To Launch CPU-GPU Supercomputer

bennyboy64 contributes this excerpt from CRN Australia: "The CSIRO will this week launch a new supercomputer which uses a cluster of GPUs [pictures] to gain a processing capacity that competes with supercomputers over twice its size. The supercomputer is one of the world's first to combine traditional CPUs with the more powerful GPUs. It features 100 Intel Xeon CPU chips and 50 Tesla GPU chips, connected to an 80-terabyte Hitachi Data Systems network-attached storage unit. CSIRO science applications have already seen 10-100x speedups on NVIDIA GPUs."


  • by MichaelSmith ( 789609 ) on Monday November 23, 2009 @06:53AM (#30200512) Homepage Journal

    TFA doesn't talk about specific applications, but I bet the CSIRO want this machine for modelling. Climate modelling is a big deal here in Australia: predicting where the water will and will not be. At this time of year, bush fires are a major threat. I bet that with the right model and the right data you could predict the risk of fire at high resolution and in real time.

  • by Anonymous Coward on Monday November 23, 2009 @07:54AM (#30200710)

    Coding will never get you publications, so the necessary rewrites are never done. We have code that refuses to compile on compilers newer than about 2003, and it even needs specific library versions. It will never be fixed.
    To all fellow PhDs out there hacking away at programs: the best thing you can do is rely heavily on the standard libraries (BLAS and LAPACK). The CS guys do get (some) publications out of optimizing those, so there are some impressive speedups to be had there, such as the recent FLAME project on the LAPACK side or GotoBLAS for the BLAS routines.
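    As a rough illustration of that advice, here is a minimal Python sketch. It leans on NumPy/SciPy, which hand the heavy lifting to whatever BLAS/LAPACK build happens to be installed (reference BLAS, GotoBLAS/OpenBLAS, MKL, and so on); the matrix size and data are made up for the example.

        # Minimal sketch: let optimized BLAS/LAPACK do the work instead of hand-rolled loops.
        import numpy as np
        from scipy.linalg import lu_factor, lu_solve   # wraps LAPACK dgetrf / dgetrs

        n = 1000
        a = np.random.rand(n, n)
        b = np.random.rand(n)

        c = a @ a                      # dispatched to BLAS level-3 dgemm, not a Python triple loop
        lu, piv = lu_factor(a)         # LAPACK LU factorization of a
        x = lu_solve((lu, piv), b)     # solve a @ x = b using the factorization

        print(np.allclose(a @ x, b))   # sanity check: should print True

    The point is that the tuned kernels live below this layer, so swapping in a faster BLAS speeds the whole program up without anyone touching the research code.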

  • by Stratoukos ( 1446161 ) on Monday November 23, 2009 @07:58AM (#30200732)

    The reason that most apps run on the CPU is that it's easier to write for, not that most apps actually run better on it for some fundamental reason.

    Well, that's not exactly true. Of course, frameworks for writing programs that use the GPU are still in their infancy, but that doesn't mean that all problems are suited to the GPU. The problems best solved by the GPU are those that can be parallelised. I am not exactly sure what you mean by most apps, but if you are talking about apps typically found on a desktop, that simply isn't true.

    The fundamental reason is that GPUs are really good at doing the same thing to different sets of data. For example, you can send an array of 1000 ints and tell the GPU to calculate and return their squares, or something similar. The reason for this is that when GPUs are used for graphics they usually have to perform the same operation on every pixel on the screen, and they evolved to be good at that. I cannot see how this is useful for desktop applications, especially if you consider the large cost of moving data between main memory and the GPU.
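    To make the "same operation on every element" point concrete, here is a small Python sketch that squares 1000 ints on the GPU. It uses Numba's CUDA target purely as an example toolkit (nothing in the article says what software stack the CSIRO cluster actually runs), and the explicit host-to-device copies show the transfer overhead mentioned above.

        # Sketch of a data-parallel GPU kernel: one thread squares one element.
        import numpy as np
        from numba import cuda

        @cuda.jit
        def square(src, dst):
            i = cuda.grid(1)              # absolute index of this thread
            if i < src.size:              # guard against threads past the end
                dst[i] = src[i] * src[i]

        x = np.arange(1000, dtype=np.int32)
        d_x = cuda.to_device(x)           # explicit host-to-device copy
        d_y = cuda.device_array_like(d_x)
        square[4, 256](d_x, d_y)          # 4 blocks x 256 threads >= 1000 elements
        y = d_y.copy_to_host()            # device-to-host copy back
        print(y[:5])                      # [0 1 4 9 16]

    On a problem this small the two copies dominate the kernel time, which is exactly why this pattern only pays off for large, data-parallel workloads rather than typical desktop apps.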

