Australia's CSIRO To Launch CPU-GPU Supercomputer 82
bennyboy64 contributes this excerpt from CRN Australia: "The CSIRO will this week launch a new supercomputer which uses a cluster of GPUs [pictures] to gain a processing capacity that competes with supercomputers over twice its size.
The supercomputer is one of the world's first to combine traditional CPUs with the more powerful GPUs.
It features 100 Intel Xeon CPU chips and 50 Tesla GPU chips, connected to an 80 Terabyte Hitachi Data Systems network attached storage unit. CSIRO science applications have already seen 10-100x speedups on NVIDIA GPUs."
Stating the obvious, but... (Score:4, Informative)
Graphics processing, the technically demanding part of PC gaming, uses GPUs essentially exclusively. Physics processing, the runner-up, can already be offloaded to technically similar PPUs, or even to actual GPUs working as physics processors. The reason that most apps run on the CPU is that it's easier to write for, not that most apps actually run better on it for some fundamental reason.
Re:Stating the obvious, but... (Score:5, Interesting)
Okay, that's not quite true: most tasks are happy enough piddling about on the CPU, but demanding tasks would be better off running on something faster and more specialised. The barrier to that is that it's harder to write GPGPU code.
Re:Stating the obvious, but... (Score:4, Insightful)
The reason that most apps run on the CPU is that it's easier to write for, not that most apps actually run better on it for some fundamental reason.
Well, that's not exactly true. Of course, frameworks for writing programs that utilize the GPU are still in their infancy, but that doesn't mean that all problems are suited to the GPU. Problems that are best solved by the GPU are problems that can be parallelised. I am not exactly sure what you mean by most apps, but if you are talking about apps typically found on a desktop, that simply isn't true.
The fundamental reason is that GPUs are really good at doing the same thing to different sets of data. For example, you can send an array of 1000 ints and tell the GPU to calculate and return their squares, or something similar. The reason for this is that when GPUs are used for graphics they usually have to do the same operation on every pixel on the screen, and they evolved to be good at that. I cannot see how this is useful for desktop applications, especially if you consider the massive cost of accessing data in main memory from the GPU.
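To make that concrete, here is a minimal sketch of the "square 1000 ints" example, assuming CUDA (the file and variable names are just illustrative):

// square.cu - each GPU thread squares one element of the array.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void square(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n)
        data[i] = data[i] * data[i];
}

int main()
{
    const int n = 1000;
    int host[n];
    for (int i = 0; i < n; ++i) host[i] = i;

    int *dev;
    cudaMalloc(&dev, n * sizeof(int));                              // GPU memory
    cudaMemcpy(dev, host, n * sizeof(int), cudaMemcpyHostToDevice); // CPU -> GPU

    square<<<(n + 255) / 256, 256>>>(dev, n);                       // 4 blocks of 256 threads

    cudaMemcpy(host, dev, n * sizeof(int), cudaMemcpyDeviceToHost); // GPU -> CPU
    cudaFree(dev);

    printf("999 squared = %d\n", host[999]);                        // 998001
    return 0;
}

The same kernel runs unchanged whether n is a thousand or a hundred million; the hardware just schedules more blocks.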
Conceded (Score:1)
There are indeed tasks that don't parallelise well. My brain's filed them as unimportant, but that's likely due to the difficulty of doing computational work that parallelises poorly rather than some fundamental deficiency. A better way of putting it would be to say that most hard-core research computing is done in a manner that's very similar to hard-core gaming computing, so it's actually a very sensible transition.
Re:lollero (Score:5, Informative)
The hardware has been around for quite some time, but now we're realizing all the things a GPU can do besides run pretty games faster.
Re:lollero (Score:4, Informative)
It's only traditional on very particular workloads that are very parallel, use a lot of floating point and have a largely coherent execution pattern/memory access. The CPU is still the king of general computing tasks that have lots of incoherent branches, indirection and that require serialized execution.
Re: (Score:2)
*only more powerful than traditional...
Can someone explain... (Score:2, Interesting)
Can someone explain exactly what the benefits/drawbacks of using GPUs for processing are?
It would also be nice if someone could give a quick run down of what sort of applications GPUs are good at.
Re:Can someone explain... (Score:5, Informative)
Can someone explain exactly what the benefits/drawbacks of using GPUs for processing are?
GPUs are massively parallel, with hundreds of cores handling tens of thousands of threads. The drawbacks are that they have limited instruction sets and don't support a lot of the arbitrary jumping, memory loading, etc. that CPUs do.
It would also be nice if someone could give a quick run down of what sort of applications GPUs are good at.
Anything that is massively parallelisable and processing intensive. The usual bottleneck with GPU programming in normal computers is the overhead of loading from RAM to GPU RAM. Remove this bottleneck in a custom system and you can get enormous speedups in parallel applications once you compile the code down to GPU instructions.
Greater detail I will leave to the experts...
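To put a rough number on that RAM-to-GPU-RAM overhead, here is a sketch assuming CUDA on an ordinary PCIe card (the milliseconds are whatever your own hardware reports) that times the copy separately from the compute:

// copy_vs_compute.cu - measure the PCIe copy and the kernel separately.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;          // trivial arithmetic, just to have some work
}

int main()
{
    const int n = 1 << 24;            // ~16M floats, about 64 MB
    float *host = new float[n];       // contents don't matter for the timing
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));

    cudaEvent_t t0, t1, t2;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

    cudaEventRecord(t0);
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);  // over PCIe
    cudaEventRecord(t1);
    scale<<<(n + 255) / 256, 256>>>(dev, n);                           // on-card
    cudaEventRecord(t2);
    cudaEventSynchronize(t2);

    float copy_ms = 0, kernel_ms = 0;
    cudaEventElapsedTime(&copy_ms, t0, t1);
    cudaEventElapsedTime(&kernel_ms, t1, t2);
    printf("copy %.2f ms, kernel %.2f ms\n", copy_ms, kernel_ms);      // copy usually dominates

    cudaFree(dev);
    delete[] host;
    return 0;
}

For a kernel this trivial the copy dominates; the win comes from keeping data resident on the card and running many kernels against it.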
SIMD (Score:1)
In other words, the GPU is a single-instruction-multiple-data (SIMD) device. It is well matched to simple, regular computations like those that occur in digital signal processing, image processing, computer-generated graphics, etc.
The modern-day GPU is the
Re: (Score:2)
Sorry, you need to tie that comparison to something. What did you mean?
Re:SIMD (Score:5, Funny)
You lost me there, your car analogy contains a train, which threw me off track.
Re: (Score:2)
Nice.
Re: (Score:2)
Well, at least in Half Life I could always select "Software" as a rendering method. It wasn't nice, but it didn't look like "Asteroids".
Re: (Score:3, Interesting)
The main drawback of using GPUs for scientific applications is their poor support for double precision floating point operations.
Using single precision mathematics makes sense for games, where it doesn't matter if a triangle is a few millimetres out of place or the shade of a pixel is slightly wrong. However, in a lot of scientific applications these small errors can build up and completely invalidate results.
There are rumours that Nvidia's next-generation Fermi GPU will support double precision mathematics.
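To see how quickly single precision drifts, here is a tiny host-only sketch (compiles with nvcc or any C++ compiler; the same effect applies on the card):

// drift.cu - the same running sum in single and double precision.
#include <cstdio>

int main()
{
    float  sum_f = 0.0f;
    double sum_d = 0.0;
    // Add 0.1 ten million times; the true answer is 1,000,000.
    for (int i = 0; i < 10000000; ++i) {
        sum_f += 0.1f;   // rounding error accumulates with every addition
        sum_d += 0.1;
    }
    printf("float : %f\n", sum_f);   // visibly wrong, off by tens of thousands
    printf("double: %f\n", sum_d);   // correct to within a tiny fraction
    return 0;
}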
Re: (Score:1, Interesting)
Note that doubles take... er... double the bandwidth of the equivalent single values. GPUs are, afaik, pretty bandwidth intensive... so even if the operators are supported natively you will notice the hurt of the increased bandwidth. Note that the bandwidth is only a problem for actual inputs/outputs of the program... the intermediate values should be in GPU registers... but then again... double registers take twice as much space as single registers, so with the same space you get half of them, supporting less
Re: (Score:2)
Re: (Score:2)
check overflow, if so add 1 to B or D, then add B + D store in D
Most CPUs have an "add with carry" instruction that reduces this sequence of steps to one instruction.
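For reference, the quoted steps look roughly like this when done by hand (a sketch of adding two 64-bit values held as 32-bit halves; a single add-with-carry instruction folds the overflow check and the +1 into one step):

// add64.cu - manual carry propagation, as described in the quoted pseudocode.
#include <stdint.h>

__host__ __device__ void add64(uint32_t a_lo, uint32_t a_hi,
                               uint32_t b_lo, uint32_t b_hi,
                               uint32_t *r_lo, uint32_t *r_hi)
{
    uint32_t lo = a_lo + b_lo;              // add the low words
    uint32_t carry = (lo < a_lo) ? 1u : 0u; // wrapped around? that's the overflow check
    *r_lo = lo;
    *r_hi = a_hi + b_hi + carry;            // fold the carry into the high words
}

If memory serves, PTX has an add-with-carry pair as well, so the GPU side of this isn't hopeless either.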
Faking higher-precision floating point in lower-precision hardware is WORSE.
Agreed, FAR worse.
Re:Can someone explain... (Score:4, Informative)
The next-gen Fermi is supposed to do ~600 double-precision GFLOPS. It also has ECC memory, a threading unit built in, and a lot more cache.
http://en.wikipedia.org/wiki/GeForce_300_Series [wikipedia.org]
Re: (Score:1)
I thought it was Single Instruction Multiple Data (Score:2)
GPUs are massively parallel handling hundreds of cores and tens of thousands of threads
eh? Massively parallel yes. The rest?
More to do with a single instruction performing the same operation on multiple bits of data at the same time. AKA vector processors. Great for physics/graphics processing where you want to perform the same process on lots of bits of data.
Re: (Score:2)
Sort of. NVIDIA's definition of a "thread" is different from a CPU thread -- it's more similar to the instructions executed on a single piece of data in a SIMD system. You're not required to make data-parallel code for the GPU, but certainly data-parallel code is the easiest to write and visualize.
On NVIDIA chips, at least, there are a number of independent processors. The processors execute vector instructions (though all the vector instructions can be conditionally executed, so that, e.g., they only affec
Re: (Score:2)
Drawbacks: trying to program something to take advantage of all that power requires you to scale up to several thousand threads. Not always that easy.
Re: (Score:2)
GPUs are fast but limited to very specific kinds of instructions. If you can write your code using those instructions, it will run much quicker than it would on a general-purpose processor. They're also ahead of the curve on parallelisation compared to desktop chips: writing graphics code for a 12-pipe GPU was mundane half a decade ago, while software support for multiple CPU cores is still scant.
Re: (Score:3, Interesting)
I can take a stab: GPUs traditionally render graphics, so they're good at processing vectors and mathsy things. Now think of a simulation of a bunch of atoms. The forces between the atoms are often approximated with Newtonian laws of motion for computational efficiency reasons, which is especially important when dealing with tens of thousands of atoms - this is called Molecular Dynamics (MD). So the maths used for graphics-intensive computer games is the same as that used for classical MD. The problem hitherto is that MD software has
Re: (Score:2)
On that subject, I just read a great paper on methane hydrates (trapping methane in ice) which wouldn't have been possible without some truly enormous computing horsepower. Studies over a microsecond timescale (which is an eternity for molecules in motion) were needed because of the rarity of the events they were trying to model. Good luck: you're opening up a whole new generation of computational chemistry.
GPUs are good if (Score:5, Informative)
1) Your problem is one that is more or less infinitely parallel in nature. Their method of operation is a whole bunch of parallel pathways; as such, your problem needs to be one that can be broken down into very small parts that can execute in parallel. A single GPU these days can have hundreds of parallel shaders (the GTX 285 has 240, for example).
2) Your problem needs to be fairly linear, without a whole lot of branching. Modern GPUs can handle branching, but they take a heavy penalty doing it. They are designed for processing data streams where you just crunch numbers, not a lot of if-then logic, so your problem should be fairly linear to run well (there's a small sketch of the kind of branch that hurts at the end of this post).
3) Your problem needs to be solvable using single precision floating point math. This is changing (new GPUs are getting double precision capability and better integer handling), but almost all of the ones on the market now are only fast with 32-bit FP. So your problem needs to use that kind of math.
4) Your problem needs to be able to be broken down into pieces that can fit in the memory on a GPU board. This varies; it is typically 512MB-1GB for consumer boards and as much as 4GB for Teslas. Regardless, your problem needs to fit in there for the most part. The memory on a GPU is very fast, with 100GB/sec or more of bandwidth on high-end parts. The communication back to the system via PCIe is usually an order of magnitude slower. So while you certainly can move data to main memory and to disk, it needs to be done sparingly. For the most part, you need to be cranking on stuff that is in the GPU's memory.
Now, the more your problem meets those criteria, the better a candidate it is for acceleration by GPUs. If your problem is fairly small, very parallel, very linear and all single precision, well you will see absolutely massive gains over a CPU. It can be 100x or so. These are indeed the kind of gains you see in computer graphics, which is not surprising given that's what GPUs are made for. If your problem is very single threaded, has tons of branching, requires hundreds of gigs of data and such, well then you might find offloading to a GPU slower than trying it on a CPU. The system might spend more time just getting the data moved around than doing any real work.
The good news is, there's an awful lot of problems that nicely meet the criteria for running on GPUs. They may not be perfectly ideal, but they still run plenty fast. After all, if a GPU is ideally 100x a CPU, and your code can only use it to 10% efficiency, well hell you are still doing 10x what you did on a CPU.
So what kind of things are like this? Well graphics would be the most obvious one. That's where the design comes from. You do math on lots of matrices of 32-bit numbers. This doesn't just apply to consumer game graphics though, material shaders in professional 3D programs work the same way. Indeed, you'll find those can be accelerated with GPUs. Audio is another area that is a real good candidate. Most audio processing is the same kind of thing. You have large streams of numbers representing amplitude samples. You need to do various simple math functions on them to add reverb or compress the dynamics or whatever. I don't know of any audio processing that uses GPUs, but they'd do well for it. Protein folding is another great candidate. Folding@Home runs WAY faster on GPUs than CPUs.
At this point, GPGPU stuff is still really in its infancy. We should start to see more and more of it as more people have GPUs that are useful for GPGPU apps (pretty much DX10 or better hardware: nVidia 8000 or higher and ATi 3000 or higher). Better APIs are also starting to come out. nVidia's CUDA is popular, but proprietary to their cards; MS has introduced GPGPU support in DirectX; and OpenCL has come out and is being supported. As such, you should see more apps slowly start to be developed.
GPUs certainly aren't good at everything, I mean if they were, well then we'd just make CPUs like GPUs and call it good. However there is a large set of problems they are better than the CPU at solving.
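Since I mentioned branching in point 2, here is a minimal sketch (assuming CUDA, names made up) of the kind of data-dependent branch that hurts. Threads in the same 32-wide warp that take different sides of the if are serialised, so you end up paying for both paths:

// branchy.cu - a data-dependent branch inside a kernel.
__global__ void branchy(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] > 0.0f)             // neighbouring threads may disagree here
        out[i] = in[i] * in[i];   // path A
    else
        out[i] = -in[i];          // path B, serialised against path A within a warp
}

Sorting or grouping the data so that threads in a warp tend to agree, or replacing the branch with arithmetic where possible, largely removes the penalty.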
Re: (Score:3, Insightful)
TFA doesn't talk about specific applications but I bet the CSIRO want this machine for modelling. Climate modelling is a big deal here in Australia. Predicting where the water will and will not be. This time of year bush fires are a major threat. I bet that with the right model and the right data you could predict the risk of fire at high resolution and in real time.
Re: (Score:1)
No, this is to hide the space ships from public view. You're on Slashdot, so you've obviously watched Stargate. We have these giant space-faring ships, but we can't let the public know about them. They're obviously going to cover the sky with giant LCD monitors, so amateur astronomers can't see what's going on. Let's just hope they remember to scrape off the logo.
What? That's not what they meant by "launch"? Oh.
Re: TFA doesn't talk about specific applications (Score:2)
Re: (Score:2)
All you need for that is a chisel and a back shed to work in.
Re: (Score:2)
Re: (Score:2)
It is not an exaggeration to call these things super-convolvers, excelling at doing large-scale pairwise multiply-and-adds on arrays of floats, which can be leveraged to do more specific things like large matrix multiplication in what amounts to (in practice) sub-linear time. A great many different problem sets can be expressed as a series of convolutions, including neural netwo
Re: (Score:2)
matrix multiplication in what amounts to (in practice) sub-linear time.
What? The GPU matrix multiplications are generally done in the straightforward O(n^3) fashion - you may divide this by something proportional to the number of cores available, but what you mean by "sub-linear" I can't imagine.
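For what it's worth, in practice you wouldn't hand-roll it anyway; you'd call the vendor BLAS. A minimal sketch using cuBLAS (size and fill values are arbitrary; link with -lcublas):

// gemm.cu - C = A * B on the GPU via cuBLAS; still O(n^3) work, just spread
// across a lot of cores.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main()
{
    const int n = 512;
    std::vector<float> A(n * n, 1.0f), B(n * n, 2.0f), C(n * n, 0.0f);

    float *dA, *dB, *dC;
    cudaMalloc(&dA, n * n * sizeof(float));
    cudaMalloc(&dB, n * n * sizeof(float));
    cudaMalloc(&dC, n * n * sizeof(float));
    cudaMemcpy(dA, A.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // Column-major single-precision GEMM: C = alpha*A*B + beta*C
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(C.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C[0][0] = %.0f\n", C[0]);   // 512 terms of 1*2 = 1024

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}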
Re: (Score:2)
The system is as fast as setups twice the size, i.e. it is half the size.
Re: (Score:2)
This is also true of any typical x86_64 node in an HPC cluster. It's just a regular server board, often optimised for high density (so half-width 1U isn't uncommon) and with a better interconnect than gigabit (like Infiniband). Rack-mount chassis that'll fit double-width PCIe cards used to be tricky to find. Now even Dell produces one (the rack-mount Precision).
Having code that is a good fit for Tesla is a big problem. A lot of HPC work is code that's been tweaked since the 70s and is a mangle of Fortran 4/77/90/95 hacked on b
Re: (Score:2, Insightful)
Coding will never get you publications, so the necessary rewrites are never done. We have code that refuses to compile on compilers newer than ~2003, and even needs specific library versions. It will never be fixed.
To all fellow PhDs out there hacking away at programs: the best thing you can do is to rely heavily on standard libraries (BLAS and LAPACK). The CS guys do get (some) publications out of optimizing those, so there are some impressive speedups to be had there - like recently the FLAME project for
Cool but... (Score:1, Funny)
..can it Run CRySiS?
Seems logical to me. (Score:2)
The World of Tomorrow (Score:1, Funny)
Re: (Score:2)
In related news... (Score:3, Interesting)
Hmmm.... is this setup a realisation of this release from Nvidia in March?
Nvidia Touts New GPU Supercomputer
http://gigaom.com/2009/05/04/nvidia-touts-new-gpu-supercomputer/ [gigaom.com]
Another 'standalone' GPGPU supercomputer, without the Infiniband switch
University of Antwerp makes 4000EUR NVIDIA supercomputer
http://www.dvhardware.net/article27538.html [dvhardware.net]
FINALLY! (Score:3, Funny)
Re: (Score:2)
Re: (Score:2)
My current computer already runs Crysis at full specs at 1680x1050 you insensitive clod!
Re: (Score:2)
Mine runs it at 2048x1152.
I just have trouble controlling my character at 7fps.
Re: (Score:2)
Time to upgrade, my 12-month-old hardware runs it at 1920x1200 with all detail on max at 60fps.
This meme is getting old very fast.
Re: (Score:2)
I did just upgrade... my monitor! :P
Are you referring to the insensitive clod meme? Yeah, it's not funny anymore.
Re: (Score:2)
Your 12-month-old hardware is probably a GTX285/275 SLI setup or a similar ATI one. Most users don't have such luxury :P
Re: (Score:2)
Very very close, GTX280 SLI :)
Most users also don't need to run at 1920x1200, and most users can now afford a GTX275 for their basic gaming machine without too much of a stretch :)
Re: (Score:1)
Re: (Score:2)
Likely because 280s were phased out a while back by 285s, which should be cheaper to get a hold of.
Re: (Score:1)
NVIDIA, huh? (Score:1, Funny)
Does it use wood screws?
not first, just big (Score:2)
Re: (Score:2)
Re: (Score:2)
We've had the Lincoln cluster [illinois.edu] online and offering processing time since February of 2009. 196 computing nodes (dual quad cores) and 96 Tesla units. That being said, congrats to the Aussies for bringing a powerful new system online.
Someone later in thread asked if these GPU units would actually be useful for scientific computing. We think so. Our users and researchers here have developed implementations of both NAMD, a parallel molecular dynamics simulator [uiuc.edu] and MIMD Lattice Computation (MILC) Collaboration [indiana.edu]
Re: (Score:2)
Mod parent up, although limit the mod to around 4, there were no cars in his analogy!
I can see the CSIRO getting more cool toys in the not-too-distant future, what with their payout from their 802.11n patent win. Great to see them putting the funds to use (although I am betting this baby was on order long before that money flows into their coffers).
mistake for open source (Score:1)
CUDA, GPGPU, OpenCL etc. (Score:1)
I'm willing to sacrifice some bleeding edge performance now for ease of maintainability.
Other GPU possibilities
* OpenCL
* GPGPU
* CUDA
* DirectCompute
* FireStream
* Larrabee
* Close to Metal
* BrookGPU
* Lib Sh
Cheers
Imagine... (Score:2)
(Sorry, it had to be said)
The #5 Supercomputer is already GPU based (Score:1)
I seem to recall ... (Score:2)