Inside Tsubame, Japan's GPU-Based Supercomputer
Startled Hippo writes "Japan's Tsubame supercomputer was ranked 29th-fastest in the world in the latest Top 500 ranking, with a speed of 77.48 teraflops (trillions of floating-point operations per second) on the industry-standard Linpack benchmark. Why is it so special? It uses NVIDIA GPUs. Tsubame includes hundreds of graphics processors of the same type used in consumer PCs, working alongside CPUs in a mixed environment that some say is a model for future supercomputers serving disciplines like material chemistry." Unlike the GPU-based Tesla, Tsubame definitely won't be mistaken for a personal computer.
Re:Ofcourse (Score:5, Informative)
Indeed, that's the whole idea behind the recently ratified OpenCL [wikipedia.org] specification. Design a C-like language that provides a standard abstraction layer for the ability to perform complex computations on a CPU, GPU, or conceivably on any number of other devices lying around (e.g. idle I/O Processors, the DSP core in your WinModem, your printer's raster engine...).
Re:Hold the hyperbole - Read again (Score:5, Informative)
You may want to read the article again; if not, here's a recap:
655 Sun boxes, each with 16 AMD cores = 10,480 CPU cores
680 Tesla cards, each with 240 processors = 163,200 GPU processors
As for how to use the GPUs: I use my GTX 280 (almost the same thing as a Tesla) to crunch through lots of numeric calculations in parallel, and I'm sure these guys are doing the same, since that is the strength of the GPU. NVIDIA has made it easier to access the GPU's processing power with CUDA. You write a program in C that gets loaded onto the GPU, and when you launch it you tell it how many copies to run at once; each copy typically operates on a different portion of the data. Because you can launch more threads than there are processors, the GPU can be reading data in from global video memory for some threads while other threads are performing calculations.
Re:Clever name (Score:2, Informative)
Tsubame is actually 'swallow', not 'sparrow', which is suzume.
The missing numbers (Score:3, Informative)
Just to put things in perspective: the GPUs provide only about 10 of the 77 TFLOPS benchmarked in Linpack, per the HPC article [sun.com].
Re:Could do it for cheaper (Score:4, Informative)
ATI's latest cards give more punch per dollar, and they are designed specifically to be clustered/linked/Crossfired and whatnot.
I thought the nV Teslas were designed for HPC.
Performance going up and cost going down happen so quickly that something like that can easily change between the time a system is ordered and the time it's installed.
Re:Supercomputer or many not-so-super computers? (Score:3, Informative)
Wikipedia claims that a supercomputer "is a computer at the forefront of current processing capability" http://en.wikipedia.org/wiki/Supercomputer/ [wikipedia.org]. The Top500 list implies that a supercomputer is a system that can run Linpack really fast, while noting that the system must also be able to run other applications. http://www.top500.org/project/introduction [top500.org]
Given that NCSA has run many supercomputers over the years, and that I've personally run three while working there, I'd say a good rule of thumb is that a supercomputer is a system designed to achieve high calculation throughput (as opposed to instant response), and that the system is at least 100x as powerful as a high-end PC of its time. In fact, you could simplify the rule down to: a system designed as a single unit to achieve high computing performance.
To accomplish all this, supercomputers tend to have two things that a "normal" network of PCs doesn't: a high-speed, low-latency network or interconnect (possibly several networks, each serving a different purpose) and a high-speed, shared filesystem. Also, a supercomputer tends to be designed and installed as a single unit, whereas a network of PCs grows over time.
Supercomputers tend to fall into one of two categories: a large collection of server-class machines (a cluster) or a small set of mainframe-style systems (SMP). If you have the cash, you buy a large set of mainframe-style systems, but who has the cash? Folks tend to purchase clusters because they're less expensive, but you'd have to determine whether your application can work correctly across a large number of systems. Not all computing tasks can.
Tsubame, the system described above, is basically a cluster of inexpensive nodes with a high-speed network. Applications on the cluster run on many of the individual nodes at the same time and use the high-speed network to pass messages to each other during the run, so that the application appears to be working on a single system. Tsubame is a variant of a supercomputer cluster in which each inexpensive node is beefed up with co-processors and accelerators to increase overall performance. Harder to program correctly, but potentially more powerful, and still not as expensive as a large set of mainframes. Hope that helps.