Inside Tsubame, Japan's GPU-Based Supercomputer
Startled Hippo writes "Japan's Tsubame supercomputer was ranked 29th-fastest in the world in the latest Top 500 ranking, with a speed of 77.48 TFlops (trillion floating-point operations per second) on the industry-standard Linpack benchmark. Why is it so special? It uses NVIDIA GPUs. Tsubame includes hundreds of graphics processors of the same type used in consumer PCs, working alongside CPUs in a mixed environment that some say is a model for future supercomputers serving disciplines like material chemistry." Unlike the GPU-based Tesla, Tsubame definitely won't be mistaken for a personal computer.
Wow! [Obligatory] (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1, Informative)
Hold the hyperbole (Score:2)
On reading the article, the box has 30 thousand cores, of which the vast majority are AMD Opterons in Sun boxes. No mention of how, or in what language, you'd program this to actually put the GPUs to good use.
Re:Hold the hyperbole (Score:4, Insightful)
That's why the supercomputer rankings are based on reasonably complex benchmarks instead of synthetic "cores * flops/core" types of numbers. Scoring well on the benchmark is supposed to be solid evidence that the computer can in fact do something useful. My question though is whether the GPUs contributed to the benchmark score, or were just along for the ride.
Re: (Score:2)
As I recall, GPUs and other vector type processors do quite poorly on Linpack, so probably not.
Re:Hold the hyperbole (Score:4, Insightful)
how would data parallelism negatively affect a test that is designed to measure a system's performance in supercomputing applications--a field which is dominated by problems which involve processing extremely large data sets?
if vector processors do in fact perform poorly on LINPACK benchmarks then that would mean LINPACK performance is not a good indicator of real-world performance, but that clearly isn't the case as vector processors consistently perform quite well in LINPACK suite measurements [hoise.com].
vector processing began in the field of supercomputing, which during the 1980s and 1990s was essentially the exclusive realm of vector processors. it wasn't until companies, to save money, started designing & building supercomputers using commodity processors (P4s, Opterons, etc.) that general-purpose scalar CPUs began to replace specialized vector processors in high-performance computing. but now companies like Cray and IBM [cnet.com] are starting to realize that this change was a mistake.
even in commodity computing the momentum is shifting away from general-purpose scalar CPUs towards specialized vector coprocessors like GPUs, DSPs, array processors, stream processors, etc. when you're dealing with things like scientific modeling, economic modeling, engineering calculations, etc. you need to crunch large data sets using the same operation; this is best done in parallel using SIMD. using specialized vector processors (and instruction sets) you can run these applications far more efficiently than you could using a scalar processor running at a much higher clock speed. the only downside is that you lose the advantage of using commodity hardware that's cheap because of its high-volume production. but if companies like Adobe start developing their applications to employ vector/stream coprocessors, that will boost the adoption of these vector processors in the commodity computing market, which will increase production volume and lower manufacturing costs.
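to make the scalar-vs-SIMD contrast concrete, here's a minimal sketch (mine, not from TFA; the function names and scale factor are illustrative) of the same elementwise operation written for one CPU core and as a CUDA kernel:

    // scalar version: one core walks the data set element by element
    void scale_scalar(float *data, float factor, int n) {
        for (int i = 0; i < n; ++i)
            data[i] *= factor;          // same op, applied serially
    }

    // data-parallel version: thousands of GPU threads each apply the
    // SAME operation to a different element (the SIMD/SIMT pattern)
    __global__ void scale_kernel(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                      // guard: grid may be larger than n
            data[i] *= factor;
    }

same instruction stream, applied across the whole data set at once instead of one element at a time.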
Re: (Score:2)
http://www.netlib.org/linpack/ [netlib.org]
Note that if you've got a vector machine you usually use LAPACK, which is optimized for that architecture.
Re: (Score:2)
LAPACK [wikipedia.org] may be the successor to LINPACK [wikipedia.org], but they were both written for vector processors (PDF) [warwick.ac.uk].
LINPACK was just optimized for the shared-memory architectures that were once popular, whereas LAPACK is optimized to exploit (via the Basic Linear Algebra Subprograms) the cache-based architectures used in modern supercomputers.
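To make the cache point concrete, here is a rough sketch (illustrative only, not LAPACK's actual code; the block size is arbitrary) of loop blocking, the core trick BLAS Level-3 routines use: operate on tiles small enough to stay in cache so each tile is reused many times before being evicted.

    // Rough sketch of cache blocking: work on BS x BS tiles so each
    // tile of A and B is reused from cache across many iterations.
    // Assumes C is zero-initialized; BS = 64 is an illustrative
    // choice -- real libraries tune it per machine.
    #define BS 64
    void matmul_blocked(const double *A, const double *B, double *C, int n) {
        for (int ii = 0; ii < n; ii += BS)
            for (int kk = 0; kk < n; kk += BS)
                for (int jj = 0; jj < n; jj += BS)
                    for (int i = ii; i < ii + BS && i < n; ++i)
                        for (int k = kk; k < kk + BS && k < n; ++k) {
                            double a = A[i * n + k];  // reused across the j loop
                            for (int j = jj; j < jj + BS && j < n; ++j)
                                C[i * n + j] += a * B[k * n + j];
                        }
    }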
Re:Hold the hyperbole - Read again (Score:5, Informative)
You may want to read the article again, if not here's a recap:
655 Sun boxes, each with 16 AMD cores = 10,480 CPU cores
680 Tesla cards, each with 240 processors = 163,200 GPU processors
As for how to use the GPUs: I use my GTX 280 (almost the same thing as a Tesla) to crunch through lots of numeric calculations in parallel. I'm sure these guys are doing the same thing, as that is the strength of the GPU. NVIDIA has made it easier to access the processing power of the GPU with CUDA. You create a program in C that gets loaded on the GPU, and when you launch it you can tell it how many copies to run at one time; each one typically operates on a different portion of the data. Because you can launch more threads than there are processors, the GPU can be reading data in from global video memory while other threads are performing calculations.
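For the curious, a minimal sketch of what that looks like in CUDA C (illustrative only; the kernel, array size, and launch configuration are arbitrary, not my actual code):

    #include <cuda_runtime.h>

    // each thread squares one element; which element is derived from
    // the thread's position in the launch grid
    __global__ void square(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            x[i] = x[i] * x[i];
    }

    int main(void) {
        int n = 1 << 20;                           // ~1M floats
        float *d_x;
        cudaMalloc((void **)&d_x, n * sizeof(float));
        // ... fill d_x with cudaMemcpy from host data ...

        // launch far more threads than there are processors; the GPU
        // swaps thread groups in while others wait on global memory,
        // which is the load/compute overlap described above
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        square<<<blocks, threads>>>(d_x, n);
        cudaDeviceSynchronize();
        cudaFree(d_x);
        return 0;
    }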
Re: (Score:2)
Well I actually work with CUDA and I just used that term, so that makes at least 1 person.
The term "GPU processor" was merely a shorthand method of stating that the number 163,200 related to circuitry that performs calculations, but without as much flexibility as a core on a traditional CPU. They do work, but groups of them share the same instruction. The term "core" would have seemed inaccurate.
Re: (Score:2)
i don't know about CUDA, but when Microsoft discusses the number of "processors" a single instance of their OS supports they are generally referring to logical processors, which they define as:
# of physical processors * # of cores per processor * # of threads per core
that's why Microsoft claims Windows 7 will scale up to 256 processors [zdnet.com]. in reality that's 64 physical processors * 2 cores * 2 threads, or 32 physical processors * 4 cores * 2 threads, etc.
Re: (Score:2)
The GPUs definitely made a huge difference in this case.
Clever name (Score:5, Funny)
Ironic name: tsubame means sparrow in japanese, and also has the slang usage of toy-boy (as in a cougar's toy-boy).
Not sure what to read into that ...
Re: (Score:2, Informative)
Tsubame is actually 'swallow', not 'sparrow', which is suzume.
Re: (Score:3, Funny)
Tsubame is actually 'swallow',
Is that an African, a European, or an Asian swallow?
Re: (Score:2)
I'm imagining Pirates of the Caribbean in Japanese... featuring the lovely Captain Jack Boy-Toy. Fitting.
Re: (Score:2)
Tsubame is also a female first name. And a nice one at that.
No need to dig further than that imo.
Re: (Score:2)
You say that as though we're supposed to know what it means...
Re: (Score:1)
What is a GPU? (Score:3, Interesting)
When it has no graphics out? Is it still a GRAPHICS Processing Unit when it doesn't calculate any graphics and doesn't display any graphics? HUH? ;)
They have a whole lot of these boosting a whole lot of quad-cores.
Re: (Score:1)
Re: (Score:2)
They want the GPUs for their number-crunching ability. Since each GPU would be working on a small portion of the simulation being processed, you are going to need a separate system to fetch whatever item of data you want to visualize. This system is going to have to talk to every GPU in order to collect this data and render it.
Re: (Score:2)
Of course (Score:1)
Re: (Score:2)
NVIDIA/ATI and a bunch of others just built an open spec (library?) that will allow this to happen.
Re:Of course (Score:5, Informative)
Indeed, that's the whole idea behind the recently ratified OpenCL [wikipedia.org] specification. Design a C-like language that provides a standard abstraction layer for the ability to perform complex computations on a CPU, GPU, or conceivably on any number of other devices lying around (e.g. idle I/O Processors, the DSP core in your WinModem, your printer's raster engine...).
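To give a flavor, an OpenCL kernel is essentially C99 with a few qualifiers, and the same source string can be compiled at run time for whichever device is available. A trivial, illustrative example (names are mine, not from the spec):

    // trivial OpenCL kernel: C with a few qualifiers; the runtime
    // compiles this same source for a CPU, GPU, or other device
    __kernel void scale(__global float *data, const float factor, const int n) {
        int i = get_global_id(0);   // this work-item's global index
        if (i < n)
            data[i] *= factor;
    }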
Re: (Score:1)
I thought the whole point of a winmodem was that there wasn't a DSP in it (and that junky printers don't have raster engines, it's in the driver).
Re: (Score:2)
Wow, that nit you managed to pick is tiny.
Re: (Score:1)
I'm not nit picking. The other post is pretty confident in what it says, so I'm actually curious if I am misinformed.
Re: (Score:2)
You're right, WinModems don't have DSPs. I don't know about printers without rasterizing engines being junky, some may be. I haven't heard much about this issue lately. Frankly, I don't know if some of my printers have them or not. I know I have one that supports PCL 6, but it was a high end business printer when it was new. DSPs can be a bit expensive, so it can make some printer tech more affordable. I think the main objection now might be that they didn't support a printing standard, so there was o
Re: (Score:2)
Re: (Score:2)
On the other hand, there is the DSP core in Creative X-Fi cards (not that anyone should own one). Modern TV tuner cards have MPEG-2 encoding units; these must be worth something. Higher-end, professional video hardware like HD video capture cards and real-time video effects rendering cards often have Xilinx FPGAs, most of which probably have a built-in POWER CPU core. In this case, the CPU and the programmability of the FPGA are useful. Actually useful SATA RAID cards that support RAID 5 and RAID 6, like
Re: (Score:2)
Perhaps the Winmodem thing was a poor example, but according to this post [osdir.com], some of them do have DSP hardware, but lack a hardware UART. Whether that poster was correct or not, I'm not sure, but that is consistent with my vague memory on the subject. In any case, that's straying pretty far from the subject at hand. :-)
The missing numbers (Score:3, Informative)
just to get some perspective: the GPUs provide about 10 of the 77 TFLOPS benchmarked in LINPACK, per this HPC article [sun.com]
Could do it for cheaper (Score:2)
Re:Could do it for cheaper (Score:4, Informative)
ATI's latest cards give more punch for the cost apiece, and they are designed specifically for being clustered/linked/xfired and whatnot.
I thought the nV Teslas were designed for HPC.
Performance goes up and cost comes down so quickly that something like that can easily happen between the time it's ordered and the time it's installed.
Re: (Score:3, Insightful)
They could do it cheaper with anything at the current price. However, this wasn't just slopped together last month with the latest hardware off newegg.
No doubt, there's a SC being built up right now around all the latest AMD parts. By the time it gets benchmarked, we'll be able to complain that something else is a better deal.
tesla is a pci-e card... (Score:2)
Supercomputer or many not-so-super computers? (Score:4, Interesting)
Re: (Score:3)
Re: (Score:2)
Well, IANASE (Supercomputer Expert) but I *am* a programmer....
I'm assuming that you have a supercomputer when all those otherwise individual computers are working together in a coordinated fashion on a common problem.
A great example of a supercomputer is SETI @ Home [berkeley.edu] which easily meets the definition of a "supercomputer" in many (most?) circles, although they usually refer to it as "distributed computing".
Re: (Score:1)
The usual distinction between a supercomputer (that may be a cluster) and distributed computing is that in a supercomputer, all the individual computers are under central control. In a distributed computing environment you control your computer and provide resources to someone else's cluster.
The difficulty arises because so many people use similar phrases for slightly different things. You can argue that the second you have more than one processor you are in a 'distributed' computing environment as you are
Re: (Score:3, Informative)
Wikipedia claims that a supercomputer "is a computer at the forefront of current processing capability" http://en.wikipedia.org/wiki/Supercomputer/ [wikipedia.org]. The top500 list implies that a supercomputer is a system that can run Linpack really fast, while noting that the system must also be able to run other applications. http://www.top500.org/project/introduction [top500.org]
Given that NCSA has run many supercomputers over the years, and that I've personally run three while working there, I'd say that a good rule of thumb is that
77.48T Flops (Score:1)
Re: (Score:1)
Model the stock market with it... (Score:2)
Re: (Score:2)
Re: (Score:2)