IBM To Build 3-Petaflop Supercomputer
angry tapir writes "The global race for supercomputing power continues unabated: Germany's Bavarian Academy of Science has announced that it has contracted IBM to build a supercomputer that, when completed in 2012, will be capable of up to 3 petaflops (three quadrillion floating-point operations per second), potentially making it the world's most powerful supercomputer. To be called SuperMUC, the computer, which will be run by the Academy's Leibniz Supercomputing Centre in Garching, Germany, will be available to European researchers probing the frontiers of medicine, astrophysics and other scientific disciplines."
Key takeaways from the summary (Score:1)
Key takeaways from the summary:
1. IBM will be responsible for a large 'SuperMuck'.
2. It will be used for probing by the Europeans.
Re: (Score:2)
The SkyNet central core.
They realized that computers in office buildings and dorm rooms aren't powerful enough.
Re: (Score:3, Insightful)
2: huge supercomputers can be leased out to cycle-hungry organizations the same way one would lease office space in a skyscraper.
3: each incremental advancement represents overcoming various hurdles faced by all computing technology; the simple needs of common folk will become that little bit easier as a part of our constant forward march in technological advancement.
Re: (Score:1)
1: it's a dick-wagging contest to have the best supercomputer in the world..
Absolutely correct. This was clearly demonstrated by the accelerated funding and freebie process that occurred when NASA was building its first SGI Altix monster, including emergency meetings with the Governor of California's office, the DOE director's office, Intel, and SGI. The build of that machine would normally have taken a year using the usual method of assembly, construction, and testing. They cut it to less than six months with one goal in mind, and it wasn't science. It was to get a sufficient
Imagine if they overclocked.. oh wait. (Score:2)
This looks like a pretty awesome setup they have. I'm glad that the US has a few supercomputer projects planned for 2012 that will possibly bring the somewhat elusive #1 title back our way. We'll have to see, the competition as always is pushing the envelope and by that time who knows what else could be in the works from China, etc.
Anyway, pre-gratz to the Germans for their new machine. Is anybody familiar with the hot-water cooling tech developed by IBM that's mentioned in the article?
Re: (Score:1)
Also in 2008 they published this [ibm.com], a solution for cooling inside stacked dies.
Re: (Score:1)
Great info, thanks for your response!
Re: (Score:2)
with bubblesort
Re: (Score:1)
Indeed. Since bubble sort only swaps consecutive elements, and every outer loop proceeds over the dataset in linear order, it ensures 1) maximal locality of reference for the best possible use of cache and 2) very predictable memory access, allowing the processor to take advantage of cache read-ahead. No other sorting algorithm gets even close to using the memory hierarchy with such efficiency.
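For the record (and with tongue still firmly in cheek), the strictly adjacent, linear-sweep access pattern being praised is easy to see in a minimal Python sketch (function name mine):

```python
def bubble_sort(a):
    """Plain bubble sort: O(n^2) comparisons, but note that every
    access touches only a[j] and a[j+1] in a linear sweep, which is
    exactly the 'maximal locality' the comment above is praising."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):        # sequential pass over the data
            if a[j] > a[j + 1]:           # only adjacent elements compared
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                   # already sorted: stop early
            break
    return a
```

Of course, perfect cache behavior doesn't help much when you do O(n^2) work, which is the joke.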
Re:3 Petaflops (Score:4, Funny)
So how long would that take to sort 3 petafiles?
Hi, I'm Stone Phillips with Dateline NBC...
Re: (Score:3)
My name is Chris Hansen, you insensitive clod!
Re: (Score:3)
I bow to your superior knowledge on the cataloger of petafiles.
what are you doing here? (Score:2)
what are you doing here?
Not POWER7, Not BlueGene, (Score:3, Interesting)
From the article:
"The system will use 14,000 Intel Xeon processors running in IBM System x iDataPlex servers."
IBM has two in-house HPC platforms that could both reach 3 PFLOPS (BlueGene/Q and POWER7), but instead they're building a Xeon cluster. I'm surprised that they would want to put a machine near the top of the TOP500 that wasn't a full-on IBM effort--maybe IBM Germany is the contractor, and they don't have the R&D expertise? Or the Xeon cluster is cheaper and easier to program and maintain?
Re: (Score:1)
Didn't IBM help the Germans build such a machine back in the 1930s? As I remember, it was to be used for counting people and general census-taking...
Re:Not POWER7, Not BlueGene(BlueGene/Q) (Score:4, Informative)
Neither the Chinese machine nor the German machine is a cutting-edge design. They represent what you can do with near-commodity hardware and good, but not fully custom, packaging. They may look like top-end machines today, but by 2012 they will not be in the top ten.
Re: (Score:2)
Hah! 20 petaflops may look like top end machines in 2012, but by 2015 they will not be in the top ten.
Re: (Score:2)
Bullshit, 3 petaflops should be enough for anyone.
Re: (Score:2)
The cooling system sounds genuinely innovative and beneficial, if successful:
Re: (Score:2)
However, this config makes no mention of GPUs, so it's probably moot. If you are saying they may upgrade later, I would be surprised if they are using systems with enough space to accommodate GPUs without doing it up front. Most of these configurations, regardless of vendor, go to half the CPU density to make room (space-, power-, and cooling-wise) for GPUs. When dealing with a scale of 14k CPUs, you generally pick the config up front and don't bother going back for piecewise upgrades.
The only way to tel
Re: (Score:2)
Many don't care about the architecture because their work is not hard to redo per architecture. Those customers will jump on Itanium, POWER, SPARC, or whatever architecture the vendor has that hits the sweet spot; you can tell who they are because their Top500 entries frequently jump architectures from year to year. These are also the customers most amenable to jumping on the GPU bandwagon, despite the fact that GPUs are more painful to program for and require particular care and feeding to avoid becoming memory-bottlenecked.
Many
I hope they rename it. (Score:2)
Why I love Moore's law (Score:3)
Another way of looking at it is that we'll have a similar amount of power in our phones, tablets, etc. to what we have in our desktops right now. Supercomputers are going to get even more super, and the types of problems that are expensive to solve today will keep getting cheaper to solve. I'm still a young man, but given how far things have come since I was born, I can't help but wonder what the world will be like when I'm many years further along the road, if for no other reason than the vast amount of computational power that will be available to us.
Re:Why I love Moore's law (Score:4, Insightful)
So our desktop computers will wait thousands of times faster than they do today... for the next keystroke or mouse button-press. :-)
Re: (Score:3)
Moore's law is ok.
I prefer that other law, whose name escapes me right now, which says that as computational power increases, Windows will require ALL of it to run, greatly increasing demand for CPU and RAM and lowering the cost of hardware just behind the curve for the rest of us.
Re:Why I love Moore's law (Score:5, Funny)
As Intel giveth, Microsoft taketh away.
Re: (Score:2)
I don't know if it's the efficiency that falls, or just that all the extra power gets used on 3D-shaded semi-transparent smooth-scrolling menus.
Why yes Vista, I did glance at you!
Re: (Score:1)
Back in 1983, I had an Atari 800 and a Kaypro. They prolly had more power than the computers used to land on the moon (and I remember that too). In 1969, my Dad was doing his PhD in fluid dynamics on an IBM with 64k of core memory. My calculator blows the old mainframe away (though the mainframe did useful work for 25 yrs).
Re: (Score:3, Funny)
Germany, FUCK YEAH!
Coming again, to save the mother fucking day yeah,
Germany, FUCK YEAH!
Federal parliamentary republic is the only way yeah,
Computations your game is through 'cause now you have to answer to,
Germany, FUCK YEAH!
Das Land der Dichter und Denker,
Germany, FUCK YEAH!
What you going to do when we come for you now,
it's the dream that we all share; it's the hope for tomorrow
FUCK YEAH!
BMW, FUCK YEAH!
Mercedes, FUCK YEAH!
Porsche, FUCK YEAH!
Engineering, FUCK YEAH!
Efficiency, FUCK YEAH!
Claudia Schiffer, FU
Re: (Score:1)
German beer...Fuck yeah!
And Claudia Schiffer.
(Sorry, my word count program isn't working right now.)
How should we measure supercomputers now? (Score:4, Informative)
Once upon a time, supercomputers were bunches of general-purpose cpu's, and you made them faster by connecting up more of them.
Now people have realized that massively parallel special-purpose chips (like Cell and, even more so, GPUs) can be used to do general-purpose computing, and have started to add those to clusters. But those chips have a lower bandwidth:flops ratio than the x86 etc. CPUs that have historically been used; the gap between a computer's "peak" FLOPS (on an ideal job with no communication requirements to either other nodes or to memory) and the performance it actually achieves is wider using something like CUDA than on a standard supercomputer.

CUDA machines are so bandwidth-limited that people use rather harebrained data compression schemes to move data from place to place, just because all the nodes have extra compute power lying around anyway, and the bottleneck is in communication. (The example that comes to mind is sending the coefficients of the eight generators of an SU(3) matrix rather than just sending the eighteen floats that make up the damn matrix. It's a lot of work to reassemble, relatively speaking, but it's worth it to save a few bytes on the wire.)
CUDA is wonderful, and my field at least (lattice QCD) is falling over itself trying to port stuff to it. Even though it falls far short of its theoretical FLOPS, it's still a hell of a lot faster than a supercomputer made of Opterons. But we shouldn't fool ourselves into thinking that you can accurately measure computer speed now by looking at peak FLOPS. It makes the CUDA/Cell machines look better than they really are.
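To make the compression example above concrete: an SU(3) matrix is 9 complex numbers (18 floats), but it is fully determined by 8 real coefficients in the Gell-Mann generator basis. Here's a toy numpy sketch of the round trip; this is not any particular LQCD library's actual scheme, the function names are mine, and the principal-log trick only works for small rotation angles:

```python
import numpy as np

# The eight Gell-Mann matrices: the standard basis of su(3) generators,
# normalized so that tr(G[j] @ G[k]) = 2 * delta_jk.
s3 = 1 / np.sqrt(3)
GELL_MANN = [np.array(m, dtype=complex) for m in (
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]],
)]

def decompress(a):
    """Rebuild U = exp(i * sum_k a_k G_k) from 8 real coefficients."""
    H = sum(ak * Gk for ak, Gk in zip(a, GELL_MANN))  # hermitian, traceless
    w, V = np.linalg.eigh(H)                          # exact for hermitian H
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

def compress(U):
    """Recover the 8 coefficients from U. Valid when the rotation angles
    are small, so the principal matrix log is the right branch."""
    w, V = np.linalg.eig(U)                           # U is unitary
    H = V @ np.diag(np.log(w) / 1j) @ np.linalg.inv(V)
    # tr(H @ G_k) = 2 * a_k, by the normalization of the generators
    return np.array([np.trace(H @ Gk).real / 2 for Gk in GELL_MANN])

# 8 real numbers on the wire instead of 18 (a 3x3 complex matrix):
a = np.linspace(0.01, 0.08, 8)
U = decompress(a)
print(np.allclose(compress(U), a))  # round trip recovers the coefficients
```

The decompress step costs an eigendecomposition per matrix, which is exactly the "lot of work to reassemble" trade-off the comment describes: burn spare flops to save link bandwidth.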
Re:How should we measure supercomputers now? (Score:5, Informative)
Oh, and the measure you are looking for is the ratio of Rmax to Rpeak, which will tell you how efficient the machine is (at least for LINPACK, which may or may not track with your own code, depending on how chatty it is compared to the benchmark).
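In symbols, efficiency = Rmax / Rpeak. A trivial sketch with made-up numbers (not this machine's actual LINPACK figures):

```python
# Rmax/Rpeak efficiency, with illustrative numbers only: a hypothetical
# 3.0 Pflop/s-peak machine that sustains 2.4 Pflop/s on LINPACK.
rpeak = 3.0  # Rpeak: theoretical peak, Pflop/s
rmax = 2.4   # Rmax: sustained LINPACK result, Pflop/s (made up)
print(f"efficiency = {rmax / rpeak:.0%}")  # prints "efficiency = 80%"
```

Well-tuned CPU clusters typically land in this range on LINPACK; GPU-heavy systems tend to show a noticeably lower ratio, which is the point being made upthread.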
Re: (Score:3, Interesting)
I had to really think about measuring the efficiency of a simulation, and I came up with a single answer: money. I was at a lecture about gyrokinetic simulations, and when I heard about the amount of resources being used for some simulations, I asked, "how much does one of these simulations cost, in euros?" Luckily for me, the guy knew (large simulations cost on the order of thousands), and he also knew how much an experiment on ITER will cost (on the order of a million); his argument was "it's obviously efficient to r
20 and 10 Pflops by then (Score:2)
Welcome CookieMonsterComputingOverlords! (Score:2)
No doubt named after the delicious Leibniz [wikipedia.org] cookies, mmmm, mmmm.
Re: (Score:2)
It took English mathematics a century to recover. So the next time you hear someone criticize something because "that's the way they do it in France" remember: Good artists copy, great artists steal (a bon mot I just came up with, pretty brilliant if I may say so)
the answer to life the universe and everything (Score:2)
Re: (Score:1)
Go Canada!
May have already been done (Score:2)
Not in the top 10, certainly not number 1 (Score:2)