"Intrepid" Supercomputer Fastest In the World 122
Stony Stevenson writes "The US Department of Energy's (DoE) high performance computing system is now the fastest supercomputer in the world for open science, according to the Top 500 list of the world's fastest computers.
The list was announced this week during the International Supercomputing Conference in Dresden, Germany.
IBM's Blue Gene/P, known as 'Intrepid,' is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall.
The supercomputer has a peak performance of 557 teraflops and achieved a speed of 450.3 teraflops on the Linpack application used to measure speed for the Top 500 rankings. According to the list, 74.8 percent of the world's supercomputers (some 374 systems) use Intel processors, a rise of 4 percent in six months. This represents the biggest slice of the supercomputer cake for the firm ever."
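For anyone checking the ratio: Linpack efficiency is just the measured speed divided by theoretical peak. A quick back-of-the-envelope sketch in Python, using only the figures quoted in the summary:

    peak_tflops = 557.0      # theoretical peak, from the summary
    linpack_tflops = 450.3   # measured Linpack speed, from the summary

    efficiency = linpack_tflops / peak_tflops
    print(f"Linpack efficiency: {efficiency:.1%}")  # ~80.8%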
So ... let me be the first to ask ... (Score:5, Funny)
Re:So ... let me be the first to ask ... (Score:5, Funny)
Sure it will.
As long as you don't run any programs.
Honestly.. (Score:2)
Remember when Apple used to compete in this... (Score:2)
Another example of the reality distortion field...remember the "first 64-bit desktop" and "the thinnest laptop*"?
*Ports and DVD drive not included.
Re: (Score:2)
This was the cluster. At its peak it was #14 on the top 500 list.
There's no reason to believe that Apple systems (XServes, etc) couldn't be used for a supercomputer cluster, but since they now use the same Xeon processors as everyone else, there's no compelling reason to choose them over another vendor of similar hardware.
Re: (Score:2)
http://www.top500.org/site/history/2024 [top500.org]
Re: (Score:2)
http://news.search.yahoo.com/news/search?p=Microsoft+supercomputing&c= [yahoo.com]
Of course, as long as they don't re-invent Unix, the planet is safe. I can't picture people cleaning the registry of an atomic explosion simulator.
what? where? (Score:2)
Re:what? where? (Score:4, Funny)
I'm more concerned about A, C, G and T.
Re: (Score:3, Funny)
This raises another question: how much porn can fit into one DNA molecule?
And should we store it in female DNA, just to be on the safe side?
Re:what? where? (Score:5, Informative)
The P in Blue Gene/P stands for "Petaflops", the target performance.
The Q in Blue Gene/Q is probably just the letter after P
The C in Blue Gene/C stands for "cellular computing", now renamed Cyclops64.
Linpack? So does it run Linux? (Score:3, Funny)
Apparently, not necessarily. [netlib.org] It's just some Fortran routines.
So much for that joke.
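For the curious: the benchmark just times a dense solve of Ax = b and converts the elapsed time into flop/s using the conventional operation count 2/3·n³ + 2·n². A toy Python/NumPy sketch of the idea (the real HPL is a distributed implementation and far more elaborate; the n chosen here is purely illustrative):

    import time
    import numpy as np

    # Toy Linpack-style measurement: time a dense solve of Ax = b,
    # then convert to flop/s with the standard count 2/3*n^3 + 2*n^2.
    n = 2000
    rng = np.random.default_rng(42)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)   # LU factorization plus triangular solves
    elapsed = time.perf_counter() - t0

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    print(f"{flops / elapsed / 1e9:.2f} GFLOP/s at n={n}")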
Perhaps even more importantly (Score:3, Informative)
Re:Perhaps even more importantly (Score:5, Informative)
You were misled by a terrible headline. The 0.557 petaflop computer is the fastest *for open science.* Roadrunner, at Los Alamos, tops the list. It does 1 petaflop.
Petaflops (Score:3, Informative)
Re: (Score:2)
Yeah :)
Re: (Score:2)
Let me know when a system not on the list passes the petaflop mark.
That will be newsworthy.
Re: (Score:2)
Extrapolating from the performance development chart [top500.org], which shows a roughly tenfold increase about every 4 years (desktop computers should follow a similar curve), and assuming top desktop computers today hit around 100 gigaflops, you can expect a petaflop on the desktop sometime around 2024.
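The arithmetic behind that estimate, for anyone who wants to tweak the assumptions (both the ~100 gigaflop desktop figure and the 10x-per-4-years growth rate are the parent's assumptions, not measured values):

    import math

    desktop_flops = 100e9            # assumed top desktop speed today (2008)
    target_flops = 1e15              # one petaflop
    growth_per_year = 10 ** (1 / 4)  # tenfold every four years

    years = math.log(target_flops / desktop_flops, growth_per_year)
    print(f"~{years:.0f} years, i.e. around {2008 + round(years)}")  # ~16 -> 2024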
Unclassified speed (Score:1, Funny)
And that's the unclassified speed. Just imagine how fast it can really go! Just like the SR-71!
Supercomputer (Score:5, Funny)
George Broussard says that when the next generation of this machine reaches the desktop, Duke Nukem 4ever will be released. "Really," he said, "the game's been finished for over five years now. We're just waiting for a powerful enough computer to play it on."
Sources say that besides computing power, DNF is waiting for the holographic display. The US Department of Energy's (DoE) high performance computing system lacks a holographic display.
Gamers were reportedly disappointed by the news, although most said the price of the DoE's new computer wouldn't faze them. "After all," one said, "you have to have a decent machine to play any modern game!"
Re: (Score:1)
Wow, I'm waiting for that display too!
Does not compute (Score:5, Informative)
Re: (Score:2)
I'd really, really love to learn about the programming practices one follows for a computer like that.
Re: (Score:2, Informative)
http://arstechnica.com/news.ars/post/20080618-game-and-pc-hardware-combo-tops-supercomputer-list.html [arstechnica.com]
Only partially true (Score:3, Informative)
Nonsense (Score:1)
The actual list (Score:5, Informative)
Inaccurate Summary (Score:2, Informative)
Love the fine print. (Score:1)
Re: (Score:2)
Because the less energy there is, the more the DoE is needed. They have to protect their cushy jobs, you know.
Answer: nuclear bombs (Score:2)
Because they simulate nuclear bombs, now that actual testing is forbidden by international treaty.
"Fastest supercomputer" an overused phrase. (Score:2)
Yet another article that uses the phrase "fastest supercomputer" for attention, then qualifies somewhere in the body which of the dozens of lists it actually tops. We get a new "fastest supercomputer" of varying speed almost every week. See Roadrunner [slashdot.org].
"Fastest supercomputer uses Slashdot"
The fastest supercomputer in Skreech's living room has posted a post on Slashdot.
I don't understand. (Score:1)
Key words: "For open science" (Score:4, Informative)
Re:Cliche (Score:4, Funny)
gDefine: Intrepid [google.com]
Re: (Score:3, Funny)
Intrepid can refer to: [wikipedia.org]
What's the framerate? (Score:1)
This article is wrong or confused (Score:1)
http://www.top500.org/lists/2008/06 [top500.org]
The #1 computer, the one over a petaflop, is RoadRunner at Los Alamos.
#2 is a Blue Gene machine from the DOE
#3 is Intrepid at Argonne.
It's not clear to me how they could be so wrong in the article.
Top500 list, speculation, and private companies (Score:2, Insightful)
Secondly, the real benchmark is the application. Some algorithms run better on some platforms and worse on others. Period. Unless you are running a highly specialized set of applications - and nothing but - the rule of thumb is "design the best system you can
Fastest computer in the world. (Score:2)
Booooring (Score:5, Interesting)
I liked it (back in the Old Days) when supercomputer rankings were based on linear, single-processor performance. Now it's just a question of how much money you can spend to put a lot of processors in a single place. The old way was a real test of engineering. By the current standards, Google (probably) has the largest supercomputer in the world.
Unfortunately, single core performance seems to have hit the wall.
Wroooong (Score:4, Informative)
--
In 1988, Cray Research introduced the Cray Y-MP®, the world's first supercomputer to sustain over 1 gigaflop on many applications. Multiple 333 MFLOPS processors powered the system to a record sustained speed of 2.3 gigaflops. --
The difference today is that almost all supercomputers use commodity chips, instead of custom designed cores.
Oh - and on Linpack the IBM machine is a few hundred thousand times faster than the 20-year-old '88 Cray model (and over a million times faster than a single Y-MP processor).
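Rough numbers, using only the figures quoted above (sustained speeds on both sides, so treat it as a ballpark comparison):

    cray_ymp_sustained = 2.3e9     # gigaflops, the 1988 record sustained speed
    ymp_single_cpu = 333e6         # one Y-MP processor
    intrepid_linpack = 450.3e12    # Intrepid's Linpack result from the summary

    print(f"vs. full Y-MP:    {intrepid_linpack / cray_ymp_sustained:,.0f}x")  # ~195,783x
    print(f"vs. one Y-MP CPU: {intrepid_linpack / ymp_single_cpu:,.0f}x")      # ~1,352,252x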
Re: (Score:2)
Even in the Old Days, supercomputers had multiple processors.
Those aren't the Old Days. :) In fact, that was around the beginning of the New Days, when companies began giving up on linear processors. See also: The Connection Machine.
Re: (Score:1, Insightful)
In supercomputing it's all about bandwidth. It always was and always will be. That's also why Google isn't on the list - a bunch of off the shelf hardware sucks at bandwidth.
Re: (Score:1)
1.write plot.
2.make movie.
3.???.
4.profit.
somebody had to say it.
Re:Booooring (Score:5, Informative)
Sorry, but no. As big as one of Google's several data centers might be, it can't touch one of these guys for computational power, memory or communications bandwidth, and it's darn near useless for the kind of computing that needs strong floating point (including double precision) everywhere. In fact, I'd say that Google's systems are targeted to an even narrower problem domain than Roadrunner or Intrepid or Ranger. It's good at what it does, and what it does is very important commercially, but that doesn't earn it a space on this list.
More generally, the "real tests of engineering" are still there. What has changed is that the scaling is now horizontal instead of vertical, and the burden for making whole systems has shifted more to the customer. It used to be that vendors were charged with making CPUs and shared-memory systems that ran fast, and delivering the result as a finished product. Beowulf and Red Storm and others changed all that. People stopped making monolithic systems because they became so expensive that it was infeasible to build them on the same scales already being reached by clusters (or "massively parallel systems" if you prefer). Now the vendors are charged with making fast building blocks and non-shared-memory interconnects, and customers take more responsibility for assembling the parts into finished systems. That's actually more difficult overall. You think building a thousand-node (let alone 100K-node) cluster is easy? Try it, noob. Besides the technical challenge of putting together the pieces without creating bottlenecks, there's the logistical problem of multiple-vendor compatibility (or lack thereof), and then how do you program it to do what you need? It turns out that the programming models and tools that make it possible to write and debug programs that run on systems this large run almost as well on a decently engineered cluster as they would on a UMA machine - for a tiny fraction of the cost.
Economics is part of engineering, and if you don't understand or don't accept that then you're no engineer. A system too expensive to build or maintain is not a solution, and the engineer who remains tied to it has failed. It's cost and time to solution that matter, not the speed of individual components. Single-core performance was always destined to hit a wall, we've known that since the early RISC days, and using lots of processors has been the real engineering challenge for two decades now.
Disclosure: I work for SiCortex, which makes machines of this type (although they're probably closer to the single-system model than just about anything they compete with). Try not to reverse cause and effect between my statements and my choice of employer.
Almost (Score:2)
This is the only false statement in your posting. Google's data centers are, in fact, a huge pile of Intel/AMD processors connected with a couple of lanes of gigabit Ethernet. True, they are not designed for HPC, and therefore cannot compete with real supercomputers on REAL HPC applications. However, the Top500 list is generated using Linpack, and Linpack is a terrible proxy for real application performance.
Re: (Score:2)
Yes, exactly. In science this is a problem, because even though most scientists are at least proficient in programming (albeit not always adhering to clean coding practices), few know MPI or have the time or will to learn it. In fact, I'm a bit worried: MPI is improving at a much slower rate than compute nodes are going multicore.
This means that even though an optimally tuned application like Linpack will keep showing increasing numbers, typical scientific codes won't keep pace.
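For anyone wondering what MPI-style code actually looks like, here's a minimal sketch using the Python mpi4py bindings (assuming an MPI runtime and mpi4py are installed; the scientific codes the parent has in mind are of course vastly bigger):

    from mpi4py import MPI

    # Each rank sums its own slice of 0..999, then allreduce combines
    # the partial sums. Run with e.g.: mpiexec -n 4 python sum.py
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    lo = rank * 1000 // size
    hi = (rank + 1) * 1000 // size
    partial = sum(range(lo, hi))

    total = comm.allreduce(partial, op=MPI.SUM)
    if rank == 0:
        print(f"total = {total} across {size} ranks")  # 499500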
Re: (Score:2)
True enough. My company provides an optimized MPI implementation, and it has been said that the whole box was designed to run MPI code well, but personally I've never been m
real measure (Score:3, Insightful)
Most codes are somewhere in between. As the machine gets larger, more and more effort has to be put into the design.
Exactly -- Parallelism isn't everything (Score:2)
Better Benchmark (Score:2)
My assumption is that embarrassingly parallel problems aren't the hardest ones to solve, so if throwing money at buying more cores gets you a good score on a benchmark, then perhaps the benchmark is where the problem lies.
A supercomputer under your desk (Score:2)
It seems to me, at least superficially, that supercomputers these days do not use the fastest processors around. I'm sure there are processors faster than the Intels. They just use more of them.
Quite smart, as using commodity processors must save a lot of money compared to specialised processors. And I suppose it may make programming easier as the compilers for the architecture are there already, and are very mature in development.
But then, what we now call an average desktop is what twenty years ago was a supercomputer.
Read the fine print! (Score:1)
Here's [lanl.gov] the actual Fastest Computer in the World.
Intel (Score:1)
Re: (Score:2)
Gotta respect such PR and the sold-out tech journalists (!).
Where's Steele? (Score:1)
But how much power does it use? (Score:2)
Beowulf Cluster of PS3s (Score:5, Interesting)
The PS3's RSX video chip [wikipedia.org] from nVidia does 1.8 TFLOPS on specialized graphics instructions; if you're rendering, you get close to that performance. The PS3's CPU, the Cell [wikipedia.org], gets a theoretical 204 GFLOPS from its more general-purpose (than the RSX) on-chip DSP-type SPEs, plus some more from its on-chip 3.2 GHz PPC core. A higher-end Cell with 8 SPEs (instead of the 7 in the PS3's Cell, one being reserved for "chip utilities") delivers about 100 GFLOPS on Linpack 4096x4096. Overall a PS3 has about 2 TFLOPS, so 278 PS3s have a theoretical peak equal to this supercomputer. But at around $400 apiece they'd cost only about $111,200. YMMV.
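Checking that arithmetic (the ~2 TFLOPS per console is the parent's own estimate, and the $400 price is an assumed 2008 retail figure):

    supercomputer_tflops = 557   # Intrepid's peak, from the summary
    ps3_tflops = 2               # parent's per-console estimate
    price_usd = 400              # assumed retail price per PS3

    consoles = supercomputer_tflops // ps3_tflops        # 278 (rounding down)
    print(consoles, "PS3s ~ $", consoles * price_usd)    # 278 PS3s ~ $111,200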
Re: (Score:1)
PS: yes, those are very expensive 'wires' used to 'connect' them.
Re: (Score:1)
I am assuming you know about the folding@home [scei.co.jp] project, yah..?
Which supposedly hit a petaflop [dmwmedia.com] back in 9/07.
Headline (Score:1)
The summary is wrong - both Intel and AMD together (Score:3, Interesting)
The summary is right. (Score:2)
* A total of 375 systems (75 percent) are now using Intel processors. This is up from six months ago (354 systems, 70.8 percent) and represents the largest share for Intel chips in the TOP500 ever.
* The IBM Power processors passed the AMD Opteron family and are now (again) the second most common processor family with 68 systems (13.6 percent), up from 61 systems (12.2 percent) six months ago. Fifty-six systems (11 percent) are using AMD Opteron processors, down from 78 systems (15.6 percent) six months ago.
I swear I will not hurt anyone... (Score:2)
There are only 12 supercomputers this year (Score:2)
Folding At Home r0ck$0rz its s0ck$0rz (Score:1)
Wrong (Score:2)
I'm pretty sure the fastest computer in the world must be the one on the space shuttle. At least when it is launched.