"Intrepid" Supercomputer Fastest In the World 122
Stony Stevenson writes "The US Department of Energy's (DoE) high performance computing system is now the fastest supercomputer in the world for open science, according to the Top 500 list of the world's fastest computers.
The list was announced this week during the International Supercomputing Conference in Dresden, Germany.
IBM's Blue Gene/P, known as 'Intrepid,' is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall.
The supercomputer has a peak performance of 557 teraflops and achieved a speed of 450.3 teraflops on the Linpack application used to measure speed for the Top 500 rankings. According to the list, 74.8 percent of the world's supercomputers (some 374 systems) use Intel processors, a rise of 4 percent in six months. This represents the biggest slice of the supercomputer cake for the firm ever."
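For anyone checking the arithmetic in the summary, here is a minimal sketch (assuming the standard 500-entry Top 500 list, which is how 374 systems works out to 74.8 percent):

/* Quick check of the figures quoted in the summary: Linpack efficiency
 * (measured Rmax vs. theoretical peak) and Intel's share of the list. */
#include <stdio.h>

int main(void)
{
    double peak_tf       = 557.0;   /* Intrepid theoretical peak, TFLOPS */
    double linpack_tf    = 450.3;   /* measured Linpack (Rmax), TFLOPS   */
    int    intel_systems = 374;     /* systems on the list using Intel   */
    int    list_size     = 500;     /* the Top 500 has 500 entries       */

    printf("Linpack efficiency: %.1f%%\n", 100.0 * linpack_tf / peak_tf);
    printf("Intel share:        %.1f%%\n", 100.0 * intel_systems / (double)list_size);
    return 0;
}

That comes out to roughly 80.8 percent Linpack efficiency for Intrepid and exactly 74.8 percent Intel share.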
Re:Perhaps even more importantly (Score:5, Informative)
You were misled by a terrible headline. The 0.557 petaflop computer is the fastest *for open science.* Roadrunner, at Los Alamos, tops the list. It does 1 petaflop.
Wroooong (Score:4, Informative)
"In 1988, Cray Research introduced the Cray Y-MP®, the world's first supercomputer to sustain over 1 gigaflop on many applications. Multiple 333 MFLOPS processors powered the system to a record sustained speed of 2.3 gigaflops."
The difference today is that almost all supercomputers use commodity chips instead of custom-designed cores.
Ohh - and the IBM one is roughly 200,000 times faster than the 20-year-old '88 Cray model.
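A rough check of that ratio, using only the figures quoted above (which number you get depends on whether you compare Intrepid's Linpack or peak figure against the Y-MP's 2.3 gigaflop sustained record):

/* Back-of-the-envelope speedup of Intrepid over the 1988 Cray Y-MP,
 * using the numbers quoted in the comment above. */
#include <stdio.h>

int main(void)
{
    double ymp_sustained_gf    = 2.3;      /* Cray Y-MP record sustained speed, GFLOPS */
    double intrepid_linpack_gf = 450.3e3;  /* 450.3 TFLOPS expressed in GFLOPS */
    double intrepid_peak_gf    = 557.0e3;  /* 557 TFLOPS expressed in GFLOPS   */

    printf("Linpack vs. Y-MP sustained: ~%.0fx\n", intrepid_linpack_gf / ymp_sustained_gf);
    printf("Peak vs. Y-MP sustained:    ~%.0fx\n", intrepid_peak_gf / ymp_sustained_gf);
    return 0;
}

Either way it works out to a couple of hundred thousand times, not quite a million.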
Re:Booooring (Score:5, Informative)
Sorry, but no. As big as one of Google's several data centers might be, it can't touch one of these guys for computational power, memory or communications bandwidth, and it's darn near useless for the kind of computing that needs strong floating point (including double precision) everywhere. In fact, I'd say that Google's systems are targeted to an even narrower problem domain than Roadrunner or Intrepid or Ranger. It's good at what it does, and what it does is very important commercially, but that doesn't earn it a space on this list.
More generally, the "real tests of engineering" are still there. What has changed is that the scaling is now horizontal instead of vertical, and the burden for making whole systems has shifted more to the customer. It used to be that vendors were charged with making CPUs and shared-memory systems that ran fast, and delivering the result as a finished product. Beowulf and Red Storm and others changed all that. People stopped making monolithic systems because they became so expensive that it was infeasible to build them on the same scales already being reached by clusters (or "massively parallel systems" if you prefer).

Now the vendors are charged with making fast building blocks and non-shared-memory interconnects, and customers take more responsibility for assembling the parts into finished systems. That's actually more difficult overall. You think building a thousand-node (let alone 100K-node) cluster is easy? Try it, noob. Besides the technical challenge of putting together the pieces without creating bottlenecks, there's the logistical problem of multiple-vendor compatibility (or lack thereof), and then there's the question of how you program it to do what you need.

It turns out that the programming models and tools that make it possible to write and debug programs on systems this large run almost as well on a decently engineered cluster as they would on a UMA machine - for a tiny fraction of the cost.
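For what it's worth, the kind of programming model the parent is talking about is message passing in the MPI mold, which makes no shared-memory assumption and so runs on a UMA box and a distributed cluster alike. A minimal sketch (illustrative only, nothing specific to Intrepid, Roadrunner, or SiCortex):

/* Minimal MPI sketch: each rank computes a partial result, then all
 * ranks combine results with an allreduce. The same source runs on a
 * shared-memory machine or a distributed-memory cluster; only the MPI
 * library's transport changes underneath. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Pretend each rank owns a slice of a larger computation. */
    double local = (double)(rank + 1);   /* stand-in for real work */
    double total = 0.0;

    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum across %d ranks = %g\n", size, total);

    MPI_Finalize();
    return 0;
}

Compile with mpicc and launch with mpirun -np N; the same code works whether the N ranks live on one shared-memory node or are spread across thousands of cluster nodes.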
Economics is part of engineering, and if you don't understand or don't accept that, then you're no engineer. A system too expensive to build or maintain is not a solution, and the engineer who remains tied to it has failed. It's cost and time to solution that matter, not the speed of individual components. Single-core performance was always destined to hit a wall; we've known that since the early RISC days, and using lots of processors has been the real engineering challenge for two decades now.
Disclosure: I work for SiCortex, which makes machines of this type (although they're probably closer to the single-system model than just about anything they compete with). Try not to reverse cause and effect between my statements and my choice of employer.
Re:Does not compute (Score:2, Informative)
http://arstechnica.com/news.ars/post/20080618-game-and-pc-hardware-combo-tops-supercomputer-list.html
Re:what? where? (Score:5, Informative)
The P in Blue Gene/P stands for "Petaflops", the target performance.
The Q in Blue Gene/Q is probably just the letter after P
The C in Blue Gene/C stands for "cellular computing", now renamed Cyclops64.