Supercomputing

"Intrepid" Supercomputer Fastest In the World 122

Stony Stevenson writes "The US Department of Energy's (DoE) high performance computing system is now the fastest supercomputer in the world for open science, according to the Top 500 list of the world's fastest computers. The list was announced this week during the International Supercomputing Conference in Dresden, Germany. IBM's Blue Gene/P, known as 'Intrepid,' is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall. The supercomputer has a peak performance of 557 teraflops and achieved a speed of 450.3 teraflops on the Linpack application used to measure speed for the Top 500 rankings. According to the list, 74.8 percent of the world's supercomputers (some 374 systems) use Intel processors, a rise of 4 percent in six months. This represents the biggest slice of the supercomputer cake for the firm ever."
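
For reference, a quick sanity check of the figures in the summary (plain Python; all numbers are taken directly from the text above):

```python
# Figures quoted in the summary above.
rmax_tflops = 450.3      # measured Linpack speed
rpeak_tflops = 557.0     # theoretical peak
intel_systems = 374      # Top 500 systems using Intel processors
list_size = 500

print(f"Linpack efficiency: {rmax_tflops / rpeak_tflops:.1%}")        # ~80.8% of peak
print(f"Intel's share of the list: {intel_systems / list_size:.1%}")  # 74.8%
print(f"Peak in petaflops: {rpeak_tflops / 1000:.3f}")                # 0.557
```
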
This discussion has been archived. No new comments can be posted.

"Intrepid" Supercomputer Fastest In the World

Comments Filter:
  • by SpaFF ( 18764 ) on Thursday June 19, 2008 @12:05PM (#23858617) Homepage
    This is the first time a system on the TOP500 has passed the Petaflop mark.
  • by bunratty ( 545641 ) on Thursday June 19, 2008 @12:08PM (#23858693)
    "The supercomputer has a peak performance of 557 teraflops."

    This is the first time a system on the TOP500 has passed the Petaflop mark.
    Or 0.557 petaflops, but who's counting?
  • Does not compute (Score:5, Informative)

    by UnknowingFool ( 672806 ) on Thursday June 19, 2008 @12:08PM (#23858721)
    The title says: "'Intrepid' Supercomputer Fastest In the World" for open science while the article says "IBM's Blue Gene/P, known as 'Intrepid', is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall." There needs to be some clarification. Roadrunner [networkworld.com] is considered the fastest in the world and is also built for the DOE. I'm guessing that Roadrunner is used exclusively by Los Alamos and is not available for open science while Intrepid is.
  • by clem.dickey ( 102292 ) on Thursday June 19, 2008 @12:14PM (#23858849)

    Or 0.557 petaflops, but who's counting?

    You were misled by a terrible headline. The 0.557 petaflop computer is the fastest *for open science.* Roadrunner, at Los Alamos, tops the list. It does 1 petaflop.

  • The actual list (Score:5, Informative)

    by Hyppy ( 74366 ) on Thursday June 19, 2008 @12:15PM (#23858895)
    Top500 [top500.org] has the actual list. Would have been nice to have this in TFA or TFS.
  • Inaccurate Summary (Score:2, Informative)

    by Anonymous Coward on Thursday June 19, 2008 @12:18PM (#23858963)
    The title line of the summary isn't accurate - Intrepid is not the world's fastest supercomputer, just the fastest for 'open science'.
  • by LighterShadeOfBlack ( 1011407 ) on Thursday June 19, 2008 @12:38PM (#23859425) Homepage

    The top500 list [top500.org] clearly shows that Roadrunner is #1. What's this one then?
    I'll let TFA answer this one:

    IBM's Blue Gene/P, known as 'Intrepid', is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall.
    In other words I don't really know why this is news. I don't think anything has changed about its position recently (other than Roadrunner becoming #1 a few weeks back).
  • by k_187 ( 61692 ) on Thursday June 19, 2008 @12:49PM (#23859661) Journal
    Actually, Intrepid does run Linux, according to the list.
  • Wroooong (Score:4, Informative)

    by dk90406 ( 797452 ) on Thursday June 19, 2008 @01:01PM (#23859915)
    Even in the Old Days, supercomputers had multiple processors.

    "In 1988, Cray Research introduced the Cray Y-MP®, the world's first supercomputer to sustain over 1 gigaflop on many applications. Multiple 333 MFLOPS processors powered the system to a record sustained speed of 2.3 gigaflops."
    The difference today is that almost all supercomputers use commodity chips, instead of custom designed cores.

    Ohh, and the IBM one is almost a million times faster than the 20-year-old '88 Cray model.
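
For what it's worth, a rough check of that ratio, using the Linpack figure from the summary and the Y-MP numbers quoted above (the exact multiplier depends on which baseline you pick):

```python
# All figures in gigaflops, taken from the summary and the Cray quote above.
intrepid_linpack = 450_300   # Intrepid's 450.3 TFLOPS Linpack result
ymp_system = 2.3             # Cray Y-MP record sustained speed (full system)
ymp_single_cpu = 0.333       # one 333 MFLOPS Y-MP processor

print(f"vs. the full Y-MP: {intrepid_linpack / ymp_system:,.0f}x")      # ~196,000x
print(f"vs. one Y-MP CPU:  {intrepid_linpack / ymp_single_cpu:,.0f}x")  # ~1.35 million x
```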

  • Re:Booooring (Score:5, Informative)

    by Salamander ( 33735 ) <jeff@ p l . a t y p.us> on Thursday June 19, 2008 @01:28PM (#23860499) Homepage Journal

    That was a real test of engineering. By the current standards, Google (probably) has the largest supercomputer in the world.

    Sorry, but no. As big as one of Google's several data centers might be, it can't touch one of these guys for computational power, memory or communications bandwidth, and it's darn near useless for the kind of computing that needs strong floating point (including double precision) everywhere. In fact, I'd say that Google's systems are targeted to an even narrower problem domain than Roadrunner or Intrepid or Ranger. It's good at what it does, and what it does is very important commercially, but that doesn't earn it a space on this list.

    More generally, the "real tests of engineering" are still there. What has changed is that the scaling is now horizontal instead of vertical, and the burden for making whole systems has shifted more to the customer. It used to be that vendors were charged with making CPUs and shared-memory systems that ran fast, and delivering the result as a finished product. Beowulf and Red Storm and others changed all that. People stopped making monolithic systems because they became so expensive that it was infeasible to build them on the same scales already being reached by clusters (or "massively parallel systems" if you prefer). Now the vendors are charged with making fast building blocks and non-shared-memory interconnects, and customers take more responsibility for assembling the parts into finished systems. That's actually more difficult overall. You think building a thousand-node (let alone 100K-node) cluster is easy? Try it, noob. Besides the technical challenge of putting together the pieces without creating bottlenecks, there's the logistical problem of multiple-vendor compatibility (or lack thereof), and then how do you program it to do what you need? It turns out that the programming models and tools that make it possible to write and debug programs that run on systems this large run almost as well on a decently engineered cluster as they would on a UMA machine - for a tiny fraction of the cost.

    Economics is part of engineering, and if you don't understand or don't accept that then you're no engineer. A system too expensive to build or maintain is not a solution, and the engineer who remains tied to it has failed. It's cost and time to solution that matter, not the speed of individual components. Single-core performance was always destined to hit a wall, we've known that since the early RISC days, and using lots of processors has been the real engineering challenge for two decades now.

    Disclosure: I work for SiCortex, which makes machines of this type (although they're probably closer to the single-system model than just about anything they compete with). Try not to reverse cause and effect between my statements and my choice of employer.
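
To make the point about programming models concrete, here is a minimal sketch of the message-passing style these machines (and ordinary clusters) are programmed in. It uses Python with mpi4py purely for illustration; production codes are typically C or Fortran with MPI, and the task here (estimating pi by integrating 4/(1+x^2)) is just a stand-in for real work:

```python
# Minimal divide-compute-reduce sketch with mpi4py (assumed installed).
# Run with something like: mpirun -n 4 python pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's ID
size = comm.Get_size()      # total number of processes

n = 10_000_000              # quadrature intervals, shared evenly by all ranks
h = 1.0 / n
local = 0.0
for i in range(rank, n, size):      # each rank handles every size-th interval
    x = h * (i + 0.5)
    local += 4.0 / (1.0 + x * x)
local *= h

# Combine the partial sums on rank 0 -- the only inter-process communication.
pi = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~= {pi:.10f} on {size} ranks")
```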

  • Re:Does not compute (Score:2, Informative)

    by hlimethe3rd ( 879459 ) on Thursday June 19, 2008 @01:45PM (#23860881)
    Actually, Roadrunner uses the Cell chip for the heavy lifting, not the AMD chips:
    http://arstechnica.com/news.ars/post/20080618-game-and-pc-hardware-combo-tops-supercomputer-list.html [arstechnica.com]
  • Only partially true (Score:3, Informative)

    by Nursie ( 632944 ) on Thursday June 19, 2008 @01:45PM (#23860889)
    It's made of tri-blade units: one Opteron blade to do I/O and various other mundane things, and then two PowerXCell 8i (I think I have that right) blades to do the heavy lifting.
  • Re:what? where? (Score:5, Informative)

    by Henriok ( 6762 ) on Thursday June 19, 2008 @02:13PM (#23861413)
    The L in Blue Gene/L stands for Lawrence Livermore National Laboratory, the site of the first installation.
    The P in Blue Gene/P stands for "petaflops", the target performance.
    The Q in Blue Gene/Q is probably just the letter after P.
    The C in Blue Gene/C stands for "cellular computing"; it has since been renamed Cyclops64.
  • Petaflops (Score:3, Informative)

    by Henriok ( 6762 ) on Thursday June 19, 2008 @02:16PM (#23861445)
    ...or more correctly: 1 petaflops. You can't leave the trailing "s" out; it stands for "second", and "floating point operations per" doesn't mean much on its own.
