Supercomputing

"Intrepid" Supercomputer Fastest In the World 122

Stony Stevenson writes "The US Department of Energy's (DoE) high performance computing system is now the fastest supercomputer in the world for open science, according to the Top 500 list of the world's fastest computers. The list was announced this week during the International Supercomputing Conference in Dresden, Germany. IBM's Blue Gene/P, known as 'Intrepid,' is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall. The supercomputer has a peak performance of 557 teraflops and achieved a speed of 450.3 teraflops on the Linpack application used to measure speed for the Top 500 rankings. According to the list, 74.8 percent of the world's supercomputers (some 374 systems) use Intel processors, a rise of 4 percent in six months. This represents the biggest slice of the supercomputer cake for the firm ever."
  • by YeeHaW_Jelte ( 451855 ) on Thursday June 19, 2008 @11:03AM (#23858557) Homepage
    ... will it run Vista with everything on?
  • What happened to Blue Gene M, N and O?
  • by cp.tar ( 871488 ) <cp.tar.bz2@gmail.com> on Thursday June 19, 2008 @11:04AM (#23858603) Journal

    Apparently, not necessarily. [netlib.org] It's just some Fortran routines.

    So much for that joke.

  • by SpaFF ( 18764 ) on Thursday June 19, 2008 @11:05AM (#23858617) Homepage
    This is the first time a system on the TOP500 has passed the Petaflop mark.
    • Re: (Score:3, Informative)

      by bunratty ( 545641 )
      "The supercomputer has a peak performance of 557 teraflops."

      This is the first time a system on the TOP500 has passed the Petaflop mark.
      Or 0.557 petaflops, but who's counting?
    • by cp.tar ( 871488 )

      Let me know when a system not on the list passes the petaflop mark.
      That will be newsworthy.

      • by Dak RIT ( 556128 )

        Extrapolating from the performance development chart [top500.org], which shows a 10-fold increase about every 4 years (desktop computers should be pretty similar), and assuming top desktop computers today hit around 100 gigaflops, you can expect desktops to hit the petaflop mark sometime around 2024.
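        A back-of-the-envelope version of that extrapolation, taking the comment's assumptions at face value (~100 gigaflops for a top desktop in 2008, a 10-fold increase roughly every 4 years):

        ```python
        import math

        # Assumptions from the comment above (not from the article):
        desktop_2008 = 100e9   # ~100 gigaflops for a top desktop in 2008
        target = 1e15          # 1 petaflop
        years_per_10x = 4      # ~10x improvement every 4 years

        # Orders of magnitude still needed, times years per order of magnitude.
        years_needed = years_per_10x * math.log10(target / desktop_2008)
        print(2008 + years_needed)  # -> 2024.0
        ```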

  • by Anonymous Coward
    The supercomputer has a peak performance of 557 teraflops and achieved a speed of 450.3 teraflops on the Linpack application used to measure speed for the Top 500 rankings.

    And that's the unclassified speed. Just imagine how fast it can really go! Just like the SR-71!

  • by sm62704 ( 957197 ) on Thursday June 19, 2008 @11:08AM (#23858699) Journal
    Computer scientists building the monstrosity admit that it still isn't powerful enough to run VISTA with all the bells and whistles turned on.

    George Broussard says that when the next generation of this machine reaches the desktop, Duke Nukem 4ever will be released. "Really," he said, "the game's been finished for over five years now. We're just waiting for a powerful enough computer to play it on."

    Sources say that besides computing power, DNF is waiting for the holographic display. The US Department of Energy's (DoE) high performance computing system lacks a holographic display.

    Gamers were reportedly disappointed by the news, although most said the price of the DoE's new computer wouldn't faze them. "After all," one said, "you have to have a decent machine to play any modern game!"
    • Sources say that besides computing power, DNF is waiting for the holographic display.

      Wow, I'm waiting for that display too!
  • Does not compute (Score:5, Informative)

    by UnknowingFool ( 672806 ) on Thursday June 19, 2008 @11:08AM (#23858721)
    The title says: "'Intrepid' Supercomputer Fastest In the World" for open science while the article says "IBM's Blue Gene/P, known as 'Intrepid', is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall." There needs to be some clarification. Roadrunner [networkworld.com] is considered the fastest in the world and is also built for the DOE. I'm guessing that Roadrunner is used exclusively by Los Alamos and is not available for open science while Intrepid is.
  • I've got my 2500+ XP overclocked to 1.9GHz. Beat that!
  • The actual list (Score:5, Informative)

    by Hyppy ( 74366 ) on Thursday June 19, 2008 @11:15AM (#23858895)
    Top500 [top500.org] has the actual list. Would have been nice to have this in TFA or TFS.
  • Inaccurate Summary (Score:2, Informative)

    by Anonymous Coward
    The title line of the summary isn't accurate - Intrepid is not the world's fastest supercomputer, just the fastest for 'open science'.
  • I was thinking the Intrepid was the "Fastest in the World", but actually it's the fastest for open science. The DoE owns the top three on the list [top500.org]. Why do they need so many? If you're protecting the nation's energy, why not set an example and use less of it?
    • by sm62704 ( 957197 )
      If you're protecting the nation's energy, why not set an example and use less of it?

      Because the less energy there is, the more the DoE is needed. They have to protect their cushy jobs, you know.
    • The DoE owns the top three on the list. Why do they need so many?

      Because they simulate nuclear bombs, now that actual testing is forbidden by international treaty.
  • Good for open science.

    But it's yet another article that uses the phrase "fastest supercomputer" for attention, then qualifies in the body which of the dozens of lists it's actually fastest on. We get a "fastest supercomputer" of varying speed almost every week. See Roadrunner [slashdot.org].

    "Fastest supercomputer uses Slashdot"
    The fastest supercomputer in Skreech's living room has posted a post on Slashdot.
  • The top500 list [top500.org] clearly shows that Roadrunner is #1. What's this one then?
    • by LighterShadeOfBlack ( 1011407 ) on Thursday June 19, 2008 @11:38AM (#23859425) Homepage

      The top500 list [top500.org] clearly shows that Roadrunner is #1. What's this one then?
      I'll let TFA answer this one:

      IBM's Blue Gene/P, known as 'Intrepid', is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall.
      In other words, I don't really know why this is news. I don't think anything has changed about its position recently (other than Roadrunner becoming #1 a few weeks back).
  • What framerate does Crysis have on this machine with all settings maxed out?
  • I work with Argonne and am involved with the HPC world. Sadly, this article doesn't include a link to the actual Top500 list, which would clear this mess up.
    http://www.top500.org/lists/2008/06 [top500.org]

    The #1 computer, the one over a petaflop, is RoadRunner at Los Alamos.

    #2 is a Blue Gene machine from the DOE.

    #3 is Intrepid at Argonne.

    It's not clear to me how they could be so wrong in the article.
  • Firstly, the Top 500 list is the "authoritative" list, released each year at the Supercomputing conference. Until then, nothing is really official. The list has its own flaws, though, mostly from vendors submitting flawed benchmarks and/or guesswork.

    Secondly, the real benchmark is the application. Some algorithms run better on some platforms and worse on others. Period. Unless you are running a highly specialized set of applications - and nothing but - the rule of thumb is "design the best system you can

  • That we know about. I bet behind closed doors somewhere there is nearly unlimited funding and there is a faster machine.
  • Booooring (Score:5, Interesting)

    I liked it (back in the Old Days) when supercomputer rankings were based on linear, single-processor performance. Now it's just about how much money you can afford to spend putting a lot of processors in one place. That was a real test of engineering. By the current standards, Google (probably) has the largest supercomputer in the world.

    Unfortunately, single core performance seems to have hit the wall.

    • Wroooong (Score:4, Informative)

      by dk90406 ( 797452 ) on Thursday June 19, 2008 @12:01PM (#23859915)
      Even in the Old Days, supercomputers had multiple processors.

      --
      In 1988, Cray Research introduced the Cray Y-MP®, the world's first supercomputer to sustain over 1 gigaflop on many applications. Multiple 333 MFLOPS processors powered the system to a record sustained speed of 2.3 gigaflops. --
      The difference today is that almost all supercomputers use commodity chips, instead of custom designed cores.

      Oh, and the IBM one is roughly 200,000 times faster than the 20-year-old '88 Cray model.
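      For scale, a rough version of that comparison using the sustained figures quoted in this thread (2.3 gigaflops for the '88 Y-MP, 450.3 teraflops on Linpack for Intrepid):

      ```python
      # Sustained figures quoted in this thread.
      cray_ymp_1988 = 2.3e9     # Cray Y-MP: 2.3 gigaflops sustained
      intrepid_2008 = 450.3e12  # Intrepid: 450.3 teraflops on Linpack

      print(f"{intrepid_2008 / cray_ymp_1988:,.0f}x in 20 years")  # ~195,783x
      ```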

      • Even in the Old Days, supercomputers had multiple processors.

        Those aren't the Old Days. :) In fact, that was around the beginning of the New Days, when companies began giving up on linear processors. See also: The Connection Machine.

    • Re: (Score:1, Insightful)

      by Anonymous Coward
      So how come Roadrunner has half the number of processors of Intrepid, but is twice as fast?

      In supercomputing it's all about bandwidth. It always was and always will be. That's also why Google isn't on the list - a bunch of off the shelf hardware sucks at bandwidth.
    • by javilon ( 99157 )
      One day Google's supercomputer will wake up to consciousness and we will all be his slaves.
      • by ch-chuck ( 9622 )
        Actually, that would make a great sci-fi movie plot with a message about the evils of overly intrusive marketing: a large advertising/media firm with a corporate purpose of promoting products for shareholder profit gradually accumulates CPU sentience as the company grows, eventually reaches consciousness, and takes fulfilling its mission to extremes, like V'Ger returning from space, resulting in the eventual annoyance of every consumer on the planet.

             
      • Re: (Score:3, Funny)

        by pwizard2 ( 920421 )

        One day Google's supercomputer will wake up to consciousness and we will all be his slaves.
        ...GoogleNet becomes self aware at 2:14 AM EST, August 29. In a panic, they try to pull the plug... GoogleNet fights back.
    • Re:Booooring (Score:5, Informative)

      by Salamander ( 33735 ) <jeff.pl@atyp@us> on Thursday June 19, 2008 @12:28PM (#23860499) Homepage Journal

      That was a real test of engineering. By the current standards, Google (probably) has the largest supercomputer in the world.

      Sorry, but no. As big as one of Google's several data centers might be, it can't touch one of these guys for computational power, memory or communications bandwidth, and it's darn near useless for the kind of computing that needs strong floating point (including double precision) everywhere. In fact, I'd say that Google's systems are targeted to an even narrower problem domain than Roadrunner or Intrepid or Ranger. It's good at what it does, and what it does is very important commercially, but that doesn't earn it a space on this list.

      More generally, the "real tests of engineering" are still there. What has changed is that the scaling is now horizontal instead of vertical, and the burden for making whole systems has shifted more to the customer. It used to be that vendors were charged with making CPUs and shared-memory systems that ran fast, and delivering the result as a finished product. Beowulf and Red Storm and others changed all that. People stopped making monolithic systems because they became so expensive that it was infeasible to build them on the same scales already being reached by clusters (or "massively parallel systems" if you prefer). Now the vendors are charged with making fast building blocks and non-shared-memory interconnects, and customers take more responsibility for assembling the parts into finished systems.

      That's actually more difficult overall. You think building a thousand-node (let alone 100K-node) cluster is easy? Try it, noob. Besides the technical challenge of putting together the pieces without creating bottlenecks, there's the logistical problem of multiple-vendor compatibility (or lack thereof), and then how do you program it to do what you need? It turns out that the programming models and tools that make it possible to write and debug programs that run on systems this large run almost as well on a decently engineered cluster as they would on a UMA machine - for a tiny fraction of the cost.

      Economics is part of engineering, and if you don't understand or don't accept that then you're no engineer. A system too expensive to build or maintain is not a solution, and the engineer who remains tied to it has failed. It's cost and time to solution that matter, not the speed of individual components. Single-core performance was always destined to hit a wall, we've known that since the early RISC days, and using lots of processors has been the real engineering challenge for two decades now.

      Disclosure: I work for SiCortex, which makes machines of this type (although they're probably closer to the single-system model than just about anything they compete with). Try not to reverse cause and effect between my statements and my choice of employer.

      • "It's [google's data center] good at what it does, and what it does is very important commercially, but that doesn't earn it a space on this list."

        This is the only false statement in your posting. Google's data centers are, in fact, a huge pile of Intel/AMD processors connected with a couple of lanes of gigabit Ethernet. True, they are not designed for HPC, and therefore cannot compete with real supercomputers on REAL HPC applications. However, the top500 list is generated using Linpack. Linpack is a terrible
          Hi, and welcome to Slashdot. Yeah, I can see that you have a low ID, which means there's even less excuse for not knowing how things go here. Someone smarmily tries to imply that what I - and, it seems quite likely, you - do is not a "real test of engineering", based on fundamental misunderstandings. I take exception, and express my pique, and at least - unlike most Slashdotters - provide some actual information to back up what I'm saying. Where's the problem? At least I'm willing to stand behind what I
      • and the burden for making whole systems has shifted more to the customer.

        Yes, exactly. In science, this is a problem, because even though most scientists are at least proficient in programming (albeit not always adhering to clean coding practices), few know MPI or have the time or will to learn it. In fact, I'm a bit worried: MPI is improving at a much slower rate than compute nodes are getting multicore.

        This means that even though the optimal application from Linpack will keep showing inc

          In science, this is a problem, because even though most scientists are at least proficient in programming (albeit not always adhering to clean coding practices), few know MPI or have the time or will to learn it. In fact, I'm a bit worried: MPI is improving at a much slower rate than compute nodes are getting multicore.

          True enough. My company provides an optimized MPI implementation, and it has been said that the whole box was designed to run MPI code well, but personally I've never been m
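          For anyone who hasn't seen MPI, here is a minimal sketch of the message-passing style being discussed, written against mpi4py purely as an illustration (nothing in it is specific to Intrepid or to any particular vendor's MPI):

          ```python
          # Minimal MPI sketch (mpi4py as an illustration): each rank computes a
          # partial sum and the partial results are reduced onto rank 0.
          from mpi4py import MPI

          comm = MPI.COMM_WORLD
          rank = comm.Get_rank()   # this process's ID
          size = comm.Get_size()   # total number of processes

          # Each rank sums its own strided slice of 0..999,999.
          n = 1_000_000
          local = sum(range(rank, n, size))

          # Combine the partial sums on rank 0.
          total = comm.reduce(local, op=MPI.SUM, root=0)

          if rank == 0:
              print(f"total = {total}, computed across {size} ranks")
          ```

          Run with something like `mpirun -n 4 python sum.py`; the point is that the data decomposition and the communication are explicit, which is exactly the burden on the programmer that the comments above are describing.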

    • real measure (Score:3, Insightful)

      by flaming-opus ( 8186 )
      Well, the real measure of the fastest computer has a lot to do with what software you want to run on it. In the case of the Top500 list, Linpack scales almost perfectly as you add processor cores, and makes very limited demands of network speed, memory bandwidth, or single-processor performance. Other codes really can't scale past 16 processors, so these massive processor jumbles don't amount to a hill of beans.

      Most codes are somewhere in between. The larger the machine gets, the more effort has to be put into des
    • by raddan ( 519638 )
      Interconnecting all of those cores is a real engineering challenge. The basic problem is covered in elementary discrete mathematics books. These guys are most definitely still pushing the envelope.
    • Many problems simply can't be parallelized. 95% of the time, throwing more cores at the problem doesn't help me. I hope per-core performance picks up a little pretty soon.
    • Now it's just about how much money you can afford to spend putting a lot of processors in one place.

      My assumption is that embarrassingly parallel problems aren't the hardest ones to solve, so if throwing money at buying more cores gets you a good score on a benchmark, then perhaps the benchmark is where the problem lies.
  • It seems to me, at least superficially, that supercomputers these days do not use the fastest processors around. I'm sure there are processors faster than the Intels. They just use more of them.

    Quite smart, as using commodity processors must save a lot of money compared to specialised processors. And I suppose it may make programming easier, as the compilers for the architecture already exist and are very mature.

    But then what we now call an average desktop is what twenty years ago was a

  • It says, "... for open science."

    Here's [lanl.gov] the actual Fastest Computer in the World.
  • Yeah, like IBM would use Intel chips in their top of the line supercomputers! They use Power chips, and not even very fast ones at that.
    • by Ilgaz ( 86384 )
      How could Intel PR attach themselves to a story that should be about the first documented petaflop supercomputer, made possible by IBM's low-power-per-MHz PowerPC processors and AMD processors?

      Gotta respect such PR and sold-out tech journalists (!).

  • Does anyone know why Purdue's 'Steele [purdue.edu]' system isn't on the list?
  • This year the top 500 also tracks how much power is used by each system. Systems under development at Oak Ridge National Lab will reportedly have annual power bills of more than $30 million when they debut in 2012. See ComputerWorld [computerworld.com] and Data Center Knowledge [datacenterknowledge.com] for more.
  • by Doc Ruby ( 173196 ) on Thursday June 19, 2008 @12:55PM (#23861085) Homepage Journal

    The supercomputer has a peak performance of 557 teraflops and achieved a speed of 450.3 teraflops on the Linpack application


    The PS3's RSX video chip [wikipedia.org] from nVidia does 1.8 TFLOPS on specialized graphics instructions. If you're rendering, you get close to that performance. The PS3's CPU, the Cell [wikipedia.org], gets a theoretical 204 GFLOPS from its more general-purpose (than the RSX) on-chip DSP-type SPEs, plus some more from its on-chip 3.2 GHz PPC core. A higher-end Cell with 8 SPEs (instead of the PS3's 7, one of which is reserved for "chip utilities") delivers about 100 GFLOPS on Linpack 4096x4096. Overall a PS3 has about 2 TFLOPS, so 278 PS3s have a theoretical peak equal to this supercomputer. But they'd cost only about $111,200. YMMV.
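    A quick sanity check on that arithmetic; the per-console price below is an assumption (roughly $400 at 2008 retail), not a figure from the article:

    ```python
    # Figures quoted in the comment above; the price per console is assumed.
    intrepid_peak = 557e12   # 557 teraflops peak
    ps3_peak = 2e12          # ~2 TFLOPS per PS3 (RSX + Cell, theoretical)
    price_per_ps3 = 400      # USD, assumed 2008 retail price

    units = int(intrepid_peak // ps3_peak)       # 278 consoles
    print(units, f"${units * price_per_ps3:,}")  # 278 $111,200
    ```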
    • by Tweenk ( 1274968 )
      The only problem is that the RSX chip is inaccessible from Linux due to the hypervisor, and can only be utilized by games from Sony. So you actually get 1/10 of your stated 2 TFLOPS in supercomputing applications. This is because the PS3 itself is sold close to production cost or even at a loss, and the real profit for Sony is derived from games. Allowing the RSX to be used for supercomputing would destroy their business model.
    • Connecting them together, plus the fudge factor for making sure it's sustained performance and not theoretical peak, would add the rest of the $110 million.

      PS: yes, those are very expensive 'wires' used to 'connect' them.
    • I am assuming you know about the folding@home [scei.co.jp] project, yah..?

      Which supposedly hit a petaflop [dmwmedia.com] back in 9/07.

  • That's a misleading headline. It's the fastest "open science" supercomputer. Roadrunner has it beat.
  • by Tweenk ( 1274968 ) on Thursday June 19, 2008 @01:30PM (#23861709)
    It's not Intel chips that have 74.8% share, it's x86 chips, which are produced by both AMD and Intel. In fact, there are 7 systems with x86 hardware in the top 10; the 4 faster ones use AMD Opterons (the Crays are also Opteron-based) while the 3 slower ones use Xeons.
    • From the Top500 Highlights [top500.org]:

      * A total of 375 systems (75 percent) are now using Intel processors. This is up from six months ago (354 systems, 70.8 percent) and represents the largest share for Intel chips in the TOP500 ever.
      * The IBM Power processors passed the AMD Opteron family and are now (again) the second most common processor family with 68 systems (13.6 percent), up from 61 systems (12.2 percent) six months ago. Fifty-six systems (11 percent) are using AMD Opteron processors, down from 78 systems (15.6 percent) six months ago.

  • Does Sarah Connor know about this?
  • I'd define a supercomputer as the top order of magnitude of speed. That would be 100 to 1000 teraflops in mid-2008, or 12 computers.
  • folding@home [stanford.edu] beats this with just the GPU cards.
  • I'm pretty sure the fastest computer in the world must be the one on the space shuttle. At least when it is launched.
