Supercomputing Hardware

NNSA Supercomputer Breaks Computing Record

Lecutis writes "National Nuclear Security Administration (NNSA) Administrator Linton F. Brooks announced that on March 23, 2005, a supercomputer developed through the Advanced Simulation and Computing program for NNSA's Stockpile Stewardship efforts has performed 135.3 trillion floating point operations per second (teraFLOP/s) on the industry standard LINPACK benchmark, making it the fastest supercomputer in the world."

Comments Filter:
  • Neat (Score:3, Interesting)

    by neccoant ( 3345 ) on Sunday April 03, 2005 @01:18PM (#12127013)
    It's amazing that we were stalled at 50 TFLOPS for two years, and are piling on the FLOPS now.
    • Re:Neat (Score:3, Insightful)

      The increased FLOPS is simply a function of the fact that they are expanding the number of nodes.
      • Re:Neat (Score:5, Informative)

        by brsmith4 ( 567390 ) <brsmith4@gmail. c o m> on Sunday April 03, 2005 @03:03PM (#12127615)
        That's not how Linpack works. Sure, increasing your number of nodes will give definite performance advantages to coarse-grained, embarrassingly parallel applications, but Linpack is not one of them. Also, Linpack should not be used as a guide to raw floating-point performance; it is much better suited to gauging throughput.

        Linpack does its benchmarking with a more fine-grained algorithm, generating lots of message-passing traffic to share segments of dense matrices for rather large linear systems. Not only is the number of nodes a factor, but so is the interconnect speed. If that cluster were using GigE for its interconnect, its Linpack numbers would not be nearly as impressive. I haven't RTFA, but it's likely that BlueGene/L is using Myranet or Infinband for its interconnect (or possibly a more proprietary backplane-style interconnect, though that cluster is way too big for that).

        The latest generations of high-speed interconnects (esp. InfiniBand) have brought clusters close to shared-memory performance, so the benchmark is more a test of throughput than anything else.

        This description of the HPL benchmark (the "official" name for the Linpack benchmark) should provide some clarity as to how memory-dependent Linpack actually is:

        The algorithm used by HPL can be summarized by the following keywords: Two-dimensional block-cyclic data distribution - Right-looking variant of the LU factorization with row partial pivoting featuring multiple look-ahead depths - Recursive panel factorization with pivot search and column broadcast combined - Various virtual panel broadcast topologies - bandwidth reducing swap-broadcast algorithm - backward substitution with look-ahead of depth 1.

        http://www.netlib.org/benchmark/hpl/ [netlib.org]

        They took a lot of time to make Linpack less shared-memory dependent, e.g. by adding the swap-broadcast algorithm (which I'm fairly certain was absent in the old mainframe version of Linpack), so that it would be more "fair" to run on a cluster versus a shared-memory setup. On a typical cluster, though, Linpack can still push your interconnect pretty hard, esp. if you are stuck on GigE. Linpack also has _lots_ of settings and parameters to "tune" the benchmark for your particular cluster.

        My point: Linpack/HPL is not an overall FLOPS benchmark for a cluster. It measures not only double-precision CPU performance but also the performance of the cluster's interconnect.
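
        To make the FLOPS side concrete: here is a minimal sketch (in Python, with made-up inputs) of how an HPL result is reduced to a FLOP/s number. HPL credits the solver with the conventional LU-plus-back-substitution operation count, (2/3)N^3 + 2N^2, so any time lost to interconnect stalls directly lowers the reported rate:

        def hpl_gflops(n, seconds):
            """Sustained GFLOP/s for an HPL run of order n."""
            flops = (2.0 / 3.0) * n ** 3 + 2.0 * n ** 2
            return flops / seconds / 1e9

        # Hypothetical run: a 100,000-equation system solved in 700 seconds.
        print("%.1f GFLOP/s" % hpl_gflops(100000, 700.0))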
        • Re:Neat (Score:4, Interesting)

          by imsabbel ( 611519 ) on Sunday April 03, 2005 @04:48PM (#12128252)
          Well, in fact the truth is right in the middle.
          Linpack is VERY easy to parallelize. The Earth Simulator and other vector machines get over 85% of their theoretical peak on Linpack, and even clusters with relatively abysmal interconnects are still in the 50% range.

          Lots of computational problems need orders of magnitude more inter-node communication, up to the point where Linpack doesn't even matter anymore and clusters and vector computers with the same Linpack score are a factor of 10 or 20 apart.
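
          Those percentages are just sustained rate over theoretical peak. A quick sketch using the Earth Simulator's published TOP500 figures (Rpeak 40.96 TFLOP/s, Rmax 35.86 TFLOP/s):

          def linpack_efficiency(rmax_tflops, rpeak_tflops):
              # Efficiency = sustained Linpack rate / theoretical peak.
              return 100.0 * rmax_tflops / rpeak_tflops

          print("%.1f%%" % linpack_efficiency(35.86, 40.96))  # -> 87.5%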
        • Re:Neat (Score:2, Informative)

          by kayak334 ( 798077 )
          Myranet or Infinband

          Just some minor corrections and information for those interested.

          Myricom [myri.com] is the company; Myrinet is the protocol. InfiniBand [google.com] is an open protocol. Myrinet has a maximum speed of 2.2 Gb/sec, while InfiniBand can scale up to 30 Gb/sec with a 16x PCI-E card and a 12x port on the switch.

          As for what BlueGene/L uses, I don't think I'm at liberty to discuss that.
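
          For the curious, the 30 Gb/sec figure is just InfiniBand's lane arithmetic. A sketch, assuming plain SDR signaling (2.5 Gb/sec per lane, 8b/10b encoded):

          SDR_LANE_GBPS = 2.5  # InfiniBand SDR signaling rate per lane

          def ib_raw_gbps(lanes):
              return lanes * SDR_LANE_GBPS

          for lanes in (1, 4, 12):
              raw = ib_raw_gbps(lanes)
              # 8b/10b encoding leaves 80% of the signaling rate for data.
              print("%2dx: %4.1f Gb/s raw, %4.1f Gb/s data" % (lanes, raw, raw * 0.8))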
          • Re:Neat (Score:3, Informative)

            by brsmith4 ( 567390 )
            Were you correcting my spelling? Because I always make that mistake (myranet... it's Myrinet, damn it!). You know what I meant though ;) It looks like BlueGene/L is using a hybrid backplane/hypertorus interconnect where a whole bunch of "machines" (more like systems-on-a-chip) are connected via a backplane, then that case of "machines" is connected to another case in the same rack on some number of layers of interconnect. Then the racks are connected using some other protocol. Though you may not "be at l
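
            The torus part is easy to picture in code. A minimal sketch of neighbor addressing on a 3-D torus with wraparound links (dimensions here are made up, not BlueGene/L's actual geometry):

            DIMS = (8, 8, 8)  # hypothetical 3-D torus dimensions

            def neighbors(node):
                """The six nearest neighbors of a node; links wrap at the edges."""
                result = []
                for axis in range(3):
                    for step in (-1, 1):
                        coord = list(node)
                        coord[axis] = (coord[axis] + step) % DIMS[axis]
                        result.append(tuple(coord))
                return result

            # Even a "corner" node has all six neighbors thanks to the wraparound.
            print(neighbors((0, 0, 0)))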
    • Re:Neat (Score:3, Funny)

      by Anonymous Coward
      I think they were waiting for final specs on Doom3.
    • Re:Neat (Score:3, Interesting)

      by woah ( 781250 )
      The reason is, of course, that we've been stuck with same-ish desktop performance as well, which correlates with supercomputer performance, since nowadays most supercomputers use Intel/AMD processors.

      Just goes to show that Moore's law won't hold forever.

      • Re:Neat (Score:3, Insightful)

        by JQuick ( 411434 )
        Actually, Intel-compatible clusters in the supercomputer rankings are not all that compelling. True, Linux clusters did fare very well for several years as measured by price/performance. It is also true that about 63% of the top 500 supercomputers are Intel or Intel-compatible.

        Despite this, the majority of systems at the top of supercomputer top 500 chart are based on the POWER architecture, not Intel chips.

        The POWER based systems, including BlueGene and PowerPC systems, are all much better on both price/p
    • Re:Neat (Score:4, Insightful)

      by imsabbel ( 611519 ) on Sunday April 03, 2005 @01:51PM (#12127228)
      You are mistaken.
      We didn't STALL at ~35 TFLOPS; it's just that the Earth Simulator's ~35 TFLOPS were SO much better than everything else available that it took a couple of years to catch up and overtake it.

      If you average over the last 10 years, the Earth Simulator was a bump above Moore's law and now we are back on track.
  • by rebelcool ( 247749 ) on Sunday April 03, 2005 @01:21PM (#12127035)
    Wait till it's fully online.
  • by Zebra_X ( 13249 ) on Sunday April 03, 2005 @01:22PM (#12127040)
    This performance was achieved at Lawrence Livermore National Laboratory (LLNL) at only the half-system point of the IBM BlueGene/L installation. Last November, just one-quarter of BlueGene/L topped the TOP500 List of the world's top supercomputers.

    Is there anything that will be able to touch this when it's complete?

  • Blue Gene? (Score:2, Informative)

    by eth8686 ( 793978 )
    Didn't IBM push Blue Gene to 180-something teraflops recently?? News story here [businessweek.com]
  • imagine (Score:3, Funny)

    by dario_moreno ( 263767 ) on Sunday April 03, 2005 @01:23PM (#12127052) Journal
    a Beowulf cluster of these !
  • Wow! (Score:5, Funny)

    by FlyByPC ( 841016 ) on Sunday April 03, 2005 @01:23PM (#12127054) Homepage
    Just imagine running Fractint on this puppy!
    • Re:Wow! (Score:3, Interesting)

      by ucblockhead ( 63650 )
      Heh. I guess I wasn't the only one who christened a new machine by running Fractint on it. I gave it up around 1998 because there was just no point.
      • These days, I christen new machines by performing a stage1 Gentoo installation. While not as pretty as Fractint, there's nothing like some serious code compilation to make you appreciate the performance of your machine (but life's too short to wait for OpenOffice.org to compile).
  • Steroids (Score:4, Funny)

    by tiktok ( 147569 ) on Sunday April 03, 2005 @01:23PM (#12127056) Homepage
    There was another machine that had already beaten that record, but unfortunately failed a diagnostic test for banned substances...
  • Did you RTFA? (Score:5, Informative)

    by Donny Smith ( 567043 ) on Sunday April 03, 2005 @01:24PM (#12127058)
    > has performed 135.3 trillion floating point operations per second (teraFLOP/s) on the industry standard LINPACK benchmark, making it the fastest supercomputer in the world."

    Did you read the fucking article?

    "This performance was achieved at Lawrence Livermore National Laboratory (LLNL) at only the half-system point of the IBM BlueGene/L installation. Last November, just one-quarter of BlueGene/L topped the TOP500 List of the world's top supercomputers."

    See, this is the SAME supercomputer that already topped the list last November, so the latest record did NOT make it the fastest supercomputer in the world.

    It already had been the fastest supercomputer in the world.
    • OK so it is still the fastest computer in the world.
      Technically the description was accurate, however.
      • OK so it is still the fastest computer in the world. Technically the description was accurate, however.

        Actually, technically the description was incorrect, as the term "making" requires that the subject initially not be what it was made into, i.e. not the fastest computer in the world.

        But yeah, this is all splitting hairs, and I should be ashamed of myself for even mentioning this...

  • Wow. (Score:2, Funny)

    by TsukasaZero ( 850187 )
    Slap an X850 in there and you've got some serious Doom 3 action.
  • by Black Jack Hyde ( 2374 ) on Sunday April 03, 2005 @01:34PM (#12127129)
    ...will it run NetHack [nethack.org]?
  • Earth Simulator (Score:3, Insightful)

    by Anonymous Coward on Sunday April 03, 2005 @01:38PM (#12127151)
    I rather miss the time when the world's most powerful supercomputer was used to study our planet. It was something to be proud of, actually. These machines are essentially weapons. Pity, that.
    • Re:Earth Simulator (Score:3, Insightful)

      by lp-habu ( 734825 )
      Historically, I think you'll find that a great many technological advances were made with the original purpose of killing other beings -- usually other humans. It seems to be one of the basic human characteristics. Pretty effective, too.
    • OK then... (Score:3, Insightful)

      by caveat ( 26803 )
      How 'bout we use Blue Gene for climate modeling, and start setting off full-yield nuclear tests to ensure the viability of the stockpile? I don't terribly like the idea of nukes, but the genie is out of the bottle and there's no stuffing it back in - we need to have the things, and if, god forbid, we ever have to use them, I'd like to see them work properly. Seriously... unless you use one of the interconnect cables to garrote somebody, these computers are hardly "weapons" - quite the opposite, in fact.
      • we need to have the things, and if god forbid we ever have to use them, I'd like to see them work properly.

        Indeed. It would suck if we were to only wipe out part of the human race. What would be the point of that? Worse still, with all the destruction from the ones that do go off, it could take thousands of years to set up another attempt at armageddon.

        It's kinda like that time I was playing Russian roulette. I noticed the "click" had a different sound to it than usual, but the damn thing didn't go

        • Deterrence only works if the threat is plausible. That means potential enemies have to believe that we are both willing and able to deliver functional weapons on target with a high probability of success. Whatever you do to us, we will still be capable of turning your country into a radioactive wasteland.
  • Link to the list (Score:5, Informative)

    by dnaboy ( 569188 ) on Sunday April 03, 2005 @01:39PM (#12127160)
    FYI the top 500 supercomputers list is maintained at http://www.top500.org/ [top500.org].
  • Dupe (Score:3, Informative)

    by karvind ( 833059 ) <karvind.gmail@com> on Sunday April 03, 2005 @01:45PM (#12127199) Journal
    Didn't we cover this before [slashdot.org] ?
  • LINPACK usage? (Score:2, Interesting)

    by Gleepy ( 16226 )
    I think of LAPACK [netlib.org] as being much more up-to-date for benchmarking.
  • Human Intelligence? (Score:2, Interesting)

    by kyle90 ( 827345 )
    Isn't the human brain supposed to be equivalent to a supercomputer running at ~100 teraflops? And if so, shouldn't this computer be smarter than us?
    • "Imagine a beowulf cluster of those!"
    • "Imagine the Seti@Home rank on that puppy!"
    • "Pfft... A mere abacus -- mention it not!
    • "Molest me not with this pocket calculator stuff!"
  • All these incremental "my computer is faster than your computer" articles are getting boring. I'll be interested when they reach a petaflop. With "Moore's law" predicting a 10x speedup every five years, that should be around 2010.
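
    The arithmetic behind that guess, sketched out (extrapolating the story's 135.3 TFLOP/s at 10x per five years):

    def projected_tflops(start_tflops, start_year, year):
        # 10x every five years, per the reading of Moore's law above.
        return start_tflops * 10 ** ((year - start_year) / 5.0)

    for year in (2005, 2010, 2015):
        print("%d: %9.1f TFLOP/s" % (year, projected_tflops(135.3, 2005, year)))
    # 2010 comes out around 1353 TFLOP/s, i.e. roughly 1.35 petaflops.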
  • I'd like to see a computing measurement unit for comparing how much energy it takes to perform those TFLOPS.
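
    Such a unit would just be sustained FLOP/s per watt. A sketch with a made-up power figure (not a measured BlueGene/L number):

    def gflops_per_watt(tflops, megawatts):
        return (tflops * 1e3) / (megawatts * 1e6)

    # Hypothetical: 135.3 TFLOP/s on a 1.5 MW power budget.
    print("%.3f GFLOP/s per watt" % gflops_per_watt(135.3, 1.5))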
  • To study the effects of different nuclear weapon designs, there are basically two approaches:

    1. Throw massive amounts of computing power at the problem (as done here), or:
    2. Actually set off a nuclear weapon.

    Having massive computing power in the hands of Lawrence Livermore scientists reduces or even eliminates the need for U.S. nuclear forces to actually set off nuclear and thermonuclear explosions.

    Of course, some people would prefer to see the United States undertake unilateral nuclear disarmament, something they've been advocating since SANE/FREEZE was telling us we could trust the Soviet Union in the 1980s. Only today they claim we can trust Kim Jong Il and the mullahs of Iran more than the democratically elected government of the United States, just as they claimed we could trust Leonid Brezhnev and Yuri Andropov more than we could trust Ronald Reagan. Their views are every bit as ill-conceived now as they were then.

    • by ozborn ( 161426 ) on Sunday April 03, 2005 @04:29PM (#12128117)
      Of course, some people would prefer to see the United States undertake unilateral nuclear disarmament, something they've been advocating since SANE/FREEZE was telling us we could trust the Soviet Union in the 1980s. Only today they claim we can trust Kim Jong Il and the mullahs of Iran more than the democratically elected government of the United States, just as they claimed we could trust Leonid Brezhnev and Yuri Andropov more than we could trust Ronald Reagan. Their views are every bit as ill-conceived now as they were then.
      Nice strawman you've constructed, but pray tell, who are these "some people" you are talking about? I challenge you to cite a single press release, webpage, or publication by any independent NGO (even kooky ones) pushing for nuclear disarmament that claims Kim Jong Il can be trusted. I can't think of any disarmament/peace group that would be opposed to 3rd-party bilateral weapons inspections.
      • Comment removed based on user account deletion
        • I do not regard Iran as harmless. However, they have been given a highly convincing demonstration of the dangers of not having weapons of mass destruction. When the US attacks your next door neighbour and then more or less announces that you are a top candidate to be next, it is understandable that you want a deterrent.
        • North Korea's government is primarily a clear and present danger to its own citizens, who are suffering from severe poverty because it's run by a blatantly incompetent personality cult that keeps them from farming or trading successfully. Sure, its leaders occasionally say "Booga booga booga!" to the world just so someone will take them half-seriously - it helps keep the peasants in line. It's possible that some day they'll decide they have to actually nuke Seoul to keep themselves in power, but they do know that the
    • Only today they claim we can trust Kim Jong Il and the mullahs of Iran more than the democratically elected government of the United States

      Right! Because the majority of people are reliably smart enough [googlefight.com] to elect competent leaders.

      (Not that I think *anybody* should have a nuke, mind you)
  • by Animats ( 122034 ) on Sunday April 03, 2005 @02:54PM (#12127577) Homepage
    The "stockpile stewardship program" is basically a senior activity center for retired physicists. They have busywork projects to keep people thinking about how to design nuclear weapons. DOE is worried that all the old bomb designers will die off, and no new ones will replace them.

    Remember, everything in the inventory was designed with far less compute power than today's desktops.

    • DOE's stewardship program is not for retired scientists, but current ones. The laboratory directors at the nuclear labs (Sandia/LLNL/maybe others) are required to certify the stockpile as being ready to go each year. Their supercomputers are the only way to test the aging stockpile without actually detonating a few to see which designs age better than others.

      And let's remember that almost everything in the current arsenal was designed and actually tested, not just worked up via computer. It takes a whol
      • T5 wrote:

        DOE's stewardship program is not for retired scientists, but current ones. The laboratory directors at the nuclear labs (Sandia/LLNL/maybe others) are required to certify the stockpile as being ready to go each year. Their supercomputers are the only way to test the aging stockpile without actually detonating a few to see which designs age better than others.

        And let's remember that almost everything in the current arsenal was designed and actually tested, not just worked up via computer. It tak

  • ...to FINALLY have working voice recognition! :)

    Now for the obligatory...

    * Now imagine a Beowulf cluster of these!

    * This would make a hell of a MAME PC!

    * Windows will finally boot up in under five minutes!

    * Any Java GUI app would STILL run like a dog on this!

    Did I miss any??
  • by Moderation abuser ( 184013 ) on Sunday April 03, 2005 @03:20PM (#12127714)
    "making it the fastest supercomputer in the world"

    Or rather, the fastest supercomputer with published LINPACK results. There are a number of reasons why agencies with supercomputers might not want to publish results.

  • by alex4u2nv ( 869827 ) on Sunday April 03, 2005 @03:35PM (#12127801) Homepage
    135.3 trillion floating point operations per second

    Does this mean we can't slashdot it?
  • by Allnighterking ( 74212 ) on Sunday April 03, 2005 @03:36PM (#12127811) Homepage
    You can now open a Mozilla session in under a minute!
  • 135.3 trillion floating point operations per second (teraFLOP/s) on the industry standard LINPACK benchmark, making it the fastest supercomputer in the world

    Microsoft just announced that NNSA is the fastest because it uses the upcoming version of Microsoft Office XP 2005, which offers faster startup times and a talking paperclip optimized for modern processors.

  • by theufo ( 575732 ) on Sunday April 03, 2005 @05:29PM (#12128538) Homepage
    Here's an article describing some of the specs.

    http://www.llnl.gov/asci/platforms/bluegene/talks/gupta.pdf [llnl.gov]

    It's from the days when BlueGene/L was still relatively small, but the basic design hasn't changed since then.

    Turns out it's split into I/O nodes and compute nodes. The 1024 I/O nodes run Linux. Each controls 64 dual-CPU compute nodes, which run simple microkernels written from scratch using Linux as an example.

    The network architecture sounds funky: apparently it's based on a torus!
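
    Multiplying out the numbers above gives the machine's scale (assuming the slide-deck figures still hold for the full system):

    IO_NODES = 1024        # Linux I/O nodes, per the linked slides
    COMPUTE_PER_IO = 64    # dual-CPU compute nodes behind each I/O node
    CPUS_PER_NODE = 2

    compute_nodes = IO_NODES * COMPUTE_PER_IO
    print("%d compute nodes, %d CPUs" % (compute_nodes, compute_nodes * CPUS_PER_NODE))
    # -> 65536 compute nodes, 131072 CPUs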
  • by Nom du Keyboard ( 633989 ) on Sunday April 03, 2005 @07:08PM (#12129103)
    Scientists at LLNL for the first time have performed 16-million-atom molecular dynamics simulations with the highest accuracy inter-atomic potentials necessary to resolve the key physical effects to successfully model pressure induced rapid resolidification in Tantalum.

    You just gotta love a sentence like that!

"If it ain't broke, don't fix it." - Bert Lantz

Working...