
LLNL/RPI Supercomputer Smashes Simulation Speed Record

Lank writes "A team of computer scientists from Lawrence Livermore National Laboratory and Rensselaer Polytechnic Institute has managed to coordinate nearly 2 million cores to achieve a blistering 504 billion events per second, over 40 times faster than the previous record. This result was achieved on Sequoia, a 120-rack IBM Blue Gene/Q normally used to run classified nuclear simulations. Note: I am a co-author of the coming paper to appear in PADS 2013."
This discussion has been archived. No new comments can be posted.


  • by stewsters ( 1406737 ) on Thursday May 02, 2013 @04:43PM (#43614835)
    Was I the only one who thought for a second that this was about a Raspberry Pi cluster?
  • I was already running Warp 3 in 1995! :-)
    (OS/2 Warp 3, to be exact)

    • At present, we are at Warp Speed 2.7. It will be nearly 150 years before we expect to reach Warp Speed 10.0.

      And then, Delta Quadrant here we come!

  • Now, if only we found better uses for top supercomputers than assuring our WMD supply is always in tip-top shape for mass murder.

    • by Anonymous Coward

      It lets us get by with fewer bombs. Fewer bombs means fewer chances for mistakes. And I am very glad that the superpower states cannot fight each other directly, because indirect war is bad enough.

      • The US military budget is about as much as the next 10 biggest national military budgets *combined.* The US isn't one player in a delicate balance of superpowers; it is a massive unilateral force, driven by greed and paranoia to utterly irrational levels of military spending. No matter how much the US has, war hawks clamor for more. "Fewer bombs" is a sick joke in the context of the ridiculous number of bombs the US has. Scrap 90% of our military, and we'd still be an untouchable superpower.

        • by Anonymous Coward

          While that is true (or true-ish; there is no reason to believe the PRC's public budget numbers), the US spends its defense money so inefficiently that it doesn't have as much strength as the next ten national militaries combined, or anywhere close to that.

        • The US military budget is about as much as the next 10 biggest national military budgets *combined.* The US isn't one player in a delicate balance of superpowers; it is a massive unilateral force, driven by greed and paranoia to utterly irrational levels of military spending. No matter how much the US has, war hawks clamor for more. "Fewer bombs" is a sick joke in the context of the ridiculous number of bombs the US has. Scrap 90% of our military, and we'd still be an untouchable superpower.

          I think that the sad truth is that the primary purpose of the military budget is to serve as a welfare program whereby congresscritters can hand out jobs to their constituents and pretend that it's not "wasteful gummint spending" because it's FREEDOM, DAMMIT!

          An awful lot of money gets spent on horribly expensive military toys that the Pentagon claims not to want or need just because someone in Congress could get facilities opened back home to make and/or service them. You could replace quite a few bridges -

        • We came to this state by basically unilaterally assuming responsibility for the defense of Europe. It was complicated, and had to do with the question of German rearmament after WWII (and, in no small measure, Vietnam before the US was really involved). But the short answer for why the US spends so much on defense is that we have chosen to carry all our allies. Could Taiwan support ten carriers at sea? No... but will they need ten carriers to wage an effective campaign if the day ever comes? Yes. And we hav
          • I guess you've got some specific, well-funded, noble alternative to retask all these resources on?

            Well, nuclear anti-proliferation is a pretty nice start, which just requires enough funding to yank the plug out of the wall. But, since you've already presumably got funding for operation and research personnel for the bomb-maintenance tasks, you could just re-task that along with the computer (not designing bombs is a good start in itself). I don't currently have any personal pet projects that need a supercomputer, but perhaps I could refer you to the poster "aussie.virologist" further down the thread not

    • Maybe we could program them to scour the internet for comments that take a completely unrelated tangent to the original article, for the purposes of expounding on whatever irrelevant social agenda/issue the commentator has stuck up his behind? And then it could automatically delete them.
      • by nateb ( 59324 )
        Commentator is not a word. HTH. HAND.
        • by ratbag ( 65209 )

          From Oxford Dictionary of English:

          commentator |ˈkɒmənteɪtə|
          noun
          a person who comments on events or on a text.
          • a person who commentates on a sports match or other event.

          Commenter may have been more appropriate in the circumstances, I'll grant you.

    • You mean, like assuring our WMD supply is always in tip-top shape for deterring mass murder?

      Because that's what nukes have been doing for the last 60 years.

  • by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Thursday May 02, 2013 @04:52PM (#43614945)

    Note: I am a co-author of the coming paper [acm-sigsim-pads.org] to appear in PADS 2013 [acm-sigsim-pads.org].

    I clicked hoping to read the paper, but the actual paper doesn't seem to be posted, only the abstract. The ACM copyright policy explicitly allows [acm.org] authors to "Post the Accepted Version of the Work on ... the Author's home page", so there is no legal barrier to the authors putting a PDF online. Doing so would of course increase readership of the paper, so ought to benefit everyone.

    • by Lank ( 19922 ) on Thursday May 02, 2013 @05:22PM (#43615205)
    I didn't realize that it was acceptable to post it before the conference even happened. But you're right, so here it is [rpi.edu].
      • by Trepidity ( 597 )

        Thanks! My own policy is that I don't post draft or submitted versions, but once something is finalized (camera-ready final copy as it's going to appear in the proceedings), I'll post the PDF online.

        One plus side for those who care about such things is that it'll get into Google Scholar faster—GS is surprisingly good at picking these PDFs up in its crawls and figuring out how to index them.

      • Knowledge and Thoroughness, yo.

  • by Anonymous Coward

    I wonder how much money you could make mining bitcoins on that for one minute.

    'You earn $400,000 by using this computer for 12 seconds.'

    • And then at the end of the month you get an electric (+cooling / water) bill for $400,000. Doh!

      - Toast

      • Yes, but in the meantime you earn the interest on the $400k. Just make sure you mine on the first day of the month and pay the bill on the last.

  • Headline is incorrect: Sequoia is at LLNL, not RPI.

  • by aussie.virologist ( 1429001 ) on Thursday May 02, 2013 @06:08PM (#43615527)
    I'd be interested in seeing if this system could run our full Poliovirus simulations (consisting of around 3.5 million atoms). I've run our simulations on the BlueGene/Q at VLSCI using 32,768 cores (65,536 threads) and have been getting a very respectable 11.2 nanoseconds per day of simulation data using NAMD. Some data on our full virus simulations can be found here... (VIDRL supercomputer simulation page). [vidrl.org.au] Hey Lank, maybe you can help me figure out a way to crack the millisecond mark for our full-virus sims??? Great work and cheers from down under :-)
    • I would think that the macroscopic behavior of 3.5 million atoms in (poly)crystalline, fluid, or plasma states is within the capability of Sequoia. That's about 2 atoms per core and per GB of RAM. But the complex dynamics of proteins, DNA, RNA, and the other complex polymers that comprise the poliovirus interacting with, say, a cell membrane are still probably out of reach for accurate calculation in a reasonable amount of time.
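      As a quick sanity check on the "about 2 atoms per core and per GB of RAM" estimate, here is the arithmetic spelled out, taking the 120-rack Blue Gene/Q configuration from the summary and the standard BG/Q node (1,024 nodes per rack, 16 cores and 16 GB of RAM per node; the per-node figures are general Blue Gene/Q specs, not something stated in this thread):

        atoms = 3.5e6          # poliovirus system size from the comment above
        nodes = 120 * 1024     # 122,880 nodes in a 120-rack Blue Gene/Q
        cores = nodes * 16     # 1,966,080 cores ("nearly 2 million")
        ram_gb = nodes * 16    # 1,966,080 GB of RAM at 16 GB per node

        print(atoms / cores)   # ~1.8 atoms per core
        print(atoms / ram_gb)  # ~1.8 atoms per GB of RAM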

      • Agreed, at this point we are looking at virus dynamics in response to drug binding events and gross alterations in conformational structure in response to significant changes in temperature and ionic content. So for these simulations, the longer the better. I dream of a day when we can model complex host cell interactions, and hopefully I will be a grey-bearded old man still full of enthusiasm when these sorts of simulations are considered "run of the mill". Your work helps to keep me excited about the future of
        • by ratbag ( 65209 )

          At the risk of getting all mushy and sentimental - thank you aussie.virologist, and your ilk, for doing something worthwhile with all these processor cycles available to the world.

        • I find this topic extremely interesting, and it is a field I could see myself getting involved in; however, my background is undergrad elec/mech with an MSc in robotics/mapping/AI. I've also done a ton of simulation work via Robocode [robowiki.net]. What background topics would I still need to learn to do this kind of work? I'm guessing quantum physics and chemistry along with some more hardcore comp-sci.

          • Hey thanks "ratbag" for your kind words. The work that Barnes et al. are doing is so important for researchers like us. It opens the door for us to answer questions in a manner that even 5 years ago was considered "ambitious" to say the least. I am very lucky to be in a position where I have access to resources that allow me to explore new ways of answering some very old questions about how viruses behave, with the added bonus that we may hopefully be able to contribute to making the world just a little bit

  • This experiment didn't perform any useful computation - they just ran PHOLD, a benchmark that sends messages between nodes in a random pattern. It's a benchmark that's specifically tailored to perform well with the Time Warp synchronization algorithm for parallel discrete event simulation. Although Time Warp performs great in theory, it relies on rolling back program state when it detects a synchronization error, and is notoriously difficult to implement in practice for large simulations.

    Furthermore, these

    • I have to disagree. PHold was not designed to run well under Time Warp. It was designed as a stress test for any parallel discrete event simulator, whether based on Time Warp or not, and in particular originally to compare optimistic to conservative synchronization algorithms. Also, Sequoia is much less biased toward regular-geometry continuum simulations than other world-class supercomputers. It has no GPUs, for example. Machines of this class will be used more and more in the future for discrete simulati
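    To make the PHOLD pattern discussed in the two comments above concrete, here is a minimal sequential sketch in Python. Every parameter below (LP count, initial event population, lookahead, mean delay) is an illustrative assumption rather than the configuration used on Sequoia, and the sketch deliberately omits the parallel Time Warp machinery (optimistic execution, rollback, GVT computation) that the record run relied on; it only reproduces the workload itself, in which handling one event schedules exactly one new event at a randomly chosen logical process.

      import heapq
      import random

      NUM_LPS = 1024        # number of logical processes (assumed, tiny vs. the real runs)
      EVENTS_PER_LP = 16    # initial pending events per LP (assumed)
      LOOKAHEAD = 0.1       # minimum timestamp increment
      MEAN_DELAY = 1.0      # mean of the exponential part of the increment
      END_TIME = 100.0      # stop once virtual time passes this point

      def phold():
          rng = random.Random(42)
          heap = []  # pending events as (timestamp, destination LP) pairs
          for lp in range(NUM_LPS):
              for _ in range(EVENTS_PER_LP):
                  heapq.heappush(heap, (rng.expovariate(1.0 / MEAN_DELAY), lp))

          processed = 0
          while heap:
              now, lp = heapq.heappop(heap)
              if now > END_TIME:
                  break
              processed += 1
              # Handling an event: schedule exactly one successor event at a
              # randomly chosen LP, at least LOOKAHEAD into the virtual future.
              delay = LOOKAHEAD + rng.expovariate(1.0 / MEAN_DELAY)
              heapq.heappush(heap, (now + delay, rng.randrange(NUM_LPS)))
          return processed

      print("events processed:", phold())

    In the parallel version the pending-event pool is partitioned across processors, each of which processes its own events optimistically; as the parent comment notes, a processor must roll back whenever a message arrives carrying a timestamp earlier than events it has already processed.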
  • and the operating system it runs is?

  • by DJefferson ( 1228878 ) on Thursday May 02, 2013 @08:30PM (#43616537)

    The title to this piece is wrong. The supercomputer in question was Sequoia, the Blue Gene/Q supercomputer located at Lawrence Livermore National Laboratory. Some preliminary work was done on a smaller RPI BG/Q machine, however. (I am a coauthor of the paper.)

  • Gustafson would be proud.
