LLNL/RPI Supercomputer Smashes Simulation Speed Record

Lank writes "A team of computer scientists from Lawrence Livermore National Laboratory and Rensselaer Polytechnic Institute has managed to coordinate nearly 2 million cores to achieve a blistering 504 billion events per second, over 40 times faster than the previous record. This result was achieved on Sequoia, a 120-rack IBM Blue Gene/Q normally used to run classified nuclear simulations. Note: I am a co-author of the upcoming paper to appear in PADS 2013."
Comments Filter:
  • by Lank ( 19922 ) on Thursday May 02, 2013 @06:22PM (#43615205)
    I didn't realize that it was acceptable to post it before the conference even happened. But you're right so here it is [rpi.edu].
  • by DJefferson ( 1228878 ) on Thursday May 02, 2013 @09:30PM (#43616537)

    The title to this piece is wrong. The supercomputer in question was Sequoia, the Blue Gene/Q supercomputer located at Lawrence Livermore National Laboratory. Some preliminary work was done on a smaller RPI BG/Q machine, however. (I am a coauthor of the paper.)

  • Re:what OS please? (Score:4, Informative)

    by DJefferson ( 1228878 ) on Thursday May 02, 2013 @09:40PM (#43616601)
    It runs a custom IBM OS specifically designed for Blue Gene/Q. It provides an API very similar to Linux, but with some restrictions, e.g. static limits on threads, no process forking, and custom MPI messaging instead of a TCP/IP stack.
  • by DJefferson ( 1228878 ) on Friday May 03, 2013 @12:47AM (#43617475)
    The simulation was a well-known parallel discrete event benchmark called PHold. It is not a model of any particular physical system, but is more of a stress and scalability test for the simulator, in this case the ROSS simulator developed at RPI. PHold has particularly fine-grained events, which stresses the synchronization mechanism known as Time Warp, implemented in ROSS with support for reverse computation. It stresses the scalability of the Global Virtual Time commitment mechanism (used for I/O, error detection, storage management, and termination detection). And because PHold has no locality in its communication, it greatly stresses the underlying communication layer, MPI. The general idea is that a simulator that can achieve high performance on PHold at very large parallel scale can achieve high performance on just about any realistic, load balanced discrete event simulation at that scale.
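    To make the benchmark concrete, here is a minimal sequential sketch of PHold's core logic. This is an illustrative toy, not the ROSS implementation: it omits Time Warp, rollback, and GVT entirely, and all parameter names and defaults below are assumptions chosen for the example. It does show the defining PHold behaviors the comment describes: every processed event schedules exactly one new event after a random time increment, and the destination is chosen uniformly at random, i.e. with no communication locality.

    ```python
    import heapq
    import random

    def phold(num_lps=16, events_per_lp=4, end_time=50.0, seed=42):
        """Toy sequential PHold: each event, when processed, schedules
        one new event at a uniformly random LP after an exponential
        time increment. Returns the number of events processed."""
        rng = random.Random(seed)
        fel = []   # future event list: (timestamp, tiebreaker, dest LP)
        tie = 0    # tiebreaker keeps tuple comparison well-defined
        # Seed the system with an initial population of events.
        for lp in range(num_lps):
            for _ in range(events_per_lp):
                heapq.heappush(fel, (rng.expovariate(1.0), tie, lp))
                tie += 1
        processed = 0
        while fel:
            t, _, _lp = heapq.heappop(fel)
            if t >= end_time:
                break
            processed += 1
            # No locality: destination is uniformly random, which is
            # what stresses the communication layer at large scale.
            dest = rng.randrange(num_lps)
            heapq.heappush(fel, (t + rng.expovariate(1.0), tie, dest))
            tie += 1
        return processed

    if __name__ == "__main__":
        print(phold())
    ```

    Note the invariant: the event population stays constant (each processed event spawns exactly one successor), so the workload never drains. In the real parallel setting each LP lives on some core, the future event list is distributed, and remote scheduling becomes an MPI message, which is where Time Warp and GVT come in.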
