Supercomputing Technology

$208 Million Petascale Computer Gets Green Light

coondoggie writes "The 200,000-processor-core system known as Blue Waters got the green light recently, as the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications (NCSA) said they have finalized the contract with IBM to build the world's first sustained petascale computational system. Blue Waters is expected to deliver sustained performance of more than one petaflop on many real-world scientific and engineering applications. A petaflop equals about 1 quadrillion calculations per second. The processor cores will be coupled to more than a petabyte of memory and more than 10 petabytes of disk storage. All of that memory and storage will be globally addressable, meaning that processors will be able to share data from a single pool exceptionally quickly, researchers said. Blue Waters is supported by a $208 million grant from the National Science Foundation and will come online in 2011."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Re:oblig. (Score:3, Informative)

    by Bill, Shooter of Bul ( 629286 ) on Wednesday September 03, 2008 @07:00PM (#24866509) Journal
    No. This is Urbana, Illinois. HAL 9000 [wikipedia.org] would be more appropriate.
  • by InlawBiker ( 1124825 ) on Wednesday September 03, 2008 @07:02PM (#24866533)
    I think you meant tea. [wikipedia.org]
  • Re:Naive question... (Score:5, Informative)

    by Deadstick ( 535032 ) on Wednesday September 03, 2008 @07:22PM (#24866781)

    Weather modeling comes to mind, both terrestrial and space.

    rj

  • Re:Naive question... (Score:5, Informative)

    by mikael ( 484 ) on Wednesday September 03, 2008 @07:29PM (#24866869)

    These machines are used to work on simulations that involve aerodynamics and hydrodynamics, quantum electrodynamics (QED), or electromagnetohydrodynamics. All of these simulations require a mathematical model built on a high-density mesh of data points (e.g., 2048^3). Blocks of such points are allocated to individual processors, so each processor must be able to communicate at high speed with its neighbours (up to 26 neighbours in a cubic decomposition).

    Usually, the actual calculations per element take up less than a page of mathematical equations, but they require high precision, so the data values will be 64-bit floating-point quantities. A single element might require 20 or more variables. Hence the need for so many processors and high clock speeds.
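
    A back-of-the-envelope sizing sketch in plain C, using the figures from the comment above (a 2048^3 mesh, ~20 double-precision variables per point) plus the 200,000-core count from the story. Treating each core's block as a cube is an illustrative assumption, not a Blue Waters specification; the 6 faces, 12 edges and 8 corners of such a block are where the up-to-26 neighbours come from.

        /* Rough sizing of a block-decomposed 3D mesh. Compile: cc sketch.c -lm */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            const double n     = 2048.0;    /* points per mesh edge (parent's figure) */
            const double vars  = 20.0;      /* 64-bit variables per point             */
            const double cores = 200000.0;  /* core count quoted in the story         */

            double points   = n * n * n;                      /* ~8.6e9 mesh points   */
            double state_tb = points * vars * 8.0 / 1e12;     /* total mesh state, TB */
            double per_core = points / cores;                 /* points per core      */
            double edge     = cbrt(per_core);                 /* edge of cubic block  */
            double halo_mb  = 6.0 * edge * edge * vars * 8.0 / 1e6; /* face exchange  */

            printf("total mesh state : %.1f TB\n", state_tb);
            printf("points per core  : %.0f (block edge ~%.0f points)\n", per_core, edge);
            printf("face-halo traffic: ~%.2f MB per core per exchange step\n", halo_mb);
            return 0;
        }

    On these assumptions the whole mesh is only on the order of a terabyte of state; the demanding part is the per-step neighbour exchange, which is why the parent stresses fast communication between processors.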

  • Re:Naive question... (Score:5, Informative)

    by Ilgaz ( 86384 ) on Wednesday September 03, 2008 @07:37PM (#24866961) Homepage

    Have you noticed that neither the USA nor Russia blows up a portion of the planet to test nuclear weapons anymore? Is that because the planet is so peaceful that further research is not required? Unfortunately, no.

    These monsters can simulate a gigantic nuclear explosion at the molecular level.

    Or, for peaceful purposes, they can simulate that New Orleans storm based on real-world data and pinpoint exactly what would happen.

  • by DegreeOfFreedom ( 768528 ) on Wednesday September 03, 2008 @08:25PM (#24867485)

    Blue Waters will be the first to deliver a sustained petaflop on "real-world" applications, meaning various scientific simulations [uiuc.edu]. Specifically, the program solicitation [nsf.gov] required prospective vendors to explain how their proposed systems would sustain a petaflop on three specific types of simulations, one each in turbulence, lattice-gauge quantum chromodynamics, and molecular dynamics.

    Granted, Roadrunner was the first machine to deliver a petaflop on the Linpack benchmark [netlib.org] (albeit with IBM's own implementation of it). The benchmark does nothing more than set up and solve a system of linear equations. Roadrunner solved a system of 2,236,927 equations (in other words, it had a 2,236,927-by-2,236,927 coefficient matrix) in 2 hours.

    But Blue Waters is planned to deliver a sustained petaflop on real applications, which, unlike Linpack, don't normally reach >80% of theoretical peak; those applications are lucky to get near 20%.
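
    A quick arithmetic check of the Roadrunner numbers quoted above, as a small C sketch. It assumes the conventional HPL operation count of (2/3)n^3 + 2n^2 flops for solving an n-by-n dense system, and takes the matrix order and the roughly 2-hour runtime from the parent comment at face value.

        /* Sanity-check the "petaflop on Linpack" figure from the quoted run. */
        #include <stdio.h>

        int main(void)
        {
            double n       = 2236927.0;          /* order of the Linpack matrix      */
            double flops   = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
            double seconds = 2.0 * 3600.0;       /* ~2 hours, per the parent comment */

            printf("total operations: %.3e flop\n", flops);
            printf("sustained rate  : %.2f Pflop/s\n", flops / seconds / 1e15);
            return 0;
        }

    With those inputs the sketch lands at roughly 1 Pflop/s, consistent with the claim that Roadrunner sustained a petaflop on that run.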

  • Re:It's said... (Score:4, Informative)

    by Bill Barth ( 49178 ) <bbarthNO@SPAMgmail.com> on Wednesday September 03, 2008 @08:36PM (#24867615)
    You could not be more wrong.

    Considering that we've got SDR IB with under 2 microseconds latency for the shortest hops (and ~3 for the longest), I think you need to go update your anti-cluster argument. :) The problems with congestion in fat trees have virtually nothing to do with latency. Yes, massive congestion will kill your latency numbers, but given that you don't get cascades and other failures causing congestion without fairly large bandwidth utilization, latency is the least of your worries at that point. Furthermore, the cascades you talk about also aren't common except in extremely oversubscribed networks or in the presence of malfunctioning hardware. We do our best to use properly functioning hardware and to have no more than 2:1 oversubscription (with our largest machine not being oversubscribed at all). (A minimal ping-pong sketch of the usual way such latency numbers are measured appears below.)

    MPICH ain't that bad (heck, MPICH2, even just its MPI-1 parts, might be considered pretty good by some). MPI as a standard for message-passing is fine. I'd love to hear what you think is wrong with MPI and see some examples where another portable message-passing standard does consistently better. Though it's a bit like C or C++ or Perl in that there are lots of really bad ways to accomplish things in MPI and a handful of good ones. It's low-level enough that you need to know what you're doing. But if you believe anyone who tells you they have a way to make massively parallel programming easy, I've got a bridge you might be interested in.

    Finally, I don't know of much in the way of a "supercomputer" that's using TCP for its MPI traffic these days, so you can put that old saw out to pasture as well.
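
    For reference, single-hop latency figures like the ones quoted above are typically measured with a ping-pong micro-benchmark along the following lines. This is a minimal sketch, not a rigorous benchmark: the one-byte message size and the iteration count are arbitrary choices, and established suites (e.g., the OSU micro-benchmarks) handle warm-up and statistics more carefully.

        /* Minimal MPI ping-pong latency sketch. Run with exactly two ranks,
         * e.g. mpirun -np 2 ./pingpong */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, iters = 10000;
            char byte = 0;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            MPI_Barrier(MPI_COMM_WORLD);          /* crude warm-up/sync */
            double t0 = MPI_Wtime();
            for (int i = 0; i < iters; i++) {
                if (rank == 0) {
                    MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                } else if (rank == 1) {
                    MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }
            double t1 = MPI_Wtime();

            if (rank == 0)  /* one-way latency is half the average round trip */
                printf("~%.2f us one-way\n", (t1 - t0) / iters / 2.0 * 1e6);

            MPI_Finalize();
            return 0;
        }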

  • Re:Naive question... (Score:5, Informative)

    by Rostin ( 691447 ) on Wednesday September 03, 2008 @09:43PM (#24868189)

    I'm working on a PhD in chemical engineering, and I do simulations. I occasionally use Lonestar and Ranger, which are clusters at TACC, the U. of Texas' supercomputing center. Lonestar is capable of around 60 TFLOPS and Ranger can do around 500-600 TFLOPS. A few users run really large jobs using thousands of cores for days at a stretch, but the majority of people use 128 or fewer cores for a few hours at a time.

    My research group does materials research using density functional theory, which is an approximate way of solving the Schroedinger equation. Each of our jobs usually uses 16 or 32 cores and takes anywhere from 5 minutes to a couple of days to finish. Usually we are interested in looking at lots of slightly different cases, so we run dozens of jobs simultaneously.

    The applications are pretty varied. Some topics we are working on -
    1) Si nanowire growth
    2) Si self-interstitial defects
    3) Au cluster morphology
    4) Catalysis by metal clusters
    5) Properties of strained semiconductors

  • Re:Naive question... (Score:3, Informative)

    by dlapine ( 131282 ) <<lapine> <at> <illinois.edu>> on Wednesday September 03, 2008 @10:51PM (#24868843) Homepage

    For a reasonable sample of the things that can be done on a supercomputer, start here: http://www.ncsa.uiuc.edu/Projects/ [uiuc.edu]. Those are just the things running at NCSA.

    Follow up with this [teragrid.org], as the science gateways for the TeraGrid are designed to let scientists worry more about the science part and less about the programming part. Part of the reason to build bigger supercomputers is to let non-programmers get work done as well. By having more cycles available, the TeraGrid can allow access for codes that are easier for the average scientist to use, even if they don't make the best use of the machine. Not everyone is a whiz at parallel programming, and we shouldn't expect an expert in, say, biology, to be just as expert in computer science.

  • Re:Naive question... (Score:1, Informative)

    by Anonymous Coward on Wednesday September 03, 2008 @11:07PM (#24868981)

    My field is the simulation of complex materials so I can give you some insight into what excites us about Blue Waters. Perhaps someone with a different background can speak to other uses.

    One of our primary goals for this computer is to be able to probe the quantum state of the electrons in real materials so that we can better understand their behavior. This is useful both for theoretical insight and for making predictions where experiments are difficult to perform (like understanding the material at the Earth's core or in a star). Currently the most accurate methods can treat only a few hundred electrons, even on the largest computers, so Blue Waters may help us treat systems like DNA or high-temperature superconductors that need a much larger scale. It is also possible to make some more approximations and move to much larger simulations, like biological molecules and hydrogen-storage candidates.

    As for concrete advances, these techniques and their predecessors are used daily for tasks such as drug discovery and simulating new semiconductor fabrication processes. The real excitement over Blue Waters, though, is that we don't really know what will be possible, and there is much debate even about whether our current programming paradigms are up to the task, so stay tuned to a journal near you...

  • Or (Score:3, Informative)

    by Colin Smith ( 2679 ) on Thursday September 04, 2008 @03:29AM (#24870603)

    Simulating nuclear explosions.

     
