Supercomputing Space

Modeling Supernovae With a Supercomputer

A team of scientists at the University of Chicago will be using 22 million processor-hours to simulate the physics of exploding stars. The team will make use of the Blue Gene/P supercomputer at Argonne National Laboratory to analyze four different scenarios for type Ia supernovae. Included in the link is a video simulation of a thermonuclear flame busting its way out of a white dwarf. The processing time was made possible by the Department of Energy's INCITE program. "Burning in a white dwarf can occur as a deflagration or as a detonation. 'Imagine a pool of gasoline and throw a match on it. That kind of burning across the pool of gasoline is a deflagration,' Jordan said. 'A detonation is simply if you were to light a stick of dynamite and allow it to explode.' In the Flash Center scenario, deflagration starts off-center of the star's core. The burning creates a hot bubble of less dense ash that pops out the side due to buoyancy, like a piece of Styrofoam submerged in water."
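
A quick way to get a feel for the buoyancy picture in the summary: by Archimedes' principle, a bubble less dense than its surroundings feels a net upward acceleration of about g * (rho_ambient / rho_bubble - 1), ignoring drag. A minimal Python sketch, with every number invented for illustration (none of them come from the actual simulation):

    # Toy illustration of why the hot, less dense ash bubble rises.
    # Archimedes: net upward acceleration = g * (rho_ambient / rho_bubble - 1).
    # All values below are invented for illustration, not from the simulation.

    def buoyant_acceleration(rho_ambient, rho_bubble, g):
        """Upward acceleration of a bubble in a denser fluid, ignoring drag."""
        return g * (rho_ambient / rho_bubble - 1.0)

    rho_fuel = 1.0   # density of the surrounding fuel (arbitrary units)
    rho_ash = 0.8    # hot ash assumed ~20% less dense (assumption)
    g = 9.8          # stand-in gravity; a white dwarf's is enormously larger

    print(buoyant_acceleration(rho_fuel, rho_ash, g))   # -> 2.45

Even a 20% density deficit drives a brisk rise, which is the Styrofoam-under-water intuition in the quote.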
This discussion has been archived. No new comments can be posted.

  • Re:flawed (Score:5, Informative)

    by Aranykai ( 1053846 ) <slgonser AT gmail DOT com> on Sunday May 04, 2008 @11:46AM (#23292000)
    Probably someone who actually knows what simulation means.

A simulation is something that models a system or environment in order to predict actual behavior.

    To speculate on the other hand is to make an inference based on inconclusive evidence; to surmise or conjecture.

    So, he was indeed insightful when he stated that the lack of understanding would render the results speculative at best.

(all definitions courtesy of Wiktionary)
  • Re:flawed (Score:2, Informative)

    by jr-slash ( 904487 ) on Sunday May 04, 2008 @12:13PM (#23292200)
You are also testing the underlying math with this: if the visualized model looks different from what can be seen in nature, the formula is flawed. If it looks similar to nature, it is probably a good model.
  • saw this on tv (Score:3, Informative)

    by lucky130 ( 267588 ) on Sunday May 04, 2008 @12:22PM (#23292306)
A little over a third of the way through s02e09 of The Universe here [tv.com], these guys talk about their simulation.
  • Re:flawed (Score:5, Informative)

    by pclminion ( 145572 ) on Sunday May 04, 2008 @12:37PM (#23292428)

    we understand little about it and the math formula used will be a half guess. supercomputer or not, results will be speculative at best.

    I don't think you understand how experiments work... If the results of the computations are something other than what is observed in nature, then the methods and/or equations are proven wrong. That is most certainly a NON-speculative result.

    Just because the model shows a burst of star stuff blowing out this way or that way in some particular configuration doesn't mean that scientists will leap up from their chairs and say "Stars do this, and we've proven it."

    You can never know if your models are correct. All you can do is continually test them and try to prove them wrong. Maxwell's equations have not been proven to be correct -- they've just never been shown to be wrong. This simulation is just a step on the path of evidence.

  • Re:honest question (Score:1, Informative)

    by Anonymous Coward on Sunday May 04, 2008 @12:52PM (#23292566)
    From TFA: "Blue Gene/P has more than 160,000 processors."

    BTW, 22 million hours = 2500 years, not 42 years.
  • Re:honest question (Score:4, Informative)

    by pclminion ( 145572 ) on Sunday May 04, 2008 @12:57PM (#23292614)
    A processor-hour is a single processor being utilized for an hour. This supercomputer has a lot of processors (as do all supercomputers, really).
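
    As a back-of-the-envelope check in Python (using the roughly 160,000 processors TFA quotes for Blue Gene/P):

        # Convert processor-hours to wall-clock time and to serial time.
        processor_hours = 22_000_000
        processors = 160_000            # Blue Gene/P figure quoted in TFA

        wall_clock_hours = processor_hours / processors
        print(wall_clock_hours)         # 137.5 hours, i.e. under 6 days,
                                        # if every processor ran the whole time

        print(processor_hours / (24 * 365))   # ~2511 years on one processor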
  • Computing in Cloud (Score:3, Informative)

    by JavaGenosse ( 1174861 ) on Sunday May 04, 2008 @02:28PM (#23293370)
    I think scientific modeling in a compute cloud is sexier, since it is way cheaper than 42 million processor-hours on dedicated hardware and allows for load spikes. If you don't see the difference between a lab grid and a cloud, go read Wikipedia or http://groups.google.ca/group/cloud-computing/browse_thread/thread/73e1030b18df3730?hl=en [google.ca]
  • by mikael ( 484 ) on Sunday May 04, 2008 @09:02PM (#23296056)
    If you visit the webpages of the various research departments working on visualisation and parallel processing, you can find many research papers on this and other topics:

    A study of parallel techniques for visualisation [ucdavis.edu].

    A parallel visualization pipeline for Terascale earthquake simulation [ucdavis.edu]

    Scientific Discovery through Advanced Visualization [ucdavis.edu]

    A case study in Supernovae Simulation Data [uchicago.edu]

    It's just amazing to find out how much is going on inside a star - not just the fusion of hydrogen and helium atoms, but intense magnetic fields that drive rivers of hydrogen and helium through rising and falling convection cells, which in turn create new magnetic fields.
  • by transonic_shock ( 1024205 ) on Sunday May 04, 2008 @09:56PM (#23296384) Homepage
    I had looked at their work a month ago when researching writing my own N-body code. Basically, this is an implementation of the Fast Multipole Method (used in N-body computation). Traditionally (or rather, in its naive form) an N-body code is of order N^2; the Fast Multipole algorithm (and Barnes-Hut, and the multitude of their derivatives) does it in N log N or better. You can have various kinds of physical phenomena occurring between two bodies/particles/points (gravitational, electromagnetic, etc.), and this method solves the physics for millions (or billions) of such particles making up a supernova. The entire 3D space is broken down into an oct-tree. You can traverse down to a group of particles (or one particle in Barnes-Hut), then traverse back up calculating the force. The basic idea is to make a group of particles at a large distance act like a single particle when calculating its potential on another particle. Mind you, "particle" here is a loose definition: it's really the most granular subdivision of space you can afford to calculate, hence the need for bigger computers for better accuracy. The parallelism is MPI-based. It's simpler to handle parallelism for N-body stuff than for Eulerian-grid-type problems.
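
    To make the oct-tree idea concrete, here is a minimal toy Barnes-Hut gravity sketch in Python. Every name and parameter in it is invented for illustration, and (as the reply below points out) this is not what the FLASH code itself does:

        import numpy as np

        class Cell:
            """One cube of the oct-tree; a leaf holds at most one particle."""
            def __init__(self, center, half):
                self.center, self.half = center, half   # cube center, half side
                self.mass = 0.0
                self.msum = np.zeros(3)   # mass-weighted position sum
                self.kids = None          # up to 8 child cubes once subdivided
                self.body = None          # (position, mass) of an occupied leaf

            def insert(self, pos, m):
                if self.mass == 0.0:              # empty leaf: store the body
                    self.body = (pos, m)
                else:
                    if self.kids is None:         # occupied leaf: subdivide
                        self.kids = {}
                        old, self.body = self.body, None
                        self._down(*old)
                    self._down(pos, m)
                self.mass += m
                self.msum += m * pos

            def _down(self, pos, m):
                octant = tuple(pos > self.center)         # one of 8 sub-cubes
                if octant not in self.kids:
                    shift = (np.array(octant, float) - 0.5) * self.half
                    self.kids[octant] = Cell(self.center + shift, self.half / 2)
                self.kids[octant].insert(pos, m)

        def accel(cell, pos, theta=0.5, G=1.0, soft=1e-9):
            """Acceleration at pos; distant cells act as single point masses."""
            if cell is None or cell.mass == 0.0:
                return np.zeros(3)
            d = cell.msum / cell.mass - pos       # vector to cell center of mass
            r = np.linalg.norm(d)
            if cell.kids is None:                 # occupied leaf
                if r < soft:                      # the target particle itself
                    return np.zeros(3)
                return G * cell.mass * d / r**3
            if (2 * cell.half) / r < theta:       # far away: point-mass approx
                return G * cell.mass * d / r**3
            return sum((accel(k, pos, theta, G, soft)
                        for k in cell.kids.values()), np.zeros(3))

        # Toy usage: 1000 random unit-mass particles in the unit cube.
        rng = np.random.default_rng(0)
        pts = rng.random((1000, 3))
        root = Cell(center=np.full(3, 0.5), half=0.5)
        for p in pts:
            root.insert(p, 1.0)
        print("acceleration on particle 0:", accel(root, pts[0]))

    The opening angle theta trades accuracy for speed: theta = 0 degenerates to the exact O(N^2) sum, while larger values lump ever bigger groups of particles into single point masses.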
  • by Anonymous Coward on Monday May 05, 2008 @03:01AM (#23297864)
    The code they are using, the FLASH code, is not a simple N-body code. The FLASH code is a multi-physics, adaptive-mesh, Eulerian hydrodynamics code. You may want to do a better job on your homework before posting.

    P.S. - I'm one of the original authors of the FLASH code.
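
    For readers who haven't met the term, "adaptive mesh" means the grid refines itself where the solution demands it (for example around a thin flame front) and stays coarse elsewhere. A minimal 1D Python illustration of that idea, invented for this comment (FLASH's block-structured AMR is far more sophisticated):

        import numpy as np

        def refine(cells, field, threshold):
            """Split any cell over which `field` jumps by more than threshold."""
            out = []
            for a, b in cells:
                if abs(field(b) - field(a)) > threshold:
                    mid = 0.5 * (a + b)
                    out += [(a, mid), (mid, b)]   # refine into two half cells
                else:
                    out.append((a, b))            # coarse cell is good enough
            return out

        # A steep front at x = 0.5, a crude stand-in for a flame front.
        front = lambda x: np.tanh((x - 0.5) / 0.01)

        cells = [(i / 10, (i + 1) / 10) for i in range(10)]  # coarse grid
        for _ in range(5):
            cells = refine(cells, front, threshold=0.1)

        widths = sorted(b - a for a, b in cells)
        print(len(cells), "cells; widths from", widths[0], "to", widths[-1])

    After five passes the cells straddling the front are 32 times narrower than the rest of the grid, at a fraction of the cost of refining everywhere.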
