
Modeling Supernovae With a Supercomputer

A team of scientists at the University of Chicago will use 22 million processor-hours to simulate the physics of exploding stars. The team will run its calculations on the Blue Gene/P supercomputer at Argonne National Laboratory to analyze four different scenarios for type Ia supernovae. The linked article includes a video simulation of a thermonuclear flame bursting its way out of a white dwarf. The processing time was granted through the Department of Energy's INCITE program. "Burning in a white dwarf can occur as a deflagration or as a detonation. 'Imagine a pool of gasoline and throw a match on it. That kind of burning across the pool of gasoline is a deflagration,' Jordan said. 'A detonation is simply if you were to light a stick of dynamite and allow it to explode.' In the Flash Center scenario, deflagration starts off-center of the star's core. The burning creates a hot bubble of less dense ash that pops out the side due to buoyancy, like a piece of Styrofoam submerged in water."
  • Re:flawed (Score:2, Insightful)

    by maxume ( 22995 ) on Sunday May 04, 2008 @12:27PM (#23292332)
    So if you have a model, and you run what you think is a simulation, and the prediction from your model then turns out to be incorrect, you should have been calling your simulation speculation all along?

    How is that not just hair-splitting semantics?
  • Re:flawed (Score:5, Insightful)

    by hazem ( 472289 ) on Sunday May 04, 2008 @02:25PM (#23293352) Journal
    There are several steps in constructing a useful model, and the last, and most controversial, is "model validation".

    In building a computer model/simulation, you generally follow these steps:

    1) problem formulation - figure out what you want to learn, gather data, and establish the "reference behavior pattern"
    2) formulate a mental model of the system - what entities are involved, and how are they related?
    3) build and debug your model
    4) verification - this is where you ensure the model behaves as expected against specific sets of inputs: as you change inputs, does it do what you expect? (I turn up the volume knob, and the sound gets louder.)

    5) validation - this is where you compare the results of the model with reference data from the real world. If they don't match, you have to back up and figure out what's wrong: was the implementation of the model incorrect? Were your initial hypotheses incorrect? And if they do match, have you gathered enough real-world data to know your model is functioning well? How confident are you in this model's ability to represent the system you're interested in? (A toy sketch of steps 4 and 5 follows below.)
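
    To make steps 4 and 5 concrete, here is a minimal Python sketch (my own toy example, not the Flash Center's code): a fake one-knob "simulation", a verification check that the knob does what we expect, and a validation check against reference data that is fabricated here purely for illustration.

        import numpy as np

        # Toy stand-in for a simulation: exponential decay with one "knob" (rate).
        def simulate(rate, t):
            return np.exp(-rate * t)

        t = np.linspace(0.0, 5.0, 50)

        # Step 4, verification: turning the knob up should speed up the decay,
        # i.e. the model responds to a known input change in the expected direction.
        slow, fast = simulate(0.5, t), simulate(2.0, t)
        assert np.all(fast <= slow), "verification failed: knob had no effect"

        # Step 5, validation: compare the model against reference data from the
        # real world (fabricated here with noise, purely for illustration).
        rng = np.random.default_rng(0)
        reference = simulate(1.0, t) + rng.normal(0.0, 0.01, t.size)
        rmse = np.sqrt(np.mean((simulate(1.0, t) - reference) ** 2))
        assert rmse < 0.05, "validation failed: model disagrees with observations"

    In the real case the reference data would be astronomical observations, not numbers you generated yourself - which is exactly why validating a supernova model is the hard part.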

    Even once you've built a model that you can validate against gathered data, you still have to demonstrate that the model is valuable to the scientific community.

    You're probably going out on a limb if you make strong assertions when a model demonstrates/predicts behavior that has not been observed. However, a model can play a valuable role in helping determine what else to look for, what conditions may exist, or in revealing relationships you may not have seen before.

    The controversy is over how much you can use a model to help prove a hypothesis.
  • Re:flawed (Score:2, Insightful)

    by ScreamingCactus ( 1230848 ) on Monday May 05, 2008 @01:33AM (#23297542)
    "the phenomenon that the majority of the gravitational effects within galaxies are unaccounted for, but are now commonly attributed to some kind of 'invisible matter'" just seemed like too much to write. Alas, I ended up writing it anyway.
