Modeling Supernovae With a Supercomputer
A team of scientists at the University of Chicago will be using 22 million processor-hours to simulate the physics of exploding stars. The team will make use of the Blue Gene/P supercomputer at Argonne National Laboratory to analyze four different scenarios for type Ia supernovae. Included in the link is a video simulation of a thermonuclear flame busting its way out of a white dwarf. The processing time was made possible by the Department of Energy's INCITE program.
"Burning in a white dwarf can occur as a deflagration or as a detonation. 'Imagine a pool of gasoline and throw a match on it. That kind of burning across the pool of gasoline is a deflagration,' Jordan said. 'A detonation is simply if you were to light a stick of dynamite and allow it to explode.' In the Flash Center scenario, deflagration starts off-center of the star's core. The burning creates a hot bubble of less dense ash that pops out the side due to buoyancy, like a piece of Styrofoam submerged in water."
Re:flawed (Score:2, Insightful)
How is that not just hair splitting semantics?
Re:flawed (Score:5, Insightful)
In building a computer model/simulation, you generally follow these steps:
1) problem formulation - decide what you want to figure out, gather data, and establish the "reference behavior pattern"
2) formulate a mental model of the system - what entities are involved and how are they related?
3) build and debug your model
4) verification - this is where you ensure the model behaves as expected against specific sets of inputs. As you change inputs, does it do what you expect? (I turn up the volume knob, and the sound gets louder.)
5) validation - this is where you compare the results of the model with the reference data from the real world. If they don't match, you have to back up and figure out what's wrong: was the implementation of the model incorrect? Were your initial hypotheses incorrect? And if they do match, have you gathered enough real-world data to know your model is functioning well? How confident are you in this model's ability to represent the system you're interested in?
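The verification and validation steps above can be sketched in code. Here's a minimal Python toy using a hypothetical Newtonian-cooling model; all names, parameters, and "measured" numbers are illustrative assumptions, not anything from the article:

```python
import math

# Hypothetical model: Newtonian cooling, T(t) = T_env + (T0 - T_env) * exp(-k * t)
def cooling_model(t, t0=90.0, t_env=20.0, k=0.1):
    """Predicted temperature after time t (the 'model')."""
    return t_env + (t0 - t_env) * math.exp(-k * t)

def verify():
    """Verification: does the model respond to input changes as expected?
    E.g., a larger cooling constant k should mean faster cooling."""
    slow = cooling_model(10.0, k=0.05)
    fast = cooling_model(10.0, k=0.20)
    assert fast < slow, "higher k should give a lower temperature"
    # The temperature should approach the environment, never cross it.
    assert cooling_model(1000.0) > 20.0 - 1e-6

def validate(reference_data, tolerance=2.0):
    """Validation: compare model output against real-world reference data."""
    worst = max(abs(cooling_model(t) - temp) for t, temp in reference_data)
    return worst <= tolerance

# Made-up 'measurements' standing in for the reference behavior pattern.
observations = [(0.0, 89.5), (5.0, 63.0), (10.0, 46.0), (20.0, 29.5)]

verify()
print("model valid within tolerance:", validate(observations))
```

Note the distinction: `verify()` only checks the model against our own expectations of its behavior, while `validate()` checks it against data from the system being modeled, and it's the tolerance in `validate()` that quantifies how confident we can be.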
Even once you've built a model that you can validate against gathered data, you still have to demonstrate that your model is valuable to the scientific community.
You're probably going out on a limb if you make strong assertions when a model predicts behavior that has not been observed. However, a model can play a valuable role in helping determine what else to look for and what conditions may exist, or in revealing relationships that you may not have seen before.
The controversy is over how much you can use a model to help prove a hypothesis.