Supercomputing Stats Technology

Wielding Supercomputers To Make High-Stakes Predictions

aarondubrow writes "The emergence of the uncertainty quantification field was initially spurred in the mid-1990s by the federal government's desire to use computer models to predict the reliability of nuclear weapons. Since then, the toll of high-stakes events that might have been better anticipated had improved predictive computer models been available — like the Columbia disaster, Hurricane Katrina and the collapse of the World Trade Center after the 9/11 terrorist attacks — has catapulted research on uncertainty quantification to the scientific and engineering forefront." (Read this with your Texas propaganda filter turned to High.)
Comments Filter:
  • by RichMan ( 8097 ) on Thursday December 08, 2011 @10:13AM (#38302444)

    Seems to me all the supercomputer models are predicting that the disaster called global climate change is driven by human CO2 emissions. We have predicted it. It has a clear human cause against which we can take direct action. Over the next 50 years, billions of people will be displaced, and trillions of dollars' worth of infrastructure will be lost to rising oceans.

    Are we doing anything? Seems to me the whole prediction thing is useless if we are unwilling to take action on the results.
    Is it because the results are wrong, or is it because it involves money in people's pockets?

    We can make the predictions; now we need to remove the barriers to action.

  • by mcgrew ( 92797 ) * on Thursday December 08, 2011 @10:37AM (#38302654) Homepage Journal

    What was old is new again. [usatoday.com]

    In a few hours on Nov. 4, 1952, Univac altered politics, changed the world's perception of computers and upended the tech industry's status quo. Along the way, it embarrassed CBS long before Dan Rather could do that all by himself.

    Computers were the stuff of science fiction and wide-eyed articles about "electric brains." Few people had actually seen one. Only a handful had been built, among them the first computer, ENIAC, created by J. Presper Eckert and John Mauchly at the University of Pennsylvania in the 1940s.

    In summer 1952, a Remington Rand executive approached CBS News chief Sig Mickelson and said the Univac might be able to plot early election-night returns against past voting patterns and spit out a predicted winner. Mickelson and anchor Walter Cronkite thought the claim was a load of baloney but figured it would at least be entertaining to try it on the air.

    On election night, the 16,000-pound Univac remained at its home in Philadelphia. In the TV studio, CBS set up a fake computer -- a panel embedded with blinking Christmas lights and a teletype machine. Cronkite sat next to it. Correspondent Charles Collingwood and a camera crew set up in front of the real Univac.

    By 8:30 p.m. ET -- long before news organizations of the era knew national election outcomes -- Univac spit out a startling prediction. It said Eisenhower would get 438 electoral votes to Stevenson's 93 -- a landslide victory. Because every poll had said the race would be tight, CBS didn't believe the computer and refused to air the prediction.

    Under pressure, [statistician Max] Woodbury rejigged the algorithms. Univac then gave Eisenhower 8-to-7 odds over Stevenson. At 9:15 p.m., Cronkite reported that on the air. But Woodbury kept working and found he'd made a mistake. He ran the numbers again and got the original results -- an Eisenhower landslide.

    Late that night, as actual results came in, CBS realized Univac had been right. Embarrassed, Collingwood came back on the air and confessed to millions of viewers that Univac had predicted the results hours earlier.

    In fact, the official count ended up being 442 electoral votes for Eisenhower and 89 for Stevenson. Univac had been off by less than 1%. It had missed the popular vote results by only 3%. Considering that the Univac had 5,000 vacuum tubes that did 1,000 calculations per second, that's pretty impressive. A musical Hallmark card has more computing power.

  • by Anonymous Coward on Thursday December 08, 2011 @10:48AM (#38302754)

    The title is misleading and not really correct, because it doesn't describe the main thrust of the project. What the group at Texas is trying to do is change the way computer models make predictions, because they recognize that predicting events like Katrina or 9/11 with any kind of accuracy, based on essentially no data, is basically impossible, and that even when prediction is possible, it's still full of uncertainty.

    They don't want the models to spit out a single answer (e.g. "There will be a sea level rise of 10 centimeters within 20 years"), but rather a probability distribution ("The sea level rise over the next twenty years can be modeled as a normal distribution with a mean of 10 centimeters and a standard deviation of 5 centimeters"). The distribution is supposed to reflect uncertainties that arise at various stages of the modeling process, such as model assumptions and data collection. (A rough sketch of that idea appears at the end of this comment.)

    Personally, I think in certain cases these techniques are great, and in other cases they are worse than useless. If you have a model that's supposed to predict terrorist attacks, it will happily tell you that with 90% confidence the probability of a terrorist attack in Location X in the next year is between 1 and 3 percent. This may be perfectly correct, but highly misleading, because fundamentally the event is not probabilistic, and the only reason it appears to be is that a key piece of data is missing. As such, what the computer should really do is say the following. "Dear DHS: I don't know whether the terrorists are planning the attack. If they are, it is very likely to occur. If they aren't, it won't happen. Please do your job and go find out whether they are or not, and let me do more interesting things. Sincerely, Computer."
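    Below is a minimal, illustrative sketch of the distribution-instead-of-point-estimate idea described above: it propagates assumed input uncertainties through a toy forward model by Monte Carlo sampling and reports a predictive distribution rather than a single number. The model, parameter values, and priors are hypothetical and not taken from the Texas group's work.

```python
# Hypothetical Monte Carlo sketch: propagate uncertain inputs through a toy
# model and report a predictive distribution instead of a single answer.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Assumed (made-up) input uncertainties, in cm of sea level rise per decade:
thermal_expansion = rng.normal(loc=3.0, scale=1.5, size=n_samples)
ice_melt = rng.normal(loc=2.0, scale=1.0, size=n_samples)

# Toy forward model: total rise over two decades.
rise_cm = 2.0 * (thermal_expansion + ice_melt)

mean, std = rise_cm.mean(), rise_cm.std()
lo, hi = np.percentile(rise_cm, [5, 95])
print(f"Predicted rise: mean {mean:.1f} cm, std {std:.1f} cm")
print(f"90% interval: [{lo:.1f}, {hi:.1f}] cm")
```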

  • by statdr ( 1585151 ) on Thursday December 08, 2011 @10:57AM (#38302834)
    That doesn't make much sense. Models don't just spit out one answer. Models report estimates along with estimates of the uncertainty (typically standard errors) of those estimates. These uncertainty estimates define the probability distribution from which the events being modeled derive. Of course, the underlying distributions can be quite complex; not just the simple case of a one-dimensional distribution defined by one parameter. Computers are useful when the number of dimensions of the underlying distribution is large and when its form isn't some convenient structure. However, while a supercomputer can handle models with complex underlying distributions, that doesn't mean you'll get anything useful out of the modeling exercise. You still need lots of data on events (and non-events) to predict the events with any degree of accuracy and precision. (A minimal example of reporting estimates with standard errors follows below.)
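    To make the parent's point concrete, here is a minimal sketch of what "estimates plus standard errors" looks like in practice: it fits a straight line to synthetic noisy data and reports each parameter with its uncertainty. The data, model, and numbers are made up purely for illustration.

```python
# Illustrative only: fit a line to synthetic noisy data and report each
# parameter estimate together with its standard error.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

# polyfit with cov=True returns the estimates and their covariance matrix;
# the square roots of the diagonal entries are the standard errors.
coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
std_errs = np.sqrt(np.diag(cov))

for name, est, se in zip(("slope", "intercept"), coeffs, std_errs):
    print(f"{name}: {est:.2f} +/- {se:.2f}")
```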
