
Wielding Supercomputers To Make High-Stakes Predictions 65

Posted by timothy
from the but-sir-this-research-could-explode dept.
aarondubrow writes "The emergence of the uncertainty quantification field was initially spurred in the mid-1990s by the federal government's desire to use computer models to predict the reliability of nuclear weapons. Since then, the toll of high-stakes events that could potentially have been better anticipated if improved predictive computer models had been available — like the Columbia disaster, Hurricane Katrina and the World Trade Center collapse after the 9/11 terrorist attacks — has catapulted research on uncertainty quantification to the scientific and engineering forefronts." (Read this with your Texas propaganda filter turned to High.)
  • by SrJsignal (753163) on Thursday December 08, 2011 @10:04AM (#38302350)
    ...thanks a lot....Ass
    • by ByOhTek (1181381)

      I read the article, and my first thought was, yeah, there is a lot of arrogance and pride. But I also looked at the specs of the supercomputer they are talking about. Half the local disk of the supercomputer we have at my workplace, but probably 2.5 times the CPU horsepower, and 50% more overall memory. We aren't known for being slouches in that department, so UT might have some reason for that arrogance.

      I'll be interested to see what the comparison looks like when our supercomputer is upgraded at the end

      • by mcgrew (92797) * on Thursday December 08, 2011 @10:37AM (#38302654) Homepage Journal

        What was old is new again. [usatoday.com]

        In a few hours on Nov. 4, 1952, Univac altered politics, changed the world's perception of computers and upended the tech industry's status quo. Along the way, it embarrassed CBS long before Dan Rather could do that all by himself.

        Computers were the stuff of science fiction and wide-eyed articles about "electric brains." Few people had actually seen one. Only a handful had been built, among them the first computer, ENIAC, created by J. Presper Eckert and John Mauchly at the University of Pennsylvania in the 1940s.

        In summer 1952, a Remington Rand executive approached CBS News chief Sig Mickelson and said the Univac might be able to plot early election-night returns against past voting patterns and spit out a predicted winner. Mickelson and anchor Walter Cronkite thought the claim was a load of baloney but figured it would at least be entertaining to try it on the air.

        On election night, the 16,000-pound Univac remained at its home in Philadelphia. In the TV studio, CBS set up a fake computer -- a panel embedded with blinking Christmas lights and a teletype machine. Cronkite sat next to it. Correspondent Charles Collingwood and a camera crew set up in front of the real Univac.

        By 8:30 p.m. ET -- long before news organizations of the era knew national election outcomes -- Univac spit out a startling prediction. It said Eisenhower would get 438 electoral votes to Stevenson's 93 -- a landslide victory. Because every poll had said the race would be tight, CBS didn't believe the computer and refused to air the prediction.

        Under pressure, Woodbury rejigged the algorithms. Univac then gave Eisenhower 8-to-7 odds over Stevenson. At 9:15 p.m., Cronkite reported that on the air. But Woodbury kept working and found he'd made a mistake. He ran the numbers again and got the original results -- an Eisenhower landslide.

        Late that night, as actual results came in, CBS realized Univac had been right. Embarrassed, Collingwood came back on the air and confessed to millions of viewers that Univac had predicted the results hours earlier.

        In fact, the official count ended up being 442 electoral votes for Eisenhower and 89 for Stevenson. Univac had been off by less than 1%. It had missed the popular vote results by only 3%. Considering that the Univac had 5,000 vacuum tubes that did 1,000 calculations per second, that's pretty impressive. A musical Hallmark card has more computing power.

      • Don't read too much into it. It's a utexas.edu news bite about an upcoming large research project at... UTexas. Literally every major university's website has a front page link about a big new research project on campus.

    • by forkfail (228161)

      And here I thought it was a reference to what Texas is doing to textbooks...

  • sounds a bit shaky to me...

  • by RichMan (8097) on Thursday December 08, 2011 @10:13AM (#38302444)

    Seems to me all the supercomputer models are predicting that the disaster called global climate change is driven by human CO2 emissions. We have predicted it. It has a definite human cause against which we can take direct action. Over the next 50 years billions of people will be displaced. Trillions of dollars of infrastructure will be lost to rising oceans.

    Are we doing anything? Seems to me the whole prediction thing is useless if we are unwilling to take action on the results.
    Is it because the results are wrong, or is it because it involves money in people's pockets?

    We can make the predictions, we need to remove the barriers to action.

    • by khallow (566160)

      Over the next 50 years billions of people will be displaced.

      By what? Those supercomputer models aren't predicting that global warming will be the cause.

    • by Anonymous Coward

      There is a strange dichotomy with humans with respect to the cause of an event being by other humans or by nature (as if they are different). If 30,000 people die due to a flood or earthquake, a couple of years later it will just be known as The Flood of 2001 or forgotten entirely. If 3,000 people die due to a human plot, it's remembered and re-remembered periodically on the exact date. Just yesterday we were remembering the 70th anniversary of Pearl Harbor. For example, the recent quake in Haiti killed o

      • by khallow (566160)

        There is a strange dichotomy with humans with respect to the cause of an event being by other humans or by nature (as if they are different).

        It's not strange when you figure that a human-caused event can be repeated frequently, if the culprits can get away with it. Natural disasters don't increase in frequency just because one has happened.

        In elementary terms, I think it has to do with the Prisoners' Dilemma. As you probably recall, the game is set up so that if two people cooperate they do ok. If one cooperates and one cheats, then the cheater does a lot better. And if both cheat, then both suffer worse than if they had both cooperated (but not as a coo

    • by RichMan (8097)

      Global climate change will lead to sea level rises. This will displace people.
      It is rising; see the sea level graph at http://www.climate.org/topics/sea-level/index.html#sealevelrise

      The US has 57,000 km^2 of land below 1.5 m above sea level, and another 33,000 km^2 between 1.5 m and 3.5 m. (Source: epa.gov)

      "Over 600 million people live in coastal areas that are less than 10 meters above sea level, and two-thirds of the world’s cities that have populations over five million are located in these at-risk areas (http:/

    • Another old computer rule: Garbage In, Garbage Out. Penn State buggers the data, and forgets to finish the whole energy equation, leaving unmodeled terms like non-radiant solar energy. According to more predictive OLD models, we are more likely to experience unusual temperature declines across the next 30 years.
    • The problem is, the computers don't tell what to do about it.

    • by forkfail (228161)

      To ignore the oncoming train is, unfortunately, pretty ingrained into human nature, and has been for some time.

      See also Cassandra.

  • Since then, the toll of high-stakes events that could potentially have been better anticipated if improved predictive computer models had been available — like the Columbia disaster, Hurricane Katrina and the World Trade Center collapse after the 9/11 terrorist attacks — has catapulted research on uncertainty quantification to the scientific and engineering forefronts

    How sure are we that the tolls could have been better anticipated?

    We should leverage a supercomputer to calculate the potential that each high-stakes event can be better anticipated by a supercomputer model. Then simply pool our resources and use greater predictive computing power for the events we have the most potential to anticipate.

    I put it to you that once such a model can be computed, it will be trivial to use predictive computer models to determine which supercomputer will predict the mos

  • Chaotic Systems (Score:3, Insightful)

    by Tokolosh (1256448) on Thursday December 08, 2011 @10:32AM (#38302610)

    Some things can be well-modeled by using good input data and fine-grained analysis, which may require supercomputers.

    A problem arises when inherently chaotic (in the mathematical sense) systems are modeled. No amount of computing power will improve the quality of the results.

    It may be hard to know what type of system you are dealing with.

    And by definition, black swans cannot be modeled at all.

    • by Anonymous Coward

      And by definition, black swans cannot be modeled at all.
       
      ... because after all these years, I'd still let Natalie Portman [imdb.com] model me with a bowl of hot grits anyday.

    • As a researcher in computational sciences (and chaotic systems) similar to the work in the news blurb posted here: no. There are many methods for dealing with chaotic/uncertain inputs; Monte Carlo approaches, for example.
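For anyone unfamiliar with the approach the parent mentions, here is a minimal sketch of Monte Carlo propagation of input uncertainty. The model function and the input distribution are hypothetical stand-ins for illustration, not anything from TFA:

```python
import random
import statistics

def model(x):
    # Hypothetical stand-in for an expensive simulation:
    # a simple nonlinear response to the input.
    return x ** 2 + 0.5 * x

def propagate(n_samples=100_000, mean=1.0, stdev=0.1):
    # Draw the uncertain input from an assumed Gaussian, push each
    # draw through the model, and summarize the output distribution.
    outputs = [model(random.gauss(mean, stdev)) for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

random.seed(42)
mu, sigma = propagate()
print(f"output mean ~ {mu:.3f}, output stdev ~ {sigma:.3f}")
```

The point is that the output uncertainty falls out of the sampling itself; no closed-form error analysis of the model is needed, which is why the technique copes with inputs that are only known as distributions.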

      • by Tokolosh (1256448)

        Ok, so I assume you use a Monte Carlo technique to generate probabilities of outcomes. But does having supercomputers improve the accuracy of the results, with any certainty?

        • Ok, so I assume you use a Monte Carlo technique to generate probabilities of outcomes. But does having supercomputers improve the accuracy of the results, with any certainty?

          Well yes, of course. That's the entire purpose of uncertainty prediction, and HPC simulations in general. In any kind of complex numerical simulation (say, turbulent aerodynamics), the accuracy with which you can simulate a given physical situation is entirely constrained by your available computational power. You must find a balance between the level of detail you need versus computer power available (i.e., direct numerical simulation of turbulence for a full-sized aircraft is both entirely unfeasible comp

  • Having a supercomputer won't help predict rare events unless you already have a particular mathematical model for those events (see physics). If you don't have a model for how rare events occur (terrorism events, natural disasters) then a computer (of any type) won't help you predict them. If you want to build a model then you need lots and lots of events (and nonevents) and associated data to try to build a model. If you have a lot of data, perhaps you'd need a supercomputer to investigate the interim
  • by divisionbyzero (300681) on Thursday December 08, 2011 @10:40AM (#38302686)

    The only way to validate the model is to apply it and see if it works. The problem with high-risk disasters is that they don't happen all that often so it's hard to validate the model. I mean sure you can special case it to death to get it to predict "the Columbia disaster, Hurricane Katrina and the World Trade Center collapse" but if you special case it too much it loses predictive ability for similar but not identical events.

    The reason there is so much "uncertainty" (not for me but many others) around climate change is that it is practically a singular event that'll occur 50-100 years in the future. Of course the models can be validated as we go but how much validation is enough? When it's too late?

    • The only way to validate the model is to apply it and see if it works. The problem with high-risk disasters is that they don't happen all that often so it's hard to validate the model. I mean sure you can special case it to death to get it to predict "the Columbia disaster, Hurricane Katrina and the World Trade Center collapse" but if you special case it too much it loses predictive ability for similar but not identical events.

      True, but, despite the /. summary, the article really isn't about predicting events so much as trying to assess the level of uncertainty around the results of the model. By quantifying the uncertainty you can better use the results to decide what to do. Essentially, you want to be able to say "I think this will be the outcome, but I am only so sure about the accuracy of my prediction." It's not really about predicting the future (in the sense of "what event will occur") but what will happen if x occurs and how certain am I about that result.

      • The only way to validate the model is to apply it and see if it works. The problem with high-risk disasters is that they don't happen all that often so it's hard to validate the model. I mean sure you can special case it to death to get it to predict "the Columbia disaster, Hurricane Katrina and the World Trade Center collapse" but if you special case it too much it loses predictive ability for similar but not identical events.

        True, but, despite the /. summary, the article really isn't about predicting events so much as trying to assess the level of uncertainty around the results of the model. By quantifying the uncertainty you can better use the results to decide what to do. Essentially, you want to be able to say "I think this will be the outcome, but I am only so sure about the accuracy of my prediction." It's not really about predicting the future (in the sense of "what event will occur") but what will happen if x occurs and how certain am I about that result.

        Yeah, I hate to go all Taleb but that doesn't make sense from a Black Swan point of view. The catastrophes outside the model are always the worst catastrophes because they are outside the model.

        • The only way to validate the model is to apply it and see if it works. The problem with high-risk disasters is that they don't happen all that often so it's hard to validate the model. I mean sure you can special case it to death to get it to predict "the Columbia disaster, Hurricane Katrina and the World Trade Center collapse" but if you special case it too much it loses predictive ability for similar but not identical events.

          True, but, despite the /. summary, the article really isn't about predicting events so much as trying to assess the level of uncertainty around the results of the model. By quantifying the uncertainty you can better use the results to decide what to do. Essentially, you want to be able to say "I think this will be the outcome, but I am only so sure about the accuracy of my prediction." It's not really about predicting the future (in the sense of "what event will occur") but what will happen if x occurs and how certain am I about that result.

          Yeah, I hate to go all Taleb but that doesn't make sense from a Black Swan point of view. The catastrophes outside the model are always the worst catastrophes because they are outside the model.

          True, but they aren't trying to predict Black Swan events, at least not from my read of TFA. Take Columbia, for example. If, after running the damage model, it had said "there is only a 20% chance we are right - i.e. there is a high degree of uncertainty surrounding our results," then NASA would have known to further analyze the situation. They weren't trying to say "there is x% chance a shuttle will suffer damage on launch that results in catastrophic failure of the vehicle on reentry."

          By being more certain about

  • Scepticism... (Score:5, Insightful)

    by shic (309152) on Thursday December 08, 2011 @10:44AM (#38302730)

    I like supercomputers in the same way I like architectural monuments - there's an element of beauty in stretching technology to ever more extreme goals, but I'm far from convinced that there's an objective, practical point to any of the calculations they make.

    I'm very sceptical about climate change prediction - because, without any calculation, it's blindingly obvious that climate will change (all evidence suggests vast changes throughout history) and - because mankind is significant among life on earth - obviously we should assume a fair chunk to be 'man made'. I seldom see the questions that matter addressed... for example, in what ways can we expect climate change to be beneficial to mankind? When we ask the wrong questions, no matter how large-scale or accurate our computation, it will be worthless.

    Don't get me wrong, I see immense value in forecasting... but I don't see available computational power as a limiting factor... in my opinion there are two critical issues for forecasting: (1) collecting relevant data accurately; (2) establishing the right kind of summaries and models. While some models are computationally expensive - in my opinion - the reason for attempting to brute-force these models has far less to do with objective research and far more to do with political will to have a concrete answer irrespective of its relevance... The complexity of extensive computation is exploited to lend an air of credibility, in most cases, IMHO.

    "Don't worry about the future. Or worry, but know that worrying is as effective as trying to solve an algebra equation by chewing bubble gum. The real troubles in your life are apt to be things that never crossed your worried mind, the kind that blindside you at 4 p.m. on some idle Tuesday."

    The reason is simple: avoidable disasters occur not because we haven't done enough calculations - but because the calculations we do are done for the wrong reasons and produce irrelevant results. If we want to move forwards, we need more observation and more intelligent consideration. Iterating existing formulas beyond the extent possible with off-the-shelf technology, IMHO, is unlikely to yield anything significant.

    • (1) collecting relevant data accurately; (2) establishing the right kind of summaries and models.

      Yes, you are right. But due to sensitivity to initial conditions and a positive Lyapunov exponent [wikipedia.org], the number of days you are able to forecast scales only logarithmically with your computing power, even with near-perfect knowledge of the initial conditions. So yes, bigger is better when it comes to weather prediction.
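The logarithmic scaling can be illustrated with a toy calculation (the growth rate and error numbers below are made up for illustration; nothing here comes from a real weather model). If initial-condition error grows as e0 * exp(lambda * t), the useful forecast horizon grows only with the logarithm of whatever error reduction extra computing buys you:

```python
import math

LYAPUNOV = 0.5    # assumed error-doubling rate per day (illustrative)
TOLERANCE = 1.0   # error level at which the forecast becomes useless

def forecast_horizon(initial_error):
    # Days until initial_error * exp(LYAPUNOV * t) reaches TOLERANCE.
    return math.log(TOLERANCE / initial_error) / LYAPUNOV

# Halving the effective initial error (say, by doubling resolution)
# adds a constant number of days, not a constant factor:
for e0 in (1e-3, 5e-4, 2.5e-4):
    print(f"e0 = {e0:.2e} -> horizon = {forecast_horizon(e0):.2f} days")
```

In this toy setup each halving of the initial error buys exactly log(2)/LYAPUNOV extra days (about 1.39 here), which is the "double the supercomputer, gain one day" behaviour discussed further down the thread.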

      • by shic (309152)

        The diminishing returns implied by the Lyapunov exponent definitely lend credibility to my claim that much of supercomputing is objectively pointless, but I was anxious not to focus upon only one of the ways in which calculations might be irrelevant.

        I'd agree that "bigger is better" - but only if we exclude cost from our assessment. With significant financial overheads for marginal improvement in accuracy, I have to wonder - at the extremes of industry practice - might the same funding have been more effect

        • The point is that by doubling your supercomputer size, you gain one day of weather forecast. Yes, that is a very small gain, in some mathematical sense; however, if you are a farmer and you're planning your harvest, that's huge. Same if you're a fisherman and want to stay out there as long as possible until the winter storm actually hits. For society, this information is expensive to obtain, but the returns on the investment are great.
          • by shic (309152)

            Doubling your computational effort to extend your weather forecast to a 24th day might well be justified, as might doubling it again to get an extra hour. Doubling again to get the next few minutes, or again for an extra few seconds is far harder to justify - especially as other addressable factors might have greater influence on the uncertainty of the predictions.

            We clearly have a different subjective take on the typical practical value of calculations at the cutting edge of 'brute-force' computation. Wit

    • by EETech1 (1179269)

      Yeah, all the land that used to be forest or grassland cleaning the air and providing space for life to live, is now covered in black asphalt soaking up heat, covered in millions of pollutant spewing vehicles filled with air-conditioned lead-footed egomaniacs radiating thousands of BTUs a minute of centuries old carbon into a smothered landscape no longer able to contain plant life and therefore clean itself. Headed from Life_Of_Consumption pounding out 60 miles each way to work at PollutionCorp LLC, suckin

  • by Anonymous Coward

    The title is misleading and not really correct, because it doesn't describe the main thrust of the project. What the group at Texas is trying to do is change the way computer models make predictions, because they recognize that predicting events like Katrina or 9/11 with any kind of accuracy, based on essentially no data, is basically impossible, and that even when prediction is possible, it's still full of uncertainty.

    They don't want the models to spit out a single answer (e.g. "There will be a sea level r

    • Re: (Score:2, Interesting)

      by statdr (1585151)
      That doesn't make much sense. Models don't just spit out one answer. Models will report estimates and estimates of the uncertainty (typically standard errors) of the estimates. These uncertainty estimates define the probabilistic distribution from which the events being modeled derive. Of course, there can be quite complex underlying probabilistic distributions; not just the simple case of a one-dimensional distribution defined by one parameter. Computers are useful when the number of dimensions of th
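As a concrete (and entirely hypothetical) illustration of the estimate-plus-standard-error reporting described above, in the simplest one-dimensional case:

```python
import math
import statistics

# Hypothetical observations of some quantity being modeled.
data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1]

estimate = statistics.mean(data)
# Standard error of the mean: sample stdev / sqrt(n).
std_err = statistics.stdev(data) / math.sqrt(len(data))

print(f"estimate = {estimate:.3f} +/- {std_err:.3f}")
```

A real model would report such a pair (or a full covariance structure) for every parameter, which is where the dimensionality problem the parent mentions comes in.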
  • This is a topic of great interest in aerodynamics. The aim is to understand how uncertainties in the input data (flow conditions, geometric imperfections, ...) affect the predicted aircraft performance. Some research has already taken place in Europe; for example, see the project nodesim (http://www.nodesim.eu).
  • win the lottery? :)
