Wielding Supercomputers To Make High-Stakes Predictions
aarondubrow writes "The emergence of the uncertainty quantification field was initially spurred in the mid-1990s by the federal government's desire to use computer models to predict the reliability of nuclear weapons. Since then, the toll of high-stakes events that could potentially have been better anticipated if improved predictive computer models had been available — like the Columbia disaster, Hurricane Katrina and the World Trade Center collapse after the 9/11 terrorist attacks — has catapulted research on uncertainty quantification to the scientific and engineering forefront." (Read this with your Texas propaganda filter turned to High.)
"Texas propaganda filter"???? (Score:3, Insightful)
Re: (Score:2)
I read the article, and my first thought was, yeah, there is a lot of arrogance and pride. But I also looked at the specs of the supercomputer they are talking about. It has half the local disk of the supercomputer we have at my workplace, but probably 2.5 times the CPU horsepower, and 50% more overall memory. We aren't known for being slouches in that department, so UT might have some reason for that arrogance.
I'll be interested to see what the comparison looks like when our supercomputer is upgraded at the end
federal government's desire to use computer models (Score:5, Interesting)
What was old is new again. [usatoday.com]
Re: (Score:2)
Don't read too much into it. It's a utexas.edu news bite about an upcoming large research project at... UTexas. Literally every major university's website has a front page link about a big new research project on campus.
Re: (Score:2)
I've lived in quite a few places... (I attended 10 schools before graduating from high school!) What's your favorite state?
Yes, Texans brag... almost as much as the Irish, I think.
Re: (Score:2)
And as you sit there munching on your hamburger and sipping on Dr Pepper, remember those were invented in Texas too.
Not to mention fire, the wheel and pole dancing.
Re: (Score:2)
And here I thought it was a reference to what Texas is doing to textbooks...
Uncertainty quantification... (Score:1)
sounds a bit shaky to me...
Like Global Climate Change ? (Score:5, Interesting)
Seems to me all the supercomputer models are predicting that the disaster called global climate change is driven by human CO2 emissions. We have predicted it. It has a decided human cause against which we can take direct action. Over the next 50 years billions of people will be displaced. Trillions of dollars' worth of infrastructure will be lost to rising oceans.
Are we doing anything? Seems to me the whole prediction thing is useless if we are unwilling to take action on the results.
Is it because the results are wrong, or is it because it involves money in people's pockets?
We can make the predictions, we need to remove the barriers to action.
Re: (Score:3)
Over the next 50 years billions of people will be displaced.
By what? Those supercomputer models aren't predicting that global warming will be the cause.
Re: (Score:1)
There is a strange dichotomy with humans with respect to the cause of an event being by other humans or by nature (as if they are different). If 30,000 people die due to a flood or earthquake, a couple of years later it will just be known as The flood of 2001 or forgotten entirely. If 3,000 people die due to a human plot, it's remembered and re-remembered periodically on the exact date. Just yesterday, we were remembering the 70th anniversary of Pearl Harbor. For example, the recent quake in Haiti killed o
Re: (Score:2)
There is a strange dichotomy with humans with respect to the cause of an event being by other humans or by nature (as if they are different).
It's not strange when you figure that a human-caused event can be repeated frequently if the culprits can get away with it. Natural disasters don't increase in frequency if one happens.
In elementary terms, I think it has to do with the Prisoner's Dilemma. As you probably recall, the game is set up so that if two people cooperate they do OK. If one cooperates and one cheats, then the cheater does a lot better. And if both cheat, then both suffer worse than if they had both cooperated (but not as badly as a cooperator does when the other cheats).
Re: (Score:2)
Global climate change will lead to sea level rises. This will displace people.
It is rising; see the sea level graph at http://www.climate.org/topics/sea-level/index.html#sealevelrise
The US has 57,000 km^2 of land below 1.5 m above sea level, and another 33,000 km^2 of land between 1.5 m and 3.5 m. (Source: epa.gov)
"Over 600 million people live in coastal areas that are less than 10 meters above sea level, and two-thirds of the world’s cities that have populations over five million are located in these at-risk areas (http:/
GIGO (Score:1)
Re: (Score:2)
The problem is, the computers don't tell us what to do about it.
Re: (Score:2)
To ignore the oncoming train is, unfortunately, pretty ingrained into human nature, and has been for some time.
See also Cassandra.
Who computes the computer? (Score:2, Troll)
Since then, the toll of high-stakes events that could potentially have been better anticipated if improved predictive computer models had been available — like the Columbia disaster, Hurricane Katrina and the World Trade Center collapse after the 9/11 terrorist attacks — has catapulted research on uncertainty quantification to the scientific and engineering forefront
How sure are we that the tolls could have been better anticipated?
We should leverage a supercomputer to calculate the potential that each high-stakes event can be better anticipated by a supercomputer model. Then simply pool our resources and use greater predictive computing power for the events we have the most potential to anticipate.
I put it to you that once such a model can be computed, it will be trivial to use predictive computer models to determine which supercomputer will predict the mos
Chaotic Systems (Score:3, Insightful)
Some things can be well-modeled by using good input data and fine-grained analysis, which may require supercomputers.
A problem arises when inherently chaotic (in the mathematical sense) systems are modeled. No amount of computing power will improve the quality of the results.
It may be hard to know what type of system you are dealing with.
And by definition, black swans cannot be modeled at all.
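A toy Python sketch of that chaos point (mine, with invented numbers, not from TFA): two logistic-map trajectories that start 1e-12 apart end up completely decorrelated within a few dozen iterations, and no amount of per-step computing precision changes that.

    # Two chaotic trajectories that start almost identically.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)  # fully chaotic at r = 4

    a, b = 0.3, 0.3 + 1e-12
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print("step %2d: |a - b| = %.3e" % (step, abs(a - b)))

The gap roughly doubles every step, so a trillion-fold better measurement of the initial state buys you only about 40 more usable steps.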
I'm not so sure of that... (Score:1)
And by definition, black swans cannot be modeled at all.
... because after all these years, I'd still let Natalie Portman [imdb.com] model me with a bowl of hot grits any day.
Re: (Score:2)
As a researcher in computational sciences (and chaotic systems) similar to the news blurb posted here, no. There are many methods for dealing with chaotic/uncertain inputs. Monte Carlo approaches, for example.
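A minimal sketch of the idea in Python (a toy example; the model function and its numbers are invented, not anything from the article): sample the uncertain inputs many times, push each sample through the model, and report a spread instead of a single number.

    import random

    def model(wind_speed, surge_coeff):
        # stand-in for an expensive simulation (formula is invented)
        return surge_coeff * wind_speed ** 2 / 100.0

    samples = sorted(
        model(random.gauss(45.0, 5.0),     # uncertain input: mean 45, sd 5
              random.uniform(0.8, 1.2))    # uncertain coefficient: +/-20%
        for _ in range(100000)
    )
    n = len(samples)
    print("mean estimate: %.2f" % (sum(samples) / n))
    print("90%% interval: [%.2f, %.2f]" % (samples[n // 20], samples[-(n // 20)]))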
Re: (Score:1)
Ok, so I assume you use a Monte Carlo technique to generate probabilities of outcomes. But does having supercomputers improve the accuracy of the results, with any certainty?
Re: (Score:2)
Ok, so I assume you use a Monte Carlo technique to generate probabilities of outcomes. But does having supercomputers improve the accuracy of the results, with any certainty?
Well yes, of course. That's the entire purpose of uncertainty prediction, and HPC simulations in general. In any kind of complex numerical simulation (say, turbulent aerodynamics), the accuracy with which you can simulate a given physical situation is entirely constrained by your available computational power. You must find a balance between the level of detail you need versus computer power available (i.e., direct numerical simulation of turbulence for a full-sized aircraft is both entirely unfeasible comp
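To put a rough number on it: for plain Monte Carlo the standard error shrinks like 1/sqrt(N), so each extra digit of accuracy costs about 100x the samples. A throwaway Python illustration (mine, not anyone's production code), estimating pi with random darts:

    import random

    def estimate_pi(n):
        hits = sum(1 for _ in range(n)
                   if random.random() ** 2 + random.random() ** 2 < 1.0)
        return 4.0 * hits / n

    for n in (1000, 100000, 10000000):
        est = estimate_pi(n)
        print("N = %8d: pi ~ %.6f (error %.1e)" % (n, est, abs(est - 3.141592653589793)))

That 1/sqrt(N) wall is exactly why the hardware matters: a supercomputer is how you afford the N you need, even though it never makes the uncertainty vanish.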
People don't understand models and computers (Score:1)
Predictions are only as good as the models... (Score:4, Insightful)
The only way to validate the model is to apply it and see if it works. The problem with high-risk disasters is that they don't happen all that often so it's hard to validate the model. I mean sure you can special case it to death to get it to predict "the Columbia disaster, Hurricane Katrina and the World Trade Center collapse" but if you special case it too much it loses predictive ability for similar but not identical events.
The reason there is so much "uncertainty" (not for me but many others) around climate change is that it is practically a singular event that'll occur 50-100 years in the future. Of course the models can be validated as we go but how much validation is enough? When it's too late?
Re: (Score:2)
The only way to validate the model is to apply it and see if it works. The problem with high-risk disasters is that they don't happen all that often so it's hard to validate the model. I mean sure you can special case it to death to get it to predict "the Columbia disaster, Hurricane Katrina and the World Trade Center collapse" but if you special case it too much it loses predictive ability for similar but not identical events.
True, but, despite the /. summary, the article really isn't about predicting events so much as trying to assess the level of uncertainty around the results of the model. By quantifying the uncertainty you can better use the results to decide what to do. Essentially, you want to be able to say "I think this will be the outcome, but I am only so sure about the accuracy of my prediction." It's not really about predicting the future (in the sense of "what event will occur") but what will happen if x occurs and how certain am I about that result.
Re: (Score:2)
The only way to validate the model is to apply it and see if it works. The problem with high-risk disasters is that they don't happen all that often so it's hard to validate the model. I mean sure you can special case it to death to get it to predict "the Columbia disaster, Hurricane Katrina and the World Trade Center collapse" but if you special case it too much it loses predictive ability for similar but not identical events.
True, but, despite the /. summary, the article really isn't about predicting events so much as trying to assess the level of uncertainty around the results of the model. By quantifying the uncertainty you can better use the results to decide what to do. Essentially, you want to be able to say "I think this will be the outcome, but I am only so sure about the accuracy of my prediction." It's not really about predicting the future (in the sense of "what event will occur") but what will happen if x occurs and how certain am I about that result.
Yeah, I hate to go all Taleb but that doesn't make sense from a Black Swan point of view. The catastrophes outside the model are always the worst catastrophes because they are outside the model.
Re: (Score:2)
The only way to validate the model is to apply it and see if it works. The problem with high-risk disasters is that they don't happen all that often so it's hard to validate the model. I mean sure you can special case it to death to get it to predict "the Columbia disaster, Hurricane Katrina and the World Trade Center collapse" but if you special case it too much it loses predictive ability for similar but not identical events.
True, but, despite the /. summary, the article really isn't about predicting events so much as trying to assess the level of uncertainty around the results of the model. By quantifying the uncertainty you can better use the results to decide what to do. Essentially, you want to be able to say "I think this will be the outcome, but I am only so sure about the accuracy of my prediction." It's not really about predicting the future (in the sense of "what event will occur") but what will happen if x occurs and how certain am I about that result.
Yeah, I hate to go all Taleb but that doesn't make sense from a Black Swan point of view. The catastrophes outside the model are always the worst catastrophes because they are outside the model.
True, but they aren't trying to predict Black Swan events, at least not from my RTFA. Take Columbia, for example. If, after running the damage model, it had said "there is only a 20% chance we are right - i.e. there is a high degree of uncertainty surrounding our results," then NASA would have known to further analyze the situation. They weren't trying to say "there is x% chance a shuttle will suffer damage on launch that results in catastrophic failure of the vehicle on reentry."
By being more certain about
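That 20% figure is really the input to a decision rule. A back-of-envelope Python version (all numbers invented for illustration): act whenever the expected loss from trusting an uncertain prediction exceeds the cost of acting.

    # Hypothetical numbers; the point is the comparison, not the values.
    p_wrong = 0.80          # model reports only a 20% chance it is right
    loss_if_wrong = 1000.0  # relative cost of an unhandled failure
    cost_to_act = 50.0      # relative cost of further analysis / mitigation

    if p_wrong * loss_if_wrong > cost_to_act:
        print("expected loss exceeds cost of acting: analyze further")
    else:
        print("under these numbers, acting is not worth it")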
Scepticism... (Score:5, Insightful)
I like supercomputers in the same way I like architectural monuments - there's an element of beauty in stretching technology to ever more extreme goals, but I'm far from convinced that there's an objective, practical, point to any of the calculations they make.
I'm very sceptical about climate change prediction - because, without any calculation, it's blindingly obvious that climate will change (all evidence suggests vast changes throughout history) and - because mankind is significant among life on earth - obviously we should assume a fair chunk to be 'man made'. I seldom see the questions that matter addressed... for example, in what ways can we expect climate change to be beneficial to mankind? When we ask the wrong questions, no matter how large-scale or accurate our computation, it will be worthless.
Don't get me wrong, I see immense value in forecasting... but I don't see available computational power as a limiting factor... in my opinion there are two critical issues for forecasting: (1) collecting relevant data accurately; (2) establishing the right kind of summaries and models. While some models are computationally expensive - in my opinion - the reason for attempting to brute-force these models has far less to do with objective research and far more to do with political will to have a concrete answer irrespective of its relevance... The complexity of extensive computation is exploited to lend an air of credibility, in most cases, IMHO.
"Don't worry about the future. Or worry, but know that worrying is as effective as trying to solve an algebra equation by chewing bubble gum. The real troubles in your life are apt to be things that never crossed your worried mind, the kind that blindside you at 4 p.m. on some idle Tuesday."
The reason is simple: avoidable disasters occur not because we haven't done enough calculations - but because the calculations we do are done for the wrong reasons and produce irrelevant results. If we want to move forwards, we need more observation and more intelligent consideration. Iterating existing formulas beyond the extent possible with off-the-shelf technology, IMHO, is unlikely to yield anything significant.
Re: (Score:2)
(1) collecting relevant data accurately; (2) establishing the right kind of summaries and models.
Yes, you are right. But due to sensitivity to initial conditions and a positive Lyapunov exponent [wikipedia.org], the number of days you are able to forecast scales only logarithmically with your computing power, even with near-perfect knowledge of the initial conditions. So yes, bigger is better when it comes to weather prediction.
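Spelling out the scaling (my own back-of-envelope; the exponent is invented): if errors grow like delta0 * exp(lambda * t), the useful horizon is T = ln(tolerance / delta0) / lambda, so every 10x improvement in initial accuracy buys the same fixed number of extra days.

    import math

    LAMB = 0.8   # assumed Lyapunov exponent, per day (illustrative only)
    TOL = 1.0    # error level at which the forecast stops being useful

    for delta0 in (1e-2, 1e-3, 1e-4, 1e-5):
        horizon = math.log(TOL / delta0) / LAMB
        print("initial error %.0e -> useful forecast ~ %.1f days" % (delta0, horizon))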
Re: (Score:2)
The diminishing returns implied by the Lyapunov exponent definitely lend credibility to my claim that much of supercomputing is objectively pointless, but I was anxious not to focus upon only one of the ways in which calculations might be irrelevant.
I'd agree that "bigger is better" - but only if we exclude cost from our assessment. With significant financial overheads for marginal improvement in accuracy, I have to wonder - at the extremes of industry practice - might the same funding have been more effect
Re: (Score:2)
Doubling your computational effort to extend your weather forecast to a 24th day might well be justified, as might doubling it again to get an extra hour. Doubling again to get the next few minutes, or again for an extra few seconds is far harder to justify - especially as other addressable factors might have greater influence on the uncertainty of the predictions.
We clearly have a different subjective take on the typical practical value of calculations at the cutting edge of 'brute-force' computation. Wit
Re: (Score:1)
Yeah, all the land that used to be forest or grassland cleaning the air and providing space for life to live, is now covered in black asphalt soaking up heat, covered in millions of pollutant spewing vehicles filled with air-conditioned lead-footed egomaniacs radiating thousands of BTUs a minute of centuries old carbon into a smothered landscape no longer able to contain plant life and therefore clean itself. Headed from Life_Of_Consumption pounding out 60 miles each way to work at PollutionCorp LLC, suckin
The title is misleading (Score:2, Interesting)
The title is misleading and not really correct, because it doesn't describe the main thrust of the project. What the group at Texas is trying to do is change the way computer models make predictions, because they recognize that predicting events like Katrina or 9/11 with any kind of accuracy, based on essentially no data, is basically impossible, and that even when prediction is possible, it's still full of uncertainty.
They don't want the models to spit out a single answer (e.g. "There will be a sea level r
(No subject) (Score:2)
Re: (Score:2)
Anonymous Defenders of Texas, forwaaaard maaaarch!
Uhh. Can this somehow help me (Score:1)