Supercomputing United States

Supercomputing Upgrade Produces High-Resolution Storm Forecasts 77

dcblogs writes: A supercomputer upgrade is paying off for the U.S. National Weather Service, with new high-resolution models that offer better insight into severe weather. The National Oceanic and Atmospheric Administration, which runs the weather service, has put into production two new IBM supercomputers, each rated at 213 teraflops, running Linux on Intel processors. These systems replace four-year-old, 74-teraflop machines. More computing power means the models can run more math and increase the resolution, or detail, of the maps from 8 miles to 2 miles.
  • Woo! (Score:3, Funny)

    by pushing-robot ( 1037830 ) on Thursday October 02, 2014 @09:03PM (#48052981)

    Now they can be wrong in hi-def!

  • by RichMan ( 8097 ) on Thursday October 02, 2014 @09:09PM (#48053005)

    I was at a supercomputing conference back in the '90s. There were wonderful reports on doubling the resolution of the grid and so on. Advances in scale are all good.

    The questions are:
    a) With the increased detail of the simulations, have we converged on a solution? That is, do solutions at scale N and 10N match? If they do, then the resolution and the model are aligned well enough for an accurate solution.
    b) Do the simulations agree with reality?

    If a) but not b), then something is wrong with the model that is unrelated to compute power or problem resolution, and no amount of compute power will fix it.
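
    A toy sketch of check (a), using a trivial ODE in place of a weather model (an illustration only, nothing to do with NWS code): run the same problem at resolution N and 10N and see whether the answers agree.

    def solve(n_steps, t_end=1.0, y0=1.0):
        # Forward-Euler solve of the toy problem dy/dt = -y on [0, t_end].
        dt = t_end / n_steps
        y = y0
        for _ in range(n_steps):
            y += dt * (-y)
        return y

    # "Do solutions at scale N and 10N match?" -- a toy grid-convergence check.
    for n in (10, 100, 1000):
        coarse, fine = solve(n), solve(10 * n)
        print(f"N={n:5d}: {coarse:.6f} vs {fine:.6f} at 10N, "
              f"difference {abs(coarse - fine):.2e}")

    # As N grows, the two resolutions converge on each other (and on exp(-1)),
    # which is criterion (a); whether that limit matches reality is criterion (b).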

    • Re: (Score:3, Informative)

      by Anonymous Coward

      a) We don't care if the models match.
      1) The most important thing, the one true thing, is that the model remain stable.
      2) If the model remains stable, then it's judged against reality, not against other models. However,
      3) don't break the stability of the model.

      b) this is a global differential equation. We don't know the initial conditions, and the model can only approximate them. However, the finer the mesh, the closer we can get to the initial conditions, so the further out

    • by pushing-robot ( 1037830 ) on Thursday October 02, 2014 @09:40PM (#48053137)

      TFA ironically begins with a researcher quoted as saying, "I don't think we will ever have enough [computing power] to satisfy us."

      The summary is vague, the article isn't much better, and neither says anything about whether the 'new model' matches observations any better than the old one.

      It would be nice if they could at least clarify whether the sole pair of comparison images even shows the same forecast, because the new model shows not only more detail but a completely different prediction.

      Come on kids, this isn't a network news sound bite. This is the Internet, and you're a tech news site. Would it kill you to go past the press release?

      Maybe I'm just bitter about this because I live in the mountains, where a coin is a more accurate forecasting tool than the weather service.

      • by the eric conspiracy ( 20178 ) on Thursday October 02, 2014 @09:53PM (#48053219)

        Well, the possibility is that at some level the weather will exhibit chaotic behavior, and no matter how big your model gets it won't matter - the coin flip will be just as good.

        • by dbIII ( 701233 )
          You can say that about a lot of things, so the answer is simply: don't extrapolate past the point where your model is useless because the noise is higher than the likely signal. That should be obvious to anyone who thinks they know enough about modelling to comment on the subject. So, Eric - science denier pushing politics, or out of your depth?
    • by EETech1 ( 1179269 ) on Thursday October 02, 2014 @10:06PM (#48053265)

      TFL in TFA goes over it.

      www.computerworld.com/article/2484337/computer-hardware/noaa-goes--live--with-new-weather-supercomputers.html

      It's been a complicated process to get to this point. The NWS has had to ensure that the software running on the new system is producing scientifically correct results. It had been running the old and new systems in parallel for months, and comparing the output.

      This comparative testing involved examining output data to determine whether it is numerically reproducible out to five decimal places. There is also a statistical analysis of weather predictions on the new system against the actual weather conditions.

      The process wasn't just an examination of numerical data. NWS scientists also studied the weather products and examined them for subtle differences. "There is a lot of human, highly experienced, subjective evaluation," said Kyger.

      There are computational differences involved in switching to new chips and a new operating system. They are subtle, and appear in decimal places six through 12.

      As you go further out in a forecast, the differences compound. The changes may appear in the fifth day of an extended, five-day forecast as a difference of one degree.
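
      Those "differences in decimal places six through 12" compounding into a one-degree difference by day five is just sensitivity to initial conditions. A toy illustration with the classic Lorenz-63 system (not NOAA's model, obviously; the perturbation size and step count are arbitrary): perturb one variable in the sixth decimal place and watch the two runs diverge.

      import numpy as np

      SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

      def lorenz_rhs(s):
          # Right-hand side of the Lorenz-63 system, a toy chaotic "atmosphere".
          x, y, z = s
          return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

      def rk4_step(s, dt):
          # Classic fourth-order Runge-Kutta step.
          k1 = lorenz_rhs(s)
          k2 = lorenz_rhs(s + 0.5 * dt * k1)
          k3 = lorenz_rhs(s + 0.5 * dt * k2)
          k4 = lorenz_rhs(s + dt * k3)
          return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      # Two runs that agree to five decimal places at t = 0, mimicking the kind
      # of round-off difference you get from new chips or a new OS.
      a = np.array([1.0, 1.0, 1.0])
      b = a + np.array([1e-6, 0.0, 0.0])

      dt = 0.01
      for step in range(1, 3001):               # integrate out to t = 30
          a, b = rk4_step(a, dt), rk4_step(b, dt)
          if step % 500 == 0:
              print(f"t = {step * dt:5.1f}   |a - b| = {np.linalg.norm(a - b):.6f}")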

      • Based on the resolution increase, and the difference in computing power that was needed to provide it, we can make a few assumptions about their algorithm. There was a 2.88x increase in computing power, which achieved a 16x increase in predictive resolution. This tells us that they are calculating differences based on the perimeters of their smallest resolution. Create a square on paper and call each edge 8 miles. Now divide that square into 2-mile subsections. The increase in edges you
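
        For reference, here are the raw ratios being compared above, as a quick arithmetic sketch of the numbers in the summary (nothing here is a claim about NOAA's actual cost model):

        # Back-of-the-envelope ratios from the numbers in the summary.
        old_tflops, new_tflops = 74.0, 213.0
        old_res_mi, new_res_mi = 8.0, 2.0

        compute_ratio = new_tflops / old_tflops     # ~2.88x more peak FLOPS
        linear_ratio = old_res_mi / new_res_mi      # 4x finer grid spacing
        cell_ratio = linear_ratio ** 2              # 16x more grid cells per unit area

        print(f"compute: {compute_ratio:.2f}x, spacing: {linear_ratio:.0f}x finer, "
              f"cells per area: {cell_ratio:.0f}x")
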
    • by StevisF ( 218566 )

      Not surprisingly, the scientists who work on weather models care very much about their accuracy. The GFS model's performance is constantly reviewed: http://www.emc.ncep.noaa.gov/G... [noaa.gov]

    • by amck ( 34780 ) on Friday October 03, 2014 @01:34AM (#48053909) Homepage

      These simulations are forecasts. They check every forecast against observations, and have very good metrics on how good their forecasts are and how much the skill changes.
      See for example how the European ECMWF does its forecasts:
      http://www.ecmwf.int/en/foreca... [ecmwf.int]

      Every change to the operational model(s) can be, and is, checked first against "will it improve the forecast?". The same goes for improvements in computing power: we simply run yesterday's forecast at higher resolution, for example; we can then say "this new model is n% better, but takes 10x as long to calculate", and use that to decide whether it's worth buying a faster computer.

      On the climate timescale we have a challenge verifying the simulations, but on the weather timescale it's straightforward, and done.
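
      A minimal sketch of that kind of verification, with made-up station temperatures (the real thing uses standard skill scores against analyses and observation networks):

      import numpy as np

      def rmse(forecast, observed):
          # Root-mean-square error of a forecast against observations.
          return np.sqrt(np.mean((forecast - observed) ** 2))

      # Hypothetical 2 m temperatures (degrees C) at a handful of stations.
      observed = np.array([12.1, 15.4, 9.8, 11.0, 14.2])
      old_forecast = np.array([13.0, 14.1, 8.5, 12.3, 15.0])
      new_forecast = np.array([12.4, 15.0, 9.4, 11.6, 14.6])

      old_err, new_err = rmse(old_forecast, observed), rmse(new_forecast, observed)
      improvement = 100.0 * (old_err - new_err) / old_err
      print(f"old RMSE {old_err:.2f} C, new RMSE {new_err:.2f} C "
            f"-> {improvement:.0f}% lower error")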

  • by Thorfinn.au ( 1140205 ) on Thursday October 02, 2014 @09:34PM (#48053113)
    Better resolution is good, but with each improvement in the system, the input data also needs to be improved and remeasured.
    Ultimately the ground features need to be modelled in greater detail to match the increased resolution of the grid.
    Which comes down to knowing where each tree/building and similarly sized static feature is, and how it affects the model.
    However, as the grid resolution increases, it should not need to know where the butterflies are.
    • by erikscott ( 1360245 ) on Thursday October 02, 2014 @11:10PM (#48053489)

      I'm a computer engineer, not a meteorologist, but I've worked with them off and on for about eight years now. One of the most common models for research use is the Weather Research and Forecasting Model (WRF, pronounced like the dude from ST:TNG). There are several versions in use, so caveats are in order, but in general WRF can produce really good results on a 1.6 km grid out to 48 hours in the future. I was given the impression that coarser grids are the route to happiness for longer-period forecasts.

      WRF will accept about as much or as little of an initializer as you want to give it. Between NEXRAD radar observations, ground met stations all over the place, two hundred or so balloon launches per day, satellite water vapor estimates, and a cooperative agreement with airlines to download in-flight met conditions (after landing, natch), there's gobs of data available.

      The National Weather Service wants to run new models side-by-side with older models and then back check the daylights out of them, so we can expect the regular forecast products to improve dramatically over the next (very) few years.
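
      A rough sketch of why coarser grids win for longer forecasts: halving the horizontal grid spacing roughly quadruples the number of columns, and the CFL condition forces a proportionally smaller time step, so cost grows roughly with the cube of the refinement factor (a rule-of-thumb estimate only; it ignores vertical levels, physics, and I/O):

      def relative_cost(old_dx_km, new_dx_km):
          # Rough compute-cost multiplier for refining horizontal grid spacing,
          # assuming a fixed domain and fixed vertical levels: cell count scales
          # with (old/new)^2 and the CFL-limited time step adds another (old/new).
          refinement = old_dx_km / new_dx_km
          return refinement ** 3

      # Example: going from ~13 km to ~3 km spacing (roughly 8 miles to 2 miles).
      print(f"~{relative_cost(13.0, 3.0):.0f}x the compute for the same domain and length")
      # And refining again from ~3 km down to a 1.6 km research grid:
      print(f"~{relative_cost(3.0, 1.6):.1f}x on top of that")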

      • I worked in meteorology for several years in the 1980s, and one thing I learned was that data is in short supply. The ultimate truth comes from sending up radiosonde balloons with humidity, wind, and pressure sensors. The goal is not just to get data from the ground (there's lots of that kind of data, I'm sure); you have to sample the whole atmosphere to set up the numerical models. You say that about two hundred balloons are sent up every day, and assuming this is done every 6 hours in th
    • Which comes down to knowing where each tree/building and similarly sized static feature is, and how it affects the model.

      Yes, ideally you'd have that much information. But you're still going to gain benefit until your model resolution matches the resolution of your ground-surface data. Right now they've just gotten to two-mile resolution. We have much better maps than that already.

      • There are urban airshed models that do exactly this for air quality studies, and plume analysis models for hazmat, but I'm not aware of weather forecasting at the block-by-block level. Right off the cuff, I would suspect that albedo is at least as important - at human building scales, the Reynolds number is going to be pretty high. At that point, it looks more like computational fluid dynamics and less like weather - hence airshed modeling and plume analysis.

    • I think we're a LONG way from needing to know information down to the tree/building level, but I would be very surprised if they aren't using reflectivity and intensity characteristics from satellite photography (usually about 3 meter resolution). I would think that kind of information would tell them the heat generation/absorption/water evaporation of a particular area reasonably well; they may even have to simplify the data so they can compute it in "real time". I work with mapping quite a bit and I c

  • by Anonymous Coward

    HRRR going into production is not directly a result of the supercomputer upgrade. It is a new model that has been in testing for two years, but has just recently been officially "released". While the supercomputer upgrade is great for the increased runs and spatial resolution, it is not directly related to the HRRR being a better model. It will, however, be much more useful when the GFS model is upgraded later this year to significantly increased horizontal and vertical resolutions.

  • by Anonymous Coward

    Contributing to global warming, no doubt.

    Thanks guys for buying the iSuper. 4x more powerful, 4x the resolution. Come back in 4 years for the retina resolution.

    • Shithead, you deserve to die in a severe storm.

      You're an asshole. You were born an asshole and you'll die an asshole. And the sooner you die, the better off the rest of the world will be.

  • Well, congrats to NOAA for the milestone... but isn't a peta-scale system a thing of the past for something as computationally intensive and critical as weather prediction? The prediction model seems to have no scalability issues: "The HRRR model produces output from the model every 15 minutes versus the previous hourly rate," meaning a 4x speedup for a 4x upgrade in peak performance. Why didn't NOAA go for a peta-scale system straightaway?

    The upgraded systems are already placed way down the list of world's

    • by Anonymous Coward

      NOAA is never particularly well funded, and these are not systems that you can just build and immediately throw into production. These "new" boxes are 2 years old and have been operating continuously during that time in an extremely rigorous testing cycle. For all we know, the peta-scale system is being constructed now and won't be in production until the end of a similar test run.

    • While weather forecasts are important, they are not _that_ important. There is no real motivation to be at the top of the pops in computing power for something 'as simple' as a weather forecast.

      For some reason people are still stuck on the myth, from 30 years ago, that weather forecasts are 50% luck and 50% wrong.

      Weather forecasts are actually extremely accurate over a 24h period and very reliable 48h to 72h into the future.

      Ask anyone who is sailing and checks weather reports more or less daily (

  • China has a much bigger computer with over 3 million cores.

    Also, aren't the two "new" supercomputers 2 years old now?

  • I am in an area where hurricanes are common. After a storm we see wind reports that indicate speeds like 120 mph or 140 mph. But people who live here see quite large I-beams bent like noodles, and we all know that it took much higher winds to bend those beams. Apparently microbursts are the culprit, and these tornadoes must have immense strength. We do not normally hear where these officially recognized bursts have occurred, but it does explain why some homes are simply inconvenienced a bit
    • Winds that strong are usually only measurable with radar.
      If there is no weather radar close by, no one measures them, obviously. So I doubt there is a 'conspiracy' :)

      However, with the 'rise' of the wind industry, it would be very important to get a good idea of how strong those hurricanes can be in peak gusts.

  • This thing is going to put all of the local weathermen out of jobs. I mean how can they predict the weather better than this thing? ;-)

    • The local weatherman, if he/she is a meteorologist, looks at several commercial models plus the NWS models and adjusts based on professional experience in the area. He/she is unlikely to go anywhere; hell, Sonny Eliot (Sonny Idiot) [freep.com] was on the air for 63 years.

    • Where are you that weathermen are reliable enough for you to want to keep them around? Around me there's a running joke that you'd be more accurate at predicting the weather by throwing darts blindfolded at a board with all of the seasonal weather possibilities.

    • Pixar is going to put the local weathergirls out of their jobs... they already act in front of bluescreens - why not just render them to look and act as desired?

    • The July 2013 article discusses an old model that used a 27 kilometer resolution new model that used a 13 kilometer resolution. The new article discusses moving from that to a 2 mile (3.21 kilometer) resolution.
      • Need to drink more coffee...

        ...discusses transitioning from an old model that used a 27 kilometer resolution to a new model...

  • Has the resolution and reliability of the initial data points improved as well? Or are we just doing a finer interpolation of model output with the same data input?

    • by Anonymous Coward

      There have been advances in assimilating radar and satellite data into models, which can provide better resolution than data from other observation platforms such as profilers, raobs (radiosonde observations), and surface observing stations. This reduces the time needed for models to spin up.

      The real advantage from improving the resolution is the ability to explicitly represent processes in a way that couldn't be done with coarser models. This is especially true in regard to convective processes. A thunderstorm updraft might be a fe
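
      The parent is cut off, but the point about explicitly representing convection comes down to how many grid lengths a feature has to span before the model can resolve it rather than parameterize it. A toy sketch, using the commonly quoted rule of thumb of several grid lengths (the factor of 5 below is an illustrative assumption):

      def smallest_resolved_feature(grid_spacing_km, points_needed=5):
          # Rough size of the smallest feature a model can represent explicitly,
          # assuming a feature needs several grid lengths to be resolved.
          return grid_spacing_km * points_needed

      for dx in (13.0, 3.0):   # roughly the old ~8-mile and new ~2-mile spacings
          print(f"dx = {dx:4.1f} km -> smallest resolved feature ~ "
                f"{smallest_resolved_feature(dx):.0f} km")

      # A thunderstorm updraft is only a few km across, so it can be handled
      # explicitly at ~3 km spacing but has to be parameterized at ~13 km.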

  • while (1) {
        prediction = PredictWeather();
        if (prediction == true) {
              AskForMoreGrantFunding();
              BlaimGlobalWarming();
        } else {
              BlaimGlobalWarming();
              AskForMoreGrantFunding();
        }
    }

    • by Anonymous Coward

      "Blaim"? AskForMoreEducationFunding();

  • This was an interesting article. However, as UW's Cliff Mass has previously pointed out [blogspot.com] (and, today, he privately confirmed that this is still the case), NOAA is sitting on already-approved funds to purchase a weather modeling computer that's seen as a potential "game changer" for US climate modeling.

    Over a year ago Congress approved the purchase of a computer that's roughly an order of magnitude more powerful than the pair mentioned in this article - but, because NOAA has a contract with IBM, and IBM recently sold its server business to Lenovo, NOAA has been sitting on its hands regarding approval of the purchase of such a computer from a Chinese company.

    So while the improvements mentioned in the article are better than nothing... in truth we should be a significant step beyond that by now.
