The Supercomputer Race
CWmike writes "Every June and November a new list of the world's fastest supercomputers is revealed. The latest Top 500 list marked the scaling of computing's Mount Everest — the petaflops barrier. IBM's 'Roadrunner' topped the list, burning up the bytes at 1.026 petaflops. A computer to die for if you are a supercomputer user for whom no machine ever seems fast enough? Maybe not, says Richard Loft, director of supercomputing research at the National Center for Atmospheric Research in Boulder, Colo.: 'The Top 500 list is only useful in telling you the absolute upper bound of the capabilities of the computers ... It's not useful in terms of telling you their utility in real scientific calculations.' The problem with the rankings: a decades-old benchmark called Linpack, which is Fortran code that measures the speed of processors on floating-point math operations. One possible fix: invoking specialization. Loft says of petaflops, peak performance, benchmark results, and positions on a list: 'it's a little shell game that everybody plays. ... All we care about is the number of years of climate we can simulate in one day of wall-clock computer time. That tells you what kinds of experiments you can do.' State-of-the-art systems today can simulate about five years per day of computer time, he says, but some climatologists yearn to simulate 100 years in a day."
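Linpack's figure of merit is just floating-point operations per second on dense linear algebra. As a rough illustration (a toy sketch, nothing like the real HPL benchmark), you can estimate a flops number by timing a matrix multiply, which performs about 2*n^3 floating-point operations:

```python
import time

def naive_matmul_flops(n=100):
    """Time a dense n x n matrix multiply and estimate flop/s.
    A matmul costs roughly 2*n**3 floating-point operations
    (one multiply plus one add per inner step)."""
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    c = [[0.0] * n for _ in range(n)]
    start = time.perf_counter()
    for i in range(n):
        for k in range(n):
            aik, row_b, row_c = a[i][k], b[k], c[i]
            for j in range(n):
                row_c[j] += aik * row_b[j]
    elapsed = time.perf_counter() - start
    return 2 * n ** 3 / elapsed

print(f"{naive_matmul_flops():.2e} flop/s")  # a few megaflops in pure Python
```

That single number says nothing about memory bandwidth or interconnects, which is exactly Loft's complaint.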
The true best measurement (Score:5, Funny)
Is how many libraries of congress it can read in a fortnight.
Re: (Score:2)
Is how many libraries of congress it can read in a fortnight.
Nope.
PROGRAM HelloWorld
PRINT *, "Hello World"
END PROGRAM HelloWorld
Re: (Score:3, Interesting)
Re:Exactly. HP PA-RISC vs. 21364 Alpha vs. UltraSP (Score:2)
Re: (Score:2)
Re: (Score:2)
That's not a "barrier" (Score:3, Insightful)
computing's Mount Everest - the petaflops barrier
Two bad clichéd metaphors in one! It's not a peak, and it's not a barrier, just another arbitrary milestone. Who writes this crap? ... a "professional" writer from an industry magazine. That figures.
Oh
This guy should enter The Bulwer-Lytton Fiction Contest [bulwer-lytton.com]
Re: (Score:2)
True. And if anything it should be the K2 of computing.
The real problem is, once you've conquered the highest peak, does that mean all computing is downhill from here? Hurray for bad analogies!
Re: (Score:3, Funny)
you're making a mountain of a metaphor
Re: (Score:2)
a furlong (plus one ear)
Why does the yearning stop there? (Score:2)
some climatologists yearn to simulate 100 years in a day
Why does the yearning stop there? Why not yearn to simulate, say, 1000 years in three seconds?
Simulation (Score:5, Funny)
Simulate 100 years of climate in a day? Here's my code:
echo -e "sunny\nrainy\ncloudy" | rl -rc 36525
Re: (Score:2)
Re: (Score:3, Funny)
Re:Simulation (Score:4, Informative)
From the webpage: rl is a command-line tool that reads lines from an input file or stdin, randomizes the lines and outputs a specified number of lines. It does this with only a single pass over the input while trying to use as little memory as possible.
Didn't know about it either. Seems marginally useful
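Randomizing a stream in a single pass with bounded memory is the classic reservoir-sampling trick; presumably something like this sketch (my guess at rl's approach from its description, not its actual code):

```python
import random

def reservoir_sample(lines, k):
    """Return k lines chosen uniformly from a stream: one pass,
    O(k) memory, no matter how long the input is."""
    reservoir = []
    for i, line in enumerate(lines):
        if i < k:
            reservoir.append(line)
        else:
            # Replace a kept line with probability k/(i+1), which keeps
            # every line's overall chance of surviving equal to k/n.
            j = random.randrange(i + 1)
            if j < k:
                reservoir[j] = line
    return reservoir

print(reservoir_sample(["sunny", "rainy", "cloudy", "snowy"], 2))
```

rl's actual implementation may differ; this is just the textbook one-pass technique its "single pass ... little memory" description suggests.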
Re: (Score:2)
"Seems marginally useful"
For you, you insensitive clod. Someone with feelings wrote it!
Re: (Score:2)
Re: (Score:2)
The Random Line picker.
Re: (Score:2)
It's a simple line randomisation program. I use it a lot in the work that I do for short shell scripts, which involves doing lots of permutations of lines of text files (or just sequences of numbers). Once that code gets put deeper into a loop (and hence becomes more of a limiting factor in terms of execution time), I substitute it for faster stuff written in Java, Perl, R, C++, or whatever else takes my fancy at the time. shuf is a similar program which seems to be in the coreutils of Debian, but doesn't
Re: (Score:3, Interesting)
Both these factors are hard/impossible to control and to standardize, and the only factor that does not change is the actual hardware and its peak/sustained performance, so it's the o
Financial modeling and spying better funded (Score:4, Insightful)
Sadly, while predicting the weather and better understanding it ultimately helps a lot of people, I suspect a LOT more computing power is thrown at more mundane things like predicting where the financial markets are going to be based on a gazillion data inputs. Probably even better funded are the vast datacenters around the world that fondle communications and other data for the spymasters. I doubt those computing resources are represented in the annual supercomputing lists. :)
Re:Financial modeling and spying better funded (Score:4, Funny)
Re: (Score:2, Insightful)
I suspect no computing power is being thrown at predicting where the financial markets are going.
A lot is thrown at pretending to predict it, but it's brilliantly obvious that the output of such things is no better than chicken entrails or the last two presidential elections.
Something like (Score:2)
Re: (Score:2)
I suspect a LOT more computing power is thrown at more mundane things like predicting where the financial markets are going to be based on a gazillion data inputs.
I wouldn't say mundane.
For one thing, any intelligent deep thought requires the gazillion inputs because of the sheer number of factors, and an error or bad approximation in just a few inputs could throw off the decision or computation. Granted, the financial markets are typically so frenzied that hardly anyone knows whether to buy or sell, but we
Re: (Score:3, Interesting)
Sadly, while predicting the weather and better understanding it ultimately helps a lot of people, I suspect a LOT more computing power is thrown at more mundane things like predicting where the financial markets are going to be based on a gazillion data inputs. Probably even better funded are the vast datacenters around the world that fondle communications and other data for the spymasters. I doubt those computing resources are represented in the annual supercomputing lists. :)
There are a couple of misperceptions here.
Both the problems described, running years of weather models or modeling financial instruments, suffer from a definite flaw: they are not mathematical problems in the "high school" sense of the word, i.e. it is not possible to prove that there is only one finite solution that is demonstrably right.
Financial models are "fit to reality": you take a long time series, make a few wild guesses, throw it into a Cray-2, and look what the model says. Lather, rinse, r
Re: (Score:2)
Weather modeling (what the National Weather Service does) and climate modeling (what global warming folks do) are different problems.
Weather modeling has lots of data available (365 forecasts per year, per model pixel) and it's easy to verify predictions against weather station measurements. The problem is that it's impossible to specify the c
Re: (Score:2)
Some problems really are bad enough that they require proactive attention. Stop hiding be
Re: (Score:2)
Right now, it is mainly manufactured evidence. There is still a chance that no indisputable evidence will ever be found and that we are barking up the wrong tree.
Before people start blasting me for blasphemy: as soon as you stop looking at things objectively, you stop pursuing knowledge and start believing with faith instead of looking for the truth. I don't care how convinced you might be that X=Y, if there is no room for Y to equal Z without Z equaling X, you have lost the scientific principle and are act
Re: (Score:2)
Tropospheric CO2 and O3 run on a roughly 10,000-year cycle, with CO2 between 150-275 ppm. This has been shown through ice cores in Antarctica, and other methods proven to give us a real look at historic atmospheric data. In the last 150 years, CO2 has gone through the roof [wikipedia.org].
In addition, global O3 concentrations used to be around 10 ppb. Currently, in the midwest region of the USA, it stands around 40 ppb. Through PopFACE and AspenFACE experiments, along with work done all over the world, it has be
Flops not useful? (Score:5, Informative)
But.. The whole point is to test the model, and the models change, don't they? Surely we're not just simulating more "years" of climate with the current batch, but improving resolution, making fewer simplifying assumptions, and hopefully, finding ways to do the exact same operations with fewer cycles.
How can you possibly evaluate supercomputers in any other way except how many mathematical operations can be performed in some reference time? And.. some serial metric if the math is highly parallel, since just reducing the size of vectors in those cases wouldn't actually result in those flops being useful for other tasks.
Re:Flops not useful? (Score:5, Interesting)
That's just the problem, people want to hear raw numbers, but those are useless.
How well can it do the specific task it needs to do is the actual question. It's a hard one, to be sure.
Re:Flops not useful? (Score:5, Informative)
Of course, if it's actually the case that people are dumb, lazy or in marketing, then that would explain why we don't get a full range of stats, even though the tools have existed for many years and are certainly widely known.
Re: (Score:2)
Re:Flops not useful? (Score:5, Informative)
Flops wouldn't test how well the interconnects work.
Since you say "increase the resolution of the model", you are expanding the size of the model and the amount of data that must be used by all of the nodes of the computer.
The importance of the interconnect depends on the model, ranging from problems that need almost no communication (like F@H) to problems that need every node to have access to a single shared set of data, so it would be very hard to quantify performance in one number.
Unfortunately, there are more than a few fields where marketers want a single number to advertise in a "mine is bigger than yours" competition, and they come up with a metric that is almost worthless.
Re: (Score:2)
Re: (Score:3, Informative)
Actually, Linpack is not embarrassingly parallel so it DOES test how well the interconnects work, to some extent.
The top 500 list is interesting, but if you're building a supercomputer to make a certain rank you have too much money and you should really give me some.
You build a supercomputer to perform some task or class of tasks. If it gets you on the list, cool.
Re: (Score:2)
Because the climate simulation model they use does a LOT of inter-process communication. Each piece of the calculation depends on what's going on around it.
Ever see footage of manual calculation rooms NASA used to have*? Imagine if every one of the calculations those people were doing depended on the previous calculation they did, AND all of the previous calculations of their eight nearest neighbors.
Now you know why that atmospheric model has a benchmark rated in "century/months" - the number of centuries
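That all-my-neighbors dependence is what numerical folks call a stencil computation; a toy single-step version on a hypothetical 2-D grid (illustrative only, not any real climate code):

```python
def stencil_step(grid):
    """One timestep: each cell becomes the average of itself and its
    up-to-8 nearest neighbors, using only the *previous* values."""
    n = len(grid)
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            total, count = 0.0, 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n:
                        total += grid[ni][nj]
                        count += 1
            new[i][j] = total / count
    return new

grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 9.0                 # a single hot spot
print(stencil_step(grid)[2][2])  # 1.0 - the spot diffuses into its neighborhood
```

On a real parallel machine each processor owns a patch of the grid, so every timestep forces an exchange of boundary values with neighboring processors; that is why interconnect latency, not raw flops, limits such models.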
Re:Flops not useful? (Score:5, Informative)
Simple: you evaluate how much actual work it can perform across the entire system per unit time, where "actual work" means a mix of operations similar to some real application of interest. The whole problem here is that practically no real application is as purely focused on arithmetic operations as Linpack. Even the people who developed Linpack know this, which is why they developed the HPCC suite as its successor. It's composed of seven benchmarks, including some (e.g. stream triad) that mostly stress memory and some (e.g. matrix transpose) that mostly stress interconnects. If you want to get an idea how your application will perform on various machines, you determine what mix of those seven numbers best approximates your application, assign appropriate weights, and then apply those weights to the vendor numbers. Then you negotiate with the two or three most promising vendors to run your application for real. SPEC should have put an end to simplistic "single figure of merit" comparisons, or if not them then TPC, SPC, etc. Sadly, though, there's still always someone who comes along and tries to revive the corpse.
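The weighting idea is just a dot product over benchmark scores; a sketch with made-up machine numbers (the benchmark names follow HPCC's components, everything else is hypothetical):

```python
def weighted_score(results, weights):
    """Combine per-benchmark numbers with application-specific weights.
    All names and figures here are invented for illustration."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * results[name] for name in weights)

# Hypothetical normalized scores for two machines on three HPCC-style tests
machine_a = {"hpl": 1.00, "stream": 0.60, "ptrans": 0.50}
machine_b = {"hpl": 0.70, "stream": 0.95, "ptrans": 0.90}

# A memory- and interconnect-bound application cares little about raw HPL
weights = {"hpl": 0.2, "stream": 0.5, "ptrans": 0.3}

print(weighted_score(machine_a, weights))  # 0.65
print(weighted_score(machine_b, weights))  # 0.885 - the "slower" HPL machine wins
```

The point of the exercise: which machine is "faster" depends entirely on the weights, i.e. on your application.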
Re: (Score:2)
The hard part of that is that my hypothetical high-end supercomputer application and your hypothetical high-end supercomputer application don't do the same thing, and so we must weight those values differently.
To get a single unbiased score, there must be one simple test or a group of unweighted ones. If you're giving an equal weight to different benchmarks, then the overall score might still mean something. The single value from equal weighting still won't be very predictive for applications that stress one
Re: (Score:2)
"Simple: you evaluate how much actual work it can perform across the entire system per unit time, where "actual work" means a mix of operations similar to some real application of interest"
Personally I'd like to see supercomputing power start automating the detection of bad design and potential failures in someone else's code, not to mention optimization.
I've always wondered what you could do with Google and a few supercomputers combing the open source code base where you can deduce what a function does without needi
Re: (Score:3, Interesting)
"How can you possibly evaluate supercomputers in any other way except how many mathematical operations can be performed in some reference time? "
It's much more subtle than that. Most programs, including weather simulations, use a large amount of data stored on disk and in RAM. The problem with LINPACK as a benchmark is that, for all practical purposes, it ignores this cost by using a few very specific linear algebra operations that have very low communication/computation ratios. The LINPACK number is onl
Re: (Score:2, Funny)
But.. The whole point is to test the model, and the models change, don't they?
Alas, no. It's suspected that the closer your computer climbs to the top of this list, the larger your penis becomes.
Benchmark your application (Score:5, Informative)
I agree (Score:3, Informative)
I write massively parallel scientific code that runs on these supercomputers for a living... and this is what I've been preaching all along.
The thing about RoadRunner and others (such as Red Storm at Sandia) is that they are special pieces of hardware that run highly specialized operating systems. I can say from experience that these are an _enormous_ pain in the ass to code for... and reaching anything near the theoretical computing limit on these machines with real world engineering applications is essentially impossible... not to mention all of the extra time it costs you in just getting your application to compile on the machine and debug it...
My "day-to-day" supercomputer is a 2048 processor machine made up of generic Intel cores all running a slightly modified version of Suse Linux. This is a great machine for development _and_ for execution. My users have no trouble using my software and the machine... because it's just Linux.
When looking at a supercomputer I always think in terms of utility... not in terms of Flops. It's for this reason that I think the guys down at the Texas Advanced Computing Center got it right when they built Ranger ( http://www.tacc.utexas.edu/resources/hpcsystems/#constellation [utexas.edu] ). It's about a half a petaflop... but guess what? It runs Linux! And is actually made up of a bunch of Opteron cores... the machine itself is also a huge, awesome looking beast (I've been inside it... the 2 huge Infiniband switches are really something to see). I haven't used it myself (yet), but I have friends working at TACC and everyone really likes the machine a lot. It definitely strikes that balance between ultra-powerful and ultra-useful.
Friedmud
Re:I agree (Score:4, Funny)
My "day-to-day" supercomputer is a 2048 processor machine made up of generic Intel cores all running a slightly modified version of Suse Linux.
We all envy you.
Re:I agree (Score:4, Funny)
Re: (Score:2)
It's only 3 in the afternoon!!
Early bird catches the worm
Well, let's see (Score:5, Interesting)
It's about a half a petaflop... but guess what? It runs Linux!
This sounds kind of nice but why should this make it any easier to write parallel programs for it? You still have to manage hundreds if not thousands of threads, right? This will not magically turn it into a computer for the masses, I guarantee you that. I have said it elsewhere [blogspot.com] but parallel computing will not come of age until they do away with multithreading and the traditional CPU core [blogspot.com]. There is a way to build and program parallel computers that does not involve the use of threads or CPUs. This is the only way to solve the parallel programming crisis. Until then, supercomputing will continue to be a curiosity that us mainstream programmers and users can only dream about.
Re: (Score:3, Insightful)
"Until then, supercomputing will continue to be a curiosity that us mainstream programmers and users can only dream about."
I'm not so sure that's a bad thing... most applications don't need the power of a super computer...
At the same time, I agree that I wish that desktop development tools made it easier to do threading for multi-core machines. Every new computer comes with more than one core... but the development tools (languages, compilers, IDE's, debuggers) simply aren't helping the everyday joe progra
Don't invoke threadfights. (Score:2)
Really, I'm a LAM-MPI freak... let all the processes talk, gather data and not share memory. It skips the pthreads issue, but some would call mpirun a thread-launcher.
Nope, MPI won't make parallel code a computer for the masses... I don't have ANY clue as to how to pull the CPU out of the equation.
However there are a BUNCH of ways that parallel code can be commo
Re: (Score:2)
This sounds kind of nice but why should this make it any easier to write parallel programs for it? You still have to manage hundreds if not thousands of threads, right? This will not magically turn it into a computer for the masses, I guarantee you that. I have said it elsewhere [blogspot.com] but parallel computing will not come of age until they do away with multithreading and the traditional CPU core [blogspot.com]. There is a way to build and program parallel computers that does not involve the use of threads or CPUs. This is the only way to solve the parallel programming crisis. Until then, supercomputing will continue to be a curiosity that us mainstream programmers and users can only dream about.
You're wildly mistaken, but for interesting reasons. While SIMD architectures are indeed interesting and do very well on some types of problem (I believe that weather simulation is good) on other problems it is far easier to express them as either MIMD or something more exotic. A real system that needs to handle a mix of codes needs to be implemented as a hybrid: e.g. a cycle-scavenged Condor pool for high-availability, a traditional cluster with a better interconnect, and a supercomputer for those times wh
Re: (Score:2)
Don't think of these things as 2048-core CPU's sharing memory and disk space, think of them as clusters of 2048 separate [asynchronous!] PC's, each with its own memory and disk space. The actual architectures will vary, but this is usually the right paradigm (Crays and other big vector processors are the main exception to this rule). Think, then, of the program as if it were a bunch of network clients talking to each other over sockets
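A minimal sketch of that clients-talking-to-each-other paradigm, with threads and queues standing in for nodes and the interconnect (purely illustrative; a real code would use MPI over the actual network):

```python
import queue
import threading

def run_cluster(n):
    """Message-passing sketch: each 'node' owns private data and
    communicates only via explicit send/receive (here, queues)."""
    def worker(rank, inbox, results):
        local = (rank + 1) * 10     # private per-node data
        results.put((rank, local))  # "send" a partial result to rank 0
        inbox.get()                 # "receive" the reduced sum back

    results = queue.Queue()
    inboxes = [queue.Queue() for _ in range(n)]
    threads = [threading.Thread(target=worker, args=(r, inboxes[r], results))
               for r in range(n)]
    for t in threads:
        t.start()
    partials = [results.get()[1] for _ in range(n)]  # gather on "rank 0"
    total = sum(partials)
    for box in inboxes:
        box.put(total)                               # broadcast the result
    for t in threads:
        t.join()
    return total

print(run_cluster(4))  # 100
```

The point is only that all sharing happens through explicit sends and receives; nothing is shared implicitly the way threads on one box share memory.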
RoadRunner (Score:2, Insightful)
Is running relatively stock Fedora (the ppc distribution). True, it's RAM-hosted, but the OS is hardly specialized in terms of libraries and such. You could say the Cell SDK is a tad specialized, but the underlying platform is not so custom as implied.
In fact, every single Top500 system I've ever touched has been far more typical linux than most people ever expect.
In any event, the most compelling aspect of RoadRunner in my view is the flops/watt. Application developers who can leverage highly parallel c
Re: (Score:3, Informative)
The specialization of the hardware / software combo is what I was referring to.
Have you ever coded for one of these special architectures? It really is a bitch. Yes, Red Storm is even worse (special OS that doesn't even allow dynamic linking!)... but the non-generality of the Cell processors is going to kill the real world impact of Roadrunner.
ASCI Purple was one of the previous machines at LANL that was a "one-off" build from IBM. It was a complete disaster. Porting code to the machine took much longer
Re: (Score:3, Informative)
Sorry... got my supercomputers mixed up... ASCI Purple was at LANL...
I was thinking of ASCI Q, but that was made by HP...
Oh... just nevermind... I screwed it up well enough, just forget it ;-)
Need to get some sleep.
Friedmud
Re: (Score:2)
Hasn't Red Storm switched to Linux yet? Seems most of the Cray XT sites have done so already. One reason for developing CNL (Compute Node Linux, the Linux kernel they run on the nodes) IIRC was that multicore support in Catamount was sort of a kludge. And OS buffering, and..
Then again, the catamount developers are at Sandia, so maybe they have some emotional attachment to it. :)
CNL still doesn't allow dynamic linking, though it's much less alien than catamount.
Re: (Score:3, Informative)
As far as I know (as of 3 months ago) they're still running Catamount at Sandia... and it's for the reason you state: they developed it.
Friedmud
Re:I agree (Score:4, Insightful)
from http://en.wikipedia.org/wiki/IBM_Roadrunner [wikipedia.org]
Re: (Score:2)
I will just say that it was about a year before any meaningful calculations were run using Redstorm...
Friedmud
Re: (Score:2)
That's a bit ridiculous isn't it? The wikipedia page is very useful. Who cares if it's in the top ten, if it's not useful then:
a) Google will do something about it
b) People can look at the other nine results
Cost per computing power (Score:3, Insightful)
Re: (Score:2)
Your users don't really need that kind of power, then. They could get by using much less of a computer, because they're spending less time coding and have time for running. In fact, I suspect about 70% of your users could get by on a 4x4 card if they ran it and went on vacation for the summer.
The really big HPCs can tolerate the really big software development efforts because the runtime saved by the specialized OS may more than discount the extra development effort.
Re: (Score:2)
"The really big HPCs can tolerate the really big software development efforts because the runtime saved by the specialized OS may more than discount the extra development effort."
The thing is... that might be true for a very select number of projects... and that number of projects is actually smaller than most people think.
Yes, my code can run up to 3 or 4 thousand processors nicely... to get some really high fidelity engineering simulations... _but_ most people are running in the 64 to 128 processor range d
Re: (Score:2)
Not very long with distcc....
Re: (Score:2)
The only time I was employed doing kernel hacking was at NEC and the "big" machine they gave me for testing had ~1GB with 2 CPUS (in 2002). Sigh. Gone are the days...
Non-story... (Score:3, Insightful)
...ever looked at gaming benchmarks? Server benchmarks? Productivity benchmarks? Rendering benchmarks? In fact, any kind of benchmark? Seen how they all differ depending on the product and test run? Same with supercomputers, you got some synthetic benchmarks, and you got some real world benchmarks. But the weather simulation may not be a relevant benchmark at all if you're doing nuke simulations or gene decoding or finite deformation or some other kind of simulation. Synthetics are the lowest common denominator - you'd rather see benchmarks in your field, and most of all benchmarks with your exact application. That doesn't change the fact that those are individual wants, and synthetic benchmarks are the only ones with any value to everyone.
Re: (Score:2)
"...ever looked at gaming benchmarks? Server benchmarks? Productivity benchmarks? Rendering benchmarks? In fact, any kind of benchmark? Seen how they all differ depending on the product and test run? Same with supercomputers, you got some synthetic benchmarks, and you got some real world benchmarks."
I hear what you're saying, but if we look at application complexity (modern games for instance) there really isn't any such thing as "real world benchmarks" since each application is specific because of t
Like any benchmark... (Score:3, Interesting)
Just with a lot more dollars behind it...
Everyone remotely engaged in Top500 systems knows how very specific the thing being measured is. It's most sensitive to the aggregate clock cycles and processor architecture, and not as sensitive to memory throughput/architecture or networking as many real world things are.
http://icl.cs.utk.edu/hpcc/ [utk.edu]
Is an attempt to be more comprehensive, at least, by specifying a whole suite of independently scored benchmarks to reflect the strengths and weaknesses of things in a more holistic way. Sure, it's still synthetic, but it can give a better 'at-a-glance' indicator of several generally important aspects of a supercomputer configuration.
The thing probably inhibiting acceptance of this is that very fact: it is holistic, and the winner 'depends' on how you sort the data. This is excellent for those wanting to more comprehensively understand their configuration's standing in the scheme of things, but hard for vendors and facilities to use for marketing leverage. Being able to say 'we built *the* fastest supercomputer according to the list' is a lot stronger than 'depending on how you count, we could be considered number one.' Vendors will aggressively pursue pricing knowing about the attached bragging rights, and facilities that receive research grant money similarly want the ability to make statements without disclaimers.
Rest assured, though, that more thorough evaluations are done and not every decision in the Top500 is just about that benchmark. For example, AMD platforms are doing better than they would if only the HPL score counted. AMD's memory performance is still outrageously better than Intel's and is good for many HPC applications, but Intel's current generation trounces AMD in HPL score. Of course, Intel did overwhelmingly become popular upon release of their 64-bit Core architecture based systems, but still..
Is it *REALLY* the Top 500? (Score:2, Interesting)
The locations listed are mostly educational institutions, R&D centers, and computer companies. The results were probably submitted unofficially. There are few exceptions, but they are just that--few. It makes you wonder what the Big Data companies (Google, Yahoo!, etc.) actually have running. They have no reason to participate, after all...
Consider something like Yahoo!'s research cluster [yahoo.com]. Why isn't it on this list? Why don't they run the tests?
How appropriate, (Score:2)
That an article featuring IBM supercomputers comes shortly after a few misguided individuals were posting that "IBM is no longer relevant, they are an OEM reseller nowadays" or that they "only make bloated, slow software"
Accuracy? (Score:2)
Number of simulated years per day isn't exactly the metric you want. I can simulate a million years in a minute on my home PC, just not very accurately. As you get more accurate, the sim years/CPU day will decrease.
So knowing the number of simulated years per cpu day doesn't tell you anything unless you know exactly what algorithm you're using.
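A back-of-envelope sketch makes the point (every figure here is made up for illustration): halving the grid spacing in a 3-D model multiplies the cell count by roughly 8 and also roughly halves the stable timestep, so the simulated-years-per-day rate collapses even on the same machine:

```python
def sim_years_per_day(flops, flops_per_cell_step, cells, steps_per_sim_year):
    """Back-of-envelope climate-model throughput. All inputs are
    hypothetical illustrative numbers, not real model figures."""
    seconds_per_day = 86400.0
    cost_per_sim_year = flops_per_cell_step * cells * steps_per_sim_year
    return flops * seconds_per_day / cost_per_sim_year

# Hypothetical machine sustaining 100 Tflop/s on the model
base = sim_years_per_day(1e14, 1e4, 1e8, 1e6)
print(round(base, 1))   # 8.6 simulated years per day

# Halve the grid spacing: ~8x more cells *and* ~2x more timesteps
finer = sim_years_per_day(1e14, 1e4, 8e8, 2e6)
print(round(finer, 2))  # 0.54 - same machine, 16x slower in "sim years"
```

So a "years per day" figure is only comparable between machines running the same algorithm at the same resolution, which is the parent's point.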
This is common knowledge (Score:3, Interesting)
It's fair to criticize Linpack for being a one-trick pony. It measures system performance for dense linear algebra, and nothing else. Jack Dongarra (the guy who wrote Linpack and maintains the top 500 lists) is quite up-front about Linpack's limitations, and he thinks that using a single number as the end-all-be-all of a computer's performance is a bad idea. It's a simple fact of life that certain kinds of computers do better on certain problems. The good guys out at Berkeley even sat down a couple years ago and enumerated [berkeley.edu] all of the problems they found in real-world HPC applications (See the tables on pages 11-12). The real truth here is that people should stop treating Linpack like it's the final measure of system performance. If you are doing pure linear algebra problems, it's a pretty good measurement for your purposes; if you are not, then you use it at your own peril.
Oblig (Score:2)
Re: (Score:2)
What's even funnier is the reality that Windows can't even boot on that many CPUs, let alone scale to utilise them.
You can simulate 100 years/day on your laptop (Score:2)
If you want to run 100 years per day, you can do so with an older model. T
Isn't Modeling Weather Futile? (Score:2, Interesting)
I seem to recall a Nova special I watched many moons ago about "strange attractors" and "fractal behavior" that seemed to indicate that for a large class of complex-valued iterative functions there was a weird phenomenon called the "Butterfly Effect". Apparently... according to this show I saw 20 years ago (and I think that Mandelbrot mentioned it in a lecture I attended a few years later), initial variables which are as intertwined as the rational and irrational numbers can have drastically divergent outco
Re: (Score:2)
Well, it's actually quite possible to model climate but not weather. It's an issue of scales. I won't go into the mathematical argument, because I don't understand it anymore, and you probably don't want to know that either.
But look at this thing: http://en.wikipedia.org/wiki/Lorentz_attractor [wikipedia.org] - it is essentially an infinite line curved into a strange "two ears" shape. The line is defined by 3 ordinary differential equations, and it is completely defined by a selected point and direction in space.
The interesting
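The "two ears" system is easy to poke at numerically. A crude forward-Euler sketch (illustrative only, not a serious integrator) shows why weather-style prediction falls apart: two trajectories that start one part in a billion apart end up in completely different places.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations:
    dx/dt = sigma*(y - x)
    dy/dt = x*(rho - z) - y
    dz/dt = x*y - beta*z"""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # differ by one part in a billion
for _ in range(3000):        # integrate ~30 time units
    a, b = lorenz_step(a), lorenz_step(b)
print(abs(a[0] - b[0]))      # the billionth has grown to macroscopic size
```

Both trajectories still stay on the same attractor, though, which is the climate-vs-weather distinction: the statistics are predictable even when the path isn't.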
This just in.... (Score:2)
TZ
This far and no HG2G references (Score:2)
I'll save those huge computers a few million years: The answer is 42.
Now try producing that on a mere Milliard Gargantubrain.
Software (Score:2)
Software is the weak point in supercomputing nowadays. Funding for hardware SO outstrips the development of software that runs on the expensive hardware that it's becoming a SERIOUS problem.
Something Doesn't Mul-Add Up Here (Score:2)
Something just doesn't jibe here. So it takes you 20 days instead of 1 to compute the next hundred years. Why is this a problem? It's still a hundred years in 20 days. Don't you have even a little bit of patience?
The only reason to compute a hundred years in one day is if you're going to restart the computation each morning to see what the next
I remember when a megaflop was FAST (Score:2)
About 13, 14 years for each new 1000x level.
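That checks out as a rule of thumb: 1000x is about ten doublings, so at the classic 18-month doubling period you get roughly 15 years per prefix (mega to giga to tera to peta). A quick sketch of the arithmetic:

```python
import math

def years_per_factor(factor, doubling_months=18.0):
    """Years needed for a given speedup if performance doubles every
    doubling_months (the classic Moore's-law rule of thumb)."""
    doublings = math.log2(factor)  # 1000x is ~9.97 doublings
    return doublings * doubling_months / 12.0

print(round(years_per_factor(1000), 1))      # 14.9 years at an 18-month doubling
print(round(years_per_factor(1000, 16), 1))  # 13.3 years at a 16-month doubling
```

So the observed 13-14 years per 1000x corresponds to a doubling period of roughly a year and a half.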
Climate change by means of simulation... (Score:2)
"State-of-the-art systems today can simulate about five years per day of computer time, he says, but some climatologists yearn to simulate 100 years in a day."
Strangely enough, I'm reminded of the Heisenberg uncertainty principle. Considering the power and infrastructure requirements to run a supercomputer with that much processing power today, you might change the climate by simulating it.
Re:Weather Day After Tomorrow (Score:5, Funny)
Don't hold your breath; it'll disrupt the predictions.
Re: (Score:2)
48-hour forecasts are very accurate. Hurricanes aren't so bad either.
I remember when the big leap was talking about a 3 day forecast to viewers; now it's 7-10. Still with decent accuracy.
Now I note you said 'predictions' and that will never happen. I am assuming you meant forecasts.
Re:Weather Day After Tomorrow (Score:5, Interesting)
44 years ago 1-5 megaflops was hot! What excitement we felt when the CDC6600 was installed at my university!
Back in '85 I was part of a startup building a mini-Cray, reimplementing the Cray instruction set in a smaller, cheaper box. I remember we focused on the Whetstone benchmark a lot, and it turned out that the Whetstone code really was bound up by moving characters around while formatting output strings, etc. We paid very careful attention to efficiently coding the C library string handling routines, and that got us more performance payback than anything we could do to optimize the arithmetic. One needs to understand the benchmark being used.
Re: (Score:2)
Re: (Score:2)
Well, if you don't know if there's gonna be a hurricane, then you don't know what the weather's gonna be, do you?
rj
Re:Weather Day After Tomorrow (Score:5, Funny)
As the hair on your face,
You yearn to displace,
So do it with grace.
Burma Shave
Re: (Score:2)
Yeah, that will be real useful.
"There is a 90% chance of rain in Omaha on Wednesday, October 6th, 4008"
Re: (Score:2, Insightful)
It's marginally more useful if it predicts 0% chance of rain because the average surface temperature of the planet has exceeded 100C on Wednesday, October 6th, 4008
Re: (Score:2)
Thus, running a simulation longer requires an exponentially shorter timestep to keep the errors suppressed. Worse, it also demands exponentially more accurate initial conditions since any initial errors are amplified. I doubt we'll ever see serious "the weather will be..." forecasts for more than a few weeks, mainly due to the IC problem.
Re:Uhh, do you have a model? (Score:5, Informative)
Re: (Score:2)
The problem is that separating climate from weather doesn't mean much when doing predictions. You still have to predict the weather to provide data points for the climate. Otherwise, you can't have long-term statistics of weather patterns in the future that were predictions of today. And you can't make claims about the weather in the future from the climate predictions.
So all future climate predictions will inherit inaccuracies from weather prediction. They are not separable even when you show how they aren'
Re: (Score:2)
Can't I? - On average the weather will be colder in winter 4008 than it will be in summer 4008.
The rest of the logic in your post is upside down, however we have crossed swords before and I have (in the past) provided you with relevant links that you are still choosing to ignore.
Re: (Score:3, Insightful)
Are you sure about that? You see, you opened a hole so broad that your statement isn't accurate even today. It's always summer somewhere on Earth and always winter somewhere else. So it could be the same temperature in summer and winter in 4008.
Also, you didn't explain what future climate predictions you came to that conclusion over. I might have not been specific enough for a fan boy like you, but I was making the claim to future
Re: (Score:2)
No, in fact my faith in science as the provider of the best available explanation for systematic observations of the natural world means that I'm not even sure the Sun will rise tomorrow, BUT my faith also tells me not to worry about it.
Absolute certainty of future events is a sport played by politicians and opinion columnists, and I suggest that is why you insist on using their st
Re: (Score:3, Insightful)
I wasn't talking about your faith in science. I am talking about your faith in global warming and how it has to be true.
Re:Uhh, do you have a model? (Score:4, Informative)
IANAM (I am not a meteorologist)
That's for sure.
Here's an analogy: Say you pour two different colored cans of paint into a bucket and start stirring. Weather is like predicting the exact patterns of swirls that you'll see as the colors mix. Very hard to do looking ahead more than a couple of stirs.
Climate is more like predicting the final color that will result after the mixing is done. Not nearly so intractable. The summary is talking about climate, not weather.