First Petaflop Supercomputer To Shut Down
An anonymous reader writes "In 2008, Roadrunner was the world's fastest supercomputer. Now that the first system to break the petaflop barrier has lost a step on today's leaders, it will be shut down and dismantled. In its five years of operation, Roadrunner was the 'workhorse' behind the National Nuclear Security Administration's Advanced Simulation and Computing program, providing key computer simulations for the Stockpile Stewardship Program."
The Coyote finally won (Score:3, Funny)
RR must've spent too much time pecking on that Acme birdseed.
Re:The Coyote finally won (Score:5, Informative)
Yeah, just like the OP who was too busy "pecking" and forgot to include the link to the actual article on the decommissioning: http://www.pcmag.com/article2/0,2817,2417271,00.asp [pcmag.com]
On April Fools' Day _anything_ is possible ... (Score:3)
The Coyote finally won
Yeah, that too is possible!
so when's the auction? (Score:3)
It'd be interesting to see if this thing goes for scrap value, or if someone else will pick it up for service elsewhere...
Re: (Score:2)
Keep an eye on eBay for parts.
Comment removed (Score:5, Interesting)
Which OS? (Score:2)
How often do you upgrade really? (Score:2)
I was relatively late to the build your own PC craze, I built my first one in 1995 after about 8 years of being a Mac owner.
But since that time I have found relatively little worthwhile "upgradability" in my systems. I do remember adding a 3DFX card to my Pentium-166 system and replacing a couple of video cards (in the last 2-3 years) whose fans had quit.
When I built systems I tried to get the best bang for my buck out of my CPU, buying just high enough in the product lineup that my parts were "better" bu
Re: (Score:2)
Re: (Score:2)
I'm not disputing the dangers of planned-obsolescence/DRM/black-box computing at all (despite the two iPads and 3 iPhones we have here).
I still build my own PCs, but for a whole bunch of reasons I tend to leave them as-built these days.
I'm fine with my existing two year old Core i5 system -- but I kind of threw some money at it when I built it and it's paid off -- SSD boot disk, 2 TB RAID1 storage, 16 GB RAM.
When I'm done, my young son gets them and by then he's thrilled. He's just about to get a slight
Re: (Score:2)
Re: (Score:2)
I think the environmental thing is ultimately going to bite the planned-obsolescence business strategy in the ass.
I think environmental regulations on hazardous materials, manufacturers being forced to take back and recycle old products, and possibly even cost of materials will make it harder and harder to release intentionally obsoleted gadgets.
Some of this cost can be passed on to end users, but much of it can't be and I've read editorial content from environmental advocates that even suggests that device
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Sold to Indonesia would be my guess.
Stop writing "barrier". (Score:5, Insightful)
"Sound barrier" was and remains OK because there is a physical difference between flying slower than and faster than the speed of sound. But the word "barrier" is now (over)used to make things sound more dramatic. Raising a number from below to above some arbitrary (usually number base-dependent) threshold does not imply crossing a barrier, unless by barrier is meant "barrier to entry of another over-hyped tech piece".
Re: (Score:1)
Milestone (Score:2, Insightful)
Stop complaining about the word "barrier" (Score:2)
Whenever there is a story on supercomputers on /., there will be a comment stating that there was no barrier whatsoever. But that's not quite true.
The truth is that the performance of supercomputers grows as fast as it does because engineers continuously solve problems that were previously deemed intractable (e.g. power consumption, reliability, network performance). The research may not be groundbreaking in the sense of earth-shattering, but it definitely is in the sense of "wow, I didn't think one could do that!"
Aw, crap! (Score:5, Funny)
I think that thing could host a kick-ass DOOM session!
Re: (Score:2)
They wouldn't mind an upgrade either, which is why they're doing one.
Re:Whiners (Score:5, Informative)
The problem is energy efficiency. In the 5 years since it was first built, supercomputers have become far more energy-efficient. Roadrunner comes in at 444 MFLOPS per Watt, while the current fastest supercomputer (and also a DOE project), Titan, gets 2,143 MFLOPS per Watt. Roadrunner uses 2345 kW, and supporting equipment (cooling, backup power) adds (on average) 80% more. Assume they get relatively cheap electricity (the Internets tell me the average price charged to industrial customers is about 7¢/kWh), and that means their electric bill is at least $295.50 PER HOUR. A computer with the same performance but Titan's efficiency would cost $61 per hour. That's the difference between your electric bill being $2.6 million per year and $500,000.
Assuming Titan's cost also scales ($60 million for 17 Petaflops -> ~$3.5 million for 1 Petaflop), then the payback for scrapping it and building a new computer is under 2 years. So yes, it IS saving money to scrap this one. They're not even replacing it with a new one (yet, anyway) - they're using one that was built in 2010.
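If you want to sanity-check those numbers, here's the arithmetic as a quick Python sketch (the 80% cooling/backup overhead and the 7¢/kWh rate are the assumptions stated above; everything else follows from them):

# Back-of-envelope power-cost comparison using the figures above.
ROADRUNNER_KW = 2345        # compute power draw
OVERHEAD = 1.8              # cooling + backup power add ~80%
PRICE_PER_KWH = 0.07        # cheap industrial electricity, $/kWh

RR_MFLOPS_PER_W = 444
TITAN_MFLOPS_PER_W = 2143

# The same sustained performance at Titan's efficiency needs
# proportionally less power.
titan_equiv_kw = ROADRUNNER_KW * RR_MFLOPS_PER_W / TITAN_MFLOPS_PER_W

for name, kw in [("Roadrunner", ROADRUNNER_KW),
                 ("Titan-class equivalent", titan_equiv_kw)]:
    hourly = kw * OVERHEAD * PRICE_PER_KWH
    print(f"{name}: ${hourly:,.2f}/hour, ${hourly * 24 * 365:,.0f}/year")

# Roadrunner: ~$295/hour, ~$2.6M/year
# Titan-class equivalent: ~$61/hour, ~$0.5M/year
# Saving ~$2.1M/year against a ~$3.5M build cost is a sub-2-year payback.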
And also, yes, you CAN use a computer to calculate how your nuclear arsenal is deteriorating. What makes you think they can't?
Re: (Score:2)
And also, yes, you CAN use a computer to calculate how your nuclear arsenal is deteriorating. What makes you think they can't?
Welcome to the New America -- where two decades of coddling has left the general population believing that their opinions are as valid as the knowledge of experts.
Here, it's even in their mission statement (Score:1)
http://nnsa.energy.gov/ourmission/managingthestockpile
"Most nuclear weapons in the U.S. stockpile were produced anywhere from 30 to 40 years ago, and no new nuclear weapons have been produced since the end of the Cold War."
And yet you need an ever more powerful supercomputer to track and simulate them???
Then we get to the unpleasant truth:
http://nnsa.energy.gov/ourmission/recapitalizingourinfrastructure
"Recapitalizing Our Infrastructure The FY2011 Budget Request increase represents the investment needed to tr
Re: (Score:2)
And yet you need an ever more powerful supercomputer to track and simulate them???
In the old days, we just detonated a bomb or two if we needed experimental data. Since the test-ban treaty, however, numerical simulations have been used in lieu of physical experiments. I suspect that the accuracy of numerical simulations is a closely guarded secret, and the DOE hasn't yet decided that the present state of the art can't be improved upon.
There is a rival benchmark, Graph 500 [graph500.org]. Roadrunner isn't on it. Neither is Cielo.
But, it's intended to simulate a different sort of problem set.
And yet another perspective comes from Intel’s John Gustafson, a Director at Intel Labs in Santa Clara, CA, “The answer is simple: Graph 500 stresses the performance bottleneck for modern supercomputers. The Top 500 stresses double precision floating-point, which vendors have made so fast that it has become almost completely irrelevant at predicting performance for the full range of applications. Graph 500 is communication-intensive, which is exactly what we need to improve the most. Make it a benchmark to win, and vendors will work harder at relieving the bottleneck of communication.”
The Case for [insidehpc.com]
Re:Top supercomputer is Google? (Score:5, Informative)
I've worked for the DOE for quite a few years now writing software for these supercomputers... and I can guarantee you that we use the hell out of them. There is usually quite a wait to just run a job on them.
They are used for national security, energy, environment, biology and a lot more.
If you want to see some of what we do with them see this video (it's me talking):
http://www.youtube.com/watch?v=V-2VfET8SNw [youtube.com]
Re:A pellet stress simulation? (Score:5, Informative)
I don't get it. Are you looking for a Funny mod? You linked to a 2D heat-transfer simulation done in Matlab. Did you even watch the video?
The second simulation (of a full nuclear fuel rod in 3D) had nearly 300 million degrees of freedom, and the output alone was nearly 400GB to postprocess. It involves around 15 fully coupled, nonlinear PDEs all being solved simultaneously and fully implicitly (to model multiple years of a complex process you have to be able to take big timesteps) on ~12,000 processors.
Matlab isn't even close.
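For anyone wondering what "fully implicit" buys you, here is a toy sketch of the structure: backward Euler with a Newton solve at every step. The two-unknown system below is invented purely for illustration and has nothing to do with the real ~15-PDE, 300-million-DOF code; the point is that an implicit scheme stays stable at timesteps far past an explicit scheme's limit.

# Fully implicit (backward Euler) timestepping with Newton's method on a
# stiff, coupled, nonlinear toy system. Equations are made up for
# illustration only.
import numpy as np

def f(u):
    # du/dt = f(u): a fast-relaxing "temperature" T coupled to a slow,
    # nonlinear "swelling" variable s.
    T, s = u
    return np.array([-50.0 * (T - 1.0) + 5.0 * s,
                     0.1 * T**2 - 2.0 * s])

def jac(u):
    # Analytic Jacobian of f, needed for the Newton solve.
    T, s = u
    return np.array([[-50.0, 5.0],
                     [0.2 * T, -2.0]])

def backward_euler_step(u_old, dt, tol=1e-10, max_newton=20):
    # Solve g(u) = u - u_old - dt*f(u) = 0 for the new state.
    u = u_old.copy()
    for _ in range(max_newton):
        g = u - u_old - dt * f(u)
        if np.linalg.norm(g) < tol:
            break
        J = np.eye(2) - dt * jac(u)
        u -= np.linalg.solve(J, g)
    return u

u = np.array([0.0, 0.0])
dt = 0.5   # ~12x beyond this system's explicit stability limit of ~0.04
for _ in range(10):
    u = backward_euler_step(u, dt)
print(u)   # marches stably to the steady state despite the stiffness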
Re: (Score:3)
I know I shouldn't respond to ACs, but I'm going to anyway:
And it didn't need to be.
As far as geometry goes, it did need to be that detailed. Firstly, the pellets are round and to get the power and heat transfer correct you have to get the geometry correct. Also, pellets have small features on them (dishes on top and chamfers around the edges) that are put there on purpose and make a big difference in the overall response of the system (the dishes, in particular, reduce the axial expansion by a lot). So the detailed geometry is
Re: (Score:2)
Re: (Score:2)
Pellets, as manufactured, are _very_ smooth. This is a decent overview I just found from Google: http://www.world-nuclear.org/info/Nuclear-Fuel-Cycle/Conversion-Enrichment-and-Fabrication/Fuel-Fabrication/#.UVmkjas5yZc [world-nuclear.org]
They start life as powder and then are packed in a way that makes them smooth.
However, just as in any kind of manufacturing: defects happen. A working reactor will have over a million pellets in it. Somewhere in there one is going to be misshapen.
Some of what we can do is run a ton of stati
Re: (Score:2)
Thanks!
Certainly the nuclear reactor industry has done "just fine" without these detailed calculations for the last 60 years. Where "just fine" is: "We've seen stuff fail over the years and learned from it and kept tweaking our design and margins to take it into account." They have used simplified models to get an idea of the behavior, and it has worked for them (insofar as the reactors run safely and reliably).
However, the "margins" are the name of the game here. If you can do more detailed calculations th
Re: (Score:3)
The fact that the result was displayed on a graph of 200 pixels for a summary for the public has jack to do with the production use of the data. Do you think businesses only produce reports for shareholder meetings, and banks only look at pie charts when making decisions on billions of dollars of assets? Your criticism is disingenuous at best and has nothing to do with the working product of the supercomputer.
As for the degrees of freedom, you have to recall that their working needs are different from you
Re: (Score:2)
See my response above about the fidelity of the calculation.
Industry has been chomping at the bit for decades to get to detailed calculations like this. If you can save a nuclear reactor 1% of their operating cost... that is millions of dollars. Higher fidelity = more money in our economy.
Re:Top supercomputer is Google? (Score:5, Insightful)
Are you somehow under the impression that these supercomputers are used to count nukes and keep track of their addresses?
Nuclear weapons have things like plutonium and uranium in them. The essential property of those is that they're radioactive. That means they decay. So yes, they do change over time. Since the US has agreed not to go firing the things off, the supercomputers are used to simulate the decay process and the firing, to see if the weapons still work, what the yield would be, and how long they're likely to keep working.
It's kind of embarrassing when the president says "turn them into a radioactive parking lot!" after North Korea nukes San Francisco, and the retaliatory strike is a bunch of duds.
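The bulk decay arithmetic itself is trivial (the half-lives below are standard physical constants); the hard part the supercomputers handle is everything decay causes, not the decay curve:

# How much of a warhead's key isotopes survive after ~35 years.
from math import exp, log

def fraction_remaining(years, half_life_years):
    return exp(-log(2) * years / half_life_years)

for isotope, half_life in [("tritium (boost gas)", 12.32),
                           ("plutonium-239", 24110.0)]:
    left = fraction_remaining(35, half_life)
    print(f"{isotope}: {left:.1%} remaining after 35 years")

# tritium: ~13.9% left, hence periodic replenishment; Pu-239: ~99.9%
# left, but decay products and self-irradiation still alter the material.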
"Atomic" clocks don't use radioactive decay.... (Score:4, Informative)
They rely on the resonant frequency of atoms in metal vapors (Cesium or Rubidium), or the output of a hydrogen maser.
http://en.wikipedia.org/wiki/Atomic_clock [wikipedia.org]
Radioactive decay is a random (stochastic) process. So random that it can be used as the basis for a random number generator. Just what you DON'T want in a precise time/frequency reference.
http://www.fourmilab.ch/hotbits/ [fourmilab.ch]
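HotBits works roughly like this sketch: time successive decay events and compare pairs of intervals. Here a simulated exponential source stands in for the real Geiger counter (the actual service also alternates the comparison sense to cancel residual bias):

# Extracting random bits from (simulated) radioactive decay timing.
import random

def decay_intervals():
    # Inter-arrival times of a Poisson process are exponentially
    # distributed; random.expovariate simulates a ~100 counts/sec source.
    while True:
        yield random.expovariate(100.0)

def bits_from_decay(n):
    src, out = decay_intervals(), []
    while len(out) < n:
        t1, t2 = next(src), next(src)
        if t1 != t2:            # discard ties
            out.append(1 if t1 > t2 else 0)
    return out

print("".join(map(str, bits_from_decay(64))))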
Re:Mod suppression aside, the point is clear (Score:5, Informative)
Atomic clocks have absolutely nothing to do with radioactive decay. http://en.wikipedia.org/wiki/Atomic_clock [wikipedia.org]
Re: (Score:1)
These systems do different things.
The MapReduce framework cannot do every possible algorithm efficiently. It can only handle a certain subset of problems. Supercomputers are designed for problems that require "tight coupling" between processors. A typical example is multiplying two large matrices together. MapReduce cannot do this kind of problem efficiently.
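To see why, here is the textbook one-pass MapReduce matrix multiply as a pure-Python toy (no actual Hadoop; a dict of lists stands in for the shuffle). Every element of A and B gets emitted n times, once per output cell it touches, so the network shuffle carries ~2n^3 values while an MPI code keeps blocks resident and exchanges only panel updates:

# Toy one-pass MapReduce matrix multiplication, C = A x B (n x n).
from collections import defaultdict

def mapper(name, i, j, value, n):
    # Replicate each input element to every output cell it contributes to.
    if name == "A":
        for col in range(n):
            yield (i, col), ("A", j, value)
    else:
        for row in range(n):
            yield (row, j), ("B", i, value)

def reducer(key, values):
    a, b = {}, {}
    for name, k, v in values:
        (a if name == "A" else b)[k] = v
    return key, sum(a[k] * b[k] for k in a)

n = 2
A = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
B = {(0, 0): 5, (0, 1): 6, (1, 0): 7, (1, 1): 8}

shuffle = defaultdict(list)   # this stage is the expensive network shuffle
for name, M in (("A", A), ("B", B)):
    for (i, j), v in M.items():
        for key, val in mapper(name, i, j, v, n):
            shuffle[key].append(val)

print(dict(reducer(k, vs) for k, vs in shuffle.items()))
# {(0, 0): 19, (0, 1): 22, (1, 0): 43, (1, 1): 50}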
Re: (Score:1)
No need to be so rude. I happen to have worked with Hadoop-like architectures, so I know what I'm talking about. The site says
It doesn't give any big-O running time bounds. Even if the algorithm could achieve the standard big-O bounds for matrix multiplication, the overhead for Hadoop is still much higher than the overhead for CPUs in a supercomputer to talk to each other.
Re: (Score:1)
My original point was that supercomputers do things that MapReduce architectures are not designed to do. I'm not sure how half of what you just wrote relates to that point. You seem to be saying that computers are so fast now that we only need one machine. In that case, MapReduce vs supercomputers is irrelevant anyway. Instead of putting what I'm saying in some "context" that is completely irrelevant, why don't you try to understand the original point: MapReduce is not designed for tight coupling. You
Re: (Score:2)
So from the graph, with a small cluster of 16 cores, Linpack starts to reach 80% efficiency and rising, indicating it scales well.
What? Looks as if the efficiency is falling. Collapsing, even.
Titan, by the way, has an RMax of 17,590,000 GFlops and an RPeak of 27,112,500 GFlops (about 65 percent). Your little 16-core EC2 cluster has a peak performance of 88 GFlops.
The EC2 16xc1.xlarge configurations are about 30 percent of peak, if I'm reading table 7 correctly. It may be cost effective for small scale simulations, but some projects demand petascale resources, and EC2 can't be expected to scale up that far. The Top500 list is designed to enc
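For reference, the percent-of-peak arithmetic from the numbers in this thread (the 88 GFlops and ~30% figures are the ones quoted above):

# Linpack efficiency = RMax / RPeak.
titan_rmax, titan_rpeak = 17_590_000, 27_112_500   # GFlops
print(f"Titan: {titan_rmax / titan_rpeak:.0%} of peak")   # ~65%
ec2_peak = 88                                       # GFlops, 16xc1.xlarge
print(f"EC2 at ~30% of peak: ~{0.30 * ec2_peak:.0f} GFlops sustained")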
Re: (Score:1)
On multiplication, the issue is efficiency, not whether it can be done. Recall my original claim was
You seem to have ignored the questions I raised about the efficiency (in terms of big O running time, and overhead) of the MapReduce implementation.
I saw the IEEE paper on finite element analysis and MapReduce; it was completely contentless (maybe Google wasn't being such a good friend to you
Roadrunner (Score:1)
I worked on Roadrunner. It was a pain to program, but I'm sorry to see it go. The Cell processor was ahead of its time...
No need to shut it down (Score:4, Insightful)
At today's prices, I'd have it farming Bitcoins.
Re:No need to shut it down (Score:4, Insightful)
Do the math! (Score:2)
Whaddya expect? (Score:1, Funny)
you have "flop" in the name
Re: (Score:2)
I hesitated between modding you up and answering. What you say is somewhat true but wrong nonetheless.
It is true that Roadrunner is very difficult to use. The consequence is that it has been used to run the nuclear stockpile application it was designed for, and that's it. Nothing else, so to speak. And even running that single application has proven difficult.
Now, the machine has been pioneering the accelerator field. It has been the testbed for all new generation computers that are coming now. In some sense, its failu
throw away mentality (actual article link) (Score:3, Informative)
I guess it's not new and shiny anymore, so we can just throw it away.
I did want to read the actual article, but the only link is to a 2008 article.
Fail or what?
http://news.sky.com/story/1071902/supercomputer-pioneer-roadrunner-to-shut-down [sky.com]
That is the article. And I see why they are getting rid of it: it's not as power-efficient as newer computers.
Re: (Score:2)
What I've found odd with respect to power/cycle efficiency is that it doesn't seem to be improving anywhere except in portable gear.
Example: I'm typing on a dual-core AMD laptop (E350 die) which is entirely powered by a 50 Watt power brick. That covers the processor, board, optical drive, two hard drives, a bank of 7 flash drives on a bus-powered hub, and a 15.3" widescreen panel. THE SAME HARDWARE in a mini-ITX form factor *requires* a 200W PSU just for the board.
How does that work??
Re:throw away mentality (actual article link) (Score:5, Informative)
Keep in mind that unlike laptops, the motherboard manufacturer has no idea what you'll be pairing the board with. A low-end, cheap PSU at terrible efficiency may be "rated" at 200W but only give out 100W before crapping out. They also give themselves headroom for people who think the motherboard's rated power requirements also cover everything else (i.e. RAM, CPU, hard drives, etc.).
Actual power usage is far, far below the recommended power output. My computer's sitting idle at a little above 200W, and that's an i5-2500K overclocked with 16GB of RAM, two Radeon HD6950 2GB GPUs, two 7500RPM 3.5" HDDs plus a Vertex 3 SSD, an optical drive, a mouse, a mechanical keyboard requiring double USB ports, a phone recharging, an external eSATA HDD, all running on a full ATX motherboard geared towards power and not efficiency. Oh, and the reading includes two 23" IPS screens with non-LED backlighting (so much more power hungry).
If I remember correctly, full load (prime95 torture test and furmark running at the same time) topped out at around 550W, again with a bunch of peripherals plugged in, a 1GHz overclock above stock, and 2 screens counted in the total. I'd say that kind of power is very much in line with your laptop, considering just how ridiculously more powerful the machine is.
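For what it's worth, the board vendor's 200W "requirement" probably comes from a worst-case budget something like this sketch (all component numbers below are hypothetical):

# Why a rated PSU "requirement" sits far above typical draw.
worst_case_w = {"CPU/APU": 18, "board + RAM": 15, "2 HDDs spinning up": 50,
                "optical drive": 25, "bus-powered USB devices": 25}
peak = sum(worst_case_w.values())   # simultaneous worst cases: ~133W
cheap_psu_derating = 1.5            # assume a bargain PSU delivers 2/3 of label
print(f"recommended PSU label: >= {peak * cheap_psu_derating:.0f}W")   # ~200W
# Devices almost never peak simultaneously and idle far lower, which is
# how the same silicon runs happily from a 50W laptop brick.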
Re:throw away mentality (actual article link) (Score:5, Interesting)
It costs a _lot_ to keep these computers running (read: Millions with a really big M). The power bill alone is an enormous amount of money.
It literally gets to the point where it is cheaper to tear it down and build a new one that is better in flops/Watt than to keep the current one running.
ummmm guys.... (Score:1)
Re: (Score:1)
Re: (Score:1)
>> Since the final S stands for "second", conservative speakers consider "FLOPS" as both the singular and plural of the term, although the singular "FLOP" is frequently encountered. Alternatively, the singular FLOP (or flop) is used as an abbreviation for "FLoating-point OPeration", and a flop count is a count of these operations (e.g., required by a given algorithm or computer program). In this context, "flops" is simply the plural rather than a rate, which would then be "flop/s".
Obvious retirement function (Score:2)
Bitcoin server
So, I just wanna know... (Score:1)
How many FLOPs has it flipped over its career?