Cell-based "Roadrunner" Tops Elusive Petaflop Mark
prunedude writes "The NY Times is reporting that an American military supercomputer, assembled from components originally designed for video game machines, is more than twice as fast as the previous fastest supercomputer, the I.B.M. BlueGene/L. To put the machine's performance in perspective, Thomas P. D'Agostino, the administrator of the National Nuclear Security Administration, said that if all six billion people on Earth used hand calculators and performed calculations 24 hours a day, seven days a week, it would take them 46 years to do what the Roadrunner can do in one day."
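The arithmetic behind that comparison can be sanity-checked. A minimal sketch, assuming the machine sustains exactly one petaflop (10^15 calculations per second) and a 365-day year:

```python
# Sanity check of the hand-calculator analogy, assuming a sustained
# rate of exactly one petaflop (1e15 calculations per second).
PETAFLOP = 1e15
SECONDS_PER_DAY = 24 * 60 * 60

# Work done by the machine in a single day.
machine_ops = PETAFLOP * SECONDS_PER_DAY

# Six billion people calculating around the clock for 46 years.
people = 6e9
seconds_in_46_years = 46 * 365 * SECONDS_PER_DAY

# Per-person rate implied by the quote.
rate = machine_ops / (people * seconds_in_46_years)
print(round(rate, 1))  # roughly 10 calculations per second, per person
```

Under those assumptions the quote implies each person punches in about ten calculations every second, nonstop, for 46 years.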
Enough With The Fanboyism (Score:2, Interesting)
He was obviously asking about people with a clue, not stupid little fanboys who love to spout obviously false quotes and claims supposedly made by console makers.
Sony NEVER once claimed the PS2 was a supercomputer. Not ONCE.
The EE WAS powerful, cheap, and power-efficient enough that at the time of its arrival on the market it fell under government scrutiny for its potential military uses. The EE utterly SHIT over any other chip that was on the market, or that would be for another two years after its release, in its combination of floating-point power and heat/power usage.
Let me guess, you're another one of those pathetic little fanboys who go around repeating that tired old lie about Sony, the PS2, and Toy Story graphics...
http://builder-news.com.com/2100-1040-250632.html
"One of the basic premises of the Xbox is to put the power in the hands of the artist," Blackley said, which is why Xbox developers "are achieving a level of visual detail you really get in 'Toy Story.'"
Take that petaflop with a grain of salt (Score:5, Interesting)
The real PITA with these machines is that the powers that be are trying to kill two birds with one stone: they want an R&D platform for advanced computing, but they also want to certify an aging and untestable nuclear stockpile. That rather requires a fairly static platform, and so far our experience with ASC has been that when a machine hits that sweet state, they yank it and give us the next one.
Re:exaflop, zettaflop, the yottaflop and the xeraf (Score:3, Interesting)
Are you really arguing that the scientific and social advances from the military arise from secret prisons and lack of moral qualms?
Re:Not in perspective - this is a media number (Score:5, Interesting)
I suspect the first example of this happening was trying to estimate how many angels could fit on the head of a pin.
Other meaningless analogies could be:
The simple fact is that a petaflop computer works faster than humans can conceive, and no analogy can make that comprehensible.
myke
NOT MILITARY! (Score:4, Interesting)
Re:This CELL is not single precision (Score:3, Interesting)
Re:Question (Score:3, Interesting)
Re:Yes, but... (Score:3, Interesting)
Data Collection anomalies (Score:3, Interesting)
Comment removed (Score:5, Interesting)
Re:Summary should have a shout out (Score:5, Interesting)
As you probably know, your current desktop computer executes software 'in parallel' on multiple cores. Unless that software is written to use multiple cores, the parallelization will probably take place at the task level; in other words, unless special care was taken during development, each core will be executing a different process (or parts of a process, each core slicing its time among multiple processes as long as the number of tasks exceeds the number of CPUs).
The Cell processor and the graphics cards you are talking about (I assume you refer to the NVIDIA 'CUDA' architecture) are not your ordinary processors (though the Cell comes a lot closer). They need special software to get the maximum out of their parallel features, which means that you can only exploit that speed when you are trying to solve a particular class of problems.
The issue is that not every problem is easily adapted to a parallel solution, and for some classes of problems it's simply impossible. For general-purpose computing, a general-purpose CPU (with or without multiple cores) is good enough.
If you have a problem that you can solve in a parallel way (say computing chess moves, analyzing genetic strings, ray tracing an image, or fluid dynamics; add your own examples here), then it makes sense to invest the extra time to code the problem up in such a way that, if you have lots of CPUs, they can each work on a small part of the problem.
The graphics cards are closer to a traditional vector processor than the Cell, and thus even less suitable for use as general-purpose CPUs.
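The idea of coding a problem so that many CPUs can each work on a small part of it can be sketched in a few lines; this is a minimal illustration using Python worker processes, with the chunk sizes, worker count, and sum-of-squares task all chosen purely as assumptions for the example:

```python
# Minimal sketch of data-parallel work splitting: an embarrassingly
# parallel job (summing squares) divided among worker processes.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker handles one independent slice of the problem.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(100_000))
    n_workers = 4  # illustrative; in practice, match the CPU count
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # The combined partial results equal the serial answer.
    print(total == sum(x * x for x in data))  # True
```

The point of the sketch is the structure, not the speedup: the work only divides cleanly because each chunk is independent, which is exactly the property that problems like chess search or ray tracing have and many other problems lack.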
Human Brain (Score:3, Interesting)
Not sure about the software though...
Re:Human Brain (Score:2, Interesting)
1.) The people who claim this are probably also the ones who benefit if it is believed to be true, in terms of funding, general interest, and cool factor.
2.) Comparing the human brain and computers is not even comparing apples and oranges; it's apples and ellipses. Both are sort of round, but that's about as far as it goes.
3.) We have no idea exactly where we are on the road to matching the human brain in any way. So how can we know how long the road ahead is?
4.) The processing power of the brain is more or less irrelevant to its capabilities. Hundreds of millions of years ago the first insects walked around without falling over at every step. We can't even build a decent pair of robotic legs to do that.
All in all, it's like saying the beach has more processing power than we will ever need, the molecules just need to be rearranged. Maybe so, but until that time, it's just sand, incapable of adding 1 + 1.