IBM Building 20 Petaflop Computer For the US Gov't 248
eldavojohn writes "When it's built, 'Sequoia' will outshine every super computer on the top 500 list today. The specs on this 96 rack beast are a bit hard to comprehend as it consists of 1.6 million processors and some 1.6TB of memory. That's 1.6 million processors — not cores. Its purpose? Primarily to keep track of nuclear waste & simulate explosions of nuclear munitions, but also for research into astronomy, energy, the human genome, and climate change. Hopefully the government uses this magnificent tool wisely when it gets it in 2012."
End of the world in 2012 (Score:2, Informative)
Re:1.6M Processors, but only 1.6 TB memory? (Score:5, Informative)
My bet is that this is a typo.
1.6 PB seems more reasonable.
Re:1.6M Processors, but only 1.6 TB memory? (Score:5, Informative)
it is indeed 1.6 petabytes: http://www.eetimes.com/news/design/showArticle.jhtml?articleID=213000489 [eetimes.com]
Re:1.6M Processors, but only 1.6 TB memory? (Score:5, Informative)
Re:OH NOES!!! (Score:3, Informative)
Well, if IBM builds skynet, then we win the war by saying PWRDNSYS OPTION(*IMMED).
Re:Why always nuclear simulation? (Score:5, Informative)
New designs are used to maximise yield per mass, enabling you to throw a smaller warhead at a target, which means less chance of interception. It also means a smaller package to maintain, and cheaper to build, along with more warheads per unit of material.
Re:MTBF (Score:3, Informative)
But thank you for pointing out that the architecture is inherently fault-tolerant. When submitting a job to massively parallel machines like this, one of the options presented is how many cores/how much memory for this job (as well as many other performance-affecting options, such as internal network topology). A bad core can be "skipped" in much the same way as a bad sector on a hard drive; except that in this case, it's possible to repair/replace the bad core on the fly.
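To make the "skipping" idea concrete, here is a minimal sketch of core allocation that routes around flagged cores. This is purely my own illustration; the function name, the bad-core set, and the interface are all made up and bear no relation to any real scheduler's API:

```python
# Sketch: allocate cores for a job while skipping cores flagged as bad,
# analogous to a disk remapping bad sectors. Hypothetical interface.

def allocate(total_cores: int, bad_cores: set, requested: int) -> list:
    """Return `requested` healthy core IDs, skipping known-bad ones."""
    healthy = (c for c in range(total_cores) if c not in bad_cores)
    allocation = []
    for core in healthy:
        allocation.append(core)
        if len(allocation) == requested:
            return allocation
    raise RuntimeError("not enough healthy cores for this job")

# e.g. cores 2 and 5 have been flagged by hardware monitoring
print(allocate(10, {2, 5}, 6))  # -> [0, 1, 3, 4, 6, 7]
```

The job still gets its six cores; it just never sees the bad ones.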
Re:Why always nuclear simulation? (Score:5, Informative)
The parent said that the computer will be used for "mapping every reaction" between molecules. Since reactions tend to require quantum mechanical descriptions, I guessed the parent meant that the new computer would allow doing such calculations for all reactions in a rather large area.
I don't get what you're saying when you ask a question, relate it to the parent's post, and then say it's irrelevant.
Just a gedanken experiment to amuse myself, while noting that it actually has nothing to do with simulating nuclear weapons. Don't get too worked up about it.
And, do you realize how much processing power 20 petaflops is?
Yes, it's about two orders of magnitude more than the supercomputer I'm using at the moment. A lot for sure, but still limited to very small system sizes for quantum mechanical calculations. At the moment, even the best methods in practice scale as N^3 or so. With my current 100 TFlops I might do a DFT calculation with O(10000) atoms or so. Two orders of magnitude more CPU power with N^3 scaling gives me roughly a factor of 5 more atoms. 50000 atoms fit into a box of roughly 10x10x10 nm (depending on the material etc., of course). Still a way to go until I'm able to do "square miles".
If you want to go into classical molecular dynamics, then you're obviously in much better shape. With the current supercomputer that's maybe around 1E9 atoms, and since MD scales linearly, two orders of magnitude more flops means around 1E11 atoms. These fit into a box on the order of 1 µm^3. Again, still quite a way to go to square miles.
In conclusion, atoms are really really tiny, and in 3 dimensions you can pack a lot of them into a very tiny volume.
Also, they have so far not needed to calculate what a nuclear bomb does atom by atom (obviously, since it has been nigh impossible), and they probably never will need to. You can study waves and energy effects in great detail, and simulate them accurately, without needing to know where each and every atom goes. This will simply let them be more precise and accurate, as well as faster.
Yes, that was sort of implied in my previous post. The US nuke labs have been at the forefront of research on numerical methods in topics such as shock propagation (PPM and methods like that) and really, really large FEM simulations. Obviously, the actual nuclear reactions are taken into account probabilistically rather than with a full quantum mechanical treatment (as my monologue above shows, such a treatment for the primary is far beyond any computer in sight). AFAIK they use Monte Carlo neutron transport rather than the classical multigroup diffusion methods that are still largely used for civilian reactor design. That being said, I'm sure they are doing a lot of atomic- and quantum-level simulations as well, for small model systems designed to e.g. extract parameters for continuum simulations and such.
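For anyone curious what "Monte Carlo" means here, a toy version fits in a screenful. This is nothing like the lab codes (which track 3-D geometry, energy groups, fission, and far more); the slab thickness, cross-section, and absorption probability below are all made-up illustrative numbers:

```python
import math
import random

# Toy 1-D Monte Carlo neutron transport through a slab: sample each
# neutron's random walk and count how many leak out the far side.

random.seed(42)

SLAB = 5.0      # slab thickness, in units where the mean free path is ~1
SIGMA_T = 1.0   # total macroscopic cross-section
P_ABSORB = 0.3  # probability that a collision is an absorption

def transmitted(n_neutrons: int) -> float:
    """Fraction of neutrons that leak out the far side of the slab."""
    leaked = 0
    for _ in range(n_neutrons):
        x, direction = 0.0, 1.0  # start at the near face, moving right
        while True:
            # Distance to next collision, sampled from exp(-SIGMA_T * d)
            d = -math.log(random.random()) / SIGMA_T
            x += direction * d
            if x >= SLAB:
                leaked += 1  # escaped out the far side
                break
            if x <= 0.0:
                break        # escaped back out the near face
            if random.random() < P_ABSORB:
                break        # absorbed in the slab
            # Scatter: in 1-D, just pick a new direction at random
            direction = random.choice((-1.0, 1.0))
    return leaked / n_neutrons

frac = transmitted(100_000)
print(f"transmitted fraction ~ {frac:.3f}")
```

The appeal of the Monte Carlo approach is that each neutron history is independent, which is exactly the kind of embarrassingly parallel workload a 1.6-million-processor machine eats for breakfast.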
Re:don't smell right (Score:2, Informative)
Re:End of the world in 2012 (Score:4, Informative)
Well, whatever the case, once such a computer is built, someone had better ask it whether or not entropy can be reversed.
Re:and just for old time's sake... (Score:3, Informative)
The summary is wrong. I actually did RTFA, and it said 1.6 petabytes, not 1.6 terabytes.
Re:and just for old time's sake... (Score:3, Informative)
Re:and just for old time's sake... (Score:2, Informative)
Can you imagine a Beowolf cluster of those?
I don't think that phrase means what he thinks it means. http://en.wikipedia.org/wiki/Beowulf_(computing) [wikipedia.org]
P.S. It's Beowulf, not Beowolf.
Re:and just for old time's sake... (Score:3, Informative)
Except of course that TFA was wrong and it was 1.6 PB / 1.6 MCPUs = 1 GB/CPU.
I imagine though it could run a few gigs (jobs of short or uncertain duration).