IBM Building 20 Petaflop Computer For the US Gov't

eldavojohn writes "When it's built, 'Sequoia' will outshine every supercomputer on the top 500 list today. The specs on this 96-rack beast are a bit hard to comprehend, as it consists of 1.6 million processors and some 1.6TB of memory. That's 1.6 million processors — not cores. Its purpose? Primarily to keep track of nuclear waste & simulate explosions of nuclear munitions, but also for research into astronomy, energy, the human genome, and climate change. Hopefully the government uses this magnificent tool wisely when it gets it in 2012."

  • by lukaszg ( 1326959 ) on Tuesday February 03, 2009 @10:49AM (#26709241)
    I've heard about predictions of the end of the world in 2012; now I know the answer - this machine will become a Singularity [wikipedia.org].
  • by Professeur Shadoko ( 230027 ) on Tuesday February 03, 2009 @10:57AM (#26709393)

    My bet is that this is a typo.
    1.6 PB seems more reasonable.

  • by Madball ( 1319269 ) on Tuesday February 03, 2009 @11:50AM (#26710433)
    Another reference article: http://www.eetimes.com/news/design/showArticle.jhtml?articleID=213000489 [eetimes.com] mentions "up to" 4,096 processors per rack. So, at maximum, this would be 393,216 processors. Perhaps they are quad cores and someone took the liberty of multiplying: 393,216 x 4 = 1.6M (rounded). A more reasonable assumption may be 100,000 quad-core CPUs (400,000 cores). That would make the summary off by only a factor of 16, lol.
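
    For anyone who wants to check that arithmetic, here's the back-of-envelope version (the per-rack count is the "up to" number from the EE Times piece, and the quad-core multiplier is just a guess, not a confirmed spec):

        # Back-of-envelope check of the figures above. The per-rack count is the
        # "up to" number from the EE Times article; quad-core is only an assumption.
        racks = 96
        procs_per_rack = 4096
        cores_per_proc = 4

        procs = racks * procs_per_rack      # 393,216 processors
        cores = procs * cores_per_proc      # 1,572,864 -- the "1.6 million" figure

        print(procs, cores)
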
  • Re:OH NOES!!! (Score:3, Informative)

    by CompMD ( 522020 ) on Tuesday February 03, 2009 @11:54AM (#26710527)

    Well, if IBM builds skynet, then we win the war by saying PWRDNSYS OPTION(*IMMED).

  • by Richard_at_work ( 517087 ) on Tuesday February 03, 2009 @12:36PM (#26711461)
    Yes, nuclear weapons have a shelf life due to the components included in them - explosives, chemicals etc.

    New designs are used to maximise yield per mass, enabling you to throw a smaller warhead at a target, which means less chance of interception. It also means a smaller, cheaper package to build and maintain, and more warheads per unit of material.
  • Re:MTBF (Score:3, Informative)

    by mmell ( 832646 ) on Tuesday February 03, 2009 @12:43PM (#26711637)
    Higher than you're guessing. I've worked on BlueGene/L, BlueGene/S, and was involved in some of the development on BlueGene/P. All of these systems have an incredibly aggressive monitoring mechanism: voltages, temperatures, fan speeds, and half a dozen other hardware categories are monitored at the component level, and the data is stored in a database where it is analyzed to ensure that the system as a whole IS operational and stays that way.

    But thank you for pointing out that the architecture is inherently fault-tolerant. When submitting a job to massively parallel machines like this, one of the options presented is how many cores/how much memory to use for the job (as well as many other performance-affecting options, such as internal network topology). A bad core can be "skipped" in much the same way as a bad sector on a hard drive; except that in this case, it's possible to repair/replace the bad hardware on the fly.
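
    In rough pseudocode terms, the "skip the bad core" idea looks something like this (a toy illustration only — the names are made up, not the actual BlueGene control-system interface):

        # Toy illustration only -- hypothetical names, not the real BlueGene
        # control-system interface. A job asks for N cores; cores flagged as bad
        # by the monitoring database are simply skipped, like bad sectors on a disk.
        def allocate_cores(requested, total_cores, bad_cores):
            healthy = [c for c in range(total_cores) if c not in bad_cores]
            if len(healthy) < requested:
                raise RuntimeError("not enough healthy cores for this job")
            return healthy[:requested]

        # e.g. ask for 6 cores on an 8-core node where core 3 has been flagged bad
        print(allocate_cores(6, 8, bad_cores={3}))   # -> [0, 1, 2, 4, 5, 6]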

  • by joib ( 70841 ) on Tuesday February 03, 2009 @01:35PM (#26712891)
    "So why did you bring it up?"

    The parent said that the computer will be used for "mapping every reaction" between molecules. Presumably, since reactions tend to require quantum mechanical descriptions, I guessed the parent meant that the new computer would allow doing such calculations for all reactions in a rather large area.

    I don't get what you are saying when you ask a question, relate it to the parent's post, and then say it is irrelevant.

    Just a gedanken experiment to amuse myself, while noting that it actually has nothing to do with simulating nuclear weapons. Don't get too worked up about it.

    "And, do you realize how much processing power 20 petaflops is?"

    Yes, it's about 2 orders of magnitude more than the supercomputer I'm using at the moment. A lot for sure, but still limited to very small system sizes for quantum mechanical calculations. At the moment, even the best methods in practice scale as N^3 or so. With my current 100 TFlops I might do a DFT calculation with O(10000) atoms or so. Two orders of magnitude more CPU power with N^3 scaling gives me roughly a factor of 5 more atoms. 50000 atoms fit into a box of roughly 10x10x10 nm (depending on the material etc., of course). Still a way to go until I'm able to do "square miles"...

    If you want to go into classical molecular dynamics, then you're obviously in much better shape. With the current supercomputer that's maybe around 1E9 atoms, and since MD scales linearly, with two orders of magnitude more flops it means around 1E11 atoms. Now these fit into a box on the order of 1 µm³. Again, still quite a way to go to square miles...

    In conclusion, atoms are really really tiny, and in 3 dimensions you can pack a lot of them into a very tiny volume.
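
    If you want to redo the estimate above yourself, it works out roughly like this (the atom density is just a ballpark assumption for a dense solid):

        # Reproducing the estimate above. 20 PFlops / 100 TFlops is really a factor
        # of 200, rounded here to "two orders of magnitude"; the atom density is a
        # ballpark assumption for a dense solid, nothing more.
        speedup = 100.0

        dft_atoms = 1e4 * speedup ** (1.0 / 3.0)    # N^3 scaling: ~4.6x, so ~5e4 atoms
        md_atoms = 1e9 * speedup                    # linear scaling: ~1e11 atoms

        atoms_per_nm3 = 100.0                       # assumed density
        edge_nm = (md_atoms / atoms_per_nm3) ** (1.0 / 3.0)   # 1000 nm = 1 micrometre

        print(round(dft_atoms), f"{md_atoms:.0e}", f"{edge_nm:.0f} nm")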

    "Also, they so far have not needed to calculate what a nuclear bomb does for each atom (obviously, since it has been nigh impossible), and they probably won't ever need to really. You can study waves and energy effects in great detail, and simulate them accurately, without needing to know where each and every atom goes. This will simply let them be more precise and accurate, as well as speedy."

    Yes, that was sort of implied in my previous post. The US nuke labs have been at the forefront of research on numerical methods in topics such as shock propagation (PPM and methods like that) and really, really large FEM simulations. Obviously, the actual nuclear reactions are taken into account probabilistically rather than with a full quantum mechanical treatment (as my monologue above shows, such a treatment for the primary is far beyond any computer in sight). AFAIK they use Monte Carlo neutron transport rather than the classical multigroup diffusion methods that are still largely used for civilian reactor design. That being said, I'm sure they are doing a lot of atomic- and quantum-level simulations as well, for small model systems designed to e.g. extract parameters for continuum simulations and such.
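
    For anyone curious what "Monte Carlo" means in this context, here's a toy 1D neutron walk through a slab, just to show the flavour of the approach as opposed to solving a diffusion equation (the cross-sections and slab thickness are made-up numbers, nothing like the production codes):

        import random

        SLAB = 10.0        # slab thickness, in mean free paths
        SIGMA_T = 1.0      # total interaction probability per unit length
        P_ABSORB = 0.3     # chance a collision absorbs the neutron

        def transmission(n=100_000, seed=42):
            rng = random.Random(seed)
            escaped = 0
            for _ in range(n):
                x, direction = 0.0, 1.0                        # enter at the left face
                while True:
                    x += direction * rng.expovariate(SIGMA_T)  # distance to next collision
                    if x >= SLAB:
                        escaped += 1                           # leaked out the far side
                        break
                    if x <= 0.0:
                        break                                  # scattered back out the near side
                    if rng.random() < P_ABSORB:
                        break                                  # absorbed
                    direction = rng.choice((-1.0, 1.0))        # crude isotropic scatter in 1D
            return escaped / n

        print(f"transmitted fraction: {transmission():.4f}")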

  • Re:don't smell right (Score:2, Informative)

    by jsiples ( 1233300 ) on Tuesday February 03, 2009 @01:49PM (#26713225) Homepage
    The RAM figure was wrong; it's actually 1.6 PB, not 1.6 TB.
  • by kalirion ( 728907 ) on Tuesday February 03, 2009 @02:44PM (#26714337)

    Well, whatever the case, once such a computer is built, someone had better ask it whether or not entropy can be reversed.

  • by clodney ( 778910 ) on Tuesday February 03, 2009 @03:07PM (#26714789)

    The summary is wrong. I actually did RTFA, and it said 1.6 petabytes, not 1.6 terabytes.

  • by DMUTPeregrine ( 612791 ) on Tuesday February 03, 2009 @03:56PM (#26715667) Journal
    IIRC it switched to goat.cx for a while. So both are correct. Also, be glad you haven't seen the stereogram of it. Some things should not be in 3d.
  • by tellthepeople ( 1451199 ) on Tuesday February 03, 2009 @06:47PM (#26718347)

    "Can you imagine a Beowolf cluster of those?"

    I don't think that phrase means what he thinks it means. http://en.wikipedia.org/wiki/Beowulf_(computing) [wikipedia.org]

    P.S. It's Beowulf, not Beowolf.

  • by HTH NE1 ( 675604 ) on Tuesday February 03, 2009 @06:52PM (#26718419)

    Except of course that TFA was wrong and it was 1.6 PB / 1.6 MCPUs = 1 GB/CPU.

    I imagine though it could run a few gigs (jobs of short or uncertain duration).
