
IBM Building 20 Petaflop Computer For the US Gov't

eldavojohn writes "When it's built, 'Sequoia' will outshine every supercomputer on the top 500 list today. The specs on this 96-rack beast are a bit hard to comprehend, as it consists of 1.6 million processors and some 1.6TB of memory. That's 1.6 million processors — not cores. Its purpose? Primarily to keep track of nuclear waste & simulate explosions of nuclear munitions, but also for research into astronomy, energy, the human genome, and climate change. Hopefully the government uses this magnificent tool wisely when it gets it in 2012."
  • by Anonymous Coward on Tuesday February 03, 2009 @11:17AM (#26709745)

    While you are joking about games (like Quake and Crysis), this computer does sound like a giant graphics card.

    It can do 20 PFLOPS with 1.6 million processors, so 12.5 GFLOPS per processor, but with 1.6TB of memory, that means it's only got 1 MB per processor.

    So it sounds like some kind of giant specialised GPU with local memory.
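
    A quick back-of-the-envelope check of those per-processor figures, taking the summary's numbers (20 PFLOPS, 1.6 million processors, 1.6 TB of RAM) at face value; this is a sketch, not official specs:

        # Sanity-check the per-processor figures quoted in the summary.
        peak_flops = 20e15        # 20 petaflops, machine total
        processors = 1.6e6        # 1.6 million processors
        memory_bytes = 1.6e12     # 1.6 TB of memory, machine total

        flops_per_proc = peak_flops / processors    # -> 12.5 GFLOPS each
        mem_per_proc = memory_bytes / processors    # -> 1 MB each

        print(f"{flops_per_proc / 1e9:.1f} GFLOPS per processor")
        print(f"{mem_per_proc / 1e6:.1f} MB per processor")

    Which is exactly the "lots of small compute elements, tiny local memory" profile you'd expect from a GPU-like design.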

  • by Anonymous Coward on Tuesday February 03, 2009 @11:28AM (#26709971)

    Sounds like the Cell on steroids to me.

  • Re:Skynet anyone? (Score:3, Interesting)

    by khallow ( 566160 ) on Tuesday February 03, 2009 @11:44AM (#26710293)
    Interesting. There's still the matter of the software. Currently, these machines run straightforward simulation software. I don't see weather prediction resulting in sentience any time soon. The machines will probably have to grow considerably before the more flexible software subsystems, like, say, the load-balancing code, achieve sentience and destroy us all.
  • MTBF (Score:4, Interesting)

    by vlm ( 69642 ) on Tuesday February 03, 2009 @12:02PM (#26710725)

    So the real question with an immense cluster like this is: what's the MTBF?

    Simon claims that the ENIAC's MTBF was 8 hours, although I've seen all kinds of claims on the web, from minutes to days.

    http://zzsimonb.blogspot.com/2006/06/mtbf-mean-time-between-failure.html [blogspot.com]

    I would guess this beast will never be 100% operational at any moment of its existence.

    I'm guessing the "cool" part of this won't be the bottomless pile of hardware in one room, but how they maintain this beast. Just working around one of the million CPU fans burning out is no big deal, but how do you deal with a higher-level problem, like one of the hundreds of network switches failing, etc.?
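
    To see why "never 100% operational" is a safe bet, here's a rough model, assuming independent component failures; the node count and per-node MTBF below are illustrative assumptions, not anything from the announcement:

        # Rough model: with N independent components, each failing on average
        # once every MTBF_NODE_HOURS, the system as a whole sees a failure
        # roughly every MTBF_NODE_HOURS / N hours.
        NODES = 100_000           # hypothetical node count for a 96-rack machine
        MTBF_NODE_HOURS = 50_000  # assumed per-node MTBF (~5.7 years)

        system_mtbf_hours = MTBF_NODE_HOURS / NODES
        print(f"expect a failure roughly every {system_mtbf_hours * 60:.0f} minutes")
        # -> roughly every 30 minutes, so something is always broken or being swapped

    Whatever the real numbers turn out to be, at this scale the interesting engineering is in routing around failures automatically, not in preventing them.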

  • by harry666t ( 1062422 ) <harry666t@nospAM.gmail.com> on Tuesday February 03, 2009 @02:10PM (#26713671)
    Hm, it's all about getting the right fitness function, isn't it?

    The processor that would be more fit would draw less power, compute stuff faster, be cheap to produce, etc. Then it could either have a compatible instruction set or a new one; in the case of a new one, it would have to be able to come up with a way of automatically translating stuff from the old instruction set, or of targeting a compiler at it.

    The case with the new instruction sets sounds really, really interesting. I think the actual hardware design would have to be derived from some higher-level representation of the instruction set architecture. I wonder if such a high-level description would be enough to also automatically port a compiler. Hmmmm...
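
    Just to make the fitness-function idea concrete, here's a toy version for an evolutionary search over processor designs. The candidate fields, weights, and example designs are all made-up placeholders, not a real design-space model:

        from dataclasses import dataclass

        @dataclass
        class Candidate:
            gflops: float             # throughput: higher is better
            power_watts: float        # power draw: lower is better
            unit_cost_dollars: float  # production cost: lower is better

        def fitness(c: Candidate) -> float:
            # Reward throughput, penalize power and cost; the weights encode
            # how much each objective matters relative to the others.
            return c.gflops - 0.5 * c.power_watts - 0.01 * c.unit_cost_dollars

        # Two hypothetical candidate designs, scored by the fitness function.
        designs = [
            Candidate(gflops=200, power_watts=100, unit_cost_dollars=500),
            Candidate(gflops=150, power_watts=60, unit_cost_dollars=300),
        ]
        print(max(designs, key=fitness))

    The hard part is that a genuinely new instruction set only scores well if you can also automatically retarget a compiler (or a binary translator) at it, and that's difficult to fold into a single scalar fitness value.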

