IBM Building 20 Petaflop Computer For the US Gov't
eldavojohn writes "When it's built, 'Sequoia' will outshine every supercomputer on the Top 500 list today. The specs on this 96-rack beast are a bit hard to comprehend, as it consists of 1.6 million processors and some 1.6TB of memory. That's 1.6 million processors, not cores. Its purpose? Primarily to keep track of nuclear waste and simulate explosions of nuclear munitions, but also research into astronomy, energy, the human genome, and climate change. Hopefully the government will use this magnificent tool wisely when it arrives in 2012."
Re:and just for old time's sake... (Score:5, Interesting)
While you are joking about games (like Quake and Crysis), this computer does sound like a giant graphics card.
It can do 20 Pflops with 1.6 million processors, which works out to 12.5 Gflops per processor; but with only 1.6TB of memory, that's a mere 1 MB per processor.
So it sounds like some kind of giant specialised GPU with local memory.
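The per-processor figures above follow directly from dividing the article's quoted totals. A quick sketch of the arithmetic (the totals are from the summary; the division is mine):

```python
# Back-of-the-envelope math: Sequoia's quoted totals imply tiny
# per-processor figures, which is what makes it look GPU-like.
PFLOPS_TOTAL = 20e15      # 20 Pflops peak, in flops
PROCESSORS = 1.6e6        # 1.6 million processors
MEMORY_BYTES = 1.6e12     # 1.6 TB of memory, in bytes

flops_per_proc = PFLOPS_TOTAL / PROCESSORS    # per-processor compute
mem_per_proc = MEMORY_BYTES / PROCESSORS      # per-processor memory

print(flops_per_proc / 1e9, "Gflops per processor")  # 12.5
print(mem_per_proc / 1e6, "MB per processor")        # 1.0
```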
Re:and just for old time's sake... (Score:1, Interesting)
Sounds like the Cell on steroids to me.
Re:Skynet anyone? (Score:3, Interesting)
MTBF (Score:4, Interesting)
So the real question with an immense cluster like this is: what's the MTBF?
Simon claims that the ENIAC's MTBF was 8 hours, although I've seen claims on the web ranging from minutes to days.
http://zzsimonb.blogspot.com/2006/06/mtbf-mean-time-between-failure.html [blogspot.com]
I would guess this beast will never be 100% operational at any moment of its existence.
I'm guessing the "cool" part of this won't be the bottomless pile of hardware in one room, but how they maintain the beast. Working around one of a million CPU fans burning out is no big deal, but how do you deal with a higher-level problem, like one of the hundreds of network switches failing?
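The "never 100% operational" intuition is easy to quantify: assuming components fail independently with exponentially distributed lifetimes, the expected time to the first failure anywhere in the system is the component MTBF divided by the component count. A minimal sketch, with a purely hypothetical component MTBF for illustration:

```python
# Why a million-processor machine is essentially never fully healthy:
# under independent exponential failures, system MTBF (time to first
# failure) = component MTBF / number of components.
component_mtbf_hours = 1_000_000   # hypothetical: 1M hours per component
component_count = 1_600_000        # one "component" per processor, say

system_mtbf_hours = component_mtbf_hours / component_count
print(f"expected time to first failure: {system_mtbf_hours * 60:.1f} minutes")
# Even with million-hour parts, something breaks roughly every 37 minutes.
```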
Re:End of the world in 2012 (Score:3, Interesting)
A processor that would be a better fit would draw less power, compute faster, be cheap to produce, etc. It could either keep a compatible instruction set or use a new one; with a new one, you'd need a way of automatically translating code from the old instruction set, or of retargeting a compiler at it.
The new-instruction-set case sounds really, really interesting. I think the actual hardware design would have to be derived from some higher-level representation of the instruction set architecture. I wonder if such a high-level description would also be enough to automatically port a compiler. Hmmmm...