IBM Building 20 Petaflop Computer For the US Gov't

eldavojohn writes "When it's built, 'Sequoia' will outshine every supercomputer on the Top 500 list today. The specs on this 96-rack beast are a bit hard to comprehend as it consists of 1.6 million processors and some 1.6TB of memory. That's 1.6 million processors — not cores. Its purpose? Primarily to keep track of nuclear waste & simulate explosions of nuclear munitions, but also for research into astronomy, energy, the human genome, and climate change. Hopefully the government uses this magnificent tool wisely when it gets it in 2012."
This discussion has been archived. No new comments can be posted.
  • by Hell O'World ( 88678 ) on Tuesday February 03, 2009 @10:48AM (#26709205)

    Can you imagine a Beowulf cluster of those?

  • Mmm... (Score:5, Funny)

    by Taibhsear ( 1286214 ) on Tuesday February 03, 2009 @10:49AM (#26709231)

    Nice rack(s).

  • OH NOES!!! (Score:4, Funny)

    by Darundal ( 891860 ) on Tuesday February 03, 2009 @10:49AM (#26709233) Journal
    2012! Supercomputer! It's Skynet! RUN FOR YOUR LIVES!
    • by GiovanniZero ( 1006365 ) on Tuesday February 03, 2009 @11:01AM (#26709453) Homepage Journal
      At 6:18 pm EST, IBM's supercomputer went online.
      At 6:19 pm EST, IBM's supercomputer declared nuclear war on humans.
      At 6:20 pm EST, there was a SEG FAULT and Skynet must reboot to continue genocide.
      • Re: (Score:3, Informative)

        by CompMD ( 522020 )

        Well, if IBM builds skynet, then we win the war by saying PWRDWNSYS OPTION(*IMMED).

      • by blhack ( 921171 )

        At 6:20 pm EST, there was a SEG FAULT and Skynet must reboot to continue genocide

        I think what you mean to say was:

        Operator action required on device SKYNET1A (Cancel Reply Ignore)
        Unchecked Function

        Reply:__________________________________________________

  • Oh, yes. (Score:5, Funny)

    by AltGrendel ( 175092 ) <(su.0tixe) (ta) (todhsals-ga)> on Tuesday February 03, 2009 @10:49AM (#26709237) Homepage
    And also to find the question that "42" answers.
    There are many theories as to what this question might be, and now IBM is building a system that will solve this issue once and for all.
    • Re:Oh, yes. (Score:5, Funny)

      by MightyYar ( 622222 ) on Tuesday February 03, 2009 @11:06AM (#26709531)

      No, no. It's being used to calculate the new national debt.

      • Re: (Score:2, Funny)

        by ukbazza ( 1232802 )

        No, no. It's being used to calculate the new national debt.

        They're going to need a bigger computer.

      • No, it can process over 3 tax returns per day.

      • by AmiMoJo ( 196126 )

        I bet it's cracking AES encryption.

        Government wants to crack encrypted files of enemies. Realises that it can build a computer capable of doing it for some billions of dollars. Finds excuse to build such a machine.

        Although brute-forcing the entire AES keyspace is still infeasible, brute-forcing every possible password of, say, ten typeable characters certainly isn't.
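        A quick back-of-the-envelope check (a sketch; it generously assumes one password guess per floating-point operation, when a real key-derivation step costs far more than one flop):

        # Exhausting a 10-character printable-ASCII password space at 20 PFLOPS.
        FLOPS = 20e15                     # Sequoia's nominal speed
        keyspace = 95 ** 10               # ~6.0e19 candidate passwords
        print(keyspace / FLOPS / 3600)    # ~0.8 hours to try them all

        aes_keys = 2 ** 128               # full AES-128 keyspace, ~3.4e38
        print(aes_keys / FLOPS / 3.15e7)  # ~5e14 years; still infeasible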

  • 2012? (Score:2, Offtopic)

    by DodgeRules ( 854165 )
    Is this the real reason the world ends in 2012?
    • Re:2012? (Score:5, Funny)

      by Hal_Porter ( 817932 ) on Tuesday February 03, 2009 @10:59AM (#26709421)

      A group of computer scientists build the world's most powerful computer. Let us call it "HyperThought." HyperThought is massively parallel, it contains neural networks, it has teraflop speed, etc. The computer scientists give HyperThought a shakedown run. It easily computes Pi to 10000 places, and factors a 100 digit number. The scientists try to find a difficult question that might stump it. Finally, one scientist exclaims: "I know!" "HyperThought," she asks, "is there a God?" "There is now," replies the computer.

  • I've heard about predictions of the end of the world in 2012, now I know the answer - this machine will become a Singularity [wikipedia.org].
    • In all seriousness, how much processing power would it take to run a program that designs newer and better processors? I would think that 20 petaflops and a good algorithm would be able to produce a processor that is an improvement over the current generation. Then again, I know next to nothing about processor design, so I could be totally wrong.

      • The problem is not the hardware, it's the software. Who's going to write the initial algorithm?

        Ok, given enough processing power you could do a genetic algorithm for processor design that actually provides useful solutions within a reasonable amount of time, but I have a feeling we're far from that point.

        • Re: (Score:3, Interesting)

          by harry666t ( 1062422 )
          Hm, it's all about getting the right fitness function, isn't it?

          The processor that would be more fit would: draw less power, compute stuff faster, be cheap to produce, etc. Then it could either have a compatible instruction set, or a new one; in the case of a new one, it would have to be able to come up with a way of automatically translating stuff from the old instruction set, or targeting a compiler at it.

          The case with the new instruction sets sounds really, really interesting. I think the actual hardware de
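          For what it's worth, the skeleton of such a genetic algorithm is short; the hard part is exactly the fitness function discussed above. A toy sketch (the three "knobs" and their weights are invented stand-ins for a real design evaluator, which would be a cycle-accurate simulator plus power and cost models):

          import random

          def fitness(design):
              speed, power, cost = design
              return speed - 0.5 * power - 0.2 * cost   # reward speed, penalize the rest

          def mutate(design):
              return tuple(max(0.0, knob + random.gauss(0, 0.1)) for knob in design)

          def crossover(a, b):
              return tuple(random.choice(pair) for pair in zip(a, b))

          population = [tuple(random.random() for _ in range(3)) for _ in range(100)]
          for _ in range(200):
              population.sort(key=fitness, reverse=True)
              elite = population[:20]                    # survivors breed the next batch
              population = elite + [mutate(crossover(random.choice(elite),
                                                     random.choice(elite)))
                                    for _ in range(80)]
          print("best design:", max(population, key=fitness))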
      • by jvkjvk ( 102057 )

        You're right, we have the horsepower.

        Now, if only we could produce the software. Y'know, the set of "good algorithms" to produce the layouts and the other set of "good algorithms" to test fitness and ... everything else necessary to automatically produce solutions.

        *That* seems to be the hard task at the moment. I don't design processors either, so I don't know what types of issues the current design software has but to me it seems that this is probably the hurdle they are facing on that front, not processo

  • by khuber ( 5664 ) on Tuesday February 03, 2009 @10:50AM (#26709257)
    "The system will also act as a giant weather cock,"
    • by v1 ( 525388 ) on Tuesday February 03, 2009 @11:26AM (#26709919) Homepage Journal

      Primarily to keep track of nuclear waste

      And this can't be done with, say, Excel?

      • by vlm ( 69642 ) on Tuesday February 03, 2009 @11:56AM (#26710579)

        And this can't be done with, say, Excel?

        Ahh, Excel... the first choice in corporate database management systems.

        How many other slashdotters work at Fortune XXX firms where on paper some executive bean counter says "we use Oracle" but on the ground all databases are done in Excel (along with a smattering of everything else)?

        It is a step up from three jobs ago, where at another Fortune XXX firm the database management system of choice boiled down to an administrative assistant and Lotus's word processing solution. Yes, we used plain English to request that Patti make changes instead of SQL update statements. Also, our SQL select statements always began with "hey Patti, could you look up...". And yes, all "ORDER BY" stanzas were in fact powered by swear words and performed by cut and paste.

        Sadly I am not making any of this up.

        • To be fair, there are an awful lot of Access 2 DBs* out there, Excel being a spreadsheet and all, NOT a database.

          *oh so useful and forwards compatible

      • And this can't be done with, say, Excel?

        RTFA, that's the point! They are building this supercomputer to run Excel 2012.

  • by Trailer Trash ( 60756 ) on Tuesday February 03, 2009 @10:50AM (#26709259) Homepage
    Each processor gets its own megabyte of memory? Are these a bunch of refurb PCs from the late 80's?
    • Yeah, something's not right here. I came to the same conclusion.

      Did someone tell the author that they had that much L1 memory, and they didn't understand the difference?

    • by belrick ( 31159 )

      1.6 million processors in 96 racks is about 16,700 processors per rack, or about 400 processors per U. To me that sounds like an evolution of the Cell processor, and 1 MB per cell sounds reasonable.

    • It's SIMD. It's a simple set of instructions running on a whole lot of little sets of data. This is the same thing video cards do and it is a great way to solve (some) problems.
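      A minimal illustration of that one-instruction-many-data idea, sketched with NumPy (which dispatches the whole operation to vectorized code rather than a scalar loop):

      import numpy as np

      data = np.random.rand(1_000_000)
      result = 2.0 * data + 1.0   # the same multiply-add across a million elements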

    • Re: (Score:2, Informative)

      by jsiples ( 1233300 )
      The RAM figure was wrong; it's actually 1.6PB, not 1.6TB.
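      With the corrected figure, the memory-per-processor math upthread works out to something sane:

      print(1.6e15 / 1.6e6)   # 1.6 PB over 1.6 million processors = 1e9 bytes,
                              # i.e. about 1 GB per processor, not 1 MB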
  • by TeknoHog ( 164938 ) on Tuesday February 03, 2009 @10:52AM (#26709291) Homepage Journal
    Because, when you put two processors on a single piece of silicon, it magically becomes one "processor" with two "cores".
    • Re: (Score:3, Funny)

      by MightyYar ( 622222 )

      Yeah, but how many CPUs is that?

    • Mmm, doesn't the difference between a core and a processor have to do with how they are connected?

    • Yes, yes it does. The issues, whilst similar for multi-socket and multi-core systems, are different: a single multi-core processor shares its links to the system bus and main memory between its cores, whereas separate processors each have their own links. So as nomenclature goes, it is not that bad at all.
  • I would have expected it to have a bit more memory with that many processors.

  • by iYk6 ( 1425255 ) on Tuesday February 03, 2009 @11:01AM (#26709451)

    flops = floating point operations per second
    flop = Gigli

    The article got it mostly right. It mentioned 500-teraflop once, but every other time it spelled flops correctly. Slashdot, on the other hand, fucked up the title, despite the fact that it pretty much just copied it from the article (poorly).

    • by elrous0 ( 869638 ) *
      Don't you blaspheme Ben Affleck in this home, young man!
    • So what does it mean when they talk about them Gigliflops?

  • by sgt scrub ( 869860 ) <saintium@NOSpAM.yahoo.com> on Tuesday February 03, 2009 @11:01AM (#26709463)

    Hopefully the government uses this magnificent tool wisely when it gets it in 2012.

    Sounds like they are going to port the quake mods to the raytrace q4 engine.

  • by Metabolife ( 961249 ) on Tuesday February 03, 2009 @11:19AM (#26709781)

    "...allowing forecasters to create local weather "events" less than one kilometer across, compared with 10 kilometers today and at speeds up to 40 times faster than current systems."

  • So let's see.... (Score:5, Insightful)

    by cbiltcliffe ( 186293 ) on Tuesday February 03, 2009 @11:29AM (#26709997) Homepage Journal

    - IBM is building a computer that will be functional in about 3.5 years.

    - The power of this computer, in 3.5 years, will outshine every other supercomputer currently running today.

    I should hope so! What's the point of taking 3.5 years to build the thing, if it's going to be 3.5 years out of date by the time they build it?

    Heck, in 3.5 years, your desktop computer will be 4 times more powerful than anything currently running today, too.

    Duuh.

    • by m0i ( 192134 )

      If you have a look at http://top500.org/lists/2008/11/performance_development [top500.org], it takes more than 6 years to get 10 times the performance (quicker than Moore's law, hrm). Given that the current top is at 1PF, going to 20 in 3.5 years is quite an achievement.
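      Putting rough numbers on that (a sketch using the figures above):

      top500_trend = 10 ** (1 / 6)     # 10x in ~6 years -> ~1.47x per year
      sequoia_jump = 20 ** (1 / 3.5)   # 1 PF to 20 PF in 3.5 years -> ~2.35x per year
      print(f"{top500_trend:.2f}x/yr vs {sequoia_jump:.2f}x/yr")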

    • Heck, in 3.5 years, your desktop computer will be 4 times more powerful than anything currently running today, too.

      For being so picky about the terms in the article, you are quite lax with your own. I seriously doubt my desktop, in 3.5 years, will be able to do ~ 6 petaflops. :) (4x more powerful than "anything" currently running today)

      Furthermore, 20 vs. ~1.5 petaflops is a goodly sized jump for 3ish years, isn't it? Computer speed growth has seemed to be slowing lately, with an emphasis being on multiple cores, not faster clock speeds like it was 10 years ago. So being able to get 20x the power of the current super

      • It's obvious that I was referring to desktop computers with the "anything running today" wording.

        If you're going to be that intentionally disingenuous, why don't you also say that I claimed desktop computers were going to have 4000+ horsepower, since there are industrial earth-moving machines whose engines currently put out over 1000. ....wait a minute.....

  • Why can't we let private industry own the computer and the government just purchase time on it? I for one would love to have CGI movies rendered in better-than-real time. This way, we taxpayers don't have to pay for idle time.

    Also, I can design a database using SQLite with a web front end for keeping track of uranium or anything else for that matter. As long as it is not measured in individual atoms, it'll run fine on my spare 2.4GHz single-core Celeron. There is no need to update the database 100M times a
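    A minimal sketch of such a tracking table (schema, column names, and sample row all invented for illustration):

    import sqlite3

    db = sqlite3.connect("waste.db")
    db.execute("""CREATE TABLE IF NOT EXISTS waste (
        id INTEGER PRIMARY KEY,
        material TEXT,       -- e.g. 'spent fuel'
        mass_kg REAL,
        site TEXT,
        logged TEXT DEFAULT CURRENT_TIMESTAMP)""")
    db.execute("INSERT INTO waste (material, mass_kg, site) VALUES (?, ?, ?)",
               ("spent fuel", 12.5, "Hanford"))
    db.commit()
    print(db.execute("SELECT site, SUM(mass_kg) FROM waste GROUP BY site").fetchall())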

    • You could probably do that with an old copy of FileMaker Pro and an eeePC.

      Sheesh. It's inventory management, not rocket science.

  • Dan Brown told me so.

  • MTBF (Score:4, Interesting)

    by vlm ( 69642 ) on Tuesday February 03, 2009 @12:02PM (#26710725)

    So the real question with an immense cluster like this is: what's the MTBF?

    Simon claims that the ENIAC MTBF was 8 hours, although I've seen all kinds of claims on the web ranging from minutes to days.

    http://zzsimonb.blogspot.com/2006/06/mtbf-mean-time-between-failure.html [blogspot.com]

    I would guess this beast will never be 100% operational at any moment of its existence.

    I'm guessing the "cool" part of this won't be the bottomless pile of hardware in one room, but how they maintain this beast. Just working around one of the million CPU fans burning out is no big deal, but how do you deal with a higher level problem like one of the hundreds of network switches failing, etc?

    • Re: (Score:3, Informative)

      by mmell ( 832646 )
      Higher than you're guessing. I've worked on BlueGene/L, BlueGene/S and was involved in some of the development on BlueGene/P. All of these systems have an incredibly aggressive monitoring mechanism - voltages, temperatures, fan speeds, as well as half a dozen other hardware categories are monitored at the component level, and the data is stored in a database where it is analyzed to ensure that the system as a whole IS operational and stays that way.

      But thank you for pointing out that the architecture is inher
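      A sketch of how such component-level monitoring hangs together (sensor names, thresholds, and the random stand-in readings are all invented):

      import sqlite3, random, time

      db = sqlite3.connect("health.db")
      db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, node INTEGER, sensor TEXT, value REAL)")

      LIMITS = {"temp_c": (10, 70), "fan_rpm": (2000, 12000), "volt": (0.9, 1.3)}

      def sample():
          # Stand-in for reading real hardware counters on one node.
          return {"temp_c": random.gauss(45, 10),
                  "fan_rpm": random.gauss(7000, 1500),
                  "volt": random.gauss(1.1, 0.05)}

      for node in range(8):
          for sensor, value in sample().items():
              db.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
                         (time.time(), node, sensor, value))
              lo, hi = LIMITS[sensor]
              if not lo <= value <= hi:
                  print(f"node {node}: {sensor}={value:.1f} out of range")
      db.commit()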

      • by joib ( 70841 )
        Yeah, but an MPI job can't recover from a failed node. Except via checkpointing, of course. So if you launch a job on all those 1.6e6 processors, they'd all better stay up, on average, at least long enough that you make some progress and write a checkpoint before one node crashes.
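        The checkpoint pattern in a nutshell, sketched with mpi4py (the interval and filenames are invented; real codes write to a parallel filesystem and restart from the last complete set):

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        state = np.zeros(1_000_000)       # this rank's slice of the simulation
        for step in range(10_000):
            state += 1.0                  # stand-in for the real computation
            if step % 1_000 == 0:         # periodic checkpoint
                np.save(f"ckpt_r{rank}_s{step}.npy", state)
                comm.Barrier()            # all ranks checkpoint the same step
        # After a node failure, relaunch and np.load() the last complete checkpoint.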
  • Hopefully the government uses this magnificent tool wisely when it gets it in 2012.

    SCENE: The Pentagon, 2012

    Science Advisor: "President Whoever-You'll-Be, IBM has completed our 20 petaflop computer. It is awaiting your command."
    President Whoever-You'll-Be: "Thank you, Advisor. We can use it to compute the long-term effects of nuclear waste disposal, weather fronts, and... just... just how much processing power is in this?"
    SA: *deep sigh* "Over 1.6 million processors and a total of 1.6TB of RAM, sir."
    PWYB: "My GOD, Advisor. Do you know what that much power could do? It... it could...

  • "IBM reckons its 20-petaflops capable Sequoia system will outshine every single current system in the Top500 supercomputer rankings"

    So the computer will be ready in 2012, and it will outperform computers from 2009?

    These multi-year computer construction projects seem very problematic given the pace of change in technology. Memory changes, CPUs change, and the socket specs change — if it takes 3 years to build, it will be obsolete before it's ready. 2012 could be the year that ATI releases 10-petafl

  • The feds have needed a computer that can balance a budget... think this monster is up to the task? Somehow I doubt it.
  • The eetimes article states a slightly more realistic 4096 processors per rack, or roughly 400,000 processors...

    Still, can you imagine the maintenance plan on this beast?
    Can you imagine the power and cooling involved?
    Even at only 25W per processor, we are talking nearly 10MW of power for the processors alone.

    Much more interesting than the machine itself would be an article on how they plan to keep it up and running.
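    Checking that arithmetic against the eetimes per-rack figure:

    processors = 96 * 4096            # racks x processors/rack = 393,216
    print(processors * 25 / 1e6)      # at 25 W each: ~9.8 MW, processors alone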
  • I recall this imbalance runs up against some sort of named ad-hoc "law" (Amdahl's rule of thumb of roughly one byte of memory per FLOPS, if I remember right). When the amount of core memory falls significantly behind compute speed, the kinds of computing you can do are severely limited. I believe they mainly plan simulations, where many gigaflops per output point are typical and memory needs are modest. Data processing certainly wants more balanced memory.
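    Taking the corrected 1.6 PB figure from upthread, the balance works out to:

    print(1.6e15 / 20e15)   # 0.08 bytes of memory per FLOPS, well under the
                            # classic ~1 byte/FLOPS balance rule of thumb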
  • Keep track of nuclear waste?
    A freakin pencil and paper wouldn't work for that?

    The rest of the duties are cool, more simulation and research and less underground testing...that's fine.
    But that initial reason is bogus!
