
IBM's Blue Gene Runs Continuously At 1 Petaflop

An anonymous reader writes "ZDNet is reporting on IBM's claim that the Blue Gene/P will continuously operate at more than 1 petaflop. It is actually capable of 3 quadrillion operations a second, or 3 petaflops. IBM claims that at 1 petaflop, Blue Gene/P is performing more operations than a 1.5-mile-high stack of laptops! 'Like the vast majority of other modern supercomputers, Blue Gene/P is composed of several racks of servers lashed together in clusters for large computing tasks, such as running programs that can graphically simulate worldwide weather patterns. Technologies designed for these computers trickle down into the mainstream while conventional technologies and components are used to cut the costs of building these systems. The chip inside Blue Gene/P consists of four PowerPC 450 cores running at 850MHz each. A 2x2 foot circuit board containing 32 of the Blue Gene/P chips can churn out 435 billion operations a second. Thirty two of these boards can be stuffed into a 6-foot-high rack.'"
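The quoted figures are self-consistent if each core retires four floating-point operations per cycle; that rate is an assumption here (though it is consistent with the two-FPU claim made in a comment further down), not something stated in the summary. A quick sanity check, as a sketch:

```python
# Back-of-the-envelope check of the figures quoted in the summary.
# Assumption (not stated above): each core retires 4 flops/cycle,
# e.g. two FPUs each completing a fused multiply-add per cycle.

clock_hz = 850e6          # 850 MHz per core (from the summary)
flops_per_cycle = 4       # assumed rate per core
cores_per_chip = 4        # PowerPC 450 cores per Blue Gene/P chip
chips_per_board = 32      # chips on one 2x2-foot board

core_peak = clock_hz * flops_per_cycle    # 3.4 GFLOPS per core
chip_peak = core_peak * cores_per_chip    # 13.6 GFLOPS per chip
board_peak = chip_peak * chips_per_board  # ~435 GFLOPS per board

print(f"per core : {core_peak / 1e9:.1f} GFLOPS")
print(f"per chip : {chip_peak / 1e9:.1f} GFLOPS")
print(f"per board: {board_peak / 1e9:.1f} GFLOPS")  # matches the quoted 435 billion ops/s
```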
  • by jshriverWVU ( 810740 ) on Tuesday June 26, 2007 @01:08PM (#19651717)
    As a parallel programmer, I'd love to have just one of these chips, let alone one of the boards, in a nice 2U rack. Can they be bought at a reasonable price, or are they strictly for research and in-house use?
  • In the Future... (Score:2, Interesting)

    by perlhacker14 ( 1056902 ) on Tuesday June 26, 2007 @01:12PM (#19651807)
    I yearn for the day when this kind of power can be brought into households all over the world. Think of the opportunities presented by making such computers available to everyone; they are scientifically tremendous. There should at least be consideration of putting these in libraries. Publicly and freely accessible supercomputing should become a national goal, to be achieved by 2019 at the latest.
  • What about Memory? (Score:5, Interesting)

    by sluke ( 26350 ) on Tuesday June 26, 2007 @01:32PM (#19652143)
    I recently had a chance to see Francois Gygi, one of the principal authors of qbox (http://eslab.ucdavis.edu/), a quantum electronic structure code that has set some performance records on the Blue Gene/L at Livermore. He mentioned that the biggest challenge he faced was the very small amount of memory available to each node of the Blue Gene (something like 256MB). This forced him to put so much emphasis on the internode communications that simply changing the order of the nodes across which the data was distributed (without changing the way the data itself was split) affected performance by over 100%. This will only get worse as the number of cores per chip goes from 2 to 4 on the Blue Gene/P. I couldn't find anything in a quick Google search, but does anyone know what the plans are for the memory on this new machine?
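As a rough illustration of why per-node memory drives the data decomposition in codes like this, here is a small sketch; the 256MB figure echoes the comment above, while the usable fraction and matrix sizes are arbitrary assumptions for the example:

```python
import math

# Rough sketch of how a small per-node memory budget forces data decomposition.
# The 256 MB figure echoes the comment above; the other numbers are arbitrary.

BYTES_PER_NODE = 256 * 1024**2   # assumed memory available per node
BYTES_PER_DOUBLE = 8

def nodes_needed(matrix_dim, usable_fraction=0.5):
    """Nodes required just to hold a dense matrix_dim x matrix_dim matrix of doubles,
    assuming only usable_fraction of node memory is free for the matrix itself."""
    total_bytes = matrix_dim**2 * BYTES_PER_DOUBLE
    per_node = BYTES_PER_NODE * usable_fraction
    return math.ceil(total_bytes / per_node)

for n in (50_000, 100_000, 200_000):
    print(f"{n}x{n} doubles -> at least {nodes_needed(n)} nodes just to hold the data")
```

Once the data is spread over that many nodes, almost every operation becomes an internode communication problem, which is the effect the comment describes.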
  • by deadline ( 14171 ) on Tuesday June 26, 2007 @01:33PM (#19652145) Homepage

    Blue Gene [wikipedia.org] is a specialized design based on using large numbers of low-power CPUs. This approach is also the one taken by SiCortex [sicortex.com]. One of the big problems with heroic computers (computers that are pushing the envelope in terms of performance) is heat and power. Just stacking Intel and AMD servers gets expensive at the high end.

  • Re:I'm ignorant. (Score:2, Interesting)

    by Pingmaster ( 1049548 ) on Tuesday June 26, 2007 @01:34PM (#19652177)
    According to TFA, the US DoE has an order in for one of these things, so a good 'practical' and eventually 'real' use is to number-crunch the movement of energy throughout the US. Since there are now people selling electricity back into the grid, there has been talk for several months about needing a system to monitor this. They may also use it to calculate the best routing around blackout/brownout areas, or to predict areas that will need more power in the near future and help engineers place their generating stations.

    While these applications may not all be 'real' right now (in fact, I doubt most of the applications for a brand-new, not-yet-delivered supercomputer are much beyond the hypothetical planning stage), there are definitely many practical problems that can be tackled with this.

    Otherwise, why would so many companies spend billions of dollars researching and making these things if no one needed to buy them?
  • by Vellmont ( 569020 ) on Tuesday June 26, 2007 @01:54PM (#19652501) Homepage
    Years ago, shortly after the Pentium first came out and the then-astounding "x million flops per second" numbers were floating around, I wondered how far desktops were behind the power of supercomputers. I remember doing some rough calculations and finding that only a few Pentiums could do the work of a Cray 1. I don't remember the specifics of how many Pentiums per Cray, or how rough the calculation was, but that's largely unimportant for my point.

    So I have to wonder: which supercomputer of 10 or 20 years ago is a modern, hefty desktop equivalent to? Have supercomputers grown in computing power faster than desktops, at the same rate, or have they fallen behind?
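A rough answer to the question above: the Cray-1's peak of about 160 MFLOPS is the commonly cited figure, while the desktop numbers below are assumptions for a circa-2007 dual-core chip with 128-bit SIMD, so this is a peak-to-peak sketch rather than a measured comparison:

```python
# Rough comparison of a late-1970s Cray-1 against a hypothetical 2007 desktop.
# Cray-1 peak (~160 MFLOPS at 80 MHz) is the commonly cited figure; the desktop
# numbers are assumptions: 2 cores at 3 GHz, 4 double-precision flops/cycle/core via SIMD.

cray1_peak = 160e6

desktop_cores = 2
desktop_clock_hz = 3.0e9
desktop_flops_per_cycle = 4
desktop_peak = desktop_cores * desktop_clock_hz * desktop_flops_per_cycle  # 24 GFLOPS

print(f"desktop peak : {desktop_peak / 1e9:.0f} GFLOPS")
print(f"Cray-1 peak  : {cray1_peak / 1e6:.0f} MFLOPS")
print(f"ratio        : roughly {desktop_peak / cray1_peak:.0f}x a Cray-1, peak vs peak")
```

Peak-to-peak comparisons flatter the desktop, since sustained throughput depends heavily on memory bandwidth and on the problem being solved.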
  • by joib ( 70841 ) on Tuesday June 26, 2007 @03:14PM (#19653683)

    I thought 850MHz chips were slow by today's standards. What am I missing?


    You can stuff 4096 cores (1024 chips) into a rack precisely because the chips are a slow, low-power design.
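Scaling the board figure from the summary up to racks and petaflops is simple arithmetic on the quoted numbers; the sketch below ignores the gap between peak and sustained throughput:

```python
# Scaling the quoted board figure up to racks and to a petaflop.
# Pure arithmetic on the numbers in the summary; sustained performance will be lower than peak.

chips_per_board = 32
boards_per_rack = 32
cores_per_chip = 4
board_gflops = 435

cores_per_rack = chips_per_board * boards_per_rack * cores_per_chip   # 4096
rack_tflops = board_gflops * boards_per_rack / 1000                   # ~13.9 TFLOPS

print(f"cores per rack        : {cores_per_rack}")
print(f"peak per rack         : {rack_tflops:.1f} TFLOPS")
print(f"racks for 1 petaflop  : ~{1e15 / (rack_tflops * 1e12):.0f}")
print(f"racks for 3 petaflops : ~{3e15 / (rack_tflops * 1e12):.0f}")
```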
  • by Anonymous Coward on Tuesday June 26, 2007 @03:33PM (#19653945)
    FYI, these are not "normal" PPC 450s; they are PPC 450 cores with two high-end FPUs bolted on (the FPUs from the G5). This works very well if you want to build a big parallel machine like BG/P. As you say, it's no good for a desktop (true), but my point is just that this is not a typical embedded PPC chip.
  • Lem (Score:3, Interesting)

    by jefu ( 53450 ) on Tuesday June 26, 2007 @06:13PM (#19656101) Homepage Journal

    As long as Lem has been mentioned, there is also "Non Serviam" (in "A Perfect Vacuum"), in which the "Latest IBM models have a top capacity of one thousand personoids". Said personoids occupy themselves, among other things, with debating the existence and nature of God (i.e., the programmer/person running said IBM).

  • by flaming-opus ( 8186 ) on Wednesday June 27, 2007 @12:05PM (#19664189)
    I'm absolutely positive that Sun did not implement a radix-3000 router of any sort, particularly not InfiniBand. If you look at the Earth Simulator, a ridiculously high percentage of the cost went into building a 640-way crossbar, and even that wasn't quite a full crossbar. I'm sure that the Sun design is some sort of tapered fat tree inside the box. It's possible that they overclocked all the internal connections, as the traces are only a couple of feet long, but there are still up to 8 hops from one port to another, assuming a radix-24 building block, or 7 hops if you use side links at the top level.

    I'm not arguing that the Sun solution is bad because it's commodity-based; that really keeps down the cost, and $50 million for a top-5 super is quite modest. It's just not as exotic, and thus as interesting, as IBM's Blue Gene, Cray's XT4, or NEC's SX-8 (though even BG and XT use commodity-derived processors, with custom packaging and interconnect). I'm a software guy, so the fact that Sun's system uses vanilla, off-the-shelf Solaris/Linux makes it somewhat less interesting than the more exotic designs.
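For a sense of scale, here is a toy fat-tree calculation: how many levels of radix-k switches a full (non-oversubscribed) fat tree needs for a given port count, and the worst-case number of switches traversed. This is a simplified model, not a claim about Sun's actual design; counting link hops (including host links), oversubscribing, or adding side links changes the numbers:

```python
# Toy model of a full (non-oversubscribed) fat tree built from radix-k switches:
# each non-top switch uses k/2 ports down and k/2 up, so a tree with L levels
# supports k * (k/2)**(L-1) endpoints.  A simplification for illustration only.

def fat_tree_levels(endpoints, radix):
    """Smallest number of switch levels that can serve the given endpoint count."""
    levels = 1
    while radix * (radix // 2) ** (levels - 1) < endpoints:
        levels += 1
    return levels

def worst_case_switch_traversals(levels):
    # Worst-case traffic goes up to the top level and back down: 2*levels - 1 switches.
    return 2 * levels - 1

for ports in (288, 3456):
    levels = fat_tree_levels(ports, 24)
    print(f"{ports} ports with radix-24 switches: {levels} levels, "
          f"worst case {worst_case_switch_traversals(levels)} switch traversals")
```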
