IBM Technology Hardware

Blue Gene/L Tops Its Own Supercomputer Record 238

DIY News writes "Lawrence Livermore National Laboratory and IBM unveiled the Blue Gene/L supercomputer Thursday and announced it's broken its own record again for the world's fastest supercomputer. The 65,536-processor machine can sustain 280.6 teraflops. That's the top end of the range IBM forecast and more than twice the previous Blue Gene/L record of 136.8 teraflops, set when only half the machine was installed."

Comments Filter:
  • Imagine a Beowulf cluster of...oh, nevermind.
    • Sooo redundant. I mean, even in this context it has been done a thousand times.
      • Ah, come on. We all knew this would be the fr0st p1st, and we all knew it would be modded up. We love our stereotypical, predictable and redundant jokes; it makes Slashdot Slashdot.
        If you can't beat them, join them and get some free +5, Funnys.

        In soviet russia, one for I, welcome our beowulf cluster of old korean hot grits in outer space :)
  • hmmm (Score:5, Insightful)

    by jigjigga ( 903943 ) on Friday October 28, 2005 @01:22AM (#13894758)
    Let's put folding@home (http://folding.stanford.edu/ [stanford.edu]) on that mother!
    • Re:hmmm (Score:5, Informative)

      by thesupraman ( 179040 ) on Friday October 28, 2005 @01:32AM (#13894784)
      While I know that you are joking, one of the major targets of this particular machine is actually pretty much that: not, of course, for any direct public benefit, but for the owners.

      This particular machine is of course targeted at LLNL and weapons development (oops, did I say that? I mean 'stockpile stewardship')

      However, protein folding is one of the primary targets of the architecture.

      Oh, and BTW, the IO nodes of this beast run Linux. Not exactly a standard kernel, but not far off. The compute nodes run a very simple custom kernel to minimise resource use (after all, they have very limited needs, as the IO nodes provide most services to them).
      • Re:hmmm (Score:5, Informative)

        by deglr6328 ( 150198 ) on Friday October 28, 2005 @03:57AM (#13895104)
        "This particular machine is of course targeted at LANL, and weapons development (oops, did I say that? I mean 'stockpile stewardship')"

        Just to expand on that, it is worth noting that the ASCI Blue Pacific supercomputer at LLNL was the first to run a fully three dimensional simulation of a nuclear trigger (plutonium fission) implosion and shortly thereafter was the first to run a full 3D simulation of the secondary fusion stage in a thermonuclear device. This computer was capable of ~3 teraflops and took something like 20 days to run those sims. Blue Gene is ~100 times faster than that computer and judging from the time it took ASCI White [lanl.gov] (~10 Tflops) to complete a simulation of a full thermonuclear detonation, it would therefore probably not be unreasonable to assume this new computer is capable of full 3D simulation of a complete thermonuclear bomb detonation (primary and secondary) in mere hours to a couple days. It is a shame that we even "need" nuclear weapons, but if we're going to have them I for one would much rather see tests of them done in silicon instead of in a big mushroom cloud!

        Yes, it is also sad that while other countries use their supercomputing power mostly to investigate protein folding and earthquake propagation and other purposes generally recognized as peaceful, we mainly use ours for simulating nuclear weapons designs; but it is not all bad. The simulations of imploding fusion fuel can (and will) also be used to simulate the implosion of the tiny fusion microcapsules which are imploded in laboratory laser fusion facilities like NIF [llnl.gov]. This has the potential to eventually result in laser fusion (inertial confinement fusion) as a power source. Supercomputers which were mainly intended to be used for weapons research in the past have occasionally also served up a few surprises in completely unrelated fields. The supercomputer Cray X-MP (?) at Sandia (?) labs in the mid 80s was where the first simulations of the giant impact theory of the formation of the moon were validated. It's now the predominant theory of the moon's origin. It is hard to imagine that this new computer won't have a few surprises of its own to reveal even if it only donates a small amount of time to non-defense related research.
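        A back-of-the-envelope check of the timing estimate above, treating those sims as a fixed amount of floating-point work (20 days at ~3 teraflops) and assuming, optimistically, that it scales perfectly to 280.6 teraflops. A sketch using the comment's own figures, not data about the actual codes:

        #include <stdio.h>

        int main(void)
        {
            double old_tflops = 3.0;    /* ASCI Blue Pacific, roughly                 */
            double old_days   = 20.0;   /* reported runtime of the 3D implosion sims  */
            double new_tflops = 280.6;  /* Blue Gene/L, full machine                  */

            double work = old_tflops * old_days;   /* ~60 teraflop-days of work */
            double days = work / new_tflops;
            printf("same workload on Blue Gene/L: %.2f days (about %.1f hours)\n",
                   days, days * 24.0);
            return 0;
        }

        Real codes never scale perfectly across 65,536 nodes, so "mere hours to a couple of days" is about right.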
      • Re:hmmm (Score:5, Funny)

        by LordFnord ( 843048 ) on Friday October 28, 2005 @06:34AM (#13895440)
        > Oh, and BTW, the IO nodes of this beast run Linux

        Yeah? Hmmmm.

        lordfnord@eris:~$ ssh bluegene-l.ibm.com

        Welcome to Linux 2.6.14

        bluegene-l login: falken
        Password: joshua

        Greetings, Professor Falken. Would you like to play a game?

        1. Checkers
        2. Chess
        3. Protein folding
        4. Global thermonuclear war

        Uh-oh.

    • Re:hmmm (Score:3, Interesting)

      by QuantumG ( 50515 )
      I think this [uiuc.edu] is more appropriate.
      • Way off topic, but your link says Glenn Martyna moved to IBM/Watson. Christ, better buy some IBM stock.

        By the way, a trivial point with respect to this [insomnia.org]: Isn't it relativity, not QM, that forbids superluminal communication? I seem to recall non-relativistic QM with instantaneous action at a distance (e.g. Coulomb's Law) being alive and well in the realm of quantum chemistry, or perhaps really anywhere pair creation is not an issue.
        • Whatever, it's geek humour.
    • Re:hmmm (Score:3, Informative)

      by Burz ( 138833 )
      Let's see, the entire multi-year experiment at ClimatePrediction.net could be completed in about... oh 13 days. :^)

      (Not really; I made that up. But if you're curious about how much crunching power we have on tap, visit the project website ;) ).
      • What's so hard about releasing these things under an open source license?

        I have nothing against donating CPU cycles, but I have yet to find a group that doesn't require me to sign a restrictive software license. And this particular project is run by a university, no less. Aren't universities supposed to encourage the spread of information?

        (Then again, I'd have to bury my head in the sand and forget about all the patents that universities have amassed, often using tax dollars to fund the research t
        • Re:GPL (Score:3, Informative)

          by bsartist ( 550317 )
          What's so hard about releasing these things under an open source license?

          Basic scientific method, really: control the environment as tightly as you can, and then document everything as thoroughly as you can. The first precludes open source while the experiment is ongoing; the second requires opening up the source once the experiment's done.

          Aren't universities supposed to encourage the spread of information?

          Accurate information, yes. How would you propose that accuracy could be guaranteed
    • Let's put folding@home on that mother!

      Since "folding@home" uses distributed processing to put supercomputer tasks on the home computer, wouldn't running the program on a supercomputer just make it "folding"?

  • Reader (Score:5, Funny)

    by Jozer99 ( 693146 ) on Friday October 28, 2005 @01:24AM (#13894766)
    They say it can launch Adobe Acrobat Reader in ELEVEN SECONDS!!!
  • I wonder how much could be gained via compiler improvements. Anyone know what compiler they use?
    • I wonder how much could be gained via compiler improvements. Anyone know what compiler they use?

      Since it runs Linux and PowerPC? Probably GCC.
      • Re:compiler? (Score:4, Interesting)

        by joib ( 70841 ) on Friday October 28, 2005 @02:59AM (#13894976)
        Actually, the system is provided with the IBM XL family of compilers.
      • by SnowZero ( 92219 ) on Friday October 28, 2005 @03:22AM (#13895019)
        Don't forget to compile with:
        make -j 65536
        • Re:compiler? (Score:2, Informative)

          by Anonymous Coward

          Actually, most machines are partitioned into a front end and a back end. The back end is for running large production runs (1000s of PEs) and is usually only accessible via a batch queue. The front end is for compiling and debugging and is interactive (perhaps even running serially). The front end might even be another machine.

          Contrary to popular /. opinion, compiling is not a big task, especially when compared to the real calculations being done.

          Big machines like this usually have another queue on the front end fo

        • You should always set the number of parallel jobs to CPUs+1. In this case make -j 65537.
    • not a compiler issue (Score:5, Informative)

      by Quadraginta ( 902985 ) on Friday October 28, 2005 @02:04AM (#13894864)
      I have some very limited experience with this kind of computing, and I don't think the compiler is anywhere near the limiting factor.

      I strongly suspect the limiting factor is algorithms. That is, the problem is designing code that can efficiently use a massively parallel machine. It's enormously difficult to even imagine how a problem could be solved by breaking it up into 65,000 mini-problems that can be solved simultaneously, and therefore mostly but not entirely independently. People just don't think that way. (Or rather, they do, but only at such a basic level close to the neurons that they are utterly unaware of how it's done.)

      This is one reason "parallel computing" has been the Wave Of The Future(TM) for decades, and exhibits the same kind of "promise" as fusion power -- namely, we are told that ten years from now it will change everything -- and we hear it again every ten years.
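      To make the "65,000 mini-problems" idea concrete, here is a minimal sketch in C using MPI (the message-passing library mentioned further down the thread): every processor works on its own slice of a made-up global sum, and the only coordination is one collective call at the end. The problem and its size are invented purely for illustration.

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          int rank, nprocs;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which mini-problem is mine?  */
          MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* how many ranks in total?     */

          const long N = 1L << 30;                /* size of the global problem   */
          long lo = rank * (N / nprocs);
          long hi = (rank == nprocs - 1) ? N : lo + N / nprocs;

          double local = 0.0, total = 0.0;
          for (long i = lo; i < hi; i++)          /* each rank's independent work */
              local += 1.0 / (double)(i + 1);

          /* The only communication: combine the partial results from all ranks. */
          MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0)
              printf("sum over %d ranks = %f\n", nprocs, total);

          MPI_Finalize();
          return 0;
      }

      The hard part the parent is describing is that real problems rarely split this cleanly; the slices usually need to talk to their neighbours every step.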
    • Re:compiler? (Score:5, Insightful)

      by Aardpig ( 622459 ) on Friday October 28, 2005 @02:51AM (#13894961)
      On something like this, they would probably be programming in High Performance Fortran or Fortran w/ OpenMP -- or some similar dialect that supports massively parallel execution. I'm sure IBM develops an in-house compiler for the language.
      • They can't use OpenMP since they don't have shared memory beyond 2 CPUs. HPF is dead, or at least dying, due to lackluster scaling beyond a few dozen CPUs.

        What they use in practice is MPI.
      • HPF is one of those things that sounds like a great idea... until you read through a book on it and realize that they never talk about how to do minor things like, say, I/O. The only machine I've ever heard anybody crow about HPF performance on is the Earth Simulator, and it's not as hard to make codes perform well when you have that much memory bandwidth to throw at the problem.

        Most of the codes on the Blue Gene/L at LLNL are coming from earlier ASCI systems and are most likely MPI+Fortran/C code

  • by 278MorkandMindy ( 922498 ) on Friday October 28, 2005 @01:30AM (#13894781)
    ..figure out what the hell we are going to be doing for energy in 15 years??

    "Look to the future and the present will be safe"
  • by strider44 ( 650833 ) on Friday October 28, 2005 @01:34AM (#13894790)
    An IBM engineer was caught remarking "And boy can it hold a lot of porn."
  • by kyle90 ( 827345 ) <kyle90@gmail.com> on Friday October 28, 2005 @01:35AM (#13894793) Homepage Journal
    The damn thing's smarter than I am. Well, that's taking an estimate of 100 teraflops for the human brain, which seems to be popular.
    • by Quirk ( 36086 ) on Friday October 28, 2005 @02:15AM (#13894889) Homepage Journal
      Here's a quick rundown on the numbers. Brain Computing [capcollege.bc.ca]
      • An annoying rundown. I hate it when people don't just estimate the number of ops outright. 3.0 x 10^17 integer operations per second is the estimate. Why integer, I have no idea, as every neural network I've ever simulated used floating point operations, but that's probably because I wasn't aware of some brilliant optimisation (doh!). But whatever, let's pretend that number is accurate for floating point operations. 280.6 teraflops = 280.6 x 10^12 or 2.806 x 10^14, so you need over 1000 of these new co
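        Running the parent's own numbers (3.0 x 10^17 ops/s for the brain, 280.6 teraflops for Blue Gene/L, and pretending integer ops and flops are interchangeable, as the parent does):

        #include <stdio.h>

        int main(void)
        {
            double brain_ops    = 3.0e17;   /* brain estimate quoted above */
            double bluegene_ops = 280.6e12; /* 280.6 teraflops             */
            printf("Blue Gene/Ls per brain: %.0f\n", brain_ops / bluegene_ops);
            return 0;
        }

        which prints about 1069, i.e. the "over 1000" the parent arrives at.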
        • ...so maybe all you need to do is grab the brain of someone recently deceased and slice it layer by layer from front to back and scan it with an electron microscope. That should give you a pretty good map. From simulating small parts of the map at a time you should be able to learn a lot. At least enough to provide it with input and output for a virtual environment.

          The above recalled Rudy Rucker's [sjsu.edu] early, first(?) novel Software [amazon.com]. The plot carries your idea along the following lines:

          "Cobb Anderson created the "

      • ""Earth Simulator" supercomputer performs 36 Terra flOps / second. ...Earth Simulators required to model 1 brain = 3.0 x 1017 / 3.6 x 1013 = 8333. ...
        1 Brain = 8333 State-of-the-art Supercomputers
        "

        So, unlike what kyle90 posted, you'd actually need about 1,070 (rounded up from 1,069.1) of these Blue Gene/Ls to accurately model a single human brain.

        To exceed that would require more!
        • 1,070?

          1,070 is less than 2048 (= 2^11)

          11 Moores = 22 years: x1 human brain.

          22 Moores = 44 years: x2048 human brains.

          33 Moores = 66 years: x4194304 human brains.

          44 Moores = 88 years: x8589934592 human brains.
    • Well, maybe you, but it's certainly not smarter than me; I can write code, for instance.
    • Smart? Depends on your definition of "smart".

      You can put a whole bunch of things in parallel - I'm sure computers with the 'intelligence' of a Texas Instruments scientific calculator could be paralleled hugely to create something that does even more teraflops... it just wouldn't be in a useful way. I know that analogy is flawed, but my point is that even Blue Gene/L is useless and dumb until someone smart comes along and puts a distilled level of their smartness (i.e. writes a program that mimics a process t
  • by Anonymous Coward on Friday October 28, 2005 @01:39AM (#13894807)
    and the answer had better not be 42.
  • by hansreiser ( 6963 ) on Friday October 28, 2005 @01:44AM (#13894818) Homepage
    The legitimate thing that I can imagine is if it was a cost based contract that was given out before the cost of the hardware was known.

    Was it?
  • Cool (Score:4, Interesting)

    by sheuer ( 926535 ) <sheuer@member[ ]f.org ['.fs' in gap]> on Friday October 28, 2005 @01:47AM (#13894825) Homepage
    Back when it was only half installed I got to take a tour of it while it was in Rochester, MN... Got to walk through it and touch it. Turns out the computer that controls Blue Gene takes up about half as much space as Blue Gene itself.
  • by account_deleted ( 4530225 ) on Friday October 28, 2005 @01:51AM (#13894834)
    Comment removed based on user account deletion
  • 1983 (Score:2, Funny)

    by Xrathie ( 921592 )
    WOULD YOU LIKE TO PLAY A GAME? aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    • WOULD YOU LIKE TO PLAY A GAME? aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

      If you're going to be an 80s geek, don't half-ass it like most people. The correct line from WarGames is "SHALL WE PLAY A GAME?"

  • .. since Quake 4 just hit the shelves.
  • more than twice the previous Blue Gene/L record ... set when only half the machine was installed

    When it was half done it was less than half the speed? Impressive. Was there a software/OS upgrade along the way, as well?

  • Picture (Score:3, Informative)

    by TechnoGuyRob ( 926031 ) on Friday October 28, 2005 @02:22AM (#13894902) Homepage
    Here's a picture of the momma: http://en.wikipedia.org/wiki/Image:BlueGeneL-600x450.jpg [wikipedia.org]
  • by Jack Earl ( 913275 ) on Friday October 28, 2005 @03:00AM (#13894978) Homepage
    If I had a computer that fast...
    #include <stdio.h>

    int main(void)
    {
        int answer = 42;           /* the answer, of course */
        printf("%d\n", answer);
        return 0;
    }
  • by Anonymous Coward
    Just in time for... Windows Vista(tm)
  • by Anonymous Coward on Friday October 28, 2005 @03:06AM (#13894990)
    Notice that the performance has actually increased PER processor as you add more processors... This is very remarkable in computer technology.

    Normally when you add CPUs to a computer you get an increase in performance, but it doesn't increase linearly with each CPU. With one CPU you have 100% performance; add one more and you may have 180% of the performance; add two more and you may have 300%, etc.

    Notice that with half the machine installed it got 136.8 teraflops.

    So if you doubled the size of the machine you'd expect to get something like 260 teraflops.

    But you have 280.6 teraflops.

    This pretty much means that as you add CPUs, the performance of each CPU actually increases slightly. That's an exponential growth rate, at the beginning of the curve.

    Of course there has to be a technical limit to the system and the amount of space, heat, and electricity it can handle... but if you doubled the size of the cluster again I wouldn't be surprised if you got close to 750 teraflops.

    This is some seriously hardcore stuff, the future of computing hardware. Today's supercomputer, tomorrow's desktop... I can't wait.
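    For reference, here is the arithmetic behind that observation, using the published Linpack figures (136.8 teraflops with 32,768 processors installed, 280.6 with all 65,536). This only checks the ratio; it says nothing about why it comes out above 2:

    #include <stdio.h>

    int main(void)
    {
        double half_tflops = 136.8, half_procs = 32768.0;
        double full_tflops = 280.6, full_procs = 65536.0;

        printf("full/half speed ratio: %.3f (linear scaling would be 2.000)\n",
               full_tflops / half_tflops);
        printf("teraflops per processor: %.5f (half) vs %.5f (full)\n",
               half_tflops / half_procs, full_tflops / full_procs);
        return 0;
    }

    The ratio is about 2.05, so per-processor throughput did tick up slightly between the two runs; whether that is genuinely better-than-linear scaling or just software tuning is what the replies below argue about.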
    • by mj_1903 ( 570130 ) on Friday October 28, 2005 @04:28AM (#13895182)
      That probably only means that they have optimised the architecture over time, as would be expected. Things like improved resource management, a slimmer kernel for each CPU, a better compiler, etc. can easily account for that small performance gain.
      • That probably only means that they have optimised the architecture over time

        The cynic in me thinks they probably optimised the benchmark.
        • Linux is substantially more scalable now than it was even just 6 months ago (not the vanilla kernel, but quite well tested scalability patches). This could account for the improvement. I suspect if they ran just half of it now, they'd get a little bit over half the performance (but not much over half - that is how good Linux is these days).

          • Linux is substantially more scalable now than it was even just 6 months ago (not the vanilla kernel, but quite well tested scalability patches).


            Perhaps it is, but it has nothing to do with BG, since a) BG doesn't have shared memory, and each 2-CPU node (1 dual-core processor) runs its own kernel, and b) Linux is only used on the service nodes (the nodes handling disk IO, interactive logins, compiling, etc.), not the compute nodes (where the actual action takes place).

            I'm quite sure that the improvements are due to
    • The current record for Blue Gene wasn't running at a very high efficiency (IIRC T_max was around 0.7 T_peak).
      Plus, don't forget that the machine has 64K _dual core_ CPUs, with one core dedicated to communication, hence the classification as 64K CPUs.
      There could be plenty of room for improvement by utilising this core better.
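      To put a number on that efficiency claim: assuming the commonly quoted 700 MHz clock and 4 flops per cycle per PowerPC 440 core (figures from public Blue Gene/L descriptions, not from this article), the Linpack run lands at roughly three quarters of theoretical peak:

      #include <stdio.h>

      int main(void)
      {
          double nodes       = 65536.0;
          double cores       = 2.0;    /* dual-core compute node          */
          double ghz         = 0.7;    /* 700 MHz PowerPC 440             */
          double flops_cycle = 4.0;    /* double FPU, fused multiply-add  */

          double peak_tflops = nodes * cores * ghz * flops_cycle / 1000.0;
          double linpack     = 280.6;

          printf("theoretical peak: %.1f teraflops\n", peak_tflops);
          printf("Linpack efficiency: %.1f%%\n", 100.0 * linpack / peak_tflops);
          return 0;
      }

      That works out to roughly 367 teraflops peak and about 76% efficiency, broadly in line with the parent's figure, and it is Linpack efficiency rather than application efficiency, so there is headroom either way.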
    • The step from 273.6 to 280.6 (really, how did you get 260 when doubling 136.8?) is much, much smaller than the step from 560 to 750, though. Sorry, but you're essentially talking out of your ass here. :)
    • Sorry, but I have to disagree with your conclusion that this represents exponential growth.

      The effect you speak of (doubling the number of processors giving less than double the final "power") is due to additional overhead - various processors coordinating their work with each other, deciding things like "Should I split this 2 ways or 4?" and so on - and that sort of stuff inevitably increases with the number of processors.

      You can use improved algorithms, special-purpose hardware, etc, etc, to minimize this
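      A toy model of that overhead, in the spirit of Amdahl's law: a fixed serial fraction plus a coordination cost that grows with processor count. The fractions here are invented for illustration, not measured on Blue Gene/L:

      #include <stdio.h>
      #include <math.h>

      /* Speedup on n processors with serial fraction s and a coordination
         cost that grows like log2(n), e.g. tree-structured reductions.    */
      static double speedup(double n, double s, double c)
      {
          return 1.0 / (s + (1.0 - s) / n + c * log2(n));
      }

      int main(void)
      {
          double s = 1e-5, c = 1e-7;   /* made-up serial fraction and overhead */
          for (int k = 10; k <= 16; k++) {
              double n = pow(2.0, k);
              printf("%6.0f processors: speedup %7.0f (%5.1f%% of linear)\n",
                     n, speedup(n, s, c), 100.0 * speedup(n, s, c) / n);
          }
          return 0;
      }

      Each doubling buys a bit less than 2x once the coordination term matters, which is why a better-than-linear jump usually points at software changes between runs rather than the hardware getting smarter.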
  • by heroine ( 1220 ) on Friday October 28, 2005 @03:17AM (#13895010) Homepage
    Time for Earth Simulator to make a Walmart run and get some more Athlons to regain the top of the "supercomputer" chart.
  • by Anonymous Coward
    The faster they make these things, the slower they sing that damn Daisy song.
  • by naden ( 206984 ) on Friday October 28, 2005 @03:57AM (#13895106)
    65536 processors = 64K processors.

    damn that IBM, they take geekiness to just a whole different place.
  • It still takes 15 seconds to start up OpenOffice.org.
  • Weather (Score:2, Interesting)

    by Voltageaav ( 798022 )
    The main drawback to forecasting models is that it takes so long to run all the data that we have to cut back on the data so we can actually see the forecast before it happens. With this thing running an expanded version of the GFS at 10 km resolution, we might actually be able to get it right for once. ;)
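    A rough sense of why resolution is so expensive: halving the horizontal grid spacing means about 4x the grid columns, and the time step usually has to shrink along with the grid (CFL condition), so cost grows roughly as the cube of the resolution increase. A crude sketch with made-up baseline numbers, not actual GFS figures:

    #include <stdio.h>

    int main(void)
    {
        double base_dx_km = 40.0;   /* hypothetical baseline grid spacing */
        double base_hours = 1.0;    /* hypothetical baseline runtime      */

        for (double dx = 40.0; dx >= 10.0; dx /= 2.0) {
            double r    = base_dx_km / dx;  /* resolution increase factor     */
            double cost = r * r * r;        /* 2 horizontal dims + time step  */
            printf("%5.1f km grid: ~%3.0fx the work (%.0f hours at baseline speed)\n",
                   dx, cost, base_hours * cost);
        }
        return 0;
    }

    Going from a coarse operational grid down to 10 km is a factor-of-dozens increase in work before any extra data is added, which is where a machine like this earns its keep.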
  • OK, so this machine is going to simulate nuclear explosions. Based on other posts, it seems it will be able to do the same work as its predecessor in 1% of the time. Rather than run 100X the number of nuclear simulations in a given timeframe, maybe (just maybe?) the government could use our taxpayer funded supercomputer to do medical research? It seems there is now plenty of horsepower to go around unless they've been stockpiling unused simulation data for decades. Cheers,
