Supercomputing Hardware

BlueGene/L Puts the Hammer Down

OnePragmatist writes "Cyberinfrastructure Technology Watch is reporting that BlueGene/L has nearly doubled its performance to 135.3 Teraflops by doubling its processors. That seems likely to keep it at no. 1 on the Top500 when the next round comes out in June. But it will be interesting to see how it does when they finally get around to testing it against the HPC Challenge benchmark, which has gained adherents as being more indicative of how an HPC system will perform with various types of applications."
  • similarities (Score:4, Insightful)

    by teh_mykel ( 756567 ) on Friday March 25, 2005 @03:35AM (#12044106) Homepage
    Does anyone else find the similarities between the computer hardware world and DragonballZ irritating? Right when you think it's finally over, the best is exposed and found worthy, yet another difficulty comes up - along with the standard unfathomable power increases and bizarre advances. Then it all happens again :/
  • Math Error? (Score:5, Insightful)

    by mothlos ( 832302 ) on Friday March 25, 2005 @03:51AM (#12044185)
    Roughly as expected, BlueGene/L can now crank away at 135.3 trillion floating point operations per second (teraflops), up from the 70.72 teraflops it was doing at the end of 2004. BlueGene/L now has half of its planned processors and is more than half way to achieving its design goal of 360 teraflops.



    Is it just me, or is 135.3 < 360 / 2?
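
    Working through the quoted numbers (nothing here beyond the figures in the article itself):

        2 x 135.3 TF = 270.6 TF, which is short of 360 TF
        360 TF / 2   = 180 TF,   which is more than the current 135.3 TF

    So by the Linpack numbers the machine is under half way to its design goal, not "more than half way" - though the 360-teraflop goal is usually quoted as theoretical peak rather than sustained Linpack, which would account for the gap.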

  • by Shag ( 3737 ) * on Friday March 25, 2005 @04:18AM (#12044282) Journal
    Well yeah, it's a lot of processors. But that's part of the point - these are very low-power, practically embedded-spec, PowerPC chips, so IBM can throw N+1 of them into a system and wind up with something that uses less power than one Big Complex Chip from a competing supplier, yet computes faster, or something like that.

    Given the size and complexity of the Cell, 527 of them might present some cooling problems. (Or cogeneration opportunities, if you hook a good liquid cooling system to a steam turbine...)
  • Cell vs HPC (Score:5, Insightful)

    by adam31 ( 817930 ) <adam31 AT gmail DOT com> on Friday March 25, 2005 @04:24AM (#12044302)
    The HPC Challenge benchmark is especially interesting and I think sheds some light on the design goals IBM had in coming up with the Cell.

    1) Solving linear equations. SIMD matrix math, check.
    2) DP matrix-matrix multiplies. IBM added DP support to their VMX set for Cell (though at 10% of the execution rate), check.
    3) Processor/memory bandwidth. XDR interface at 25.6 GB/s, check.
    4) Processor/processor bandwidth. FlexIO interface at 76.8 GB/s, check.
    5) "Measures rate of integer random updates of memory", hmmmm... not sure (see the sketch below).
    6) Complex, DP FFT. Again, DP support at a price. Check.
    7) Communication latency & bandwidth. 100 GB/s total memory bandwidth, check (though this could be heavily influenced by how IBM handles its SPE threading interface).

    Obviously, I'm not saying they used the HPC Challenge as a design document, but clearly Cell is meant as a supercomputer first and a PS3 second.
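
    On point 5: that one is the RandomAccess test, reported in GUPS (giga updates per second). It hammers a large table with read-modify-writes at pseudo-random addresses, so it stresses memory latency rather than raw flops. Here is a minimal sketch of that kind of loop in C - not the official benchmark code; the table size, update count, and random stream are only illustrative:

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define LOG2_TABLE 20                          /* 2^20 64-bit words (~8 MB), kept small for illustration */
        #define TABLE_SIZE ((uint64_t)1 << LOG2_TABLE)

        /* Pseudo-random address stream, similar in spirit to the benchmark's generator. */
        static uint64_t next_ran(uint64_t ran)
        {
            return (ran << 1) ^ ((ran >> 63) ? 0x7ULL : 0ULL);
        }

        int main(void)
        {
            uint64_t *table = malloc(TABLE_SIZE * sizeof *table);
            if (!table) return 1;
            for (uint64_t i = 0; i < TABLE_SIZE; i++) table[i] = i;

            uint64_t ran = 1;
            uint64_t updates = 4 * TABLE_SIZE;         /* the real test does a few updates per table entry */

            clock_t t0 = clock();
            for (uint64_t i = 0; i < updates; i++) {
                ran = next_ran(ran);
                table[ran & (TABLE_SIZE - 1)] ^= ran;  /* random read-modify-write: latency bound, not flops bound */
            }
            double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

            printf("%.4f GUPS\n", (double)updates / secs / 1e9);
            free(table);
            return 0;
        }

    On a single desktop this typically comes out to a small fraction of a GUPS; the interesting part for a machine like BlueGene/L is that the updates land all over a table distributed across nodes, so the score ends up dominated by the interconnect rather than the ALUs.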

  • by mcraig ( 757818 ) on Friday March 25, 2005 @06:54AM (#12044721)
    So what do people think: assuming speeds continue to leap ahead in the desktop arena, will it simply encourage further sloppy programming? After all, if the choice is to optimise your product for a month to save a few gigaflops, or to get it out into the market a bit resource-hungry, I imagine many teams will get pushed to release sooner rather than later.
  • by ch-chuck ( 9622 ) on Friday March 25, 2005 @08:09AM (#12044916) Homepage
    It could be that the competition for the top of the Top500 is becoming less a technological achievement and more a question of who has the most $$$ to spend. It's like auto racing: it used to be about improvements in engines, transmissions, and so on, but after a point everybody could make a faster car just by buying more of the commonly available, well-known technology than the other guys, so the races put in limitations: only so big a venturi, only so much displacement, etc.

    Anyway, my point is: it's becoming "I can afford more processors than you can, so I win" instead of the heyday of Seymour Cray, when you really had to be talented to capture the #1 spot from IBM.
