Supercomputing Hardware

BlueGene/L Puts the Hammer Down (152 comments)

OnePragmatist writes "Cyberinfrastructure Technology Watch is reporting that BlueGene/L has nearly doubled its performance to 135.3 Teraflops by doubling its processors. That seems likely to keep it at no. 1 on the Top500 when the next round comes out in June. But it will be interesting to see how it does when they finally get around to testing it against the HPC Challenge benchmark, which has gained adherents as being more indicative of how a HPC system will peform with various different types of applicatoins."
This discussion has been archived. No new comments can be posted.

  • Finally... (Score:5, Funny)

    by Nos. ( 179609 ) <andrew@nOSPAm.thekerrs.ca> on Friday March 25, 2005 @02:31AM (#12044086) Homepage
    Maybe this thing can keep the WoW service running.
    • More than likely it will just end up with me getting ganked at 135.3 Teraflops :/ *pictures body flopping to the ground* At least corpse runs should be faster....
  • ...how do we slashdot it?
  • How much processing power does one need for any certain application? I know that projects like World Community Grid need massive amounts of computing power, but seriously, 135 TFlops?

    ...ok I couldn't resist

    Imagine a beowulf cluster of these....
  • similarities (Score:4, Insightful)

    by teh_mykel ( 756567 ) on Friday March 25, 2005 @02:35AM (#12044106) Homepage
    Does anyone else find the similarities between the computer hardware world and DragonballZ irritating? Right when you think it's finally over, the best is exposed and found worthy, yet another difficulty comes up - along with the standard unfathomed power increases and bizarre advances. Then it all happens again :/
    • by TetryonX ( 830121 ) on Friday March 25, 2005 @02:47AM (#12044172)
      If the BlueGene/L can grant me any wish I want for collecting 7 of them, sign me up.
      • If the BlueGene/L can grant me any wish I want for collecting 7 of them, sign me up.

        If your wish is a question of protein folding, it'll try to grant it for you with only 1 of 'em. I wouldn't hold my breath for anything else though.
    • As long as Blue Gene doesn't turn us into cookies, I'm fine with it.
    • Fortunately there is a lot less grunting.
    • Comment removed based on user account deletion
    • Right when you think it's finally over, the best is exposed and found worthy, yet another difficulty comes up (...). Then it all happens again

      Man, you need to turn off the Cartoon Network, and go watch ESPN.
    • I believe my HP calculator is built around a "Dragonball" 4 MHz processor.
    • In the final round of the 63rd Tenka'ichi Budokai, contestants Blue Gene and SkyNet battle it out!...

      SkyNet sends out a massive army of T-800s at Blue Gene. Blue Gene fires a Demon Cannon and reduces SkyNet to rubble. SkyNet regenerates itself from a single surviving neural net chip. One of SkyNet's puny T-800s throws a Destructo Disc and manages to lop off Blue Gene's top half. A Blue Gene engineer slaps a few thousand processors on Blue Gene and it's back in action. SkyNet gets desperate with a 20-fold K
      • Of course not; SkyNet finds additional (HDD) slaves to calculate the best method of attack, uses the spirit bomb (after 20 episodes or so filled with grunting) and obliterates BlueGene. SkyNet enjoys a brief 2-week holiday period, only to be challenged by someone even more powerful. I'll say RMS.
  • by Anonymous Coward
    ..about overclocking it?

    and what type of frame rate do you get with Quake?
  • Math Error? (Score:5, Insightful)

    by mothlos ( 832302 ) on Friday March 25, 2005 @02:51AM (#12044185)
    Roughly as expected, BlueGene/L can now crank away at 135.3 trillion floating point operations per second (teraflops), up from the 70.72 teraflops it was doing at the end of 2004. BlueGene/L now has half of its planned processors and is more than half way to achieving its design goal of 360 teraflops.



    Is it just me or is 135.3 * 2 < 360 / 2?

    • Re:Math Error? (Score:1, Informative)

      by pacslash ( 784042 )
      135.3 * 2 = 270.6, and 360 / 2 = 180. It's just you.
    • Re:Math Error? (Score:5, Informative)

      by Anonymous Coward on Friday March 25, 2005 @03:43AM (#12044361)
      Let's see if I can get this right. I'm going to talk a little bit out of my ass now, but here goes:

      Every 512-node backplane has a peak of 1.4 Tflop/s according to their design doc.

      The 32k system benched in at 70. The theoretical peak was 91, so the actual performance is about 77 percent of the peak, which is pretty normal.

      The 64k system benched in at 135. The peak should be around 182, so that is 74 percent of the peak.

      The design goal is for 360 at 64k. I'm guessing that 360 is the peak, because my rough estimate calculations put the peak of 64k nodes at about 364. Let's be nice and assume 70% of peak in the actual machine. That would indicate around 255 Tflop/s of actual performance at 64k nodes, assuming the thing scales at about the same rate.

      So, they got their math right as long as they are claiming a peak of 360. That's a theoretical max, and so never actually reached. The actual is notably less. My numbers are estimates, and so 364 not equalling 360 doesn't bother me much in the end.

      Anyone care to correct anything?

      Also, can someone explain to me why Cray's Redstorm won't kick this thing's ass performance-wise? Redstorm should have 10k processors, but they are 64-bit Opteron 2.4 GHz processors with 4x the RAM per node. These, I believe, are 700 MHz processors.

      I'm confused because Redstorm only has a theoretical peak of about 40 Tflops off the 10k nodes. IBM's system at 10k should have a peak of around 30 Tflops. I'm wondering how 64-bit Opterons at more than 3x the clock speed could only be 10 Tflops faster at 10k nodes than 700 MHz 32-bit PPCs with 1/4 the RAM. Can someone please explain?

      Also, anyone know why IBM isn't using HPCC? Cray has been using it for the XD1s. I'm guessing the reason IBM hasn't posted results is because they can't even come close in sustained memory bandwidth, MPI latency, and other tests. That's just a guess though. I'd love to hear from someone who actually knows about this stuff.
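
      As a rough sanity check, the arithmetic above can be redone in a few lines of Python, using only the figures quoted in this comment (1.4 Tflop/s peak per 512-node backplane and the published Linpack numbers); treat it as a back-of-the-envelope sketch, not official specs:

          peak_per_backplane = 1.4                          # Tflop/s per 512-node backplane, as quoted above
          peak_32k = 32 * 1024 / 512 * peak_per_backplane   # ~89.6 Tflop/s theoretical peak
          peak_64k = 64 * 1024 / 512 * peak_per_backplane   # ~179.2 Tflop/s theoretical peak

          print(70.72 / peak_32k)    # ~0.79 -> the benched 70.72 Tflop/s is roughly 77-79% of peak
          print(135.3 / peak_64k)    # ~0.75 -> the benched 135.3 Tflop/s is roughly 75% of peak

          # If the finished machine peaks near the claimed 360 Tflop/s and sustains ~70%:
          print(0.70 * 360)          # ~252 Tflop/s sustained, in line with the estimate above
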
      • by RalphBNumbers ( 655475 ) on Friday March 25, 2005 @10:36AM (#12046211)
        Well, it comes down to a few different things.

        First off, Opterons are pretty mediocre at double-precision floating point benchmarks; it just isn't what they were designed for. Opterons effectively have only a single FPU (technically they have two, but one only does addition, while the other handles all multiplies), while most competing chips in the HPC arena have two full FPUs. They tend to get spanked by PPCs and Itanium 2s, and even Xeons can do better.

        Also, you should note that the modified PPC440s in BlueGene have a disproportionate amount of floating point resources, making them about equivalent to the 970 in that area MHz for MHz, despite being massively outclassed in integer and vector ops. And the floating point units on those 440s are full 64-bit units (as FPUs are on many other ostensibly 32-bit chips, since the bit width of an FPU has nothing to do with the integer units and MMUs being 32-bit). Plus the PPC has a fused multiply-add instruction, allowing it to theoretically finish 2 FLOPs/unit/cycle instead of just one.

        And finally, you should know that individual nodes' ram sizes matter very little for Linpack.

        When you take all that together, it's not too surprising that 700 MHz PPC440s with two 64-bit FPUs each finishing up to 2 FLOPs/cycle (at least 2 of which must be adds) would perform on par with 2.x GHz Opterons finishing a total of 2 FLOPs/cycle (at least one of which has to be an add).
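
        To make that concrete, here is a small Python sketch of the peak-flops-per-CPU comparison; the per-cycle figures follow the description above and are assumptions, not vendor spec sheets:

            # PPC440: 700 MHz x 2 FPUs x 2 flops/cycle via fused multiply-add.
            # Opteron: 2.4 GHz x ~2 flops/cycle total (one add unit, one multiply unit).
            ppc440_peak  = 0.7e9 * 2 * 2    # = 2.8 Gflop/s per CPU
            opteron_peak = 2.4e9 * 2        # = 4.8 Gflop/s per CPU

            print(10_000 * ppc440_peak / 1e12)    # ~28 Tflop/s peak for 10k BlueGene CPUs
            print(10_000 * opteron_peak / 1e12)   # ~48 Tflop/s peak for 10k Opterons
            # Both land in the same ballpark as the ~30 and ~40 Tflop figures upthread.
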
    • Re:Math Error? (Score:2, Informative)

      by RaffiRai ( 870648 )
      I might fathom that the layout/grid-computing data-flow arrangement has as much effect as, if not more than, the sheer number of processors when you're working on something like that.

      It seems to me that since the device isn't complete the data management isn't working under optimal conditions.
  • Wait another year... (Score:5, Interesting)

    by Anonymous Coward on Friday March 25, 2005 @03:00AM (#12044216)
    That's like, what, 527 Cell processors?

    Obviously that number's based on an unrealistic, 100% efficient scaling factor. But still. The 137 TFlop is coming from 64,000 processors.

    It's fun to think about what's just around the corner.
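
    For what it's worth, the ~527 figure falls out of a one-line division, assuming the oft-quoted 256 Gflop/s single-precision peak per Cell and perfect scaling (neither of which is realistic):

        print(135e12 / 256e9)   # ~527 Cells, ignoring double-precision penalties and scaling losses
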

    • by Shag ( 3737 ) *
      Well yeah, it's a lot of processors. But that's part of the point - these are very low-power, practically embedded-spec, PowerPC chips, so IBM can throw N+1 of them into a system and wind up with something that uses less power than one Big Complex Chip from a competing supplier, yet computes faster, or something like that.

      Given the size and complexity of the Cell, 527 of them might present some cooling problems. (Or cogeneration opportunities, if you hook a good liquid cooling system to a steam turbine..
    • Well, to be fair, Cell uses some fairly stoned tricks to get to that kind of peak power (the massive memory bandwidth is only to small local memories - everywhere else you would call them cache - and the main memory bandwidth is laughable compared to the computing resources).
      Although Linpack is very nice to parallelize, I don't think it would be possible to get even 10% of the theoretical rate on a Cell.
      • Well jeez, 25 GB/s of bandwidth to local memory is laughable to you? Jesus tap dancing Christ on a pogo stick!
        • Yes it is.
          25 GB/s for 176 Gflops.

          An A64 or P4 has 6 GB/s for 4-6 Gflops.
          If you do streaming MACs, you would need 2 loads and 1 store per 2 flops -> 12 bytes per flop -> 176 Gflops needs ~2 TB/s. Hope you get a good cache hit rate.
          Just as an example: REAL vector computers can sustain their vector units from main memory.
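
          The bytes-per-flop arithmetic behind that claim, as a small sketch (assuming 8-byte double-precision operands and a streaming multiply-accumulate, i.e. 2 loads and 1 store per 2 flops):

              bytes_per_flop = 3 * 8 / 2        # 24 bytes of traffic per 2 flops = 12 bytes/flop
              print(176e9 * bytes_per_flop)     # ~2.1e12 bytes/s -> ~2 TB/s needed to stream 176 Gflop/s
              print(25 / 176)                   # ~0.14 bytes/flop actually available from the local store
              print(6 / 5)                      # ~1.2 bytes/flop for an A64/P4 doing ~5 Gflop/s
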
    • I could be wrong, but I bet the reason you get double the performance when you double the number of processors is that they are not adding on more slow processors. They keep adding faster chips. This might be why it seems to scale so well.
    • Note though, that the BGL CPUs have a double-precision floating point pipeline. The Cell is mostly a single-precision chip, so it's somewhat difficult to compare performance directly at the moment.
  • by EvanED ( 569694 ) <evaned@g[ ]l.com ['mai' in gap]> on Friday March 25, 2005 @03:07AM (#12044245)
    ...host a spell check for Slashdot! ...as being more indicative of how a HPC system will peform with various different types of applicatoins."
  • Windows HPC (Score:4, Funny)

    by Cruithne ( 658153 ) on Friday March 25, 2005 @03:16AM (#12044279)
    Oh man, I *so* wanna put Windows HPC on this thing!
    • You'll probably need it if you want to turn on the eye candy in Longhorn...
    • Re:Windows HPC (Score:2, Interesting)

      Well, if you had Windows on this machine (but be serious, please!)... it would only be on one of every 64 nodes. Let me explain why.

      Blue Gene is known to run Linux. True, but... in fact, there are two types of nodes in Blue Gene: the compute nodes and the I/O nodes. There is 1 I/O node for every 63 compute nodes. So for a 64,000-node cluster, there are in fact only 1,000 processors that run Linux. The other 63,000 run an ultra-light runtime environment (with MPI and other essential things) to maximize the spee
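
      The 1-in-64 arithmetic, spelled out (using the ratio as stated above; actual BlueGene/L configurations may differ):

          total_nodes   = 64_000
          io_nodes      = total_nodes // 64        # ~1,000 I/O nodes running Linux
          compute_nodes = total_nodes - io_nodes   # ~63,000 compute nodes on the lightweight runtime
          print(io_nodes, compute_nodes)
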
    • Why do you want to make this baby slower than my ruler?
  • Cell vs HPC (Score:5, Insightful)

    by adam31 ( 817930 ) <adam31 @ g m a i l .com> on Friday March 25, 2005 @03:24AM (#12044302)
    The HPC Challenge benchmark is especially interesting and I think sheds some light on the design goals IBM had in coming up with the Cell.

    1) Solving linear equations. SIMD Matrix math, check.
    2) DP Matrix-Matrix multiplies. IBM added DP support to their VMX set for Cell (though at 10% the execution rate), check.
    3) Processor/Memory bandwidth. XDR interface at 25.6 GB/s, check.
    4) Processor/Processor bandwidth. FlexIO interface at 76.8 GB/s, check.
    5) "measures rate of integer random updates of memory", hmmmm... not sure.
    6) Complex, DP FFT. Again, DP support at a price. check.
    7) Communication latency & bandwidth. 100 GB/s total memory bandwidth, check (though this could be heavily influenced by how IBM handles its SPE threading interface)

    Obviously, I'm not saying they used the HPC Challenge as a design document, but clearly Cell is meant as a supercomputer first and a PS3 second.
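
    A couple of quick numbers implied by that list, treating the commonly quoted 256 Gflop/s single-precision Cell peak and the 10% DP rate as given (assumptions, not official specs):

        sp_peak = 256.0            # Gflop/s, single precision, as commonly quoted
        dp_peak = 0.10 * sp_peak   # ~25.6 Gflop/s double precision at 10% of the SP rate
        print(dp_peak)
        print(sp_peak / 25.6)      # ~10 flops per byte of XDR bandwidth -- the feeding problem noted elsewhere in this thread
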

    • This is the new PS3?

      Damn, they are way outclassing Microsoft on this one.
    • I'm sorry, but DP at only 10% of SP seems utterly useless to me.
      • Re:Cell vs HPC (Score:1, Informative)

        Yeah it is, until you realise SP runs at 256 Gflops... so even at a modest 25 Gflops it outperforms most cores quite well. Cells are obviously built for clusters/multiple connected cores though... theoretically then you only need 5,400-odd cores to get the same 136 Tflop cap (I refer to cores here, since most incarnations are going to have 2, 4, 8 or 16 cells onboard)... still a fairly decent improvement.
    • Re:Cell vs HPC (Score:3, Interesting)

      by shizzle ( 686334 )

      2) DP Matrix-Matrix multiplies. IBM added DP support to their VMX set for Cell (though at 10% the execution rate), check.
      [...]
      ...clearly Cell is meant as a supercomputer first and a PS3 second.

      I think you've refuted your own argument there: double precision floating point performance is critical for true supercomputing. (In supercomputing circles DP and SP are often referred to as "full precision" and "half precision", respectively, which should give you a better idea of how they view things.)

      In

    • "Obviously, I'm not saying they used the HPC Challenge as a design document, but clearly Cell is meant as a supercomputer first and a PS3 second."

      I'd say it seems a lot more like they thought a supercomputer would be kickass for running a PS3 and designed it accordingly.
      • Re:Cell vs HPC (Score:2, Interesting)

        by tarpitcod ( 822436 )
        I don't think they thought that at all ("let's build a supercomputer"). I think it's just the natural outcome of the problem they were trying to solve.

        This is because when you have the following conditions:

        -- Lots of memory bandwidth needed
        -- Fast floating point
        -- Parallelizable code
        -- Hand tuned kernels OK

        You end up with something that looks a lot like a supercomputer. You just turned your compute-bound problem into an I/O-bound problem. We may want to revise that saying -- and say 'You turned your compute-bound problem into a c
  • 1.) How many frames/sec is that in Counter-Strike?
    2.) How about CS:S?
    3.) If Apache 2 were installed on it, could it survive a slashdotting?
    4.) How fast could it run Avida?
  • Pics (Score:5, Informative)

    by identity0 ( 77976 ) on Friday March 25, 2005 @03:39AM (#12044355) Journal
    I found it odd that there aren't any pics of the machine on those sites, so I looked around... Here are some pics [ibm.com] of the prototype at top, and the finished version at bottom. It looks like it's going to be in classic "IBM black", like the 2001 monolith : )

    Some more pics [ibm.com] of the prototype.

    For comparison, the Earth simulator [jamstec.go.jp] and big mac [vt.edu].

    Anyone know what kind of facilities blue gene will be housed at? The one for the earth simulator looks like something out of a movie, IBM better be able to compete on the 'cool factor'. : )

    And does anyone else get the warm and fuzzy feelings from looking at these pics, even though there's nothing you could possibly use that much power for? Ahhh, power...
    • I believe they will ship the first of these monsters to Livermore to simulate nukes and other deadly stuff. Number 2 will go to the Lofar [lofar.org] project. It is basically one huge phased-array radio telescope with a diameter of 300 kilometers. Just connect some 10,000 simple low-frequency (~100 MHz) antennas with big fiber pipes to a central computer and do the beam pointing and imaging all in software.

    • Re: (Score:2, Insightful)

      Comment removed based on user account deletion
    • Re:Pics (Score:2, Informative)

      by Anonymous Coward
      I work at IBM in Rochester, Minnesota, where the machines are built and housed, and I have seen the machines that are being shipped around and installed at Lawrence Livermore and other places... it is an awesome sight. VERY LOUD with the huge fans above it, and the floor in the building had to be dug down 4 feet to allow for the cabling and air ducts to run underneath everything. What most surprised me is not how fast it is, but how well they were able to get it to scale by using fairly low-power processors. T
    • Ummm... I think BlueGene (here [ibm.com] and here [ibm.com]) is cooler than the Earth Simulator (here [jamstec.go.jp]).
    • Slashdot is still using the 1976 Cray-1 as the icon for supercomputing, and I think it's safe to say supercomputing styling has gone downhill since. Not that these things should be like cars, though here at Slashdot we tend to salivate over them as if they were. Don't get me started on people who are into case mods.

      I remember seeing a news article on TV recently about NASA and their upgrades to computer horse power for doing flight simulations and design work. The picture they showed? A late 80's conne

      • Thinking Machines had a parallel processing computer with hundreds of blinking LEDs. It appeared in a documentary about Richard P Feynman. He helped go through the Boolean Logic and reduce the number of logic gates required for the various circuits or something like that..
    • Pics of the Terascale Simulation Facility at LLNL that houses blue gene are available here [llnl.gov]. My office window is on the far left.
    • Here's a link from Lawrence Livermore that shows some system pics and has some entertaining facts... i.e. what this beast is going to be used for (nuclear simulation): http://www.llnl.gov/pao/news/news_releases/2005/NR-05-03-09.html [llnl.gov]
  • How do these compare to the Cray supercomputers? Last I checked, Cray was top dog and everyone else was fighting for second place. I mean, it's cool that you can get 135.3 Teraflops out of the BlueGene, but the Cray X1E delivers up to 147 TFLOPS in a single system. Am I just confused and lost?
    • As in: how many 3 GHz P4s is that? Or: how many 100 MHz Pentiums (yes, 586s)? How many Sinclair Spectrum 48s? How many ZX81s?

      A graph would be neat (but I'd settle for a power of ten) :-)

      It would give an idea of when we'll get that kind of power at home - and don't tell me we'll never know what to do with it...

    • You're confused and lost. According to the top 500 rankings referenced by the article, the highest ranking Cray (an X1) puts out less than 6 TFLOPS.

      So try... a cluster of 25+ X1s and then we'll talk =)!
    • You're confused and lost. When was the last time you checked? 2000? Cuz that's the last time Cray was anywhere near the top (June 2000, there were a handful of Cray T3Es placed in the top 10, with just under 1TF each).

      TZ

  • More than Teraflops (Score:2, Interesting)

    by gtsili ( 178803 )
    What would also be interesting are the power consumption and heat production figures for these systems when idle and under heavy load, and also the load statistics.

    In other words what is the cost in the quest for performance?
    • Best Guestimate:

      3200 amps of pull at 110 VAC... pumping out about 1,200,000 BTU/hr... plus the cost of A/C, so another 1200 amps (110 VAC) of juice to pump out the heat (assumes 20°C outdoor temp).

      The total power would average around 4400 amps * 110 VAC / 1000 = 484 kW draw. Assuming 10¢/kWh, it costs about $50/hour to run in power alone, assuming really cheap power.

      So the yearly power bill is going to run around $400k/year as a conservative estimate.

      My guesses are that each rack
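
      Running that guesstimate through Python, with all inputs being the commenter's assumptions above (4,400 A total at 110 VAC, power at $0.10/kWh):

          draw_kw       = 4400 * 110 / 1000      # ~484 kW total draw, compute plus cooling
          cost_per_hour = draw_kw * 0.10         # ~$48/hour at 10 cents per kWh
          cost_per_year = cost_per_hour * 24 * 365
          print(cost_per_hour, cost_per_year)    # roughly $50/hour, ~$420k/year
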

  • ...explain why this genetic research needs that much CPU power? What calculations take so long to process that they need to build the fastest computers? And also, are they sure that the programmers working at research labs are optimizing their code effectively, so that maybe the work done on those computers could be done with 1/4 of the current power?
    • Think of all the charges in a protein composed of hundreds of amino acids, each composed of dozens of atoms. Now imagine those charges interacting during protein folding, in a solution. Let's say that process takes a few milliseconds. Now imagine modeling this process at femtosecond resolution. This system is severely underpowered.
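
      The time-step count alone makes the point; one line of arithmetic with illustrative numbers (a millisecond of folding simulated at femtosecond resolution):

          print(1e-3 / 1e-15)   # ~1e12 time steps -- a trillion -- to cover one millisecond at 1 fs resolution
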
    • > And also, are they sure that the programmers working
      > at research labs are optimizing their code effectively

      Good programmers are cheap compared to computation time on one of those machines. The electric bill alone is nothing to sneeze at.
    • Because the mechanism by which the DNA blueprint is actually used to create proteins (in a mechanism that itself uses proteins) is *spectacularly* complex.

      "IBM estimates that the folding model for a 300-residue protein will encompass more than one billion forces acting over one trillion time steps. Even for Blue Gene, modeling such a folding process is expected to take about a year of around-the-clock processing."
  • So what do people think: assuming speeds continue to leap ahead in the desktop arena, will it simply encourage further sloppy programming? After all, if the choice is to optimise your product for a month to save a few gigaflops, or to get it out into the market (so what if it's a bit resource-hungry), I imagine many teams will get pushed to release sooner rather than later.
  • One in every home (Score:3, Interesting)

    by BeerCat ( 685972 ) on Friday March 25, 2005 @06:18AM (#12044770) Homepage
    Several decades ago, a computer filled an entire room, and "I think there is a world market for maybe five computers" [thinkexist.com]

    A few decades ago, people thought Bill Gates was wrong when he reckoned there would soon be a time when there was a computer in every home.

    Now, a supercomputer fills an entire room. So how long before someone reckons that there will come a time when there will be a supercomputer in every home?
    • So how long before someone reckons that there will come a time when there will be a supercomputer in every home?

      According to Apple [apple.com] that era was launched a long time ago.
    • So how long before someone reckons that there will come a time when there will be a supercomputer in every home?

      Then it won't be considered 'super' any more, as there will be even faster computers out there.

    • The definition of a supercomputer is a moving line. At any given time, a supercomputer is usually just a machine with an order of magnitude more CPU throughput than a PC. This neglects things supercomputers have that desktops don't, like massive I/O capabilities, but in terms of CPU performance, today's desktops are usually as fast as the supercomputers of the past.
    • In many respects we already do. If you have any assortment of modern tech gear, you are close to what a supercomputer would have been a few years ago.

      I have 2 notebooks, a dual-processor Linux machine, an iMac and a TiVo. All networked, and they could be turned into a number cruncher without much difficulty.

      Granted, it does not compare to any modern supercomputer, but it is close to a >$200,000 computer from 1996.
    • I think you're forgetting a very obvious point; most people already have a supercomputer in their home. Hear me out.

      Desktop computers are so fast these days that engineers at Intel and AMD are actually hitting physical barriers to extending their processing power any further. Yes, there have been physical problems in the past, and most of them involved etching the chips (we hit a barrier for a while on how small the visible-light lasers were capable of etching, so they switched to ultraviolet). But now the
  • I think the whole point of using a machine of this size is that you write your custom application specifically with it in mind. I would be highly surprised if after leasing one, or a share on one, IBM doesn't provide documentation on how to create an application which takes advantage of the machine's architecture.
  • by ch-chuck ( 9622 ) on Friday March 25, 2005 @07:09AM (#12044916) Homepage
    It could be that the competition for the top slot of the Top500 is becoming less of a technological achievement and more a matter of who has the most $$$ to spend. Just as auto racing used to be about improvements in engines, transmissions, etc., but after a point everybody could make a faster car just by buying more of the commonly available, well-known technology than the other guys. So they put in limitations for the races: only so big a venturi, so much displacement, etc.

    Anyway, my point is - it's becoming just "I can afford more processors than you can, so I win" instead of the heyday of Seymour Cray, when you really had to be talented to capture the #1 spot from IBM.

    • by ShadowFlyP ( 540489 ) on Friday March 25, 2005 @09:38AM (#12045710) Homepage
      I think your comparison here is quite unfair to the technological accomplishments of BlueGene/L. This is not simply a case of IBM "throwing more processors" at the problem; BlueGene is a technological leap over other supercomputers. Not only is BlueGene faster than, for instance, the Earth Simulator, but it also consumes FAR LESS power (which in turn minimizes the energy wasted cooling the thing) and takes up much less space. From an article published when BlueGene first overcame the Earth Simulator: "Blue Gene/L's footprint is one per cent that of the Earth Simulator, and its power demands are just 3.6 per cent of the NEC supercomputer." http://www.theregister.co.uk/2004/09/29/supercomputer_ibm/ So, I say to you, NO! The Top500 race is not simply big companies throwing money at a problem (well, it sort of is), but there is quite a lot of technical accomplishment going on here. You could argue that the people involved may not have the brilliance of Seymour, but they sure do have real talent.
      • I disagree on the point of more money being thrown at a problem. Look at what Linpack is. It is really not a hard problem to scale. It scales to thousands of processors (witness ES and machines like the 1k-processor T3E). There is a reason the HPC Challenge benchmark was invented.

        Having said that, I agree with you that IBM has done some innovative things with BlueGene.
  • I was wondering if a test of loading OpenOffice.org Writer would be useful?

    • I was wondering if a test of loading OpenOffice.org Writer would be useful?

      Yes, but compiling the OpenOffice.org suite is the real time-trial challenge... ;)

      With BlueGene/L, I would hope that the compilation time would be measured in seconds, not hours like with my poor little (by comparison) Athlon.
  • by tarpitcod ( 822436 ) on Friday March 25, 2005 @10:28AM (#12046121)
    What's the scalar performance of one of these beasties?

    Can an Athlon 64 / P4 beat it on scalar code? The whole HPC world has gotten boring since Cray died. Here's why I say that:

    The Cray 1 had the best SCALAR and VECTOR performance in the world.

    The Cray 2 was an ass kicker, the Cray 3 was a real ass kicker (if only they could build them reliably).

    Cray pushed the boundaries, he pushed them too far at some points -- designing and trying to build machines that they couldn't make reliable.

    So it'll be a cold day in hell before I get all fired up over the fact that someone else managed to glue together a bazillion 'killer micros' and win at Linpack...
    Now if someone would bring back the idea of transputers, or we saw some *real* efforts at Dataflow and FP then I'd be excited. I'd love a PC with 8 small, simple, fast, in-order tightly bound cpus. Don't say CELL, all indications are that they will be a *real* PITA to program to get any decent performance out of.
  • by Anonymous Coward
    The 70.72 TF BlueGene/L that debuted on the November list is only 16 of 64 racks of the full machine (25%). BlueGene/L is to be delivered in stages and will be a 131,072-CPU system when complete (64 racks * 2048 CPUs per rack). The beasty will be well over 200 TF sustained Linpack when it is completed. Oh, and it is binary compatible with System X at Virginia Tech.
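
    The rack arithmetic from that comment, spelled out:

        print(16 * 2048)   # 32,768 CPUs in the 16-rack configuration that hit 70.72 TF
        print(64 * 2048)   # 131,072 CPUs when all 64 racks are delivered
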
  • Whats an applicatoin! New industry standard? :-p
  • ...that by the time Duke Nukem Forever launches, this will be the level of computing power on every desktop? I can hardly wait for Windows mean-time-to-failure to be measured in femtoseconds.
  • Applying the AC relevancy converter, we get:

    FFFFirst post!
