Supercomputing Technology

$208 Million Petascale Computer Gets Green Light 174

Posted by samzenpus
from the that's-a-lot-of-solitaire dept.
coondoggie writes "The 200,000-processor-core system known as Blue Waters got the green light recently, as the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications (NCSA) said they have finalized the contract with IBM to build the world's first sustained-petascale computational system. Blue Waters is expected to deliver sustained performance of more than one petaflop on many real-world scientific and engineering applications. A petaflop equals about 1 quadrillion calculations per second. The processor cores will be coupled to more than a petabyte of memory and more than 10 petabytes of disk storage. All of that memory and storage will be globally addressable, meaning that processors will be able to share data from a single pool exceptionally quickly, researchers said. Blue Waters is supported by a $208 million grant from the National Science Foundation and will come online in 2011."
  • imagine... (Score:5, Funny)

    by spiffmastercow (1001386) on Wednesday September 03, 2008 @05:47PM (#24866369)
    nah, nevermind
  • by Naughty Bob (1004174) * on Wednesday September 03, 2008 @05:54PM (#24866445)
    I'm glad they've given it a green light.

    Imagine having all that computer power, and not even knowing if it was switched on!
  • Can it figure out how to brew the 'perfect' cup of coffee?
  • Naive question... (Score:3, Interesting)

    by religious freak (1005821) on Wednesday September 03, 2008 @05:56PM (#24866475)
    Yes, I know this is probably a very naive question, but has anyone here actually had the privilege of working on one of these things? I mean, what do they actually use this for?

    I think it's awesome, but are there any concrete advancements that can be attributed to having access to all this computing power?

    Just wondering...
    • Re: (Score:2, Funny)

      by Naughty Bob (1004174) *

      I mean, what do they actually use this for?

      I think it has been designed to run IE8 beta 2.

    • Re:Naive question... (Score:5, Interesting)

      by serviscope_minor (664417) on Wednesday September 03, 2008 @06:16PM (#24866675) Journal

      I don't use one myself, but I know people involved with supercomputers. They are used for large simulations. Often this comes down to solving large systems of linear equations, since at the inner step finite-element methods need solutions to these large equation systems. The point is, the larger the computer, the larger the grid you can have. This means simulating a larger volume, or simulating the same volume in more detail (think, for example, of weather systems).

      As for concrete advancements? I'm not in the biz, so I don't know, but I expect so. Apparently they're also used for stellar simulations, so I expect the knowledge of the universe has been advanced. I would be surprised if they haven't seen duty in global warming simulation too.
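The "large systems of linear equations" mentioned above can be illustrated at toy scale. The sketch below (hypothetical, pure Python, not any real HPC code) solves the 1-D Poisson problem that a finite-difference grid produces, using Jacobi iteration; production codes do the same thing on 3-D grids with billions of unknowns spread over thousands of processors:

```python
# Toy sketch: solving the linear system that a finite-difference grid
# produces, here a 1-D Poisson problem -u'' = f with Jacobi iteration.

def jacobi_poisson(f, iterations=5000, h=1.0):
    """Solve -u'' = f on a 1-D grid with u = 0 at both ends."""
    n = len(f)
    u = [0.0] * n
    for _ in range(iterations):
        new_u = u[:]
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            # Each point is updated from its two neighbours only -- this
            # locality is what makes the method easy to distribute.
            new_u[i] = (left + right + h * h * f[i]) / 2.0
        u = new_u
    return u

# Constant load f = 1 on a small grid; the solution is a parabolic arch.
u = jacobi_poisson([1.0] * 9)
```

Because each point depends only on its immediate neighbours, the grid can be split across processors with communication limited to block boundaries, which is why a bigger machine allows a bigger (or finer) grid.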

    • Re:Naive question... (Score:5, Informative)

      by Deadstick (535032) on Wednesday September 03, 2008 @06:22PM (#24866781)

      Weather modeling comes to mind, both terrestrial and space.

      rj

    • Re:Naive question... (Score:5, Informative)

      by mikael (484) on Wednesday September 03, 2008 @06:29PM (#24866869)

      These machines are used to work on simulations that involve aerodynamics and hydrodynamics, quantum electrodynamics (QED), or electromagnetohydrodynamics. All of these simulations require that a mathematical model be constructed from a high-density mesh of data points (e.g. 2048^3). Blocks of such points are allocated to individual processors. Because of this, each processor must be able to communicate at high speed with its neighbours (up to 26 neighbours with a cubic mesh).

      Usually, the actual individual calculations per element will take up less than a page of mathematical equations, but require high precision, so the data values will be 64-bit floating-point quantities. A single element might require 20 or more variables. Thus the need for so many processors and high clock speeds.
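The neighbour-communication claim above (up to 26 neighbours in a cubic mesh) can be checked with a small sketch; the block coordinates and grid dimensions here are illustrative, not taken from any real code:

```python
from itertools import product

# In a 3-D block decomposition, each processor's block touches up to 26
# neighbouring blocks: 6 faces + 12 edges + 8 corners.
offsets = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]

def neighbours(block, dims):
    """Neighbouring blocks of `block` in a dims[0] x dims[1] x dims[2]
    grid of blocks, with no wrap-around at the boundary."""
    x, y, z = block
    result = []
    for dx, dy, dz in offsets:
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < dims[0] and 0 <= ny < dims[1] and 0 <= nz < dims[2]:
            result.append((nx, ny, nz))
    return result
```

An interior block sees all 26 neighbours; a corner block of the domain sees only 7, which is why boundary handling shows up in every real decomposition code.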

    • by pablomme (1270790)

      Yes, I know this is probably a very naive question, but has anyone here actually had the privilege of working on one of these things? I mean, what do they actually use this for?

      The one application I know this computer is going to run is quantum Monte Carlo [wikipedia.org], which is an electronic-structure method. QMC is intrinsically parallel due to its stochastic nature, but the degree of parallelism involved here requires further breakdown of the algorithm. There are quite a few research groups putting effort into this.

      Other applications, if I am not mistaken, are also meant to be highly parallelizable, possibly nearing the boundary of embarrassingly parallel [wikipedia.org] tasks. This is probably to make sure
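The embarrassingly parallel structure of a stochastic method can be sketched with a stand-in problem (estimating pi, not an actual QMC calculation): independent walkers, each with its own random seed, run with no communication, and their estimates are simply averaged at the end:

```python
import random

def walker_estimate(seed, samples=100_000):
    """One independent 'walker': Monte Carlo estimate of pi from its own
    random stream. No communication with any other walker."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

# On a real machine each walker would run on its own core; the only
# communication is the final average.
estimates = [walker_estimate(seed) for seed in range(8)]
pi_estimate = sum(estimates) / len(estimates)
```

This is the parallel skeleton only; real QMC estimates quantum expectation values and, at Blue Waters scale, needs further breakdown of the algorithm beyond independent walkers, as the comment above notes.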

    • Re:Naive question... (Score:5, Informative)

      by Ilgaz (86384) on Wednesday September 03, 2008 @06:37PM (#24866961) Homepage

      Have you noticed that neither the USA nor Russia blows up a portion of the planet to test nuclear weapons anymore? Is it because the planet is so peaceful that further research is not required? Unfortunately, no.

      These monsters can simulate a gigantic nuclear explosion at the molecular level.

      Or, for peaceful purposes, they can simulate that New Orleans storm based on real-world data and pinpoint exactly what would happen.

      • Re: (Score:3, Funny)

        by blantonl (784786)

        Or, for peaceful purposes, they can simulate that New Orleans storm based on real-world data and pinpoint exactly what would happen.

        Right.

        That's why the City of New Orleans evacuated to Baton Rouge.

        • by Ilgaz (86384)

          I remember FEMA claiming that they had predicted this would happen and reported it to the government, but they didn't care.

          My post sounded like you would use a supercomputer only to do evil things, so I tried to balance it with New Orleans. In fact, every nuclear explosion avoided as a result of supercomputer simulation is a positive thing in itself. They will keep stupidly designing/testing them anyway.

    • Think of the number of open tabs you could use in Google's new Chrome browser! With separate processes for each tab, they could have the whole internet open at once!

    • I mean, what do they actually use this for?

      Very detailed solutions of nonlinear field equations. The kind of thing that aerothermodynamics deals with.

      If someone comes out of the woodwork who happens to be a cross between Alan Turing and Kelly Johnson, maybe that person could use a machine like this to design a combined cycle turbo/ram/scramjet and then Richard Branson could use it to power a real spaceship, not something that's just called a spaceship.

      It's not that crazy to imagine a talented individual could simulate all the expensive work on scram

    • Re:Naive question... (Score:5, Informative)

      by Rostin (691447) on Wednesday September 03, 2008 @08:43PM (#24868189)

      I'm working on a PhD in chemical engineering, and I do simulations. I occasionally use Lonestar and Ranger, which are clusters at TACC, the U. of Texas' supercomputing center. Lonestar is capable of around 60 TFLOPS and Ranger can do around 500-600 TFLOPS. A few users run really large jobs using thousands of cores for days at a stretch, but the majority of people use 128 or fewer cores for a few hours at a time.

      My research group does materials research using density functional theory, which is an approximate way of solving the Schroedinger equation. Each of our jobs usually uses 16 or 32 cores, and takes anywhere from 5 minutes to a couple of days to finish. Usually we are interested in looking at lots of slightly different cases, so we run dozens of jobs simultaneously.

      The applications are pretty varied. Some topics we are working on -
      1) Si nanowire growth
      2) Si self-interstitial defects
      3) Au cluster morphology
      4) Catalysis by metal clusters
      5) Properties of strained semiconductors
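The "dozens of slightly different cases" workflow described above is typically driven by a small script that emits one batch job per parameter value. A minimal sketch, where `run_dft` and the strain values are placeholders rather than any real simulation command:

```python
# Hypothetical parameter sweep: one PBS-style batch script per strain
# value, each requesting a modest core count, all submitted side by side.

def job_script(strain, cores=16):
    """Return the text of a batch script for one parameter set.
    `run_dft` is a placeholder for the actual simulation binary."""
    return "\n".join([
        "#!/bin/bash",
        f"#PBS -l nodes={cores // 16}:ppn=16",
        f"run_dft --strain {strain}   # hypothetical simulation command",
    ])

strains = [0.00, 0.01, 0.02, 0.03]
scripts = [job_script(s) for s in strains]
```

Each script would then be handed to `qsub`; since the cases are independent, the scheduler can run them all at once, which is exactly how a few hundred cores turn into dozens of concurrent 16- or 32-core jobs.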

      • So you're saying more basic research could be completed with more computing time?

        *looks for Amex Gold Charge Card and Amazon EC2 account credentials*

      • by daveime (1253762)

        which is an approximate way of solving the Schroedinger equation

        So is the cat dead or alive then ?

    • Re: (Score:3, Informative)

      by dlapine (131282)

      For a reasonable sample of the things that can be done on a supercomputer, start here: http://www.ncsa.uiuc.edu/Projects/ [uiuc.edu]. Those are just the things running at NCSA.

      Follow up with this [teragrid.org], as the science gateways for the TeraGrid are designed to let scientists worry more about the science part and less about the programming part. Part of the reason to build bigger supercomputers is to let non-programmers get work done as well. By having more cycles available, the TeraGrid can allow access for codes that are e

      • by Talderas (1212466)

        Obviously the solution is for every scientist to get a pet coder to write their simulations efficiently.

    • by guruevi (827432)

      I do, on a regular basis. I'm only the sysadmin so I do stuff like this with it:

      #!/bin/bash
      for (( i = 0 ; i <= 10000; i++ ))
      do
        qsub sleep10 > /dev/null
      done

      And sleep10 is a script that says:

      #!/bin/sh
      sleep 10

      Sometimes I have a bit more fun. If it gets cold in the winter I replace sleep10 with:

      #!/bin/sh
      cat /dev/random > /dev/null

      Oh, in case the scientists want to use it, they do more fun stuff like create gigantic arrays of data in MATLAB or transform images.

    • by OS24Ever (245667) *

      Disclaimer, I work for IBM, just me talking though not IBM.

      Typically you'll see some type of parallel-processing job that is scheduled across part or all of the cluster, depending on the data needed. Usually the Maui scheduler is used to schedule the jobs and whatnot.

      What these jobs run is basically a bunch of math; the end result depends on what math they asked it to do. Jobs can last several hours to several weeks.

      • I've gotten some really interesting responses... I knew there would be people who had a few things to say.

        Question: You guys don't actually still use OS/2 on desktops, do you? I work for a company that used it at an enterprise level up until about '01/'02.
  • Apparently, by 2020, personal computers will have the same processing power as the human brain (Kurzweil 2005). My personal computer has 2 cores, my friend's personal computer has 8 cores, so let's say 4 cores is an average. Cores double every, what, 18 months? In the next 12 years there are 144 months, which is 8 doublings. So what's that, 1024 cores? So this computer is, clearly, 195 times smarter than a human!

    Or maybe raw processing power just isn't a good indication of how near or far the Singularity
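For what it's worth, the back-of-envelope arithmetic in the comment above does check out:

```python
# Checking the tongue-in-cheek doubling arithmetic from the comment.
months = 12 * 12            # 12 years until 2020
doublings = months // 18    # one doubling per 18 months
cores = 4 * 2 ** doublings  # starting from a 4-core "average" PC
```

Which says nothing, of course, about whether core count is the right proxy for intelligence, as the comment itself concedes.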

    • Perhaps this is what the funding is actually for... Nah, that'd be giving the government way too much credit, right?
    • There are other limits to a system's power, like RAM bandwidth and size, and HD size and speed.

    • I'm familiar with the paper. He ballparks CPU-to-human-brain simulation, and I seem to remember somebody else placing it around 2032.

      The big issue often ignored is that neural networks are NETWORKS more than anything else; you can have as many transistors as you like, but if you cannot handle the ballpark number of interconnects (10^14?), with most moving data in parallel, it will be a very slow simulation. The brain is massively PARALLEL, so it can handle running as slowly as it does.

      CPU evolution greatly impacts estimates and

      • There are an extremely large number of applications out there for AI. Just think of a situation where a decision needs to be made quickly, accurately, and consistently. Everything from medicine to air travel would be touched by it.
    • Re: (Score:3, Interesting)

      by Surt (22457)

      2020 seems unlikely. A reasonably accurate real-time synaptic simulation can run maybe 100 neurons on a high end pc today, probably less. A human brain has about 100 billion neurons, so we're 1 billion times short in computation. Last time I checked, GPUs had not yet been used in neuron simulation, so I'll even give you that we may be 1000 times better off. That's still 1 million X improvement needed to match the brain, or roughly 20 more generations of computer hardware, at a generous 18 months, that l
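Spelling out the estimate above (all the inputs are the commenter's guesses, not measurements):

```python
import math

# If a PC can simulate ~100 neurons in real time, and GPUs buy a
# generous 1000x on top of that, how many 18-month hardware generations
# until ~100 billion neurons?
neurons_now = 100 * 1000                 # 100 neurons x assumed 1000x GPU boost
neurons_needed = 100_000_000_000         # ~human brain neuron count
shortfall = neurons_needed / neurons_now # factor still missing
generations = math.ceil(math.log2(shortfall))
years = generations * 1.5                # 18 months per generation
```

A million-fold shortfall is about 20 doublings, or roughly 30 years at 18 months each, which is why the comment puts 2020 well out of reach.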

      • by QuantumG (50515) *

        Kurzweil is of the opinion that the study of brain scanning leads to optimization of the algorithms used in the brain to run faster and better on digital computers. So when he says 2020, he means that the hardware will be commonly available to run these optimized algorithms at sufficient speed to reach human capabilities... and then he goes on to say that the algorithms will be ready by then, as there are all these practical uses for them before we even get to the point where we can combine them together into a wo

  • Didn't Blue Gene/L do nearly 500 TFlops sustained in 2007? Doubling that by 2011 seems a little... slow. Perhaps the architectural difference will have more substantial benefits in real world performance, but by the given numbers alone, it seems like a disappointing upgrade.
  • More crap code (Score:3, Insightful)

    by kramulous (977841) * on Wednesday September 03, 2008 @06:02PM (#24866539)
    Cool thing about the globally addressable petabyte. That way people writing really crappy code who don't bother thinking about their memory usage can just thrash away. And who cares about pipeline stalls?

    I find it funny how the people who have never been formally trained with writing in a language (Mathematics, and just science in general) write the best codes while the majority of the IT people I see write the most appalling code I've ever seen. I think it has something to do with the fact that the science people don't pretend to know everything and are much more willing to learn something new while the IT people already know everything.
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Cool thing about the globally addressable petabyte. That way people writing really crappy code who don't bother thinking about their memory usage can just thrash away. And who cares about pipeline stalls?

      I doubt they can just write crappy code. It's very unlikely that all this memory is on a single bus, so the more distant a core is from the memory it's addressing, the slower that access is.

      It's a little bit like putting a video card with 1GB of lightning-quick video RAM in your computer. That VRAM is

    • by Ilgaz (86384)

      The Cray FAQ mentions supercomputers running at 99% load all the time. I think they still don't have the luxury of wasting memory. It's just that the programs they run actually need/will need such massive memory.

      I understand your point, but I don't think they let "buy more RAM" idiots use such supercomputing power.

      Remember that Mathematica on OS X was the first 64-bit-enabled code on the PPC G5, since they (scientists) actually needed maxed-out G5s (8 GB and 16 GB on the Quad G5).

      • by kramulous (977841) *

        but I don't think they let "buy more RAM" idiots use such supercomputing power.

        There are a few of those idiots around here. They're infecting the system with their 'document classification' and are completely unwilling to acknowledge that there are other techniques for dealing with large dense (usually only 10% in these cases!) matrices. Hilarious when they start telling the linear and non-linear algebraic mathematicians that they don't understand the complexities.

        Here's a great example: finding various subsets of "1-2-3" in "1-2-3-2-4-5-1-7-6" (but gigabytes of the stuff stored in

    • I don't know where TFA got the "globally addressable PB". I think someone was misquoted.

      I can't find any mention of it in the NCSA webpages, and no shared memory system exists on this level, ccNUMA or otherwise (NASA Ames has a 4TB altix system, which is evidently the largest in the world that is publicly acknowledged).

      Software distributed shared memory hasn't really gone anywhere either, so I think someone was fantasizing when they wrote the article... globally accessible filesystems, sure, but shared mem

    • I find it funny how the people who have never been formally trained with writing in a language (Mathematics, and just science in general) write the best codes while the majority of the IT people I see write the most appalling code I've ever seen.

      Actually, most IT people don't have any formal training at all. Most of them are hacks who got into their jobs on the basis of family connections, a year at community college, time in a help desk (especially military helpdesks), or reading a couple of books. Most IT

      • I do want to add one thing as my post above may have sounded too harsh towards IT people... I was referring to them as hacks only in instances where they start writing huge applications or designing big databases without learning how to write code or do database design first. IT people often know much more than I do about keeping a desktop running or a network up, and for that I'm grateful.

        And there's nothing wrong with community college either, or even no degree, if you've made the effort to learn your sk

    • And this is where AI comes in. If I could make a suggestion, the computer would be used by programmers who are really lazy, not just lazy enough to solve repetitive tasks, but lazy enough to write programs that would write programs to solve repetitive tasks. Take your average adept programmer who says, "I am too lazy to grind coffee beans, pour water and all that jazz, I'll write a program that will manage the process for me." That programmer is lazy, but a really lazy programmer is the one who studies huma

    • by infolib (618234)

      I find it funny how the people who have never been formally trained with writing in a language (Mathematics, and just science in general) write the best codes

      Having seen the code floating around a couple of physics labs I can tell you that there's plenty of crap code. Heck, I've written some of it myself!

      Of course, some people are good at it (several because they're just so freaking smart), but in many cases you've got people who would rather focus on something else write a quick kludge to get data from instrument X into plot Y. Not much wrong with that really, until you have to debug/extend it...

  • Vista fast enough?
    Oh, I forgot, that would cost 200 peta-dollars,
    so maybe they won't use Vista.

  • http://en.wikipedia.org/wiki/IBM_Roadrunner [wikipedia.org] So wouldn't that make this the second?
    • Re: (Score:3, Interesting)

      by Phat_Tony (661117)
      Yeah, that was my thought. Roadrunner at Los Alamos sits at the top of the Top 500 list [top500.org] with an Rmax of 1,026,000. I don't know enough about benchmarks to distinguish between "Rmax" and "sustained petascale," but it is achieving over a petaflop. Maybe someone here can tell us more about Linpack [top500.org] vs. whatever they're using for this new one. I notice the article linked in the story mentions Roadrunner at the end, but without saying how it compares in speed. It doesn't seem to say by what specific measure this new comput
      • Re: (Score:2, Informative)

        Blue Waters will be the first to deliver a sustained petaflop on "real-world" applications, meaning various scientific simulations [uiuc.edu]. Specifically, the program solicitation [nsf.gov] required prospective vendors to explain how their proposed systems would sustain a petaflop on three specific types of simulations, one each in turbulence, lattice-gauge quantum chromodynamics, and molecular dynamics.

        Granted, Roadrunner was the first machine to deliver a petaflop on the Linpack benchmark [netlib.org] (though certainly IBM's own i

  • by peter303 (12292) on Wednesday September 03, 2008 @06:13PM (#24866649)
    I just saw the "The Measure of a Man" episode during the Star Trek Labor Day marathon. Data has a speed of 60 teraflops and 100 petabytes of storage. That used to seem large in the late 1980s. (The episode where Data goes on trial over whether he is a machine or sentient.)
    • by QuantumG (50515) *

      Demonstration of the triumph of software over hardware!

      I believe it was Minsky who said that a 486 could run a human level intelligence, if only we knew the algorithm, but I can't seem to remember where he said it. Maybe I need new RAM!

    • by vivin (671928)

      Bytes? I thought they used "Quads" as a measurement of storage...

    • Re: (Score:3, Interesting)

      by Bones3D_mac (324952)

      About a decade or so ago, I remember someone very crudely trying to ballpark the amount of storage that would be needed to contain the raw data of the entire human brain, complete with a lifetime of experience, at around 10 terabytes. Needless to say, that seems incredibly unlikely by today's standards.

      Even if something like this were possible (storage notwithstanding), the data itself would likely be unusable until we sufficiently understood just how our brains work with their own data enough to create a cr

      • by QuantumG (50515) *

        Meh, if you really want to throw teraflops at it, wait until we have enough processing power to simulate a human embryo growing to a fetus. That'll tell you a whole heck of a lot. From that you can use non-invasive NMRI to get data which you can infer structure from.. and if you actually understand that structure then you won't have to do any simulation, you can transcode it into something more appropriate for a digital computer. Basically, it all comes down to software because if you're just going to re

        • Researchers have already integrated rat neurons to fly a simulated aircraft. Let's just say the keyboard/monitor bandwidth is way too little for me.
        • Speaking of human rights violations, how about hooking a few terabytes of storage up to a newborn? With an appropriate connector, its developing brain should make use of the storage, and by studying that you can learn all sorts of nifty stuff. Of course, this will likely make you all squeamish, so let's say it's a baby monkey... or a mouse. Although it's not nearly as interesting.

          That actually isn't unrealistic. The recent "Future Intelligence" episode of the Science Channel's "NextWorld" program featured a brie

    • by kabocox (199019)

      Data has a speed of 60 Teraflops and 100 petabytes of storage.

      Data is just pure bloat then... there have been many other fictional AIs that fit in mere K. There are times when I think that we could have a 100 yottaflop, 100 googolflop, or 100 googolplexflop computer and still not have developed AI.

  • What their tech person's blood elf or tauren will look like?

  • It's said... (Score:3, Interesting)

    by jd (1658) <<moc.oohay> <ta> <kapimi>> on Wednesday September 03, 2008 @06:29PM (#24866861) Homepage Journal

    ...Apple used to use a Cray to design their new computers, whereas Seymour Cray used an Apple to design his.

    More compute power is nice, but only if the programs are making efficient use of it. MPI is not a particularly efficient method of message passing, and many implementations (such as MPICH) are horribly inefficient implementations. Operating systems aren't exactly well-designed for parallelism on this scale, with many benchtests putting TCP/IP-based communications ahead of shared memory on the same fripping node! TCP stacks are not exactly lightweight, and shared memory implies zero copy, so what's the problem?

    Network topologies and network architectures are also far more important than raw CPU power, as that is the critical point in any high-performance computing operation. Dolphinics is quoting 2.5 microsecond latencies, Infiniband is about 8 microseconds, and frankly these are far far too slow for modern CPUs. That's before you take into account that most of the benchmarks are based on ping-pong tests (minimal stack usage, no data) and not real-world usage. I know of no network architecture that provides hardware native reliable multicast, for example, despite the fact that most problem-spaces are single-data, most networks already provide multicast, and software-based reliable multicast has existed for a long time. If you want to slash latencies, you've also got to look at hypercube or butterfly topologies, fat-tree is vulnerable to congestion and cascading failures - it also has the worst-possible number of hops to a destination of almost any network. Fat-tree is also about the only one people use.

    There is a reason you're seeing Beowulf-like machines in the Top 500 - it's not because PCs are catching up to vector processors, it's because CPU count isn't the big bottleneck and superior designs will outperform merely larger designs. Even with the superior designs out there, though, I would consider them to be nowhere even remotely close to potential. They're superior only with respect to what's been there before, not with respect to where skillful and clueful engineers could take them. If these alternatives are so much better, then why is nobody using them? Firstly, most supercomputers go to the DoD and other Big Agencies, who have lots of money where their brains used to be. Secondly, nobody ever made headlines off having the world's most effective supercomputer. Thirdly, what vendor is going to supply Big Iron that will take longer to replace and won't generate the profit margins?

    (Me? Cynical?)
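On the topology point above: in a hypercube of 2^d nodes, the worst-case route is d hops, since a message can correct one differing address bit per hop, whereas a fat-tree route may have to climb to the root and back down. A minimal sketch:

```python
# Hop counts in a hypercube: the minimum route length between two nodes
# equals the Hamming distance of their node IDs (one hop flips one bit).

def hypercube_hops(src, dst):
    """Minimum hops between two hypercube nodes."""
    return bin(src ^ dst).count("1")

d = 6                       # a 2^6 = 64-node hypercube
worst = max(hypercube_hops(0, n) for n in range(2 ** d))
```

So a 64-node hypercube never needs more than 6 hops, and doubling the machine adds only one hop to the worst case, which is the scaling argument for such topologies.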

    • "That's before you take into account that most of the benchmarks are based on ping-pong tests (minimal stack usage, no data) and not real-world usage."

      Seems fine to me. I put all my new systems through the ping-pong test, sometimes i even win.

    • by Ilgaz (86384)

      I can easily say that Apple and Cray connection is a valid claim since a very high profile Cray guy confirms it on the Cray FAQ:

      http://www.spikynorman.dsl.pipex.com/CrayWWWStuff/Cfaqp3.html#TOC23 [pipex.com]

      The FAQ also explains why a Beowulf can't match a supercomputer for certain tasks.

      What makes me wonder is, what really happened to the "Connection Machine", which was a massive break from the Von Neumann architecture? It is like a plane compared to a car. How come they didn't evaluate such an invention?
      http://en.wikipedia.org [wikipedia.org]

      • by jd (1658)
        It's not the only "forgotten" architecture. The Transputer was nearly as revolutionary as the Connection Machine, and the AMULET range of CPUs were a rare example of asynchronous general-purpose processors. Processor-In-Memory architectures (which Cray played with for a bit) turned the entire Von Neumann architecture on its head. I'm sure there are other examples of unique, and forgotten, designs that warrant closer examination and which - if reimplemented using the same current limits of knowledge and tech
    • Re:It's said... (Score:4, Informative)

      by Bill Barth (49178) <bbarth.gmail@com> on Wednesday September 03, 2008 @07:36PM (#24867615)
      You could not be more wrong.

      Considering that we've got SDR IB with under 2 microseconds latency for the shortest hops (and ~3 for the longest), I think you need to go update your anti-cluster argument. :) The problems with congestion in fat trees have virtually nothing to do with latency. Yes, massive congestion will kill your latency numbers, but given that you don't get cascades and other failures causing congestion without fairly large bandwidth utilization, latency is the least of your worries at that point. Furthermore, the cascades you talk about also aren't common except in extremely oversubscribed networks or in the presence of malfunctioning hardware. We do our best to use properly functioning hardware and to have no more than 2:1 oversubscription (with our largest machine not being oversubscribed at all).

      MPICH ain't that bad (heck, MPICH2, even just its MPI-1 parts, might be considered to be pretty good by some). MPI as a standard for message passing is fine. I'd love to hear what you think is wrong with MPI and see some examples where another portable message-passing standard does consistently better. Though it's a bit like C or C++ or Perl in that there are lots of really bad ways to accomplish things in MPI and a handful of good ones. It's low-level enough that you need to know what you're doing. But if you believe anyone who tells you they have a way to make massively parallel programming easy, I've got a bridge you might be interested in.

      Finally, I don't know of much in the way of a "supercomputer" that's using TCP for its MPI traffic these days, so you can put that old saw out to pasture as well.

      • by jd (1658)
        MPICH isn't bad? Compared to what? LA-MPI should be more robust, OpenMPI is partially hand-tuned assembly and damn well should be faster. Commercial, specially-tuned MPIs should be even better. I see your bridge and raise you one Transputer grid. I prefer Occam's mobile processes and dynamic message passing. MPI can't even do collective operations AS a collective operation - it's invariably implemented as a "for" loop - which is exactly why I talk about the need for reliable multicast. You can do collective
        • by Bill Barth (49178)
          I don't see any data in your reply. There's a heck of a lot of "should" and "I think" and "possibly". There's a tiny bit of assembly in OpenMPI, but nothing that has anything to do with communication, just some atomic test-and-set code. So, I'm not sure how that's relevant. LAM is basically dead (as OpenMPI is a complete rewrite by basically the same team). And I didn't say MPICH wasn't bad, I said it wasn't _that_ bad. It's come a long way from when there was no competition. Besides, for IB, MVAPICH and MVAP
          • by Bill Barth (49178)
            (Damn me for forgetting to preview. There were paragraph breaks in the original, but not the HTML tags to make them show up!)
  • The amount of porn you can download with this thing? Isn't that the number one thing the computer has evolved to?
    • by AP31R0N (723649)

      With that kind of resolution you could recognize individual crab lice and watch them migrate from Jenna to Ron.

  • Don't worry (Score:3, Funny)

    by EEPROMS (889169) on Wednesday September 03, 2008 @06:32PM (#24866895)
    in 40 years some kid will laugh at your pathetic attempt at geek coolness when you mention Blue Waters and say "wow, you're old. I'm amazed anyone needed a warehouse just for one petaflop; even my Wango-matic game cube has 50 petaflops"
  • by Anonymous Coward

    Folding@Home easily trounces this puny supercomputer.

  • Isn't folding at home well into the Petaflop territory now? http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats [stanford.edu]
