$208 Million Petascale Computer Gets Green Light
coondoggie writes "The 200,000-processor-core system known as Blue Waters got the green light recently as the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications (NCSA) said they have finalized the contract with IBM to build the world's first sustained-petascale computational system.
Blue Waters is expected to deliver sustained performance of more than one petaflop on many real-world scientific and engineering applications. A petaflop equals about 1 quadrillion calculations per second. The processors will be coupled to more than a petabyte of memory and more than 10 petabytes of disk storage. All of that memory and storage will be globally addressable, meaning that processors will be able to share data from a single pool exceptionally quickly, researchers said. Blue Waters is supported by a $208 million grant from the National Science Foundation and will come online in 2011."
imagine... (Score:5, Funny)
$208 Million Petascale Computer Gets Green Light (Score:5, Funny)
Imagine having all that computer power, and not even knowing if it was switched on!
Re: (Score:1, Funny)
Too bad it wasn't a red light with all those geeks around... I know I'd be interested!
Re:$208 Million Petascale Computer Gets Green Ligh (Score:2)
I'm glad they've given it a green light.
Me too. It seems like Urbana sure learned their lesson about giving big, powerful computers red lights [slashfilm.com] about seven years ago. [wikipedia.org]
You know, that IS impressive but... (Score:2, Funny)
Re: (Score:3, Informative)
Re: (Score:2)
You know, that IS impressive but... Can it figure out how to brew the 'perfect' cup of coffee?
I think you meant tea.
No, I think Xaedalus meant the [girlgeniusonline.com] perfect cup [girlgeniusonline.com] of coffee [slashdot.org].
Fixed links: Re:You know, that IS impressive but.. (Score:2)
(That was SUPPOSED to be a preview while I got the links right. Here it is with the links fixed. View them in order...)
You know, that IS impressive but... Can it figure out how to brew the 'perfect' cup of coffee?
I think you meant tea.
No, I think Xaedalus meant the [girlgeniusonline.com] 'perfect' [girlgeniusonline.com] cup [girlgeniusonline.com] of coffee [girlgeniusonline.com].
Naive question... (Score:3, Interesting)
I think it's awesome, but are there any concrete advancements that can be attributed to having access to all this computing power?
Just wondering...
Re: (Score:2, Funny)
I mean, what do they actually use this for?
I think it has been designed to run IE8 beta 2.
Re: (Score:2)
Re:Naive question... (Score:5, Funny)
Come on now. Let's be serious. They're trying to play Crysis.
Re:Naive question... (Score:4, Interesting)
Did you know that a very credible FAQ mentions Apple purchased a Cray for manufacturing/design and someone actually saw them emulate MacOS on that monster?
http://www.spikynorman.dsl.pipex.com/CrayWWWStuff/Cfaqp3.html#TOC23 [pipex.com]
I bet they tried some games too :)
Re: (Score:2)
Did you know that a very credible FAQ mentions Apple purchased a Cray for manufacturing/design and someone actually saw them emulate MacOS on that monster?
When told that Apple bought a super computer to design their next Mac, Seymour Cray replied, "That's odd, I'm using a Mac to design my next supercomputer."
Re: (Score:2)
Cray and Steve Jobs are interestingly similar thinking people.
If you think about the fact that nobody (except armed guards and some top-clearance people) will actually see the supercomputer, and the guy even uses a Mac laptop to display a Macromedia-powered animation on its case, you can easily conclude that the two are very similar to each other. The thing you see on the machine is actually a Mac PowerBook http://www.spikynorman.dsl.pipex.com/CrayWWWStuff/Criscan/t3d_fr.jpg [pipex.com] . The poor thing displays a single animation all
Re: (Score:2, Funny)
I bet they tried some games too :)
Nonsense! Everyone knows there aren't any games for mac :P
Re: (Score:2)
I was about to add the famous "breakout, superbreakout" joke to my post, but I forgot the other names of the games.
Using a Mac myself and knowing how evil Mac moderators can be has nothing to do with it of course ;)
Re: (Score:2)
Oh Yeah? What about Photoshop?
Re: (Score:2)
Now now.. let's be totally fair.. They're trying to play Crysis on Vista!
Re:Naive question... (Score:4, Funny)
Re: (Score:2)
I thought this was to run the new web apps that Chrome is supposed to be designed for. Finally I'll have a useful desktop browser.
Re:Naive question... (Score:5, Interesting)
I don't use one myself, but I know people involved with supercomputers. They are used for large simulations. Often this comes down to solving large systems of linear equations, since finite-element methods need solutions to such systems at each inner step. The point is, the larger the computer, the larger the grid you can have. This means simulating a larger volume, or simulating the same volume in more detail (think, for example, of weather systems).
As for concrete advancements? I'm not in the biz, so I don't know, but I expect so. Apparently they're also used for stellar simulations, so I expect our knowledge of the universe has been advanced. I would be surprised if they haven't seen duty in global warming simulation too.
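The "large systems of linear equations" mentioned above can be made concrete with a toy sketch: a 1-D Poisson problem discretised on a grid and solved by Jacobi iteration. This is illustrative only (real codes use distributed sparse solvers); the grid size n is the knob a bigger machine lets you turn up.

```python
def solve_poisson_1d(n, iterations=20000):
    """Jacobi iteration for -u'' = 1 on (0, 1) with u(0) = u(1) = 0.

    Discretising on n interior grid points gives an n x n tridiagonal
    linear system; a finer grid (larger n) means a larger system,
    which is exactly why a bigger machine allows a bigger simulation.
    """
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)  # includes the two boundary points
    for _ in range(iterations):
        new = u[:]
        for i in range(1, n + 1):
            # Jacobi update: u_i = (u_{i-1} + u_{i+1} + h^2 * f_i) / 2
            new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * 1.0)
        u = new
    return u
```

For this constant right-hand side the converged answer matches the analytic solution u(x) = x(1-x)/2 at the grid points.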
Or (Score:3, Informative)
Simulating nuclear explosions.
Re: (Score:2)
The dudes and dudettes at CERN simulated the big bang on a cluster back in 2001, and were suspicious about claims of needing supercomputers for nuke sims.
Maybe they're playing Wargames. Trying to find a single scenario that justifies nukes, perhaps?
Re: (Score:2)
Remember nuclear explosions are not circular, they are elliptical.
Re: (Score:2)
Oddly, that makes perfect sense. XD
Re:Naive question... (Score:5, Informative)
Weather modeling comes to mind, both terrestrial and space.
rj
Re:Naive question... (Score:5, Informative)
These machines are used to work on simulations that involve aerodynamics and hydrodynamics, quantum electrodynamics (QED), or electromagnetohydrodynamics. All of these simulations require that a mathematical model is constructed from a high density mesh of data points (2048 ^ 3). Blocks of such points are allocated to individual processors. Because of this, each processor must be able to communicate at a high speed with its neighbours (up to 26 neighbours with a cubic mesh).
Usually, the actual individual calculations per element will take up less than a page of mathematical equations, but they require high precision, so the data values will be 64-bit floating-point quantities. A single element might require 20 or more variables. Thus the need for so many processors and high clock speeds.
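The "up to 26 neighbours" figure above is just the surrounding cells of a block in a 3-D decomposition. A sketch (the helper name and periodic wrap-around are assumptions for illustration, not from any real code):

```python
from itertools import product

def block_neighbours(block, mesh_shape):
    """Return the neighbouring blocks of `block` in a periodic 3-D
    process mesh. Every offset in {-1, 0, 1}^3 except (0, 0, 0) is a
    neighbour: 3**3 - 1 = 26 of them, matching the comment above."""
    bx, by, bz = block
    nx, ny, nz = mesh_shape
    neighbours = []
    for dx, dy, dz in product((-1, 0, 1), repeat=3):
        if (dx, dy, dz) == (0, 0, 0):
            continue  # skip the block itself
        neighbours.append(((bx + dx) % nx, (by + dy) % ny, (bz + dz) % nz))
    return neighbours
```

Each of those 26 pairs is a communication channel per timestep, which is why interconnect speed matters as much as FLOPS.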
Re:Naive question... (Score:5, Funny)
And quantum electroptical tomographics. See, I can make shit up, too...
Re: (Score:2)
Magnetohydrodynamics [wikipedia.org]
Electrohydrodynamics [wikipedia.org]
Re: (Score:2)
Yes, I know this is probably a very naive question, but has anyone here actually had the privilege of working on one of these things? I mean, what do they actually use this for?
The one application I know this computer is going to run is quantum Monte Carlo [wikipedia.org], which is an electronic-structure method. QMC is intrinsically parallel due to its stochastic nature, but the degree of parallelism involved here requires further breakdown of the algorithm. There are quite a few research groups putting effort into this.
Other applications, if I am not mistaken, are also meant to be highly parallelizable, possibly nearing the boundary of embarrassingly parallel [wikipedia.org] tasks. This is probably to make sure
Re: (Score:2)
In the case of QMC the parallelization is done with respect to the population of walkers. In the most common implementation of diffusion Monte Carlo, this population is allowed to vary, so the population on different nodes at any given time may be different, preventing 100% efficiency.
Then, the main problem in diffusion Monte Carlo is the need to set a global "reference energy" after every move of the walkers, which enforces synchronization across nodes and requires frequent communication. What to do about
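A toy version of the scheme described above, for the 1-D harmonic oscillator (illustrative only; the parameters, the branching cap, and the population-control rule are ad hoc choices, not from any production QMC code):

```python
import math
import random

def toy_dmc(n_target=300, n_steps=1500, dt=0.01, seed=1):
    """Toy diffusion Monte Carlo for V(x) = x^2 / 2 (exact E0 = 0.5).

    Walkers diffuse, then branch or die with weight exp(-(V - E_ref)*dt).
    After every move E_ref is adjusted from the *global* walker count --
    the synchronisation point the comment above describes.
    """
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
    e_ref = 0.0
    samples = []
    for _ in range(n_steps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))             # diffusion move
            w = math.exp(-(0.5 * x * x - e_ref) * dt)      # branching weight
            for _ in range(min(int(w + rng.random()), 3)): # stochastic round
                new.append(x)
        walkers = new or [0.0]
        # Population control: nudge the reference energy so the walker
        # count drifts back toward n_target (requires global communication).
        e_ref -= 0.1 * math.log(len(walkers) / n_target)
        samples.append(e_ref)
    # Average over the second half, after equilibration.
    return sum(samples[n_steps // 2:]) / (n_steps - n_steps // 2)
```

The averaged reference energy should settle near the exact ground-state energy of 0.5 (with stochastic and timestep bias).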
Re:Naive question... (Score:5, Informative)
Have you noticed that neither the USA nor Russia blows up a portion of the planet to test nuclear weapons anymore? Is it because the planet is so peaceful that further research is not required? Unfortunately, no.
These monsters can simulate a gigantic nuclear explosion at the molecular level.
Or, for peaceful purposes, they can simulate that New Orleans storm based on real-world data and pinpoint exactly what would happen.
Re: (Score:3, Funny)
Right.
That's why the City of New Orleans evacuated to Baton Rouge.
Re: (Score:2)
I remember FEMA claiming that they had predicted this would happen and reported it to the government, but the government didn't care.
My post sounded like you would use a supercomputer only to do evil things, so I tried to balance it with New Orleans. In fact, every nuclear explosion avoided as a result of supercomputer simulation is a positive thing in itself. They will keep stupidly designing/testing them anyway.
Re: (Score:2)
Think of the number of open tabs you could use in Google's new Chrome browser! With separate processes for each tab, they could have the entire internet open at once!
Re: (Score:2)
I mean, what do they actually use this for?
Very detailed solutions of nonlinear field equations. The kind of thing that aerothermodynamics deals with.
If someone comes out of the woodwork who happens to be a cross between Alan Turing and Kelly Johnson, maybe that person could use a machine like this to design a combined cycle turbo/ram/scramjet and then Richard Branson could use it to power a real spaceship, not something that's just called a spaceship.
It's not that crazy to imagine a talented individual could simulate all the expensive work on scram
Re:Naive question... (Score:5, Informative)
I'm working on a PhD in chemical engineering, and I do simulations. I occasionally use Lonestar and Ranger, which are clusters at TACC, the U. of Texas' supercomputing center. Lonestar is capable of around 60 TFLOPS and Ranger can do around 500-600 TFLOPS. A few users run really large jobs using thousands of cores for days at a stretch, but the majority of people use 128 or fewer cores for a few hours at a time.
My research group does materials research using density functional theory, which is an approximate way of solving the Schroedinger equation. Each of our jobs usually uses 16 or 32 cores and takes anywhere from 5 minutes to a couple of days to finish. Usually we are interested in looking at lots of slightly different cases, so we run dozens of jobs simultaneously.
The applications are pretty varied. Some topics we are working on -
1) Si nanowire growth
2) Si self-interstitial defects
3) Au cluster morphology
4) Catalysis by metal clusters
5) Properties of strained semiconductors
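The usage pattern described above, dozens of small independent jobs over slightly different cases, is a plain parameter sweep. A sketch (the `relax_structure` function is a made-up stand-in for one real DFT run, not an actual electronic-structure call; threads stand in for separate cluster jobs):

```python
from concurrent.futures import ThreadPoolExecutor

def relax_structure(strain):
    # Hypothetical stand-in for one DFT job (e.g. a 16-core run):
    # returns a toy energy curve with a minimum at 2% strain.
    return round((strain - 0.02) ** 2, 12)

def sweep(strains):
    # Each case is independent, so all of them run concurrently --
    # the "dozens of jobs simultaneously" pattern, not one giant
    # tightly coupled job.
    with ThreadPoolExecutor() as pool:
        return dict(zip(strains, pool.map(relax_structure, strains)))
```

Because the cases never talk to each other, this is the "embarrassingly parallel" end of the spectrum mentioned elsewhere in the thread.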
Re: (Score:2)
*looks for Amex Gold Charge Card and Amazon EC2 account credentials*
Re: (Score:2)
which is an approximate way of solving the Schroedinger equation
So is the cat dead or alive then ?
Re: (Score:3, Informative)
For a reasonable sample of the things that can be done on a supercomputer, start here: http://www.ncsa.uiuc.edu/Projects/ [uiuc.edu]. Those are just the things running at NCSA.
Follow up with this [teragrid.org], as the science gateways for the TeraGrid are designed to let scientists worry more about the science part and less about the programming part. Part of the reason to build bigger supercomputers is to let non-programmers get work done as well. By having more cycles available, the TeraGrid can allow access for codes that are e
Re: (Score:2)
Obviously the solution is for every scientist to get a pet coder to write their simulations efficiently.
Re: (Score:2)
I do, on a regular basis. I'm only the sysadmin so I do stuff like this with it:
#!/bin/bash
for (( i = 0 ; i <= 10000; i++ ))
do
qsub sleep10 > /dev/null
done
And sleep10 is a script that says:
#!/bin/sh
sleep 10
Sometimes I have a bit more fun. If it gets cold in the winter I replace sleep10 with:
#!/bin/sh
cat /dev/random > /dev/null
Oh, in case the scientists want to use it, they do more fun stuff like creating gigantic arrays of data in MATLAB or transforming images.
Re: (Score:2)
Disclaimer, I work for IBM, just me talking though not IBM.
Typically you'll see some type of parallel-processing job that is scheduled across part or all of the cluster, depending on the data needed. Usually the Maui scheduler is used to schedule the jobs and whatnot.
What these jobs run is basically a bunch of math. What the end result is depends on what math they asked it to do. Jobs can last several hours to several weeks.
Re: (Score:2)
Question: You guys don't actually still use OS/2 on desktops, do you? I work for a company that used it at the enterprise level up until about '01/'02.
How many human brains is that? (Score:2)
Apparently, by 2020, personal computers will have the same processing power as the human brain (Kurzweil 2005). My personal computer has 2 cores, my friend's personal computer has 8 cores, so let's say 4 cores is an average. Cores double every, what, 18 months? The next 12 years contain 144 months, which is 8 doublings. So what's that, 1024 cores? So this computer is, clearly, 195 times smarter than a human!
Or maybe raw processing power just isn't a good indication of how near or far the Singularity
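The back-of-the-envelope arithmetic above, written out (the 4-core average and the 18-month doubling period are the commenter's assumptions, not measurements):

```python
months = 12 * 12                 # the next 12 years
doublings = months // 18         # 8 doublings at 18 months each
cores_2020 = 4 * 2 ** doublings  # starting from a 4-core average
brains = 200_000 // cores_2020   # Blue Waters cores per "2020 brain"
print(doublings, cores_2020, brains)  # -> 8 1024 195
```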
Re: (Score:2)
Re: (Score:2)
There are other limits to the system's power, like RAM bandwidth and size, and HD size and speed.
Re: (Score:2)
Hehe, actually, the problem is a lack of *software*.
Re: (Score:2)
I'm familiar with the paper. He ballparks CPU-to-human simulation, and I seem to remember somebody else placing it around 2032.
The big issue often ignored is that neural networks are NETWORKS more than anything else: you can have as many transistors as you like, but if you cannot handle a ballpark number of interconnects (10^14?), with most moving data in parallel, it will be a very slow simulation. The brain is massively PARALLEL, which is how it gets away with running as slowly as it does.
CPU evolution greatly impacts estimates and
Re: (Score:2)
Re: (Score:3, Interesting)
2020 seems unlikely. A reasonably accurate real-time synaptic simulation can run maybe 100 neurons on a high-end PC today, probably less. A human brain has about 100 billion neurons, so we're a factor of 1 billion short in computation. Last time I checked, GPUs had not yet been used in neuron simulation, so I'll even grant that we may be 1000 times better off. That's still a 1,000,000x improvement needed to match the brain, or roughly 20 more generations of computer hardware; at a generous 18 months each, that l
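The estimate above, written out (all figures, 100 neurons per PC, the 1000x GPU allowance, 18-month generations, are the commenter's assumptions):

```python
import math

neurons_per_pc = 100
brain_neurons = 100_000_000_000
shortfall = brain_neurons // neurons_per_pc    # factor of 1 billion short
with_gpus = shortfall // 1000                  # grant 1000x from GPUs
generations = math.ceil(math.log2(with_gpus))  # doublings still needed
years = generations * 18 / 12                  # at 18 months per generation
print(shortfall, with_gpus, generations, years)  # -> 1000000000 1000000 20 30.0
```

So even with the generous GPU allowance, the arithmetic lands around 2038, not 2020.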
Re: (Score:2)
Kurzweil is of the opinion that study of brain scanning leads to optimization of the algorithms used in the brain to run faster and better on digital computers. So when he says 2020 he means that the hardware will be commonly available to run these optimized algorithms at sufficient speed to reach human capabilities.. and then he goes on to say that the algorithms will be ready by then as there's all these practical uses for them before we even get to the point where we can combine them together into a wo
A little slow? (Score:1)
Re: (Score:1)
More crap code (Score:3, Insightful)
I find it funny that the people who have never been formally trained in writing in a language (mathematics, and just science in general) write the best codes, while the majority of the IT people I see write the most appalling code I've ever seen. I think it has something to do with the fact that the science people don't pretend to know everything and are much more willing to learn something new, while the IT people already know everything.
Re: (Score:1, Insightful)
I doubt they can just write crappy code. It's very unlikely that all this memory is on a single bus, so the more distant a core is from the memory it's addressing, the slower that access is.
It's a little bit like putting a video card with 1GB of lightning-quick video RAM in your computer. That VRAM is
Re: (Score:2)
The Cray FAQ mentions supercomputers running at 99% load all the time. I think they still don't have the luxury of wasting memory. It is just that the programs they run actually need (or will need) such massive memory.
I understand your point, but I don't think they let 'buy more RAM' idiots use such supercomputing power.
Remember, Mathematica on OS X was the first 64-bit-enabled code on the PPC G5, since they (the scientists) actually needed maxed-out G5s (8 GB and 16 GB on the Quad G5).
Re: (Score:2)
but I don't think they let 'buy more RAM' idiots use such supercomputing power.
There are a few of those idiots around here. They're infecting the system with their 'document classification' and are completely unwilling to acknowledge that there are other techniques for dealing with large dense (usually only 10% in these cases!) matrices. Hilarious when they start telling the linear and non-linear algebraic mathematicians that they don't understand the complexities.
Here's a great example: finding various subsets of "1-2-3" in "1-2-3-2-4-5-1-7-6" (but gigabytes of the stuff stored in
1 PB of shared memory !?! (Score:2)
I don't know where TFA got the "globally addressable PB". I think someone was misquoted.
I can't find any mention of it in the NCSA webpages, and no shared memory system exists on this level, ccNUMA or otherwise (NASA Ames has a 4TB altix system, which is evidently the largest in the world that is publicly acknowledged).
Software distributed shared memory hasn't really gone anywhere either, so I think someone was fantasizing when they wrote the article... globally accessible filesystems, sure, but shared mem
Re: (Score:2)
Actually, most IT people don't have any formal training at all. Most of them are hacks who got into their jobs on the basis of family connections, a year at community college, time in a help desk (especially military helpdesks), or reading a couple of books. Most IT
Re: (Score:2)
I do want to add one thing as my post above may have sounded too harsh towards IT people... I was referring to them as hacks only in instances where they start writing huge applications or designing big databases without learning how to write code or do database design first. IT people often know much more than I do about keeping a desktop running or a network up, and for that I'm grateful.
And there's nothing wrong with community college either, or even no degree, if you've made the effort to learn your sk
call(More_crap_code) (Score:2)
And this is where AI comes in. If I could make a suggestion, the computer would be used by programmers who are really lazy, not just lazy enough to solve repetitive tasks, but lazy enough to write programs that would write programs to solve repetitive tasks. Take your average adept programmer who says, "I am too lazy to grind coffee beans, pour water and all that jazz, I'll write a program that will manage the process for me." That programmer is lazy, but a really lazy programmer is the one who studies huma
Re: (Score:2)
I find it funny how the people who have never been formally trained with writing in a language (Mathematics, and just science in general) write the best codes
Having seen the code floating around a couple of physics labs I can tell you that there's plenty of crap code. Heck, I've written some of it myself!
Of course, some people are good at it (several because they're just so freaking smart), but in many cases you've got people who would rather focus on something else write a quick kludge to get data from instrument X into plot Y. Not much wrong with that really, until you have to debug/extend it...
Re: (Score:2)
but will it run... (Score:2)
Vista fast enough?
Oh I forgot, that would cost 200 peta-dollars,
so maybe they won't use vista.
Not so sure it's the first (Score:1)
Re: (Score:3, Interesting)
Re: (Score:2, Informative)
Blue Waters will be the first to deliver a sustained petaflop on "real-world" applications, meaning various scientific simulations [uiuc.edu]. Specifically, the program solicitation [nsf.gov] required prospective vendors to explain how their proposed systems would sustain a petaflop on three specific types of simulations, one each in turbulence, lattice-gauge quantum chromodynamics, and molecular dynamics.
Granted, Roadrunner was the first machine to deliver a petaflop on the Linpack benchmark [netlib.org] (though certainly IBM's own i
Star Trek "Data" rated at 60 Teraflops (Score:5, Interesting)
Re: (Score:2)
Demonstration of the triumph of software over hardware!
I believe it was Minsky who said that a 486 could run a human level intelligence, if only we knew the algorithm, but I can't seem to remember where he said it. Maybe I need new RAM!
Re: (Score:2)
Bytes? I thought they used "Quads" as a measurement of storage...
Re: (Score:3, Interesting)
About a decade or so ago, I remember someone very crudely trying to ballpark the amount of storage that would be needed to contain the raw data of an entire human brain, complete with a lifetime of experience, at around 10 terabytes. Needless to say, that seems incredibly unlikely by today's standards.
Even if something like this were possible (storage not withstanding), the data itself would likely be unusable until we sufficiently understood just how our brains work with their own data enough to create a cr
Re: (Score:2)
Meh, if you really want to throw teraflops at it, wait until we have enough processing power to simulate a human embryo growing to a fetus. That'll tell you a whole heck of a lot. From that you can use non-invasive NMRI to get data which you can infer structure from.. and if you actually understand that structure then you won't have to do any simulation, you can transcode it into something more appropriate for a digital computer. Basically, it all comes down to software because if you're just going to re
Re: (Score:2)
Re: (Score:2)
Speaking of human rights violations, how about hooking a few terabytes of storage up to a newborn? With an appropriate connector, its developing brain should make use of the storage, and by studying that you could learn all sorts of nifty stuff. Of course, this will likely make you all squeamish, so let's say it's a baby monkey.. or a mouse. Although that's not nearly as interesting.
That actually isn't unrealistic. The recent "Future Intelligence" episode of the Science Channel's "NextWorld" program featured a brie
Re: (Score:2)
Data has a speed of 60 Teraflops and 100 petabytes of storage.
Data is just pure bloat then... there have been many other fictional AIs that fit in mere K. There are times when I think that we could have a 100 yottaflop, 100 googolflop, or 100 googolplexflop computer and still not have developed AI.
I wonder... (Score:2)
what their tech person's blood elf or tauren will look like?
It's said... (Score:3, Interesting)
...Apple used to use a Cray to design their new computers, whereas Seymour Cray used an Apple to design his.
More compute power is nice, but only if the programs are making efficient use of it. MPI is not a particularly efficient method of message passing, and many implementations (such as MPICH) are horribly inefficient implementations. Operating systems aren't exactly well-designed for parallelism on this scale, with many benchtests putting TCP/IP-based communications ahead of shared memory on the same fripping node! TCP stacks are not exactly lightweight, and shared memory implies zero copy, so what's the problem?
Network topologies and network architectures are also far more important than raw CPU power, as that is the critical point in any high-performance computing operation. Dolphinics is quoting 2.5 microsecond latencies, Infiniband is about 8 microseconds, and frankly these are far far too slow for modern CPUs. That's before you take into account that most of the benchmarks are based on ping-pong tests (minimal stack usage, no data) and not real-world usage. I know of no network architecture that provides hardware native reliable multicast, for example, despite the fact that most problem-spaces are single-data, most networks already provide multicast, and software-based reliable multicast has existed for a long time. If you want to slash latencies, you've also got to look at hypercube or butterfly topologies, fat-tree is vulnerable to congestion and cascading failures - it also has the worst-possible number of hops to a destination of almost any network. Fat-tree is also about the only one people use.
There is a reason you're seeing Beowulf-like machines in the Top 500 - it's not because PCs are catching up to vector processors, it's because CPU count isn't the big bottleneck and superior designs will outperform merely larger designs. Even with the superior designs out there, though, I would consider them to be nowhere even remotely close to potential. They're superior only with respect to what's been there before, not with respect to where skillful and clueful engineers could take them. If these alternatives are so much better, then why is nobody using them? Firstly, most supercomputers go to the DoD and other Big Agencies, who have lots of money where their brains used to be. Secondly, nobody ever made headlines off having the world's most effective supercomputer. Thirdly, what vendor is going to supply Big Iron that will take longer to replace and won't generate the profit margins?
(Me? Cynical?)
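The "ping-pong test" criticised above is easy to sketch: time an empty round-trip message and halve it. Here is a toy version between two threads over queues rather than a real interconnect, so the number reflects scheduler and queue overhead, not network hardware. It also shows why the methodology flatters: no payload, no contention.

```python
import queue
import threading
import time

def ping_pong_latency(n=2000):
    """One-way latency estimate: half the mean round-trip time of an
    empty message -- exactly the benchmark style criticised above."""
    ping, pong = queue.Queue(), queue.Queue()

    def echo():
        for _ in range(n):
            pong.put(ping.get())  # bounce each message straight back

    threading.Thread(target=echo, daemon=True).start()
    ping.put(b""); pong.get()     # warm-up round, not timed
    start = time.perf_counter()
    for _ in range(n - 1):
        ping.put(b"")
        pong.get()
    elapsed = time.perf_counter() - start
    return elapsed / (n - 1) / 2.0
```

A real-world benchmark would vary the message size and run many pairs at once to expose stack and congestion costs that this minimal-message version hides.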
Re: (Score:2)
"That's before you take into account that most of the benchmarks are based on ping-pong tests (minimal stack usage, no data) and not real-world usage."
Seems fine to me. I put all my new systems through the ping-pong test, sometimes i even win.
Re: (Score:2)
I can easily say that Apple and Cray connection is a valid claim since a very high profile Cray guy confirms it on the Cray FAQ:
http://www.spikynorman.dsl.pipex.com/CrayWWWStuff/Cfaqp3.html#TOC23 [pipex.com]
The FAQ also explains why a Beowulf can't match a supercomputer for certain tasks.
What makes me wonder is: what really happened to the "Connection Machine", which was a massive break from the von Neumann architecture? It is like a plane compared to a car. How come they didn't evaluate such an invention?
http://en.wikipedia.org [wikipedia.org]
Re: (Score:2)
Re:It's said... (Score:4, Informative)
Considering that we've got SDR IB with under 2 microseconds latency for the shortest hops (and ~3 for the longest), I think you need to go update your anti-cluster argument. :) The problems with congestion in fat trees have virtually nothing to do with latency. Yes, massive congestion will kill your latency numbers, but given that you don't get cascades and other failures causing congestion without fairly large bandwidth utilization, latency is the least of your worries at that point. Furthermore, the cascades you talk about also aren't common except in extremely oversubscribed networks or in the presence of malfunctioning hardware. We do our best to use properly functioning hardware and to have no more than 2:1 oversubscription (with our largest machine not being oversubscribed at all).
MPICH ain't that bad (heck, MPICH2, even just its MPI-1 parts, might be considered pretty good by some). MPI as a standard for message passing is fine. I'd love to hear what you think is wrong with MPI and see some examples where another portable message-passing standard does consistently better. Though it's a bit like C or C++ or Perl in that there are lots of really bad ways to accomplish things in MPI and a handful of good ones. It's low-level enough that you need to know what you're doing. But if you believe anyone who tells you they have a way to make massively parallel programming easy, I've got a bridge you might be interested in.
Finally, I don't know of much in the way of a "supercomputer" that's using TCP for its MPI traffic these days, so you can put that old saw out to pasture as well.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Can You Imagine (Score:2, Funny)
Re: (Score:2)
With that kind of resolution you could recognize individual crab lice and watch them migrate from Jenna to Ron.
Don't worry (Score:3, Funny)
Can't take another 40 (Score:5, Funny)
in 40 years some kid will laugh at your pathetic attempt at geek coolness when you mention the Bluewater and say "wow your old..."
Forty more years of the kids saying "your"? Kill me now! :)
F@H is already past 2.5 Petaflops (Score:2, Interesting)
Folding @ Home easily trounces this puny supercomputer.
folding at home? (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
Ya know there's a tv series now? Season 2 is about to start.
enjoy ;) [mininova.org]
Re: (Score:2)
I wouldn't be surprised if the actual delivery date was 2012 and some Govt. official said "IBM guys, there is a possibility that Terminator freaks and Mayan 2012 freaks combine, change it to 2011"
Look at what CERN had to deal with, and is still dealing with, on the Large Hadron Collider :)
Re:Yes, but the article doesn't address a few ques (Score:5, Funny)
It will not run 32 bit linux, so of course, the admins in charge are going to bitch about the lack of adobe flash support.
Re: (Score:2, Funny)
Nah but it will finally run Vista.
Re: (Score:2)
Well you don't /expect/ Aero with that do you? Sheesh... You'll need over 9000!!1! video cards for that.
Re: (Score:2)
Re: (Score:2)
I wonder how much power this sucker is gonna drain from the local power grid
Judging by the current machines [top500.org], where the most power-efficient one uses some 19.1 watts per core, and assuming further advances in efficiency (say down to 10 W per core by 2011), about 2 megawatts.
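A quick check of the per-core arithmetic (10 W per core is the assumed 2011 efficiency; the 200,000-core count comes from the summary):

```python
watts_per_core = 10      # assumed 2011 efficiency, per the comment
cores = 200_000          # Blue Waters core count from the summary
total_watts = watts_per_core * cores
print(total_watts / 1e6)  # -> 2.0 (megawatts, i.e. 2 MW, not gigawatts)
```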