$208 Million Petascale Computer Gets Green Light
coondoggie writes "The 200,000-processor-core system known as Blue Waters got the green light recently, as the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications (NCSA) announced they have finalized the contract with IBM to build the world's first sustained-petascale computational system.
Blue Waters is expected to deliver sustained performance of more than one petaflop on many real-world scientific and engineering applications. A petaflop equals about 1 quadrillion calculations per second. The processor cores will be coupled to more than a petabyte of memory and more than 10 petabytes of disk storage. All of that memory and storage will be globally addressable, meaning that processors will be able to share data from a single pool exceptionally quickly, researchers said. Blue Waters is supported by a $208 million grant from the National Science Foundation and will come online in 2011."
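For a rough sense of what a sustained petaflop and a petabyte of globally addressable memory mean in practice, here is a back-of-envelope sketch in Python (the dense-factorization workload and the problem size are illustrative assumptions, not details from the article):

# Back-of-envelope scale for a sustained petaflop (illustrative numbers only).
SUSTAINED_FLOPS = 1e15                     # 1 petaflop ~= 1 quadrillion ops/sec

# Example workload: LU-factoring a dense N x N matrix costs ~ (2/3) * N**3 flops.
N = 1_000_000                              # hypothetical problem size
flops_needed = (2.0 / 3.0) * N ** 3

seconds = flops_needed / SUSTAINED_FLOPS
print(f"~{seconds:.0f} s (~{seconds / 60:.0f} min) at a sustained petaflop")

# A petabyte of memory (1e15 bytes) holds ~1.25e14 doubles, i.e. a dense
# double-precision matrix roughly 11 million elements on a side.
print(f"Dense matrix fitting in 1 PB: N ~= {int((1e15 / 8) ** 0.5):,}")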
Naive question... (Score:3, Interesting)
I think it's awesome, but are there any concrete advancements that can be attributed to having access to all this computing power?
Just wondering...
Star Trek "Data" rated at 60 Teraflops (Score:5, Interesting)
Re:Naive question... (Score:5, Interesting)
I don't use one myself, but I know people involved with supercomputers. They are used for large simulations. Often this comes down to solving large systems of linear equations, since the inner step of a finite-element code requires solving such systems. The point is, the larger the computer, the larger the grid you can have. A larger grid means simulating a larger volume, or simulating the same volume in more detail (think, for example, of weather systems).
As for concrete advancements? I'm not in the biz, so I don't know, but I expect so. Apparently they're also used for stellar simulations, so I expect our knowledge of the universe has been advanced. I would be surprised if they haven't seen duty in global warming simulation too.
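To make the "large systems of linear equations" point concrete, here is a toy sketch of the kind of kernel that dominates many finite-element codes (the library choice and problem size are my own assumptions, not the poster's):

# Toy version of a simulation's inner kernel: solve A x = b for a sparse,
# symmetric positive-definite matrix (here a 1-D discrete Laplacian).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 10_000                                # grid points (tiny by supercomputer standards)
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

x, info = cg(A, b)                        # conjugate gradient; info == 0 means converged
print(info, np.linalg.norm(A @ x - b))

# On a petascale machine the same idea runs on grids with billions of cells,
# with the matrix and vectors partitioned across hundreds of thousands of cores.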
Re:Not so sure its the first (Score:3, Interesting)
It's said... (Score:3, Interesting)
...Apple used to use a Cray to design their new computers, whereas Seymour Cray used an Apple to design his.
More compute power is nice, but only if the programs make efficient use of it. MPI is not a particularly efficient method of message passing, and many implementations (such as MPICH) are horribly inefficient. Operating systems aren't exactly well designed for parallelism on this scale, with many benchmarks putting TCP/IP-based communication ahead of shared memory on the same fripping node! TCP stacks are not exactly lightweight, and shared memory implies zero copy, so what's the problem?
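A minimal latency micro-benchmark along these lines, using mpi4py (the tool choice and the run command are my assumptions; the parent names no specific code):

# Minimal MPI ping-pong latency test -- run with:  mpiexec -n 2 python pingpong.py
# Comparing two ranks on one node (shared memory) against two ranks on separate
# nodes (TCP or Infiniband) exposes exactly the on-node gap described above.
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
reps = 10_000
buf = bytearray(1)                 # 1-byte payload, so we measure mostly latency

comm.Barrier()
t0 = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = time.perf_counter()

if rank == 0:
    print(f"average round-trip: {(t1 - t0) / reps * 1e6:.2f} microseconds")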
Network topologies and network architectures also matter far more than raw CPU power, since the interconnect is the critical point in any high-performance computing operation. Dolphinics quotes 2.5-microsecond latencies, Infiniband is about 8 microseconds, and frankly these are far too slow for modern CPUs. That's before you take into account that most of the benchmarks are based on ping-pong tests (minimal stack usage, no data) and not real-world usage. I know of no network architecture that provides native hardware reliable multicast, for example, despite the fact that most problem spaces are single-data, most networks already provide multicast, and software-based reliable multicast has existed for a long time. If you want to slash latencies, you've also got to look at hypercube or butterfly topologies; fat-tree is vulnerable to congestion and cascading failures, and it also has nearly the worst hop count to a destination of any common topology. Fat-tree is also about the only one people use.
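As an illustration of why topology matters, here is a small sketch comparing worst-case hop counts (network diameter) for two idealized topologies; the figures are textbook formulas, not measurements of any real machine:

# Worst-case hop counts (diameter) for idealized topologies.
import math

def hypercube_diameter(n_nodes):
    # d-dimensional hypercube with 2**d nodes: at most d hops between any pair
    return math.ceil(math.log2(n_nodes))

def mesh2d_diameter(n_nodes):
    # square 2-D mesh: worst case crosses both dimensions end to end
    side = math.ceil(math.sqrt(n_nodes))
    return 2 * (side - 1)

for n in (1_024, 16_384, 262_144):
    print(f"{n:>7} nodes: hypercube {hypercube_diameter(n):>2} hops, "
          f"2-D mesh {mesh2d_diameter(n):>4} hops")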
There is a reason you're seeing Beowulf-like machines in the Top 500 - it's not because PCs are catching up to vector processors, it's because CPU count isn't the big bottleneck and superior designs will outperform merely larger ones. Even with the superior designs out there, though, I would consider them nowhere even remotely close to their potential. They're superior only with respect to what's been there before, not with respect to where skillful and clueful engineers could take them. So if these alternatives are so much better, why is nobody using them? Firstly, most supercomputers go to the DoD and other Big Agencies, who have lots of money where their brains used to be. Secondly, nobody ever made headlines off having the world's most effective supercomputer. Thirdly, what vendor is going to supply Big Iron that takes longer to replace and won't generate the same profit margins?
(Me? Cynical?)
F@H is already past 2.5 Petaflops (Score:2, Interesting)
Folding@home easily trounces this puny supercomputer.
Re:Naive question... (Score:4, Interesting)
Did you know that a very credible FAQ mentions that Apple purchased a Cray for manufacturing/design work, and that someone actually saw them emulate MacOS on that monster?
http://www.spikynorman.dsl.pipex.com/CrayWWWStuff/Cfaqp3.html#TOC23 [pipex.com]
I bet they tried some games too :)
Re:Star Trek "Data" rated at 60 Teraflops (Score:3, Interesting)
About a decade or so ago, I remember someone very crudely trying to ballpark the storage needed to contain the raw data of an entire human brain, complete with a lifetime of experience, at around 10 terabytes. Needless to say, that estimate seems incredibly unlikely by today's standards.
Even if something like this were possible (storage notwithstanding), the data itself would likely be unusable until we sufficiently understood how our brains work with their own data to create a crude simulation to act as an interpreter. And, even with that, it's probably safe to assume that each brain sampled will have highly unique methods of storage and recall, each requiring its own custom-built brain-simulation interpreter.
Somehow, I don't think we'll be seeing anything close to this happening within our lifetimes, short of violating our ethics regarding the rights of human life. Basically, something to the effect of strapping someone down while we inject their brain with nanobots designed to disassemble the brain one cell at a time, then emulate each cell that was just removed, until the entire brain has been replaced with a nanobot-driven substitute. (Only with a few added features to allow communication with external devices.)
Re:How many human brains is that? (Score:3, Interesting)
2020 seems unlikely. A reasonably accurate real-time synaptic simulation can run maybe 100 neurons on a high-end PC today, probably fewer. A human brain has about 100 billion neurons, so we're a billion times short on computation. Last time I checked, GPUs had not yet been used in neuron simulation, so I'll even grant that we may be 1,000 times better off than that. That still leaves a factor of a million needed to match the brain, or roughly 20 more generations of computer hardware; at a generous 18 months per generation, that's about 30 more years, which puts us at 2038.
I will be seriously surprised if an even vaguely accurate simulation of the human brain is running before 2050.
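The parent's arithmetic, spelled out as a small sketch (all inputs are the parent's rough guesses, not measured figures):

# Back-of-envelope scaling estimate from the parent comment (circa 2008).
import math

neurons_per_pc = 100              # real-time synaptic simulation on a high-end PC
neurons_in_brain = 100e9          # ~100 billion neurons
gpu_bonus = 1_000                 # generous credit for GPU acceleration

shortfall = neurons_in_brain / (neurons_per_pc * gpu_bonus)   # ~1e6
doublings = math.log2(shortfall)                              # ~20 generations
years = doublings * 1.5                                       # 18 months per generation

print(f"shortfall ~{shortfall:.0e}x, ~{doublings:.0f} doublings, "
      f"~{years:.0f} years -> around {2008 + round(years)}")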
Re:Naive question... (Score:1, Interesting)
In our lab (www.lcse.umn.edu) we use these types of systems for simulating stellar convection. Among our collaborators, the uses range from computational chemistry to geophysics.
Real world applications of that sort of research would include topics such as controlled fusion and tsunami prediction.