Supercomputing Technology

$208 Million Petascale Computer Gets Green Light

coondoggie writes "The 200,000-processor-core system known as Blue Waters got the green light recently, as the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications (NCSA) finalized the contract with IBM to build the world's first sustained-petascale computational system. Blue Waters is expected to deliver sustained performance of more than one petaflop on many real-world scientific and engineering applications; a petaflop equals about 1 quadrillion calculations per second. The processor cores will be coupled to more than a petabyte of memory and more than 10 petabytes of disk storage. All of that memory and storage will be globally addressable, meaning that processors will be able to share data from a single pool exceptionally quickly, researchers said. Blue Waters is supported by a $208 million grant from the National Science Foundation and will come online in 2011."
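
A quick back-of-the-envelope check on those figures (a sketch only, in Python; the 200,000-core count and the one-petaflop sustained target come from the summary, and the per-core rate is just the implied average):

# Rough sanity check of the figures quoted in the summary above.
SUSTAINED_FLOPS = 1e15      # one petaflop, roughly 1 quadrillion operations per second
CORES = 200_000             # processor-core count quoted for Blue Waters

per_core = SUSTAINED_FLOPS / CORES
print(f"Implied sustained rate per core: {per_core / 1e9:.1f} GFLOPS")
# -> about 5 GFLOPS per core, sustained on real applications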
  • Naive question... (Score:3, Interesting)

    by religious freak ( 1005821 ) on Wednesday September 03, 2008 @06:56PM (#24866475)
    Yes, I know this is probably a very naive question, but has anyone here actually had the privilege of working on one of these things? I mean, what do they actually use this for?

    I think it's awesome, but are there any concrete advancements that can be attributed to having access to all this computing power?

    Just wondering...
  • by peter303 ( 12292 ) on Wednesday September 03, 2008 @07:13PM (#24866649)
    I just saw "The Measure of a Man" episode during the Star Trek Labor Day marathon. Data is rated at 60 teraflops of processing speed and 100 petabytes of storage. That used to seem large in the late 1980s. (It's the episode where Data goes on trial over whether he is a machine or sentient.)
  • Re:Naive question... (Score:5, Interesting)

    by serviscope_minor ( 664417 ) on Wednesday September 03, 2008 @07:16PM (#24866675) Journal

    I don't use one myself, but I know people involved with supercomputers. They are used for large simulations. Often this comes down to solving large systems of linear equations, since finite-element methods need solutions to these large equation systems at each inner step. The point is, the larger the computer, the larger the grid you can have. That means simulating a larger volume, or simulating the same volume in more detail (think, for example, of weather systems).

    As for concrete advancements? I'm not in the biz, so I don't know, but I expect so. Apparently they're also used for stellar simulations, so I expect our knowledge of the universe has been advanced. I would be surprised if they haven't seen duty in global-warming simulation too.
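
    As a minimal, single-node illustration of the kind of solve described above, here is a sketch (assuming NumPy and SciPy) that discretizes a 1D Poisson problem into a sparse linear system A x = b; production codes distribute vastly larger versions of this across thousands of nodes.

    # Sketch: a discretized PDE turned into a sparse linear solve.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve

    n = 1000                              # grid points (tiny by HPC standards)
    h = 1.0 / (n + 1)                     # grid spacing

    # Standard second-difference (tridiagonal) matrix for -u'' = f with zero boundaries
    A = diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n)) / h**2
    b = np.ones(n)                        # constant forcing term

    x = spsolve(A.tocsc(), b)             # direct sparse solve on one node
    print("max of solution:", x.max())    # analytic answer is 1/8 at the midpoint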

  • by Phat_Tony ( 661117 ) on Wednesday September 03, 2008 @07:27PM (#24866835)
    Yeah, that was my thought. Roadrunner at Los Alamos sits at the top of the Top 500 list [top500.org] with an Rmax of 1,026,000 gigaflops. I don't know enough about benchmarks to distinguish between "Rmax" and "sustained petascale," but it is achieving over a petaflop. Maybe someone here can tell us more about Linpack [top500.org] vs. whatever they're using for this new one. I notice the article linked in the story mentions Roadrunner at the end, but without saying how it compares in speed. It doesn't seem to say by what specific measure this new computer's speed surpasses a petaflop.
  • It's said... (Score:3, Interesting)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Wednesday September 03, 2008 @07:29PM (#24866861) Homepage Journal

    ...Apple used to use a Cray to design their new computers, whereas Seymour Cray used an Apple to design his.

    More compute power is nice, but only if the programs are making efficient use of it. MPI is not a particularly efficient method of message passing, and many implementations (such as MPICH) are horribly inefficient. Operating systems aren't exactly well designed for parallelism on this scale, with many benchtests putting TCP/IP-based communications ahead of shared memory on the same fripping node! TCP stacks are not exactly lightweight, and shared memory implies zero copy, so what's the problem?
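
    For what it's worth, a minimal sketch of the zero-copy idea referred to above: two processes on one node mapping the same buffer, so "sending" data is just writing to memory the other side can already see. This assumes Python 3.8+ (multiprocessing.shared_memory) and NumPy; real MPI stacks do this inside their shared-memory transports rather than by hand.

    # Two processes sharing one buffer with no copies.
    import numpy as np
    from multiprocessing import Process, shared_memory

    def worker(name, n):
        shm = shared_memory.SharedMemory(name=name)        # attach to the existing block
        view = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
        view *= 2.0                                        # modify the data in place
        shm.close()

    if __name__ == "__main__":
        n = 1_000_000
        shm = shared_memory.SharedMemory(create=True, size=n * 8)
        data = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
        data[:] = 1.0

        p = Process(target=worker, args=(shm.name, n))
        p.start()
        p.join()

        print("first element after worker ran:", data[0])  # 2.0, and nothing was copied
        shm.close()
        shm.unlink()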

    Network topologies and network architectures are also far more important than raw CPU power, as the interconnect is the critical point in any high-performance computing operation. Dolphinics quotes 2.5-microsecond latencies, InfiniBand is about 8 microseconds, and frankly these are far, far too slow for modern CPUs. That's before you take into account that most of the benchmarks are based on ping-pong tests (minimal stack usage, no data) and not real-world usage. I know of no network architecture that provides hardware-native reliable multicast, for example, despite the fact that most problem spaces are single-data, most networks already provide multicast, and software-based reliable multicast has existed for a long time. If you want to slash latencies, you've also got to look at hypercube or butterfly topologies; fat-tree is vulnerable to congestion and cascading failures, and it has close to the worst possible hop count to a destination of any common topology. Fat-tree is also about the only one people use.
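
    For reference, latency numbers like those usually come from ping-pong microbenchmarks along the lines of the sketch below (assuming mpi4py on top of an installed MPI; run with "mpirun -n 2 python pingpong.py"): rank 0 bounces a near-empty message off rank 1, and half the average round-trip time is reported as the latency.

    # Ping-pong latency microbenchmark sketch.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    reps = 10_000
    buf = np.zeros(1, dtype=np.uint8)      # near-empty payload, as in typical tests

    comm.Barrier()
    start = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = MPI.Wtime() - start

    if rank == 0:
        print(f"one-way latency: {elapsed / reps / 2 * 1e6:.2f} microseconds")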

    There is a reason you're seeing Beowulf-like machines in the Top 500: it's not because PCs are catching up to vector processors, it's because CPU count isn't the big bottleneck and superior designs will outperform merely larger ones. Even with the superior designs out there, though, I would consider them nowhere even remotely close to their potential. They're superior only with respect to what came before, not with respect to where skillful and clueful engineers could take them. If these alternatives are so much better, then why is nobody using them? Firstly, most supercomputers go to the DoD and other Big Agencies, who have lots of money where their brains used to be. Secondly, nobody ever made headlines off having the world's most effective supercomputer. Thirdly, what vendor is going to supply Big Iron that will take longer to replace and won't generate the same profit margins?

    (Me? Cynical?)

  • by Anonymous Coward on Wednesday September 03, 2008 @07:40PM (#24867017)

    Folding@Home easily trounces this puny supercomputer.

  • Re:Naive question... (Score:4, Interesting)

    by Ilgaz ( 86384 ) on Wednesday September 03, 2008 @07:41PM (#24867035) Homepage

    Did you know that a very credible FAQ mentions that Apple purchased a Cray for manufacturing/design work, and that someone actually saw them emulate the Mac OS on that monster?

    http://www.spikynorman.dsl.pipex.com/CrayWWWStuff/Cfaqp3.html#TOC23 [pipex.com]

    I bet they tried some games too :)

  • by Bones3D_mac ( 324952 ) on Wednesday September 03, 2008 @07:46PM (#24867095)

    About a decade or so ago, I remember someone very crudely ballparking the amount of storage needed to contain the raw data of an entire human brain, complete with a lifetime of experience, at around 10 terabytes. Needless to say, that estimate seems incredibly unlikely by today's standards.

    Even if something like this were possible (storage notwithstanding), the data itself would likely be unusable until we understood how our brains work with their own data well enough to create a crude simulation to act as an interpreter. And even with that, it's probably safe to assume that each brain sampled would have highly individual methods of storage and recall, each requiring its own custom-built brain-simulation interpreter.

    Somehow, I don't think we'll be seeing anything close to this happening within our lifetimes short of violating our ethics regarding the rights of human life. Basically, something to the effect of strapping someone down while we inject their brain with nanobots designed to disassemble the brain one cell at a time, and then emulate the cell that was just removed, until the entire brain has been replaced with a nanobot driven substitute. (Only with a few added features to allow communication with external devices.)

  • by Surt ( 22457 ) on Wednesday September 03, 2008 @09:17PM (#24868001) Homepage Journal

    2020 seems unlikely. A reasonably accurate real-time synaptic simulation can run maybe 100 neurons on a high-end PC today, probably fewer. A human brain has about 100 billion neurons, so we're a factor of a billion short in computation. Last time I checked, GPUs had not yet been used in neuron simulation, so I'll even grant that we may be 1,000 times better off than that. That still leaves a 1,000,000x improvement needed to match the brain, or roughly 20 more generations of computer hardware; at a generous 18 months per generation, that puts us 30 years out, at 2038.

    I will be seriously surprised if an even vaguely accurate simulation of the human brain is running before 2050.
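
    The arithmetic above, worked through under the same assumptions the comment states (a roughly 10^6x compute shortfall and a performance doubling every 18 months):

    # Worked version of the estimate in the parent comment.
    import math

    shortfall = 1_000_000                  # 10^9 gap, minus a generous 10^3 for GPUs
    doublings = math.log2(shortfall)       # ~19.9, i.e. about 20 hardware generations
    years = doublings * 1.5                # 18 months per generation
    print(f"doublings needed: {doublings:.1f}")
    print(f"years at 18 months each: {years:.0f}")   # ~30 years, i.e. around 2038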

  • Re:Naive question... (Score:1, Interesting)

    by Anonymous Coward on Wednesday September 03, 2008 @11:27PM (#24869129)

    In our lab (www.lcse.umn.edu), we use these types of systems for simulating stellar convection. Amongst our collaborators, the uses range from computational chemistry to geophysics.

    Real world applications of that sort of research would include topics such as controlled fusion and tsunami prediction.

           
