
Supercomputing Science

Half-Petaflop Supercomputer Deployed In Austin

SethJohnson writes "Thanks to a $59 million National Science Foundation grant, there's likely to be a new king of the High Performance Computing Top 500 list. The contender is Ranger, a behemoth of 15,744 quad-core AMD Opterons built by Sun and hosted at the University of Texas. Its peak processing power of 504 teraflops will be shared among over 500 researchers working across the even larger TeraGrid system. Although its expected lifespan is just four years, Ranger will provide 500 million processor hours to projects tackling societal grand challenges such as global climate change, water resource management, new energy sources, natural disasters, new materials and manufacturing processes, tissue and organ engineering, patient-specific medical therapies, and drug design."
This discussion has been archived. No new comments can be posted.

  • by EvanED ( 569694 ) <{evaned} {at} {gmail.com}> on Saturday February 23, 2008 @11:48PM (#22532052)
    Seems like the "processor hours" metric needs some adjustment to account for multi-core. Otherwise I could build one of these with 15,744 single-core processors and claim the same performance.

    Why are you associating processor-hours with performance anyway? You could hook up 15,744 286s and get the same number of processor-hours too. So why don't you complain about that?
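For reference, the summary's "500 million processor hours" figure can be checked with back-of-the-envelope arithmetic. This sketch assumes "processor" means socket rather than core, which is an interpretation, not something stated in the thread:

```python
# Back-of-the-envelope check of the "500 million processor hours" figure.
# Socket/core counts come from the summary; treating "processor" as a
# socket (not a core) is an assumption.
sockets = 15_744
cores_per_socket = 4
hours_in_4_years = 24 * 365 * 4    # ~35,040 hours, ignoring leap days

socket_hours = sockets * hours_in_4_years
core_hours = socket_hours * cores_per_socket

print(f"socket-hours over 4 years: {socket_hours:,}")   # ~552 million
print(f"core-hours over 4 years:   {core_hours:,}")     # ~2.2 billion
```

The socket-hours total lands close to the quoted 500 million (the gap is plausibly scheduled downtime), while counting cores would give roughly four times as much, which is the ambiguity the comment above is poking at.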
  • Re:Now We Know (Score:3, Insightful)

    by Chris Snook ( 872473 ) on Sunday February 24, 2008 @12:42AM (#22532338)
    If I had mod points, I'd mod this insightful, not funny. There are a lot of HPC projects that were planning to use Barcelona but were held back by the TLB bug. I'm sure anything approaching this magnitude already had a contract with AMD that includes guaranteed delivery dates and penalties, either directly or through the OEM. If you don't have a signed contract with AMD, or with someone who has one with AMD, you're going to have to wait in line.
  • by Anonymous Coward on Sunday February 24, 2008 @06:00AM (#22533670)
    Yes. This is how science works. What you and your whiny brethren call "doctrine", scientists tend to call "accepted theories". You always have had, and always will have, to argue a whole lot more if you're on the other team. You have a whole lot of people to convince, for a start.

    Who should get precedence: a medical researcher who is trying to prove that HIV does not cause AIDS, or a biological chemist who is looking for a cure for cancer?

    Besides which, the text says "global climate change", not "proving global warming".
  • by Anonymous Coward on Sunday February 24, 2008 @10:01AM (#22534584)
    Some fraction of this machine was originally supposed to be in production in May of last year (a requirement of the original request for proposals), but as far as I know it wasn't even accessible to friendly users until some time last fall. I don't understand how TACC, Sun, and/or AMD avoided getting hit with penalties from the NSF.
  • Re:4 year lifespan (Score:3, Insightful)

    by pimpimpim ( 811140 ) on Sunday February 24, 2008 @10:10AM (#22534648)
    Within four years the performance-per-watt ratio will have fallen so far behind the state of the art that it would make very little sense to keep the thing occupying valuable machine-room space and the working hours of the technical staff. It happens with all supercomputers; it's just Moore's law in practice. What I think is still a big issue is how hard it remains to get the hardware working correctly in parallel. Often half a year or more is lost debugging file system and network issues, which is a considerable chunk of the total effective lifespan. With all the multicores in the making, a sturdy parallel computing implementation is very much needed!
  • Re:actually... (Score:2, Insightful)

    by Anonymous Coward on Sunday February 24, 2008 @11:15AM (#22534998)
    BlueGene/L at LLNL already peaks significantly faster than Ranger. The only question is whether Ranger can post a =sustained= number that passes BG/L, and considering the difference in peak performance, that's unlikely. June is going to be an interesting list, as there could be quite a bit of shuffling at the top.

    You are also correct about the scalability of BG. If you look at last June's list and last November's list you'll see a big difference in performance for BG/L. That's entirely due to simply adding more racks.
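The peak-versus-sustained distinction above hinges on Rpeak, which is just multiplied-out hardware numbers. A rough reconstruction of Ranger's quoted 504 TFLOPS follows; the 2.0 GHz clock and 4 double-precision flops per cycle per core are assumptions chosen to be consistent with the summary's figures, not values stated in the thread:

```python
# Rough reconstruction of Ranger's 504 TFLOPS theoretical peak.
# Clock speed and flops/cycle are assumptions consistent with the
# summary's numbers, not figures from the discussion itself.
sockets = 15_744
cores = sockets * 4                # 62,976 cores in total
flops_per_cycle = 4                # assumed double-precision rate per core
clock_hz = 2.0e9                   # assumed 2.0 GHz clock

rpeak = cores * flops_per_cycle * clock_hz
print(f"Rpeak ~= {rpeak / 1e12:.0f} TFLOPS")  # ~504
```

Sustained LINPACK numbers (Rmax) are always some fraction of this, which is why the comment argues the peak gap makes passing BG/L on a sustained basis unlikely.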
  • by MaxShaw ( 1151993 ) on Sunday February 24, 2008 @11:33AM (#22535146)
    A loop where the counter can overflow is by definition not infinite. Here's an actual infinite loop for you to try:

    while (true) {
        // Do nothing
    }
    Call me back when your computer finishes this one.
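The overflow point in that last comment can be made concrete: a loop driven by a fixed-width counter eventually wraps around and terminates, unlike a true `while (true)` loop. A small sketch, simulating a 16-bit counter in Python (the masking trick is an illustration, not anything from the thread):

```python
# A seemingly endless loop that terminates: the counter wraps around
# to zero after 65,535 increments, simulating a 16-bit register.
i, steps = 1, 0
while i != 0:
    i = (i + 1) & 0xFFFF   # 16-bit wraparound
    steps += 1
print(f"terminated after {steps} increments")  # 65535
```

Contrast this with `while True: pass`, which has no counter to wrap and genuinely never finishes.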
