Cloud Supercomputing

1.21 PetaFLOPS (RPeak) Supercomputer Created With EC2 54

An anonymous reader writes "In honor of Doc Brown: Great Scott! Ars has an interesting article about a 1.21 PetaFLOPS (RPeak) supercomputer created on Amazon EC2 Spot Instances. According to HPC software company Cycle Computing's blog, it ran Professor Mark Thompson's research to find new, more efficient materials for solar cells. As Professor Thompson puts it: 'If the 20th century was the century of silicon materials, the 21st will be all organic. The question is how to find the right material without spending the entire 21st century looking for it.' El Reg points out the 'virty super's low cost.' Will the cloud democratize access to HPC for research?"

Comments Filter:
  • FTA (Score:5, Insightful)

    by Saethan ( 2725367 ) on Wednesday November 13, 2013 @02:37PM (#45415169)
    FTA:

    Megarun's compute resources cost $33,000 through the use of Amazon's low-cost spot instances, we're told, compared with the millions and millions of dollars you'd have to spend to buy an on-premises rig.

    Running somebody else's machines for 18 hours costs less than buying a machine that powerful for yourself to run 24/7...

    NEWS AT 11!

  • El Reg (Score:2, Insightful)

    by spike hay ( 534165 ) <`ku.em.etaloiv' `ta' `eci_ulb'> on Wednesday November 13, 2013 @02:39PM (#45415181) Homepage

    How about let's not use the anti-science mouthbreathers at the Register as a source.

  • HPC? (Score:5, Insightful)

    by NothingMore ( 943591 ) on Wednesday November 13, 2013 @02:44PM (#45415243)
    "Supercomputing applications tend to require cores to work in concert with each other, which is why IBM, Cray, and other companies have built incredibly fast interconnects. Cycle's work with the Amazon cloud has focused on HPC workloads without that requirement." While this is cool, can you really call something like this an HPC system if you are picking workloads that require little cross-node communication? The need for cross-node communication is pretty much the whole reason large-scale HPC machines like ORNL's Titan exist at all. Wouldn't this system be classified closer to HTC, since it targets workloads similar to those that could run on HTC Condor pools?
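
    The HTC-style pattern the parent describes — many independent tasks with no cross-node communication — can be sketched with a plain process pool. This is only an illustration of the workload shape, not Cycle's actual pipeline; `score_material` is a hypothetical stand-in for the real quantum-chemistry simulation:

    ```python
    # Minimal sketch of an embarrassingly parallel (HTC-style) workload:
    # every candidate material is scored independently, so tasks never
    # talk to each other and need no fast interconnect.
    from multiprocessing import Pool

    def score_material(candidate: int) -> float:
        # Stand-in for an expensive, fully independent simulation.
        return (candidate * 2654435761 % 1000) / 1000.0

    if __name__ == "__main__":
        candidates = range(1000)
        with Pool() as pool:
            # pool.map fans the tasks out; no inter-task communication.
            scores = pool.map(score_material, candidates)
        best = max(range(1000), key=lambda i: scores[i])
        print("best candidate:", best, "score:", scores[best])
    ```

    A tightly coupled HPC code (e.g. a fluid-dynamics solver) would instead exchange boundary data between ranks every timestep, which is exactly what this pattern avoids.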
  • Good but not great (Score:5, Insightful)

    by Enry ( 630 ) <enry.wayga@net> on Wednesday November 13, 2013 @03:01PM (#45415427) Journal

    So this ran for 18 hours, or about $1800/hour. That gives you just under $44,000 per day, or $16 million for a year.

    Give me $16 million a year and I can build you a very kick-butt cluster - the one I'm just finishing up is 5000 cores at about $3 million.

    EC2 is great if your needs are small and intermittent. But if you're part of a larger organization that has continual HPC needs, you're going to be better off building it yourself for a while.
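
    Using the figures from this comment, the break-even point between renting spot capacity and buying the cluster works out roughly as follows. This is a back-of-the-envelope sketch that ignores power, staffing, depreciation, and the fact that the $3M cluster has far fewer cores than the spot run:

    ```python
    # Rent-vs-buy break-even, using the numbers quoted in the comments:
    # the $33,000 / 18-hour spot run, and a $3M on-premises cluster.
    SPOT_RATE = 33_000 / 18        # ~ $1,833 per hour
    CLUSTER_COST = 3_000_000       # quoted 5000-core cluster build-out
    HOURS_PER_YEAR = 24 * 365

    yearly_spot = SPOT_RATE * HOURS_PER_YEAR
    break_even_hours = CLUSTER_COST / SPOT_RATE

    print(f"Spot cost for a year of continuous use: ${yearly_spot:,.0f}")
    print(f"Spot hours that equal the cluster price: {break_even_hours:,.0f}")
    ```

    Continuous spot use at that rate costs about $16M per year, and the $3M cluster pays for itself after roughly 1,600 hours (about ten weeks) of equivalent spot usage — which is the commenter's point: intermittent needs favor the cloud, continual needs favor owning.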

  • by Yohahn ( 8680 ) on Wednesday November 13, 2013 @03:57PM (#45415995)

    The problem is that in a number of cases a researcher could easily use HTC, but they follow the fashion of HPC, using more specialized resources than necessary.
    Don't get me wrong, there are a number of cases where HPC makes sense, but usually what you need is a large amount of memory, or a large amount of processors.
    HPC only makes sense where you need both.
