Operating Systems Supercomputing Technology

Virtualizing a Supercomputer (57 comments)

bridges writes "The V3VEE project has announced the release of version 1.2 of the Palacios virtual machine monitor following the successful testing of Palacios on 4096 nodes of the Sandia Red Storm supercomputer, the 17th-fastest in the world. The added overhead of virtualization is often a show-stopper, but the researchers observed less than 5% overhead for two real, communication-intensive applications running in a virtual machine on Red Storm. Palacios 1.2 supports virtualization of both desktop x86 hardware and Cray XT supercomputers using either AMD SVM or Intel VT hardware virtualization extensions, and is an active open source OS research platform supporting projects at multiple institutions. Palacios is being jointly developed by researchers at Northwestern University, the University of New Mexico, and Sandia National Labs." The ACM's writeup has more details of the work at Sandia.
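
As a rough illustration of how an overhead figure like that 5% might be measured, here is a minimal MPI sketch in C. It is a hypothetical example, not the benchmark actually used at Sandia: run once natively and once inside the virtual machine, the ratio of the reported wall times approximates the virtualization overhead on the communication path.

/* Hypothetical overhead micro-benchmark (not the Sandia benchmark):
 * time a communication-heavy kernel, since the interconnect is where
 * virtualization overhead tends to show up. Compile with an MPI C
 * compiler, e.g. mpicc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    double local = (double)rank, global = 0.0;

    MPI_Barrier(MPI_COMM_WORLD);          /* start everyone together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("%d allreduces took %.3f s\n", iters, t1 - t0);

    MPI_Finalize();
    return 0;
}

If the native run takes 10.0 s and the virtualized run 10.4 s, that is 4% overhead, in line with the sub-5% figure reported.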
This discussion has been archived. No new comments can be posted.

  • Re:Why? (Score:1, Interesting)

    by Anonymous Coward on Monday February 08, 2010 @09:28PM (#31067812)

    Perhaps those 5 nodes only cost 50k.

    How much would it cost to rewrite your one-of-a-kind software and then retest and verify it? There are other costs here that they aren't letting us in on.

  • by Barny ( 103770 ) on Monday February 08, 2010 @09:32PM (#31067832) Journal

    Well, I'm not sure how good they are now, but back when I studied at uni we examined a few supercomputer clusters, and the rule of thumb in most cases was that one CPU core per node was stuck doing IO for that node anyway. This was all before AMD's move to HyperTransport, though, so it may be much different for them now.

    The point is that the loss was a constant ratio: it wouldn't get worse with more nodes, it was always x nodes lost per y nodes, just as it is here. Just add more nodes :)

    A worse problem would be if it were x^2 nodes lost per y nodes; then you're just throwing away money by adding more.
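
    A quick numerical sketch of the difference between those two scaling models; the 1-in-8 loss ratio and the crossover point are illustrative assumptions, not figures from the comment:

    /* Illustrative comparison of linear vs. quadratic node loss.
     * Linear: a constant fraction (here 1 in 8) is lost per node.
     * Quadratic: loss grows with n^2, chosen to cross over at 4096. */
    #include <stdio.h>

    int main(void)
    {
        for (int n = 1024; n <= 16384; n *= 2) {
            double linear_lost    = n / 8.0;
            double quadratic_lost = (double)n * n / (8.0 * 4096.0);
            printf("%6d nodes: linear loses %7.0f, quadratic loses %7.0f\n",
                   n, linear_lost, quadratic_lost);
        }
        return 0;
    }

    Under the linear model the lost fraction stays at one-eighth no matter how big the machine gets; under the quadratic model, past the crossover every node you add costs more capacity than it contributes.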

  • OSS ftw. (Score:2, Interesting)

    by Asadullah Ahmad ( 1608869 ) on Monday February 08, 2010 @09:32PM (#31067838)

    It is really pleasant to see more and more OSS projects being deployed at the national level and in large infrastructure.

    Hopefully some of the less greedy companies that benefit from such projects will start paying the volunteer developers. But then again, I have found that a lot of the time, if you are doing something as a hobby/interest/challenge rather than because you were employed to do it, the outcome will be more refined and efficient. Though I have yet to experience the latter first hand.

  • Re:Why? (Score:4, Interesting)

    by Spazed ( 1013981 ) on Monday February 08, 2010 @09:34PM (#31067854)
    Most of them would be running an application written in C/C++ or some other low-level language with threading. The whole advantage of supercomputers isn't that they have an absurd GHz rating, but that they have an insane number of cores. This could be useful for testing how a network of desktop computers would behave, which, from the summary, sounds like what they are doing.

    TL;DR: Normal desktop software doesn't run faster on a supercomputer than on your four-year-old laptop.
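
    The formal version of this is Amdahl's law: with serial fraction s, the speedup on n cores is 1 / (s + (1 - s)/n). A small C sketch, using illustrative serial fractions of 95% (desktop-style) and 5% (HPC-style):

    #include <stdio.h>

    /* Amdahl's law: speedup of a program with the given serial
     * fraction when spread across the given number of cores. */
    static double amdahl(double serial_fraction, int cores)
    {
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
    }

    int main(void)
    {
        const int cores[] = { 2, 8, 4096 };
        for (int i = 0; i < 3; i++)
            printf("%4d cores: desktop-style x%.2f, HPC-style x%.1f\n",
                   cores[i], amdahl(0.95, cores[i]), amdahl(0.05, cores[i]));
        return 0;
    }

    At 4096 cores the 95%-serial program barely reaches x1.05, while the 5%-serial one approaches x20, which is why core count only helps code written to use it.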
  • Re:Cool. (Score:4, Interesting)

    by TubeSteak ( 669689 ) on Monday February 08, 2010 @10:01PM (#31068014) Journal

    Now we'll never need to build another expensive supercomputer. We'll just "virtualize" them on cheap desktops.

    I think you've got it backwards.
    Now we're virtualizing cheap desktops on supercomputers.

    What they're doing only makes sense if 5% of 4096 nodes* is cheaper than coding your app to run natively on the supercomputer.
    As with really big hard drives, when you get up to supercomputer levels of performance, 5% is a lot to give away.

    *Anyone know exactly what a node entails?
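
    For scale, that 5% works out to 0.05 × 4096 ≈ 205 nodes' worth of capacity handed to the hypervisor, which is the cost being weighed against porting the application to run natively.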

  • not a good idea. (Score:1, Interesting)

    by Anonymous Coward on Monday February 08, 2010 @10:14PM (#31068062)

    Virtualizing a supercomputer is never the correct solution. Supercomputers by their nature already have a system for managing lesser processes. That system could be extended, rather than adding a second, virtual management layer that runs in parallel with the existing one and burdens it with maintaining yet another running process.
