Microsoft Supercomputing Windows Hardware

Fastest-Ever Windows HPC Cluster

An anonymous reader links to an eWeek story which says that Microsoft's "fastest-yet homegrown supercomputer, running the U.S. company's new Windows HPC Server 2008, debuted in the top 25 of the world's top 500 fastest supercomputers, as tested and operated by the National Center for Supercomputing Applications. ... Most of the cores were made up of Intel Xeon quad-core chips. Storage for the system was about 6 terabytes," and asks "I wonder how the uptime compares? When machines scale to this size, they tend to quirk out in weird ways."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by MrMr ( 219533 ) on Tuesday June 24, 2008 @11:55AM (#23918325)
    Sure, just pop in a Rocks CD and hit the power switch. That will format the hard drives for you as well.
  • by mpapet ( 761907 ) on Tuesday June 24, 2008 @12:04PM (#23918541) Homepage

    and I have a very hard time believing most of the claims of fact in this story.

    "When we deployed Windows on our cluster, which has more than 1,000 nodes, we went from bare metal to running the Linpack benchmark programs in just four hours,"

    Hmmm. And what installer was this? Is it available commercially? How much is the license for the version with this mythical four-hour installer?

    "The performance of Windows HPC Server 2008 has yielded efficiencies that are among the highest we've seen for this class of machine," Pennington said.

    What "class" would that be? I imagine it would explicitly exclude Free clusters.

    One should question whether any institution or research project is using its grant money wisely, given the amount of money required to fulfill Microsoft's licensing requirements.

    Furthermore, if research projects are actually considering wasting their grant dollars on Microsoft licenses, then the outlook for American R&D is grim.

  • by saleenS281 ( 859657 ) on Tuesday June 24, 2008 @12:24PM (#23919039) Homepage
    So basically you have no facts, but you're writing them off as idiots because they used the MS package. Never mind that they might be saving money in the long run by paying fewer people to administer it, because the MS tools get the job done. Or perhaps that they don't have to spend months tweaking things, because MS has assigned them resources to do this. Let's just assume they're idiots and are wasting money, because if MS is involved, that MUST be it!!!11
  • by backwardMechanic ( 959818 ) on Tuesday June 24, 2008 @12:35PM (#23919335) Homepage
    Compared to?

    Last time I checked, the major alternative was free. The expensive part is finding someone who knows how to specify the hardware and set it up. That must be even harder for Windows, given the number of previous successful installs.

    I'd love to know how they intend to license this - per node?
  • Re:finally (Score:5, Insightful)

    by nmb3000 ( 741169 ) on Tuesday June 24, 2008 @12:53PM (#23919785) Journal

    I got to test Server 2008 before it was released to the public. All our internal applications identified 2008 as "Vista".

    I have no idea why this is modded Informative.

    Vista uses the NT kernel, version 6.0, build 6000. SP1 puts it up to 6001.
    Server 2008 uses the NT kernel, version 6.0, build 6001.

    Is it any surprise that software built prior to Server 2008's release sees it as Vista?

    In related news, both Ubuntu 8.04 and Fedora 9 report being Linux v2.6.
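A toy sketch (hypothetical table, not real Windows API calls) of why this happens: applications that map only the major.minor kernel version to a product name have no way to distinguish Vista (NT 6.0, build 6000/6001) from Server 2008 (NT 6.0, build 6001).

```python
# Hypothetical lookup table keyed on (major, minor) only -- the way many
# pre-Server-2008 apps identified the OS. Build numbers are ignored, so
# anything reporting NT 6.0 comes back as "Vista".
KNOWN_VERSIONS = {
    (5, 1): "Windows XP",
    (5, 2): "Windows Server 2003",
    (6, 0): "Windows Vista",  # Server 2008 is also NT 6.0
}

def identify(major, minor, build):
    """Return the product name an old version check would report."""
    return KNOWN_VERSIONS.get((major, minor), "Unknown")

print(identify(6, 0, 6000))  # Vista RTM        -> Windows Vista
print(identify(6, 0, 6001))  # Server 2008      -> Windows Vista
```

Distinguishing the two requires checking the product type or build number as well, which software shipped before Server 2008 had no reason to do.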

  • by mpapet ( 761907 ) on Tuesday June 24, 2008 @01:06PM (#23920119) Homepage

    So basically you have no facts,

    Wrong. I get paid a decent salary running Windows clusters.

    but you're writing them off as idiots because they used the MS package.
    Wrong. This article is an advertisement disguised as news.

    Nevermind they might be saving money
    Might is a pretty big maybe.... I *know* a Linux-based cluster costs less. Especially as we get into 2008 pricing.

    in the long run
    In the long run we are all dead. Please define the time frame in which choosing Microsoft saves money. More facts please.

    by paying less people to administrate it
    Will they actually get the job done? It's impossible to know either way unless they cure cancer with the thing. Facts please.

    MS tools get the job done.
    Which tools are these? GUIs? I imagine the average win32 admin WOULD be at a loss in a Linux cluster. /etc/ is just mind-boggling!

  • by saleenS281 ( 859657 ) on Tuesday June 24, 2008 @01:17PM (#23920351) Homepage
    Wrong. I get paid a decent salary running Windows clusters.
    Which gives you NO FACTS about THEIR situation. The local janitor probably knows more about their install than you do.

    Wrong. This article is an advertisement disguised as news.
    DEFINITELY sounds like something from someone who "makes a decent salary running Windows clusters".

    Might is a pretty big maybe.... I *know* a Linux-based cluster costs less. Especially as we get into 2008 pricing.
    You have NO IDEA what they paid. You have NO IDEA if a linux cluster costs less money. If this is an "advertisement" like you claimed above, MS may have GIVEN them EVERYTHING for FREE.

    Will they actually get the job done? It's impossible to know either way unless they cure cancer with the thing. Facts please.
    Facts? You're right, I am required to respond to your blanket trolling assumptions with facts, rather than provide the flip side of the coin.

    Which tools are these? GUIs? I imagine the average win32 admin WOULD be at a loss in a Linux cluster. /etc/ is just mind-boggling!
    I suppose if you REALLY ran windows clusters for a living you'd know which tools.
  • by Cutie Pi ( 588366 ) on Tuesday June 24, 2008 @03:16PM (#23922761)

    Accessibility can mean: 1) able to access, 2) easy to use. When it comes to supercomputers, the former is very much true nowadays, but the latter is not. And it's not just a matter of programming. Pretty much all supercomputers can be programmed with a standard programming environment, say C + MPI + ScaLAPACK libraries. (I think more could be done on that side too, but that is a different story).

    But the steps required to actually run the programs can be exceedingly difficult. I liken it to the state of desktop linux about 12 years ago... Yes, it was accessible in that PCs were everywhere and you could grab a free copy of Slackware, but the setup process was mind numbing. Setting up X was not for the faint hearted as it required knowing intimate details about your graphics and display hardware. There were stern warnings that using the wrong modeline values could damage your CRT. Nowadays even my grandmother could install Ubuntu and everything would be automatically detected. That's the progress that I think needs to happen on the supercomputer user interface side of things.
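The SPMD pattern that MPI formalizes can be sketched in a few lines. This is a pure-Python toy, assuming nothing beyond the standard library; a real HPC code would use C + MPI (MPI_Scatter / MPI_Reduce) across actual nodes, but the structure is the same: each rank works on its slice of the data, then partial results are reduced.

```python
# Toy single-process stand-in for the SPMD pattern: each "rank" computes
# a partial dot product over its slice, then the partials are summed
# (the role MPI_Reduce with MPI_SUM plays on a real cluster).

def parallel_dot(x, y, nranks):
    chunk = (len(x) + nranks - 1) // nranks  # ceil-divide work among ranks
    partials = []
    for rank in range(nranks):  # each iteration stands in for one MPI rank
        lo, hi = rank * chunk, min((rank + 1) * chunk, len(x))
        partials.append(sum(a * b for a, b in zip(x[lo:hi], y[lo:hi])))
    return sum(partials)  # the "reduce" step

x = list(range(8))
y = [1.0] * 8
print(parallel_dot(x, y, nranks=4))  # 28.0, same as the serial dot product
```

The point of the comment stands: the programming model is standard and portable, and has nothing to do with which OS the nodes boot.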

  • by Anonymous Coward on Tuesday June 24, 2008 @05:03PM (#23924381)

    I read the link, here's a particular quote that caught my eye:

    "The consortium is committed to making supercomputing resources more widely available. However, its high-performance computing (HPC) systems were Linux-based, and CINECA realized that an increasing number of researchers, especially those in private industry sectors, were unfamiliar with Linux-based tools and interfaces. Acquiring the necessary expertise to use the consortium's resources was too time-consuming and costly for many enterprises."

    Smells like FUD to me. WTF?! *If* you're going to write a program that needs an HPC system to run *then* you're going to have to learn how to program for an HPC? Hello?! It's called MPI, UPC, SHMEM, hell, even PVM. And they've been around for what, 19 years? What's it got to do with Linux? Nothing, *except* that Linux has *excellent* support for that clustering software.

    WTF do you need windows for? A cluster version of notepad? Duke Nukem Forever? And lastly, what's so *time consuming* and *costly* about doing the following:

    1) Open a browser. any damn browser.
    2) Type in the url: www.google.com
    3) Type in the search box:

    mpi upc shmem pvm site:wikipedia.org

    4) Follow the freakin' links and *READ* the info.

    Cost: $0
    Getting a freakin' clue about the current standard of cluster programming: priceless.

    What kind of researchers from private industry do they have that don't know how to use Google or Wikipedia? Gartner experts? Microsoft employees?

  • Re:finally (Score:3, Insightful)

    by mspohr ( 589790 ) on Tuesday June 24, 2008 @05:12PM (#23924493)
    I think the surprise here is that MS is using the same core that's in their very shaky Vista software to run their server software.
  • by jsac ( 71558 ) on Tuesday June 24, 2008 @05:20PM (#23924607) Journal

    What "class" would that be? I imagine it would explicitly exclude Free clusters.

    This cluster has appeared in the last three Top 500 lists. In June and November 2007 it had a performance of 62.68 TFlops with 70% efficiency, running Linux. In June 2008 it had a performance of 68.48 TFlops with 77% efficiency, running Windows HPC Server 2008.

    http://www.top500.org/system/details/8757
    http://www.top500.org/system/ranking/8757
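A quick back-of-the-envelope check on those figures: Top500 efficiency is Rmax / Rpeak, so each run's quoted numbers imply the machine's theoretical peak. (The quoted percentages are rounded, so the two implied peaks differ slightly.)

```python
# Implied peak performance from the quoted Top500 figures:
# efficiency = Rmax / Rpeak  =>  Rpeak = Rmax / efficiency

def implied_rpeak(rmax_tflops, efficiency):
    return rmax_tflops / efficiency

print(round(implied_rpeak(62.68, 0.70), 1))  # Linux run   -> ~89.5 TFlops peak
print(round(implied_rpeak(68.48, 0.77), 1))  # Windows run -> ~88.9 TFlops peak
```

Both runs imply roughly the same ~89 TFlops peak, consistent with the same hardware being benchmarked under both operating systems.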
