Microsoft Supercomputing Windows Hardware

Fastest-Ever Windows HPC Cluster 216

An anonymous reader links to an eWeek story which says that Microsoft's "fastest-yet homegrown supercomputer, running the U.S. company's new Windows HPC Server 2008, debuted in the top 25 of the world's top 500 fastest supercomputers, as tested and operated by the National Center for Supercomputing Applications. ... Most of the cores were made up of Intel Xeon quad-core chips. Storage for the system was about 6 terabytes," and asks "I wonder how the uptime compares? When machines scale to this size, they tend to quirk out in weird ways."
This discussion has been archived. No new comments can be posted.


  • Re:finally (Score:4, Interesting)

    by Zashi ( 992673 ) on Tuesday June 24, 2008 @11:27AM (#23917671) Homepage Journal

    You've no idea how right you are.

    I got to test Server 2008 before it was released to the public. All our internal applications identified 2008 as "Vista".

  • by Gazzonyx ( 982402 ) <scott.lovenberg@gm a i l.com> on Tuesday June 24, 2008 @11:29AM (#23917743)
    The Windows Server 2K8 code base must be better than previous versions of Windows. From what I understood, Windows didn't scale for clustering due to problems with file locking (IIRC, the overhead of tracking locks grew quickly enough that performance fell off past about 4 nodes), unless they're using an iSCSI SNS server that handles the locks over a clustered file system. Still, this is leaps and bounds beyond previous versions of Windows WRT clustering!
  • Only six teras ? (Score:3, Interesting)

    by billcopc ( 196330 ) <vrillco@yahoo.com> on Tuesday June 24, 2008 @11:57AM (#23918373) Homepage

    So.... six terabytes... isn't that horribly small by today's standards ? I mean, our small backup server here is 2 teras, it's just a cheap PC with a bunch of SATA drives in it.

    Does that mean my gaming rig and media server, when combined, constitute an "HPC Cluster" worthy of the top 100 ?

    Ghey.

  • Re:But why?! (Score:2, Interesting)

    by Kingston ( 1256054 ) on Tuesday June 24, 2008 @12:10PM (#23918681)
    It looks like Microsoft engineers have been working with the NCSA and a beta version of Microsoft HPC Server 2008 as part of a Microsoft marketing push for this software. The marketing PDF is here [microsoft.com]. Microsoft wants to increase its foothold in HPC; it's a growing, high-margin market.
  • Okay... (Score:3, Interesting)

    by ledow ( 319597 ) on Tuesday June 24, 2008 @12:17PM (#23918847) Homepage

    But the statistics at top500.org show that over 9,000 processors is way above normal for a supercomputer cluster on that list. In fact, less than 5% of the machines in the entire 500 have more than 8,000 processors, with the majority around the 1-4k mark. Oh, and 85% run Linux-only, with an amazing 5 (not percent, actual projects) running Microsoft-only. So it looks like MS did this through hardware brute force, not some amazing feat of programming. But then, that's true of them all. Although being in the top500 list is "good PR", it doesn't mean that much.

    I wonder what the licensing is like for a 9000-processor Windows Server, though?

  • by idiot900 ( 166952 ) * on Tuesday June 24, 2008 @12:25PM (#23919059)

    Can someone explain why anyone could possibly want Windows on a scientific computing cluster? What does Windows offer that Linux doesn't?

    Much of my work involves running molecular dynamics simulations. By HPC standards these are tiny calculations (in my case, usually 32 CPUs at a time). All science HPC software I'm aware of is Unix-oriented, and everything runs on Linux. At my institution we have an OS X cluster and we are in the process of purchasing a Linux cluster. We didn't even consider Windows - given the difficulties we've experienced administering Windows on the desktop, a Windows cluster just seems like an expensive exercise in frustration.

  • by UnknowingFool ( 672806 ) on Tuesday June 24, 2008 @12:36PM (#23919371)
    Cost is another factor. I don't know how much volume discounts come into play, but licensing 9,000+ cores might cost a great deal if the cluster weren't built by MS themselves. They were also able to tweak the OS code and kernel as they saw fit; a Windows HPC customer may not have that flexibility.
  • Re:But why?! (Score:3, Interesting)

    by gmack ( 197796 ) <gmack@noSpAM.innerfire.net> on Tuesday June 24, 2008 @12:51PM (#23919735) Homepage Journal

    It's growing, yes, but it's actually a very low-margin market. The whole idea of an HPC cluster is saving money.

    Somehow I doubt it's the margins so much as the fact that Linux dominates it and they are afraid Linux will use that to gain a foothold elsewhere.
     

  • Re:*yawns* (Score:2, Interesting)

    by labmonkey09 ( 992534 ) on Tuesday June 24, 2008 @12:54PM (#23919815)
    There is a difference between supercomputing and HPC. Up till now, Linux has had little to compete with in scaled-out HPC rigs. A lot of that has to do with node pricing and the fact that Sun has been asleep at the wheel (no pun intended - if you know SunOS you should be laughing). However, priced right, this and Solaris are a real competitive threat to Linux. Linux is not a great platform for HPC. The kernel doesn't scale to extreme levels (total throughput pegs early) and Tx latency gets pretty wide at the top end. You have to over-scale to flatten the latency curve, and this causes other problems that can affect throughput, like locality of data. Solaris is a great platform because it provides low latency spread, good throughput and solid reliability. Windows (believe it or not) provides the highest total throughput under a given load with a given hardware set, but its latency is not as good as Solaris's.
  • by Cutie Pi ( 588366 ) on Tuesday June 24, 2008 @01:23PM (#23920499)

    While I don't agree that Microsoft Windows HPC Server is the best software to manage a supercomputer, the Linux diehards out there should pay attention to a problem that Microsoft is trying to tackle: accessible supercomputing. See one of their case studies [microsoft.com] as an example.

    The bottom line is, these days pretty much anyone has access to a few TFLOPS of compute power, but the learning curve for getting something running on these machines is pretty intimidating, especially for non-CS disciplines. On every supercomputer I've used, I've had to take a 1-2 day class, plus futz around with the clunky command-line tools for a few days, just to get simple jobs running. In my experience, people learn to game the various batching and queuing systems so that their jobs run faster than everyone else's, further shutting out the newcomers.

    HPC vendors would be wise to focus more attention on tools and interfaces, so that Joe Researcher can set the number of nodes and go, rather than manually editing LoadLeveler text files, sending them to the queue, and coming back the next day to find the job failed due to a typo in the startup script.

    On multi-TFLOP systems, not everyone needs 99.5% efficiency with all the implementation details that requires. These days, many people just want their job to run reasonably quickly, with no fuss.

    The same thing happened several years ago with the move to high-level languages like Python and Ruby. Sure, they're slower than C++ and Fortran. But for the vast majority of applications, you wouldn't know the difference on modern processors. And the turnaround time and user-friendliness of these languages are so much better that using them is a no-brainer.

    Hopefully Microsoft can spur the industry in this direction.
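
The "set the number of nodes and go" idea above can be sketched in a few lines: a thin wrapper that generates the scheduler file from a couple of parameters, so the researcher never hand-edits it. This is a toy illustration, not any real scheduler's syntax - the `#SCHED` directives and `make_job_script` helper are invented stand-ins for whatever LoadLeveler or a similar queuing system actually expects:

```python
def make_job_script(name, nodes, command, walltime="01:00:00"):
    """Generate a hypothetical batch script from a few parameters.

    The '#SCHED' directives are placeholders for real scheduler
    keywords (LoadLeveler, PBS, etc. each have their own).
    """
    return "\n".join([
        "#!/bin/sh",
        f"#SCHED job_name={name}",
        f"#SCHED nodes={nodes}",
        f"#SCHED walltime={walltime}",
        command,          # the actual simulation run line
    ]) + "\n"

# A researcher supplies only the essentials; the wrapper does the rest.
script = make_job_script("md_run", nodes=32,
                         command="mpirun ./simulate input.dat")
print(script)
```

The point is that typo-prone boilerplate lives in one tested function instead of in every user's hand-written job file.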

  • humph..... (Score:3, Interesting)

    by advocate_one ( 662832 ) on Tuesday June 24, 2008 @01:46PM (#23921035)
    now see how fast the identical hardware runs with Linux on it... bet it goes way faster...
  • by bockelboy ( 824282 ) on Tuesday June 24, 2008 @02:06PM (#23921477)

    From your case study:

    """
    In addition, it is investigating ways to allow users to connect remotely to the cluster. It expects to complete the project and move the cluster into production by March 2009.
    """

    By the time the cluster in the case study allows users to log in remotely, the hardware will have lost at least half of its value.

    While more work is needed to make things user-friendly, you have to remember that the funding is there for CPUs; not many folks are forward-looking enough to realize that researchers really need funding for making stuff easier.

  • more similar (Score:5, Interesting)

    by DrYak ( 748999 ) on Tuesday June 24, 2008 @02:11PM (#23921591) Homepage

    In related news, both Ubuntu 8.04 and Fedora 9 report being Linux v2.6.
    Except that the Linux kernel is just a tiny part of a distribution. In fact, those two distributions don't even share the same kernel version, let alone build. (The Distrowatch pages for Ubuntu [distrowatch.com] and Fedora [distrowatch.com] show the difference in version for most common components.)

    Whereas Server 2008 and Vista share a tad more of their code base.

    And *that* is relevant.

    And it could be humorously alluded to because of the mis-detection by some software.

  • by rs232 ( 849320 ) on Tuesday June 24, 2008 @02:14PM (#23921647)
    "Microsoft is trying to tackle: accessible supercomputing"

    Assuming MS was responding to this imagined problem ..

    "The contest showed that supercomputers .. are accessible [supercomputingonline.com] to people interested in pursuing science, simulation or modeling"

    "but the learning curve for getting something running on these machines is pretty intimidating, especially for non-CS based disciplines. I've had to take a 1-2 day class, plus futz around"

    You actually programmed a supercomputer - cool. What type, and where exactly? How does HPC Server differ in this respect from other solutions?

    "the Blue Gene family of supercomputers has been designed to deliver ultrascale performance within a standard programming environment [ibm.com]"

    "Hopefully Microsoft can spur the industry in this direction"

    You mean like continually reinventing Apple, badly .. :)
  • by jd ( 1658 ) <imipak@ y a hoo.com> on Tuesday June 24, 2008 @03:50PM (#23923301) Homepage Journal
    ...is that it is a true HPC clustering environment. They demoed the 2003 cluster edition at SC|05, and frankly I was not impressed. Nor were most other people; it was not a highly popular stand. That could be because they were demonstrating things like Excel on the Cluster Edition. A clustered spreadsheet?! Oh, and the version of MPI they are using is derived from MPICH. For those unfamiliar with clustering and message passing: MPI is pretty horrible at the best of times, and MPICH is a nasty implementation of it. MPICH is great as a proof of concept, very portable, and superb as a teaching tool, but it's slow, lacks many of the features found in more modern implementations, and exists in a million different variants because it's really not very pluggable.

    However, MPI itself has serious issues. The master copy of the program starts/stops slave programs via SSH, although some implementations also support inetd-style starts. Messages sent to multiple machines are sent sequentially, rather than via a reliable multicast, wasting bandwidth and CPU cycles.
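
The sequential-send complaint can be made concrete with a toy cost model (plain Python, no real MPI here): a root that sends to each of its N-1 peers one at a time pays N-1 send steps itself, while a binomial-tree broadcast - the kind of collective better MPI implementations use - finishes in about log2(N) rounds, because every rank that already holds the message forwards it each round:

```python
def sequential_broadcast_steps(n):
    # Naive scheme: the root sends to each of the other n-1 ranks in turn.
    return n - 1

def tree_broadcast_rounds(n):
    # Tree scheme: each round, every rank that already has the message
    # forwards it to one rank that doesn't, so coverage doubles per round.
    have, rounds = 1, 0
    while have < n:
        have = min(n, have * 2)
        rounds += 1
    return rounds

# Compare the two at a few cluster sizes (9472 is roughly the
# core count discussed in the story).
for n in (8, 1024, 9472):
    print(n, sequential_broadcast_steps(n), tree_broadcast_rounds(n))
```

Counting "rounds" ignores per-message bandwidth, but it shows why a sequential startup or broadcast becomes the bottleneck at thousands of nodes while a tree stays in the tens of steps.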

  • Re:finally (Score:3, Interesting)

    by nmb3000 ( 741169 ) on Wednesday June 25, 2008 @06:15PM (#23941803) Journal

    What was removed ?

    Honestly, most of what bothers me are UI changes that didn't need to be made and in any case make the UI worse, not better.

    That said, the only feature removed that comes to mind immediately is the File Types association dialog box from the Folder Options control panel / dialog. In every version of Windows you've been able to add/change file verbs and actions as well as do things like change the icon, description, etc. This gave you a very fine level of control and it was great for those who wanted/needed to use it. In Vista that dialog/tab was nuked and replaced with some vanilla "what program do you want to open this file with" crap.

    There are more that I can't recall offhand, but that's probably the biggest. Personally, I think it almost comes across as an insult to Windows administrators that they'd just go off and remove something like that.
