Windows Breaks Into Supercomputer Top 10
yanx0016 writes "Wow, that's some news this week at SuperComputing 08. Apparently Microsoft Windows HPC Server 2008, with a Chinese hardware OEM (Dawning), made #10 on the Top500 list, edging out #11 by only 600 Gflops. Folks were shocked to see Microsoft getting so serious about HPC; I think we are only beginning to see a glimpse of Microsoft in the HPC field."
Retarded (Score:5, Interesting)
Honestly, why would anyone want to roll out something like this on Windows? A lot of extra expense for little practical value.
Cost per MIP or how many CALs (Score:5, Interesting)
So how many CALs are required to access the system?
And if I want to make the system available to a different researcher every 2 hours how much is it going to cost them to be license compliant?
How much cpu power am I going to need to compute the licensing costs?
http://www.microsoft.com/resources/sam/lic_cal.mspx
Potentially bogus (Score:5, Interesting)
A couple of years ago I was surprised when one of my HPC customers issued a press release saying that their machine ran Windows HPC. The high-speed interconnect we'd sold them had no Windows drivers. You can guess what was going on: Microsoft paid for the press release, and the machine actually ran Linux.
Dawning's previous fast machine ran Linux.
Re:Retarded (Score:2, Interesting)
Re:Potentially bogus (Score:5, Interesting)
What is most interesting to me is that in the case of HPC, the situation between Windows and Linux is reversed. Linux has overwhelming market share in HPC, while Windows is a niche player (and that's being generous). Despite this, Microsoft regularly gets fawning coverage in the media for its HPC efforts, far more than its market share warrants. It's like PC Magazine going on and on about all the latest developments in the Linux desktop market.
Re:From the article, pricing is (Score:2, Interesting)
The price per node is $450 commercial.
The #10 place was achieved with 30720 cores.
That is $13,824,000 for the HPC Server licenses. I presume each comes with the standard 50 or 100 CALs.
Beyond that you are licensing 30720 CALs for each new user that gets access to the system.
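The licensing arithmetic above is easy to check. Note that the parent multiplies the per-node price by the core count, which (if licenses really are per node) makes this an upper bound rather than the actual bill; the figures are the ones quoted in the thread.

```python
# Back-of-the-envelope check of the parent's licensing math.
# Assumes, as the parent does, one $450 HPC Server license per core;
# if licensing is per node, the real figure would be far lower.
PRICE_PER_LICENSE = 450   # USD, commercial price quoted in the article
CORES = 30720             # core count of the #10 machine

total = PRICE_PER_LICENSE * CORES
print(f"${total:,}")      # -> $13,824,000
```

The product comes out to $13,824,000, so the order of magnitude in the comment is right even if the exact figure is off.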
Re:Off topic, but I have to mention it (Score:5, Interesting)
A C64, according to this guy, runs at about 320 FLOPS.
That just can't be. I remember the Programmer's Reference Manual showing most normal instructions finishing in 2 or 3 clocks, which works out to maybe 350,000 IPS. I can't imagine floating-point operations being 1,000 times slower than other opcodes. I mean, I'm pretty certain I could re-implement them in assembler in far fewer than 1,000 instructions.
Re:Does not compute. M$ is not for HPC. (Score:2, Interesting)
Well, the mods go either way, mostly. We're into deep offtopic territory here now. It really depends on whether or not people with mod points think that a) his drivel is valuable; and b) whether or not his shilling is "OK" because of (a).
Read this [slashdot.org] if you have time. It's linked from the journal that documents his gaming of the moderation system, but it captures the whole thing very well. That's who you're dealing with here, so I generally recommend just stepping away or risk getting some twitter on your shoes, which is generally not hygienic :)
Re:Yeah, mut how much useful stuff is happening? (Score:5, Interesting)
Considering that FLOPS refers to the number of floating-point operations the processor can perform per second, which would be the same regardless of what OS a system is running, I would have to say that your guess is incorrect.
Also, considering that most supercomputers are actually supercomputing clusters, the "supercomputer" in question is probably running more than a single instance of the OS. Since the Dawning 5000A uses quad-core Opteron processors and is listed as having 30720 cores, it should have 7680 processors. And since Windows Server 2008 can only use 8 processors (I think HPC is limited to 4), the 5000A must have at minimum 960 nodes. And since each node would be running its own instance of Windows HPC, the Dawning 5000A must be running at least 960 instances of Windows.
I don't know how Windows HPC compares to Linux or other OSes, but running a bloated OS on a supercomputing cluster would definitely have a large impact on its real-world performance.
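The node-count reasoning in the comment above reduces to two divisions. A quick sketch, using the poster's own assumptions (quad-core sockets, at most 8 sockets per Windows instance):

```python
# Minimum OS instance count for the Dawning 5000A, following the
# poster's assumptions: quad-core Opterons, and a limit of 8
# processors (sockets) per Windows Server 2008 instance.
TOTAL_CORES = 30720
CORES_PER_SOCKET = 4
MAX_SOCKETS_PER_INSTANCE = 8

sockets = TOTAL_CORES // CORES_PER_SOCKET             # 7680 processors
min_instances = sockets // MAX_SOCKETS_PER_INSTANCE   # 960 OS instances
print(sockets, min_instances)                         # -> 7680 960
```

In practice HPC clusters usually run one OS image per node anyway, so 960 is a floor, not an oddity.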
An attempt to artificially inflate my linux ego: (Score:5, Interesting)
#10 on the list uses an AMD x86_64 Opteron quad core @ 1900 MHz, has 30720 cores, and pumps out 180600 GFlops.
#8 on the list uses an AMD x86_64 Opteron quad core @ 2100 MHz, has 30976 cores, and pumps out 205000 GFlops.
#10 runs Windows, #8 runs Linux.
Working through this (GFlops / number of cores / MHz per core) I get:
#10 at 3.094 MFlops per core per MHz and #8 at 3.151 MFlops per core per MHz.
This leaves the Linux machine getting 57 more KFlops per core per MHz than the Windows box.
disclaimer: Totally useless mental farking. Without knowing more about the systems' other components and the processor generations, it's silly to assume the 57 KFlops is purely due to the OS, but hey, it's Windows and everyone loves an easy target. :D
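For anyone who wants to redo the mental farking themselves, here is the per-core, per-MHz division spelled out. Same caveat as the disclaimer: benchmark details, interconnect, and CPU stepping are all ignored, so this says almost nothing about the OS.

```python
# Rough per-core, per-MHz efficiency of the two Top500 machines,
# using only the figures quoted in the comment above.
def mflops_per_core_per_mhz(gflops, cores, mhz):
    # Convert GFlops to MFlops, then divide out cores and clock.
    return gflops * 1000 / cores / mhz

windows = mflops_per_core_per_mhz(180600, 30720, 1900)  # the #10 machine
linux = mflops_per_core_per_mhz(205000, 30976, 2100)    # the #8 machine
print(f"{windows:.3f} {linux:.3f} {linux - windows:.3f} MFlops/core/MHz")
```

The difference works out to about 0.057 MFlops, i.e. the 57 KFlops per core per MHz quoted above.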
Re:Off topic, but I have to mention it (Score:3, Interesting)
Well, here's another data point. In C64 BASIC, floats were the default datatype. I distinctly remember that FORI=1TO100:NEXT took right at 1 second, giving 100 FLOPS. However, that also includes the overhead from the world's. slowest. interpreter. I'd think surely calling the same functions from assembly would be less terrible.
Not that any of this should detract from the original point: supercomputers are mind-bogglingly fast when compared to the hardware a lot of us here grew up with.
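A back-of-the-envelope calculation can reconcile the ~320 FLOPS figure with the instruction timing quoted earlier in the thread: the 6502 has no FPU, so every BASIC float operation runs a multi-hundred-instruction software routine in ROM. The routine length used here is an assumption for illustration, not a measured value.

```python
# Reconciling ~320 FLOPS with 6502 instruction timing. The C64 has
# no FPU, so each BASIC float op calls a ROM software routine; the
# routine length below is a rough assumption, not a measured value.
CLOCK_HZ = 985_248                # PAL C64 clock; NTSC is ~1.02 MHz
CYCLES_PER_INSTRUCTION = 3        # typical 6502 instruction
INSTRUCTIONS_PER_FLOAT_OP = 1000  # assumed software-float routine length

ips = CLOCK_HZ / CYCLES_PER_INSTRUCTION  # ~330k instructions/second
flops = ips / INSTRUCTIONS_PER_FLOAT_OP  # ~330 FLOPS
print(round(ips), round(flops))
```

So a float op being roughly 1,000 times slower than a plain opcode is not crazy once you account for 5-byte software floats done entirely in 8-bit arithmetic.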
Re:Retarded (Score:1, Interesting)
As the poster above points out, I'm talking about stuff like your interconnect. Not having looked too hard at it, I'm going to guess that your choice is limited to Intel 10GbE, or perhaps one of the InfiniBand vendors has a Windows driver. Maybe.
I'm referring to the system integrator (who likely has very little experience working with Windows or Microsoft) and third-party software support. Your choice of MPI library is going to be Intel's and maybe one more. Who the hell knows what their OpenMP or shmem support and performance are?
So? I'm referring to the OS overhead per GFLOPS. Windows HPC is unproven and has a long way to go to match the low overhead of existing HPC-tuned operating systems.
Well no, I wouldn't expect it to. I would expect my experience of HPC to translate pretty well, though.
Re:From the article, pricing is (Score:4, Interesting)
(For the uninformed, not all CALs are created equally and the parent is assuming that these are named licenses that must be purchased for each user. Many different kinds of CALs exist, and I suspect these are either physical unit licenses or concurrent access licenses, i.e.: you purchase 1 per node, period.)
From an IT management perspective this is one of the biggest BS headaches around.
CEO - 'So you have to pay extra to connect to the server even after you paid for the server software?'
IT - 'Yes, $35 per seat, or we could go by server connection.'
CEO - 'So it's simple then, we just multiply the number of employees by $35.'
IT - 'No, it's by connection. If a computer is connecting to a server, either it needs a CAL or the server needs a CAL for the connection. We need to figure out which is cheaper for us. It has nothing to do with whether a person is using the computer. Here's an estimate.'
CEO - 'Holy crap, okay, be done with it.'
IT - 'Well, we still have to decide if we want everyone to have full access to the Exchange server.'
CEO - 'Well, multiply by $35 and be done with it.'
IT - 'Well, that's more like $75 to $100, depending on how many CALs we get.'
And so on, ad nauseam, explaining all the different CALs and licensing options.
Actually, the conversation got started when we handed the three required quotes to Purchasing. Three different prices from three different certified MS vendors, none able to fully explain why they differed, yet all willing to say we would be compliant if we purchased their quote.
The question that came back was: why do four $475 servers end up costing us $7,000?
Then we got into client and office suite licensing, OEM vs. non-OEM.
But basically, they studied it in enough depth that NT4 was the last server version we purchased.
So basically, if they had only required a single $699 license (a la SCO), we would probably still be using Windows servers.
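For what it's worth, CALs alone can plausibly account for the gap between the sticker price and the quote. The breakdown below is a hypothetical reconstruction; the $35 CAL price comes from the dialogue above, and the CAL count is simply whatever fills the gap, not a figure from the actual quotes.

```python
# Hypothetical reconstruction of how four $475 server licenses
# become a ~$7,000 quote once CALs are added. The CAL count here is
# back-solved from the totals, not taken from the real quotes.
SERVER_LICENSE = 475
SERVERS = 4
CAL_PRICE = 35    # per-seat price quoted in the dialogue above
QUOTE = 7000

base = SERVER_LICENSE * SERVERS   # $1,900 in server licenses
gap = QUOTE - base                # ~$5,100 left to explain
cals = gap // CAL_PRICE           # ~145 CALs would cover it
print(base, gap, cals)
```

Around 145 seats' worth of CALs closes the gap, which is not an implausible headcount for a shop running four servers.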
So while MS classes fill the young techie's head with knowledge of CALs, other OS classes concentrate on the different protocols and on how email servers, web servers, and DHCP servers actually work.
We take more pride in fixing the problem, and less in endurance phone calls to MS tech support and intricate knowledge of MS licensing options.
Practicality (Score:2, Interesting)
I work for an engineering firm that is constantly running numerical analysis as part of our primary business. We run about 500 Linux boxes (with varying numbers of cores) arranged into multiple clusters. Our desktops run Windows.
Our pre-processing tools are Windows-based and our post-processing tools are Windows-based. Institutional knowledge/experience and mature tools means that this isn't going to change. Our in-house solvers are Linux-based not by choice, but because Microsoft doesn't offer a cost-effective solution for running our simulations on Windows boxes.
Even though we've developed utilities to make the process as smooth as possible, having Linux as part of the process chain is still an enormous pain and one we'd like to resolve. We've been watching Windows HPC with interest and can only hope for the price point to become reasonable.
Although Windows remains in poor regard with many here, there are many companies in the business of engineering simulation that would happily welcome Windows HPC.
Re:The OS is very important (Score:4, Interesting)
Microsoft has only been in this game for a short time and only recently got support for 256 cores. Getting support is one thing; getting **good**, optimised support is quite another, and that will take some time to get right.
I would argue that NT as a server platform is older than Linux as a server platform. I remember a CS textbook noting the far higher SQL performance on Microsoft and Solaris systems compared to Linux, for instance, and attributing much of that to the system architecture. I believe people use it because it's more optimized and has a more efficient underlying architecture. At NCSA, which is down the street from me, they run a pretty serious Windows HPC 2008 cluster, and they have very good things to say about its performance compared to the Linux systems. The deployment time is another plus, which is really remarkable for a cluster. One of the biggest issues, though, is issue resolution. When they have some sort of issue (don't let your Windows 98 imaginations run wild, I am talking about little hiccups here), Microsoft usually has a hotfix or patch out within hours. The problem resolution and support positively eclipse those of any Linux distribution, and even Sun's.
I would say that Windows HPC 2008 will be a pretty serious offering for small businesses that prefer easier-to-maintain Windows-based IT infrastructures. With enough time in cluster computing, they'll probably start picking up more enterprise customers as well. It's really nothing to laugh at; it's the only solid non-Unix offering, which is a big step ahead for companies not stuck in the '70s, technology-wise.
Maybe with this global economic crisis, more companies will embrace this technology in order to cut IT and support overhead. You can crunch the numbers on a team of Unix guys versus a couple of NT guys and a license. Support and an effective administration infrastructure go a long way. I think Microsoft is going to take back some of this market, where Linux got ahead simply because Microsoft had no comparable offering.