
Windows Cluster Hits a Petaflop, But Linux Retains Top-5 Spot

Twice a year, Top500.org publishes a list of supercomputing benchmarks from sites around the world; the new results are in. Reader jbrodkin writes "Microsoft says a Windows-based supercomputer has broken the petaflop speed barrier, but the achievement is not being recognized by the group that tracks the world's fastest supercomputers, because the same machine was able to achieve higher speeds using Linux. The Tokyo-based Tsubame 2.0 computer, which uses both Windows and Linux, was ranked fourth in the world in the latest Top 500 supercomputers list. While the computer broke a petaflop with both operating systems, it achieved a faster score with Linux, denying Microsoft its first official petaflop ranking." Also in Top-500 news, reader symbolset writes with word that "the Chinese Tianhe-1A system at the National Supercomputer Center in Tianjin takes the top spot with 2.57 petaflops. Although the US has long held a dominant position in the list, things now seem to be shifting, with two of the top spots held by China, one by Japan, and one by the US. In the Operating System Family category Linux continues to consolidate its supercomputing near-monopoly with 91.8% of the systems — up from 91%. High Performance Computing has come a long way quickly. When the list started as a top-10 list in June of 1993, the least powerful system on the list was a Cray Y-MP C916/16526 with 16 cores driving 13.7 RMAX GFLOP/s. This is roughly the performance of a single midrange laptop today."

  • by mattventura ( 1408229 ) on Sunday November 14, 2010 @02:39PM (#34224372) Homepage

    2.57 petaflops per second

    floating point operations per second per second?

    • by Shikaku ( 1129753 ) on Sunday November 14, 2010 @02:40PM (#34224390)

      I'd say Google datacenters accelerate at about that rate.

    • by K. S. Kyosuke ( 729550 ) on Sunday November 14, 2010 @02:44PM (#34224414)
      One petaflop, two petaflops... (Anyway, I didn't know that MS has already shipped so many flops...)
      • by jimrthy ( 893116 )
        That sig's awesome.
    • by History's Coming To ( 1059484 ) on Sunday November 14, 2010 @02:45PM (#34224426) Journal
      I can only presume this is a manifestation of Moore's Law: the curve is now so steep that the computers are accelerating as they're running. Or maybe it's a typo ;)

      I'm willing to bet that the top end is going to become less and less relevant, and that we're going to judge processors more and more by their "flops-per-watt" and "flops-per-dollar" ratings. We're already at the point where clusters of commercial games machines make more sense than a traditional supercomputer for many applications, and I dread to think how much energy could be harvested from these machines with some efficient heat exchangers.
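
      To put rough numbers on the flops-per-watt and flops-per-dollar idea, here is a minimal C sketch. The Rmax figure is the Tianhe-1A number from the list; the power draw and system cost below are placeholder assumptions for illustration only, not official Top500 data.

      #include <stdio.h>

      int main(void) {
          double rmax_flops   = 2.57e15;  /* Tianhe-1A Rmax: 2.57 petaflops (from the list)   */
          double power_watts  = 4.0e6;    /* ASSUMED ~4 MW total draw -- placeholder figure   */
          double cost_dollars = 90.0e6;   /* ASSUMED ~$90M system cost -- placeholder figure  */

          printf("flops per watt:   %.3e\n", rmax_flops / power_watts);   /* ~6.4e8 */
          printf("flops per dollar: %.3e\n", rmax_flops / cost_dollars);  /* ~2.9e7 */
          return 0;
      }
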
    • by Barefoot Monkey ( 1657313 ) on Sunday November 14, 2010 @02:48PM (#34224448)

      2.57 petaflops per second

      floating point operations per second per second?

      Well-spotted. It appears that this particular supercomputer gets faster the longer it is left running. Clearly the reason it ran faster with Linux than with Windows is that in the latter case it needed to be restarted after every Patch Tuesday, thus limiting the potential speed increase to 6.88 zettaflops.

      • by Kjella ( 173770 )

        thus limiting the potential speed increase to 6.88 zettaflops.

        If it could become a million times faster by installing Windows, there's something very very wrong with the world.

    • Yeah! Don't you get it? It's accelerating at that petafloppage!
    • 2.57 petaflops per second

      floating point operations per second per second?

      "What is: the speed of a supercomputer falling off a cliff?"

      Trebek: "That's correct. You select next."

      "I'll take: 'Bad jokes' for 1,000, Alex."

  • Dual boots? (Score:4, Funny)

    by backslashdot ( 95548 ) * on Sunday November 14, 2010 @02:50PM (#34224470)

    So it dual-boots? Press the Option key or something to get into Windows and play Crysis?

    • by zlogic ( 892404 )

      This machine is probably capable of playing Crysis at a framerate higher than 20 fps.

    • by Nidi62 ( 1525137 )
      You assume even this computer is powerful enough to play Crysis on anything other than low settings.
  • Interesting (Score:3, Insightful)

    by quo_vadis ( 889902 ) on Sunday November 14, 2010 @03:01PM (#34224584) Journal
    It is interesting that there are six new entrants in the top 10. Even more interesting is the fact that GPGPU accelerated supercomputers are clearly outclassing classical supercomputers such as Cray. I suspect we might be seeing something like a paradigm shift, such as when people moved from custom interconnects to GbE and InfiniBand, or when custom processors began to be replaced by commercial off-the-shelf (COTS) processors.
    • Re:Interesting (Score:4, Informative)

      by alexhs ( 877055 ) on Sunday November 14, 2010 @03:23PM (#34224746) Homepage Journal

      Even more interesting is the fact that GPGPU accelerated supercomputers are clearly outclassing classical supercomputers such as Cray

      Funny that you mention Cray, as the Cray-1 [wikipedia.org] was the first supercomputer with vector processors [wikipedia.org], what GPGPUs actually are.

      • by vlm ( 69642 )

        Funny that you mention Cray, as the Cray-1 [wikipedia.org] was the first supercomputer with vector processors [wikipedia.org], what GPGPUs actually are.

        Cray-1 date of birth 1976

        CDC Star-100 date of birth 1974 (not a stellar business/economic/PR success, but it technically worked)

        ILLIAC IV design was completed in 1966. Implementation, however, had some problems. Debatable, but sort of true to say it was first booted up in 1972 but wasn't completely debugged for a couple years. As if there has ever been a completely debugged system.

        That's the problem with "first": there are so many of them.

        • by alexhs ( 877055 )

          I went by "The vector technique was first fully exploited in the famous Cray-1" from Wikipedia :)

          Apparently the difference between the CDC Star-100 and the Cray-1 is the addressing mode: the Star-100 fetched and stored its operands directly in main memory, while the Cray-1 worked out of vector registers (eight registers of sixty-four 64-bit words each).

          As for ILLIAC IV, Wikipedia says it "was finally ready for operation in 1976". It booted in 1972 but wasn't reliable enough to run applications at that time. It was usable in 1975, operating only Monday to Friday and hav
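
          As a rough illustration of that difference (a sketch in C, not actual Cray or Star-100 code; the function name and strip handling are purely illustrative): a register-based vector machine strip-mines a loop into chunks that fit its vector registers, while a memory-to-memory design streams the operands straight from RAM.

          #include <stddef.h>

          #define VLEN 64  /* a Cray-1 vector register held 64 elements of 64 bits */

          /* y[i] += a * x[i], processed in register-sized strips the way a
           * register-based vector machine would: load a strip into vector
           * registers, run one vector operation, store, move on. A
           * memory-to-memory machine would instead stream x and y directly. */
          void saxpy_strip_mined(size_t n, double a, const double *x, double *y)
          {
              for (size_t i = 0; i < n; i += VLEN) {
                  size_t strip = (n - i < VLEN) ? (n - i) : VLEN;
                  for (size_t j = 0; j < strip; ++j)   /* one "vector operation" */
                      y[i + j] += a * x[i + j];
              }
          }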

      • Vector processors in supercomputing are like bellbottoms: they constantly go in and out of style. Like you said, the first supercomputers were vector machines, but then with the rise of COTS hardware vector processors fell out of style for a while, came back briefly with the Earth Simulator, and now they're back with the GPU. Unlike previous vector processors, GPUs have a lot more restrictions (especially when it comes to memory bandwidth and latency), but also unlike previous vector processors, GPUs are dirt cheap and developme
    • Well, part of the problem is that the definition of supercomputer has become a little blurred, in particular with regard to the Top500. Many things people are calling supercomputers really aren't; they are clusters. Now, big clusters are fine; there are plenty of uses for them. However, there are problems that they are not good at solving. In clusters, processors don't have access to memory on other nodes; they have to send the data over. So long as things are pretty independent, you can break down the proble
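
      To make the point about explicit data movement concrete, here is a minimal MPI sketch in C (MPI being the usual message-passing interface on clusters; the 1024-element payload is just an illustrative placeholder). Rank 0 owns an array, and every other node gets a copy only because it is explicitly sent.

      #include <mpi.h>
      #include <stdio.h>

      #define N 1024

      int main(int argc, char **argv)
      {
          int rank, size;
          double payload[N] = {0};

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          if (rank == 0) {
              for (int i = 0; i < N; i++) payload[i] = i;   /* rank 0 owns the data    */
              for (int dest = 1; dest < size; dest++)       /* other nodes can't read  */
                  MPI_Send(payload, N, MPI_DOUBLE, dest, 0, /* it; it must be sent     */
                           MPI_COMM_WORLD);
          } else {
              MPI_Recv(payload, N, MPI_DOUBLE, 0, 0,        /* each node receives an   */
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);  /* explicit copy           */
          }

          printf("rank %d: payload[%d] = %g\n", rank, N - 1, payload[N - 1]);
          MPI_Finalize();
          return 0;
      }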

      Even more interesting is the fact that GPGPU accelerated supercomputers are clearly outclassing classical supercomputers such as Cray. I suspect we might be seeing something like a paradigm shift, such as when people moved from custom interconnects to GbE and InfiniBand, or when custom processors began to be replaced by commercial off-the-shelf (COTS) processors.

      Nope. GPUs are just today's version of the FPUs in the 90s. Right now, they're a feature omitted in most systems, with software emulation taking up the s

      • You will NOT be seeing a GTX 480 equivalent in your CPU die any time soon.

        From a thermal and "oh dear god so many transistors" perspective it's pretty much impossible, since as process improvements occur the GPUs just use more transistors to suit.

        FPUs were a drop in the bucket compared to modern decent GPUs.

        • You will NOT be seeing a GTX 480 equivalent in your CPU die any time soon.

          No you won't. AMD's original idea was to have an SMP motherboard, with an Opteron in one socket and a GPU in another. This would be eminently doable with a GTX 480, and anything else you can name. FPUs had their own sockets to start with, too.

          These days, having a GPU as a separate core on a chip would be the expected way to go, and there's no reason that couldn't happen. Sure, it's a lot of transistors, but with entirely separate

    • Your nostalgia is showing. The latest Crays don't immerse their boards in Fluorinert. They aren't hand-connected by teams of unusually small weavers. They don't use chips clocked ten times faster than anyone else's. They aren't physically small. They do, however, have nifty interconnects, which probably explains why Jaguar is 75% efficient and Tianhe-1A is 54% efficient.
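
      Those efficiency figures are just Rmax divided by Rpeak. A quick check in C; the Rmax/Rpeak values below are approximate November 2010 list figures recalled from memory, so treat them as ballpark rather than authoritative.

      #include <stdio.h>

      int main(void) {
          /* Approximate Nov 2010 figures, in teraflops (from memory). */
          double jaguar_rmax = 1759, jaguar_rpeak = 2331;
          double tianhe_rmax = 2566, tianhe_rpeak = 4701;

          printf("Jaguar:    %.1f%% of peak\n", 100 * jaguar_rmax / jaguar_rpeak);  /* ~75.5 */
          printf("Tianhe-1A: %.1f%% of peak\n", 100 * tianhe_rmax / tianhe_rpeak);  /* ~54.6 */
          return 0;
      }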

  • The last two sentences of the summary are the most interesting ones. If you thought that the rate of growth of memory and processing power in standard home/office computers was out of hand, just look at the supercomputers. These things are basically old when delivered, and their useful life is at most 3-5 years; after that nobody cares. And that is a pity considering how much these beasts cost, and they are mostly funded with public (tax) money, because running a business selling processor time from these things

    • by Teun ( 17872 )
      But being ahead of the herd has never been cheap and the rewards (or losses for being late) have made it a necessity.
    • by gmack ( 197796 )

      Pretty sure Render Farms have the same obsolescence problem and there are businesses that depend on them.

    • Re: (Score:3, Insightful)

      by Junta ( 36770 )

      Not really. Yes, there is something of an arms race for the top500, but even after the top500 no longer lists a system it will almost certainly still be in use by someone for practical purposes other than benchmarking.

      • by bbn ( 172659 )

        The weather institute had the most powerful supercomputer in this (small) country. It was not that old, but it was time to upgrade. The new supercomputer was now the fastest in the country, and the old one became the second fastest.

        What happened to the second-fastest supercomputer in the country? It was scrapped. They could not afford to keep running it due to the cost of powering it. They could not sell it or give it away, because the economics did not add up. Anyone needing such computing power was better of

        • by wisty ( 1335733 )

          I'm going to guess that the old supercomputer used Pentium 4s. An old Core2 system is probably still OK, but P4 systems are only good if your heaters need upgrading.

  • The US had best take the processor speed race very seriously. Who knows what kind of military or economic dominance a leg up in supercomputers might confer? And once on top, it can take a century or two to dislodge a leader in technology.

  • Finally a Windows box capable of running Duke Nukem Forever!

  • I need one (Score:3, Interesting)

    by florescent_beige ( 608235 ) on Sunday November 14, 2010 @03:58PM (#34225014) Journal

    I need one so I can recalculate my budget spreadsheet in a femtosecond. These nanosecond pauses are getting old.

    On a lighter note, so, why isn't this stuff changing our lives? I remember in the late 90's I read a story about how gigaflop computing would revolutionize aeronautics, allowing the full simulation of weird new configurations of aircraft that would be quantum leaps over what we had. Er, have.

    Can I answer my own question? I mean, can I answer two of my questions? No, make that three now. Anyway, my perspective is that the kinds of engineers who have the knowledge required to write this kind of software aren't software engineers. In fact, aeronautics is rife with some of the most horrifying software imaginable. Much of it being Excel macros. Seriously. I wrote some of it.

    • by Jeremy Erwin ( 2054 ) on Sunday November 14, 2010 @07:13PM (#34226530) Journal

      In fact, aeronautics is rife with some of the most horrifying software imaginable. Much of it being Excel macros.

      This is why Windows HPC is going to change everything

  • by Entropius ( 188861 ) on Sunday November 14, 2010 @04:03PM (#34225060)

    I do scientific high-performance computing, and there is simply no reason anyone would want to run Windows on a supercomputer.

    Linux has native, simple support for compiling the most common HPC languages (C and Fortran). It is open source and extensively customizable, so it's easy to make whatever changes need to be made to optimize the OS on the compute nodes, or optimize the communication latency between nodes. Adding support for exotic filesystems (like Lustre) is simple, especially since these file systems are usually developed *for* Linux. It has a simple, robust, scriptable mechanism for transferring large amounts of data around (scp/rsync) and a simple, unified mechanism for working remotely (ssh). Linux (the whole OS) can be compiled separately from source to optimize for a particular architecture (think Gentoo).

    What advantage does Windows bring to an HPC project?

    • The only "advantage" is when you're defaulted to Windows because an ISV has a required shrink wrap application available only for Windows.

    • by devent ( 1627873 )
      Maybe Exchange, Outlook and ActiveDirectory?
    • Duh. Obviously to run Crysis.

    • You will probably laugh, but banks and financial firms do not: Excel spreadsheets.
      Microsoft's HPC solution allows a spreadsheet to be distributed across many nodes.
      Trust me: there is *huge* money there (alas, not for you, not for me, and not for science).
      It's much cheaper for a bank to rent a supercomputer to calculate a heavy spreadsheet written by a programming-challenged but money-wise CPA than to hire a money-challenged, HPC-wise guy to rewrite (and perpetually modify on short notice) that spreadsheet in FORTRAN.

      • Re: (Score:3, Informative)

        by gartogg ( 317481 )

        Any other application built for Windows has the same issue.
        I work for a company doing modeling for insurance, and the software for catastrophe modeling (RMS, AIR, EQECAT) is all Windows-only. The simulations and models take days to run for a large data set, the software/modeling companies aren't about to switch off of Windows for the software licensed from them, and there is nowhere else to go.

        • Re: (Score:3, Informative)

          by kramulous ( 977841 )

          As a professional HPC programmer, using "days to run for a large data set" is absolutely meaningless to me.

          Define large. Means different things to different people.

    • by Deviant ( 1501 ) on Sunday November 14, 2010 @08:29PM (#34227010)

      One of the big developments of late has been data mining of the data from your ERP system / data warehouse to answer questions about your clients/business and to find interesting patterns in your data. Couple this with the fact that businesses are trying to retain more and more data in a live database to make this data mining deeper and more interesting, and the need for massive database servers with the power to run some crazy complex queries/reports is on the rise.

      The popular example of this sort of thing is Wal-Mart, which retains everything in electronic form and can do scary things like pull up pictures of you by correlating its digital security footage with your credit card purchase at a point in time at a particular register, or track the differences in sales of individual products during unusual events like hurricanes.
      http://developers.slashdot.org/article.pl?sid=04/11/14/2057228 [slashdot.org]

      Many businesses run Microsoft SQL Server as the backend for their ERP system and/or as their primary database. This would allow them to build a nice little HPC system to do the sorts of scary things with massive amounts of data that they have been wanting to do. I end with this funny cartoon on the subject.
      http://onefte.com/2010/09/21/target-markets/ [onefte.com]
       

  • The summary clearly states "Linux retains a spot in the top-5", then goes on to say that China has two "top spots", with America and Japan only having one spot apiece. And while that may be true if you limit it to a "top 4", America is tied with China if you count the number-five position. So why does the OP pull this sleight of hand, only counting the top 4 as the "top spots" after making reference to the top 5 as the measure of top positions? Looks like bias to me.

  • by Beelzebud ( 1361137 ) on Sunday November 14, 2010 @04:46PM (#34225438)
    But will it play Flash Video smoothly at full screen?
  • Top machine: 2500 x 10^12 floating point ops x 64 bits = 160 x 10^15 bits per second

    Human brain: 10^11 neurons x 10^4 synapses x 100 Hz firing rate = 100 x 10^15 bits per second.

    I am not saying it will wake up tomorrow and launch Skynet, but until now inadequate hardware was a barrier to human-level AI.

    And yes, I am quite aware that a synapse firing is not directly comparable to a binary bit. Call this a rough comparison.
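
    For what it's worth, the arithmetic above does check out. A minimal C version of the same back-of-the-envelope estimate (the neuron, synapse, and firing-rate counts are the rough figures given above, not measurements):

    #include <stdio.h>

    int main(void) {
        /* Machine: ~2.5 petaflops, 64 bits per floating-point result. */
        double machine_bits = 2500e12 * 64;        /* 1.6e17 bits/s */

        /* Brain, per the rough figures above: 1e11 neurons x 1e4
         * synapses x 100 Hz, counting each synapse event as one bit. */
        double brain_bits = 1e11 * 1e4 * 100;      /* 1.0e17 bits/s */

        printf("machine: %.0f x 10^15 bits/s\n", machine_bits / 1e15);  /* 160 */
        printf("brain:   %.0f x 10^15 bits/s\n", brain_bits / 1e15);    /* 100 */
        return 0;
    }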

  • From the article:

    "While the computer broke a petaflop with both operating systems, it achieved a faster score with Linux, denying Microsoft its first official petaflop ranking."

    If the machine broke a petaflop with both operating systems (Linux and Windows), how was it denied an official petaflop ranking? It achieved it; why doesn't it count? Is each machine only allowed one ranking? Seems sort of odd; the ranking of the same machine with different OSes would be interesting, no?
