
New Top 500 Supercomputer List

geaux and other readers let us know that the new Top 500 Supercomputer list is out. The top two both break the Petaflops barrier: LANL's IBM "RoadRunner" and ORNL's Cray XT5 "Jaguar." (Contrary to our discussion a few days back, IBM's last-minute upgrade of RoadRunner salvaged the top spot for Big Blue. Kind of like bidding on eBay.) The top six all run in excess of 400 Teraflops. HP has more systems in the top 500 than IBM, reversing the order of the previous list. Both Intel and AMD issued press releases crowing over their wins, and both are correct — AMD highlights its presence in 7 of the top 10, while Intel boasts that 379 of the top 500 use their chips.
  • has the Top 500 Supercomputer List been slashdotted already?
    • by Kagura ( 843695 ) on Monday November 17, 2008 @07:26PM (#25795095)
      SETI@home [berkeley.edu] gets 495 teraFLOPS, according to this site: http://boincstats.com/stats/project_graph.php?pr=sah [boincstats.com]

      Sure, it's not one supercomputer, but it still does more calculations for one purpose than any other single supercomputer can.
      • SETI@home gets 495 teraFLOPS, according to this site: http://boincstats.com/stats/project_graph.php?pr=sah [boincstats.com]
        Sure, it's not one supercomputer, but it still does more calculations for one purpose than any other single supercomputer can.

        While I can't see the actual article, if the summary is correct, then most of the top 6 computers run faster than that. While it's an impressive feat for SETI, there are faster computers in a single unit now.

        • by Kagura ( 843695 )

          While I can't see the actual article, if the summary is correct, then most of the top 6 computers run faster than that. While it's an impressive feat for SETI, there are faster computers in a single unit now.

          From the article: The top six all run in excess of 400 Teraflops.

          Hmm, no, the summary does not say that at all. Maybe you misread the '500'? ;)

          • by CaptainPatent ( 1087643 ) on Monday November 17, 2008 @07:39PM (#25795251) Journal

            Hmm, no, the summary does not say that at all. Maybe you misread the '500'? ;)

            Come again? FTS:

            The top two both break the Petaflops barrier

            AKA 1000 TeraFlops.

            The top six all run in excess of 400 Teraflops.

            (I don't know how many are over the 500 TFlop barrier, but those are the computers between the 1 Petaflop and 400 TFlop marks. ;-)

          • by pablomme ( 1270790 ) on Monday November 17, 2008 @07:42PM (#25795297)

            Hmm, no, the summary does not say that at all. Maybe you misread the '500'? ;)

            if the summary is correct, then most of the top 6 computers run faster than that

            Maybe you misread the 'most'? 8)

            Whatever the exact number, the summary clearly says that there are at least 2 supercomputers achieving more than 1 Petaflop, which is over twice SETI's performance. So your claim that no single supercomputer is faster than SETI is still incorrect.

      • by grub ( 11606 )

        Sure, it's not one supercomputer, but it still does more calculations for one purpose than any other single supercomputer can.

        Maybe, but the CPU latency totally sucks bag compared to anything on the Top 500 :)
      • Imagine how fast they would be if they weren't riddled with porn!
      • that's an interesting question, since most modern supercomputers are just extremely large clusters with very high I/O bandwidth running on a high-speed network. such systems allow for optimal large-scale parallel processing, which is a defining characteristic of supercomputers.

        supercomputing clusters are still running separate instances of the OS on each node in the cluster. so aside from the geographic proximity of the cluster nodes and each node dedicating all of its processing power to the cluster, the

        • Wrong. There is a world of difference between a supercomputer and a cluster. The network is not simply "faster" or "higher bandwidth." These things are custom-designed to fit the type of processing a supercomputer needs to do. These are not embarrassingly parallel codes. In fact, that's why the Top 500 is bogus. It ranks computers based on their ability to run an embarrassingly parallel code. We really ought to be measuring with something like HPCC [utk.edu].
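          To make the interconnect point concrete, here is a toy scaling model (a sketch with invented constants, not a benchmark of any real machine): an embarrassingly parallel job barely touches the network, while a tightly coupled job pays a communication cost on every step that eventually swamps the gain from adding nodes.

          ```python
          # Toy model (all constants invented): runtime of a job split across
          # N nodes, plus an optional fixed network cost paid on every step.
          def runtime(nodes, compute_s=1000.0, comm_per_step_s=0.0, steps=1000):
              return compute_s / nodes + steps * comm_per_step_s

          for nodes in (1, 100, 10_000):
              ep = runtime(nodes)                         # embarrassingly parallel
              tc = runtime(nodes, comm_per_step_s=0.001)  # 1 ms of communication/step
              print(f"{nodes:6d} nodes: EP {ep:9.4f} s   coupled {tc:9.4f} s")
          ```

          At 10,000 nodes the embarrassingly parallel job finishes in 0.1 s, while the coupled job spends a full second just talking over the network; shrinking that gap is exactly what the custom interconnects are for.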
      • it's true and it's not true. if you have a chunk (unit, package, whatever you call it) to process, you wouldn't have it processed as fast as on, let's say, a 200-teraflops cluster. there are two main reasons for this:

        1) all users update their work units at intervals, not immediately. (sure, this could be changed in the implementation, but given the character of distributed computing, you don't want to)

        2) some units need to be processed whole, all at once, meaning you cannot break them into pieces
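        A rough sketch of point 2 (purely illustrative numbers, not modeled on any real project): an indivisible unit runs on a single volunteer node, so its turnaround is bounded by that node's speed plus the reporting interval, no matter how large the grid's aggregate throughput is.

        ```python
        # Toy comparison (invented numbers): one indivisible work unit on a
        # volunteer grid vs. on a tightly coupled cluster.
        WORK_UNIT_FLOP = 1e15        # 10^15 floating-point ops, cannot be split

        grid_node_flops = 10e9       # a single volunteer machine: ~10 GFLOPS
        report_interval_s = 3600.0   # results uploaded once an hour

        cluster_flops = 200e12       # the whole 200-TFLOPS cluster on one job

        grid_s = WORK_UNIT_FLOP / grid_node_flops + report_interval_s
        cluster_s = WORK_UNIT_FLOP / cluster_flops
        print(f"grid: {grid_s / 3600:.1f} h   cluster: {cluster_s:.1f} s")
        # grid: 28.8 h   cluster: 5.0 s
        ```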
      • by PitaBred ( 632671 ) <slashdot@pitabre ... org minus distro> on Monday November 17, 2008 @08:16PM (#25795651) Homepage

        On the other hand, the Folding @ Home project, which is actually doing something useful with all those cycles, has broken the PetaFLOP [1up.com] mark, and did so over a year ago.

        • Re: (Score:3, Insightful)

          that's a pretty shortsighted statement. (and it also happens to be why the SETI project has had such a hard time receiving funding in the past.)

          just because it hasn't produced any practical results yet doesn't mean it's not useful. unless you're assuming that we are alone in the universe, which is a pretty big assumption, the SETI project is an incredibly important scientific endeavor. and through SETI@Home, the resource costs of the project have been largely subsidized by volunteers who're contributing thei

          • When SETI gets a signal from extraterrestrial life, it'll be decades (at least) before we even get a signal back.

            In that time, Folding@Home could find useful data about hundreds of proteins.

            Seti@Home has a hard time receiving funding for a good reason. We know for a fact that the data gained from Folding@Home will be of use. We don't know if Seti@Home will ever find a signal, let alone one we could reply to or do anything useful with.
            • by lysergic.acid ( 845423 ) on Monday November 17, 2008 @09:34PM (#25796507) Homepage

              what does it matter if we can't communicate back when/if we find a signal? the impact such knowledge would have on human society, on human history, would be tremendous just by the philosophical implications alone.

              should we stop all fundamental science research just because it doesn't have immediate practical or technological applications? there's something to be said for the search for knowledge for its own sake. and not knowing whether or not you will ever find an answer to a particular scientific question, or whether you will succeed in a particular objective, is hardly a good reason for not even trying.

              do you think the inventor of the first microscope knew beforehand that he was paving the way for the scientific field of microbiology? do you think the first bacteriologists knew that their pioneering work would eventually lead to the discovery of antibiotics and revolutionize modern medicine?

              it's impossible to know exactly what consequences will follow the discovery of extraterrestrial intelligence, but it doesn't take a genius to realize that, whatever happens, it will have a profound impact on how humanity perceives itself and its relationship with the cosmos. whether we're alone in the universe is one of the fundamental questions of science, philosophy, and theology. just because answering it will not make computers run faster or cure cancer doesn't mean it's not worth asking.

              • of course we shouldn't halt funding. But (and this from a SETI geek) I think that the funding ratios are about right. I would prefer to vastly reduce funding on lots of things, but reduction does not mean elimination. I would like to see funding increase in lots of places.

                That all said, I think what we (in the States at least) need is a law stating that
                1) Congress shall vote on only one bill at a time (no riders).
                2) Congress shall officially call BS on pork spending.
                3) military funding shall be diverte

            • by Surt ( 22457 )

              Of course, if the alien signal explains how to break the speed of light barrier, the secret to eternal life, or even just a cure for cancer, we'd be bummed if we missed it.

              • by caluml ( 551744 )

                Of course, if the alien signal explains how to break the speed of light barrier, the secret to eternal life, or even just a cure for cancer, we'd be bummed if we missed it.

                Yep: "Sorry boss, was just tuning around the bands, trying to hear the latest stockmarket news. Guess I missed it. My bad. And unemployed."

          • Folding at home IS part of searching for a cure for AIDS. You've just essentially said that if everybody shared the attitude of Pitabred, they would all be running software that helps find a cure for AIDS, because they had this bad attitude that didn't see the point in bothering to help find a cure for AIDS. Not only did you insult him for no good reason, but that doesn't even make any sense, and will probably trigger one of those damned posts about the Chewbacca defense.
            Oh, and

            • Re: (Score:3, Insightful)

              by Kagura ( 843695 )
              If you aren't devoting all of your resources towards combating the heat death of the universe [wikipedia.org], then pray tell, what sort of short-sighted project are you wasting your time and money on?

              Why do you have internet access, or a computer at all? You should be out volunteering your time for better causes, rather than posting to an internet forum that nobody will read in three days' time.

              Or, maybe, we can balance doing several of these things at once. SETI@home, Folding@home, Einstein@home, and whatever tickl
            • Re: (Score:1, Offtopic)

              by Kagura ( 843695 )
              Also, I have one final thing [wikipedia.org] I want you to consider. This is Chewbacca. Chewbacca is a Wookiee from the planet Kashyyyk. But Chewbacca lives on the planet Endor. Now think about it; that does not make sense!

              Why would a Wookiee, an eight-foot tall Wookiee, want to live on Endor, with a bunch of two-foot tall Ewoks? That does not make sense! But more important, you have to ask yourself: What does this have to do with this case? Nothing. Ladies and gentlemen, it has nothing to do with this case! It does not
            • what?
          • I don't think he's saying it's bad because they haven't found aliens yet. I think he's saying saving lives might be a bigger deal than discovering alien life. I would agree that saving lives is a bigger deal, but I would disagree with how he implies that discovering alien life is not useful at all. Just maybe not as useful as saving our own lives.
          • if everyone shared your attitude, no one would bother searching for a cure for AIDS, and manned flight would have been given up on long before it was even attempted.

            Almost nobody funded AIDS research until the disease was shown to exist at all. There were almost no dollars in flight research until it was shown to be possible either.

            What's wrong exactly with funding things that are known to be possible over things that aren't until they're at the very least proven likely?

            • What about the Wow! signal [wikipedia.org]? Not exactly conclusive proof of alien life, I know. But IMHO SETI is something worth devoting some effort to. Maybe not as much as, say, Folding@Home, but I feel it's important enough to dedicate some effort to.

              Actually, on my quad core I run both Seti and Folding, when I have it on and am leaving it alone (not so much at the moment).

              • It's a personal value judgement, obviously, between decoding semi-random background-radiation noise that may or may not contain anything intelligent, and attempting to learn how something we know happens actually occurs and solving known problems with that knowledge.

        • After seeing all the naysayers reply, I had to throw my $0.02 in: I run about one Folding simulation a night on my PS3 and I'm glad to be contributing to real scientific research.

      • Re: (Score:3, Informative)

        by bmgoau ( 801508 )

        Folding@home is sustaining over 4.2 Petaflops and rising quickly.
        You can see statistics here: http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats [stanford.edu]
        And a nice graph charting the rise here: http://img388.imageshack.us/my.php?image=foldinghome20kx2.png [imageshack.us]

        I also enjoy reading the Wiki article on the NSA's headquarters: http://en.wikipedia.org/wiki/National_Security_Agency#Facilities [wikipedia.org]

        It goes on to talk about their own private chip fab, and how they are using an inordinate amount of power. I can only assume they a

    • has the Top 500 Supercomputer List been slashdotted already?

      Indeed... sure they hosted the list on a Cray supercomputer, but due to budget cuts they hooked it up to a 56k modem.

    • Sadly they couldn't use one of them to host their page.

    • Yeah, they should've just included the number of Linux systems in the summary instead of forcing everyone to RTFA.

      • Yeah, they should've just included the number of Linux systems in the summary instead of forcing everyone to RTFA.

        Why - is it not 500? Are there still High-Performance Windows experiments going on?

      • by ekimd ( 968058 )
        Linux: 391 - 78%
        • According to this [top500.org] it's on 439 of them, 87.80%. But that seems a bit general; maybe it's counting Solaris as Linux? Or would that be under Unix?

  • by MosesJones ( 55544 ) on Monday November 17, 2008 @07:20PM (#25794991) Homepage

    Wile E. Coyote, certified genius, was said Monday to be disappointed that the Road Runner had yet again managed to elude him, but denied that the Road Runner had demonstrated more brains than him.

    "Although it may appear that the Road Runner is smarter due to the fact that I have fallen off cliffs, blown myself up and run into brick walls in fact I am significantly more intelligent and am an ACME top engineer"

    IBM's Media Minder for Road Runner passed on a single comment

    "MIP MIP"

  • by Anonymous Coward on Monday November 17, 2008 @07:20PM (#25794999)

    Where's Skynet?

  • Only the top 23 run Vista well.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      #10 is running Windows HPC.

  • NVIDIA Tesla [nvidia.com] -- is one on the list? If not, it should be.
    • by Surt ( 22457 ) on Monday November 17, 2008 @10:34PM (#25796969) Homepage Journal

      No, because it takes thousands of those to match what the top computers can do.

      Look at the specifications for the high end one:
      http://www.nvidia.com/object/tesla_s1070.html [nvidia.com]
      345 GFlops

      The bottom of the top500 list is now 12.64 Tflop/s. So to make it to the bottom of the top500, you need 37 top-of-the-line Teslas (and that assumes you lose nothing to network issues, which isn't true). So call it at least 40 Teslas to get to the bottom of the list.

      To get to the top of the list, you'd need about 3500 Teslas.
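      Checking that arithmetic (a quick sketch; the 345 GFLOPS per Tesla figure is the parent's, and the Rmax values are the November 2008 list's #500 and #1 entries):

      ```python
      import math

      TESLA_GFLOPS = 345.0        # one Tesla S1070, per the parent post
      BOTTOM_GFLOPS = 12_640.0    # entry #500: 12.64 TFlop/s
      TOP_GFLOPS = 1_105_000.0    # entry #1, RoadRunner: ~1.105 PFlop/s

      print(math.ceil(BOTTOM_GFLOPS / TESLA_GFLOPS))  # 37, before network losses
      print(math.ceil(TOP_GFLOPS / TESLA_GFLOPS))     # 3203, hence "about 3500"
      ```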

      • You're even understating the problem a bit there.

        The hard part isn't getting 3500 Teslas; the hard part is interconnecting them in a functionally useful way.

        Interconnecting 3500 Teslas at all isn't difficult. Interconnecting them so that they can produce real results for real problems, and be programmed against a realistic API, is the problem.

        At several points, the Top 500 list's top computers have actually had less hardware than some of the lower entries, but better interconnect technology to take ad

    • by Ataru ( 50540 )

      Yes, at number 29.

  • by Gates82 ( 706573 ) on Monday November 17, 2008 @07:48PM (#25795349)

    HP has more systems in the top 500 than IBM

    Something tells me it is more impressive to have a majority stake in the top 10 supercomputers than in the remaining 490.

    --
    So who is hotter? Ali or Ali's Sister?

    • Re: (Score:3, Interesting)

      more impressive, yes. more profitable? no.

      but i'm glad to see AMD dominating the top 10 spots. i've always had a, perhaps irrational, affinity for AMD as the consummate underdog. plus, they always seemed to have a lower cost-to-performance ratio than Intel chips.

    • by fatphil ( 181876 )
      I'd wager HP makes more money from its presence in the 10-500 zone than IBM does. However, impressiveness and money are not necessarily correlated.

      I'm not so sure there's much that's impressive here, as the benchmark used is pitifully out of date. Most of the big boxes are simple evolutions. There have been very few revolutions for several years. (Earth Simulator, perhaps, and the very low-power POWER clusters.)
      • by Gates82 ( 706573 )

        Myself, I feel profit margin is more important than profit total. I would rather own Rolls-Royce than GMC. (I suppose GMC is probably a poor example now, since they do not post a profit.)

        --
        So who is hotter? Ali or Ali's Sister?

      • While the number itself is highly synthetic, it isn't necessarily out of date any more than any other single benchmark. HPCC attempts to score more stuff through a more varied test suite, but any one score of the suite is equally capable of being called irrelevant or uninteresting.

        That said, there are a number of interesting items in the list. Cray's #2 showcased AMD's current generation's IO capabilities to the extent they've never been showcased before (right before they lose that exclusive benefit with

    • by Junta ( 36770 )

      Technically more impressive? An absolute given, as the current top few offer unique technologies (Cray's interconnect, IBM's processors).

      From a profitability standpoint, the closer a vendor is to the top of the list, the more they undoubtedly had to give up margin-wise for the bragging rights.

      From a marketability standpoint, things get a tad awkward I think for the vendors at the top. IBM has Cell and BlueGene showcased, which we all understand can be used to great ends at the expense o

      • by Sangui5 ( 12317 )

        It's far worse than just shaving away at the profit margin. Bragging rights for being the processor (Intel vs AMD vs IBM), the interconnect (Myrinet vs InfiniBand vs Altix), or the assembler (IBM vs HP vs Dell vs etc etc etc) are highly coveted. In general, the top end supercomputers are sold at such heavy discounts that the profit margin may well be a loss margin instead. This is especially true if the machine is bought by an academic institution; universities *never* pay retail for this sort of thing.

        I

    • I disagree; places in the top ten are too tempting to buy for bragging rights. After all, if I were in the market for a small supercomputer, I would be swayed by the argument: we built the world's #1! Having more of the top 500 tells me that independent people are more likely to choose HP than they are to choose IBM.

    • Depends on which way you swing.
    • I assume you are not a supercomputer user. First of all, the top 10 is short-lived and just for bragging rights.

      Secondly, as a user, you don't really care. At no point will any given user be able to reach the full capacity of the computer anyway. What counts is how many users are sharing the machine: if your program has to wait a week in the queue, you could have already done your calculation on a smaller machine. And how well the supercomputer is suited to your needs. IBM machines can score high in the Linpac

  • Way to go guys - you just slashdotted top500.org :'(

    • by zoefff ( 61970 )

      Way to go guys - you and I just slashdotted top500.org :'(

      Fixed that for you. ;-)

  • For the record, how long would one of them take to crack 256-bit AES?
    • Re:AES (Score:5, Informative)

      by Iamthecheese ( 1264298 ) on Monday November 17, 2008 @08:32PM (#25795805)
      A device that could check a billion billion (10^18) AES keys per second would require about 3*10^51 years to exhaust the 256-bit key space. (Wikipedia) [wikipedia.org]

      A round can now be done with 16 table lookups and 12 32-bit exclusive-or operations, followed by four 32-bit exclusive-or operations in the AddRoundKey step.

      (Wikipedia) [wikipedia.org]

      Assuming 14 rounds for your 256-bit encryption, that's 42 operations per round. At a trillion operations/second you get 1*10^12/42 (love that number) = 23,809,523,809; call it 24 billion rounds per second. Divide by a billion billion to try to match Wikipedia's number: 1/24,000,000,000 = 41*10^-10. We can add those nine zeros straight over to get 41/3*10^61 years.


      Did I do that right?
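      Redoing that estimate as a sketch (note that the quoted Wikipedia text sums to 16 + 12 + 4 = 32 operations per round rather than 42, and the trillion-ops-per-second machine is the poster's hypothetical):

      ```python
      SECONDS_PER_YEAR = 3.156e7

      ops_per_key = (16 + 12 + 4) * 14   # 32 ops/round * 14 rounds = 448
      keys_per_sec = 1e12 / ops_per_key  # ~2.2e9 keys/s at 10^12 ops/s

      keyspace = 2**256                  # ~1.16e77 keys
      years = keyspace / keys_per_sec / SECONDS_PER_YEAR
      print(f"{years:.1e}")              # ~1.6e60 years to exhaust the keyspace

      # Sanity check against Wikipedia's 10^18 keys/s device:
      print(f"{keyspace / 1e18 / SECONDS_PER_YEAR:.1e}")  # ~3.7e51 years
      ```

      Either way, the answer is "far longer than the universe will be usable", which is the point.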
      • Not to diminish the scale any, but...

        The average time to crack it wouldn't require exhausting the key space; on average you'd find the key after searching about half of it.

        Also, the type of operation makes a huge difference. Top500 measures a specific set of 64-bit precision operations. These systems may be much faster at the AES calculation.

        That said, no matter how you fiddle the math, it comes out to a uselessly long time even assuming they had *one* protected piece of data they needed to function (given my lapto

      • Re:AES (Score:4, Interesting)

        by Kjella ( 173770 ) on Tuesday November 18, 2008 @12:27AM (#25797911) Homepage

        We can add those nine zeros straight over to get 41/3*10^61 years. Did I do that right?

        Close enough. The bigger problem (yes, really) is that you'll be long out of energy even if you built a Dyson sphere around the Sun and captured 100% of its energy at 100% efficiency for the rest of its lifetime, since each operation requires at least kT/2 ≈ 2*10^-23 J under ideal conditions. Even at E=mc^2 you'll only get 8.6*10^69 ≈ 2^233 operations out of the Sun. So even if you could build an ungodly massively parallel computer that could break AES-256 in a reasonable time, you wouldn't have the power to operate it.
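        Checking that bound (a sketch; like the parent it assumes an ideal 3 K heat sink and perfect conversion of the Sun's entire mass to energy):

        ```python
        import math

        k = 1.381e-23         # Boltzmann constant, J/K
        T = 3.0               # cosmic background temperature, K
        e_per_op = k * T / 2  # ~2.1e-23 J per operation, the parent's kT/2

        e_sun = 1.989e30 * (2.998e8) ** 2  # Sun's mass-energy, ~1.8e47 J

        ops = e_sun / e_per_op
        print(f"{ops:.1e} = 2^{math.log2(ops):.0f}")  # ~8.6e69 ops, about 2^232
        ```

        That is comfortably short of the ~2^255 trial decryptions an average AES-256 brute force would need, so the conclusion stands.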

        • each operation requires at least kT/2 ≈ 2*10^-23 J under ideal conditions

          Reference? Not that I don't believe you, but I'd like to read up on how a number like that is computed.

          • by Kjella ( 173770 )

            Short answer: Check out http://en.wikipedia.org/wiki/Reversible_computing [wikipedia.org]

            Actually it states the VNL bound as kT*ln2; kT/2 is the energy in one microscopic degree of freedom, but that little constant hardly matters. I was using an ideal 3 Kelvin at the background radiation level (any lower and you'll spend more on cooling). Basically, it's thermodynamics that states that for every bit of state lost, at least kT*ln2 of energy must be dissipated. As you can read, there's a lot of hand-waving about reversible computing, but it woul

  • Too bad there's no Buy It Now option on the RoadRunner. :(
  • by Nefarious Wheel ( 628136 ) on Monday November 17, 2008 @08:04PM (#25795541) Journal
    I await with trepidation
    For a Cray upon my wrist
    Because life's a simulation
    So I'll give the knobs a twist.

    Now reality has schism'd
    And I wonder if I'm missed
    Because I've gone into recursion
    And my manager is pissed.

    Burma Shave

  • bit of a shame that a site like top500.org, which symbolizes dream horsepower and advertises 24/7 HA systems, cannot withstand a sudden increase in page hits. those who preach water but drink wine...
  • by Junta ( 36770 ) on Monday November 17, 2008 @08:29PM (#25795783)

    I don't think IBM/Los Alamos suddenly plopped down something in response to Cray at the last minute (frankly, I don't think you could move that fast).

    Any hardware upgrades were almost certainly planned, and if there were none, they've had 6 months of tuning to extract better numbers out of what they had.

    I wouldn't be surprised if in 6 months the Cray, without any additional hardware, managed a better number than RoadRunner without additional hardware. However, such a victory is diminished somewhat by the energy the Cray undoubtedly consumes to achieve the performance it does get.

    • right you are. The contracts for these machines were signed a couple of years ago. They might have sped things up, in order to get on the top500 list, but they didn't add hardware just for a little showmanship. These labs can afford to put out a bunch of press releases related to top500, but they don't care enough about it to spend many millions of dollars.

      The list reflects the computers, the computers don't exist for the sake of the list.

  • by NimbleSquirrel ( 587564 ) on Monday November 17, 2008 @10:21PM (#25796867)
    As others have pointed out, IBM didn't just boost the RoadRunner in response to Cray's claim of being the fastest: they have been quietly adding to it all along.

    To me, the Cray Jaguar is actually two machines: an XT4 cluster (which was around 400 Teraflop/s back in June) and the XT5 cluster. Cray completely redesigned the switching architecture, the memory management, and the cooling to create the XT5. The XT5 really is a completely different machine. Cray seem determined to take the #1 spot, but combining the XT4 and XT5 clusters for a better overall measurement has the disadvantage of making the XT5 look less efficient.

    IBM has retained the crown with a system that has fewer processors and uses half the energy. By comparison, the Jaguar is a lumbering beast that uses far more power and requires far more real estate. However, if you look at the performance of the XT5 alone, those figures get more competitive.

    • Cray ran HPL only on the XT5 for Top500. Unlike the earlier press release about the 1.64 petaflop machine, which is a combination of XT4 and XT5 cabinets, the peak listed at Top500 is 1.38 petaflops. This is accounted for solely by the 200 cabinets of XT5.
    • by joib ( 70841 )

      They're not that different. The network is compatible, the physical design is compatible; e.g. you can slot XT4 nodes into an XT5 enclosure (which is what Cray recommends if you have jobs with high bandwidth requirements).

      From an end-user perspective the only difference is that XT5 nodes have twice the number of cores (2 sockets per node instead of 1).

    • I suspect Cray is much more interested in getting paid to build really complex supercomputers. Supercomputer vendors don't compete in the top500; supercomputer buyers do. Cray could build a 2 petaflop computer, or 5 or 20, if a customer came along with a large enough check. So could IBM, or HP, or SGI. It's really up to the customer.

      That said, customers are only marginally interested in getting to the top of the top500. Those computers have a job to do, and that job isn't getting to the top of some artifici

    • The press releases from Oak Ridge and Cray (unlike the summary on Slashdot) were careful to claim Jaguar as the fastest computer for "open science". They were, no doubt, aware that Los Alamos might have bought more hardware since June.

      The machine at Los Alamos is used for classified Department of Energy projects; probably simulating nuclear warhead functionality on the aging pile of B83s and B61s sitting in the US arsenal.

      The machine at Oak Ridge gets used for unclassified research that ends up in peer-revi

    • Cray seem determined to take #1 spot, but combining the XT4 and XT5 clusters for a better overall measurement has the disadvantage of making the XT5 look less efficient

      As the saying goes, it isn't whether you win or lose, it's how you play the game.

      But to come oh so close and yet so far. It just makes me want to spout cliches all day long.

  • linux stats (Score:5, Informative)

    by iplayfast ( 166447 ) on Monday November 17, 2008 @11:35PM (#25797535)

    Here's the droids you've been looking for:
    http://www.top500.org/charts/list/32/os [top500.org]

    Linux has dropped to only 77.8%, although specific other Linux versions make up another 5% (approx.)

    And Windows has had a huge increase, to 0.2%

    Good going! :)

  • by flyingfsck ( 986395 ) on Monday November 17, 2008 @11:39PM (#25797565)
    The most amazing thing on the Top500 site is that number 10 runs Windows! W00T! Go, Microsoft, Go!
  • Contrary to our discussion a few days back, IBM's last-minute upgrade of RoadRunner salvaged the top spot for Big Blue.

    While the IBM machine is still technically the world's fastest, the referenced article [slashdot.org] claims that Jaguar is the world's fastest supercomputer for OPEN SCIENTIFIC RESEARCH, which is true [ornl.gov]. The Roadrunner system, on the other hand, is mainly used in classified research.

  • Other than for historical comparisons, what's the point of tracking the top "500" when nearly half the list turns over between June and November?

    The 500th computer on the 11/2008 list hits an Rmax of 12.6 TFlops. That computer would have been #270 in June, so all computers below 270 in June were essentially wiped off the list in 6 months (not accounting for the ones that upgraded of course).

  • From the list http://www.top500.org/stats/list/32/osfam [top500.org]
      the Linux share is 87%, Windows is 1% (less than the desktop share of Ubuntu).

  • The first supercomputer using nvidia GPUs + CUDA API makes its debut at #29 http://www.top500.org/system/9853 [top500.org]

    More info here: http://www.hpcwire.com/topic/processors/Tokyo_Tech_Boosts_TSUBAME_Super_with_GPUs.html [hpcwire.com]

  • Remember when Apple used to claim that they were in the supercomputer league? Now I don't even see Apple in the top 500--HP and IBM make up nearly 75% of the total, and even a bunch of Dells are listed.

    Perhaps Apple should try networking a cluster of iPods or Apple TV units.

    Where are the days when Apple ran the "supercomputer in a box" ads? The Mac OS barely even qualifies as 64-bit these days.

    • They moved to consumer space and Intel processors. The one impressive showing in the Top500 they had was probably at least in part to prove 'see, PPC is powerful, buy us instead of Intel systems!'. Now they don't need that market leverage to vindicate their desktop processor choice, and it's simply not worth it to them as it has no relevance to the high-margin market they are successfully pursuing.

      As for 'barely even qualifies as 64-bit', that seems a strange characterization. I would say it either

  • I could have sworn they claimed an Rpeak of 1.6 petaflops for Jaguar, yet Top500 lists it as 1.3. Or were they unable to get the whole configuration to run a single job in time?

    Also, I guess Los Alamos must have added some hardware rather than just tuned, because the Rpeak rose on RoadRunner (given the nature of Rpeak, only new hardware can explain increases).

  • Both Intel and AMD issued press releases crowing over their wins, and both are correct -- AMD highlights its presence in 7 of the top 10, while Intel boasts that 379 of the top 500 use their chips.

    There are lies, damn lies and statistics (or as they like to call it now, "Marketing").

"The following is not for the weak of heart or Fundamentalists." -- Dave Barry

Working...