Supercomputing

Latest Top 500 Supercomputer List Released

chrb writes "BBC News is reporting on the release of the June 2010 Top 500 Supercomputer list. Notable changes include a second Chinese supercomputer in the top ten. A graphical display enables viewing of the supercomputer list by speed, operating system, application, country, processor, and manufacturer."
  • by somersault ( 912633 ) on Monday May 31, 2010 @09:36AM (#32406414) Homepage Journal

    I think power requirements are probably the main problem, rather than the hardware. It must be pretty trivial to add more cores to a system that's already using tens of thousands of them, but you're going to need a lot of power.

    These systems are only really getting "faster" for parallel tasks too - if you gave them a sequential workload then I assume they would fare worse than a high end gaming machine!
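    That "faster only for parallel tasks" point is basically Amdahl's law: the serial fraction of a job caps the speedup no matter how many cores you add. A quick illustrative sketch (the fractions and core counts below are made-up examples, not figures from the list):

```python
# Amdahl's law: best-case speedup on n cores when a fraction p of the work parallelizes.
def amdahl_speedup(p: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

# A nearly perfectly parallel LINPACK-style run keeps scaling; a half-serial job tops out at 2x.
for p in (0.999, 0.95, 0.50):
    print(f"p={p}: 8 cores -> {amdahl_speedup(p, 8):7.2f}x, "
          f"200000 cores -> {amdahl_speedup(p, 200_000):7.2f}x")
```

    Which is why a machine like this can top the LINPACK chart and still lose to a gaming box on a stubbornly serial workload.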

  • By Processor (Score:4, Interesting)

    by TheRaven64 ( 641858 ) on Monday May 31, 2010 @09:38AM (#32406432) Journal
    The view by processor is quite interesting. AMD has the top spot, but the majority of the top 500 have Intel chips. There are still two SPARC64 machines in the top 100, and a third one down at 383. All three SPARC64 machines are in Japan, which isn't entirely surprising. IBM makes a good showing, but it's interesting to see how far behind x86 they are, in a market that was traditionally owned by non-commodity hardware.
  • by TheRaven64 ( 641858 ) on Monday May 31, 2010 @09:43AM (#32406454) Journal
    Interestingly, the Chinese machines don't seem to be using Chinese CPUs yet. I was hoping to see at least one Loongson in the top 500.
  • LINPACK (Score:3, Interesting)

    by ProdigyPuNk ( 614140 ) on Monday May 31, 2010 @09:49AM (#32406504) Journal
    I think this is the first benchmarking article I've read in years where the organizers actually know what their benchmark program does: http://www.top500.org/project/linpack [top500.org]. Refreshing to see real statistics (as good as they can make them), instead of the usual crap that passes for benchmarking in most hardware articles these days.

    I wonder what kind of score these beasts would get on 3DMark?
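    For the curious, LINPACK (HPL on these machines) times the solution of a huge dense linear system and converts the wall-clock time into FLOPS using the known operation count of roughly 2n³/3 + 2n². A toy, hedged version of the same idea using NumPy's LAPACK-backed solver (nothing like the real HPL code, and n here is tiny by Top 500 standards):

```python
import time
import numpy as np

# Time a dense solve of Ax = b and report GFLOP/s, LINPACK-style.
n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization + triangular solves (LAPACK)
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # standard LINPACK operation count
print(f"n={n}: {elapsed:.3f} s, {flops / elapsed / 1e9:.2f} GFLOP/s")
print("residual norm:", np.linalg.norm(A @ x - b))
```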
  • Re:By Processor (Score:1, Interesting)

    by Anonymous Coward on Monday May 31, 2010 @09:50AM (#32406514)

    What's more interesting is that the Chinese supercomputer is second overall with only 55,680 cores (Intel) and 1.271 petaFLOPS.
    That's almost 170,000 cores fewer than the number 1 (AMD), and only 500 teraFLOPS less.
    And it's 70,000 cores fewer than the number 3 (IBM), yet 200 teraFLOPS faster.
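    Working those figures out per core (the #1 and #3 core counts and speeds below are back-calculated from the approximate differences quoted above, so treat them as rough):

```python
# GFLOPS per core implied by the parent's (approximate) figures.
systems = {
    "#2 Chinese machine (Intel)": (1.271e15, 55_680),
    "#1 (AMD)": (1.271e15 + 500e12, 55_680 + 170_000),  # ~500 TFLOPS faster, ~170k more cores
    "#3 (IBM)": (1.271e15 - 200e12, 55_680 + 70_000),   # ~200 TFLOPS slower, ~70k more cores
}
for name, (flops, cores) in systems.items():
    print(f"{name}: {flops / cores / 1e9:5.1f} GFLOPS/core")
```

    Roughly 23 GFLOPS per core for the Chinese machine versus about 8 for the other two; if accelerators are hiding behind that core count the gap is overstated, but that's what the quoted numbers imply.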

  • Food? What food? (Score:5, Interesting)

    by hcpxvi ( 773888 ) on Monday May 31, 2010 @09:53AM (#32406532)
    Of the UK entries in this list, the first few are HECToR (the national supercomputing facility), ECMWF, universities, financial institutions, etc. But there are also some labelled "Food industry". I wonder what I am eating that requires a supercomputer?
  • by Plekto ( 1018050 ) on Monday May 31, 2010 @10:32AM (#32406814)

    I was curious whether any privately owned (non-corporate, non-government) machines made the list, and where they placed.

  • actual purpose (Score:3, Interesting)

    by Iamthecheese ( 1264298 ) on Monday May 31, 2010 @10:40AM (#32406902)
    In years past, as many as 7 out of 10 officially listed computers were for security research. Now, contrary to the article, that's down to 2.

    Jaguar -- general research (http://www.nccs.gov/computing-resources/jaguar/)
    Roadrunner -- security research (http://www.lanl.gov/)
    Kraken XT5 -- general research (National Institute for Computational Sciences/University of Tennessee)
    Tianhe-1 -- unstated
    Pleiades -- security research (nukes)

    "Recently expanded to accommodate growing demand for high-performance systems able to run the most complex nuclear weapons science calculations, BGL now has a peak speed of 596 teraFLOPS. In partnership with IBM, the machine was scaled up from 65,536 to 106,496 nodes in five rows of racks; the 40,960 new nodes have double the memory of those installed in the original machine"

    Intrepid -- General research
    Ranger -- General research
    Red Sky -- General research

    It makes me wonder whether the machines for nuclear research went underground, or maybe it just doesn't take a top-ranking supercomputer to calculate a nuclear explosion anymore.
  • Re:By Processor (Score:4, Interesting)

    by stevel ( 64802 ) * on Monday May 31, 2010 @12:42PM (#32408170) Homepage

    System and component vendors don't make money on these "lighthouse account" supercomputer sales. My experience, having worked in the past for a vendor that did this a lot, is that they're a money-loser. The motivation is bragging rights, though that can be fleeting. I know of several times that my employer declined to bid on a supercomputer deal as it would just be too expensive.

    Typically, these systems are actually sold by system vendors (Dell, HP, IBM) and not processor vendors, though the processor vendor will support the bid. That #1 "AMD" system is actually a Cray. Software also plays a large part in success or failure.

  • Re:actual purpose (Score:4, Interesting)

    by rdebath ( 884132 ) on Monday May 31, 2010 @01:18PM (#32408450)

    As I understand it, most of the nuclear research simulations that it would be nice to run simply cannot be done on any modern machine. A few particles can be simulated on a laptop, but the interesting interactions need millions or billions of points, with every single one of them influencing every other one in the simulation.

    As a simple example, a genetic algorithm was used to program some reconfigurable FPGA chips. A layout was grown on the chip that did the job but broke just about every rule of FPGA design. There were parts of the layout that were not connected to any circuit, yet removing them made the device fail to work. Transferring the layout to a different chip got you a non-working circuit. It would be great to be able to simulate this... not a chance: it's too big, by so very many orders of magnitude.

    http://www.netscrap.com/netscrap_detail.cfm?scrap_id=73 [netscrap.com]
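    The "every single one influencing every other one" scaling above is the all-pairs problem: the work grows as N², which is why a handful of particles fit on a laptop while millions or billions do not. A minimal sketch of the direct O(N²) method (gravitational-style pairwise forces with unit masses; purely illustrative, not any lab's actual simulation code):

```python
import numpy as np

def pairwise_forces(pos: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Direct-sum force on every particle from every other: O(N^2) time and memory."""
    diff = pos[:, None, :] - pos[None, :, :]          # (N, N, 3) pair separations
    dist2 = (diff ** 2).sum(axis=-1) + eps            # softened squared distances
    np.fill_diagonal(dist2, np.inf)                   # exclude self-interaction
    return -(diff / dist2[..., None] ** 1.5).sum(axis=1)

# Doubling N quadruples the work (and the memory of the N-by-N pair table).
pos = np.random.default_rng(1).standard_normal((2_000, 3))
print(pairwise_forces(pos).shape)
```

    At 2,000 particles the pair table is already 2,000 × 2,000; at billions of points the direct method is hopeless without machines like the ones on this list (and algorithms cleverer than O(N²)).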

  • by Anonymous Coward on Monday May 31, 2010 @05:51PM (#32411396)

    I always thought computers (and supercomputers) were nothing more than proprietary implementations of someone's attempt to simplify their pseudocode. It all boiled down to memory and bus bandwidth issues, not the speed of the processor. That's where the DEC/Compaq/HP Alpha was retired, as was HP PA-RISC, yet theoretically the Sun SPARC and IBM Power designs should succeed. Instead we see these astonishingly bogus processors that you call "general purpose" when they are nothing more than the byproduct of bad marketing coupled with the monopolized software pushed by the likes of Microsoft. Remember, it was Microsoft that was pushing the industry for what they termed a "Windows-ready" server that allowed you to run other competitive operating systems despite having drivers that were only "Windows-ready." They tried to destroy DEC Alpha mid-stream, and that's when Apple backed out of their bid to purchase Alpha from DEC to be their core desktop and server architecture. It really broke my heart to see Apple pass on Alpha to lease a castrated IBM Power (PowerPC) processor. Alpha and Apple would have been God-tier, but instead it was squandered into the misplaced butchery called the Pentium 4 and the Athlon XP's HyperTransport bus.

    I am truly rooting for Sun SPARC to pick up where the apical bud of Alpha was suddenly terminated, if only so that the true, stylish white-box Unix systems can continue. Sun and HP's PA-RISC have always had the most elegant memory bus architectures, and that's where DEC was heading to finish their product using Rambus technologies, but things just tend to fumble around.

    At least now you can get a once-$7k dual Alpha rackmount for under $200, and it is still faster and more power-efficient than a modern Intel or AMD system. Still, it's bad. Just bad, man.

  • Re:Linux (Score:3, Interesting)

    by evilviper ( 135110 ) on Monday May 31, 2010 @07:55PM (#32412486) Journal

    "I don't think that there is any real appreciable difference between the amount of knowledge and training needed on one vs. the other when comparing systems that perform similar functions."

    I'm afraid I'd have to completely disagree with you there. My preferred example isn't Active Directory, but CIFS...

    With Windows, you do all the user management, then click through the painful server setup wizard to turn on file sharing, and everyone's happy and thinks it's oh so easy to do... Right up until some guy tries to upload a big file, and for no reason everything hangs up at about the 2 GB mark.

    Congratulations, you've been screwed by Microsoft. No way in hell will you be able to determine what's going wrong by examining the OS, logs, included help documentation, registry settings, etc. You are screwed. You must now find external sources of information to determine why this would possibly happen. After either buying some 3rd party books on the subject (Microsoft's books are crap), or scouring Microsoft's website for every possible keyword that might be in the description of this problem, you just might find the answer (or maybe not, it really does depend on luck).

    The software from this, the largest and most profitable company in the world, somehow consistently determines that your gigabit network is ACTUALLY a slow, high-lag (most likely dial-up) link, and dynamically switches to some ancient version of the protocol to give you slightly less overhead, which seems to work fine at first blush, but just doesn't allow UPLOADING (downloading is okay) files over 2 GB. Armed with this knowledge, you now get to delve deep into the bowels of the registry, find a half-dozen irrationally named keys, and change some completely arbitrary DWORD values to some other completely arbitrary DWORD values that only those with the source code, and entire debugging teams, can determine for you.

    You DO have to be just as intelligent as a Unix admin to set up a reliable Windows network, since the fundamental laws of computing still apply. Then, on top of all that, you have to memorize the magic Microsoft spell-book and learn all the magic incantations if you want to do anything other than the very basic default settings (e.g., for when they perform badly, are terribly insecure, or just completely blow up on you, like the above).

    There's really no debate about it. Samba has a config file that stays under 1K even in complex setups. With Windows, you've got a 20 MB+ registry to look through, and absolutely no way to know what each value might do... Even if you had to look through the full Samba source code, it wouldn't come close to being as cumbersome as the Windows registry, AND you'd actually know everything there is to know about it, rather than the scattered anecdotes you get from the Microsoft spell book...

    I say this as someone who knows it all extremely well... I was the admin for over 100 Windows systems for a lot of years, from NT 4.0 through 2000 and 2003. More recently, I've been doing everything I can to get positions as a Unix admin, even taking a pay cut, rather than put up with the nightmares that a Windows admin position brings. I still occasionally get dragged back into the Windows world, when my current employer has something the idiot MCSEs can't figure out on our dwindling number of Windows systems, or when a former employer or someone else who knows me by reputation gets desperate and throws enough money at me to come back and fix some show-stopper on the Windows systems I long ago told them to get rid of.

    In short, I agree with the GP. Windows admin is inherently a job where you can only peek through the keyhole and turn one screw at a time, while a Unix admin can strip out entire components and work with them in isolation. A Linux (or BSD, or OpenSolaris) admin in particular can get the blueprints to everything and KNOW exactly how everything works, with a fairly modest amount of effort... something a Windows admin couldn't manage even if they were unbelievably lucky.
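    On the "config file stays under 1K" point above: here's a minimal sketch, assuming a single share (the share name, path, and workgroup are made up for illustration; the directives themselves are standard smb.conf options), just to show how little text a working file server needs compared to hunting registry keys:

```python
from configparser import ConfigParser

# Illustrative, minimal smb.conf-style configuration (hypothetical share and path).
SMB_CONF = """\
[global]
workgroup = WORKGROUP
security = user
map to guest = Bad User

[projects]
path = /srv/samba/projects
read only = no
guest ok = no
"""

print(f"{len(SMB_CONF.encode())} bytes")   # comfortably under 1K

cfg = ConfigParser()
cfg.read_string(SMB_CONF)                  # smb.conf is plain INI-style text
for section in cfg.sections():
    print(section, dict(cfg[section]))
```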

"Plastic gun. Ingenious. More coffee, please." -- The Phantom comics

Working...