Supercomputing

Latest Top 500 Supercomputer List Released 130

chrb writes "BBC News is reporting on the release of the June 2010 Top 500 Supercomputer list. Notable changes include a second Chinese supercomputer in the top ten. A graphical display enables viewing of the supercomputer list by speed, operating system, application, country, processor, and manufacturer."


Comments Filter:
  • by TheRaven64 ( 641858 ) on Monday May 31, 2010 @09:42AM (#32406450) Journal

    These systems are only really getting "faster" for parallel tasks too - if you gave them a sequential workload then I assume they would fare worse than a high end gaming machine!

    I doubt it. A good fraction of them use POWER6 processors, which are still a lot faster than any x86 chip for most sequential workloads. On top of that, they typically have a lot more I/O bandwidth. They might only be a bit faster, but it would have to be a really high-end gaming rig to be faster.

  • Re:By Processor (Score:3, Insightful)

    by pwilli ( 1102893 ) on Monday May 31, 2010 @09:44AM (#32406458)
    I would have expected more AMD-based systems in the top 100, because supercomputers are usually built from cheap, moderately fast processors, the market segment where AMD gives a lot of bang for the buck.
  • by Anonymous Coward on Monday May 31, 2010 @09:46AM (#32406474)

    Make the definition of "computer" just a bit looser and it probably could make the list.

    The definition is already pretty damn loose.

  • Re:Linux (Score:1, Insightful)

    by Anonymous Coward on Monday May 31, 2010 @09:49AM (#32406512)

    All our admins and all of our users only know Microsoft systems. Training isn't free.

  • by cshbell ( 931989 ) on Monday May 31, 2010 @09:50AM (#32406518)
    The list should more accurately be called, "Top 500 publicly-acknowledged supercomputers." You can go right on thinking that the US NSA, British MI6, and even some private industries (AT&T?) don't have vastly larger supers that are not publicly disclosed.
  • Re:Linux (Score:5, Insightful)

    by Pharmboy ( 216950 ) on Monday May 31, 2010 @09:55AM (#32406550) Journal

    All our admins and all of our users only know Microsoft systems. Training isn't free.

    So your users can't use Linux on the server? Or do all of your users run supercomputers on the desktop? Our business has all MS on the desktop and all Linux on the server, and it is completely seamless. As for the admins, any admin worth their salt is always learning new things just to keep up with technology as it changes. Learning Linux by installing it on one system to start with is trivial, and in certain situations it is much easier to set up than Windows, such as DNS servers, web servers, etc.

    If your admins can only work on a server if it uses a mouse, you need new admins.

  • by Entropius ( 188861 ) on Monday May 31, 2010 @09:59AM (#32406580)

    Parallel tasks are the whole point of using a supercomputer. The gains made in speed for sequential tasks really haven't been that great; Moore's Law for sequential tasks fell apart a while back.

    Being able to parallelize a task is a prerequisite for putting it on a supercomputer.
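
    The standard back-of-envelope form of this argument is Amdahl's law: whatever fraction of a task stays serial caps the speedup, no matter how many nodes you add. A minimal sketch (the 95% figure and core counts are made-up illustrations, not from the article):

    ```python
    def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
        """Amdahl's law: overall speedup when only part of a task parallelizes."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_cores)

    # A task that is 95% parallel tops out near 20x, even on 10,000 cores.
    print(round(amdahl_speedup(0.95, 10_000), 1))  # 20.0
    print(round(amdahl_speedup(0.95, 100), 1))     # 16.8
    ```

    The serial 5% dominates almost immediately, which is why only well-parallelized codes are worth a supercomputer's node count.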

  • Re:By Processor (Score:3, Insightful)

    by Entropius ( 188861 ) on Monday May 31, 2010 @10:01AM (#32406596)

    If you're Intel you have more money to spend on marketing, which means "we'll give you a cut rate on a lot of 10000 processors just so we can have the bragging rights."

  • by TheRaven64 ( 641858 ) on Monday May 31, 2010 @10:08AM (#32406642) Journal
    Not even remotely true. The big difference is not the bandwidth between the nodes, it's the latency. Nodes in a supercomputer can exchange data in well under a millisecond; nodes in SETI@Home exchange information in a few hundred milliseconds.

    Don't think that's important? A single 2GHz core runs 200,000,000 cycles in the time it takes to send a message between two relatively close SETI nodes. It executes closer to 200,000 instructions in the time it takes to exchange data between two supercomputer nodes. This means that for problems which are not embarrassingly parallel, a pair of supercomputer nodes will be up to 100 times faster than a pair of SETI nodes with identical processors. In practice they won't spend all of their time communicating, so they'll probably only be ten times faster. And when you scale up to more than two nodes, the delays grow much faster on a SETI-like system, so something using a few hundred nodes can be far more than two orders of magnitude faster on a supercomputer.
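
    The arithmetic in the comment above is easy to check: a core stalls for roughly (clock rate × message latency) cycles per exchange. A minimal sketch, using the 2GHz clock from the comment and assumed latencies of ~100ms for internet round-trips vs ~100µs for a cluster interconnect:

    ```python
    def stalled_cycles(clock_hz: float, latency_s: float) -> int:
        """Cycles a core burns waiting out one message exchange."""
        return int(clock_hz * latency_s)

    CLOCK_HZ = 2e9  # the 2GHz core from the comment

    print(stalled_cycles(CLOCK_HZ, 100e-3))  # internet-scale latency: 200,000,000 cycles
    print(stalled_cycles(CLOCK_HZ, 100e-6))  # interconnect latency:       200,000 cycles
    ```

    The three-orders-of-magnitude gap in wasted cycles is why tightly coupled problems need a real interconnect rather than a distributed-computing grid.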
  • by Anonymous Coward on Monday May 31, 2010 @10:43AM (#32406918)

    Do you actually think that everything was and is invented in the US? A man who doesn't know history will lose the future.

  • by compro01 ( 777531 ) on Monday May 31, 2010 @10:59AM (#32407096)

    If you're gonna open it up like that, Folding@Home would almost certainly take first place.

    http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats [stanford.edu]

  • Re:Linux (Score:1, Insightful)

    by Anonymous Coward on Monday May 31, 2010 @11:20AM (#32407328)

    Ya for Linux!

    Seriously, if this doesn't make every PHB take notice I can't imagine what would. (Hey boss, it's free too!)

    How is this relevant to the environment most PHBs control? We're talking supercomputers here: Ferraris, Lamborghinis... not super-reliable diesel trucks. Most PHBs want uptime, not go-fast-real-quick.

  • Re:Linux (Score:1, Insightful)

    by Anonymous Coward on Monday May 31, 2010 @12:45PM (#32408196)


    If your admins can only work on a server if it uses a mouse, you need new admins.

    Agreed. Oftentimes you can't count on morons simply being canned or replaced, though. The fact is there are a lot of fools out there who think "system administration" simply means knowing which buttons to click in the right order. Any understanding beyond that simply doesn't exist, and is lost on them.

    This limitation isn't simply one of "GUI vs CLI" or "Windows vs Linux". It's really one of wanting to understand something beyond the UI presented to you. We all know real systems, Windows or Linux, screw up in ways that pointy-clicky, or even "type in the magic command", knowledge won't help you with. People unwilling to learn the system beyond the basics are fools, and will remain fools until they move past the basics.

  • Re:Linux (Score:4, Insightful)

    by Black Art ( 3335 ) on Monday May 31, 2010 @02:20PM (#32409012)

    In my experience Windows admins require *MUCH* more training than Linux admins. There is much more "black magic" that they need to know to be good at their jobs.

    A Windows admin needs to know all the secret registry hacks to make things run well. They need to know all the non-intuitive places that Microsoft hides the settings for whatever services need to be configured. They also need to know how to recover things when it all goes horribly wrong.

    Most Linux systems have text files to configure things. The files are in a predictable place. Updates are pretty easy and clear.

    But Microsoft has scammed people into believing that leaving is harder than just putting up with the same old crap. In this case I just wish that people did get what they pay for...


  • by et764 ( 837202 ) on Tuesday June 01, 2010 @11:46AM (#32419242)
    One guy I used to work with, who had previously worked in supercomputing, claimed he did one project involving aerodynamic simulations of Pringles chips. Apparently they were originally shaped like wings, and would become airborne when traveling along high-speed conveyor belts. They used a simulation to find a shape that wouldn't generate so much lift.
