
China Bumps US Out of First Place For Fastest Supercomputer

An anonymous reader writes "China's Tianhe-2 is the world's fastest supercomputer, according to the latest semiannual Top 500 list of the 500 most powerful computer systems in the world. Developed by China's National University of Defense Technology, the system appeared two years ahead of schedule and will be deployed at the National Supercomputer Center in Guangzhou, China, before the end of the year."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Monday June 17, 2013 @03:02PM (#44032469)

    Your information is out of date. Most supercomputers in the last decade have been distributed-memory machines, so 'distributed computing' is what this already is. Also, as someone who's using a machine somewhat further down the list (in the 30s): if you have a big supercomputer that you feel is a waste, can you give me an account? Because my job (in fluid dynamics simulations) is basically dependent on their existence, and I've got applications for the biggest machine I can get my hands on.

  • by Nite_Hawk ( 1304 ) on Monday June 17, 2013 @03:08PM (#44032525) Homepage

    I'll bite. You seem to think that distributed computing, however you are defining that, is a better solution. I am going to assume your primary objection is to using InfiniBand (or some other low-latency interconnect such as NUMAlink or Gemini). What, then, would you propose to do with the class of problems that rely on extremely low-latency transmission of data between nodes?
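
    For readers who have not seen the latency argument spelled out: the standard way to measure it is a two-rank ping-pong test. Below is a minimal illustrative sketch, not anyone's production benchmark; it assumes an MPI library is installed and that the program is launched on exactly two ranks, e.g. mpirun -np 2 ./pingpong. On a commodity Ethernet cluster the measured one-way latency is typically tens of microseconds; on an InfiniBand-class interconnect it is on the order of a microsecond, which is precisely why tightly coupled codes pay for that hardware.

    /* Illustrative sketch only: a minimal MPI ping-pong latency microbenchmark.
     * Assumes an MPI implementation and exactly two ranks. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int iters = 10000;
        char msg = 0;
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size != 2) {
            if (rank == 0)
                fprintf(stderr, "run with exactly 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                /* Send one byte and wait for the echo. */
                MPI_Send(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            /* Half the round-trip time approximates the one-way message latency. */
            printf("approx. one-way latency: %.2f microseconds\n",
                   (t1 - t0) / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }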

  • by clong83 ( 1468431 ) on Monday June 17, 2013 @09:38PM (#44035717)
    Not the poster upthread, but as someone else who runs fluids codes on big machines, I will chime in:
    A lot of the guys on the big NICS machines aren't using ANSYS. They're using their own research codes that are tailored for parallel performance and/or to solve specific and difficult problems that commercial codes don't do well, like fluid-structure interaction. I know there are guys that depend on licensing somehow or another and this is artificially limiting. But I never really understood it. If all you want is a basic, parallel fluids solver, there are some open-source options. Probably won't scale well, but it sure beats spending half your lab budget to get only 8 processors.

    Even if you have your own in-house solver, you will of course run into problems with latency as you scale up. I usually run on around 100-200 processors, depending on the problem. I would love to use more, but the communication costs start to take over (a back-of-the-envelope sketch of that tradeoff follows after this comment). Some guys can run on 10-100,000 processors. Not sure what they are doing, but I'm guessing whatever they are computing requires very little communication between nodes, or has been optimized to an extreme degree. Hard to imagine those guys are running a normal fluids solver with an unstructured grid. That'd be a huge waste.

    And I agree with whoever said that if someone knows of a big, wasted supercomputer with idle time on it, please advertise it here! All the ones I've ever seen are more or less utilized to their full extent.
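
    As a rough illustration of why communication eventually dominates at a fixed problem size, here is a back-of-the-envelope model for a cubic domain decomposition with six-face halo exchanges. Every number in it is an assumption chosen only for illustration (a 256^3 structured grid, 50 ns of work per cell, 2 microseconds of message latency, 5 GB/s links), not a measurement of any real machine or code. The trend is the point: per-rank compute shrinks like the subdomain volume, while communication shrinks only like its surface area plus a fixed latency term, so the communication fraction climbs as ranks are added.

    /* Illustrative sketch only: toy strong-scaling estimate for a cubic domain
     * decomposition. All constants below are made-up assumptions.
     * Build with: cc scaling.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double N = 256.0;            /* global grid is N^3 cells (assumed)       */
        const double t_cell = 50e-9;       /* seconds of compute per cell (assumed)    */
        const double latency = 2e-6;       /* per-message latency in seconds (assumed) */
        const double bandwidth = 5e9;      /* bytes per second per link (assumed)      */
        const double bytes_per_cell = 8.0; /* one double exchanged per face cell       */

        for (int p = 8; p <= 32768; p *= 8) {
            double side = N / cbrt((double)p);       /* edge length of local subdomain */
            double compute = t_cell * side * side * side;
            /* Six faces exchanged with neighbours every time step. */
            double comm = 6.0 * (latency + side * side * bytes_per_cell / bandwidth);
            printf("p=%6d  compute=%8.3f ms  comm=%6.3f ms  comm fraction=%5.1f%%\n",
                   p, compute * 1e3, comm * 1e3, 100.0 * comm / (compute + comm));
        }
        return 0;
    }

    Unstructured grids, load imbalance, and global operations such as pressure solves make the communication term considerably worse than this nearest-neighbour model suggests, which is consistent with the 100-200 processor sweet spot described above.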
