China Supercomputing

China Has Almost Half of The World's Supercomputers, Explores RISC-V and ARM (techtarget.com) 90

Slashdot reader dcblogs quotes Tech Target: Ten years ago, China had 21 systems on the Top500 list of the world's largest supercomputing systems. It now has 219, according to the biannual listing, which was updated just this week. At its current pace of development, China may have half of the supercomputing systems on the Top500 list by 2021.... U.S. supercomputers make up 116 of the latest Top500 list.

Despite being well behind China in total system count, the U.S. leads in overall performance, as measured by the High Performance Linpack (HPL) benchmark, which times the solution of a large, dense system of linear equations. The U.S. has about 38% of the aggregate Top500 performance. China is second, at nearly 30% of the total. But this performance metric has flip-flopped between China and the U.S., because it is heavily weighted toward the largest systems. The U.S. owns the top two spots on the latest Top500 list, thanks to two IBM supercomputers at U.S. national laboratories. These systems, Summit and Sierra, alone represent 15.6% of the HPL performance measure.
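As a rough illustration of what the HPL number measures, the sketch below times a dense linear solve in NumPy and converts it to a FLOP rate using the standard Linpack operation count. This is only a single-node toy, not the real benchmark (HPL proper is a heavily tuned, distributed LU factorization), and the function name is made up for this example.

```python
# Toy illustration of what HPL measures: time a dense linear solve A x = b
# and convert the elapsed time into a FLOP rate. Not the real HPL benchmark.
import time
import numpy as np

def linpack_like_gflops(n=2000, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    x = np.linalg.solve(a, b)  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - t0
    # Standard Linpack operation count for an n x n dense solve.
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    # Scaled residual, used by HPL as a correctness check on the solution.
    residual = np.linalg.norm(a @ x - b) / (np.linalg.norm(a) * np.linalg.norm(x))
    return flops / elapsed / 1e9, residual

if __name__ == "__main__":
    gflops, residual = linpack_like_gflops()
    print(f"{gflops:.1f} GFLOP/s, scaled residual {residual:.2e}")
```

Rankings on the Top500 list scale this same idea up: the reported Rmax is the best sustained FLOP rate achieved across the whole machine.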

Nathan Brookwood, principal analyst at Insight 64, says China is concerned the U.S. may limit its x86 chip imports, and while China may look to ARM, they're also investigating the RISC-V processor architecture.

Paresh Kharya, director of product marketing at Nvidia, tells Tech Target "We expect x86 CPUs to remain dominant in the short term. But there's growing interest in ARM for supercomputing, as evidenced by projects in the U.S., Europe and Japan. Supercomputing centers want choice in CPU architecture."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    It often doesn't matter how much computing power you have, in absolute terms. What matters is how you utilize what you do have.

    For example, a company can buy a massively powerful web server, but if they use a slow language like Ruby for their web apps then the server's power is squandered.

    Likewise, if a nation with many supercomputers uses them on pointless climate simulations based off of climate data that has been "adjusted" for political purposes, it's like that computing power doesn't really exist at all.

    • by Anonymous Coward

      It often doesn't matter how much computing power you have, in absolute terms. What matters is how you utilize what you do have.

      Sounds like you are trying to justify having a small penis.

    • Likewise, if a nation with many supercomputers uses them on pointless climate simulations based off of climate data that has been "adjusted" for political purposes

      We've watched the same historical data repeatedly get adjusted down, further and further, so yeah. If innocent of malice, then they are repeatedly adjusting already-adjusted data as if it were the raw data, while the raw data is in many cases only available as bitmap graphs. (Cue the person who will provide a link to the page that links to these bitmaps, claiming that the data is available, or a link to the page that links to the adjusted data, claiming that it's the raw data.)

    • For example, a company can buy a massively powerful web server, but if they use a slow language like Ruby for their web apps then the server's power is squandered.

      Another way of looking at the same scenario is that a powerful server (which is relatively cheap, and a one-time expense) allows them to use a very flexible high-level language while maintaining adequate performance.

      The higher-level language brings them many benefits and cost savings, including faster development and testing, greater ease of modification and extension, and possibly fewer bugs.

      When assessing cost-effectiveness it is always essential to look at the whole picture and the long term.

  • How many super-computers that can process all of the information ever created in a nano-second does a country need?

    • Re:So? (Score:5, Informative)

      by Snotnose ( 212196 ) on Saturday June 22, 2019 @05:57PM (#58806040)
      When you want to know how your nuclear bomb will work without actually, you know, building one and blowing it up, supercomputers are really useful.

      Not to mention weather forecasts, finding dissidents via face recognition, etc etc ad nauseam.
      • Exactly. I work in engineering, and week-long simulations on a desktop are common. There are many seemingly simple problems, like electric motor optimization, that would take weeks; a supercomputer comes in handy for these. Amazingly, these are often considered too "low level" for the supercomputers in our organization, which are used for tougher problems like protein folding, quantum molecular calculations, plasma dynamics, and so on.

        If we magically had a supercomputer capable of simulating the entire universe...

    • There are still a lot of problems that require massive computer power. Physics problems like nonlinear fluid dynamics and plasmas. Decryption. AI / machine learning.

      Where the right tradeoff point between cost and compute power is for a large country is not clear.

      • you were doing fine until you mentioned the farce that is AI / Machine learning. Nothing new in decades, only faster hardware. And of course no intelligence, artificial or otherwise.

  • Imagine for home computing what it would take to have all the software around be compatible with ARM or RISC machines. The demand for programmers wanting to work on supercomputers with different architectures may need a special funnel to raise a crop of such programmers. Maybe free college for those who achieve advanced degrees and training for such programming could provide our supply. China is so large that they can probably acquire more of those types of programmers, but the Chinese language itself
    • Imagine for home computing what it would take to have all the software around to be compatible with ARM or RISC machines.

      It's called portable source code. Vanishingly few of the lines of code written per year are assembler.

    • by AmiMoJo ( 196126 )

      Chinese supercomputers have been using home-grown CPUs for years now. They saw this coming and in any case didn't want to be reliant on foreign technology. They have their own line of RISC-based CPUs with 260 on-die cores: 256 somewhat simpler cores and four more complex management cores.

      They run a home grown OS too.

      • Chinese supercomputers have been using home grown CPUs for years now. They saw this coming and in any case didn't want to be reliant on foreign technology.

        They weren't doing all that well until, some years ago, the US decided to stop the export of top-end Intel CPUs to China to keep them from building supercomputers. That provided the impetus the program needed to get results, which it did.

  • This means nothing. We have no idea of the number of unreported or classified "supercomputers" that exist in any country. We know it is highly likely that companies like Amazon (AWS), Google, Microsoft, IBM, Walmart (yes! Walmart), etc. vastly exceed the computing power that any country or government has. A "supercomputer" is mostly irrelevant for the scenarios a state might need computing for. They would be looking for massive parallel processing, which the mentioned companies have and which greatly exceeds

    • by AHuxley ( 892839 )
      That can be guessed at in the USA.
      Look at power use and production near a US "base/camp/fort/port".
      Need cooling towers?
      Add in the cooling water and do some math. Using a lot of treated wastewater per day?
      Re "more computing power": ponder the math done and past designs. Lots of consumer CPU products sold to the US gov that make a supercomputer?
      A chip that can do one type of math really quickly, but is a design for the gov only?
      That can make junk consumer crypto fail in real time? That might need
  • Sure, modern processors Just In Time convert the horribly inefficient ancient i86 instructions into something else that they can execute efficiently. But that costs time and transistors, and more importantly generates heat. Which is why ARM wins on low power applications.

    The reason that we still use i86 is that people still insist on writing large amounts of code in the archaic C/++ programming language (like I am at the moment). And that means that the instruction set is baked into the compiled code. S

    • Sure, modern processors Just In Time convert the horribly inefficient ancient i86 instructions into something else that they can execute efficiently. But that costs time and transistors, and more importantly generates heat. Which is why ARM wins on low power applications.

      These aren't low-power cores, and at this scale it's an irrelevance. What you can't do is customise an x86 core. With something you own or have a license to, you can pick exactly the fast, wide vector unit you want, precisely the optimal memory

      • by PCM2 ( 4486 )

        What you can't do is customise an x86 core.

        Well, perhaps not, but Intel has been shipping Xeon chips with bolted-on FPGAs for a while now, the idea being to tailor the processor's performance for specific workloads. And if you order in enough volume, it will even roll custom variants for you, as it's been doing for AWS.

    • by amorsen ( 7485 )

      x86 is still the leader in operations per Joule. That is likely the reason that x86 is used.

      Recent Apple chips are getting close, but those are not available to anyone but Apple. At the same time their absolute performance is lower, which means you need more chips, which in turn requires more interconnect.

      Also, x86 has decent I/O. Non-x86 generally does not bother with having tons of PCI-e lanes. If you want to use modern GPUs as accelerators, you need decent I/O to the CPU cores.

  • by ras ( 84108 ) <russell+slashdot ... rt DOT id DOT au> on Sunday June 23, 2019 @05:28AM (#58807832) Homepage

    There was a time when the Top500 was a good indicator of where the most powerful clusters were. That time has gone. The biggest and most powerful clusters of computers aren't on it. They live in data centres owned by the likes of Google, Amazon and Microsoft, and are rented out.

    If you are a physicist needing a few weeks of massive computational power, you can fight the political battles to prove you are the most worthy user of some centrally managed supercomputer, or throw some dollars at Google or Amazon. If you are the country's weather bureau or some similar institution that uses huge amounts of compute power 24x7x365, then it probably still makes sense cost-wise to buy your own, but if you are just a casual user who wants it for a few months, you are better off renting from one of those guys.

    I see the Slashdot Beowulf cluster joke has made its appearance again. It was funny once. Today, the millions of cheap boxes these companies run have made it a reality. Google's search engine, Facebook, Netflix and YouTube are massive applications soaking up more CPU power than any of today's supercomputers could provide, and they run on networks of cheap boxes. AlphaGo Zero wasn't a triumph of supercomputing. The real triumph was that they designed an algorithm that let them run thousands of training runs in parallel, so they could utilise thousands of loosely coupled computers. That allowed them to throw 250 MW at a bunch of TPUs that were unused at the time. Those TPUs didn't have to be in the same rack. In fact, they didn't have to be on the same side of the planet.

    China isn't dominating the Top500 because western countries have lost their technical dominance. It's moving into a niche that western countries, precisely because of their technical dominance, no longer want, so they have vacated it. The capitalist system has found a cheaper way; western countries have moved on, and China hasn't.

    • by serviscope_minor ( 664417 ) on Sunday June 23, 2019 @06:12AM (#58807906) Journal

      If you are a physicist needing a few weeks of massive computational power you can fight the political battles to prove you are the most worthy user of some centrally managed supercomputer, or throw some dollars at Google or Amazon.

      No you can't, not for many problems. Google and Amazon provide large volumes of CPUs. They don't provide the massively wide, very fast and very low-latency interconnects that distinguish supercomputers from clusters. Why would they? Those things are expensive and power hungry (something like half the power in those machines goes on networking) and so something of a waste of money for their use case and their customers' use cases.

      If you have an embarrassingly parallel problem then a cluster is the best choice. If you don't then a proper supercomputer is your only choice.
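To make the distinction concrete, here is a minimal sketch (the names are made up) of an embarrassingly parallel workload: each task runs independently and results are only gathered at the end, so a loose cluster or cloud handles it fine. A tightly coupled solver, by contrast, must exchange data between ranks at every step, which is where a supercomputer's interconnect earns its keep.

```python
# Embarrassingly parallel: independent tasks, no communication between them,
# so the interconnect hardly matters. Hypothetical stand-in workload.
import random
from multiprocessing import Pool

def simulate(seed):
    # Stand-in for one independent simulation run: average of 100k uniforms.
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100_000)) / 100_000

if __name__ == "__main__":
    with Pool(4) as pool:
        # Tasks never talk to each other; results are gathered at the end.
        results = pool.map(simulate, range(8))
    print(len(results), "independent runs completed")

# By contrast, a tightly coupled solver (e.g. a fluid-dynamics code) must
# exchange boundary data between ranks at every time step, so its runtime
# is dominated by interconnect latency and bandwidth rather than raw FLOPs.
```

That per-step communication is exactly what a commodity cloud network handles poorly and a supercomputer fabric handles well.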

      The capitalist system has found a cheaper way

      Total unmitigated bullshit. Stop dragging politics into technical discussions. You're an embarrassment to the tech community.
