China Has Almost Half of The World's Supercomputers, Explores RISC-V and ARM (techtarget.com) 90
Slashdot reader dcblogs quotes TechTarget:
Ten years ago, China had 21 systems on the Top500 list of the world's largest supercomputing systems. It now has 219, according to the biannual listing, which was updated just this week. At its current pace of development, China may have half of the supercomputing systems on the Top500 list by 2021.... U.S. supercomputers make up 116 of the latest Top500 list.
Despite being well behind China in total system count, the U.S. leads in overall performance, as measured by the High Performance Linpack (HPL) benchmark, which gauges floating-point throughput by solving a large, dense system of linear equations. The U.S. has about 38% of the aggregate Top500 list performance. China is second, at nearly 30% of the performance total. But this performance metric has flip-flopped between China and the U.S., because it's heavily weighted by the largest systems. The U.S. owns the top two spots on the latest Top500 list, thanks to two IBM supercomputers at U.S. national laboratories. These systems, Summit and Sierra, alone represent 15.6% of the HPL performance measure.
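For a concrete sense of what HPL measures, here is a toy single-node sketch (my illustration, not the real distributed benchmark): time a dense solve of Ax = b and convert the elapsed time into a floating-point rate.

```python
# Toy stand-in for HPL: time a dense linear solve and report a flop rate.
# The real benchmark distributes a far larger problem across thousands of
# nodes; this only illustrates the metric, not the machine.
import time
import numpy as np

n = 4000                              # real HPL runs use much larger n
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)             # LU factorization + triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3            # leading term of the LU flop count
print(f"{flops / elapsed / 1e9:.1f} GFLOP/s (toy, single node)")
print("residual norm:", np.linalg.norm(A @ x - b))
```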
Nathan Brookwood, principal analyst at Insight 64, says China is concerned the U.S. may limit its x86 chip imports, and while China may look to ARM, it is also investigating the RISC-V processor architecture.
Paresh Kharya, director of product marketing at Nvidia, tells TechTarget: "We expect x86 CPUs to remain dominant in the short term. But there's growing interest in ARM for supercomputing, as evidenced by projects in the U.S., Europe and Japan. Supercomputing centers want choice in CPU architecture."
Utilization matters most (Score:1, Informative)
It often doesn't matter how much computing power you have, in absolute terms. What matters is how you utilize what you do have.
For example, a company can buy a massively powerful web server, but if they use a slow language like Ruby for their web apps then the server's power is squandered.
Likewise, if a nation with many supercomputers uses them on pointless climate simulations based on climate data that has been "adjusted" for political purposes, it's like that computing power doesn't really exist at all.
Re: (Score:1)
It often doesn't matter how much computing power you have, in absolute terms. What matters is how you utilize what you do have.
Sounds like you are trying to justify having a small penis.
Re: (Score:2)
Likewise, if a nation with many supercomputers uses them on pointless climate simulations based off of climate data that has been "adjusted" for political purposes
We've watched the same historical data repeatedly get adjusted down, further and further, so yeah. If they're innocent of malice, then they're repeatedly adjusting the already-adjusted data as if it were the raw data, while the raw data is in many cases only available as bitmap graphs. (Cue the person who will provide a link to the page that links to these bitmaps, claiming that the data is available, or a link to the page that links to the adjusted data, claiming that it's the raw data.)
Re: (Score:2)
The data is adjusted for instrument systematic issues. This is normal in metrology. The raw data is available.
But what has that to do with climate?
metrology (n.) - the scientific study of measurement. DERIVATIVES: metrological (adj.). ORIGIN: C19, from Greek metron 'measure' + -logy.
Re: (Score:2)
For example, a company can buy a massively powerful web server, but if they use a slow language like Ruby for their web apps then the server's power is squandered.
Another way of looking at the same scenario is that a powerful server (which is relatively cheap, and a one-time expense) allows them to use a very flexible high-level language while maintaining adequate performance.
The higher-level language brings them many benefits and cost savings, including faster development and testing, greater ease of modification and extension, and possibly fewer bugs.
When assessing cost-effectiveness it is always essential to look at the whole picture and the long term.
So? (Score:2)
How many supercomputers that can process all of the information ever created in a nanosecond does a country need?
Re: (Score:2)
Er, China was the most developed nation in the world when Europeans were running around wearing little except paint and waving spears at one another. And, of course, when North America was inhabited solely by Native Americans. In the 12th century, while European knights fought incessant wars over tiny pieces of land, the Chinese were printing and using bank notes. For those who tend to equate "civilization" with advanced weaponry, the Mongol invasions of Japan, which took place in 1274 and 1281, exploited C
Re: (Score:2)
What the fuck is wrong with China! Couldn't they just invade a neighbor like a normal developing nation?
Well, under President Xi, they've gone into areas they've never dreamt of before, such as having a complete surveillance infrastructure that tracks everybody's 'social credit' and keeps an account of all their activity. If you thought Facebook or Google were bad, you've seen nothing. Given all the data they've collected on all their citizens for all sorts of purposes - from tracking potential dissidents to cataloguing 'breed-ready women' - their computing requirements have grown as well.
It used to be that the reason a country used a
Re:So? (Score:5, Informative)
Not to mention weather forecasts, finding dissidents via face recognition, etc. etc. ad nauseam.
Re: (Score:2)
Which supercomputer on that list is run by NWS?
Re: (Score:2)
Computing power has helped a lot. But don't discount the billions of dollars poured into monitoring as well.
The current state of the atmosphere is input into the models every twelve hours or so (six hours, maybe?). Given the chaotic nature of weather, improved estimates of this initial forecast state lead to much better forecasts. And the dramatic increase in available satellite and station data (along with much improved data assimilation algorithms) has made this much more accurate.
P. Bauer, A. Thorpe and G. Brunet, "The quiet revolution of numerical weather prediction," Nature 525 (2015).
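As a minimal sketch of the analysis step described above (my illustration, reduced to a single scalar state variable; real NWP systems assimilate billions of values with 4D-Var or ensemble filters), the forecast and a new observation are blended according to their error variances:

```python
# Scalar Kalman/optimal-interpolation update: the analysis state leans
# toward whichever input (forecast or observation) has the smaller error
# variance, and the analysis variance always shrinks.
def assimilate(forecast, var_forecast, obs, var_obs):
    """Return the analysis state and its (reduced) error variance."""
    gain = var_forecast / (var_forecast + var_obs)   # Kalman gain
    analysis = forecast + gain * (obs - forecast)
    var_analysis = (1.0 - gain) * var_forecast
    return analysis, var_analysis

# Example: model forecasts 290 K +/- 2 K; a satellite retrieval says
# 288 K +/- 1 K. The analysis lands closer to the better observation.
state, var = assimilate(290.0, 2.0**2, 288.0, 1.0**2)
print(state, var)   # 288.4, 0.8
```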
Re: (Score:2)
Exactly. I work in engineering, and week-long simulations on a desktop are common. There are many seemingly simple problems, like electric motor optimization, that would take weeks; a supercomputer comes in handy for these. Amazingly, these are often considered too "low level" for the supercomputers in our organization, which are used for tougher problems like protein folding, quantum molecular calculations, plasma dynamics, and so on.
If we magically had a supercomputer capable of simulating the entire unive
Re: (Score:2)
There are still a lot of problems that require massive computer power. Physics problems like nonlinear fluid dynamics and plasmas. Decryption. AI / machine learning.
Where the right tradeoff point between cost and compute power is for a large country is not clear.
Re: (Score:1)
You were doing fine until you mentioned the farce that is AI / machine learning. Nothing new in decades, only faster hardware. And of course no intelligence, artificial or otherwise.
Re: (Score:2)
Then using the math that will get results to make use of a limited "supercomputer" design.
Get that national "super" compute ranking.
Buying up a nation's CPU production to ensure "jobs".
Generations of experts get to understand how to code. How to design a CPU. The networks between the CPU hardware.
What kind of math they can do. What math never to accept, as it shows the limitations of the networked co
Re: (Score:2)
They generally use GPUs for much of the processing now, and the interconnects are a lot more capable than just basic ethernet. Lots of fancy software too.
Help Wanted? (Score:1)
Re: (Score:2)
Imagine for home computing what it would take to have all the software around to be compatible with ARM or RISC machines.
It's called portable source code. Vanishingly few of the lines of code written per year are assembler.
Re: (Score:2)
Chinese supercomputers have been using home-grown CPUs for years now. They saw this coming and in any case didn't want to be reliant on foreign technology. They have their own line of RISC-based CPUs with 260 on-die cores: 256 somewhat simpler compute cores and four more complex management cores.
They run a home-grown OS too.
Re: (Score:2)
Chinese supercomputers have been using home-grown CPUs for years now. They saw this coming and in any case didn't want to be reliant on foreign technology.
They weren't doing all that well until, some years ago, the US decided to stop the export of top-end Intel CPUs to China to stop them building supercomputers. That provided the impetus needed to get results from the program, which they did.
Announcement of Chinese hacking our supercomputers (Score:2)
In 3...2...1.
Re: (Score:2)
In the top 10, that's pretty much true of all of them. US supercomputing spend slowed down until China was #1; suddenly the US had money to spend again to get back on top.
Now, in general, all the institutions are glad of the budget and can do useful things with these machines, but they are only ever funded to prove how on top of technology a given government is.
What about classified, unreported, AWS, M$, Google (Score:1)
This means nothing. We have no idea of the number of unreported or classified "supercomputers" that exist in any country. We know it is highly likely that companies like Amazon (AWS), Google, Microsoft, IBM, Walmart (yes! Walmart), etc. vastly exceed the computing power that any country or government has. A "supercomputer" is mostly irrelevant for the scenarios a state might need computing for. They would be looking for massive parallel processing, which the mentioned companies have and greatly exc
Re: (Score:2)
Look at power use and production near a US "base/camp/fort/port".
Need cooling towers?
Add in the cooling water and do some math. Using a lot of treated wastewater per day?
Re "more computing power" Ponder the math done and past designs. Lots of consumer CPU products sold to the US gov that make a super computer?
A chip the can do one type of math really quickly? But is a design for the gov only?
That can make junk consumer crypto fail in real time? That might need
i/x86 seems a bad architecture for supercomputers (Score:2)
Sure, modern processors convert the horribly inefficient, ancient x86 instructions just-in-time into something else that they can execute efficiently. But that costs time and transistors, and more importantly generates heat. Which is why ARM wins in low-power applications.
The reason that we still use x86 is that people still insist on writing large amounts of code in the archaic C/C++ programming languages (like I am at the moment). And that means that the instruction set is baked into the compiled code. S
Re: (Score:2)
C and C++ leave certain things undefined in the languages that instead get defined by the processor architecture (or rather the ABI) the code gets compiled for: things such as the width of the basic types, byte order, and the ordering of memory operations.
When you write and test C/C++ code on a certain processor, that code gets written for and tested only with those choices, and it could very well break if compiled against another ABI with a different set of type widths and the other byte order.
There is already a difference between C/C++ p
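To make the byte-order and type-width points concrete, here is a small illustration (mine, using Python's struct module as a stand-in for what C code copying raw bytes into an int actually does):

```python
# Same four bytes, two different integers depending on endianness.
# C/C++ code that memcpy()s raw bytes into an int silently bakes in one choice.
import struct

raw = bytes([0x00, 0x00, 0x00, 0x01])
little = struct.unpack('<i', raw)[0]   # little-endian ABI (x86, most ARM)
big = struct.unpack('>i', raw)[0]      # big-endian ABI (e.g. SPARC)
print(little, big)                     # 16777216 1

# Type widths differ by ABI too: a native C long is 8 bytes on LP64
# Linux/macOS but 4 bytes on LLP64 Windows.
print(struct.calcsize('l'))
```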
Re: (Score:2)
Sure, modern processors convert the horribly inefficient, ancient x86 instructions just-in-time into something else that they can execute efficiently. But that costs time and transistors, and more importantly generates heat. Which is why ARM wins in low-power applications.
These aren't low-power cores, and at this scale it's an irrelevance. What you can't do is customise an x86 core. With something you own or have a license to, you can pick exactly the fast, wide vector unit you want, precisely the optimal memo
Re: (Score:2)
What you can't do is customise an x86 core.
Well, perhaps not, but Intel has been shipping Xeon chips with bolted-on FPGAs for a while now, the idea being to tailor the processor's performance for specific workloads. And if you order in enough volume, it will even roll custom variants for you, as it's been doing for AWS.
Re: (Score:2)
x86 is still the leader in operations per Joule. That is likely the reason that x86 is used.
Recent Apple chips are getting close, though those are not available to anyone but Apple. At the same time their absolute performance is lower, which means you need more chips, which in turn requires more interconnect.
Also, x86 has decent I/O. Non-x86 generally does not bother with having tons of PCI-e lanes. If you want to use modern GPUs as accelerators, you need decent I/O to the CPU cores.
The Top500's time has probably passed (Score:4, Interesting)
There was a time when the Top500 was a good indicator of where the most powerful clusters were. That time has gone. The biggest and most powerful clusters of computers aren't on it. They live in data centres owned by the likes of Google, Amazon and Microsoft, and are rented out.
If you are a physicist needing a few weeks of massive computational power, you can fight the political battles to prove you are the most worthy user of some centrally managed supercomputer, or throw some dollars at Google or Amazon. If you are the country's weather bureau or some similar institution that uses huge amounts of compute power 24 x 7 x 365, then it probably still makes sense cost-wise to buy your own, but if you are just a casual user who wants it for a few months, you are better off renting from one of those guys.
I see the Slashdot Beowulf cluster joke has made its appearance again. It was funny once. Today the millions of cheap boxes these companies have made it a reality. Google's search engine, Facebook, Netflix and YouTube are massive applications soaking up more CPU power than any of today's supercomputers could provide, and they run on networks of cheap boxes. AlphaGo Zero wasn't a triumph of supercomputing. The real triumph was that they designed an algorithm that let them run thousands of training runs in parallel, so they could utilise thousands of loosely coupled computers. That allowed them to throw 250MW at a bunch of TPUs that were unused at the time. Those TPUs didn't have to be in the same rack. In fact, they didn't have to be on the same side of the planet.
China isn't dominating the Top500 because western countries have lost their technical dominance. China is moving into a niche that western countries' own technical dominance means they no longer want, so they have vacated it. The capitalist system has found a cheaper way, western countries have moved on, and China hasn't.
Re:The Top500's time has probably passed (Score:5, Informative)
If you are a physicist needing a few weeks of massive computational power, you can fight the political battles to prove you are the most worthy user of some centrally managed supercomputer, or throw some dollars at Google or Amazon.
No, you can't, not for many problems. Google and Amazon provide large volumes of CPUs. They don't provide the massively wide, very fast, very low-latency interconnects that distinguish supercomputers from clusters. Why would they? Those things are expensive and power-hungry (something like half the power to the computers goes on networking in those machines) and so something of a waste of money for their use case and their customers' use cases.
If you have an embarrassingly parallel problem then a cluster is the best choice. If you don't then a proper supercomputer is your only choice.
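As a schematic sketch of that distinction (my illustration, not anything from the thread): an embarrassingly parallel job is a map over tasks that never communicate, while a tightly coupled one, such as a stencil step in a fluid simulation, needs neighbour data at every iteration, and across nodes that becomes a latency-bound exchange on the interconnect.

```python
from multiprocessing import Pool

def render_frame(i):
    # Stand-in for independent work (one rendered frame, one MC sample...).
    return sum(range(i * 1000))

def diffuse(grid):
    # One step of a 1-D heat stencil: every cell needs its neighbours, so
    # across nodes each time step becomes a latency-bound halo exchange.
    return [grid[i] if i in (0, len(grid) - 1)
            else 0.5 * (grid[i - 1] + grid[i + 1])
            for i in range(len(grid))]

if __name__ == "__main__":
    # Embarrassingly parallel: no inter-task traffic; a loose cluster is fine.
    with Pool() as pool:
        frames = pool.map(render_frame, range(100))

    # Tightly coupled: steps are sequential and neighbour-dependent; this is
    # where supercomputer fabrics (not cheap Ethernet) earn their keep.
    grid = [0.0] * 8 + [100.0] + [0.0] * 8
    for _ in range(10):
        grid = diffuse(grid)
    print(frames[:3], grid[8])
```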
The capitalist system has found a cheaper way
Total unmitigated bullshit. Stop dragging politics into technical discussions. You're an embarrassment to the tech community.