

New Supercomputing Record Set - Using AMD's Instinct GPUs (tomshardware.com)
"AMD processors were instrumental in achieving a new world record," reports Tom's Hardware, "during a recent Ansys Fluent computational fluid dynamics simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory."
The article points out that Frontier was the fastest supercomputer in the world until it was beaten by Lawrence Livermore Lab's El Capitan — with both computers powered by AMD GPUs: According to a press release by Ansys, it ran a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of the changes it makes on designs much more quickly...
Given those numbers, the Ansys Fluent CFD simulator apparently only used a fraction of the power available on Frontier. That means it has the potential to run even faster if it can utilize all the available accelerators on the supercomputer. It also shows that, despite Nvidia's market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.
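
As a quick sanity check on those figures, here is the arithmetic behind the "more than 25 times faster" claim (a minimal Python sketch; the core-hour comparison at the end is illustrative only, since CPU cores and GPUs are not equivalent units of work):

# Back-of-the-envelope check of the speedup quoted in the summary.
# All input numbers come from the article itself.

cpu_hours_wall = 38.5      # wall-clock hours on 3,700 CPU cores
gpu_hours_wall = 1.5       # wall-clock hours on 1,024 MI250X GPUs

speedup = cpu_hours_wall / gpu_hours_wall
print(f"Wall-clock speedup: {speedup:.1f}x")   # ~25.7x, i.e. "more than 25 times faster"

cpu_core_hours = 38.5 * 3_700    # ~142,450 core-hours for the CPU run
gpu_device_hours = 1.5 * 1_024   # ~1,536 GPU-hours for the Frontier run
print(f"CPU run: {cpu_core_hours:,.0f} core-hours")
print(f"GPU run: {gpu_device_hours:,.0f} GPU-hours")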
Really? (Score:1)
"serving as the brains of some of the fastest supercomputers on Earth." /. "Editor" calling a CPU a Brain.??
A
Re: Really? (Score:1)
Somewhere Edsgar Dijkstra is writing an angry memo.
Re: Really? (Score:2)
Possibly because you're spelling his name wrong.
Re: Really? (Score:2)
1946's ENIAC was known as the "giant brain".
Re: (Score:2)
These folks actually want to further science and technology, not just fleece plebs.
Fluid dynamics is nice (Score:2)
But can it run Cities Skylines at a decent framerate?
Either the first or CS II.
NVIDIA doesn't give a shit about FP64 (Score:2)
Supercomputing is the crumbs on the table.
Re: (Score:2)
In other news though, let's celebrate AMD reaching total sales of 1,024 of their Instinct MI GPUs!
Re: NVIDIA doesn't give a shit about FP64 (Score:2)
They reported $5B in revenue for Instinct for full-year 2024
Re: (Score:2)
And at $10k apiece, that means 500,000 units.
OpenFOAM Prevents $20,000 Seat/Year Fluent Cost (Score:2)
[BTW, if you have an old Epyc or Threadripper, with 256
Re: (Score:2)
What stops you from just buying those parts? I would presume someone with the skill set to optimize a 3D model for aerodynamics should be able to afford the, what, $2,000 these parts would cost.
but... (Score:3)
can it do CUDA?
Re: (Score:3)
can it do CUDA?
No, not natively. But ZLUDA and SCALE can make it happen. It didn't take long to find this with Google.
Energy savings? (Score:2)
Re: (Score:2)
Most likely yes.
Only if you don't understand the HPC market (Score:3)
It also shows that, despite Nvidia's market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.
No. From the late 2000s to the mid-2010s, Nvidia dominated supercomputers with their GPUs. There was initially resistance to switching to CUDA, but once the switch started happening, Nvidia took over. Then GPUs got too expensive. Around the mid-2010s, GPUs were more than half the total cost of the entire system. Government agencies started pushing back and demanded that Nvidia lower their prices. Nvidia essentially walked away. Since then, all government systems have been AMD, with an occasional Intel system.
Nvidia's absence from government HPC systems will continue, especially since government supercomputers (along with gaming GPUs) are now noise in Nvidia's revenue. Nvidia doesn't care that much anymore.
This situation also highlights a big problem for AMD. Since AMD counts government supercomputers as data center revenue, its share of the relatively more competitive hyperscaler and enterprise data center AI market is far lower than its data center revenue numbers would suggest. This is one of the reasons there are so many AMD skeptics when it comes to future AI growth.
Re: (Score:3)
The government also makes really big supercomputers. These things run codes for huge systems and have often had really wacky architectures (like Roadrunner). People often customise, i.e. rewrite their code for such systems because they are so expensive to buy and run that it's worth it. Especially as the important codes will be run and run again.
AMD GPUs often have higher memory bandwidth, and really good FP64. That's worth customising for, especially if you have AMD engineers on speed dial. For everyone el
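
To make the bandwidth point concrete, here is a minimal roofline-style sketch in Python. The peak figures are roughly AMD's published numbers for one MI250X; the arithmetic intensity is an assumed, typical value for a memory-bound CFD kernel, not anything measured:

# Rough roofline estimate: memory-bound codes like CFD solvers care far
# more about bandwidth than about peak FLOPs.

peak_bw_tb_s = 3.2         # MI250X peak memory bandwidth, TB/s (published figure)
peak_fp64_tflops = 47.9    # MI250X peak vector FP64, TFLOP/s (published figure)

intensity = 0.25           # assumed FLOPs per byte for a stencil-style CFD kernel

# Attainable throughput is the lesser of peak compute and bandwidth * intensity.
# (TB/s * FLOPs/byte gives TFLOP/s directly.)
attainable = min(peak_fp64_tflops, peak_bw_tb_s * intensity)
print(f"Attainable: {attainable:.2f} TFLOP/s of a {peak_fp64_tflops} TFLOP/s peak")
# => ~0.80 TFLOP/s: at this intensity the kernel is entirely bandwidth-bound,
#    which is why memory bandwidth (plus strong FP64) wins in HPC.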
Re: Only if you don't understand the HPC market (Score:2)
When PyTorch starts supporting ROCm, your wish will be fulfilled, I guess.
Re: (Score:2)
PyTorch does support ROCm, in theory at any rate, and it's notoriously half-arsed.
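
For the record, ROCm builds of PyTorch expose AMD GPUs through the existing torch.cuda namespace (HIP masquerading as CUDA), so a quick capability check looks something like this sketch, assuming a ROCm wheel of PyTorch is installed:

# Minimal sketch: detecting a ROCm build of PyTorch. On ROCm, PyTorch's
# HIP backend is reached through torch.cuda, so CUDA-flavored code mostly
# runs unchanged on AMD GPUs.
import torch

print(torch.__version__)          # ROCm wheels carry a "+rocm" suffix
print(torch.version.hip)          # HIP version string on ROCm builds, None on CUDA builds
print(torch.cuda.is_available())  # True on a working ROCm install, despite the name

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")  # lands on the AMD GPU under ROCm
    print(torch.cuda.get_device_name(0))        # e.g. an "AMD Instinct ..." device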