
New Supercomputing Record Set - Using AMD's Instinct GPUs (tomshardware.com)

"AMD processors were instrumental in achieving a new world record," reports Tom's Hardware, "during a recent Ansys Fluent computational fluid dynamics simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory."

The article points out that Frontier was the fastest supercomputer in the world until it was beaten by Lawrence Livermore Lab's El Capitan, with both computers powered by AMD GPUs. According to a press release by Ansys, the record run was a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of the changes it makes to its designs much more quickly...

Given those numbers, the Ansys Fluent CFD simulator apparently only used a fraction of the power available on Frontier. That means it has the potential to run even faster if it can utilize all the available accelerators on the supercomputer. It also shows that, despite Nvidia's market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.
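
As a quick sanity check of the quoted figures, here is a minimal Python sketch. The total Frontier GPU count is an assumption (roughly 9,400 nodes with four MI250X accelerators each is the commonly cited configuration) and is not stated in the article, so the utilization fraction below is only an estimate.

    # Sanity check of the numbers quoted above.
    # ASSUMPTION: Frontier has roughly 9,400 nodes x 4 MI250X each; the article
    # does not give the total, so the fraction below is only an estimate.
    cpu_run_hours = 38.5            # previous run on 3,700 CPU cores
    gpu_run_hours = 1.5             # Frontier run with 1,024 MI250X
    gpus_used = 1024

    speedup = cpu_run_hours / gpu_run_hours
    print(f"speedup: {speedup:.1f}x")            # ~25.7x, "more than 25 times faster"

    frontier_mi250x_estimate = 9400 * 4          # assumed total accelerator count
    fraction_used = gpus_used / frontier_mi250x_estimate
    print(f"fraction of Frontier used: {fraction_used:.1%}")   # roughly 3% of the machine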

  • "serving as the brains of some of the fastest supercomputers on Earth."
    A /. "Editor" calling a CPU a Brain.??

  • But can it run Cities Skylines at a decent framerate?
    Either the first or CS II.

  • Supercomputing is the crumbs on the table.

  • Fluent costs over $20,000 per user, per year, which is a high bar for people who want to design cars that are fuel/power efficient. I appreciate the work that everyone has done on OpenFoam. OpenFoam had some early form of GPU solve. Hopefully, seeing this test will rekindle interest in GPU solve, BUT solve is only part of the problem; the other part is the mesh, which is only lightly threaded and is also a time-consuming part of the CFD process (a rough Amdahl's-law sketch of this follows this thread).

    [BTW, if you have an old Epyc or Threadripper, with 256
    • by Kokuyo ( 549451 )

      What stops you from just buying those parts? I would presume someone with a skill set to optimize a 3d model for aerodynamics should be able to afford the what, 2000 bucks these parts would cost.
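
On the solve-versus-mesh point in the comment above, here is a rough Amdahl's-law sketch in Python of why speeding up only the solver has a limited effect on end-to-end turnaround. The 80/20 split between solver and meshing time is a made-up illustration, not a figure from the article.

    # Amdahl's-law sketch: GPU-accelerating only the solver leaves the lightly
    # threaded meshing step as the bottleneck.
    # ASSUMPTION: the 80/20 solve/mesh split is illustrative only.
    def end_to_end_speedup(solve_frac: float, solver_speedup: float) -> float:
        """Overall speedup when only the solve fraction of the workflow is accelerated."""
        mesh_frac = 1.0 - solve_frac
        return 1.0 / (mesh_frac + solve_frac / solver_speedup)

    solve_frac = 0.8                      # hypothetical: 80% of wall time in the solver
    for solver_speedup in (5, 25, 100):
        overall = end_to_end_speedup(solve_frac, solver_speedup)
        print(f"solver {solver_speedup:>3}x faster -> workflow {overall:.1f}x faster")
    # With this split, even an infinitely fast solver caps the workflow at 5x overall.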

  • by vbdasc ( 146051 ) on Sunday April 13, 2025 @01:06PM (#65302817)

    can it do CUDA?

    • can it do CUDA?

      No, not natively. But ZLUDA and SCALE can make it happen. It didn't take long to find this with Google.

  • Will the new engines save more energy than was expended in their design?
  • by larryjoe ( 135075 ) on Sunday April 13, 2025 @03:57PM (#65303155)

    It also shows that, despite Nvidia's market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.

    No. From the late 2000s to the mid-2010s, Nvidia dominated supercomputers with their GPUs. There was initially resistance to switching to CUDA, but once the switch started happening, Nvidia dominated. Then GPUs got too expensive. Around the mid-2010s, GPUs were more than half the total cost of the entire system. Government agencies started pushing back and demanded that Nvidia lower their prices. Nvidia essentially walked away. Since then, all government systems have been AMD, with an occasional Intel system.

    This absence of Nvidia from government HPC systems will continue, especially since government supercomputers (along with gaming GPUs) are now noise in Nvidia's revenue. Nvidia doesn't care that much anymore.

    This situation also highlights a big problem for AMD. Since AMD counts government supercomputers as data center, their share of the relatively more competitive hyperscaler and enterprise data center AI market is far lower than what their data center revenue numbers would suggest. This is one of the reasons why there are many AMD skeptics for future AI growth.

    • by serviscope_minor ( 664417 ) on Sunday April 13, 2025 @04:19PM (#65303213) Journal

      The government also makes really big supercomputers. These things run codes for huge systems and have often had really wacky architectures (like Roadrunner). People often customise, i.e. rewrite their code for such systems because they are so expensive to buy and run that it's worth it. Especially as the important codes will be run and run again.

      AMD GPUs often have higher memory bandwidth, and really good FP64. That's worth customising for, especially if you have AMD engineers on speed dial. For everyone else, where it's a crapshoot as to whether pytorch will work today on your particular GPU, it's not worth the fucking around. The government labs can get really good use out of AMD just as they did for Cell and on custom codes, so the NVidia ecosystem advantage is much smaller.

      But also FOR FUCK'S SAKE AMD, STOP FUCKING AROUND. Just fucking make pytorch work on your GPUs, including consumer ones, and not with a half-arsed CUDA compat layer. Just do it properly, like a few Apple users did for free already. Maybe hire them, I dunno. (A minimal check of PyTorch's ROCm backend is sketched after this thread.)

    • Also, Nvidia is focused on selling AI, AI, and AI. They can sell as many AI GPUs as they can make. That is, until the bubble bursts on AI.
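
On the PyTorch-on-AMD complaint earlier in this thread: official ROCm builds of PyTorch do exist and expose AMD GPUs through the usual torch.cuda API (HIP is mapped onto it); whether a given consumer GPU is supported varies by ROCm release. A minimal sketch of how to check which backend a build was compiled against:

    # Check which GPU backend this PyTorch build targets.
    # On ROCm builds, torch.version.hip is set and the torch.cuda API drives AMD GPUs.
    import torch

    print("GPU available:", torch.cuda.is_available())
    print("built for CUDA:", torch.version.cuda)     # None on ROCm builds
    print("built for ROCm/HIP:", torch.version.hip)  # None on CUDA builds

    if torch.cuda.is_available():
        # Same calls regardless of whether the backend is CUDA or ROCm.
        print("device 0:", torch.cuda.get_device_name(0))
        x = torch.randn(1024, 1024, device="cuda")
        print("matmul result shape:", (x @ x).shape)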

