
OpenMP 6.0 Released (phoronix.com)

Phoronix's Michael Larabel reports: The OpenMP Architecture Review Board announced from SC24 that OpenMP 6.0 is now available as a major upgrade to the OpenMP specification for parallel programming in C / C++ / Fortran. A big emphasis of OpenMP 6.0 is making it easier for developers to adopt: the release aims to make it easier to support parallel programming in new applications, easier to adapt to new use cases, and to give developers more fine-grained control.

OpenMP 6.0 simplifies task programming with support for task execution by free-agent threads, the ability to record task graphs for efficient replay, and other improvements. OpenMP 6.0 also brings support for array-syntax applications, better control over memory allocations and accessibility, easier writing of asynchronous data transfers, and other improvements for enhanced device support / offloading. There is also easier programming of loop transformations, support for induction, support for C23 / Fortran 2023 / C++23, grater user control of storage resources and memory spaces, and other improvements.
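
As a taste of the loop-transformation directives this release expands on, here is a hedged sketch using the tile construct (introduced back in OpenMP 5.1, so not 6.0-specific; the array and tile sizes are arbitrary, and compiler support varies):

    #include <stdio.h>

    #define N 256

    int main(void) {
        static double a[N][N];

        /* The tile directive asks the compiler to block this loop
           nest into 32x32 tiles for cache locality, with no manual
           rewriting of the loops. */
        #pragma omp tile sizes(32, 32)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = (double)(i + j);

        printf("a[%d][%d] = %f\n", N - 1, N - 1, a[N - 1][N - 1]);
        return 0;
    }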


Comments:
  • by phantomfive ( 622387 ) on Thursday November 14, 2024 @07:02PM (#64946747) Journal
    Using #pragma for logic in C is kind of awkward; generally I find threads easier to work with.
    • by Victotronics ( 2663263 ) on Thursday November 14, 2024 @08:48PM (#64946977)
      It's widely used in scientific applications. Distributing a loop with many, many iterations (statically or dynamically) over many cores is way easier in OpenMP than with raw threads. It's really an elegant way of expressing thread parallelism without explicit fork and join. The tasking mechanism brings it closer to traditional fork/join, but it's still much more elegant.
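
      For the curious, a minimal sketch of what that looks like in C (the array size and chunk size are arbitrary assumptions; compile with e.g. gcc -fopenmp):

        #include <omp.h>
        #include <stdio.h>

        #define N 10000000

        int main(void) {
            static double a[N], b[N];

            for (int i = 0; i < N; i++)   /* serial init */
                b[i] = (double)i;

            /* One pragma spreads the N iterations over all cores.
               schedule(static) would split them into equal blocks up
               front; schedule(dynamic, 4096) hands out chunks as
               threads become free. */
            #pragma omp parallel for schedule(dynamic, 4096)
            for (int i = 0; i < N; i++)
                a[i] = 2.0 * b[i] + 1.0;

            printf("a[%d] = %f (max threads: %d)\n",
                   N - 1, a[N - 1], omp_get_max_threads());
            return 0;
        }
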
      • So is it like Nvidia's CUDA? I'm not terribly familiar with either, though I'm casually trying to learn CUDA. The iteration over cores sounds a bit like CUDA.
        • It's more flexible than CUDA. In particular its task model is very powerful. It's like recursively spawning threads, but easier.
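
          Roughly like this (naive Fibonacci with OpenMP tasks; the serial cutoff of 20 is an arbitrary choice):

            #include <omp.h>
            #include <stdio.h>

            /* Each call spawns two child tasks; taskwait joins them.
               Locals must be marked shared so the tasks can write them. */
            long fib(int n) {
                if (n < 2) return n;
                if (n < 20) return fib(n - 1) + fib(n - 2); /* stay serial */

                long x, y;
                #pragma omp task shared(x)
                x = fib(n - 1);
                #pragma omp task shared(y)
                y = fib(n - 2);
                #pragma omp taskwait
                return x + y;
            }

            int main(void) {
                long r;
                #pragma omp parallel
                #pragma omp single   /* one thread seeds the task tree */
                r = fib(35);
                printf("fib(35) = %ld\n", r);
                return 0;
            }
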
      • by dargaud ( 518470 )
        Yes, it's easy to add a pragma before a loop, but:
        1) does it even do anything? If the loop is not parallelizable, OpenMP fails silently. It still compiles and works as before, which is fair, but you have no way of knowing what is really going on without looking at the assembly code or at advanced gcc optimization messages.
        2) often it's slower than before, and I've been bitten by that basically every time I've tried to use it. With several threads accessing *different ranges* of the same data at the sa
        • by ET3D ( 1169851 )

          Yes, it's great for trivial loops and not so great for less trivial ones. It's also not always very good on CPUs with very high core counts. But it still tends to be simpler than using threads manually, which has all the same downsides but is more complicated to work with.

          Also, I like omp simd, which can provide SIMD acceleration more easily than programming it manually, even though it does need a lot of massaging to get working.
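
          E.g., a sketch (the restrict qualifiers are part of the usual massaging, helping the compiler prove the accesses are independent):

            #include <stdio.h>
            #include <stddef.h>

            /* omp simd explicitly requests vectorization of the loop. */
            void saxpy(size_t n, float a, const float *restrict x,
                       float *restrict y) {
                #pragma omp simd
                for (size_t i = 0; i < n; i++)
                    y[i] = a * x[i] + y[i];
            }

            int main(void) {
                float x[8] = {1, 2, 3, 4, 5, 6, 7, 8}, y[8] = {0};
                saxpy(8, 2.0f, x, y);
                printf("y[7] = %f\n", y[7]);   /* 16.0 */
                return 0;
            }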

    • by godrik ( 1287354 ) on Thursday November 14, 2024 @08:50PM (#64946983)

      Yes! Plenty of scientific applications are written using OpenMP. I use it in production for web servers with computationally costly routes. It is pretty easy to make simple parallelism go well.

      Now, the GPU parts of OpenMP are a bit rough to use, because the dev toolchains are fairly annoying to install and make portable.
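
      The directive side is at least compact; it's the toolchain that hurts. A sketch, assuming a compiler actually built with offloading support (otherwise this silently runs on the host):

        #include <stdio.h>

        #define N 1000000

        int main(void) {
            static float x[N], y[N];
            for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

            /* The map clauses copy data to and from the device; the
               combined construct distributes iterations across GPU
               teams and threads. */
            #pragma omp target teams distribute parallel for \
                    map(to: x[0:N]) map(tofrom: y[0:N])
            for (int i = 0; i < N; i++)
                y[i] += 2.0f * x[i];

            printf("y[0] = %f\n", y[0]);   /* 4.0 */
            return 0;
        }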

    • by ET3D ( 1169851 )

      It's very convenient. In its simplest form you can just use the pragma on a for loop and get parallelisation.
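
      E.g., a trivial sketch, with the pragma being the only parallel-specific line:

        #include <stdio.h>

        int main(void) {
            enum { N = 1000 };
            double a[N], b[N], c[N];
            for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

            /* Each iteration is independent, so OpenMP can safely
               split them across threads. */
            #pragma omp parallel for
            for (int i = 0; i < N; i++)
                c[i] = a[i] + b[i];

            printf("c[%d] = %f\n", N - 1, c[N - 1]);
            return 0;
        }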

    • Lots of people.

      It has a huge advantage, precisely because it uses pragmas:
      You can write your code single-threaded. Once it works, you can add parallelism using pragmas. If there are issues, you can switch off parallelism in any part of your code to find bugs, without rewriting it.

      Sure, some performance-related changes will be needed to avoid pitfalls such as false sharing, but it is extremely powerful and simple to use (see the sketch after this comment).

      For HPC applications OpenMP and OpenMPI is where it's at.

      It can also
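
      A small sketch of that workflow (names are illustrative; the if clause is the runtime on/off switch described above, and reduction avoids both data races and false sharing on the accumulator):

        #include <stdio.h>

        #define N 10000000

        int main(void) {
            int enable = 1;   /* flip to 0 to run serially while debugging */
            double sum = 0.0;

            /* if(enable) keeps the region serial when enable is 0;
               reduction(+: sum) gives each thread a private copy of
               sum and combines them at the end. */
            #pragma omp parallel for if(enable) reduction(+: sum)
            for (int i = 0; i < N; i++)
                sum += 1.0 / (1.0 + i);

            printf("sum = %f\n", sum);
            return 0;
        }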

  • From the summary - "...support for induction, support for C23 / Fortran 2023 / C++23, grater user control of storage resources and memory spaces, and other improvements. "

    Actually they announced there is "Greater user control of storage resources and memory spaces".

    I'm not sure that using a grater would be a good way of controlling those things.
