
AMD's New Variable Graphics Memory Lets Laptop Users Reassign Their RAM To Gaming (theverge.com)

AMD has introduced Variable Graphics Memory (VGM) for its Ryzen AI 300 "Strix Point" laptops, allowing users to convert up to 75% of their system memory into dedicated VRAM via the AMD Adrenalin app, enhancing gaming performance for titles requiring more VRAM. The Verge reports: You might be wondering: does that extra video memory actually make a difference? Well, it depends on the game. Some games, like Alan Wake II, require as much as 6GB of VRAM and will throw errors at launch if you're short -- Steam Deck, Asus ROG Ally, and Lenovo Legion Go buyers have been tweaking their VRAM settings for some time to take games to the threshold of playability. But in early testing with the Asus Zenbook S 16, a Strix Point laptop that has already shipped with this feature, my colleague Joanna Nelius saw that turning it on isn't a silver bullet for every game. With 8GB of VRAM, the laptop played Control notably faster (65fps vs. 54fps), but some titles had smaller boosts, no boost, or even slight frame rate decreases.


  • I thought ever since the AGP days of video cards it's been possible to use a portion of system memory as video memory?
    • As far as I'm aware, partitioning of RAM was done at the system level via BIOS and was static, requiring a reboot to change. This appears to be a dynamic allocation that can be done on demand.

      =Smidge=

      • vampire video to the extreme!

      • by AmiMoJo ( 196126 )

        Current gen AMD APUs (their term for CPUs with a built-in GPU) have 512MB of on-board dedicated memory, and the driver allocates some main RAM for GPU assets as well. Data in main RAM has to be transferred into the GPU memory to be operated on.

        They appear to be just allowing the user to dedicate a certain amount of main RAM to the GPU driver, and then having the driver lie to the game about how much VRAM the GPU actually has. The main benefit is that the game gets an accurate idea of how much VRAM to use (scaling texture quality and the like accordingly).
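
        For the curious, this is roughly what a game sees when it asks. A minimal sketch using Vulkan's VK_EXT_memory_budget extension (assumes a VkPhysicalDevice already in hand; illustrative only) -- VGM just changes the numbers these queries return:

          /* Sketch: query the VRAM figures the driver reports to applications.
           * Assumes the driver exposes VK_EXT_memory_budget. */
          #include <vulkan/vulkan.h>
          #include <stdio.h>

          void print_vram_budget(VkPhysicalDevice gpu)
          {
              VkPhysicalDeviceMemoryBudgetPropertiesEXT budget = {
                  .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_BUDGET_PROPERTIES_EXT
              };
              VkPhysicalDeviceMemoryProperties2 props = {
                  .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2,
                  .pNext = &budget
              };
              vkGetPhysicalDeviceMemoryProperties2(gpu, &props);

              for (uint32_t i = 0; i < props.memoryProperties.memoryHeapCount; i++) {
                  printf("heap %u: size %llu MiB, budget %llu MiB, device-local: %d\n", i,
                         (unsigned long long)(props.memoryProperties.memoryHeaps[i].size >> 20),
                         (unsigned long long)(budget.heapBudget[i] >> 20),
                         (props.memoryProperties.memoryHeaps[i].flags &
                          VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0);
              }
          }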

    • I thought ever since the AGP days of video cards it's been possible to use a portion of system memory as video memory?

      Yes, but this has always been fixed in the BIOS and has always been laughably small. By default on most iGPU implementations it is between 128MB and 512MB, and it needs to be set in the BIOS.

      From what I understand looking around online, AMD's drivers have historically handled this in the background, variably and without any fixed setting, and as a result some games shat the bed when they queried how much vRAM was actually available to them. The news here seems to be that setting this option gives games a fixed, reliable figure to query.

      • Intel iGPUs are fully unified and have been for a long time. They have a memoryHeap the size of available system RAM, and all of their memoryTypes are suitable for all purposes: DEVICE_LOCAL, HOST_COHERENT, and HOST_VISIBLE.
        AMD iGPUs still have partitions. The dedicated VRAM section has a few special flags to indicate what's different about that region: MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD and MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD.
        The shared section is standard HOST_COHERENT and HOST_VISIBLE, indicating it is not DEVICE_LOCAL.
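
        If you want to see that split for yourself, here's a quick sketch that dumps the memoryTypes (assumes a VkPhysicalDevice; the AMD bits come from the VK_AMD_device_coherent_memory extension, so they only show up where it's supported):

          /* Sketch: list each memory type and the property flags discussed above. */
          #include <vulkan/vulkan.h>
          #include <stdio.h>

          void dump_memory_types(VkPhysicalDevice gpu)
          {
              VkPhysicalDeviceMemoryProperties mp;
              vkGetPhysicalDeviceMemoryProperties(gpu, &mp);

              for (uint32_t i = 0; i < mp.memoryTypeCount; i++) {
                  VkMemoryPropertyFlags f = mp.memoryTypes[i].propertyFlags;
                  printf("type %u (heap %u):%s%s%s%s%s\n",
                         i, mp.memoryTypes[i].heapIndex,
                         (f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT)        ? " DEVICE_LOCAL" : "",
                         (f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)        ? " HOST_VISIBLE" : "",
                         (f & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)       ? " HOST_COHERENT" : "",
                         (f & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD) ? " DEVICE_COHERENT_AMD" : "",
                         (f & VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD) ? " DEVICE_UNCACHED_AMD" : "");
              }
          }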
        • No, you've got this backwards. Intel's iGPU is fully unified in that it is fully shared, i.e. 100% of the available vRAM is listed as shared RAM and mapped by the OS, along with all the performance costs that OS-managed shared GPU memory entails. As far as I understand, AMD here is doing the reverse: sectioning off a portion of RAM to make sure it is *not* available to be shared, but rather exclusively for GPU use.

          Honestly tech information is hard to come by here. Do you have any docs on this VGM?

          • UMA has a meaning.
            It means local to both devices.
            Intel's are UMA (DEVICE_LOCAL, HOST_VISIBLE, HOST_COHERENT).
            They're not "shared".

            AMD has partitions, with no UMA.
            They partition it into 2 blocks.
            A "dedicated" block (not mapped on the CPU), and a "shared" block which is HOST_VISIBLE, and HOST_COHERENT, but not DEVICE_LOCAL.
            Different things can be done with different blocks based on the locality. You cannot, for example, use the "shared" block as texture memory on the AMD.
            I have nothing backwards on it.
          • Also, the idea that there is a performance cost for UMA is absurd.
            AMD is doing the same thing Intels used to do back in the dark ages of iGPUs. UMA is the future.
            AMD does partitioning because they have really slow connectivity to the I/O die, and so coherency between CCDs is *very* expensive.
            If they were to do UMA on their devices, it would murder the performance of their parts; but that is a "them" problem, not a UMA problem.
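
            To make the locality point concrete, this is the standard Vulkan memory-type selection loop (a sketch; the helper name is mine). Asking for DEVICE_LOCAL for texture memory vs. HOST_VISIBLE | HOST_COHERENT for a staging buffer lands you in different blocks on the partitioned AMD layout, while a single type can satisfy both on a UMA part:

              /* Sketch: pick a memory type matching the required property flags.
               * 'type_bits' comes from vkGetBufferMemoryRequirements or
               * vkGetImageMemoryRequirements. */
              #include <stdint.h>
              #include <vulkan/vulkan.h>

              uint32_t find_memory_type(VkPhysicalDevice gpu, uint32_t type_bits,
                                        VkMemoryPropertyFlags required)
              {
                  VkPhysicalDeviceMemoryProperties mp;
                  vkGetPhysicalDeviceMemoryProperties(gpu, &mp);

                  for (uint32_t i = 0; i < mp.memoryTypeCount; i++) {
                      if ((type_bits & (1u << i)) &&
                          (mp.memoryTypes[i].propertyFlags & required) == required)
                          return i;
                  }
                  return UINT32_MAX; /* no suitable type */
              }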
    • by ledow ( 319597 )

      Yep.

      Been a thing for so long, on so many platforms, that the only thing new here is doing it "on the fly" (but I'm pretty sure I've used machines where you could do it on the fly via Windows drivers, etc. too).

      But there's a reason that anything considered a decent GPU would avoid ever doing so and that reason has barely changed. GDDR exists for a reason.

      Contending with system RAM via the memory controllers, over a bus shared with other peripherals, is never, ever going to be as good as having a dedicated and independent pool of GDDR sitting right next to the GPU.

    • It was certainly done well on the Silicon Graphics (SGI) O2 workstation. It could share all of its 1GB max system memory with the CRM onboard graphics controller. The memory is also accessible by the ICE accelerator. There are demos by SGI that show this capability. One of them loads a giant satellite image and zooms down onto it from space. It's an awesome system. I have two of them, and one is still on my desk right now.
  • I remember some internal video cards using shared RAM on PCs in the early 2000s. Definitely not barn-burner performance. I can see having dedicated VRAM, but "swapping" to normal RAM seems new... or having all RAM on the machine be dual-ported so it can act as VRAM if needed, although this makes subsequent RAM upgrades more difficult due to the relative rarity of dual-ported memory modules.

    • by Luckyo ( 1726890 )

      Shared RAM is still the norm on the overwhelming majority of PCs that do not have a dedicated graphics card; instead, the GPU sits on the CPU and shares system memory with it.

      This is about dynamic allocation of said memory, rather than the current mechanism where you allocate memory in the BIOS and it sticks until you reboot into the BIOS and set a new value.

    • by unrtst ( 777550 )

      I can see having dedicated VRAM, but "swapping" to normal RAM seems new... or having all RAM on the machine be dual-ported so it can act as VRAM if needed, although this makes subsequent RAM upgrades more difficult due to the relative rarity of dual-ported memory modules.

      Forget doing RAM upgrades. This has "32GB LPDDR5X on board".

      • Yep, more soldered-to-the-mainboard crap. Can't have consumers upgrading without paying a full replacement fee. What I would like to see is this backfiring due to the sudden availability of cheap BGA rework stations and pick-and-place machines in the consumer market. (It would help with the repair side of things too.)
    • by Creepy ( 93888 )

      It can still be done. In fact, Intel even tried to release a series of GPU/CPU hybrids, seeing a market for real-time ray tracing, where pretty much the entire scene has to be in memory to calculate reflections and compute color for any object based on some graphical model like Phong shading [wikipedia.org]. Intel tried to capitalize on that with Larrabee, but non-graphics memory speeds were too slow to really cut it for anything outside of real-time raytracing demos. You could see even then that it did specular (reflective) highlights well.

    • We never left shared RAM. In fact, technically, the majority of video RAM in the computer market globally is shared memory, as it is a component of all iGPUs, and iGPUs are included in a mind-boggling number of CPUs these days.

      Indeed, this announcement is a reflection of the fact that shared memory dominates and needs to be optimised to work better going forward. /Disclaimer: Posted from a machine with 8GB of shared memory for its Intel Iris Xe graphics.

  • I kinda wish I could do this for the nVidia card in my home AI workload machine, for when performance is negotiable and a model wants 24GB. But AMD still doesn't support a CUDA-compatible API, so... OK, I guess?

    Learn from Compaq maybe.

    • I think AMD would love to support CUDA if Nvidia let them, but Nvidia protects that shit and is bound to be litigious about it; it sells a shitload of hardware.

      Through sheer necessity, though, AMD supports two open source alternatives (OpenCL, ROCm), but CUDA is entrenched and Nvidia has no reason to optimize for something anyone can use.

    • No, you do not. In fact, AI workloads are quite possibly the worst-case scenario for this, performance-wise. Unlike gaming, where it's more important to have sufficient VRAM so you're not hitting disk to load textures, AI workloads are incredibly latency-sensitive. Using shared RAM for them decimates performance. In the ongoing discussions around VGM, everyone is quite clear: do not enable this if you want to use ray tracing, as it is the same kind of latency-dependent workload.

    • Well, there is ROCm, which, if AMD can meet certain price performance points, could become interesting for prosumers and hobbyists. (Yeah, I know, I know).

      This capability of easily resizing the RAM dedicated to a "GPU" (let's recognize that in this application it's more of an NPU/AI engine) raises an interesting prospect. Doing some rough estimates below (in some cases mentally converting from C$ back to US$).

      The cost-sensitive, poorly addressed AI window that seems open at the moment is the ~48GB-192+GB space.

  • This sounds like going back to editing your config.sys and autoexec.bat files to load himem.sys to free up memory.

    • Quite the opposite. The existing way of managing shared memory was changing settings in the BIOS. The point of this option is that it can be done dynamically, without a reboot into the BIOS.

  • As someone who's completely uninterested in modern gaming, I would be interested in reclaiming video RAM for general computing.

    • The Disk Copy [earlymacintosh.org] utility allowed copying a 400KB floppy in 4 passes on a computer with only 128KB of RAM.

      Under normal operation, 22KB of the 128KB was reserved for video. Disk Copy borrowed most of that memory so a floppy could be duplicated in only 4 passes.

      • by rossdee ( 243626 )

        I had a Compucolor II in 1980 that had a disk-copy program that used the display memory as a buffer.

    • I would be interested in reclaiming video RAM for general computing.

      To what end? A CPU-based task would access video RAM over the PCIe bus, and that would be painfully slow. A GPU-based task can already access video RAM, and does so.

      You don't game? Good for you, I'm not gaming right now either, but I'm still using 1.8GB of video memory. Your GPU is used for more than just games.

    • You can reclaim video RAM as a ramdisk and then use it for swap. It's an old trick on Linux; I remember testing it in the '00s: https://wiki.archlinux.org/tit... [archlinux.org] Of course, there are potential issues, as the GPU may compete for access to the same memory. In my experience, closed-source video drivers are more problematic.

      There are also OpenCL-based approaches that should be safer, but they also have their issues. For example https://github.com/Overv/vramf... [github.com], which I recall worked fine as a user-only ramdisk.

  • When AMD acquired ATi, I wondered, what's taking so long?
    When Intel started developing their own iGPUs (instead of licensing PowerVR designs), I wondered, what's taking so long?
    When Windows Vista arrived with a new display driver model and more goodies under the hood, I wondered, is this the day? Alas, the answer was nope.
    When Apple did it in their Mx desktops, I said, sure, this will light a fire under the x86-64 crowd's ass, but, again, nope....

    It is finally here. It took them tooooo long, but it is a welcome development.

    • by ddtmm ( 549094 )
      Don't know why you're holding your breath. Why would a company spend a bunch of time and resources on something that a bit of money spent on more RAM can fix?
  • All new Macs have unified memory for that. There's no distinction between CPU and GPU memory, with more bandwidth than the CPUs can ever use, so data doesn't need to be copied from CPU to GPU.
    • So do all Intel and AMD iGPUs.
      The "dedicated" part of the RAM is for graphics APIs that were coded around that concept (OpenGL, mostly).
      Letting it be variable lets you change how graphics engines respond to that value. Vulkan, or a newer API, will give you a better idea of how it all actually works, instead giving you control of host/device coherency and visibility.
      In reality, every single byte of RAM is attached to the root complex that both the CPU and GPU have access to, with full cache coherency.
      • The latency is still much higher than when the RAM is on a SoC.

        • Nope.
          No idea where you got that idea from.
          • I got it backwards. It's the bandwidth that's higher, because there are more connections. With 4 sticks of DDR5 you might get 250GB/s of bandwidth. An M2 MacBook Pro starts at 200GB/s, but you can go up to 800GB/s with the M2 Ultra.

            • The bandwidth is indeed much higher on a MBP than on an Intel or AMD part. That's because they've got more memory channels.
              They have more memory channels because they wanted their IGP to be well-fed.
              On my M1 Max, for example, the CPU can only saturate about ~200GB/s of my 400GB/s, and the GPU can saturate ~300GB/s.
              The bottleneck on a PC desktop part is the fact that they've only got 2 memory channels, period. They could increase that if they wanted, but it has nothing to do with whether or not it's a SoC.
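
              If you want to sanity-check the channel math, theoretical peak is just channels x bus width x transfer rate; a quick sketch with nominal figures (illustrative, not measurements):

                /* Back-of-the-envelope peak bandwidth: channels * bytes per transfer * MT/s. */
                #include <stdio.h>

                int main(void)
                {
                    /* channels * (bits per channel / 8) * transfers per second */
                    double ddr5_dual   = 2 * (64 / 8.0) * 6000e6; /* typical 2-channel desktop  */
                    double wide_lpddr5 = 8 * (64 / 8.0) * 6400e6; /* 512-bit LPDDR5, M1 Max class */

                    printf("dual-channel DDR5-6000: %.0f GB/s\n", ddr5_dual / 1e9);   /* ~96  */
                    printf("512-bit LPDDR5-6400:    %.0f GB/s\n", wide_lpddr5 / 1e9); /* ~410 */
                    return 0;
                }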
