Graphics

Nvidia Details 'Neural Texture Compression', Claims Significant Improvements (techspot.com)

Slashdot reader indominabledemon shared this article from TechSpot: Games today use highly detailed textures that can quickly fill the frame buffer on many graphics cards, leading to stuttering and game crashes in recent AAA titles for many gamers... [T]he most promising development in this direction so far comes from Nvidia — neural texture compression could reduce system requirements for future AAA titles, at least when it comes to VRAM and storage.... In a research paper published this week, the company details a new algorithm for texture compression that is supposedly better than both traditional block compression (BC) methods and other advanced compression techniques such as AVIF and JPEG-XL.

The new algorithm is simply called neural texture compression (NTC), and as the name suggests, it uses a neural network designed specifically for material textures. To make this fast enough for practical use, Nvidia researchers built several small neural networks optimized for each material... [T]extures compressed with NTC preserve a lot more detail while also being significantly smaller than even these same textures compressed with BC techniques to a quarter of the original resolution... Researchers explain that the idea behind their approach is to compress all these maps along with their mipmap chain into a single file, and then have them be decompressed in real time with the same random access as traditional block texture compression...
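To make that concrete, here is a toy sketch of the general shape of such a per-material network, written in PyTorch. Everything in it (the class name, layer sizes, and channel count) is an illustrative assumption, not Nvidia's published architecture; the point is only that one small set of weights can stand in for every map and mip level of a material and be queried texel by texel.

```python
# Illustrative sketch only, not Nvidia's actual NTC design: a tiny per-material
# network that maps a random-access query (u, v, mip) to every channel of a
# material at once (e.g. albedo RGB, normal XY, roughness, AO, ...), so the
# "compressed texture" is essentially the network's weights.
import torch
import torch.nn as nn

class TinyMaterialCodec(nn.Module):
    def __init__(self, num_channels: int = 8, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),   # input: (u, v, mip)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_channels),   # output: all texture maps at once
        )

    def forward(self, uv_mip: torch.Tensor) -> torch.Tensor:
        # uv_mip: (N, 3) batch of texel queries; output: (N, num_channels)
        return self.mlp(uv_mip)

codec = TinyMaterialCodec()
# "Decompression" is just evaluating the network at whichever texels a shader
# needs, so access stays random, like a block-compressed texture fetch.
texels = codec(torch.rand(4, 3))
print(texels.shape)  # torch.Size([4, 8])
```

In the real system the decoder would run on the GPU inside the rendering pipeline; the sketch only shows why the compressed asset can be a single small blob of weights covering all maps and mip levels, fetched with random access much like a block-compressed texture.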

However, NTC does have some limitations that may reduce its appeal. First, as with any lossy compression, it can introduce visual degradation at low bitrates. Researchers observed mild blurring, the removal of fine details, color banding, color shifts, and features leaking between texture channels. Furthermore, game artists won't be able to optimize textures in all the same ways they do today, for instance by lowering the resolution of certain texture maps for less important objects or NPCs. Nvidia says all maps need to be the same size before compression, which is bound to complicate workflows. This sounds even worse when you consider that the benefits of NTC don't apply at larger camera distances.

Perhaps the biggest disadvantages of NTC have to do with texture filtering. As we've seen with technologies like DLSS, there is potential for image flickering and other visual artifacts when using textures compressed through NTC. And while games can utilize anisotropic filtering to improve the appearance of textures in the distance at a minimal performance cost, the same isn't possible with Nvidia's NTC at this point.
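For a rough sense of why filtering is the sticking point, here is a back-of-the-envelope sketch; every number in it is a made-up assumption for illustration, not a measurement. Hardware anisotropic filtering averages many texel taps per pixel almost for free, but with a neural decoder each of those taps would be a network evaluation unless filtering is handled some other way.

```python
# Back-of-the-envelope illustration only: all numbers are assumptions, not
# benchmarks. With block compression, each filter tap is a cheap hardware texel
# fetch; with a neural decoder, each tap would be a full network evaluation.
TAPS_PER_PIXEL = {"bilinear": 4, "trilinear": 8, "aniso_16x": 128}

HARDWARE_FETCH_COST = 1.0   # arbitrary units
NETWORK_EVAL_COST = 50.0    # assumed to be much heavier than a BC texel fetch

for mode, taps in TAPS_PER_PIXEL.items():
    bc_cost = taps * HARDWARE_FETCH_COST
    ntc_cost = taps * NETWORK_EVAL_COST
    print(f"{mode:10s}  BC ~{bc_cost:6.0f}   NTC ~{ntc_cost:7.0f}  (arbitrary units)")
```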

This discussion has been archived. No new comments can be posted.

  • Stop posting /vertisements.

    This isn't a new thing for game developers; it costs performance because you're just upscaling textures, and a completely separate implementation already shipped in stuff like God of War Ragnarok. Just because it has the word "Nvidia" on it doesn't mean it's the world's most interesting news.
    • Oh fuck off, announcements like this are genuinely interesting compared to the political bullshit normally on Slashdot. This isn't a Slashvertisement. You can't go out and buy NTC for the low low price of whatever.

      • Except it's not interesting. Even in the theory stage it comes with entirely too many drawbacks for modern systems. MAYBE it will be useful on the successor to the Switch if it has enough power to utilize the tech, but no third party dev is seriously going to optimize a cross-gen game for an nVidia only tech that is unusable on the other consoles and a significant amount of PCs without a shitload of baksheesh coming their way for doing so.

        • Oh, for sure. I totally don't have 200 games using NV-specific features.

          NV has 82% of the PC GPU market.
          That is why "a third party dev is seriously going to optimize a cross-gen game for an nVidia only tech that is unusable on other consoles and a significant amount of PCs."

          With all of the Xbox and PS5 consoles added in, AMD still has a smaller market share than NV.
          • Really? They're going to dual-pipeline their entire texture workflow for a tech that makes distant objects look worse (which, given that file sizes are currently only a problem because of 4K, means you're going to be able to see that in crystal clear detail) and renders DLSS effectively unusable because of artifact generation? They tout that at a ridiculously small compression size it's way better than BC, but nobody compresses game textures that much in the first place precisely because of the issues it produ

            • Right now, today, every single engine in production has pipelines that work differently on AMD and NV hardware to support feature differences.

              So, yup.

              Note: I wasn't opining on the value of this particular vendor-specific feature. It may suck.
              My objection was to the following statement:

              but no third party dev is seriously going to optimize a cross-gen game for an nVidia only tech that is unusable on the other consoles and a significant amount of PCs without a shitload of baksheesh coming their way for doing so.

              That's just nonsense.
              DLSS 1 really kind of sucked ass, but it was still rapidly adopted. FSR, same thing. They'll do anything for more frames.

        • by fazig ( 2909523 )
          Yeah, I get that there's still a lot of irrational hatred for NVIDIA around here, but I wouldn't be so dismissive.
          Look at the implementation rates of NVIDIA's proprietary DLSS (not the 3.0 frame insertion thing) vs AMD's open FSR. You might think that DLSS is used almost nowhere because you could just go for FSR, but in reality DLSS has still seen widespread implementation; a lack of DLSS is almost only found in AMD-sponsored titles. Based on how DLSS started you might have also dismissed it as usele
          • Except this tech renders DLSS effectively unusable. Moreover, it kills AF. Texture compression advances are only useful if they produce a superior or equal product, especially as 4k screens become more common. This, by their own admission, does not.

            • by fazig ( 2909523 )
              Again, DLSS started out as being crap.
              This is one big difference between neural net tech and more traditional algorithms: considerable further improvement is possible as they spend more resources on training the networks.

              Will it be good some day? I don't know for sure. But acknowledging what modern machine learning has managed, dismissing something like this the moment a research paper is released seems fairly ignorant without some more cogent points about why it couldn't work on a more fundamental lev
              • DLSS was crap, but it worked at its primary function and showed noticeable improvement with every month. One thing that hasn't changed, however, is that artifacts visible in the original image become magnified in DLSS even now. This tech introduces way more artifacts than block compression because for the most part it's the same idea applied to a different part of the pipeline. Double DLSSing is a bad idea.

                • by fazig ( 2909523 )
                  Garbage in, garbage out.
                  If you're expecting magic that creates perfection, then of course you're setting it up for disappointment. It's known as the Nirvana Fallacy.

                  For things to be useful, they don't even need to produce a superior or equal product. At the most basic level, the gains must outweigh the cost. In this case, the freed-up resources could justify some visual degradation, at least to a degree.

                  This advancement is in the same vein as things like this https://developer.nvidia.com/b... [nvidia.com] which a
  • No, I am not going to accept every attempt at a graphics driver feature as being useful. Game text was terrible under FXAA. The research papers were well-written, but the result didn't seem as good as traditional antialiasing. I'd be surprised if anyone is using FXAA anymore.

"Your stupidity, Allen, is simply not up to par." -- Dave Mack (mack@inco.UUCP) "Yours is." -- Allen Gwinn (allen@sulaco.sigma.com), in alt.flame

Working...