Graphics

Nvidia Hints At Replacing Rasterization and Ray Tracing With Full Neural Rendering (tomshardware.com) 131

Mark Tyson writes via Tom's Hardware: A future version of [Deep Learning Super Sampling (DLSS) technology] is likely to include full neural rendering, hinted Bryan Catanzaro, an Nvidia VP of Applied Deep Learning Research. In a round table discussion organized by Digital Foundry (video), various video game industry experts talked about the future of AI in the business. During the discussion, Nvidia's Catanzaro raised a few eyebrows with his openness to predict some key features of a hypothetical "DLSS 10." [...]

We've seen significant developments in Nvidia's DLSS technology over the years. First launched with the RTX 20-series GPUs, many wondered about the true value of technologies like the Tensor cores being included in gaming GPUs. The first ray tracing games, and the first version of DLSS, were of questionable merit. However, DLSS 2.X improved the tech and made it more useful, leading to it being more widely utilized -- and copied, first via FSR2 and later with XeSS. DLSS 3 debuted with the RTX 40-series graphics cards, adding Frame Generation technology. With 4x upscaling and frame generation, neural rendering potentially allows a game to only fully render 1/8 (12.5%) of the pixels. Most recently, DLSS 3.5 offered improved denoising algorithms for ray tracing games with the introduction of Ray Reconstruction technology.

The above timeline raises questions about where Nvidia might go next with future versions of DLSS. And of course, "Deep Learning Super Sampling" no longer really applies, as the last two additions have targeted other aspects of rendering. Digital Foundry asked that question to the group: "Where do you see DLSS in the future? What other problem areas could machine learning tackle in a good way?" Bryan Catanzaro immediately brought up the topic of full neural rendering. This idea isn't quite as far out as it may seem. Catanzaro reminded the panel that, at the NeurIPS conference in 2018, Nvidia researchers showed an open-world demo of a world being rendered in real-time using a neural network. During that demo the UE4 game engine provided data about what objects were in a scene, where they were, and so on, and the neural rendering provided all the on-screen graphics.
"DLSS 10 (in the far far future) is going to be a completely neural rendering system," Catanzaro added. The result will be "more immersive and more beautiful" games than most can imagine today.
  • by paralumina01 ( 6276944 ) on Monday September 25, 2023 @08:36PM (#63877053)
    n/t
    • It's likely just marketing hype since current hardware can't yet do neural rendering in real-time.

      The idea is that you give a neural engine a description of the scene along with as many parameters as needed to constrain it to what you want. Then you tell it the changes from that scene to the next, like "Actor X turns 45 degrees to the right and grins." Then the neural engine seamlessly creates the frames on the fly.

      The more details you give it, the more the result will match your intentions, but whatever yo
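
      A minimal sketch of what such a scene-description interface might look like; SceneState and NeuralRenderer are invented names for illustration, not any real Nvidia API:

      # Hypothetical "describe the scene, let the network draw it" interface.
      from dataclasses import dataclass, field

      @dataclass
      class SceneState:
          actors: dict = field(default_factory=dict)   # actor id -> pose/expression parameters
          camera: dict = field(default_factory=dict)   # position, orientation, field of view

      class NeuralRenderer:                            # stand-in for the speculative neural engine
          def render(self, state: SceneState):
              """Return a frame (e.g. an HxWx3 array) synthesized from the description."""
              raise NotImplementedError("placeholder; no such engine exists today")

      scene = SceneState(
          actors={"actor_x": {"yaw_deg": 0.0, "expression": "neutral"}},
          camera={"position": (0.0, 1.7, -3.0), "fov_deg": 90},
      )
      # Describe only the change from one frame to the next, as suggested above:
      scene.actors["actor_x"]["yaw_deg"] += 45.0
      scene.actors["actor_x"]["expression"] = "grin"
      # frame = NeuralRenderer().render(scene)         # the engine would fill in the pixels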

      • by Kisai ( 213879 )

        The problem is this is all strongly barking up the wrong tree.

        We've hit, for all intents and purposes, the maximum power and thermal limit for a GPU in a desktop. Nobody is going to buy a high-end GPU if it requires as much power as their stove or clothes dryer and has to be vented to the outside because it throws off the thermal equilibrium of their home (thus requiring additional A/C).

        Just running my RTX 3090, one GPU, alone, for 4 hours raises the temperature of my bachelor apartment enough that I don't need heat d

        • by gTsiros ( 205624 ) on Tuesday September 26, 2023 @06:28AM (#63877755)

          That 3090 was the most power-hungry GPU that had ever existed when it was released: 350 W. There are only 5 cards available today that (nominally) use more power.

          Anyway, you know you can always limit the power consumption of your GPU, either directly or indirectly, yes? It doesn't *need* to go full tilt all the time.

        • by DarkOx ( 621550 )

          I think the issue here is that GPU makers are trying to build chips that are all things to all people. ATI/AMD abandoned their TeraScale architecture because it was best suited to tasks that neatly fit the single-instruction, multiple-data model, which, wait for it, is mostly what rendering frames for video games does.

          Those chips were also very power hungry and had some efficiency problems when you could not quite find a way to pipeline operations to keep every compute unit engaged but that was/is probably solvable with ways

        • My cat likes to hang around my pc's exhaust fan in the winter.
      • by AmiMoJo ( 196126 )

        They seem to be talking lower level than that. You give it a polygon mesh and it renders it, but instead of using shaders it has a neural net that can reproduce the texture and lighting of the material. It should be able to cope well with things that traditional rasterizing and raytracing don't, like materials looking weird close up or far away.

        I'm sure it's a lot of hype at this point, but they are probably right that at some point the fundamental nature of rendering is going to move away from rasterizatio
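
        To make that idea concrete, here's a toy stand-in for a "learned material": a tiny, randomly initialised MLP that maps shading inputs to a colour. The network size, inputs, and random weights are illustrative assumptions, not how any shipping neural shader actually works:

        # Toy "neural material": a small network standing in for a hand-written shader.
        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(9, 32)), np.zeros(32)   # 9 inputs: position, normal, view direction
        W2, b2 = rng.normal(size=(32, 3)), np.zeros(3)    # 3 outputs: R, G, B

        def neural_shade(position, normal, view_dir):
            x = np.concatenate([position, normal, view_dir])
            h = np.maximum(0.0, x @ W1 + b1)              # ReLU hidden layer
            return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid keeps the colour in [0, 1]

        rgb = neural_shade(np.array([0.0, 1.0, 2.0]),
                           np.array([0.0, 1.0, 0.0]),
                           np.array([0.0, 0.0, -1.0]))
        print(rgb)  # one shaded sample; a renderer would evaluate this per pixel or per sample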

    • by locater16 ( 2326718 ) on Tuesday September 26, 2023 @01:18AM (#63877477)
      Raytracing is dead, AI is the future, discard your old hardware then kill yourself in shame that you don't have the new yet!
    • I call BS on this since the vast majority of games run on AMD hardware in consoles.

  • by Twisted64 ( 837490 ) on Monday September 25, 2023 @08:38PM (#63877057) Homepage

    This is going to make rendering bugs SUPER interesting.

  • by Luckyo ( 1726890 ) on Monday September 25, 2023 @09:33PM (#63877155)

    The "current thing" that nvidia is desperately trying (and utterly failing) to sell its cards on is AI rendering, i.e. DLSS3. For those that don't know, DLSS3 is a mode where the card takes two frames and tries to guess what the frame(s) that would go between them would look like, and presents that so that nvidia can pretend in its marketing that the card's output frame rate is many times higher.

    For gamers, this is strictly an anti-feature. It makes gaming strictly worse when enabled. The reason is that the need for high FPS is primarily about the responsiveness of the game, i.e. "I move my mouse, how fast does the game react to it". For normal rendering, higher FPS is strictly better for responsiveness unless the game implements some really warped and fucked up input/output system (which no modern games really do). Essentially, inputs are typically processed once per rendered frame, and the more frames the card can render, the less time is spent rendering each frame, which means less time taken to process each player's input and deliver output based on it. So your game's responsiveness to your inputs scales directly with frame rate. A high frame rate is a strict upgrade to responsiveness.

    This is why it actually helps to have really high frame rates, well beyond what humans consciously perceive, in shooters like Counter-Strike: correctly tracking a target with your mouse improves, for most people, with a more responsive game.

    And DLSS3 does the opposite. It gives you an illusion of a higher frame rate, while the game's actual frame rate not only stays low but suffers from increased delay, as real frames are held back in the pipeline to ensure that the fake inserted frames aren't slightly out of sync by arriving a bit too late or too early. That means that enabling DLSS3 will always make the game's responsiveness objectively worse.

    Ergo, it's an anti-feature for gaming. And this is what nvidia is trying to sell cards to gamers on. At this point, it's fairly obvious that nvidia hasn't been a gaming company for several years; it has fully transformed into an AI accelerator company, one that looks at the gaming market not as an actual market segment worth developing for any more, but as a byproduct onto which AI-based solutions should be jury-rigged to save costs.
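
    For anyone who wants the back-of-envelope numbers behind the responsiveness argument above, a quick sketch under the simplifying assumption that input is sampled once per rendered frame (real engines and display pipelines add further stages on top):

    # Rough numbers behind "responsiveness scales with real frame rate": if input is
    # sampled once per rendered frame, the average wait before the game even sees
    # your input is about half a frame time. Simplified model, nothing more.
    for fps in (30, 60, 120, 240):
        frame_ms = 1000.0 / fps
        print(f"{fps:3d} fps: frame time {frame_ms:5.1f} ms, "
              f"average input sampling delay ~{frame_ms / 2:4.1f} ms")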

    • Yep. It's essentially a clever "sleight of hand". It also sacrifices image quality, since it's upscaling a lower-resolution image for the sake of higher (but by all other metrics inferior) frame rates. The AI decides which parts of the image it thinks will actually matter and makes its best effort to hallucinate what the image should look like, giving special attention to the parts it thinks we pay attention to.

    • Not to disagree entirely with you on the anti-feature claim, but the argument is fairly weak. The fastest reaction time of humans is between 150 and 200 milliseconds (with training). At 30 frames per second, a crappy standard for today's GPUs' performance, each frame is rendered in ~33 milliseconds, meaning you could pack a bit more than 6 frames into your reaction time window (with training). At 144 FPS, the ideal refresh rate for Counter-Strike (which a compulsory Google search reports as "Criminally smooth. For hardcore and professional players"), you have ~7 ms/frame, which means your GPU could render 25 frames by the time you actually perceived a movement. Is it a cheap trick? Absolutely yes.
      • The fastest reaction time of humans is between 150 and 200 milliseconds (with training)

        Note that this reaction speed is after the change is perceived. The change can be noticed much, much quicker.

        • That's true and I'd upvote you, but the parent is also missing a key distinction between a haptic reaction you don't expect (150-200 ms might be right) and a haptic reaction you're expecting (less than 10 ms). Can't find the source now, though; it was in the context of manipulating robots and what 5G can bring to the table.
      • by AmiMoJo ( 196126 )

        Practical testing has shown there is an advantage to higher frame rates in gaming, which is not surprising. While it may take your body 150ms to react, getting started with that reaction as early as possible is a benefit. Your eyes have to see something happen, and your brain has to recognize it and formulate a response. The more information it has, the earlier it has it, the better. What's more, the brain is tuned to look for changes in your field of view, by millions of years of evolution.

      • Reaction time is irrelevant when human timing precision with anticipation is many times finer, and timing perception is finer still. Even ignoring competitive games, every millisecond counts for the feeling of immediacy of the controls, which is fun in and of itself regardless of the content of the game.
      • by Luckyo ( 1726890 ) on Tuesday September 26, 2023 @07:26AM (#63877869)

        This is the applied variant of the widely disseminated pseudo-science behind the "cinematic 24 frames per second experience" claim. No shame in falling victim to it; the amount of money spent marketing this about a decade ago was insane.

        Reality, however, is the complete opposite; the claim has been tested several times and proven objectively false. Perhaps the most publicly visible test was, ironically, sponsored by nvidia itself, back in the day before DLSS3, when it was selling people on actual high-frame-rate gaming.

        https://www.youtube.com/watch?... [youtube.com]

      • Not to disagree entirely with you on the anti-feature claim, but the argument is fairly weak. The fastest reaction time of humans is between 150 and 200 milliseconds (with training). At 30 frames per second, a crappy standard for today's GPUs' performance, each frame is rendered in ~33 milliseconds, meaning you could pack a bit more than 6 frames into your reaction time window (with training). At 144 FPS, the ideal refresh rate for Counter-Strike (which a compulsory Google search reports as "Criminally smooth. For hardcore and professional players"), you have ~7 ms/frame, which means your GPU could render 25 frames by the time you actually perceived a movement. Is it a cheap trick? Absolutely yes.

        Reaction time != adding latency and thinking it doesn't matter. Reaction time is not perception time.

        If I'm playing some hyper-competitive FPS and I see an enemy in frame 1000 while you see it in frame 1006 due to render latency, well, guess what, I have an advantage because I reacted 33 ms * 6 = 198 ms sooner than you did. The same applies to all the rest of the gamer nonsense... hyper-polling keyboards and mice, people intentionally using TN displays... It's not about human reaction time, it's about minimizing

      • I've been working in video games for over a decade, and 0 vs 0.05 vs 0.1 vs 0.15 vs 0.2 seconds are miles apart in terms of feel, once you've acquired a sense for it. Try plugging in an old Guitar Hero set, playing with 0.2 seconds of audio/visual lag, and comparing it to something properly calibrated; the difference is night and day.

        There's no such thing as a singular 'reaction time', the context always matters. Braking your car in an emergency famously takes over a second, because we're talking

      • It has nothing to do with reaction speed, and everything to do with collision algorithms. A target can only be hit if it's visible, and the higher frame-rate allows more frames where the target can be hit.

        More detail: take the case of a projectile traveling between the player's weapon and a target that appears for n milliseconds, and a frame-rate r in frames/sec. The number of frames that target is visible, and hence can be hit by the projectile, is proportional to n * r. With a lower frame-rate, the targ
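
        The parent's proportionality, spelled out as a quick sketch (the 200 ms visibility window is just an example value, not a measurement):

        # A target visible for n milliseconds at r frames per second shows up in
        # roughly n/1000 * r frames, so a higher frame rate gives the hit-detection
        # code more frames in which the target can actually be hit.
        def frames_visible(n_ms: float, r_fps: float) -> float:
            return n_ms / 1000.0 * r_fps

        for fps in (30, 60, 144, 240):
            print(f"target visible 200 ms at {fps:3d} fps -> ~{frames_visible(200, fps):.0f} frames")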

    • by im_thatoneguy ( 819432 ) on Monday September 25, 2023 @11:56PM (#63877371)

      For gamers, this is strictly an anti-feature. It makes gaming strictly worse when enabled. The reason is that the need for high FPS is primarily about the responsiveness of the game, i.e. "I move my mouse, how fast does the game react to it".

      This isn't necessarily true. For one thing DLSS3 frame generation is entirely optional. For another, synthesized frames can be created by warping existing frames based on the depth buffer and the input. This was popularized for VR headsets to minimize nausea to hugely positive effect. Most critical responsiveness is in aim, not motion. So parallax isn't a concern when you project a wider FOV and then re-render the depth buffer with a slight pan or tilt.

      For normal rendering, higher FPS is strictly better for responsiveness unless the game implements some really warped and fucked up input/output system (which no modern games really do).

      DLSS isn't a "normal" input/output system for exactly that reason. And the DLSS 3 uprezzing allows you to render more "true" frames which lets you get even more benefit. Most of DLSS is intended to allow you to run at a lower resolution, get higher framerate and therefore by definition lower latency.
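
      For reference, a rough sketch of the kind of depth-buffer reprojection ("timewarp"-style frame synthesis) described above, with made-up camera intrinsics and a random test image; it's a simplified forward warp for a small pan, not NVIDIA's or any headset vendor's actual implementation:

      import numpy as np

      def reproject(color, depth, yaw_rad, fx=500.0, fy=500.0):
          # Warp an already-rendered frame to a slightly panned camera using its depth buffer.
          h, w, _ = color.shape
          cx, cy = w / 2.0, h / 2.0
          vi, ui = np.mgrid[0:h, 0:w]
          z = depth
          x = (ui - cx) * z / fx                   # unproject each pixel into camera space
          y = (vi - cy) * z / fy
          c, s = np.cos(yaw_rad), np.sin(yaw_rad)  # small rotation about the vertical axis
          xr, zr = c * x + s * z, -s * x + c * z
          u2 = np.clip(np.round(fx * xr / zr + cx).astype(int), 0, w - 1)
          v2 = np.clip(np.round(fy * y / zr + cy).astype(int), 0, h - 1)
          out = np.zeros_like(color)
          out[v2, u2] = color                      # forward-splat; disocclusions stay black
          return out

      frame = np.random.rand(120, 160, 3)          # stand-ins for a rendered frame and its depth buffer
      depth = np.full((120, 160), 2.0)
      warped = reproject(frame, depth, yaw_rad=np.deg2rad(1.0))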

    • For gamers, this is strictly an anti-feature.

      I'm not a professional gamer, but I can say that if you're calling this an anti-feature then you're either sponsored and being paid to play in a competitive FPS league, or more likely you've simply never used it.

      The increased latency of all these features is virtually imperceptible in an FPS game. Meanwhile these "anti-features" have made the visual effects far smoother and helped maintain higher frame rates, especially on 4K screens. You think gamers will be better served with lower framerates? If so they can always buy AMD. They aren't doing that though, why do you think that is?

      • The increased latency of all these features is virtually imperceptible in an FPS game.

        Depends on what you're going to/from. The benefit of higher frame rates in specific competitive FPS games has been tested and shown. In real frames there's a strong benefit in going from 60Hz to 120Hz, and a drastically reduced one going from 120Hz to 240Hz. Returns diminish damn quickly.

        If you're using inter-frame generation to go from 30hz to 60hz? yeah the latency is there and strong. 120hz to 240hz? Completely agree, not a real issue.

        If so they can always buy AMD. They aren't doing that though, why do you think that is?

        Well... they do. What do you think the PS5 and Xbox Series X run, nvidia? No.

        • by jsonn ( 792303 ) on Tuesday September 26, 2023 @04:19AM (#63877645)

          The increased latency of all these features is virtually imperceptible in an FPS game.

          Depends on what you're going to/from. The benefit of higher frame rates in specific competitive FPS games has been tested and shown. In real frames there's a strong benefit in going from 60Hz to 120Hz, and a drastically reduced one going from 120Hz to 240Hz. Returns diminish damn quickly.

          If you're using inter-frame generation to go from 30hz to 60hz? yeah the latency is there and strong. 120hz to 240hz? Completely agree, not a real issue.

          It should be noted that the latency of interpolating 30Hz to 60Hz is actually *higher* than just using 30Hz.
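
          One simplified way to see why, assuming the interpolator has to hold each real frame back by roughly half a real-frame interval before it can be displayed (actual pipelines differ, and frame-generation time comes on top):

          # Added display latency from interpolation, under the simplifying assumption
          # that each real frame is held back ~half a real-frame interval so the
          # in-between frame can be shown first.
          def added_latency_ms(real_fps):
              return 0.5 * (1000.0 / real_fps)

          for real_fps in (30, 60, 120):
              print(f"interpolation on top of {real_fps:3d} real fps adds ~{added_latency_ms(real_fps):4.1f} ms")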

        • Going to 240Hz from 120Hz is mostly good for reducing motion blur from eye movements, and that will obviously be even more true for going from 240Hz to 480Hz. It's still an improvement, but next-frame latency at 60Hz already feels pretty snappy for a video game; to me, improving motion clarity becomes the bigger deal pretty quickly.
        • The perfect style of game where 30fps doesn't even matter.. coincidence no?

          Why wouldn't it matter? Why would you think that a visually jerky experience, with character motions not smooth, display scrolling not smooth, and animations of the environment not smooth, suddenly "doesn't matter" simply because your view is not first person with a gun visible in the frame?

          Much of your post was at least on point, but with this last line you've just lost all credibility by claiming that frame rate doesn't matter for one game. It's like you're getting latency and frame rate confused (the latter is

            Why wouldn't it matter? Why would you think that a visually jerky experience, with character motions not smooth, display scrolling not smooth, and animations of the environment not smooth, suddenly "doesn't matter" simply because your view is not first person with a gun visible in the frame?

            Low framerates in turn based games do not affect the gameplay in the ways they do in first person shooters and other real-time games. I did not specifically say this because I thought it self evident.

            One frame per second will not greatly affect your chances of winning or losing chess.

            It changes it from game-breaking and actually hurting the quality of gaming to not affecting gameplay at all.

      • by fazig ( 2909523 )
        AMD is also introducing frame generation with FSR 3, which is supposed to be launched very soon.
        But of course we don't talk about that... while "Team Red" has already been salivating over it being better than DLSS 3.

        I'm just waiting for the "at least it's not AI" and "at least it works on all graphics cards" knee-jerk that someone who doesn't know their arse from a hole in the ground would use.
        • by Luckyo ( 1726890 )

          It's just as much of an anti-feature if implemented in a similar fashion.

          • by fazig ( 2909523 )
            Can you be a bit more specific about that?
            • by Luckyo ( 1726890 )

              Do you want me to repost the entire original post, just with "amd" replacing "nvidia"?

              • by fazig ( 2909523 )
                I want something that can be used as a checklist, because from my end it already looks like it's implemented in "a similar fashion": AMD also pairs it with some low-latency stuff, because the interpolation necessarily increases input lag.
                Also because I know people who like to pretend to be critical and objective: to some of them literal apples and oranges are of "a similar fashion", and then to the same people, if you painted a green apple red, that makes the red apple totally different.
          • It's just as much of an anti-feature if implemented in a similar fashion.

            Only for your idiotic view of gaming. You are more than welcome to play with it turned off, stop gaslighting those of us who actually use these real world features to great effect to improve our experience.

            Be less of a gatekeeping arsehole.

      • by Luckyo ( 1726890 )

        >but I can say that if you're calling this an anti-feature then you're either sponsored

        Here's a video sponsored by nvidia demonstrating my claim to be true:

        https://www.youtube.com/watch?... [youtube.com]

        • >but I can say that if you're calling this an anti-feature then you're either sponsored

          Here's a video sponsored by nvidia demonstrating my claim to be true:

          https://www.youtube.com/watch?... [youtube.com]

          Your claim isn't true or untrue. It's irrelevant, and linking a video that talks about being a better gamer with higher fps just shows that you didn't understand (and presumably didn't bother to read) my post.

          Stop being a gatekeeping arsehat. Not everyone plays like you; that doesn't make features that benefit us "anti-features" simply because they don't suit your retarded and narrow idea of what "gaming" is.

          • by Luckyo ( 1726890 )

            The topic:

            >The above timeline raises questions about where Nvidia might go next with future versions of DLSS. And of course, "Deep Learning Super Sampling" no longer really applies, as the last two additions have targeted other aspects of rendering. Digital Foundry asked that question to the group: "Where do you see DLSS in the future? What other problem areas could machine learning tackle in a good way?" Bryan Catanzaro immediately brought up the topic of full neural rendering.

            How it's going...

    • 1 kHz is where we'll begin to see eye-movement-related visual artifacts go away, so it's not completely unjustifiable in all cases, but fake frames at the expense of latency are bad until you hit the threshold of "good enough" latency with the fake frames on. Consider this, though: many games can't be that latency-critical; how else could you justify 100Hz USB polling still being the standard for USB gamepads in Windows...

      That all said though I agree, in the future with 1000Hz VR headsets it'll be a good idea bu
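
      The polling arithmetic behind that aside, for what it's worth (assuming inputs arrive uniformly at random between polls):

      # At 100 Hz the pad is sampled every 10 ms, so an input waits 0-10 ms
      # (about 5 ms on average) before the game even hears about it.
      poll_hz = 100
      interval_ms = 1000.0 / poll_hz
      print(f"{poll_hz} Hz polling: interval {interval_ms:.0f} ms, average added input delay ~{interval_ms / 2:.0f} ms")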

    • It's only an anti-feature if you're not buying Nvidia, and guess what, more than enough people buy nvidia.
      • by Luckyo ( 1726890 )

        Indeed. Most of them are AI compute people. They actually see AI features as what they are, AI features. They aren't interested in the gaming side, because that's not what they use these cards for.

        That was my point in fact.

    • Almost everything about this post is complete baloney.

      Yes, DLSS FG doesn't improve responsiveness, but it does improve game fluidity a ton and makes playing the game a much more pleasurable experience. Its only caveat is that your base frame rate must be above roughly 50fps to enjoy it fully.

      And DLSS FG is only one of three things that DLSS encompasses, the other two being image upscaling and ray reconstruction, which have shown a ton of potential and generally greatly improve image quality.

      You may ha

      • Comment removed based on user account deletion
        • The biggest shortcoming of DLSS is that it's 100% proprietary. That's a valid concern and that's the only reason not to like/accept it. I hate vendor lock-ins 'cause they inevitably lead to higher prices and stifle competition.

          NVIDIA could have at least released the DLSS APIs into the public domain and/or merged them with DirectX, so that they could be reimplemented by Intel and AMD however they see fit. I'm afraid that's never going to happen.

    • For those that don't know, DLSS3 is a mode where the card takes two frames and tries to guess what the frame(s) that would go between

      Nope. That's DLSS Frame Generation, which is just a part of DLSS3. DLSS3 also includes advances to DLSS Super Resolution (upscaling) and NVIDIA Reflex (latency reduction) tech.

      enabling DLSS3 will always make the game's responsiveness objectively worse.

      True, but from what I've seen and read, because all games with framegen must include Reflex, the increase in latency is only around 10ms when using framegen. This is less than a full frame at 60Hz. It may well be that nobody who is playing twitch-shooters competitively is using framegen. That's fine, because generally the engines for t
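
      Spelling that comparison out (the ~10 ms figure is the parent's claim, not a measurement):

      # A full frame at 60 Hz lasts 1000 / 60 = ~16.7 ms, so a ~10 ms latency
      # increase is indeed less than one 60 Hz frame.
      frame_ms_60hz = 1000.0 / 60.0
      print(f"one frame at 60 Hz = {frame_ms_60hz:.1f} ms; claimed framegen penalty ~10 ms")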

      • Good VR games have already minimized everything else, Reflex doesn't do shit. To get down to 10 ms, you need native 100 Hz.

        • PS. I can promise you, Apple doesn't use interpolation on Vision Pro and never will ... this road leads to obsolescence.

      • by Luckyo ( 1726890 )

        The subject of the topic is AI frame generation, not AI upscaling. You're not the first to try this sleight of hand, relying on people not knowing the difference between DLSS and DLSS2 (AI upscaling, which reduces input latency) and DLSS3 (AI image generation, which increases input latency).

        So, is the topic AI upscaling or AI image generation?

        Drumbeat...

        >The above timeline raises questions about where Nvidia might go next with future versions of DLSS. And of course, "Deep Learning Super Sampling" no longer really applies,

    • Though I think most of these AI-enhanced 'features' are indeed marketing fluff, I have to disagree that they impact game responsiveness.

      Yes, DLSS3 does fudge actual frame rates by faking/upscaling in-between frames, but the vast majority of frames are about eye candy anyway, not responsiveness or input in any way. Professional games (and game engines) have separate display and input systems, and often distinct physics frames as well, which update at wildly different rates.

      For example an RTS (so generally n

      • by Luckyo ( 1726890 )

        Your disagreement is irrelevant. It's factually irrefutable that DLSS3 AI frame generation increases input latency.

        The rest is a pointless rhetorical wank on irrelevant topics like physics.

  • by rsilvergun ( 571051 ) on Monday September 25, 2023 @09:40PM (#63877161)
    So I'd like to say this isn't coming anytime soon, but I have no doubt they'd like it to. They've kind of hit the limit of what they can do with the amount of bandwidth and cores they can throw at the problem. So what they want to do is run games at lower resolutions internally so they can upscale to high resolutions and sell you a fancy video card that can do those high resolutions but is cheap to make.

    That would be fine if not for all the lag it tends to introduce. It's especially noticeable on fighting games making it basically useless for those. But I guess if you're into competitive games you'll notice it either way.
    • Not to mention power: a 12V rail can only do so much. To wring out more performance they'd probably have to go with external PSUs, or somehow push through a new PSU standard with 24V rails or similar.

      • There's already work being done in that direction. But there's also work being done in completely different directions from what Nvidia and AMD both do, something closer to what ARM CPUs do. I'm not so sure it'll work out, because from what little I understand there are really specific reasons why Nvidia and AMD chose the designs they did for video cards. But it's one of those things where, if it does work out, Nvidia is in deep trouble, because it would be a complete paradigm shift on the level of when pixel
      • Not to mention power: a 12V rail can only do so much. To wring out more performance they'd probably have to go with external PSUs, or somehow push through a new PSU standard with 24V rails or similar.

        If one 12VHPWR cable isn't enough add two for twice the fun.

    • It's because of where it sits in the rendering pipeline.

      NVIDIA has produced papers on how to quickly render extrapolated frames, but that would have to be far more invasive inside the game engine (and would mean the game engine going full raytracing, no longer hybrid), not just a bolt-on post-processor that takes TAA inputs. Interpolation is easy to bolt on, so interpolation is what they push commercially. Non-VR/twitch gamers eat it up and are getting used to it, 24Hz-cinema style.

      Only VR stands in t

      • frame rate is much less important to begin with. And there's a *lot* of twitch games. I know fighting games are pretty niche, but stuff like Fortnite and Counter Strike and Overwatch come to mind. Basically all the "esports" games where people like pushing 120 fps+.
  • This sounds more like "Nvidia Executive hints that the most cost-effective rendering option is about to become your brains in a jar."

    Tinfoil brain-protecting hats about to gain another use case!

  • Because the article doesn't explain it in any meaningful way -- can anyone care to give the Cliff Notes version? :)

    • by clambake ( 37702 )

      I may be entirely wrong, but from what I'm gathering, instead of carefully placing lighting and textures all over your scene, you just hand it the basic geometry and label it in some way, like "this is an X" (this is a trash can, this is a cyber car, this is a rainswept street, this is a neon sign, etc.), and the neural network paints it appropriately on the fly.

      • by Morpeth ( 577066 )

        Ah, interesting, thanks. I'll be interested to read more about performance compared to current lighting/rendering tech. I've done some work with Unreal, and lighting is just a resource hog when baking (as beautiful as it is), though maybe the whole idea is that this doesn't need to do that and is more dynamic, perhaps?

    • by phantomfive ( 622387 ) on Tuesday September 26, 2023 @01:07AM (#63877471) Journal
      1) Interpolating between pixels. Only 1 out of every 8 pixels (or whatever) is rendered completely, and the "AI" interpolates the rest.
      2) Interpolating between frames. The "AI" can see what was in the previous frame and see what should be drawn in the current frame.

      Combining these two techniques works surprisingly well. My guess is that a different pixel out of each 9-pixel block is chosen to be rendered in each frame, and thus at high frame rates it's not really noticeable.
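
      The arithmetic behind the summary's "only 1/8 (12.5%) of the pixels" figure, assuming 4x upscaling plus one generated frame for every rendered one:

      # 4x upscaling renders 1/4 of the output pixels, and frame generation means
      # only every other displayed frame is rendered at all.
      upscale_fraction = 1 / 4          # e.g. 1080p internal for a 4K output
      rendered_frame_fraction = 1 / 2   # half of the displayed frames are synthesized
      print(f"fraction of displayed pixels fully rendered: {upscale_fraction * rendered_frame_fraction:.3f}")
      # -> 0.125, i.e. 12.5%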
    • by fazig ( 2909523 )
      The article is one of those examples of bad journalism where the topic is beyond the author's understanding, but he still lets his imagination run wild.
      It's built on a vague statement made by a higher-up NVIDIA employee in a "roundtable discussion" where video game industry insiders talked about the future of AI in the industry.

      So that's a fairly flimsy basis to make some grand predictions on. At this point I'd call it little more than conjecture fuelled by the current AI hype which made the
    • The guy is clearly drowning in marketing speak, but if I were to hazard a guess, he's talking about a variation on NeRF and mixing it up with AI-generated content production, even though the two are orthogonal.

      NeRF is most commonly used to render light fields created from images, but it can also be used as a method of continuous level of detail and volumetric prefiltering. Facebook-sponsored research produced a paper called Deep Appearance Prefiltering, which is probably closest to what he is talking about. It won't

    • by ceoyoyo ( 59147 )

      A neural network is sort of a piecewise linear* approximation to a complicated function. The mapping between the polygons, textures, lights and camera in your 3d scene, and the 2d grid of pixels painted on the screen is a complicated function.

      Both rasterization and raytracing are approximate methods for computing that function. They optimize in ways that we've discovered usefully reduce the computing while having acceptably small impact on the quality. In theory, you could train a neural network to approxim
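
      As a toy illustration of "train a network to approximate the rendering function", here's a tiny MLP fitted to the Lambertian shading of a unit sphere, a deliberately trivial stand-in for the real, vastly more complex mapping from scene to pixels:

      import numpy as np

      rng = np.random.default_rng(0)

      def render(uv, light):                         # the "ground truth" renderer being imitated
          r2 = np.sum(uv**2, axis=1)
          nz = np.sqrt(np.clip(1.0 - r2, 0.0, None))
          normal = np.column_stack([uv, nz])
          hit = (r2 < 1.0).astype(float)
          return hit * np.clip(np.sum(normal * light, axis=1), 0.0, None)

      uv = rng.uniform(-1, 1, size=(4096, 2))        # random pixels and light directions as training data
      light = rng.normal(size=(4096, 3))
      light /= np.linalg.norm(light, axis=1, keepdims=True)
      X, y = np.hstack([uv, light]), render(uv, light)

      W1, b1 = rng.normal(scale=0.5, size=(5, 64)), np.zeros(64)
      W2, b2 = rng.normal(scale=0.5, size=(64, 1)), np.zeros(1)
      lr = 0.05
      for step in range(2000):                       # plain full-batch gradient descent on squared error
          h = np.maximum(0.0, X @ W1 + b1)
          pred = (h @ W2 + b2).ravel()
          err = pred - y
          gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
          dh = (err[:, None] * W2.T) * (h > 0)
          gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
          W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
      print("final mean squared error:", float(np.mean(err**2)))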

  • Think of all the ray tracers that will be out of work if this happens.

  • [maybeajoke?]Oh, hell. Good luck finding bugs in rendering when your neural network is hallucinating! How would you even know what was a bug? Can I watch the debugging process? Sec, let me just get my popcorn...[/maybeajoke?]
  • Using "full" AI to paint a picture is guessing. Using Ray Harryhausen is an algorithm. Sorry, I meant ray tracing. It may be cheaper, but it may be wrong.

  • Artificial noise, higher power consumption, lower render rates, just to say "Yeah, we can also do THAT with AI". They've really run out of ideas for what to come up with next.
  • Most technologies, ideas, whatever you want to call them, created by nvidia are proprietary and vendor-specific. AMD, on the other hand, offers them freely. No, the argument "but G-Sync is objectively better" does not fly.
    I do not follow either company's advancements very closely these days, so, fingers crossed, I'm not downright wrong.

  • You could absorb the entire rendering engine including animation into the drivers and call it DLSS, but it's kinda silly ... how is it Deep Learning Supersampling any more? DLSS is a TAA replacement, nothing more.

    The DLSS brand is kinda strong ... but so is RTX. Call it RTXe or something (RTX engine). That at least makes some sense.

  • nVidia executives at this point are always talking more to investors than to the industry or customers. They are all keenly aware that investors no longer care about the "toys" they sell and are far more interested in nVidia selling GPUs at $70,000 as "AI accelerators" and making itself a dependency for "AI enablement" in everything it can manage.

    So this answer is about saying AI a few more times to keep the stock up, but without even a suggestion of a plausible improvement as a result of t

  • AI may supply the goods. But we need more light bounces.

    Today's games can still look fairly ugly because light isn't bounced around often enough. When it is, colours bleed into each other in a much more convincing way. Compare: https://www.skytopia.com/proje... [skytopia.com]

    Also: https://imgur.com/a/fJAG9bc [imgur.com]
  • Am I the only one who thinks DLSS is dumb? I never use it and don't know why I would. It literally makes the image quality worse. Why would I want that?

  • 1. This won't make games cheaper.
    2. I cannot wait for people to all play the same game yet have it be totally inconsistent between each of the players.
