Nvidia Hints At Replacing Rasterization and Ray Tracing With Full Neural Rendering (tomshardware.com) 131
Mark Tyson writes via Tom's Hardware: A future version of [Deep Learning Super Sampling (DLSS) technology] is likely to include full neural rendering, hinted Bryan Catanzaro, a Nvidia VP of Applied Deep Learning Research. In a round table discussion organized by Digital Foundry (video), various video game industry experts talked about the future of AI in the business. During the discussion, Nvidia's Catanzaro raised a few eyebrows with his openness to predict some key features of a hypothetical "DLSS 10." [...]
We've seen significant developments in Nvidia's DLSS technology over the years. First launched with the RTX 20-series GPUs, many wondered about the true value of technologies like the Tensor cores being included in gaming GPUs. The first ray tracing games, and the first version of DLSS, were of questionable merit. However, DLSS 2.X improved the tech and made it more useful, leading to it being more widely utilized -- and copied, first via FSR2 and later with XeSS. DLSS 3 debuted with the RTX 40-series graphics cards, adding Frame Generation technology. With 4x upscaling and frame generation, neural rendering potentially allows a game to only fully render 1/8 (12.5%) of the pixels. Most recently, DLSS 3.5 offered improved denoising algorithms for ray tracing games with the introduction of Ray Reconstruction technology.
The above timeline raises questions about where Nvidia might go next with future versions of DLSS. And of course, "Deep Learning Super Sampling" no longer really applies, as the last two additions have targeted other aspects of rendering. Digital Foundry asked that question to the group: "Where do you see DLSS in the future? What other problem areas could machine learning tackle in a good way?" Bryan Catanzaro immediately brought up the topic of full neural rendering. This idea isn't quite as far out as it may seem. Catanzaro reminded the panel that, at the NeurIPS conference in 2018, Nvidia researchers showed an open-world demo of a world being rendered in real-time using a neural network. During that demo the UE4 game engine provided data about what objects were in a scene, where they were, and so on, and the neural rendering provided all the on-screen graphics. "DLSS 10 (in the far far future) is going to be a completely neural rendering system," Catanzaro added. The result will be "more immersive and more beautiful" games than most can imagine today.
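For readers wondering where the summary's one-eighth figure comes from, here is a minimal back-of-the-envelope sketch of the arithmetic. It simply assumes 4x upscaling plus every-other-frame generation as described above; nothing here reflects actual DLSS internals.

```python
# Rough arithmetic behind the "only 1/8 (12.5%) of pixels fully rendered" claim.
upscale_factor = 4    # 4x upscaling: one rendered pixel per four displayed pixels
frame_gen_ratio = 2   # frame generation: every second displayed frame is synthesized

rendered_fraction = 1 / (upscale_factor * frame_gen_ratio)
print(f"Fraction of displayed pixels fully rendered: {rendered_fraction:.3f}")  # 0.125
```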
Sounds like marketing hype (Score:5, Insightful)
Re: (Score:2)
It's likely just marketing hype since current hardware can't yet do neural rendering in real-time.
The idea is that you give a neural engine a description of the scene along with as many parameters as needed to constrain it to what you want. Then you tell it the changes from that scene to the next, like "Actor X turns 45 degrees to the right and grins." Then the neural engine seamlessly creates the frames on the fly.
The more details you give it, the more the result will match your intentions, but whatever yo
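A purely hypothetical sketch of what such an engine-to-neural-renderer handoff could look like. The class, field, and function names below are invented for illustration and do not correspond to any real Nvidia API.

```python
from dataclasses import dataclass, field

@dataclass
class SceneState:
    """Hypothetical structured scene description a game engine might hand to a neural renderer."""
    actors: dict = field(default_factory=dict)       # id -> {"heading_deg": ..., "expression": ...}
    camera: dict = field(default_factory=dict)        # position, orientation, FOV
    constraints: list = field(default_factory=list)   # style / consistency hints

def apply_delta(state: SceneState, delta: dict) -> SceneState:
    """Apply a frame-to-frame change like 'Actor X turns 45 degrees to the right and grins'."""
    actor = state.actors[delta["actor_id"]]
    actor["heading_deg"] = (actor["heading_deg"] + delta.get("turn_deg", 0.0)) % 360
    actor["expression"] = delta.get("expression", actor["expression"])
    return state

state = SceneState(actors={"X": {"heading_deg": 90.0, "expression": "neutral"}})
state = apply_delta(state, {"actor_id": "X", "turn_deg": 45.0, "expression": "grin"})
# A neural renderer (not implemented here) would then map the state to pixels:
# frame = neural_renderer(state)
print(state.actors["X"])
```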
Re: (Score:3)
The problem is this is all strongly barking up the wrong tree.
We've hit, for all practical purposes, the maximum power and thermal limit for a GPU in a desktop. Nobody is going to buy a high-end GPU if it requires as much power as their stove or clothes dryer and has to be vented to the outside because it throws off the thermal equilibrium of their home (thus requiring additional A/C).
Like just to run my RTX 3090, one GPU, alone, for 4 hours, raises the temperature of my bachelor apartment enough that I don't need heat d
Re:Sounds like marketing hype (Score:4, Interesting)
That 3090 was the most power-hungry GPU in existence when it was released: 350 W. There are only 5 cards available today that (nominally) use more power.
Anyway, you know you can always limit the power consumption of your GPU, either directly or indirectly, yes? It doesn't *need* to go full tilt all the time.
Re: (Score:2)
I think the issue here is that GPU makers are trying to build chips that are all things to all people. ATI/AMD abandoned their TeraScale architecture because it was best suited to tasks that neatly fit the single-instruction, multiple-data model -- which, wait for it, rendering frames for video games mostly does.
Those chips were also very power hungry and had some efficiency problems when you could not quite find a way to pipeline operations to keep every compute unit engaged, but that was/is probably solvable with ways
Re: (Score:2)
Re: (Score:2)
They seem to be talking about something lower-level than that. You give it a polygon mesh and it renders it, but instead of using shaders it has a neural net that can reproduce the texture and lighting of the material. It should be able to cope well with things that traditional rasterizing and raytracing don't, like materials looking weird close up or far away.
I'm sure it's a lot of hype at this point, but they are probably right that at some point the fundamental nature of rendering is going to move away from rasterizatio
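A minimal sketch of the idea of replacing a material shader with a small neural network: a tiny MLP maps shading inputs (UV coordinates, view and light directions) to an RGB value. The network here is randomly initialized and purely illustrative; it assumes nothing about Nvidia's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny randomly-initialized MLP standing in for a learned "neural material".
# Inputs: uv (2) + view direction (3) + light direction (3) -> RGB (3).
W1 = rng.normal(size=(8, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 3)); b2 = np.zeros(3)

def neural_material(uv, view_dir, light_dir):
    x = np.concatenate([uv, view_dir, light_dir])
    h = np.maximum(x @ W1 + b1, 0.0)            # ReLU hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid -> RGB in [0, 1]

rgb = neural_material(np.array([0.3, 0.7]),
                      np.array([0.0, 0.0, 1.0]),
                      np.array([0.577, 0.577, 0.577]))
print(rgb)
```

A trained version of such a network would be fit to measured or path-traced material responses, which is how it could avoid the close-up/far-away artifacts of hand-authored shaders.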
Re:Sounds like marketing hype (Score:4, Funny)
Re: (Score:2)
I call BS on this since the vast majority of games run on AMD hardware in consoles.
This is going (Score:3)
This is going to make rendering bugs SUPER interesting.
Re: (Score:2)
Re: (Score:2)
The neural network can render bugs as part of the natural scene as actual bugs, wings and all. You won't know the difference between bug and bug.
More anti-features from nvidia? (Score:5, Insightful)
"Current thing" that nvidia is desperately trying (and utterly failing) to sell their cards on is AI rendering that is DLSS3. For those that don't know, DLSS3 is a mode where card takes two frames and tries to guess what frame(s) that would go between these two would look like and present that so that nvidia can pretend that card's output frame rate is many times in their marketing.
For gamers, this is a strictly anti-feature. It makes gaming strictly worse when enabled. The reason is that the need for high FPS is primarily about responsiveness of the game, i.e. "I move my mouse, how fast does the game react to it". For normal rendering, higher FPS is strictly better for responsiveness unless the game implements some really warped and fucked up input/output system (which no modern games really do). Essentially, inputs are typically processed each rendered frame, and the more frames the card can render, the less time is spent rendering each frame = less time taken to process each player's input and deliver output based on it. So your game's responsiveness to your inputs scales directly with frame rate. High frame rate is a strict upgrade to responsiveness.
This is why it actually helps to have really high frame rates that are barely observable for humans in shooters like counter-strike, because correctly tracking target with your mouse improves for most people with more responsive game.
And DLSS3 does the opposite. It gives you an illusion of higher frame rate, while the game's actual frame rate not only stays low, but suffers from increased delay, as actual frames are delayed in the pipeline to ensure that the fake inserted frames aren't slightly out of sync because they come a bit too late or too early. That means that enabling DLSS3 will always make the game's responsiveness objectively worse.
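A rough illustrative sketch of why interpolation-based frame generation cannot reduce input latency: the interpolated frame between frames N and N+1 can only be shown after N+1 has already been rendered, so frames must be held back. The numbers below are arbitrary and the "one extra frame" model is a simplification.

```python
# Illustrative latency arithmetic for interpolation-based frame generation.
render_fps = 60                      # frames the GPU actually renders per second
frame_time_ms = 1000 / render_fps    # ~16.7 ms per rendered frame

# Without frame generation: input sampled for frame N is visible roughly one frame later.
latency_native = frame_time_ms

# With interpolation: frame N is held until N+1 exists, so the displayed stream
# runs at 2x FPS but input-to-photon latency goes up, not down.
latency_framegen = frame_time_ms * 2  # simplified: one extra frame of buffering

print(f"native  : {render_fps:3d} fps shown, ~{latency_native:.1f} ms latency")
print(f"framegen: {render_fps*2:3d} fps shown, ~{latency_framegen:.1f} ms latency (simplified)")
```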
Ergo, it's an anti-feature for gaming. And this is what nvidia is trying to sell cards to gamers on. At this point, it's fairly obvious that nvidia hasn't been a gaming company for several years; it has fully transformed into an AI accelerator company, which looks at the gaming market not as an actual market segment worth developing for any more, but as a byproduct onto which AI-based solutions can be jury-rigged to save costs.
Re: More anti-features from nvidia? (Score:2)
Yep. It's essentially a clever "sleight of hand". It also sacrifices image quality, since it's upscaling a lower resolution image for the sake of higher (but by all other metrics inferior) frame rates. The AI decides which parts of the image it thinks will actually matter, and makes its best effort to hallucinate what the image should look like, giving special attention to the parts it thinks we pay attention to.
Re: More anti-features from nvidia? (Score:3, Insightful)
Re: (Score:2)
The fastest reaction time of humans is between 150 and 200 milliseconds (with training)
Note that this reaction speed is after the change is perceived. The change can be noticed much, much quicker.
Re: (Score:2)
Re: (Score:2)
Practical testing has shown there is an advantage to higher frame rates in gaming, which is not surprising. While it may take your body 150ms to react, getting started with that reaction as early as possible is a benefit. Your eyes have to see something happen, and your brain has to recognize it and formulate a response. The more information it has, the earlier it has it, the better. What's more, the brain is tuned to look for changes in your field of view, by millions of years of evolution.
Re: (Score:2)
Re: More anti-features from nvidia? (Score:4, Interesting)
This is the applied variant of widely disseminated pseudo-science behind the "cinematic 24 frames per second experience" claim. No shame in falling victim to this, the amount of money spent to market this about a decade ago was insane.
Reality however is completely opposite, and has been tested several times and proven objectively false. Perhaps the most publicly visible test has been ironically sponsored by nvidia itself, back in the day before DLSS3, when it was selling people on actual high frame rate gaming.
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Not to disagree entirely with you on the anti-feature claim, but the argument is fairly weak. The fastest reaction time of humans is between 150 and 200 milliseconds (with training). At 30 frames per second, a crappy standard for today's GPU performance, each frame is rendered in ~33 milliseconds, meaning you could pack a bit more than 6 frames into your reaction time window. At 144 FPS, the ideal refresh rate for Counter-Strike (which a compulsory Google search reports as "Criminally smooth. For hardcore and professional players"), you have ~7 ms/frame, which means your GPU could render 25 frames by the time you actually perceived a movement. Is it a cheap trick? Absolutely yes.
Reaction time != adding latency and thinking it doesn't matter. Reaction time is not perception time.
If I'm playing some hyper-competitive FPS and in frame 1000 I see an enemy and in frame 1006 you see it due to render latency, well, guess what: I have an advantage, because I reacted 33 ms * 6 = 198 ms sooner than you did. The same applies to all the rest of the gamer nonsense... hyper-polling keyboards and mice, people intentionally using TN displays... It's not about human reaction time, it's about minimizing
Re: (Score:2)
I've been working in video games for over a decade, and 0 vs 0.05 vs 0.1 vs 0.15 vs 0.2 seconds are miles apart in terms of feel, once you've acquired a sense for it. Try plugging in an old Guitar Hero set, then play with 0.2 seconds of audio/visual lag, and compare it to something properly calibrated; the difference is night and day.
There's no such thing as a singular 'reaction time', the context always matters. Braking your car in an emergency famously takes over a second, because we're talking
Re: (Score:2)
It has nothing to do with reaction speed, and everything to do with collision algorithms. A target can only be hit if it's visible, and the higher frame-rate allows more frames where the target can be hit.
More detail: take the case of a projectile traveling between the player's weapon and a target that appears for n milliseconds, and a frame-rate r in frames/sec. The number of frames that target is visible, and hence can be hit by the projectile, is proportional to n * r. With a lower frame-rate, the targ
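The parent's point can be written down directly: the number of displayed frames in which a briefly-visible target appears on screen scales with frame rate. A quick illustrative calculation (the 50 ms visibility window is an arbitrary example, and phase/rounding effects are ignored):

```python
# Frames during which a target visible for `visible_ms` milliseconds appears on screen.
visible_ms = 50   # target pops out for 50 ms

for fps in (30, 60, 144, 240):
    frames_visible = visible_ms / 1000 * fps
    print(f"{fps:3d} fps -> target on screen for ~{frames_visible:.1f} frames")
```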
Re:More anti-features from nvidia? (Score:4, Interesting)
For gamers, this is a strictly anti-feature. It makes gaming strictly worse when enabled. The reason is that the need for high FPS is primarily about responsiveness of the game, i.e. "I move my mouse, how fast does the game react to it".
This isn't necessarily true. For one thing DLSS3 frame generation is entirely optional. For another, synthesized frames can be created by warping existing frames based on the depth buffer and the input. This was popularized for VR headsets to minimize nausea to hugely positive effect. Most critical responsiveness is in aim, not motion. So parallax isn't a concern when you project a wider FOV and then re-render the depth buffer with a slight pan or tilt.
For normal rendering, higher FPS is strictly better for responsiveness unless the game implements some really warped and fucked up input/output system (which no modern games really do).
DLSS isn't a "normal" input/output system for exactly that reason. And the DLSS 3 uprezzing allows you to render more "true" frames which lets you get even more benefit. Most of DLSS is intended to allow you to run at a lower resolution, get higher framerate and therefore by definition lower latency.
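A very reduced sketch of the reprojection idea described above (rotational "timewarp" as popularized for VR): re-project the previous frame's pixels for a small camera rotation instead of rendering a new frame. Real implementations also use the depth buffer and handle disocclusions; this only shows the pure-rotation case and is illustrative, not any vendor's algorithm.

```python
import numpy as np

def yaw_rotation(deg):
    """Rotation matrix for a small camera yaw (turning left/right)."""
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [ 0,         1, 0        ],
                     [-np.sin(a), 0, np.cos(a)]])

def reproject(frame, yaw_deg, fov_deg=90.0):
    """Rotationally warp `frame` (H x W x 3) as if the camera yawed by yaw_deg."""
    h, w, _ = frame.shape
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)       # focal length in pixels
    ys, xs = np.mgrid[0:h, 0:w]
    dirs = np.stack([xs - w / 2, ys - h / 2, np.full_like(xs, f, dtype=float)], axis=-1)
    new_dirs = dirs @ yaw_rotation(yaw_deg).T           # rotate each view ray
    u = (new_dirs[..., 0] / new_dirs[..., 2] * f + w / 2).astype(int)
    v = (new_dirs[..., 1] / new_dirs[..., 2] * f + h / 2).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros_like(frame)
    out[valid] = frame[v[valid], u[valid]]              # sample the previous frame
    return out

warped = reproject(np.random.rand(90, 160, 3), yaw_deg=2.0)
print(warped.shape)
```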
Re: (Score:2)
Re: (Score:2)
Aimbot is a thing. Seems like someone had a bit of a Freudian slip.
Re: (Score:3)
For gamers, this is a strictly anti-feature.
I'm not a professional gamer, but I can say that if you're calling this an anti-feature then you're either sponsored and being paid to play in a competitive FPS league, or more likely you've simply never used it.
The increased latency of all these features is virtually imperceptible in an FPS game. Meanwhile these "anti-features" have made the visual effects far smoother and helped maintain higher frame rates, especially on 4K screens. You think gamers will be better served with lower framerates? If so they
Re: (Score:2)
The increased latency of all these features is virtually imperceptible in an FPS game.
Depends on what you're going to/from. The benefits of higher frame rates in specific competitive FPS games has been tested and shown. In real frames there's a strong benefit in going from 60hz to 120hz, and a drastically reduced one going from 120hz to 240. Returns diminish damn quickly.
If you're using inter-frame generation to go from 30hz to 60hz? yeah the latency is there and strong. 120hz to 240hz? Completely agree, not a real issue.
If so they can always buy AMD. They aren't doing that though, why do you think that is?
Well... they do. What do you think the PS5 and Xbox Series X run -- nvidia? No.
Re:More anti-features from nvidia? (Score:4, Informative)
The increased latency of all these features is virtually imperceptible in an FPS game.
Depends on what you're going to/from. The benefits of higher frame rates in specific competitive FPS games has been tested and shown. In real frames there's a strong benefit in going from 60hz to 120hz, and a drastically reduced one going from 120hz to 240. Returns diminish damn quickly.
If you're using inter-frame generation to go from 30hz to 60hz? yeah the latency is there and strong. 120hz to 240hz? Completely agree, not a real issue.
It should be noted that the latency of interpolating 30Hz to 60Hz is actually *higher* than just using 30Hz.
Re: (Score:2)
Re: (Score:2)
The perfect style of game where 30fps doesn't even matter.. coincidence no?
Why wouldn't it matter? Why would you think having a visually jerky experience, with character motion not smooth, display scrolling not smooth, and animations of the environment not smooth, suddenly "doesn't matter" simply because your view is not first person with a gun visible in the frame?
Much of your post was at least on point, but with this last line you've just lost all credibility by claiming that frame rate doesn't matter for one game. It's like you're getting latency and frame rate confused (the latter is
Re: (Score:2)
Why wouldn't it matter? Why would you think having a visually jerky experience, with character motion not smooth, display scrolling not smooth, and animations of the environment not smooth, suddenly "doesn't matter" simply because your view is not first person with a gun visible in the frame?
Low framerates in turn based games do not affect the gameplay in the ways they do in first person shooters and other real-time games. I did not specifically say this because I thought it self evident.
One frame per second will not greatly affect your chances of winning or losing chess.
It changes it from game-breaking and affecting the actual quality of gaming to not affecting gameplay at all.
Re: (Score:2)
But of course we don't talk about that... while "Team Red" has already been salivating over it being better than DLSS 3.
I'm just waiting for the "at least it's not AI" and "at least it works on all graphics cards" knee-jerk that someone who doesn't know their arse from a hole in the ground would use.
Re: (Score:2)
It's just as much of an anti-feature if implemented in a similar fashion.
Re: (Score:2)
Re: (Score:2)
Do you want me to repost the entire original post, just with "amd" replacing "nvidia"?
Re: (Score:2)
Also because, as I know people who like to pretend to be critical and objective, to some of them literal apples and oranges are "a similar fashion", and then to those same people, if you painted a green apple red, that would make the red apple totally different.
Re: (Score:2)
Checklist:
1. Does it increase input latency?
-If yes, it's an anti-feature.
Re: (Score:2)
You unironically explained DLSS and DLSS2.
Re: (Score:2)
It's just as much of an anti-feature if implemented in a similar fashion.
Only for your idiotic view of gaming. You are more than welcome to play with it turned off, stop gaslighting those of us who actually use these real world features to great effect to improve our experience.
Be less of a gatekeeping arsehole.
Re: (Score:2)
Actually, nvidia sponsored view of gaming. Not mine.
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
>but I can say that if you're calling this an anti-feature then you're either sponsored
Here's a video sponsored by nvidia demonstrating my claim to be true:
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
>but I can say that if you're calling this an anti-feature then you're either sponsored
Here's a video sponsored by nvidia demonstrating my claim to be true:
https://www.youtube.com/watch?... [youtube.com]
Your claim isn't true or not true. It's irrelevant, and by linking a post that talks about being a better gamer with higher fps vs. not, you just show that you didn't understand (and presumably didn't bother to read) my post.
Stop being a gatekeeping arsehat. Not everyone plays like you, that doesn't make features that benefit us "anti-features" simply because it doesn't suit your retarded and narrow idea of what "gaming" is.
Re: (Score:2)
The topic:
>The above timeline raises questions about where Nvidia might go next with future versions of DLSS. And of course, "Deep Learning Super Sampling" no longer really applies, as the last two additions have targeted other aspects of rendering. Digital Foundry asked that question to the group: "Where do you see DLSS in the future? What other problem areas could machine learning tackle in a good way?" Bryan Catanzaro immediately brought up the topic of full neural rendering.
How it's going...
Re: (Score:2)
24 frames per second cinematic experience stans have evolved to be AI fake frame generation stans.
I look forward to seeing your final form.
Re: (Score:2)
1kHz is where we'll begin to see eye-movement related visual artifacts go away so it's not completely unjustifiable in all cases, but fake frames at the expense of latency are bad until you hit the threshold of "good enough" latency with the fake frames on. Consider this though, many games can't be that latency-critical, how else could you justify 100Hz USB polling still being standard for USB game pads in Windows...
That all said though I agree, in the future with 1000Hz VR headsets it'll be a good idea bu
Re: More anti-features from nvidia? (Score:2)
Re: (Score:2)
Indeed. Most of them are AI compute people. They actually see AI features as what they are, AI features. They aren't interested in the gaming side, because that's not what they use these cards for.
That was my point in fact.
Re: (Score:2)
Almost everything about this post is complete baloney.
Yes, DLSS FG doesn't improve responsiveness, but it does improve game fluidity a ton and makes playing the game a much more pleasurable experience. Its only caveat is that your base frame rate must be above roughly 50 fps to enjoy it fully.
And DLSS FG is only one of three things that DLSS encompasses, the other two being image upscaling and ray reconstruction, which have shown a ton of potential and generally greatly improve image quality.
You may ha
Re: (Score:2)
Re: (Score:2)
The biggest shortcoming of DLSS is that it's 100% proprietary. That's a valid concern and that's the only reason not to like/accept it. I hate vendor lock-ins 'cause they inevitably lead to higher prices and stifle competition.
NVIDIA could have at least released DLSS APIs into the public domain or/and merged them with DirectX, so that they could have been reimplemented by Intel and AMD however they see fit. I'm afraid that's not going to happen ever.
Comment removed (Score:5, Informative)
Re: (Score:2)
DLSS FG is normally coupled with DLSS image upscaling and NVIDIA Reflex. Image upscaling increases FPS to the point where your new FPS is significantly improved, and NVIDIA Reflex strips some milliseconds off of that. The net result is lower latency than native at the cost of worse image quality (an AI-generated/interpolated frame every second frame), but people have admitted these frames are near impossible to spot during normal gameplay, and they are even harder to spot if your base framerate is above 50.
Digital Fou
Re: (Score:2)
Re: (Score:2)
For those that don't know, DLSS3 is a mode where the card takes two frames and tries to guess what the frame(s) that would go between
Nope. That's DLSS Frame Generation, which is just a part of DLSS3. DLSS3 also includes advances to DLSS Super Resolution (upscaling) and NVIDIA Reflex (latency reduction) tech.
enabling DLSS3 will always make the game's responsiveness objectively worse.
True, but from what I've seen and read, because all games with framegen must include Reflex, the increase in latency is only around 10 ms when using framegen. This is less than a full frame at 60 Hz. It may well be that nobody who is playing twitch-shooters competitively is using framegen. That's fine, because generally the engines for t
Re: (Score:2)
Good VR games have already minimized everything else, Reflex doesn't do shit. To get down to 10 ms, you need native 100 Hz.
Re: (Score:2)
PS. I can promise you, Apple doesn't use interpolation on Vision Pro and never will ... this road leads to obsolescence.
Re: (Score:2)
Subject of the topic is AI frame generation, not AI upscaling. You're not the first trying this sleight of hand, relying on people not knowing the difference between DLSS and DLSS2 (AI upscaler, reduces input latency) and DLSS3 (AI image generator, increases input latency).
So is the topic AI upscaling, or AI image generation?
Drumbeat...
>The above timeline raises questions about where Nvidia might go next with future versions of DLSS. And of course, "Deep Learning Super Sampling" no longer really applies,
Re: (Score:2)
Though I think most of these AI-enhanced 'features' are indeed marketing fluff, I have to disagree that they impact game responsiveness.
Yes, DLSS3 does fudge actual frame rates by faking/upscaling in-between frames, but the vast majority of frames have to do with eye candy anyway, not responsiveness or input in any way. Professional games (and game engines) have separate display and input systems, and often distinct physics frames as well, that update at wildly different rates.
For example a RTS (so generally n
Re: (Score:2)
Your disagreement is irrelevant. It's factually irrefutable that DLSS3 AI frame generation increases input latency.
The rest is a pointless rhetorical wank on irrelevant topics like physics.
Re: (Score:2)
Tell me you know [youtube.com] nothing about ray tracing [youtube.com] (RT) without telling me you know nothing about ray tracing.
Global illumination (GI) can be done in either software or hardware. Hardware ray tracing can accelerate GI and adds photorealism. Hardware RT also unifies rendering and provides a holistic methodology instead of ad-hoc lighting techniques. Pure rasterization alone has bad (unrealistic) lighting.
Ray tracing can also be used for reflections [youtu.be]. Without ray tracing using SSR (Screen Space Reflections) for
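For context, the core of ray-traced reflections is just tracing a new ray in the mirrored direction, which is something screen-space reflections cannot do for off-screen geometry. A minimal sketch of the reflection step (generic vector math, not any particular engine's code):

```python
import numpy as np

def reflect(direction, normal):
    """Mirror an incoming ray direction about a surface normal: d - 2(d.n)n."""
    return direction - 2.0 * np.dot(direction, normal) * normal

incoming = np.array([0.0, -0.707, 0.707])   # ray hitting a floor at 45 degrees
normal   = np.array([0.0, 1.0, 0.0])        # floor normal pointing up
print(reflect(incoming, normal))            # [0, 0.707, 0.707] -> bounced upward
# A ray tracer would now trace this reflected ray into the scene, so reflections
# can include objects that are not visible on screen (unlike SSR).
```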
Re: (Score:2)
While I prefer high frames first and quality second, some people want quality first.
It should be mentioned that this really depends on the game. If it's not a quick reflexes game, I definitely prefer higher quality.
Re: (Score:2)
This Half Life 2 RTX page [nvidia.com] has before/after RT comparison pictures.
And many objects are replaced with far more detailed ones. So keep in mind it's not only raytracing making a difference.
Re: (Score:2)
And the textures are changed too, so it's not really a meaningful comparison at all. I'd like to see the non-RT scenes rendered with the same models and textures as the RT ones.
Re: (Score:2)
Re: (Score:2)
You could want dynamic and realistic lighting and then still do some visual designs that essentially cheat the realistic lighting that is used everywhere else.
Something like that can and is often done in postprocessing based on information from the various "maps" that are generated in the process, where it does not interfere with the ray tracing.
That is the actual ray tracing, where rays are cast and checked for collisions with triangles and then carry the information from th
Re: (Score:2)
Notice the sleight of hand. Subject is DLSS3 AI fake frame generator. AI generated fake images increase input latency.
Unable to refute anything said, but needing to respond, they move on to talking about something completely different: DLSS, an AI upscaler technology that functions on a completely different principle and reduces input latency.
Re: (Score:2)
No sleight of hand; they are just confused because of nvidia's bad naming scheme. This came up in the discussion referenced in the article: the nvidia employee claimed enthusiasts would know the difference and talk about sub-features like frame generation, ray reconstruction, etc., but more than likely they aren't disentangling it because the confusion helps them in self-serving ways, e.g. on the Nvidia YouTube channel they post before-and-after DLSS frame generation video comparisons without explaining it isn't a d
Re: (Score:2)
Thing is, it's disentangled in the OP itself.
>And of course, "Deep Learning Super Sampling" no longer really applies, as the last two additions have targeted other aspects of rendering.
Re: (Score:2)
This is your brain on fanboyism. The topic is AI frame generation and its future. The very OP specifically notes that "DLSS" doesn't even technically apply to the DLSS 3 being talked about, because it's no longer a super sampling technique.
And then you get Team Green brainlets come out with these "nvidia for lyfe" takes all over this thread.
DLSS adds quite a bit of lag (Score:5, Interesting)
That would be fine if not for all the lag it tends to introduce. It's especially noticeable on fighting games making it basically useless for those. But I guess if you're into competitive games you'll notice it either way.
Re: (Score:3)
Not to mention power: a 12V rail can only do so much -- to wring out more performance they'd probably have to go with external PSUs or somehow get a new PSU standard with 24V rails or similar pushed through.
Re: (Score:2)
Re: (Score:2)
Not to mention power: a 12V rail can only do so much -- to wring out more performance they'd probably have to go with external PSUs or somehow get a new PSU standard with 24V rails or similar pushed through.
If one 12VHPWR cable isn't enough add two for twice the fun.
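The power concern above boils down to simple arithmetic: at a fixed 12 V, more watts means proportionally more amps through the connector. A quick illustrative calculation (the wattages are arbitrary examples, not specs of any connector):

```python
# Current drawn through a 12 V supply rail at various GPU power levels.
for watts in (350, 450, 600):
    amps = watts / 12
    print(f"{watts} W at 12 V -> {amps:.0f} A")
# Doubling the rail voltage to 24 V would halve the current for the same power,
# which is the motivation behind the "24V rails" suggestion above.
```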
Re: (Score:2)
It's because of where it sits in the rendering pipeline.
NVIDIA has produced papers on how to quickly render extrapolated frames, but that would have to be far more invasive inside the game engine (and would mean the game engine going full raytracing, no longer hybrid), not just a bolt-on post-processor that takes TAA inputs. Interpolation is easy to bolt on, so interpolation is what they push commercially. Non-VR/twitch gamers eat it up and are getting used to it, 24 Hz cinema style.
Only VR stands in t
Yeah but if it's not a twitch game (Score:2)
This can only intensify the brain-drain... (Score:2)
This sounds more like "Nvidia Executive hints that the most cost-effective rendering option is about to become your brains in a jar."
Tinfoil brain-protecting hats about to gain another use case!
And full neural rendering is what? (Score:2)
Because the article doesn't explain it in any meaningful way -- would anyone care to give the Cliff Notes version? :)
Re: (Score:2)
I may be entirely wrong, but from what I'm gathering, instead of carefully placing lighting and textures all over your scene, you just hand it the basic geometry and label it in some way like "this is an X" (this is a trash can, this is a cyber car, this is a rainswept street, this is a neon sign, etc.) and the neural network paints it appropriately on the fly.
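If that guess is right, the engine-to-renderer handoff might look more like a labeled scene graph than textured meshes. A purely hypothetical example of such a description (all names invented for illustration; nothing here reflects any real API):

```python
# Hypothetical labeled scene handed to a neural renderer in place of
# hand-authored materials and light placement.
scene = {
    "environment": "rainswept neon-lit street, night",
    "objects": [
        {"label": "trash can", "mesh": "cylinder_01", "position": [2.0, 0.0, 5.0]},
        {"label": "cyber car", "mesh": "car_lowpoly", "position": [0.0, 0.0, 12.0]},
        {"label": "neon sign", "mesh": "quad_03",     "position": [3.5, 4.0, 8.0],
         "hint": "flickering magenta"},
    ],
}
# frame = neural_renderer(scene, camera)   # the network "paints" the rest
```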
Re: (Score:2)
Ah interesting, thanks. I'll be interested to read more about performance versus current lighting/rendering tech. I've done some work with Unreal, and lighting is just a resource hog when baking (as beautiful as it is), though maybe the whole idea is that this doesn't need to do that -- more dynamic, perhaps?
Re:And full neural rendering is what? (Score:5, Informative)
2) Interpolating between frames. The "AI" can see what was in the previous frame and see what should be drawn in the current frame.
Combining these two techniques surprisingly works very well. My guess is that a different pixel out of each 9-pixel block is chosen to be rendered in each frame, and thus at high frame rates it's not really noticeable.
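That guess amounts to a rotating sample pattern: each frame renders a different pixel of every block and the rest are reconstructed from history. A minimal sketch of such a schedule, illustrative only and not DLSS's actual jitter pattern:

```python
# Rotate through the 9 positions of each 3x3 pixel block, one per frame,
# so every pixel gets a fresh sample once every 9 frames.
def sampled_offset(frame_index, block=3):
    slot = frame_index % (block * block)
    return slot // block, slot % block      # (row, col) inside the block

for frame in range(9):
    print(frame, sampled_offset(frame))
```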
Re: (Score:2)
It's based on some vague statement made by a higher-up NVIDIA employee in a "roundtable discussion" where video game industry insiders talked about the future of AI in the industry.
So that's a fairly flimsy basis to make some grand predictions on. At this point I'd call it little more than conjecture fuelled by the current AI hype which made the
Re: (Score:2)
The guy is clearly drowning in marketing speak, but if I were to hazard a guess, he's talking about a variation on NeRF and mixing it up with AI-generated content production, even though they are orthogonal.
NeRF is most commonly used to render light fields created from images, but it can also be used as a method of continuous level of detail and volumetric prefiltering. Facebook-sponsored research produced a paper called Deep Appearance Prefiltering, which is probably closest to what he is talking about. It won't
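For readers unfamiliar with NeRF: its core is querying a learned field for density and color at sample points along each camera ray and alpha-compositing the results. A toy version of that compositing step, with an analytic stand-in field instead of a trained network (purely illustrative):

```python
import numpy as np

def toy_field(points):
    """Stand-in for a trained NeRF MLP: returns (density, rgb) per 3D point."""
    density = np.exp(-np.linalg.norm(points, axis=-1))        # denser near the origin
    rgb = np.clip(points * 0.5 + 0.5, 0.0, 1.0)               # position-based color
    return density, rgb

def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
    """Alpha-composite samples along one ray (the volume rendering quadrature)."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    sigma, rgb = toy_field(pts)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-sigma * delta)                      # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)               # final pixel color

print(render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0])))
```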
Re: (Score:2)
A neural network is sort of a piecewise linear* approximation to a complicated function. The mapping between the polygons, textures, lights and camera in your 3d scene, and the 2d grid of pixels painted on the screen is a complicated function.
Both rasterization and raytracing are approximate methods for computing that function. They optimize in ways that we've discovered usefully reduce the computing while having acceptably small impact on the quality. In theory, you could train a neural network to approxim
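A toy illustration of that idea: treat an existing renderer as the ground-truth function and fit a small model to (scene parameters -> pixel value) pairs. The "renderer" here is a one-line Lambert-style toy and the learner is a random-feature model standing in for a neural network; it is a conceptual sketch only, as real neural rendering operates on vastly richer inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def reference_renderer(params):
    """Stand-in for rasterization/ray tracing: scene params -> pixel intensity."""
    light_angle, albedo = params
    return albedo * max(np.cos(light_angle), 0.0)   # Lambert-style toy shading

# Collect (input, output) pairs from the "real" renderer...
X = rng.uniform([0.0, 0.0], [np.pi, 1.0], size=(2000, 2))
y = np.array([reference_renderer(p) for p in X])

# ...and fit a small random-feature model to approximate it.
W = rng.normal(size=(2, 64)); b = rng.uniform(0, 2 * np.pi, 64)
features = np.cos(X @ W + b)                         # fixed random features
coef, *_ = np.linalg.lstsq(features, y, rcond=None)  # linear fit on top

test = np.array([0.4, 0.8])
print(reference_renderer(test), np.cos(test @ W + b) @ coef)
```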
Job losses (Score:2)
Think of all the ray tracers that will be out of work if this happens.
Hallucinations / Bugs? (Score:2)
Replacing algorithms by heuristics (Score:2)
Using "full" AI to paint a picture is guessing. Using Ray Harryshausen is an algorithm. Sorry, I meant Ray Tracing. It may be cheaper, but it may be wrong.
Totally unnecessary (Score:2)
Against NVIDIA on principle. (Score:2)
Most technologies, ideas, whatever you want to call them, created by nvidia are proprietary and vendor-specific. AMD, on the other hand, offers theirs freely. No, the argument "but g-sync is objectively better" does not fly.
I have not followed either company's advancements very closely in recent years, so fingers crossed I'm not downright wrong.
Please don't rape language unnecessarily (Score:2)
You could absorb the entire rendering engine including animation into the drivers and call it DLSS, but it's kinda silly ... how is it Deep Learning Supersampling any more? DLSS is a TAA replacement, nothing more.
The DLSS brand is kinda strong ... but so is RTX. Call it RTXe or something (RTX engine). That at least makes some sense.
Unimaginative, vague, and useless answer... (Score:2)
nVidia executives at this point are talking more to investors than to the industry or customers. They are all acutely aware that investors no longer care about the "toys" they sell and are far more interested in nVidia selling GPUs at $70,000 as "AI accelerators" and making themselves a dependency for "AI enablement" in everything they can manage.
So this answer is about saying AI a few more times to keep the stock up, but without even a suggestion of a plausible improvement as a result of t
Global illumination (Score:2)
Today's games can still look fairly ugly because light isn't bounced around often enough. When it is, colours bleed into each other in a much more convincing way. Compare: https://www.skytopia.com/proje... [skytopia.com]
Also: https://imgur.com/a/fJAG9bc [imgur.com]
Advanced DLSS sounds dumb (Score:2)
Am I the only one who thinks DLSS is dumb? I never use it and don't know why I would. It literally makes the image quality worse. Why would I want that?
All fun and good. (Score:2)
2. I cannot wait for people to all play the same game yet have it be totally inconsistent between each of the players.
Re: THANKS oBAMA!!1 (Score:2)
Yes, I don't want my neurons rendered like an expensive cut of beef!
Re: (Score:3)
Re:Custom player prompts? (Score:4, Insightful)
But seriously, even the Trek writers saw issues with the very idea of having complex things generated from just a few simple prompts.
What they went into in only a few episodes was that the generated results were vague, which then meant either that you had to lower your expectations or that you had to spend a lot of time fine-tuning essentially everything. For the latter, the writers thought there would be people who specialize in it.
And those are just the things that the writers, who likely didn't have much programming experience, could foresee decades ago.
In reality, especially if you want software that runs well in real time with dynamic user inputs, there are a lot of case-specific requirements to make things work with a decent compromise between accuracy/precision and performance, requiring insight that generative AI hasn't even begun to scratch the surface of.
Re: (Score:2)
Re: (Score:2)
And even the credit for that goes to Apple's engineers coming up with a solution for how to neatly fit all that into a small case, not to Jobs.
If someone told you to build a time machine, without being more specific, and you somehow managed to pull it off, would the inventor of the time machine be you or the person who told you to build one?
And the next thing you'll know is people claimi
Re: (Score:2)