Gamers React With Overwhelming Disgust To DLSS 5's Generative AI Glow-Ups (arstechnica.com) 124
Kyle Orland writes via Ars Technica: Since deep-learning super-sampling (DLSS) launched on 2018's RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday's tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by "generative AI." The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.
While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 -- which it plans to launch in Autumn -- "a real-time neural rendering model" that can "deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects." Nvidia CEO Jensen Huang said explicitly that the technology melds "generative AI" with "handcrafted rendering" for "a dramatic leap in visual realism while preserving the control artists need for creative expression."
Unlike existing generative video models, which Nvidia notes are "difficult to precisely control and often lack predictability," DLSS 5 uses a game's internal color and motion vectors "to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame." That underlying game data helps the system "understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast," the company says. Nvidia's announcement video and detailed Digital Foundry breakdown can be found at their respective links.
"Reactions have compared the effect to air-brushed pornography, 'yassified, looks-maxed freaks,' or those uncanny, unavoidable Evony ads," writes Orland. "Others have noted how DLSS 5 seems to mangle the intended art direction by dampening shadows in favor of a homogenized look."
Thomas Was Alone developer Mike Bithell said the technology seems designed "for when you absolutely, positively, don't want any art direction in your gaming experience."
Gunfire Games Senior Concept Artist Jeff Talbot added that "in every shot the art direction was taken away for the senseless addition of 'details.' Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter."
DLSS 5's "AI dogshit is actually depressing," said New Blood Interactive founder and CEO Dave Oshry, adding that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal."
Not really. (Score:2)
...gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games.
No, not really.
Re: (Score:2)
New Blood Interactive founder and CEO Dave Oshry, adding that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal."
Oh Noes! future generations will like different things than Me. NOOOOOOOOOOO!!!! The kids must like what I want them to like!
"Gamers Hate" (Score:5, Insightful)
Yeah, dude. It's the fucking internet. "Gamers Hate" literally every type of change in the past decade. That's just how the vocal minority goes. Same shit happened with hardware ray tracing, because early demos were not the best. No diff today w/ this tech. It was an early tech preview, not a final product. It will be tweaked and refined, and then we'll forget all about all of this bullshit nonsense, and the tech will be commonplace... y'know, just like DLSS (original). Yeah, that had massive hate too. Now DLSS just gets a shrug for the most part from the general community because it's decent now. And this tech too shall improve to that point eventually.
Re:"Gamers Hate" (Score:5, Insightful)
Yeah, dude. It's the fucking internet. "Gamers Hate" literally every type of change in the past decade.
It's easy to pretend this is some meme, but defending it shows you haven't actually looked at the demos. Even in the demos one thing is obvious: all the games end up looking the same. This is rubbish. It destroys creativity. Even if you think one example looks great, that doesn't change the fact that all games will ultimately look like that.
Now DLSS just gets a shrug for the most part from the general community because it's decent now.
It is anything but. A significant portion of gamers still leave it off. It still can't deal with many effects rendered with shaders, it still screws up in reflections, even with ray reconstruction it has a tendency to cause artefacts when moving. That last part for me is a personal deal breaker. I can live with something that doesn't look great, these days I run with ray tracing and DLSS off because a stable image that doesn't have distracting rubbish going on is worth more than perfect lighting.
That's before we talk about the technical limitations of some parts of it. E.g. DLSS 4's frame generation is incompatible with vsync; attempting to turn it on creates unplayable input latency. The tech is effectively useless unless you have a 144 Hz monitor with G-Sync.
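For what it's worth, the latency point can be sanity-checked with back-of-envelope arithmetic. A minimal Python sketch, assuming interpolation-style frame generation that has to buffer one real frame before it can tween toward it (numbers illustrative, not measurements of DLSS itself):

```python
# Interpolation-based frame generation can't show a generated frame
# until the *next* real frame exists, so it holds back at least one
# native frame time of extra latency. Illustrative arithmetic only.

def min_added_latency_ms(native_fps):
    """Lower bound on added latency: one native frame time, in ms."""
    return 1000.0 / native_fps

print(min_added_latency_ms(30))   # ~33.3 ms extra on a 30 fps base
print(min_added_latency_ms(120))  # ~8.3 ms -- why it feels fine at high fps
```

Which is consistent with the complaint: frame generation on a low base frame rate adds the most latency exactly when you'd most want the help.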
DLSS gets a shrug because a very significant number of gamers leave it off. The problem there is, we now get to live with games that have objectively rubbish optimisation, as every developer idiot just throws their world into UE 5, cranks the bling up, and shits that turdburger out the door, leaving users to depend on DLSS to make it playable.
It will be tweaked and refined
And we've been "tweaking" and refining DLSS for just shy of a decade and it is still rubbish.
Re:"Gamers Hate" (Score:5, Informative)
DLSS (in its original AA/upscaling definition) is amazing. It looks infinitely better than whatever internal scaling your monitor can do. It gave us back something we lost in the transition away from CRTs, which is the ability to play games at something other than your monitor's native resolution.
Frame gen is more of a mixed bag, I tend to think of it as a motion smoothing effect rather than "free performance". It's only useful in a narrow range of scenarios. The marketing of "Turn 20 fps into 120 fps with 6x frame gen" is BS.
Now this new stuff sounds like AI content generation, i.e. slop, meaning it's totally useless.
Re: (Score:2)
It gave us back something we lost in the transition away from CRTs, which is the ability to play games at something other than your monitor's native resolution.
Nvidia GPUs have had scaling functionality which is nearly free since NV20. DLSS creates detail, but mere scaling will do that, and has been able to for... oh no, I'm old... decades.
Re: (Score:2)
*All* AI upscaling / frame generation is "AI content generation". The tweened frames do not exist. The data between the pixels does not exist. It's being made up.
There's not much difference between this and just scaling the same image down and then AI upscaling it back up.
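The "made up" point holds even for plain, non-AI scaling: any upscale has to invent values that exist nowhere in the source. A minimal Python sketch using simple linear interpolation (illustrative only, not how DLSS works internally):

```python
# Minimal illustration: any 2x upscale has to invent the in-between
# values -- they are not present anywhere in the source data.
# Plain linear interpolation here; an "AI" upscaler just invents
# fancier values the same way.

def upscale_2x(row):
    """Linearly interpolate a 1-D row of pixel values to 2x length."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) / 2)  # this value does not exist in the source
    out.append(row[-1])
    return out

source = [10, 20, 40]
print(upscale_2x(source))  # [10, 15.0, 20, 30.0, 40] -- 15 and 30 are made up
```

An AI upscaler differs only in how it guesses the missing values, not in whether it guesses.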
Re: (Score:2)
Yeah let's not normalize that. If you upscale by 2x and you don't keep any of the original pixels, you're making shit up, you're not upscaling. Changing the shading "while you're at it" is not cool and spits in the face of art direction.
Re:"Gamers Hate" (Score:4, Funny)
Hey, wait until you hear about linear interpolation in animation. Did you know some of the animation frames in game models don't exist? The data between keyframes is being made up! The models are moving by themselves. Spooky.
And audio! Did you know we only sample audio at around 48 kHz? It's not even a continuous analogue wave! The audio in the gaps between the samples, every 0.000020833333333333 seconds, is being made up! That's not even real data!
Oh, and don't get me started on pixel latency on LCD displays. Did you know that the image you see is actually different from what your graphics card puts out? Those blurred pixels in the few milliseconds between each frame don't exist! They're being made up by the LCD panel!
It's all a conspiracy, I tells ya. Big Tweening is out to get us.
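The tweening joke is, strictly speaking, accurate: in-between animation frames and inter-sample audio are computed, not stored. A minimal Python sketch with made-up keyframe values:

```python
# Keyframe "tweening" is just linear interpolation between stored poses;
# every in-between frame is computed on the fly. Values here are made up.

def lerp(a, b, t):
    """Linear interpolation: t=0 gives a, t=1 gives b."""
    return a + (b - a) * t

key0, key1 = 0.0, 90.0        # a joint angle at two stored keyframes
print(lerp(key0, key1, 0.5))  # 45.0 -- a pose that exists in no file

# And the audio gap really is about 20.8 microseconds at 48 kHz:
print(1 / 48000)              # ~2.083e-05 seconds between samples
```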
Re: (Score:2)
It's not even a continuous analogue wave!
It is after you run it through a DAC, which is a required step in turning audio from a digital representation into pressure waves traveling through air. The air molecules are quite analog, until maybe you get down to the quantum level.
Re: (Score:2)
My (jokey) point was that the actual data between the samples doesn't exist in the file; you're always hearing an approximation. An infinitesimally close approximation at high enough sample rates, but still an approximation.
Your name checks out, tho. :D
Re: (Score:2)
Who said that we are debating internal monitor scalers vs DLSS? That's a false comparison. Here's the actual comparison:
- Good well optimised games, vs poorly optimised games that require a low internal render resolution + DLSS.
- Well designed and well lit games, vs poorly designed games that just rely on raytracing to make a game seem good + DLSS to make it run.
No one here is proposing you run at the wrong resolution. In that regard I would absolutely recommend DLSS, but only as a last resort.
You see when
Graphics don't matter. (Score:3)
DLSS (in its original AA/upscaling definition) is amazing. It looks infinitely better than whatever internal scaling your monitor can do. It gave us back something we lost in the transition away from CRTs, which is the ability to play games at something other than your monitor's native resolution.
Frame gen is more of a mixed bag, I tend to think of it as a motion smoothing effect rather than "free performance". It's only useful in a narrow range of scenarios. The marketing of "Turn 20 fps into 120 fps with 6x frame gen" is BS.
Now this new stuff sounds like AI content generation, i.e. slop, meaning it's totally useless.
I thought DLSS helped on Cyberpunk 2077 but didn't do a damned thing for STALKER2.
However I came to the conclusion years ago that graphics don't matter.
The most damning evidence came when I played Mass Effect Andromeda. The Mass Effect trilogy is beloved for many reasons, and not necessarily the graphics (which were bad and highly console-ized even for 2007); it had a story, an atmosphere, likeable characters, and above all else good gameplay. So good the gameplay still largely stands up to
Re: (Score:2)
OMG Dwarf Fortress is getting another graphics upgrade!?!
Re: (Score:2)
Minecraft will never be the same [youtube.com] ;)
Re: (Score:2)
It seems like a lot of games look the same these days anyway. Especially the faces, they all seem to be using the same or very similar models and references.
Re: (Score:2)
There's lots of techniques for frame interpolation and upscaling.
DLSS 5 is more like looking at AI slop videos. Because that's what it is - AI slop. That's the reason why the images look like they do. It's just breaking down the game images and turning them into a big prompt to generate new video.
That's why Nvidia is claiming you don't need to work on graphics anymore because it's going to turn the whole imaging pipeline into a big AI image generation prompt.
The next game engine will simply output text to a
Re: (Score:2)
Same shit happened with hardware ray tracing, because early demos were not the best. No diff today w/ this tech. It was an early tech preview, not a final product. It will be tweaked and refined, and then we'll forget all about all of this bullshit nonsense, and the tech will be commonplace...
Yeah. That's also how we get enshittification. You know, like when every damn HDTV vendor decided to adopt that motion-smoothing nonsense, giving movie fans (and directors) a wholly shitty experience. What the hell is the point of a lavish Hollywood production when it comes out looking like it was filmed through a GoPro attached to a drug dog tweaking on coke?
Not into games like I used to be, but I remember previous iterations and advancements. This shit more sounds like some developer discovered the eq
Re: (Score:2)
"Waah, I got used to janky video, and now I'm annoyed when it looks more like real life!"
The Matrix realized quickly that a perfect world is ultimately rejected by the human race.
AI already gets the gist. Because we keep asking it to turn everyone into talking babies bitching about RAM prices and guitar-playing cats.
Re: (Score:2)
It is not the image that it was, therefore you have to conclude it is not a good e
Re: (Score:2)
Yeah, dude. It's the fucking internet. "Gamers Hate" literally every type of change in the past decade. That's just how the vocal minority goes. Same shit happened with hardware ray tracing, because early demos were not the best. No diff today w/ this tech. It was an early tech preview, not a final product. It will be tweaked and refined, and then we'll forget all about all of this bullshit nonsense, and the tech will be commonplace... y'know, just like DLSS (original). Yeah, that had massive hate too. Now DLSS just gets a shrug for the most part from the general community because it's decent now. And this tech too shall improve to that point eventually.
Change aversion posts are unfalsifiable and fail to address the merits of the underlying system.
Re: (Score:2)
"Gamers Hate" literally every type of change in the past decade.
And yet gamers still cough up >$200B a year - more than the revenue of movies, music, and books combined.
Re: (Score:3)
I've seen the demos and I don't see how people are seeing the results as any more yassified than the originals. Even in the main example people keep picking, the DLSS 5 ON shots show more freckles and other imperfections than the original.
And as for comments that it's "depressing" that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal."... man, these people are SO CLOSE to getting it.
Uncanny Valley like problem? (Score:4, Interesting)
The Uncanny Valley is when computer art is too close to human but not good enough. It results in slightly off-looking people that give viewers the heebie-jeebies.
This could very well be a similar effect but applied to non-human pictures as well as representations of humans.
Re: (Score:2)
The main thing that is "uncanny" IMHO is not the quality of the graphics, but rather, the way things move (something they deliberately didn't modify - they use the original geometry). So you get a hyperrealistic character but moving like a rigged armature.
Anyway, what's "uncanny" is "what you're not used to". If you took modern video game footage back and showed it to people 1-2 decades ago, they'd find it "uncanny". Same with CGI in movies. The whippersnappers here probably don't know this, but there
Re: (Score:3, Insightful)
Not really. We've actually had better and more realistic graphics before. The problem here is that offloading design to AI like this makes everything look the same.
They chose the perfect example here, the Resident Evil game. The problem isn't so much that the character looks good; that's a bonus. A character that looks as good as one from a well-developed game of 7 years ago isn't an uncanny valley problem. The problem is that the AI changed things that were deliberate design choices. In this case Resident Evil is
Re: (Score:3)
Unlike unnaturally smooth shading, poor lighting models, and limited texture resolution?
There is nothing fundamental about what AI can or can't make things look like. Anyone who has spent time with a modern image generation model (say, Nano Banana 2) knows that the only meaningful limit to what it can create is the limits of what you can imagine. To the degree that DLSS 5 might be too limited**, it's something that
Re: (Score:2)
Unlike unnaturally smooth shading, poor lighting models, and limited texture resolution?
You're begging the question. Precisely none of that is a given. You're wowed by a demonstration applied to three infamously bad examples. Even Resident Evil doesn't implement basic modelling shaders that were released in an NVIDIA paper 23 years ago and first used in games in 2007.
I'm glad you're impressed, but the graphics here look no better than for example Metro Exodus, a game released 7 years ago. Just because 3 developers didn't give a shit doesn't mean we need crappy AI fix for the problem.
There is nothing fundamental about what AI can or can't make things look like.
Of course
Uhhh (Score:5, Insightful)
Maybe I'm missing something, but from Nvidia's announcement video every single example looks significantly better in a number of ways. A lot of modern games suffer from shitty and incorrect lighting - especially those using UE5. Now, does that mean that every scene in every game that can use DLSS5 looks better? No, probably not. Is it perfect? No. But neither were the originals. Does it probably need more training for a bigger variety of games? Yeah, probably. But "air-brushed pornography"? No. "Yassified, looks-maxed freaks"? No. Mangled art direction? Maybe, depending on the game, but mostly probably not.
If adding more realistic details to a somewhat realistic-looking game detracts from your "art direction" then maybe your art direction isn't that good to begin with.
Re:Uhhh (Score:5, Informative)
They cherry-picked some super bad examples for the non-DLSS views. It's ludicrously gamed. This was published 19 years ago [nvidia.com] (it's a technique from GPU Gems 3 [nvidia.com]) and it was real-time then. I'm sure I can find a better/more recent method, but I hope you get the point.
Re:Uhhh (Score:5, Insightful)
Maybe I'm missing something, but from Nvidia's announcement video every single example looks significantly better in a number of ways.
The thing you're missing is that they demonstrated multiple different games with multiple different artistic directions and yet ... they all look the same when DLSS is on. That's the big problem here. You're no longer getting a game made by developers, you're getting a game interpreted by NVIDIA's training set.
Take a closer look before you declare better. Here's what I see:
1. Resident Evil: the character looks amazing. But the environment looks amazing as well, which is a problem, since there was a clear design choice to have a brown background fog keep the environment from standing out. Someone added that on purpose. DLSS 5 removed it. ... ha, nothing could make that game worse, this I'll take as an absolute win.
2. Hogwarts Legacy was clearly lit by an overhead light casting a strong shadow over the face of the character. This game supports raytraced shadows, so that shadow was rendered correctly. DLSS 5 all but removed it, making it virtually impossible to tell where the lighting is coming from. The character now looks almost completely front-lit.
3. Starfield: DLSS 5 fundamentally changed the character's looks. The face actually has different dimensions. As if it weren't bad enough that all the lighting will look the same, are we now expecting every character to be rendered with a catwalk model's chin? (Sidenote: what isn't visible in that video, but was visible in an extended cut, is that the lighting is no longer visually stable. As if it weren't bad enough that reflections react differently and fast-moving objects glitch out with DLSS, now we get to contend with subsurface scattering and ambient occlusion popping in and out depending on what the AI thinks about any given frame. There was a scene with a character in Starfield standing up and going for a walk. I can't even tell you what the character looked like, because I was distracted by the unstable shadows in the environment screwing up. And this AI instability is something DLSS hasn't fixed in its 8-year existence.)
4. EA
Give me bland graphics over this crap any day.
Re: (Score:3)
Because the character hardly had "looks" to begin with? That's like charging that a realistic rendering of Cloud Strife "fundamentally changed the character's looks from FF7".
I think there's a legitimate complaint that they could follow scene art direction better - and probably will focus more on that in future revisions**. But complaining that it makes someone look more like an actual person instead of a CG blob isn't just an "OK Boomer" sort of thi
Re: (Score:2)
Because the character hardly had "looks" to begin with?
That was a choice. I mean, don't get me wrong, that character looks better. I'm sure the AI finds her infinitely more fuckable, but the design of the character model was a choice, a choice that has now been overridden by some AI glow-up rubbish. I'm no fan of the looks of the Starfield characters, but at least the look was unique.
This is a problem in general with AI. AI redraws the scene. Fundamentally. It takes direction, but when you do something like say highlight only one object in an image editor and ask AI to
Re: (Score:2)
Oh come on. What percentage of modern game sales are of games where the authors deliberately attempted to make it look bad? The overwhelming majority push for realism. And even for those that don't push for realism, almost none of those deliberately push for "looking like subpar or outdated CG".
If you want to jump back 5-10 years in gaming technology, you're free to, but the market does not agree.
Re: Uhhh (Score:3)
This is an incredibly superficial view of graphical fidelity. Most games do not just push for realism in the sense that it looks just like real life.
Instead they pursue artistic realism, where liberties are taken with regards to lighting, shadow effects, particle physics, and even character models to evoke a specific reaction from the player. That is called artistic direction and atmosphere.
Also you seriously cannot believe that this default GenAI look is somehow realistic either. The lighting alone hardly
Re: (Score:2)
I'd be careful throwing the word "most" around there, because personally I think it's highly genre dependent. In a horror game, I'd likely agree with you, particularly in regard to something like lighting. But in something like an FPS, I guarantee realism is likely higher on their list. The "can it run Crysis?" meme did
Re: Uhhh (Score:2)
You bring up crysis and I raise you Modern Warfare and Battlefield. Back then a lot of shooters went with the brown/piss filter approach to graphical fidelity that became so widely mocked and for good reason. Not only did it age poorly from a graphical standpoint, it all just looks so samey.
And even putting aside artistic intent, realism can actually be highly detrimental for game design as it can make readability and visual clarity incredibly difficult.
For example, modern battlefield games still have issue
Re: (Score:2)
to evoke a specific reaction from the player.
Yeah, and sometimes that reaction is "Wow, someone fucked up, because this looks bad." Sometimes the direction they took is just objectively the wrong one.
Re: (Score:2)
1. Looking again, yeah, the background is too bright overall and isn't actually correct for the weather, light sources, buildings and whatnot. There are also some issues with the details/fog on the left, under the bridge or whatever that structure is. But that can be further tweaked and fixed. And I'm sure they will continue to do so as they have in the past.
2. Disagree. There is no actual overhead light in this scene. Or at least not one that bright. Because nothing else in that scene is illuminated properly
Re: (Score:2)
If developers add this to their games, it will fit the art direction.
If it is also a drop-in enhancement for older games, you won't have to do it. I'll enjoy the improved lighting, and you can stick with the bad lighting that makes you so happy. Also note that sometimes what you take to be an intentional artistic decision is actually a desperate attempt to boost framerates or hide flaws, and the original intent may be better served by this.
Did you wat
Re: (Score:2)
If developers add this to their games, it will fit the art direction.
Except we have evidence that developers take the easy route over their actual artistic vision time and time again. Many shitty game designs have been attributed to this already. E.g. a developer may *want* to create a certain atmosphere, but their bastard exec is putting pressure on shipping the product and threatening to fire everyone, and suddenly every game looks like the same shit. You think the Sistine Chapel would look the way it does if the pope came in every day and shouted at DaVinci that he better
Re: (Score:2)
You're making a moral hazard argument for a lighting tool. It just sounds silly. Lazy devs and bad managers will stay lazy and bad, the good ones may be able to do even better.
The ceiling of the Sistine Chapel was painted by Michelangelo.
Re: (Score:2)
What's your opinion then on modding? If you played Starfield, did you use any mods? What about reshade presets?
Re: (Score:2)
If the complaint is a lack of developer control, does that mean that if Nvidia can install knobs in DLSS for developers, then you'd no longer have a beef with it?
Re: (Score:2)
Almost. And NVIDIA would say they have done that, but they have presented ample evidence to the contrary. If the developer had control then maybe the Resident Evil example wouldn't have had its atmosphere ruined (that game is 100% atmosphere), Harry Potter would have retained the cartoonish look rather than whatever attempt at realism was shat onto that one Harry Potter scene they demonstrated, and Starfield ... well, the character models would still be ugly :-)
That said, developer control doesn't fix the ar
Re: (Score:2)
And the art direction argument is just plain dumb. If this is a feature that can be dropped into older games, and detracts from the intended direction in some way, don't turn it on. If the developers are adding it to games, then it clearly fits their direction.
And the complaints using new jargon I don't really get (first I've heard "yassified" or "looks-maxed"), are simply backwards. Idiotically backwards. Th
Re: (Score:2)
funny how you characterize deliberate choices as "shitty and incorrect" ... are you some elite game developer/artist?
Not even. The shittiest and most incorrect lighting is not deliberate nor does it have anything to do with art direction. Starfield is the only example I need for this.
Quick look (Score:5, Interesting)
I don't care about this much, but I took a quick look.
The DLSS 5 face looks about a decade better than the non-DLSS face. The latter is how faces have appeared for decades in games: not quite there. The former is far better: the eyes look real. Skin tone is convincing. Nice.
The alley/street/whatever is dramatically different. Not necessarily better or worse, but very different. So that's a thing that will bother people that are all wrapped around the axle about "atmosphere."
Overall I prefer the DLSS 5 version pretty strongly. It looks better.
There. A former gamer that likes it. Hate on, you spoiled brats.
Re:Quick look (Score:5, Insightful)
So that's a thing that will bother people that are all wrapped around the axle about "atmosphere."
The sad part about you putting the word "atmosphere" in quotes is that you fail to realise that the Resident Evil games have all fundamentally been about atmosphere. It's what drew you in back in the days of the original PlayStation, when a character's face was modelled with polygons you could count on your fingers.
Yeah-nah, I don't want all my games looking the same thanks to an AI interpretation. The problem is, we now have another tool that will make developers not give a damn. As if it weren't bad enough that DLSS has destroyed any motivation for developers to optimise their games, now we get to contend with them shitting out crappy games and relying on AI to make them look real.
But hey as long as the tech demo applied to an infamously bad game looks good right?
Re: (Score:3)
So, what is it that you're complaining about? That a tech demo intended to make games look different makes them look different? That the lighting tools available
Re: (Score:2)
you fail to realise
I realize AAA game designers will adapt and use new capabilities to produce better "atmosphere," just as they always have. Shit tier games will still be shit, but different. Ultimately, nothing of value will be lost.
Re: (Score:2)
just as they always have.
Games in the past 10 years are all evidence to the contrary. It is very clear that developers are taking the path of least resistance. There's a reason most games the past decade have looked the same, and that's largely been the adoption of UE5, the endless use of pre-existing libraries, and the idea that you can just throw RTX lighting elements into the scene instead of lighting things properly.
You may be wowed by the demonstration today. I'm not, because all that it's achieved is graphics on a quality sim
Re: (Score:2)
The problem is that the characters end up looking like an AI generated version of themselves. There's a distinct look to AI generated humans, especially the kind that end up in crappy ads, facebook posts and general online slop. So you're taking these characters and making them look stylistically similar to humans in AI generated slop content. For people who regularly see these, and find them annoying/worthless, now seeing their favorite characters transformed into a similar style is off putting and aesthet
Re: (Score:2)
I think you're looking at it backwards. Game devs and AI devs are both trying to create characters that look human. What do you expect will happen? That CGI will look like CGI? AI generation has produced shockingly realistic looking images. Game devs have been trying to do that for generations. What do you expect will happen?
Re: (Score:2)
And do you realize that what you're complaining about is that the developers decided to make their characters pretty, not that Nvidia did some nefarious thing? All Nvidia did was drop something in that radically improves the lighting.
That game characters look like girls who want to look like game characters shouldn't be much of a surpris
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
But think about the things you've listed - bumps, roughness, height, reflected color, refracted color. Those are all controlled by how the lighting engine interacts with the layers of PBR textures. If your height/bump/normal map doesn't cast shadows on itself, wrinkles look flat. A lighting e
Re: (Score:3)
A former gamer
Nvidia's insane pricing claims another soul. Welcome to the club, former gamer.
Re: (Score:2)
5000 dollars for a video card... heck, let's buy 5
Re:Quick look (Score:5, Insightful)
The tech is potentially good, but the way Nvidia used it was pretty horrible: taking characters that are purposefully stylized and turning them into this terrible "what if Mario was real" kind of hell.
A game made with this from the ground up, with the NN trained to draw the character faces and all that, would be awesome, but it was in really bad taste how they showed it.
Re: (Score:2)
Re: (Score:2)
So then... (Score:4, Funny)
...this is what we're hating today?
Re:So then... (Score:4, Insightful)
Well, I don't harbor any more hate for it than I do a child's fidget spinner or Barbie doll. It's just a toy that's uninteresting to me.
What I hate is that they've expended gobs of cash, petroleum, water, and talent on this, while the drivers for their $1,000 GPU, which I have to run every day for work, have been neglected. After their last year of fuckups, I've started ignoring their driver updates.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I take it from your anonymous response that you are also in the Trump-Epstein files.
That actually looks pretty darn good... (Score:4, Interesting)
Re:That actually looks pretty darn good... (Score:5, Interesting)
I didn't.
First of all, the stills look OK. But there's still an uncanny valley effect.
Second, there was a radical variation in the styles used in each game. The Hogwarts woman 25 seconds into the video, for instance, looked cartoonish. The guy immediately afterwards looked intended to be realistic, albeit with cartoonish clothing, and set against his cartoonish cart driver he looked very out of place.
As an aside, the Hogwarts woman looked OK in paused scenes but seemed to have something seriously wrong with her face, like the skin was sliding off or something, when animated.
Third, the animation is the same head-bobbing, necks-making-weird-angles-when-speaking stuff we've seen in video games since the 1990s. DLSS doesn't fix that. So you get the extreme weirdness of "real" people making video game movements. That adds more uncanny valley sense to the whole thing.
And this is a video of them showcasing it all going to plan. They've mostly used cut scenes. They've chosen sequences where there's a lot of animation in the first half (while showing non-DLSS) and relatively little in the second (showing DLSS 5). So you're seeing a VERY cherry-picked selection.
Despite this, it's all uncanny valleys, which is why gamers are not happy with it.
This is what you get when you replace the executive staff at a graphics card maker, swapping enthusiastic gamers for AI-boosting charlatans.
Re: (Score:2)
I've seen papers about turning old-school graphics or even bad n64 level graphics into photo realistic images. This I think is ok but not as impressive and it's being done in realtime which I was expecting to start happening... not at 4k. 1080 looking like a movie is fine with me. I can't tell 4k sitting 8ft from the TV anyhow. I do think some of their work is in need of serious tweaking. I imagined we'd have simple poly graphics by now with metadata indicating what used to be shaders and textures for the
Re: (Score:2)
I would love to be able to game with images like that.
I can go either way. I don't put nearly as much emphasis on graphics as I do on whether the game is fun to play. I expect CGI realism in movies, but not in games. A $5 indie game with an interesting, fun, and unique hook is far preferable to me over an irrationally expensive ($50+) AAA game with awesome graphics and shitty game play.
Some people are complaining that DLSS5 makes all the graphics look the same, but my complaint is that all the modern AAA games are all the same in game play. They all ran t
"air brushed pornography"? (Score:2)
"Generally bullish" (Score:2)
Since deep-learning super-sampling (DLSS) launched on 2018's RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games.
I know I'm far from the only person who didn't like earlier versions of DLSS making games look like animated oil paintings, much less the latest one making them look like Sora videos.
Re: (Score:2)
Re: (Score:2)
It's not just a tool to handle lighting, and it's not better; it's different, in a way that most people don't think is better. Some people prefer their games to look like games, the way the developers intended, rather than like AI-generated artwork, even if the AI version has more detail that was never meant to be there and more photorealistic lighting that wasn't the game's intended art style.
Re: (Score:2)
And that's the problem with the argume
Re: (Score:2)
I doubt many game developers would try doing that and I expect there would be a game dev campaign to ask gamers to turn DLSS5 off, much like the movie industry with motion smoothing. With this technology the look of the finished human-made artwork would just be a visual prompt for DLSS5 to imagine a replacement for. Developers would have to try making a character face that DLSS5 turns into a different face that looks like what they had in mind...good luck with that.
Re: (Score:2)
It's a tool for them, why wouldn't they try using it? Why would they include it in their pipeline if they didn't like it?
"Developers would have to try making a character face that DLSS5 turns into a different face that looks like what they had in mind...good luck with that."
You mean, tweak the lighting? Your argument doesn't make sense. You seem to be conflating lighting
Re: (Score:2)
Why do you think it just changes the lighting? Have you seen the demo? It's changing features including on people's faces. That's much more analogous to having ChatGPT redesign your models. The way it face-swaps the Resident Evil characters and a futbol player is downright comical.
https://www.theverge.com/enter... [theverge.com]
Re: (Score:2)
Lighting is where the textures have their effect. That's what you're seeing in the demo, the lighting engine cranking up the effect of the multiple layers of PBR textures. Which, if you are unfamiliar, include the diffuse texture (which used to be the only textu
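To make the parent comments' point concrete, here is a minimal sketch of how a lighting calculation combines the separate PBR texture layers (albedo, normal, roughness). The function name, the Lambert diffuse, and the Blinn-Phong-style specular are my own illustrative choices, not Nvidia's or any particular engine's actual pipeline:

```python
import math

def shade_pbr(albedo, normal, light_dir, view_dir, roughness,
              light_color=(1.0, 1.0, 1.0)):
    """Toy per-pixel shade: diffuse term driven by the (normal-mapped)
    surface normal, plus a Blinn-Phong specular whose sharpness comes
    from the roughness texture. Vectors are 3-tuples, assumed normalized."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Lambert diffuse: the normal map's perturbed normal controls this,
    # which is why wrinkles look flat if the lighting ignores it.
    n_dot_l = max(dot(normal, light_dir), 0.0)

    # Half-vector between light and view directions for the specular lobe.
    half = tuple(l + v for l, v in zip(light_dir, view_dir))
    h_len = math.sqrt(dot(half, half)) or 1.0
    half = tuple(h / h_len for h in half)

    # Map roughness to a Blinn-Phong exponent: rougher -> broader highlight.
    shininess = max(2.0 / max(roughness * roughness, 1e-4) - 2.0, 1.0)
    spec = max(dot(normal, half), 0.0) ** shininess

    # The albedo texture only matters once the light terms multiply it.
    return tuple(a * lc * n_dot_l + lc * spec
                 for a, lc in zip(albedo, light_color))
```

The point of the sketch: cranking up how these layers feed the lighting (stronger normal response, sharper speculars) changes the apparent surface detail without touching the textures themselves, which is the parent's claim about what the demo is doing.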
Awesome! (Score:2)
The demo videos look pretty awesome to me! I can't wait to see this uplifting game graphics. I know a bunch of naysayers will jump on me saying I'm out of touch and just don't understand their perspective, but I'm a gaming veteran not some out of touch nub. It looks to me like there's still plenty of room for art direction, and I do understand that this tech will homogenise games to some degree, but if I can have games looking photorealistic like that in real-time, that's a net win for me. The real test wil
Noise Wars 2.0 (Score:5, Interesting)
The noise wars destroyed radio and music, now the video wars are coming for your eyes.
Moar Contrast! (Score:3, Interesting)
The image from the press release reminds me of an anecdote from a cinematography manual from a few decades ago. In the story, a cinematographer spent hours using fog machines and lens filters to give a scene a mysterious, dreamy look. Later, in the editing suite, a video engineer "corrected" the black levels, undoing all the cameraman's work.
In the "before" picture from the press release, the fog naturally lowers the contrast in the background. I imagine the artists involved celebrated when they got the sodium street lamps to subtly tone the atmosphere. With "DLSS 5 On", everything is uniformly high contrast, a much clearer day in a city that was never meant to be clear. Similar aesthetic in some of the other demos: DLSS 5 seems to apply an overcooked tone-mapping that makes everything uniformly contrasty.
They've built a very expensive video engineer that corrects things that aren't wrong.
Re: (Score:2)
Monkey Christ... but in reverse (Score:3)
Kind of interesting but.. (Score:2)
This looks like a cool tech demo, not like something I would actually like to run in most cases. It needs significant control from game developers, for example to only apply the effect to certain objects, and only before e.g. lighting and shadows are rendered. Definitely not whole scenes.
Re: (Score:2)
Though it really doesn't make sense to apply lighting to anything but an entire scene. And as it is a lighting technique, it is the rendering of lighting and shadows, not a separate step.
Re: (Score:2)
It's very obviously more than mere lighting, it's clearly changing textures, removing fog, doing the equivalent of changing meshes etc.
Re: (Score:2)
Open up a game that includes a sharpness control. Play with it. It looks like it is changing the texture, appearing to add details that weren't there before. It isn't, but
Re: (Score:2)
But if you look at one of Nvidia's examples from Resident Evil: Requiem (first image in article):
https://arstechnica.com/gaming... [arstechnica.com]
and look at the Cigarettes sign on the building on the right, you can see obvious mesh distortions in the DLSS 5 version. How is that caused by lighting alone?
This gamer reacts with overwhelming positivity. (Score:3)
Gamer Luddites (Score:2)
This reminds me of the first digital cameras: too expensive and too primitive, but what does everybody carry now? :)
Re: (Score:2)
And no, it doesn't make or supplant artistic decisions. Nvidia is, because this is a tech demo where they apply it to existing titles to show off the capabilities. When devs use it, they'll use it to suit their intent.
Which means it is an utterly brilliant idea. I love it and eagerly await being able to use it. Should I ever be able to afford a card that supports it. I have doubts my 3070ti will.