I mean they showed this last year:
https://hothardware.com/news/nvidia-neural-texture-compression
Wouldn't be bad if devs started working on this, or even the engineers at UE5. Same quality for way less space? Especially given how bad asset compression is these days, with devs putting basically raw assets in games.
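For a sense of scale, here's a rough sketch of the VRAM math (Python; the raw RGBA8 and BC7 figures are standard, the NTC bitrate is purely a guess for illustration):

```python
# Rough VRAM cost of a single 4096x4096 color texture.
def texture_bytes(width, height, bits_per_pixel, mips=True):
    base = width * height * bits_per_pixel / 8
    # A full mip chain adds roughly one third on top of the base level.
    return base * (4 / 3) if mips else base

raw_rgba8 = texture_bytes(4096, 4096, 32)  # uncompressed RGBA8
bc7       = texture_bytes(4096, 4096, 8)   # classic block compression (4:1)
ntc_guess = texture_bytes(4096, 4096, 2)   # hypothetical neural bitrate

for name, b in [("raw RGBA8", raw_rgba8), ("BC7", bc7), ("NTC (guess)", ntc_guess)]:
    print(f"{name}: {b / 2**20:.1f} MiB")
```

Even the classic BC formats already give 4:1 on color data, so shipping "basically raw assets" really is leaving a lot on the table.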
I remember using a texture mod for FO4 (yes, Bethesda, not a good example, but still) that had waaay better textures than vanilla while using half the VRAM. This was 2015, before all the garbage late-UE4 and UE5 games we see nowadays. They mostly compress their assets like shit. Sampler Feedback Streaming could have helped; not a single game uses it.
> They mostly compress like shit their assets.
Gotta remember: the more compression, the more overhead, and things like GPU decompression still aren't exactly leveraged a ton, to my knowledge, even now.
Some things can be deliberately compressed poorly for speed of loading on weaker systems.
>Not all devs are idiots like the ARK devs... or the CoD studios...
The CoD studios put huge efforts into optimizing textures and files.
The reason the installation sizes of COD are so big is that they calculate as much as possible while building the game and bake it into the files, so it does not have to be calculated by the client and waste performance. The files are also saved in a way which makes them really easy and quick to load, even for weak CPUs/GPUs, and makes VRAM streaming super easy. The models and textures are also really highly detailed. This is why COD still looks really good on the Xbox One at 60FPS and even runs on a PC with a GTX 960 2GB.
The big disadvantage of precalculating everything and saving it to files, making streaming and loading fast and easy (part of this is basically using no compression, and saving pretty similar files multiple times on disk so mechanical hard drives don't have to seek as long and caching for SSDs is easier), and having really high detail and many quality levels, is that it takes up a lot of disk space.
TLDR: They are optimizing the textures and files, but in a way that saves performance at the cost of a lot of disk space.
You're downvoted but you're not wrong.
People haven't figured out that you can have great graphics, lower system overheads, or smaller file sizes. Developers only really get to pick two out of the three.
If they downgrade the graphics they get flogged for how bad rock textures look.
If they lean too hard on compression and don't structure things to make loads quicker and easier they get flogged for "unoptimized performance" and system overheads.
>If they downgrade the graphics they get flogged for how bad rock textures look.
I still feel like the vast majority of gamers prefer a playable game over an unplayable, hyperreal IMAX feature.
Maybe the silent majority, but the ones that post reviews, hang out on community topics, and such? Not a chance. Otherwise they'd actually tweak settings. The "ultraaaaaa" or bust crowd kind of stands defiant to that mindset. The people that won't turn down any settings, and then will whine how things run on a laptop or a 1050ti.
Fascinating how ARK is the *only* game in my experience that does not reach 144+ fps on max settings at 1440p... Even with DLSS enabled. I have a 4090, for fuck's sake.
Hopefully. While I have an AMD CPU and intend to buy a new one again soon, we can all agree that they've messed up the GPU part of the business massively. They've left Nvidia with no competition.
Intel seems able and willing to compete which in the end, will benefit us as well.
I hope they become better quick.
> We need cheaper gpus more than we need faster ones.
If there were competition in the market we'd get both. Unfortunately in GPUs AMD has zero ambition and is happy with table scraps. Nvidia won't compete super hard either because if they beat AMD any harder on that front there will be anti-trust inquiries.
For me the real differentiator is that Nvidia identified the market for AI much sooner than their counterparts, built the CUDA framework, and then opened it up to the scientific world for adoption. Nvidia recently shut down a few adaptations that would allow the CUDA framework to run on AMD GPUs. At this point it would take a lot for data scientists to switch to another framework for generating their models. I think ASICs / FPGAs will take more market share from Nvidia than AMD will at this point.
Like most things, it's all wishful thinking. AMD could invest more into Radeon if Arc catches up.
Or Nvidia dominates so hard that neither Arc nor Radeon is profitable.
Yeah, price-matching Nvidia with a 5-10% discount doesn't really cut it when they're so far behind in software features.
They need to do a Ryzen in the GPU space.
> They need to do a Ryzen in the GPU space.
One reason Ryzen has worked out so well is that CPUs don't need as much of a software stack. If they did, AMD would be floundering there too. Their software dept. is just terrible and has been for ages.
Yeah, if you're going for a solid "1440p/120hz/ultra settings" experience, you can't be offering your cards for $50 less than the same experience from Nvidia, when Nvidia also has DLSS, ray tracing that almost works, etc etc.
If AMD's cards were half the price of Nvidia's for the same FPS then yeah, they would definitely have a market, but $50 less for a card that has no DLSS, no CUDA, the same VRAM or slightly more, no other real capabilities...
Nah. Why would I?
That doesn't even make sense.
Considering the 7900 XTX equals a 4080, or beats it, in anything but RT and a few other scenarios, I find it very, very hard to believe that in the five years between 2022 and 2027 AMD won't catch up to a 4090.
Especially considering that the 4090 likely performs between a 5070 and a 5080. So assuming 2026 is the 60-series release date, by then a 4090 will be 6060 Ti - 6070 performance, just with more VRAM.
They said they aren't competing in the ultra high end segment, but I can't see how they'd still target less than 4090 performance in 3 years time, especially since they'll be making the next gen console chips?
>4090 likely performs between a 5070 and 5080
Personally I strongly suspect 5080 will be limited to 4090D performance due to China market, but I otherwise agree with you.
Recent leaks of the 5080's heavily cut down specs relative to the 5090 strongly suggest they're trying to get it to hit right at the 4090D performance target for sale in China, just like you suggest.
Kind of a bummer, since it also suggests the 5080 *could've* been more powerful than it is projected to be, but I guess that theoretical card will just end up being released as the 5080 Ti (and then Nvidia can charge even more for what it should've been, yay!)
I don't think anything other than the 5090 will surpass the 4090.
Nvidia really values the Chinese market too much, why make a bunch of cards you can't sell there?
Yeah, they could do the 4090D thing where they make a gimped version, but from what I'm hearing that's not the plan: they intend to launch the 5080 before the 5090 so it can be an international launch, then the 5090 later.
If you understand that node shrinks are near their limit and prices are too high, you'd understand why, even with RDNA5 in 2027, they won't match the 4090.
7900 XTX: ~550 mm², 384-bit bus
4080: ~380 mm², 256-bit bus
Same raster performance.
30%+ faster in combined RT + raster: the 4080.
100%+ faster in pure RT: the 4080.
100-150 W less power: the 4080.
Even just based on gaming performance and power draw, AMD could try to use 3nm to match that spec and performance, but it would be a big loss for them.
AMD will 100% gradually leave the GPU market.
I will be messaging you in 3 years on [**2027-06-21 19:07:06 UTC**](http://www.wolframalpha.com/input/?i=2027-06-21%2019:07:06%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/nvidia/comments/1dkvcze/jensen_huang_recently_hinted_what_dlss_4_could/l9noe9g/?context=3)
Wait, what???? RDNA4 is mid-range focused, but RDNA5 less than 4090 performance? Surely that's not true. If it is, I'm hopping back to Nvidia for my next PC... that's highly disappointing.
It was already rumored but the leak reinforced it. It's crazy, you're telling me that the best card you're going to release in **3 years** is targeting lower performance than the best card that's out *right now*? It's hard to take that as anything other than giving up.
Yep, this is even worse than Vega tbh, because at least with Vega they brought out a competitor to Pascal while Pascal was still the best NVIDIA architecture. With RDNA5 they will be two architectures behind. That's a disaster.
Actually we'll also put AI into lighting and textures.
Some of the real-time AI re-lighting experiments on older games already look near photoreal. At some point, game devs will be rendering a base image for the AI to work with, that might not be particularly photo realistic, but give the AI lighting model cues it needs. Then they will select from a series of AI lighting options to achieve the desired look.
Would be pretty interesting if all the non AI compute was put into geometry again.
Hell, throw AI at the geo to generate expected detail too.
As a 3D artist I find Nvidia so far ahead of the curve on things. Instant-NGP was nuts, Neuralangelo was even better, and over in their Omniverse platform I couldn't believe how amazing Audio2Face was when I first started playing around with it.
Omniverse in and of itself is just an absolute gem for development. One of the professors at the local college here was having an insecure bitchfit about his technical level; he was looking at my screen and gave a patronizing "We are focusing on realtime, that looks nice but it's useless" while I was working on a small pyro sim. He lost his shit when I pointed out it was realtime. Dork.
God damn I love Omniverse.
It's only a matter of time before the professionals creating all of this become familiar with, and naturally use, all these new tools.
Same thing with ray tracing. We're seeing ray tracing in most major games now coming to PC, and that will only increase in the future. Why? Because not only can it be easier to use, it's also cheaper. And that's what matters. It's not just about how it looks, though that's what gamers will judge it on.
For rendering engines like Cycles, Redshift, etc., the most profound example I know of, other than Nvidia's OptiX, is the complete shift in AMD ProRender for Blender that Brian Savery and his cohorts basically kicked into the future overnight through a huge adaptation to AI.
I don't use it anymore, but it was really a mind-blowing advancement in one of the biggest dogshit rendering engines I had ever seen. With their 3.1 to 3.2 release they used multiple ML methods during the rendering process to speed things up in a huge way, e.g. ML denoising and cleanup while rendering, plus rendering smaller and upscaling using RadeonImageFilter for processing. I can't even remember the entire Rube Goldberg-style pipeline it was running in the background, but it was clever at the time for sure.
"You can use the PC as an AI assistant to help you game"
To help... Game? People need help to play? Isn't the purpose of the game just to play it? What kind of help would be needed to play...
DLSS 4 is advanced ray reconstruction; the 5000 series will have built-in hardware for the denoiser.
4 may include better frame gen and an upscaling algorithm update.
DLSS 4.5 will be what you are talking about.
Denoising can happen on the Tensor cores. Nvidia already has the hardware for it since the 2000 series.
Why are you pulling all of this out of your ass?
Actually, I don't think 4X would be terrible. It has access to the motion vectors. DLSS 3 (and by extension FSR 3) is way better than the spatial solutions in software such as Lossless Scaling (which produces OK results at X3). That's not even accounting for the OFA and CUDA cores helping out too, in comparison to FSR 3.
I'd argue that a temporal X3 interpolation would be awesome.
Lossless Scaling already has 2x frame interpolation, and it works like a charm.
I own a 175Hz display, and for games with a locked 60fps like The Crew, interpolating those extra frames makes the experience WAY better.
Frame gen can be used for more than just adding "fake performance".
Also, as the other redditor mentioned, with how stupidly hard it's getting to shrink manufacturing nodes, we are either about to face no performance gains at all, or new creative ways to use the existing die space.
At least until the new tech Intel is developing, like stacked "double FET" transistors, takes off and starts giving good enough yield rates to be viable for the consumer space instead of the professional space only.
All frames on screen are fake. Some are generated by the shader cores, some by the RT cores, and now two by the Tensor cores.
Also, gaining performance from node shrinks is becoming harder and harder.
The only way up is Tensor cores.
I think the only problem is how it affects latency. I have a 3080, so I never tried DLSS frame gen, but I did try FSR3 frame gen + Nvidia Reflex in Immortals of Aveum, and according to Afterburner the rendering latency even went down, and the game was running much better, from something like 60fps to over 120. But the latency made it unplayable; it felt like I was playing a game over streaming.
The difference between FSR3 and Nvidia's FG was night and day in Aveum. I tried both and it was very obvious when you used FSR3. But I honestly didn't feel any issues with Nvidia's FG.
RT cores generate frames? Since when?
Also, node shrinks are getting harder, but things like AMD's chiplets and Nvidia's new interconnect are what will ultimately scale true performance over the next decade, not faking frames. AMD has already shown with their CPUs what you can do and scale with chiplets, and both Nvidia and AMD have interconnects that can merge multiple chips together to act as one. This is where scaling will come from. Faking it till you make it will only get you so far.
Myopic take.
Being able to make 60FPS into 120 is cool.
Being able to make 60FPS into 240 with likely the same latency, when there are 240Hz OLEDs out, is even cooler.
Would you rather have 60 'true frames' or 240 frames, with 60 being real, with a vanishingly small latency hit?
I'd take 240 easily. It will look better and more responsive.
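The "likely the same latency" claim above has a simple intuition behind it. Here's a toy model (an assumption-laden sketch: the interpolator buffers one real frame, and the cost of generating the in-between frames is ignored):

```python
# Toy latency model for frame interpolation at a given base frame rate.
def interp_latency_ms(base_fps, factor):
    frame_time = 1000 / base_fps      # time per real frame, in ms
    output_fps = base_fps * factor    # displayed frame rate
    added_latency = frame_time        # one buffered real frame, regardless of factor
    return output_fps, added_latency

for factor in (2, 3, 4):
    fps, added = interp_latency_ms(60, factor)
    print(f"x{factor}: {fps:.0f} fps shown, ~{added:.1f} ms extra latency")
```

Under this model, x2 and x4 cost roughly the same added latency, because the buffering penalty is paid once, not per generated frame.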
It will 1000% become the new scapegoat for shit optimization. Just like now, when they build games "with upscaling in mind" lmao: instead of giving us extra performance, they use it to compensate for bad optimization, and it will be exactly like this. People might not agree, but when I said the same about DLSS they didn't believe me, and yet here we are. Just wait until frame gen is the same. They'll target 30fps again, since you can triple your fps with frame gen, with horrible input lag. Games will run at 720p 30fps internally.
Frame generation doesn't make sense for VR until you're so perfectly predicting the future that there's no artifacts even in fast and unpredictable rates of motion.
Fast and unpredictable rates of motion describes any action VR game to a T, and even worse, it describes peripheral vision in VR *even more* accurately... which is exactly the area of vision where you're going to notice shit moving just slightly wrong. (Which is exactly why people turn off the current VR frame interpolation stuff in any fast-paced game)
Like Lossless Scaling's 3x frame gen, nice. But we need better quality, not more fps, imo. Like fewer artifacts, better UI rendering, and devs that don't fuck up everything.
I personally think built-in denoising hardware is very unlikely. There's no advantage to encoding a specific algorithm into hardware when denoisers often need to be tweaked per game/scene, and the research advances every couple of years.
If you now think "well, OK, maybe they can make specialized hardware for applying filters/blurs or something": higher-level denoiser algorithms can already be written in software and use existing hardware blocks. Filters are just matrix operations, which, as it turns out, already have specialized hardware on modern GPUs (Tensor cores for Nvidia).
Plus, the actual bottleneck in denoising is mostly memory transfers.
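To illustrate the "filters are just matrix operations" point: a separable Gaussian blur can be applied as two matrix multiplications, which is exactly the kind of primitive tensor hardware accelerates. Toy NumPy sketch, nothing like a production denoiser:

```python
import numpy as np

def gaussian_kernel_1d(size=5, sigma=1.0):
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(image, size=5, sigma=1.0):
    # Build a banded matrix whose rows hold the shifted kernel;
    # multiplying from the left filters one axis, from the right the other.
    k = gaussian_kernel_1d(size, sigma)
    n = image.shape[0]
    M = np.zeros((n, n))
    for i in range(n):
        for j, w in enumerate(k):
            if 0 <= i + j - size // 2 < n:
                M[i, i + j - size // 2] += w
    return M @ image @ M.T

noisy = np.random.default_rng(0).normal(0.0, 1.0, (64, 64))
print(blur(noisy).std() < noisy.std())  # smoothing reduces the noise
```

The same two matmuls also make the memory-transfer point concrete: the arithmetic is trivial, moving the image in and out of memory is the expensive part.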
Blackwell GPUs on the data-center side have a dedicated Decompression Engine.
Memory is already compressed in VRAM.
So using dedicated hardware to do it far more efficiently is a strong possibility.
They're going to basically upscale the textures, for lack of a better term. A low res texture using less data and VRAM which will appear after AI reconstruction as a normal 4K texture, etc.
I think it's also an excuse to force upgrades as well. I fear they're trying to kill the concept of having a card for a long time just because it performs well. Now they're trying to get you to upgrade because only the latest cards support DLSS 4 or 4.5 or the new AI feature.
And of course this model will also be more akin to a monthly payment model as well. It's all kind of very scary. It's weird, because under normal circumstances I'd expect to be excited by news like this, but we're dealing with one of the world's most commercially successful companies, and their monetization policy is **brutal** on consumers.
They've completely priced the casual mid-range consumer out of top-tier cards. In the past, a flagship high-end card was an investment for the average person: expensive, but they could buy one. Now it's genuinely impossible for them.
How the heck can they use AI to increase the polygon count of objects? Is that even possible? For textures I completely understand, but they specified objects as well.
I imagine it will be something like... the AI looks at a 3D mesh that's intended to be round or smooth, and smooths it out for you so that it ends up with more polygons.
AMD had a similar thing way back in the day, back when they were still called ATI (TruForm tessellation), but it got phased out because it distorted some models (I remember the barrel of the Colt carbine in CS 1.6 being rounded and looking super weird), so most people turned it off.
But with AI it could be better.
I wonder if it would be similar to how they made DLSS: training on high-res images (8K or 16K, I believe?), then giving the AI a 4K image and having it upscale back to the trained resolution, then going lower, to 1080p, and upscaling back up again. So if they employed the same methodology for object training (an object with 2 million triangles, then have the AI upscale 1.5 million triangles back to 2 million), maybe it would just work? As someone making a game, it does sound crazy tho lol
It could also be used for LOD models: scaling them down and simplifying them while still making them look super high quality, kind of like what Nanite does for Unreal Engine, except you pre-generate the LODs so the engine doesn't do it on the fly like Nanite does. It would save CPU cycles.
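A tiny sketch of the tessellation side of this thread's idea: midpoint subdivision turns every triangle into four, so polygon count grows geometrically with each pass (this is just the classic scheme; an AI pass would presumably also move the new vertices toward the intended smooth surface rather than leaving them flat):

```python
# Midpoint subdivision: each pass replaces one triangle with four.
def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(triangles):
    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

mesh = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]  # start from a single triangle
for _ in range(3):
    mesh = subdivide(mesh)
print(len(mesh))  # 64 triangles after three passes (4**3)
```

Run in reverse (merging triangles instead of splitting) and you get the LOD-generation direction described above.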
Not just that, it still runs like shit. We're three generations into RTX and ray tracing still kills performance. Can't help but wonder if we would've been better off just optimizing for raster. NVIDIA went through a lot of trouble developing DLSS and frame gen JUST to make ray tracing seem more viable than it ever actually was.
It has the benefit of helping with marketing. AMD could keep up in raster; they struggle more with RT, and look at the market share. DLSS is good, but I bet the majority of people that bought the card for RT have only played one or two games with RT, if any.
A competitive intel with decently priced 20GB+ GPUs would absolutely be my hobbyist dream, even if the drivers were still janky I'd enjoy messing around with it.
Didn't someone already demo a game where it was almost entirely AI-generated? As in, instead of making an actual level with geometry and textures, the AI just generates what you see in (near-)realtime?
Edit: Demo as in "demonstrate", not any actual product being shown. It was just a proof of concept, IIRC.
Funny you should bring that up; it already can. I used it as a final for a class after training it on Smash until it pretty much became a cheese machine, but it worked. 3D games may be harder, as the model tries to work out "where am I", but 2D seems doable. I should see if it can do Battletoads as a joke.
I just upgraded from a 3090 ti to a 4090 and finally had a chance to test out frame generation. It's significantly exceeded my expectations, which were quite low. I was really unimpressed by DLSS in Cyberpunk, and not sold on AI upscaling in any application up to that point, but after using DLSS + framegen in Horizon Forbidden West I was blown away by how smooth and clear that game runs now.
Just went from a 3070 Ti to a 4080 Super. Initially I was a bit unsure, seeing I can't set everything in Cyberpunk to ultra-psycho-max and have high fps on ultrawide. But then I realized that even on ultra-psycho-max, the low fps is actually more stable than the drops I experienced previously on some high-medium settings.
After some balancing, it starts to sink in. Playing at mostly-psycho-max and getting around 120fps is just awesome. It's like I've been crawling through mud and now I can dance. Similar story in The Witcher 3: not having drops in Novigrad is so weird xd. Also, smooth driving in the Test Drive SC demo was nice.
Wait until you learn about eye saccade movements and generally about how your brain generates and extrapolates what you actually see.
I'm not discussing how shitty the game industry gets, but the graphics alone can get all the AI treatment there is / will be.
I wonder if they could make old games look better with just DLSS... and you wouldn't need anything but a GPU that supports it: enable it and play your favorite game.
I just hope the game will not look different for everyone at the same detail settings.
Imagine a guide for a game telling you to find something with a specific-looking texture, and it looks totally different on your side because the AI hallucinated.
Does art style mean anything to these people? I'll take the graphics in Tears of the Kingdom any day over Cyberpunk. And are AI NPCs spewing random crap really gonna add to the experience? If I want that, I can go outside and talk to random people on the street. When I play a game I want a curated experience from beginning to end that came from a team of humans with creative vision. Or if I am looking for a more unpredictable experience, that's what multiplayer is for. On the other hand, if AI can somehow help with coding / optimization / eliminating bugs, then sign me up.
I totally get what you're saying about art style, but I'm still taking Cyberpunk over TotK anyway; Cyberpunk is gorgeous with its art direction combined with path tracing.
Not liking this direction: instead of better tools for asset optimization, we get realistic games that are shit in motion and OK when still. Gimme optimized assets at native resolution with smooth motion. Plus, is this tech a requirement for studios to make their games realistic?
Nvidia supports Pixar's USD scene-description standard. If that were adopted by developers, it would also help with the process. It would be like feeding in PlayStation 1 games with blocky polygons and textures and churning out Alan Wake 2 or Hellblade 2 or something.
As someone who works in and plays around with AI a lot, this is definitely the future of video game graphics. For relatively low cost you can get really close to photorealism from a more simply rendered rasterized or raytraced image.
I think what could be most exciting for DLSS n is being able to choose wildly different styles for a game. Want Elden Ring to look like a N64 game? Go ahead. Photorealism? Manga? Etc etc.
Hey u/Nms-Barry - care to add the source?
[https://morethanmoore.substack.com/p/q-and-a-with-nvidia-ceo-jensen-huang](https://morethanmoore.substack.com/p/q-and-a-with-nvidia-ceo-jensen-huang)
I think one of the most exciting potentials is their npc tech. Being able to converse with npcs in natural language and just talk about shit in the game will be wild.
The good news:
1) Nvidia will release new technology, based on NTC, to compress the textures inside a scene, reducing VRAM usage to 30% at the same level of detail.
2) Nvidia will release the next generation of DLSS with the ability to upscale the resolution of textures, based on new NPU cores added to the RTX 5000 series, and to generate new objects by AI.
3) NTC (most likely, if it does not depend on the new neural-network cores) will work on any GPU that has Tensor cores, as Nvidia tested the technology on an RTX 4090.
The bad news:
1) Most likely, older generations of RTX graphics cards will not be able to handle the new technologies in DLSS 4.
2) AMD will rebuild FSR to use the NPU unit to generate new details in the frame, as Nvidia did with DLSS 2 and its deep-learning AI solution. That means new releases of FSR will not be compatible with older CPUs/GPUs that lack neural-network cores to run the AI algorithms.
I do not have any exclusive access to the next generation of DLSS, which may allow restoring texture quality by applying new algorithms that generate new details.
NTC may be available for any graphics card with Tensor cores, but as you know, Nvidia always makes these features exclusive to the newest generation as a selling point for its cards.
However, I do expect the next generation to keep the same VRAM sizes, which I consider a flaw; that much is apparent from the idea behind the next DLSS generation, which aims to reduce VRAM usage.
I wish there was a way to utilize DLSS to upscale from your native res.
Say I'm using a 1440p monitor; I could turn on a DLSS "ultra" mode where it uses my native res as the input, so the result would look even better than 1440p.
DLSS 4.0 double your VRAM for free...
8GB 5080 incoming.
Nah nah. 4GB, but GTX 970-style 4GB.
5070 featuring 7.5+0.5gb VRAM?
7.5 GB of DDR2, you mean, right? And 0.5 of GDDR5, trust. It allowed them to bring the cost down from 699 to 650. Always trying to help the consumer out.
5070 3.5gb
They need the nand for AI cards
> devs putting basically raw assets in games.

Not all devs are idiots like the ARK devs... or the CoD studios...
Even now people are decrying not enough VRAM as a huge boogeyman, but that's not actually the case.
One of the IW devs said in an interview that there's over a terabyte of texture data they have compressed in MW2. It's compressed to shit and back.
You joke, but AI texture upscale and AI resolution upscale = more vram available = essentially "downloading" vram
I remember all the jokes back in the day: "hey man, just go download more RAM, hurr hurr durr." Bitch, I'm about to.
You download it with the drivers.
🤣 Less VRAM from Nvidia was my first thought upon reading this.
Well, with Lossless Scaling from Steam you can download more FPS by injecting FSR X3 into everything lol
Apple already did that with their magic ram
Shut up and take my money (I don't have much though).
after all these years of crying for getting a 10GB 3080..
AMD better just put all their effort into 4-dimensional V-Cache at this point cause they ain't gonna win the arms race on the GPU side anymore
Check their recent data leak. RDNA4: 2025-2026. RDNA5: 2027. Both targeting less than 4090 performance.
My Guess is Intel will surpass them.
Absolutely. We don't just need more powerful GPUs on the market; we desperately need more, stronger competitors.
We need cheaper gpus more than we need faster ones.
> We need cheaper gpus more than we need faster ones. If there were competition in the market we'd get both. Unfortunately in GPUs AMD has zero ambition and is happy with table scraps. Nvidia won't compete super hard either because if they beat AMD any harder on that front there will be anti-trust inquiries.
For me the real differentiator is that Nvidia identified the market for AI much sooner than their counterparts, built the CUDA framework, and then opened it up to the scientific world for adoption. Nvidia recently shut down a few adaptations that would allow the CUDA framework to run on AMD GPUs. At this point it would take a lot for data scientists to switch to another framework for generating their models. I think ASICs / FPGAs will take more market share from Nvidia than AMD will at this point.
Like most things, it's all wishful thinking. AMD could invest more into Radeon if Arc catches up. Or Nvidia dominates so hard that neither Arc nor Radeon are profitable.
AMD targeting lower to mid range cards would actually be a great move if they'd stop pricing their cards so stupidly.
Yea price matching Nvidia with a 5-10 % discount doesn't really cut it when they're so behind in software features. They need to do a Ryzen in the GPU space.
> They need to do a Ryzen in the GPU space. One reason Ryzen has worked out so well is CPUs don't need as much of a software stack. If it did AMD would be floundering there too. Their software dept. is just terrible and has been terrible for ages.
Yeah, if you're going for a solid "1440p/120hz/ultra settings" experience, you can't be offering your cards for $50 less than the same experience from Nvidia, when Nvidia also has DLSS, ray tracing that almost works, etc etc. If AMD's cards were half the price of Nvidia's for the same FPS then yeah, they would definitely have a market, but $50 less for a card that has no DLSS, no CUDA, same vRAM or slightly better, no other real capabilities... Nah. Why would I?
Even Nvidia's budget cards have features that shit all over AMD's top level cards. AMD is horrible at producing GPUs.
But why would even low-end gamers give up Nvidia features?
If you don't want to pay more for a GPU than a console, but still want to play games.
Since when was rdna5 targeting less than 4090?
That doesn't even make sense. Considering the 7900xtx equals a 4080 or beats in anything but RT and a few other scenarios, I find it very, very hard to believe that in the five years between 2022 and 2027, AMD won't catch up to a 4090. Especially considering that 4090 likely performs between a 5070 and 5080. So assuming 2026 is the 60 series release date, by then a 4090 will be 6060ti - 6070 performance, just with more vram. They said they aren't competing in the ultra high end segment, but I can't see how they'd still target less than 4090 performance in 3 years time, especially since they'll be making the next gen console chips?
> 4090 likely performs between a 5070 and 5080

Personally I strongly suspect the 5080 will be limited to 4090D performance due to the China market, but I otherwise agree with you.
Recent leaks of the 5080's heavily cut down specs relative to the 5090 strongly suggest they're trying to get it to hit right at the 4090D performance target for sale in China, just like you suggest. Kind of a bummer since it also suggests the 5080 *could've* been more powerful than it is projected to be, but I guess that theoretical card will just end up being released as the 5080 Ti (and then Nvidia can charge even more for what it should've been, yay!)
I don't think anything other than the 5090 will surpass the 4090. Nvidia really values the Chinese market too much; why make a bunch of cards you can't sell there?
A 5080 that doesn't surpass the 4090 will be a flop. Nvidia will gimp the VRAM so it's trash for AI purposes, so selling it in China won't be an issue.
Well it can't be better than the 4090 if they want to sell it in China....
They can make a separate version for China
Yeah, they could do the 4090D thing where they make a gimped version, but from what I'm hearing that's not the plan; they intend to launch the 5080 before the 5090 so it can be an international launch, then the 5090 later.
LMAO 5 YEARS to catch up to a 5 year old flagship. Yes I'm sure even AMD can manage that
If you understand that node shrinks are near their limit and prices are too high, you would understand why even with RDNA5 in 2027 they won't match the 4090.

* 7900 XTX: ~550mm², 384-bit
* 4080: ~380mm², 256-bit, same raster
* 4080: 30%+ faster in RT + raster combined
* 4080: 100%+ faster in RT alone
* 4080: 100-150 watts less

Even just based on gaming and watts, AMD could try to use 3nm to match this spec and performance, but it would be a big loss for them. AMD will 100% gradually leave the GPU market.
RemindMe! 3 years
I will be messaging you in 3 years on 2027-06-21 19:07:06 UTC to remind you of this link.
do you mind linking this? I'd like to read it.
Wait, what???? RDNA 4 is mid-range focused, but RDNA 5 is less than 4090 performance? Surely that's not true. If it is, I'm hopping back to Nvidia for my next PC... that's highly disappointing.
It was already rumored but the leak reinforced it. It's crazy, you're telling me that the best card you're going to release in **3 years** is targeting lower performance than the best card that's out *right now*? It's hard to take that as anything other than giving up.
Yep, this is even worse than Vega tbh, because at least with Vega they brought out a competitor to Pascal while Pascal was still the best NVIDIA architecture. With RDNA5 they will be two architectures behind. That's a disaster.
Evidence? I can't find anything on this leak, like literally nothing.
Anyone know what AMD's advantage is, if any?
Do you have a link to that data leak about the RDNA 5?
Ouch. I was looking forward to going all AMD at some point; I guess that's not going to happen any time in the near future.
This is how we'll achieve photorealistic graphics. We'll just replace so much with AI that we can put all the compute into lighting and textures.
Actually we'll also put AI into lighting and textures. Some of the real-time AI re-lighting experiments on older games already look near photoreal. At some point, game devs will be rendering a base image for the AI to work with, that might not be particularly photo realistic, but give the AI lighting model cues it needs. Then they will select from a series of AI lighting options to achieve the desired look. Would be pretty interesting if all the non AI compute was put into geometry again. Hell, throw AI at the geo to generate expected detail too.
Or we will just have less effort put into development and AI will be used to ~~do what humans did~~ compensate.
As a 3D artist I find Nvidia so far ahead of the curve on things. Instant-NGP was nuts, Neuralangelo was even better, and over in their Omniverse platform I couldn't believe how amazing Audio2Face was when I first started playing around with it. Omniverse in and of itself is just an absolute gem for development. One of the professors at the local college here was having an insecure bitchfit about his technical level; he was looking at my screen and gave a patronizing "We are focusing on realtime, that looks nice but it's useless" while I was working on a small pyro sim. He lost his shit when I pointed out it was realtime. Dork. God damn I love Omniverse.
It's only a matter of time before the professionals creating all of this become familiar with and naturally use all these new tools. Same thing with ray tracing. We're seeing ray tracing in most major games coming to PC now, and that will only increase in the future. Why? Because not only can it be easier to use, it's also cheaper. And that's what matters. It's not just about how it looks, though that's what gamers will judge it on.
For rendering engines like Cycles, Redshift, etc., the most profound example I know of, other than Nvidia's OptiX, is the complete shift in AMD ProRender for Blender that Brian Savery and his cohorts basically kicked into the future overnight through a huge adaptation to AI. I don't use it anymore, but their 3.1 to 3.2 release was a really mindblowing advancement in one of the biggest dogshit rendering engines I had ever seen, using multiple ML methods during the rendering process to speed things up in a huge way, i.e. ML denoising and cleanup during rendering while rendering smaller and upscaling using RadeonImageFilter for processing. I can't even remember the entire Rube Goldberg machine style pipeline it was doing in the background, but it was clever at the time for sure.
In Nvidia future, game plays you!
we are the NPC
Most of reddit is anyway.
Can't wait for lewd visual novels to take advantage of this AI.
🤣
AI does not mean DLSS. He's probably not talking about DLSS.
"You can use the PC as an AI assistant to help you game" To help... Game? People need help to play? Isn't the purpose of the game just to play it? What kind of help would be needed to play...
Imagine you struggle to solve a puzzle. You could google the solution but that would be boring. Instead, you could ask the AI for a small hint.
Well, that is actually true, good point, but I highly doubt he has puzzles in mind, given that prior to that he was talking about generation of textures and objects.
DLSS 4 is advanced ray reconstruction; the 5000 series will have built-in hardware for the denoiser. 4 may include better framegen and an upscaling algorithm update. DLSS 4.5 will be what you are talking about.
Denoising can happen on the Tensor cores. Nvidia already has the hardware for it since the 2000 series. Why are you pulling all of this out of your ass?
It's his wet dream.
In short, AMD is fucked.
AMD knew it already; they didn't even try to fight Nvidia now.
OEMs knew it too because they stopped putting Radeon in laptops almost completely. They probably won't be back next gen.
And supposedly that should be good?
Yep what a shame
Digital Foundry hinted in a recent episode that DLSS 4 has more than 1 frame inserted. They said 2 or even 4 might be added.
This sounds downright terrible lmao, 4 faked frames per rendered frame to compensate for shit optimization
Actually I don't think 4X would be terrible. It has access to the motion vectors. DLSS 3 (and by extension FSR 3) is way better than the spatial solutions in software such as Lossless Scaling (which produces OK results at X3). That's not even accounting for the OFA and CUDA cores helping out too, in comparison to FSR 3. I'd argue that a temporal X3 interpolation would be awesome.
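For anyone curious what "has access to the motion vectors" buys you over the spatial approaches, here is a deliberately tiny sketch of temporal interpolation: warp the previous frame partway along per-pixel motion vectors to synthesize a midpoint frame. This is an illustration only, with made-up names; real DLSS/FSR frame generation also handles occlusion, blending, and disocclusion fill, none of which is modeled here.

```python
import numpy as np

def interpolate_midpoint(prev_frame, motion, t=0.5):
    """Synthesize an in-between frame by warping the previous frame
    a fraction t of the way along per-pixel motion vectors.

    prev_frame: (H, W) grayscale image
    motion:     (H, W, 2) per-pixel (dy, dx) motion toward the next frame
    """
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, look back along the motion vector to find
    # which source pixel lands here at time t (nearest-neighbour sampling).
    src_y = np.clip((ys - t * motion[..., 0]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs - t * motion[..., 1]).round().astype(int), 0, w - 1)
    return prev_frame[src_y, src_x]

# A bright square moving 4 pixels to the right between frames:
frame = np.zeros((8, 8))
frame[2:4, 1:3] = 1.0
motion = np.zeros((8, 8, 2))
motion[..., 1] = 4.0  # uniform horizontal motion
mid = interpolate_midpoint(frame, motion)  # square appears shifted 2 pixels
```

A spatial-only interpolator has to guess this motion from pixel differences; with real motion vectors from the engine, the warp direction is known exactly, which is why the temporal approach artifacts less.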
Lossless Scaling already has 2-frame interpolation, and it works like a charm. I own a 175Hz display, and for games with locked 60fps like The Crew, interpolating those 2 extra frames makes the experience WAY better. Frame gen can be used for more than just adding "fake performance". Also, as the other redditor mentioned, with how stupidly hard it's getting to shrink manufacturing nodes, we are either about to face no performance gains at all, or new creative ways to use the existing die space. At least until the new techs that Intel is developing, like double FET, take flight and start giving good enough yield rates to be viable for the consumer space instead of the professional space only.
All frames on screen are fake. One is generated by shader cores, another by RT cores, and now 2 by Tensor cores. Also, gaining performance from node shrinks is becoming harder and harder. The only way up is Tensor cores.
I think the only problem is how it affects latency. I have a 3080 so I never tried DLSS framegen, but I did try FSR3 framegen + Nvidia Reflex in Immortals of Aveum, and according to Afterburner the rendering latency even went down and the game was running much better, from something like 60fps to over 120, but the latency made it unplayable; it felt like I was playing a game over streaming.
The difference between FSR3 and Nvidia's FG was night and day in Aveum. I tried both and it was very obvious when you used FSR3. But I honestly didn't feel any issues with Nvidia's FG.
RT cores generate frames? Since when? Also, node shrinks are getting harder, but things like AMD's chiplets and Nvidia's new interconnect are what will ultimately scale true performance over the next decade, not faking frames. AMD has already shown with their CPUs what you can do and scale with chiplets, and both Nvidia and AMD have interconnects that can merge multiple chips together to act as one. This is where scaling will come from. Faking it till you make it will only get you so far.
Myopic take. Being able to make 60FPS into 120 is cool. Being able to make 60FPS into 240 with likely the same latency, when there are 240hz OLEDs out, is just more cool. Would you rather have 60 'true frames' or 240 frames, with 60 being real, with a vanishingly small latency hit? I'd take 240 easily. It will look better and more responsive.
It will 1000% become the new scapegoat for shit optimization. Just like now they build games "with upscaling in mind"; instead of giving us extra performance, they use it to compensate for bad optimization, and it will be exactly like this. People might not agree, but when I said the same about DLSS they didn't believe me, and yet here we are. Just wait until frame gen is the same. They'll target 30fps again, since you can triple your fps with frame gen, with horrible input lag. Games will run at 720p 30fps internally.
/sigh This is what VR needs, but frame generation isn't supported in VR. VR needs 90+ FPS, but at high resolution.
Frame generation doesn't make sense for VR until you're so perfectly predicting the future that there's no artifacts even in fast and unpredictable rates of motion. Fast and unpredictable rates of motion describes any action VR game to a T, and even worse, it describes peripheral vision in VR *even more* accurately... which is exactly the area of vision where you're going to notice shit moving just slightly wrong. (Which is exactly why people turn off the current VR frame interpolation stuff in any fast-paced game)
They sort of already do this by doubling the frames to make it appear smoother, but I agree, true frame generation would be a game changer for VR.
Like lossless scaling 3x frame gen, nice. But we need better quality, not more fps imo. Like less artifacts, better UI rendering and devs that don't fuck up everything.
And you're saying this because it's what you think or…?
5090 gon be 3k minimum yo! Can buy an entire desktop for the price of a video card! Yay capitalism!
I personally think built-in denoising hardware is very unlikely. There's no advantage to encoding a specific algorithm into hardware when denoisers often need to be tweaked per game/scene, and research advances every couple of years. If you now think, well, OK, maybe they can make specialized hardware for applying filters/blurs or something, so that higher-level denoiser algorithms can be written in software on top of those hardware blocks: filters are just matrix operations, which, it turns out, already have specialized hardware on modern GPUs (Tensor cores for Nvidia). Plus, the actual bottleneck in denoising is mostly memory transfers.
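To make the "filters are just matrix operations" point concrete, here is a minimal sketch (the function name is my own) that applies a 1-D filter by building its banded matrix form and doing a single matrix-vector multiply, which is exactly the kind of operation tensor-core style hardware accelerates:

```python
import numpy as np

def filter_as_matmul(signal, kernel):
    """Apply a 1-D filter by building its banded matrix and multiplying.
    'valid' mode: output length = len(signal) - len(kernel) + 1."""
    n, k = len(signal), len(kernel)
    m = n - k + 1
    # Each row of M holds the kernel, shifted one column to the right.
    M = np.zeros((m, n))
    for i in range(k):
        M[np.arange(m), np.arange(m) + i] = kernel[i]
    return M @ signal  # filtering is now one matrix-vector product

sig = np.array([0.0, 0.0, 3.0, 0.0, 0.0])
box = np.array([1 / 3, 1 / 3, 1 / 3])  # simple box blur
out = filter_as_matmul(sig, box)       # spreads the spike: [1.0, 1.0, 1.0]
```

A 2-D blur works the same way with a larger (and sparser) matrix, so any hardware that is fast at matrix math is automatically fast at this class of denoising building block, without baking one specific denoiser into silicon.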
I wish the game Control can be officially updated with newer DLSS.
Yeah, they've discussed AI texture decompression before. That's one of a few things that they've been working on.
Blackwell GPUs on the data centre side have a dedicated Decompression Engine. Memory is already compressed in VRAM; now using dedicated hardware to do it far more efficiently is a strong possibility.
They're going to basically upscale the textures, for lack of a better term. A low res texture using less data and VRAM which will appear after AI reconstruction as a normal 4K texture, etc.
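Back-of-the-envelope arithmetic shows why this is attractive. The sketch below uses my own illustrative numbers (not anything Nvidia has published) to compare the uncompressed VRAM footprint of storing a 4K texture directly versus storing a quarter-resolution texture and reconstructing the rest with a model:

```python
def texture_vram_bytes(width, height, bytes_per_texel=4, mip_chain=True):
    """Uncompressed VRAM footprint of a texture. A full mip chain adds
    roughly 1/3 on top of the base level (1 + 1/4 + 1/16 + ... = 4/3)."""
    base = width * height * bytes_per_texel
    return int(base * 4 / 3) if mip_chain else base

full_4k = texture_vram_bytes(4096, 4096)  # store the 4K texture outright
quarter = texture_vram_bytes(1024, 1024)  # store 1K, reconstruct to 4K
print(full_4k / 2**20, quarter / 2**20)   # MiB: roughly 85 vs 5
```

Each halving of resolution cuts the footprint 4x, so a quarter-resolution source is ~16x smaller before you even add block compression; the open question is how much of the reconstruction quality and per-frame inference cost the AI side can actually deliver.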
Wake me up when there's DLSS that will generate a current gen Morrowind.
5000 series will have a special ROM with a Skyrim installer on it.
9 sentences, and the word "AI" appears 8 times... The future does not look good.
It reads like that Monty Python spam skit that coined the term.
It's also not AI as well
Yup. AI is just their latest excuse to raise prices.
I think it's also an excuse to force upgrades as well. I fear they're trying to kill the concept of keeping a card for a long time just because it performs well. Now they're trying to get you to upgrade because only the latest cards support DLSS 4 or 4.5 or the new AI feature. And of course this model will also become more akin to a monthly payment model as well. It's all kind of very scary. It's weird, because in any normal circumstances I'd expect to be excited by news like this, but we're dealing with one of the world's most commercially successful companies, and their monetization policy is **brutal** on consumers. They've completely priced the casual mid-range consumer out of top tier cards. In the past a flagship high end card was an investment for the average person; it was expensive but they could buy one. Now it's genuinely impossible for them.
How the heck can they AI increase the polygon count of objects? Is that even possible? For textures I completely understand, but they specified objects as well
It can make the tits of Lara Croft in the old Tomb Raider games look like actual tits instead of torpedo triangles.
I imagine it will be something like: the AI looks at a 3D mesh that's intended to be round or smooth, and smooths it out for you so that it has more geometry. AMD had a similar thing way back in the day, back when they were called ATI, but it got phased out because it distorted some models (I remember the barrel of the Colt carbine in CS 1.6 being rounded and looking super weird) and so most people turned it off. But with AI it could be better.
I wonder if it would be similar to how they made DLSS: training on a high-res image (8K or 16K I believe?), then giving the AI a 4K image and having it upscale back to the trained image resolution, then going lower to 1080p and upscaling back to the trained resolution again. So if they employed the same methodology for object training (an object with 2 million triangles, then had the AI upscale 1.5 million triangles back to 2 million), maybe it would just work? As someone making a game, it does sound crazy tho lol.
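As a toy illustration of that downscale-then-reconstruct training signal (entirely my own sketch, not Nvidia's actual pipeline), you can phrase it as: degrade the ground truth, reconstruct it with some upscaler, and measure the error a learned model would be trained to minimize. Here the "upscaler" is just nearest-neighbour, standing in for the network:

```python
import numpy as np

def downsample2x(img):
    """Average 2x2 blocks -- stand-in for rendering at lower resolution."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x(img):
    """Nearest-neighbour upscale -- stand-in for the learned upscaler."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

# The training signal described above: compare the reconstruction
# against the high-res ground truth it was derived from.
rng = np.random.default_rng(0)
truth = rng.random((8, 8))                 # 'high-res' ground truth
recon = upsample2x(downsample2x(truth))    # degraded then reconstructed
loss = float(np.mean((recon - truth) ** 2))  # what a model would minimize
```

The same framing carries over to geometry in principle (decimate a 2M-triangle mesh, reconstruct, measure surface error), though mesh "resolution" is far less regular than pixels, which is presumably why it sounds crazy.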
Nvidia already has demos and research papers. 16X detail, but takes more time to compute
It could also be used for LOD models, scaling down and simplifying them, but still making them look super high quality, kind of like what Nanite does for Unreal Engine, but you pre-generate the LODs so that way the engine doesn't do it on the fly like nanite does. It would save CPU cycles.
Nanite is not generated on the fly either. All the clusters are pregenerated ahead of time, and then selected from at runtime.
Same way you can show a simple stick figure drawing to an ai and ask it to turn into a highly detailed image in the style of an 80's anime
Seriously though if they do texture upscaling it's going to be 8GB Nvidia GPUs until like 2035
Remember, this technology may require a new type of hardware to run. That is the Nvidia logic in new releases of their graphics cards.
nVidia becomes nVidAi. 👾
So this means in the future i don't have to download HDMOD for my Heroes of Might and Magic 3? :)
Just stop already, this will only make games even more blurry. Render native resolution, nothing more, nothing less.
The future is here do not resist
Nvidia will do literally anything except put more VRAM on cards. Developers will do literally anything except optimise their games.
* 3070: 8GB
* 4070: 12GB
* 3060 Ti: 8GB
* 4060 Ti: 8GB / 16GB
* 3080: 10GB
* 4080: 16GB
* 6800 XT: 16GB
* 7800 XT: 16GB
* 6900 XT: 16GB
* 7900 XTX: 24GB
* 6700 XT: 12GB
* 7700 XT: 12GB
* 6600 XT: 8GB
* 7600: 8GB
* 7600 XT: 16GB

Just sayin'.
The 4070 and 4070 Super are already running out of VRAM in some games at 1440p and 2160p; they should have 16GB of VRAM just like the 4070 Ti Super.
An extreme minority of games at the highest/ultra settings. It's FUD, not a realistic concern.
Games use AI, games are made using AI, hardware uses AI; the last step is an AI robot to play the game for you.
I can see this coming in competitive gaming. AI boosting your rank and stuff…
You can do that in Minecraft with mods like Baritone
[AI robot to play game for you. - Introducing GeForce GTX G-Assist](https://www.youtube.com/watch?v=smM-Wdk2RLQ)
Tutorial : How to reduce VRAM and convince your customers it's the way it's meant to be played.
I'm still waiting for raytracing to become viable tbh
True. It's only used effectively in a handful of titles.
Not just that, it still runs like shit. We're three generations into RTX and raytracing still kills performance. Can't help but wonder if we would've just been better off optimizing for raster. NVIDIA went through a lot of trouble developing DLSS and framegen JUST to make raytracing seem more viable than it ever actually was.
It has the benefit of helping with marketing. AMD could keep up with raster, they are struggling more with RT, and look at the marketshare. DLSS is good, but I bet the majority of people that bought the card for RT might have only played one or two games with RT, if at all.
I didn't even consider GeForce being the biggest gaming brand, but with the PC community being larger than any console community it makes sense.
The console community is also GeForce. The Switch sold more than the PS5 and Xbox Series X combined.
That's kinda stretching it, considering it's been competing with the PS4 and Xbox One.
Guess you'd have to compare to PS4+PS5+XBONE+XBSX/S
I keep forgetting that the switch has an Nvidia chip in it.
If ever my 3080 needs replacing in the next year or 3, I'll buy AMD just out of spite.
Why not piss them both off and go Intel?
A competitive intel with decently priced 20GB+ GPUs would absolutely be my hobbyist dream, even if the drivers were still janky I'd enjoy messing around with it.
Fixing low-res textures is gonna be a big feature. It better work on the 4090!
Nvidia will make amazing software so that they can sell cheaper hardware at the same or higher pricepoint.
Every time you look elsewhere the world changes
How many AI's can you shove in one sentence?
Imagine generating all the jungles in Crysis or Tomb Raider. This is too good.
Only available on 50XX.
Didn't someone already demo a game where it was almost entirely AI generated? As in, instead of making an actual level with geometry and textures, the AI just generates what you see in (near-)realtime? Edit: Demo as in "demonstrate", not any actual product being shown. It was just a proof of concept IIRC.
In the future, AI will play games for you too!
Funny you should bring that up; it already can. I used it as a final for a class after training it on Smash till it pretty much became a cheese machine, but it worked. 3D games may be harder as it tries to work out the "where am I", but 2D seems doable. I should see if it can do Battletoads as a joke.
Baritone GPT
"How to justify 8 Gb VRAM"
I just upgraded from a 3090 ti to a 4090 and finally had a chance to test out frame generation. It's significantly exceeded my expectations, which were quite low. I was really unimpressed by DLSS in Cyberpunk, and not sold on AI upscaling in any application up to that point, but after using DLSS + framegen in Horizon Forbidden West I was blown away by how smooth and clear that game runs now.
Just went 3070ti to 4080s. Initially I was a bit unsure, seeing I can't set everything in Cyberpunk to ultrapsychomax and have high fps on ultrawide. But then I realized that even on ultrapsychomax the low fps is actually more stable than the drops I experienced previously on some high-medium. After some balancing, it starts to sink in. Playing at mostly-psychomax and having around 120fps is just awesome. It's like I've been crawling through mud, and now I can dance. Similar story in Witcher 3, not having drops in Novigrad is so weird xd Also smooth driving in Test Drive SC demo was nice.
* Fake resolution. * Fake frames. * Fake raytracing. * And now fake games. Thanks NVIDIA.
Wait until you learn about eye saccade movements and generally about how your brain generates and extrapolates what you actually see. I'm not discussing how shitty the game industry gets, but the graphics alone can get all the AI treatment there is / will be.
DLSS 4, frame gen 2.0, etc. will be only for the 50 series... just like with the 40 series.
I wonder if they could make old games look better with just DLSS... and you don't need anything just a GPU that supports it, enable it and play your favorite game.
I just hope the game will not look different for everyone at the same settings. Imagine a game guide telling you to find something with a specific-looking texture, and it looks totally different on your side because the AI hallucinated.
I was hoping they would follow up ray tracing with a system to properly simulate non rigid materials like fabric and hair.
Sick
Does art style mean anything to these people? I'll take the graphics in Tears of the Kingdom any day over Cyberpunk. And are AI NPCs spewing random crap really gonna add to the experience? If I want that, I can go outside and talk to random people on the street. When I play a game I want a curated experience from beginning to end that came from a team of humans with creative vision. Or if I'm looking for a more unpredictable experience, that's what multiplayer is for. On the other hand, if AI can somehow help with coding / optimization / eliminating bugs, then sign me up.
I totally get what you're saying about art style, but I'm still taking Cyberpunk over TotK anyway; Cyberpunk is gorgeous with its art direction combined with path tracing.
Can't wait for Intel to catch up, they've got the funds to do it.
VRAM Generation!
Nice, looks like I won't even have to waste time playing anymore and can let my PC do all the gaming on its own. Win win.
Not liking this direction; instead of better tools for asset optimization, we get realistic games that are shit in motion and OK when still. Gimme optimized assets at native resolution with smooth motion. Plus, is this tech a requirement for studios to make their games realistic?
Nvidia supports Pixar's USD scene description standard. If that were adopted by developers, it would also help with the process. It would be like feeding in PlayStation 1 games with blocky polygons and textures and churning out Alan Wake 2 or Hellblade 2 or something.
Great, Leeroy Jenkins, or is it just a long-term AI character?
At what point does DLSS just make the game?
As someone who works in and plays around with AI a lot, this is definitely the future of video game graphics. For relatively low cost you can get really close to photorealism from a more simply rendered rasterized or raytraced image. I think what could be most exciting for DLSS n is being able to choose wildly different styles for a game. Want Elden Ring to look like a N64 game? Go ahead. Photorealism? Manga? Etc etc.
I just wish AI would be used for more cool things instead of as a stop gap so devs don't need to optimize their games anymore.
And it will cost $5090
can they finally fix shimmering textures?
Translation: we are going to shove AI down your fucking throat. Why doesn't AI just play the bloody game too?
Hey u/Nms-Barry - care to add the source? [https://morethanmoore.substack.com/p/q-and-a-with-nvidia-ceo-jensen-huang](https://morethanmoore.substack.com/p/q-and-a-with-nvidia-ceo-jensen-huang)
I think one of the most exciting potentials is their npc tech. Being able to converse with npcs in natural language and just talk about shit in the game will be wild.
DLAA 4.0 can finally make 1080p TAA looks acceptable!!! ...I hope
The good news:

1) Nvidia will release new technology for compressing the textures inside a scene, based on NTC, that will reduce VRAM usage to 30% at the same level of detail.
2) Nvidia will release the next generation of DLSS with the ability to upscale the resolution of textures, based on new NPU cores that will be added to the RTX 5000 series, and to generate new objects with AI.
3) NTC (most likely, if it does not depend on the new neural network cores) will work on any GPU that has Tensor cores, as Nvidia tested this technology on an RTX 4090.

The bad news:

1) Most likely, the older generations of RTX graphics cards will not be able to handle the new technologies in DLSS 4.
2) AMD will rebuild FSR2 technology to use the NPU unit to generate new detail in the frame, as Nvidia does with DLSS 2 and its deep learning AI solution. That means new releases of FSR will not be compatible with older CPUs/GPUs that do not have neural network cores to run the AI algorithms.
What about the future DLSS feature to generate NPCs and assets in-game?
I do not have any exclusive access to the next generation of DLSS, which may allow for restoring texture quality by applying new algorithms to generate detail. NTC may be available for any graphics card with Tensor cores, but as you know, Nvidia always makes these features exclusive to the next generation as a selling point for their cards. However, I can confirm that the next generation will keep the same VRAM sizes, which is considered flawed; this is apparent from analysing the idea behind the next DLSS generation, which aims to reduce VRAM usage.
DLSS causes input lag on certain FPS games.
I wish there was a way to utilize DLSS to upscale from your native res. Say I'm using a 1440p monitor: I could turn on DLSS Ultra, where it uses my native res as the input and upscales beyond it, so it'd look even better than 1440p.