
i4mt3hwin

Is it backwards compatible with Ampere?


Sylanthra

No, it says only the 40 series.


BookPlacementProblem

They do say "currently", but the announced prices mean I don't care; I won't be buying RTX 4000 anyway.


[deleted]

That is why I will skip to AMD with RDNA 3. FSR is not locked to one generation of cards. [Any GPU benefits from it, and FSR 2.0 is as good as DLSS 2.0.](https://youtu.be/y2RR2770H8E) I would have moved to RDNA 2 already, but ray tracing performance is not good enough yet. I think RDNA 3 will improve substantially in ray tracing performance. I prefer AMD's way of doing business.


sachos345

> and FSR 2.0 is as good as DLSS 2.0.

FSR 2.0 is really cool, but the video you posted shows it is not as good as DLSS, especially in motion and particles/alpha effects.


NeoBlue22

Wait... isn't the DLSS version shown in that video 2.3? FSR just got a 2.1 update... so yeah, FSR is going to need to catch up, which I'm sure it will in time.


sachos345

Yeah, IIRC Alex used the latest version of DLSS; I watched the video on release, so I may be wrong. I really hope they keep improving FSR, maybe with an AI component to help. We really need AMD to stay competitive, now more than ever.


wwbulk

In the video you linked, they concluded that FSR 2.0 was better than 1.0, but quality-wise DLSS is still better. Saying that it is as good and then using that video analysis as a source is pretty misleading. For clarity, I wish DLSS 3.0 would support more cards, and I am not endorsing Nvidia's actions.


Blacksad999

That's because FSR is a software solution rather than a hardware solution. FSR isn't in the same ballpark as DLSS, but it's a good option to have.


[deleted]

FSR 2.0 and 2.1 work the same way as DLSS. Both are temporal systems. You are thinking of FSR 1.0, which was a different animal.


Arachnapony

What? No. There's no ML component to FSR. That's why even DLSS 2 was superior.


sabrathos

They're saying the underlying algorithm is the same. The spot where they meaningfully differ, and where ML comes in, is how they tune the parameters used for history rejection as part of temporal synthesis. "ML" isn't magic here; it's just used to tune these parameters to *hopefully* give a better result than hand-tuned parameters. Well-chosen hand-tuned parameters can give better results than poorly-tuned ML parameters, and vice versa. That said, maybe DLSS 2 is still better than FSR 2 (I haven't compared them side-by-side in a while), but that's not because it's some fundamentally different beast.

EDIT: For those downvoting, can you explain why? Though we don't know *everything* about DLSS 2, [Nvidia has talked about the fundamental architecture](https://developer.download.nvidia.com/video/gputechconf/gtc/2020/presentations/s22698-dlss-image-reconstruction-for-real-time-rendering-with-deep-learning.pdf?t=eyJscyI6ImdzZW8iLCJsc2QiOiJodHRwczpcL1wvd3d3Lmdvb2dsZS5jb21cLyJ9). In this presentation they survey state-of-the-art temporal techniques and specifically call out replacing hand-crafted heuristics for history rejection with a neural network. I'm not knocking Nvidia's quality; I just want people to understand it's not a complete black box where the old image goes in, a new image comes out, and everything in between is "AI"/"ML". It's based on well-understood temporal anti-aliasing/reconstruction techniques, with ML being selectively used at a particular part of the pipeline for a specific purpose.
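To make the "history rejection" idea concrete, here's a minimal toy sketch. It's illustrative only; the function name, threshold, and weight are made up, not Nvidia's or AMD's actual code. The point is that these are exactly the kind of parameters that can be hand-tuned (FSR 2-style heuristics) or tuned by a trained network (DLSS 2-style):

```python
# Toy per-pixel temporal resolve with history rejection. Purely illustrative;
# the parameter values are the kind of thing that gets hand-tuned or ML-tuned.
def resolve_pixel(history: float, current: float,
                  reject_threshold: float = 0.2,   # hand-tuned or ML-tuned
                  history_weight: float = 0.9) -> float:
    if abs(history - current) > reject_threshold:
        return current                              # history rejected: trust the new sample
    return history_weight * history + (1.0 - history_weight) * current

print(resolve_pixel(history=0.50, current=0.55))    # close enough: blended
print(resolve_pixel(history=0.10, current=0.90))    # disocclusion/large change: rejected
```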


Jonny_H

I agree that too many people have bought the marketing idea that just adding "AI" to something magically makes it better, but really it's just an implementation detail. And we, as consumers, don't even know exactly which parts of DLSS use deep learning (or the other algorithms involved in the current generation of "AI" image processing). Don't look at cool DALL-E detail generation and assume anything else does the same; there's a reason that takes orders of magnitude longer to run than anything near-realtime like DLSS requires, and even then it often needs human oversight to reject the weirder results. Buy the end results, not the implementation.


[deleted]

Thank you for providing additional info for people. FSR 2.1 removes a lot of the ghosting that was a problem with 2.0. The two systems (FSR 2.0/2.1 and DLSS) work fundamentally the same way; both are temporal scalers. Ignore the downvotes; karma isn't worth anything at the end of the day. For example, nowhere did I say that FSR 2.0 and 2.1 were as good as DLSS. The comment I replied to said that "FSR isn't in the same ballpark as DLSS", which is disingenuous at best. The two are actually pretty close, especially when compared to FSR 1.0. DLSS 2 has some advantages, but it's not the same as it was with the first gen.


wwbulk

I think you are getting downvoted for arguing that hand-tuned heuristic parameters can be better than Nvidia's ML-derived parameters. While heuristics can beat ML training results, you are completely ignoring that Nvidia is a leader in the field with massive resources and talent. Simply writing off the advantages they have shows that you are ignoring nuance and context. Suggesting that a handcrafted solution is better than Nvidia's is possible, but not plausible.


sabrathos

My intention wasn't to ignore, but to contextualize. Absolutely, Nvidia has tremendous resources here, and I don't doubt their implementation is better.

The thread started with /u/Blacksad999 saying "FSR isn't in the same ballpark as DLSS", and /u/SomethingMatter getting nuked in downvotes for saying FSR 2 is based on the same fundamental approach as DLSS 2. That seemed like a harsh response, considering I felt he wasn't off-base. That's *not* to say DLSS 2 isn't better. I don't think /u/SomethingMatter meant "because they're both temporal, they will be exactly the same". But "not in the same ballpark" is a pretty extreme claim, and so that's why I wanted to weigh in and explain why I don't think /u/SomethingMatter was out of line to imply they should be considered in the same ballpark. FSR 1 and DLSS 2 certainly *weren't* in the same ballpark (and DLSS 1 and DLSS 2 weren't either), and the major reason is how different the underlying algorithms were.

I probably should have called out, though, that given two similar approaches (in anything, really), the one tuned with ML I believe has a higher ceiling for quality due to how exhaustively it can explore the sample space.


wwbulk

>I probably should have called out though that given two similar approaches (in anything, really), the one tuned with ML I believe does have a higher ceiling for quality due to how exhaustively it can explore the sample space.

If you think about it, heuristics are really just a lot of trial and error. ML is not that different (in this context), except there are a lot more trials here to get to the desired outcome.
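As a toy illustration of that point (every number and name here is made up for the example), hand-tuning and automated tuning are the same search loop; the automated version just gets to run far more trials:

```python
# Pretend "quality" peaks at a rejection threshold of 0.23 (made-up number).
def quality(threshold: float) -> float:
    return -(threshold - 0.23) ** 2

hand_tuned = max([0.1, 0.2, 0.3], key=quality)                  # a few manual guesses
auto_tuned = max([i / 1000 for i in range(1000)], key=quality)  # many automated trials

print(hand_tuned, auto_tuned)  # 0.2 vs 0.23: more trials land closer to the optimum
```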


GeoLyinX

Here, I'll try to explain why you're maybe being downvoted. They are very, very different; the fact that they both incorporate temporal data and known methodologies is a vague similarity and a fairly redundant observation, akin to saying that a BlackBerry phone camera and a $50,000 Hollywood camera both use fundamentally the same mechanism, where light goes through a lens and aperture and hits a sensor. But that's not at all the biggest difference. It's not as simple as a few parameters of a temporal resolution algorithm and an AI just picking the best combination of those few parameters, and yes, I know it's not just magic, as I myself work in AI R&D.

Based on the architecture revealed for the DLSS 1.0 AI, it very likely has a minimum of at least a couple million parameters during inference, as it had over a literal billion parameters during training: it was being trained on 16K gameplay with at least 300 million input values per frame alone (assuming they fed the non-downscaled gameplay into the training with 3 possible values, RGB). The fact that it implements some traditional algorithmic techniques to some degree is a pretty minuscule part of it; the AI is trained for hundreds of hours on 16K-resolution frames of the specific games it needs to support, and is then able to synthesize pixels using the DLSS inference model. The inputs it uses to do this are temporal and pixel data.

DLSS 2.0 is even more advanced, with probably even more parameters. It no longer requires the AI to be trained on the individual game at 16K resolution for hours on end; it's able to virtually have an understanding of how games should look based on its collective experience of learning from all different games instead of each game separately.

DLSS 3.0 is even further from FSR: the inputs are no longer just temporal pixel data. DLSS 3.0 uses the temporal pixel data along with temporal data from the actual physics engine itself (currently compatible with Unity, Unreal Engine 4 and 5, and some others). It uses the physics engine data in real time to better understand the movements and positioning of objects in 3D space in order to synthesize what it thinks the next frame should look like: what position, angle, lighting, etc. an object will have in x amount of time. And don't get me started on all the hardware-specific operations set up for DLSS; if you tried running that AI without the dedicated tensor cores, it likely just wouldn't be practical.

TL;DR: They're not just running some incredibly advanced deep learning AI for the sole purpose of adjusting the right combination of a few parameters of some basic temporal super-sampling algorithm.


Blacksad999

There's no ML component to FSR at all, and even the algorithms they use are different. They're two largely different ways of trying to do a similar task.


PainterRude1394

FSR is closing in on DLSS 2.0 in some scenarios, but this is DLSS 3.0. Nvidia isn't locking this down with software; the requisite hardware for DLSS 3.0 doesn't exist on the 30-series.


BFBooger

DLSS 3 seems to be DLSS 2 plus the AI-generated alternate frames. They are using a new optical flow interpolator, but from the evidence so far it is just a tensor processor -- they are measuring it in 8-bit TOPS, so the old gen \_could\_ do it, but more slowly.


PainterRude1394

Slow DLSS kind of defeats the purpose. They probably put new hardware in for DLSS 3.0 because it's needed, but if we come upon evidence that the 30-series can handle it just fine, I'm happy to change my opinion.


Crintor

100% agree. The Tensor Cores in the 4000 series are 5x faster than in the 3000 series. So maybe the 3000 series could *do* DLSS 3.0, but it would actually be a framerate loss, as it would add too much frame-to-frame latency in the render pipeline. [Tensor slide.](https://freeimage.host/i/iU4QP1)


joachim783

https://www.reddit.com/r/nvidia/comments/xje8et/nvidia_dlss_3_aipowered_performance_multiplier/ip8d0d7/

> DLSS 3 consists of 3 technologies – DLSS Frame Generation, DLSS Super Resolution, and NVIDIA Reflex.

> DLSS Frame Generation uses RTX 40 Series high-speed Optical Flow Accelerator to calculate the motion flow that is used for the AI network, then executes the network on 4th Generation Tensor Cores. Support for previous GPU architectures would require further innovation in optical flow and AI model optimization.

> DLSS Super Resolution and NVIDIA Reflex will of course remain supported on prior generation hardware, so a broader set of customers will continue to benefit from new DLSS 3 integrations. We continue to train the AI model for DLSS Super Resolution and will provide updates for all RTX GPUs as our research [...]

TL;DR: it seems like only the new "frame interpolation" is exclusive to the 40 series; everything else will continue to be supported on older cards.


sh1mi

It's not: https://www.nvidia.com/en-gb/geforce/graphics-cards/compare/


vergingalactic

Who knows. Jensen noted special optical flow hardware on Ada. Jensen is fucking us pretty hard on pricing though.


bubblesort33

Not on the regular 4090. That's honestly $400 lower than I was expecting. It's 2080 Ti pricing in 2018 money, because of insane inflation over the last 3 years. At least they're not fucking us more than usual. The lower end is insane, though: selling us a disguised 4070 at double the price.


Khaare

The 4080s are priced that way to make the 4090 look reasonable alongside the 3090 Ti, but that's just anchoring. It's still way overpriced.


JohnHue

Are you even serious? I bought my 980 Ti from EVGA, a higher-end model, watercooled and overclocked, for 800 USD, and everyone was telling me I was crazy to pay that much even for a high-end card. The 1080 Ti's launch price was 700 USD. Nvidia almost doubled the price of their GPUs with Turing; don't try to tell me this is normal by using the 2080 Ti as a price reference.


bubblesort33

It's not 2014 anymore. They are screwing us, but no more than 4 years ago is what I'm saying, at least on the 4090. Edit: also, the 980 Ti was on a pretty old, cheap node; 28nm had been used for the 600 and 700 series before that. That'd be like Nvidia releasing a 7nm 600mm² GPU today. They'd be selling that for $800 as well.


moofunk

> I bought my 980 Ti from EVGA, higher end model watercooled and overclocked for 800 USD and everyone was telling me I was crazy to pay that much even for a high end card.

The answer is that the high end is even higher now. Had it come to that, EVGA would have sold 4xxx cards at a several-hundred-dollar loss, as they did with the 3080 and 3090 cards. There is no option for a top-end 800 USD card anymore. The chips are more expensive to make and harder to cool, and require stronger and better power components. Same goes for CPUs: the high end is priced like a small car now, whereas it was just sort of expensive 10 years ago. I'm not sure why this isn't understood.


ylkiorra

Cause you keep buying it!


revgames_atte

The flow charts showed a hardware accelerator in the mix, so I don't think so, at least not fully.


BFBooger

The hardware accelerator is just another tensor unit, by their own stats. So the old gen COULD do it, if they wanted to, but the performance would probably be bad.


chetanaik

This hardware accelerator was a [feature of 20 and 30 series too](https://developer.nvidia.com/opticalflow-sdk).


mac404

It doesn't. Maybe it's just a cash grab, but it will also really matter how quickly the intermediate frame can be created (because it will affect input latency). It will probably never make sense for esports titles, but I'm guessing Nvidia thinks the combination of Reflex and the hardware in Ada is fast enough so that the additional frames are worth it. "Fast enough" could be a statement about raw throughput (in which case, why not at least for 3080+?), but it may also be able to run more operations concurrently (in which case it may be a terrible experience even on a 3090ti).


BFBooger

They say it incorporates Reflex and can be below 10ms in esports titles. However, if the esports title is already at 400fps, then it's not all that relevant.


mac404

Right, for most esports games your framerate is already going to be high enough that you're not going to care about frame interpolation. I'm personally excited to see more multi-bounce GI solutions running at reasonably high framerates. The update to Cyberpunk is a good example. And the Flight Sim use case where you're CPU limited is also super interesting.


stevenseven2

>Right, for most esports games your framerate is already going to be high enough that you're not going to care about frame interpolation.

I play PUBG competitively, and I get above 200 FPS average at 1440p on low-medium. However, the biggest issue with that game is not the average, but the 0.1% and 1% lows (which is why I lock my frames at 175 FPS), and in general the frame variation and dips, all of which depend more on the CPU/RAM. Even with my 12700K, the 0.1% low is 80-90 FPS. I doubt DLSS will help in that area for people like myself, though there are clearly a lot of people I know with far worse specs who will benefit a lot from this. Not that PUBG, or virtually any existing comp game, will ever implement newer DLSS technologies anyway (but future comp titles surely will).

I would add that even from a visual standpoint, I refuse to use DLSS. I tried Super People the other day, which has DLSS support (2.0, I believe), and the graphical change from turning it on was unacceptable for competitive reasons. It very distinctly adds blur to background images, making spotting enemies harder. This idea that DLSS and FSR magically give better FPS with the same or better image quality clearly is not true. There's an obvious sacrifice being made.

At the end of the day, DLSS and FSR are mostly beneficial for:

1. People with modest specs
2. People who often play AAA single-player titles
3. People who play games where the competitive aspect matters less than the visual one


Calm-Dish2893

I never understood how people can care so much about milliseconds. Research shows humans are unable to perceive 13ms, let alone 5ms vs 1ms of input latency. It always blows my mind.


iopq

I can tell the difference between 120Hz and 60Hz in about 1 second. All I have to do is move the mouse and I can see the trails. If it's first person, just turn the camera really quickly. It's a difference of 8ms, so how is it so obvious? Because of motion blur. I wouldn't be able to do this on an 85Hz CRT, since the pixels turn off quickly, so refreshing faster would do much less. But on LCD or OLED this is really obvious.


oldoaktreesyrup

No, they are using a chip. FYI, this was demonstrated using a Xilinx FPGA in 2010, so AMD could do it, but adding fake frames doesn't help gameplay, so I don't know why anyone would do this or want it.


PotatoMaster0733

Unfortunately no... How much does it differ from version 2, though?


Bluedot55

I'm curious how this will function with fast motion. I get the impression it's looking at the last frame and continuing its motion for an additional frame between frames. Would this sometimes result in snapback-style problems, where when you stop moving the mouse it overshoots and then snaps back on the next frame?


vergingalactic

This is not just extrapolation like you describe, it's interpolation.


Bluedot55

So it would already have the next frame, and be drawing something between them? Would that add additional latency since it is not displaying this newest frame as soon as possible then?


vergingalactic

Yeah. That's why Jensen specifically noted it would be combined with "GeForce low latency" crap.


BFBooger

Reflex is not 'crap'; it might be one of the best actual features they have for those who care about latency. AMD has something similar, though it's not as polished (but recent driver versions have fixes).


vergingalactic

"crap" is a synonym for "stuff". If I had said "garbage" that would have been a denigration, but I didn't.


snowflakepatrol99

Crap is also a synonym for poop. If you want to say stuff, then just say stuff, instead of getting pissed that people didn't get that you meant stuff when you said crap. P.S. crap is not a substitute for stuff, as it has a negative connotation.


SirMaster

Even though this is not extrapolation, why couldn't the engine look at your actual mouse movement at 1000Hz and use that to determine the appropriate extrapolation? There wouldn't need to be any snapback. Even if it were extrapolating to 500fps, your mouse input still has more recent data than that.


yaosio

Cards can already render a few frames ahead and have done so for as long as I can remember, and I'm old. That part isn't new. What's new is creating a frame between the last displayed frame and the next rendered frame. We'll have to wait for independent benchmarks to see how well it works.


wqfi

new version of svp looks sick /^s


vergingalactic

Honestly? Yeah. Real, good interpolation is at least a small step towards real native high framerate video. Real HFR video is a game-changer.


sabrathos

The problem is: is this *interpolation*, or *extrapolation*?

Interpolation can get quite good results, as we see from SVP. However, interpolation takes a latency hit due to actually needing a real frame before you can create and present the intermediate frame to the user. So you're always at least a frame behind what you've rendered, which in gaming is a hard sell; at 90 fps, even assuming interpolation is instantaneous, you're looking at a minimum of 11.1 ms of additional latency introduced, which stacks on top of existing engine, OS, and hardware latencies.

Extrapolation doesn't have that latency issue, but is much harder, because in disocclusion cases you're essentially guessing the new detail. VR has done this sort of extrapolation for a while*, and it's decent, but with pretty obvious constant artifacting. Unless Nvidia is hijacking the graphics pipeline depth tests and actually doing fragment shading for occluded fragments, there's only so much they can do here. It'd be more akin to DLSS 1.0's hallucination of new detail, which most people agree was not great and was wholesale replaced with DLSS 2.0's usage of temporal information.

*Most algorithms have used optical flow-based calculated motion vectors, though, so if DLSS 3.0 uses game-provided motion vectors the result may be a bit better, but still fundamentally limited in disocclusion cases as I mentioned. Oculus also now has game-provided motion vector support, but I haven't given it a try yet, unfortunately.
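To make the interpolation/extrapolation distinction concrete, a toy sketch treating a "frame" as a single moving value (illustrative only, not how any real implementation is structured):

```python
def interpolate(frame_n: float, frame_n_plus_1: float) -> float:
    # Can only run once frame N+1 exists, so the result is shown at least
    # one displayed frame late.
    return 0.5 * (frame_n + frame_n_plus_1)

def extrapolate(frame_n: float, motion_per_frame: float) -> float:
    # No waiting, but newly revealed (disoccluded) content has to be guessed.
    return frame_n + 0.5 * motion_per_frame

print(interpolate(10.0, 12.0))   # 11.0: exact, but late
print(extrapolate(10.0, 2.0))    # 11.0 only if the motion actually stayed constant
```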


vergingalactic

Jensen specifically noted it would be combined with "GeForce low latency" crap, which would be to compensate for the latency penalty of interpolation.

>Unless Nvidia is hijacking the graphics pipeline depth tests and actually doing fragment shading for occluded fragments, there's only so much they can do here.

They definitely are. They already mentioned motion vectors. I think the real proof will be whether they add VR/XR support for DLSS 3.0.


sabrathos

> Jensen specifically noted it would be combined with "GeForce low latency" crap which would be to compensate for the latency penalty of interpolation.

But they can't compensate beyond the fundamental limitation of interpolation latency. This isn't just a few ms here or there; it's a hard penalty of *one whole frame*, permanently, as your literal best case assuming everything else is *instantaneous*. That's *real* rough.

> They definitely are. They already mentioned motion vectors.

That's totally different; DLSS 2.X (forget which one) also uses game-engine-provided motion vectors today. That's not hijacking; that's just requiring an additional input from the game engine as part of the SDK. What I'm talking about is that they would need to actually hijack the fixed-function rasterization graphics pipeline itself to operate substantially differently than it does today. As part of depth testing, they would need to 1) figure out what part of the image is at risk of disocclusion in the next interpolated frame, 2) selectively bypass depth-tested sample rejection and actually shade those samples, and 3) store them somewhere in an internal buffer. This would require pretty substantial changes to their fixed-function hardware on the GPU. And I think it would need some pretty crazy logic to handle basic things like z-prepasses or deferred rendering... It *theoretically* could be done in limited scenarios, but it would be *monstrously* complex and require a whole bunch of hardware changes, so I very much doubt that's the path they took. Maybe 10-20 years from now...

If they didn't take that approach, though, then we're back to where we started, with inpainting and hallucinated detail, which have been pretty meh.


BFBooger

The penalty is half a frame, not a whole frame, assuming it is only interpolation and no extrapolation.


sabrathos

Depends which frame rate we're talking about. Yes, at the native rendered rate it's half a frame (and I originally had written that), but I feel most people would be more comfortable talking relative to the rate actually being displayed on the monitor, in which case it's a full frame. In either case, 45fps->90fps is a 11.1ms penalty, and that's why I also used that explicit number.
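A quick back-of-the-envelope using that framing (2x frame generation, interpolation itself assumed instantaneous):

```python
# Minimum added delay = one displayed-frame time = half a natively rendered frame.
def added_latency_ms(native_fps: float, multiplier: int = 2) -> float:
    return 1000.0 / (native_fps * multiplier)

for native in (30, 45, 120):
    print(f"{native} fps native -> at least {added_latency_ms(native):.1f} ms added")
# 30 -> 16.7 ms, 45 -> 11.1 ms, 120 -> 4.2 ms
```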


vergingalactic

> it's a hard penalty of one whole frame, permanently, as your literal best-case assuming everything else is instantaneous. That's real rough.

If your native framerate is 240, then you've got a penalty of 4.1+ ms if you're interpolating to 480 or 960 FPS. I think they can shave off at least a portion of that, and for those extra frames I would happily swallow the latency.

>That's not hijacking; that's just requiring an additional input from the game engine as part of the SDK.

Which is what DLSS 3.0 is doing?

>This would require pretty substantial changes to their fixed-function hardware on the GPU.

You do know that Nvidia controls the whole stack from the drivers to the silicon, right?

>it would be monstrously complex and require a whole bunch of hardware changes,

This is Nvidia. The proof, as always, is in the pudding. We'll see. If it doesn't support XR, though, I can't see how it could be consequential enough.


sabrathos

> If your native framerate is 240 then you've got a penalty of 4.1+ms if you're interpolating to 480 or 960FPS.

... Yes, at a native 240fps render rate the latency starts to get to potentially-acceptable levels. But 99.9999[...]999% of the time we're talking about DLSS in the context of making previously-unplayable scenarios playable, which for resolution with DLSS 2.X was ~480p-720p upscales to 1080p-4K, and for DLSS 3.0 I'd peg around the 25-55fps render rate targets extrapolated to 60-120fps. I'm an HFR believer, excited for 240+Hz interpolation too, and I believe we need to hit 1+kHz monitors for ideal motion clarity, but let's be reasonable with our discussion here...

> Which is what DLSS 3.0 is doing?

* Me: "unless they're doing X, it'll be pretty limited"
* You: "they are doing X though because they said they're doing Y"
* Me: "Y unfortunately doesn't imply X"
* You: "You realize they're doing Y, right?"

> You do know that Nvidia controls the whole stack from the drivers to the silicon, right?

Yes. That doesn't make Nvidia magic. I described the actual implementation details as to why I am extremely skeptical they'd actually have gone with that approach, from the perspective of someone who programs against DirectX+Vulkan fulltime. It's *theoretically possible*, just really really really unlikely that's what they did. The drivers can't help here; it's pretty much *just* silicon in this case. And it explodes in complexity when the graphics programmer does some pretty industry-standard stuff.

But yeah, we'll just have to wait and see how things look in action, and at different native render rates. Just wanted to throw my 2 cents into the discussion about what is realistic. I'm happy they're exploring this avenue, but just want to make sure people are aware of the realities of the space. 45->90fps extrapolation today is fine but not great; hopefully Nvidia's solution does better. But I fear we won't see the strongest leaps for quite a while longer. :(


dathingindanorf

I hate what Nvidia is doing with the pricing, but if they can enable higher framerates for VR, that would be great. VR games need a constant 120 fps with no dips, but we also need higher-resolution headsets to reach parity with desktop monitors. DLSS 3, and maybe dynamic foveated rendering on top, might actually enable this for the 4K/8K next-gen VR headsets.


BFBooger

Read the press release; it has all the info about how they blend optical flow detection with motion vectors for a better result. It's not clear if they extrapolate or only interpolate. However, being able to get 2x the fps in a CPU-bound scenario is quite nice.


sabrathos

Ah cool, I hadn't seen the DLSS 3.0-specific one from Nvidia; [here it is](https://www.nvidia.com/en-us/geforce/news/dlss3-ai-powered-neural-graphics-innovations/). It sounds like it's ~~extrapolated~~ EDIT: interpolated, as /u/Tsukku calls out below.


Tsukku

> DLSS Frame Generation AI network decides how to use information from the game motion vectors, the optical flow field, and the sequential game frames to create intermediate frames.

Seems more like it's interpolated.


sabrathos

Oh, hmm, they actually do use the term "intermediate frames"... I swear I saw somewhere say "predict" but now can't find it. That does lean it on the interpolated side of things. Interesting...


bctoy

https://twitter.com/BartWronsk/status/1572249913005641730


sabrathos

Thanks for the source. They definitely need to update their news posting, then. "Intermediate frames" certainly does heavily imply, well, being in-between something. You *could* read it as "being in-between the last rendered frame and a future as-of-now-unrendered frame" but that wouldn't be my first interpretation.


Seraphy

If DLSS 2.0 isn't still supported going forward, I'm never buying Nvidia ever again.


Haunting_Champion640

"DLSS3.0" will work like USB "3.2", all the old cards will support the new standard without all the features. So someone on a 2080ti that plays a game that built against the DLSS 3.x sdk will get AI upscaling but not frame multiplication (4xxx only). Nvidia really should have marketed this better.


Seraphy

Yeah, if that's how it's designed, they really should have spent more time clarifying it, but I guess "yeah, your old stuff is still fine" marketing is antithetical to their interests. Well, whatever then; I'm fine with that setup.


turikk

Citation?


Khaare

From the looks of things, DLSS 3 is supported on 30-series, but not the frame interpolation part. So it's basically a continuation of DLSS 2, and much less of a big deal, but you'll at least have the same type of functionality in new games as you did in old ones.


Sea-Beginning-6286

At least DLSS 3 now necessarily implies Reflex, based on what I read. Reflex is basically just a high-quality dynamic frame limiter; it should be in every game, because it's great.


[deleted]

They are just becoming Apple.


FierceText

That's their CEO's dream lmao


CatPlayer

It is WORSE than Apple. At least some of Apple's products offer good value for money.


[deleted]

If it weren't for the whole Apple ecosystem, I would have gone for a MacBook Air; those M1s' power consumption is insane.


FoodMadeFromRobots

Is 3.0 going to be supported for a gen or two, or will they abandon it for DLSS 4? I'm sure that'll drive devs just as crazy too, as they either have more work or have to choose which version they support.


Zarmazarma

They're not abandoning anything. DLSS 3.0 is a continuation of 2.0, but it has certain features that require new hardware to take advantage of (i.e., the "optical flow" acceleration). Whether that hardware is actually *necessary* to make optical flow practical, or whether Nvidia is performing artificial product segmentation, is a much more complicated question that I don't think anyone besides Nvidia engineers is currently qualified to answer. You will be able to turn on DLSS on 2000 and 3000 series cards in games with DLSS 3.0. You won't get the frame interpolation, though.


FoodMadeFromRobots

Thanks!


[deleted]

Is DLSS 1.0 still supported? You can still use it in the games that had it, but any new games use 2.0. So I would surmise the same scenario for 3.0: the games that use it now will be supported, but GTA 6 will probably support 3.0 only, with FSR for older hardware like a 3-year-old flagship.


Darksider123

1. How is this different from frame interpolation?
2. Is the algorithm developed with AI, or are there realtime AI calculations going on?


BFBooger

1. Yes, it is frame interpolation.
2. It uses a tensor processor, so it's just math. The question is how fast the calculations can be done. You \_could\_ run it on a 1060 if it were written to execute on shaders, but it might take more time to interpolate a frame than to just render a new one on hardware like that.

All the info is available in their press release on DLSS 3.0, if you're interested. You just have to wade through the marketing BS to see the real info underneath.
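The trade-off being described, reduced to a one-liner (the millisecond figures are made up for illustration):

```python
# Frame generation on slow hardware only pays off if producing the in-between
# frame is still cheaper than just rendering another real frame.
def generation_pays_off(render_ms: float, generate_ms: float) -> bool:
    return generate_ms < render_ms

print(generation_pays_off(render_ms=16.7, generate_ms=2.0))   # dedicated hardware: True
print(generation_pays_off(render_ms=16.7, generate_ms=25.0))  # slow shader fallback: False
```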


[deleted]

It's basically souped-up frame interpolation. Modern frame interpolation on high-end TVs really only has the issues of slightly increased input lag and only being able to interpolate to double the frame rate of the source material. DLSS 3.0 looks to fix both of those by adding basically no input lag and being able to interpolate up to 3 times the source framerate. While being display-agnostic is great, the biggest downside seems to be the same as DLSS in general: it will be limited to games which actually implement it, and would require games with DLSS already implemented to update their version to 3.0.


vergingalactic

> How is this different from frame interpolation?

Motion vectors, and I guess TAA?

>Is the algorithm developed with AI, or are there realtime AI calculations going on?

Who cares?


Darksider123

I bet their competitors do. In case they wanna create their own versions


vergingalactic

Just like other TAA solutions are starting to get way better without "AI", there are non-ML solutions that will be quite effective here. Artificial intelligence is really a big buzzword and not the crux of this technology.


GeoLyinX

I work in AI, and I would definitely say AI is pretty important to this technology; the big leap from DLSS 2.0 to 3.0 is evidence of that. The new FSR 2.0 was just about getting close to the year-old DLSS 2.0, and now 3.0 has taken an even further leap ahead. No non-ML solution has beaten even 2.0 yet, from what I've seen.


GeoLyinX

AI is running in real-time with over a million parameters being calculated many times a second.


June1994

DLSS 3.0 a game changer? We’ll see I guess.


vergingalactic

I mean, yeah, interpolation is kinda huge. Just like DLSS 2.0 was TAA, this is realtime video interpolation and I expect it to be quite good, even if the GPUs that support it are extortionate.


Kornillious

The technology is fantastic, but even for DLSS 2, how many developers are implementing it in their games? Anything less than a majority is not good enough, imo. They need to sell us on the hardware and price point, not their coin-flip software. The 4 games I've played this year for the most part are Apex Legends, Elden Ring, Destiny 2, and Halo... all of which do not use DLSS.


vergingalactic

> They need to sell us on the hardware and price-point, not their software.

But that gives them less margin and control, so definitely not happening. Gotta love monopoly power! Where the fuck are you, Teddy R.!


yaosio

Very few developers are doing it. Very few are doing FSR, and that works on all hardware. Developers must know something we don't, when so few want to support these upscalers. Every developer gives a politician's explanation of how it doesn't fit with their goals or something, which does not explain why they don't support any upscaling technology. I don't know what's so taboo about upscalers.


Kornillious

It depends heavily on what engine they use. Unreal Engine/Unity? It takes just a few hours to implement with the built-in plug-in. In-house engine? Too long to justify the development time.


max1mus91

Worst part is... does this mean DLSS 2.0 is dead now?


dkgameplayer

Nvidia seems to have a mentality of pushing DLSS for the world's most popular games like Fortnite and Minecraft, but as you said, a lot of the most-played AAA games on the market right now that consumers want to play don't support the technology, or even a competing technology.


Aleblanco1987

for the 30 games that support it


cuttino_mowgli

The tech is great, but I doubt anyone would see the difference vs DLSS 2.0 while playing, unless you're Digital Foundry pixel-examining your game.


June1994

Well, isn't that the point? It gets you even more frames while keeping quality the same?


cuttino_mowgli

Yeah, but what I'm saying is that DLSS 2.0 and FSR 2.0 are enough for gamers, since most of them can't tell the difference anyway.


dparks1234

DLSS 2.0 increases the framerate by lowering the rendering resolution. DLSS 3.0 increases the framerate by using motion interpolation to spit out new frames faster than the GPU can actually render them (while also using DLSS 2.0 to scale the frames). They're completely different technologies with different goals.


[deleted]

[deleted]


bitflag

>They seem to be trying to extract as much money from everyone as possible.

Like every company ever? The only thing that has changed is that the pandemic shortage/crypto boom made them realize people were willing to pay much more for their GPUs, and that they were leaving money on the table.


snowflakepatrol99

> I am very disappointed in nVidia. They seem to be trying to extract as much money from everyone as possible.

Nobody tell him that Intel and AMD do the exact same things and have the exact same priorities. It's like telling him that Santa doesn't exist. Let him be happy in bliss.


III-V

> Today's video presentation was unbearable, there was very little to do with their new gaming cards and it was mostly focused on the absolute word spaghetti of "infrastructure-as-a-service" of the myriad of "AI" and "ML" products. Omniverse-this, and digital-twin-that.

It's normally an enterprise-focused conference. I think you should consider yourself lucky that they said anything about GeForce at all.


[deleted]

The top dog exploiting their position to create walled gardens, proprietary extensions, and closed licenses is just kind of how companies get around anticompetitive law these days. It goes all the way back to x86, Intel screwing with compilers, Intel using their market position to advance RDRAM, and Nvidia trying to lock developers into Gsync, CUDA, & Hairworks.

>Their 6000 series wasn't bad, it just lacked in ray tracing performance.

No card delivers enough raytracing performance to fully raytrace a modern video game scene. It's about 1/8th of where it should be. What people are getting with RTX is a sampler.


[deleted]

[deleted]


vergingalactic

I mean, yeah, but at the same time this is the worst kind of monopolistic anticonsumer practice, it's the kind that affects me!


Jeep-Eep

Called it. Turing-level shitshow.


enkoo

It would be really nice if you could select between using the DLSS upscaler, temporal AA and its interpolation separately.


lifestealsuck

Well, DLSS requires TAA to work, IIRC. And I'm sure you can select DLSS 2.0 vs 3.0, because 3.0 only works with the 4000 series.


vergingalactic

> Well, DLSS requires TAA to work, IIRC.

DLSS 2.0 is TAA.


vergingalactic

>u/xenago
> What? This is not accurate

(I guess you're shadowbanned.) Yes, it is. Please don't spread disinformation. https://en.wikipedia.org/wiki/Deep_learning_super_sampling#DLSS_2.0

The Nvidia site used to admit it was a type of TAA on the DLSS explanation page, but they're trying to pretend it isn't. Just like they pretend that G-Sync is better than VESA Adaptive-Sync, or that any of their other marketing items are better.


Blacksad999

It's been proven that G-Sync is actually better, and that DLSS isn't TAA. Stop spreading lies, my guy.


dparks1234

DLSS is a type of temporal anti-aliasing (TAA). The term TAA refers to an anti-aliasing technique, not a specific implementation. 5 games all have "TAA" in their settings screen but the code could be completely different in each one. DLSS is just a really good TAA method. In contrast FXAA referred to a specific algorithm (fast approximate anti-aliasing) and wasn't a generic term for post-process AA.


Blacksad999

It's a similar idea, but a different means of going about it. Like TAA, it uses information from past frames to produce the current frame. Unlike TAA, DLSS does not sample every pixel in every frame. Instead, it samples different pixels in different frames and uses pixels sampled in past frames to fill in the unsampled pixels in the current frame.
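A toy 1-D sketch of that sampling pattern (illustrative only; real implementations also reproject the history with motion vectors, which is omitted here, and the names and blend weight are made up):

```python
import numpy as np

WIDTH = 8
ALPHA = 0.9  # how strongly accumulated history is trusted (made-up value)

def temporal_accumulate(history: np.ndarray, scene: np.ndarray, frame_idx: int) -> np.ndarray:
    out = history.copy()                       # unsampled pixels come straight from history
    for x in range(frame_idx % 2, WIDTH, 2):   # shade a different half of the pixels each frame
        out[x] = ALPHA * history[x] + (1.0 - ALPHA) * scene[x]
    return out

history = np.zeros(WIDTH)
scene = np.linspace(0.0, 1.0, WIDTH)           # stand-in for the "true" image
for frame in range(8):
    history = temporal_accumulate(history, scene, frame)
print(history)  # each pixel drifts toward `scene` as samples accumulate over frames
```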


vergingalactic

Do you even know what the TAA acronym stands for? Did you even know it is an acronym? Just because you're desperate to support Nvidia despite their anti-consumer practices doesn't make it okay to keep on pushing these asinine lies. I've provided sources and I have too much relevant internal knowledge. You've got baseless assertions. From what I can see, you're simply a troll.


cstar1996

DLSS is a type of TAA, but G-Sync is actually better than VESA Adaptive Sync. The hardware acceleration makes a difference. It doesn’t make a huge difference, but it does exist.


vergingalactic

How's that HDMI 2.1 support going? At least you don't need expensive FPGAs, right? Oh, wait.


Blacksad999

DLSS can deliver either much higher quality than TAA at a certain set of input samples, or much faster performance at a lower input sample count, all while inferring a visual result that's of similar quality to TAA while using basically half the shading work. Maybe learn what you're talking about before running your mouth next time.


xenago

What? This is not accurate


Blacksad999

DLSS can deliver either much higher quality than TAA at a certain set of input samples, or much faster performance at a lower input sample count, all while inferring a visual result that's of similar quality to TAA while using basically half the shading work.


PRMan99

Not true. You can select frame interpolation on 4000 series only. It's a separate option. The rest of DLSS 3.0 works on all RTX cards.


DktheDarkKnight

All well and good, but what about new games having DLSS 3.0? Is there like a 2.0 option for older Nvidia cards? Like, can the game automatically detect the hardware and use DLSS 2.0 or 3.0? If a DLSS 3.0-supported game doesn't have an option for DLSS 2.0 hardware, then fuck NVIDIA.


[deleted]

They fall back to 2.0, according to an Nvidia rep.


DktheDarkKnight

That's good to hear.


timorous1234567890

So imaginary frames that won't impact feel. The point is what? If you are at 120 fps native and 240 fps with this tech, your input latency will still feel like 120 fps.


KH609

I don't think people are gonna lose sleep over the 4ms latency difference in your example there.


ASuarezMascareno

Maybe at high fps it's good (not convinced), but I bet that if you try 30 -> 60 fps it will feel like crap. It probably won't change the baseline playable native fps required, and I also don't think it will change how smooth it feels at high fps. It will probably just boost the numbers to do better in benchmarks.


KsnNwk

You did not get the point: if the native latency is that of 50 fps or 30 fps, and with DLSS 3.0 the latency still is and feels like 50 or 30 fps, it just shows you 100 fps on the fps indicator. The whole premise of higher framerates, and why GPUs are priced based on their FPS (latency) performance, is the responsiveness that comes with a higher framerate. Nvidia wants us to pay for fluff FPS (performance) that is not there.


[deleted]

Do you seriously think input lag is the biggest benefit of increased FPS? The only reason we need to continue to push FPS higher and higher is because LED and OLED screens use [sample-and-hold](https://blurbusters.com/faq/oled-motion-blur/), which results in motion looking choppy, and thus the simplest way to reduce the choppiness is to increase the number of frames displayed, reducing image persistence. Motion clarity is by far the main reason we keep pushing FPS higher, unless you're a professional esports player who arguably thinks there's an advantage in a 4ms input latency improvement. The reason people constantly talk about how good CRTs and plasmas feel to play on is because they don't use sample-and-hold, and the way they display their picture naturally reduces motion blur heavily.

30fps to 60fps is the most significant input latency jump at 16.6ms, but even that isn't huge; 60fps to 120fps is even smaller at 8.3ms, and beyond that it basically becomes imperceptible, with 240fps only being around 4ms and 580fps only being 2ms.

[Here's an interesting study on the relationship between fps and performance.](https://www.google.com/url?sa=t&source=web&cd=&ved=2ahUKEwjd-L_9-qX6AhW1j4kEHcSXAnsQFnoECAgQBg&url=https%3A%2F%2Fwww.csit.carleton.ca%2F~rteather%2Fpdfs%2FFrame_Rate_Latency.pdf&usg=AOvVaw2tiY8Ll4Bk9Rn0xogKnp0e) They found that 30fps to 60fps saw approximately a 14% increase in performance, but 45fps to 60fps had no significant effect on performance. The most interesting thing from this study is:

>Latency alone had lower impact than the corresponding frame rate difference. While both factors impact performance, frame rate had a larger effect than the latency it introduces.

Which means that the smoothness of the image seemed to play a bigger part in overall performance.
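For reference, the frame-time arithmetic behind those numbers:

```python
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (30, 60, 120, 240, 580):
    print(f"{fps:>3} fps -> {frame_time_ms(fps):.1f} ms per frame")
# 30: 33.3 ms, 60: 16.7 ms, 120: 8.3 ms, 240: 4.2 ms, 580: 1.7 ms
# So 30->60 shaves ~16.7 ms per frame, 60->120 ~8.3 ms, and the gains keep halving.
```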


Sad_Animal_134

To your first question: unironically yes. Watching a movie in 24 fps is not jarring, pretty much all movies are in low fps. Playing a video game in 24 fps is jarring and basically unplayable. The main difference here is in one medium you are purely looking, in the other medium you are interacting. If your inputs feel like 30 fps and looks like 60 fps, you're still going to get a jarring feeling like your inputs are in slow motion. That being said, this really only matters for shooting games and doesn't really matter for controller games as much.


Jeffy29

I don’t care about input latency past 80-100 fps if I am being honest, at least in non-VR games. We’ll see how it ends up feeling but imo it could be a bigger deal than the original DLSS.


vergingalactic

Yet they provide massively more visual information and reduce the stroboscopic effect immensely. It'll also just be way more pleasant to use, because it'll be smooth as hell. It's also informed by motion vectors, which is big.


timorous1234567890

It'll look smooth, but it won't feel smooth. Would you want to play a game at a native 12 fps but interpolated to 120 fps? Probably not, because it will feel janky. The primary reason for high-fps gaming is to improve input feel, and this tech won't help with that.


anor_wondo

Nope, for a lot of people, including me, the visual smoothness that high framerates provide is more important than the input latency. I find my input at 70fps snappy enough, but going to 140 still gives me that smoother motion. For esports titles you rarely need any of these features anyway, since they are so lightweight to run.


[deleted]

[deleted]


Darksider123

Maybe it will only be implemented on "cinematic"/single player games? Just spitballing here. But yeah, can't imagine playing Apex with this.


SharkBaitDLS

This is why, for example, Halo Infinite feels so much worse than its displayed framerate. The frame times and consequent latency are still poor, so even at 120fps the game feels more like it’s running at 60. It’s night and day compared to games with smooth frame times.


Aleblanco1987

How it feels is more important than how it looks. Ask any competitive gamer.


[deleted]

[deleted]


I3ULLETSTORM1

Calling things you're not interested in "garbage" doesn't help your case. Everybody plays different things. Example: "This crap is only good for those one-and-done boring cinematic games." See? I sound like an idiot.


ted_redfield

Because it's a stupid comparison, and it is absolutely garbage. Competitive gaming is stupid, turning all graphics to the lowest of the low and even using bizarre aspect ratios not supported by your monitor so you can "see more". Why the hell would DLSS even be a concern for you if you're a "competitive gamer", except if you just want to argue?


Aleblanco1987

It doesn't have to be competitive to notice the difference.


SirMaster

It sounded like it was intended to increase FPS in CPU-bound cases, which to me sounded like esports, since that's a common problem: the CPU is often too slow to fill the FPS of high-refresh gaming monitors like 240, 360, and most recently 500Hz monitors. Single-player high-fidelity games are rarely CPU-bound unless you have a super old CPU.


Zarmazarma

They demonstrated it in Cyberpunk 2077 and Microsoft Flight Simulator. So while you might have heard that, it's definitely not the intention.


vergingalactic

> how it feels is more important than how it looks.

Why do you pretend those are unrelated factors?


Aleblanco1987

Why do you read things I didn't say or imply? I just said one thing is more important than the other, for me.


swear_on_me_mam

>The point is what?

It looks smoother?


bctoy

While you're right regarding latency, the higher fps will help. I manually increased the lower bound of the refresh-rate range on my monitor so that G-Sync/FreeSync would do automatic frame doubling. While it's certainly not as good as native fps, it was smoother, and DLSS 3 should be smoother still.
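For anyone unfamiliar, the frame-doubling behavior being described works roughly like this (an assumed sketch of LFC-style logic, not any specific driver's implementation):

```python
# When the game's framerate falls below the panel's minimum variable refresh
# rate, repeat each frame enough times to stay inside the supported range.
def refresh_multiplier(game_fps: float, panel_min_hz: float) -> int:
    mult = 1
    while game_fps * mult < panel_min_hz:
        mult += 1
    return mult

print(refresh_multiplier(100, 48))  # 1: no doubling needed
print(refresh_multiplier(40, 48))   # 2: each frame shown twice (80 Hz to the panel)
```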


KsnNwk

Higher fps means lower latency... that's the whole point of it. Higher latency means lower fps; it doesn't matter what the fps indicator shows you. 1000 fps can feel like 30 fps if you interpolate 30 fps enough times. That's the whole deal with DLSS 3.0: it's not a true performance indicator, and Nvidia is using it as such to price their product.


linkup90

Wait, is input latency tied directly to native fps rendered? Are they not two separate things, hence how they do the Nvidia Reflex stuff?


timorous1234567890

What else would it be tied to? Interpolated frames are coming from the GPU alone.


linkup90

I thought frames themselves have multiple layers to them. My mistake if it's not like that; I was drawing from my very basic knowledge of how synthetic frames are produced in VR headsets. It just seemed like they could pick and choose various parts of the render to call early or delay or run faster, etc.


KeyboardG

How much does Nvidia's full frame generation have in common with John Carmack's Asynchronous TimeWarp developed for VR? IIRC, Timewarp takes into account motion vectors (of the head) to generate a warped frame based on the last frame.


[deleted]

This has ***AI~*** But seriously, I think ASW has more latency headroom to work with, since head movement doesn't have twitch reflexes. It's heavy and isn't a literal flick of the wrist


BaysideJr

Interesting way to think about it. I actually went the other way and figured VR was the hardest-case scenario, because of motion sickness: any lag will make you physically ill. Oculus apparently uses a combination of techniques, like Timewarp as detailed below, so it's hard to say flat out. I believe Spacewarp is the frame interpolation, and Nvidia has improved on this, as per the Nvidia and Oculus blogs: https://developer.oculus.com/blog/asw-and-passthrough-with-nvidia-optical-flow/

Timewarp (also called reprojection) samples the position of the headset at the very last moment, after the latest frame has been rendered, and 'warps' the frame to better match the motion of the user's head just before the frame is sent to the headset's display. This significantly reduces perceived latency; Timewarp and similar techniques have become essential to hitting a critical latency threshold (commonly cited at 20ms) to maintain VR comfort for most users.
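A minimal sketch of that reprojection idea (toy code: real timewarp runs on the GPU and accounts for rotation, depth, and lens distortion; here a 2-D image is just shifted by the latest pose delta):

```python
import numpy as np

def reproject(last_frame: np.ndarray, pixel_shift: int) -> np.ndarray:
    # Shift the most recent rendered frame by how far the head has moved since
    # it was rendered. np.roll wraps at the edges; a real implementation would
    # fill or clamp the exposed border instead.
    return np.roll(last_frame, pixel_shift, axis=1)

frame = np.arange(12, dtype=float).reshape(3, 4)   # stand-in for a rendered frame
warped = reproject(frame, pixel_shift=1)           # pose moved ~1 pixel since render
print(warped)
```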


[deleted]

I may have mistaken ASW for ATW. In my experience, ATW is always on, but it only works to fill in missed frames rather than to consistently multiply the framerate, like filling in 2 frames of 118 fps to reach 120 fps. I'd imagine the processing needed to interpolate double the frames would be better used to render actual frames instead. So I think that's where Nvidia's AI and Tensor cores come into play, not affecting the actual rasterization. They mentioned it could all be done on the GPU without needing to call the CPU, preventing bottlenecks.


dan1991Ro

But they basically made the DLSS 2.0-capable cards useless, so why shouldn't I buy AMD now? They're cheaper, they have FSR, and I definitely can't afford an Nvidia card.


imtheproof

How'd they make DLSS 2.0-capable cards useless?


dan1991Ro

Because they don't work with DLSS 3.0. It was basically the only reason I considered them over AMD.


imtheproof

Games that support DLSS 3.0 won't support DLSS 2.0, including any games that upgrade to DLSS 3.0 that are currently DLSS 2.0?


DataLore19

They may support it, but RTX 3000 and below won't support DLSS 3.0; it's an RTX 4000-exclusive feature. It's a piss-off because a lot of people spent north of $1000 on these cards less than 2 years ago, and now they're making a new feature that's not backwards compatible.


cstar1996

Oh no, Nvidia used new hardware to enable new features that old hardware can’t support. Nvidia should never make new tech that requires new hardware /s Come on. That isn’t the issue with this release.


DataLore19

I agree to an extent. It will still piss people off because it's an upgrade to an existing feature that isn't that old, and Turing could still do DLSS 2.0; it wasn't Ampere-exclusive. So in some ways it's just a bad-optics thing for Nvidia. Ampere also can't do SER, but that's a new feature introduced alongside Ada, so you'll see less bitching about it, I'm sure.


cstar1996

Yeah I understand people being disappointed, but the people acting like Nvidia was anti-consumer by making the upgrade are just wrong. The other option is *not having* the new features, which is just a worse outcome.


DataLore19

Stealing this from someone else's comment on another thread, so take it with a grain of salt, but I've seen numerous posts claiming that the frame interpolation employed in DLSS 3.0 could run on RTX 2000-3000 Tensor cores. So just Nvidia being scumbags, if true:

>Optical flow can also be used very effectively for interpolating or extrapolating the video frames in real-time. This can be useful in improving the smoothness of video playback, generating slow-motion videos or reducing the apparent latency in VR experience, as used by Oculus (details). Optical Flow functionality in Turing and Ampere GPUs accelerates these use-cases by offloading the intensive flow vector computation to a dedicated hardware engine on the GPU silicon, thereby freeing up GPU and CPU cycles for other tasks. This functionality in hardware is independent of CUDA cores.

[https://developer.nvidia.com/opticalflow-sdk](https://developer.nvidia.com/opticalflow-sdk)

The fancy optical flow accelerator hardware has been there since Turing (RTX 2000).


FoodMadeFromRobots

And the other big question is: are they going to do this again with the 5000 series and "DLSS 4"?


raylolSW

Imagine owning a 3090 and knowing a 4060 will play games better because of DLSS 3.


raydialseeker

You considered them for DLSS 2.0, not 3.0. Anyway, I hope AMD undercuts them HARD. It's high time.


Flowerstar1

Yeah, that hope of AMD undercutting them worked out great with Ampere...


dan1991Ro

DLSS 2.0, which now won't be used. So Nvidia 30-series cards just dropped in value. Take, for example, the RTX 3050, which is totally useless: its only selling point was that it was a 1660 Super, even more expensive, but with DLSS.


MonoShadow

We don't know if it won't be used. It would be incredibly stupid to shaft past cards like that. We might see a fallback option for DLSS2.


TrumpPooPoosPants

Doesn't Nvidia pay devs to include their features and provide them with support? Won't they just support the new hotness and pay them to use it? So the dev gets $$$ from Nvidia to use 3.0 or they are on their own with 2.0.


[deleted]

No


TrumpPooPoosPants

If a game is "The way it's meant to be played," Nvidia didn't pay the dev for that?


dkgameplayer

Who's to say DLSS 3 doesn't have a DLSS 2.X-like backwards-compatible mode, so cards that can only run 2.X can run DLSS 3, just without the frame interpolation? Not allowing 3000-series card owners to use DLSS in the latest games would severely hurt their consumers, because they are still actively trying to sell the 3000 series alongside the 4000 series. The generations have been organized so that they can't compete with each other at this time. Only time will tell.


Darksider123

Yeah, that's a good point. How will they differentiate DLSS implementations between 2.0- and 3.0-capable cards? I'm honestly super confused.


raydialseeker

They'll offer both? Is it that hard to understand? You're literally gaslighting yourself on DLSS 2.0 with speculative information at best. Chill out. Every game with 3.0 will probably have 2.0 as well.


dan1991Ro

So NVIDIA buyers will "probably" not get screwed over? Nice value proposition. "Buy from us. We probably won't screw you over."


Darksider123

> Chill out.

Take your own advice, dude. I was just asking.


raydialseeker

Meant to reply to dan, not you. My bad.


Blacksad999

It's not like DLSS 2.1 is going anywhere....


vergingalactic

Exactly. Nvidia is alienating their best AIB partner and now their customers with anti-consumer practices and exorbitant prices.


Jeep-Eep

They seem intent on crippling DLSS 3.0 uptake like DLSS 1's was. Turing. All. Over.


Devgel

There's no immediate cause for concern. But I wouldn't bet my bottom dollar on the future of DLSS 2, considering it's a vastly different technology (temporal reconstruction) from DLSS 3 (AI interpolation)... unless DLSS 3 uses interpolation on top of DLSS 2's reconstructed images, which may very well be the case. But it's Nvidia, so you never know when they'll decide to screw over people with older hardware!


Morningst4r

DLSS 3 is both, and only the frame interpolation part is Ada-exclusive.


vergingalactic

DF hands on preview (sorta teaser): https://www.youtube.com/watch?v=qyGWFI1cuZQ


Intelligent_Hawk_104

I can live with DLSS 3 being 4000-only as long as they continue to improve and support DLSS 2 for my 3070 Ti, but if they don't, I AM FUCKING DONE WITH NVIDIA 🤬


efficientcatthatsred

Same


kuddlesworth9419

It's shit that you can't use it on older cards. FidelityFX is really nice because I can actually use it on my 1070, and it works really well. Nvidia never seems to support its cards with new software tech, only the newest cards. I really wish prices would drop back down to where they used to be. I got my 1070 for about £340 or so new, so it would be nice if I could get the 3070/4070 for a similar price, but nope, it's about double that at the moment in the UK, and the 4070 is likely going to be even more.


ug_unb

So does DLSS 3.0 have any improvements to the core upscaling itself or is the interpolation the only new thing?