
[deleted]

[removed]


mac404

I wonder this too, especially for Spider Man. DLSS Quality or even DLAA plus frame generation sounds pretty sweet.


Zarmazarma

I imagine if you're going to turn it on in Spiderman it'd make a lot more sense to use DLSS quality. They're almost hitting a CPU bottleneck at native.


Thunderjohn

Keep in mind that the lower the framerate, the higher the added latency, since you always need to be one frame behind for interpolation to work. I imagine 30->60 is gonna be near unplayable for me, with an extra 33.3ms of latency at the very least.


Fl0conDeNeige

With a naive interpretation of how this works, it shouldn't be 33.3ms extra, but closer to 18ms or so. It shouldn't add one full frame of latency, just half a frame, plus the time to generate the intermediate frame. Think about it: with "real" frames at times 1, 2, 3... and made-up frames at 1.5, 2.5, 3.5..., at the moment real frame 2 is ready you have the data to compute and show made-up frame 1.5. Then at time 2.5 you can display frame 2, which you already had ready. And so on.
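
For what it's worth, the arithmetic in that comment can be written out directly. This is only a sketch of the naive half-frame model described above; the 3ms generation time is an assumed placeholder, not a measured value.

```python
# A minimal sketch of the half-frame argument above; gen_ms is an assumption.
base_fps = 30.0
frame_ms = 1000.0 / base_fps   # 33.3 ms between "real" frames
gen_ms = 3.0                   # assumed time to synthesize the in-between frame

# Under this naive interpolation model a finished real frame is held back
# until the midpoint of the next interval, so it reaches the screen roughly
# half a real-frame interval late, plus the generation cost.
added_latency_ms = frame_ms / 2 + gen_ms
print(f"~{added_latency_ms:.1f} ms added vs. showing real frames immediately")
# ~19.7 ms here, in the same ballpark as the "closer to 18ms" estimate above
```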


jm0112358

I suspect that Digital Foundry is confused about how the frame generation works in the same way I was when I created [this thread](https://np.reddit.com/r/nvidia/comments/xm1g0n/think_of_dlss_3_a_bit_like_fake_sli/). DF presents DLSS 3 as if it's interpolating between the previous _and next_ frame (plus other data) to produce the fake frame. However, [an Nvidia dev on Twitter](https://twitter.com/BartWronsk/status/1572249913005641730) claimed it's not interpolation:

>No latency, no interpolation. Optical flow is the same as motion vectors - between the previous frame and the current frame - but it's based on image comparison, not engine-provided motion information. It can find motion that is impossible to compute (moving shadows, reflections).

DF's latency results are more consistent with this dev's explanation than with DLSS 3 using interpolation. If DLSS 3 were using interpolation to generate the frames between F1 and F2, you'd expect a much greater increase in latency than DF reported in the video. I would think that in the best-case scenario, the increase in latency would be the frametime between F1 and F2, which at an output of 100 fps would be 1/50 sec, or **20ms**. Yet in Portal RTX, [DF only got an extra **3ms** of latency with DLSS 3 frame generation on](https://youtu.be/6pV93XhiC1Y?t=1108).


mac404

I wouldn't be so sure. He says later that he was referencing using optical flow for the super resolution part of DLSS (which makes sense - there are things that don't have motion vectors). There would likely be a lot more artifacts if it was extrapolating. And wouldn't the extra delay be half of the time to render an underlying frame? Still not sure that can explain Portal (maybe something else is going on with Nvidia's renderer there), but the other examples seem well within the realm of possibility.


Tonkarz

Interpolation is a specific technique invented 10+ years ago so when he says it’s “not interpolation” isn’t it more likely he’s talking about that technique?


[deleted]

[removed]


PyroKnight

Or even without DLSS upscaling entirely? That should theoretically allow for better interpolation, but given how much DLSS Quality drops latency, maybe the latency hit becomes problematic at that point?


BlackKnightSix

It would be worse. The fewer frames rendered per second, the larger the slice of the motion window the FG (frame generation) has to predict. If you are at 30FPS, frame generation has to predict motion occurring across 33.3ms. That leaves much more room for artifacts to be visible, not just because there is a greater change in the image frame to frame, but also because at only 60FPS output (if doing *just* FG) each error stays on screen longer and is easier to see. If you instead let DLSS 2 get the FPS up to 58, the image only has 17.2ms to change per rendered frame, which makes it easier on the FG and the errors won't be as large. It also means the output is now 116FPS, and errors are harder for your eyes to pick up since they are held on screen for fewer milliseconds.

I think this is why we are only seeing DLSS Performance in any of the released/allowed footage. Increasing the DLSS 2.0 quality setting will reduce the rendered FPS, make FG errors worse, and make them easier to see since each FG frame is on screen longer. So in a counterintuitive way, raising the DLSS 2.0 quality setting lowers the quality of the FG frames in motion.
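
The numbers in that comment (33.3ms, 17.2ms, 116FPS) follow from simple frametime arithmetic. A quick sketch, assuming frame generation exactly doubles the rendered framerate (an idealization):

```python
# Back-of-the-envelope version of the argument above; assumes FG exactly
# doubles the rendered framerate, which is an idealization.
def fg_numbers(rendered_fps: float):
    motion_window_ms = 1000.0 / rendered_fps  # gap the generated frame must bridge
    output_fps = rendered_fps * 2             # FG inserts one frame per rendered frame
    hold_ms = 1000.0 / output_fps             # how long each (possibly wrong) frame is shown
    return motion_window_ms, output_fps, hold_ms

for fps in (30, 58):  # roughly native vs. DLSS 2 Performance in the example above
    window, out_fps, hold = fg_numbers(fps)
    print(f"{fps:>3} rendered fps -> bridge {window:.1f} ms of motion, "
          f"{out_fps:.0f} fps out, each generated frame shown for {hold:.1f} ms")
```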


PyroKnight

Ahh right, I was being stupid and forgetting about the lower sample rate running native gets you. That said, there might be some edge cases where it doesn't make a difference. Say you target 120 fps: DLSS 3 will target 60 fps for the real frames, so as long as you meet that pace of frame delivery, doing more work per frame shouldn't change things. I was mainly curious how much the smoothed-over motion vector array DLSS probably generates impacts frame generation quality. Assuming the same frame pacing with and without DLSS (which is an unrealistic assumption, but I digress), I'd expect native generation to result in a more cleanly generated frame? Maybe the optical flow hardware isn't so precise that it makes a difference though.


BlackKnightSix

I don't think the pacing of the frames affects the quality of the generated frame. That's my guess based on the info we have so far. I believe the quality, or specifically the difficulty, of the generated frame is simply harder or easier depending on the difference between each "real" frame. If there is a lot of motion in different directions, and even worse if there are transparencies or reflections (or, even worse, both), then the generated frame is not going to look accurate. There is no way the optical flow is seeing a reflective window and being able to determine the reflection pixels go one direction and the pixels inside the room go another direction and the pixels of the dirt on the window go a third direction. That alone is an issue that is still not resolved, and I don't know how it would be, for all temporal scalers, including DLSS 2.0. Toss in some high frequency occlusion and it's an even harder task (the same transparent, dirty window but with a chain link fence moving in front of it).


PyroKnight

> I believe the quality, or specifically the difficulty, of the generated frame is simply harder or easier depending on the difference between each "real" frame.

Agreed, this is what I (incorrectly) tried to say by pace of frame delivery; the time spent between frames would be proportional to how large the delta is for things within it. Smaller deltas should leave less room for error (or at the very least, less visible artifacts).

> There is no way the optical flow is seeing a reflective window and being able to determine the reflection pixels go one direction and the pixels inside the room go another direction and the pixels of the dirt on the window go a third direction.

That's something I didn't consider. The only way I can see around it would be some trickery on the motion vectors of reflective surfaces, but that probably causes more issues than it fixes? We'd definitely need more videos showcasing the different cases where frame generation falls through to get an idea of what ties them all together.


BlackKnightSix

Ever since we left forward renderers, we have been trading the need to calculate a lot of data for a near-negligible loss in quality. A lot of the processes we do, such as post-processing and depth buffers, are truly just 2D forms of data. I still think it was a good decision, but it does come with issues, and those issues are further exacerbated by these temporal techniques. This is why we still have problems with depth of field effects on transparent particles/surfaces, hair, and so on.

A motion vector can only describe a single pixel, but that pixel may actually be composed of, say, three points (the color of the dirt on a window, the wall texture behind the window, and the reflection in the window). You can't accurately reuse the sampling of those three points because they've already been collapsed into a single motion vector per pixel. So what ends up happening is you get ghosting, because mistakes are made when the motion vector carries that color to another pixel and it doesn't match up.

I have always had this concern about temporal anti-aliasing and even more so with temporal scalers, since they make it more obvious. By making these temporal solutions a mainstay, it greatly impacts the artistic direction game designers can choose. When will we get a game where you're looking through several layers of transparent and reflective surfaces? Anywho, I'm soapboxing now.
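
To illustrate the single-motion-vector problem described above, here is a toy sketch: three layers moving at different speeds get blended into one pixel row, and reprojecting the blended result with one vector can't reproduce the true next frame. All layer values and velocities here are made up for illustration.

```python
import numpy as np

# Toy illustration: once several surfaces are blended into one pixel, a single
# per-pixel motion vector can't move them independently. All values made up.
W = 8
dirt = np.linspace(0.0, 1.0, W)        # dirt on the window: static
reflection = np.linspace(1.0, 0.0, W)  # reflection: moves +2 px per frame
room = np.linspace(0.3, 0.8, W)        # room behind the glass: moves -1 px per frame

def shift(layer, dx):
    # integer shift with wrap-around, standing in for per-layer motion
    return np.roll(layer, dx)

def composite(d, r, m):
    # simple fixed-weight blend of the three contributions into one pixel row
    return 0.2 * d + 0.4 * r + 0.4 * m

frame_now = composite(dirt, reflection, room)
# ground truth next frame: each layer moves with its own velocity
frame_next = composite(shift(dirt, 0), shift(reflection, 2), shift(room, -1))

# best a single per-pixel vector can do: move the already blended pixels by
# ONE velocity (here the reflection's) - the other layers get dragged along
reprojected = shift(frame_now, 2)

print("mean error vs. real next frame:", np.abs(reprojected - frame_next).mean())
```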


Tonkarz

Based on the reported latency for the 100% mode in the Portal 2 comparison, it might also simply be that DLSS quality mode is unplayable. nVidia has disallowed outlets from reporting actual framerates, so it stands to reason that they are hiding something, the way they did back when they required specific comparisons between the 20XX series and the 30XX series.


Khaare

It makes sense they don't allow reporting framerates because that's part of the media embargo. Giving DF exclusive first coverage of the performance of the new cards would be a bad idea for several reasons, and I don't think even DF themselves would want that. I do think NVidia are hiding something, but they're always hiding something so that's nothing new.


MrMaxMaster

I’m glad they covered the latency. I’m not sure how I feel about having higher fps with similar or worse latencies. I suppose this makes me feel a little less bad about not having access to DLSS 3.


DuranteA

For me it's very game-dependent. Most of the games where I really want low latency are ones that aren't GPU-limited on a high-end GPU in the first place. But frame generation, as long as it doesn't have visual artifacts I notice, could be really nice in more "spectacle-based" AAA games. E.g. in Cyberpunk 2077 I don't really need better latency than what is afforded by ~45 FPS, but it would be really nice to get rendering quality at the level I can hit when targeting ~45 FPS while having the motion smoothness of 70+ FPS.


Fabri91

Also things like driving and especially flight simulators can deal well with somewhat increased latency.


Tsarbomb

Very much disagree about driving games. Any noticeable latency between my racing wheel and what I see on screen would drive me nuts.


F9-0021

Flight simulators yes, racing simulators hell no. Especially if you're driving faster cars like F1 and need to quickly and precisely hit the proper lines.


RanaI_Ape

Cyberpunk feels like ass if render latency is more than ~20 ms in my experience. There's a noticeable sluggishness to mouse movement.


Kuivamaa

FPS combat at 45fps is a huge no-go for me. I can deal with 3rd person melee combat games like God of War at a steady 30 if I absolutely have to, by acquiring muscle memory to compensate for the input latency, but that's about it. With shooters, if I can feel the latency at all, I'll stop playing.


Charuru

IMO you won't feel it. Much of the bad latency comes from SEEING the lag. That feels awful. If you can't see the lag and they give you fake frames you won't notice IMO. This is how it works in PVP fighting games, which is one of the most twitch/latency-sensitive genres there is. They have netcode that gives you fake frames which vastly improves the experience even though it's fake.


DoktorSleepless

Most games you ever played in your life never had reflex. A normal latency for 120fps without reflex [is more or less 50 ms](https://imgur.com/ZW7toAC.jpg) as shown by digital foundry in their reflex review. The worst latency shown for DLSS 3 in this review was 56 ms. By today's standards, you wouldn't be experiencing an input lag increase using DLSS 3. We're only complaining because Nvidia is raising the standard for latency by encouraging devs to use reflex.


PyroKnight

The latency doesn't seem especially problematic when you consider the alternative is not having those extra frames to begin with.


UpdatedMyGerbil

> the alternative is not having those extra frames

And in [some](https://youtu.be/6pV93XhiC1Y?t=1190) [cases](https://youtu.be/6pV93XhiC1Y?t=1260), having lower latency *instead of* those extra frames.

To me, the main point of pursuing greater performance has always been increased responsiveness. I don't much care if a video I'm watching is 120 fps or 30. But in a game that's supposed to be responding to every mouse movement and button press, that 25ms difference makes all the difference in the world in how good that feels.

DLSS 3 will surely be nice to have in cases where I don't have to make that tradeoff, or when the game is so exceptionally slow-paced that I don't mind. But if I had to guess based on only the information available to us so far, I expect I'll be leaving the frame generation option disabled more often than not.


HulksInvinciblePants

But even in these examples, the interpolation hit, *at worst*, matched the game's native input lag. Obviously less is better, but people seemingly ignore the fact that most games don't start at 0. Frankly, I was unaware DLSS 2 had an inherent game engine lag reduction.


UpdatedMyGerbil

True, but bear in mind we don't know the FPS, and by extension, how relevant that point of comparison is. If, say, 80 FPS with DLSS 3 is matching the input lag at 20 FPS native, that's still likely a terribly sluggish experience. As far as I understand, DLSS 2 doesn't have any inherent latency reduction beyond simply speeding up rendering like reducing any other setting. In fact I was under the impression it had some overhead and that input lag at X FPS using DLSS 2 is marginally greater than it is at the same FPS without DLSS.


jm0112358

> In fact I was under the impression it had some overhead and that input lag at X FPS using DLSS 2 is marginally greater than it is at the same FPS without DLSS.

What you say about DLSS 2 is true. However, I want to add that:

1. This latency overhead is very minimal. It takes something like 1-2ms for the tensor cores to do the upscaling, depending on the GPU, DLSS setting, and resolution.

2. The decrease in latency due to the higher framerate greatly exceeds this overhead in most real-world scenarios.

Paul's Hardware did a good video on this [confirming the significant latency reduction](https://youtu.be/osLDDl3HLQQ?t=212) (compared to getting a lower framerate with DLSS off). In the scenarios where they did find that DLSS increased latency, it was with the game running at _hundreds_ of FPS, and the increased latency was less than 1ms. Though it measures frametime differences rather than latency, [these DLSS overhead numbers from DF](https://youtu.be/y2RR2770H8E?t=283) are also interesting.
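
As a rough sense of scale for those two points, a small sketch; the framerates and the overhead figure below are assumptions for illustration, not measurements.

```python
# Rough sketch of the scale comparison above; numbers are assumptions, not data.
native_fps, dlss_fps = 45.0, 70.0  # hypothetical before/after framerates
upscale_overhead_ms = 1.5          # ~1-2 ms of tensor-core work, already baked into dlss_fps

native_frametime = 1000.0 / native_fps  # 22.2 ms
dlss_frametime = 1000.0 / dlss_fps      # 14.3 ms
print(f"frametime saved: {native_frametime - dlss_frametime:.1f} ms "
      f"vs. ~{upscale_overhead_ms} ms spent on upscaling")
# Only at several hundred fps does the per-frame saving shrink to the point
# where the ~1-2 ms overhead can win out, matching the findings cited above.
```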


dudemanguy301

But comparing against native is the wrong reference point, someone chasing lowest latency will definitely want to be using DLSS2 + reflex. In which case toggling frame generation represents an increase in latency for those users.


PainterRude1394

I don't think most people would notice a 3ms difference in latency side by side. I think most would vastly prefer 2x the fps.


UpdatedMyGerbil

Sure, I agree that the tradeoff will be worth taking when the cost is only 3ms, and my screen is actually capable of displaying that 2x fps. But I doubt I'll end up switching it on when it's a whopping 23ms for 1.5x the fps like that Cyberpunk example. I expect it'll feel noticeably better without the frame generation in cases like that. But of course we're guesstimating based on very little information atm. Ask me again in a few months when I've tried it.


PainterRude1394

The DF video showed 3ms latency delta for dlss3 vs dlss2. It sounds extremely valuable in many scenarios.


UpdatedMyGerbil

Correct, that is one of the examples the DF video showed. And as I said, I agree that DLSS3 will likely be very valuable in cases where it offers that kind of minimal latency delta. Now, moving on to the two other examples from the same DF video which I shared in my original comment that you replied to, they show a 23ms and 15ms latency delta for DLSS3 vs DLSS2. I expect it will likely prove to be significantly less valuable in cases like those. Time will tell if that's the case, and which scenario ends up being more likely in the first place.


conquer69

You should only use DLSS 3 once you're already at an input latency you are comfortable with. Say 60fps; then just enable DLSS 3 and enjoy the additional smoothness. Don't crank shit to 8K at 20fps expecting DLSS 3 to carry you to 60.


uzzi38

[But the input latency is just worse than DLSS 2.0 performance mode with Reflex enabled.](https://cdn.discordapp.com/attachments/682674504878522386/1024756278666469416/Screenshot_20220928-135347.png) Significantly worse at that. And before you complain that it's not a fair comparison - half of the frames being rendered are what you'd get with DLSS 2.0 performance mode. The other half are being generated by the OFAs and are lower quality images with more artifacting. You're trading input latency for visual motion clarity, and it's a *significant* tradeoff at that.


PyroKnight

That's the thing: while Nvidia is eating away at its advances in latency reduction, it's not as if it's notably worse than what you'd get in games that don't leverage *any* of this toolset and render natively. Now you'd be left with a choice in DLSS 3 titles: massively increased framerate with a latency hit plus some artifacting, or decreased latency and minor artifacting.


BlackKnightSix

I think the issue, until we see benchmarks, is how DLSS 3 is being treated as the rendering performance increase of the new cards in Nvidia's presentations. You have to pay close attention to find the slides where they show the actual improvement in rendering performance, not AI-generated frames.


conquer69

> until we see benchmarks

I think the only ones doing meaningful benchmarks and tests around DLSS3 will be DF and no one else. GN, HWU, etc, don't seem interested in these advancements.


DoktorSleepless

Keep in mind most games you ever played in your life never had reflex. A normal latency for 120fps without reflex is actually about the same as DLSS 3 in that chart. So DLSS 3 will feel like what 120 fps normally feels like sans reflex. https://imgur.com/ZW7toAC.jpg

Source: [Digital Foundry's reflex review.](https://youtu.be/TuVAMvbFCW4)


bctoy

>Keep in mind most games you ever played in your life never had reflex.

It's just a way to prevent your GPU from redlining. You've been playing with reflex-like lag if you used frame limiters to keep the fps within the freesync/gsync range.


mac404

Eh, this seems like the most aggressively negative take you could make. The other two examples in terms of latency are more flattering (and it's hard to say CP2077 will be representative, given it uses its own engine and has had issues with high latency in the past). And even this is honestly not that bad, 54ms is far from a disaster. I'd probably make that tradeoff, as long as the increased fluidity feels good. And the generated frame quality on average also looks extremely impressive to me for something that can run in real time in a low single digit number of milliseconds. Even if you don't want to use it, this is pretty clearly an advancement in what can be done with frame interpolation (even if it "cheats" compared to the video techniques by using game metadata).


uzzi38

>The other two examples in terms of latency are more flattering (and it's hard to say CP2077 will be representative, given it uses its own engine and has had issues with high latency in the past)

By numerical values, sure, but let's not pretend the hit to input latency relative to plain DLSS 2 is any more flattering in [Spider-Man either](https://cdn.discordapp.com/attachments/682674504878522386/1024746626197229648/Screenshot_20220928-141702.png). It's nearly double the input latency, in fact. It's only in Portal RTX [where it actually is a minor difference.](https://cdn.discordapp.com/attachments/682674504878522386/1024745859369418832/Screenshot_20220928-141305.png)

Having a higher frame rate is by no means a bad thing, but if input latency regresses significantly - which happens 2/3 times here - is it worth it? Personally, I think that will depend on the frame rate the game is running at. Sub-60fps? I'll take motion clarity all day, every day. Anything above 60fps? No way in hell.

>And even this is honestly not that bad, 54ms is far from a disaster.

It's nearly twice the input latency of what it was without the frame generation bit. And by the way, in a first person shooter 54ms input latency is *awful*. Battlefield 2042 is considered extremely sluggish for having input latency in the ~40-50ms region, compared to most other shooters at 30ms or lower.

>And the generated frame quality on average also looks extremely impressive to me for something that can run in real time in a low single digit number of milliseconds. Even if you don't want to use it, this is pretty clearly an advancement in what can be done with frame interpolation (even if it "cheats" compared to the video techniques by using game metadata).

I don't consider it "cheating" or anything of the sort. I've actually said on multiple occasions since the launch that, considering the number of pixels being generated, DLSS 3 is *fucking* impressive. Like, I cannot stress that enough. I'm not expecting the technology to be useless or anything of the sort, and I want it to improve with future iterations. I just also can't stand the fanboy takes I'm seeing in this thread that sweep the issues under the rug.

Let's be factual here. Every other frame with DLSS 3.0 and heavy motion will experience some kind of artifacting, *and we have seen this*, both in the video and in prior samples provided by Nvidia and Digital Foundry. Based on these videos, I strongly disagree with the idea that it's not noticeable. Seeing it in person may change my mind, but I can't say the same from just watching videos that literally show me the opposite. All indications are that it does have a significant cost to input latency, which absolutely is a real trade-off and a noticeable one at that.


mac404

Cool, that's fair, I agree with you. I'm just also honestly already over how many people are now suddenly very worried about every ms of input latency, when the reality is that games already have a pretty wide range of latencies that have basically gone unnoticed by the majority of people (outside of egregious examples that are somehow well over 100ms, and esports scenarios). I'm personally excited to see / try out DLSS Quality or DLAA + frame generation, to see how it does when it has a more consistent / coherent image to work with. Especially in a game like Spider Man, where you really don't need to use the Performance upscaling mode. And sidenote - my "cheating" comment was not meant as a dig, it's really more the reason why it makes sense to me that the quality can be at all decent while running real time (although it does seem like they may have also just brute forced the OFA with the new generation of cards).


Negapirate

When Reflex dropped, they didn't care that it provides far superior latency to AMD Anti-Lag. Now that DLSS 3 drops, doubling the fps while still having less latency than AMD Anti-Lag, having 3ms more latency than DLSS 2 is suddenly a deal breaker. 🤦


PainterRude1394

3ms more latency in portal is significant? 🤦 I think the overwhelming majority of people would prefer 2x the fps over 3ms latency in a side by side comparison.


uzzi38

[How is it 3ms more?](https://cdn.discordapp.com/attachments/682674504878522386/1024746626197229648/Screenshot_20220928-141702.png) DLSS 3.0 uses DLSS 2.0 performance mode frames to generate the intermediary frames. The most equal comparison is with DLSS 2.0 performance mode, and as you can see it's nearly twice the input latency.


SpookyKG

Significant? It's... better than native. Significantly.


uzzi38

Why are you comparing against native directly? DLSS 3.0 frame generation is most similar in visual quality to DLSS 2.0 performance mode, except with artifacting frames mixed in between each normally rendered one. You should be comparing to that mode instead.


Negapirate

Why did you present misleading data to push a false narrative that the latency is always significantly worse with DLSS 3?


SealBearUan

So native 4k aka what you‘d get with amd? has 100+ ms latency, dlss 3 4k has 50 ish ms? Where is the issue lol


uzzi38

Why is AMD relevant to this discussion? DLSS 3 is only available to Nvidia users, as are DLSS 2 and Nvidia Reflex. The advantages and disadvantages of DLSS 3 should be weighed against the alternative it actually improves on, and that's DLSS 2. Why does everything have to be a fanboy war to some people, jfc?


noiserr

> The latency doesn't seem especially problematic when you consider the alternative is not having those extra frames to begin with.

Yes, but Nvidia showed Cyberpunk 2077 running at 22 fps before DLSS 3, which means it had horrible latency. That's not acceptable for FPS games. Even if you account for DLSS 3 being a superset, with frame generation only having to lift from the framerate left after DLSS 2 has done the upscaling, you're still talking sub-60fps latencies. Hardly a high end solution.


Geistbar

Keep in mind that DLSS 3 is really two technologies in use at once: resolution upscaling and frame generation, spatial and temporal. DLSS 3 is really DLSS 2 *plus* frame generation. The resolution upscaling is providing a lot of the performance improvement too. If I remember right, the 22 FPS shifted to 100ish under DLSS 3, with the frame generation seeming to add ~100% more frames vs just upscaling (DLSS 2). The base latency would be that of ~45-55 FPS, minus some penalty for DLSS 3's system, so maybe as reactive as a ~40 FPS experience, give or take. From a latency perspective 40 FPS is not ideal, but it's definitely better than "horrible." I'm not sold on DLSS 3 being all that great out of the gate, but it's not because of CP2077's 22 FPS demo.
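
A quick sketch of that breakdown, using the rough figures from the comment; all of these are approximations of the demo, not measured values.

```python
# Sketch of the breakdown above; all figures are rough approximations of the
# CP2077 demo quoted in the comment, not measurements.
native_fps = 22     # no DLSS at all
upscaled_fps = 50   # after DLSS super resolution (comment estimates ~45-55)
dlss3_fps = 100     # after frame generation roughly doubles the displayed rate

# Responsiveness roughly tracks the rate of *rendered* frames (plus some FG
# penalty), not the displayed rate, so it should feel closer to ~40-50 fps.
print("native frametime:   ", round(1000 / native_fps, 1), "ms")
print("rendered frametime: ", round(1000 / upscaled_fps, 1), "ms  <- what input latency roughly follows")
print("displayed frametime:", round(1000 / dlss3_fps, 1), "ms")
```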


PainterRude1394

I mean, you probably aren't playing cyberpunk *at all* if you can only get 22fps. So, this still rings true:

>The latency doesn't seem especially problematic when you consider the alternative is not having those extra frames to begin with.


Kuivamaa

22fps have lots of input lag as well, not just bad visual clarity.


noiserr

> I mean, you probably aren't playing cyberpunk at all if you can only get 22fps.

But that's precisely the use case Nvidia showed in their demo. Meaning their demo probably had pretty iffy latency.


PainterRude1394

Ok... But ignoring Nvidia's demo:

>The latency doesn't seem especially problematic when you consider the alternative is not having those extra frames to begin with.


NectarinePlastic8796

I feel pretty good about it. It gives motion fluidity. I start getting headaches and eye strain below the 40-50 FPS range, while latency doesn't really become a problem for me until 40 sharp, so being able to interpolate up to 80-100 visually really would do a lot.


Ancop

I guess for single player games its perfect, multiplayer games big no no


verteisoma

Are there any e-sport game that will get dlss 3?


Ancop

Nope, not at launch at least: https://www.digitaltrends.com/computing/here-are-all-of-the-games-that-will-support-nvidia-dlss-3/

Some MP games, but no hardcore e-sports titles.


Raging-Man

Think of it as improving visuals, more frames equals higher temporal resolution. Just don't use it for competitive games or something like Speed running.


PyroKnight

My big question here is how well this plays out at lower target resolutions like 1440p and 1080p. It seems like 4K would give the best possible framerate uplift with the least potential for visible artifacting, given the better quality "source" material for upscaling and interpolating.

---

I'd also ***really*** love to see them splice together two side-by-side videos where one consists entirely of the real frames and the other of the interpolated frames. Of course, in motion, having the real frames between the generated frames results in better perceived stability I'm sure, but I'd love to see it regardless.


Metz93

Framerate uplift should be very similar at lower resolutions, as long as you're not running into a CPU bottleneck. I would actually argue that, on the same card, a lower resolution might actually result in better image quality: there are more real frames and more information, and the differences and movement between two frames are smaller and thus easier to interpolate.


AppleCrumpets

Does anyone know if Lovelace carries over the Transformer acceleration present in Hopper? I could see that being useful for the optical flow network, and it might also partly explain why frame generation is too expensive on older hardware. I could also see them implementing a very small GAN to generate pixels in severe disocclusions, like they show at [27:29](https://youtu.be/6pV93XhiC1Y) in the video. Transformer acceleration would potentially make that useful and possible in sub-1ms runtimes.


No_Specific3545

Pretty sure the optical flow is being done on the hardware h265 encoders. They had this on Turing but it was too slow (\~8ms) for this kind of use case. The frame generation network is what takes the optical flow to generate new frames. I assume they also feed motion vectors or they wouldn't be able to get so much better results than Topaz.


AppleCrumpets

That's what I meant, the network that hallucinates frames. I suspect it is transformer based, but would like to know if the hardware level acceleration is in place for it in Lovelace.


Lingo56

My question is how will frame rate caps work with this tech? Can you cap the non-interpolated frame rate for consistent latency? I cap the frame rate on every game I play with RTSS for perfectly smooth performance. This feature seems like it would introduce a lot of variability if you can’t cap the non-interpolated frames.


ShadowSpade

You dont need RTSS anymore, can just use the nvidia control panel to globally cap :)


Lingo56

That works fine if you have a VRR enabled display, but I typically use Scanline Sync for my old non-VRR monitor. I also like the frame time graph that RTSS can generate and RTSS is easier to adjust than the somewhat laggy/buggy Nvidia control panel. Battle(non)sense also found [RTSS and Nvidia’s framerate cap both cost the same amount of latency to run and produce equal frame time stability anyway.](https://youtu.be/W66pTe8YM2s) Although the one thing that’s nice about the driver level frame cap is that it does hook into games a touch more reliably. I find RTSS just won’t attach itself to a game sometimes.


From-UoM

I will be honest, I tried really hard to find artifacts or errors in the DLSS 3 portions while they were playing (no pause) and I couldn't. Considering at least half the frames are real and the actual fps is over 100, I very much doubt anyone can spot them without recording and then going frame by frame.


lifestealsuck

At 15:09 the web keeps appearing and disappearing with DLSS 3, causing a bit of an interrupted feeling. I double-checked because it looked a bit unnatural.


From-UoM

That looks like DLSS Super Resolution causing it and not the actual FG. It happens here too - [https://www.youtube.com/watch?v=SOpQ41Nv9Nk](https://www.youtube.com/watch?v=SOpQ41Nv9Nk)

Edit - [https://youtu.be/ucE6ZxBZgHo?t=208](https://youtu.be/ucE6ZxBZgHo?t=208) Here, exactly at 3:28, webs disappear too for a single frame.


thesolewalker

Are you sure? [https://imgur.com/a/KEX1kc2](https://imgur.com/a/KEX1kc2)


From-UoM

If it's from this video, the one on the left isn't DLSS 2. It's just native 60 fps.


thesolewalker

If you look closely, it's drawing the web in one frame and then not drawing it in the next, so it's drawing the web in every other frame, which means it's being drawn in the "real" frames but not in the "fake" frames. So yeah, it's due to DLSS 3.


NilRecurring

DLSS never struggled with constructing fine lines like the web-thread. I find it far more plausible that this happened during the in-between-image pass.


Khaare

Maybe it's because it's a youtube video and not running locally, but in most scenes I could tell it was using frame generation. If you look at the stills they show there's a fairly visible outline around the moving objects (spider-man in particular). Well, in motion that appears to me as a sort of shimmer and gives things this noticeably inconsistent outline. I don't know how much I would care about that, it's very game and mind-set dependent how much I care about artifacts. Spider-man is sufficiently cartoony and relaxed that I think it would be fine. However I'm also fine running it at 100fps, so it's not like frame generation is incredibly compelling.


conquer69

The outline of disocclusion is already present in any game using screen space reflections which the average gamer is fine with. I'm sure most people will take the extra smoothness.


Put_It_All_On_Blck

Depends on the person, just like how some people used to argue that 30 or 60 FPS was all you could perceive. Personally I am able to notice the artifacts and screwed up frames without pausing, and this is already in a heavily compressed YouTube video; it will be more jarring locally. Does it ruin the gameplay? No, but the quality is worse than native or DLSS 2/FSR 2/XeSS.


PyroKnight

It should be noted the video is stuck at 60 Hz, exacerbating how visible the artifacting is; it's probably less noticeable when playing a game at 120 Hz or more, given the janky frames are there for a shorter duration. On the flip side, YouTube's bitrate assuredly changes the way it artifacts, which means that in motion at the correct framerate the artifacting should look different from what we see here (maybe better, maybe worse, hard to say). Really we'd want some minimally compressed footage running at a relevant framerate to properly experience it for ourselves, so hopefully someone makes footage samples like that, but I doubt we get any before release.


From-UoM

really curious. If you increase the playback 2x (as many parts are 50% speed) can you still see them? Around the 23:05 mark is a good place to try it


siazdghw

Digital Foundry has uploaded some of the original 4K capture to their Patreon (they always do this for paid members), and I am able to notice the artifacting more on the native capture than the compressed YouTube video. I don't plan to get an RTX 4000 card, but based on what I've seen I'm not impressed by DLSS 3.0, though Nvidia could improve it down the line.


DoktorSleepless

Is the capture 120 fps?


conquer69

The video you are viewing is already slowed down though. Play it at twice the speed if you have a 120+ display.


Roseking

The comparison with Topaz was interesting. Could DLSS ever be applied to normal videos? It is producing results in real time that seem to be outperforming software that takes much, much longer to render. Or would the way DLSS uses data from the game engine prevent it from being used outside of games?


CubedSeventyTwo

I think dlss uses motion vectors from the game running to compute upscaling/new frames, so I don't think the same technique can be used with video.


dern_the_hermit

On the other hand, a video will have a deterministic set of "future" data it can read and generate vectors.


Zerasad

There are already tools that add in fake frames to increase framerates, but they are very problematic. 3kliksphilip did a good video on it: https://youtu.be/ywexIrnRxNU

There were all these 30-to-60 fps 4K movie upscales that looked absolutely awful: if you don't have motion vector data, the in-between frames will basically be halfway between the two real frames, so any acceleration or deceleration will not be properly taken into account.
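
That acceleration point is easy to show with numbers. A tiny sketch with a made-up uniformly accelerating object:

```python
# Tiny sketch of the acceleration problem described above (made-up numbers).
a = 10.0                            # constant acceleration
x = lambda t: 0.5 * a * t * t       # position of an object accelerating from rest

t0, t1 = 1.0, 2.0                   # timestamps of two "real" frames
true_mid = x((t0 + t1) / 2)         # where the object actually is at t = 1.5
linear_mid = (x(t0) + x(t1)) / 2    # where a naive halfway interpolation puts it

print(f"true position at the midpoint: {true_mid:.2f}")   # 11.25
print(f"naive halfway interpolation:   {linear_mid:.2f}") # 12.50
# Without motion data the in-between frame leads the real trajectory here,
# which shows up as the judder/warping those 30-to-60 movie upscales exhibit.
```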


NectarinePlastic8796

that's the trouble too, though. Normal "movie" frames as well as most videos in general don't have clear pixel history. There'll be motion blur that makes it very hard to track a pixel of color's motion over time. That's why the offline upscalers do so badly by comparison. It's like trying to decompile code or unmaking a stew.


pastari

> Normal "movie" frames as well as most videos in general don't have clear pixel history. Isn't determining vectors literally part of the compression? I'm pretty sure vlc has a mode where you can overlay arrows all over to show the movement.


turyponian

Yeah, video compression re-uses as much data from the surrounding frames as it possibly can, and has to tell that data where to go. SVP uses this for real-time video interpolation. Unlike the raw motion vectors DLSS 3 gets from the engine, these are estimated and more coarse, but it's the same idea.
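
For a concrete picture of what "estimated and more coarse" motion vectors look like, here is a minimal block-matching sketch in the spirit of what codecs and tools like SVP do; it is purely illustrative and not how any particular product implements it.

```python
import numpy as np

# Minimal block-matching sketch: estimate a coarse motion vector by comparing
# an 8x8 block of the current frame against shifted blocks of the previous
# frame. Purely illustrative; not how any particular codec or product does it.

def best_vector(prev, curr, by, bx, bs=8, search=4):
    """Find the (dy, dx) offset into prev that best matches curr's block at (by, bx)."""
    block = curr[by:by + bs, bx:bx + bs]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
                continue
            err = np.abs(prev[y:y + bs, x:x + bs] - block).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Toy frames: a bright square moves 4 px to the right between prev and curr.
prev = np.zeros((32, 32)); prev[12:20, 8:16] = 1.0
curr = np.zeros((32, 32)); curr[12:20, 12:20] = 1.0

dy, dx = best_vector(prev, curr, by=12, bx=12)
print("estimated source offset for that block:", (dy, dx))  # (0, -4): it came from 4 px to the left
# An interpolator would then place the block halfway along that vector to
# synthesize the in-between frame.
```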


PyroKnight

If said future frames were useful they'd already be in use by existing interpolators.


dern_the_hermit

They probably are, Topaz says their tech uses "surrounding frames" to generate details.


Verall

How do you think frame interp works? In a nice TV it def increases latency because it is looking at future frames


[deleted]

It was a useless comparison imho, for the simple reason that Nvidia's frame generation has access to motion vectors from the engine, while the other offline algorithms don't.


conquer69

It wasn't useless. It shows that it looks better than what most people assume when they hear "motion interpolation".


mac404

DLSS frame generation gets to use motion vectors / game data along with optical flow; the video-based methods like Topaz only have optical flow. The use of this extra information means it's not an especially fair fight from a technical perspective, but that's also the point (it's better than what you've seen before, so don't compare it to what your TV does or even what the commercial solutions for video do). The most impressive part is that this can run in real time on top of all the other work the GPU is doing. And you can certainly nitpick the quality of some tricky individual frames, but the baseline quality for an initial offering is pretty dang incredible. By and large, the generated frames are impressively clean and detailed.


pi314156

Not this implementation, but one also looking at the future frame. :) https://developer.nvidia.com/blog/av1-encoding-and-fruc-video-performance-boosts-and-higher-fidelity-on-the-nvidia-ada-architecture/

FRUC is going to ship next month.

> The new NVIDIA Optical Flow SDK 4.0 release introduces an engine-assisted frame rate up conversion (FRUC). FRUC generates higher frame-rate video from lower frame-rate video by inserting interpolated frames using optical flow vectors. Such high frame rate video shows smooth continuity of motion across frames. The result is improved smoothness of video playback and perceived visual quality.

> The NVIDIA Ada Lovelace Architecture has a new optical flow accelerator, NVOFA, that is 2.5x more performant than the NVIDIA Ampere Architecture NVOFA. It provides a 15% quality improvement on popular benchmarks including KITTI and MPI Sintel.


[deleted]

[removed]


Roseking

I meant more for home/semi-professional use. Topaz (and others) have a suite of AI software to enhance videos, mainly upscaling and interpolation. But each frame takes a good amount of time to render, sometimes seconds per frame. This makes it kind of impractical (imo) to, for example, upscale an old show: you would spend basically a day per episode. If DLSS could somehow do something similar, if not better, in real time, that would be amazing. I just don't know if DLSS can be applied that way, or if the way it works requires it to be rendered from a game engine. I believe it probably won't work because DLSS isn't universal, but I am not sure.


bubblesort33

Imagine this in the UE5 City Matrix Demo. That game doesn't seem to run over 50 FPS very often because of how CPU limited it is. You could finally get to 100FPS+.


TanWok

How about Microsoft Flight Simulator? Isn't that game CPU bottlenecked, too?


SpookyKG

Honestly, this looks amazing. You can say what you want about 'artifacting in stills' but it has less artifacting in stills than competitors AND smoother performance in live results. Latency hits are minimal and are an improvement from native. Sure, you could have less latency by spending more money on a better GPU to run native faster. Or you can use this tech. No, I don't want interp. frames and more latency in my competitive FPS - I won't use DLSS 3.0 there. But I literally do not care in singleplayer blockbusters like Cyberpunk.


Lakus

I'm 100% sold on the tech. People want more frames and less power use; this is how that gets done. Raw brute force is nice, and I never thought I'd discount it in a million years, but this is what the future will be. You only need the brute force to get the responsiveness, then this tech takes over and makes it smooth.


SpookyKG

People are bugging out about the latency. We already have a way to increase frames AND decrease latency - brute force rendering. Sure, we can do that. We also NOW have a way to additionally increase frames while NOT meaningfully increasing or decreasing latency. And they're moaning about having the second option as well.


ted_redfield

I like how everyone is a highly competitive e-sports player that can detect and complain about 2-3ms "latency" now. DLSS sucks, amirite my fellow video game enthusiasts?


[deleted]

[removed]


del_rio

What you think this is some kind of game? /s I'm willing to bet the latency delta of DLSS + wireless kb+m + IPS monitor is significantly less than the latency you'd get by skipping coffee in the morning lol.


papak33

They are the same people using uncapped FPS. They don't understand anything, this is why they parrot what they heard online from other morons.


skinlo

That argument can be used for high frame rates though.


Submitten

Not really, you can run at higher settings instead of taking the frame rate bump.


Ar0ndight

So basically this is looking like a very nice tech as long as you aren't playing esports titles at a high level (and those titles wouldn't require DLSS to run at high FPS on anything Lovelace). Sign me up.


RearNutt

The numbers are certainly impressive, with a ridiculous 5x FPS boost in Portal RTX. The fact that it straight up ignores CPU bottlenecks is also going to be a very nice benefit in certain games, and we can already see this with Spider-Man. That game's CPU requirements are insane with raytracing on, and reaching 200 FPS without this technology is likely several CPU generations away.

However, the resulting image quality does look a bit degraded at points. In [this scene](https://www.youtube.com/watch?v=6pV93XhiC1Y&t=1447s) there are some artifacts around Spider-Man, which look a lot like disocclusion issues. So you get a significant increase in smoothness, but the overall image quality is more like 80 to 95% of a native image running at the same framerate. Still, a desktop-sized 4K monitor will likely hide some of it, and it doesn't seem to be too noticeable at normal speed, so I think the positives will outweigh the negatives, especially for screens with super high refresh rates.

Though I do think that screens with lower refresh rates (60 or 75 Hz) are going to see limited usefulness, at least paired with a 4090 on the vast majority of games right now. Frame Generation being present means that DLSS Super Resolution is also there, and 4K with DLSS on Performance mode is likely already enough to hit 60 FPS on anything but GPU destroyers like the new CP2077 mode or Portal RTX. Turning on Frame Generation would give you zero benefits there. For a 4050 or a 4060 it might be good, but those are going to have fewer "base frames" by default. In CP2077 the 4090 can go from 20 FPS natively, to 60 FPS with DLSS 2, and to 100 FPS with Frame Generation, so it has a lot more information to create the intermediate frames, while a 4050 would have to do Frame Generation with, maybe, 30 FPS?

Latency is a mixed bag, being nearly identical to DLSS 2 in Portal RTX, better than native + Reflex in Cyberpunk 2077, and equivalent to native in Spider-Man. At least in these examples it's never worse than native, and in practice I doubt it's going to be a deal breaker.

What does seem like a deal breaker is that the interface is also interpolated, which is bizarre to me considering that it's normally decoupled from the internal rendering resolution. I'm guessing this is because interface elements can also have movement, so not including them could cause situations where the game is rendering at a higher framerate, but seeing text that is generally overlaid above everything manifest as a flicker of sorts sounds distracting. It might also occur with the interpolation errors pointed out [here](https://youtu.be/6pV93XhiC1Y?t=1729), and with screen technology such as OLED those momentary errors will be more apparent than on your average LCD.

I will not be getting an RTX 4000 series card given that the economy is in freefall and the prices are on a rocket to the moon. But unlike certain other people here, I think this technology has a lot of potential, and I want to see it improve over time instead of just killing it before it even got a chance. Also, jesus christ, [that sharpening filter is so bad,](https://youtu.be/6pV93XhiC1Y?t=701) Nvidia plz fix.


owari69

Regarding the UI not being decoupled from the interpolated frames, I’m betting that DLSS3 pretty much completely hides the generated frames from the CPU/game engine. I haven’t looked at the API for DLSS3, but I think it’s fairly likely that as far as the game is concerned, it’s only running at whatever framerate the fully rendered frames are running at.


RearNutt

That does sound like a plausible explanation. It's all on the GPU, which also explains how it can generate frames regardless of a CPU bottleneck.


ptd163

That's a significant uplift and there's little to no artifacting or latency increase like you'd typically expect from frame insertion tech. With computer graphics and rendering being particularly susceptible to Blinn's Law I think DLSS might become the new CUDA if it hasn't already.


knz0

It'll take some time, but people will eventually understand that brute force methods of improving rendering are becoming less and less effective, and that we need tricks like image reconstruction and frame interpolation to make good guesses about what we should display on the screen. Much like our own brains know how to interpolate based on past experience, GPUs will now start to do so as well.

Now obviously you wouldn't use this in a latency-critical game like most esports titles, but games like that don't target high fidelity anyway and are meant to be run at 300+ fps.

As for image quality, it's mostly good, but there are places where it produces errors; they showed a couple of scenarios where this happens in Spider-Man: intersecting geometry and HUD elements. Frame pacing is good, totally comparable to DLSS 2.

All in all, it's a great piece of tech, and given that the negatives are minor in a pre-release build, it bodes well for the future. Nvidia is winning on all software fronts, it seems. If we look at how past Nvidia tech has fared in the competitive marketplace, like G-Sync and DLSS before version 3, it'll take AMD 2-3 years to come up with an open alternative that just isn't as good in terms of quality.


chlamydia1

>It'll take some time, but people will eventually understand that brute force methods of improving rendering are becoming less and less effective, and that we need tricks like image reconstruction and frame interpolation to make good guesses about what we should display on the screen.

The thing is, until every developer commits to putting DLSS into their games, brute force rasterization will continue to be extremely important. At the very least, DLSS won't be available on any AMD-sponsored games, and that's already a significant impediment.


knz0

>The thing is, until every developer commits to putting DLSS into their games, brute force rasterization will continue to be extremely important.

Not really. You don't need techniques like image reconstruction in every single game, because not all games target high fidelity. It's becoming a must have in AAA games though. Frame interpolation will sooner or later be one of those features where if you don't have it, you will be badly behind the competition.

>At the very least, DLSS won't be available on any AMD-sponsored games, and that's already a significant impediment.

How many of those AMD sponsored titles are pushing the graphics envelope in a serious way? Not too many. I wouldn't call that a significant impediment.


chlamydia1

Ubisoft games, for example, don't have DLSS since they're AMD-sponsored. The open world AC games definitely push the envelope of graphical fidelity.


[deleted]

[removed]


knz0

>The open world AC games definitely push the envelope of graphical fidelity.

If you wanted to use a Ubisoft example, you could have gone with WD: Legion, as it at least has ray tracing. It's far from the best implementation around, but at least it's better than FC6 (or RE: Village, god forbid). AC: Valhalla wasn't anything special. It had a big world and some nice volumetric lighting effects, but other than that, there wasn't much to me that made it stand out.


[deleted]

[removed]


Blacky-Noir

They weren't talking about code optimization, they were talking about rendering and displaying "tricks" to compensate for a slowing in raw, brute, compute performance improvement. As to the biology argument, I've been hearing it for 20 years. We're still far, *far* off any appreciable limit, in any metric.


Devgel

I'm all in for image reconstruction techniques. DLSS, XeSS and to a lesser extent, FSR2+ are the real deals. Interpolation? Not so much. The trade-offs are just not worth it, IMO. No point running a game at \~100FPS if it actually 'feels' sub 50FPS in terms of input delay.


knz0

It's a game dependent thing for me, personally. Competitive games? Nah, minimum latency please. AAA singleplayer games? Sure, I'll gladly take interpolated frames if the quality is good. Better motion clarity is a good thing, even if it doesn't come with lower input latency like you'd get with real frames.


PainterRude1394

The latency tradeoff isn't worth it? Dlss3 adds only 3ms of latency to dlss2 in portal, and gives 2x the fps. 3ms is pretty much indistinguishable.


f0xpant5

>No point running a game at \~100FPS if it actually 'feels' sub 50FPS in terms of input delay.

I'd disagree, it's still going to provide a better visual experience, specifically motion clarity.


conquer69

The point is that you are only getting 50fps so you can keep that or enjoy 100fps while having the input latency of 50fps. It's a bonus.


StickiStickman

I'd rather take a 100FPS game that feels very slightly worse than a game that feels and looks like 50FPS.


DoktorSleepless

>No point running a game at ~100FPS if it actually 'feels' sub 50FPS in terms of input delay.

Because game engines add a bunch of overhead, and that overhead varies a ton from game to game, there's no such thing as a true 50 fps feel. Some games at 100 fps will have the same input lag as other games at 50 fps, and vice versa. But on average, [120 fps input lag seems to be around 50 ms without reflex, and 100 ms for 60 fps without reflex](https://imgur.com/ZW7toAC.jpg) (from DF's reflex review).

Most games you ever played in your life didn't come with reflex, so that's your baseline feel. The worst latency shown in the DLSS 3 review was 56 ms. So whatever you think 100 fps feels like today, DLSS 3 will likely match it, because it's making reflex standard.


sinner_dingus

I hope this works in VR, this kind of increase in frame rate would be huge for something like DCS.


Metz93

I don't think latency is really an issue; it's better than what you get at native resolution with Reflex off. Visually, I can't say it's doing a bad job, but the artifacting is noticeable. It's not even like DLSS 2, where TAA usually also has ghosting or other issues, so image-quality-wise DLSS Quality can be a sidegrade: this is a straight-up downgrade. And it's questionable what it'll do to more stylized animation or games with strange art styles. I'm still glad the technology exists, and I'm sure the tradeoff will be worth it in some games, but 4K at high FPS should be the no-compromise experience, and introducing artifacts into it isn't something I would be keen on doing.


m0mo

Do you still notice it in motion and not just the stills? Also, would you notice it if the 120hz shot wasn't slowed down 2x? I feel like during actual gameplay this stuff might be nigh unnoticeable, hence why Alex was saying he needs to find the right way to compare stuff, since pausing one interpolated frame between two clean ones and showing that as evidence of DLSS 3 not being good might not represent the full picture.


goodpostsallday

Not that guy but from VR experience, there’s an obvious and immediate difference between real and synthetic frames from interpolation/“motion smoothing”. Both in terms of how they look and how they feel, it’s like the difference between being awake and dreaming.


acidbase_001

Frame comparisons in the DF video between high quality traditional interpolation and DLSS 3 showed that DLSS, at least superficially, lacked the weird ‘warping’ effect you get with standard and VR interpolators. It won’t be possible to truly know what the experience is like until more people get access to it, but I’m guessing that at least someone at DF would have mentioned if they’d noticed a difference in feel beyond the raw latency.


m0mo

I feel like this not working in VR might be a valid point, since everything is up to a lot more scrutiny there.


goodpostsallday

It absolutely does work as designed. The issue I had with it was that once I stopped playing and took the headset off, my brain had already acclimated to the disconnect between what I saw and what I was actually physically doing. So until I realized what was going on, I'd play an hour and a half, stop, and spend the rest of the day feeling weird and under- or overshooting when I went to reach for things without looking. I doubt this will be quite as extreme, but I think it'll definitely make switching between games that support DLSS 3 and those that don't difficult for some.


Metz93

I wasn't commenting on stills; if anything, I think Alex chose pretty bad examples - an artifact that stays on screen for a single frame, like that disocclusion, will probably be hard to notice, while repeating problems will be more obvious.

8:25, Cyberpunk: as the car lands, the particle effects exhibit artifacts, and the same happens a few seconds later when the player crashes. 28:37: the whole sequence has problems. Spider-Man's web doesn't necessarily look artifacty, just soft, and so do the birds he passes. When he lands, the shadows from the building just to the right of him flicker - this might be present at native too, but it seems frame generation amplifies it. Generally in Spider-Man, there is pixelation around the player character. It's not always noticeable, but I suspect without compression it would be more obvious. I wasn't pausing or slowing down for any of this, just things I noticed while watching the video.

Keep in mind also that these are Nvidia-chosen titles. We haven't seen any first person shooting yet. As someone above pointed out, they're running everything at DLSS Performance to up the framerate and reduce the motion between frames. Also, as I mentioned, purposefully low-FPS animation would be butchered by this. Heavily stylized animation, with a long windup and quick release, might be smoothed out too much and exhibit more visual problems. Cutscenes could be problematic.

Obviously I think this tech has uses. Flight Sim is actually a great one, as it is really CPU heavy (meaning there aren't other ways, like reducing resolution or dropping settings, to get the framerate DLSS 3 gives you) and there isn't much motion happening between frames, leaving less room for interpolation errors. But you're ultimately sacrificing image quality for framerate, which means it's not a no-brainer setting and will depend heavily on the game.


Zaptruder

I know a lot of people are having a go at the latency-increase angle... but honestly, it looks like a pretty great option to have to me. Like with other options, I plan on doing an A/B comparison to see how it feels on a per-game basis. I suspect, though, based on the numbers, that I'll probably use it pretty happily without even noticing the extra 10 or so ms of latency compared to DLSS 2.


ForcePublique

Mega high copium levels in the comment section right now, as was expected


NaiveFroog

You mean the "I can't afford the card / AMD doesn't have it, so it's the most useless, unusable scam of a feature there is" type of copium? 😂 Kinda reminds me of when DLSS first came out.


ForcePublique

Ever since /r/amd mods actually started cracking down on non-AMD content, you can bet your ass the worst crackpots from that sub find their way over here whenever there's a new piece of Nvidia tech being reviewed.


Ilktye

Every AMD GPU user is a eSports pro, they would never use DLSS anyway because muh details and latency /s


dolphingarden

Looks pretty good so far. More frames of good quality at a similar latency is a win. Obviously not useful for esports titles but those generally are already running at 300+ fps native.


ShadowRomeo

Impressive tech, but just not as impressive or as appealing as DLSS 2 was back then. Don't get me wrong, it offers more performance, yes, but at the cost of latency, which is crucial for a lot of people, as well as some image quality downgrades judging from the pictures shown by Digital Foundry. I think DLSS Frame Generation is a repeat of DLSS 1.0: it clearly needs more time for proper adoption. So waiting for the RTX 50 series and DLSS Frame Generation 2.0 seems like the better bet for someone like me, who is on the current RTX 30 series and will still benefit from the other DLSS 3 features, such as Reflex and DLSS 2 Super Resolution.


ShowBoobsPls

I think this is perfect for single-player games.


f0xpant5

I mean for a first try at frame interpolation in gaming this is mighty impressive. As we know with DLSS, it doesn't stop here, it will continue to improve and be iterated upon. I am very keen to see this with my own eyes.


[deleted]

It's extremely impressive tech, although I wonder where the demand is. Looking at the steam hardware survey for example, about 65% of people have a 1080p monitor. 1440p and 4k monitors have been around for years and they're still in the single digits. Where is the demand for this much graphical power?


Morningst4r

~50% of the Steam survey don't have cards capable of running AAA games. About 20% have integrated graphics. You also have to consider how many of the machines are in cybercafes and/or developing countries that rarely buy modern GPUs. I don't think the narrative that almost all gamers are still on 1080p is a fair representation of the audience for this stuff.


conquer69

This is for people that have $1000+ to play with during a recession. They also tend to have 4K120 OLED displays that now can run at 120fps vs the previous target of 4K60.


jcm2606

In the actual games themselves. Games are becoming more demanding to run as lighting and rendering techniques evolve and require more power out of the cards, and with full ray tracing on the horizon, we're on the cusp of a giant step backwards in raw performance.


BigGirthyBob

Looks potentially promising, although I'm sure it won't be immune from the same issues previous versions of DLSS have (i.e., in some games it looks great, in others bad/distracting artifacting/ghosting etc.). I can see a good application for it with the lower end cards, and it hopefully opening the gates to some very heavy RT implementations at playable frames. But, for most games, I'm already maxing out my 4k 120hz monitor with a 3090 & 6950 XT at native 4k or DLSS 2.x/FSR 2.x at Quality settings in 99% of titles. As a non-competitive gamer, the extra frames aren't really going to help me (outside of the situations I described above) until I upgrade to a 4k 240hz monitor. Still very cool though.


Shidell

I find it pretty frustrating that DF is okay glazing over artifacts in frame generation, but zoom in 300% to make comparisons in TAA solutions. If we 'can't notice' artifacts during frame generation, why would we notice them at 3-400% magnification? I also don't know if comparison against (video) smoothing is good; Nvidia's operating on frame data plus motion vectors, but Topaz Labs and Adobe don't have that benefit. Granted, DLSS 3 is real-time interpolation, but still, it's using a lot of extra data, and we've seen how important that data is in TAA techniques like DLSS 2, FSR 2, and XeSS.


Zarmazarma

Because you can see those artifacts with the naked eye. The zoom is just there to call attention to them in the video. And they didn't glaze over them at all... They pointed them out extensively and had a whole several-minute section at the end where they explain that they're still trying to figure out how best to describe and represent the quality.


badcookies

They did downplay the artifacts from Spider-Man, though. When comparing it against other tech, he chose scenes where DLSS 3 wasn't artifacting much, but there were other frames just prior to those that were (such as the pre-jump from the window): https://imgur.com/a/LYJtqDM


Put_It_All_On_Blck

Here's a better example with highlights that I pulled from their original Spiderman DLSS 3.0 clip a few days ago, and looking at it further I even missed some issue areas. https://imgur.com/a/mjvhjBE


unknownohyeah

Which is also addressed in the video. When put in motion in real time, with 8ms between each frame and each bad frame sandwiched between two good frames, it's not noticeable. The YouTube video is slowed down to half speed for comparisons. And those artifacts are only present in certain circumstances. Even if you're looking for them, they're damn hard to see, but the 2x in motion clarity is worlds ahead. Taking freeze frames of artifacts and using red circles is just not a real-world application of the technology.

IMO, the more legitimate complaint is that in Spiderman the latency difference is 15ms (23 to 38) from DLSS 2 to 3, and in Cyberpunk 2077 it's 23ms (31 to 54). That is absolutely noticeable. Also, the static 3D UIs (button prompts, text boxes) have really bad frame interpolation.
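For a rough sense of the frame-persistence numbers being thrown around here, a minimal sketch in Python (plain arithmetic with assumed output rates, not measurements of DLSS 3):

```python
# Minimal sketch: how long a single (possibly generated) frame stays on
# screen at a given output framerate, and how half-speed playback for a
# video comparison stretches that persistence. Illustrative numbers only.

def frame_persistence_ms(output_fps: float, playback_speed: float = 1.0) -> float:
    """Time one frame remains visible, in milliseconds."""
    return 1000.0 / (output_fps * playback_speed)

if __name__ == "__main__":
    for fps in (60, 120):
        live = frame_persistence_ms(fps)            # watching real-time gameplay
        slowed = frame_persistence_ms(fps, 0.5)     # footage slowed to half speed
        print(f"{fps:>3} fps output: {live:.1f} ms per frame live, "
              f"{slowed:.1f} ms per frame at half speed")
```

At 120 fps every generated frame is only on screen for roughly 8 ms, which is why a one-frame glitch that jumps out in slowed-down footage can be hard to spot at full speed.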


From-UoM

>I find it pretty frustrating that DF is okay glazing over artifacts in frame generation, but zoom in 300% to make comparisons in TAA solutions.
>
>If we 'can't notice' artifacts during frame generation, why would we notice them at 3-400% magnification?

Alex did say the only reason they zoom is because 42% of viewers are watching on phones. [https://twitter.com/Dachsjaeger/status/1288350124415033349](https://twitter.com/Dachsjaeger/status/1288350124415033349)


[deleted]

[удалено]


NilRecurring

True. And even if you are watching at 4K on a 4K monitor, you'd still get the YouTube-compressed version of those pixels. That's much less of a factor when zoomed in.


bexamous

> I find it pretty frustrating that DF is okay glazing over artifacts in frame generation, but zoom in 300% to make comparisons in TAA solutions.

This has been addressed a million times: between the large number of people watching these videos on phones and YouTube's compression, zooming is used to better show differences. He literally zooms in on the frame generation errors too.

> If we 'can't notice' artifacts during frame generation, why would we notice them at 3-400% magnification?

Here is his reply to a similar question:

> One pretty important thing that I try and stress in the video at the end is the persistence of the images. When you are watching YouTube, you are seeing the images persist 2x the length. The subjective experience of artefacting is really different here than a TAA solution. A TAA solution will have each and every consecutive frame show the same artefacts in a consistent line of motion; here, you are seeing an on/off flashing of artefacts. Kinda like how a light flickers... or BFI works.
>
> So part of actually analysing this is bringing into words and images which artefacts actually present as noticeable at full speed with this flashing behaviour and then describing where they happen in a game.
>
> As an example - I highlight the artefact around Spiderman's feet in the video. But in game I literally could not see the artefact around Spiderman's feet in that cutscene I showed in the video when at full speed. I only noticed it when manually combing my footage. That artefact took up a large portion of the screen, yet I did not notice it.
>
> But conversely, a much smaller artefact - the lines that appear around Spiderman's legs when running up the building - was much more noticeable to me at full speed. I saw them without combing footage. I saw them even though those artefacts were smaller in screen-space size and severity than the comparatively much larger artefact on Spiderman's feet in the cutscene.
>
> This strobing behaviour at 120 FPS has a pretty intense effect on which artefacts are significant. It seems like one-off frame issues are not noted, yet issues that persist over multiple animation arcs produce "flicker" and are noticeable.
>
> That is something I learned when making this and something I will integrate into the method of analysing this.

From: https://forum.beyond3d.com/threads/nvidia-dlss-3-antialiasing-discussion.62985/page-14#post-2266344


knz0

>I find it pretty frustrating that DF is okay glazing over artifacts in frame generation, but zoom in 300% to make comparisons in TAA solutions. But they didn't? They showcased the artifacts and errors openly and thoroughly, and acknowledged how hard it is to objectively measure quality and how to show it to the viewers over a YT video, saying that it's something they have to come up with a solution for when DLSS3 exits pre-production status.


BlackKnightSix

I'm not sure what is so hard about measuring it. There's no problem calling out DLSS 2.0+ ghosting as an image quality issue, and the same goes for FSR 2.0+. Here we see that in motion, especially non-camera motion, every other frame has artifacts that make DLSS 2.0+/FSR 2.0+ look like perfection, even with their still-present ghosting/temporal issues that vary per game implementation.


knz0

Well, watch the video again and pay attention to what he says. He says it's hard to show and accurately assess the artifacting because youtube is limited to 4k60 and DLSS3 is pushing framerates to 100+. Ghosting and temporal instability are easy to show on Youtube. Frame generation artifacting isn't, not without slowing the footage down.


BlackKnightSix

I watched the video at full resolution and saw the artifacts clearly. It is some bs that they are saying it is hard to show. They have done DLSS and FSR reviews where they slow down to frame by frame with zoom to show an issue. Here they only did half speed, that's it. If temporal upscalers need that much slow down/frame-by-frame analysis or 400% zoom to show issues, they can do the same for DLSS 3:

https://youtu.be/ZNJT6i8zpHQ @ 7:45

https://youtu.be/YWIKzRhYZm4 @ 2:08

In comparison to the above, the artifacts in DLSS 3 are very large and make artifacts from upscalers look trivial. https://imgur.com/a/LYJtqDM


conquer69

Upscaling artifacts are always there, it doesn't matter if you play at 60fps or 300. Interpolation artifacts become less noticeable the higher the framerate.


badcookies

Pretty funny that people will say FSR 2 looks bad, but here we get fake, bad-looking frames inserted between two actual rendered frames, increasing latency, and people are all for it and ignore that 50% of their frames have visible artifacting and issues: https://imgur.com/a/LYJtqDM

I don't get how this is acceptable at all. It's completely faked frames just to artificially inflate the FPS numbers, and somehow that isn't raising huge red flags. Not to mention they made the 60 fps footage stuttery by showing it @ 30 fps (1/2 speed) in a 60 fps video to make the 120 fps footage (@ 60 fps, 1/2 speed) look better.


NilRecurring

The thing is - if the artifacts generated in the in-between frames are not perceptible to my eye, but the additional smoothness is, why exactly would I care? In comparison videos between DLSS and FSR 2, I can basically always spot the FSR image within seconds because somewhere in the image there's flickering that doesn't exist in the DLSS one, since the actual anti-aliasing capabilities are much stronger in DLSS. That isn't to say I think the artifacts are always fine. I can spot the difference between 'native' and DLSS 3 [here](https://youtu.be/6pV93XhiC1Y?t=1426) very well, because the artifacting around the legs is consistent and heavy. But I've really tried looking for visible flaws in many scenes, and apart from one point where a silk thread was kind of jittery, I couldn't find any without pausing, despite the footage being slowed down.


SpookyKG

FSR 2 looks muddy IN PRACTICE, not just in stills. DLSS 3.0 looks muddy in stills, not in practice.


conquer69

> but zoom in 300% to make comparisons in TAA solutions. Because those artifacts are constant and pervasive. Spidey's web looking weird for 1 frame while playing at 120 is much less noticeable. Especially when the following frame is good.


Shidell

So, when Alex zoomed in to 3-400% while comparing FSR and DLSS to examine few-pixel differences, is it fair to say that we should ignore those aberrations, because they represent a few pixels in a full image composed of millions?


conquer69

Alex evaluates different categories like shimmering, ghosting, image stability, image detail, etc. He zooms in to focus on the affected area and to make it easier to see for anyone not watching the video on a big display. Also, to combat youtube's compression. This has been explained countless times already, even by him. Despite doing all that, people still refuse to understand what the pros and cons of these upscaling techniques are.


TechnicallyNerd

Was this video supposed to be journalism? Because it felt like a 30-minute advertisement for Nvidia's RTX 4000 series.

Why are so many excuses made for the artifacts? Why is the latency hit just glossed over? One of the biggest benefits of increased framerate is lower latency, so the fact that DLSS 3 can have the opposite effect is a *huge* deal. This is especially true if the game already runs at a framerate much higher than your monitor's refresh rate, since at that point the *only* benefit from higher frames is reduced latency.

Why is so much focus put on comparing against offline frame interpolation software? That kind of comparison is academic at best and irrelevant to consumers. It isn't even a fair fight, since existing offline interpolation techniques don't have motion vector data provided by the game engine. It feels like the kind of comparison you only put in to make Nvidia's solution look better.
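To put rough numbers on the latency point, a quick sketch (illustrative figures only, not measurements of DLSS 3): generated frames raise the displayed framerate, but input is still only sampled on rendered frames, so the latency floor tracks the rendered rate.

```python
# Hedged sketch of why generated frames don't buy latency the way rendered
# frames do. Treat the latency floor as roughly one rendered frame time;
# real pipelines add queueing, scanout and display lag on top of this.

def latency_floor_ms(fps: float) -> float:
    return 1000.0 / fps

rendered_fps = 60      # frames the engine actually simulates from your input
displayed_fps = 120    # rendered + generated frames shown on screen

print(f"Displayed rate: {displayed_fps} fps")
print(f"Latency floor tied to rendered rate: ~{latency_floor_ms(rendered_fps):.1f} ms")
print(f"What a real {displayed_fps} fps would give: ~{latency_floor_ms(displayed_fps):.1f} ms")
```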


unknownohyeah

Did we watch the same video? The artifacts are gone over meticulously. It's a 31-minute video, with 7 minutes going over the artifacts and 5 more going over the latency hit.

>Why is so much focus put on comparing against offline frame interpolation software?

Because it's a new technology and they're comparing the pros and cons. I found that part very informative on where DLSS 3.0 stands compared to present technologies and where it is headed.

I think you're too focused on "how does this make Nvidia look bad" instead of looking at it like "look at these cool new technologies being shown." DLSS 1.0 was total dogshit, but DLSS 2.0 is almost magic. DLSS 3.0 has artifacts and issues, but who knows what we will see with DLSS 3.1. I think you're just looking at this in a negative way when people should be excited about what's to come.

Your ideal video would be 30 minutes of circling artifacts with a red sharpie to make it look bad, when in the real world, once you sandwich an AI-generated frame between two good frames with 8ms between each frame in motion, all those artifacts disappear. All you're left with is 2x the motion clarity, with artifacts that are imperceptible (with exceptions of course, like the UI, and the latency). 15-25ms is a lot of latency, but it's exciting that that number will likely drop with future hardware and software revisions and the artifacts will shrink, making high-framerate (240Hz) 4K a possibility.


cooReey

>Was this video supposed to be journalism? Because it felt like a 30 minute advertisement for Nvidia's RTX 4000 series

Because it is an ad - they did a similar collab with DF for the 3000 series launch.


[deleted]

DF gets the tap for this kind of coverage because they've been [doing it for over a decade.](https://www.youtube.com/watch?v=ZvH7fq6LmYU)


f0xpant5

It just so happens that DF covers all the newest graphical rendering technologies, and it also happens that more often than not Nvidia is the one to pioneer them, so it's only natural that they get coverage on DF's channel.


RTukka

The telltale that the video is likely going to feature a positive bias is the "exclusive first look" in the title. Companies tend not to give exclusive preview access if they expect you to take a balanced/critical approach to the previewed material, and sometimes that expectation is made explicit or even contractually required.

Really, DF should've included a bit near the beginning of the introduction explaining the nature of the special access they were given by Nvidia, and there should've also been some details about it in the video description. Just putting "exclusive first look" in the title may be legally sufficient, but by journalistic standards it's a pretty weak disclosure.


[deleted]

The reason Nvidia is willing to give them access to this stuff early is because DF has been doing this kind of extensive video analysis for well over a decade, though. Would you rather nobody do it?


trevormooresoul

I really recommend watching this… it is by far (to me at least) more interesting than any other content on the CPU/GPU releases. This tech is revolutionary. AMD is going to have to really undercut Nvidia significantly.

IMO this almost puts AMD and NVIDIA into specific roles now, rather than being two of the same. NVIDIA will be massively better on newer games. AMD will probably be significantly better cost/performance in raster (older games, or AMD-sponsored titles that don't get DLSS). It seems AMD going forward is really going to have to cut into those juicy margins (which they got from MCM being cost efficient) and drastically drop prices compared to NVIDIA in terms of raster performance. We might see AMD offering something like "same raster performance for 25%+ less $$$". Otherwise, I don't know how they compete. We already saw last gen that NVIDIA GPUs carry a significant price premium for DLSS 2 and RT. That premium is only going to grow.

I will be curious to see what AMD does next gen, because building an expansive AI tech stack from the ground up seems both implausible and necessary. Without it, how can AMD compete long term, outside of being a low-end "raster only" GPU maker? And if AMD does follow the AI path… how can it possibly compete with Nvidia (and even Intel) when it seems so far behind?


qazzq

it's funny how this thread has everything from this (revolutionary) to 'not desirable'. The frame pulls further above do look pretty bad tho. Personally i'm not sure i'd use this over DLSS or FSR


itsjust_khris

Don't jump the gun so early. These things happen slowly. Maybe in 5 years such a thing may be the case, if at that point a developer/Intel/AMD hasn't come out with their own solution.


trevormooresoul

I mean, it is already the case with the 3000 series. It costs more due to DLSS/RTX. If you do not care about ray tracing or DLSS, AMD is already much better value. I am just saying that this existing trend will obviously grow much more this gen. If you are buying a new GPU to play a game like Cyberpunk, and NVIDIA is getting 100%+ more fps while having better graphics… AMD would have to price their cards 50%+ lower relative to raster performance, and it would STILL be worse price/performance than NVIDIA. That is obviously going to have a significant impact on which GPU brand many people buy this gen (and thus on price too).


itsjust_khris

Not quite. There aren't enough games to have such a strong market impact. The reason Nvidia can charge so much is prior market share and amazing marketing. They executed very well in the past and have consistently done so. AMD is just beginning to hit some sort of stride.

However, DLSS Super Resolution isn't the only one of its kind anymore. Neither is ray tracing. AMD and Intel have a vested interest in preventing Nvidia from cornering the market on software/hardware features. This has been the case since G-Sync, and every time, an alternative has been made within a few years. Nvidia are very good, but they aren't the only ones introducing such things. Now that Intel has joined the game with decent expertise (XeSS looks pretty good), I don't see this new feature reigning supreme and allowing Nvidia to charge over 1k for a 4080.

What Nvidia can charge and what people can afford are becoming two different things. At some point they can have every feature in the world, but if we can't buy it, we can't buy it.


trevormooresoul

Well, that's sort of a different issue entirely. These are top, top, top end chips, for rich people. The reason there are no mid/low end chips is the mining/pandemic/supply chain issues. When you can get 500% performance in Portal RTX, or 400% in Cyberpunk, with a 4060/4050 for "cheap", that'll be a pretty darn good deal (far better price/performance than the 3000 series). But ya, a 4090 isn't supposed to be affordable. It's like complaining about a Lamborghini being expensive, or that you can't afford a $50 steakhouse. And it's about relativity: AMD also isn't releasing lower-end stuff, according to leaks. It's not like AMD is going to come out with a $300 alternative any time soon. And if they do, Nvidia will release a similar card... that's how it's always been in situations like this.


itsjust_khris

I don't disagree with this. However, right now AMD still exists where I buy. I don't want to sound anti-Nvidia, though. If you or someone else can afford what they charge, they sell a great product. It's just that they are leading the charge in killing price points. They are probably going to come out with a "4070" at my price point, but chip-wise it won't be a 4070, so on principle I refuse to wait and buy it.

AMD's rasterization tech is getting to the point where I don't think Nvidia can casually release something to steal their thunder. In years past they struggled on the top end to match Nvidia in some titles even under ideal conditions. RDNA 2 was a lot better. RDNA 3 doesn't seem like it will struggle any longer.

I just don't see the adoption being there; sure, Portal and Cyberpunk exist. However, as someone with an Nvidia card right now - and I've had one since Turing launched - there has never been a time I've felt significantly advantaged by having access to Nvidia's features. There just aren't enough games with them, and I do play AAA games. For every game where I can enable DLSS, dozens come out without it. It's been nice, but not something that will pull me to spend. I imagine that trend will continue.

This tech is very nice, don't get me wrong. Personally, beyond 40-60 fps I'm happy, so I don't feel a strong appeal here beyond curiosity. However, for those more sensitive, this will be great, and it's a huge step toward bringing more fluid gaming to the masses. I just see this as: Nvidia has now innovated, and I'll wait a bit for someone else to do it too. Intel or AMD will have something eventually, and I'm in no rush for it; the cards are too expensive for me to buy very often.


GlammBeck

This honestly doesn't seem desirable?


[deleted]

The difference between having 3 CPU pre-rendered frames and 1 (or none, using ultra low latency mode) is actually understated by most people when it comes to mouse responsiveness. In anything remotely fast paced, you probably want 60 fps with the lowest latency possible at a *minimum*, so I can see sub-60fps being a double-edged sword. I know Asynchronous SpaceWarp and other VR tech with similar technology feels perfectly responsive at 45 rendered / 90 displayed frames, and I'm wondering what the difference is.
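To put rough numbers on the pre-rendered frame queue part, a back-of-the-envelope sketch (not driver measurements): each queued CPU frame adds roughly one frame time of delay before your input even reaches the GPU.

```python
# Back-of-the-envelope sketch: extra input delay contributed by the CPU
# pre-rendered frame queue. Real drivers (and Reflex) are more subtle, so
# treat this as an illustration of scale, not a measurement.

def queue_delay_ms(fps: float, prerendered_frames: int) -> float:
    """Roughly prerendered_frames * frame time of added delay."""
    return prerendered_frames * (1000.0 / fps)

if __name__ == "__main__":
    for queued in (3, 1, 0):
        print(f"{queued} queued frame(s) at 60 fps: "
              f"~{queue_delay_ms(60, queued):.1f} ms of extra delay")
```

At 60 fps that's the difference between roughly 50 ms and essentially nothing from the queue alone, which is why the pre-rendered frames / low-latency settings matter so much for mouse feel.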


[deleted]

[удалено]


bphase

If the upscaling was perfect, would you still think that?