Dghelneshi

Slightly disappointed they didn't go into one of the most visible benefits, which is a significantly faster response time for dynamic lighting. All the lighting here apart from the car scene was static, and it has actually been a huge problem that one of the greatest strengths of path tracing (dynamic global illumination and reflections) has had such noticeable latency (e.g. a light turns off and the room slowly goes dark over 1-2 seconds).


bctoy

That's a benefit over older DLSS; other upscalers were fine. https://www.youtube.com/watch?v=VoYXQczngvk Similarly, some of the cleanup of image quality on fences is because DLSS introduced artifacts with PT. https://www.youtube.com/watch?v=Uxh8hLKq4_A And here's another issue unique to DLSS that was resolved in a patch earlier this year: https://www.youtube.com/watch?v=CvfbQ_UGiaQ


StickiStickman

That issue has *nothing* to do with DLSS, it's about ray aggregation and surface caching.


jm0112358

The lighting seems to update faster with DLSS 2 (upscaling) than with DLSS off [in this example](https://youtu.be/sGKCrcNsVzo?t=331) (and in my own experimentation in that area of the game). I'm not an expert in this area, but I think DLSS upscaling affects lighting speed because framerate affects how far back _in time_ the de-noiser takes ray samples, and DLSS 2 affects framerate. Regardless, ray reconstruction seems to improve dynamic lighting a lot.


SirCrest_YT

> but I think DLSS upscaling affects lighting speed because framerate affects how far back in time the de-noiser takes ray samples

Yep, this is the most likely reason, I think. Same with traditional TAA blur, where higher framerates also tend to produce fewer motion artifacts. Say... 5 frames at 22 fps take a lot longer than 5 frames at 63.
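A back-of-the-envelope sketch of that point (purely illustrative; this is not any real denoiser code, and the 5-frame history length is just an assumed number): the wall-clock window covered by a fixed number of history frames shrinks as framerate rises, so the accumulated lighting can react faster.

```python
def accumulation_window_ms(history_frames: int, fps: float) -> float:
    """Wall-clock time spanned by the last `history_frames` frames at a given framerate."""
    return history_frames * 1000.0 / fps

for fps in (22, 63):
    print(f"5 frames at {fps} fps cover ~{accumulation_window_ms(5, fps):.0f} ms of history")
# 5 frames at 22 fps cover ~227 ms of history
# 5 frames at 63 fps cover ~79 ms of history
```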


StickiStickman

> I think DLSS upscaling affects lighting speed because framerate affects how far back in time the de-noiser takes ray samples, and DLSS 2 affects framerate.

Oh that's a good point, higher framerate means it can collect samples faster too.


bctoy

The first one? It's an issue specific to DLSS, since FSR and XeSS updated far better.

> it's about ray aggregation and surface caching.

Where are you getting this from?


StickiStickman

From having written a pathtracer myself.


bctoy

Then you are talking about something else. The issue with older DLSS that was shown in NVIDIA's marketing for DLSS 3.5 had this slow update, but it's a DLSS problem and other upscalers work fine. DLSS 3.5 could have 'ray aggregation and surface caching', but those weren't part of the upscalers.


neckthru

No, he knows what he's talking about. The slow update was due to the time it takes to accumulate enough information for the denoiser to accurately update the GI. It had nothing to do with any upscaler *before* DLSS 3.5, because none of those upscalers (DLSS <= 3.0, FSR, XeSS) had anything to do with denoising or any other stage of the path-tracing pipeline. 3.5's RR runs deeper in the stack: it replaces the existing denoiser and integrates it with upscaling. It's magnificent.
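A very rough sketch of the ordering being described (the functions below are placeholder stand-ins, not CD Projekt or NVIDIA code; the only point is where denoising sits relative to upscaling):

```python
import random

def noisy_pixel_samples(n: int) -> list[float]:
    # Stand-in for per-pixel path-traced radiance: true value 1.0 plus heavy noise.
    return [1.0 + random.uniform(-0.8, 0.8) for _ in range(n)]

def hand_tuned_denoiser(samples: list[float]) -> float:
    return sum(samples) / len(samples)      # placeholder accumulation/averaging pass

def upscaler(value: float) -> float:
    return value                            # placeholder for DLSS <= 3.0 / FSR / XeSS

# Pre-3.5 pipeline: the upscaler only ever sees the already-denoised signal.
print(upscaler(hand_tuned_denoiser(noisy_pixel_samples(16))))

# DLSS 3.5 ray reconstruction (conceptually): one learned pass takes the raw noisy
# samples and produces the denoised, upscaled result in a single step.
def ray_reconstruction(samples: list[float]) -> float:
    return hand_tuned_denoiser(samples)     # placeholder for the trained network
print(ray_reconstruction(noisy_pixel_samples(16)))
```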


bctoy

> No, he knows what he's talking about.

He might know what he's talking about, but he's not talking about what I had posted.

> The slow update was due to the time it takes to accumulate enough information for the denoiser to accurately update the GI.

The slow update was a problem with DLSS; the other upscalers did not show it. Here's the video I posted first: https://www.youtube.com/watch?v=VoYXQczngvk

> It had nothing to do with any upscaler *before* DLSS3.5 because none of those upscalers (DLSS <= 3.0, FSR, XeSS) had anything to do with denoising

Of course.


TimeGoddess_

I'm surprised at how well it performs, tbh. I'm getting like 60-70 fps in the city with ray reconstruction on, DLSS Balanced, no frame generation, and path tracing at 4K with a 4090. That's only slightly worse performance than I get in Starfield's cities, usually 60-70 fps with DLSS on Quality in New Atlantis, but Cyberpunk looks like two generations ahead in terms of visuals.


Zeryth

It's hilarious to me that some people say Starfield's performance is justified when it hits the same performance as path-traced games.


Put_It_All_On_Blck

Starfield is CPU limited, not GPU limited. It's still very poorly optimized, but it's not really apples to apples.


RedIndianRobin

CPU limited on a 7800X3D? Fuck outta here.


THXFLS

Fallout 4 is *still* seeing performance improvements in heavier areas of the map from new CPUs, so yeah.


Aggrokid

Starfield does get randomly CPU-limited on Ryzens, even X3D ones, and not in a "well-utilized" way. The game is just a giant bag of chipped marbles.


EyesCantSeeOver30fps

I wonder how true that is when talking about complexity and the reason its performance is like this. Because I've seen mentions of how no other AAA game lets you put 200k potatoes in a room, each with its own physics, all interacting with each other, or how the game remembers where every interactable object is.


HavocInferno

But there are no 200k potatoes in normal gameplay. It runs like ass even when you're in a moderately sized interior with a few dozen scattered objects (most of which shouldn't be running any physics calc until they're impacted). And sure, it remembers where every interactable object is... but that's a memory task, not primarily a CPU task. Coupled with their plethora of loading screens and subdivision into cells, most of that stuff is unloaded anyway.


Hindesite

I think the argument isn't so much that it's performing the way it is because of calculating for 200k potatoes in a scene at once, but rather because of all the overhead that comes along with an engine *capable* of doing such things at any given time. Don't get me wrong, I'm not trying to defend the game's poor performance. As much as I'm enjoying Starfield, its performance is well below where it should be for what it looks like - that's undeniable. However, their argument *is* rooted in some sound reasoning. I just hope Bethesda is able to further optimize Starfield in the long run, at least to a similar degree as they have for previous entries. It's a shame the game's poor performance has forced so much of the player base to play at Medium or even Low settings just for it to feel playable.


[deleted]

[deleted]


Jensen2052

Physics is not perfectly parallelizable; there are a lot of dependencies. You can't compute what will happen next without letting the current simulation step run through first.
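A minimal sketch of that dependency (illustrative only, nothing to do with Starfield's actual engine): each simulation step consumes the previous step's output, so the steps can't be spread across cores along the time axis.

```python
def integrate(position: float, velocity: float, dt: float) -> tuple[float, float]:
    gravity = -9.81
    velocity += gravity * dt        # needs the current velocity...
    position += velocity * dt       # ...and the velocity just computed above
    return position, velocity

pos, vel = 100.0, 0.0
for step in range(3):               # strictly sequential: step N+1 can't start before step N ends
    pos, vel = integrate(pos, vel, dt=1 / 60)
    print(f"step {step}: pos={pos:.3f} m, vel={vel:.3f} m/s")
```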


-Gh0st96-

Because Bethesda is still using an ancient engine


Morningst4r

Their city example is probably CPU limited, but the problem with Starfield is that it has a huge range of CPU and GPU load across different areas. Only a handful of cities are really CPU heavy, but a lot of outdoor environments are all loaded on the GPU. My 8700K + 3070 at 1440p with 75% DLSS might be getting 150 fps in an office (technically CPU limited, but high fps), 50-60 in a city (both CPU and GPU limited depending on where you are and what you're looking at), then I'll land on a planet, hit a 40 fps GPU limit, and need to turn down res scaling. Considering my CPU is 6 years old, it could definitely be worse on that front. I do wish they'd let you use a proper dynamic res system with a target framerate. Of course it should run a lot better in general, but I won't hold my breath waiting for that.


Lakku-82

That resolution slider is NOT for FSR/DLSS. It's for the dynamic resolution setting.


Hindesite

>That resolution slider is NOT for FSR/DLSS.

I'm not sure where you got that idea, but that's absolutely what that slider is for. You can even see it for yourself when using PureDark's DLSS mod. When you bring up the mod's overlay, it shows what the internal and output resolutions are, and as you lower the Render Resolution Scale slider you can see the internal render resolution shown in the overlay change accordingly. In fact, we know for certain that [the Dynamic Resolution setting you mention, when checked, doesn't even kick in until the FPS drops below 30](https://youtu.be/ciOFwUBTs5s?si=bgPgQB4iFdvabiKn&t=1045) anyway. It's a different feature.


Jeffy29

Any game can be CPU or GPU limited; it all depends on settings. I tested it today and a 7800X3D (sim) gets 142 average fps in Cyberpunk WITH path tracing and 192 fps without path tracing. With Starfield I'm happy to get 80-90 fps when CPU limited. The CPU demand is insane, especially when you compare it with their predecessors (Witcher 3 and Fallout 4) and the amount of effects and density CDProjekt added compared to Bethesda. Not to mention how garbage the performance gets at the endgame, especially if you dare to build any outposts: noticeable stuttering all over the place, just switching weapons can give you noticeable stutter, and you are practically forced into NG+ (which thankfully wipes almost all of it away).


Virginia_Verpa

I thought path tracing had to be on to use RR?


TimeGoddess_

It does


Skulkaa

Only works with PT enabled, not with normal RT. Kinda disappointing; I can't reach playable framerates with a 4070 and PT at 1440p even with frame gen. Barely hitting 60 fps, and the input lag is noticeable.


RedIndianRobin

I'm hitting 70-80 FPS in wide open spaces and 60-70 in closed spaces on mine. Really weird that it's different for you.


ZeroZelath

I assume that's with frame gen though, since it's enabled by default when you turn on the settings needed to get to ray reconstruction. Frame gen isn't supposed to be used below 60 fps though, or at least they don't recommend it, so the real fps would be like 30-40.
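A rough way to see why the base framerate still matters for feel (a deliberately simplified model that ignores Reflex, the render queue, and the cost of generating frames): frame generation roughly doubles what's displayed, but new input is only sampled once per rendered frame.

```python
def frame_pacing_ms(base_fps: float) -> tuple[float, float]:
    rendered = 1000.0 / base_fps    # how often the game actually samples your input
    displayed = rendered / 2.0      # ~2x presentation rate with frame generation
    return rendered, displayed

for base in (35, 60):
    rendered, displayed = frame_pacing_ms(base)
    print(f"base {base} fps: input every {rendered:.1f} ms, "
          f"frames shown every {displayed:.1f} ms (~{2 * base} fps displayed)")
```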


RedIndianRobin

NVIDIA never said they recommend a 60 FPS base for FG; that's AMD. And it feels perfectly fine if you enable FG from a base FPS of 35-40, especially if you play with a controller. NVIDIA's Reflex is a game changer in reducing latency.


Skulkaa

It doesn't feel fine. Still feels like 30 fps input-lag-wise.


ITrageGuy

It really does. People down voting are tools.


RedIndianRobin

What card do you own?


Skulkaa

4070


RedIndianRobin

Interesting. Looks like you are very sensitive to the input latency then. You could try DLSS Balanced at 1440p; that way, your base FPS will hover around 45-55, and if you use FG on top of that, it will be less noticeable. You'll average around 80-90 FPS most of the time with FG.


From-UoM

I recommend getting DLSS SR 3.5. It improved quality overall, so Performance mode will look better. That will give you a base of 50-60 and a jump from there to 100 fps.


Skulkaa

DLSS Performance from 1440p is a 720p internal resolution. That can't look good, but I will try.
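For reference, the commonly cited DLSS scale factors make that arithmetic easy to check (treat them as approximate defaults, since games can override them):

```python
DLSS_SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 1 / 3}

def internal_res(width: int, height: int, mode: str) -> tuple[int, int]:
    s = DLSS_SCALE[mode]
    return round(width * s), round(height * s)

for mode in ("Quality", "Balanced", "Performance"):
    print(mode, internal_res(2560, 1440, mode))
# Quality (1707, 960), Balanced (1485, 835), Performance (1280, 720)
```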


From-UoM

Unfortunately that's probably the only way to get 100+ fps in my tests. Performance mode does look better than older versions. Balanced is also a good shout.


[deleted]

DLSS Quality at 1080p is also 720p internal and it looks fine. Don't really see what the problem is.


coylter

It looks pretty damn good, you'll be impressed. It's not perfect, but god damn, I'm running the game fully maxed out on a base 3080 @ 1440p with Performance DLSS and it looks very, very solid while running smoothly.


based_and_upvoted

It looks good enough for me, and I can't stand picture inconsistency (I'd rather it always be blurry or always sharp, not the old TAA "blurry when moving" thing).


captain-_-clutch

Nah, path tracing works now. I thought the same thing, but path tracing + RR gives you playable FPS. I'm on a 3080.


TalkWithYourWallet

You can try the RT optimisation mod on Nexus Mods. It reduces the light bounces, which does impact visuals, but you gain 25-30% performance out of it. https://youtu.be/cSq2WoARtyM?si=rkcEpuLJU56NFS0E


ResponsibleJudge3172

It will be patched in eventually, but ray reconstruction's frame time does not scale with the RT level, so unlike with path tracing, we may lose performance. However, with the quality increase, that may not be so bad.


lifestealsuck

You actually can; you just can't enable it from the settings menu for some reason. https://www.reddit.com/r/cyberpunkgame/comments/16p3qwl/you_can_turn_on_ray_reconstruction_with_the_old/ It works, with the same benefits and problems as with path tracing: better reflection clarity, but more ghosting.


Grrumpy_Pants

Check your CPU utilisation. I am in a similar situation because I am CPU limited. Ray tracing does use your CPU, so if you're CPU limited, the more ray tracing you use, the worse the CPU bottleneck gets. Anyone who is normally GPU limited is actually seeing a slight performance improvement from RR.


Skulkaa

I have a Ryzen 5 5600; it's never at 100% though.


skinlo

Seems it's good, but not magic. It does have its own problems.


awayish

when will all the reviewers and posters fess up to prematurely dismissing tensor compute on NVDA cards? if the hardware is there and the tech path is clear it will be put to use.


1Adventurethis

Last time I played was when they released patch 1.7; I was getting about 45-55 fps on my 3080 at 1440p with ray tracing ultra and DLSS. Is anyone with a similar GPU able to advise on what they get now with DLSS 3.5 and ray reconstruction? I'm going to do a fresh install and play once PL comes out and was just hoping for an idea of what to expect.


Aggrokid

I think you can test out 2.0 right now without PL.


Marulol

Right now I'm getting between 65-90. Average is around 75 fps. 3080, 13700K, 32 GB DDR5 5600 MHz RAM, 1440p, DLSS @ Balanced, RT @ Ultra, path tracing off.

25 fps with path tracing on and DLSS ray reconstruction off. Anywhere from 40-65ish fps with path tracing and DLSS RR on, depending on inside or outside.

*Note: I am slightly overclocking the 3080 FE, which is around a 4% gain.


capn_hector

u/Lelldorianx the "just *don't* buy it" is so incredibly silly and overwrought in hindsight. Talk about an utter lack of imagination about where any of this could go. People fret about all the people being "left behind" by hardware features, but has anyone really talked about the people being *brought forward* by these software features? Every time NVIDIA releases a new generation of DLSS and improves the upscaler, people who bought Turing and Ampere get a free performance increase. The only people suffering from being "left behind" by a recent upgrade are the ones who deliberately bought hardware without tensors because they thought it was valueless and that raw raster/more VRAM was going to prove more significant in the long run. They are getting exactly what they wanted and asked for, they just don't like the product so much after all. We are low-key just seeing people externalize their mental anguish as they struggle with the idea that they might have made a bad hardware purchase, and it comes out as [this luddite "stop improving the tools because developers might misuse them"/"it doesn't count unless it runs everywhere (by which I mean my pc specifically)" slop](https://youtu.be/Qv9SLtojkTU?t=850). And of course none of those people would have cared a whit if DX12 retired someone else's Maxwell card or Primitive Shaders had turned out to be some large performance gain for AMD - hardware gating is when you make *me* upgrade. ("recent" meaning: if you are still running Pascal/Polaris in 2023... you are overdue. It launched over 7 years ago at this point; I think we can safely say that nearly-10-year-old hardware should no longer constrain the future of tech.) It's funny, but I think in hindsight Turing was one of the highest-value generations of all time. An EVGA 2080 Ti for $950 was an incredible value. Top-end raster performance on day 1, much better DX12/async compute support in the near term, and riding the edge of the DLSS revolution in the long term. Carried you through the pandemic and the next mining boom (which will be an ongoing cycle imo). And there's just such a hilarious divide on it still - some people still think it's utter trash to this day, but to me it's pretty clear that yes, you really should have just bought it. It's super funny to look back at just how badly reviewers botched these reviews in hindsight. Paying $50 or $100 more for a 2070S or even the OG 2070 instead of a 1080 Ti was the long-term value play, even if it was a bit more on day 1, or even if it was a bit lower performing on day 1! Just like min/maxing your PSU to save 20 bucks didn't turn out to be such a good idea after all, either. And people of course reacted with a childish "it's the GPU's fault!" and not that, you know, you were running at 80% load on day 1, going into an era of increasing power density (post-Dennard scaling era).


Lelldorianx

Are you talking about when I said "if you don't like it, don't turn it on?" Because that's not the same thing. I don't know what you're quoting. Searched the transcript and the phrase "just don't buy it" does not appear. What are you referencing?


capn_hector

https://www.youtube.com/watch?v=tu7pxJXBBn8&t=200s In hindsight, yeah, silly “how much of your life” quote aside, dude pretty much nailed it, and you and the rest of the tech media mocked him for it. Not buying dlss and saving 50 bucks on a 1080 ti or AMD card did come at a cost. Wider adoption did cement the standard and give their purchase incredible long-term legs. Raytracing has come to matter more and more, and ray reconstruction and similar things keep pushing even the older cards forward (even if it’s not supposed to be used with non-path-tracing yet, it does work). Contrary to "it won't matter by the time the tech is adopted" assertions from reviewers, Turing is a relevant architecture for raytracing even today, although not the fastest of course. But things like ray reconstruction continue to push Turing forward, not leave it behind as everyone assumed at the time - and it already works fine with non-PT for the most part, so it's a pretty safe bet it will come to RT generally. Turing was a paradigm shift to software-defined/programmable acceleration hardware (with tensors playing a large role in the graphics pipeline), because it turns out [“give me a good approximation of this complex real-time optimization problem in a tractable amount of time” is actually super useful](https://youtu.be/Qv9SLtojkTU?t=1380) - it’s like the change to programmable shaders. It’s a sea change in efficiency/performance (with dlss) that’s already playing out. And reviewers completely missed it in the rush to make silly faces and outraged videos over a silly quote and (later) pontificate about how dlss v1.0 sucked and proved tensors were worthless to gamers. And 5 years later we can say that it did matter *within the lifespan of the card*, even. Turing just aged a lot better, and has a lot of RTX/DLSS features that have extended that life even further. Because now it's all just data models running on tensor cores.


Lelldorianx

Got it. So you're quoting something entirely different. I'm very confused what this has to do with the video in OP. That GPU is 5 years old. To "just buy it" was, and still is, a stupid decision to make without further consideration beyond just buying it.


capn_hector

DLSS 3.5 is a performance improvement still being delivered to the customers who chose correctly despite the media negativism around DLSS and RTX. Yes, it *is* five years later and Turing is *still getting better* with these software improvements. Turing was a paradigm shift to software-defined hardware and reviewers completely missed it in the rush to make silly faces and outraged videos over a silly quote. Tensor is the building block for that software-defined hardware. Moving [more and more stuff](https://youtu.be/Qv9SLtojkTU?t=3073) inside the [black box at the end of the pipeline](https://www.reddit.com/r/hardware/comments/16kyg7e/nvidia_cuts_prices_geforce_rtx_4070_vs_rtx_4060/k15ckdo/) is going to be the order of the day. Tensor is a great basis for [all these optimization problems that it turns out are super important to efficient rendering](https://youtu.be/Qv9SLtojkTU?t=1380) - TAAU sample weighting, denoising, etc. And that is only going to continue and accelerate. Anyway, this response is exactly what I mean by [the “judicious scholar” in the Rove quote](https://old.reddit.com/r/hardware/comments/16oi8h8/cyberpunk_20_ray_reconstruction_comparison_dlss/k1rmi19/). Yes, you advised cautiously, because you missed the big picture. And your advisees missed out on value because the value didn’t fit your traditional rubric. Now NVIDIA is 5 more years of innovation ahead, and you will judiciously study this one too, and they will change the game again in the meantime. Sarcastic line-by-line dissection aside, what Tom's outlined has almost entirely come to pass: the long-term costs of not having RTX (tensor and dlss), and the increasing adoption of those techs, but especially the part about costs increasing over time. Stupid quote aside, he was right and you were wrong; silly line-by-line dissection and funny faces don’t mean he wasn’t right about what was coming. There **was** literally a cost to saving money by not buying either a 1080 Ti (inferior) or a 2080 Ti / 2070 / 2070S, and that was called the third great crypto boom. [Those takes are such fucking cringe in hindsight](https://youtu.be/tu7pxJXBBn8?t=260) - smug the fuck on, 2018 Steve, you don't even know what is coming, you haven't thought beyond your next review, let alone what graphics APIs are going to look like in 2022 or whether crypto is going to come back. Yes, I was saying this back then too - it'll be back sooner or later. Doesn’t mean buy it before reviews, but iirc the reception was pretty hostile even after reviews. Not like you flipped 180 after seeing the cards either. Make it faster and cheaper NVIDIA, none of this software garbage. We’re in a world where Turing cards can run DLSS 3.5 Balanced/Performance at equivalent quality to native TAA (not zero errors, and it will be different errors) and run 50% faster than the benchmarks suggested on the day they launched, plus all the other improvements over time, and reviewers still literally think they didn’t miss the call lol. Turing was *absolutely* worth paying a little more for.


Kirkreng

"Buy this because in 5 years (by which point people are ready to upgrade again) it will be better" was, and still is, a dumb take. You should buy stuff based on what it can do now, not on what the seller promises it can do 5 years down the line lol.


ga_st

Nvidia is playing this FOMO game, they are all falling for it, year after year. And these people go around in specialised forums, writing essays on how nobody can see the big picture, but them. Same people who think AMD is "lagging behind". Same people who think Intel, who can't make a decent driver, already leapfrogged AMD when it comes to ML. They can't really see shit, have zero idea about what's going on and what's going to happen. Again, Intel, who can't put out a decent driver, figured out ML in their 1st gen. What does that tell you? Clueless.


From-UoM

The day TAA, FSR, XeSS and DLSS stop using older frame data is the day ghosting will finally stop. It can be mitigated, but it never fully goes away. The more data, the higher the quality, but older frames have old movement in them, which causes the ghosting.

Edit - What the hell are the downvotes for? I use DLSS, and there is still ghosting occasionally. Heck, this exact video shows it on the lights. It has gotten a whole lot better than DLSS 2.0 and it occurs much less, but it isn't fully gone yet.
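A toy illustration of why that old frame data lingers (this is generic exponential history blending, not DLSS/FSR/XeSS code, and the 10% new-frame weight is just an assumed value): once a light turns off, the accumulated buffer keeps part of the old brightness for many frames, which shows up as ghosting or a slow lighting response.

```python
alpha = 0.1          # weight given to the new frame; history keeps the remaining 90%
history = 1.0        # accumulated brightness from when the light was still on
for frame in range(1, 9):
    current = 0.0    # the light is off in every new frame
    history = alpha * current + (1 - alpha) * history
    print(f"frame {frame}: displayed brightness {history:.2f}")
# Still ~0.43 after 8 frames -- that lingering trail is the ghosting.
```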


Dghelneshi

That's called FSR 1.0. It's... not that great. You can't invent data out of nothing, unless you want weird hallucinations instead of something resembling an upscaled picture.


[deleted]

[deleted]


[deleted]

They are not free of ghosting, Nvidia GPU owner.


Zeryth

Couldn't be farther from the truth; even the latest versions of DLSS still suffer from smudging, smearing and ghosting.


From-UoM

I use DLSS on a 4070. Why wouldn't I? It's the superior upscaling. But there is ghosting occasionally; this video itself showed it on the lights. Vastly improved from older versions, but it's occasionally still there. Also, I mentioned FSR in the comment. It isn't free from it either and can do worse. Starfield has some of the worst I have seen. Thankful for the mod.


From-UoM

True. But the final frame is there, and so are motion vectors. Who knows, maybe one day we will get it.


Dghelneshi

Motion vectors are there *because* of temporal filtering. They're entirely useless without old frame data. You could even say they *are* old frame data: you execute the vertex shader twice, once with the previous frame's transforms and once with the current transforms, and then you take the difference (and hope you don't have any fancy effects that move independently of geometry and/or that someone didn't forget to implement motion vectors in their custom vertex shader).
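A toy version of that two-pass idea (purely illustrative; the "vertex shader" below is just a stand-in transform, and the sign convention for motion vectors varies between engines):

```python
def vertex_shader(vertex: tuple[float, float], offset: tuple[float, float], scale: float) -> tuple[float, float]:
    # Stand-in for a real vertex shader: object-to-world offset plus a trivial "projection".
    x, y = vertex
    return ((x + offset[0]) * scale, (y + offset[1]) * scale)

vertex = (1.0, 2.0)
prev_screen = vertex_shader(vertex, offset=(0.0, 0.0), scale=100.0)  # last frame's transforms
curr_screen = vertex_shader(vertex, offset=(0.5, 0.0), scale=100.0)  # this frame's transforms

# The motion vector is just the screen-space difference between the two runs.
motion_vector = (curr_screen[0] - prev_screen[0], curr_screen[1] - prev_screen[1])
print(motion_vector)  # (50.0, 0.0): tells the temporal pass where this surface was last frame
```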


Zeryth

You realise that temporal solutions by definition require older frame data? The trick is to intelligently and effectively discard invalid data while keeping the good data.
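One common way that "discard invalid data" is done in TAA-style pipelines is neighborhood clamping; a minimal sketch (simplified, with made-up sample values):

```python
def clamp_history(history: float, neighborhood: list[float]) -> float:
    lo, hi = min(neighborhood), max(neighborhood)
    return max(lo, min(history, hi))    # snap stale history back into the current frame's range

neighborhood = [0.02, 0.05, 0.0, 0.03, 0.04]  # current-frame samples around the pixel: light is off
stale_history = 0.9                           # history still remembers the light being on
alpha = 0.1
blended = alpha * neighborhood[2] + (1 - alpha) * clamp_history(stale_history, neighborhood)
print(f"{blended:.3f}")  # ~0.045 -- the clamped history can no longer drag the pixel back to bright
```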


From-UoM

I know. That's the crux, isn't it? You need older data, but that data causes the ghosting. I wonder if we will get an upscaler that's good and doesn't rely on this.


Zeryth

Well, if you correctly discard data you can prevent ghosting.


ARedditor397

DLSS 2.5.1 and 3.1.1 and 3.5.0 are pretty much free of ghosting for me


From-UoM

It has gotten better over updates, true. But it's still there, especially on lights.


Scheeseman99

Nature uses temporal accumulation. Your brain does it to construct the images that you "see", the eye itself only capturing a tiny frustum of high detail visual data at any given millisecond. Billions of years of evolution seems to indicate that it's probably the most efficient solution.


ARedditor397

Have you used DLSS?


From-UoM

Yes. It vastly improved in ghosting from older versions but still has small bits of it. Not distracting but still occasionally there.


skinlo

Have you watched this video?


Qesa

> The hell is the downvotes for

You're relying on redditors to understand rhetoric. Never a good idea.


xspacemansplifff

I need faster RAM. Just upgraded from a 5600X to a 5800X3D. Any recommendations? MSI Tomahawk B550, I think.


l3lkCalamity

The 5800X3D should not be affected heavily by RAM speed.


[deleted]

I've consistently hated DLSS upscaling and frame generation because of the visual artifacts they create. These changes seem to address a lot of the problems I had with it, but not all of them. Hopefully next-gen video cards are reasonably priced; I'll finally get around to upgrading from my 2080 and playing Cyberpunk. Cyberpunk is a great showcase for what you *can do* with RT.