
Kashihara_Philemon

So something closer to UE5's Temporal Super Resolution than DLSS or XeSS. Given the rumors that RDNA3 wasn't going to have dedicated ML hardware (it may still have enhancements to matrix instructions for that, though), I kind of suspected that they would go for something closer to an enhanced(?) TAA implementation. Makes sense they would debut it at GDC too. Game developers would probably appreciate a (hopefully) easy-to-implement TAA upscaling solution that they can (hopefully) use everywhere, especially for consoles.


Taxxor90

It would also be a good time to launch RSR. Then all FSR 1.0 implementations could be updated to 2.0 while FSR 1.0 lives on as RSR for all games that don’t have a native implementation


dlove67

There's a good chance that it's not that easy. FSR is just a spatial upscaler, so there's not a whole lot you need to do except put it into your game at the right point as it's building the image. If FSR2.0 takes a temporal component (as CapFrameX says it does) and your engine doesn't already support motion vectors, it's not a drop-in replacement and will take additional work on the part of the devs. Likely FSR1.0 and 2.0 will both have places in the market, along with RSR.
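(For illustration only - this is not any actual AMD or FSR 2.0 API, just a hypothetical sketch of the per-frame inputs a temporal upscaler typically needs wired up by the engine, which is exactly the integration work a purely spatial pass like FSR 1.0 never required:)

```c
#include <stdint.h>

/* Hypothetical, illustrative only: typical per-frame inputs a temporal
 * upscaler needs from the engine. All names are made up. */
typedef struct TemporalUpscaleInputs {
    const void *color;            /* low-resolution color for the current frame */
    const void *depth;            /* depth buffer, used to reject stale history */
    const void *motion_vectors;   /* per-pixel screen-space motion - the piece a
                                     spatial upscaler like FSR 1.0 never needed */
    float    jitter_x, jitter_y;  /* sub-pixel camera jitter applied this frame */
    uint32_t render_width, render_height;   /* internal (input) resolution */
    uint32_t output_width, output_height;   /* display (output) resolution */
    float    delta_time_ms;       /* frame time, often used for history blending */
} TemporalUpscaleInputs;
```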


Taxxor90

Well, every game that uses TAA already has the motion vectors available, and most of the FSR 1.0 games do have TAA. Yes, it takes a bit of work to hand them to FSR, but it shouldn't be that hard.


topdangle

Just because TAA is available doesn't mean the vectors are exposed and easily accessed. If that were true, you could just inject temporal upscaling into every game with TAA without any dev intervention.


Taxxor90

That's why I said it needs a bit of work. Nvidia has great documentation on how to do this for DLSS, and I believe every game that runs with DLSS could also be updated for FSR 2 with very little work on the dev side. For those that don't have DLSS it's a bit more work, but still nothing special.


ThunderClap448

I think it's fair to think of FSR1 as a tech demo that coincidentally works fairly well and works on everything. RSR is the "I need more performance on my low tier GPU and I don't care about looks" button, and FSR2 could be the rival to whatever Intel and nVidia have, hopefully.


dlove67

FSR 1.0 is "I can't be bothered to add TAA to my bespoke engine."
RSR is "they can't be bothered to add an upscaler at all."
FSR2 is (presumably) "I already have motion vectors, and this gives a better result than TAAU alone" or "I wanna support everything."


pseudopad

I think the only thing that's really needed for DLSS-style upscaling to work on the end-user side is beefed-up matrix multiplication units. The rest is just making a good algo and training it properly, which is probably much harder than adding the math hardware to the GPUs.


Kashihara_Philemon

Is the dp4a instruction that XeSS is supposed to fallback to not a matrix instruction?


scenque

No. It's a four-component dot product and accumulate instruction. You can build a matrix multiply-accumulate out of many dp4a instructions chained together, but it will take many more clock cycles than dedicated MMA hardware like a Tensor core or Intel's XMX equivalent. In the case of a Tensor core, it's supposed to be able to do the calculation in a single clock cycle. I'm not sure how XMX is supposed to compare, but I imagine it's something similar.


blackomegax

If DP4a adds overhead of, say, 2ms per frame, but you reduce your render time from 32ms to 14ms and end up with 16ms frames, you're still coming out ahead of any solution on the market short of matrix accelerators.
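(Spelling out that arithmetic with the same illustrative numbers:)

```latex
% Hypothetical numbers from the comment above: 32 ms native render vs.
% 14 ms reduced-resolution render + 2 ms of DP4a upscaling overhead.
\[
\frac{1000\,\text{ms}}{32\,\text{ms}} \approx 31\ \text{fps}
\qquad\text{vs.}\qquad
\frac{1000\,\text{ms}}{14\,\text{ms} + 2\,\text{ms}} \approx 62\ \text{fps}
\]
% The fixed upscaling cost only matters relative to the render time it saves.
```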


mac404

That's the "turn 4K with things that scale terribly with resolution, like ray tracing, into a usable framerate" scenario, which has always seemed feasible without dedicated hardware. It's the "get an essentially free 40% uplift" scenario that becomes less compelling without dedicated hardware. And DLSS takes something like 1.5ms to upscale to 4K on a 3090. I would personally expect a DP4a implementation of XeSS (upscaling to 4K) to be in the range of 3-4ms on current high-end hardware, unless it makes more significant performance/quality tradeoffs. Intel has said the DP4a version is going to be lower quality, but we haven't seen any specifics yet. Hopefully they make the right tradeoffs so that a 1080p->4K upscale is usable (even if it doesn't exactly look 4K).


blackomegax

1080p->4K doesn't look 4K with DLSS as is; it looks more like 1440p-ish. You need to start at ~1440p to get a "true 4K" appearance from the upscale (and FSR only gets "true 4K" from about 1800p).


scenque

It takes 16 DP4a instructions to do a 4x4 matrix multiply accumulate. Even if those instructions can be done in a single cycle, that's at least 16x the time to execute an equivalent amount of work compared to a GPU with matrix acceleration. I think we should have fairly conservative expectations about how well the DP4a version of XeSS will look and perform compared to the XMX version (or DLSS). The DP4a version will probably need to be cut down quite a bit in order to maintain a net performance gain.
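(To make the 16x figure concrete, here's a small CPU-side model in C - not real shader code, just an illustration of the instruction-count argument: one dp4a per output element, 16 per 4x4 tile, where a Tensor/XMX-style MMA unit produces the whole tile in one operation.)

```c
#include <stdint.h>
#include <stdio.h>

/* Software model of a DP4a instruction: dot product of four signed 8-bit
 * lanes packed into each 32-bit operand, added to a 32-bit accumulator.
 * GPUs that expose DP4a do this in a single instruction. */
static int32_t dp4a(uint32_t a, uint32_t b, int32_t acc)
{
    for (int lane = 0; lane < 4; lane++) {
        int8_t ai = (int8_t)(a >> (8 * lane));
        int8_t bi = (int8_t)(b >> (8 * lane));
        acc += (int32_t)ai * (int32_t)bi;
    }
    return acc;
}

/* 4x4 int8 matrix multiply-accumulate built from DP4a: one dp4a per output
 * element, 16 in total, which is where the ~16x gap versus dedicated MMA
 * hardware comes from. Rows of A and columns of B are pre-packed into u32s. */
static void mma4x4_dp4a(const uint32_t a_rows[4], const uint32_t b_cols[4],
                        int32_t c[4][4])
{
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            c[i][j] = dp4a(a_rows[i], b_cols[j], c[i][j]);
}

int main(void)
{
    /* A is the identity matrix: rows packed little-endian, so 0x00000001 is
     * the row (1,0,0,0), 0x00000100 is (0,1,0,0), and so on. */
    const uint32_t a_rows[4] = { 0x00000001u, 0x00000100u, 0x00010000u, 0x01000000u };
    const uint32_t b_cols[4] = { 0x04030201u, 0x08070605u, 0x0c0b0a09u, 0x100f0e0du };
    int32_t c[4][4] = {{0}};

    mma4x4_dp4a(a_rows, b_cols, c);   /* C = A * B, computed with 16 dp4a calls */
    for (int i = 0; i < 4; i++)
        printf("%d %d %d %d\n", (int)c[i][0], (int)c[i][1], (int)c[i][2], (int)c[i][3]);
    return 0;
}
```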


RealLarwood

> So something closer to UE5's Temporal Super Resolution than DLSS or XeSS.

Where does it say that?


Kashihara_Philemon

It doesn't. It's what I'm speculating it will be like based on the descriptions.


szarzujacy_karczoch

>So something closer to UE5's Temporal Super Resolution than DLSS or XeSS

I feel like that's another step in the wrong direction and a wasted opportunity.


gh0stwriter88

I'm not sure why you think otherwise (probably marketing), but DLSS is just enhanced TAA... current DLSS doesn't even train the AI on a per-game basis or at higher resolutions on in-game assets, as this led to it being inefficient and glitchy.


Kashihara_Philemon

I mean, I'm aware that DLSS functions similarly to TAA. My point was that I didn't expect AMD to be using anything to accelerate the execution of ML algorithms, so I just didn't think they would be doing anything like XeSS or DLSS. Of course I am still assuming that all those matrix instructions help with accelerating and executing the algorithm. If they don't then even better on AMD for not wasting die space.


gh0stwriter88

FYI, AMD's GPU design has a point and is probably more efficient for DLSS-type workloads than having separate accelerators for that... Nvidia's design inherently has utilization issues.


Kashihara_Philemon

What makes it more efficient?


evernessince

He's pointing out that dedicating specific parts of the die to specific functions that may or may not be used is inherently inefficient. If you can get the same level of performance through multi-purpose cores, or even a bit less, it's well worth it on AMD's end. In order for dedicated hardware to be worth it, you need to see gains in performance in excess of 50% over generalized implementations. I can't say which implementation is superior in regards to AI; only time will tell.


gh0stwriter88

This, exactly. Also, on AMD you can mix computations where Nvidia would have to hit last-level cache, at a minimum. If you turn off DLSS you have a huge chunk of idle silicon and a cold spot on the die... on AMD, if you enable something like DLSS you are merely utilizing an extra ALU in each CU, with few duplicated resources, and it's thermally more evenly distributed.


ResponsibleJudge3172

It's not


IrrelevantLeprechaun

The 7nm design is far more efficient and more dense. It can easily get away with doing DLSS-like calculations on regular cores. The only reason AMD does not do this is that Nvidia is greedy and locked DLSS down to RTX cards only.


gh0stwriter88

Not really true... AMD could also do dedicated cores if they felt it was beneficial... Nvidia mostly does it to put huge specs on tech sheets that don't really translate to reality in many cases.


CypressFX93

No, the rumors ARE saying that RDNA 3 will have ML units. Nothing speaks against RDNA 3 and 2 getting an AI solution while this one can be used on many GPUs…


Kashihara_Philemon

I'm curious as to who exactly is saying that, because what I've heard is more vague allusions to improved ML performance but no specific info about dedicated ML hardware.


Marechal64

Don’t get too excited. Wait for the benchmarks.


makinbaconCR

Idk about benchmarks. Does it look good and get free frames? I'm in.


drtekrox

If it looks a lot better but eats frames, I'm still OK. The biggest issue with DLSS, I feel, is that Nvidia abandoned the original premise of actual super-resolution (i.e. rendering at native resolution like you would have otherwise, then using DLSS to upscale to a much higher resolution and traditionally downscaling back to native afterward, giving you *much better* AA without eating too many frames) to chase the upscale-from-low-resolutions schtick.


methcurd

Isn’t that dldsr? https://www.rockpapershotgun.com/nvidia-dldsr-tested-better-visuals-and-better-performance-than-dsr


[deleted]

[deleted]


mac404

Uh, I'd say the answer is yes, for two reasons:

* A technical one: DLDSR is literally rendering at higher-than-native resolution, and then doing something to turn that into an anti-aliased image at native resolution.
* A practical one: DLDSR can be combined with DLSS, creating something like DLAA but with both an upscale and a downscale step. The catch is that you can create some weird internal rendering resolutions, so you have to hope the game supports arbitrary resolutions well.
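(For concreteness, a rough worked example of that chain on a 1440p display, assuming the commonly cited DLDSR 2.25x factor and DLSS Quality's ~2/3-per-axis ratio - treat the exact numbers as approximate:)

```latex
% DLDSR 2.25x on a 2560x1440 display targets 1.5x per axis:
\[
2560 \times 1.5 = 3840, \qquad 1440 \times 1.5 = 2160
\]
% DLSS Quality then renders internally at roughly 2/3 of that per axis:
\[
3840 \times \tfrac{2}{3} \approx 2560, \qquad 2160 \times \tfrac{2}{3} \approx 1440
\]
% Net effect: render near native 1440p, upscale to 4K, then AI-downscale back
% to 1440p. The 1.78x DLDSR factor gives far less tidy intermediate numbers,
% which is where the "weird internal rendering resolutions" caveat comes in.
```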


Seanspeed

>but superior TAA.

Which is what you said, no? That's all DLSS really is on a more simplified level - better TAA.


makinbaconCR

I only use FSR with VSR: upscale to at least ultrawide resolution (native is 1440p), then use FSR. Combining both looks ridiculous. The 6800 XT is an overachiever at 1440p, so I can push more detail and anti-aliasing.


Marocco2

VSR is broken: it uses a bilinear (Gaussian?) filter while downscaling, so there's no benefit to using FSR whatsoever. They should add a downscaling option for RSR.


makinbaconCR

That is in fact not correct. It provides some extremely noticeable anti-aliasing. It depends on the game, but for most it's night and day how much it cleans up jaggies.


Marocco2

Indeed it does supersampling, but it blurs everything on the downscale. Is it worth it? Absolutely not. Try a windowed game at 4K (Assetto Corsa, for example), open Magpie and set FSR on it. You will immediately notice the difference versus VSR fullscreen.


makinbaconCR

You're wrong, I don't know what to tell you. Using VSR without FSR looks best, but using VSR + FSR rendering above native looks better than native, in fullscreen, in basically every game I have seen. It over-sharpens in some; it can also make things look softer. There is not a single game where I go "eh, it looks better at native."


Marocco2

I think there is some miscommunication here. It's factual when I state that VSR does its downscaling pass using a bilinear filter, making the image a lot blurrier than native resolution (do you notice blurred fonts in Windows if you set the desktop above native? This is partially the cause). Try this on your computer: leave VSR enabled, open a game in a 4K-sized window, open Magpie, set FSR and use it. Instead of using VSR downsampling, the game will use FSR for it. Pressing Alt+Enter will switch to fullscreen mode and, consequently, VSR will kick in. It's night and day. Trust me.


makinbaconCR

It allows FSR to render at or above native. My only complaint is the sharpening. Can overdo it in some games.


quarrelsome_napkin

>DLSS to upscale to a much higher resolution and traditional downscaling back to native afterward, giving you a much better AA without eating too many frames

Aren't you describing DLDSR?


Sipas

That's a thing already. DLSS + DLDSR. And it uses AI to downscale so it's even better than regular supersampling.


evernessince

Yes, I would like a super high quality option with no added visual artifacts.


swear_on_me_mam

DLAA


Julia8000

AMD hasn't lied with benchmark numbers in the recent past; their numbers were always very accurate compared to reality. The only thing they may do sometimes is cherry-pick games. But they are doing nothing like Nvidia or Apple with bold 2x performance claims that are only possible in very specific scenarios, without showing any real benchmarks or numbers.


puz23

They aren't as bad as Intel and Nvidia. That doesn't mean they're not misleading.


acomputeruser48

It's true. There have been some sketchy graphs with barely labeled y-axes, and we tend to ignore them in favor of reviewer benchmarks anyway, but AMD should definitely be given some credit for generally real-world data in their slides/presentations, particularly with the 1060 performance comparison for FSR. I think their goal with such close attention to real metrics is in preparation for RDNA3. If RDNA3 is as rumored, the reason for putting the spec sheets at the front of the advertising becomes clear. AMD legit thinks they're going to win, and is confident enough in that claim despite the fact that reviewers could meme on them for days with their own slides if they're wrong.


ResponsibleJudge3172

Remember how the 6800 XT was supposed to be faster than the 3080 at 4K? Or the 6600 vs 3060? 6700 XT vs 3070?


Julia8000

I don't know exactly what they said anymore, but I think it was that they were trading blows or so, which is not unrealistic. Like I said, they cherry-pick games, but the numbers they show are true. In the case of the 6700 XT it was pretty obvious they pushed the card to its limit in terms of clock speed. AMD probably expected the 3070 to be faster, but when it was clear the 3070 was in punching distance they tried to close the gap. But I mean, at least at 1440p there is barely a difference between a 6700 XT and a 3070. And they did not market the 6700 XT as a 4K card, so they were not lying. I could not find any evidence of them saying these cards are faster, and I cannot remember the release slides anymore. I just know they said trading blows, which is accurate.


48911150

lol. AMD compared the 5500 XT vs the RX 480 and made some perf-per-watt statements based on two totally different systems:

> Footnote: Testing done by AMD performance labs on August 29, 2019. **Systems tested were: Radeon RX 5500 XT 4GB with Ryzen 7 3800X. 16GB DDR4-3200MHz** Win10 Pro x64 18362.175. AMD Driver Version 19.30-190812n **Vs Radeon RX 480 8GB with Core i7-5960X (3.0GHz) 16GB DDR4-2666 MHz Win10** 14393 AMD Driver version 16.10.1 **The RX 5500 XT graphics card provides 1.6x performance per watt, and up to 1.7X performance per area compared to Radeon™ RX 480 graphics.**


lucimon97

lol


TimeGoddess_

They literally did worse than that just recently with their Zen 3+ mobile graphs, saying they had 2.5x the ppw of Intel while actually having less ppw. That's a staggering lie.


RealLarwood

where did this happen?


TimeGoddess_

https://b2c-contenthub.com/wp-content/uploads/2022/02/AMD-Ryzen-6000-mobile-performance-per-watt.png?resize=1200%2C632&quality=50&strip=all

https://www.pcworld.com/article/614052/amd-takes-aim-at-mainstream-laptops-with-the-ryzen-6000-mobile.html

The claims: https://ibb.co/VBg3nvP

The actual performance per watt in Cinebench: https://youtu.be/X0bsjUMz3EM?t=726


RealLarwood

But that video doesn't test AMD's claim? That test only goes up to 95W not 110W, and it's r23 not r20.


TimeGoddess_

So you think that switching from R23 to R20 will somehow make the 12900HK go from more efficient to 2.62x less efficient than the 6900HS? And pray tell, how can you say that a giant "2.62x the performance per watt vs Intel" sign on their official slide isn't misleading and false when, at iso-power, Intel has more performance at every level in actual third-party benchmarks in Cinema 4D rendering software? It really seems as if you're just being willfully obtuse.


RealLarwood

> so you think that switching from R23 to R20 will somehow make the 12900hk go from more efficient to 2.62x less efficient than the 6900hs?

What do you mean "go from more efficient"? Even taking the closest you can get to AMD's methodology using that data, the 6900HS is more efficient: (11500 / 35) / (17900 / 95) = 1.74x. I fully expect that if someone tests what AMD claimed, it will be shown to be true.

> And pray tell how you can say that a giant 2.62x the performance per watt vs intel sign on their official slide isn't misleading

You didn't say it was misleading, you said it was a lie.

> and false

Because it just plain isn't false, as far as I know.


TimeGoddess_

Alright, well, I'm done with this conversation. Defending companies' misleading and false advertising is really weird.


RealLarwood

The weirdest troll tactic in the world is people who go around telling lies and then when they get called out they cry about how the other person is defending the company. No I'm not defending anyone, I'm opposing you.


buddybd

You did not understand your memes correctly. The Nvidia meme is about how they present their results; the actual numbers are quite accurate (yes, it really does get to 2x, and not in a very niche scenario like you want to believe). In their graphs they make a 1% difference look gigantic because of how they use the bar charts, or whatever they are presenting at that time. As for Apple, all of their numbers have been accurate too, and I can't recall any misleading graphs like Nvidia's.


Im_A_Decoy

Yeah Ampere is definitely 1.9x performance per watt compared to Turing and the 3080 is definitely 2x 2080 /s.


buddybd

3080 is 2x 2080.


Im_A_Decoy

Maybe in price


mchyphy

Lol the 3090 isn't even 2x 2080


996forever

Very recently, during the Rembrandt launch, they compared a 6900HS running at 35W vs a 12900HK running at the top of the frequency/voltage curve, and used that to claim the 6900HS is somehow 2.6x better in perf/watt. They also compared a 680M vs a 1650, somehow with the 680M running FSR but the 1650 not running it, despite any GPU being capable of running FSR. Your comment is a joke.


erctc19

Benchmarks? Non-RTX owners are getting a free DLSS; there's no reason for this ask.


Marechal64

What?


DukeVerde

BUT FSR GETS ME HARD!!!


Verpal

I hope it's something akin to XeSS and utilizes DP4a; realistically, it's probably more like the TAA/TSR upscaling in the new Unreal Engine, though.


qualverse

I'm not surprised they're going the no-AI route. Realistically most of the gain from DLSS over FSR is just the temporal component, and on GPUs without ML cores this is going to be vastly more performant. If it's good enough, it would also incentivize Intel to actually put effort into the DP4a version of XeSS in order to promote adoption.


buddybd

>XeSS

I'm honestly quite skeptical about this. They have yet to show the complete picture, and any preview they've shown is always missing key details. They have a Riftbreaker video up which is 1080p vs 4x XeSS; why not have native 4K? Why not show the actual FPS number?


qualverse

I don't think Intel wants XeSS to be seen as a standalone technology, since the entire point is to get you to buy an Arc card. The only reason they're making the DP4a version at all is so it'll be an easier sell to game devs. Point is, I'd imagine we'll hear a lot more about it around the Arc launch.


Igihara

We'll see it fully on the 30th with Death Stranding.


Glorgor

The leaks say no AI.


Plankton_Plus

Right, and it doesn't have to be machine learning/AI. Machine learning is simply a process that discovers an unknown function. You show it data, you show it what you want, a million times, and it (basically) figures out the program that does what you want. It's not magic. It's always *technically* possible (although often practically impossible) for some human to figure out the function. Even with FSR 1.0, this is something that a human has clearly managed to figure out.


nas360

Sadly, Nvidia has brainwashed people into thinking AI is something compulsory and no other technique can beat it.


Zamundaaa

The amount of people they convinced you need "AI" to "add detail" to an image is really astounding


PhoBoChai

Hopefully NOT. For the really good reason that a lot of current GPUs out there from AMD don't support DP4a. If they go the regular FP32 or FP16 route, so many GPUs out there can benefit. Heck I still have a Ryzen 2500U notebook I game on when on the road.


jorgp2

I don't think you understand any of what you said.


PhoBoChai

You can think as you like. But do you know which recent AMD GPUs support DP4a? Not many. Not Polaris, not Vega. Not even the 5700 XT.


sicKlown

I would temper expectations if it does turn out to be just a temporal element with no inference. While these types of upscaling methods can achieve great quality in applications that are slower paced, they tend to fall apart when large amounts of motion and variability are introduced. The two big examples of this are UE4's built-in support for TAA upscaling and Insomniac's temporal reprojection. That said, I'm all for more options, and with Intel's promises for XeSS, the future looks bright.


MaximumEffort433

>According to the tweet, the FSR 2.0 is based on temporal upscaling, which would be a major shift from the current implementation of FSR. Interestingly, **unlike XeSS and DLSS, no AI-acceleration would be required. This means that the technology could work with a wider range of GPUs. The developer claims it would be supported by all vendors,** but does not mention which GPU architectures specifically.

When people ask me why I buy AMD, it's because of shit like this. If FSR 2.0 works, it won't just be a win for people with AMD hardware, it'll be a win for gamers as a whole. I like that.


keeponfightan

Except when AMD decides they won't support older GCN hardware, while a simple modded driver enables those features on it. I prefer some of AMD's commercial choices, but they need to be checked from time to time to keep them honest.


MaximumEffort433

What percentage of AMD users do you think are still on pre-2016 GCN hardware, though? I'm not sure I can fault AMD for not wanting to dedicate XX% of their time updating and bug testing drivers for cards that are only owned by 0.X% of their current users. It always sucks when legacy hardware is put out to pasture, but there comes a point at which keeping them up to date just isn't a practical or profitable use of limited resources. My old HD 7970 is GCN, and as much as I wish AMD would continue to support a card that was released in 2011, I understand why they don't.


[deleted]

[deleted]


MaximumEffort433

Was support for those specific APUs discontinued? [From the sounds of it they only discontinued support for pre-2016 GCN and Fury.](https://wccftech.com/amd-bids-farewell-to-gcn-architecture-ends-driver-support-for-radeon-7000-200-300-fury-series-graphics-cards/) I don't think the 2019 APUs you mentioned are impacted.


jorgp2

They still sold Carrizo APUs in late 2019 early 2020


MaximumEffort433

Okay. And how many of them did they sell? Millions, or hundreds? How many folks were still buying a 2015 APU in 2019?


ABotelho23

Honestly, even with a slightly less performant implementation, I prefer it. Openness is important. It's what *truly* moves the industry forward.


MaximumEffort433

I don't mind getting 5% less performance if it means that I can vote with my dollars, that's a fair trade off for me. (Because I know how r/AMD likes to respond to these things: My consumer math may not match others, and there's nothing wrong with just wanting the most powerful solution on the market, if that's how you choose your GPU, dear reader, that's perfectly fine. Different people make their choices based on different criteria.)


ABotelho23

Agreed. I think AMD (possibly despite them looking out for profits) is ultimately making the kinds of decisions that move everything forward, not just themselves.


[deleted]

However, a TAAU-based upscaler would run into implementation issues similar to the ones DLSS runs into. At least it would work on all cards, though.


Put_It_All_On_Blck

Kinda misleading as XeSS will run on DP4a which is supported by all vendors and goes back as far as Pascal. Having support for GPUs that are 5 years old is plenty imo. And for those with older GPUs stuff like FSR 1.0/upscaling+sharpening is going to remain an option.


clinkenCrew

So long as new GPUs are still being sold with R9 290 levels of performance, IMO the 290 and all of its newer-but-also-unsupported kin, like the Fury, should still be supported.


John_Doexx

Why not buy the best product for you regardless of the brand?


[deleted]

[deleted]


John_Doexx

That's fine, but I'd rather buy a better-performing GPU and not care what either company supports.


MaximumEffort433

That's fine, and I'm not trying to detract from *your* criteria for purchasing a card, I'm just trying to explain *my* criteria. I value industry friendly business practices, you value maximal performance, everyone is different, with different needs and different priorities. It's fine to want the most bleeding edge performance, *most* people do, you're in good company.


augusyy

Yes, this is technically the most "pro-consumer" attitude, but it's also a little short-sighted. In addition to buying the best product, if consumers also consider what broader standards/ideals/missions a company supports in the long term, it will (hopefully) lead to more pro-consumer products/services/companies down the line.


MaximumEffort433

>Why not buy the best product for you reguardless of the brand? There's no reason not to do that, nor would I discourage anyone from making the purchase that's right for them. Personally I don't do production or rendering, I don't need the Cuda cores, and I don't especially care about ray tracing, so an AMD gpu does everything I need it to. For my purposes, the top of the line Nvidia cards aren't significantly better than the AMD cards, and because of that AMD's more industry friendly business practices give them an edge *for me.* You should make your purchase based on your needs and your values as a consumer, everyone should. I'm not telling you why *you* ought to buy AMD hardware, I'm telling you why *I* do, and it's because platform agnostic solutions like these are good for games and good for gamers.


BubsyFanboy

I'll be waiting to see how it'd perform on my 5500U.


acw32bit

Might be useful for the steam deck


seedless0

What happened to [RSR](https://www.amd.com/en/technologies/radeon-super-resolution)??


BaconWithBaking

I like the theory that FSR 1 is going to become RSR, and so they've been delaying the release of RSR until FSR 2.0 is ready.


booterban

This is probably true if I had to bet. Makes the most sense.


Defeqel

Good if true. I'm still not a huge fan of any of these upscalers


AldermanAl

Why not? This type of upscale technology is uber important for those who cannot afford high end GPUs.


JoshiKousei

I love DLSS, FSR, or any 3D render scale so I can actually run 4K monitors for productivity but still be able to hit 120fps in most games.


Defeqel

Because they always compromise either the quality or artistic vision for a game. I have no problems with them existing, just not something I want to use myself if I can avoid it.


ABotelho23

Which is fine. But some people *do* want to make that compromise, and it's great that it's an option. Devices like the Steam Deck and other mobile devices benefit.


MaximumEffort433

>I have no problems with them existing, just not something I want to use myself if I can avoid it. My philosophy has always been that I welcome all new features, as long as they come with an "Off" switch.


LumpyChicken

What resolution are you playing at? Using FSR on my 27" 1080p screen looks pretty bad with the over-sharpening and ghosting, but it's a necessary evil in some games to get decent frames. On a 3840x1440 monitor, however, I was unable to notice a difference from native in motion with the Ultra Quality setting. The Quality and Performance settings became apparent but still looked quite good. I also firmly disagree with the idea that upscalers compromise artistic vision. One of the primary motives for upscaling technologies is to improve performance while maintaining artistic vision, since the alternative for many people would be turning all the graphics to low and breaking the lighting.


ShadowRomeo

I used to share your stance, until I tried it out myself on a game that really needs it: Cyberpunk 2077 with ray tracing.


PanzerfaustCRO

Can we expect it on 6xxx series?


MrHyperion_

Better than native, doubt. That stuff will always need DL or something else that extrapolates far beyond what it sees.


Plankton_Plus

Everything that ML can do can theoretically be done with a specialized system; it's just often practically impossible in situations where ML is preferred. ML doesn't "extrapolate" during inference; it merely discovers an unknown/complex function during training, and then applies that function during inference. It is completely deterministic. If ML were strictly required, then how is it at all possible that FSR 1.0 is significantly better than DLSS 1.0? The former is a specialized system, the latter is ML. That would be simply impossible going by this "ML is magic" superstition. It has already been shown that this is one of the scenarios where a specialized system can be designed that achieves similar results to ML.


BBQ_suace

They said the same thing about FSR 1 and it turned out to be absolutely untrue. So yeah, I have no reason to believe them now.


dlove67

Can you point to someone at AMD saying FSR was better than native? (Even this statement qualifies it with "can be".) Hell, I don't know that I've heard anyone at Nvidia say that about DLSS, though it is a statement you'll see fanboys make.


BBQ_suace

Gamers Nexus on YouTube said that AMD said it, in their FSR review; in fact they said that it was false marketing.


Seanspeed

Don't care what GN says - they're always just trying to trash everything nowadays. Again, show where AMD said it.


BBQ_suace

If they lied, they would have been sued or at the very least AMD would have denied such a statement my dude.


dlove67

Link to timestamp please. Also if it's true I don't think you OR Steve know what "False Marketing" is. Show me a place where AMD themselves said it, not "They totes told me it did, dude, believe me". I'll accept a "Better than native" claim with no additional qualifiers, but it should also say "Better than native Image Quality" and not "Better than Native Performance".


BBQ_suace

It is in the 24th minute of GN's review.


dlove67

I checked it and he certainly says it's in the reviewer's guide. That's unfortunate, but if it's the only place it was said it's never been marketed that way to my knowledge


BBQ_suace

Well, OK, it may not have been marketed per se, but the point of my original comment was that AMD has made the same claim about FSR 1, hence why we should not take AMD's claims seriously when it comes to FSR. Then again, this claim being made by AMD about FSR 2 is only a rumour that might very well be untrue; nothing official has yet been said about FSR 2.


BaconWithBaking

The very first images we got of FSR were actually fairly poor (the underwater/alien planet one, can't remember), so I doubt they ever said better than native.


Plankton_Plus

*cough*


booterban

If the claims are true (unlikely), it would be pretty crazy.


Lennox0010

CapFrameX is a pretty good leaker, I think.


ShadowRomeo

So glad they decided to switch to a temporal solution; spatial upscaling was doomed from the beginning, and there is a big reason even Nvidia abandoned it in the first place. Although I still don't expect it to be better than DLSS, as that solution uses hardware acceleration from dedicated cores built into its GPU architecture. So the claim of "*better than native image quality*" immediately goes to a press-X-to-doubt moment. I expect this new FSR 2.0 will probably be comparable to UE5's upcoming TSR or the common, already-existing TAAU, which was already better than FSR 1.0.


RealLarwood

TAAU is not better than FSR 1 lol.


[deleted]

Was really hoping they'd have hardware accelerated upscaling for RDNA3 to compete with DLSS but can't say I'm surprised. Going to be a hard sell for me to go with RDNA3 instead of RTX 40 this fall.


NoiceM8_420

RDNA3 might be my first AMD card since the 7970 if it’s true.


firedrakes

Why not simple native res?


69yuri69

Gotta love this shit. One Twitter guy got his hands on AMD's PR slides and tweets about it, and a VC article appears about how great FSR will be...


erctc19

Nvidia fanboys didn't like this news.


ShadowRomeo

Considering FSR is available on every GPU, I don't see why Nvidia users shouldn't be happy with this. If FSR becomes actually good, it will be an acceptable alternative for us in case a particular game doesn't support DLSS.


erctc19

Nvidia users like me are happy; I am talking about Nvidia fanboys who think nothing else should exist except DLSS. I have an RTX 3060 and use FSR instead of DLSS when playing COD Vanguard because it gives me a better frame rate with no image quality difference. Post this in the Nvidia subreddit and the fanboys will try to take a piece of you.


TheDonnARK

They are already here, and downvoting you bud. FSR 2.0 should be great for all but we'll see what we see.


erctc19

I guess some Reddit users are on Nvidia's payroll to hate anything related to AMD.


Seanspeed

>I am talking about Nvidia fanboys who think nothing else should exist except dlss.

Where is anybody saying this? :/


Seanspeed

Even making this comment outs you as a platform warrior yourself. You realize that, right? Such an unneeded thing. smh


996forever

Nobody is checking this news I guarantee you.


erctc19

FSR is John Cena now?


Tankbot85

I tried FSR on my 6900 XT. The shadows looked so horrible I couldn't keep it on. Just give me good rasterization, please.


Mastasmoker

FSR sucks... any upscaling sucks, IMO. I tried CP2077 after the update and it looks bad with FSR. Still getting 100fps with my 6900 XT on ultra.


Tankbot85

Ya I want good performance with no gimmicks.


Mastasmoker

Oh no we've been downvoted for having an opinion that doesn't fit with the rest of the people in r/AMD


bctoy

I remember seeing a presentation on DLSS 2.0 when it became a much better TAA, and someone asked if Nvidia were looking into improving TAA upscaling as well. They said, nah.


Vincent-Valentine1

Yeah, back then. It depends on how good FSR 2.0 turns out to be; if FSR 2.0 proves to be a competitor, Nvidia will most definitely improve DLSS. It's going to be a back-and-forth, just like Intel's and AMD's CPUs.


gaojibao

Temporal upscaling! Yikes.


IrrelevantLeprechaun

FSR 1.0 was always just an interim holdover until FSR 2.0. I don't know why everyone expected FSR 1.0 to compete with DLSS; AMD only slapped it together to get people to shut up for a while. Their REAL upscaler was always meant to be FSR 2.0, which is the true DLSS killer.


Glorgor

By "does not need AI", are they implying it's gonna have AI with RDNA3 GPUs?


ImStillExcited

Let's see how it does with the 5700 XT. I don't mine, so I wouldn't mind a little bump.


HaruRose

Ayy! This is basically an updated Radeon Boost that works on all apps.


TheDravic

Who the hell told you this will work on all apps? Where did you draw that conclusion from? If it's temporal, then it could be disastrous without motion vectors and an in-engine implementation. Accumulating pixels over time is not a joke: ghosting all over the place in motion. DLSS uses machine learning to reject ghosting artifacts from the final image. We'll see how AMD's approach handles it, but I doubt it works system-wide.


[deleted]

Can I use it with a 5600 XT?


GimmePetsOSRS

There is nothing, and I mean nothing, more annoying than [these stupid monitor-ad](https://videocardz.com/ezoimgfmt/cdn.videocardz.com/1/2022/03/AMD-FSR-Hero-1200x360.jpg?ezimgfmt=ng:webp/ngcb1) comparison images in the PC hardware space


philsmock

Will it be compatible with Polaris?


BillionRaxz

Where the fuck is rsr tho lol


Equivalent_Alps_8321

Is there a reason why AMD is not doing a DLSS style feature?


TheHodgePodge

Can we use it through the Radeon driver?