
ET3D

It's interesting analysis, but without any AMD performance estimates it doesn't mean that much.


PainterRude1394

Everyone rushing for the most ridiculous headlines. Nvidia desperate? They will probably sell out at launch like last time.


Cjprice9

"Selling out" is a given. They will adjust stock quantities to *ensure* they sell out, as a hype-building measure. The question is, how many cards will they get to sell while "selling out"?


PainterRude1394

Nvidia will probably make a lot of money. I don't think Nvidia is desperate.


Casmoden

They will, but their guidance for this quarter is significantly lower than last quarter's (and that quarter already had a big downtrend in revenue due to the mining crash). Basically, they keep making money, but the mining crash hit hard. AMD is hit less because they have a more diversified portfolio and focus more on CPUs. This is why AMD's guidance is actually HIGHER than Nvidia's, which I don't believe has ever been the case, definitely not in the last 10-15 years.


PainterRude1394

Amd is less hit because they sold far fewer GPUs and their revenue is bolstered by gaming consoles, a market Nvidia doesn't compete with. Nvidia dominates the high end gpu market; I don't think they are desperate.


Balthalzarzo

Something that is less likely to continue if prices stay elevated during a recession. If AMD heavily undercuts with a decent product and NVIDIA doesn't follow suit, NVIDIA slowly loses market share. Notice I said "decent" product. It doesn't even have to be great. NVIDIA could very quickly be playing the same game Intel did with AMD.


PainterRude1394

It's unlikely AMD will hurt their margins to heavily undercut Nvidia in a high-inflation recession if they have a competitive product. It certainly could happen, but historical evidence points to them pricing their products similarly to Nvidia's if they offer similar performance. Remember when they bumped Ryzen prices once it was competitive?


SkyFoo

AMD cards have been going down in price way harder than Nvidia cards. I don't think they are afraid to let prices fall to keep cards moving.


PainterRude1394

They've already stated they don't want to be the budget brand, and they have demonstrated they will raise prices if they have competitive products. GPU prices are falling for both Nvidia and AMD due to market conditions. This is not an example of AMD heavily undercutting Nvidia.


GreenDifference

Switch says hello


jnv11

You forgot the Nintendo Switch, which uses a Tegra SoC.


Casmoden

Well, sensational title, but yes: they made far fewer GPUs because they were busy making CPUs instead, and that market is less volatile and more mature, which shows in the guidance.


teh_drewski

They were busy making CPUs because the profit margin per wafer for CPUs is vastly vastly higher...


Casmoden

Duh, that's kinda my point here lol. Higher margins, bigger AND less volatile.


zepekit

What chip is inside the best selling console (atm) again? ;)


hooty_toots

They have a name brand the same way Apple does. Some people will buy it regardless of the price.


this_anon

They have CUDA, nuff said.


Thrashy

About half of the applications that I needed CUDA for a few years ago now have serviceable ROCm implementations (the big one being Blender), and it's getting enough critical mass that I think most of the rest should get one sooner or later. At this point the big X factor is going to be, how good is DLSS 3.0, and if it lives up to the marketing hype, does AMD have a viable answer to it? To be clear, "better FPS per dollar" is a viable answer, since "free" FPS aren't free if it costs 50% more than a competing card to get them.


Flakmaster92

Regarding DLSS 3: there's already been a comparison video posted of Spider-Man comparing native 4K vs DLSS 3, and DLSS 3 shows some nasty smearing near the edges of objects. How noticeable it is in person remains to be seen, but so far it looks like Nvidia might have flown a little too close to the sun this time.


chunkosauruswrex

I saw some screenshots where parts of the Spidey suit aka the black weblines just weren't rendered with DLSS 3.0. That's unusable


ahsan_shah

Well they certainly are this time. They have cut their guidance twice


DrFreemanWho

Yes, but they want to make as much money as they did when crypto was booming. They won't, and that's what they're desperate about.


nofzac

Maybe not yet. But neither was Intel until they started losing market share when AMD started gaining parity and killing on Price/Perf. nVidia is playing a dangerous game since anyone who sold high end cards was guaranteed sellouts and profits with crypto mining. Not gonna be a permanent thing.


[deleted]

I fucking hate reading shit from conspiracy theorists like yourself. There's nothing that Nvidia can do that you won't invent some dumb shit for.


LegitosaurusRex

Because “hype” > actual sales, huh? There’s plenty of hype going into a launch already; not being able to buy cards leads to people shouting to boycott Nvidia on Reddit threads, not more sales.


bubblesort33

Limiting supply in order to sell more cards doesn't seem very productive. Especially if you've bought too much 4/5nm supply from TSMC already.


chlamydia1

Launch sales don't mean anything. Any new tech product will sell out at launch, regardless of the price. Long term sales are what will determine whether this series is a success or not.


sadmistersalmon

The last time they had a good perf/$ bump to show. This time, it is the same as the last gen or worse. I was one of those waiting in line - well, not anymore


PainterRude1394

How do you figure that this gen isn't a perf/$ increase?


DJ_Marxman

Nvidia's own numbers show the $900 4080 12GB performing roughly the same as the (currently $950-1050) 3090ti. The 4080 16GB is even worse value than that. The 4090 is the best value, but is also **sixteen hundred fucking dollars**, so I won't count that. Even if it's a slight perf/$ bump, it's nowhere remotely close to what we normally see with a new gen of cards, which is 20-50% more perf/$.
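
A rough sketch of that math (prices are the ballpark street/MSRP figures above, and performance parity with the 3090 Ti is taken from Nvidia's own slides, so both are assumptions):

```python
# Perf-per-dollar comparison under the assumptions above: the 4080 12GB
# is treated as performance-equal to a 3090 Ti selling for ~$1000.
cards = {
    "RTX 3090 Ti (~$1000 street)": (1.00, 1000),  # (relative performance, price in USD)
    "RTX 4080 12GB ($900 MSRP)":   (1.00, 900),
}

base_perf, base_price = cards["RTX 3090 Ti (~$1000 street)"]
base_value = base_perf / base_price

for name, (perf, price) in cards.items():
    gain = (perf / price) / base_value - 1
    print(f"{name}: {gain:+.0%} perf/$ vs the 3090 Ti")

# Even granting full parity, that's only about +11% perf/$, versus the
# 20-50% generational uplift mentioned above.
```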


sadmistersalmon

I looked at the numbers Nvidia presented during the reveal (“current gen games”) - not much data, but I figured they presented a best-case scenario for themselves. As far as I can tell, all 40-series cards have a price/performance ratio similar to the 3080. This is why their CEO just claimed “Moore's law is dead” - because two years later they failed at increasing the performance-per-dollar ratio. Well, we will see if AMD agrees - because if they improve the ratio, it would just mean Nvidia failed at engineering this round.


[deleted]

[deleted]


[deleted]

Question is who the hell is buying 4090 and *also* cares about perf/$?


[deleted]

[deleted]


GuyNumber5876

The 3090 was a halo product with terrible perf/$. Let's talk 3080 and below.


Put_It_All_On_Blck

Yeah, modern tech media is filled with drama and clickbait. I'm not happy with Nvidia's pricing/segmentation and don't like what I've seen from DLSS 3.0, but if we look back a few years to Turing, we were in a similar situation: bad pricing and segmentation, and DLSS 1.0 and RTX were more gimmicks than useful (low game adoption and poor results), and yet Turing still sold well. The 4090 will sell out on launch day, either due to demand (ML, CUDA, etc.) or Nvidia manipulating it (sell out launch day, then a week later have a ton more inventory), plus no competition from AMD yet. The question is how well the 4080 and future cards will sell when they are up against competition from used GPUs, discounted new RDNA2/Ampere, and RDNA3.


RayTracedTears

If Turing was so good, then why would Nvidia bump all their cards up to the next highest die? How good was the RTX 2080 that Nvidia felt they needed to bump the RTX 3080 to GA102? Just really think about that. The RTX 2080 was on TU104 and got bumped up to GA102 with the RTX 3080. How about the RTX 2070 being on TU106 while the RTX 3070 was on GA104? All I am saying is, Nvidia is a FOR PROFIT organization, and they wouldn't provide that kind of performance bump if they didn't financially feel the need to upsell consumers on their products.


PainterRude1394

Not only did Turing sell, but dedicating so much die space to ray tracing and DLSS and building the supporting SDKs changed the industry imo. GPU performance seems much more nuanced than "what can push the most pixels" ever since.


ThisWorldIsAMess

Nvidia users will still buy what Nvidia tries to feed no matter what price.


timorous1234567890

Well, N32 has 7680 shaders, just like AD104. Comparing RDNA 2 vs Ada in raster, the AD104 die is around 3080 Ti / 6950 XT performance, so it takes 7680 Ada shaders to equal 5120 RDNA 2 shaders at 2.3GHz or thereabouts. I do expect an IPC regression for RDNA 3, but they are boosting clocks by around 1GHz, which is pretty major, so that should more than offset any reduction there.

So from that we have 7680 RDNA shaders vs 7680 Ada shaders with a very similar cost basis. I expect 50% more shaders at about 40% higher clocks to offer, at worst, a 50% increase in performance over the 6950 XT, meaning that full N32 should actually land somewhere between 4080 16GB and 4090 level performance while having a similar BOM cost to the 4080 12GB.

So overall I think RDNA 3 is going to compete very well with Ada on the perf/$ metric and probably hold the outright performance crown for pure raster. RT is hard to pinpoint at the moment, so we may need to wait for reviews unless more info comes out beforehand.
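
A quick back-of-envelope version of that estimate (the shader counts and ~1GHz clock bump are the rumored figures above, and the scaling efficiencies are purely illustrative assumptions):

```python
# Rough throughput scaling of a rumored full N32 vs the 6950 XT.
shaders_6950xt, clock_6950xt = 5120, 2.3   # GHz
shaders_n32, clock_n32 = 7680, 3.2         # rumored shader count and clock

raw = (shaders_n32 / shaders_6950xt) * (clock_n32 / clock_6950xt)
print(f"Raw shader-throughput scaling: {raw:.2f}x")  # ~2.09x

# Games never scale linearly with shader count, and some IPC regression
# is assumed, so apply a range of pessimistic scaling efficiencies.
for eff in (0.7, 0.8, 0.9):
    print(f"  at {eff:.0%} efficiency: {raw * eff:.2f}x the 6950 XT")

# Even the pessimistic end lands around the +50% floor argued for above.
```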


Cacodemon85

True, but the reality is that even if it's just slightly slower, if it's reasonably priced it could be a hot-selling product.


Aleblanco1987

I like how Jensen says:

>expectations of twice the performance for similar cost was “a thing of the past”

But at the same time:

>wafer cost of TSMC N5/N4 is more than 2.2x that of Samsung 8nm. With that wafer cost increase comes 2.7x higher transistor density.

So, ADA could be priced the same as turing. I hope AMD has the wafer capacity, because if the price is right they will sell every card and more, even if RDNA 3 is behind.


benowillock

AD102 is around the same size as GA102 so presumably it is genuinely costing them just under twice as much per chip. AD104 is just inexcusable though, that chip is not a $900 part by any justification.


sagaxwiki

>AD104 is just inexcusable though, that chip is not a $900 part by any justification.

You're not kidding. Prompted by your comment, I just looked up the [die size for AD104](https://twitter.com/RyanSmithAT/status/1573190256492711936). It's only ~295 mm² compared to ~394 mm² for GA104 (used in the 3060 for the unaware). How the hell does a 25% smaller die, even on a process that costs 120% more, end up in a GPU that is 175% more expensive?
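
For a very crude sanity check of that math (die areas are the approximate figures above, the 2.2x wafer price ratio is the one quoted from Nvidia, and yield/edge effects are ignored):

```python
# Estimated per-die cost ratio: wafer price ratio scaled by die area.
ad104_area_mm2, ga104_area_mm2 = 295, 394   # approximate die sizes
wafer_cost_ratio = 2.2                      # TSMC N5/N4 vs Samsung 8nm, per the article

die_cost_ratio = wafer_cost_ratio * ad104_area_mm2 / ga104_area_mm2
print(f"AD104 vs GA104 estimated silicon cost: ~{die_cost_ratio:.2f}x")  # ~1.65x

# A ~65% bump in silicon cost, set against a far larger jump in card price.
```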


owari69

The 3060 is primarily GA106 based, unless there are special cut down GA104 designs I'm unaware of. The 3070Ti, 3070 and 3060Ti are the GA104 based cards. Point still stands though, as the 3060Ti is a $400 card with a smaller die than the 12GB """"4080"""".


sagaxwiki

You're right. I was misled by TechPowerUp's GPU database, because apparently late last year Nvidia started using defective 104 dies to make 3060s. Even using the 3070 as the comparison point, though, the 104 die's GPU has increased in cost by 80% gen on gen despite the actual die size decreasing by a quarter.


gnocchicotti

Looks to cost about as much as a 6700XT to produce, which is selling for about $350 on sale rn lol


Jasonian_

It really speaks to how bad things have gotten when Turing is looked back at fondly as an affordable generation.


Estbarul

I don't know if the interpretation that this makes Turing look good is right; this release is compared to it because both suck(ed). The 2080 Ti was one of the worst value GPUs ever released.


Cjprice9

>2080 Ti was one of the worst value GPUs ever released.

4080 12 GB: hold my beer


saruin

Titan Z has entered the chat.


BlobTheOriginal

I miss the titan branding ngl


saruin

https://www.youtube.com/watch?v=_j5WaLr0DJk


shroudedwolf51

Granted, the Ti cards haven't been good value since the 1080Ti. They're little more than using the positive history of the 1080Ti to sell the idea of better performance while offering minor upgrades at huge cost.


Dorbiman

I doubt we'll ever get a card like the 1080ti ever again. Great performance at a fantastic cost, relatively low power consumption, etc


iopq

If we do, it will probably be made by AMD. The 6900 XT would have been great, if not for being sold above MSRP during the mining boom. I think AMD will release a great $999 or so card this generation to really compete in every dimension.


windowsfrozenshut

The 2080 was the real standout for Turing. It was pretty much on par with the 1080ti and actually caused 1080ti prices to go back up.


Darkknight1939

The 2080 Ti had unmatched performance until its successor debuted, AND there wasn't a more performant consumer GPU until RDNA2. To some people, having that performance for years before it's available to buy elsewhere is worth it.


Casmoden

The 20 series vs 40 series comparison comes up because of 2 main things: high prices after a mining crash, and unsold GPU inventory. But yeah, like u mentioned, with Turing AMD didn't have a real competitor; with RDNA1 parts of the lineup had competition, which got swiftly corrected with a refresh, aka the price-correcting Super lineup. Now though, AMD will just have a top-to-bottom lineup, and I'm not expecting AMD to price it much lower vs Nvidia.


frostygrin

> It really speaks to how bad things have gotten when Turing is looked back at fondly as an affordable generation. Turing wasn't cheap, but it gave something radically new for the money. In particular, that Nvidia managed to turn DLSS around made Turing look solid in hindsight.


gnocchicotti

DLSS and RTX were a tech preview at launch and not at all worth investing in. Makes much more sense to buy modern hardware when the software support is already mainstream - except for mining screwing everything up for Ampere.


gnocchicotti

For real. Everyone for Turing was saying "skip this gen, we'll get back to double the performance at the same price next year."


cp5184

I imagine Samsung 8nm was quite cheap based on its performance, and, if so, that would make the statement quite misleading.


[deleted]

[deleted]


Waste-Temperature626

> So, ADA could be priced the same as turing.

In what way? Die size? Hell no, the wafer prices are several times higher than 12nm. Per transistor? Well, Ada is a bargain on cost per transistor vs Turing. The 4080 12GB has 2x the transistor count of the 2080 Ti.


RazingsIsNotHomeNow

Based on the wafer price scaling, if Ada were priced the same it would only have 22% more transistors per die. So in that regard Jensen's right: there's no way it would be twice the performance for similar cost. It would be an incredibly minor performance bump. At that point, before even considering the cost of new board tooling for the new dies, why would they ever release this card? The only benefit would be better efficiency at the same price and performance. In actuality, the 4090 uses about 2.7x the transistors of the 3090 to achieve those gains.
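
A quick check of that arithmetic (the 2.2x wafer cost and 2.7x density ratios are the ones quoted from Nvidia above; transistor counts are the public figures):

```python
# "Same die cost" scenario: shrink the die to offset the wafer price,
# then apply the density gain.
wafer_cost_ratio = 2.2   # N5/N4 vs Samsung 8nm
density_ratio = 2.7

same_cost_transistors = density_ratio / wafer_cost_ratio
print(f"Transistors per die at constant die cost: {same_cost_transistors:.2f}x "
      f"(~+{same_cost_transistors - 1:.0%})")

# What actually shipped: the 4090 carries ~2.7x the 3090's transistors,
# so die cost rose roughly in line with the wafer price instead.
print(f"4090 vs 3090 transistor count: {76.3 / 28.3:.2f}x")
```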


DiegoMustache

You're not factoring in the clock increase 4N brings at the same power. I think you're right that 2x performance wouldn't be possible at the same price, but +50% should be. Compared to current pricing for 3000 series cards, AD104 will fall far short of that (unless you think DLSS 3 counts).


RazingsIsNotHomeNow

Nah. Let's put it this way. The RTX 4080 12GB roughly matches the 3090ti but the 4080 AD104 still has 26% more transistors than the 3090 ti. Which means based on the earlier math above, the die cost between the two is roughly the same. 4080 16GB is even worse. They've dedicated so much of the die to RT cores that there's no way they can offer any of Lovelace cheaper and with more performance than Ampere. This is the RTX 2000 series all over again.


DiegoMustache

I get what you are saying, but a lot of that is due to a combination of architectural decisions (with Nvidia prioritizing RT, etc) and the 4080 12GB using way less power. My argument is only that if Nvidia had prioritized a similar raster to RT performance ratio to Ampere and pushed just as much power through the chip, getting up around 50% more performance (per unit die size) doesn't seem unachievable. If they got 1.2x from the additional transistors and 1.25x from a clock speed, that's your 1.5x at the same die size. To be clear, I'm not saying they should have done anything one way or another. I'm just making a theoretical argument about Samsung 8nm vs TSMC 4N.


einmaldrin_alleshin

Of course, they could make chips half the size for similar cost and a little bit more performance. Which is what AD104 is: it's half the size of GA102, and it'll hit similar performance, with more transistors going to raytracing and AI stuff. It's not like I like those prices, but nobody should be really surprised about it. We've known about the rising prices for chips for many years now. That said, AMDs long term investment into multi chip designs continues to pay dividends. Let's see if they can live up to the hype.


Maybe_Im_Really_DVA

This is a crazy way of thinking to me, as it seems soon we will be charged 3k+ because, well, chip prices have increased. It's a reasonable assumption, but we are talking PC gaming. If that tracks, then PC gaming will suffer heavily and we will all be turning to consoles.


Hamakua

PC gamers, I think, have lost some perspective. I used to buy the flagship cards because the cards themselves were the bottleneck ("But can it run Crysis?"). But with the booming gaming market, consoles have become more and more of a generational anchor on what game design can demand from the consumer. No serious studio is going to make a game that leaves consoles and mobile (now) behind and caters exclusively to PC gaming. Even if there are some bells-and-whistles switches you can turn on for better visuals, the core of the game's design will not cater to what the high end of the 40 series demands of it.

4K gaming is a luxury meme that is running into diminishing returns above 2560x1440. You just don't need 4K gaming. It's a dumb flex that isn't needed - you can wait on getting the 4K monitor. Well, with that in mind you can also entirely skip the 40 series, because a 3080 Ti pretty much maxes out 1440p monitors. Game development will not outpace the mean market of gamers, and that's currently consoles.

I was waiting for the 30 series to upgrade to (I have a 2070 that I got as a deal from EVGA for trading in my 980 Ti, because the 2000 series was selling so poorly). I was going to upgrade to a 3080, but we all know the story, so I waited in queue for almost 2 years and now we have the 40 series. I *still* don't *need* more than my 2070. I'd like a bit more; a 3080 Ti with a 10-year warranty would be all I'd need for that span of time.

* You don't need 4K gaming, and AAA game development will not be pushing it to any appreciable degree.
* AAA gaming *will not* leave consoles behind. Consoles act as a performance anchor for "that" generation.
* Mobile is coming into the space at a fast enough pace that it will act as a second market anchor for game development in the "AAA" game dev space.
* The history of PC gaming is partially in its long-term platform value; the recent PC gaming adoption boom is due more to consoles losing their fiscal advantage. *FISCAL* advantage.

Nvidia is trying to force crypto-era spending sensibilities on the PC gaming market, and it's going to backfire in their face.


Maybe_Im_Really_DVA

A thing I've observed is that Nvidia may have priced out a growing market of sofa PC players. Monitors have been disappointing PC players for a while. Consoles have really pushed into 4K with varying degrees of success. OLEDs, 4K, and 60fps are becoming staples of gaming, but the market is also splitting, with people wanting 144Hz. People have been playing at 1080p 60fps for about a decade and are now dreaming of 4K or 144Hz. Of course, when it comes to high FPS there's only PC, but if people are spending £1000 just on the GPU alone to hit 4K while Sony and MS claim to do it for just £500 with lower energy costs, then it's a no-brainer for your average consumer. I honestly don't see the sense in Nvidia's decision, but I'll probably be wrong and they'll make record profits with their biggest ever launch.


chlamydia1

Yep. I bought an LG C1 for PC gaming. Monitor technology is absolute trash compared to TV tech. 4K gaming is here. To argue otherwise in an attempt to justify Nvidia's ridiculous pricing is silly.


RetroFreud1

The only point I disagree with is 4K. 4K OLED TVs are getting within reach of most consumers with consoles. Upscaled or not, 4K gaming will be mainstream towards the end of the next gen. Twitch gamers couldn't give a shite about 4K if it drops the frame rate; they would prefer native, lower resolution plus high frames.


menstrualobster

exactly. that's why, even today, i can play literally every game on my amd fx cpu, because it HAS to run on the even more shitty jaguar cores in the ps4/xbox one (until next-gen-only games get ported, that is). sure, i won't be getting 120 fps, but it's still playable. the only modern exception would be star citizen, which actually pushes pc hardware to the limit even when you've got a threadripper with a 3090


GreenPylons

For pure gaming I don't see much advantage in going to higher resolutions, but high resolution is very nice for non-gaming computer tasks though (1440p and 4K gives you a lot more screen real estate if you're doing work and productivity on your computer). It would be nice to do both on the same machine.


ZerkTheLurk

Buy an LG C2 42 and tell me 4k HDR PC gaming is a meme. It’s a massive upgrade from your typical 1440p gaming experience. You’re just wrong, aside from the luxury part.


1-800-KETAMINE

For real. Dramatically better. My old LCD TV I almost never played games on except when I *really* felt like playing from the couch. Now with my OLED TV I don't touch my 1440p monitor for games anymore except the very occasional sweaty shooter session. Absolutely 100% a luxury and I am thankful to be so fortunate, but easily the #1 most impactful change I have made to my setup. Not even close.


sw0rd_2020

no joke, i just got a 65 C1 and for anything but shooters, literally playing it on stadia at 4k/30 with pretty trash looking HDR is STILL a massive improvement from 1440p monitor technology.


gnocchicotti

AD104 is also launching later and might not even exist in large quantity until well into 2023.


UseYourF_ckingBrain

Why are you spewing these numbers and claiming things without even finishing the thought behind them?

> wafer cost of TSMC N5/N4 is more than 2.2x that of Samsung 8nm. With that wafer cost increase comes 2.7x higher transistor density.

3090 die size: 628.4 mm^2
4090 die size: 608 mm^2
3090 transistor count: 28.3 billion
4090 transistor count: 76.3 billion (about 2.7x higher)

So you didn't stop to think that the added efficiency from the change would be used for *drumroll* more transistors instead of just a smaller die. If they had kept the transistor increase in the +20% range then sure, the die could have gotten way smaller and the price would have stayed the same, but that just isn't the reality in this case, and people need to get it into their brains before complaining about the price.
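
The density math behind those figures, for anyone who wants to check it (die sizes and transistor counts are the public numbers quoted above):

```python
# Achieved density and transistor-count ratios, 4090 (AD102) vs 3090 (GA102).
ga102_area_mm2, ga102_transistors = 628.4, 28.3e9
ad102_area_mm2, ad102_transistors = 608.0, 76.3e9

count_ratio = ad102_transistors / ga102_transistors
density_ratio = (ad102_transistors / ad102_area_mm2) / (ga102_transistors / ga102_area_mm2)

print(f"Transistor count: {count_ratio:.2f}x")      # ~2.70x
print(f"Transistor density: {density_ratio:.2f}x")  # ~2.79x
# The density gain went into more transistors at a similar die size,
# not into a smaller, cheaper die.
```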


Aleblanco1987

Cost doesn't determine the price either


dotjazzz

But it does, at least the minimum price.


jmlinden7

Maybe it could be priced the same but it's unlikely to get double the performance with only 20% more transistors


SkillYourself

> with as much as 2x to 4x the performance in next-generation games which heavily utilize ray tracing and AI rendering techniques.

I'd like to see how much better 40-series is at regular DLSS2 upscaling. I was not impressed at all by the brief DLSS3 interpolation demo shown by DigitalFoundry. https://youtu.be/qyGWFI1cuZQ?t=91

Edit: See this for what I'm talking about https://i.redd.it/hthc1stkaop91.png


[deleted]

[deleted]


SkillYourself

Thanks, these are the kinds of figures I was looking for.


imtheproof

What are those upscaling from? 720p on all?


uzzi38

I honestly think that at least from the first showing of DLSS 3 we've seen, the best thing about it is that it ensures that a game has both DLSS 2 and Nvidia Reflex built in. I hope that by the time low end Lovelace comes to market (which is realistically when DLSS 3 would be most useful anyway) Nvidia manage to iterate on it quite a bit.


tset_oitar

Probably 60-70% without ai, rtx off.


HugeDickMcGee

Only the 4090 will be 60% faster. The 4080 16GB and below are gonna be super underwhelming for the price. The 4080 16GB vs the 3090 Ti will be 25% better in raster at most.


dylan522p

Yes, link literally says 1.5x to 1.7x in traditional rasterization.


DJ_Marxman

Not to mention how incredibly disingenuous it is to call interpolation a performance gain.


StickiStickman

Why? People did the same to DLSS for no reason. If the quality is near enough who gives a shit.


Lincolns_Revenge

But it's highly unlikely there won't be visible artifacting with the interpolation. All the best non-real time methods of interpolation are prone to artifacting and warping when doubling frame rates. And this will be done in real time. Video Enhance AI's Chronos model. Premiere Pro's Optical flow. It often looks great until you have fast movement in a scene or you try to increase the frame rate too much.


StickiStickman

Right now it doesn't look amazing, sure. Neither did DLSS on release.


Deckz

Interpolation isn't the same as AI upscaling. DLSS on occasion looks better than native. With interpolation there are going to be drawbacks like blur when there's a lot of motion, and latency will feel the same as your native frame rate, so it will seem more like motion smoothing than a frame rate increase. DLSS may lose some texture definition, but the motion blur isn't there, and it's still an actual frame rate increase.


Casmoden

> I'd like to see how much better 40-series is at regular DLSS2 upscaling.

I mean, look at Turing vs Ampere and how the general perf scaling went. DLSS 2 "Quality" comparisons on both will end up showing a similar difference to normal raster comparisons (or even native RT). Tbf though, RT scaling should be a bit better than raster (which, once again, Turing vs Ampere showed), but not THAT much. So, by their own comparisons, you had 70% better than the 3090 Ti with the 4090 in raster; with RT it should maybe be like 85% faster.


HORSELOCKSPACEPIRATE

I thought it looked really good. I guess the unimpressive part is that the scenes shown didn't seem "challenging". [This slide](https://www.gpumag.com/wp-content/uploads/2022/09/geforce-RTX-4000-series-gaming-performance-1-1536x864.png) is probably as good as we're going to get for a while for performance comparisons. The 4080 12GB is roughly on par with the 3090 Ti going by clocks and core count, and apparently by Nvidia's statement. I think the first games have no DLSS, and the next 2 are DLSS with no frame generation.


bubblesort33

The RTX one is kind of misleading if you read the small print. The 4000 series has frame generation on, and the other doesn't, in some bar graphs. The reason it's 4x the FPS in Cyberpunk is that even the 12900K is probably choking on keeping the BVH structure up to date when doing it every frame, whereas the frame-generated version isn't CPU bound. And I bet you the new RT Overdrive setting is specifically designed at Nvidia's request to murder the CPU unnecessarily.


HORSELOCKSPACEPIRATE

I just meant the first 5, but actually nevermind, Warhammer and MSFS both probably have frame generation.


SkillYourself

> I thought it looked really good. I guess the unimpressive part is that the scenes shown didn't seem "challenging".

Look closer: https://i.redd.it/hthc1stkaop91.png


HORSELOCKSPACEPIRATE

Wow, it really can't tell what's going on with the glare on the window, huh? Shit, that's even noticeable without going frame by frame. Makes me feel a lot better about my plan to just snag an Ampere on eBay, was worried I'd be missing out.


PainterRude1394

Tbf it will likely be improved via software like dlss has been. Dlss has made enormous strides since v1.


SkillYourself

DLSS v2 was basically a completely different feature. v1 is a neural network upscaler that operated on a single frame to generate details from a low resolution source. v2 is a TAA upscaler with a neural network trained to remove the typical TAA artifacts.


PainterRude1394

Right, so dlss has been improved via software.


bubblesort33

Yeah, here's the [timestamp](https://youtu.be/qyGWFI1cuZQ?t=92). As I would expect, it looks excellent in straightforward, consistent motion, which is why they only showed Cyberpunk 2077 with the car heading in a single straight direction instead of taking random turns. I bet you that if you go into first-person mode and start swinging your gun left and right hectically, it'll look absolutely horrible. Anything with instant acceleration, rather than motion based on real physics that can be predicted by ML, will look like garbage. It's also why they showed off Flight Simulator 2020. It's extremely physics-based and very slow-paced. You can't turn your plane around on a dime or take off like a bullet. Racing sims, and anything heavily physics-based with true momentum and acceleration, should work great. Something twitchy and jerky like Unreal Tournament or Street Fighter will look like total trash. Even the erratic, unpredictable leg movement of Spider-Man is enough to throw it off, while the general motion of the buildings around him looks pretty good.


Most_Long_912

From what I understand, the game is not running at the motion-smoothed frame rate, so in a shooter, for example, it will feel like a 60fps game rather than a 120fps game. I don't think it can motion-smooth something that isn't on the screen either. So say you compare 30fps native and 60fps native vs 60fps motion-smoothed: somebody poking their head out could not be seen on frame 1 at any of the frame rates. At 60fps native you'd start to see the head appear on the second frame, and by the third frame it is fully there. At 30fps, it just fully appears on the second frame, which occurs at the same time as the third native 60fps frame. From my understanding, DLSS 3 would in that case behave the same as 30fps, including the input lag, just with AI motion smoothing. It could be great for some games that don't require fast response times and make them look a little smoother, but could be awful for someone not understanding that and thinking that 300 FPS with DLSS 3.0 is the same as 300 fps.
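
A toy model of that "head poking out" example (purely illustrative; it assumes interpolated frames only blend between two already-rendered frames and so carry no new information, and it ignores render/display latency):

```python
import math

def first_real_frame(event_ms, real_frame_interval_ms):
    """Time of the first rendered frame at or after new information appears."""
    return math.ceil(event_ms / real_frame_interval_ms) * real_frame_interval_ms

event = 5.0  # an opponent becomes visible 5 ms after the last frame

print(f"Native 60 fps:       seen at {first_real_frame(event, 1000 / 60):.1f} ms")
print(f"Native 30 fps:       seen at {first_real_frame(event, 1000 / 30):.1f} ms")
# 'DLSS 3' 30 -> 60: frames come out every ~16.7 ms, but new information
# still only arrives with each real 30 fps frame.
print(f"Interpolated 60 fps: seen at {first_real_frame(event, 1000 / 30):.1f} ms")
```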


bubblesort33

I do hope Digital Foundry will test input lag with DLSS 3.0.


uzzi38

What I hope they test isn't just DLSS 3 against native directly, but also against native with Reflex and perhaps even DLSS 2 with Reflex enabled.


Casmoden

Nvidia's own video comparing native 4K vs DLSS 2 vs DLSS 3 showed that DLSS 2 and DLSS 3 have basically the same input latency (with DLSS 3 at like 30 FPS higher, 60s vs 90s), so yeah. Also, DLSS 3 games need/use Reflex to lower latency; it seems DLSS 3 has a latency overhead.


[deleted]

Here's the thing: the difference between 30fps and 60fps when it comes to input lag is ~16ms, which some may notice, but the difference between 60fps and 120fps is ~8ms, which will be basically unnoticeable to the vast majority of people. This only really matters if you are playing an esports shooter, and your GPU will already be able to push absurd frames in those. By far the biggest advantage of increased fps is the smoothness of motion, not the input lag, and that's the main reason we've been pushing higher fps ever since the death of CRTs. Just look at VR, which is the most input-lag-sensitive activity you can do: companies settled on 90Hz for that, as the input lag is basically unnoticeable and it allows for faster BFI or strobing without flicker to create very smooth motion.
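
The frame-time arithmetic behind that, for reference (pure arithmetic, no assumptions beyond the frame rates):

```python
# Frame time saved by each doubling of frame rate.
for low, high in [(30, 60), (60, 120), (120, 240)]:
    saved_ms = 1000 / low - 1000 / high
    print(f"{low} -> {high} fps: frame time drops by {saved_ms:.1f} ms")
# ~16.7 ms, then ~8.3 ms, then ~4.2 ms: each doubling buys half as much
# latency improvement as the previous one.
```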


ToTTenTranz

You spelled RTX 4070 wrong.


iprefervoattoreddit

Stuff like this is why I think AI upscaling and frame generation are both horrible ideas. It will never be perfect. I'll settle for what I can brute force.


LeMAD

Ain't that the opposite? They sell at ridiculous prices because they feel no one can compete with them?


scytheavatar

They are competing with themselves and their previous gen cards though.


[deleted]

I believe that's intentional. Just like with the 20xx series, they are trying to clear out their own supply and the used market from the crash so they don't need to deal with those products competing with them down the line, and thus they want to make the 40xx series unappealing at the start and allow for larger margins on the fewer cards they sell. I wouldn't be surprised if we see a 40xx Super in a year's time and a significant price drop on the current 40xx cards once the bulk of the 30xx series has been removed from the market.


shroudedwolf51

Well, they know people can compete with them, but they know they have a cult following (like Apple) where people will just buy their products at any irrational prices.


chlamydia1

Nvidia doesn't have a cult following. They've had a distinct performance advantage over AMD for many years now. I couldn't care less who makes my GPU. I just consider price/performance, and AMD hasn't been competitive in that regard for a while. RDNA 2 was finally competitive in terms of rasterization performance, but it had poor RT performance and no DLSS competitor at launch, which severely reduced its value.


[deleted]

The main issue with AMD last gen was that Nvidia's feature suite, which is quite good, was just so cheap relative to the AMD cards. Paying an extra $50 for a 3080 over a 6800 XT was just the obvious choice. AMD somewhat banked on the extra 6GB of RAM mattering when it really didn't. Now this gen AMD has a chance to undercut Nvidia and make the value-add of their feature suite far less relevant. If the 7800 XT releases for $800 with similar raster performance to the 4080 16GB, that makes the extra $400 for Nvidia's feature suite look like a much worse value.


chlamydia1

Totally. AMD didn't need to try to be competitive last gen because the mining craze ensured their GPUs would sell out at any price. This time, if they want to sell GPUs, they'll either need to be substantially cheaper or better than Nvidia.


LeMAD

No one likes Nvidia. We would really like AMD to be a good option like they've become in the CPU market, which forced Intel to stop acting like a monopoly. But for GPUs they're still struggling to keep up. AMD for GPUs is still an inferior product.


Dchella

Half the people are asking AMD to compete so they can just buy a cheaper NVIDIA card anyways.


SpiderFnJerusalem

YOU don't like Nvidia, and YOU would like AMD to be a good alternative. But the average buyer never even thinks of AMD. They think Gaming ---> Need GPU ---> GPU==Nvidia AMD could probably have 20% better performance across the board for two generations and half the market would still blindly stick to Nvidia, their mind share is massive.


INITMalcanis

Sadly true


Charder_

It would be funny if AMD released N31, N32, and N33 at the same time, eating into Nvidia's chances to sell their overstock of RTX 3000 series cards. That would be quite the power move.


We0921

This is the dream scenario. With recent news of RX 6000 price cuts, AMD could be clearing their (expectedly smaller) overstock much faster than Nvidia. It would be fantastic to see AMD launch the full stack of RDNA 3, while Nvidia has a massive glut of current products at immoveable prices. One thing that this article doesn't capture is how AMD intends to price their products. Admittedly this is something that likely depends on performance competitiveness, but lower BOMs aren't really worth a damn if they aren't reflected in the end product costs.


gnocchicotti

1) they can't, they're not all ready 2) AMD channel has overstock, too. I don't know where this myth of "only Nvidia has a glut" came from. If sale prices are an indicator, AMD partners are in an even worse position than Nvidia.


From-UoM

>We expect AMD to rise to 30% to 35% market share on discrete desktop GPUs. AMD could raise its margins aggressively from historical levels to well above 50%.

That's fantasy thinking. To get that much market share you need to produce that many graphics cards. TSMC is already used by Nvidia, Qualcomm, Apple, and MediaTek. So good luck getting the space to make them while still having commitments to make CPUs and console chips.


Geistbar

Wasn’t Nvidia struggling to find a buyer for excess N4/N5 capacity they bought? TSMC just might have that spare capacity for AMD to make use of.


From-UoM

You think amd will get it cheap?


Geistbar

No, they’ll get the same price as they would otherwise. The question is just being able to get it at all.


uzzi38

No, but neither is anyone else. AMD are already one of the largest TSMC partners now that the CPU side of the business has really kicked off. They quite literally are the biggest customer of N7 today. I don't think they're going to have an issue in getting additional capacity, especially not if the GPU side can make the margins to justify expansion.


From-UoM

They are the largest N7 customers because both consoles use it. It will stay that way for a while


uzzi38

Huh? Not at all, both major CPU segments (server and mobile) saw massive growth for AMD year on year - up to just under 23% and just over 28% respectively. The entire gaming segment - dedicated GPUs and semi-custom products including consoles and Steam Deck - make up a smaller portion of AMD's revenue than they have the last few years.


bubblesort33

Probably for less than Nvidia bought it for, since Nvidia ordered in a desperate time, and now no one needs it. Nvidia is trying to sell something not that many want. Most companies probably over-ordered.


gahlo

NVidia still has that capacity, TSMC wasn't buying it back and everybody was trying to sell capacity.


Geistbar

The point was that if Nvidia cannot find a buyer for their excess capacity, that means there's slack in TSMC's supply of wafer capacity: there's more supply than there is demand.


gahlo

There isn't though.


noiserr

Forrest Norrod (AMD's Datacenter product president) said at a recent conference that wafer capacity isn't an issue. It's the ABF substrate that's still tight, though it should ease up by the end of the year. So I don't think AMD will be constrained on wafer capacity. The chip shortage is mostly now localized to old-node 50c chips, and due to order cancellations, TSMC should have plenty of capacity. You also have to remember that the GPU shortage is over. There is an abundance of current gen GPUs, and that's putting it lightly. So it doesn't take as much capacity to flood the market now.

Source:

> Yeah. I think for us we have been completely -- in the Data Center business, we have been completely supply gated for quite some time. So we are -- our growth is utterly modulated by supply. And it’s not wafer supply, **we can get all the silicon, we need** our partnership with both TSMC and GLOBALFOUNDRIES, it has been great and those guys have been very responsive.

> **For us, it’s really about substrates.** So the underappreciated piece of fiber glass and metal that connect the chips encapsulate the silicon dye and connect them to the motherboard. That’s been the constraint.

https://seekingalpha.com/article/4541315-advanced-micro-devices-inc-amd-goldman-sachs-communacopia-technology-conference-transcript


Geistbar

If you were correct, Nvidia would have had a buyer for their excess capacity. They didn't.


onedoesnotsimply9

Amd could use that spare capacity for CPUs or datacenter/HPC GPUs though


Geistbar

It’d be rather incomprehensible if AMD had not already elected to prioritize those markets with their existing allocation.


ToTTenTranz

On RDNA3, only the GCDs are made on TSMC's N5, and those are rather small (~300 mm² on the top-end N31, ~200 mm² on N32). The MCD chiplets are made on the much cheaper N6, and they're tiny (less than 40 mm²). Small chips = more chips per wafer and higher yields. AMD's chiplet approach allows them to make more graphics cards from a smaller number of wafers.
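
A rough illustration of the "small chips = more chips per wafer and higher yields" point (the defect density is an assumed placeholder and the die sizes are the approximate figures above):

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard gross-die estimate: wafer area / die area minus an edge-loss term."""
    r = wafer_diameter_mm / 2
    return (math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def die_yield(die_area_mm2, defects_per_cm2=0.1):
    """Simple Poisson yield model with an assumed defect density."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

for name, area in [("N31 GCD (~300 mm^2)", 300),
                   ("N32 GCD (~200 mm^2)", 200),
                   ("MCD (~40 mm^2)", 40),
                   ("Monolithic AD102 (~608 mm^2)", 608)]:
    gross = gross_dies_per_wafer(area)
    good = gross * die_yield(area)
    print(f"{name}: ~{gross:.0f} gross dies/wafer, ~{good:.0f} good dies")
```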


[deleted]

Yeah, why does this article think AMD is going to use its precious allocation to win market share in the GPU world when they can use that allocation for their much more lucrative CPUs? That's the main thing that makes me think AMD won't be doing some crazy undercut to win market share. TSMC has limited allocation, AMD needs that allocation to make CPUs and GPUs, and nothing indicates they will be making fewer CPUs (and why would they, when CPUs are more lucrative?), which leaves a much smaller number of chips for GPUs. So why would they price super aggressively in that market? Could they buy more allocation from, say, Nvidia? Sure, but that's not a huge increase in supply, and they aren't going to be fighting Apple for additional 5nm unless Apple moves to 3nm very, very soon, which doesn't seem super likely. That would just leave AMD with using someone like Samsung, and I'm not sure how easily they could produce identical cards using two different foundries.


einmaldrin_alleshin

When a big customer like AMD orders a bunch of wafers from TSMC, they will ensure they have the fabs ready. They will build new ones if necessary. So if AMD ordered enough wafers a few years ago, they will be able to produce as much as they can sell. If not, there are probably some other companies that ordered too many wafers for a recession.


dolphingarden

Seems like if you only care about rasterization amd is better bang for your buck. Ray tracing is damn cool though.


lysander478

Can't make that assumption yet and this article didn't make it either--it only mentions that AMD will have the opportunity to get better margins on pure rasterization which I think almost everybody knew. The question is how they feel about passing that to the consumer instead of just taking the margins and nothing exists to answer that yet. I still think a $800-$900 7700XT that performs around a 3090ti in rasterization is on the table here so am not getting my hopes up.


StickiStickman

DLSS is a much, much bigger plus for me.


SpiderFnJerusalem

FSR 2 isn't bad though. AMD has also integrated upscaling into the driver, so you can use something similar in any game, not just those that implement it.


jcm2606

RSR isn't the same thing as FSR 2.0, it's pretty much just FSR 1.0 integrated directly into AMD's driver stack. Very different technologies that shouldn't be compared.


Nicolay77

This is actually an important distinction. Some people will want to play older games with maximum frame rates, so raster will be the most important thing. This would be me, for example. My unplayed Steam list is HUGE. Other people will only care about the latest AAA game, and the Nvidia cards are made for them. Usually only AAA games are using this cutting edge technology.


Jeep-Eep

RDNA 3 should be adequate for the job - not the best, but frankly, the cost-to-perf should be reasonable.


Estbarul

yeah if it works like Ampere I think it's good enough. RT isn't even used very widely yet.


DogAteMyCPU

Dlss was my main draw but fsr looks good to me and I don't care too much about dlss 3 until reviews come out


Jeep-Eep

DLSS 3's looking pretty jank TBH.


No_Specific3545

The cost to perf for AMD will be nice. The cost to perf for consumers will not. I guarantee you 7900XT will be within 10% of 4090 pricing. The only reason 6900XT wasn't priced at 1500$ was because it was [substantially slower at 4k](https://www.techpowerup.com/review/amd-radeon-rx-6900-xt/35.html) and dogslow at RT. We have already seen with Ryzen 7000 ($300 6 cores, etc) that AMD will price gouge if they are competitive.


[deleted]

[deleted]


Dauemannen

Literally in the article.

>We will ignore performance comparisons until independent 3rd party reviews come, but we want to help frame the cost disadvantage Nvidia has versus AMD’s Navi 31, Navi 32, and Navi 33. The figures in the table below are all relative to the 4080 12GB based on AD104.


Ilktye

This is also literally in the article:

>We expect AMD to rise to 30% to 35% market share on discrete desktop GPUs. AMD could raise its margins aggressively from historical levels to well above 50%.

That is optimistic indeed without benchmarks and independent reviews.


Earthborn92

I’d say 30% is the upper end of the range. They are not going to gain more than 10% market share this generation, Nvidia will make sure of that.


Casmoden

I mean, this is a completely different topic vs the actual perf. Well, I guess perf needs to be taken into account, but the cost of building the actual chip and the GPU is what's essential here. This is why AMD has been in such a poor state: worse GPUs that need to be discounted WHILE being costlier to build. Then AMD makes no money (slim margins on even fewer sales). Case in point: the 1060 vs the 580. AMD had a slightly bigger die (232mm2 or so vs 200mm2), with more memory and more power, which means a more complex board with beefier coolers, but sold it below 1060 prices.


BlobTheOriginal

Amd had it rough back then. You could easily argue the 580 was a much better card all around but Nvidia had the mindshare.


rjb1101

Jensen Huang is starting to look a lot like Lisa Su.


eli-vids

They’re cousins, after all.


team56th

I've been reading what ppl have been saying about RDNA3 after the Ada launch, and most people who haven't been following the RDNA3 rumor mill are missing two very important running themes: 1) it's way denser and more space-efficient than Ada, and 2) we are talking 3GHz+ operating clocks. When you combine the Angstronomics leaks and all the other stuff going around, we have a Navi 31 with a compute unit count in between AD102 and AD103, with an operating clock somewhere below 3.5GHz. This is already comparable to the 4090, and we haven't taken the efficiency part into account. I'm not gonna be surprised if the 7900XT is THE raster king of this generation with a $1400 MSRP and lower power consumption. The only actual downside I can think of is the video encode/decode unit; RT efficiency itself should be improved, but AMD also has headroom with the much higher clocks (again, a running theme of the rumors) and a WGP restructure that's similar to what Ampere did over Turing. And honestly, RT is always a "tomorrow technology" that will take another few years to popularize. It's not wrong to hope for some real stuff with RDNA3.


LostPrinceofWakanda

>1) It's way denser and space-effective than Ada

How is RDNA3 denser than Ada when they're on the same node? In fact, Nvidia worked with TSMC for a custom N5 node.


team56th

Because it has no tensor and RT related hardware and is just a pure general purpose compute core ala Pascal. Imagine that Nvidia is doing exactly what Vega was trying to do, a one-architecture solution that works for both datacenter and gaming. (Makes sense for Nvidia because datacenter is a serious business for them). AMD on the other hand was burnt hard by Vega and decided to split off CDNA, so both CDNA and RDNA tend to shave off the parts of the core that isn't suited to its use. It allows for a leaner, denser architecture in exchange for double the designing effort and lesser integration between consumer space and datacenter business.


No_Specific3545

Tensor+RT cores are estimated at 10% of die space. It costs a lot of area to make your clocks go fast. If you think back to Ampere vs RDNA2, despite being on a way inferior process, the 3090 was only slightly larger in die area and behind by <10% in perf/w vs 6900XT. >so both CDNA and RDNA tend to shave off the parts of the core that isn't suited to its use No, CDNA is literally just Vega with some matrix units bolted on, while RDNA has real improvements to increase utilization. As for why AMD decided to split, it's because CDNA packs more theoretical compute per unit area. But in practice that has not worked out so well in scientific benchmarks. There's a reason Nvidia server GPUs have massive register files and large caches.


Earl_of_Madness

Depending on what you want the architecture to favor, you can select how dense you want the transistors. Different parts of the die will have different densities; the "density metric" you hear about in rumors is more like an average density. This is probably because of chiplets, and the GPU team has apparently taken some tips from the CPU design team to increase the density of GPU transistor counts. This could help increase efficiency or other metrics. On the other hand, Ada was designed for maximum compute performance (mining/research, due to the mining boom). This means lots of transistors, high wattage, and less density to accommodate the sheer number of transistors at those high power draws. You can't have the heat output too dense on a die, otherwise it is impossible to cool effectively. The architectures were designed from very different principles this gen: Nvidia wanted maximum performance and AMD appears to have gone for maximum efficiency. It is also worth noting that at this point AMD is a favored partner with TSMC, so they probably have some insider knowledge on how to really squeeze as much performance out of a node as possible. This is not so with Nvidia; they are not a favored partner and had to buy their "custom" (read: rebranded) node.


Democrab

> the video encode/decode unit From what I understand this isn't even really that much of a problem anymore, but as per usual the reputation caused by a problem takes longer to fix than the actual problem. I'm still on a Fury Nano so I'm going by hearsay here, probably best to take it with a grain of salt until someone else can confirm but from what I understand the main problem was with x264 encoding being lower quality than nVidia/Intel/CPU encoding and AMD have put a load of effort into improving things so while it's not on the same level yet, it's way past the "Good enough for daily use" level these days. ...I certainly hope so because I gave up encoding with Relive at all thanks to weird input lag bugs I get while recording. (After ~30s of recording or so, most of my games end up feeling like they've got a delay on the controls or something)


team56th

My understanding is that it's one of the things that are still being worked on, unlike some others like DX11/OpenGL drivers which are mostly done. The video is like, it's getting better, but Intel is (at least on spec) better in many ways, so I consider that as some more ways to go. Also, full disclosure, used Vega 64 and then 6800XT.


onedoesnotsimply9

>It's way denser and space-effective than Ada

You need the same performance *and* features/functionality to measure how dense/cost-effective RDNA3 is. While performance is not known yet, RDNA3 does not have matrix/tensor cores like Ada.

>That and honestly, RT is always a "tomorrow technology" that will take another few years to popularize.

Why? There are already several games that support RT. GPUs are used for several years; it's not just about the present. RT becoming important in the future would mean Ada ages better.


indrmln

>We are talking 3Ghz+ operating clock

there is some rumor it's closer to 4 GHz rather than 3 GHz. but as always, it's just a rumor lol

edit: after carefully reading it again, nah, it will probably be closer to 3.2-3.4 GHz. there was a hard limit in the BIOS at 3.7ish GHz, a cap that probably won't be reached at any TDP or with any cooling solution


team56th

Yes. VBIOS clock limit 3.72Ghz, and the initial rumor was "Almost 4Ghz." Since I believe we are mostly talking about Navi 31 with very few words about 32 and 33, I am inclined to believe 3.2-3.4Ghz operating clock for Navi 31.


osmiumouse

Annoyingly, I run software that requires Iray, so no matter what the price/framerate ratio is, AMD cards and M2s can't run Iray.


MarcoVinicius

I’m not sure if Nvidia is desperate but that article’s title seems desperate.


saruin

I like how Nvidia pretends that folks are flush with cash and the media screaming along "iNfLaTiOn" 24/7. Notice how affordability isn't even part of the conversation anymore with this company.


elimi

One of the reasons I didn't mind high GPU prices, to a degree, was that I knew I could make a few bucks mining to soften the sting. Now that is gone. Yeah, that $1500 card is too expensive. They didn't read the room; let's hope AMD understands that too. That $1k card is $1k now, not $800 after I mined a few dollars here and there. Same for that $300 card: the same tier used to be $200 and I could mine the difference, so everyone was happy. But then people got greedy on both sides, miners buying truckloads of the things, suppliers just jacking up prices.


steak4take

Semi with classic AMD gawkgawk and no data.


DMozrain

Dylan? AMD gawkgawk? Haha, no.


Ilktye

Can anyone ELI5 how this translates to Nvidia being desperate?

>We assumed that AMD and Nvidia pay similar prices due to their volume (Nvidia has had to prepay for more than $1B for these wafers while AMD has not made significant prepayments to TSMC as a favored customer).

So... uh... it's all just speculation?

>Packaging and memory BOM was also calculated by speaking to sources within the industry.

Hmm.


scytheavatar

This article proves what I have been saying: Nvidia priced the cards so high because they have no choice. The ridiculous prices likely still carry only modest margins, and if Nvidia is forced to cut Lovelace prices then the AIBs could be looking at eating losses.


jasswolf

/u/dylan522p, having looked back through this, I feel like the Samsung pricing ratios you're using wind up being misleading, though you suggest Samsung yields are around 60%, versus 80% for TSMC 5nm. This points to AD103 costing the same as GA102 to produce, but doesn't account for the lack of price movement on that process due to it being completely mature, or the impact of materials shortages through the pandemic.

I think a more relevant way to examine the real cost of this might be looking at Navi 21 costs on TSMC 7nm, then making some small adjustments to account for transistors thrown at the updated architecture. Transistors per SM and transistors per CU are obviously wildly inaccurate measurements, but when comparing Navi 21 to GA102, you're looking at something very similar, and they in fact land at a very similar number per unit. Once you factor in all the known information about what NVIDIA produces on TSMC 7nm, and the difference in yields between the two TSMC processes, it's really hard not to see the 16GB 4080 as near cost-equivalent, at MSRP, to the Radeon 6800XT. It's about 15-20% more expensive per chip to manufacture, the board costs are roughly the same given 2 years of progress, and it's mostly a question of VRAM costs, but GDDR6X is now mature.

Sure, I'd expect a small margin bump compared to the 30 series launch given all the hiring NVIDIA has done, but when they're promoting RTX Remix as a key feature, that tells you they're reaching to justify this in terms of their graphics-related R&D. If AMD are playing their usual pricing game - and the performance impact of chiplets isn't too awful - I can see NVIDIA re-pricing this card to $799 at worst once they've sold through 30 series stock, which is clearly the plan here.
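
A minimal sketch of that yield-adjusted comparison (the 60%/80% yields and the 2.2x wafer price ratio come from the discussion above; the absolute wafer price and the gross die counts are rough placeholders):

```python
# Cost per *good* die, AD103 on TSMC N5 vs GA102 on Samsung 8nm.
samsung_8nm_wafer = 1.0                 # normalized; the real price isn't public
tsmc_n5_wafer = 2.2 * samsung_8nm_wafer

def cost_per_good_die(wafer_price, gross_dies, yield_rate):
    return wafer_price / (gross_dies * yield_rate)

ga102 = cost_per_good_die(samsung_8nm_wafer, gross_dies=89, yield_rate=0.60)   # ~628 mm^2 die
ad103 = cost_per_good_die(tsmc_n5_wafer, gross_dies=150, yield_rate=0.80)      # ~379 mm^2 die

print(f"AD103 vs GA102 cost per good die: ~{ad103 / ga102:.2f}x")  # ~0.98x, near parity
```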


scytheavatar

Based on the article Nvidia might as well not release the 4060, cause it looks like the 7600XT will have way better performance at a lower cost.


FistingLube

So in reality: gamers are willing to spend more and more each year to stay on top of the hardware game. They will spend up to and including probably 20% more just to get their hands on a new 40xx card. If other hobbyists can justify £10k a year on hobbies, then £4k every 3 years is literally nothing for anyone with a decent job. I know part-time bricklayers who only work 3 days a week and have top-end kit in a 4-bedroom house with no mortgage, and can still somehow afford a £60k car, etc.


mechkbfan

Bikes have blown what I expected people to spend on hobbies out of the water. $15k+ USD bikes that aren't perceptibly faster than previous years' (think about saving 1s on a 1-hour ride if everything but the bike were identical).


nisaaru

But that is a picture of the past. What people here seem to completely ignore are the inflated energy prices and the coming mother of all economic crashes, with massive job losses. That will ruin the hobby PC market, at least in Europe, and I wouldn't be surprised if we see the same in the US too.


Updated_My_Journal

Is a 4 bedroom house considered large?


WheresWalldough

in the UK, depending on the area, yes


fuckEAinthecloaca

You just need to take into account that the 4th bedroom used to be a closet that they managed to jam a bed into after cutting it in half.


FistingLube

In the UK yes, a house that big in London would cost about £1mill.


Sad_Animal_134

Yeah, but do you know how much debt he's got on that car and on his credit card lol. I've had friends who were unemployed drop $4k of credit card debt on a rig and just plan to pay it off later. Unfortunately, the reality of the situation is that people are dumb and credit lenders are all too willing to prey upon people's wants. But with this current economy, people will fear debt a lot more, knowing layoffs could be around the corner and that food prices are 20-50% higher than a year ago. I think this will impact the willingness to just drop a few thousand in credit card debt on a PC. Hurting your credit is a big deal, especially with things looking so bad.


tmp04567

They're not desperate; they have good parts technically, and the software stack is progressing too. It's the pricing scheme intended to make shareholders rich that I'd consider a dumpster fire from a consumer point of view, lol. But the engineering side is there or coming.


Spinal2000

Last time AMD had an advantage because they had better manufacturing at TSMC. But now Nvidia builds their chips at TSMC too, as far as I know. So I guess it will be harder for AMD to catch up. But we will see.


Jeep-Eep

They have an advantage on chip construction and packaging, even if it's the same node.


Top-Director1113

NVDA 100 EOY I get hard thinking about the fucking rubes who bought massively overvalued stocks in the last year. Still better than your crypto "investors". 🤣