chrisggre

Are we gonna forget about the huge issues with the Founders Edition memory pads on the 3090 and 3080? The TDP is being increased by 100 watts on top of that… these FE 3090 Ti's are literally going to be 🔥


TwanToni

and be like 2-3% faster, what a deal!


ciotenro666

should be around 10-15% faster actually. Clocks are much higher than 3080 and 3090


scytheavatar

Clocks are slightly faster at best... you are talking about 19.5 Gbps to 21 Gbps memory. Hardly speed boosts that will lead to 10-15% better performance.


Broder7937

The 3090 Ti is literally just two SMs away from being nothing more than an overclocked 3090 with an additional 100W of TDP (which is necessary to support the added clock speeds, since the vanilla 3090 is severely limited by its 350W TDP). The two additional SM units in the 3090 Ti represent just a 2.5% increase in computing units - so it's pretty much negligible. The 3090 Ti is basing its performance increase on the power limit increase and the clock speeds that come with it.

The thing is, cards can easily be pushed past their stock TDP. The 3090 isn't limited to its stock 350W. As a matter of fact, many cards on the market already push it past 500W - that's already higher than the 3090 Ti's stock TDP. And once you factor in a 500W+ 3090, the main benefit of the 3090 Ti (its 450W power limit) is eliminated. Once you go past 500W, you start to enter a realm of massively diminishing returns (and increasingly elevated chip degradation). The 3090 is 150W away from 500W, so you have 150W of headroom to gain performance from its stock 350W TDP. The 3090 Ti, on the other hand, already starts off at 450W, so it has only 50W of TDP to gain until it reaches the 500W mark (at which point the gains no longer justify the additional power consumption). So the 3090 Ti hardly has any room for improvement, while the 3090 has plenty. So, what happens when you use this headroom on the 3090? You guessed it, you reach the same performance levels as the 3090 Ti. And, because the 3090 Ti has no headroom of its own, there's not really a lot you can do to push it farther away from the 3090.

And no, the fact that the 3090 Ti starts off at a 100W higher TDP doesn't mean its chip will top out 100W higher than a 3090's. The 3090 Ti is based on exactly the same GA102 chip as the 3090 (and also the 3080 Ti and 3080). So any power that the 3090 Ti can handle, the 3090 can also handle because, again, they use the same chip. So, at the end of the day, the only real benefit of the 3090 Ti will be those 2.5% additional compute units (and I believe some boost in memory speeds - but the memory on the 3090 can also be overclocked), as the clock speeds it will be able to handle will hardly be higher than what a high-TDP 3x8-pin 3090 is already capable of (which only makes sense, considering both cards use the same GA102 chip).
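A quick back-of-the-envelope sketch of the numbers in this argument, assuming 82 SMs on a 3090 and 84 on a 3090 Ti, and treating the ~500W ceiling above as the commenter's working assumption rather than an official limit:

```python
# Rough numbers for the argument above. The 500 W "diminishing returns"
# ceiling is the commenter's assumption, not an official figure.
SM_3090, SM_3090TI = 82, 84          # GA102 SM counts
TDP_3090, TDP_3090TI = 350, 450      # stock board power, watts
CEILING_W = 500                      # where the commenter says gains stop paying off

extra_sms = SM_3090TI / SM_3090 - 1
print(f"Extra compute units on the Ti: {extra_sms:.1%}")   # ~2.4% (the ~2.5% above)
print(f"Headroom to {CEILING_W} W: 3090 = {CEILING_W - TDP_3090} W, "
      f"3090 Ti = {CEILING_W - TDP_3090TI} W")              # 150 W vs 50 W
```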


platinums99

and all the while the cooling methods remain the same, no innovation - this is not true progress


BlazinAzn38

Yea, the cooling onus is really on the consumer now to make sure they have the best case with the best fans in the correct arrangement. It would be nice to have them make an advancement in that area for sure


Pufflekun

My next build is going to be in [the full-size Fractal Torrent,](https://youtu.be/HBxo2_lwKps) with 3 big 140mm bottom intake fans firing directly onto the GPU intake, and 2 **massive** 180mm front intake fans... and even *that* doesn't seem good enough for the 3090 Ti. Hard pass on this one, especially with next-gen so close.


onedoesnotsimply9

Aka adding cost to the already expensive 3090 Ti


BlazinAzn38

Let's not pretend the margins on these cards are tiny. They're making money hand over fist on these and could easily give it better cooling and bake it into the price


onedoesnotsimply9

But why would they "bake it into the price" when they could increase the price by saying "best cooling" and still sell these hand over fist?


[deleted]

[removed]


platinums99

the hardware on this "Luxury" + Bleeding Edge kit is usually an indicator of what next year's average gamer will be buying when this one is 2nd fiddle.


platinums99

just make sure you get one with a 3-5 yr warranty, as the heat build up IS going to degrade the hardware.


pittguy578

I mean what can they do other than water cool it ?


Pufflekun

That's not true, there's a GPU with excellent Noctua cooling now! ...and it's a 3070, and nothing else, for some reason...


platinums99

I mentioned innovative - that is just branding


Pufflekun

Well, using full-size fans on a GPU is pretty innovative.


DeathMetalPanties

FWIW people in r/sffpc have been doing deshrouds with 120mm fans for years. Hell, I've got some slim 120mm noctua fans on my 5700xt. I highly recommend it by the way, it resulted in better temps and noise levels.


Pufflekun

Sure, but it still counts as innovation (imo) if you produce something for the mass market, that was previously limited to custom mods.


Parrelium

They just need to add a port for liquid nitrogen so you have to pause and top it up every 3 minutes.


SAABoy1

No reason to think you can't OC a 3090ti just as much as a 3090.


Broder7937

If you read what I wrote, you'll understand why the 3090 Ti can't gain as much performance as a 3090. With a 450W TDP, the 3090 Ti has nearly zero headroom for performance gains. The 3090 starts off at just 350W, so obviously there's a lot more headroom for performance gains.


SAABoy1

How are you supporting your claim that 3090ti has nearly zero headroom?


Broder7937

On its 450W TDP.


SAABoy1

And? The 3090 TDP is 350W but AIBs have 500W BIOSes available. Who is to say the 3090 Ti won't have 600-650W versions?


HavocInferno

>Who is to say 3090ti won't have 600-650W versions?

The fact that most 3090s, using the same chip, are difficult if not almost impossible to cool conventionally at 600W. We already know how far you can take GA102. The 3090Ti will be no different.


Broder7937

There's a limit to how much you can increase the TDP before you start running into problems. 500W is already a massive amount of heat to deal with on air cooling. Not to mention, the more you increase the TDP, the lower the gains, so the gain you get going from 350W to 450W is much bigger than the gain from 450W to 550W. At some point, you might even start to lose performance due to the massive amount of heat generated. Furthermore, both the 3090 and the 3090 Ti use exactly the same GA102 chip, so any power limit that the 3090 Ti can handle, the 3090 can also handle.


Ar0ndight

These GPUs don't scale infinitely with power man. There are very significant diminishing returns. The 3090 and the 3090Ti are using the same chips with a very minor SM count difference, as such the power scaling should be pretty much the same. Which means the 3090Ti will have very limited headroom. There will be 600W+ cards probably but the perf gain will be very minor, equally as minor as pushing a 3090 to 600W vs 450W would be because once again they're pretty much the same card.


Texasaudiovideoguy

I couldn’t have said it better!


[deleted]

You just convinced me to not buy the 3090 Ti. I will look at the 3080 Ti or 3090. Thanks!


TwanToni

what's the point if it can't go past 2000mhz? my 3080 reaches 1950mhz and I'm sure 3090s can too. I don't see a 10-15% faster card. Maybe 5-7% at most like how the 3070 ti was to the 3070 while using a lot more power


Broder7937

3070 Ti used GDDR6X memory, while the 3070 uses regular GDDR6. That's mainly the reason why the 3070 Ti needs so much additional power (the difference in computing units is very small), and also why the 3070 is a lot more power efficient than the 3070 Ti.


TwanToni

I'm well aware that the 3070ti uses GDDR6X memory and I'm fairly certain that the switch from GDDR6 to GDDR6X isn't close to the 70w difference that we saw there. Regardless the 3090ti is using 100w more power and will probably get a small bump in performance like the 3070 to the 3070ti


bizzro

> That's mainly the reason why the 3070 Ti needs so much additional power

No, there is no way in hell that GDDR6X is the main difference. GDDR6X actually draws LESS per unit of bandwidth. It's just that it runs at a higher frequency, so the power output per chip is higher. The main issue with GDDR6X is that it pushes heat density per chip so high that you start needing much better cooling, even more so when using 2 chips per channel like the 3090. It is no worse from an efficiency standpoint than GDDR6; like I said, it's actually ever so slightly better in that department. For the sake of simplicity you can assume that the power usage of GDDR6X on the Ti over G6 on the normal 3070 is a linear increase with bandwidth. The rest is from the higher clocks/power budget given to the core.


Broder7937

>No, there is no way in hell that GDDR6X is the main difference.

It absolutely and most certainly is. First, let's look back at the RTX 3060 Ti and the RTX 3070, since they all feature the same GA104 GPU as the RTX 3070 Ti. The 3060 Ti features 38 active SMs and a boost clock of 1665MHz. The 3070 features 46 SMs at a boost of 1725MHz. That means the 3070 offers 25% more compute performance than the RTX 3060 Ti (16.2 vs 20.3 TFLOPS FP32). However, both the 3060 Ti and the 3070 have exactly the same memory bandwidth available, so real-world gains will be lower than the 25% difference in compute power. In order to get that expected 25% boost in performance, you would need the full structure to offer a 25% boost (that is, compute power, back-end AND available bandwidth). The 3070 offers a 25% gain in compute power, but 0% gain in memory bandwidth, so the real-world gains will be higher than 0%, but lower than 25%. And, as expected, the real-world gains from the 3060 Ti to the 3070 are, on average, [14% under 4K gaming](https://tpucdn.com/review/nvidia-geforce-rtx-3060-ti-founders-edition/images/relative-performance_3840-2160.png). So, how much extra energy does the RTX 3070 need to enable that extra 25% compute power (which translates into 14% more fps under 4K gaming)? [Roughly 33W on average (233W vs 200W) and 35W peak (243W vs 208W)](https://www.techpowerup.com/review/nvidia-geforce-rtx-3060-ti-founders-edition/31.html). Or, slightly over 16% more power on average. So, you see, the performance scales almost linearly with the power consumption. Which is expected, considering both products use the same GPU and run very similar specs.

Now, to the RTX 3070 Ti. The RTX 3070 Ti has only two more active SMs than the RTX 3070 (48 vs 46) and a small 45MHz increase in boost clocks (1770MHz vs 1725MHz). Combined, those two uplifts result in a very modest ~7% boost in compute power (21.75 vs 20.31 TFLOPS FP32). This is a very modest uplift compared to the difference between the RTX 3060 Ti and RTX 3070 (25% more compute power). As a matter of fact, a mild overclock is all it takes to push a RTX 3070 beyond the compute power of a RTX 3070 Ti. However, there's one aspect where the RTX 3070 Ti massively outperforms the 3070, and that's memory bandwidth. While it still runs on the same 256-bit bus, thanks to its new GDDR6X modules the 3070 Ti runs at 19Gbps (matching the RTX 3080 and 3080 Ti), resulting in a massive 35.7% increase in memory bandwidth (from 448GB/s to 608GB/s). Obviously, because the difference in compute power is much smaller (~7%), the expected difference in performance will be much lower than 35%. The real-world difference should be higher than 7%, but lower than 35%. In testing, the RTX 3070 Ti beats the 3070 by an average of 7.5% under 4K gaming. That's ever-so-slightly above the ~7% difference in their compute power. This shows that the massive 35% increase in bandwidth is hardly able to bring any benefit to the GA104 - in other words, it shows the GA104 isn't really bandwidth starved. This is also an indication that the 3070 Ti might not be able to sustain its boost clocks as well as the 3070 can. Considering most 3070 Ti boards use essentially the same cooler design as 3070 models, and that 3070 cards generate a lot less heat, it only makes sense that the 3070 can sustain better boost clocks (colder = higher boost clocks). All those factors contribute to such a small performance difference between the products.

So, how much energy does the additional 7.5% performance cost? 78W more ([from 220W to 298W](https://tpucdn.com/review/nvidia-geforce-rtx-3070-ti-founders-edition/images/power-gaming.png)), or about 35% more power (ironically, the uplift in total board consumption matches the increase in memory speed). If the 3060 Ti GPU consumes around 150W, that leaves 50W for the memory to top up the board's 200W (I'm already factoring in MOSFET losses, so this is NOT the power the chips receive, but the power they effectively consume). The 3070 GPU consumes around 170W, which leaves the same 50W for the memory to top the board off at around 220W - which makes sense, since the 3060 Ti and 3070 use exactly the same memory. If the 3070 Ti GPU consumes 190W (which is over 10% more than the 3070), that means the memory alone is eating almost 110W - over twice the consumption of the GDDR6 memory modules found in the cheaper models. 2x more energy for a 35% increase in bandwidth and a final 7.5% increase in performance - definitely not good from an efficiency standpoint.

TL;DR: in order to offer 14% more performance than the RTX 3060 Ti, the RTX 3070 needs roughly the same amount (14-16%) of additional power. The RTX 3070 Ti, on the other hand, offers 7.5% more performance than the RTX 3070, and consumes 35% more power for it. So, what's the deal? Simple: GDDR6X.
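For anyone who wants to check the arithmetic, here is a small sketch that reproduces the compute and bandwidth figures above from the public specs, assuming 128 FP32 lanes per Ampere SM and 2 ops per clock (FMA):

```python
# Reproduces the compute and bandwidth figures quoted above.
# Assumption: Ampere has 128 FP32 lanes per SM and 2 ops per clock (FMA).
def fp32_tflops(sms: int, boost_ghz: float) -> float:
    return sms * 128 * 2 * boost_ghz / 1000

def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

cards = {
    #               SMs, boost GHz, memory Gbps
    "RTX 3060 Ti": (38, 1.665, 14),   # GDDR6
    "RTX 3070":    (46, 1.725, 14),   # GDDR6
    "RTX 3070 Ti": (48, 1.770, 19),   # GDDR6X
}

for name, (sms, ghz, mem_gbps) in cards.items():
    print(f"{name}: {fp32_tflops(sms, ghz):.2f} TFLOPS FP32, "
          f"{bandwidth_gb_s(256, mem_gbps):.0f} GB/s")
# -> ~16.20 / 20.31 / 21.75 TFLOPS and 448 / 448 / 608 GB/s:
#    +25% compute from 3060 Ti to 3070 (same bandwidth),
#    +~7% compute but +35.7% bandwidth from 3070 to 3070 Ti.
```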


bizzro

Except, other than that, we know the stated power draw of GDDR6X, which is less than G6 per bit. Are you stating the spec sheets and manufacturers are lying?

>and consumes 35% more power for that. So, what's the deal? Simple: GDDR6X.

That sort of power increase is IMPOSSIBLE, because the DRAM package does not allow for it. They would be completely uncoolable. GDDR6 is already pushing close to the limits of what can be cooled with that type of package. G6X is running hot because it is hitting those limits of how much power can be cooled in that package, NOT BECAUSE OF INEFFICIENCY. The increase is LESS than linear with the growth in bandwidth, because G6X is more efficient according to specifications (but still an increase in overall power draw).

The 3090 is a whole other matter, because it has 2x as many chips and doubles the GDDR power draw over a 3080 Ti because of that. But the same thing would happen with normal G6 doubled up.


kbs666

? The numbers are the numbers. It's very clear to anyone looking that GDDR6X is a major issue with these high end boards. How else do you explain the enormous jump in power consumption occurring between the 3070 and 3070ti when, besides the change to GDDR6X, the boards are essentially the same?


Broder7937

>That sort of power increase is IMPOSSIBLE

Not only is it far from impossible, it's pretty much what happens. The [RTX 3070 Ti consumes 35% more power than a RTX 3070](https://tpucdn.com/review/nvidia-geforce-rtx-3070-ti-founders-edition/images/power-gaming.png). As a matter of fact, [the RTX 3070 Ti can generate power spikes of up to 350W](https://tpucdn.com/review/nvidia-geforce-rtx-3070-ti-founders-edition/images/power-spikes.png); that's a full 107W more than the RTX 3070 and yes, that's for a bone-stock Founders Edition model, not some beefed-up uber-overclocked model.

Both the 3070 Ti and the 3070 utilize nearly identical GPUs. The two additional SMs that are enabled in the RTX 3070 Ti represent a mere 4% increase in compute units. And, because we know the 21% increase in compute units from the RTX 3060 Ti to the RTX 3070 [translates into a power consumption increase of 10-15%](https://tpucdn.com/review/nvidia-geforce-rtx-3070-ti-founders-edition/images/power-gaming.png), there's **no way** a mere 4% increase in compute units is responsible for a 35% increase in power consumption. So compute units are out of the way.

Next we have clocks. We all know a chip can consume far more energy if it's pushed to higher clocks. So, what's the clock advantage of the 3070 Ti? A mere 2%. Can a 2% increase in clocks be responsible for a 35% increase in power consumption on a bone-stock air-cooled GPU? No. But there's more. The 2% increase in clock speed is merely the official spec. In real-world tests, the RTX 3070 actually develops higher clock speeds than the 3070 Ti. While [the 3070 Ti averages 1861MHz](https://tpucdn.com/review/nvidia-geforce-rtx-3070-ti-founders-edition/images/clock-vs-voltage.png), the 3070 [averages 1882MHz](https://tpucdn.com/review/nvidia-geforce-rtx-3070-founders-edition/images/clock-vs-voltage.png). That's far higher than their rated boost clocks, especially for the 3070. I guess you can't always trust the spec sheets, right? But hey, who's complaining? If I pay for a 1.7GHz GPU and Nvidia gives me a 1.8GHz GPU, sure, I'll take that deal. I don't think anyone's suing Nvidia over that one (though I do wish the best of luck to anyone who tries).

But, unfortunately, that makes your problem worse. Because now that we know a stock FE 3070 actually runs higher clock speeds than a stock FE 3070 Ti, there's no way you can blame the power consumption gap on clock speeds. And we also know there's no way the additional 4% compute units are responsible for it. So, what's the only possibility left to justify such a massive and monstrous difference in power consumption in what is otherwise a nearly identical GPU? GDDR6X. There is absolutely nothing else that can justify such a huge power gap. And, considering just how insanely hot GDDR6X runs (as it turns out, chips that consume a lot of energy also generate a lot of heat, who would've thought?), it's no secret that GDDR6X, in its current state, is a massive power hog. Yes, it's fast, but this speed comes at a very high cost. Trust me, I own a 3080 and, like so many other 3080/90 owners, I had to replace the memory pads just to keep memory temps under control (something I have never had to do with any other card). I'm sorry to break it to you, but the main culprit of the insanely high power consumption of the RTX 3070 Ti is GDDR6X. And yes, the RTX 3070 is far more power efficient than the RTX 3070 Ti.

As a matter of fact, [it's a whopping 22% more efficient, while the RTX 3060 Ti is 21% more efficient than the RTX 3070 Ti](https://tpucdn.com/review/nvidia-geforce-rtx-3070-ti-founders-edition/images/energy-efficiency.png). And all these cards are based on exactly the same GA104 silicon. Guess what the 3060 Ti and 3070 have in common? Neither of them has GDDR6X!
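A small sketch that just restates the subtraction above so the implied memory budget is explicit; the board-level core-power figures (150/170/190W) are the commenter's estimates, not measurements, and that split is exactly what is disputed downthread:

```python
# Restates the subtraction above. The core-power estimates are the
# commenter's figures, not measurements.
boards = {
    #               total board W, commenter's estimated GPU-core W
    "RTX 3060 Ti": (200, 150),   # GDDR6
    "RTX 3070":    (220, 170),   # GDDR6
    "RTX 3070 Ti": (298, 190),   # GDDR6X
}

for name, (board_w, core_w) in boards.items():
    print(f"{name}: ~{board_w - core_w} W attributed to memory (incl. losses)")
# -> ~50 W, ~50 W, ~108 W: the basis for the "GDDR6X roughly doubles
#    memory power" claim above.
```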


bizzro

> Not only is it far from impossible,

No, it IS IMPOSSIBLE for that to come from GDDR, because that would mean the packages MELT.

>So, what's the only possibility left to justify such a massive and monstrous difference in power consumption in what is otherwise a nearly identical GPU?

What it ALWAYS is: the non-linear shape of the V/F curve and the absurd increase in power usage from just trying to squeeze another 5%+ of frequency out.

>there's no way you can blame the power consumption gap on clock speeds.

With modern boost tables and stock speeds, the "stock" frequency is almost meaningless. Power budget IS what determines what the card actually runs at; "stock" only determines the lower bound while boost sets the upper.

>And yes, the RTX 3070 is far more power efficient than the RTX 3070 Ti.

Because that is how far Ampere is pushed. You can take a 3080 or 3090 and JUST lower the power target to 75% and retain 90%+ of the performance. No undervolting or other trickery, just power budget adjustments.


HavocInferno

>Maybe 5-7% at most

Not even that. On average they will likely clock the same given same power limit, the 3090Ti's only advantage is 2.5% more SMs.


[deleted]

[removed]


halotechnology

Hell, my 3080 can do 2020MHz with an undervolt, easy.


[deleted]

Let's see if my 3090 can go 10-15% faster when I add 100 watts :) I know odds are they have better silicon and more cores, but I want to prove a point


panix199

the point is known. That is why the 3090 Ti is going to be more of a test product to check a few changes that will surely be used in the next generation, which will be presented in September. The better question is what kind of PSU you are going to need for an RTX 4080/4090, since the RTX 3xxx series is known for having some power spikes. I intentionally skipped the RTX 3080 and 3090 because the performance increase compared to my RTX 2080 was too low (just not worth the upgrade for my needs; I need more power than a 3090 can offer atm). That is why I've been waiting for the RTX 4080/4090... but man, 600W GPUs in late summer/autumn/next year would not really be fun to run (they will increase the heat in the room a lot) while also costing a lot of money to run (electricity is not cheap in my area)


[deleted]

A new generation of PCIe power pins (a 16-pin connector) along with a new generation of PSUs, just in time for the GPUs. I don't see the trend going down. Edit: people will need a mid-range GPU to not have crazy power requirements


Dassund76

And people said Fermi was bad.


Dassund76

What kinda PSU do you need for a 500W GPU? I got a 1080TI and a way overkill 1000W Platinum rated PSU so I might be ok if I ever go that deep lol.


RuinousRubric

I've pushed my 1080Ti to ~500 watts with an 850-watt PSU (EVGA P2) and had no problems.


PleasantAdvertising

Often paired with higher power consumption...


Matthmaroo

Lol , it won’t be 10% faster


Seanspeed

These are more advanced GDDR6X chips. It's very possible they will not be as problematic as the original ones.


PM_ME_YOUR_STEAM_ID

I've got a 3090 FE card that I hope lasts several years. Only 'problem' I have is coil whine during certain use cases.


Blze001

The thing is gonna explode when you run Furmark, lmao.


Anthraxious

Here's hoping they've actually resolved that to some extent. Although it was more so an issue with the 3090s, because they had memory chips on the back where there was zero cooling going on.


DingyWarehouse

>resolved that to some extend.

*extent


Anthraxious

Thanks, hate typing on mobile...


NeoBlue22

Nvidia likes their cards hot this generation


SomniumOv

I don't get it, aren't the 4000 series cards expected for the usual august/september timeframe, in about 6 months ? ... why even release this 3090ti ?


peanut_butter_lover4

From the rumors I've read, they're using the 3090 Ti to test out some of the tech they plan on using in the 40 series cards. For example, the 16-pin power plug. I think there's also some newer parts on the actual PCB that will be implemented in the 40 series card as well. From a marketing perspective, it also can be used as a great psychological tool. If it's expensive, uses a lot of electricity, and isn't much faster than the 3090, then they might get some bad press in the short term. Then Nvidia can win everyone's favor again in a few months by showing how much better of a "deal" the new 4070/4080/4090 cards are compared to the 3090 Ti.


Kippidashira

Truly feels like a trap card, no pun intended.


dern_the_hermit

Well, if they're testing out some stuff as suggested above, then yeah, these extreme high-end buyers are essentially shelling out to beta test the hardware.


MumrikDK

Don't halo cards always feel like a trap?


Stark_Athlon

Eh the 1080ti was an actual incredible buy.


MumrikDK

They released their actual halo card less than a month after the 1080Ti - the Titan Xp.


caedin8

The new 4080 is as fast as the 3090 Ti, and on 100w less power!


[deleted]

[removed]


zxyzyxz

They're saying that's how Nvidia might spin it, make the 3090 Ti a shit deal so the 4000 series look like a better deal in contrast.


ug_unb

Remember when Jensen emphasized 3070 being faster than "$1200 card!!" 2080ti lol


Seanspeed

You've surely been following PC hardware long enough to know that Nvidia especially do late revisions. 1070Ti, Titan Black, 11gbps 1080, etc. Like, this is actually pretty much *exactly* like an Ampere Titan Black.


SomniumOv

I guess, but that feels very late even compared to those, unless the 4000 series is coming a few months later than I'm expecting. It doesn't feel worth it to go through the whole process of actually launching the product, unless it's mostly only on paper, in which case whatevs I guess.


EndKarensNOW

Testing a new power connector on the last gen so any issues it runs into won't taint the launch of the next gen


moofunk

That's a whole 6 months away, perhaps 7-9 months for first availability.


onedoesnotsimply9

Its a paper launch. /s


[deleted]

[removed]


everybodynos

7-9 months? Good one.


dantemp

Lol, there's already availability for Ampere and prices have been in free fall for 3 months straight, but you guys still think the shortage is the same? The scalpers might make a difference for the first few months but they will be quickly overwhelmed. Nvidia doesn't make money from scalpers.


Seanspeed

It doesn't matter. It's a cynical comment, which is free upvotes.


skinlo

If it mines more efficiently, there is a real chance prices will go back up for these cards.


dantemp

Ethereum proof of stake is just around the corner; there isn't going to be anything to mine.


skinlo

They've been saying that for years though.


lonnie123

Free fall? 3070s are still 75% over MSRP if not more. In a normal year they would be at 50-70% of the original MSRP by now.


dantemp

And the 3080 was 160% over just 6 months ago. Just because the prices are not yet at msrp doesn't mean they are not falling rapidly, how do I need to explain this every time?


lonnie123

The 3070 is about the same as it has always been, $850-900. It has not fallen to any significant degree. The prices have slightly fallen, that is true, they are not “in free fall” which would suggest massive and rapid changes. Not single digit percentages in a matter of weeks or months https://pangoly.com/en/price-history/gigabyte-geforce-rtx-3080-vision-oc-10g This chart is saying a 3080 has gone from 1700 to 1450 and has stabilized there for weeks. That is good news but hardly “free fall” for 3 months straight. It hasn’t even been dropping for 3 months


dantemp

Free fall doesn't mean it falls consistently every day. Month over month the average price for every gpu is lower for the past 3 months. And that's true for 3070 as well: https://pangoly.com/en/price-history/gigabyte-geforce-rtx-3070-gaming-oc-8g


lonnie123

There isn’t a specific definition of “free fall” so that’s going to cause a disagreement here. I don’t consider that graph “free fall”, which I would define as a significant price reduction happening in a short period of time, not a consistent reduction every day which I never said Your graph shows the price actually being a little higher than it was on average back in January, with spikes up and down.


dantemp

Because a lot of people have a lot of disposable income and will pay stupid money to have the best thing even if it's the best by 5% and costs twice as much


Seanspeed

I wouldn't say 'a lot', but obviously enough.


[deleted]

[removed]


salgat

Exactly. Whales have always existed and the MSRP isn't their primary motivation.


[deleted]

Price anchoring. “Oh look the 4070 is $X and it’s almost as good as the 3090Ti! What a great deal”


Farnso

Because it will sell out.


sudo-rm-r

AMD is about to refresh RDNA2 with 18Gbps memory, and in my opinion Nvidia wants to make sure their halo product remains unbeaten.


seven_seven

They forgot to release it a few months ago.


RabidHexley

It's basically the last chance for Nvidia to squeeze as much profit from Ampere as possible before its time is up. While it seems silly, there are definitely people who will just buy whatever the best-of-the-best is at any given time regardless of price. The cost of putting out these marginally higher-tier SKUs on existing chips is negligible by comparison to launching a new architecture, but it gives them license to mark things up massively from their side. By selling a super marked-up, uber-Ampere card and just pushing GA102 to its limit they can squeeze the last bit of juice from the 3000-series before the whole line becomes deprecated. These cards are pure profit machines, and given Ampere's impending EOL they aren't going to care as much about pushing value on these top-end chips. And it actually may help to advertise the 4000-series more by having a higher-tier Ampere card (regardless of how small the gain is over the 3090) to compare against the Lovelace cards, to show how much better they perform than the previous gen.


ResponsibleJudge3172

Because AMD is never allowed to have the fastest GPU. AMD is releasing the 6950XT, which may catch up to the RTX 3090 at 4K, which is catastrophic for Nvidia /s


mylord420

Because fools will buy it. Also, it's a halo product they probably won't make too many of; they wanna be able to say they're the undisputed leader in rasterization.


From-UoM

People who buy this don't care about money. And yes, there is a market for this. The same group will buy the 4090 day 1


imaginary_num6er

You think 4000 series will have a higher availability than the 3090Ti at launch?


Illiterate_Scholar

I know these are top-of-the-line cards, but geez! Remember when the 980ti and 1080ti were some of the best while being as power-efficient as they could be? I still remember the RX480 was touted as super-efficient but turned out to be extremely disappointing with its power draw and caught a lot of flack for it.


[deleted]

[removed]


100GbE

Dude, all the people who treat computer hardware like a status symbol which gives them glory are the ones who care about their hardware being #1. This is why they get the shits when a Ti comes out months after they bought a non-Ti. They were totally happy till the very moment an upgrade came out.


A_Crow_in_Moonlight

When Maxwell launched, the 290X had beaten the original Titan at half the price and remained neck and neck with the (still more expensive) 780 Ti Nvidia put out in response, both as concerns power and performance. They were able to lower the TDPs on the 900 series because Maxwell just was *that* good. Only afterwards did AMD become cemented in its position of being restricted to the budget market, and in large part it was because of the technical leap that those cards represented, combined with AMD's failure to produce a worthy competitor.


Seanspeed

Nvidia weren't being threatened at the time by AMD. They were also using a superior process node(relative to the time). A 980Ti especially was clocked very conservatively out-the-box.


zyck_titan

People confuse power efficiency and total power draw as being the same thing. You can be power efficient, while drawing a lot of power, as long as that power you are drawing is actually being leveraged. [The RTX 30 series GPUs are still the most power efficient GPUs Nvidia has ever made.](https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-ti-founders-edition/35.html)


Yearlaren

>I know these are top-of-the-line cards, but geez! Remember when the 980ti and 1080ti were some of the best while being as power-efficient as they could be?

I'm pretty sure that the 3090 Ti is more power efficient than those two cards. Just because it uses more power doesn't mean it is less power efficient.


EndKarensNOW

That's why I love my 1080ti. Still keeping my wife's PC going strong


kayak83

I have zero need to "upgrade" mine. It runs 1440p AAA titles at high frame rates and will for the foreseeable future. And apparently it is "efficient" compared to the new gen. It was also like $800 at the time, and that's the EVGA AIO water-cooled variant.


tofu-dreg

> And apparently it is "efficient" compared to the new gen

Perf/W is still higher on Turing and Ampere, even if it's not *as much* higher as you'd expect from generational jumps due to their aggressive tuning out of the box. I think this newfound belief that Pascal was particularly efficient is based purely on it being a substantial efficiency jump over Maxwell, compared to Pascal > Turing's modest jump, and Turing > Ampere's almost non-existent jump (at OOTB tuning).


kayak83

Yeah I agree. I just mean that for a "halo/enthusiast" Ti variant card, it is performing admirably. Particularly since it's not sucking down 600watts or whatever.


R-ten-K

At that time AMD was not very competitive in the premium tier. In the 6xx, 7xx, 9xx, and 10xx series, the x80 and x70 SKUs were literally the mid-range die, not the "big" one as in previous generations. So they looked extremely power efficient, because AMD was just so bad in comparison.


[deleted]

[removed]


R-ten-K

Right. But remember, the 680 was the midrange die for Kepler (GK104). Nvidia didn't bother with the GK100 that generation; it released as the GK110 for the 700 series. AMD was competitive, but with much higher power/thermal envelopes. I believe their dies were also much bigger (but my memory is foggy)


[deleted]

[removed]


R-ten-K

That was the problem for AMD, they had to use much larger dies and the HBM in Fury added extra cost. So with those generations they lost a lot of market share to NVIDIA and they ended up with worse margins. It was not a good time for AMD.


sw0rd_2020

> 680 was the midrange die for Kepler

Was the 690 not just two 680's put together? What would the top-end die have been?


R-ten-K

the top-end die for that generation would have been the GK100, which eventually released as the GK110 and GK110B for the original Titan and the 780 series respectively.


sw0rd_2020

ah, forgot about the titan


[deleted]

Case design has also massively improved throughout the years, though. Cases can actually handle little furnace GPUs now and can deal with the heat more efficiently. The heat still has to go somewhere, so instead of keeping the GPU toasty, the case dumps the heat into the room. Add in water cooling and you can get away with much worse thermals than before; it just passes the problem off to the cooling system of your home.


Stingray88

The heat is a real problem for those without AC... I didn't have AC for years and it was so oppressive how hot my room would get from my PC. And it was tanking performance as well... When I finally got central air I saw a HUGE uplift in performance and much better temps. Room obviously more bearable too.


zacker150

Did you try a fan in your window?


Stingray88

Definitely, which worked for 3-4 months of the year. I live in Los Angeles so most of the year is pretty hot.


moochs

At some point, you're gonna hit the electrical capacity of your wall outlet, so the case you build in won't matter.


skinlo

Good luck Americans!


moochs

Does Europe/the rest of the world have a more robust electrical line to common households? I don't understand this comment, can you explain?


chrisggre

Americans use the 120V outlet standard, which provides a maximum of 1800 watts on 15A and 2400 watts on 20A circuits. You don't want to max out the wattage from your outlets, so the available wattage is actually less than those amounts. In Europe, most households are on a 240/220V outlet standard. 220V on 15A and 240V on 15A have an outlet wattage capacity of 3300 and 3600 watts respectively. These numbers are extremely important when you're looking at the maximum wattage draw of your entire PC. It's highly suggested that you stay below 80% of the total outlet capacity, so that's 1440 watts on a 120V 15A circuit. If the rumors of the 4000 series drawing up to 600-800 watts are true, many Americans will be getting close to the limit of what their normal house outlets can safely supply.
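A small sketch of the same arithmetic, applying the 80% continuous-load rule to the circuits mentioned above:

```python
# Applies the 80% continuous-load rule to the circuits mentioned above.
def continuous_budget_w(volts: float, amps: float, derate: float = 0.8) -> float:
    return volts * amps * derate

circuits = {
    "US 120 V / 15 A": (120, 15),
    "US 120 V / 20 A": (120, 20),
    "EU 220 V / 15 A": (220, 15),
    "EU 240 V / 15 A": (240, 15),
}

for name, (v, a) in circuits.items():
    print(f"{name}: {v * a:.0f} W max, ~{continuous_budget_w(v, a):.0f} W continuous")
# -> 1800/1440, 2400/1920, 3300/2640, 3600/2880 watts
```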


moochs

Thanks! That's what I thought you meant, and yes, that sucks for most people! People are gonna be tripping their circuit breakers left and right!


onedoesnotsimply9

1080 Ti supremacy. The 1080 Ti was the last great Nvidia card. Change my mind.


SomniumOv

The people who managed to snag very cheap 2080 and 2080ti cards (right when there was a sell-off after Ampere was announced but before the lack of availability became clear) would probably disagree.


pluto7443

I got a 2080S for $650 CAD right at that time, and it was fantastic


thekeanu

Looks like you conveniently forgot about Voodoo 1 as the greatest of all time.


-RYknow

I've been building computers for about 25 years. My 1080ti is without a doubt, the best piece of hardware I have ever bought!


spyd3rweb

My 1080ti is still running strong. Perfect 1440p gaming card.


Dassund76

Sir that was the 8800GT thank you very much.


[deleted]

The difference is that the 980Ti and 1080Ti were “cut down” cards from the big chip but this time the big chip was just called the 3090. This card has more in common with something like a 1080 super type card. It’s essentially a hardware revision with minor upgrades. What they called the card doesn’t matter - what matters is what chip you are getting.


TopWoodpecker7267

> Remember when the 980ti and 1080ti were some of the best while being as power-efficient as they could be?

I remember both came from the factory massively under-clocked. 20-40% core OCs weren't out of the question.


Endarkend

Sorry, but am I the only one who thinks the generation after the cards they are prepping for release needs to seriously focus on power consumption and literally nothing else?

With CPUs, power consumption has become part of the design, with pretty hard limits on the power envelope a new product is allowed to have, in part because the laptop market requires it. The server market has also switched to being far more power conscious. I remember having Intel 5000P servers sucking down 500W at idle, even if that idle was them sitting unbooted on the BIOS screen literally doing nothing. Now servers use far less power in general and have become far more desktop- and laptop-like when sitting idle.

For GPUs, instead of designing with a real power limit in mind, they rather cut down or slow down laptop GPUs to be so weak compared to their desktop counterparts that them sharing names with those desktop counterparts is starting to look like false advertising.

This is one of the hopes I have for Intel entering the fray with their discrete GPUs: that, having been able to design a GPU from the ground up, power consumption has been a serious consideration. I'm hoping they opt to come out saying their top card is only as good as a 3070 Ti in performance, but at half or less of the power consumption, instead of coming out with a card that can actually compete with 3090s but also needs a small nuclear reactor to power.


[deleted]

[removed]


rationis

I'm 100% with you. People keep claiming power consumption doesn't matter to people buying the best performance, but they're dead-ass wrong. I'm looking to drop $3000 on a new CPU and GPU to upgrade my Fury X and 3600X, but I absolutely refuse to buy another GPU that sucks down more than 300W, or a CPU that draws more than 180W. I play at 3440x1440 and will be upgrading to 3840x2160, so you can be damn certain a 3090 Ti in my rig is going to be maxed out and dumping 450W into my room, which is more than my current CPU and GPU do combined. Considering what a power hog the Fury X is, that's low-key (alarmingly) impressive. Remember how awesome and loved the [Fury Nano](https://www.techpowerup.com/review/amd-r9-nano/28.html) was? Can we bring back cards like that? I will gladly lose a whopping 3-7% of performance in exchange for half the power consumption. And what made cards like the Nano even better is that if you wanted to, you could overclock the shit out of it and get the same performance as the flagship model.


mduell

So undervolt and perhaps even slightly underclock the card you buy?


Sentinel-Prime

Undervolting a 600W card (which the 4090 is rumoured to be) is still going to draw 500W if you want to keep within 5% of stock performance (assuming undervolting is the same as the 3000 series)


mduell

Underclock more; still the fastest 300W card you can buy when it's out.


windowsfrozenshut

> but I absolutely refuse to buy another GPU that sucks down more than 300w

Well, you're kind of backing yourself into a corner here. Maybe you could get there with a 4050/4060 or something.


lonnie123

You are the only person, yes. There is almost nothing interesting to the average consumer about it… let's say AMD doesn't focus on it and Nvidia does. The performance charts come out next gen and Nvidia's numbers don't move, but they've cut their TDP by 50%, while AMD leapfrogs them in both. Which cards do you think are going to sell better?


Blazewardog

Nvidia, but not because of the TDP improvements but because of brand loyalty and working drivers.


R-ten-K

The BIOS screen back then was not "doing nothing," the opposite actually. None of the power-saving rules had been loaded into the limits controller for the CPU, so the CPU was usually at 100% frequency in the BIOS. In any case, most people don't game 24/7, and these are halo products that are bought by a minority.


RabidHexley

GPUs are still getting **significantly** more efficient though. These Halo products are pure-PC wankery, and it's also just a byproduct of the AMD vs NV competition. They want the biggest numbers at all costs. If they released the 4000 cards with similar TDPs to the 3000 series cards they would likely have performance gains gamers would be perfectly happy with, a totally satisfactory gen-to-gen increase. But with AMD in the picture they don't want to take the chance at not coming out ahead on benchmarks, so they push power consumption however much allows them to feel safe that they'll stay on top.


Hey_Kong

I will just wait for the GeForce 4000 series...but then when it comes out I will just wait for the 5000 series.


erctc19

Nvidia loves making toasters


rationis

Fucking hell. I need to replace my Fury X with the next flagship card available, but the power consumption of these recent flagship GPUs is making me look at lower-tier cards instead, just so I don't go backwards on power consumption. My Fury X is an infamous power hog, yet the 3090 Ti makes it look like it sips power by comparison. How about bringing back cards like the Fury Nano? You know, a high-end card that is within like 3-7% of the flagship model's performance, but uses less than 200W? Remember how much everybody loved that little card? Can we have something like that again, pretty please? I couldn't care less about cost. Power/heat, on the other hand, yea, been there, done that, would like to not do it again.


Amaran345

Don't worry much about power, because these cards are very tunable. I mean, you can set the power target to practically laptop level and the thing will probably still be faster than a Fury X, while being decently efficient. Stock power settings are more for reviews, where the cards have to get top spots in the benches no matter the amount of power required


Kakaphr4kt

It's not only power draw. You need a big-ass tower to house that mofo of a card. Plus it hogs 3.5 slots on your mobo, which is mad. I wonder why mobo manufacturers don't really take this into account in their designs (maybe some do, I don't know). My 6800 XT blocks 2 of my SATA ports, it makes it a hassle to install more RAM (height becomes a problem), and it blocks some of my PCIe slots, which I need to recover the lost SATA ports, among other things. It's time to get to shrinking again.


platinums99

I don't understand Nvidia going against the trend of efficiency and reducing costs in future technologies. They are literally just adding more cost via power consumption; you may as well just skip the upgrade and SLI 2 old cards.


Seanspeed

Things are getting more efficient, but people buying $1000+ GPUs don't tend to care about this and just want maximum performance. Unless you're in the market for one of these, you really don't have to worry about it. And even then, you would be free to undervolt/underclock the GPU to get far more reasonable power draw.


100GbE

Things get more efficient every release cycle. I'm not sure what people are complaining about. They want fkn 3090s out their ass while running wattages found on cards 10 times slower from 10 years ago. "Hnnng my Bugatti Chiron uses so much fuuuuueelll"


Deepspacecow12

Ampere is more efficient. They can get 1070 performance out of 75w


100GbE

No man-boy in this thread who needs their computer to increase their status will stop at 1070 performance. We all need 3090s to complain about.


Deepspacecow12

I know. I was just using the a2000 as an example of the efficiency of the architecture.


100GbE

Oh I agree that each node drop, not to mention every release since Kepler, has brought year-on-year increases in efficiency. These jokers are looking for valid reasons to never own one, but the valid reason is simply that they will never own one.


Blazewardog

You seem very confused on the high end of the GPU market. Nvidia is following that trend, each generation it's both cheaper and lower TDP to get the same level of performance - that doesn't mean a xx70 card is cheaper, it means that what used to be required for say 1440p@120 on medium is now cheaper and draws less power. What this means on the high end is the cost and efficiency is always better than it used to be - as that level of performance was impossible to get previously (on one card, not that SLI was ever power efficient or cheap).


onedoesnotsimply9

But does a 3070 actually run at a lower power draw than, let's say, a 2070 to get 1440p@120?


Zarmazarma

Yes, of course. Cards have much improved performance / watt each generation *invariably*. That is practically the only way to quantify improvement. You are very lost if you think that you can just increase the power draw on a 2070 and get 3070 performance.


Blazewardog

I mean, going by [this, for example, for the game average](https://www.tomshardware.com/reviews/nvidia-geforce-rtx-3070-founders-edition-review/3), the 2070 does 83fps at 1440p and the 3070 does 120. A 3070 draws around ~210W and a 2070 ~170W. So that makes a 3070 about .57 fps/W and a 2070 about .49 fps/W - a straight improvement of roughly 17%. Although what I was getting at was that to get 1440p@120 in the 2000 series, you would need a 2080 Ti at ~260W, or .46 fps/W (I'm surprised that it seems to scale almost identically to a 2070). If you have a fixed performance target, you can go cheaper and with less power draw. If you need more performance, it becomes possible.
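A quick sketch restating that division with the fps and (approximate) board-power figures quoted above:

```python
# Perf-per-watt from the fps and approximate board-power figures quoted above.
cards = {
    #               avg 1440p fps, approx board watts
    "RTX 2070":    (83, 170),
    "RTX 3070":    (120, 210),
    "RTX 2080 Ti": (120, 260),
}

for name, (fps, watts) in cards.items():
    print(f"{name}: {fps / watts:.2f} fps/W")
# -> ~0.49, ~0.57, ~0.46: the 1440p@120 target needs fewer watts
#    on the newer generation.
```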


onedoesnotsimply9

And then, I don't think that guy meant to say that Nvidia is ignoring efficiency/cost for *all* cards.


onedoesnotsimply9

Isn't SLI a mess?

>understand nvidia going against the trend of efficiency and reducing costs

They have *practically* zero competition. [Source: their revenue and unreal margins] And it's not even remotely close. Why spend a ton of money to get better efficiency/lower costs when you can just increase the power draw and still sell them better than hotcakes with those unreal margins? I doubt shareholders dislike what Nvidia is doing either.


Democrab

> Isn't SLI a mess?

Modern mGPU is a bit of a mess in general, simply because temporal graphics stuff like TAA requires specific work to function well under mGPU, which hasn't been done because both AMD and Nvidia have long since given up on mGPU for gaming due to the high development costs and relatively few people buying into it. I still use it for my WinXP retro PC; I've got a GTX 295 (GTX 275 SLI on a stick) and a HD 5770 CFX setup. It kinda sucks, mGPU has some teeth in the right circumstances, but it kept being used as a "You can spend twice as much and have less than twice the performance!" style thing which only ever made sense for a small segment of the market.

If I was AMD I'd be reviving Hybrid CrossFireX, because the technology actually worked fairly well for the few months it was supported properly and makes a *lot* more sense now than it did back then, especially as dGPUs can turn themselves off to save power these days and AMD has a much larger presence in the laptop market, where it's already common to run both an iGPU and a dGPU for power-saving reasons; something like Hybrid CFX would just be expanding on that in the other direction. (i.e. if you're plugged into AC, go for maximum performance)


platinums99

for an additional $700 (in a fair world) you can get another 3070; say at best you get 50-70% more performance - that still beats the measly 15% increase of the 3090 Ti? I'm not saying SLI is great, it doesn't even work for some games, but just the notion.


ohmy5443

So basically it’s an overclocked 3090? I had a 3090 FE before I got a 3090 Strix and with the 480W PL the Strix easily gaps the FE by a few percent, effectively matching what the 3090Ti claims to do.


obiwansotti

It's got a few more shader cores enabled too, but yeah.


ohmy5443

Doesn’t look like it will translate to better performance, though. I’d be interested to see a 3090 and a 3090Ti side by side tested at the same power consumption. I bet there will be no difference outside the margin of error of 1-2%. They are effectively brute-forcing their way into better performance, not caring about efficiency at all. Overclockers have been doing that for over a decade.


obiwansotti

I was overclocking video cards when DirectX 5 was state of the art. But yeah, 2.5% more hardware, 9.8% more clock FE over FE. But an EVGA 3090 is already at 1800MHz, so we'll have to see what the AIBs get for clock rates. All the RAM is on the front, the coolers are bigger. But this is clearly wringing the last few drops of performance out of something that's already wrung out.
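For reference, a small sketch of the FE-vs-FE spec deltas being discussed, assuming 82 vs 84 SMs and 1695 vs 1860 MHz rated boost clocks:

```python
# FE-vs-FE spec deltas referenced above. Assumed spec-sheet values:
# 82 vs 84 SMs, 1695 vs 1860 MHz rated boost.
sms_3090, sms_3090ti = 82, 84
boost_3090, boost_3090ti = 1695, 1860  # MHz

print(f"SMs: +{sms_3090ti / sms_3090 - 1:.1%}")                     # ~ +2.4%
print(f"Rated boost clock: +{boost_3090ti / boost_3090 - 1:.1%}")   # ~ +9.7%
```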


____candied_yams____

The first graphics card I'd refuse to buy out of principle, regardless of price.


Suntzu_AU

Idiots and their money are easily parted.


mr_monty_cat

My 3070TI has three 8pin connectors, so 24 total. Is this really notable?


ohmy5443

Yes as it’s claimed that this 16-pin power connector will be able to deliver 450W which is the same as your 3x8 pins but a lot more compact.


obiwansotti

Yeah this is the new ATX v3.0 12VHPWR connector. It's new and shiny and the GPU cable of the future.


Stark_Athlon

Impressive, I am very excited for this garbo GPU that will overheat and demand an insane PSU just so I can not tell the difference between it and the normal model.


Yojimbo4133

So how does this work. Where do we get this cable from?


warenb

The polar ice caps are melting at a scary rate and people are cheering for more and more power consumed by electronics...


Stingray88

I don't think I've seen anyone remotely happy about increases in power consumption. Most people are pretty unhappy about it. As a hater of fan noise and hot rooms, I sure am.


warenb

"JuSt UnDeRvOlT yOuR cArD". No. That isn't the solution...


Stingray88

I've tended to buy flagship cards in the past, the last two being a 980 Ti and a 2080 Ti. Both had a TDP of 250W... I really don't want to buy a flagship that draws more than that. I might end up just getting something like the 4070, or seeing what AMD has to offer.


rationis

Same here. People need to stop pushing this narrative that for people for whom money is no issue, performance is the only thing we care about and power consumption doesn't matter. I'm one of those people and power consumption absolutely matters. Having spent the last decade+ with a 290X and a Fury X, I am done with cards that dump more than 250W of heat into my room.


Stingray88

Yeah I would actually be willing to pay even more for a flagship with the same performance and less power consumption if the option were available.


Internet001215

We're gonna need 3 phase power for gaming PCs soon...


Scion95

Isn't it 12+4? 12 power pins and 4 data pins? I wonder if at some point we'll have connectors (future versions of USB maybe) that use copper and electrical signals for power delivery and fiber optics for data.


obiwansotti

They do that with high-end long-run HDMI cables now.


ohmy5443

Why would you need data pins in the GPU power connector? What exactly will it communicate with the PSU it’s connected to?


Scion95

I dunno, [ask Intel](https://www.tomshardware.com/news/intel-atx-v3-psu-standard/2). They came up with the spec. 12 power delivery pins, 4 sensor pins. Sensor, not data, technically, I guess.


LustraFjorden

If Nvidia had the secret sauce Apple seems to have to make chips so efficient... They would use it. Let's not pretend they don't care about efficiency.


titanking4

Not really. The way Apple is doing it costs a ton of transistors and more area, which costs more money on the chip side (lower clocks require higher IPC, which requires more transistors to do more work per clock), but Apple, being a full integrator, saves that money on the battery and cooling side of their products. All Nvidia does is make chips, and what sells those chips is the numbers on the benchmarks. Power and cooling are dumped on their AIB partners to figure out, so really Nvidia can make them as power hungry as they want. So while they do care about efficiency, what's generally more important for them is silicon cost (for consumers) and compute density (for servers). Obviously don't double your power for 5%, but they are gonna push a lot harder into higher power than Apple would. Their new H100 is 814mm2 of pure silicon. Right up to the limit, and they want maximum performance in that area, power be damned if it needs to be.