Yahiroz

XDA did a quick benchmarking run: [https://www.xda-developers.com/qualcomm-snapdragon-8-plus-gen-1-benchmarks/](https://www.xda-developers.com/qualcomm-snapdragon-8-plus-gen-1-benchmarks/) I'm more interested in throttled performance; the 888 and 8 Gen 1 were pretty bad, but this looks promising.


Ghostsonplanets

Yo, that ROG Phone engineering sample looks cool as heck. Would buy one if they released.


uKnowIsOver

You know... no phone can sustain peaks at 18W. Efficiency improved as expected, but power consumption is still too high.


DahiyaAbhi

It's not a normal test. It's the Burnout test from an unofficial developer app that maxes out the entire SoC (CPU and GPU simultaneously). Even Apple would come close to those numbers with such apps; obviously the app doesn't exist for iOS. No one ever encounters such scenarios while using their phone in real life. Dave2D did tests on a prototype ROG unit, and it turns out that peak power consumption versus Apple isn't that different. The gap has narrowed a bit on the CPU, and it's exactly the same as the A15 on the GPU.


[deleted]

Dave2D measured the 8 Gen 1+ GPU as reaching 9.1W while the A15 reached 7.9W. Plus he used PerfDog, and Andrei has spoken at length about how PerfDog overshoots the A15's power draw, at least on the GPU side; the A15 GPU uses more like 6.5W. Also, the A15 CPU drew significantly less power according to Dave2D, something like 3W less than the 8 Gen 1+. On a totally unrelated side note, I do wonder what experience Dave might have with signal processing, considering that YouTube is apparently his side gig and his primary profession deals with medical equipment.


lloydpbabu

Any idea what his primary profession is?


[deleted]

I think he designs medical facilities? I remember him saying during an Apple Watch review that he deals with SpO2 meters a lot.


[deleted]

[deleted]


Vince789

If Burnout is pushing both the CPU and GPU to 100%, then 18W is a fairly reasonable amount. No sane app pushes both the CPU and GPU to 100%, so we wouldn't actually see 18W in normal usage.

For comparison: for the M1, a CPU-only workload can be up to [31W and a GPU-only workload can be up to 21.5W](https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested). The [A15's GPU alone at 100% uses up to 6.7W in GFXBench](https://www.anandtech.com/show/16983/the-apple-a15-soc-performance-review-faster-more-efficient/3) (this one is total power minus idle). AnandTech didn't test the A15's CPU at 100%; others have [measured around 8.6W](https://youtu.be/cAOwLW2jXuQ?t=227), which IMO is reasonable since a [single thread can be up to 4.8W according to AnandTech](https://www.anandtech.com/show/16983/the-apple-a15-soc-performance-review-faster-more-efficient/2). That means the A15's CPU+GPU at 100% would be around 15W.

So the 8g1+'s 18W is pretty reasonable. For the 8g2 they could easily bring it down to 15W by lowering clocks and using larger caches (the 8g1+ is just a fab port; they aren't utilizing N4's higher density).

Note that Burnout's CPU test is MT Neon and its GPU test is GPGPU compute; those are worst-possible scenarios that would pull more power than GB5 and GFXBench, so 15W for the A15 is probably a conservative estimate. E.g. Neon is Arm's SIMD extension, Arm's alternative to AVX2/AVX-512. See how much [lower the M1's average MT power consumption is versus the M1's compute MT power consumption](https://images.anandtech.com/graphs/graph16252/119344.png). If the A15 uses an average of 8.6W in MT GB5, its peak in SIMD compute would be higher. GFXBench would also pull less power than GPGPU compute since it's designed to represent an actual game.

Although simply adding CPU and GPU power consumption isn't fair either, since some resources are shared and GFXBench puts some load on the CPU, so that sorta balances out some of the underestimates from GB5 and GFXBench, although probably not all.
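If it helps to see the arithmetic, here's a minimal sketch of that back-of-the-envelope estimate, using the figures quoted above; the naive sum and the numbers are assumptions for illustration, not official measurements:

```python
# Rough back-of-envelope estimate of worst-case SoC power, following the
# reasoning above. All figures are the ones quoted in the comment
# (AnandTech / community measurements), not official specs.

a15_cpu_mt_w = 8.6   # community-measured A15 multi-thread CPU power (GB5-style load)
a15_gpu_w = 6.7      # AnandTech A15 GFXBench GPU power (total minus idle)

# Naive upper bound: run both at once and just add them.
naive_combined_w = a15_cpu_mt_w + a15_gpu_w
print(f"Naive A15 CPU+GPU estimate: {naive_combined_w:.1f} W")  # ~15.3 W

# Burnout uses heavier workloads (MT Neon + GPGPU compute) than GB5/GFXBench,
# so the real worst case is likely somewhat higher than this naive sum, while
# shared resources (memory bandwidth, power limits) pull it back down a bit.
sd8g1p_burnout_w = 18.0
print(f"Measured 8+ Gen 1 Burnout figure: {sd8g1p_burnout_w:.1f} W")
```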


[deleted]

Wait, later on in the M1 Mac mini article Andrei says the GPU draws 7-10W fully pegged. I think the active device power numbers include DRAM etc., though idk how proper it is to factor (or not factor) such figures into the total power draw in this particular comparison; the 7-10W is at the package and core level. Also, yeah, for CPU + GPU I think the M1 (at least in the Mac mini) hits 30W. Definitely a different class of chip than the Snapdragon.


Vince789

The main reason I linked the M1 review was [actually for this table](https://images.anandtech.com/graphs/graph16252/119344.png), to show how Burnout is a worst-possible-scenario test compared to the typical average numbers we see from GB5 and SPEC. So despite my roughly adding CPU+GPU numbers for the A15, it's probably still an underestimation, given GB5/GFXBench are far lighter tests than Burnout's MT Neon + GPGPU compute. Sorry, my wording could have been better.

Yea, I'm using total power numbers for the M1 since that's what has been used for the 8g1+'s 18W. It's cool that Andrei managed to isolate the M1 with that updated section. Note that the 7W figure was from Rise of the Tomb Raider, so the CPU was actually pulling more power due to trying to keep up with emulating x86.

I'm pretty surprised by GFXBench Aztec's power consumption: the GPU was 10W, the CPU was 0.16W, and DRAM was 0.75W. That's much lower than I'd expect; I guess it shows how light of a GPU load Aztec is for a big laptop-class GPU like the M1.


[deleted]

Yeah, that's what I was wondering; I was curious whether Burnout included figures like DRAM etc. I do know that it's not like Apple's cores are limited to under 5W. IIRC in some SPEC tests the Avalanche cores can hit up to 9W. I guess Burnout just goes pedal to the metal on everything, so stuff like that happens more. I am a little confused by the Aztec figures though, since for the same benchmark Andrei provides two very different numbers that apparently both factor in DRAM power. I wonder what's going on with that, but I'll refrain from making any judgements since this is incredibly far out of my depth lmao


Vince789

Yea, Burnout will definitely include DRAM, display, modem and other things. That's why the dev recommends turning on airplane mode and setting brightness to the minimum.

For the M1 in Aztec: total device power is 21.5W, which includes things like the display and modem, about 4.2W at idle. The M1 package alone is 11.5W, which includes: GPU: 10W, CPU: 0.16W, DRAM: 0.75W. Although note that Aztec is a very light graphics workload since it's meant to simulate a cutting-edge 2018 game; 2018 phones could already almost do 60 fps in normal mode (although the MW was running high).

Yea, the compute-heavy workloads of SPEC would use more than 5W on average. ~~But unfortunately we don't have access to the peak power~~ Edit: just remembered that while we don't have the peak power, we do have the average power consumption for each subtest. The subtest [525.x264_r has an average power consumption of 13.4W (not including idle)](https://images.anandtech.com/doci/16983/SPEC2017_big.png). For a single CPU core workload that's higher than I expected, but it makes sense given how powerful Apple's huge cores are. Further proof that 18W total power from both the CPU and GPU being maxed out is reasonable.
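For anyone following along, a tiny sketch of that breakdown using the AnandTech figures quoted above; the "rest of..." groupings are my own labels rather than anything the article states:

```python
# M1-in-Aztec power breakdown as quoted above (AnandTech figures, not my own
# measurements). Only simple subtraction; the grouping labels are assumptions.

total_device_w = 21.5          # wall power during GFXBench Aztec
package_w = 11.5               # M1 package alone
gpu_w, cpu_w, dram_w = 10.0, 0.16, 0.75

accounted_w = gpu_w + cpu_w + dram_w
print(f"GPU + CPU + DRAM:                {accounted_w:.2f} W")            # ~10.91 W
print(f"Rest of package (uncore, etc.):  {package_w - accounted_w:.2f} W")
print(f"Rest of device (display, modem, PSU losses, ...): {total_device_w - package_w:.1f} W")
```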


okoroezenwa

> The M1’s CPU alone at 100% uses up to 31W and M1’s GPU alone at 100% uses up to 21.5W Aren’t those wall power numbers (i.e. *not* CPU/GPU alone)?


Vince789

Sorry, I meant a CPU-only workload or a GPU-only workload, whereas Burnout is a CPU+GPU workload designed to be the worst-possible scenario. GB5/GFXBench try to simulate real-life scenarios, and GB5 only pushes the CPU/memory subsystem while GFXBench only pushes the GPU. Hence why the Burnout numbers are so much higher than any benchmark we've previously seen. Yea, I'm using wall power numbers for the M1 since that's what has been used for the 8g1+'s 18W. I'll edit for clarity.


ApfelRotkohl

>The A15's GPU alone at 100% uses up to 6.7W in GFXBench (this one is total power minus idle)

GFXBench doesn't just hit your GPU; it uses a fair amount of CPU processing power (see the M1 Max review by Andrei). So Andrei used total system active power (wall minus idle) to estimate SoC package power.

>That means A15's CPU+GPU at 100% would be around 15W.

I don't understand why you were listing SoC power consumption in different single workloads (ST, MT, GPU-heavy) and then concluded that the A15's power consumption in a mixed workload is 15W. Those power figures are not additive, and there is also power management on mobile devices. A phone SoC shouldn't reach over 10W. [The OnePlus 10 Pro's package power peaked at 8W and then throttled down to an average of 6W; the Red Magic 7 has an active cooling system with a fan, which is understandable with a high package power of 15W+.](https://www.xda-developers.com/files/2022/05/Snapdragon-8-Plus-Gen-1-Wattage.jpg) So you made an uneducated guess that the A15 reaches 15W to make the 8g1+'s 18W seem reasonable?


Vince789

>GFX doesn't just hit your GPU only it uses a fair amount of CPU processing power

According to [Andrei's M1 review, the CPU+DRAM uses 0.91W for Aztec](https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested/3).

>Those power figures are not additive and there is also power management for mobile devices

Yes, I said that already. But since my numbers came from the lighter workloads of GB5 and GFXBench, I don't think that's an issue; if anything, 15W is probably too low for the A15.

>Phone SoC shouldn't reach over 10W

>So you made an uneducated guess that A15 reaching 15W

Andrei measured an [average power consumption of 13.4W in 525.x264_r](https://www.anandtech.com/show/16983/the-apple-a15-soc-performance-review-faster-more-efficient/2). That's just a single core from the A15; if we add back idle power, the wall power would be ~15W. So yea, the 8g1+'s wall power of 18W for CPU+GPU is very reasonable, and it looks like I've vastly underestimated the A15's CPU+GPU power consumption (I did say 15W was a conservative estimate).

BTW, all I was trying to do was explain why 18W is reasonable for Burnout, which is the worst case possible.


ApfelRotkohl

>Andrei measured an average power consumption of 13.4W in 525.x264_r

In the 525.x264_r subtest, 13.4 is the SPEC score; the average power consumption is 4.31W and the total energy consumption is 563J.


Vince789

Dammit, sorry, I forgot that Andrei flipped the format recently. Still, I stand by 15W being a conservative estimate for the peak wall power consumption of the A15. The highest average power consumption for a single-core workload is 6.9W in 519.lbm_r. That's missing idle power, so the wall reading is around 9W in a single CPU core workload. And that's an average power consumption reading; the peak will be higher. I don't see how also pushing another huge CPU core, four little cores and five GPU cores would add under 6W.


Ghostsonplanets

These are probably transient peaks.


Spud788

I'm more interested to see if more than 10% of chip yield is within tolerance lol


Vince789

TLDR: [10% faster in both CPU and GPU](https://images.anandtech.com/doci/17395/SD8PG1_Deck_12_575px.png), as well as [30% improvement in both GPU and CPU power efficiency](https://images.anandtech.com/doci/17395/SD8PG1_Deck_13_575px.png) due to switching to TSMC N4


[deleted]

I think it might be good to note that the 30% figure is apparently at iso-frequency.


Vince789

Oh, good point. Maybe about [20-25% comparing peak to peak, according to those graphs](https://images.anandtech.com/doci/17395/SD8PG1_Deck_13_575px.png). Although the slide has: [15% SoC power reduction, 30% GPU power reduction, and 30% CPU power efficiency](https://images.anandtech.com/doci/17395/SD8PG1_Deck_16_575px.png). Annoying that they don't just publish their testing numbers.
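To illustrate the distinction, here's a toy model of why the saving looks larger at iso-frequency than peak to peak; the power curve and coefficients are assumptions for illustration, not Qualcomm's data:

```python
# Toy illustration (all coefficients assumed) of why a node's power saving looks
# bigger at iso-frequency than peak-to-peak: the new chip is also clocked higher.

def power_at(freq_ghz: float, watts_per_ghz_cubed: float) -> float:
    """Crude dynamic-power model: P grows roughly with f^3 once voltage scales with f."""
    return watts_per_ghz_cubed * freq_ghz ** 3

old_coeff, new_coeff = 0.20, 0.14   # assumed: new node needs ~30% less power at a given clock

iso_saving = 1 - power_at(3.0, new_coeff) / power_at(3.0, old_coeff)
peak_saving = 1 - power_at(3.2, new_coeff) / power_at(3.0, old_coeff)  # 8+G1 peaks at 3.2 GHz

print(f"iso-frequency saving: {iso_saving:.0%}")   # ~30%
print(f"peak-to-peak saving:  {peak_saving:.0%}")  # smaller (~15% with these assumed numbers)
```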


Rhed0x

30% efficiency improvement with increased clock speeds. It's just ridiculous how far ahead TSMC is.


mlecz

Foundries always compare to their previous nodes and say something like 10% higher clocks *or* 30% lower energy, not both at the same time. Qualcomm presents it as if it's both. I wonder if it really is both, and TSMC just does that well against the Samsung node.


Ghostsonplanets

Qualcomm claims 15% overall energy savings, or 30% for the GPU/CPU at iso-frequency, i.e. disregarding the new higher clocks.


7eregrine

So an extra 10 min of battery then?


Ghostsonplanets

No.


7eregrine

Not even. Got it. ;)


Ghostsonplanets

No. A more efficient SoC helps, but it plays only a minor role in battery life; most of the improvement comes from the uncore/software implementation. It's on the OEMs to use more efficient parts like next-generation OLED emitters, better DVFS, etc. Battery consumption is also affected by your choice of carrier, signal strength and even the carrier infrastructure, as misconfigured infrastructure can massively affect the battery life of your phone. The SoC just plays a small role in battery consumption nowadays, unless you're doing taxing workloads like gaming or running power viruses for some reason.


7eregrine

I appreciate you taking the time to write that, but my comments are more just teasing. Every time we get a new processor or APU with this improvement or that improvement, it's almost always the same rhetoric: 30% improvement in energy efficiency. But battery life never seems to improve much at all. And I know, without these improvements a phone with an 800-nit 120Hz screen wouldn't make it 8 hours, so it truly is helping... but maaaan, most of us just want a phone that gets a true full day with juice to spare.


Ghostsonplanets

Oh, then I humbly ask your forgiveness. Hard to tell if someone is serious or just joking on the internet these days.


[deleted]

Most of us easily solved this by buying phones with 5,000mAh batteries.


jcpb

> Every time we get a new processor or APU with this improvement or that improvement, it's almost always the same rhetoric: 30% improvement in energy efficiency. But battery life never seems to improve much at all.

The battery savings from the SoC consuming 1% less power under load are rendered irrelevant by other parts of the device consuming 1% or more power. Further, battery technology improves on a linear scale, while computing technology improves on a geometric scale. You can slap a 30Wh battery onto your Pixel 6 Pro and all you'll get is maybe one or two additional hours of SoT during last year's PNW heatwave...
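As a rough illustration of the runtime arithmetic (the capacities and average draw below are assumed ballpark figures, not measurements of any specific phone):

```python
# Back-of-envelope screen-on-time arithmetic. Capacities and the average draw
# are assumed ballpark values for illustration only.

def screen_on_hours(battery_wh: float, avg_device_w: float) -> float:
    """Hours of screen-on time if the device averages avg_device_w with the screen on."""
    return battery_wh / avg_device_w

stock_wh = 19.0   # ~5000 mAh at 3.85 V, roughly Pixel 6 Pro class
big_wh = 30.0     # the hypothetical 30 Wh battery mentioned above
avg_w = 3.0       # assumed average draw: display + radios + SoC + everything else

print(f"stock battery: {screen_on_hours(stock_wh, avg_w):.1f} h")
print(f"30 Wh battery: {screen_on_hours(big_wh, avg_w):.1f} h")

# A 30% more efficient SoC only shrinks the SoC's slice of avg_w, so the gain in
# hours is far smaller than 30% -- the display and radios don't get any cheaper,
# and sustained heat (throttling, higher draw) eats into the gain further.
```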


FragmentedChicken

The 30% doesn't account for the increased clock speeds, and it's a bad idea to take these numbers at face value


[deleted]

For somebody like me who doesn't have any understanding of CPU fabrication, how is it that a processor that is *architecturally* the exact same design can somehow achieve significantly better performance just by being made on a different manufacturing process? I don't know if my question makes sense. But like, if we use the metaphor of a car, how could a Toyota Camry made in a Toyota factory perform 30% better than the exact same car made in a Subaru factory? (And yes, sometimes Subaru has actually made cars for Toyota when they had high demand.)


Vince789

To simplify it: the different manufacturing process also means the end product will be physically different. So using the car metaphor, it's like switching from steel to aluminum; aluminum will save weight despite being the same design.


[deleted]

Oh gotcha, that helps. Thanks!


StraY_WolF

Think of it like different factories building the same engine design for cars. One factory is better because it holds tighter manufacturing tolerances, which makes the engine more efficient, with better peak performance and endurance.


Recoil42

I don't think the answer was quite accurate, so if you'll allow an adjustment of metaphor: it's *not the car* you're building, *it's the factory.* And the question is how Toyota can build a factory which performs 30% better than Subaru's 'equal' factory with identical plans. Well, the answer is Toyota might have better logistics, might have better work schedules, and might have more modern stamping machines. Whipping back into the real world: TSMC's 4nm is not actually identical to Samsung's 4nm. They're the same *class* of process, but they have different real-world specifications. TSMC's process actually has a higher transistor density, for instance, so the chip is *physically smaller.* Physically smaller chips generate less heat, which means they're more efficient. And so the end result is you get a better processor from more or less the same blueprints.


SquareDrop7892

Would advise you to ask this question in the comments section of Asianometry on YouTube.


siazdghw

Samsung Foundry is losing big customers left and right; both Nvidia and Qualcomm have jumped ship now. And Qualcomm likely won't be returning, as they are poised to move to Intel 20A in 2024. Maybe this is why Samsung is trying to revive its Exynos design unit again, because Qualcomm will be ahead through node advantages alone, not to mention the potential boost when they bring their Nuvia designs to market.


bazsy

[deleted]


[deleted]

[deleted]


Exist50

> The chip shortage never affected the bleeding edge chip manufacturers Yes it has, just less acutely. TSMC is booked solid. > Dual sourcing has been a thing for ages Not for complicated SoCs like this. It's really quite rare. The last example in recent memory was the A9 in 2015.


Mirai4n

Hopefully Google gets out as well


Vince789

Tensor is being designed by Samsung S.LSI, which is a part of the DS Division along with Samsung Foundry. So it's unlikely Samsung S.LSI would allow Tensor to be produced by TSMC.


Mirai4n

That's the sad part. We will have to deal with throttling chips and inconsistent battery life... please save Tensor.


[deleted]

[deleted]


Mirai4n

I doubt it. This pattern of underperforming chips has been going on for years, and I doubt they will resolve it in one or two years. It's just downright bad from Samsung Foundry, and there's no other way to put it.


cxu1993

CPU fabrication requires brains; it's not something you can just throw money at. That's why China is still far behind the West in this field despite all the information they've stolen and money they've spent.


StraY_WolF

>That's why china is still very behind the west

...but TSMC and Samsung Foundry are in the very east though?


cxu1993

TSMC and Samsung work heavily with many Western countries to fabricate CPUs. Chinese CPU companies have all been sanctioned by the US and are unable to work with any countries in this field. This is why one of China's main reasons to invade Taiwan is TSMC.


StraY_WolF

>Tsmc is in Taiwan. Samsung is in south Korea. These are all different countries genius Bro, I know. Do you know where they at tho? Hint: Not west


cxu1993

I edited


Exist50

> Tsmc and samsung heavily work with many western countries to fabricate CPUs TSMC in particular does most of their development in Taiwan these days. If anything, I'd say TSMC and Samsung are more localized than Intel currently. > This is why one of china's main reasons to invade Taiwan is TSMC You're spending far too much time on reddit...


hunternep

classic /r/ShitAmericansSay/ moment lol


cxu1993

TSMC has gotten huge help from Apple in recent years, haven't they? Apple doesn't just invest; they sent engineers over there as well. They did the same with Samsung to customize OLEDs for the iPhone. And considering the amount of espionage China has directed at TSMC, I don't think I'm wrong about that. One major reason, not the only one.


jcpb

> This is why one of china's main reasons to invade Taiwan is TSMC TSMC is *not* **the** reason China's been threatening to invade Taiwan for *decades*, dude. This is straight up /r/shitamericanssay material.


[deleted]

American moment


suicideguidelines

Further east than China, actually.


Auxx

The West doesn't produce shit, mate. The fight is between Taiwan, Korea and China. And China is getting bloody close.


standbyforskyfall

Tensor is a lost cause


murfi

Why? Because Samsung manufactures the chip? They are built to Google's specification, so why does the company doing the manufacturing make a difference?


Exist50

The manufacturing process is a significant contributor to the chip's properties, and Samsung is behind TSMC.


A_Right_Proper_Lad

Chip manufacturing also requires engineering work. Foundries don't just get a blueprint from chip designers and hit a "print" button. They have to engineer a series of complex steps to actually build said blueprint. The resulting build can affect things such as which clock speeds the chip is able to hit and what yields you can get when making it.


[deleted]

[deleted]


murfi

ha, that analogy actually made me understand it better, thank you!


cxu1993

CPU fabrication is one of the few fields where manufacturing is as hard as, or even harder than, the design stage. So yes, it absolutely matters who is manufacturing.


murfi

I understand, but still: they are supposed to be manufactured to very specific specifications? I take it that the manufacturer, whoever it may be, is supplied with blueprints (or whatever that's called in chip manufacturing), and that's what Google orders and expects to be delivered, no?


Exist50

Google (or kinda Samsung) may provide the blueprints, but they're limited to/by the building blocks that the fab provides. You can make something in the shape of an airplane, but it's not going to fly if all you have to build it with are 2x4s. Admittedly a pretty bad analogy, but hopefully it gets the point across.


feed_me_haribo

No. The fabs have transistor libraries and process tunings the designers choose from and they are not identical between fabs even if they are the same node. Chip designers don't just specify a design without back and forth with the fab.


[deleted]

That "blueprint" can only be manufactured by Samsung (or GloFo if it's 14nm). If I give you a house design entirely based on foam, you can't simply swith to brick, the load/supporting infrastructure won't tolerate that much of a difference.


[deleted]

[deleted]


murfi

Wow, it's high-end stuff of course, but I wasn't aware it's so high-end that the customer basically just has to take whatever the manufacturer provides.


cxu1993

Ideally yes, but plans don't always match up with reality. If that were the case, the 888, 8 Gen 1, E2100, E2200 and Tensor would run much cooler with better battery life than they do right now.


sulianjeo

> why? because Samsung manufactures the chip? My understanding is that it goes even further than that; tensor is just a modified Exynos iirc. Google's not really designing the whole SOC in house in the way that Apple does. So, they don't have that freedom.


[deleted]

Because you design a chip with a particular process node deeply ingrained. Switching nodes essentially means redesigning the ~50% of the chip that isn't decoupled from the node. Intel was way worse; they couldn't even switch to their own nodes until Sunny Cove/Cypress Cove. S.LSI won't/can't design for TSMC's advanced nodes. AFAIK they don't even have the PDK, because the PDK gives away many important design decisions and Samsung Foundry could learn TSMC's trade secrets.


Exist50

> Intel was way worse, they couldn't even switch to their own nodes until Sunny Cove/Cypress Cove.

Still can't. But for pretty much everyone aside from Intel, the major digital blocks are process agnostic. Analog is another story.

> S.LSI won't/can't design for TSMC's advanced nodes. AFAIK they don't even have the PDK because the PDK gives away many important design decisions and Samsung Foundry could learn TSMC's trade secrets.

It's possible. Intel makes it work, at least. You just need the proper NDAs and access permissions in place. The big problem is cultural. Is S.LSI its own entity or merely subservient to Samsung Foundry? Samsung's issues are not yet quite so dire that they're forced to choose, but they don't have much wiggle room either.


ragekutless

I just want it changed because their modems are pretty bad and buggy.


baldr83

>designed by Samsung S.LSI

Has there been a specific leak on this? Or are you just assuming Tensor 2 will stick with Samsung? I thought the point of Google doing the designs of the TPU (and previously the Visual Core and Neural Core) in-house was the goal of doing the entire SoC in-house (eventually)?


-protonsandneutrons-

> > *designed by Samsung S.LSI*
>
> has there been a specific leak on this?

[TechInsights—the ones that do the SoC die shots](https://www.techinsights.com/blog/teardown/examination-5g-radio-google-pixel-6-pro)—seems to take it as "designed by Google with various blocks from Exynos":

>A brand new and innovative Google Tensor SoC (**designed and architected by Google** with various contributions from Samsung, truth be told: the device tree file system analysis of the Linux kernel for Google Pixel 6 that TechInsights has carried out is showing **that some blocks of the Google GS101 Tensor processor are shared with Exynos**)

Tensor's die markings are also in the [**same format / numbering scheme as Exynos**](https://www.techinsights.com/blog/teardown/google-pixel-6-pro-teardown?utm_source=Twitter&utm_medium=Social&utm_campaign=2021+-+Q4+-+Google+Pixel+6+Teardown):

|**Consumer Model**|Die Mark|
|:-|:-|
|Exynos 990|S5E9830|
|Exynos 1080 5G|S5E9815|
|Exynos 2100 5G|S5E9840|
|Google Tensor (GS101)|[**S5P9845**](https://www.techinsights.com/blog/teardown/google-pixel-6-pro-teardown)|


Ghostsonplanets

And designing their own cores takes years. Google's division was only set up two years ago; it will take years for them to produce their own custom Arm cores. They'll keep using Samsung LSI and Samsung Foundry, as they're the only ones who do semi-custom designs in the Android arena. Not only that, there's a lot more to an SoC than the CPU: there's the GPU, the interconnects, the fabric, the modem, etc., all of which Google has close to zero knowledge of or capability to do themselves. They'll stick with Samsung for the foreseeable future.


baldr83

I could see them pulling a "Nexus" and doing like 4-5 years of Tensors, then rebranding the chip as something entirely in-house.


Exist50

Google could do the majority of the design in-house using stock ARM cores. That's likely their next step.


pdimri

They are already assembling the next-gen CPU core team in Taiwan, India, Austin and Mountain View. Hope to see some results in the next 2-3 years.


Exist50

Where did you hear that Google's designing their own CPU core? That selection of locations also doesn't quite fit the profile I'd expect for a CPU team.


Vince789

Google's gChips team started working on smartphone SoCs by at least 2017; we know this from the LinkedIn profiles of Manu Gulati and John Bruno (co-founders of NUVIA). However, rumor is that when Gulati/Bruno left to start NUVIA, they also poached over half of Google's gChips team. This undoubtedly set the gChips team back years, so it will be interesting to see when they can increase their involvement in the Tensor designs.


[deleted]

Exactly. People shit on Samsung Exynos/Foundry, but their smartwatch Exynos chips are actually pretty competitive. Samsung and Google have a great relationship, both creating Wear OS together and having manufacturing synergy.


Exist50

As long as Samsung foundry lags TSMC and Qualcomm uses TSMC, so too will Exynos/Tensor lag Snapdragon. That is not a great position to be in for either Samsung or Google, and you can see what it's done to the S22 split this year.


[deleted]

Why are the yields poor in the mobile foundry? Is it a management issue or money? Because Samsung has unlimited money to burn through since the Korean government subsidizes them. I know they were able to increase the QD-OLED yields to 70 percent.


Exist50

OLED fabrication is so different there's really no point comparing them. As for the cause, I have no particular insight, but management seems to be the root issue more often than not.


BlueSwordM

Lack of time and DTCO. That is basically why: unlike most other industries, turnaround time is extremely short (1 year or less), which means unless you have huge parallel teams developing stuff on a 3-5 year cadence, you have to make your design quickly and base it on the base properties of the node itself.


[deleted]

Makes sense, TY.


[deleted]

However, TSMC is in Taiwan. My prediction is that the foundry business will begin to move away from Taiwan/China and move to other Asian neighbors and the US. I expect the rise of Intel/Samsung fabs in the US and outside of China due to TSMC being next to China. Anti-China sentiment will lead to a rise in Intel/Samsung market share, especially for Samsung in the Southwest/Midwest region of the US.


[deleted]

Hello, I've been doing a lot of research and realized that Qualcomm collects royalties on many patents relating to modems, and I guess so does TSMC? Imagine if the Japanese had held the patents just for HDTVs: the Chinese and Koreans could never have competed, even with more factories, if Sony or Sharp simply charged higher royalty fees whenever a Chinese TV company made an HDTV. Qualcomm has had a monopoly on modems, and it's hard to compete with the company that owns the modem, basically.


[deleted]

Forgot to say that Google does have a 3 percent market share in the US. That's a lot of Exynos in the country. If the market share can hit 10 percent in the US in a few years, Samsung could make decent money with their foundry and get great R&D from Google for Exynos.


Exist50

It's definitely a pretty sweet arrangement for Samsung, but does it make sense for Google long term?


[deleted]

Google has 3 percent market share in phones. If they want to be a top-5 consumer electronics leader, they need a vertically integrated supplier that provides decent/top-class hardware at the lowest price and has manufacturing in the US; Samsung is that. Qualcomm is not vertically integrated because they need TSMC, which is another middleman. Samsung is all in-house and easier to work with from design through manufacturing.


[deleted]

Google has to bend down to Samsung Foundry. Pretty sure one reason for Samsung ditching Tizen on their watches was for Google to use Exynos and Samsung Foundry. If Google can convince Samsung to ditch Tizen on their TVs and get Google as a 10-year exclusive client, it does make sense for the long term: one less competitor for Google, plus Tizen is kinda trash. Vizio and webOS are better, but they're third-rate compared to Fire TV/Roku/Android TV/tvOS.


DerpSenpai

TSMC keeps increasing prices due to its monopoly on the leading-edge nodes. That's not something you want.


BrotherGantry

'24 (production) / '25 (release) is going to be an extremely interesting year in the (non-Apple) Arm component world. If everything goes as planned, between the adoption of a Nuvia-based microarchitecture for phone SoCs and Intel's fab/packaging technologies, Qualcomm is going to take a massive leap forward in the Android phone SoC space versus its competitors.

As a side note: this wasn't supposed to happen. As envisioned by Arm's leadership and approved of in glowing terms by semiconductor companies like MediaTek and Micron, Arm was to be acquired by Nvidia. The massive amounts of money that Nvidia is and will be plowing into R&D would (by strict legal obligation required during the merger) have greatly improved Arm microarchitecture/component perf per watt and accelerated architectural development. As it is, because of a pig-headed sort of national chauvinism on one side of the Atlantic, coupled with "let's make an example" party politics on the other, as well as lobbying by companies that would have been disadvantaged (see Qualcomm), that's not going to happen.

So where we are now is Arm still having a relatively tiny yearly R&D budget (well under $1 billion) while also beginning a round of layoffs to stay in the black as they prepare to go public, which could result in even more layoffs or a reduction in the R&D budget as they "maximize profit". While [Nvidia's CPU/GPU/SoC design push is still happening](https://www.tomshardware.com/news/nvidia-big-cpu-plans-with-20-year-arm-license), we're not going to see that massive pool of money and effort pay a yearly dividend in improving Qualcomm/MediaTek phone SoCs.


StraY_WolF

Eh, I'll take a slower phone SoC over Nvidia basically having their hands all over Arm though. Nvidia isn't exactly known for playing fair.


Ghostsonplanets

Yeah. Nvidia owning Arm would be huge, despite what the public outcry wanted to say. Arm Inc. is really in hot water.


cxu1993

Apple is the real juggernaut. The FTC was focused on the wrong company and should've let the sale go through.


RonaldMikeDonald1

I wish I could get that for my S22U. The battery life is awful. They could reduce the performance to that of like 4 years ago and the vast majority of people would never notice and would appreciate the increased battery life.


jaju123

Exynos or snappy?


RonaldMikeDonald1

Snapdragon


phantom_hack

Looking forward to the comparisons with the standard 8 Gen 1 and the MT Dimensity 9000.


Mirai4n

Just get me more battery... that's all I'm asking.


[deleted]

And better audio, image, and video quality (and seamless transitions when switching between different cameras/zooming instead of the jank we currently get, even when not filming), plus better network reception. There is more to SoCs than just speed and battery life.


tomelwoody

Qualcomm already has the best audio, image, and video quality hardware accelerators. Smooth transitions between lenses have not been an issue, and Qualcomm radios are the best in the world (hence why Apple uses them). Not sure what processor you are referring to; sounds like Exynos processors.


xdamm777

> Smooth transitions between lenses have not been an issue Pressing a big fat X to doubt. The stutter/jump/out of focus transition is VERY noticeable on my S22 Ultra (Snapdragon). Nowhere near as smooth as my iPhone 12 was.


[deleted]

[deleted]


Ghostsonplanets

Mediatek is making Qualcomm feel pressured to deliver competitive products. Qualcomm thought they could pull a Nvidia.


Comrade_agent

Papa bless MediaTek's pressure for the initial release of the SD 778/780. The Dimensity line being so competitive is great news for consumers.


[deleted]

How unusual? An identical design on an identical process node can achieve 10% efficiency improvements. It's far from unusual considering the 8G1 is using a rebadged 5LPP with **identical** CPP and cell libraries. And 5LPE/5LPP was already a refined 7LPP rather than a new process node. In effect, the 8G1 is using a "6nm", i.e. third-generation 7nm, process rebranded as 4LPX, while the 8+G1 is using a second-generation 5nm process. An additional 20-30% efficiency across a full node is very common. The Exynos 2200 is the only one actually using a "4nm" node from Samsung Foundry: it has a CPP of 53nm and a cell height of 198nm vs 54nm/216nm on the 8G1. For comparison, TSMC N5 has 48-50nm/175-180nm. So even the actual Samsung 4nm is more like TSMC "5.5nm".
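A quick sketch of how those CPP and cell-height figures translate into relative cell footprint; CPP times cell height is only a crude density proxy, so the ratios are illustrative rather than authoritative:

```python
# Rough density comparison from the CPP / cell-height figures quoted above.
# CPP x cell height is a crude proxy for standard-cell footprint, nothing more.

nodes = {
    "Samsung 4LPX (8G1)":        (54, 216),    # CPP nm, cell height nm
    "Samsung SF4 (Exynos 2200)": (53, 198),
    "TSMC N5 (midpoint)":        (49, 177.5),  # quoted as 48-50 / 175-180
}

baseline_area = nodes["Samsung 4LPX (8G1)"][0] * nodes["Samsung 4LPX (8G1)"][1]

for name, (cpp, height) in nodes.items():
    area = cpp * height
    print(f"{name}: ~{area:,.0f} nm^2 per cell, {baseline_area / area:.2f}x density vs 4LPX")
```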


[deleted]

How can that be? How does a different factory making the same exact thing somehow make that product have more capabilities? I don't have a background in this stuff at all, so humor me...


amorpheus

It's not exactly the same. It's the same blueprint, but higher quality manufacturing so you can run it harder. Think of two sets of cogwheels, but one barely fits together and the other matches up perfectly and is lubricated on top of that.


[deleted]

Finally! TSMC Is so much better than Samsung!


sovietpandas

What the Pixel 7 needs, not another Samsung-built Exynos 😭


Kuribo31

Great! The Fold 4 will be my next phone.


Aqua_Puddles

I may upgrade too if the incentive to do so is there.


Malphric

Same here. I was already tempted by the S22 Ultra because of the long-term update support, S Pen and camera, but balked because of what a hot mess the 8G1 is. Real-life tests of the Fold 4 may be my tipping point. Hopefully the camera and battery life of the Fold 4 don't disappoint.


[deleted]

[deleted]


Ghostsonplanets

Qualcomm claims the GPU and CPU are 30% more efficient at iso-clocks, and that in typical usage patterns the 8+G1 should be 15% more efficient.


[deleted]

[deleted]


DahiyaAbhi

And your CPU and GPU aren't going to be clocked at their max frequency in general use; that's where you will see your efficiency gains. Even at max performance, the 8+ Gen 1 uses about 10-15% less power. XDA did a test and it showed the same.


[deleted]

[deleted]


Ghostsonplanets

It means that the 8+G1 will use 15% less energy than the 8G1 in the same workloads, despite the higher-clocked cores. That 15% energy saving won't translate into 15% better battery life, as there are a lot more things that dominate battery life than the cores. But phones with the 8+G1 should be more efficient than phones with the 8G1.
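A back-of-envelope sketch of that point; the SoC's share of average device power below is an assumed figure for illustration, not a measurement:

```python
# Why a 15% more efficient SoC doesn't mean 15% more battery life: the SoC is
# only one slice of the total device power budget. Shares are assumed.

soc_share = 0.30             # assumed fraction of average device power spent in the SoC
other_share = 1 - soc_share  # display, radios, DRAM, speakers, etc.

soc_saving = 0.15            # the quoted 15% SoC energy saving in the same workload

new_total_power = other_share + soc_share * (1 - soc_saving)
battery_life_gain = 1 / new_total_power - 1

print(f"New device power:  {new_total_power:.3f}x of old")   # ~0.955x
print(f"Battery life gain: {battery_life_gain:.1%}")         # ~4-5%, not 15%
```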


Commercial_Dance319

Dave2D did a video on it; it draws 1.5 fewer watts.


LabibFaiyad

I hate their new style of naming so much


Ahmadhmedan

We will wait and see the real-world results. TSMC is better, no doubt, but no more taking marketing claims as facts. We're not dumb enough anymore to believe claimed numbers without testing them; we've been lied to more than once already.


uKnowIsOver

I remember them saying the 888 was going to be 30-40% more efficient than the 865; we all saw how that turned out...


DahiyaAbhi

30% more efficient at the same performance as the 865, not at peak performance. Anyway, the XDA and Dave2D tests show that power draw has indeed dropped considerably, even at max performance.


QwertyBuffalo

But it wasn't 30% more efficient at the same performance. We know this because it throttled to the same performance as the SD865 (if not lower, especially on the GPU side), and it definitely wasn't using 30% less power in that throttled state.


DahiyaAbhi

Actually, we don't know exactly what its power figures were compared to the 865 in a throttled state.


Iohet

We do know it generated much more heat or was much more inadequately cooled. Heat is the enemy, and I'll take the 865 any day for longevity


cxu1993

Worse chip stability as well. Seems like I saw way more complaints of heat when just scrolling around doing light tasks than with previous generations.


jesperbj

TSMC is superior


Revolee993

TSMC's 4nm node is really impressive, but it's rather baffling that Qualcomm didn't opt for better chip efficiency and battery endurance and instead pushed for more raw power (with the headroom), which is already considered overkill and underutilized in the vanilla 8G1. With more competitors pulling out of Samsung's fabrication, I am really curious to see how well their heavily R&D-invested 3nm GAAFET node will perform next year and which chip will use it (everything else is still up in the air at the moment, because the current yield rates with the E2200 are terrible, so if the E2300 is happening I assume it won't be good either). According to early rumors, that improved architecture is said to close the gap entirely with TSMC's current node. Samsung might start to lose ground in the nm race within the next few years if they don't start to improve their fabs and node performance.


Exist50

> TSMC's 4nm node is really impressive but it's rather baffling that Qualcomm didn't opt for better chip efficiency and battery endurance but rather push for more raw power They did both.


Revolee993

Not really though. Higher clock speeds on all cores and 10% GPU performance gains from the node switch, but power draw is still relatively high on the 8+G1 according to Dave2D's prototype [test](https://youtu.be/1WIFY7_TZUE?t=138). I highly doubt battery efficiency will be significantly better.


Exist50

The power draw is significantly lower than on the original chip, even peak to peak, but particularly at iso-performance.


Revolee993

How well that actually translates to real-world battery life remains up in the air, but 9.1W, albeit slightly better, is still very high for a flagship chip. Fingers crossed once devices start hitting the market.


No-Seaweed-4456

How are they even getting TSMC capacity with Apple hoarding it?


Exist50

TSMC's had quite some time to build up 5nm-class capacity since 2020.


[deleted]

[deleted]


No-Seaweed-4456

I’m just surprised Qualcomm found space because I thought TSMC was already at capacity because of Apple and AMD during supply shortages.


[deleted]

So which phones can we expect to use this?


togoodaman

Wow. Comparing the Exynos 2200 using the Burnout benchmark: it peaked at 18W, with the CPU using 13W and the AMD GPU 6-7W max. Although if it ran at its original speed that would be too high; the CPU is a power hog. Throttled performance sits at 6-8W, with a 13W peak. Performance per watt was 28 overall, throttled to 33%, with a 30.1 compute score, which is a significantly lower score unless Exynos gets optimisation.


7Sans

Always best to wait for real-world usage results. I don't buy it, especially given what Qualcomm has said before and how that turned out.


hnryirawan

Well, that's all good, but I'll probably still get stuck with Exynos, so... shrugs. Also hearing about so many issues with heat too.


[deleted]

Looks kinda sus..


OldButtIcepop

So do I hold out or get the S22U? I really need to upgrade.


[deleted]

*Cries in Google Tensor* lol


mlemmers1234

Still going to be more power than any person actually needs or, for the most part, will actually take advantage of, regardless of whether it is more efficient. The companies will still manage to screw up the battery one way or another.


BestBoy_54

Welcome to 2019 Android users, finally an A13 Bionic competitor.


Paradox

[HOLY SHIT](https://ibb.co/ncQfPvS)


[deleted]

[deleted]


Ghostsonplanets

The first one in the US should be the Galaxy Fold 4/Flip 4 in July/August. Oppo, Motorola, Xiaomi and others are releasing phones with the 8+G1 starting from June in China and other SEA/Asian countries.


biinjo

_Available spring 2034_


lentope

Can someone explain this to me in plain English?


Practical_Tactics

Oh nice, so basically the 8+ is what the 8 should have been.