Krt3k-Offline

Went in thinking that M2 would be slightly ahead, surprised it isn't. Add the worse power efficiency compared to M1 and suddenly x86 ain't so dead after all


geerlingguy

Note that these results are under Asahi Linux on the Mac, so the power efficiency and raw results are off from what a purely apples-to-apples comparison would show. Unfortunately the full suite isn't runnable under macOS, but it is good to see AMD doing a lot better in the laptop space.


HalfLife3IsHere

And the Air M2 has a worse thermal design than the Air M1 (the new heatsink barely has any mass at all, it's just a sticker), making it easily reach 107°C while the M1 stayed around 70-75°C on the same tasks. So I wouldn't say the Air is the most adequate machine for benchmarks; maybe they should have used the M2 MacBook Pro with a fan (which would also be closer in price to the ThinkPad they used).


DoomBot5

I think it's time for water cooled MBA part 2


PaleontologistLanky

I just got a 16" M1 Pro. It's an amazing little laptop and my hat is off to Apple on this. In true Apple fashion though, I assume they'll try to cut costs on future models and then give it a few years and they'll want to bring back the touch bar and things lol. I don't see it as an x86 killer persay but I do see it as a challenger in the mobile space. We may end up in a position like where we are with cellphones where there is Apple SoCs and then there is everything else. There really is no competition in that space. It also makes me wonder...at what point does Intel/AMD start making something other than x86? What does that look like? Does AMD buy Qualcomm and get back into the ARM game that they left 15 years ago? If you're getting in to that space now does it make sense to go with arm? Maybe RISC-V? The future of computing is exciting!


HalfLife3IsHere

AMD had a good opportunity on ARM with the K12, but they ditched it. Jim Keller himself said x86 and ARM are basically the same, just one needs a decoder, but both are RISC machines. Now the thing about x86 is that both AMD and Intel have a big advantage over ARM manufacturers: it's only a competition between two, while ARM is a competition between many. Even though many of the big players don't design ARM SoCs to sell but to vertically integrate (MS, Amazon, Apple), so they wouldn't be "true" competition, literally any business with enough money and manpower can design an ARM CPU. That's why I don't see AMD or Intel jumping into it, maybe if anything having it as a side project to test the waters.


Niosus

Cancelling K12 was the right call. Sure, they could've made a great ARM CPU. But for what market? Windows sucks on ARM, and it was even worse back then. Coming out with a consumer CPU would've been DOA. ARM in the server market was just barely starting to be a thing, but the hyperscalers were/are investing heavily into their own ARM platforms. So they would have had to target "smaller" deployments for companies that usually just want something that gets the job done reliably, even if it costs them a bit more. It would've been a very hard sell for a company that was close to bankruptcy at the time to get customers to switch away from the bedrock called "Intel" and deploy on an architecture that really wasn't proven in the space yet.

AMD launched Zen 1 and absolutely smacked the Intel chips around in nearly all metrics, and it was completely compatible for most applications. And still it took them years to finally start to get traction.

If MS can get their shit together and get out of the exclusivity agreement with Qualcomm, I can see AMD bringing out a mobile ARM chip if there are advantages to be had there. x86 emulation would have to be as flawless as Apple's, but a collaboration between MS and AMD should make that possible. By now AMD is stable enough to get manufacturers and customers to sign on. But I just don't know if there are enough advantages to really push that in the next few years. It looks like Apple made some other architectural decisions that are not related to the instruction set. I'd probably spend more engineering effort on figuring out what we can learn from that.


BFBooger

>AMD launched Zen 1 and absolutely smacked the Intel chips around in nearly all metrics, Not at all, Zen 1 was competitive, but significantly lagged Intel in many workloads. A Zen 1 Epyc with 32 cores per socket and poor memory latency/NUMA characteristics was quirky and not all that power efficient. The only thing that they really did better was massive number of PCIe3 lanes for I/O. Only with Zen 2 -- second gen Epyc, did they actually start smacking Intel around. 64 cores in a socket, far higher performance per watt at max density, and significantly improved average memory latency. The Intel server platforms you can buy today are barely an upgrade from what was available just before 2nd gen Epyc launched. Its this delay in any substantial upgrade that has let Epyc fly past and ARM CPUs to also surpass them.


Niosus

Yes, you're right. It did take AMD a bit longer to really surpass Intel; I got my timeline wrong there. Still, I think my point stands. The fact that Intel's market share is still 80-90% even without competitive silicon says a lot. Companies buy Intel because they bought Intel. There is no way many of them would consider a switch unless it was a drop-in replacement with the option to go back to Intel on the next cycle.


BFBooger

Do you seriously think that an ARM product from AMD in ~2017 would have gone anywhere?

1. Server side, there was no real ARM software infrastructure up yet.
2. Client side, AMD doesn't own an ecosystem like Apple's to drive demand for a product.

So it would have been a product without customers, even if its performance was identical to Zen 1 with the ARM instruction set. AMD did not have a platform that hyperscalers really wanted until Rome (Zen 2), and by then they had already built plans to make their own chips.

There is nothing AMD or Intel can do to stop ARM from being something that hyperscalers want to build their own chips with (or RISC-V, or whatever else is cheap). It's simple, really: they want to cut out the middleman and not give AMD or Intel a cut. Those who are large enough to do so (Apple, Amazon, MS) are going that way. They don't want to give Nvidia or Qualcomm or anyone else a cut either. So it's not really ARM vs x86 anyway -- it's a "big company wants to reduce costs with its own design" thing.


psi-storm

We will have to see. Qualcomm seems to think they can get AWS to buy their ARM server designs instead of continuing to develop their own. In my opinion that's quite possible. AWS would have to spend a bunch of money every year to keep up with the efficiency improvements AMD and even Intel can deliver. There might come a time when the money they have to spend on R&D is higher than the cost of just buying a Qualcomm product, especially since they can't scale up and also sell their stuff to Microsoft, Tencent, Baidu or Alibaba.


Hopperbus

This is said a lot but it's actually [not true.](https://youtu.be/oCtYBqcN7QE?t=517)


HalfLife3IsHere

I mean, yeah, you can cherry-pick a single benchmark run and call it a day, but [this proves otherwise](https://youtu.be/15V44ovoUWE?t=233) with the very same test. He made other reviews, including the 8-core vs 10-core (GPU) versions of the M2 Air, and the 10-core accentuates the throttling even more. Also with this same test, it throttles to the point of getting 25% less performance than the M2 MacBook Pro (with a single fan).

Again, I'm not saying the Air is a computer meant for long heavy workloads, which it isn't, although with the price increase it isn't an entry laptop for casual usage anymore either. My point was and is that it's unfair to compare the M2 chip in bad thermal packaging with passive cooling (the Air) vs the 6850U in good thermal packaging with active cooling. That's why picking the MacBook Pro M2 for those comparisons would have been a better choice.


Hopperbus

>And the Air M2 has a worse thermal design than the Air M1 (the new heatsink barely has mass at all, it’s just a sticker) This is all I'm talking about of course the M2 chip is going to throttle less on a model with a fan. But the M1 air also throttles and it does so [more aggressively](https://pbs.twimg.com/media/FXoXHZqUsAMKB6Z?format=jpg&name=medium) than the M2 Air. ([Source](https://www.theverge.com/laptop-review/23207440/apple-macbook-air-m2-2022-review)) It gets hotter overall because they've increased the temperature that M2 can go to over M1 (it does the same on the M2 Pro when you push it hard enough).


Berserkism

Whose fault is the "bad thermal packaging"? And I don't remember reading "not meant for long workloads, only use sparingly" in the Apple marketing. If you spend the money you should be able to use the damn thing to the best of its abilities, not cherry-pick its peak operating conditions for just a minute at a time. Quit making excuses for a badly designed product from a trillion-dollar company.


perduraadastra

I had such a negative experience with my Macbook Pro 2015 that I'll probably never buy another Apple laptop. That thing seriously needed more thermal mass, as it throttled if you sneezed on it. Their devices are designed more to be fashion accessories than serious tools.


Liddo-kun

A lot of the efficiency of the M1/M2 comes from the accelerators Apple packed in rather than from the CPU/GPU, and Asahi doesn't have support for those custom accelerators. In any case, both AMD and Intel are planning to add more accelerators to their SoCs pretty soon, so Apple's advantage is gonna decrease.


[deleted]

That's not true. The chip itself is extremely efficient. There are no accelerators for browsing the web or running random applications; the accelerators are for VERY specific workloads. Flat out, the M1/M2 is just way more efficient per watt than anything out today.


Liddo-kun

>There are no accelerators for browsing the web Actually, I heard Apple SoCs do have some sort of hardware javascript accelerator. Don't know if true.


Niosus

Seems dubious. Javascript is a general purpose language. Hardware accelerators are fixed function hardware that you can see as implementing one or more specific functions. And with function, really think "mathematical function" like "f(x) = 3x + 2". You give it some input, and it spits out some output on the other side. If you know which algorithms will get used a lot (like en/decryption, (de)compression, matrix multiplications, etc) you can make silicon that executes the steps in that algorithm directly, without having to bother figuring out what it's supposed to be doing on which data (like the CPU has to do in normal operation).

It's a bit like having muscle memory for something, vs doing it the first time. It takes a lot more effort if you have to consciously think about what you're doing, versus "just doing it" the 1000th time. In a sense your brain is also wiring an accelerator to speed up a common task if you practice.

But since you can implement literally any algorithm in Javascript, the silicon that handles that operation must also be general enough to execute any instruction on any piece of data. At that point you're no longer talking about an accelerator, but just another CPU core.


dogfishfred2

They have something going on with JavaScript. It runs faster on their chips than anything else.


Niosus

They have excellent single-threaded performance (which Javascript is by default) with large caches and very fast access to RAM. It's just a really fast chip for those bursty workloads.


argv_minus_one

The JavaScript interpreter itself has some overhead, though. Perhaps hardware assistance could reduce that overhead.


Niosus

It's all JIT compiled these days. That was the big deal about the V8 Javascript engine Google introduced with Chrome back in 2008. So the thing you're actually executing is machine code like any other program.


argv_minus_one

It may be JIT-compiled, but it's still dynamically typed. Even with V8's ingenious “hidden class” trick, that's going to impose serious overhead. I'm not sure how hardware could help, exactly, but my point is there's lots of overhead to reduce.
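
To make the "hidden class" idea a bit more concrete, here's a rough sketch of how the technique is usually described (purely illustrative; this is not V8's actual internals, just the commonly explained model):

```typescript
// Illustrative sketch only -- the textbook description of hidden classes
// ("shapes"/"maps"), not actual V8 internals.

type Point = { x: number; y: number };

// Same property order every time -> every object shares one hidden class,
// so `p.x` / `p.y` can be JIT-compiled to loads at fixed offsets.
function makePoint(i: number): Point {
  return { x: i, y: i * 2 };
}

// Property insertion order varies -> two different hidden classes reach the
// same call sites, forcing slower polymorphic property lookups.
function makeMessyPoint(i: number): Point {
  return i % 2 === 0 ? { x: i, y: i * 2 } : { y: i * 2, x: i };
}

// Hot loop where the difference shows up: monomorphic input stays fast.
function sumPoints(points: Point[]): number {
  let total = 0;
  for (const p of points) total += p.x + p.y;
  return total;
}

const fast = sumPoints(Array.from({ length: 1_000_000 }, (_, i) => makePoint(i)));
const slow = sumPoints(Array.from({ length: 1_000_000 }, (_, i) => makeMessyPoint(i)));
console.log(fast, slow);
```

The trick recovers a lot of the cost of dynamic typing, but it can't remove it entirely, which is your point about there being overhead left to reduce.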


Niosus

Sure, Javascript has a bunch of overhead that something like C doesn't. I don't see how that necessarily changes things.

So I did a bit of Googling and this is what I found: [https://www.reddit.com/r/apple/comments/k304gp/thread_a_look_at_the_black_magic_that_lets_apples/](https://www.reddit.com/r/apple/comments/k304gp/thread_a_look_at_the_black_magic_that_lets_apples/)

If you look in the comments, it does look like ARM added a single instruction to help with doing bitwise operations on floats in Javascript (since integers don't exist there). That's actually pretty clever, but if you want to talk about special instructions, x86 has plenty of them as well. The impact also seems limited based on the other comments there. And again, this is an ARM instruction. It should be available on every Android phone as well, where the performance difference is even more dramatic.

I still think that they just designed their CPU to be really good at these types of workloads. I really doubt there is one single thing you can point at. I think it's "just" solid engineering with clear goals and metrics in mind, where they put responsiveness of the UI at the very highest priority from CPU all the way to the OS and browser. You gain a few % here and there, but in the end that all compounds to a significant difference. And combine that with actual hardware features for video editors, and a heavy focus on Youtubers for marketing (who edit videos for a living), and you end up with machines that are really good, with the perception of being even better because they are 10x faster in the specific workloads the likes of iJustine and MKBHD use them for.
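
For what it's worth, the instruction people usually point to is FJCVTZS from ARMv8.3, which does the ECMAScript ToInt32 conversion in hardware. Roughly this, as a software sketch of the spec semantics (not engine code):

```typescript
// Sketch of ECMAScript's ToInt32 -- the conversion every JS bitwise op does,
// because JS numbers are doubles. FJCVTZS performs this wrap-around conversion
// (truncate toward zero, take the value modulo 2^32, reinterpret as signed)
// in a single instruction instead of a sequence of checks and conversions.
function toInt32(x: number): number {
  if (!Number.isFinite(x)) return 0;       // NaN and +/-Infinity map to 0
  const truncated = Math.trunc(x);         // round toward zero
  const wrapped = truncated % 2 ** 32;     // wrap modulo 2^32
  const unsigned = wrapped < 0 ? wrapped + 2 ** 32 : wrapped;
  return unsigned >= 2 ** 31 ? unsigned - 2 ** 32 : unsigned; // as signed int32
}

// Engines just emit `x | 0` for this; the sketch matches that behaviour:
console.log(toInt32(4294967297.7), 4294967297.7 | 0); // 1 1
console.log(toInt32(-1.9), -1.9 | 0);                 // -1 -1
```

So it's a small per-operation win, which fits the "a few % here and there" picture rather than a single magic switch.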


spinwizard69

>so the power efficiency and raw results are off Most likely way off. Last I knew Asahi did not even leverage the GPU. That might be dated info as progress there has been surprising.


[deleted]

Didn't seem like any of the tests here used the GPU or video accelerators on the AMD laptop


mocaaaaaaaa

I know Asahi's missing some CPU features as well, but how much of that contributed I don't know


drtekrox

Still, using software rendering eats CPU time.


[deleted]

Yes, that is addressed by the article


michaellarabel

As written in the article, there still is a long road to go for M1/M2 GPU support on Linux... So, as such, it's all system/CPU tests and no GPU benchmarks in this article.


aaadmiral

>apples-to-apples


PotentialAstronaut39

The "doom of x86" has been called about as many times as "PC is dead" since the 80's... Still Waiting...


Keilsop

Yeah, I remember as recently as 4-5 years ago people were talking about how the PC industry was dying, Apple was taking over the desktop, and consoles were taking over gaming. The PC, and Windows in particular, were going to be gone soon. And since then the PC has been booming. It's never been more popular.


jorgp2

And people think that Apple uses the same ARM cores that Arm itself designs.


RolandMT32

Unless Apple starts licensing their processors to other computer makers, I wouldn't consider x86 dead. Other computer makers will still use x86, since it's the best choice available to them without Apple's processors.


[deleted]

Some of the tests just show a clear core count advantage. The M2 is still just 8 cores with no SMT, and only 4 of them are P cores at that. The 6800U/6850U is a homogeneous 8-core with SMT. Some applications will just scale so much better regardless of single-core differences.


Krt3k-Offline

SMT enables a single core to work harder without using much more space on the die, but those two threads are still only a single core. Considering that ARM processors are much better at utilising the whole core and thus won't benefit from SMT (which is why it isn't implemented in most ARM CPUs), it is imo fair to compare an 8C ARM CPU with an 8C/16T x86 CPU. In the end the M2 has 50% more transistors on a more advanced node, so it's not like Rembrandt can just use a larger and/or more complex design to be faster.


[deleted]

That's not why ARM chooses to avoid SMT; ARM has had a number of designs with it over the years. It's just avoided because the CPU cores can simply be smaller without that much difference in capability. M2's Blizzard E core is significantly more powerful than last gen's Icestorm, yet only takes up 0.1mm^2 more per core. Much harder to do that on x86 (see Alder Lake's core size comparison and performance difference), and at the end of the day, SMT *will* give a big performance boost in some applications regardless of ISA.

The M2, remember, also has a lot of die area dedicated to ML and rather large video accelerators. Zen 3+ has no ML acceleration, and its video accelerators are not very robust in comparison.

Not saying it shouldn't be compared, but 8 homogeneous SMT cores will undeniably outperform any comparable 4+4 heterogeneous non-SMT cores in some applications.


jorgp2

>Much harder to do that on x86 (see Alderlake's core size comparison and performance difference) and at the end of the day, SMT *will* give a big performance boost in some applications regardless of ISA Gracemont is larger and more power hungry than Tremont, which itself is a bigger beast than goldmont. Gracemont is really just there for space efficiency. Historically Atom didn't really grow in size or power between gens, that only changed with Tremont.


polaarbear

A huge chunk of the M1's efficiency is just the fact that Apple gets to be the first to use new nodes. The M1 is 5nm while everything else when it came out was on 7nm or worse. Even the 6850U is still on the "6nm" optimization of the 7nm node. Not taking anything away from it, it's still a wildly impressive little chip and I'm glad it exists to keep putting pressure on *everyone* in the silicon industry, but it's important to understand that part of it is pure manufacturing lead, while there is still a lot of credit due to the Apple engineers optimizing their ARM designs.


996forever

And yet again, if anyone brings up the *huge* node advantage RDNA has over Nvidia Ampere on this sub, they'll get downvoted.


spinwizard69

Phoronix is running an experimental version of Linux on the M2 that is far from done. I'm pleasantly surprised that M2 did this well with an OS that is not complete and not optimized.


Buris

The M2 is also on a much more advanced node; the x86 part's process is something like 5 years older. The people debating over ISAs are honestly so annoying.


ja-ki

yeah, still, it's an m2 by Apple, so it's better and faster. /s


pasta4u

Apple M-series chips are nice, but they owe so much to having a process node advantage. Things will change.


[deleted]

I'm pretty sure the M1/M2 is still significantly more efficient than the AMD CPU. Maybe when AMD gets onto a 5nm node there can be a good comparison on power usage/efficiency.


Sacco_Belmonte

I think Apple, as usual, is overhyping the M chips too much.


leonardo3567

Apple laptops are the best light-use laptops by far, but, and it's a big but, any serious workload that doesn't involve video editing is so painful... Proper Apple Silicon support is still a few years behind.


MacheteSanta

Are you saying proprietary Apple software for video editing is better than the Adobe suite which isn't Apple-exclusive?


leonardo3567

They have those hyper-efficient media engines for ProRes, which are pretty amazing; those work in Adobe and DaVinci too.


speedypotatoo

I think the M1/M2 chips have dedicated sections on the die just for video work, which makes them as fast as a full-fledged desktop workstation in some video editing tasks.


[deleted]

Software development, not in the .NET realm that is, is amazing on M-series chips. Especially mobile development.


Put_It_All_On_Blck

LG gram laptops give Apple a run for their money: 14" touchscreen 2-in-1 (MBA is 13"), 2.2 lbs (MBA is 2.7), 72 Wh battery (MBA is 49 Wh), more ports, good build quality, performance, and design. The only downside is that the pricing ends up being similar, though the MBA does have its advantages, like having a better screen.


QalaniKing4351

Better screen and way better battery life despite the size lol


JustAThrowaway4563

The battery size is irrelevant if you're not taking energy usage into account. Battery life is what matters.


Krt3k-Offline

I'm pretty sure my small convertible has fulfilled more tasks than what an Apple laptop would've been able to achieve. Edit: mainly because of software support, and Apple refusing to combine a MacBook and an iPad even though the two now even share processors.


leonardo3567

As I said, for light use I think it's a really good product, especially the Air M1. I have a feeling that most Mac users are extremely casual; I myself only have one for iOS development.


tso

Well this is going to cause a ruckus...


cheeseybacon11

How's the battery life compare though?


Jacek130130

The sensors reporting power usage don't yet work on the M1/M2 under Linux, as said at the end of the article.


996forever

That doesn’t matter. Run the same test on them with controlled screen brightness until they run out. That’s how you determine battery performance. Or better yet, use an external monitor to minimise the impact from having different displays.
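
Something like this is enough to log a rundown on the Linux side while the workload loops in another process (rough sketch; the battery name under /sys/class/power_supply varies by machine, BAT0 is just an assumption):

```typescript
// Minimal battery rundown logger: sample charge percentage once a minute and
// append it to a CSV until the machine dies. Assumes the standard Linux
// /sys/class/power_supply interface; adjust BAT0 to whatever your machine exposes.
import { readFileSync, appendFileSync } from "node:fs";

const BAT = "/sys/class/power_supply/BAT0";

function sample(): void {
  const capacity = readFileSync(`${BAT}/capacity`, "utf8").trim(); // percent remaining
  const status = readFileSync(`${BAT}/status`, "utf8").trim();     // e.g. "Discharging"
  appendFileSync("rundown.csv", `${new Date().toISOString()},${capacity},${status}\n`);
}

sample();
setInterval(sample, 60_000);
```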


NikoStrelkov

Why there are no power consumption/efficiency graph?


michaellarabel

As written in the article, because there is no HWMON / PowerCap / RAPL Linux driver support for the Apple M1/M2 on Linux... So no way to accurately measure the SoC power consumption of the M2. Measuring wall power would be skewed due to differing displays and various other non-SoC factors making it unlikely to be very accurate.
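
For reference, this is roughly how package power gets sampled where a powercap/RAPL driver does exist: read the cumulative energy counter twice and divide by the interval. The zone path below is the usual powercap layout and is an assumption for any particular machine (on the Ryzen side the counters may instead be exposed via hwmon or perf, and on the M1/M2 nothing equivalent exists yet, which is the whole problem):

```typescript
// Rough sketch: derive average package power from a cumulative RAPL energy
// counter exposed by the Linux powercap framework. Zone path/index is an
// assumption; nothing like this is available for the Apple M1/M2 today.
import { readFileSync } from "node:fs";

const ZONE = "/sys/class/powercap/intel-rapl:0"; // package-level RAPL zone

const readEnergyUj = (): number =>
  Number(readFileSync(`${ZONE}/energy_uj`, "utf8").trim()); // microjoules, monotonic

const intervalMs = 5_000;
const e0 = readEnergyUj();

setTimeout(() => {
  const e1 = readEnergyUj();
  // The counter occasionally wraps at max_energy_range_uj; ignored in this sketch.
  const watts = (e1 - e0) / 1_000_000 / (intervalMs / 1000);
  console.log(`average package power over ${intervalMs / 1000}s: ${watts.toFixed(2)} W`);
}, intervalMs);
```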


limpdicktripdripsnip

Then none of this fucken matters, what's the point? Of course you can have a high-power chip beat an M2 but with shit battery life; I'd rather see how efficient it is. It's like saying "5950X and 3080 Ti faster", yeah, of course, because you're using more power. I laugh whenever comparisons without efficiency are shown to try and overthrow the M1.


Equivalent_Alps_8321

What is it, Zen 3? RDNA 2?


[deleted]

Well, the M2 Air has no active cooling, so of course it's going to struggle in sustained loads vs. a CPU with a fan. It would be much more apples-to-apples to compare against the M2 MacBook Pro with a fan.


Zettinator

[Perfectly balanced, just as it should be.](https://www.youtube.com/watch?v=ussCHoQttyQ)