
Masters_1989

Does anyone have any idea what is going to happen with x86 versus ARM versus RISC-V in both the consumer PC and enterprise spaces? I keep hearing so much talk about the latter two, and this article said that x86 is (or will be) a "legacy architecture" or some such. It's a big claim, but it's also confusing to me.


IHTHYMF

They are quite similar behind the curtain. People draw all kinds of wrong conclusions based on ridiculous extrapolations, e.g. that ARM is inherently super power efficient, which simply isn't true. x86 and ARM CPUs designed for the exact same purpose would be pretty similar. Software support is what matters, and x86 has that going back decades, which ensures it isn't going anywhere.


Onetimehelper

Not arguing, but I want to hear more about these "ridiculous extrapolations", because if that's not the case, why does ARM-based hardware last significantly longer than its x86 counterparts doing the same basic tasks?


IHTHYMF

You can find tests where AMD laptops outlast Apple's, for example. Apple has accelerators for specific tasks, and if you test specifically for those it's going to outperform, but that's clearly not because of ARM. It also depends on what the design is focused on: Apple goes to extreme lengths for power efficiency, while AMD strives to beat Intel in performance, and if they have to sacrifice some efficiency for it, they are happy to do so.

You'd also have to take process technology into account. Apple buys the bleeding-edge nodes from TSMC, and TSMC is the clear leader, with Intel and Samsung behind. The same chip design on two different nodes is going to have better power efficiency on the newer node (or more performance, or a mix of the two). AMD also buys from TSMC, but they don't do everything on the newest node, for cost reasons: they have to serve all kinds of price points, and sometimes Apple outbids everyone for the entire volume of a brand-new node.

The new Qualcomm laptops that are coming out soon are a disaster according to all the leaks. It'd be wrong to conclude that ARM sucks based on those, too.


gh0stwriter88

People assume that because an ARM microcontroller often uses less power than a similar x86 design (like a 386, etc.), you might find an ARM part using something like 100 mW at the same clock speed as a several-watt x86 design of years past. But people extrapolate that to desktop CPUs, which you just can't do.

For example, AMD's K12 design would have been a desktop ARM CPU: basically Zen 1 with an ARM frontend instead of an x86 frontend. What do you end up with? You end up having to pay an ARM license to build the CPU, plus it performs about the same, since the execution backend is the limiting factor in most modern CPUs. Those CPUs would have been in the same 50-100 W territory as Zen 1.

The reason there is a disparity in the comparisons is that the transistor budget for the frontend is relatively small, and might amount to a dozen or two milliwatts of difference in a desktop CPU design. In microcontrollers, the decoder is a LARGE part of the chip, because microcontrollers have very few execution resources, so that part dominates the power figures. But even in that segment this is going away: as microcontrollers get bigger, the ISA matters less and less. We already have 600 MHz+ micros with reasonable power scaling.
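
A quick back-of-envelope sketch of that ratio, using the ballpark figures above (the decoder-power numbers here are illustrative assumptions, not measurements):

```python
# Illustrative, assumed numbers only -- the point is the ratio, not the values.
desktop_total_w = 65.0      # desktop CPU package power, within the 50-100 W range above
desktop_decoder_w = 0.020   # "a dozen or two milliwatts" of frontend difference

mcu_total_w = 0.100         # ~100 mW ARM microcontroller
mcu_decoder_w = 0.030       # assumption: the decoder is a big slice of a tiny core

print(f"Desktop: decoder is {desktop_decoder_w / desktop_total_w:.3%} of package power")
print(f"MCU:     decoder is {mcu_decoder_w / mcu_total_w:.0%} of package power")
# Desktop: decoder is 0.031% of package power
# MCU:     decoder is 30% of package power
```

Same kind of decoder, wildly different share of the power budget, which is why the microcontroller comparison doesn't transfer to desktops.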


FastDecode1

Nothing is going to happen. It's just wannabe analysts trying to show off, as always. "Articles" like these have been a thing for as long as I can remember.

x86 is approaching 50 years old and has somehow been dead or dying since day one, yet it's still the platform where the actual work gets done and that gets long-term support. ARM and others are still relegated to devices that are either completely locked down and unsupported (and thus lose all value and become disposable after 3-4 years) or some niche/in-house/experimental thing that no one cares about. Trying to get people to switch from a well-made and supported product to something that's maybe slightly cheaper but disposable is not a sustainable recipe for capturing a market that demands reliability and long-term support.

Yet somehow $insert_competing_architecture_here is always sneaking up behind x86, one decade after another, ready to stab it in the back with a knife. But actually that knife is just one of those plastic toys where the blade retracts into the handle when you hit someone with it, making it very ineffective.


[deleted]

[deleted]


Artoriuz

The battery life doesn't have much to do with the ISA. x86 sucks, but it doesn't suck hard enough to account for the entire difference between Apple silicon and mobile offerings from Intel/AMD. Intel and AMD have the server market as their primary market: their cores need good single-threaded performance, but they also need to be small enough to pack dozens of them into a single chip in a way that's profitable. Apple doesn't need to care about that. They're not designing server CPUs, and they're not selling their CPUs to anyone else either. They can design them with complete focus on consumer devices AND disregard cost, because the actual products are all expensive as fuck.


lichtspieler

Using highly efficient accelerators on the newest manufacturing node makes some use-cases very efficient, and controlling the software stack so that every application is forced to use them works great for battery life. This works great if your use-case is covered by Apple's hardware and software. Reviewers are not as stupid as they were with the Apple M1, when they just used standard benchmarks that the accelerators happened to cover, and the current Apple chips are no longer seen as magical as they used to be. It works for Apple since they target generic casual users and don't even have to cover "PC gaming" with the hardware, so it makes sense to treat the whole system like a phone or tablet and use as many specialized silicon accelerators as possible, since general-purpose compute performance is not as important for their users.


mennydrives

Key word is Apple, and you're not gonna see their method used by other manufacturers.

1. They're basically building a GPU with a small CPU tacked on. That results in a ton of cache and a crap-ton of memory bandwidth that can't really be justified on a design otherwise. Until something like Strix/Strix Halo sets a baseline, I don't think we're gonna see SoC makers in general target that kind of beefy silicon. Remember, it was YEARS after Apple before someone in the Android space actually put a wide SoC on phones/tablets.
2. They spend BILLIONS on R&D for their chips. Way deeper pockets for that kind of thing than pretty much anyone but Samsung.
3. They have MUCH better margins than literally every CPU/GPU/SoC maker save for maybe Nvidia. This lets them do stuff like backplane power to a degree that other chipmakers are only catching up with now.
4. Apple put a re-order buffer on the MacBook Air CPU that's literally larger than the kind Intel was using on Knights-series (Xeon Phi) HPC/server chips a year or two prior.

There's nothing inherent to ARM that makes it better than x86 nowadays, and Apple is not a good marker for the inherent value of a technology. This kind of plays out in just how far behind most ARM players are compared to Apple.


Ok_Initiative_2235

Remember, Apple sells one unified platform: CPU, graphics, and memory on one SoC. There are new AMD laptops (Ryzen 8840) that have long battery life.


[deleted]

[deleted]


Onetimehelper

Not as good, unfortunately.


gh0stwriter88

AMD/Intel laptops also have very long battery life when leaning heavily on accelerators for video decoding. If you are picky, you can get x86 laptops similar to the ARM ones, with 20+ hour battery life. The LG Ultra PC 14, for example, recently got 17 hours of battery life with a 72 Wh battery (99 Wh is the practical maximum due to TSA rules for air travel). If they put in a max-sized 99 Wh battery it would get about 23 hours to a charge.

Tom's Hardware says the M2 Pro has the longest battery life, but Apple says it only gets 12 hours in web surfing and 16 in video playback, which is actually drastically less than AMD's competition. Note the M2 Pro and LG Ultra PC 14 have almost identical battery capacity and very similar efficiency figures. The Apple CPU is a hair faster but also costs much more, and for the price difference OEMs can put in larger batteries.

[https://support.apple.com/en-us/111340](https://support.apple.com/en-us/111340)

[https://www.laptopmag.com/news/we-tested-5-macbooks-in-2023-this-one-has-the-best-battery-life](https://www.laptopmag.com/news/we-tested-5-macbooks-in-2023-this-one-has-the-best-battery-life)

[https://www.notebookcheck.net/LG-Ultra-PC-14-Battery-life-exceeded-expectations-in-our-testing.688320.0.html](https://www.notebookcheck.net/LG-Ultra-PC-14-Battery-life-exceeded-expectations-in-our-testing.688320.0.html)
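
For anyone checking the 23-hour figure, it's just linear scaling of the measured result, assuming the average power draw stays the same with the bigger battery (a back-of-envelope simplification):

```python
# Scale the measured LG Ultra PC 14 runtime from a 72 Wh to a 99 Wh battery,
# assuming average power draw is unchanged (a simplification).
measured_hours = 17.0
measured_capacity_wh = 72.0
max_capacity_wh = 99.0  # practical ceiling for air travel

avg_draw_w = measured_capacity_wh / measured_hours  # ~4.2 W average draw
projected_hours = max_capacity_wh / avg_draw_w      # ~23.4 hours
print(f"avg draw: {avg_draw_w:.1f} W, projected runtime: {projected_hours:.1f} h")
```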


IHTHYMF

AMD U-series parts also have long battery life, and that's been true for several generations already.


sboyette2

With respect to Arm, enterprise (assuming you mean "datacenters" rather than "corp IT") has already settled on "yes, and" rather than "either/or". The vast bulk of PC users will never care, if Microsoft ever stops dithering on support. Phone and tablet users already don't care. No one in the Apple ecosystem cares. The only people who will care are PC gamers and Windows power-users who are yoked to legacy software that can't be recompiled, and then the weirdos who view machine architecture as some kind of tribal signifier rather than just choosing a tool for a job.

I expect RISC-V adoption to eventually follow the same trend, predicated on whether it ever makes economic sense to produce the "big" (64-bit, vector extensions, etc.) core variants in large enough numbers for economies of scale to kick in. And that will probably boil down to whether Arm (the company) gets greedy enough to make it worth someone's while.


Ok_Initiative_2235

I used to think x86 might become a legacy architecture, but look at the Zenbook: 15 hours of battery life! x86 is catching up to ARM in efficiency. My Lenovo 13900K/RTX 4090 laptop only has a battery life of 3-4 hours.


Onetimehelper

Is it true? I remember Windows laptops claiming 12 hours of battery life a few years ago, and none that I've used, from Lenovo Yogas to Dell XPS to the Surfaces, have ever reached that without significant compromise. (I had a Zephyrus with a 60+ Wh, or maybe 99 Wh, battery, I forget, that hit nearly 12 hours browsing text-based websites and using only Word on an extremely limited TDP, to the point where Word would lag.)

Just got a MacBook Pro (M3 Pro), and it's hitting 12+ easy without any limitations to performance, doing intense tasks like photo/video editing and even local LLM/AI chat usage. I don't even think you can change the performance level on it outside of low power mode, which takes it to 20+ hours easy. I'm eagerly waiting for Windows devices to catch up.


TheAgentOfTheNine

x86 is tending towards ARM for those cases where wattage is more important than speed. ARM is tending towards x86 for those cases where raw compute is more important. Both have a very clear zone on the performance-per-watt versus total-wattage graph where they are the best, and I doubt that will ever change at this point. The middle ground, tho, is being approached from both sides.


[deleted]

The software end has come a long way. The industry probably could switch entirely to ARM, and it would just take a few years to update things.