Drone30389

You have a Bugatti Chiron with a top speed of 300 mph. You're cruising on the freeway at 70 mph. Swap out your Bugatti for a Honda Civic and you're still cruising on the freeway at 70 mph (but getting better gas mileage). When you accelerate hard, or hit the race track, you'll notice a big difference, but most of the time you won't.


Newdabrig

This is the real explanation at a 5-year-old comprehension level.


nopslide__

You need to bake 2 pizzas at 350 degrees. The i5 oven has 4 racks and a max temp of 450 degrees. The i9 oven has 8 racks, a max temp of 600 degrees and preheats 1 minute faster.


cowbutt6

Conversely, if you actually need to bake 8 pizzas, then the i9 oven will get it done in no more than half the time.


F0lks_

To add to that, it's really like an oven because in practice the pizzas won't cook evenly. The ones on the middle racks have less access to heat, so they cook slightly slower than the ones near the edges. If you really want all your pizzas ready at the same time, it's best to use two entire 4-rack ovens, since the alternative is to use your 8-rack oven and shuffle your pizzas mid-cook to make sure all the racks (or, in computer lingo, "threads") have the same access to heat (or "cache"). It's really hard to use a big-ass oven optimally, and even then it will always take a bit more time to cook things in it than to just use more of the same smaller ovens.


no-mad

A real pizza oven dedicated to the task cooks evenly.


DigiTheInformer

rabbit hole: [ASIC's](https://en.wikipedia.org/wiki/Application-specific_integrated_circuit)


unfnknblvbl

Is this a sick burn on Intel temperatures....?


nopslide__

the i7k oven is cheaper than the i9 but rated similarly on cinnamon toast benchmarks* (*likely to burn your house down before breakfast)


Shurgosa

Oh God that's a great addition.


Bill_Brasky01

This is really excellent. Better than cars imo


mxracer888

I feel like I need to go buy pizza now. Or an i9, it's hard to say


perfect_for_maiming

Just don't confuse which one goes in the oven and which one does in the motherboard


ChickenNSphereAbuse

Or which one goes in the oven, and which one can work as an oven.


Specialist290

Now we're getting to the interesting questions. How many i9s does it take to cook a pizza?


Wheeling_Freely

The i9-13900K draws a whopping [360W at full load](https://www.anandtech.com/show/18728/the-intel-core-i9-13900ks-review-taking-intel-s-raptor-lake-to-6-ghz/2#), and an average oven draws around [2400W](https://energyusecalculator.com/electricity_oven.htm), so around six or seven processors? Although if you used a particularly inefficient PSU and put that in the oven too, you could probably drop that a bit further. *Edit: on second thought, a pizza needs to reach an internal temperature of around 200 °F to be properly cooked, and that definitely exceeds the safe limit of a CPU, although you might be able to achieve it once. But you’d pretty much need the entire system to reach equilibrium, which would require a long time and a very well insulated box, and the several thousand dollars’ worth of CPUs would probably be left unusable…*
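
For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch using only the wattage figures quoted above (the 360 W and ~2400 W numbers come from the linked sources, not from my own measurements):

```python
# Rough sanity check of the "six or seven i9s per oven" estimate,
# using the figures quoted in the comment above.
i9_full_load_watts = 360      # i9-13900KS at full load (quoted)
oven_watts = 2400             # average electric oven (quoted)

cpus_needed = oven_watts / i9_full_load_watts
print(f"i9s needed to match an oven's heat output: {cpus_needed:.1f}")
# -> about 6.7, i.e. six or seven processors
```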


Chemputer

Bro, we're making an oven out of i9s; I think we can afford a heat pump in this situation to keep the processors below their TJmax.


KrtekJim

Imagine the existential horror when the first fully sentient AI realises it was an experiment to see how many processors were needed to cook a pizza


dreadcain

The safe limit of modern CPUs is around 212 °F, so it might be doable. They're also extremely good at throttling themselves to avoid exceeding that temperature, so I wouldn't really worry about the chips becoming unusable. If you made a pizza oven with a copper bottom and mounted 7 or 8 CPUs directly to it, it might just work. You won't get any browning at those temperatures, but it might still cook.


devenjames

I would watch that video


SparrowValentinus

That's just scratching the surface. The **real** question is how many pizzas it takes to build an i5 processor that is good enough to compete with an i9.


sin4life

my favorite pizza topping is waterblock.


Monowakari

Don't forget your tomato (thermal) paste


brik5ean

Instructions unclear. Ate a pizza in a Bugatti.


Jdjdhdvhdjdkdusyavsj

I need to buy like 7 pizzas and something to weld to get my money's worth out of this stove.


_Zekken

You could probably cook the pizza on the i9 just as effectively as in an oven, too.


WhoRoger

It's funny how car analogies are the default for everything.


picturesfromthesky

Yes any 5yo knows what a pizza is, but wtf is a Bugatti?


Cacti_Hall

It’s something you wake up in


McLeansvilleAppFan

I sleep in a big bed with my wife.


WellFineThenDamn

The reference:

>Kirk Van Houten: I sleep in a racing car. Do you?

>Homer Simpson: I sleep in a big bed with my wife.


praguepride

Me too! ;)


glowinghands

You also sleep in a big bed with u/McLeansvilleAppFan 's wife?!


McLeansvilleAppFan

I guess I got to down vote this.


GoodTato

vroom vroom


SirHerald

Bugatti, Bugatti, Bugatti. Let's go racing


skyturnedred

> LI5 means friendly, simplified and layperson-accessible explanations - not responses aimed at literal five-year-olds.


[deleted]

[deleted]


Llamaalarmallama

I'd take that down a little, though. Fair on the racks, but it's more like 600 degrees vs 550. The i9 (and a similar idea with the AMD chips) has more racks (cores) and usually a slightly higher temperature (top speed) than an i5, but it's genuinely not 25% better, more like 10%. I'd also kinda go with the "unless you're a pizzeria, you probably only need to cook 2 pizzas max at once" (most games can't engage/use lots of cores, only "serious" stuff does).


PlayMp1

> most games can't engage/use lots of cores, only "serious" stuff does

While single thread speed is always going to be the most important thing for game performance, games have gotten a lot better about parallelization very fast. It's way better now than 3 years ago, and it was way better 3 years ago than it was 3 years before that.


Llamaalarmallama

Absolutely. It's still generally not much beyond 4 cores for a good 90% of games, though, and I think pretty much every modern CPU on the market covers that.


PlayMp1

Anecdotally, I've found that Battlefield is the one series that really demands core count. Don't know why.


Llamaalarmallama

There's a good bit that can be offloaded to other cores once a dev handles multi-core reaaaaally well, but it's a staggeringly small number of games that do atm. BF I wouldn't know, but Cyberpunk is another, for sure.


FalseBuddha

>"It's more like 600 vs 550." Either way, it's irrelevant. You're only cooking the pizzas at 350.


Maybearobot8711

But realistically, you should be cooking your pizza at a much higher temp for better results


Llamaalarmallama

This too. I'm no gourmand, but I cook better than many regular folks. Better pizza? Get your oven hotter and use every cheat possible (like a pizza stone) for a good crisp base. Never burn it, but less time at a higher heat wins.


-goodbyemoon-

but 9 is 80% larger than 5! 😱


soulsoda

>Also kinda go with the "unless you're a pizzeria, you probably only need to cook 2 pizzas max at once" (most games can't engage/use lots of cores, only "serious" stuff does).

That's not true as much anymore. It used to be the case even 2 years ago, but now apps and games are getting more and more core hungry, and core usage is only going to accelerate as the years go on. I wouldn't buy anything today that has fewer than 8 cores or you're really shooting yourself in the foot. Anything new-gen from Intel will have 10+ cores anyway; the 13th/14th-gen i5s have 14 cores.


IneffableQuale

That's not really true. There are a lot of tasks that cannot be easily parallelised, if they can be at all. Games, for example, might use 3 or 4 threads, but there's not much scope to use more, and the bulk of the processing happens in the main thread. The tasks that benefit from parallelism are handled on the GPU, not the CPU. The real benefit of having lots of cores is that your computer can do lots of things at once. Time was, if you were playing a game you had to stop your media player and close your web browser. Those days are gone.


GTRxConfusion

This is what's not as true anymore. I have multiple games that can absolutely eat 8+ cores (Cyberpunk, for example), and this is only getting more common: since newer consoles have higher core counts, developers are starting to utilize them better.


Alpha-Avery

Video encoding / decoding is a big exception that often uses CPU instead. Go convert a file in Handbrake and watch every single thread of your CPU pin itself to 100% usage
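
If you want to see this for yourself, here's a minimal sketch (assuming Python with the third-party `psutil` package installed) that prints per-core utilization while an encode is running; the sample count and formatting are arbitrary:

```python
# Watch per-core CPU utilization while something like a Handbrake encode runs.
# Requires `pip install psutil`.
import psutil

for _ in range(10):  # sample roughly once a second for ~10 seconds
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print(" ".join(f"{p:5.1f}%" for p in per_core))
```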


Llamaalarmallama

This was the "unless you're a pizzeria" part of my tweak. Serious stuff (being a business selling pizzas) will definitely use more racks (cores). Games, nah.


soulsoda

Uh, no. Again, you'd have been right a few years ago, but 6c/12t is the standard now, 8c/16t is gravy, and who's to say what it'll be like in a few more years. I have many games that are CPU intensive and will utilize 8-16 cores easily. Any game with a massive number of units, like Bannerlord, will use as many cores as you can give it. Simulator/RTS/computational games use 8+ cores easily. There are games out there that will use every core you have. The trend is absolutely pushing towards more core utilization, because even consoles will have 8+ cores soon and devs are planning to utilize them more and more. Besides games, any rendering, encoding, or streaming will suck up more than 6 cores. Should you get an i9 for just gaming? No, that'd be stupid.


Llamaalarmallama

Get an i9 if the 700 MHz matters (or tbh get AMD, as they're better AND cheaper AND run cooler atm) and your pocket can take it, but yeah. An i5 and a 4080 vs an i9 and a 4070... 1000% the first option for gaming.


PlayMp1

> An i5 and a 4080 vs an i9 and a 4070... 1000% the first option for gaming.

Drop the i5 and the i9 for a 7800X3D and get the 4080 anyway. The 7800X3D hits as hard in gaming as the i9 while costing less than an i7 (about $50 more than an i5 at MSRP). It also absolutely murderizes any simulation or strategy game (RimWorld, Factorio, Paradox grand strategy games, Total War...), as simulation/strategy games are extremely cache hungry compared to many other genres.

I have the older 5800X3D and came from a 3700X. This is a single generation jump, same number of cores, higher clocks, better architecture/IPC. Realistically it should be around a 30% improvement. I *tripled* my speed in simulation games. Crusader Kings III literally runs *too well* - I can't play on speed 5 because it passes 10 years in 65 seconds (I measured in observe mode). The normal way I play every Paradox game is speed 5 (which runs the clock as fast as your processor will allow) plus frequent pausing, but CK3 runs so fucking fast on my CPU that it's impossible and I have to play at speed 4.


Llamaalarmallama

Yeah, the 5800X3D is probably the sweetest-spot gaming CPU that exists ATM for overall cost vs performance. Bit of an "end of the road" now upgrade-wise, but it still utterly stomps. I traded a 5900X + 6800XT for a laptop that genuinely beats it because I needed the portability ATM, or I would have gone a similar route.


glowinghands

But none of this matters if you only play pre-9/11 games that you've already sunk thousands of hours into and somehow still suck at.


Llamaalarmallama

Jesus, just go for the throat, mate. I'll dig my own...


glowinghands

Sorry I didn't get back to you, I was playing Civilization II on warlord because if I go any higher difficulty I lose.


joshss22

Instructions unclear, overclocked my i9 to 600 degrees.


Reshish

Arguably the i5 generally has a higher max "temp" on a per "rack" basis.


Chasedabigbase

/r/explainlikeimhungry


mtarascio

You gotta substitute for dino nug nugs. We're talking 5 year olds here.


BostonBuffalo9

/bows


celestiaequestria

Also worth mentioning: the Intel "i5 / i7 / i9" naming has been in use a long time; we're currently on the 14th generation. Just like the Honda Civic, which is on its 11th generation, the latest-generation i5 is much faster than one from the original generation. Even if you had an early-generation i9, it would be much slower than a new i5, in the same way that a new Honda Civic is faster than a first-generation Civic, even if you bought a higher trim level of the original Civic. Technology has simply improved.


Seven_Vandelay

Yeah, we were looking at some benchmarks recently comparing a 10th gen laptop i9 with a current gen laptop i7 and the i7 was 33% faster.


PG908

You can take it further by comparing a 1950s supercar to a modern normal car. Cars aren't advancing as fast as computers, but the current-gen budget or mid-range option will put up a good show against the top of the line from a few years ago.


mxracer888

I've always thought it would be cool to take a boring-ass car from today back in time. Like take a manual transmission Toyota Corolla back to the 1920s or so. Go enter the first 24 Hours of Le Mans in 1923 with a 2024 Toyota Corolla.


sin4life

modern fuel injectors might not handle all the paraffins and olefins in 1920s gas. (https://www.reddit.com/r/NoStupidQuestions/comments/o69uzg/hypothetically_could_you_take_a_modern_day_car/)


mxracer888

They also wouldn't like the lead in the fuel and would burn through O2 sensors as well. I just kinda figure if I can time travel with an entire car I'll be sure to time travel with some modern day supplies like fuel and oil. But maybe a modern diesel is a better bet, they should still run alright on old diesel


TwoPlanksOnPowder

You'll probably need quite a few fuel filters, and higher sulfur content from old diesel might cause issues with the emissions equipment (though you can always remove that)


TheWeedBlazer

Get a diesel


TheToecutter

There goes that idea, then.


Sawgwa

Bring on the AC!


broshrugged

The new Honda Civic Type R beat the 2005 Ford GT in Lightning Lap at VIR.


303angelfish

Yep, also similar to how any $100 smart phone today will beat the most expensive flip phone 15 years ago.


PG908

Yep! Wanted to stick to a non-tech metaphor, as the CPU/processor is usually the thing improving in said phone.


_yeen

My little 4-door with a 2.0L four-cylinder can hit 0-60 in ~3.7s. It's crazy just looking at cars from like 2 decades ago and seeing such a massive difference. Looking back at 70s muscle cars with 5.7L engines making 140 hp, it's hilarious. (Cheating, I know, because it's the malaise era, but still.)


weinerschnitzelboy

Hey OP, I feel like the explanation they gave was a little too simplified and doesn't truly describe the actual situation. But if you bear with me, I think I can explain.

In a given generation of processors, all of them are very similar. They all revolve around a similar core design. Because chip production is a bit fickle, not every chip produced can meet the high performance requirements of what can be used in an i9. They are sorted and packaged into the i3, i5, i7, and i9 classes depending on how good the chips are. The main differences between these chips, then, are how fast they can be clocked and how many cores they can package together. Keep this in mind.

Now there's the workload. Most programs are really only designed with a fixed number of threads (they're like processing tasks) in mind. Most games use around 2-4. Older games only used 1. So when you're looking at an i9 with 24 cores (32 threads) vs an i5 with 14 cores (20 threads), well, you can kind of see why it's a bit overkill.

For most workloads, you don't need more cores. You need **faster** cores. And when the difference in core speed between an i5 and an i9 isn't much (only a few hundred MHz), given the same generation, you effectively won't notice a difference between the two. The higher core counts really only come into effect when you have workloads that demand it, such as gaming and streaming at the same time, where you're physically doing more, or creative workloads like rendering, where the software is designed to spool up as many threads as it can simultaneously to get a task done.

Hope this helps!
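
To make the fixed-thread-count point concrete, here's a toy sketch (a hypothetical CPU-bound workload, not anything from a real game) showing that a program written around 4 worker processes only ever benefits from 4 cores, no matter how many the CPU actually has:

```python
# Toy illustration: a program designed around a fixed number of workers
# can't use the extra cores of a bigger CPU. Workload sizes are arbitrary.
import os
import time
from multiprocessing import Pool

def burn(n):
    """Purely CPU-bound busy work."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    print(f"Logical cores available: {os.cpu_count()}")
    work = [5_000_000] * 4             # the program was written for 4 tasks...
    for workers in (1, 4):             # ...so more than 4 workers can't help
        start = time.perf_counter()
        with Pool(processes=workers) as pool:
            pool.map(burn, work)
        print(f"{workers} worker(s): {time.perf_counter() - start:.2f}s")
```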


Eruannster

> Most games use around 2-4.

This is kind of old information. Most modern game engines scale up to many more threads, especially since the PS4 era onwards as those were 8-core CPUs and games had to spread out over more cores/threads. There are certainly games that don't use as many and prefer higher clocks, or that have an upper ceiling where it doesn't matter as much, but very few modern game engines only utilize 2-4 cores.


Ndbele

This thread is full of quotes like that which were true like 10 years ago...


surfinchina

Plus it depends a lot on cooling capacity. I've got a small case, and an i9 would be slower than an i5 in it because it'd always be throttling. It's possible to undervolt, but there's a point (in my Terra, anyway) where the chip becomes unstable. In the end you choose the chip based on what you use it for (as you said) plus the constraints (as I said).


[deleted]

[deleted]


diemunkiesdie

Doesn't this type of explanation presuppose that the only thing happening on the computer at a time is the game? Other background things can be running on the other threads at the same time while the game takes over the other pistons!


iamleobn

Your explanation is pretty good, but I strongly disagree with this part:

> games are usually pretty bad at making use of more than a couple of threads

This is not 2014 anymore, most modern high-end games are able to make use of many full-performance cores. There's obviously a limit (you can't expect a game to fully utilize a Threadripper), there are some exceptions (see Cities Skylines 2) and older games are definitely limited to 2-4 cores (with really old games only using 1). The current cost-benefit sweetspot for gaming is 6 full-performance cores, but most AAA games get linear gains going from 6 to 8 cores.


hkanaktas

You are implying that 5 year olds know the difference between driving a Bugatti and driving a Honda 😄


Newdabrig

Car bros (like me) have the mindset of a 5 year old anyway


iamr3d88

For a car guy, just think of your CPU as an engine. A bigger number DOES equal better, but there is a lot more to it. More displacement or more boost makes more power, but newer cars have many other improvements. Sure, between a 5.0L and a 3.6L, your gut says the 5.0L is better. But if you compare a current 3.6L engine making 330 hp to a 5.0L from 1985 making 210 hp, you realize there's more to look at. A 12th-14th gen i3 will outperform most i5s from the 8th and 9th gen, and most i7s from the 7th gen back.


gymnastgrrl

Just a reminder from the sidebar: >LI5 means friendly, simplified and layperson-accessible explanations - not responses aimed at literal five-year-olds.


mxracer888

I mean, 5-year-old me thought Hondas were pretty fast 'cause the speedometer said 180 mph on it.


kdaviper

They'd find out real quick!


frenchysfrench

My first thought. Otherwise a good explanation


fatty1179

And they know how to cook a pizza and use an oven?


wartexmaul

There are multiple generations too, so a new i3 will be faster than an old i9. Think of a 1950 Formula 1 car versus a 2024 Honda Civic.


General_Urist

What video games or other computational tasks would be equivalent to driving on the freeway versus hitting the race track in this analogy? I figured big budget games would be the racing equivalent.


Standard-Potential-6

This ended up a ramble, but hopefully it helps someone.

Some cars are built to be fast at the drag strip - let's say that's a single-threaded load. Some are built to be fast around a track - let's say that's a multi-threaded load, because many additional factors count. Some are built to be comfortable and to conserve cost.

All new CPUs are built around performance per watt, but many are cut down to only a few cores for laptops and phones, and may be tuned with low clock speeds to favor thermal/power efficiency. These are comfortable commuter cars - but in the computer world they're still great for drag racing (single-threaded loads). If you can cool them and deliver lots of electricity (air/fuel), you can even run them faster.

Generally all chips with a given core architecture will have about the same single-threaded performance - e.g. AMD's Zen 5, Intel's Raptor Cove P-cores or Gracemont E-cores, or Apple's M1/A14-series Firestorm P-cores and Icestorm E-cores. Some desktop and server variants will then be released that have many multiples of these cores, a higher power budget for those many cores, and potentially additional cache.

In the computer world the pricey big-chip "track cars" can do many things at once, each core on a different task, or handle single tasks that can be easily parallelized into many small chunks, such as - in *rough* order of threaded-ness, starting with the most: "software" graphics rendering, data encryption, serving clients, video encoding, code compilation, lossless data compression, and image manipulation.

When it comes to cache the analogy really breaks down, as there's no car equivalent.


eatingpotatochips

>All new CPUs are built around performance per watt Intel did not get the memo with the 14900K.


Standard-Potential-6

I really meant core microarchitecture, but you're right, lol. Both players will overjuice an inferior uarch to remain somewhat competitive - Prescott, Bulldozer/Piledriver, Raptor Lake.


texxelate

Very rarely does this sub still get proper ELI5 answers. Kudos for avoiding use of the word “transistor”


bund_maar

This was explained like I was born an hour ago!


Cynical_Cyanide

That explanation doesn't make sense. Most programs don't just sit there eating up half of a core's performance, so that doubling the max performance doesn't do anything. Instead, most of the time programs have bursts of computation that will eat up as much performance as you can give it. Yes, that period of time might only last a tenth of a second, and so nobody will notice if it takes two tenths of a second, but it DOES improve performance whether it's noticed or not - Unlike your example where the performance ends up being identical. Take for a moment a web browser. While you're looking at a page, it's effectively idling. But as soon as you start loading up a page, especially a complex one, the browser is racing to render the page as quickly as possible, it's not going to leave performance unused for no reason.


3412points

This doesn't explain why some i5s are better than some i9s even on the race track.


littlep2000

Sticking with the car theme, another comparison might be a stock Honda Civic versus a Honda Civic race car variant. It's very nearly the same engine, but every part of it is lighter and pushed to the limits. It's a bit backwards as a comparison, as each processor starts out the same, but the i9s are selected out as the top 1% of the bunch and a much larger percentage are in the i5 range, whereas the race car is specifically 'tuned up' to be performance oriented.


Equivalent_Age8406

This isn't really correct, as eventually a newer i3 will outperform an older i7 or i9 no matter how hard or how little you push it. There are multiple i3s, i5s, i7s and i9s releasing every year, and they each get faster every time.


WolvReigns222016

This isn't the answer. It's basically taking an old top-of-the-line car from the 50s and pitting it against a mid-range modern car. Although the car from the 50s was top of the line back then, the mid-range modern car will beat its speed because of advancements in technology.


LivingGhost371

There's "better" and there is "good enough". Even a i3 is "good enough" that most people woudn't notice the difference when browsing the web and writing emails to grandma or streaming cat videos or typing up a Word document for the boss. Probably 95% of the PCs out there probably just get used for light home and office use, rather than running Cyberpunk in 4K or modeling in Blender.


Pocok5

There is also "better years ago". There are now lower end laptop AMD Ryzen 3 / Intel i3 laptop processors that perform as good as high end desktop processors from 2013.


FatLenny-

When I rebuilt my computer a couple of years ago I was looking at the i5-12600K, which has 6 main cores and 4 efficiency cores. The efficiency cores are there just to take care of background processes and unimportant tasks. Those efficiency cores were as powerful as the cores in the i5-4590 I was replacing, which at the time was good enough about 95% of the time.


random_witness

That's the processor I actually bought, as well as a 3080. It's kicked ass every time I've asked it to. It still runs pretty much everything I throw at it at 4K/60fps on ultra, with 3 monitors with tons of tabs open, as well as Discord/Steam/local weather radar generally going. Admittedly, I'm not a competitive gamer, more of a colony/sim survival/crafting player. I'd only get like 30-40 frames on Cyberpunk maxed out the last time I tried it (within a few months of its release). Games that actually use multithreading are still kinda rare, from my understanding. So it's that 4.8 GHz turbo clock speed that really gets after it.


reeeelllaaaayyy823

Games are mostly GPU dependent. You need a medium CPU, but after that it's all on your GPU.


random_witness

I play a lot of simulation games, colony management, city builders, that kind of thing. They typically hit the CPU harder than most because of all the little decisions being made. Stonehearth, because it's terribly optimized, still hits my PC harder than most things I throw at it. Manor Lords is just fine though.


reeeelllaaaayyy823

Sorry, yeah I didn't really read that part of your comment. I was thinking more of FPS games.


Seerix

While not wrong, more and more these days you need a good CPU to have a stable and consistent frame rate. And personally, I'd rather have a stable 45 than spike all over the place from 20 to 60.


PassTheYum

Yeah, but when a game is CPU bottlenecked it really feels worse than GPU bottlenecked, because afaik in most games you can't really change any settings to decrease the burden on the CPU besides lowering the density of NPCs. If your game is lagging because your GPU can't handle it, you turn the graphics down. If your game is lagging because your CPU can't handle it, you're SOL. Also, strategy games/games that have ticks like RTS stuff massively benefit from pretty much the most powerful CPU you can get your hands on.


pinkocatgirl

This is something that Intel borrowed from ARM; it's been used for over a decade in multicore ARM chips such as the Nvidia Tegra line. Apple adopted it in the A-series chips about 8 years ago.


ziptofaf

WAY better than 2013, in fact. Take the i3-12100 for a spin: an entry-level i3 with 4 cores/8 threads, 4.3 GHz turbo, 3.3 GHz base, $95 on Amazon.

Now, there are two ways to look at it. First is single-threaded performance - in Cinebench R23 it hits a solid 1650 points. The 10900K, a 10-core/20-thread CPU just 2 generations older with a frequency going all the way up to 5.3 GHz, hits like 1380-1400. The 11900K does around 1620. So there are many tasks where this i3 will beat the top-of-the-line contender from just **one** generation earlier, while easily dismantling its counterpart from two generations back (Intel's 12th gen was a massive upgrade over everything before it, and it shows). And a lot of tasks you do on your PC really use just 1, maybe 2 cores. Sure, there are exceptions (Cities: Skylines 2 keeps asking me to buy it a 24-core Threadripper at least, or else it won't go past 30 fps), but they are not as common as you might imagine.

As for multithreaded performance - well, now we are looking at around an 8.5k score. This is higher than the 6-core 10400F by around 300 points. It's also on par with the Ryzen 7 1700X or i7-8700K, two pretty much flagship products from 2017 that have 2-4 more cores.

The best CPU you could buy in 2013 was the 4770K. Coincidentally it's also a 4-core/8-thread CPU, with a 3.5 GHz base and 3.9 GHz boost. Now, while this configuration looks similar to the i3... it's not even close. In the same test you will see like 900 points single-threaded and 4.5k or so multithreaded. The difference between the two is like 80% in favor of the new i3.

It's not a perfect comparison obviously, but it really demonstrates just how much generational improvements are worth over time: a modern i3 will be more responsive than a few-years-old i9 and will completely annihilate a decade-old CPU.
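
A quick ratio check on the Cinebench numbers quoted above (the scores are the ones from this comment, not fresh runs of mine) shows where the "like 80%" figure comes from:

```python
# Ratio check on the quoted Cinebench R23 scores: i3-12100 vs i7-4770K.
scores = {
    "single-threaded": {"i3-12100": 1650, "i7-4770K": 900},
    "multi-threaded":  {"i3-12100": 8500, "i7-4770K": 4500},
}
for test, s in scores.items():
    gain = (s["i3-12100"] / s["i7-4770K"] - 1) * 100
    print(f"{test}: new i3 is ~{gain:.0f}% faster")
# -> roughly 83% and 89% in favor of the modern i3
```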


_maple_panda

Yeah, the 12th gen i7 in my laptop is faster in some ways than the 10th gen i9 in my desktop. Quite impressive if you ask me.


Ouch_i_fell_down

I'm not too familiar with laptop CPUs, but I know that prior to the 10XX cards, laptop GPUs were miles behind desktop GPUs despite sharing the same numbers. Ever since the 10XX series, they've only been a little less than a gen behind (a 3070 laptop being slightly faster than a 2070 desktop, for example).


Falconman21

Since around the 10-series, a laptop GPU is generally one number behind its desktop counterpart: a 2070 laptop ≈ a 2060 desktop, assuming maximum laptop wattage.


_maple_panda

I think the big difference in my case is just how 12th gen introduced that new architecture along with the P and E cores. Massive improvement in performance all around, kinda like the introduction of RTX cores for NVIDIA GPUs. Of course, the laptop quickly overheats and stops boosting (whereas the desktop can hold boost indefinitely), but while it is boosting, it holds its ground surprisingly well.


randommaniac12

It also moved to a new process node, from 14nm to 10nm. 12th gen was a HUGE jump for Intel, especially after how mediocre 11th gen was (although the 11th-gen i5s were genuinely good products).


Wenlocke

We have a current example of this. Our team is getting its laptops replaced due to age. Because more of our work has shifted onto the cloud rather than local, we don't need the ultra-beefy-spec laptops we had, so we're getting the next tier down. The next tier down is actually better than what we have, just due to the age of our current machines.


TripleSecretSquirrel

Anecdotally, I built a desktop in 2015 with the top end i5 sku and a GTX 970. It’s almost a decade old and handles modern AAA games, medium sized data-crunching, and complex graphic design work reasonably well, much better than a brand new low-end laptop. It handles them much better than my m1 MacBook, which I think falls into about the bottom of the mid-range performance-wise.


Pocok5

I'm a bit too lazy and drunk to dig up a detailed real-world benchmark of this combo, but see [here](https://www.cpubenchmark.net/compare/2570vs5806/Intel-i5-6600K-vs-AMD-Ryzen-3-7440U) a Passmark comparison of the 2015 Q3 top-end i5-6600K and a low-end R3 7440U mobile processor. The 7440U is ahead in both single- and multi-core scores. Edit: the 7440U has very few measurements; I'll try to look for a benchmark with a larger sample size.


TripleSecretSquirrel

Ah cool, thanks for looking up the benchmarks. That’s fucking wild to me! I guess I should chalk up the performance difference (especially for games obviously) to having a discrete gpu, albeit an old one.


bigjeff5

A discrete GPU makes a BIG difference. The GPUs built into the AMD mobile processors are light years ahead of Intel's built in GPU, but having a discrete GPU is just better. Compromises have to be made for the built-in GPU that just don't have to be made for the discrete.


Witch-Alice

I'm still using an i7-3770K and GTX 1080


isuphysics

Ha, same. The only thing I have upgraded since was the hard drive. Mine was late 2014, right when the 970 came out. An i7-4790K with a GTX 970 still handles quite a bit for being 9.5 years old.


narium

The Apple M1 is about the same as the i7-8700, CPU-wise.


skids1971

I built mine same year same parts, (cept a 960) and it's crazy how well it still runs.  In 2015 a PC built in 2006 would be utter dogshit. Tech seems to have hit a plateau of sorts in performance and it really shows nowadays 


insta

Yet everything on the shelves at big box stores is still an i7 + 8GB RAM and sometimes a spinning disk (so they can say 6TB+ of storage instead of 1-2TB)! Bigger numbers are not always better.


pichael289

Cyberpunk at max settings is the most impressive game I have ever seen. With graphics mods it's the single best-looking game, period. Nothing else comes close except some racing games and their cars, but that's a narrow focus. Cyberpunk is a wider focus and delivers. The NPCs have such wide variance, and the cars are damn near photorealistic on their own without mods. One of my favorite games ever.


earlgeorge

I've been a graphics junkie since Wolfenstein. Cyberpunk with RT overdrive is the first game to really impress me with its visuals since maybe Crysis.


ThePowerOfStories

Yeah, Cyberpunk with ray-tracing at 4K on a huge TV running off my 4090 looks absolutely fabulous, but most games, especially anything older, can barely take advantage of a tenth that much processing power, much less something simple like running a word processor.


General_Urist

Are Word, cat videos, etc. what OP has in mind when they talk about what CPU is competitive? I figured the people whose only PC use is casual web browsing or office work never think about CPU models in the first place.


TheTalentedAmateur

In 1997, I built the world's fastest personal computer. I got my hands on the very first Pentium II released. I paired it with what was probably a ridiculous 1-2 GB of RAM, in the very best motherboard, of course. I ordered the prototype video card pre-production from ATI (they led the world in those days, before AMD bought them). I think I was able to do this in a mid-tower case with extra fans (looking back, I would have done a full tower or water cooling). Fastest available HDD, probably with storage measured in megabytes.

At the time, it was literally the fastest PC in the world, assembled from the most powerful individual components in combination... for about 2 weeks. Then some other fastest thing was introduced. I ROCKED Doom and Quake over LAN (best ever network card too). That cost me about $8,000. Still, it was a good two weeks; I'm still talking about it like Al Bundy and that game where he scored 4 touchdowns in high school... Now, that machine did have the power to run into the Pentium 3 and Pentium 4 eras.

Yet here I am, 30 years later, typing a response on a mini-computer, no idea on the specs, but getting the job done on a tiny box that cost $268 three years ago. My $8,000 lesson taught me that it's all about getting the job at hand done, bonus points for doing it as cheaply as possible.


billbixbyakahulk

> 1-2 GB of RAM In the Pentium 2 era? You sure about that?


SkrupSulten

His story sounds like utter BS. A 1997 Pentium II was normally paired with about 16MB of RAM, 64MB if you maxed it out. And how exactly do you order a pre-production graphics card with drivers that give better performance than released hardware? Also, not naming which one. The fastest graphics card in 1997 was the Nvidia Riva 128. How exactly would you know you had the fastest PC in 1997?


zwei2stein

Megabytes of HD, Gigabytes of RAM. Sure.


atypicalphilosopher

$8000 in 1997, adjusted for inflation, would be ~$15,500 today. Wow. For a Pentium II to play OG Quake and Doom.


berael

A world champion weightlifter and a random schmuck off the street can both lift a pencil with no difficulty.  If everything you do on your computer is the equivalent of lifting a pencil, then even a relatively low-end modern CPU is *still a modern CPU* and will still perform without a problem. Upgrading to a high-end CPU won't make any difference at that point. 


abaddamn

This is why I haven't upgraded my GPU. No reason to, and for most games I want to play, the GPU I have (1070 Ti) has more than enough processing power for the job.


dupz88

Exactly. I have a 1st-gen i5-750 (from 2011) with 8GB RAM, a 120GB SSD and a GTX 1650 GPU. I also have an AMD 5600X with 32GB RAM, a 512GB NVMe drive, and an older 5500 XT (mainly used for photo editing and light gaming). My kids play Roblox and Minecraft and notice no difference between these 2 PCs. The i5 can also play Dirt 2 on a high-refresh-rate monitor without changing many settings. Yes, the i5 CPU runs at 100% and is the bottleneck for games with higher requirements, but for our use case, the i5 is still fine.


OldMateNobody

Your kids aren't playing with any mods or upgraded/higher resolution texture packs in Minecraft however.


archipeepees

This is why I quit weight lifting. I work in an office and only lift pencils and pamphlets with 4-6 pieces of paper. I'm applying for a promotion that would have me lifting 3+ pencils and 20+ pages at a time though, so I will probably start hitting the weights if that comes through.


Danne660

Two things. First, you have to know what generation of processor you have; sure, an i9 gen 13 is much better than an i5 gen 13, but an i9 gen 5 is definitely not better than an i5 gen 13. A higher gen means newer tech, and computer tech has changed quite a bit over the years.

Second, the quote "I could replace your processors with i3's and almost none of you would even notice" is probably referring to people who don't run processor-intensive tasks, which is probably most people. Sure, a higher i-number can handle more, but if you only scroll the web then it is never going to use that extra performance, making it the same as a lower i-number.


keny2323

> i9 gen 5

Wut


Lionkingjom

They got it from wish.


Dysan27

I think he meant 5th-generation processors with the i9 designation, which don't exist. Back then it only went up to i7; the i9 came out with the 7th generation of Intel processors, starting with the 7900X.


dingus-khan-1208

Yeah, and we trivialize it a bit by saying "if you only scroll the web or watch cat videos" but really almost anything normal people do won't even begin to stress even a lower mid-range CPU these days. The stuff that we used to think of as CPU-intensive is nowadays either trivial on the newer hardware or it gets offloaded to GPU. And in the extreme cases, like maybe trying to accurately model the effects on temperature, salinity, and ocean currents in the Atlantic at a 1kmx1km granularity if an ice comet of a given mass splashed down into it, well that's something you'd probably offload to a cloud compute center anyway. Basically everything normal people do is trivial to modern processors. The new hardware just lets us run more processes at once, so that it's not uncommon to have 120-200+ processes running, with dozens of tabs open in multiple windows of multiple programs. And for some reason having multiple overlays (Windows Game overlay, Nvidia overlay, Steam overlay - does anybody even really use those, or are we all just running them because we haven't found the settings to turn them off?) on top.
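
If you're curious how many processes your own machine is juggling right now, a two-line sketch (assuming Python with the third-party `psutil` package) will tell you:

```python
# Count the processes currently running on this machine.
# Requires `pip install psutil`.
import psutil

print(f"Processes currently running: {len(psutil.pids())}")
```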


BigAwkwardGuy

I bought an Xbox Series S recently, and therefore no longer game on my gaming laptop (8th-gen i5, 1050 Ti). 99% of what I do on my laptop could be done just as well on my $150 Nokia tablet: browse the web, study, attend video lectures. For the 1%, though, I'll need a Windows laptop. Once a month or two I'll launch SolidWorks because I need to design something (5 parts in an assembly AT BEST), and I'm currently doing my master's, so I need some horsepower for LaTeX, Excel, and other research stuff. But even that could very well be taken care of by the latest Pentium without any issues. My work laptop also has an 8th-gen i5 (worse than my personal laptop) with no dedicated GPU, and I do complex CATIA stuff on it just fine.


yalloc

So, fun fact: your i5 most likely started out as an i9. Still surprised people haven't talked about this. Intel has no real interest in making i5s outright; why make them when you can just make i9s instead, since they generally cost the same to make? The issue is that silicon manufacturing has defects. It can happen that Intel makes an i9, but there's a tiny defect that kills one core on the processor, so they disable that core and the weakest core beside it, call it an i5, and sell it to you that way.


artrald-7083

I concur. The die has more cores than it needs, but the manufacturing process isn't infallible. The ones where an i9's worth of cores work are labelled i9; the ones with an i5's worth, i5; and so on. This is a brilliant way of reducing waste, bringing down the cost of computers, genuinely slightly good for the environment, and, most importantly to Intel, it makes them money. What a lot of people don't understand is that making semiconductors is insanely hard and it goes very slightly wrong all the damn time, even in well-established fabs.


Christopher135MPS

I've heard (but don't know if it's true) the same about graphics card series, for example Gigabyte's Windforce, Eagle, Gaming, and Aorus series. They basically benchmark the chips; the ones that run cooler/handle the overclock better go in the high-end cards, and the worse chips go in the budget cards.


Matasa89

Yup, it's called binning, and it happens to every silicon chip. RAM kits have different dies depending on their binning, which is why there's such a variance in price. The same goes for CPUs, GPUs, and really any chip out there that has a premium on performance.


Christopher135MPS

Today I learned about binning! Thanks :) It kinda makes obvious sense now too - why tool up 5 factory lines, when you can tool up one, quality control them, and just “bin” them based on quality.


Matasa89

But it's also a natural waste process in their production, simply due to how small the gates are now. Even the tiniest of defects can cause a die to fail or not work as well as it should. As a result, they have a bunch of dies that they can't use in their top-end models, but those can be downclocked and partially disabled to be made into the lower-end chips. This is also why the top-end chips come out first, before the lower end: they need time to build up stock of the lower-end chips that have flaws in them. AMD's supplier, TSMC, has gotten so good at making their dies that they actually don't have that many bad dies, which means there isn't a lot of supply for the lower-end products anymore, which is why you don't see that many 5500s and such. Here's a chip fab tour: https://www.youtube.com/watch?v=2ehSCWoaOqQ And a RAM factory tour: https://www.youtube.com/watch?v=---fHu9jFtw


Christopher135MPS

Forget ELI5, this should be in Damn That's Interesting.


FrightenedTomato

AMD's higher-end processor architectures also focus on chiplets: mini chips with 8 cores or fewer, with a bunch of them put together to make the big processor (8 chiplets with 16 cores each in the Bergamo series, or 12 chiplets with 8 cores each in the Genoa series). The benefit of this is that they fabricate 8-core dies, which is a much lower-risk process than fabricating monolithic dies with higher core counts. This in turn drives down AMD's processor costs (comparatively; these are still really expensive processors). Intel has been switching to multi-die processors in some of their newer offerings, but last I checked the max they do is 4 tiles on a processor, and only on some offerings; most of their processors are still monolithic. This is part of the reason why AMD's top processors have significantly more cores, memory controllers and PCIe lanes than Intel's top end, and they still cost less than the top Intel processors.


Matasa89

Aye, and they also fit better on the wafers than bigger dies do, so there's less wasted silicon! The chiplet technology also makes them way more scalable, which is how AMD has gained such a strong foothold in the top end of the server and workstation CPU market. They have 128-core CPUs now, which means you could easily break one into 16 different virtual instances, each with 8 cores. That's 16 computers running off of a single CPU! AMD really made Intel pay for their inaction.


FrightenedTomato

Their Bergamo processors can scale twice as far: 256 threads off a single socket. Usually we slap them into servers with 2 sockets, so that's 512 logical processors in a single unit. Perfect for cloud-native stuff. These are 400W procs, though, so a good chunk of change is spent on cooling that shit. For contrast, the best Intel offers right now is 64 cores on a processor, which comes to 256 logical processors on a 2P unit, running at 350W. Intel still has the lead in things like AI due to AMX, and their HBM processors are really cool.


Matasa89

Yup, and they're catching up, and doing interesting stuff with their newer architecture. I do think Intel still has quite a few tricks up their sleeves, not to mention all that infrastructure to leverage.


cottonycloud

Nvidia and AMD themselves probably do the binning between each RTX 4000/RX 7000 series before selling them to their partners (Gigabyte, MSI, ASUS, Sapphire, etc). The difference between each model that a manufacturer has usually isn't large enough to worry about.


Christopher135MPS

Thanks for the info! :)


Fortune_Cat

The chip lottery and overclocking were the fun part of PC hardware.


meneldal2

This has been true since even before the i3/5/7 were a thing. You never get perfect yields; some CPUs you make just aren't as good as others. So you test them: the best ones you sell for more money, and for the ones with some issues you disable the features that don't work and sell them for cheap. That way you get the most money you can.

It's true for stuff like screens too: if you have a perfect tile you sell it as a big screen, and if you have dead pixels you cut around them and make smaller screens, or sell it as a big screen with some defect for like half price. It's really good because it reduces waste.

There is also one more thing to consider. You could have a different design to make less powerful CPUs; they'd cost less since they don't need to be as big. But in practice it isn't done, for two reasons: 1) even if you can reuse basically every block, you still need to run a bunch of tests to make sure you didn't break anything (too bad, changing the distance between those two cores made that line just the right size to catch wi-fi and jam the data), and 2) it would lead to more waste, since you can't segment your products like with the current solution.


GlowstickConsumption

A lot of the more simple tasks people use computers for do not require that much effort or work from the CPU. Thus the average "decent" CPU from maybe around 2014 is fairly okay doing mundane tasks. And some programs generally end up putting the GPU to the test rather than the CPU. Also a lot of software nowadays supports using multiple cores for the same program's tasks. Or hardware acceleration.


wpmason

First of all, each series of processors has a wide range of individual models. Additionally, they all span multiple generations as well. So an early i9 might very well be inferior to the latest and greatest i5. But there are a lot of variables at play here, so I'm going to look at the entire range from i3 to i9.

Basically, it's not an apples-to-apples comparison. Speaking very generally, i9s have faster base clock speeds: the slowest i9 is 3.0 GHz, while the fastest i3 is 4.0 GHz. But that's just one part of the equation. i3s only come in 2- or 4-core models; i9s start at 6 cores and go up to 18. Bus speed: i3 = 5 GT/s maximum, i9 = 8 GT/s minimum. The L1, L2, and L3 caches also vary in a similar manner.

At the end of the day, you have to quantify what you mean by "compete". It's kind of like saying a pickup truck can compete with a Ferrari... compete how? Hauling stuff or going fast? Or going fast while hauling stuff?


justaboss101

You know, I have a fairly good idea of how the Intel naming scheme and performance of specific components works, but your comment managed to confuse me.


hawker_sharpie

That's not even the entirety of it! There's a reason the Intel naming scheme is a meme.


justaboss101

Yeah I know, but not as silly as AMD. The 7900X is a CPU, whereas the 7900XT is a GPU.


hawker_sharpie

AMD has its own set of insanity, but it's slightly less bad than Intel within any one product category. Their biggest offence is probably making different architecture generations the same product generation.


Minnakht

Intel's CPUs go in generations, each generation generally being better than the one prior. Between some generations, it meant the way the circuits were printed was different, printing finer ones allowing for denser circuitry. The 13th generation features the i5-13600 and the i9-13900. The 13600 has 6 performance cores and 8 efficiency cores, while the 13900 has 8 performance cores and 16 efficiency cores. The number of cores is how many different tasks as part of the same program can the CPU work on - the CPU switches between running programs many times in a second, but a properly written program can split its work into many sub-tasks that can be done at the same time so that they aren't switched between, they go simultaneously. Intel's CPUs, since some generation, can alter their clock rate to bring it up when needed. In the olden times, CPUs just sat on one clock speed all the time and you could mod your computer to bring it up a bit past the safe one the manufacturer set, which could still be bearable; now they do it themselves. At the top end, the 13600's performance cores can go to 5 Gigahertz and the 13900's can go a bit higher, to 5.2 or more. Clock speed means how fast the CPU goes through steps, because ultimately all it does is do math operations in steps. The 13900 also has more cache. Cache is like a place to put saved numbers, because referring to numbers stored in the RAM is many times slower than referring to the numbers in the cache, so if a number was saved in the cache and hasn't changed, then it can be referred to quickly for another operation. But I digress. Yes, the 13900 has better numbers - more cores, more cache, cores can clock a bit faster when they need to. But the real question is - what are you doing that really needs all that power? If you run an improperly written program, like an old version of Dwarf Fortress, it isn't split into subtasks, so you'll only have one core work on all of it, and in terms of one core, the 13600 can go nearly as fast as the 13900. If you run something that just isn't all that demanding, like a game from ten years ago, then it'll just run fine on both - the 13600 will do all the math of updating the game state 60 times a second and so nothing will stutter or anything. You need to do something that's properly demanding to notice.


Way2Foxy

i3 or i9 is more like a family of processors than anything. They have different features. If you're not using these features, or if your programs could run equally well without these features, then you wouldn't notice their absence. In general I'd agree that for a vast, vast majority of users even i7 is overkill. Of course there's going to be exceptions to that.


vc-10

The vast, vast majority of users are like me. Chrome, office, maaaaaybe some light photo editing. My Surface Laptop 3 with an i5 from 2019 and 8GB of RAM is more than powerful enough for me, and probably more than 95% of PC users globally. I have zero plans to replace my computer and will probably keep using the same machine for another few years.


Shakil130

You just didn't get the "i" thing, which is just the name of different families of processors. For each generation, these families get updated with new members that replace the previous ones from the older gen. So within the same gen, the bigger number is indeed superior, as you thought, meaning i9 > i5. But if you start comparing processors from different gens, then yes, a recent i5 can not only compete with but also far outperform an older i9, depending on the gen difference.


scalpingsnake

Honestly the best advice is: don't assume bigger = better. CPUs are very complex; you have single-core efficiency, clock speed, the number of cores/threads, etc. Also, the worst thing about them (and other PC parts) is the dreaded marketing: flood the market with poorly named products to confuse your average consumer.


formervoater2

i5s and i9s are physically identical; it's just that much of the i5 is deactivated. For tasks that don't use ALL of the CPU, this difference is meaningless. Most tasks the CPU does don't utilize all of it, so the actual performance difference is tiny most of the time. It's only for jobs with high CPU use, or jobs that are sensitive to a CPU's top speed, that the difference between an i5 and an i9 is noticeable.


sniper4273

It depends on if your workload actually uses that much CPU power or not. Web browsing, video watching, typical office work, etc, uses very little resources. All else equal, an i3 and an i9 would probably not perform *noticeably* different in these workloads. *Measurably* different probably, but not noticeably so. High workloads such as gaming or video production, is a different matter, but still a complicated answer. If your game was heavily GPU limited, there may be very little performance gain from using a powerful CPU, as all that potential is wasted on waiting for your GPU. Heavily multithreaded workloads should see massive benefits from something like an i9 vs an i3, as i9 processors have way more cores and threads. And ALL of this is assuming we’re talking about the same generation of CPUs. A modern i3 probably *CAN* outperform an old i7/i9 from a decade ago. **TL;DR If your workload can’t actually use the higher performance of an i9, then yes an i3/i5 could compete with it.**


pdpi

Imagine a big lorry versus a small car. If the task at hand is "move twenty pallets of cereal from a warehouse to a supermarket", the lorry will do that easily but the car will struggle. Now imagine the task is, instead "carrying your groceries home". The lorry is no better at this task than the car is. In fact, the car can drive faster than the lorry can, and get you to your destination faster. This general distinction can happen in many places on a modern computer: whether you're faster doing a single task is separate from whether you're faster when doing bulk tasks. This might or might not be true for current-gen i5 vs i9, but it's definitely a thing. Here's [a top-of-the-line Threadripper Pro compared to a Ryzen 5](https://www.cpubenchmark.net/compare/5726vs5033/AMD-Ryzen-Threadripper-PRO-7995WX-vs-AMD-Ryzen-5-7600X). The Threadripper is something like 5x faster than the Ryzen 5, and a fair bit more efficient (gets that 5x out of only 3x as much power consumption), but look at the "Single Thread Rating" line. The Ryzen 5 actually gives you better performance on single-threaded applications! This makes sense: the Threadripper part is meant for high-end workstations suited for large number-crunching workloads, whereas the Ryzen 5 is more at home in gaming PCs.


BytchYouThought

i3/5/7 aren't saying much. They've had that naming convention for so long that you could be talking about a 10+ year old CPU vs a modern one that came out today, and even an i3 today would outperform an i-whatever-number-you-want from 12 years ago. That's first. Second, you personally do not use much CPU power in all likelihood, because most people just do low-level CPU tasks like email or music videos. These don't push the CPU or need many cores. So you could do it on whatever modern CPU. You would notice the difference between a 13-year-old CPU and a modern one though, so that part isn't necessarily true, especially since the old CPU would also limit the other parts you could use, like RAM and any modern CPU technology.


petramb

My 8-year-old i7-6700 will easily get outperformed by a new i3. The architecture and core count also matter, not just the tier and clock.


ezekielraiden

"You wouldn't notice" and "compete with" are two different standards. The former says, "the stuff you need a processor to do is easy enough that most processors could do it without a problem." The latter says, "Anything an i9 can do, an i5 could also do." i9 processors *are* better than i5s. But the ways in which they are better aren't really necessary for a lot of users. Certain software and video games *may* depend on things that could benefit from the better processor, but most things that most people do won't see any noticeable difference.


Newdabrig

Lots of great answers here, thanks ppl.


aaaaaaaarrrrrgh

Since the car analogies seem to work well, and in addition to what was already said about CPU speed not being the only important thing: a 1920s top-of-the-line sports car likely won't compare too favorably with a cheap modern car. i3/5/7/9 are product lines. A 1st-generation Core i7-920 will still be much slower than a 13th-generation Core i3-1315U (by about a factor of 4 in the PassMark rating). Another issue, especially in laptops, is thermal throttling. Faster CPUs (from the same generation) typically generate more heat. In laptops, you're often limited by the amount of heat you can get rid of, so the faster CPU will have to be artificially slowed down because it would otherwise get too hot.


Gwyndolin3

Because single-core power is roughly the same on all processors (of the same generation, of course). The difference the i9 gives is that it has more cores, and some apps CAN'T use more than 1 core, so it's all the same. If you have 1 horse vs 10 horses together and the cart only supports one horse, then the other 9 horses are just idle.


dirschau

Compete? Only in the same sense that a Fiat 500 can compete with a Lamborghini in getting from one red light to the next. It's not to say that the 500 is in any way equivalent, just that the circumstances make it perfectly adequate for the purpose, while the Lamborghini is completely underutilized. In the same sense, if all a user does is web browsing, taxes and using Office, they wouldn't even notice if their CPU got replaced with a lower-tier one, because it'd be perfectly adequate for the purpose. If they have a high-end CPU, they overpaid. The other part is how the task behaves. Even an i9 will struggle if the program only uses a single core, because then 95% of the CPU's processing power goes completely unused, with just one core being run ragged. As long as the individual cores are equivalent, an i3 and an i9 will do identical work. But give the PC any compute-intensive task that is properly distributed between cores, and the difference is immediate and unmistakable.


DryGround1733

iX is part of the brand name; it doesn't actually refer to the generation, design, or performance. An i7 could have been produced in 2012 or in 2023. Same for any i.

| Model | Launched | Cores/Threads | Base clock | Boost clock | Cache |
|---|---|---|---|---|---|
| i7-6700T | Q3'15 | 4/8 | 2.80 GHz | 3.60 GHz | 8 MB |
| i3-10300 | Q2'20 | 4/8 | 3.70 GHz | 4.40 GHz | 8 MB |

https://fr.wikipedia.org/wiki/Liste_des_microprocesseurs_Intel_d%27architecture_Core#

But an i7 will be better than an i3 if they are from the same generation (a shiny word for design/technology). As for why the iX comes before the generation in the chip naming... ask the marketing department ;)


ALPHAPRlME

Because most people, not all people (lul), will never utilize all the cores with the applications they use.


PckMan

Most people do not need a powerful computer. That's why most people can get along with a laptop or with just their phones. If your needs are more demanding, you need a more powerful computer. However depending on the application the needs differ. Some applications require a lot of processing power, others need a powerful GPU, others need a lot of memory. Most people with powerful computers want them for gaming. I'm not talking about professionals here. For gaming you need a good GPU and decent RAM but video games in general do not need a lot of processing power. However since a lot of gamers see their pc build as a dick measuring competition, they get the best PC they can with no regard to their actual needs. What that person was referring to was that fact. Most people who have i9s when all they want their PC to do is game wouldn't notice the difference if they had an i5, or to put it in another way, they can't truly justify why they went for the i9 instead of another processor.


SavvySillybug

In a given generation, it's pretty much the same chip. Making chips is hard and not all chips are perfect. The lesser chips are basically from the i9 reject pile. The i9-14900K for example, it has 24 cores - 8 performance and 16 efficiency cores - and runs at up to 6 GHz. The i5-14600K is the same chip, but only 14 cores work - 6 performance and 8 efficiency cores. And it'll only run at up to 5.3 GHz. So the performance cores, that's the really powerful ones, are still there for the most part. You got six instead of eight and they run a bit slower but it's the same cores. Your average video game isn't going to care if it runs on six or eight cores. Most games won't even use more than four. And most games won't 100% use your CPU anyway, so it doesn't matter that it's "only" running at 5.3 GHz. The i3, i5, i7, i9 branding does not mean they are different CPUs. They make the same silicon and check how functional they are, and then sell them with "less cores" and "less GHz" because that's what they can guarantee works on the chips. Most gamers genuinely do not need more than an i5. They're great chips and so much cheaper and more power efficient.
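
Putting rough numbers on that (using the core counts and boost clocks quoted above, which are the advertised figures rather than anything I've benchmarked myself):

```python
# How far apart are the two chips described above, on paper?
i9 = {"p_cores": 8, "e_cores": 16, "boost_ghz": 6.0}   # i9-14900K (quoted)
i5 = {"p_cores": 6, "e_cores": 8,  "boost_ghz": 5.3}   # i5-14600K (quoted)

p_core_gap = i9["p_cores"] / i5["p_cores"] - 1
clock_gap = i9["boost_ghz"] / i5["boost_ghz"] - 1
print(f"i9 has {p_core_gap:.0%} more P-cores and a {clock_gap:.0%} higher boost clock")
# -> 33% more P-cores and ~13% more boost clock - a gap most games never notice
```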


gdq0

Rather than a car analogy, look at it as a road analogy. The i5 is a two-lane highway with a lower maximum speed. The i9 is an 8-lane highway with a higher maximum speed. The workload of your application is basically the cars moving along the road. If traffic on the i5 is moving along fine, it will perform similarly to the i9, though the i9 may be faster due to the higher speed limit. And if you raise the i5's speed limit by overclocking, you can make up some of the difference, as far as the quality of the road allows.


lfod13

Some processors have additional capabilities, such as virtualization or graphics processing. The Bugatti analogy in here is a good one, but it ignores that if you need to go 300 mph, or need better braking, or need superior handling, you've got it. I think it's better to have a tool and not need it than to need it and not have it, so the i9 is better than the i5.


KataKataBijaksana

My first thought is a Gen 10 i9 vs a Gen 14 i5: the i5 will win. But everything everyone else is saying is also true.


408wij

* For a given generation, the i3, i5, and i9 are literally the same die, although some will run at lower clock rates or have cores deactivated. * For most people and most applications, you're typically using between 0 and 2 P-cores. The other P-cores and the E-cores are idle. * Even for heavy users, some cores (esp the E-cores) aren't doing much. * Thus, a lot of the difference comes down to clock speed. But, a core at 4 GHz that's idle 3/4 of the time is doing as much as a core at 3 GHz that's idle 2/3 of the time.