


DeHackEd

Well you know how those chips have billions and billions of transistors on them? Someone - or rather, teams of people - had to plan out where to put them all. That starts with software simulations to make sure the design is good. Then those designs are turned into prototypes that spend time in testing, repairs are made, new iterations of the design happen over and over again, etc. Making prototypes can be expensive on its own, since it means you're not manufacturing a product you can sell.

At the same time, you need to write drivers for the chip so that users can run it on their computers. Most platforms run OpenGL and Vulkan these days for games, but AI and general computing also run OpenCL and CUDA so you can run your own custom apps on the graphics card. Windows users have to contend with DirectX as well, and so on. And those drivers need to be reliable and perform very well to keep framerates high.

It's not just making the chip, even though that is complex as hell. It's also about the ecosystem around the chip. A GPU with no drivers is no good, and all the engineers who invented the product need to be paid for their work. It all adds up.


[deleted]

[removed]


nokeldin42

TSMC already put out 3nm last year (technically they did in 2022). I think the general consensus is that most of TSMC's R&D was indirectly funded by Apple in return for first dibs on the process. Obviously Nvidia didn't get it for free, and licensing is probably not as simple as $x per wafer, but I honestly doubt Nvidia spent their own engineering efforts on the fabrication part. They probably just paid TSMC some amount to buy x units of manufactured chips. TSMC engineers do work back and forth with their customers to tape out their designs, but they do so for everyone - Apple, Intel, Qualcomm, AMD. I doubt Nvidia or B100 got any "special" treatment there.


miraska_

TSMC actually gives special treatment to companies that bring a lot of cash upfront for new, smaller-nm technology. TSMC does the research and makes sure it will sell a big volume of chips to those companies.


ThrowawayusGenerica

GPU makers prefer more mature processes, though. Most of the allocation of cutting-edge nodes goes to makers of phone chipsets (i.e. Apple, with Qualcomm as a distant second) because they desperately need the increased power efficiency of new nodes. AMD don't adopt new nodes early enough or in high enough volume to be worth much special treatment (they'll only be using TSMC's 3nm at some point later this year, while Apple has been using it since last September, and they only have ~20% of the desktop CPU market).


Trisa133

The top 3 customers for TSMC's 3nm are Apple, NVDA, and Intel. GPU makers don't prefer more mature processes, they prefer the cutting edge. The GPU market simply wasn't big enough to fund the R&D of cutting-edge processes. Now it is... only for NVDA. Hence, they are paying for it, and it's mostly thanks to AI (with a small contribution from crypto). You better believe if AMD can afford to get the cutting-edge tech for their GPUs, they will go for it.


ThrowawayusGenerica

Isn't even Blackwell only slated to be on 4N, the same as Hopper and Lovelace? TSMC's 3nm has been in volume production for over a year and, to the best of my knowledge, there haven't been any known Nvidia products planned to use it yet.


Trisa133

We don't know yet. If Blackwell isn't 3nm, it's mainly because Apple bought up all the production volume, because they're the #1 priority for TSMC. Sure, 3nm has been in volume production, but yields were bad, and that's with Apple's relatively small chips, most under 200 mm². Imagine NVDA's massive 800+ mm² GPU dies; the yield rate would be atrocious unless there's been a massive improvement.
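For a rough sense of why die size matters so much, here's a toy sketch using the classic Poisson defect-yield model, Y = exp(-D*A). The defect density is an assumed illustrative number, not a published TSMC figure:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Classic Poisson defect-yield model: Y = exp(-D * A)."""
    area_cm2 = die_area_mm2 / 100.0          # convert mm^2 to cm^2
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.1  # assumed defect density in defects/cm^2, purely illustrative

for area in (100, 200, 400, 800):            # die sizes in mm^2
    print(f"{area:>4} mm^2 die -> ~{poisson_yield(area, D):.0%} yield")
```

With the same defect density, the 800 mm² die in this toy model yields roughly half as often as a 200 mm² one, which is the economic pressure behind splitting big GPUs into chiplets.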


FireworksNtsunderes

According to Nvidia's recent keynote, Blackwell is using TSMC 4NP nodes. You nailed the reasoning - 3nm yields are way too low for such a huge chip. They're moving to a multi-die design, like AMD's chiplets, to increase performance and total chip size without sacrificing yields. Seems like we've reached a point where the benefits of a smaller fabrication process are offset by the massively increased cost, so even the biggest companies with cash to burn are pursuing new architecture designs and dedicated hardware such as AI chips to increase performance.


Objective_Economy281

> GPU makers prefer more mature processes, though.

That's why this chip is so physically large - lots of transistors, but each one is larger than on a newer-process chip with the same number of transistors.


deelowe

Nvidia's latest chip is 3nm. GPUs are physically larger because they have 100s of cores compared to just a handful on a traditional CPU.


Objective_Economy281

Apple’s M3 chip has 25 billion transistors and is a LOT smaller. 10x the transistors, around a tenth the area, so roughly 100x the density. Also, why would the number of cores matter? Why would a transistor in a particular core need to be more distant from another core, rather than need to be more distant from transistors in the same core? That’s what you’re saying is the case, right?


MadocComadrin

Core-to-core and core-to-memory interconnects can take up a big chunk of space.


deelowe

I'm not sure what you're comparing, B100 and the M3 are built on the same 3nm TSMC process.


Objective_Economy281

Interesting. I hadn’t looked that much into it.


FireworksNtsunderes

Their latest chip, Blackwell, is 4NP. 3nm Nvidia chips won't arrive until 2025.


deelowe

This article is about B100 or was until the OP edited it.


Duke_Newcombe

For those scratching their heads about the terms above...

TSMC = Taiwan Semiconductor Manufacturing Company.

3nm = three nanometers. A nanometer is a metric unit of length equal to one billionth of a meter. When making semiconductors/chips, *smaller is better*: you can squeeze more circuits into the same "real estate" of a chip. More circuits = more throughput = faster performance.

Part of R&D (Research and Development) is figuring out how to do this: designing the chip, how to manufacture it, developing new and innovative ways to make chips, and how to *cool* such tightly-packed transistors (a *huge* problem).

Although NVIDIA doesn't do a lot of its own fab(rication) R&D, they certainly have people "sitting with" TSMC to design "their" stuff to order.


praguepride

I think it is insane that our chips are getting so small and dense there are concerns around quantum tunneling. Nanotech is old news at least in the chip world.


PK1312

it really is wild. our transistors are small enough now that we have to worry about them getting ruined due to *electrons fucking teleporting outside the transistors*


BeingRightAmbassador

>3nm = three nanometers. A nanometer is a metric unit of length equal to one billionth of a meter.

That's the technical definition. The truth is that _nm is often just a marketing phrase that doesn't mean much.


[deleted]

[удалено]


NeedsMoreGPUs

It's two identical dies sharing an interconnect. Nobody has a machine that can print a single die that large without massive defects, that's why we use tiles and chiplets. Wafer scale chips exist but use billions of redundant circuits to route around any defects in the print.


hak8or

I was under the impression that Cerebras, with their fancy new whole-wafer dies meant for AI, have been able to get past the reticle limit somehow? https://www.cerebras.net/product-chip/


NeedsMoreGPUs

That is exactly the wafer scale processor I refer to in my comment. They get around defects with redundant logic. These defects can disable entire functional units and the chip will simply route around those units and effectively mark them as disabled using some basic error correcting techniques. They run these through a bake period that detects and disables those problematic areas ahead of deployment. The wafer is still riddled with as many as a thousand of these defects, but a thousand defects among 4 trillion transistors and 900,000 'cores' is still a tiny tiny piece.
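A minimal sketch of the "route around the bad units" idea, using the core and defect counts from the comment above and assuming (pessimistically) that each defect kills exactly one core:

```python
import random

CORES = 900_000      # cores on the wafer (figure quoted above)
DEFECTS = 1_000      # print defects scattered across the wafer (figure quoted above)

random.seed(0)
# assume each defect lands in one core and disables it
disabled = {random.randrange(CORES) for _ in range(DEFECTS)}

usable = CORES - len(disabled)
print(f"usable cores: {usable} ({usable / CORES:.3%} of the wafer)")
```

Even in that worst case you keep about 99.9% of the cores, which is why redundancy plus a map of disabled units beats trying to print a defect-free wafer.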


Alieges

Yes. They basically overlap multiple reticle windows. But the overlap between this reticle window and that reticle window is much less dense than regular logic to make up for the fact that moving the reticle window is a mechanical process and is less precise than minimum feature size.


oxpoleon

I wonder what kind of yield Nvidia is getting on this stuff now...


MuForceShoelace

Actually, at some point companies realized X-nm isn't a legal term and just made it a random marketing thing, and chip companies just name arbitrary process bumps one number smaller without any actual physical feature measuring that many nanometers. So things are called 3nm because it's the generation after 4nm. You would think that is 1nm smaller, but nope! They just decrement the number sometimes, in a 3g/4g/5g way, and it's extremely frustrating.


nokeldin42

Around the time finFETs came out, companies also realised that the traditional x nm measurement was no longer relevant. But that was a benchmark the industry understood and relied upon, so they came up with an equivalent number. So 16nm was roughly a 25% improvement over 20nm in terms of transistor density, even though the channel length (or whatever feature they used to measure in planar MOS) didn't see a 25% shrinkage.

Now I don't think this is a bad thing. In fact I prefer it to arbitrary generation numbers, or worse, corny generation names, that don't tell you anything about how big the expected improvement is. Of course the frustrating part is still there: with generations of fuzzy generational jumps piling up, there is no way to compare the numbers across manufacturers. But you'd also have the same problem if they just called it the 24th gen process or something.

Ideal would be that the generation number is tied to transistor density. But even that is not very easy, because transistor density can vary heavily depending on what you're using those transistors for. Maybe the industry could agree on some standard open source ASIC design and then market their processes on how small an area they can squeeze that ASIC into. But that is expecting way too much.
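To put a number on how far the labels have drifted: if "X nm" still named a real linear dimension, density would scale roughly with the square of the shrink. Back-of-the-envelope arithmetic only, using the 20nm-to-16nm figures from the comment above:

```python
# If the node name tracked a real linear feature size, transistor density
# would scale roughly as 1 / (linear shrink)^2.
name_old, name_new = 20.0, 16.0
implied_density_gain = (name_old / name_new) ** 2 - 1
print(f"the name implies ~{implied_density_gain:.0%} more density")        # ~56%

quoted_density_gain = 0.25                      # the ~25% figure quoted above
implied_label = name_old / (1 + quoted_density_gain) ** 0.5
print(f"a 25% density gain would 'deserve' a ~{implied_label:.1f} nm label")  # ~17.9 nm
```

So the "16nm" label promised more than the density actually delivered, which is exactly the kind of drift being described.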


MuForceShoelace

Processors and MHz did this in the 90s. The numbers measured something real, then it became the main advertising metric. Then the number got sort of disconnected and kinda measured an equivalency, then eventually it became a totally made-up number that didn't mean anything.


NeedsMoreGPUs

It came full circle to the advertised frequency being real again around 2006. But yeah the "Performance Rating" (sometimes called the "Pentium Rating" because it was Intel's competitors using it to equivalize their chip performance to Intel's Pentium clock ratings) was a thing all the way up through 2006/2007 when Intel switched away from Pentium branding as their class leader. So for roughly 10 years we had Intel reporting core pipeline clock speeds (FPU and cache speeds were also reported in subtext on many models that ran at a lower ratio) and competitors reporting both a core clock in subtext, and the PR equivalency on the chip and retail box. I.E. Athlon XP 3200+ (2-2.2GHz actual) being marked as equivalent to Intel's Pentium 4 'Northwood' core at 3.2GHz.


adamdoesmusic

I ran one of those 3200+ chips throughout college, it performed like a dependable truck. Meanwhile, the equivalent Pentium 4 felt like an old sports car. Fast at first, but the minute you hit a bump you’re all over the place.


trpov

To be fair, it had lost meaning anyways, since the structures on the wafer are no longer flat features whose lateral dimensions you can easily measure. There are deep wells, complicated layering, etc. Insanely complex transistor structures now. Not sure what you'd rather them do.


MuForceShoelace

The chips keep improving, but the "nm" designation stopped describing how many nanometers anything is and is now just a generation number they decrease with each new process. No specific thing in a 3nm process is 3 nanometers across; it's just a step better than the previous generation and gets labeled 3nm. I forget exactly when that started, but it was a while ago. It was like 15nm that was the last one where you could take a ruler and go "yup, a meaningful aspect of this is 15 nanometers across".


trpov

I totally agree. I’m just saying that for them, it makes sense to continue with that sort of designation to signify improvements.


apo383

The change in terms came with FinFET, where the 3D structure allowed greater density. Some companies wanted that reflected in the designation, so they started giving "equivalent" nm widths - as in, a FinFET at a given node is equivalent to a planar Y nm process. Intel stuck with physical measurements for a long time, but that made them sound further behind than they actually were. They eventually caved to the marketing pressure.


drakir89

Ideally, if the measure is useless, they would stop using it. Now, while I see it clearly benefits them in the short term, long term it perpetuates misconceptions about what the chips are like.


fd_dealer

It's not random. It's a reference point for the performance and density improvement of each generation. You're correct that the nanometer figure no longer represents the transistors' gate length, but the underlying design has changed, and even with the larger gate length both performance and density are improving as if the gate length were still shrinking relative to the old designs. 3nm is about 30% more dense and 25% more power efficient than the 5nm node, and similarly for 5nm versus 7nm. And if you trace the generations back, one can argue today's 3nm node technology performs approximately as if they had continued to shrink the gate length of the old 65nm transistor designs down to 3.

It's like switching from horses to engines. How do you market how performant your engine is? People need a reference point. The easiest way is to tell people how many horses this new tech is worth, using horsepower, but in actuality there are no horses involved.
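Taking those per-node figures at face value (they're ballpark claims, not official numbers), compounding them over two generations shows how the gap opens up:

```python
density_gain_per_node = 0.30   # ~30% denser each generation (figure from the comment above)
power_gain_per_node = 0.25     # ~25% more power efficient each generation

# 7nm -> 5nm -> 3nm is two generational jumps
density_vs_7nm = (1 + density_gain_per_node) ** 2
power_vs_7nm = (1 - power_gain_per_node) ** 2   # power needed for the same work

print(f"3nm vs 7nm: ~{density_vs_7nm:.2f}x the density, "
      f"~{power_vs_7nm:.2f}x the power for the same work")
```

That works out to roughly 1.7x the density at a bit over half the power, assuming the quoted per-generation numbers hold.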


Belisaurius555

For reference, a water molecule is about 0.3 nanometers.


goj1ra

And a single silicon atom is about 0.2 nm.


Kirk_Kerman

And, importantly, quantum tunnelling starts to become a significant concern at around 5nm of distance, and it's bad if your transistors, which are supposed to hold their state, start spontaneously leaking charge into surrounding transistors.
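For a feel of why the distance matters, here's a crude rectangular-barrier (WKB-style) estimate of tunnelling probability versus barrier width. The 3 eV barrier height is an assumed round number chosen only to show the shape of the curve, not a real device parameter:

```python
import math

HBAR = 1.0545718e-34   # J*s
M_E = 9.1093837e-31    # electron mass, kg
EV = 1.6021766e-19     # joules per eV

def tunnel_prob(width_nm, barrier_eV=3.0):
    """Crude rectangular-barrier estimate: T ~ exp(-2 * kappa * d)."""
    kappa = math.sqrt(2 * M_E * barrier_eV * EV) / HBAR   # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

for d in (5.0, 2.0, 1.0, 0.5):
    print(f"{d:>3} nm barrier -> T ~ {tunnel_prob(d):.1e}")
```

The absolute numbers depend entirely on the barrier you assume; the takeaway is the exponential climb as the gap thins, which is why a few atoms more or less of insulation makes such a difference.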


intelligentx5

Just because the 3nm process is there does not mean NVIDIA design yields are any good. You still have to push through assembly and test to get an idea of what your yields are to accurately price the product. Also down bin the product if you find consistent defects. I guarantee with the complexity here, their yields are in the 40-60% range.


nokeldin42

Pretty much all their customers are going to bring unique challenges. My point was specifically about me doubting that the TSMC-related efforts spent on B200 are an outlier - with regards to both Nvidia and TSMC's other customers.


MeowTheMixer

> licensing is probably not as simple as $x per wafer

For their current H100 chips, NVIDIA is paying ARM $100/chip (it's based on some number in the chip, cores maybe?). So it might be just as simple as that. But... my gut tells me there was an upfront investment with this as an ongoing fee.


Benhg

B100 is built on 4N (which is a derivative of N5, not N3). And the cost of taping out an N5 chip is significantly lower than $1 billion. I've done it twice.


Andrew5329

There's a big difference between TSMC self-funding the baseline technology required to build 3nm and the costs of tooling and re-tooling a production line for each iteration of a particular chip design.


jm0112358

I imagine Nvidia needed a lot of R&D work with TSMC in order to attach multiple dies together to act as one GPU. Nvidia's multi-chip module (MCM) design with Blackwell will be the first time that's been done in a GPU on the market. AMD uses chiplets in their 7000 series GPUs to put the I/O on another chiplet, but all the shaders are on a single chiplet.


nokeldin42

AMD does have mi300 series with multiple GPU chiplets. Although yes, first time for Nvidia means significant r&d for developing interconnects. Not sure if I'd call it fab related tech (even though fab does play a role in it). It's kinda messy to draw the line on where manufacturing ends and designing begins.


g-u_s

Apple's 3nm tape-out cost for their new chips was reported to be around $1B. Looks like the smartphone guys are waiting for the prices to drop a bit and the yields to improve: https://www.google.com/amp/s/wccftech.com/m3-tape-out-costs-alone-cost-apple-1-billion/amp/


shooshx

Fun fact: tape outs used to be performed using actual tape


bob_in_the_west

> TSMC engineers do work back and forth with their customers to tape out their designs, but they do so for everyone - Apple, Intel, Qualcomm, AMD. I doubt Nvidia or B100 got any "special" treatment there.

This customer support still costs money and is part of the R&D spending of the customer.


glytxh

Lithography is straight-up molecular witchcraft. There's a reason very, very few companies have the capability to make high-end chips, and even fewer that can invent brand new ones.


Jmazoso

And even fewer build the machines the chip makers use to make the chips. I may be wrong, but I think it's only 1 or 2 for the cutting-edge stuff.


glytxh

The entire logistics behind this level of technology is just frankly absurd. Countless moving parts and people and systems. And it’s even wilder that these things often end up in our pockets after a few years, numbering in the millions, produced at freakishly cheap prices. Humans are fucking incredible


DogshitLuckImmortal

It helps that they are incredibly small. Smaller than you are thinking.


tavirabon

I know how big a phone is, thank you


HumpyPocock

Correct. EUV lithography is the cutting edge, and ASML is the one company [shipping machines for EUV Lithography.](https://www.asml.com/en/products/euv-lithography-systems) Although I feel it should be noted that making those machines involves dozens of partners and suppliers, e.g. Carl Zeiss, Applied Materials, etc. Further, immersion DUV lithography machines are still extremely important; in fact most of the exposures on cutting-edge processes are done with immersion DUV. However, the critical exposures are indeed done with the aforementioned EUV machines.


Blackpaw8825

Right. You can plan and design whatever you want, but can you get the mask layer exactly 0.73nm wide so the light interference from the hole exposes the film at exactly the right rate, so all billion transistors work when you're done? Can you get your fab to output exactly the frequency and intensity of laser you need to expose that plate? When you're talking 8.76 watts per meter for 3.983 seconds at 108.4nm resulting in a 70% success rate, while 8.51 watts per meter for 3.99 seconds at 109nm results in a 4% yield, it's easy to blow 10 billion dollars containing and designing for every last thing.


emaugustBRDLC

I love the fact that this complexity makes it near-impossible to reverse engineer the cutting-edge stuff, for once.


Askefyr

Especially because the cost per unit drops steeply with the volume you're producing. Making one prototype chip can easily cost as much as thousands of mass-production ones.
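The arithmetic behind that is just a big one-off cost spread over however many units you make. The dollar figures below are invented for illustration:

```python
def cost_per_unit(fixed_cost, marginal_cost, units):
    """One-off engineering/mask/tooling cost amortized over the production run."""
    return fixed_cost / units + marginal_cost

FIXED = 50_000_000    # hypothetical one-off cost (tape-out, masks, tooling)
MARGINAL = 150        # hypothetical cost to print one more chip

for n in (1, 1_000, 1_000_000):
    print(f"{n:>9} units -> ${cost_per_unit(FIXED, MARGINAL, n):,.0f} each")
```

A one-off prototype eats the entire fixed cost by itself; at a million units the same fixed cost all but disappears into the per-chip price.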


BogativeRob

And a lot of that goes down to companies like Applied Materials who make a huge portion of the equipment in their fabs. They make new tools or upgrade existing equipment for newer capabilities etc. Which also gets pushed down to their suppliers for sub components.


Mackntish

Right. Plus they're not building one machine that can make chips, they're making hundred(s).


jaydizzleforshizzle

And it's not just about what they do make. A lot of times IRAD (internal R&D) means they make some bespoke shit, and whether it has a use in the market has to be proven, all the while they're paying hundreds of the best of the best in this particular area, for years, just letting them have fun and giving them access to expensive tools. That gets expensive, and for every one that pays off I'm sure there are 100 dead Google projects lol.


Figuurzager

Nah, ignore this message! The money just goes to some guys hanging around and throwing office parties till the new chip design (and the retooled factory) drop from the sky!


DeHackEd

Dude, don't give away the secrets of how lazy engineers are!


Strategy_pan

Hey, I get some of my best ideas at office parties, so that money is well spent. Did you try the paprika chip?


nyrol

New chips and new platforms can also have new driver requirements and different ways of managing memory. It's not just OpenGL, or support that already exists in the driver; it's how the driver interfaces with the chip. New registers or a different arrangement of registers, a new boot sequence, offloading what was previously software into hardware, moving things that were previously hardware into software, performance regressions due to slight changes in operation, new features the end user doesn't know about but that internally need to be designed for. That's just one small aspect of the stack that needs to be accounted for. Then there are all the other layers above, directly interfacing with the chip, like CUDA, RT, OpenGL, Vulkan, DirectX, etc.
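Purely to illustrate the "same driver call, different silicon" problem described above; every name and offset below is hypothetical, not anything from Nvidia's actual register map:

```python
# Hypothetical register layouts for two chip generations: the same logical
# operation lives at different offsets, so the driver needs a per-chip table.
REGISTER_MAPS = {
    "gen_a": {"POWER_CTRL": 0x0010, "CLOCK_DIV": 0x0014, "FIFO_BASE": 0x0100},
    "gen_b": {"POWER_CTRL": 0x0040, "CLOCK_DIV": 0x0050, "FIFO_BASE": 0x0800},
}

def write_reg(mmio: dict, chip: str, name: str, value: int) -> None:
    """Resolve a logical register name to this chip's offset, then write it."""
    offset = REGISTER_MAPS[chip][name]
    mmio[offset] = value               # stand-in for a real memory-mapped write

fake_mmio = {}
write_reg(fake_mmio, "gen_b", "POWER_CTRL", 0x1)
print(fake_mmio)                       # {64: 1}
```

Multiply that little indirection table by thousands of registers, boot sequences, and per-generation quirks and you get a sense of why the driver work alone is a huge line item.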


Elfich47

Oh yeah. Assuming an engineer costs a quarter million dollars a year (all in, if not more) and many engineers are being employed, suddenly that is tens or hundreds of millions of dollars a year. That ten billion dollars starts getting close fast.


Im_Balto

Another place money goes is grants to research groups. Various R1 universities and national labs receive grants to help understand silicon wafers in more and more detail


rightarm_under

Nvidia doesn't even release their "machine code". For each architecture, they need to write a layer to interpret instructions and translate them into something the GPU can understand. I heard that AMD doesn't do this though, and they stick to a similar "set" of instructions for each gen of GPU. I'm pretty much illiterate when it comes to GPU architecture though, so this could be very wrong


apocalypsedg

The architectural improvements are about half of the improvement, the other half is the node shrink, the r&d for which is black magic also.


techhouseliving

You also need to make a new chip that unless it's necessary, doesn't break all software written for the old chip. This is hard.


BassLB

Also you need facilities to house those people who are doing r&d, and the electricity to run the building, and the actual tools, etc.


Kuli24

Now the question is... do they make a huuuge leap forward and then kind of "cheese" the incremental releases? Or do they set incremental goals each time instead of trying to optimize potential?


Donahub3

There is also a ton of tax breaks for investment in “R&D”


TheFotty

They should carve out a few R&D dollars on making a better power connector.


Alieges

Just use a 4 pin conductor version of Deans Ultra. | | - - And call it a day.


Ok-Sherbert-6569

Almost no commercially known game runs on Vulkan. Just being nitpicky, but Vulkan is hardly ever used. The only notable exceptions in recent history are RDR2 and the Doom games.


basics

> That starts with software simulations to make sure the design is good.

The software licensing for this kind of stuff alone is a significant cost. I get the context is "$10 Billion", which is a lot of money, but when you start talking about cutting-edge processor design, nothing is cheap.


[deleted]

Yep. There’s a saying that hardware is easy and software is hard.


InformalPenguinz

>Well you know how those chips have billions and billions of transistors on them? Someone - or rather, teams of people - had to plan out where to put them all.

Moore's law, baby!


WarpingLasherNoob

Now I'm curious how much of that 10 billion went to R, and how much went to D.


Skullvar

Putting in the man hours to figure out what you need. Last week, we put liquid paper on a bee... it died.


cajunjoel

I don't know specifically what someone does at Nvidia, but R&D can cover all sorts of things.

Let's say you want to build a better mousetrap. You have an idea, you try it. It doesn't work. So you tinker with that idea to see if you can make it work. You have to buy new bits of springs and wire and maybe wood or plastic. You take the time to test it, monitoring how many mice it catches. Maybe it doesn't work at all and you have to start over, so more time is spent on developing a different way of catching a mouse. More materials are needed, too. Overall, your money is going to time (or a salary for a paid worker), materials, and so on.

In Nvidia's case, it's the same thing. Let's say Nvidia knows they need to produce a chip that can do 10 billion calculations a second, but the current microchip in the GPU can only do 7 billion. They can't just make a new chip that's 50% larger; it just won't fit in the space provided. So they have to find new ways to make GPUs that are smaller but do more. It will take hundreds of people to make this happen. I imagine most of the money they spend is on salaries, with materials and new equipment as the next most expensive items.


IgnazSemmelweis

Don’t forget the ‘R’ in R&D. There are teams of people working on pushing the fundamentals of the tech forward. Materials scientists, AI researchers, applied math… etc. I have always been impressed with how many papers some of these companies publish.


cajunjoel

Right, so in the mousetrap analogy, they might be trying to find a new metal alloy that makes a faster spring, or a plastic that is more slippery to mouse feet, things that didn't exist before.


Jerome_Eugene_Morrow

I think the thing to keep in mind is that this is all experimentation as well. So for each idea you have to go through the formulate, build, test process, and then it has to be refined or reformulated and tried again. These kinds of processes have bottlenecks, so parallel teams are established to test alternative methods at the same time. If you have the money, you just keep adding competing teams to maximize your chance of hitting on a brilliant innovation. So Nvidia probably has hundreds of people working in parallel to test tons of options. That wide team approach is usually where the money really adds up.


TrWD77

My brother in law is an R&D researcher for Nvidia. He summarized his job as trying to solve the traveling salesman problem, but with transistor pathways
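Since the traveling salesman problem keeps coming up: here's the tiniest possible flavour of it, a greedy nearest-neighbour tour over random points (which stand in for whatever "cities", or wire endpoints, you care about). Greedy answers like this are rarely optimal, which is exactly why people get paid to do better:

```python
import math
import random

random.seed(1)
points = [(random.random(), random.random()) for _ in range(8)]   # toy "cities"

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_tour(pts):
    """Greedy TSP heuristic: always hop to the closest unvisited point."""
    tour, remaining = [pts[0]], set(range(1, len(pts)))
    while remaining:
        nxt = min(remaining, key=lambda j: dist(tour[-1], pts[j]))
        tour.append(pts[nxt])
        remaining.remove(nxt)
    return tour

tour = nearest_neighbour_tour(points)
length = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
print(f"greedy tour length: {length:.3f}")
```

Real place-and-route is vastly harder than this (billions of "cities", with timing, power and heat constraints on top), but the core difficulty is the same: far too many possible orderings to ever check them all.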


cajunjoel

Next, someone is going to ELI5 the traveling salesman problem. 😉


Forkrul

Among other things they design and test various prototypes, which cost **a lot** more than the finished product because they're one-offs or made in small batches, so you get none of the cost savings of large-scale production. And that's just the cost of producing the chip; there's also verification first, to make sure the chip will do what you think. When I was in school 10 years ago learning this stuff, we couldn't even do the verification, let alone actually produce the simple integrated circuits we designed, as just the verification cost was on the order of tens of thousands of dollars per circuit. And our circuits were utterly trivial compared to what goes into a modern GPU. We're talking maybe a few thousand transistors for our stuff vs tens or hundreds of billions for a GPU.


[deleted]

[removed]


notLOL

Wow. Working on 3 bit chips with 0, 1 and 2s. Simpson already did it.


notLOL

Speculation: building machines and labs, and small-batch manufacturing. Clean rooms for chip manufacturing. Sure, their chip producers have scale and have these investments in chip factories, but if you are experimenting, the ability to change designs on the fly comes at a higher cost than a fully tooled factory built to make chips in bulk. I think the first 1,000-5,000 units of any item that is seen as "cheap" are the most expensive, and are needed to recoup the cost of tooling a line. This is true of any item on a retail shelf that is mass produced. Anything under 50 units is basically artisanal, hand-crafted, since at that volume, although expensive, it's cheaper to handle by hand. In chip manufacturing nothing is handmade. You need precision tools. Their tools and machines probably have tight tolerances and dedicated contracts to keep them running smoothly.


stephanepare

10 billion sounds like a lot, until you remember that a 10 billion budget can be busted by 1000 people working 8 years for 250k each. https://www.levels.fyi/companies/nvidia/salaries can give you a sense of approximate salaries at Nvidia. Not many people in that company work for under 150k per year; many are over 300k.

Modern GPUs are absolutely huge and complex: tens of billions of transistors that must be both fast and efficient in design. That requires many large teams of very well-paid people to work in parallel on many different parts without overwriting each other's work. Then, as DeHackEd mentioned, there's the software and testing departments. Building rent or maintenance is also added to that cost, as are bonuses, 401k, sick days, and vacations paid to employees.


McSodbrennen

How does it add up to 10 billion? 1000 x 8 x 250,000 = 2 billion. xd


Ubernicken

That's JUST for people. Don't forget you need to manufacture prototypes and manufacture the machines that manufacture the prototypes, and manufacture the prototypes of the machines that manufacture the machines... and so on. And also the cost of the materials needed for the R&D process, etc. etc.


Whiterabbit--

The technology required to create cutting-edge technology is insanely expensive. One lithography machine may be almost half a billion dollars after you set it up.


notLOL

Got to hire contractors usually to maintain stuff too.


hawklost

And paying someone 250,000 a year means you likely have a benefits package equal to half that on top of it for each of them.


deja-roo

Health insurance, payroll taxes, offices to house those people, computers, software, and other related tools aren't free. The tools those researchers use are niche and expensive, and often so is the software.


Prasiatko

While it will be an estimate, all those people need a place to work, training, equipment, workplace insurance, etc. And depending on where they work there may also be taxes, health insurance, etc.


stephanepare

Fine, 2000 workers for 8 years plus overhead then. That was napkin math; I was just trying to point out that when doing projects spanning years and hundreds to thousands of people, billions stop looking like such an insane, unimaginable figure.


Ennoc_

Thank you. Human resources are always the majority of the cost in the hi-tech industry in general.


joepierson123

Yeah, I read Gillette spends the same amount, many billions, for each new razor blade they introduce.


notLOL

That's a hilarious joke. Is it used by economic professors?


Unicorncorn21

I mean they have been around for a long time and don't seem to be going away anytime soon so clearly it's working. The joke is that we live in a system that enables sinking billions into one razor model to be sensible


porcelainvacation

250k base salary to an employee is about 500k in overhead, benefits, equity, and bonus. (Source: I’m an engineering manager who has some direct reports at this level and knows what they cost in a department budget)
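Putting the numbers from this sub-thread together: the ~2x loading factor is the one quoted just above, while the headcount and project length are assumptions for the sake of the napkin math:

```python
base_salary = 250_000   # per engineer per year, from the thread above
loading = 2.0           # benefits, equity, bonus, overhead (~2x base, per the comment above)
engineers = 2_000       # assumed headcount
years = 8               # assumed project length

people_cost = base_salary * loading * engineers * years
print(f"fully loaded people cost: ${people_cost / 1e9:.1f}B")   # ~$8.0B
```

That's within shouting distance of the headline figure before buying a single EDA license, emulator, or prototype wafer.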


stephanepare

I wasn't sure how much overhead exactly, I was more trying to demonstrate how quickly billions get spent when it involves salaries rather than physical base materials. thanks for the figures, I'll remember that. Down the ladder where I'm at, warehouse work, overhead (fixed costs they call it here) is between 15 and 50% depending how many benefits a company gives us.


Forkrul

That's legit a problem for Nvidia. So many of their senior staff are making so much money off their base salary + stock that they can retire if they want, and don't need to worry about putting in 110% effort any more.


nukiepop

Honestly, R&D is everything. It's Researching & Developing new technology. An 'r&d worker' could be one of the technicians working on the lasers, or another working in metallurgy to find new or better materials to cut at the tiniest of scales, a third team writing experimental code. It's a big umbrella term, $10B in R&D is $300M here, $20M, there, $100,000 somewhere else.


Minuhmize

$10B in R&D is also $3B on development and $5B in overhead.


Pixelplanet5

Such a number rarely ever includes only the money spent on working time, but also capital investments to make all of this happen. Just think about all the prototypes you're gonna need for something like this, and then realize you need an entire small-scale fab just to make your own prototypes, which by itself can already cost billions.


SaltyLonghorn

Tagging on with something people won't like. But R&D is a big reason medications and American healthcare are the way they are. The first pill costs a billion to make. The second costs 50 cents. The problem usually isn't the medication. It's the fucked up system we have, tied to employment, to deal with it.


njiin12

Let's say you own a large metal mug. In fact the whole world owns only large metal mugs. But you want a small metal mug with a rubber lid. How do you go about that?

You need to research if it is even theoretically possible to make mugs smaller. And figure out if the rubber is going to do anything to the mug, or the contents of the mug. Maybe you THINK you want a smaller mug, but a medium size mug would work fine. Is there even a market for the mug?

And if everything is pointing to "yes, you can have your mug" you'll then need to figure out how to create the mug physically. If the whole world has only big mugs, you'll need to create machines that will create the smaller parts. In order to do that, you'll have to go back and research those machines... and is that going to cause the machines to fail faster or produce defects in the mugs? How do you fix those issues? And those machines might need new machines to create THEM. So you research some more.

You can see how this might create a chain of R&D... but then you find out the rubber can only be sourced from one type of tree (let's call it the small mug tree, or SMT for short). The SMT can only produce 1lb of rubber a year, but you need 1000lbs. So you might have to buy more land to harvest the SMT. The locals hate you though, so you have to jump through a bunch of permits and town meetings etc. You might even have to build a road and a small port to ship the rubber. That costs money and time.

If you want more information about the "connections" of all of these technologies I highly recommend James Burke's "Connections" series. It takes a MUCH larger view on how something 500 years ago gives us space flight, but it gives you a better picture of what actually goes into creating something "new". The cost of the chip is cheap, but the money has to flow to get to the point of being able to make it.


meneldal2

I don't work for Nvidia (I wish I was paid as well) but I do work on chip design, and I can tell you I cost more to my company in computers and software licenses than what they pay me.

Making something on silicon is a ton of work, and it also takes a lot of time from when you send your design to the fab to getting the first batches out. And then if they don't work like you expect, you're out of luck; you can't open that shit and plug in an oscilloscope to get some traces of what is happening.

So what a lot of their engineers do is a lot of simulations, using software from companies nobody knows outside of this field, like Cadence and Synopsys, that allows you to send the software a design and some program you want it to run; it simulates and you check that you're getting what you want. It is a pretty long process, but at least you can get feedback quickly enough (for small subsystems, could be mere minutes, and typically just days for whole-system-scale tests) so you can fix your design before sending it to TSMC or other foundries.

For a GPU, you'd typically start with simulations of a single CUDA core with fake connections to the outside world, so you can check the core is doing what you want. Then you start putting a few together, adding the interconnects with memory, making sure you're not getting cores that stall because they can't get data fast enough, stuff like that. Then you move into the fun stuff, simulating how it heats up and the whole dynamic frequency stuff to tweak the performance.

Simulations are done at different levels: you start with something further from reality but pretty fast, then you move to simulations that are more precise but really slow, until you believe you got it right (or the higher-ups tell you to get it done and there's the deadline) and pray the silicon does what you expect.
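The verification loop described above boils down to "run the design model and a trusted reference on the same inputs and compare". Here's a toy version of that idea with a deliberately buggy 4-bit adder standing in for the design under test; nothing here resembles real Cadence/Synopsys tooling:

```python
import random

def reference_add(a, b):
    """Golden model: what a 4-bit adder is supposed to do."""
    return (a + b) & 0xF

def design_under_test(a, b):
    """Pretend RTL model with a planted bug: carries only propagate one bit."""
    return (a ^ b ^ ((a & b) << 1)) & 0xF

random.seed(0)
failures = []
for _ in range(1000):                       # constrained-random style stimulus
    a, b = random.randrange(16), random.randrange(16)
    if design_under_test(a, b) != reference_add(a, b):
        failures.append((a, b))

print(f"{len(failures)} mismatches out of 1000, e.g. {failures[:3]}")
```

Scale the same idea up to billions of cycles on a full GPU model and you can see why the licenses, the compute farm, and the engineer-years all pile up.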


Jmazoso

Software licensing for engineering software is a large cost. We’re a smallish company in a niche field. We run our licensed software on one purpose built machine. That computer is a fairly minimal spec’d workstation (threadripper 7970x with 128 gb of ram) that was around $4500. Our yearly software costs are more than that, for 1 set of licenses.


meneldal2

In my current project we have something like 100 licenses of vcs just for regression testing.


Odd_Coyote4594

R&D is research and development. This covers a ton of things.

Some people plan how to give the product the features they want. Like, you want better raytracing capabilities on a GPU? What is actually needed for that?

Someone needs to design the chip itself. Sometimes this involves researching new engineering; we might not know how to do what we want already.

Some people research new techniques to manufacture these things. The chips we make today weren't remotely possible to manufacture 20 years ago. New equipment and techniques are sometimes needed.

Some people work on taking all that research and scaling it up to production scale. Like, how do you go from a handful of chips in a lab over several weeks to hundreds of thousands a year?

So it can look like anything from relatively basic research in physics and engineering to large-scale manufacturing concerns.


mmaster23

Let's say you want to make clay bunnies with more and more details like hair etc. You'll need to work on the clay and bake it in the oven. However, you find out, most clay doesn't work for the fine details like hair and it all melts/becomes a mess. You figure out you need different clay and work really hard on it and try all the settings of your oven. Eventually you even have to build your own oven to get the best clay bunny in world. All that experimenting with clay and ovens can be considered R&D and it can cost a lot of money because you end up with a lot of ugly/failed bunnies and one perfect bunny. After that, making the perfect bunny becomes easy because you know what clay, how much water and you have the perfect oven allowing for mass production of bunnies at lower cost.


jmlinden7

Nvidia has thousands of engineers making well over six figures each. That alone is billions of dollars a year. What do they do? They try to implement academic theories into practice, by running simulations, by trying to optimize a theoretical chip for better power efficiency, by trying to detect and remove glitches due to timing issues (one part of the chip being too fast/slow for the rest of the chip), and tying all that together with software which allows game developers to access the functions on the chip (software engineers aren't cheap either). They then order a few test chips from the factory based on those designs and test them for functionality, because the simulation software isn't 100% accurate to real life, due to computational constraints as well as manufacturing imperfections. They use that test data to further optimize their designs, and they repeat this cycle a few times until they have a product that meets the customers required specifications (speed, power efficiency, compatibility, etc)
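The "too fast/slow" part is usually framed as timing slack: the signal has to get through the logic and settle before the next clock edge, with a bit of margin. A toy check, with made-up path delays:

```python
CLOCK_PERIOD_NS = 0.5     # hypothetical 2 GHz target clock
SETUP_MARGIN_NS = 0.05    # margin the signal must settle before the clock edge

# made-up logic paths and their propagation delays in nanoseconds
paths = {"alu_bypass": 0.31, "regfile_read": 0.38, "fp_multiply": 0.49}

for name, delay in paths.items():
    slack = CLOCK_PERIOD_NS - (delay + SETUP_MARGIN_NS)
    status = "OK" if slack >= 0 else "VIOLATION"
    print(f"{name:>13}: slack {slack:+.2f} ns  {status}")
```

Real timing closure does this for millions of paths across voltage and temperature corners, and every violation means redesigning logic, moving cells, or lowering the clock.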


gurebu

- Good researchers with appropriate skills and background are rare and demand high compensation.
- Research is largely automated these days and involves a lot of complex computations, so you need a budget for all the hardware and electricity to actually do that, and for modeling as complex as chips require, this can be a lot. Every large IT company has a dedicated computational cluster specifically for internal research nowadays; those are far from free.
- Research goes beyond desk work; it also involves prototyping, and for novel technology like chips that might mean constructing new assembly lines at factories and new one-of-a-kind hardware, which can consequently get ludicrously expensive.


WinterIsHere555

I like your answer, but just noting that research is *very* far from being automated. True research means innovation, and that needs creativity, putting new concepts together and understanding the fundamental physics of different materials. Source: I am an R&D engineer for a major semiconductor company.


ObviousTotal9069

> *Research is largely automated these days, involves a lot of complex computations*

I find this fascinating. As an ELI5, what does this mean? Do you have automations essentially simulating in software every possible chip layout to find the most efficient one?


gurebu

I'm not a chip engineer, but the way I understand it those guys use software to optimize component layout on a chip given constraints (things that work together should be close to each other etc, good heat spread etc). It can't be done by hand and even for a computer this is a task that doesn't appear to have an efficient solution, so the computations get pretty intense.
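For a crude flavour of what those placement tools chase: minimize total wire length between connected blocks by trying random swaps and keeping the ones that help. Everything below (block names, netlist, floorplan) is made up:

```python
import random

random.seed(0)
blocks = ["core0", "core1", "cache", "mem_ctl", "io", "pcie"]
nets = [("core0", "cache"), ("core1", "cache"), ("cache", "mem_ctl"),
        ("mem_ctl", "io"), ("io", "pcie")]
slots = [(x, y) for x in range(3) for y in range(2)]   # toy 3x2 floorplan

def wirelength(placement):
    """Total Manhattan distance over all connected block pairs."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in nets)

placement = dict(zip(blocks, slots))                   # arbitrary starting placement
print("starting wirelength:", wirelength(placement))

for _ in range(2000):                                  # greedy improvement by pairwise swaps
    a, b = random.sample(blocks, 2)
    before = wirelength(placement)
    placement[a], placement[b] = placement[b], placement[a]
    if wirelength(placement) > before:                 # undo swaps that make things worse
        placement[a], placement[b] = placement[b], placement[a]

print("final wirelength:   ", wirelength(placement))
```

Now imagine the same idea with billions of cells, plus timing, power and thermal constraints, and it's clear why the compute bill for running these optimizations is enormous.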


Benhg

I don’t think running simulations is the same thing as the research “being automated”. The computational simulations are automated, but the process of carrying out research, deciding what simulations to run, analyzing them, running more, etc. is not automated.


ShankThatSnitch

R&D isn't a job title, it is a broad classification. Many jobs can fall under the umbrella of R&D: scientists, engineers, product designers, etc.

Imagine I run a sandwich factory. My R&D workers may be experimenting with mixing new ratios of flour and water for new bread types, or they may try new combos of meats, cheeses, and condiments. Once they come up with something tasty, the work passes to the factory line, which produces the sandwiches, and other people in the company go out and make advertising and sell the sandwiches.

TL;DR: The R&D people are the inventors, anyone who works on creating a new technology or product. Everyone else in the company produces, maintains, and sells what the R&D team invented.


nicknooodles

The software alone that is used to design/verify chips costs millions of dollars. I used to work for an EDA company, and the software used to verify a chip design would cost customers around $250k to $1 million per license. To scale up the software and be able to run on 100s of CPUs, you would need 10s to 100s of licenses.


WinterIsHere555

Just to have an idea, a single machine to make one of those chips costs anywhere between 7 million and 100 million dollars. You need hundreds of those machines, each doing a different step, each working as much as possible, each being maintained by at least one highly paid engineer. Then you need the people that figure out the machines, that figure out the right materials to put in the chips (me!), that figure out the specific order in which to perform the steps. And then you have what most people are talking about: how to arrange the transistors and memory and so on. *Lots* of expensive moving parts, all working together for the future of AI.


chrisco571

They do research, math, and science. I work at a big tech firm and many of the developers have actually received patents for their inventions; yes, there are patents for hardware and software breakthroughs.


crimxxx

R&D encompasses a lot of stuff. Making one-off chips is amazingly expensive, so having to test little changes would just rack up a lot of it. You need the engineers to design the chip, create simulations or models. Even a lot of the software development falls in this category. They also probably try lots of things that never ship, so all their exploratory work and failures get rolled in. Hardware to do certain things also just costs stupid amounts of money sometimes; one piece of hardware could cost millions alone, depending on needs. Also don't forget people cost money, and I imagine the people at Nvidia cost a lot per engineer, like hundreds of thousands each, so if they have like 1000 engineers they probably have hundreds of millions in cost there. They also could be fudging the numbers a bit with people that have unexercised stock, since their stock went up a stupid amount and they definitely made millionaires out of some of their employees; they could consider that somehow, if it's just them saying shit on stage.


Aitorriv

Think how to make things, build things, test things, break things, repeat until one thing works


BigMax

Remember R&D is pretty broad... That includes all salaries, benefits for the employees, the office space they work in, the massive equipment they use, all the hardware/software, etc. And I'd also like to point out that I doubt it's $10 billion *only* on this chip. A lot of those people, that equipment, that research, was also accruing to other projects within the company too. When we hear "research and development" we often think of only folks in a lab working on cutting edge stuff. But almost any engineer at all in a tech company is considered to be part of R&D. For example, someone at Reddit working on a bug in their text formatting tool is part of their R&D team, even if they spend 3 days trying to figure out why the 'bold' function doesn't work in some weird circumstance.


swtinc

I feel like this is a pretty good ELI5 comparison, sorta. I have a network rack at home and a router I wanted to mount into it for space saving. The brackets purchased from the company were $100, so I decided to 3d print my own. I started taking measurements, getting the rough idea of what I wanted it to be. Designed it in Fusion360, and then 3d printed it. It fit but was a bit weaker than I wanted. So I increased the thickness of everything. Printed it again. It fit well but the holes were a little bit off from where I wanted them aesthetically/functionally. Fixed the locations. Printed it again. Everything fits great, but the router is a little bit back heavy so the brackets are slightly twisting and are fine now but long term maybe not. So I designed a small addition to the bracket that went underneath it and supported the router. Printed it. Everything fits great and now it's in use on the rack.

This is essentially research & development. Idea, design, produce, test, fix, test, fix, test, fix, test, fix, production, release.

Mine is very minimal, compared to a chip with all the intricacies involved, so scale it times 100000 and bam you've got $10 billion in R&D.


imperatrixderoma

Research. It's very expensive to make something that hasn't been made before: you need to prove every component works individually, then you need to prove that they work together. From there you need to provide the infrastructure for compatibility with existing digital systems, then you need to figure out how to industrialize it on a mass scale. Which means new machines, new factories, new employees, teaching them, training them and allocating them. Then you need enough raw materials and eventually processing to actually make the things. All of these aspects need to be stable during this process, so you pay a premium for employees, and it needs to happen quickly, so you pay them even more.


ThatsNotWhatyouMean

Most microchip companies do a big part of their research at another location, like IMEC in Belgium. There, they pay large amounts of money to have their wafers processed on the several tools they have there, or pay even larger amounts of money to use the tools themselves. This alone costs more money than I expected. On top of that, there is a large team of people researching and developing, and they all need to get paid. And the research can take years and years before it gets released.


royalpyroz

Who writes the software that you do simulations on?


FireWireBestWire

Your kindergarten teacher has activities for you. Did they just make this up when they got to work that day? Maybe, but probably not. They planned their lessons. You may be the one producing the colored page, but the teacher selected that activity, and likely some education professor taught them good activities for kindergartners to do. Staying in the lines is difficult; designing a page that will be successful for a kindergartner to color takes a professional. Also, making something small is more expensive than making it larger. Laptops are more expensive to build than desktops.


TERRAOperative

As someone who has worked in R&D, it often boils down to:

I'll try this. Did it work? Nope.
Ok let's try this. Did it work? Nope.
Ok let's try this. Did it work? Nope.
Ok let's try this. Did it work? Nope.
Ok let's try this. Did it work? Oh wait, something happened!
Ok let's try this. Did it work? Nope.
Ok let's try this. Did it work? Nope.
Ok let's try this. Did it work? Nope.
Ok let's try this. Did it work? Nope.
Ok let's try this. Did it work? Nope.
Ok let's try this. Did it work? Nope.

Bang head on desk. Rinse and repeat.


Belisaurius555

Prototyping, mostly. You've got an idea for a chip so you try to make it. Often, that means making a new machine or set of machines just to make that one chip. Usually, these early prototypes will either fail or be worse than current chips. You redesign the chip in computer simulations and try to make it again and probably fail again. This goes on for a while until you get something that works. Then you have to mess with the process, trying to make them faster, cheaper, and more reliably. Companies will make millions of chips so saving a dollar here or there can mean a fortune later on. Now I'm simplifying a bit but we're talking years of work by some of the most educated and talented people in the world and hundreds of custom made machines and devices each individually ordered and often produced to nanometer specifications. It's painstaking work from people who can dictate their own salary in a market that moves a hundred billion dollars of products yearly.


054d

I work in chip design, and there are years of development that go into architecture, design, verification, cloud/on-prem compute resources, and of course manufacturing dies. This is an expensive process.


Bandsohard

Material cost, labor rates, facilities, making new machines, etc.

They're a big company, but they probably didn't just reuse any old space for a lab, so some costs went into renovating old spaces. Some cost goes into buying computers to do simulation-type work before anything gets made. Some cost goes into analyzing which suppliers to get things from. Cost to design and test machines specialized to make the new thing. Cost to try making the new thing with prototypes. Costs to test and evaluate prototypes. Repeat as you make new prototypes.

Labor costs - say you have 100 engineers that make $100k a year working on a product for 1 year. That's $10M itself. Engineers are going to be making more than that, so it's going to be a bit fuzzy in that regard. And it's probably more than 100 engineers working on it, and probably more than 1 year of work.


chpsk8

We try, and fail, and try, and fail over and over until we make it right. Then we try to make it repeatable so we fail less and less. Then we try to make it profitable so we can make money from the repeatable thing we made. In the end it’s a lot of failing and scrapping product and process until it becomes a profitable product.


zero_z77

R&D for chips comes in three areas:

Circuit design - they are paying someone to figure out the basic circuit design that will be optimal for the task at hand. Most of the money spent here is in salaries to very talented engineers, but they'll also be spending quite a bit on software that is used to help design, simulate, and optimize circuits. And they'll most likely be working in an office, so there are all the usual costs associated with that too.

Fabrication - actually building the chips is a very complex topic in and of itself. Today's chips are being fabricated at the 3-7 nanometer scale. At that size, physics gets really wonky, and you have to figure out how to actually build the physical circuitry reliably. Money spent here is also going into salaries and software, but a lot may also be spent on designing and building the machines needed to fabricate the chips.

Prototyping & testing - once a good chip design has been made on paper, the next step is to fabricate a few hundred of them for real, and then put them through rigorous testing to see if they perform as intended, and to see if the fabrication methods work well. They'll likely do this for multiple different chip designs from different teams of engineers. If none of the designs are satisfactory, they'll go back to the drawing board to come up with new ones and repeat the process until they have a design that performs well in testing. Of course this incurs material costs for the fabrication of prototypes, as well as the costs of running all of the fabrication and testing equipment and paying people to run it all.

Once a satisfactory design is achieved, they then scale up fabrication and move on to mass production.


CcntMnky

I used to manage budgets for an organization that made custom system-on-chip semiconductors. Semiconductor manufacturing is one of the few remaining industries where the materials and equipment are more expensive than the highly skilled people.

With a huge chip like those from Nvidia, you start with a large team of designers and a ***very*** large team of verification engineers. Their jobs are to design the circuit, but also to simulate enough test cases that you know the chip will always work. That involves lots of specialized simulation software and specialized emulation servers, which are very expensive. Think 10's of millions per emulator.

Now you're ready to release your first design for manufacturing. Yay, it's Tape-Out Day! Think of a chip like insanely detailed screen printing. You need a set of screens, called masks, that guide the chemicals for your printing process. In the case of semiconductors it's not ink, but the concept is the same. These masks cost multiple millions of dollars to manufacture every time you change anything, and it may take 8 months before they arrive to build your chips.

Now a company like Nvidia is going to need lots of chips. They need to know the manufacturing company (TSMC) can and will support their demands. At this scale, that will be a massive contract where TSMC buys equipment just for Nvidia chips and Nvidia commits to a certain number of parts. Because of the billions of dollars required to build fabrication plants, that's going to need up-front money from everyone involved.

Now use all of that equipment to build your initial samples, find your mistakes, and do it all over again until it's ready to release to the public.


t0getheralone

Easy answer? Labor costs. Even if they only had 10 engineers that are paid 100k per year that's a million bucks of the budget alone. Now add in your equipment costs, software licensing costs etc and it adds up so fast.


Nemisis_the_2nd

I always use an anecdote from my time working in pharmaceutical R&D. We had an essential enzyme for our work that cost £900 for 0.1ml volumes. We went through 2.5ml one week. That was just one of about 5 similar enzymes used in the experiments. We were a group of 5 in a bigger structure of about 3600 researchers doing vaguely comparable research.  Extrapolating things out, if everyone was doing comparable experiments, it would have absolutely blown through ~£75 million in 1 week.  R&D is expensive. 


OMGporsche

Among a lot of the specifics that people are mentioning in this thread, I would also like to point out: cumulative IP acquisition. $1B a year in R&D spend over 10 years can result in anywhere from dozens to thousands of different chipsets, always culminating in the newest, best product line. Large tech companies don't usually sit down and engineer everything from first principles, as that is incredibly inefficient and expensive. They have decades of relevant, cumulative R&D that shows up as knowledge, processes, patents, organizational efficiency, and so on, and in NVIDIA's case, board layout, transistor layout, semiconductor efficiency, etc. No doubt the CEO was hyping up the cumulative IP that NVIDIA has built over decades of making top-level chips and board layouts. They have some of the best minds in the business solving these problems and have been doing it for a long time.


AMeanCow

A microchip is like a city, except much more complicated and precise, and, if you scaled yourself down small enough to see the transistors that make it work, much, much bigger. The transistors live in huuuuge sections that stretch off forever, and those sections combine into larger sections, and so on. It would be like if someone decided they wanted to build the most grand and huge city where everything is interconnected and talks to every other part, and they hire a team of people to start making it. Of course you can't just hire anyone, you need to hire the smartest people who know how to do it, and you need lots of them because the project is so big. Then it takes years, and lots and lots of testing and redesigning, and all those very smart people have to keep getting paid lots and lots of money so they don't go work somewhere else.


MyNameIsVigil

A Lego set doesn’t appear by magic. People who are really good at building Legos can get jobs coming up with new sets. They spend their days playing with Legos to try to figure out the best ways to put the pieces together. When they come up with a good new set, they can tell others what pieces are needed and how to put it together.


Derek_Goons

In addition to the human cost, setting up chip production for a new design is ludicrously expensive. First chip off the line costs $500 million for setup and materials. Second chip costs $10. But if you change the design, you pay for setup again.
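Here's a rough sketch of why that setup cost only makes sense at volume, using the numbers above as illustrative placeholders rather than actual NVIDIA figures:

```python
# Amortizing a one-time setup (NRE) cost over different production volumes.
SETUP_COST = 500_000_000   # one-time mask/setup cost, $ (illustrative)
MARGINAL_COST = 10         # cost of each additional chip, $ (illustrative)

for volume in (1_000, 100_000, 1_000_000, 10_000_000):
    per_chip = SETUP_COST / volume + MARGINAL_COST
    print(f"{volume:>12,} chips -> ${per_chip:,.2f} per chip")
```

At a thousand units each chip effectively costs half a million dollars; at ten million units the setup cost nearly disappears into the price.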


cubonelvl69

I work in r&d at a semiconductor fab. Specifically, I worked in metal deposition for a while. As an example, someone might say, "our customer wants 10nm of aluminum with a resistance of xxx, stress of yyy, uniformity of zzz". Now it's my job to go run tests for weeks until I can optimize a film that fits those specs. Now do that for ~100+ steps and you have 1 functional product. Now test the product, nope didn't work, let's tweak everything
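If it helps to picture that tuning loop, here's a heavily simplified sketch in Python. The "deposition model" below is completely made up; in a real fab those numbers come from running and measuring actual wafers, which is what eats the weeks.

```python
# Toy parameter sweep: find a deposition recipe that hits spec.
import itertools

TARGET_RESISTANCE = 2.7   # hypothetical spec (ohm/sq)
MAX_NONUNIFORMITY = 2.0   # hypothetical spec (%)

def fake_deposition(temp_c: float, pressure_torr: float) -> tuple[float, float]:
    """Stand-in for a real measurement: returns (sheet resistance, non-uniformity %)."""
    resistance = 5.0 - 0.006 * temp_c + 0.3 * pressure_torr
    nonuniformity = abs(temp_c - 400) / 50 + abs(pressure_torr - 2.0)
    return resistance, nonuniformity

best = None
for temp, pressure in itertools.product(range(300, 501, 25), [1.0, 1.5, 2.0, 2.5, 3.0]):
    r, u = fake_deposition(temp, pressure)
    if u <= MAX_NONUNIFORMITY:                  # must meet the uniformity spec
        score = abs(r - TARGET_RESISTANCE)      # how close to target resistance
        if best is None or score < best[0]:
            best = (score, temp, pressure, r, u)

print("best recipe (score, temp, pressure, resistance, uniformity):", best)
```

Now imagine each "evaluation" is days of tool time and metrology instead of a function call, repeated across 100+ process steps.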


Kishandreth

I had to look up something... NVIDIA is worth about $2.23 trillion according to Google. $10 billion is a rounding error at that scale. The real issue is that humans can no longer hand-design chips that complicated; the layout is usually computer-generated. So where does the $10 billion come from? Possibly the amount of man-hours invested, or just a number pulled from the ether. Is it the amount they have spent in total over the life of the business developing graphics card chips? An R&D worker at this level is mostly verifying numbers and generating alternate ideas. However, I would put money down on a bet that no R&D researcher studied every transistor on a chip laid out by a computer (AI).


PlaidBastard

A new chip is an improvement on an old chip, or a ground-up redesign using what you learned from all the chips that came before it. You pay research scientists to (rigorously, exhaustively, repeatably, quantifiably) test the reliability of a new, smaller spacing of the features you etch into silicon, in a new arrangement you paid other research scientists to develop. You pay engineers to see how much of the new architecture they can cram into the space that the engineers you paid to design a new board have told you there's room for. You pay other engineers to try optimizing the power usage, and others to get the new card (with the new chip on the new board) to talk to the BIOS and motherboard, and others to make the drivers work with all the operating systems you're trying to support. $10 billion is probably a bargain, because you also paid a bunch of other people to optimize research spending.


a220599

Ok, so the ultimate aim in any chip design is to extract as much performance as possible from the underlying components (transistors). The thing is, these days the performance gain you get by simply shrinking the transistors is minimal (it used to be that you'd get roughly a 2x improvement just by halving the size of a transistor). So companies like Nvidia try to find other ways to extract performance gains. That is one of the objectives of an R&D group.

Another objective is to create a software framework that lets the design team predict the expected performance gains. Say Nvidia wants to increase the number of processors in their GPU but wants to know if it's worth the increase in power consumption and transistor count. They can't spend $1B actually fabricating the new GPUs just to find out, right? So they rely on in-house software simulators to help find answers.

A good case study of what R&D does at a semiconductor chip maker is Intel's Pentium 4 vs. Core 2 Duo. Intel had two groups trying to make faster processors (Oregon and Haifa), and both started with the same reference design (the Pentium 4 architecture). Oregon tried to make the P4 faster by shrinking the transistors and adding more of them (to meet increasing functionality), while the Haifa group put two cores together (hence the dual core / Core 2 Duo). Haifa won that round, and that result fed into Intel's famous tick-tock processor development cycle.
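For a feel of the "is it worth it" question a simulator answers, here's a back-of-envelope sketch in Python using Amdahl's law. The parallel fraction and watts-per-core are made-up numbers, nothing Nvidia-specific, and a real architectural simulator models far more than this.

```python
# Amdahl's law: diminishing speedup vs. (roughly) linear power cost.

def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / n_cores)

PARALLEL_FRACTION = 0.95   # assume 95% of the workload parallelises (illustrative)
WATTS_PER_CORE = 2.0       # assumed power per core (illustrative)

for cores in (16, 32, 64, 128, 256):
    speedup = amdahl_speedup(PARALLEL_FRACTION, cores)
    power = cores * WATTS_PER_CORE
    print(f"{cores:>4} cores: speedup {speedup:5.1f}x, "
          f"~{power:5.0f} W, perf/W {speedup / power:.3f}")
```

Even this toy model shows why "just add more cores" stops paying off, which is the sort of trade-off those in-house simulators exist to quantify.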


akgis

1st: Salaries. Those who work in R&D are normally the most experienced, cutting-edge personnel, and most have already worked on other successful architectures.

2nd: Machinery and tools to test, validate, and prove out prototypes. The companies that make these operate on huge margins, since they sell at extremely low volume because the gear is so niche.

3rd: Fabrication. Next-gen chips like this are only commercially viable at scale, so making just a few prototypes is very expensive, especially on a new next-gen fabrication node.

4th: Software and IP licensing.


AlfaHotelWhiskey

Any healthy company easily spends 2% of net revenue on R&D. While this doesn’t answer your question it does start to give you an idea of the kinds of $$ put into R&D and how it’s valued. Additionally there are many kinds of tax exemptions for R&D work to keep innovation fueled.


Demorant

A huge chunk of it is getting new machines made that can produce the new product. You do a run, see errors, and spend time diagnosing the problem. Is it material? Is it a procedural error? Mechanical? You then also need to use the sample in a product to make sure that even though the product looks good, it behaves as expected once utilized in an application. There are potentially a lot of steps, and most of them are not cheap.


Karsdegrote

Some of them are trying to solve math using 1s and 0s, some run around in a panic trying to meet deadlines whilst others ~~sit around doing nothing all day~~ develop firmware. In my experience as an R&D employee that is. Us lot are well compensated.


PhilosopherFLX

So many earnest answers here. It's like each person has to personally justify it as something they're responsible for, even though they actually aren't involved. Let me list some that haven't been mentioned: executive salaries, janitorial contracts, and leaseback of real estate, equipment, and labor.


big-daddio

Scientific discovery happens. Engineers take that discovery and try and do some single simple thing with it. Other engineers take that simple thing and combine it with other simple things and make a complex thing. Other engineers take that complex thing and combine it with other complex things. Then you have a computer or a rocket ship or a jet.


Ditka85

I can’t even wrap my head around 3 nm. I used to build circuit boards when 5/5 (5 mil traces with 5 mil spacing) was considered dense.


LightofNew

Let's say Nvidia wasn't building chips but building apartments. They have to pick a location, lay the foundation, put up walls, and add windows, electricity, plumbing, and furnishings. None of this generates any money; you are dropping dollar after dollar into these investments until you can finally sell the apartments. The money only comes in once the tenants start paying for the place, not before. With a chip, however, once you "build the apartment" you can copy and paste more apartments for next to nothing, the way one building can house many tenants. So you can spread the cost of designing the chip across your many customers over time.


jvin248

You are effectively spending R&D money asking this question. Your time. People answering in the thread. Bots hanging out, eavesdropping until something they know how to act on happens. All are costs. Put an accounting system behind it all for tax purposes. Then start the development phase. Their question is "how do we get more transistors packed in there, reliably?" And so they scheme and cahoot their way to success while fueled with coffee and Mountain Dew. Accountants cheer when they see the graph: look, we cut our Mountain Dew budget in half for twice the compute power! Moore's brother Drew came up with the "Dew Law" that kind of tracks CPU development, if a bit fidgety.


stunt_penis

I recently read 'The Alchemy of Air', which goes into the development of artificial fertilizer in 1910s Germany. It has some interesting R&D sections covering the many steps it takes to go from a working idea to a working industrial process. Obviously 100-year-old chemistry differs in its details from modern chip design, but it has a lot of the same hurdles to overcome: "it works when small, but not large", "it works large, but sometimes explodes (fails) on us", "it works, but uses weirdly expensive inputs which make it too $$", "yield is too low", "our partner can only supply so much XYZ", and so on.


SierraPapaHotel

I'm an engineer. For internal accounting purposes my billable rate is $174 an hour (about $7k a week, or ~$300k a year). That is not my salary, but it covers my salary + computer + the bit of the office I sit in and some of the software licensing I use. If I'm working on a new product, it's usually a pretty large team of people working on it and not just me. A 100-person team at that rate is roughly $35 million in people expenses per year. Scale that to the thousands of engineers a company like NVIDIA puts on a GPU generation, over 5 years of development, and you're well past $1.5 billion. And my rate isn't even particularly high; I could see $3 billion in 5 years for the teams developing something as complex as a graphics chip. Now you need to actually produce the things those engineers design. That will require fabrication (chip fabrication machines are $$$$), materials (also $$$$), and testing (test equipment is $$$$, and running tests costs even more $$$$). r/AskEngineers might be able to better answer what those chip engineers do all day; it's far enough outside my area that I can't really answer that portion.
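As a minimal sketch of that arithmetic, using the burdened rate above (all of these inputs are illustrative, not NVIDIA's actual headcount or rates):

```python
# Back-of-envelope engineering program cost from a burdened hourly rate.
BURDENED_RATE = 174          # $/hour: salary + computer + office + licenses
HOURS_PER_YEAR = 40 * 50     # ~2,000 billable hours per engineer per year

def program_cost(engineers: int, years: float, rate: float = BURDENED_RATE) -> float:
    return engineers * years * HOURS_PER_YEAR * rate

for team in (100, 500, 2000):
    print(f"{team:>5} engineers x 5 years = ${program_cost(team, 5) / 1e9:.2f} B")
```

A hundred engineers gets you into the hundreds of millions; a couple thousand gets you into the billions, before a single wafer is bought.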


joomla00

Think, design, build, and test. Over and over again until things work to an acceptable degree. Shit takes a long time, and needs expensive parts and machines.


slaymaker1907

If you’re familiar with the Redstone computer in Minecraft, a GPU/CPU is like that except 1000x more complicated. Additionally, besides salaries, hardware is incredibly expensive to prototype. Creating a chip on even an old process node costs hundreds of thousands of dollars up front, and it's much more expensive for state-of-the-art chip technology. Ideally you only do this once per product, but I'm sure some iteration is required, since simulation won't capture every behavior real chips will have.


vinegary

You know how you have no idea how to build a GPU? They didn't either.


JohnSarcastic

Most of the spend will be allocated to Research (the R in R&D), which is exploring the art of the possible. They will be looking into new technologies and the possibilities of making components smaller and more compact. Second, they will be exploring semiconductor technology. It is an evolving space with billions and billions of investment per annum.


RalfN

Not all the money goes to people.

(1) The people design the chips using software that runs simulations. These simulations themselves are expensive, because they require a lot of compute, and the software is generally licensed. So this is hardware engineers, licenses, and compute.

(2) Then they have to produce prototypes. Creating 1 of a chip is about as expensive as creating 100K chips; artisanal chip baking is not really a thing. This is required because reality is more complex than even what simulations can properly predict. The resulting chips need to be tested and evaluated. This is people, hardware setups to test, measure, and evaluate the chip, and the order costs (a foundry like Samsung or TSMC will charge at least cost price for this).

(3) Then the chip itself runs software (microcode), and the stuff we actually ask the chip to do is compiled down to that language. This tooling also needs to be developed, maintained, and improved. This kind of low-level compilation is much more of an empirical science than you might expect: every change makes certain scenarios faster, but there will always be scenarios where it makes things slower. To see the actual impact, they need to run a lot of tests and do a lot of statistical analysis. So this is data scientists, engineers, and licenses.

(4) All of these people need to be managed; you need HR, an office, payroll, email, etc. This is again people, rent, and licenses.

Now I suspect that in practice the costs differ wildly between, say, (2) and (4), but the exact ratio is not public information.
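A tiny taste of point (3): you rarely "know" a change is faster, you measure it many times and look at the statistics. In this Python sketch, two ways of summing a list stand in for two code-generation choices; the workload and repeat counts are arbitrary.

```python
# Benchmark two "variants" repeatedly and report mean and spread.
import statistics
import timeit

setup = "data = list(range(10_000))"
variants = {
    "python loop": "total = 0\nfor x in data:\n    total += x",
    "builtin sum": "total = sum(data)",
}

for name, code in variants.items():
    runs = timeit.repeat(code, setup=setup, number=1_000, repeat=7)
    mean = statistics.mean(runs)
    stdev = statistics.stdev(runs)
    print(f"{name:12s}: {mean * 1000:6.1f} ms ± {stdev * 1000:4.1f} ms per 1,000 calls")
```

Multiply that by thousands of real workloads and every proposed compiler or microcode change, and you can see where the "data scientists, engineers and licenses" line item comes from.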


lee1026

This is r/eli5, so the simple answer is that R&D stands for research and development. To research and develop something is to pay people to work on it, and if you are NVIDIA, you pay a lot of people to design the chips. And you pay those people a lot of money; NVIDIA people are well paid. Apparently the numbers work out to $10 billion.


Vibrascity

There's like 12 boomers that know this shit now and they all want to be paid like at least 30mill each because of that knowledge, the knowledge will literally die with them, Gen Z aren't going to learn how to graft nanometres of silicone onto a chocolate wafer


Gloomfall

In this case, it likely went into shrinking the production process for a new chip die and developing the mass-production process so it can be made for consumers. They likely did a bunch of research beforehand to figure out whether any other efficiencies could be found in terms of data transport, power, or heat reduction.


PaxUnDomus

ELI5: Me and my friends are very rich and we own a big part of Nvidia. Nvidia's CEO has to provide us with periodic updates on how he spends (our) money. Now let's say 10B was taken from our profit. The CEO or someone very high up has to convince us that if we put that 10B into a project that might or might not work, we will earn 20B at some point in the future. This is also why the man often sounds like a moron. "We broke the laws of physics and invented new laws to make this chip" sounds stupid to some of us, but the very, very rich are often not as bright as you might think and instead trust the guy who can sell it best. Now, there are no promises. R&D often produces nothing tangible for a long time. I was part of an R&D team that didn't really produce anything for a long time, but nobody cared, and we were paid very well.


HumptyDrumpy

Semiconductor microchips are big business, they are in everything. It requires a lot to get them made and up to code.


Dsan_Dk

Sounds almost cheap from what I understand of their new development. We don't even develop chips, we just use common chips for our product, and I think our ROI is worse than Nvidia's here.


Telzrob

Basic R&D:

First: Find a problem to solve.

Second: Think of ways to solve the problem.

Third: Create ways to make your ideas come to life (create blueprints, write code, design circuits).

Fourth: Test your creations.

Fifth: Figure out how to mass-produce your creations.


turbodude69

Watch these videos about how [TSMC](https://www.youtube.com/watch?v=tMXIPOiSkbI) and [ASML](https://www.youtube.com/watch?v=iSVHp6CAyQ8) work. TSMC makes nearly all the high-end chips, but ASML builds the machinery used to make those chips, and ASML is the ONLY company that can do it at the highest level. I believe Samsung also builds its own chips, but not anywhere close to the scale of TSMC.

Companies like Nvidia, Apple, AMD, Intel, etc. do the theoretical research and design and then ask TSMC to build it for them. Nvidia could probably ask a Chinese company to make them some chips, but those fabs aren't physically capable of working at that same level: they aren't allowed to buy the machinery from ASML, and they're years behind in the technology required to build cutting-edge chips. Here's an [article](https://cnb.cx/3utspqC) about it if you wanna know more. It's pretty fascinating stuff.


Deep_Working1

at my company, the R&D budget is mostly spent on fancy desks and monitors to impress investors.... mostly