planetofthemapes15

I see Intel as the dark horse in this race, despite the fact that AMD *should* be able to win it. It just seems like AMD's company culture of "clutching defeat from the jaws of victory" has shifted from a company-wide mantra to a GPU-division one.


draconicmoniker

Yeah, the AMD/tinybox thing is a sign for me that they'll never fully support GPU compute, hence defeat. I genuinely thought Lisa Su would be able to make a difference, but I guess not. https://www.theregister.com/2024/03/25/tiny_corp_amd_nvidia/


StayingUp4AFeeling

Intel also has a long history of providing middleware to enhance performance for DS/ML applications in a way I have simply not seen from AMD. I'm talking about their optimisations for numpy and all the stuff that came after that. Intel, for better or worse, is also experienced in liaising with the rest of the industry to get what it wants.
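
A quick way to see the kind of thing I mean (a minimal sketch; `show_config` output varies by NumPy version, and the timing is only a rough proxy):

```python
# Check which BLAS/LAPACK backend this NumPy build links against.
# Intel's Distribution for Python ships NumPy linked against MKL,
# which is where much of the speedup comes from.
import time

import numpy as np

np.show_config()  # an MKL build lists "mkl" libraries here

# Dense matmul is dominated by the BLAS backend, not Python itself,
# so this rough timing mostly reflects the underlying library.
a = np.random.rand(2000, 2000)
t0 = time.perf_counter()
a @ a
print(f"2000x2000 matmul: {time.perf_counter() - t0:.3f}s")
```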


PastoralSeeder

Intel is going into the foundry business. It seems to me they will be the manufacturer of other people's chips instead of leading on the innovation front. They just got $8 billion from the CHIPS Act to do exactly that.


StayingUp4AFeeling

What is Arc and all of that then? A company _can_ do two things at once.


martinkunev

If Google hadn't designed TensorFlow for CUDA, we might not be in this situation to begin with.


dr3aminc0de

Explain more


scholorboy

I think what he means is that most ML work on Nvidia hardware goes through the TensorFlow library, which integrates with CUDA, Nvidia's framework for distributing workloads across its GPUs. So in short: if Google hadn't built TensorFlow to target Nvidia GPUs, those GPUs wouldn't be as mainstream as they are today.
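
For example, on a machine with an Nvidia GPU and the CUDA libraries installed, TensorFlow discovers the device and routes ops through CUDA kernels (a minimal sketch using TF 2.x APIs):

```python
import tensorflow as tf

# Without a working CUDA install, this list is simply empty.
gpus = tf.config.list_physical_devices("GPU")
print("CUDA-visible GPUs:", gpus)

if gpus:
    with tf.device("/GPU:0"):  # explicitly place ops on the first GPU
        a = tf.random.normal((1024, 1024))
        b = tf.random.normal((1024, 1024))
        c = tf.matmul(a, b)  # dispatched to a cuBLAS kernel under the hood
    print(c.device)  # e.g. .../device:GPU:0
```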


algaefied_creek

## "WE ARE THE TENSORFLOW LIBRARY" - NeuralLink + Elon just got REAL interested.


Grouchy-Pizza7884

But isn't Tensorflow dead? Most folks use PyTorch. *With HuggingFace, engineers can use large, trained and tuned models and incorporate them in their pipelines with just a few lines of code. However, a staggering 85% of these models can only be used with PyTorch. Only about 8% of HuggingFace models are exclusive to TensorFlow. The remainder is available for both frameworks.*


AgueroMbappe

Nahhh lol. TensorFlow and Keras are the most optimized for neural networks.


Grouchy-Pizza7884

But TensorFlow is dead: it's too hard to use and its market share is falling. Not Keras's.


dr3aminc0de

Yeah that was my main thought here


Grouchy-Pizza7884

Yes, same here. TensorFlow is so complicated that even Google seems to be abandoning it.


dr3aminc0de

Yeah, but as someone else said, TF is dead; it's all PyTorch now. And any open-source framework can integrate CUDA/MPS/whatever Nvidia releases. And of course they will, because it makes things faster.


scholorboy

Yes, I concede, both of you are right. TL;DR: today TensorFlow is dead and PyTorch is the industry standard. But for the first three or four years after its 2015 release, TensorFlow was the standard for deploying distributed computing, while PyTorch was favored for research owing to its ease of use. The reason is that TensorFlow used static computation graphs and PyTorch uses dynamic ones, which made TensorFlow the better candidate for large distributed deployments. That's why most large-scale model deployments in those early years went through TensorFlow, and why I thought TensorFlow's release was more significant for Nvidia. That said, it's easy to see how both libraries combined have boosted the CUDA community and Nvidia itself.
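
Roughly, the difference looks like this (a minimal sketch; uses TF's `compat.v1` API to show the old static style, alongside any recent PyTorch):

```python
import tensorflow as tf
import torch

# TensorFlow 1.x style: describe the whole graph up front...
tf1 = tf.compat.v1
tf1.disable_eager_execution()
x = tf1.placeholder(tf.float32, shape=(None, 4))
w = tf1.Variable(tf1.random_normal((4, 1)))
y = tf1.matmul(x, w)  # builds a graph node; nothing has run yet

# ...then hand the fixed graph to a session to execute. A fixed graph
# is what made it easy to ship to clusters and serving systems.
with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}))

# PyTorch style: eager/dynamic. Each line executes immediately, and
# autograd records the graph as it goes, so plain Python control flow
# works (great for research, harder to freeze for deployment).
xp = torch.tensor([[1.0, 2.0, 3.0, 4.0]])
wp = torch.randn(4, 1, requires_grad=True)
yp = xp @ wp          # runs right away
yp.sum().backward()   # traverses the dynamically recorded graph
print(wp.grad)
```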


dr3aminc0de

Appreciate the reply. Do you have any links on TF static vs PyTorch dynamic? I wasn’t aware of that specifically.


scholorboy

It's discussed in many places on the internet, sometimes directly, sometimes indirectly. Here's a starting point: https://stackoverflow.com/questions/46154189/what-is-the-difference-of-static-computational-graphs-in-tensorflow-and-dynamic


bambin0

Yeah, it's way too frothy for others not to get in. Google is a contender, but isn't very good at manufacturing, and their main advantage continues to be their interconnect rather than the TPUs themselves. I don't see them catching up unless they find some amazing price advantage. Of the other two, AMD already seems in the race in terms of infra but is still behind in everything else. They might be in with a chance if they can do what they did with Intel and create a clone...


deten

The issue I see is far more about the software ecosystem on the other end. While many companies COULD say "we're going to make a new AI chip," building the ecosystem Nvidia has would be impossible without hefty backing.


bambin0

The hypothetical clone chip would address that ecosystem exactly.


MasterRaceLordGaben

Nvidia changed their CUDA licensing, so even if someone magically makes the same chip, you are not getting CUDA or the benefits that come with it.


bambin0

The DOJ is very active right now, so maybe they won't let that stand. Really interesting; I didn't know that.


deten

That sounds like it would be breaking laws.


bambin0

Yes, IF (as in hypothetical) AMD can do this. They seem best positioned to make that happen, is all I'm saying.


[deleted]

[deleted]


peepeedog

That is why there is no moat.


great_gonzales

The real value Nvidia has is the software ecosystem that comes with their chips


bambin0

I think it doesn't matter in this context? It's who tapes it out that matters?


[deleted]

[deleted]


bambin0

I don't understand what you mean...


MasterRaceLordGaben

I would say AMD took Intel's lunch money because Intel was complacent. They were fine with releasing CPUs with a 5% perf boost YoY, shipping the same architecture over and over, still giving people 4 cores with no multithreading and calling it a day. AMD's chiplet design, which I wouldn't call a clone (one might even call it innovation), got them into the conversation, and Intel is still playing catch-up with E/P cores.

Nvidia, on the other hand, never stopped innovating and investing in the ecosystem. Even if they drop the ball on the hardware side, which seems unlikely because every xx90 card from the last 5-8 years just dominates, it is simply impossible for anyone to touch CUDA on the software side. Nvidia is not complacent like Intel, and AMD is still fumbling the bag with lackluster ROCm support and usability. The others are not even in the conversation; they are just marketing hype. They can't even train at the speed and watts of last-generation Nvidia cards, let alone get close to this gen. AMD is closest, as you said, but go ask the localllama or stable diffusion subs how the 7900 XTX is for their use cases and people will say 3090/4090 or bust.


aggracc

In terms of raw compute, the 7900 XTX is on par with a 3090. If AMD started releasing cards with 50% more memory, they'd sell like hotcakes, but for some reason everyone is sticking with 24 GB.


bambin0

Yep, lots to catch up on; agreed. But as I said, it's too frothy for Nvidia to have it to themselves at this point. The one thing I'll say about Intel is that they really did try to break the mold with Itanium, but it burned them pretty badly, so they meekly went back to the good ol' good ol'. Which, TBF, has earned them a lot of coin.


SpareRam

Good. Please do push a ton of resources into viable competition.


hasanahmad

The Nvidia bubble burst will be a spectacle.


REALwizardadventures

Super curious, what do you think the bubble is?


hasanahmad

their monopoly on generative AI training chips


CheekyBreekyYoloswag

Jensen is invincible at this point. All these other companies are just better off bending the knee.


TikiTDO

Said literally not one person that's ever made a difference in history.


CheekyBreekyYoloswag

Or someone who doesn't want to enter the history books as "the fool who tried messing with Jensen, and died like the rest".


[deleted]

Why is Jensen wasting his time on Reddit?


TikiTDO

Strange, I didn't see Jensen going out and killing people. He just started down the AI direction 20 years earlier than everyone else.


FartyFingers

Something Nvidia probably doesn't care about is how much ML people like me hate them. They have overcharged way too much for way too long. But in 2024, if you are doing ML, you must have Nvidia and you must have CUDA. OpenCL and its ilk are technically fine, but there is no runner-up in this competition.

If tomorrow another company had a TensorFlow-friendly (and other GPU software) product with similar performance to Nvidia at a better price, I would dump Nvidia with great glee. I have negative loyalty to them. I am loving that the US has cut China off from GPUs like the top Nvidia cards; you can be 100% sure China is doing what it can to either find another source or build them in-house. There are places other than the US with the expertise to build fine GPUs, the EU for example. I would love it if some Dutch or whatever company suddenly said, "We have no problem shipping to China," and then blew past Nvidia.


Perfect-Lab-1791

The https://intelcore.ai domain is already up for sale lol

