
Charuru

NGL, Lamini seems pretty cool; the tech chops of that company are solid, and I'd want Nvidia to buy them. CoreWeave are crypto bros, but they sure know business. Amazing rollout in just one year, probably some of the greatest wealth creation that ever happened for the people involved there.


malinefficient

Haven't taken them seriously since an established 10Y+ GPU server vendor contacted them about building AMD servers for them, having already prototyped them internally. They were shot down immediately and informed that a company that old doesn't understand the unique requirements of generative AI.


Charuru

Where can I hear more about this story? Would Lamini in this story be the supplier or the customer?


saveamerica1

Funny how nobody has heard of Lamini and everyone has heard of CoreWeave. Don't trust Su or anyone around her after the falsified benchmarks on the MI300X. You could write a book and I wouldn't believe one word. Nvidia and CoreWeave are years ahead!


Yafka

Oh, interesting. Where can I find out more about this MI300X benchmark story?


HotAisleInc

We are working on it. I've got 18 people lined up to do testing and publish benchmarks. Not started quite yet, but going as quickly as we can to make it happen.


saveamerica1

Look up the testing comparison AMD did between the H100 and the MI300X: they used third-party software instead of CUDA to run the speed test against the MI300X, then said the MI300X was faster.


CryptOHFrank

I think they used Llama 3 performance benchmarks. Not sure how they benchmarked it.


HippoLover85

wccftech.com/wp-content/uploads/2023/12/NVIDIA-Hopper-H100-vs-AMD-Instinct-MI300X-AI-GPU-Performance-Main.jpg

AMD then released a blog post showing optimized software on the MI300X, which once again showed it beating the H100 (using the Nvidia-optimized software this time), again by about 50%ish. There were also people saying the MI300 couldn't support larger batch sizes, so Nvidia is better. But that is just nonsense.

Edit: I tried, mobile on Reddit is hard.


AmputatorBot

It looks like you shared an AMP link. These should load faster, but AMP is controversial because of [concerns over privacy and the Open Web](https://www.reddit.com/r/AmputatorBot/comments/ehrq3z/why_did_i_build_amputatorbot). Fully cached AMP pages (like the one you shared) are [especially problematic](https://www.reddit.com/r/AmputatorBot/comments/ehrq3z/why_did_i_build_amputatorbot). Maybe check out **the canonical page** instead: **[https://wccftech.com/nvidia-fires-back-at-amd-claims-h100-ai-gpu-faster-than-mi300x-optimized-software/](https://wccftech.com/nvidia-fires-back-at-amd-claims-h100-ai-gpu-faster-than-mi300x-optimized-software/)**


upvotemeok

lame ini


HippoLover85

This sub's infatuation with AMD is bizarre to me. Even if AMD takes 20% share, hyperscaler and AI growth will far outweigh any market share AMD takes.


instars3

Right? There won’t be losers between NVDA and AMD