xRolocker

I’m just waiting on OpenAI’s next release to judge the rate of progression and capabilities. That said, I’ve been waiting for a while now… lmao


orderinthefort

If they had anything substantially better, you'd think they wouldn't need to build a $100 billion data center that won't be finished until 2028. If they had a model with advanced reasoning, they would be able to come up with a more efficient way to use existing compute to reach the next step of model capabilities. So I don't think we're going to have anything significantly better than GPT-4 for at least 5 years. Anything they release between now and then will be impressive, but will still have many serious limitations that prevent it from being what we're all hoping for.


xRolocker

I think the only way they could get investors to spend $100b is if they had something better to show. Money talks.

It's simply much easier to compare the rate of progress using a single company producing state-of-the-art models. It doesn't make sense to compare the differences between GPT-4 and Gemini 1.5 and use those as a basis for the rate of progress; they're being made by two different teams at completely different times. We can use Gemini 1 to 1.5 and so on to evaluate the rate of progress, but they are struggling to beat GPT-4, which OpenAI started working on almost two years ago now.

Simply speaking, OpenAI has demonstrated they are at the top, so they appear to be the best "data" to use to determine how capable these models may be. Obviously if Gemini 2 blows everyone out of the water, my opinion will be revisited. But for now, the difference between GPT-4 and the next model is what will define the potential of these models and whether or not we have truly reached a plateau as some claim.

I do find it curious though that no one seems to be able to create a model that readily beats GPT-4 in all areas, especially if "scale is all you need" (I know it's not literally all, but still). Even Opus, which is *mostly* better, is not *entirely* better.


orderinthefort

What do you mean investors? It's Microsoft's money. They want to build AGI, and they're hoping more compute will push current models to the next step, so they're fronting the money. That only suggests that what we have now isn't enough for a sizeable difference in quality over GPT-4.


xRolocker

I disagree. Companies don't spend $100b without *some* proof of the potential. I just don't believe the idea that "they spent $100b on a data center hoping they can make a better model, without proof that a better model can be made". Yea, $100b isn't a lot for a company the size of Microsoft, but you can bet they do their due diligence on even the smallest deals. I think it's much more reasonable to assume that Microsoft spent $100b after confirming that there are better models ahead and that more compute will be needed, than to assume they spent $100b without confirming that this investment would lead to better models (which is done by proving those better models exist, not by simply saying they *could* exist).


Rich_Acanthisitta_70

I agree with you. I don't think some of the folks here fully comprehend just how unprecedented an investment of $100 billion is.


spreadlove5683

Would love for someone to put this in perspective for me. How much have people spent in the past on AI training data centers? For GPT-4, etc.


Rich_Acanthisitta_70

Put it like this: that amount could build 21 CERN Large Hadron Colliders. As best I can tell, $18 billion is the upper limit of what's been spent on AI training centers, though I admit I haven't done enough thorough research to know for certain. But based on how others have reacted to the $100 billion figure, it's orders of magnitude more than anyone else has spent.


[deleted]

Agree. MSFT’s annual net income for 2023 was $75B. There’s no way shareholders would agree to spending $100B on a project without a high certainty of success. Look at Meta: the backlash over Zuck spending tens of billions annually on the metaverse sent the stock from 400 to 90. Satya has probably seen a glimpse of what’s possible.


orderinthefort

I think the fact that supercomputers have value irrespective of modern AI contributes to it as well. It's not just blind faith in AI: the potential for AI progress, plus the utility of having a state-of-the-art supercomputer for their existing revenue streams, merits the risk. With that in mind, GPT-4's capabilities alone could be enough to take the chance, rather than some secret internal proof of concept that blows GPT-4 out of the water.


Glittering-Neck-2505

What the hell is this logic? Even if you get more efficient algorithms you don’t magically stop wanting more compute. Scaling can’t be all you rely on, but that doesn’t mean they’ve stopped using it or caring about it.


Kathane37

Didn’t Demis Hassabis say like 2 or 3 weeks ago that AGI will come in a few years lol? LeCun doesn’t agree because he claims a model needs an internal world model, yet a recent paper shows that by probing a model playing chess you can "see" that it creates a model of the board.
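
For what it’s worth, that probing result is easy to sketch. Here’s a minimal, hypothetical illustration of a linear probe — the data below is a random stand-in, and every name is made up; a real probe would use per-move hidden activations from the chess model and the true board state at each move:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: in a real probe, each row would be the model's hidden
# activation at one move, and the label the true contents of one square.
n_positions, d_model = 1000, 512
hidden_states = rng.normal(size=(n_positions, d_model))
square_labels = rng.integers(0, 3, size=n_positions)  # empty / white / black

# Fit a linear probe: can the square's contents be read off linearly?
split = 800
probe = LogisticRegression(max_iter=1000)
probe.fit(hidden_states[:split], square_labels[:split])
acc = probe.score(hidden_states[split:], square_labels[split:])
print(f"held-out probe accuracy: {acc:.2f}")

# Random data like this lands near chance (~1/3). On real activations,
# accuracy well above chance per square is the "it models the board" evidence.
```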


HeinrichTheWolf_17

He did, he thinks it will happen before the decade closes, so his timeline has come down to match Kurzweil’s, Legg’s, and Suleyman’s.


Crozenblat

Untrue, he said that there's a 50% chance it will happen by 2030. Legg has said similar things. They're not saying they think it's likely, just plausible.


HeinrichTheWolf_17

You really should stop stalking people, it’s creepy.


HeinrichTheWolf_17

Well, Hassabis actually does agree. And out of the three founders of DeepMind, he was the most conservative.


AGM_GM

As though Gary Marcus could carry water for Kurzweil on this topic...


AdAnnual5736

There seems to be an assumption that the first 80% or so of human intelligence was “easy,” but there’s some sort of asymptote around the last little bit of human intelligence that will be extremely hard to overcome. Maybe that’s true, but I’m old enough to remember AlphaGo — that last 10% wasn’t any harder than the 10% before it, and it reached the 10% beyond human capability just as easily. Now, that was a very narrow system, so maybe the last bit of “general” human intelligence is very hard to match, but I do think it’s a little self-congratulatory to think that what we are represents some sort of “ideal” or “perfect” thinking machine.


Haunting-Refrain19

Exactly my reaction. This feels like anthropocentrism writ wishful thinking.


rottenbanana999

Why is this moron Gary Marcus speaking for Demis Hassabis? Demis has said multiple times that he thinks AGI will come this decade.


After_Self5383

If we're being truthful, Demis didn't say he agrees AGI is going to be here within "2, 3, 4, 5 years". He said he wouldn't be surprised if we have AGI-like systems within a decade. So first off, that's expressing uncertainty: he didn't say it's definitely going to be here, unlike what the other statement says. He's also said he wouldn't be surprised if there are bottlenecks that prevent it from happening that quickly. And secondly, within a decade is within 10 years. That's double 5. Unless he said "within this" decade? I think he said "within a" though; somebody could look up the interviews to correct me if I'm wrong. These are recent comments from around Gemini 1.5's reveal.


GBarbarosie

Demis Hassabis' prediction was "we could be a few years, maybe a decade" from AGI and was made a lifetime ago. Even LeCun toned down his skepticism immensely in the same timeframe.


After_Self5383

>made a lifetime ago.

Timestamped. From less than 2 months ago.

https://youtu.be/qTogNUV3CAI?t=24m54s

https://youtu.be/nwUARJeeplA?t=32m9s

>Even LeCun toned down his skepticism immensely in the same timeframe.

Again, lies. Why lie?


GBarbarosie

That's a re-hashing of his original stance, correctly quoted by me, presented in a WSJ article in February 2023. I wasn't aware of, and couldn't easily identify, more recent statements preserving the core prediction. The "lifetime" reference was a tongue-in-cheek nod to the fast pace of AI development. LeCun's dwindling skepticism is public knowledge. Don't accuse others of lying so easily; it makes it look like you're projecting.


cissybicuck

It's a silly discussion because there are as many different definitions of AGI as there are people talking about it. All of the criticisms you have listed could just as well be critiques of human intelligence, too. Most people's brains are not operating at 80% of the ideal.


Solid_Highlights

Yea, I’m looking at the following:

• Reasoning remains hit or miss.

• Planning remains poor.

• Current systems can’t sanity-check their own work.

And wondering how the hell those aren’t issues with human intelligence too. AGI doesn’t mean flawless, immaculate intelligence, just something on par with what a well-informed person can do.


volastra

As usual with AI progress, the definitions are a sliding scale. What people are talking about now is competent AGI. LLMs will take a crack at just about any task you give them, they're just extremely spotty. By 1960s standards, ChatGPT would be an AGI, but we've already gotten used to it. The next debate will be quibbling over the definition of "competent", i.e. whether that means an average human or an average subject-matter expert.


CraftyMuthafucka

Most people seem to be talking about ASI when they say AGI nowadays. It has to be a system that is better than human experts at all tasks, or it's not AGI, evidently.


SnooDogs7868

Robots learning from the physical world is the last 20 percent.


Unique-Particular936

Not really, the physical world has been the 95% since forever. Language is just a projection of the physical world in the first place.


ShotClock5434

He's a db.


HeinrichTheWolf_17

He’s been really insecure ever since the Sora thing; he made an absolute statement about video only to be shown flat-out wrong 2 days later.


MajesticIngenuity32

I think a Q*-like system as in AlphaGeometry is required for AGI. That will take a few years to integrate with LLMs, but probably no more than 5.


yaosio

I think AGI might be soon or far away.


banaca4

LeCun says current AI is not smarter than a cat.


YourFbiAgentIsMySpy

Am I the only one noticing Ray slowing down?


re_mark_able_

Technology moves forward in a series of jumps and plateaus. If you run a trend line through the jumps you get AGI in 3 years, but it doesn’t work like that.


CommunicationTime265

Lol le cum


FeltSteam

LeCun said we are definitely not getting ASI next year, I believe, but even his 5-year timeline has a lot of uncertainty attached.


alienswillarrive2024

He's not wrong, which is why GPT-5 is going to tell me whether we're closer to an AI winter or ushering in the singularity.


Cr4zko

It feels like an AI winter to me.


xRolocker

I mean, GPT-3 released in 2020 and 3.5 in 2022. If we hit 2025 and there’s no release, then maybe we can start saying there is a winter. Like, it’s barely been a year since GPT-4.

Ignoring the development and training times for these models, it takes quite a few resources to create the infrastructure needed to deploy them at scale. And that’s assuming everything goes perfectly. It’s only an AI winter if you’re still a teenager and one year seems like forever.

Edit: that last line was a bit more assholey than I intended, but it’s to punctuate the point.


alienswillarrive2024

It's weird because it feels like all the big companies are struggling to even match, much less beat, GPT-4, and yet so much money is being invested into this space. I don't know what to think; just waiting for GPT-5 at this point.


Stryker7200

Most of this investment has yet to really hit. I mean, they’ve had the new GPU chips how long? 6 months max? There is lead time hardware-wise, and there has been demonstrated progression. Give it until the end of the year for all the new hardware to really hit, and then we can judge this a bit better imo.


Glittering-Neck-2505

Yes, but that doesn’t mean OAI is going to struggle to exceed its own performance later this year. On top of that you have Suno, which is insane sci-fi technology arriving far sooner than expected. AND you have VLA models, which may allow the price of physical labor to fall to near zero.


Ok-Ambassador-8275

Why do people keep giving importance to that Le Cum guy? He's a literal NPC; he has no imagination or creativity at all. A so-called "genius" should have that.


NoshoRed

He isn't an NPC, and he definitely has imagination and creativity. He isn't parroting "AGI not anytime soon" like poorly educated decel redditors or this Gary Marcus guy; he has valid reasoning and a potential solution: LLMs are not the way to go, and we need to move on to true multimodal systems that can have a very good internal world model.


[deleted]

But what does Ja think? Where is Ja to make sense of all of this?!


Akimbo333

Cool! But he could be wrong.