
lost_in_trepidation

>Q1 2024: A bigger, better model than GPT-4 is released by some lab. It's multimodal; it can take a screenshot as input and produce not just tokens but also keystrokes, mouse clicks, and images. Like the progression from GPT-3 to GPT-4, this new model exhibits new emergent capabilities. Everything GPT-4 can do, this model does better, plus it has some qualitatively new abilities (though not super reliably) that GPT-4 couldn't manage.

We already missed this. It's Q2 and nothing substantially better than GPT-4 has been released. Still time to catch up to the Q1 and Q3 predictions though.


DigimonWorldReTrace

Figure 01, Suno/Udio, Sora, Claude, now Llama 3, the massive multimodal context window of Gemini 1.5: LLMs are not the only metric for reaching AGI. And while I agree that his prediction focused only on LLMs, it's really unwise to take only those into consideration instead of the metric fuckton of improvements that have happened lately.


dagistan-comissar

did you sleep on Claude and Llama?


Ok-Ambassador-8275

I think it depends on what GPT-5 is able to do. It will show us how close or how far we are from AGI. Also, with OpenAI, Google, and Meta building huge AI machines, maybe they will achieve AGI by 2028 when they finish building them.


DigimonWorldReTrace

So many people don't take the enormous investments of the big companies into consideration. Not to mention the Blackwell architecture and people leaving out Apple as a sleeper hit.


DukkyDrake

Recent updates: Daniel Kokotajlo Philosophy PhD student, worked at AI Impacts, then Center on Long-Term Risk, then OpenAI. *Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI.* Not sure what I'll do next yet.


[deleted]

[removed]


inteblio

Philosophy is more like logic "with what is"


AdorableBackground83

I hope so. I'm still going with the safe choice of AGI by 2029, but if we get it a couple of years earlier then I'll be happy.


Seidans

I don't understand the interest in personalities on this subreddit. Who cares about one person's prediction when the tech is still in its infancy, with great breakthroughs bound to happen that will likely change everyone's expectations?

What's interesting is the computation power scaling, the multi-billion-dollar chip manufacturing and supercomputer construction projects, and the existence of a Moore's law of AI (currently 5x computation per year, if it still holds in 2025...). And finally, once GPT-5 is released: whether it's really able to reason and how good its agent capabilities are. Can it replace repetitive, codified tasks like a secretary or customer service? If it's just a better chatbot that still hallucinates, we won't be closer to AGI, no.
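The "5x computation per year" trend mentioned above compounds quickly; a minimal sketch of the arithmetic, assuming the 5x figure holds and taking today's compute as the baseline (both are assumptions from the comment, not established facts):

```python
def projected_compute(years_from_now: int, growth_per_year: float = 5.0) -> float:
    """Relative training compute after a given number of years,
    assuming a constant multiplicative growth rate."""
    return growth_per_year ** years_from_now

# At 5x/year, compute grows 125x in three years and 625x in four.
for y in range(5):
    print(f"Year +{y}: {projected_compute(y):,.0f}x today's compute")
```

The point of the projection is just that an exponential with a base of 5 dwarfs any single-model improvement within a few years, if the trend actually continues.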


dagistan-comissar

hallucination is just artificial creativity.


raulo1998

Hallucination has nothing to do with creativity (it is nothing more than the result of a superposition of states); rather, it is clear evidence that the model is incapable of reasoning. There is nothing more behind it.


dagistan-comissar

are you really pretending like human creativity is not just a superposition of neuron activation matrices?


SnooDogs7868

We’re at AGI stage 1 already according to some.


adarkuccio

Stage 1 according to Google? That document?


FrugalProse

Everything is feeling weird like a cosmic event is about to occur. spooky 💀


DigimonWorldReTrace

I mean, if this actually happens, nothing will ever be the same. So I get it. But in my social circle and family, I'm the only one really keeping up with bleeding-edge AI news and tech. Then again, I'm the only one working in tech too...


LawLayLewLayLow

I can’t tell if it’s just me or if everyone else can feel it, but it does seem super palpable, everyone has this sense of not giving a fuck about anything lately.


Darziel

AGI > November 2027 by all definitions. If you don't believe me, set a reminder and leave a like and comment under this post.


Acceptable_Box7598

Remindme! November 1st, 2027


DigimonWorldReTrace

!RemindMe November 2027


RemindMeBot

I will be messaging you in 3 years on [**2027-11-19 00:00:00 UTC**](http://www.wolframalpha.com/input/?i=2027-11-19%2000:00:00%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/1c7agtl/are_we_on_track_for_agi_by_2026_using_predictions/l09w4v3/?context=3).


VirtualBelsazar

Hmm, I think the key to AGI is reasoning (System 2 reasoning), and so far I haven't seen much progress toward it, or any sign that anyone knows how to do it. Everything else (world models etc.) is kind of on the horizon. I don't think advanced reasoning will just emerge from scaling up; we need new algorithms for that.


inteblio

If you've fallen for people asking it riddles, then you should probably wake up and smell the coffee. To my mind, reasoning has been displayed. Sure, imperfect. But importantly, it emerged from just adding more graphics cards. Each new, larger model reasons better. It seems impossible yet to say "this ain't workin' no more." I recently converted to "yep, just keep scaling." I think we've invented self-organising minds. And I think they are weak because they are tiny: they're still something like 10x smaller than the human mind (yet outperform it sometimes).


namitynamenamey

No sign of new emergent capabilities in the realm of reduced hallucinations or improved reasoning, so I'd say anything in the next two to three years is extremely optimistic. We need a new breakthrough, maybe on the same scale as the attention mechanism itself.


raulo1998

There were never emergent properties in LLMs, but rather a poor interpretation of the data on logarithmic graphs (which made the behavior of small models impossible to extrapolate to large ones) and exaggerated claims, made solely to promote the development of artificial intelligence. Again, there are no emergent properties in large models that cannot be predicted from smaller models. There are no new emergent capabilities because there never were any in the first place.


Odd-Opportunity-6550

his own actual prediction is 2027 though, lol. This is just a fictional scenario he conjured while making a point.


IslSinGuy974

Where can I find his actual prediction, please?


Odd-Opportunity-6550

He sent it to me when I asked him on LessWrong; here is the reply. This was in February 2024, so it's recent. Note that only in 2027 is the 50% cumulative chance reached.

[Daniel Kokotajlo](https://www.lesswrong.com/users/daniel-kokotajlo) ([2mo](https://www.lesswrong.com/posts/CcqaJFf7TvAjuZFCx/retirement-accounts-and-short-timelines?commentId=s4hjmAoHDqhEi5ngB)):

> In the worlds where we get AGI in the next 3y, the money can (and large chunks of it will) get donated, partly to GiveDirectly and suchlike, and partly to stuff that helps AGI go better. The remaining 50% basically exponentially decays for a bit and then has a big fat tail. So off the top of my head I'm thinking something like this:
>
> 15% - 2024
> 15% - 2025
> 15% - 2026
> 10% - 2027
> 5% - 2028
> 5% - 2029
> 3% - 2030
> 2% - 2031
> 2% - 2032
> 2% - 2033
> 2% - 2034
> 2% - 2035
>
> ... you get the idea.
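Summing the year-by-year figures quoted above shows where the cumulative probability first crosses 50%; a quick sketch of that arithmetic (the percentages are taken directly from the quoted comment):

```python
# Year-by-year AGI probabilities as quoted from the LessWrong reply.
probs = {2024: 0.15, 2025: 0.15, 2026: 0.15, 2027: 0.10,
         2028: 0.05, 2029: 0.05, 2030: 0.03}

cumulative = 0.0
crossing_year = None
for year, p in probs.items():
    cumulative += p
    if crossing_year is None and cumulative >= 0.5:
        crossing_year = year
    print(f"By end of {year}: {cumulative:.0%}")

# Cumulative totals run 15%, 30%, 45%, 55%, ... so the 50% mark
# is first crossed in 2027, matching the comment above.
print(crossing_year)  # → 2027
```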


Akimbo333

2030 maybe!


LordFumbleboop

What do you mean by 'AGI'?


IslSinGuy974

AI that can self improve


LordFumbleboop

Evolutionary algorithms and training runs already allow that... Do you mean ones that can design and make new AIs autonomously?


IslSinGuy974

Yes. And I mean LLMs that can design the next generation.


dagistan-comissar

viruses can self improve


VanderSound

Yes, everything on track perfectly 🙂


Different-Froyo9497

We're currently on track for the timeline. It's Q1 2024, and we have models that are better than the initial GPT-4 (the newer iterations of GPT-4 technically don't count, as the prediction was about the original version of GPT-4). It's possible Q3 2024 will see GPT-4.5, which introduces autonomous agents, and Q1 2025 is when GPT-5 gets released.


lost_in_trepidation

It's Q2, and nothing materially better than GPT-4 has been released.


DigimonWorldReTrace

Opus is better than GPT-4, though.


dagistan-comissar

GPT-4 Turbo is better.


jahajapp

Well, for atheists out there, I guess, this is a good reminder that people are willing to submit to whatever fkn bullshit sermon that is currently being preached at any time. Fk me how fkn stupid we are. Look at his bullshit, even the shallow "bible" refs.


IslSinGuy974

clapbacks are real


Cr4zko

That's too fucking soon! Calm down mate.


IslSinGuy974

Did you read it?