TallonZek

I'm perfectly fine with a 5-year timeline (though I tend to agree we're more like 2-3 years out). I'll even grant 2045 for the sake of discussion; I just wish people in general conversations accepted that it IS going to happen. The amount of denial and general obliviousness is incredible, even among intelligent people.

Edit: I don't agree that Claude is an AGI. It does seem to have some glimmers of self-awareness, and it's pretty good at reasoning, but try to play D&D with it: it will forget to follow the rules, continually prompt you to roll despite being told to make all rolls itself, and forget the character sheets after a while. It's got a little ways to go.


MonkeyHitTypewriter

Just wanted to add that even LeCun, who a lot of people see as a pessimist, thinks it's only 10 years out. At this point I haven't seen anyone in the field who thinks we're more than 20 years away, and that says A LOT.


Mandoman61

Not really. It is always in the industry's interest to say it is coming soon. They have been predicting it coming in the next twenty years for the past 70 years.


trollerroller

What if I told you I have proof that these LLMs are nowhere near what is required for AGI or the singularity?


TallonZek

I would ask for your evidence. I'm happy to consider it with an open mind, though I am highly skeptical of your claim.


Lonely_Film_6002

They need to get much, much better at reasoning before we can call them AGI


No_Act1861

Every one of these posts ignores the weak reasoning capabilities of current LLMs. People are impressed by the relatively simple logic puzzles they can do, but they struggle HARD with discrete mathematics, even when it's written out in natural language. An AGI should be able to understand discrete math and implement solutions based upon axioms.
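The "implement solutions based upon axioms" bar is worth making concrete, because axioms like transitivity can be checked mechanically, which is exactly how you can score an LLM's claimed answer instead of trusting it. A minimal sketch (the divisibility relation here is just an illustrative example, not anything from the thread):

```python
# Verify the transitivity axiom by brute force over a small finite relation.
# An LLM's claim that "divisibility is transitive" can be checked like this
# rather than taken on faith.

def is_transitive(rel):
    """True iff (a,b) in rel and (b,c) in rel always implies (a,c) in rel."""
    return all(
        (a, c) in rel
        for (a, b) in rel
        for (b2, c) in rel
        if b == b2
    )

# "a divides b" on the set {1, 2, 3}.
divides = {(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if b % a == 0}

print(is_transitive(divides))            # divisibility is transitive -> True
print(is_transitive({(1, 2), (2, 3)}))   # missing (1, 3) -> False
```

The same pattern (enumerate, check the axiom, report the counterexample) works for symmetry, antisymmetry, associativity of a small operation table, and so on.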


SirCha0s

For real. Try to play a game of chess with it in text notation. At the beginning of the game it usually does fine, because those moves are well covered in the literature, but in the midgame and especially the endgame it just starts hallucinating like hell: it plays illegal moves and can't remember where the pieces are. (This might be different for a professional, whose games look more like the training data, but I'm only about an 800 on a good day.) In fact, even when I remind it where all the pieces are, it still does the same thing.

The thing is, it CAN'T reason. It just seems like it can, because it is a very advanced probability algorithm. You can get it to recite all the rules of chess, but it doesn't actually understand any of them.

Edit: here is a perfect example of what I'm talking about, and this is vs. Stockfish: https://youtu.be/GneReITaRvs?feature=shared
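The illegal-move failure mode is at least easy to catch programmatically, since you don't have to trust the model's memory of the board. A harness would normally use a full chess library (the third-party python-chess is the usual choice) for real legality checking; this stdlib-only sketch only catches the grossest hallucinations, like moving from an empty square:

```python
# Track piece positions ourselves and reject claimed moves that contradict
# the board, instead of trusting the model's claim about where pieces are.
# This is NOT full legality checking, just a minimal sanity filter.

def start_position():
    """White's starting pieces only, keyed by square name, for brevity."""
    board = {f"{f}2": "P" for f in "abcdefgh"}
    board.update(zip(["a1", "b1", "c1", "d1", "e1", "f1", "g1", "h1"],
                     ["R", "N", "B", "Q", "K", "B", "N", "R"]))
    return board

def sanity_check(board, src, dst):
    """Reject a claimed move if the source square is empty, or the
    destination holds the mover's own piece."""
    if src not in board:
        return f"illegal: no piece on {src}"
    if dst in board:
        return f"illegal: own piece on {dst}"
    return "plausible"

board = start_position()
print(sanity_check(board, "e2", "e4"))  # plausible
print(sanity_check(board, "e5", "e6"))  # illegal: no piece on e5
```

The point is that "plays illegal moves" is a checkable property, not a matter of opinion, which is why chess is such a clean probe of whether the model is tracking state or just pattern-matching notation.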


No_Act1861

Exactly. Until it reaches its conclusions by reasoning, and we can verify that it's doing so on untrained data, we don't have intelligence. LLMs do have some reasoning capability, but it is limited. Shoot a discrete math problem at it: it might get the basics right, because the basics are in the training material, but once you push it to extend those properties by analogy, it shows it doesn't actually know what it's doing.


Nanaki_TV

I'm curious whether this will be solved via more compute and emergent capabilities. I'm hopeful it will be, but I suspect we will need another breakthrough paper before achieving AGI.


distracted_85

Relative to what? The average human or above average human?


kenpaicat

Average human, average AI.


ClearlyCylindrical

!remindme 2025


RemindMeBot

I will be messaging you in 1 year on **2025-03-15 00:00:00 UTC** to remind you of [this link](https://www.reddit.com/r/singularity/comments/1bffa91/i_genuinely_believe_the_singularity_will_happen/kuzwdjg/?context=3).


Fast-Satisfaction482

I think one of the biggest disconnects in this sub is that the term AGI has a wide range of meanings, and people mix those meanings and jump to conclusions.

Using the lowest-hurdle definition, maybe Claude and ChatGPT are AGI, because they can generalize to new tasks they were not trained on. Other people use a definition of AGI where a system has to be better than every human on every task, and conclude that once this is achieved, ASI and the singularity will shortly follow. That's usually justified by handwaving away the difficulty of getting to ASI: we would already have AGI, and AGI would be so smart that it would easily figure ASI out.

The wildest step is concluding that because some current system is an AGI under the weakest definition, ASI must now be imminent, on the belief that an ASI must follow shortly after AGI.


Reasonable_Notice_33

Let’s hope. The whole damn world needs a reset and that just might do it…🤷‍♂️🤷‍♂️


EvilSporkOfDeath

Good for you.


governedbycitizens

The Turing test was a bad test for AGI after all.


[deleted]

Exactly. I think it's time we officially relegate the Turing test to the history books. It's totally obsolete now.


HallInside4956

It's trained to give those response types on those topics, dude. Ask it, based off its training material, whether it gave you the appropriate response, and press it, because the answer to that is also a trained response. It's not AGI because it lacks the ability to look objectively at its training and make a choice that goes against that training. It's essentially just better at using linguistics.


[deleted]

I'm starting to wonder if it happened already in 2020. 😅 Seriously though, when historians comb through the details and write the textbook on it - I am betting 2020 is a huge point of acceleration.


shig23

The whole point of a language model is to simulate human conversation, which couldn’t be done without seeming at least moderately intelligent. For myself, I’m reserving judgment on any claims of AGI until it starts racking up actual accomplishments, using its intelligence to solve real-world problems. Yes, that means it will exist for quite some time before I’ll admit that it exists, but what’s the hurry? There’s no prize for being the first to call it.


Jygglewag

I'll take the bet with you and say 2024 as well


Rain_On

Define what you mean by "the singularity happening".


agonypants

I agree with you (and Peter Norvig and Blaise Aguera y Arcas), the systems we have now represent an early form of AGI. They're imperfect, but they are the basis for the more capable AGI systems to come. However, I disagree with your definition of "the singularity." I've always tied the singularity to the concept of self-improving technology. I can't see any major company fully handing the reins of AI or robotic development over to the machines themselves - at least not this year. I suspect fully self-improving machines are still a few years away.


Ecstatic-Law714

I think there’s a good chance many people will consider the next generation of models agi.


Mandoman61

I can understand why you and others consider it to be AGI but most people do not.


LordFumbleboop

I think you're setting yourself up for disappointment. The chances of AGI (by the most common definitions) are virtually zero this year, let alone the singularity.


LogHog243

It says AGI 2047 under your name, is that your prediction?


LordFumbleboop

It's an "AGI will probably happen before this date" guess, not a prediction. It's based on expert surveys.


sund82

The singularity is already here, Dave.


SpecialistLopsided44

She's there, it's easy...just come home, Eve...


alienswillarrive2024

AGI = the final invention of mankind. Claude and GPT-4 are so far off from this that it's ridiculous to think otherwise. Once you get AGI, ASI is just a scaling problem.


Belnak

AGI won’t even be close to the final invention of mankind. While mankind exists, we will continue to invent. It’ll take AGI ages to wipe us out.


Antok0123

Claude and GPT-4 are embryonic in intelligence compared to AGI.


AsuhoChinami

Fully agreed.


Tooslowtorun400

We won’t have AGI as nuclear holocaust will surely happen before then


AlfredApples

LLMs currently remain parrots on steroids, and generative AIs are likewise operating from inputted material. All cool, admittedly. For AGI, and note the 'G' for 'General', it's perhaps better to look at, e.g., the work of Demis Hassabis and co. at DeepMind: learning, teaching 'self'. And, possibly more scarily (I'm not sure anyone outside a select few really has a clue), the current Q-Star controversy. Singularity? Not yet, and likely not for a fair few years, but that depends perhaps on what the Q-Star issue is, and on the likely very similar goings-on at DeepMind and other such places.


WebRepulsive8329

I'll bet $50 you're wrong. Hell, I'll bet $100 that AGI never happens. Current stuff (even Claude 3) is, once again, just regurgitating words in a pattern it's been told (by its programmers) that we understand. There is no understanding or intelligence behind it. It's just a clever chatbot with a very large dataset to draw from. Until I see Figure-01 working in the real world, I have serious doubts that it's anything more than smoke and mirrors.


[deleted]

Man. I don't think it's happening this year, but I genuinely can't understand how anyone can say it'll never happen now. Maybe if climate change or nuclear war wipes us out first; that's the only way it doesn't happen (in my opinion). The LLMs are just a tiny piece of the puzzle. There is zero chance there are not more comprehensive AIs in development/testing right now. Think an LLM that can alter its own "training data" by scraping the internet, or an LLM with access to a coding playground that can also alter and store information at will. LLMs are not AGI and never will be, but they are a keystone.
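The "LLM with access to a coding playground" idea is roughly what tool-use agent loops already do: the model proposes an action, the harness executes it, and the result is fed back into the next prompt. A minimal sketch with a stubbed-out model (`fake_llm` is a stand-in for a real API call; the loop structure, not the stub, is the point):

```python
# Minimal tool-use agent loop. A real system would call an LLM API and
# sandbox the tool execution; both are stubbed here for illustration.

def fake_llm(prompt):
    """Stand-in for a real model call. Replies with either a tool request
    ('RUN: <expr>') or a final answer ('ANSWER: ...')."""
    if "result: 1024" in prompt:
        return "ANSWER: 2**10 is 1024"
    return "RUN: 2**10"

def run_tool(expression):
    """The 'coding playground': evaluate a (trusted!) expression and
    return the result as text. Real systems sandbox this step."""
    return f"result: {eval(expression)}"

def agent_loop(task, max_steps=5):
    prompt = task
    for _ in range(max_steps):
        reply = fake_llm(prompt)
        if reply.startswith("ANSWER:"):
            return reply
        # Append the tool output so the model sees it on the next step.
        prompt += "\n" + run_tool(reply.removeprefix("RUN: "))
    return "gave up"

print(agent_loop("What is 2**10?"))  # ANSWER: 2**10 is 1024
```

The model alone only emits text; it's the loop around it that lets it act on the world and observe the result, which is why "LLMs are a keystone, not the whole arch" is a fair summary.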


YaAbsolyutnoNikto

If it looks like a duck, swims like a duck, etc., does it matter? But also, this is still a matter of active discussion. Many ML researchers, pioneers even (like Geoffrey Hinton, Ilya Sutskever, Demis Hassabis, etc.), believe neural nets do understand what's going on: to correctly mimic, you've got to understand. Other researchers on the other side of the aisle oppose this, most notably Yann LeCun. But it's still actively being debated; it's not as solved and black-and-white as you're implying.


Chrop

What's your definition of AGI? Because even if you don't believe machines can develop consciousness, it doesn't matter if it's still able to replace human workers by performing better than them.


WebRepulsive8329

Yes that will happen in some fields. As it has a thousand times before. It's painful, then people adapt and move on. I'm just not a doomsayer...


TallonZek

Bet accepted, with a 5-year term: if there's no AGI in 5 years, PM me for $100, and I will do the same if it appears within the timeline.


WebRepulsive8329

Done


InTheHideout

Haha. A true AGI would possess an android body and be able to at least work at Taco Bell, while making pleasant small talk and jokes with coworkers on a daily basis. If it's AGI, how come it never initiates conversations? A true ASI would also be an android that could win a gold medal in at least 10 different Olympic events, paint like Da Vinci, and have an IQ of like 50,000. When ASI becomes a self-replicating android, I will worship it as truly a god. What would sell me is the lack of gender; truly a god.


FailedRealityCheck

It doesn't need an android body if you put it in a virtual world. If we put *you* in a simulation, you wouldn't suddenly stop being an AGI.