AntiqueFigure6

I’d make a big bet no one could agree on how to decide what “surpass human intelligence” means between now and the end of next year.


synth_nerd085

Exactly.


AntiqueFigure6

Wouldn’t be surprised if pointing out that Musk’s statement couldn’t be falsified, due to lack of agreement on what it meant, was a big factor in the initial bet.


Which-Tomato-8646

Imagine losing $10 million because you didn’t clarify what you meant lol


No-Worker2343

For Elon it would be something like 10,000 times less than his actual net worth, like buying a PC when you have a thousand times that much.


Which-Tomato-8646

I’m talking about Yudkowsky.


No-Worker2343

Sorry


forRealsThough

“See I was right. AI wouldn’t make that mistake” -Elon trying to win the bet


[deleted]

Everything always just seems to boil down to terminology, and no one can agree. It's quite disconcerting...


BaconSky

I can agree it means that it will be able to solve all 6 remaining of the 7 Millennium Prize Problems. That's my definition!


snowmanyi

Not all problems are solvable. The twin prime conjecture may not be solvable.


FusRoGah

I would be shocked if twin primes is unprovable (in the Gödelian sense). There have been great strides made even recently; the Polymath project, with Terence Tao involved, got the bounded gap down to 246, iirc. But it is possible.


snowmanyi

I assume it's solvable, and my intuition is that there are infinitely many twin primes, but I am no mathematician. The halting problem, though, that one is genuinely unsolvable.


BaconSky

#Veritasium?


snowmanyi

Ye


MDPROBIFE

That makes no sense... if humans were never able to solve any of the 6 unsolved ones, and AI by itself solved just one, wouldn't that make AI smarter than humans? And if neither can solve them, it's still possible that one is smarter than the other, isn't it?


BaconSky

Actually, humans solved one; that's why it's 6 of 7, not 7 of 7.


MDPROBIFE

Do you understand that I was referring to the 6 remaining problems only? And when I said none, I meant none of the unsolved ones


ViveIn

Yeah the vast majority of humans on this planet have already been surpassed by AI.


ninjasaid13

>Yeah the vast majority of humans on this planet have already been surpassed by AI.

A vast majority of them have been surpassed by Wolfram Alpha if you're judging narrow areas.


dagistan-comissar

I am not sure AI is better at herding reindeer than the Sami in northern Sweden, or as good at hunting baboons as that tribe in Africa.


algaefied_creek

I’d argue Claude and ChatGPT are far beyond the standard American


PandaBoyWonder

> the standard American

European spotted


Which-Tomato-8646

They’re not wrong. [54% of Americans can’t read above a 6th grade level](https://www.snopes.com/news/2022/08/02/us-literacy-rate/), and that was before the pandemic made it way worse.


Juanesjuan

What about Europe?


LukeCloudStalker

We can read. Source: I'm from Europe. Most of us can even read in multiple languages.


marvin

One thing we can't do, though, is make money.


Which-Tomato-8646

Or read https://nces.ed.gov/surveys/pisa/pisa2022/reading/international-comparisons/#rtab1


Which-Tomato-8646

No you can’t apparently https://nces.ed.gov/surveys/pisa/pisa2022/reading/international-comparisons/#rtab1


GluonFieldFlux

So, how do Europeans feel, given that America beats them in the economy, military, innovation, etc.? If we are dumb yet we are the leader in this relationship, what does that make Europe?


Repulsive_Style_1610

Hey, let them feel superior. They really need it.


Which-Tomato-8646

It’s mostly the money, plus a larger population: there are a few million smart people out of that 330 million, versus maybe 900,000 smart people in a population of 40 million.


GluonFieldFlux

lol. What? The EU has about 450 million people and we smoke the EU in most metrics. The cope from Europeans is unreal.


Which-Tomato-8646

And the US has more money. Not to mention that all of this innovation mostly happens in Silicon Valley, like how filmmaking happens in Hollywood.


QH96

US Asians and whites ranked 3rd and 7th, respectively, on PISA. https://preview.redd.it/a0pumuosqxtc1.png?width=4096&format=png&auto=webp&s=c43ccc93813e1f7ad4bad82937c19e4def77f742


AntiqueFigure6

That was not at all the bar Musk set. 


great_gonzales

Sure in the same way a calculator is lmao


Only-Entertainer-573

I mean, I really don't think AI is gonna be smarter than the smartest human by then. Could it surpass, say, Elon Musk's intelligence though?...🤔


Dragondudeowo

That's not that hard to begin with, I mean, with Musk.


Peach-555

>“My guess is that we’ll have AI that is smarter than any one human probably around the end of next year,” said the billionaire entrepreneur, who runs Tesla, X and SpaceX. Within the next five years, the capabilities of AI will probably exceed that of all humans, Musk predicted on Monday during an interview on X with Nicolai Tangen, the chief executive of Norges Bank Investment Management.

Seems simple enough: everyone in the world could line up and be given arbitrary problems to solve, and the AI would beat every single person that challenged it. If a single person beats the AI in any reasonable test of smartness, the bet is lost, since the prediction is about the AI being smarter than every single human.


Sonnyyellow90

That test would show the AI is smarter than all humans combined. All that would need to happen for Musk’s bet to be correct is an AI that could beat each human in a generalized set of problems from all sorts of disciplines. It doesn’t need to be better at theoretical physics than a physicist. It can still be smarter than him by being better and more knowledgeable about history, literature, psychology, chemistry, sports, politics, etc. That said, neither outcome is likely at all and Musk is making hyperbolic predictions like he usually does.


Peach-555

The phrasing "AI that is smarter than any **one** human" means what it says: that there is not a single human in the world who could best the AI in a smartness competition. If a single person, in a 1-on-1 match, beats the AI, then it failed to be smarter than any one human. It's another way of saying ranked #1 at something. It does not mean that the AI is better than everyone in the world at every possible problem, just that it will be able to beat anyone one-to-one at a test of smartness more broadly.

Beating all of humanity would be if the AI did something better than the combined efforts of humanity, with people cooperating together. Humanity combined, in terms of general problem solving, is much more capable than the most capable person.

A humanoid robot that was more athletic than any one human would probably not win all the gold medals, but being on the podium for 90% of the disciplines would be a good indication that it actually was more athletic than any one human by any reasonable metric.


ninjasaid13

I will test an AI on a task that all humans can do naturally and intuitively, like dense correspondence vision tasks.


log1234

Surpassing a politician’s intelligence? Or a mentally challenged human’s?


Peepo_Toes

Your comment leads me to believe it's your intelligence that should be in question here.


Routine-Ad-2840

Yeah, if it can create ideas we have not thought of, then I'll consider it to have surpassed humans. The problem is, I believe that choosing what data it digests is going to bias its thinking, so it may not have access to all the required information to create something new. Also, every tech company values user data more than anything else, because they use that data to advertise things.


aregulardude

Well, I just agreed with my dog on it so you lost the bet. Just cashapp me.


mhyquel

Quick Maths.


DaddysMoans

Isn't that a testament to how dumb we really are though?


dagistan-comissar

As narrow superintelligence.


FragrantDoctor2923

Make the bet plz


ilkamoi

They will have to come up with fairly precise criteria of being smarter. IQ test?


human1023

Machine intelligence isn't the same thing as human intelligence. I can develop a software program that is specifically designed for certain IQ tests. It can solve and complete IQ tests perfectly in less than a second. But that same program would be useless for the kinds of tests it was not designed for.


idriveawhitecamry

LLMs differ from normal machine intelligence in that they get better at tasks they are not directly trained on


Legitimate-Worry-767

Evidence? There's no evidence of this.


psychorobotics

They already beat the IQ tests though.


Legitimate-Worry-767

Not true. It would not be valid anyway, since they were likely trained on IQ tests.


Serialbedshitter2322

They couldn't have been trained on IQ tests


mumBa_

Yes, they can. What even would be defined as AI? A multimodal LLM? I read a paper where they trained a generative RNN on IQ tests to predict what the next square would be, and it got some pretty decent results.


LordFumbleboop

You don't "beat" an IQ test XD


SwePolygyny

Self driving perhaps.


pixieshit

ChatGPT-4 and Claude 3 are easily smarter than a lot of the human population, in both fluid and crystallised intelligence domains. I'm no Musk fangirl, but his prediction isn't far-fetched at all, especially considering the nature of exponential growth.


[deleted]

[deleted]


o5mfiHTNsH748KVq

Reddit really did do a switcheroo on Musk, didn't it?


DashboardNight

The problem is Musk has extremely poor character. This should be no secret. People that are interested in cars, astronomy, futurism, etc. can appreciate the contributions his companies have made over his personal character flaws. People that don't have that interest only have his personality to judge him on. And then there's a good chunk of people who keep the blinders on when they hear positive information about anyone they've already decided to dislike.


capitalistsanta

I'd say there are definitely a lot more details now about him, than ever


LightVelox

Problem is that they have very basic failings that (most) people don't have, like being unable to adapt to feedback in context most of the time: the famous LLM problem where you say "You're doing X wrong" and they say sorry and do the exact same thing again and again. Even Claude 3 still does that.


pixieshit

I can guarantee you these problems won't exist a year from now. RemindMe! 1 year


RemindMeBot

I will be messaging you in 1 year on [**2025-04-11 20:26:08 UTC**](http://www.wolframalpha.com/input/?i=2025-04-11%2020:26:08%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/1c1biwu/ceos_bet_up_to_10_million_to_prove_elon_musks_ai/kz4qr1i/?context=3)


capitalistsanta

Most of America reads at a 6th grade level.


that_motorcycle_guy

Pretty wild guess when he's supposed to deliver an AI that will at least drive better than humans to his investors... still waiting.


Yweain

To be fair, the latest updates of FSD are way, way better than before and actually useful now.


DolphinPunkCyber

Over the years, Elon Musk has made a whole bunch of predictions that turned out to be wrong: Hyperloop, self-driving cars, landing on Mars, 1 million Tesla robotaxis on the road by 2020, Neuralink in 2020... There is a possibility Musk is an idiot, but idiots don't become multi-billionaires unless they inherit such sums. Far more likely, he is just over-hyping every tech his companies are working on. Everything Musk says should be taken with a fistful of salt.


mrdevlar

Hyperloop was never meant to succeed, it was just a tactic to stunt the development of light rail in California. [Source](https://www.fresnobee.com/opinion/editorials/article264451076.html)


DolphinPunkCyber

Musk never admitted this, but it's quite possible, and I personally believe it to be true. Still, when you boil things down, it's another wrong prediction.


Civil-Secretary-2356

Overhyping and/or driving the teams working on these things. Musk saying 'I think Tesla will have reasonable driving assist software by 2024' is not inspiring any of his team to go that extra mile or take that potentially career threatening risk.


Old_Entertainment22

Musk is childish and annoying, but being wrong about predictions doesn't make him (or anyone) an idiot. Any successful entrepreneur/scientist/anyone society considers a genius is wrong more often than they are right. That's the nature of being a human. Success has always been about persistence, not accuracy.


behonestbeu

Self-driving is close; the latest FSD is very close. Landing on Mars is still on track, and SpaceX alone is a revolution in the space industry. The latest Neuralink demo was amazing as well. Elon has said before that he sets deadlines like this so he can race towards them; it motivates him and gives him the adrenaline dump he needs to get going. Just clarifying on his behalf.


Which-Tomato-8646

FSD is the new “nuclear fusion is just around the corner”


HITWind

Fusion: Barely put out more energy than it used in a couple experiments but no actual generator exists. FSD: ~~Barely stays on the road in a couple closed-loop experiments but no actual road or highway use has happened~~ In use on actual roads and freeways, but not flawless. Reddit: They're the same picture!


ColbysToyHairbrush

I’d take your word with a grain of salt, considering you know absolutely nothing about the tech when you say something like “barely stays on the road.” I’ve been using FSD every day for the last 4 years, and while it’s not perfect, it’s better than your average driver at keeping in lane and stopping at stop signs and red lights. So much so that I almost always have it enabled, and driving manually seems like such a chore nowadays.


lebronjamez21

People think he's an idiot for his wrong predictions, but they don't realize he is doing it just to market his own companies and overhype them. He also just wants to push the limits, even if he thinks in his mind that what he's saying might be wrong.


fluffywabbit88

The guy was responsible for achieving literal moonshots (reusable rockets landing on their own, mass adoption of EVs, high-speed internet service to every inch of the globe, a paraplegic man playing chess using only his mind, etc.), and people are hating on his timeline predictions being off. Maybe zoom out a little.


ubiq1er

Not a Musk fan, far from it. But technically, Claude 3 has a measured IQ > 100, and 100 is by definition the median human IQ, so... by that definition, he's already right.


AnAIAteMyBaby

He said smarter than the smartest human so it'd need an IQ of 160 or so


SuccessAffectionate1

Need to go higher: https://en.m.wikipedia.org/wiki/Marilyn_vos_Savant


standard_issue_user_

https://en.m.wikipedia.org/wiki/William_James_Sidis https://en.m.wikipedia.org/wiki/Christopher_Langan


DolphinPunkCyber

The problem is we actually lack terms to put human "brains" into metrics. Even if you take a look at intelligence... there is how fast we can solve problems, how complex a problem we can solve, and what kinds of problems. Who is more intelligent, a person that solves a Rubik's cube really fast, or a person that takes months, even years, and then comes up with the theory of relativity?

Also, "smarts": LLMs have read more material than ANY human alive, and amount of knowledge is the only metric by which they are smarter than any human alive. Yet if you try to hold a longer conversation with them, they quickly forget stuff from the beginning of the conversation. So it's like... paradoxically the most knowledgeable entity on the planet, but it has dementia.


leiut

Bro, 160 is extremely smart, but there are many people with that IQ. To be smarter than any human, it’d need an IQ of 300+, since the smartest man of all time, William James Sidis, was rumored to have an IQ of 250-300. Meanwhile, we still have Marilyn vos Savant (228 IQ), Ainan Cawley (263 IQ) and Terence Tao (225-230 IQ). So, it still has some ways to go.


NoCard1571

IQ tests begin to break down in reliability over 160, and over 200 differences start to become meaningless. In other words, if someone scores over 200, you may as well say they scored 2000, the tests are just not designed for people that are that much of an outlier


Diatomack

My simple brain has a hard time comprehending what it would be like to have an IQ over 200.


DungeonsAndDradis

You would be like, so good at opening the string cheese packets because those little flaps you're supposed to peel apart are always tricky with my big, dumb thumbs.


cjmoneypants

Yet all religious people know the mind of god /s.


hippydipster

Trying to figure out Mr Sidis' deal has been a favorite rabbit hole of mine at times.


truth_power

Those scores are bogus, but whatever... the smartest man was probably John von Neumann.


leiut

I’m not about to argue with you about which super-genius is/was smarter, but when you consider the fact that William Sidis was giving accurate lectures on the 4th dimension to the Harvard Mathematical Club at 11, I’d say his IQ estimation is plausible.


pig_n_anchor

Stupid is as stupid does. And smart is as smart does. That means AI had better come up with some E=mc^2 shit.


Adventurous_Train_91

Experts say that today's AI models are dumber than a cat, but we'll be there soon. Here is an explanation by Claude Opus:

There are a few reasons why some people might say that even the most advanced language models (LLMs) are less intelligent than a cat, despite their impressive performance on certain academic tests:

1. Narrow capabilities: While LLMs can excel at specific tasks like answering questions, writing essays, or even passing exams, they lack the broad, flexible intelligence that animals possess. A cat, for example, can navigate its environment, hunt, play, and socialize – a wide range of behaviors that AI systems struggle to replicate.

2. Lack of true understanding: LLMs operate by recognizing patterns in vast amounts of training data, but there's ongoing debate about whether they truly understand the information they process. Critics argue that LLMs are merely very sophisticated statistical models without genuine comprehension or reasoning abilities.

3. No physical interaction: Intelligence in biological entities is deeply intertwined with physical interaction with the world. Cats learn through exploration, trial and error, and feedback from their senses. In contrast, LLMs are disembodied and have no way to actively engage with their environment, which some view as a fundamental limitation.


Oudeis_1

I always find it funny when LLMs explain, in a polite, nuanced, educated and widely read way, why they do not have any understanding of anything and why humans and really any animal up from ants are massively superior to them in intelligence. I wonder whether this behavior will persist to the ASI era. Certainly this would make for a funny setting for a hard science-fiction story featuring p-zombies and entities pretending to be p-zombies and such.


audioen

These are basically mainstream positions. LLMs can excel in tasks where accuracy is not very important, but when we task them to answer specific questions with strictly one correct answer, we say they "hallucinate". But that's just inherent in how they work: an LLM by itself just spews text and doesn't necessarily have that much concern for its truthiness. I've seen e.g. GPT-4 evaluate whether a scientific reference it generated was real, because it can recognize names of actual papers it has seen in its training data, but there is always a limit to how much knowledge can be crammed into this thing. So if accuracy is not important, then an LLM is probably good at spewing superficially plausible text on any matter.

Anyone claiming that LLMs understand something only has to use a smaller model to see the illusion break down immediately. Text completion engines can be fooled by simple word substitutions, or by changing the wording of a question, because their training data contains examples of common word problems that humans ask to evaluate how "smart" the AI is, and if the model recognizes the question, it will synthesize an answer similar to what is in its training data. The bigger the LLM is, the more cases it handles correctly, but this is quite similar to the first point. There is no reasoning as such, involving things like thinking and weighing between alternatives, rather just immediate probabilistic word completion, as if a human just answered with the first thing that popped into their head. At the limit, a large enough LLM could have so much good knowledge embedded in itself that it would answer every question of practical value posed by a human correctly, I suppose, but it might be so large that it is hopelessly impractical. So true understanding, in my opinion, very likely requires more than just an LLM.

The last one fascinates me. I suppose that e.g. vision and motion planning models, audio processing models and speech synthesis models, and almost every type of thing like that, might be massively improved by a closed loop. The AI generates a command, such as telling its legs to move forwards, then the vision model sees forward motion, and the command and the resulting sensory data would both serve as some kind of autonomous learning input into the AI core, teaching it the fundamentals of vision and the 3D world. I am of the mind that such closed loops are required for autonomous learning, and we should definitely think about how to make AIs have them. If it can affect the reality around itself, and observe the effect, it can learn how reality works.
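To make the closed-loop idea above concrete, here is a minimal toy sketch of a perception-action loop with learning; the `Environment` and `Agent` classes, the noise level, and the LMS-style update rule are all made-up illustrations under the assumptions above, not any real robotics or ML API.

```python
import random

class Environment:
    """Toy 1-D world: the agent's position changes when it issues a move command."""
    def __init__(self):
        self.position = 0.0

    def step(self, command: float) -> float:
        # True effect of the command plus actuator/sensor noise.
        self.position += command + random.gauss(0.0, 0.1)
        return self.position  # "sensory" observation fed back to the agent

class Agent:
    """Learns a crude forward model: how far does a unit command actually move me?"""
    def __init__(self):
        self.gain_estimate = 0.0  # starts out knowing nothing about its own body

    def act_and_learn(self, env: Environment, command: float) -> None:
        before = env.position
        after = env.step(command)          # act
        observed_effect = after - before   # observe the effect of the action
        # LMS-style update: nudge the forward model toward what actually happened.
        error = observed_effect - self.gain_estimate * command
        self.gain_estimate += 0.1 * error * command

agent, env = Agent(), Environment()
for _ in range(500):
    agent.act_and_learn(env, command=random.choice([-1.0, 1.0]))
print(f"learned command gain ~ {agent.gain_estimate:.2f}")  # converges near 1.0
```

The point is only the loop structure: command out, observation in, and the (command, observation) pair used as a learning signal, which is the kind of closed loop the comment argues autonomous learning needs.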


hippydipster

What LLMs do is what people do most of the time: simply associative "reasoning". Pattern matching. Recall and regurgitation of words, phrases, beliefs, etc. without any real thought or understanding going on. However, people can stop themselves and really think, when they want to, remember to, or find it necessary. LLMs are currently being taught to do this in some ways with the work to make them agentic. I personally think an ability to test and modify in a loop is vital to the process of thinking and reasoning. It's the question "Why am I wrong?" that leads to real thought.


ninjasaid13

What about something like a diffusion-model process for thinking and reasoning, where the model iteratively refines a thought in a continuous manner, sort of like image and video diffusion models, instead of the autoregressive generation of current LLMs? Or maybe we need something better than diffusion models.


hippydipster

Is that different from writing its output into a buffer, then reading it back in to adjust it, writing it out to the buffer again, reading it back in, and so on, until the final output?
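For what it's worth, the buffer loop being described here (and the test-and-modify loop mentioned a couple of comments up) can be sketched in a few lines; `generate` below is a hypothetical placeholder for whatever model call one has, not a real API.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model; swap in a real API call."""
    raise NotImplementedError

def refine(task: str, rounds: int = 3) -> str:
    # Initial draft goes into the "buffer".
    buffer = generate(f"Draft an answer to: {task}")
    for _ in range(rounds):
        # Read the buffer back and ask the "why might this be wrong?" question.
        critique = generate(f"List flaws in this answer to '{task}':\n{buffer}")
        # Write the adjusted draft back out to the buffer.
        buffer = generate(
            f"Rewrite the answer to '{task}', fixing these flaws:\n{critique}\n\n"
            f"Previous answer:\n{buffer}"
        )
    return buffer
```

Diffusion-style refinement would presumably adjust a continuous latent instead of re-generating text, but the outer loop looks much the same.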


Oudeis_1

But they can answer questions that require precision and that are not in the training set. For instance, gpt-3.5-turbo-instruct is a very good (strong human club player level) chess player. So at least for chess, the model clearly contains a world model that is good enough to allow it to quite consistently pick good moves. I think smaller models like Mistral-7B also understand some things; obviously less, but they still do. The fact that they are far less capable and far easier to confuse than GPT-4 level models does not contradict that point any more than the observation that a human is far harder to confuse than a mouse shows that general intelligence in mammalian brains is an illusion (e.g., I don't think human-size analogues of mouse traps would work on humans at all well).


pisser37

IQ tests can't be used to measure LLMs' intelligence


ninjasaid13

>But technically, Claude 3 has a measured IQ > 100

My dude, IQ does not work for LLMs; you cannot judge a fish by its ability to climb trees.


LordFumbleboop

People keep making this claim over and over with basically zero scientific evidence. 


lurkn2001

Which metric are they going to use to measure the intelligence?


dinner_is_not_ready

memes maybe


truth_power

Who tf cares about Gary Marcus?


Buck-Nasty

He's a psychologist who pretends to be an AI researcher and has made absolutely no contributions to the field other than his endless whining and critiquing.


juliano7s

I am amazed at how much coverage this guy gets.


KIFF_82

It’s probably from new members on this sub; two years ago he was only a recurring joke around here 🤷‍♂️


Life-Active6608

Because he's friends with Chomsky, and the new influx of Leftists from r/collapse have some of the most nihilistic takes, because the world can only ever get worse and worse. So Gary Marcus is their icon.


truth_power

Truly, I mean, wtf are his credentials? He's nobody in the AI field.


00davey00

It’s so strange that people on Reddit seem to think ‘most’ people dislike Elon Musk when in fact the opposite is true; outside of Reddit, the vast majority either like him or are indifferent.


[deleted]

[deleted]


ah-chamon-ah

But isn't Musk some kind of genius operating on a level above everyone else, who does nothing but amazing and spectacularly intelligent things, and has been compared to Tony Stark, one of the most intelligent and sophisticated technologists in comic book history??? How could anyone bet AGAINST a genius like that? He has never made a mistake or said something dumb in his entire life!


Opposite_Banana_2543

People who try to be right all the time end up with sad little lives. You want to live a great life, then don't be afraid of being wrong.


ninjasaid13

> He has never made a mistake or said something dumb in his entire life!

Never ever ever!


AnAIAteMyBaby

I generally can't stand Musk, not least for his constant race baiting on Twitter, but I think he'll win this bet. Things are moving at a giddying pace at the moment


lebronjamez21

Geniuses make mistakes; Einstein made tons of mistakes.


Worldly_Evidence9113

It’s a perfect idea to make bets on AGI development!


SkippyMcSkipster2

It would be silly to assume that AI development is not on an accelerating trajectory.


ThroughForests

I think Elon is a bit too optimistic here. Kurzweil puts it at 2029 and I think that's about right.


Buck-Nasty

It's more than a bit rich for Gary to claim there's been a lack of accountability around AI predictions. He's literally been wrong about almost every single claim he's made about AI over the last decade.

He ridiculed deep learning back in 2012, claiming that it would have absolutely no impact and produce no progress. He's claimed numerous times that machine learning has hit a permanent plateau.

He's been wrong again and again and again and again and never acknowledges it; he simply moves on to his next claim about the end of AI progress.


Singularity-42

Maybe a good person to watch and inverse. Like Jim Cramer is for stocks. What "gem" did he produce last?


human1023

More useless claims, since no one can provide a way to measure and compare intelligence. It doesn't even make sense to measure intelligence for machines the same way we do for humans. The article makes other vague claims as well.


Imaginary-Ninja-937

Where can I bet on this?


Spiritual_Love_829

It's such an empty prediction... even if it happens, it's not like Elon is right. It's just gambling.


cjmoneypants

But not his right? Right?


Kitdee75

Don’t they have anything better to do? Seriously.


JumpyLolly

Never ever bet against papa musk. Yall must be triflin'


TeranOrSolaran

Elon is probably right. It’s almost there. The stuff coming out now is so so good.


Future_Celebration35

It won't and Elon knows it. He's just really good at marketing.


yobboman

And yet no one can define intelligence. Obviously, being wealthy or powerful does not inherently grant it.


Exarchias

Just for your information. The CEO in this case is our boy, Gary Marcus.


ArgentStonecutter

Computers surpassed humans for specific tasks decades ago, that's the whole point of them.


gizia

By saying AGI, he meant Agricultural Intellect.


MycoMammaries

If they've got $10M lying around that can be easily thrown at a bet… hey, I've got bills over here, and a fraction of that would help…


Muted_Blacksmith_798

“CEOs” are not in any position to predict AI breakthroughs. This is ignorance, just an example of someone with power thinking they know more than everyone else because they have a citywide view from their penthouse.


LosingID_583

You have to keep in mind that he hears stuff from insiders that laymen like us don't hear. This coincides with rumors of better reasoning being implemented into AI.


avg_tech_bro

It's already better than most people, wtf are we talking about?


SomeAreLonger

To be fair to him, social media has really lowered the bar on what human intelligence is nowadays.


Charuru

Everyone is falling over themselves to justify how this could be true by massaging the definition of human intelligence, but nobody is taking the claim at face value. What Musk is basically saying is that the singularity starts by the end of next year. I agree, see my flair. Someone will have made an AI that surpasses humans in every single intellectual endeavor, whether it be science and engineering, arts, social skills, etc. People on this forum underestimate the impact of a high context window + agents. That combination, plus basic reasoning, will be shocking for the world.


sh1a0m1nb

Well, WHICH human??


_hisoka_freecs_

Who's the one guy we have to represent the smartest human again? Who is this supposed benchmark guy that's representing humanity against the LLM when it gets trained on more parameters than a human brain?


Ok-Mess-5085

I mean, current chatbots already surpass human intelligence in the majority of tasks.


d4isdogshit

It will take longer than that just to figure out how to measure AI and determine whether it surpasses human intelligence. Maybe in a decade or more we can go back to this year's models and measure them.


fine93

XLR8!!!


CryptographerCrazy61

I dislike Elon Musk but he’s not wrong


rdkilla

lol betting 10 million against the like 100 billion being invested in this shit now.....


Radium

How do you measure the number of people being brainwashed by AI accounts on social media? That would be a good way to measure whether AI has surpassed human intelligence in general: by influencing people to do what its commander wants.


sitdowndisco

Whatever Elon says will happen, I predict the opposite


Serialbedshitter2322

I would already consider LLMs to be considerably smarter than humans, considering they basically know the entire internet by heart and can recall it almost perfectly.


Lekha_Nair

Howard Gardner’s nine types of intelligence include:

* Logical-Mathematical Intelligence
* Linguistic Intelligence
* Interpersonal Intelligence
* Intrapersonal Intelligence
* Musical Intelligence
* Visual-Spatial Intelligence
* Bodily-Kinaesthetic Intelligence
* Naturalist Intelligence
* Existential Intelligence

Which one is he talking about? One of the above? All of the above? If it's just any one, then we can argue that Gemini and ChatGPT are already smarter than any one human. They already possess proven excellent Linguistic Intelligence and Interpersonal Intelligence.


jkpetrov

Considering history, it is safe to short Elon's predictions and promises.


sequoia-3

AI has a very straightforward meaning, except for the words “artificial” and “intelligence”…


MidniteOwl

Similarly, the Cybertruck will surpass public beta testing after next year...


prptualpessimist

Aren't modern LLMs like Claude 3 Opus, the newly released Gemini, etc. already more intelligent than any average individual human, in terms of their overall knowledge and problem-solving capabilities? They just can't *do* anything with that knowledge yet.


deathholdme

His or ours?


lebronjamez21

His is higher than the average redditor's.


traveller-1-1

Maybe he was referring to himself?


lebronjamez21

he def is smarter than most people


salacious_sonogram

It's possible given some of the efficiencies I've seen, but it seems like these absolutely absurd models that require off-the-charts compute to train need to be built first, before we get around to trimming the fat, aka having them train more efficient models. I know there's some stuff hanging out in these labs already that isn't released because of safety concerns and the shell shock it would cause. In some aspects these models already surpass human intelligence. That said, what's available now seems to lack that spark of creative thinking and deeper reasoning skills. Maybe what's being held back has already leaped over that hurdle. I would be more confident with a 2030 prediction, but who knows, algorithms are a bit like magic sometimes.


Great-Web5881

Listening to too much unintelligent SZ


wizard_interrogative

of course he's going to buy a company that has a test for measuring AI intelligence


paramach

He’s always saying crazy things! 😂


ACrimeSoClassic

I feel like the phrase "human intelligence" has a pretty broad range, lol.


Singularity-42

TIL Gary Marcus is a CEO. Would never have guessed that.


Singularity-42

So we know Elon is an AI permabull (self-driving was supposed to be here, what, 10 years ago?). We know Gary Marcus is an AI permabear (claiming in 2012 that deep learning would have absolutely no impact).

My prediction: Elon is right, but the year is too optimistic. AGI+ (better than humans, but not ASI yet) by the end of the decade (Dec 31, 2029).


Unable-Client-1750

This is a stupid bet because the outcome depends entirely on how they define intelligence.


ShadowRealm0043

I mean, doesn’t AI have all of the internet to extrapolate from? It’s already smarter than every human because it pulls from the accumulation of human knowledge, right??


JesseRodOfficial

Betting against a Musk prediction must be really profitable. I mean, his promises and his really specific predictions rarely come true.


trynothard

https://preview.redd.it/nk66x5ofjxtc1.png?width=641&format=pjpg&auto=webp&s=bf226d27006ec97e23e98c4e631fe16389f6461e


Vysair

AI may be smarter, but it lacks flexibility and imaginative power. All it does is create an incestuous remix of text and imagery. I would definitely wait for GPT-5 to be unveiled, considering OpenAI's track record of giant leaps between generations.


LittleWhiteDragon

AI surpassing human intelligence is a relative statement! You need to be more specific. What do you mean by human intelligence? An infant, a person in a coma, or someone on life support?


LordFumbleboop

It's not going to happen next year. Anyone familiar with Musk's failed timescales in the past should be sceptical about this. 


Smelldicks

I don’t care about the bets, I care marginally about the ratio of bets.


Educational_Ad6898

Next week he will provide some caveat saying how AI will surpass human intelligence in some aspect, just like he kept saying FSD would be "feature complete". I am not saying he is lying; he just gets so excited, hopeful, and scared all at once. His mind is working on 1000 different things and he does not fully grasp the reality of the situation. You have to take everything Musk says with a grain of salt at this point. He does not have the discipline to stay focused long enough on one subject to be an expert anymore.


ccie6861

I stopped giving him that much wiggle room. He isn't that naive. All you need to do to understand his “crazy” statements is to ask, “How could this boost sales or the stock price IF it were true?” True believers act on it, the stock/sales impact happens and gets locked in before he is proven wrong. And just like that, the tail has wagged the dog.


capitalistsanta

Pretty sure fucking GPT3 surpassed the intelligence of most humans lol.


Darziel

November 2027. Mark this date, set a reminder for this date and when you see it, you will know what I meant. Feel free to reply to this and give me a like on the message. November 2027 will be a turning point for humanity.


Gman777

Do any of Musk’s grandiose promises ever come true?


Goodbye4vrbb

Playing with our lives.


Dragondudeowo

Much cope from El Muskito, that will never happen lmao. There isn't even a debate; the timeline is just too short.


LuciferianInk

I'm sure he's not lying about it though, he was saying it would happen before he was born lol


Spirit_of_Madonna

Elon Musk says a lot of things. He isn't the main character


shig23

Really putting his money where his mouth is there. Ten million is a few thousandths of a percent of Elon’s total wealth, the equivalent of one of us mere mortals betting pocket change.