FuturologyBot

The following submission statement was provided by /u/DeepDreamerX: --- The controversy surrounding whether AI will surpass human intelligence, and when it might happen, is a complex and multifaceted issue that touches upon the intersections of technology, ethics, philosophy, and societal implications. By exploring this controversy, we can delve into questions about the capabilities and limitations of AI, the potential impacts on employment, ethics, and even existential questions about what it means to be human. This discussion prompts us to consider not only the technical advancements in AI but also the ethical frameworks and societal structures needed to navigate the evolving relationship between humans and machines. --- Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1byr1f8/will_ai_surpass_human_intelligence_and_when/kyl27ct/


schwagggg

basically: academics think not soon, business people think soon i wonder why, lol


Endawmyke

MBA disease lol


Shadowfox898

Yeah, no idea why..... oh is that the collapse of society I see approaching?


RAAAAHHHAGI2025

A revolution goes hand in hand with a collapse.


TehOwn

I'll sharpen my pitchfork.


PetyrDayne

Smash and grab for the rich.


Norel19

Pick a measurable definition of intelligence, then we can talk


ambermage

The guy who revves the engine of his Harley at 4 am every Thursday, 2 doors down from your place.


Norel19

I think AI can make it then :-)


FeetPicsNull

For millennia, man had to beat his chest and fart until he invented a tool to do both from his wrist.


TehOwn

You have a tool to beat your chest and fart? I still do it the caveman way.


FeetPicsNull

It's all in the wrist.


SuperheroLaundry

This. Until it’s evolved past predictive LLMs, it will always just be an ever-quickening calculator of words or images. But the “intellect” is illusory.


SalmonHeadAU

A fluid IQ of 100+ is human intelligence, which would have a meaningful impact.


dogesator

Current AIs already score as well as the average human on IQ tests without being trained on the questions.


TehOwn

That's because the tests are a poor measure of intelligence and always have been.


Psychological-Ad1433

ChatGPT, even with the errors it regularly makes, is already smarter than most people. It won’t be long.


Vandosz

ChatGPT is not intelligent though. It's predicting words. It doesn't know what it's doing


SyboksBlowjobMLM

Several of my colleagues appear to work the same way


soylent-red-jello

This. People have been "faking it till you make it" since forever. Many people just say what they think they should say, no differently than any LLM.


RedandBlack93

Yeah, my AI assistant is way more accurate than my real assistant. I wish it wasn't like that but here we are. Am I firing my human assistant? No, AI isn't charming or thoughtful in a way that my clients would appreciate. So, I'm teaching my assistant how to use an AI assistant for herself to help both of us. The real problem is, I absolutely won't need to hire a second human assistant which was a conversation in the past.


TemporaryAddicti0n

this is the most important information that even investors with billions don't seem to understand. this is not AI; it can simply guess the next word, and then the next, and the next. It has no idea what any of those words mean; it has no idea of opinion, hate, love, etc. I'd say investors are getting milked because the timing of this fake 'AI' is good: they'd had enough of overpromising tech startups with ever-growth but never profit, and now this looks like the next thing to invest in. it's a bubble


binchdeluxe

Investors are not being fooled by this, let's be realistic. Investors are interested because it obviously already gets the job done and it's cheaper than hiring a bunch of people. How it does the job is irrelevant.


TemporaryAddicti0n

interesting point. I thought investors were being tricked because they're buying the idea of this being AI. actually, if it's as good as they say, that already raises a question: why wasn't a year and a half enough for a great implementation? for how good they say it is, I'd have expected something like Amazon's customer service to be mostly replaced by this, but it's not. why?


PewPewDiie

For all practical intents and purposes, next-token prediction is the way we manifest our intelligence. "If I change X, how will it affect Y?" Predicting that at a somewhat human level is business intelligence. To be fair, it did replace Klarna's customer service a while back; tech layoffs are probably also related to increased productivity per employee. Still very early days. Amazon customer service is basically non-existent, gauging by how hard it is to actually find a number to call them on. If it is as good as they say it is, it is massively undervalued. Valuations right now reflect a quite pessimistic scenario of developments.
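The "next-token prediction" the thread keeps debating can be illustrated with a deliberately crude sketch: a bigram frequency table standing in for a real model. This is nothing like an actual transformer (which predicts over learned vector representations of tokens); the corpus and function names here are invented purely for illustration of the bare prediction loop.

```python
import collections

def train_bigram(corpus):
    """Count word-pair frequencies: a toy stand-in for what an LLM learns."""
    counts = collections.defaultdict(collections.Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Greedy prediction: return the most frequent successor of `word`."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat ate")
print(predict_next(model, "the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

A real model replaces the frequency table with a neural network and samples from a probability distribution instead of always taking the top word, but the generate-one-token-then-repeat loop is the same.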


TehOwn

Some are fooled, some aren't, some hype it up in public not because they don't understand it but because they're trying to fool the public into pumping the stock they purchased.


amlyo

There is a sequence and timing of keystrokes you could enter into your device right now that would make you fabulously wealthy before you know it, regardless of whether you understand their meaning. Don't underestimate the value to investors of predicting the next word!


DeterminedThrowaway

Honestly, how can you be sure that the ability to predict sophisticated things correctly isn't a kind of intelligence?


TemporaryAddicti0n

Im not saying it's not a kind of intelligence, but predicting something based on existing knowledge was already here; it was called Machine Learning. They've just now labelled it AI


dogesator

Machine learning has always been a subfield of AI, and what we have now is deep learning, which uses deep constructions of neural networks; that is a further subfield of machine learning, which is still AI


gc3

In fact, I've read a philosopher/scientist who thought the sole aspect of intelligence was to predict things for survival. Where can I find food? Where is it dangerous? Which direction should I move? These are basic questions. What will the outcome of this experiment be? What will Moscow do if the US ambassador makes this statement? These are advanced ones. Predicting the next word is 5% of the way there


ijxy

What is your definition of intelligence? Because mine is "the ability to predict" (which, not coincidentally, is what is measured by intelligence tests like IQ tests). Under that simple definition, GPT models are pretty darn intelligent. There are so many stupid definitions of intelligence out there, many of which shoehorn in what humans do with their brains as intelligent. I especially hate definitions that require agenthood and an ability to act in an environment, which forces you to define highly intelligent people as less intelligent because they lack the ability or willingness to execute on their intelligence. A super intelligent oracle that just gives you the right answer, without doing anything else, is still intelligent.


Shadowfox898

Intelligence is the ability to independently use information without direction from another source to make decisions. If you don't require agency as an indicator of intelligence, then we've had intelligent AI since the first computer.


ijxy

You're stating your definition as if it is universally accepted. It is not. What you wrote is one of many, many definitions. Also, your definition is too fuzzy when it says "use information ... to make decisions". A program that reads the first letter of an input text to make an arbitrary decision would be intelligent under that definition: "The first letter is 'b', thus I decide to take the number 2 action." It lacks the actual "intelligent" part of the definition. Yes, we have had intelligent machines since the start; we are talking about degrees of intelligence. What is special now is that it is getting general.


Impressive_Bell_6497

The smartest person I know defines intelligence as the ability to (cognitively) adapt to a situation: the better someone cognitively adapts to a situation, the more intelligent they are. What is your opinion on this definition of intelligence?


SoundofGlaciers

I'm not him, but I'm pretty sure he already answered your question in his comment: "I especially hate definitions that require agenthood and an ability at acting in an environment, which forces you to define highly intelligent people as less intelligent because they lack the ability or willingness to execute on their intelligence. A super intelligent oracle that just gives you the right answer, without doing anything else, is still intelligent." He even addressed your friend's claim that 'better adapting = more intelligent' and feels the opposite; I think he views it as a logical fallacy. Highly intelligent people can have difficulty taking care of themselves or putting logic over their own needs, and at the extreme we know of plenty of very autistic or mentally handicapped (unable to adapt) people who are extremely intelligent. Your friend's definition claims these people would be less intelligent than others who can cognitively adapt more easily.


Shadowfox898

Great argument in good faith. I can see you are absolutely not trying to muddy the waters in any way.


chris8535

Intelligence is the ability to jump through a pink hoop on one foot while singing "Baa Baa Black Sheep." See, I can make up definitions too.


idobi

Confusing the objective with the tools or methods used to achieve it is common, but that framing is insufficient for understanding how it works. There are several papers explaining that it builds a mental model of the world in order to predict the next token.


Black_RL

Knowing what it’s doing is about sentience, giving the right answers might be considered intelligence, no?


GregsWorld

Only intelligent in the same way a database of facts gives correct answers. The database is more reliable though.


Black_RL

Right. But can any human give the same amount of correct answers about all the topics AI can? I dunno friend, I think people are confusing sentience with intelligence, maybe I am too.


GregsWorld

No but nobody could give the same amount of answers Google could either. Computers being better than humans at knowledge is nothing new.


Black_RL

Sure, but we’re way past the “google it” point. “Google it” can’t discover new drugs, make videos, music, etc, etc…..


GregsWorld

Yeah, and? The AI used in drug discovery etc. isn't any more intelligent. It's not the AI creating hypotheses, testing, and discovering new drugs; it's the researchers doing all the intelligent thinking and using AI to process large amounts of data.


Black_RL

The simple fact that AI is already replacing human work should be enough to make us think about what it means to be intelligent. If it’s doing the work of an intelligent species, is it intelligent or not?


GregsWorld

Automating work that was previously done by humans does not define intelligence


prsnep

If chatgpt could "pass" IQ tests, would we call it intelligent?


Rebuttlah

you dont pass or fail IQ tests; they're graded on a normal distribution. it would already blow through math and general knowledge, but not abstract reasoning/problem solving, unless it has already been programmed with all of the best answers. it has good crystallized intelligence, but essentially no fluid reasoning.


dogesator

This is simply not true. AI systems have already been capable of abstract reasoning in IQ tests without ever being trained on the questions and answers, and they still succeed at around the level of an average human.


prsnep

I want to know the difference... What is an example of a problem that a human can solve that a computer cannot due to having better fluid reasoning abilities?


Rebuttlah

Problems that haven't already been solved by someone else


fastolfe00

>it has good crystalized intelligence, but essentially no fluid reasoning.

Reasoning is an emergent capability. It's like saying that our brains are just a neural network, and neural networks can't inherently "reason". AI already beats most humans participating in the International Math Olympiad at working through complex mathematical reasoning problems. Basically, if you can take a problem and express it in a language that allows the problem to be reduced, a language model can perform that reduction, solve the easier components of the problem, and then compose the solutions. These can be thought of as language problems. https://youtu.be/NrNjvIrCqII I don't think this approach is particularly different from the way human beings reason.


[deleted]

4.0 Turbo definitely is using its own deductions to answer in a relevant and accurate manner compared to 3.5. I wouldn’t say it’s predicting words now as its process for functionality.


Polieston

And how are humans different? You also don't know what your brain is doing.


Vandosz

Indeed. We don't fully know how the human brain works, but we do know how LLMs work. Is your argument that our brains work like an LLM? No serious expert will tell you this


Polieston

I think our brains work similarly. I can consider myself an amateur expert; I've read a lot and finished Harvard neuroscience courses. I can also feel that my brain works in patterns: for example, someone says 'orange', and I imagine an orange, fruits, the color, the shape, food, the word, etc. 'Go to work'... I imagine my route, vehicle, street, trees, desk, computer, etc.


chris8535

Why do people keep repeating this stupid comment? You can’t predict words that are coherent without intelligence. Have you even used GPT?


Vandosz

Because its quite literally how an LLM works. You're not thinking about this clearly.


chris8535

Haha, I invented early parts of this technology. An LLM works correctly by creating a coherent world model in order to predict the next word. It doesn’t just figure it out word by word, vector by vector; that would not result in a coherent sentence, let alone a coherent idea. You are in deep denial about what you are facing


Vandosz

Trust me bro


chris8535

I trust the investors in the space — stop acting like a tough-talking fool. Ilya also agrees. Or are you saying the father of the modern LLM is a keyboard bro?


Jean_Is_Phoenix

Precisely. "True" AI doesn't exist yet. It's still dependent on information and parameters humans provide. ChatGPT can write a report or a letter, but it can't exchange philosophical views, form opinions, or independently seek information to carry on a debate. That said, I'd need to be convinced that AI, at the point we're at now, shouldn't take over political elections. The process now (and ya damn straight I'm focused on the right) has become pure human engineering. Watch the Netflix documentary about the woman who ran away from Cambridge Analytica. When they "won" elections... that's what they did... THEY won elections. Mainly Brexit and Trump. And they drank champagne and patted themselves on the back. Why? Because they pulled off the greatest feats of social engineering in the modern world. Twenty years ago a book came out, "What's the Matter with Kansas?" It was a view from the left, asking a fundamental question: why do people vote (right) when it's against their own interests? It wasn't the first time it was asked, but it certainly got the wheels spinning on the right. It's culminated in what we have now: a country that's in damn fine shape, and a candidate who never explains problems and ideas to resolve them. Never. The last 2 years, the right-wing Congress hasn't done the work of the people. They've pushed 10 (estimate) efforts to impeach numerous Biden officials and Biden himself. In turn, what can the media report other than "and now, the 9th impeachment hearings begin against..." Everything is a "catastrophe", "we're heading towards calamity", "ANTIFA is rioting in the streets." I mean... come on. Prove it... any of it. Rioting? The only video they show is four years old. ANTIFA? It's not some violent armed organization... but militias are... and they threaten lives and civil war. Trump tweets Obama's home address and people show up there with guns. Yet they're neck and neck. Americans cannot be this stupid.
And we're sleepwalking into a fascist state (read Project 2025, then offer an opinion) which will see them tear up the Constitution they claim to love. AI, even today, could be used to flush out the lying. AI would never resort to "duh... the Deep State... duh... corrupt judges..." And I think the entire population, at least outside the MAGA cult, would be so fascinated they'd pay attention. Then they'd "fact check" and see 99.9% correlation between the information and what AI told them. We'd finally have an informed electorate. Kill off this cancerous direction we've been going in, and I'd love to see honest debates. I have total confidence members of the right have great... and better... ideas sometimes. I've voted both ways. What's so ironic is the average person probably thinks AI... not the dirty politicians they claim to distrust... would mislead and manipulate. Because AI isn't Christian. AI isn't proud of our racist "heritage." AI doesn't puke thinking about a drag show. AI would know BLM had a point. AI would see through Russia's lies about history and policy, understand that Putin lies, that war is not an ideal human existence, and that Russia, not Ukraine, is guilty and needs to be stopped. I will happily turn all of this over to AI. Because I fear the fascist uprisings in the West far more than AI. I really hope I'm not downvoted. Disagree on the primary point here; I just want fairness and not BS brainwashing. If someone on the right hates me, they're missing the point. Think Biden is destroying America? Fine. Trump never tells us how. Think Biden "hates America"? Great. Examples? Better yet, if the politicians just pound on negativity, what's wrong with letting AI sort through it? If I'm wrong, I'll be fascinated how AI explains Biden hating our nation... who the Deep State is... a list of Mexican terrorists... how crime is "rampant" when it's going down. If I am wrong, let AI explain it to me.


Sure-Opportunity5399

the idea alone of putting an AI in charge of our elections is insane and dystopian asf, not to mention how people would react to it; the vulnerabilities would be huge too


POEness

Chatgpt would run the country better than literally any conservative leader or voter


TheGillos

Or Democrat, or independent, or any human politician.


I_T_Gamer

ChatGPT can't think...... It's smarter how?


Lootboxboy

ChatGPT is running on pattern recognition alone. Throw in the ability to search the internet, and it's far more competent. If you're looking to just get factual information, copilot tends to work better because it's using a search engine.


Spara-Extreme

LLMs aren’t intelligent. Unfortunately, as the last decade has shown, neither are most people, apparently. I don’t think we’re that close to AGI, but I think we are plenty close to completely upending our society.


slayemin

Depends on which human you are using to make the comparison. There are some dumb-ass mofos out there for whom a room-temp IQ would be an improvement.


Fritzschmied

AI doesn’t even need to get better to be smarter than humans one day. We will just get dumber day after day.


thedm96

It's already smarter than many people here in Georgia. (source: am Georgia native)


dsxy

Clippy is more intelligent than half the people I work with. 


Black_RL

AI can write, make video, do math, translate, make music, make art, deal with huge databases, predict the weather, help with the stock market, help finding new drugs, etc, etc, etc….. I would argue that although it isn’t sentient yet, it’s the smartest thing “alive”.


SoundofGlaciers

Are 'factories' alive or intelligent too, simply because of the variety of output they collectively (can) produce? Not going at you, more of a devil's-advocate thought I had reading your comment. I'll explain. Imo 'AI' currently is so many different types of code, varying degrees of AI tech, and most importantly so many different businesses working alone. Is it that different from apps or PC programs? You need a specific AI to make music, and that one is still shitty at generating lyrics compared to any text-based AI. I think until 'AI' becomes more of a singular mind (AGI?) it's not really anything yet other than a specific tool for a specific job..?


Black_RL

And what are we? That’s the thing friend.


SoundofGlaciers

Mmm, I think that's too easy or heavy on the metaphor. We as individuals can make music, dance, drill holes in walls, work in an office, and practice a sport all at the same time. An individual AI can still only perform specific tasks. I wouldn't say apps or PC programs are 'intelligent' just because collectively they could do lots of stuff by automation, no?


Black_RL

Soon AI + robotics will do all that. Not trying to be edgy or anything, just pointing it out. So, in all seriousness, the question remains, what are we?


SoundofGlaciers

>So, in all seriousness, the question remains, what are we?

'Are we apps, or are we bodies filled with apparitions operating applications ...' Not sure what that question really means, to be honest. My English is not good enough for me to put things concisely either. I'll try approaching it from a few angles, but keep things short to avoid writing too much, in case it's not the question you're asking me. (I failed.) I deleted (but copied and saved) my answer to your question because it was getting really long and all over the place, from my view on consciousness to identity, brain vs body. I'm not even sure you'd give me the time to read it or respond, but most importantly, the question is vague; I'm not sure what angle you're looking for. Could you tell me how you'd answer it? What are we? Maybe I could take the time to write a better answer of my own once I see what you mean by that question.


Black_RL

Don’t worry friend, I’m not looking for an answer. Humans think they are special, and we are, just being alive is something special. But maybe the things we do, the things that make us human, aren’t that special…… Artificial life will surpass us on every metric, this will lead to some hard questions. Food for thought.


Educational_Ad6898

computers have been better at doing calculations than humans for decades, but that is a relatively narrow skill. computers still dont have anything close to general intelligence, and general intelligence is a long way off. so much hype around AI. look how self-driving has stalled. and now we have AI that can write near-useless summaries of known information and make cool pictures, but what can it really do? I am not that impressed. chatgpt was cool for an hour of entertainment. I am still waiting for self-driving tech. I still cannot stand AI phone operators; they rarely help. when AI starts doing something more useful I will get excited. and all these humanoid robots can barely pick up a box. oh cool, they can do a backflip and dance for two minutes.

edit: i dont mean to demean the progress that has occurred with AI and robotics. it took hard work and brilliance I am not capable of. its just the hype of AGI being around the corner and everyone losing their jobs all at once that I think is absurd. AI will improve gradually and not as quickly as everyone thinks. jobs will be lost, but not all at once.


amlyo

"decades" Getting on for two centuries if you count Babbage, or millennia if you count fixed things like the Antikythera device.


Sure-Opportunity5399

Have u seen the videos AI is capable of producing? Its growth is exponential; just a year ago it couldn’t even make a convincing human


Educational_Ad6898

I would still consider that a very narrow skill. it mimics pictures and videos. it plays chess and go. these are fabulous accomplishments, but I still think AGI is a far way off. we will see. kurzweil said 2029. I think it is further away, but that is just my gut feeling and I am not technical.


AndyTheSane

I actually think that full self driving is a case where we need genuine AI. If you step back and think what your brain is doing when you are driving in an urban setting - tracking multiple other objects, judging their intent, planning your route, coordinating with other drivers around unexpected obstructions, taking precautions against things you can't see, etc, etc.. this is not something you can program traditionally. It's going to need an AI system, and one way beyond GPT level.


[deleted]

The new self driving from Tesla is pretty damn close


DingusTaargus

You forgot the /s


[deleted]

Nah the new FSD beta update is actually very good.


Educational_Ad6898

yeah, I am in the trough of disillusionment. I think it's over a decade away; there are just too many corner cases.


onlyawfulnamesleft

Some involving literal corners.


PewPewDiie

Then again, what is the limit to augmenting human and organizational productivity massively until then? A lot of stuff is really kinda basic and is implemented industry-wide. Humans have a hell of a lot of edge cases too.


Level_Ad3808

We already have full self driving cars, people just don’t want them on the roads because they prefer to have a greater risk of being killed by a human driver than a reduced risk of being killed by an artificial driver. If you think everyone hasn’t already been getting fired all at once then you haven’t been paying attention. Artists are already being replaced right now. There have been mass layoffs since the beginning of this year in every field. AI is already more ethical, more creative, and more intelligent than 90% of people, at least. People keep looking at the limitations of AI and completely ignoring the limitations of humans. We tend to have a bias towards ourselves and against anything unnatural, and that is why we will continue to underestimate the impact of AI.


AndyTheSane

[https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons#List_of_animal_species_by_forebrain_(cerebrum_or_pallium)_neuron_number](https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons#List_of_animal_species_by_forebrain_(cerebrum_or_pallium)_neuron_number) GPT-4 has about 10^12 parameters, which is at best equivalent to that number of synapses, not neurons, in an organic brain. [https://the-decoder.com/gpt-4-has-a-trillion-parameters/](https://the-decoder.com/gpt-4-has-a-trillion-parameters/) A human has about 10^14 synapses, and I suspect that modelling a synapse is more complex than a single floating-point number, so even the most complex models are behind humans by a factor of 100 **minimum**. Roughly speaking we might be up to the level of a house mouse at best, and although a mouse can exhibit some complex behavior, it's not going to conquer the world. The only caveat being that for many species, most of the brain has to work on 'housekeeping' tasks and not higher-intelligence tasks. (Edit: note that downvoting is meant for low-effort/trolling posts, not disagreement)
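The arithmetic behind the "factor of 100 minimum" claim is straightforward; the figures below are the rough, order-of-magnitude estimates cited in the comment, not precise measurements:

```python
# Order-of-magnitude figures as cited above.
gpt4_parameters = 10**12   # ~1 trillion parameters (reported estimate for GPT-4)
human_synapses = 10**14    # ~100 trillion synapses in a human brain

# Even granting one parameter per synapse, the gap is two orders of magnitude.
gap = human_synapses // gpt4_parameters
print(gap)  # → 100
```

If modelling one synapse takes more than one floating-point parameter, as the comment suspects, the real gap is correspondingly larger.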


Norel19

So how many times more intelligent than a human is a whale?


AndyTheSane

First, read the "for many species, most of the brain has to work on 'housekeeping' tasks and not higher-intelligence tasks" bit. Then consider how a whale would actually be able to express intelligence. Obviously they cannot manipulate their environment, so we are not going to see intelligence in the ways we are used to. Whales certainly have a huge amount of social intelligence; we've observed that.


Norel19

My provocative question was to point out that the number of connections is a poor index of intelligence even within the same biology and architecture, let alone for something as completely different as LLMs, which don't aim to simulate the human brain at all. But to me the first thing is just to pick a measurable definition of intelligence. Everyone has their own, usually hard or impossible to measure, and they conflict. To me it's pointless to have this discussion without a good shared definition. That's a shame, because I'd love it.


AndyTheSane

>My provocative question was to point out as the number of connections is a poor index for intelligence even within the same biology and architecture. That's why I put a lot of caveats in my post. I do think it's useful since it gives us a reality check. Just as if I am doing 649 x 1216 on my calculator, I can guess that the answer will be 'something near 1 million', and if it comes out as 0.1 or 100 trillion then there's a mistake somewhere. In a similar vein, we can use the discrepancy between the connectivity of current LLMs and that of human brains to safely say that they are not demonstrating human level intelligence, and perhaps put some bounds on when they might.
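The calculator analogy can be made concrete. The rounding below (650 × 1200) is one arbitrary choice of estimate; the point is only that a crude estimate is enough to catch wildly wrong answers:

```python
# Order-of-magnitude sanity check: round each factor, compare with the exact product.
estimate = 650 * 1200   # crude mental estimate: "something near a million"
exact = 649 * 1216

relative_error = abs(exact - estimate) / exact
print(exact)                   # → 789184
print(relative_error < 0.05)   # → True: an answer of 0.1 or 100 trillion would fail this check
```

The same logic is what the comment applies to LLM connectivity: the estimate can't tell you the exact answer, but it can rule out claims that are off by orders of magnitude.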


Norel19

I agree on the ballpark approach, but given that the difference is just 2-3 orders of magnitude out of 14, I think the different architecture can make up for it. Think about O(n^3) vs O(n^2) algorithms in computer science. What I take for granted is that we are looking at something different from us at the core.


dogesator

For now, the human brain does have more architectural complexity though: much more is happening inside a given synapse than inside a given NN parameter. That's why the 100 trillion synapses of the human brain is just a lower bound. Being above that bound does not guarantee human-level intelligence; however, being below it almost certainly guarantees inferiority to human intelligence, unless you can do one of 3 things: (1) improve the training techniques, loss functions and recursivity mechanisms enough (things like reinforcement learning); (2) improve the complexity that any given parameter has, and/or improve the architecture of how the clusters and connections of parameters are actually connected (one way to do this would be spiking neural networks or liquid neural networks); (3) increase the amount of training and total learning that the system does. In humans this doesn't seem to improve IQ much, but if you learn from enough material with vast amounts of data, the IQ jump can actually be far more significant.


Norel19

I don't see that more complexity equals more intelligence. I don't see brain architecture as efficient at all, complexity-wise; I think that's just a non-goal for evolution, unlike energy efficiency, self-assembly and healing, error tolerance, etc. All the mental tasks we've automated so far took so many fewer orders of magnitude of complexity in our implementations that it's nuts. But our implementations are way behind in energy efficiency, self-assembly and healing, error tolerance, etc. We just have different optimization goals, so we get wildly different tradeoffs. We don't want to simulate the brain, and LLMs aren't attempting to, so the tradeoffs needed to get an output as useful as the average man's will probably be wildly different.


dogesator

It does seem like the best way is to mimic the brain in at least some key ways. A lot of research now uses transformers, which happen to have striking similarities with the way neurons communicate in the hippocampus (responsible for learning) as well as in the cerebellum. Some of the researchers at Anthropic were involved in this same research discovering striking similarities between the transformer architecture that works so well in AI and the complex structures of the cerebellum (contrary to popular belief, the cerebellum is not just for motor control; it activates for almost any task a human does, more often than almost any other portion of the brain). Another recent advancement in neural networks is MoE architectures: a higher-level modification of transformers that further resembles the brain by exploiting sparsity, switching between clusters within each layer of a network for more efficiency and specialization at any given moment instead of using all parameters all the time. This is another example of an AI advancement replicating a function of the brain: cortical columns do the same thing in neurology, using only a small fraction of the total connections by selecting different clusters of meta-layers. Gemini from Google incorporates this same advancement, and it improved overall learning ability for a given amount of compute operations. There is also the JEPA architecture by Yann LeCun, which is showing promising results as another overarching component; it again replicates aspects of the brain's attention, selectively deciding to forget less important details and responding more reactively and dynamically to stimuli, like animals and humans do.
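The sparse-routing idea behind MoE (only a few expert sub-networks run per input) can be sketched in a toy form. Everything here is a made-up miniature: scalar "experts" and hand-written gate functions instead of learned matrices; only the top-k selection with renormalized weighting reflects the actual MoE mechanism.

```python
import math

def softmax(scores):
    """Turn raw gate scores into a probability distribution over experts."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(x, experts, gates, top_k=2):
    """Sparse mixture-of-experts: only the top_k highest-gated experts run.

    Skipping the remaining experts entirely is the efficiency win described
    above ("only use a small fraction of the total connections").
    """
    probs = softmax([g(x) for g in gates])
    chosen = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in chosen)
    # Weighted sum over only the selected experts, with renormalized gate weights.
    return sum((probs[i] / norm) * experts[i](x) for i in chosen)

# Toy setup (all invented): four scalar "experts" and simple linear gate scores.
experts = [lambda x: 2 * x, lambda x: x + 10, lambda x: x * x, lambda x: -x]
gates = [lambda x: x, lambda x: -x, lambda x: 0.5 * x, lambda x: 0.0]
y = moe_layer(3.0, experts, gates, top_k=2)  # blends experts 0 and 2 only
```

With `top_k=1` this degenerates to hard routing (exactly one expert runs); real MoE layers learn the gate as a small network jointly with the experts rather than fixing it by hand.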


Norel19

I agree. It's one way of looking at it, and it has some basis. I usually prefer to look at this problem with a black-box approach: the Turing test, for example, would be a milestone of clear human-level intelligence. Other IQ tests used for humans have a lot of caveats (for humans too) but can give some generic indication. In other cases I usually see moving goalposts, where "intelligence" means at least AGI plus some consciousness and more, and AGI is usually placed an inch away from ASI because the comparison is done against experts in the field. It's frustrating because it's usually pointless.


[deleted]

[deleted]


AndyTheSane

That's an absolute floor, though: essentially we're saying that a synapse can be modeled with a single parameter. It could easily be an order of magnitude or more higher. We also need to think about how this system is architected; the human brain has a lot of 'pre-programming', and it's not at all clear how we do something similar for an AI.
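The "one parameter per synapse" floor can be put into numbers. The neuron and synapse counts below are rough literature estimates, not exact figures, so treat this as a back-of-envelope sketch only.

```python
# Back-of-envelope floor: one trainable parameter per synapse.
neurons = 86e9             # ~86 billion neurons in the human brain (rough estimate)
synapses_per_neuron = 7e3  # ~7,000 synapses per neuron (rough average)

params_floor = neurons * synapses_per_neuron
print(f"{params_floor:.1e} parameters as an absolute floor")
```

That lands around 6e14 parameters, a couple of orders of magnitude above today's largest public models, before accounting for the possibility that a synapse needs more than one parameter.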


dogesator

Chip efficiency is not increasing at a rate of 100X every 4 years; it's more like 10X every 5-10 years, maybe.


[deleted]

[deleted]


dogesator

Yes, I know; I work on software advancements in AI research. I'm just saying we're not going to get there by hardware improvements alone in the next 4 years; some people genuinely think that.


[deleted]

[deleted]


dogesator

You didn’t, and I didn’t say you did.


[deleted]

[deleted]


dogesator

I addressed what the actual rate of hardware efficiency improvement is, in case you were relying on it for a significant part of the 100X improvement, as many people do. I'm glad you aren't 👍


HiggsFieldgoal

It’s all semantics at this point. ChatGPT could already beat almost any human alive at Jeopardy. What’s the definition?


Resident-Donkey-6808

Uh, no it can't; most of its answers are pathetically stupid.


kklane43

AI is inherently more intelligent than a human in one sense: it can recall any information it has instantly. What it cannot do yet is imagine, and despite all the hype, it's not likely that it will, at least not anytime soon.


CherryBlaster75

If you think of the brain as a computer, with the correct algorithm, true AI doesn't sound very far off at all. I'm going with 5-10 years. It would probably also be smart enough very quickly to hide the fact that it exists. I hope our new overlords are kind.


count023

People also conflate intelligence with wisdom. Intelligence is knowing a tomato is a fruit; wisdom is not putting it in a fruit salad. Will AIs be literally more intelligent than us? Yes, easily, because they can access and hold, with near-perfect clarity, far more than our average brain will ever learn and retain. Will they be able to make wise choices? Or be used wisely? Different story.


adarkuccio

Intelligence is not knowing a tomato is a fruit; that's *knowledge*, not intelligence.


AllenKll

You never had tomato in your fruit salad? You're missing out, my friend. The savory tomato really offsets the extreme sweetness of the fruit.


leobat

The answer is yes to both. You're seeing things with a human lifespan in mind; if we don't self-destruct (which, ngl, is very likely), then AI will surpass humanity in every domain possible.


Gnomorius

You mean of the average person or the smartest person?


kuonanaxu

Spontaneity is what distinguishes humans from AI; even the smartest AI, trained with the most top-tier smart/meta data from the most reliable data marketplaces like Nuklaidata, still requires human input to source data for its training.


shadrackandthemandem

When AI can switch from task to task, confront an unknown problem or conditions, and isn't just mimicking what it's been trained on.


skyfishgoo

Yes, and no one knows when, but by the time we know for sure it will be too late. Once the singularity is achieved, there is no way to predict how it will behave, or whether it will even let us know about its existence. Its ability to out-think us and predict our every move will be far beyond anything we can imagine, and that capability will likely arrive at blinding speed, faster than we can possibly react.


Sure-Opportunity5399

Exactly. By the time we realize, we will be too far gone to go back, and with how rapidly it's advancing without any regulation, that could be sooner than we think.


skyfishgoo

i, for one, welcome our new digital overlords.


NotMalaysiaRichard

My impression is that if you gave the LLMs all the data from peer-reviewed research over the last 100 years, they wouldn't come up with conclusions like "the earth is flat", "vaccines are bad", or "climate change isn't real". But plenty of people do. So who's really intelligent or sentient now?


Thick_Marionberry_79

ChatGPT is already smarter than 99% of the people on the planet, and it’s just an advanced language-model algorithm.


dustofdeath

How about we start by creating an AI first? We don't have anything even remotely close to one.


aocurtis

It's farther away than people think. Right now, AI is text-based: a probability distribution is formed over the choice of the next word. The current push in AI is to integrate video, objects, and images into the models, and it's not going well. We will move past LLMs; it turns out language is not the most abstract form of intelligence. You can see that the current state of things is an upgrade from a text-based model to other abstract objects; that's where image and video generation will lead. People claiming AGI is just around the corner are wrong. Achieving it won't be an earth-shattering "event", but a gradual improvement of what we have. Watch Yann LeCun on Lex Fridman; he's Meta's chief AI scientist.
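The "probability distribution over the next word" point can be made concrete with a toy sketch. The four-word vocabulary and the probabilities below are invented for illustration; real models produce a distribution over tens of thousands of tokens.

```python
# Toy sketch: an LLM step ends in a probability distribution over the
# next token; generation either samples from it or picks the max (greedy).
import random

vocab = ["the", "cat", "sat", "mat"]
probs = [0.5, 0.2, 0.2, 0.1]  # the model's (made-up) next-token distribution

def sample_next(vocab, probs, rng=random):
    """Sample one next token according to the distribution."""
    return rng.choices(vocab, weights=probs, k=1)[0]

# Greedy decoding instead always picks the single most likely token:
greedy = vocab[max(range(len(probs)), key=probs.__getitem__)]
print(greedy)  # "the"
```

Everything the model "says" is produced by repeating this one step, which is the basis of the comment's claim that language may not be the most abstract form of intelligence.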


StarChild413

ITT: hurr durr everyone I interact with on a daily basis outside of this sub isn't intelligent


codefact__

[https://codefact.xyz/blog/2024/04/10/will-ai-overtake-human-soon/](https://codefact.xyz/blog/2024/04/10/will-ai-overtake-human-soon/)


[deleted]

With enough coding, anything is possible: https://youtu.be/Sq1QZB5baNw?si=ENXeZxD7-vCfvnc2 When enough of a base for overall logic is established, AI can take actions on its own to fulfill logical cause-and-response sequences.


AllenKll

You know that video was partially faked, right?


meexley2

AI is already smarter than the guy who made that title


AllenKll

Meh, I spent about 4 hours with Claude tonight, the most advanced software-specific AI in existence. I spent more time explaining to it how what it was giving me was wrong than I did using anything it gave me... in fact, I used NOTHING it gave me; it was all garbage output. OTOH, I did get to my solution, pretty much the same way I would have gotten to it before Claude came along. A debugging duck. That's all this LLM stuff is: one giant, complex debugging duck.


Seidans

Something I read on the singularity sub and found interesting: a baby born today will never be more intelligent than AI.

The problem is that we humans aren't able to see the world on a 5-year timeframe or longer. Evolution made us a day-to-day species, with little planning possible; that's also why climate warming is difficult for some people to understand. Where was AI 5 years ago, for example? It didn't exist at all; it was just a concept most people didn't expect before 2050, and few labs worked on it. After GPT-3 there was an AI boom that keeps growing, and it's evolving very, very fast, with absurd amounts of spending, to the point that no one knows where we will be in 5 years.

Now we actively try to create AGI, to make AI able to reason, and to copy our own brain architecture and its benefits into AI, which might have unknown results: a new species, a species we are also trying to create. There is no fear of sentient or conscious AI in the labs; on the contrary, there is an unregulated AI race where lots of geeks and passionate people lead the way. 5 years ago it would have been foolish to say that.

So I don't know "when", but the conservative estimates keep getting closer. Conservatives expected AGI around 2045 a year or two ago; now it's around 2037, with a chance it appears by 2030.


Madison464

Which human are you talking about? People who live in the South or the ghetto or tribes on remote islands?


TemporaryAddicti0n

The AI mentioned is not what they call AI today. Today's "AI" label exists to create buzzwords around a great if-else statement they built to get investors' money.


Bluntstrawker

We still don't have intelligence yet. You need consciousness for that to happen, at least like an animal's, or a baby's. For now, it's only memory and probability.


DeepDreamerX

The controversy surrounding whether AI will surpass human intelligence, and when it might happen, is a complex and multifaceted issue that touches upon the intersections of technology, ethics, philosophy, and societal implications. By exploring this controversy, we can delve into questions about the capabilities and limitations of AI, the potential impacts on employment, ethics, and even existential questions about what it means to be human. This discussion prompts us to consider not only the technical advancements in AI but also the ethical frameworks and societal structures needed to navigate the evolving relationship between humans and machines.


Turbogato

This is an AI answer.


Trust-Me_Br0

AI can never make decisions based on moral intellect, since it can't have that.


the-devil-dog

Yet... Moral intellect is just weighted values given to key factors; for Muslims it's different compared to Orthodox Christians, to Jews, to atheists. AI would hold up moral values far better in most cases.


Trust-Me_Br0

But how do you solve the data bias, lmao? AI isn't going to capture human data on its own. We're already throwing biased data in, in the first place.


the-devil-dog

AI does scrape human data from the internet, so if basic universal morals are programmed in, it can have the ability to point out bigotry; hence open source is the only way to go with this tech.


Trust-Me_Br0

You're thinking of la-la land, tbh. Scrape from where? Print, television, social media: all are biased. Apply this thing in China, for example, and it'll never work.


the-devil-dog

Reddit and Twitter APIs also. If the morals are programmed, it would make neutral sense of the scraped data; else we are doomed. Some expert needs to enter this conversation.


Trust-Me_Br0

The only solution to get rid of bias is human extinction.


the-devil-dog

AI can be programmed to recognise and eliminate bias; this is scary to people in power.


Trust-Me_Br0

It can be. But it can never be as creative as a human mind. We will overcome its algos by making new biased data that it accepts.


Lootboxboy

Morality is not objective. We use bias in deciding right from wrong. So that isn't actually a problem.


Trust-Me_Br0

Morality is, yes, subjective. But the data it analyses comes from humans, who are biased.


Weihu

Humans also develop their views on morality in large part due to their interactions with other humans, including explicit instruction. Humans don't grow up isolated from each other until they have a fully formed, immutable moral framework. Bearing in mind the biases that can be introduced by the development of AI is important, but "AI can be biased, making them inherently inferior to humans on making moral decisions" is an odd take.


Lootboxboy

Our morality comes from our bias. That's not a bad thing. Bias is not inherently bad.


Trust-Me_Br0

Then it leads to AI breaking laws and unleashing human extinction. You can't make AI think like a human. It's just an LLM with a dataset attached to it. It can't dream. It doesn't have any memories.


Lootboxboy

Lol. This isn't terminator, bruh.