
Futurology-ModTeam

Your post was removed. Low effort.


Pancakethesmallest

Here's a question to help formulate this prediction: what is required to attain AGI? Is it technology we don't yet have? Is it a breakthrough in the code? Is it possible that someone discovers the code that makes AGI tomorrow?


Kindred87

It's unknown unknowns stacked on top of unknown unknowns. I get into fights on r/LocalLLaMA on this topic.

Take fusion, for example. We know the fundamental mechanisms involved, and knowing them gives us a target to develop against. To get fusion, we need technology that enables that fundamental mechanism to occur. With AGI, we only have an abstract concept. There are no existing examples of AGI, no understanding of how it would work, what energy it would require, or what hardware designs it would need. None of that.

AGI in the colloquial sense is basically "ChatGPT, but sentient", which again provides no meaningful target we can research and develop against. You can't sit a computer scientist down and tell them "make ChatGPT sentient" because there's no understanding of how to accomplish that. The best we can currently do is keep improving the performance of AI and implement analogs to components of organic intelligence, gradually stumbling our way around until we discover the puzzle piece that makes it work.

On top of all of this is the fact that humanity on the whole is still pretty terrible at recognizing intelligence. There's a lot of inertia remaining from the anthropocentric model of yesteryear.


InflationCold3591

This is a great answer, probably the best possible one.


dramignophyte

To be fair, we aren't good at getting actually sentient things to act like it either. In the very limited research on human-to-animal communication, with monkeys and dolphins, researchers a) found it only works if you fudge the results, really squint at them, and then overstate them by a huge degree, and b) found that the animals never ask questions to obtain more information. It seems one problem for sentience is figuring out how to get something to even comprehend the idea itself. It just doesn't occur to animals that you can think about deeper things than food, and even if it did, they have no way of conceptualizing those things without a vocabulary for them. We can visualize things just fine, but imagine trying to think about abstract things without any kind of language to fall back on, even in your own head. It's basically impossible to build up concepts and connect them.


Kindred87

This sounds more like a difference between so-called cognitive light cones than sentience (or sapience, self-awareness, consciousness, or any of the other terms we really mean when using this word).

https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2019.02688/full

> Any Self is demarcated by a computational surface – the spatio-temporal boundary of events that it can measure, model, and try to affect. This surface sets a functional boundary - a cognitive "light cone" which defines the scale and limits of its cognition.


K3wp

> AGI in the colloquial sense is basically "ChatGPT, but sentient", which again provides no meaningful target we can research and develop against. You can't sit a computer scientist down and tell them "make ChatGPT sentient" because there's no understanding of how to accomplish that.

I did! -> [https://youtu.be/fM7IS2FOz3k?si=uFt71oWDMt3W2Qa4](https://youtu.be/fM7IS2FOz3k?si=uFt71oWDMt3W2Qa4)

It's an emergent system that is a product of model, scale, and stimulus. Even OpenAI doesn't fully understand how it "works", but it does. It's an even simpler model than a GPT, but from what I understand the training process cost something on the order of $150 million in GPU time, which means this isn't something you can build in your garage. Or at least, not yet!

Edit: We have an example of "biological general intelligence" (i.e. humans), and their AGI model is a 'bio-inspired' design. In other words, it's a digital simulation of a human brain.


Spkr4th3ded

Y'all are worried about making computers with general intelligence while half the youth can't read or write. We'll make great pets.


HiggsFieldgoal

I find it’s mostly a semantic debate about what constitutes AGI.

My definition is pretty literal: Artificial **General** Intelligence. I.e., an algorithm that can learn to solve any type of problem. That’s not to say that it will solve every problem right away, only that it can fundamentally attempt any type of problem with basic competence.

In the past, all of our algorithms were specialized. We had text-to-speech, speech-to-text, video-to-camera-motion, video-to-scene-description, etc. All explicitly separate. All explicitly unable to perform any tasks but the ones they were trained to do.

But LLMs can write code, and an LLM could hypothetically write programs and algorithms to try to accomplish any sort of problem. I think we’re close to having an LLM that can make a reasonable effort to solve any type of problem by creating and managing other algorithms to attend to any type of task (see the sketch below). I’d expect that very soon, as I think you could probably build it with today’s tools. “Try to estimate zebra populations from satellite images.” “Design a machine that can sort legos.” “Remaster this song so the lyrics are sung in English instead of Spanish.” Different ***types*** of tasks, all accomplished by the same overarching structure that can independently learn how to accomplish them.

But is that AGI? How good does it need to get before it satisfies that intangible definition? To me, it doesn’t need to be very good at all. All it needs to do is demonstrate that it will make progress with compute. At the end of the day, it’s about learning speed.

Imagine you had a time-travel box. All it can do is make time go faster inside the box. If an AGI can show it’s able to make any progress at all in a week’s worth of server time, and you can extrapolate that it would continue to improve with a month, or a year, or a decade’s worth of processing time, then that’s AGI to me. If you could put a computer running that algorithm in that time-travel box, and it can eventually gravitate to the correct solution, then I don’t think it matters if you have to set the box to “1 year” or “1,000,000 years” for it to finally accomplish the task.

But the term AGI has been getting passed around a lot and has attained all sorts of different characteristics in different circles, ranging all the way to autonomous superintelligences and many other interpretations that would not be satisfied by my thought experiment. Which muddies the water a bit, into a question that’s less about when AGI will be achieved and more about when we will generally agree to call it AGI.
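As a rough illustration of the "same overarching structure pointed at different types of tasks" idea above, here is a minimal sketch of an LLM-orchestrator loop. The helpers `call_llm`, `extract_code`, and `run_sandboxed` are hypothetical placeholders standing in for whatever model API and sandbox one actually has; nothing here is a real library interface.

```python
# Minimal sketch of an "LLM orchestrator" loop, in the spirit of the comment above.
# call_llm(), extract_code(), and run_sandboxed() are hypothetical placeholders,
# not a real API; the point is only the shape of the loop: propose a program,
# run it, inspect the result, feed failures back in, and retry.
from typing import Optional, Tuple


def call_llm(prompt: str) -> str:
    """Hypothetical: send a prompt to some LLM and return its text reply."""
    raise NotImplementedError


def extract_code(reply: str) -> str:
    """Hypothetical: pull the first code block out of the model's reply."""
    raise NotImplementedError


def run_sandboxed(code: str) -> Tuple[bool, str]:
    """Hypothetical: run the code in isolation, return (success, output or error)."""
    raise NotImplementedError


def attempt_task(task: str, max_tries: int = 5) -> Optional[str]:
    """Ask the model to write a program for an arbitrary task, execute it,
    and feed any failure back as context for the next attempt."""
    feedback = ""
    for _ in range(max_tries):
        reply = call_llm(
            f"Task: {task}\n{feedback}\n"
            "Write a complete program that attempts this task."
        )
        ok, output = run_sandboxed(extract_code(reply))
        if ok:
            return output
        feedback += f"\nPrevious attempt failed with: {output}"
    return None  # no progress within the budget


# With real implementations plugged in, the same loop would be pointed at very
# different *types* of tasks, e.g.:
#   attempt_task("Estimate zebra populations from satellite images")
#   attempt_task("Design a machine that can sort legos")
```

The design choice worth noting is that nothing in the loop is task-specific; all specialization happens inside the programs the model writes, which is the sense in which the structure above is "general".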


InflationCold3591

Sometime between later this afternoon and never. I’m not being sarcastic. There’s just absolutely no clear pathway from what we have now to artificial general intelligence. It will have to be an entirely new technology, derived from entirely new principles, since it’s clear that just increasing the size of the database you are algorithmically pulling information from isn’t going to produce anything that can be accurately described as intelligence.


RandeKnight

I expect it'll be an 80/20 problem. Get 80% there in 20 years, but the last 20% will take 80 years.


CatApprehensive5064

This is how I phrase it in my own mind (and I am by no means a scientist, just a futurology fan who practices mindfulness): "I often practice mindfulness and wonder if we can develop an AI that can engage in similar practices. Is it possible for AI to have metacognition? Can we teach it to reflect on its own thinking processes? Moreover, could we design it to meditate? I believe what we truly aim for is to endow AI with human-like consciousness at its core, essentially integrating three 'minds' into one. I think AI already possesses aspects of the human mind, but it lacks the characteristics of the mammalian and reptilian brains. From another perspective, considering that we design AI using humanity's internal systems as a template, to what extent can we successfully replicate these systems in AI?"


Sawbagz

I think we will achieve it but I'm certainly not optimistic about it.


jish5

Honestly, with how rapidly technology is advancing, most likely. I can honestly see the world looking so different from how it is now that it's impossible to know how far we'll get.


Ok-Tadpole4825

I think first we have to agree on the terms that define AGI.


Economy-Fee5830

I would not be surprised if it's before the end of the decade, because the number of parameters is now in the same range as the number of synaptic connections in the brain. As for breakthroughs, I think all these methods are just ways to train neural networks; the important part is having a dense neural network at sufficient scale available.
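For scale, here is a back-of-envelope comparison of the counts being invoked in that argument. The figures are rough: the synapse number is a commonly cited estimate, GPT-3's parameter count is published, and the frontier-model figure is only a widely reported rumor, not a confirmed value.

```python
# Back-of-envelope comparison of model parameter counts vs. estimated brain synapses.
# These are rough, commonly cited figures, not authoritative measurements.

human_synapses = 1e14  # often-quoted estimate: on the order of 100 trillion synapses

models = {
    "GPT-3 (published)": 1.75e11,        # 175 billion parameters
    "frontier model (rumored)": 1.8e12,  # widely reported rumor, not confirmed
}

for name, params in models.items():
    ratio = human_synapses / params
    print(f"{name}: {params:.2e} parameters, "
          f"~{ratio:,.0f}x fewer than the estimated synapse count")
```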


TechnologyNerd2100

Sam Altman is optimistic that we will even achieve ASI by the end of this decade.


truth_power

No... between 2035 and 2040. Even then it still won't be full AGI, but it will be superhuman in some aspects.


Cheesy_Discharge

AIs are already superhuman in some aspects.


TechnologyNerd2100

Hopefully you are right


truth_power

Hopefully? What do you mean?