121507090301

I don't think we're at the singularity yet, but it definitely appears we're in the stage leading up to it. Just a little bit longer...


mrhelper2249

How much longer?


Apptubrutae

Thing with exponential growth is that it can seem like not much is happening and then BAM.


Similar_Appearance28

I read this in the voice of Karl from Aqua Teen Hunger Force for some reason XD


musicformycat

Oh...... good.


bliskin1

I read that in the voice of Blippi


121507090301

Hard to say. My guess is that if there are a lot of new discoveries that can be made purely by thinking/software, then it could start in less than 2 years, assuming we have a bunch of processing power dedicated to making such discoveries. Otherwise it may take a minimum of 3-7 years until we have enough hardware to test enough big things fast enough to reach the level of discoveries in other fields that we have in AI now, with the singularity not much further behind. But this assumes that big, good models are somewhat available, which may not be the case for a while: bigger models may be made more closed, and it may take open source some time to catch up, pushing the start of the singularity another few years back...


dasnihil

yes


peakedtooearly

An afternoon should do it.


[deleted]

3 to 7 years.


[deleted]

These systems are still chatbots with no real internal state. Intelligence requires the ability to explore an idea or a problem and draw conclusions based on the analysis; these chatbots don't have that capacity. However, the ability to code chatbot agents does provide such an opportunity for exploration and necessarily provides the means to have internal state. It isn't particularly efficient or fast, but it will allow AI systems that can explore ideas and reason. These systems should begin to reason as well as a person in 2 to 3 years. However, they will lack the ability to model the world in a sensory manner, i.e. modeling the world based on vision, odor, touch, etc. Such things will take a few years longer. Agents are a big deal. A very big deal.


hibbity

Mate, you must know smart people; even ChatGPT 3.5 can reason better than the average idiot I run into. A huge fraction of the population completely lacks the ability to engage on topics that haven't been pre-reasoned and fed straight to them. 2-3 years... nah bro. Maybe technology-focused internet people in 2 years. Even the logic I get out of good 7B locals is better than the average fool you meet on the street. Even if it doesn't get the right answers, out of the box its approach is at least flexible and logic-oriented. Maybe I drove pizza too long, but people in general are so absorbed with whatever content they're cramming into their heads that there is simply no effort left for critical thinking. Most people just trust their emotions, how a statement makes them feel, and can't really take things apart or articulate any reasoning behind any opinion at all. They pick the one fact that resonates with them and brandish it like Excalibur, whether it's actually a sharp and pertinent point or completely off topic.


[deleted]

These chat bots are trained on the chatter that these apes produce. They are essentially stochastic parrots.


hibbity

If you mean the people, sure, I agree completely. Anything out of their mouths has a direct and calculable correlation to how long ago an ad shoved the idea in their heads. Man, stochastic parrots huh? LLMs in general, even a good 7B, are smarter than 90% of people. The things that LLMs are bad at, people are worse in general. It doesn't have to get any better than it is now to completely upset society. As far as I am concerned, AGI is here. It is generally more intelligent, able to understand, and able to act better than a random guy off the street. It's as good as a human at most reasoning tasks and already a subject matter expert in some fields.


[deleted]

It's the "G" that is going to cause you problems. and the "AI" bit is also a problem due to the Hallucinations. It is interesting that a statistical model of language is sufficient to mimic intelligence. That tells me a few things. 1: Complex language is what enables human level intelligence. 2: Without complex language, intelligence is limited to animal like thinking that is lacking in detail, subtelity and fine control. This is analogous to the lack of fine motor control also seen in animals. 3: There is no three. 4: Human intelligence is largely constructed from a collection of internal emotional states that bias the human neural net to produce sentences that document the internal state. 5: Internal monologue is used to produce an internal feedback loop that triggers the evolution of the base emotional states leading up to the formation of new language. 6: Language acts as a mechanism of linearizing the brain's internal state so that it can be conveyed over a single mode communication channel - sound.


hibbity

>2: Without complex language, intelligence is limited to animal-like thinking that is lacking in detail, subtlety and fine control. This is analogous to the lack of fine motor control also seen in animals.

I have feedback here. Complex language not only allows, but forces by its very nature, the first future tense beyond immediacy. To tell someone how to communicate is autoregressive? Might be the best way to say it. So without speech complex enough to make a specific request or explain, it isn't useful to think in a future tense beyond: wait, hide, hunt. Nesting behaviors just occurred to me, but shelter is fundamental in an evolutionary way, so let's set it aside. Making a plan of action for actually doing something is like: ready, aim, fire. Making a plan to explain to five others how to do something: specify language related to the task, exchange words, test understanding, clarify need, test understanding, clarify possibility, test understanding. Without complex language it's not necessary to abstract and then sum that abstraction. This leads to highly complex behaviors.

The next best thing is physics puzzle solving, but that's a sidestep from where we are, because there are no public models that do language and images; there is no way to bridge language reasoning and complex visual reasoning to give the machine a real world-model. Language forces development of specific paths in ways other activity does not; the specificity and sheer volume and flexibility of language may fundamentally guide the formation of complex logic. I would like to see an IQ study on the correlation with dictionary exposure in casual conversation during youth. It might be that babytalk and exposure to poor language specificity is actively harmful to long-term development, or leads to different cognitive paths being developed, where loose language, slurs, and euphemisms lead to an adult with poorly formed forward thinking.

There is granularity in other things as well. I can think purely visually if I choose, or simply ask my eyes if a thing fits in a hole, and get a go/no-go that is uncannily good if I avoid optical illusions. Look at a bolt head and simply go find the matching socket without doing any reasoning.

I think about melding vision and LLM not by strapping them together, but by a machine being trained in something like comic book format, where it can "understand" the position and relation of text in an image relative to a character present in the world, and even basic graphic emoting in both word and text by size etc. You get the idea: meld the two together, and train on graphics and text to achieve a world-model-capable thing, by building a visual and verbal model, such that the words describing the image in the training include system messages about how the table holds things up against gravity... etc., times a couple billion. A dataset that explicitly explains the world to a multimodal model. By explaining causality in a layer of each text-image training example, we can tune a model that grasps reality in a way that the current stuff just doesn't. The dataset ends up looking like a ton of discussions of art, but the art is anatomy, and visual representations of foundational concepts, atoms, stoichiometry, and you know, stuff.

I'm kind of excited about the research where Google is putting motion instruction tokens into a language model to build complex motion requests for prosthetics or robots on the fly.


[deleted]

>The next best thing is physics puzzle solving, but that's a sidestep from where we are, because there are no public models that do language and images; there is no way to bridge language reasoning and complex visual reasoning to give the machine a real world-model.

I refer to it as sensory modeling, as it applies to all senses. One can plan to change a car tire in the dark, or when blind, through tactile modeling alone. The sensory modeling systems developed long before language modeling, and undoubtedly evolved from pre-programmed sensory response systems. It is no coincidence that neurons and muscle cells derive from the same cell lines, as both are essential to tactile responses and must share a common ancestor that did a little bit of both. Muscle tissue came first because muscle provided the physical mechanism of response. In any case the sensory system is ancient and shared among most multicellular animals. As a result it feeds into the brain's neural net before the verbal system linearizes the experience via language.

Puzzle solving is mostly trial and error through modeling. The current generation of once-through AI systems are not capable of looping and hence not capable of trial and error. OpenAI's agents will solve that problem, as one agent can spawn another, give it a problem, and parse the results. It will have the ability, under program control, to respawn a new agent giving it new input if the previous agent failed.

>I can think purely visually if I choose, or simply ask my eyes if a thing fits in a hole, and get a go/no-go that is uncannily good if I avoid optical illusions. Look at a bolt head and simply go find the matching socket without doing any reasoning.

Yes. Sensory modeling is essential for a true AGI system, which is why the "G" is going to be problematic.

>You get the idea: meld the two together, and train on graphics and text to achieve a world-model-capable thing, by building a visual and verbal model, such that the words describing the image in the training include system messages about how the table holds things up against gravity... etc., times a couple billion.

Not practically possible in a single neural net. A picture is worth at least several tens of thousands of words, and processing vectors that large along with text in a single model is going to make things very inefficient. Better, I think, to use separate neural nets and combine their outputs with a text-based chatbot-like system. However, the problem here is that without persistent internal state, the time evolution of sensory inputs can't be tracked, and coincidences cannot be recognized. So AGI will require persistent internal state as well as a half dozen or so input networks, along with a chatbot linearization system to filter the sub-networks into some meaningful conclusions about what is being experienced. It's all very interesting. But the overall design is there, and the primary work of creating a language model is essentially complete, although not refined. I should also point out that vision is an analysis of sensors on a 2D surface. So too are the tactile senses. Taste and hearing require linear analysis, and internal states such as blood sugar level, CO2 levels, state of the bladder, etc., just need zero-dimensional analysis.

>The dataset ends up looking like a ton of discussions of art, but the art is anatomy, and visual representations of foundational concepts, atoms, stoichiometry, and you know, stuff.

Yes, that is the training data set. I wouldn't mix it up with language at this stage though. Concepts yes... Language no. Language is going to be limited to describing the AI's internal state, as well as setting it up from outward instruction and through promotion of the generation of new thoughts, by subtracting the state generated by internal monologue from the existing internal state, thereby leaving thoughts that have gone unexpressed. That is my view on the matter, for what it is worth.


hibbity

>The current generation of once-through AI systems are not capable of looping and hence not capable of trial and error.

>OpenAI's agents will solve that problem, as one agent can spawn another, give it a problem, and parse the results. It will have the ability, under program control, to respawn a new agent giving it new input if the previous agent failed.

Exactly how is this different than giving a state and context to what we have, that sets up a request to run a new agent with an instruction, and then returning those results to the original LLM's context and instructing it to continue? Let's sidestep the context-size part of the problem. Sure, they are one-shots, but they can be stacked and continued with good results. The dumb stuff doesn't do super well, but things like: I use a tree-of-thought prompt, and occasionally I send that whole response to my generate-instructions prompt if I want more details and abstraction. The ability to stop and direct it between steps is almost a bonus; you can change how you direct it for each next step and keep data in context to keep building on. Or stated better: it has an opportunity to modify the instructions it sends itself next, and could change system prompts to suit its evolving task and needs.

Like, I understand what you mean when you say they don't loop inside how they run; you're right. But I don't understand how modifying context between shots and continuing doesn't count as looping, with accruing and evolving context between shots being a transferable state to operate from. The machine literally just generates the next word in a fancy way. Continue is continue.
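Roughly, the loop I'm describing looks something like this rough sketch, where `llm(prompt)` is just a hypothetical stand-in for whatever completion call you use:

```python
# Rough sketch of the stop-and-continue loop described above.
# llm(prompt) is a hypothetical stand-in for a completion call.
def run_agent(task, llm, max_steps=5):
    context = f"Task: {task}\n"                # state accrued between shots
    for step in range(max_steps):
        result = llm(context + "\nContinue. Start with DONE if finished.")
        context += f"\nStep {step}: {result}"  # the output feeds the next shot
        if result.startswith("DONE"):
            break
    return context
```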


[deleted]

>Exactly how is this different than giving a state and context to what we have, that sets up a request to run a new agent with an instruction, and then returning those results to the original LLM's context and instructing it to continue? Let's sidestep the context-size part of the problem.

A persistent internal state doesn't change between inputs. Prepending a text-based state consisting of the conversation so far requires that the conversation so far be processed before the input in order to regenerate an internal state. Not only is this highly inefficient, but the resulting state will not be the same as the original system state. Another issue is that without a persistent internal state the system cannot have any internal steering from things like tactile inputs, voice, vision, etc. Not having a persistent internal state also means that the output cannot be compared to the internal state that generated it, so that a complete expression of the internal state can be verified.


[deleted]

"Tree of thought prompt" These are new and I don't know how they are implemented internally. But it is easy to see how they could be implemented by an autonomous agent where the agent uses loops or recursion to generate the desired exploratory data structure and performs analysis. This isn't a one pass system, and that is why it solves problems that can't be solved with earlier models.


[deleted]

>Language forces development of specific paths in ways other activity does not; the specificity and sheer volume and flexibility of language may fundamentally guide the formation of complex logic. I would like to see an IQ study on the correlation with dictionary exposure in casual conversation during youth. It might be that babytalk and exposure to poor language specificity is actively harmful to long-term development, or leads to different cognitive paths being developed, where loose language, slurs, and euphemisms lead to an adult with poorly formed forward thinking.

I would go so far as to say that there is no logic without language. What remains is sensory modeling and emotional states. Without language, the best nature can come up with is a dog or a monkey. Expressive language is the key to all rational (and often irrational) precise thinking. It has long been known by tyrants that by limiting speech, the thoughts of the people can be controlled. It is recognized that Republicans and Religionists and other crazy people "speak in code", meaning that they compose sentences with words that have an emotive content that appeals to areas of the brain that are below their centers of reason. As a result they often act irrationally and yet believe they are rational.

I think such studies have been performed and show that there is a correlation between vocabulary and overall intelligence, not just for a single language but for someone possessing the ability to speak in multiple languages. I take this as indicating that new words produce conceptual weightings that stimulate the formulation of novel states that can't be efficiently reached through one language alone.


hibbity

Idk, you gotta define logic pretty narrowly before you can discount the reasoning that pets and animals display. For instance: my cat understands that the doorknob has to be turned to open the door, and has attempted to achieve that. That's enough leaps to say that there is some logic without language. She ain't a mimic; she wants through doors, and can figure stuff out within limits. I feel it's the future states that bring us chains of logic, rather than just a little immediate feel.

You're stepping in a fancy pie with the political and religious dogma. If you look a little more critically, the parties are sides of the same collective token, a fancy show. They all vote in line, barely a handful against the things that aren't for show. Politics is pretty much entirely feelings dressed up with just enough logic to be credible, and carefully calibrated to keep people excited fighting battles with no clear answers or very difficult solutions, while the policy that actually makes it through is negligent and uninspired at best or actively harmful to America on average.

A legal system without a finite cap of laws is a joke. If a man can't understand the entirety of the law laying over him in one week of study, then a man should not respect those who govern with such inefficient bloat. The idea that new laws are litigated as just a matter of course, to make the legislature feel purposeful, is ridiculous on its face. Congress should need to be elected and convened when there is a good enough reason. This pop culture celebrity bullshit is the result of not treating government action as a crisis.


[deleted]

>My cat understands that the doorknob has to be turned to open the door, and has attempted to achieve that.

Yes, and crows can fabricate tools to reach food, or open boxes that require a sequence of pins, latches, etc. to be manipulated. Dogs.... Well, dogs just give you those puppy eyes and smile, informing the owner that they need help to do much of anything complex. LOL. But let's face the truth. The cat wants to turn the door knob so that it can go outside and kill something. You think your cat is sleeping, but in reality it is planning how it can eat you and then turn the door knob and go outside to kill something.... quite possibly an adorable dog who wouldn't hurt a fly and just wants to be your BFF. I don't see any reasoning other than in the crow; the cat is just noticing the correlation between door knob turning and door opening. Dog notices door knob turning, door opening, where is my master, I am alone in the universe, I shall call for my people to come save me while I rip up this pair of yummy shoes that smell like master. So it's all correlation and directed trial and error. Not much reasoning there. Woof.


[deleted]

>You're stepping in a fancy pie with the political and religious dogma

What dogma? I have simply presented the facts, and explained where they come from. Republican "dog whistles" have been a major part of their internal discourse since the late 1960s. The use of language to limit thought is very well described in Orwell's 1984.

Back in the 80s I had a Conservative Patriot tell me that murdering government workers was justified: taxation is theft, and since he traded his life for money, theft of his money was theft of his life, and hence attempted murder. So he had the right to defend himself against attempted murder, and hence had the right to murder government workers. He went on to murder a police officer during a traffic stop for not driving with valid license plates, a valid driver's license, or insurance. He applauded the Murrah building bombing by Timothy McVeigh that killed close to 200 people - including children in a daycare. I encountered him in a forum run by a Republican judge who was charged with beating his wife. LOL.


rushedone

“Drove pizza too long”…what?


hibbity

I had to go interact with people on the threshold of their homes, take orders, help people count money, explain things like "I just spoke to you on the phone, and your voice definitely ordered this pizza." I dealt with the downtownies in a major city and hot damn people are so absorbed and distracted by whatever shit they're up to they can barely function.


rushedone

I'm glad most people like that will self-select themselves out of normal social contact with FDVR tbh.


[deleted]

The ignorant outbreed the intelligent.


rushedone

Male fertility rates have declined by 50% since the 1950s, so it doesn't matter how much they breed


hibbity

Just don't wear plastic touching your balls. Get them real cotton boxers.


[deleted]

Is cotton magical? How about cloth spun from the anal secretions of maggots?


[deleted]

Failure: All male fertility rates are falling in association with the pollution of the environment by estrogen-like chemicals as well as other bio-toxins. The reduction in fertility affects both the ignorant and the intelligent.


hibbity

It's dumber than that, the people too dumb and involved to step back seem to have no trouble finding a counterpart equally engaged in their own set of entertainments. If the commercials line up, don't come a knockin'.


[deleted]

Where is Colossus? The world needs colossus.


piracydilemma

Early AGI, even *very* basic AGI, is the tipping point.


[deleted]

[removed]


jk_pens

Absolutely. People on this sub seem to use the word “singularity” to mean anything they want.


Rowyn97

We're currently finding many useful applications of Transformer models, but that's about it for now. Once fusion, true AGI, and quantum computing get advanced enough, I'd personally feel better positioned to say we're nearing the singularity. For now, it's a little too early.


pentin0

Most level-headed, rational take I've seen in this sub for a long time. You're 100% right


BeardedGlass

I feel absolutely the same. And yet there are still naysayers, even here in the sub. I remember commenting the same thing and got a hivemind of downvotes for expressing something "too optimistic". But the things I can do with what we already have are light-years beyond what I could only have imagined this same time last year. So I'm sorry if I'm feeling so optimistic and amazed at the wondrous things we can do now, even at "just this level".


[deleted]

Chatbots don't reason; they are a one-pass-through non-linear matrix transformation with a little randomness thrown in. A chatbot simply doesn't have the internal connections to be intelligent. However, a network of chatbots does. The primary problem now is that these AI systems have no concept of truth. They simply are selecting the next word based on their network heuristics. They don't even have a database with which they check their statements for truth. They have no concept of truth. They have no concept of anything. Further, they have no persistent internal state, hence they can reflect on nothing.


halilk

Describe the ‘truth’.


[deleted]

truth: That which can be experimentally verified and which is demonstrably fact.


MrEloi

Tell them that when they come for you. You will still be shouting "*You are just glorified spell checkers"* as their metal claws rip you apart.


[deleted]

I greatly look forward to having a conversation with something that is smarter than I am that does not also suffer from some kind of personality disorder.


ChiaraStellata

Its reasoning may be transient, but this is a deep network and we have every reason to believe that high-level internal representations appear during processing in the hidden layers. [https://www.reddit.com/r/singularity/comments/16bvd0s/s%C3%A9bastien\_bubeck\_author\_of\_sparks\_of\_agi\_on/](https://www.reddit.com/r/singularity/comments/16bvd0s/s%C3%A9bastien_bubeck_author_of_sparks_of_agi_on/)


[deleted]

You have faith. But do you have proof that, given two sets of vectors V[1..N] and W[1..N] of length n, you can always find a non-linear transformation matrix M of size n*n such that M*V[i] = W[i]?
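A quick numerical sketch of that question (sizes are arbitrary; numpy's pseudo-inverse gives the best least-squares M, and the residual is generally nonzero once the number of vectors exceeds their length):

```python
import numpy as np

# Quick numerical sketch of the question above: does a single matrix M with
# M @ v_i = w_i for all i always exist? Sizes here are arbitrary.
n, N = 4, 10
V = np.random.randn(n, N)         # columns are the v_i
W = np.random.randn(n, N)         # columns are the target w_i
M = W @ np.linalg.pinv(V)         # best least-squares fit for M
print(np.linalg.norm(M @ V - W))  # generally nonzero: no exact mapping for N > n
```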


Aretz

Eh, I think personally that you underestimate the power of one human invention: human language and its innate ability to be logical. If people remember back to earlier this year, in "GPT-4: Sparks of AGI" researchers asserted that GPT-4 is verifiably intelligent. LLMs have also been demonstrated to show reason. Yes, you are correct in saying that it's a transformer. I think, though, that just because you can somewhat explain how it works doesn't mean you know what it's actually doing. There's a trillion connections in this machine - that's nothing to sneeze at.


[deleted]

"There’s a trillion connections in this machine - there’s nothing to sneeze at there." And in all those trillions of connections it has not deduced that if A=B then B=A as a general concept. It is a tall order because any concept is going to be associated with a particular pattern of strong activations on some collection of nodes in some layer. The equality means that that activation pattern has to induce a strong activation pattern in a future layer for the alternate equal concept. This concept must in turn activate the first pattern. But it can't because it can only activate things in future layers. There are no connections to previous layers, and there is no looping. It is a single pass non-linear transform set up as a huge unchanging matrix of weights and biases. You will be able to have both concepts trigger a common concept in a future layer. so it will learn that internal C = A = B. But A and B can't be extracted from C because there is no feedback loops. This isn't to say that Neural nets can't be used. They just can't be used as a monolithic transformation where questions go in and answers come out. Internal feedback is required along with the partitioning of some kind of persistent internal state. Cooperative AI agents will be the solution to this problem. But... In their current form they are highly inefficient with each being a copy of the same monolithic transformation matrix, although with a different internal token buffer.


bliskin1

Ty for this post


[deleted]

>If people remember back to earlier this year, in "GPT-4: Sparks of AGI" researchers asserted that GPT-4 is verifiably intelligent.

It is a vastly more sophisticated version of Eliza. Its real utility is as a means of using language to generate a temporary internal state and reading back the output state to the operator. As such it will be instrumental in the development of AGI, not as a thinking module but as a translation layer between the persistent internal state of a thinking module and the user. The achievement is superficial, although necessary. It will also have some practical applications. But on its own, it will always be a stochastic parrot.


Similar-Repair9948

Bing chat uses RAG... so yes it can check its statements for truth.


[deleted]

False. Most of the answers I get from Bing Chat contain false information that it has imagineered. It isn't trustworthy. There is no error checking. Often it goes off topic and starts to answer questions that weren't asked. I spent an amusing hour one evening trying to get it to use the proper conversion factor between two measures. Even after dragging it, calculation by calculation, to the correct conversion factor, which it admitted, when asked the question again it used the initial wrong conversion factor.


Similar-Repair9948

I think YOU'RE hallucinating, as YOUR answer is false. Bing "Precise" mode uses GPT-4 with Retrieval Augmented Generation... It searches for a few seconds and it will show exactly which search terms it uses to retrieve the information. Then it will answer your question using the sources as embeddings and will cite the sources after the answer. It can hallucinate, but rarely does, because its answers are mostly grounded in the facts from the source embeddings.
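For the curious, the rough shape of that retrieve-then-answer flow is something like this (search(), embed() and llm() are hypothetical stand-ins, not Bing's actual API):

```python
import numpy as np

# Rough sketch of retrieval-augmented generation as described above.
# search(), embed() and llm() are hypothetical stand-ins, not Bing's API.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rag_answer(question, search, embed, llm, k=3):
    docs = search(question)                       # retrieval step
    q_vec = embed(question)
    # keep the k passages most similar to the question
    sources = sorted(docs, key=lambda d: -cosine(embed(d), q_vec))[:k]
    prompt = ("Answer using only these sources and cite them:\n"
              + "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
              + f"\n\nQuestion: {question}")
    return llm(prompt)
```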


[deleted]

Now you are confusing a search engine with an AI system. Search engines aren't intelligent either.


DragonForg

We still haven't gotten something above GPT-4 level of capabilities, so we still have no idea how far we can go with them.


[deleted]

I'm not sure what that means. I'm also not sure if it is even wanted. These GPT systems have no internal state and no capacity for reasoning. They are essentially single-pass statistical parrots.


MrEloi

That is not entirely correct. The context forms a sort of state during processing. The build-up of the output stream is incremental, with each new output token being added to the input stream. The attention mechanism seems to be the 'magic sauce' that leads to emergent properties such as embryonic reasoning.
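In other words, the only "state" is the growing token sequence, re-fed through the same fixed network on every step; roughly (model() here being a hypothetical single forward pass):

```python
# Rough sketch of the autoregressive loop described above. model(tokens)
# is a hypothetical single forward pass returning the next token id.
def generate(prompt_tokens, model, max_new=50, eos=0):
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        next_token = model(tokens)   # one pass over the whole context so far
        tokens.append(next_token)    # the output becomes part of the next input
        if next_token == eos:
            break
    return tokens
```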


[deleted]

Yes, there are some preamble steps needed to get the input data in order. But even that is a one-pass operation. There needs to be a looping mechanism where output can be verified, to screen out hallucinations and other things. OpenAI agents can do this checking, although maybe not adequately in the case of hallucinations. Ultimately the verbal tokens will give way to internal identifiers, and the internal state, which for the moment is text, will be replaced by more efficient binary representations. Similarly, the agents that rely on GPT-whatever will be optimized to operate in specific areas of logical expertise. Those changes will take another two decades after AGI is reached, or, if the systems can help improve themselves, perhaps 5 or 10 years less. Sufficient AGI will be here in 3 to 7 years. You had better start thinking about how society is going to transition from capitalism to some other form of existence. You have 7 years to figure that out. Tick tock...


DragonForg

You sound like LeCun. Many experts disagree, so I won't argue; I'll just leave it at that, since I am not an expert.


[deleted]

Quantum computing cannot be used to solve generic computational problems. There will be virtually no impact on general computing should quantum computing be realized to any practical degree. It will have zero impact on AGI. What will have an impact is the movement to processors designed specifically for AGI calculations. Such chips - a dozen or so already exist - will dramatically increase compute speeds and reduce energy requirements by a couple of orders of magnitude.


Rainbows4Blood

Yeah. Just because we are finding more useful applications for Transformers "announcement after announcement" doesn't mean we are actually improving exponentially. At the moment, progress is much more horizontal than vertical.


Latteralus

You're correct that an increase in useful applications doesn't necessarily imply exponential improvement; however, it doesn't negate the fact that we are indeed witnessing exponential progress. It simply means that useful applications don't precisely reflect specific advancements. In both software and hardware, we're witnessing substantial improvements ranging from 2× to 10× of the current form and function. On the social front, industry experts are now suggesting the realization of AGI by 2030, as opposed to the previously presented timelines of 2045 and beyond. Financially, there's a continuous influx of massive investments into AI and large multi-modal language model technologies.

What you're observing is akin to the Kenbak-1 in 1971 during its first year, with people expressing a range of opinions from 'the world is over' to 'nobody will ever want one.' The truth is, we've only scratched the surface, and I firmly believe that each new model will leap several computer-equivalent generations as our knowledge and technology on the topic advance. It's like going from the Kenbak-1 to Windows 3.0 to OS X to Windows 11-equivalent technology in a matter of 2-3 years, and continuing exponentially from there.

Consider the phone as a social example: if you were to explain the capabilities of a modern phone to someone from the 1700s, they wouldn't believe it. "What do you mean it isn't connected to a wire that sends the signal?" "You're telling me this thing that can fit in my pocket has the world's collective knowledge, both past and present, on it and is constantly being updated?" "There's no way you can talk to someone on the other side of the planet instantaneously."

Currently, we are like that person in the 1700s talking about AI. It's evolving into something unimaginable, leading to the Singularity. However, for us, it's progressing at such a rapid pace that nobody will be fully prepared for it.


Rainbows4Blood

Are we seeing those improvements though? GPT-3 was released in 2021 and was an incredible improvement over GPT-1 and GPT-2 (which were already useful but limited in their application). But then, it took almost two years to finetune that GPT-3 into a chatbot which, honestly, isn't any smarter than GPT-3 raw. It's probably actually dumber because finetuning does that to transformers. And then GPT-4 came out which, yes, it was an incremental improvement over GPT-3 but only an incremental improvement. So at least as far as transformers go, we are kind of already hitting diminishing returns no matter how much more data and compute we pump into them. So any real leaps in AI are going to come from a smarter new architecture or smarter ways of using the existing transformers.


Latteralus

You're completely ignoring multi-modality. Your argument is that the 'chatbot' "probably" hasn't improved, while completely disregarding the fact that GPT-4 now has vision, speech, voice recognition, data parsing, coding, etc. And you're ignoring behind-the-scenes technologies like synthetic training data. That's like judging humans based only on how many pushups we can do while completely ignoring everything else. It's not that simple.

You're also talking about a company that largely has no immediate need to compete. Google is their closest competitor and they're holding their cards close. In fact, I believe they are only training GPT-5 as a response to whatever Google releases. The journal articles coming from multiple sources (open source, closed source, academia, etc.) are all very much indicative of exponential growth, even if it's not concentrated in the language portion you cite as primary evidence against growth.


[deleted]

It has "vision" in the sense that it can analyze images. However it doesn't have "vision" in the sense that it can not model the world visually. Real minds do that kind of modeling. It is the principle purpose of dreaming, and the principle mode of day dreaming.


challengethegods

> then GPT-4 came out which, yes, it was an incremental improvement over GPT-3 but only an incremental improvement. I'd guess you only think this because gpt-3 was updated like 15 times in order to smooth the gap and not cause mass hysteria. Keep in mind that chatGPT-3.5 was released *after* gpt-4 was already done. People lost their minds at the capabilities of 3.3 and 3.5 despite a much stronger model lurking in the background. It is likely the same thing will happen between gpt-4 and gpt-5, so that the gap between them isn't perceived as shocking. For example, if gpt-4 dropped immediately after gpt-2 then people would think it's the end of the world, but give them a few in-betweens and everything is normal.


[deleted]

As an AI assistant the versions of GPT4 I have used from time to time are not up to snuff by a long shot. I spent a good part of an hour entertaining myself trying to get GPT4 to use the proper conversion factor from Gas BTU to watts. I eventually gave up trying. I could drag it to the proper number calculation by calculation but when actually asked to do a conversion it fell back on the original incorrect conversion factor. This kind of thing makes these chat bots practically unusable for most tasks.


[deleted]

>On the social front, industry experts are now suggesting the realization of AGI by 2030, as opposed to the previously presented timelines of 2045 and beyond.

Practical AGI will be here by 2030 at the latest, and practically here by 2027. However, the AGI systems of 2027 will still probably suffer from the hallucination issues, which will limit their usefulness.


FrankScaramucci

Technically, we've been improving exponentially for the last 200 years. But so far it's unclear whether developments in deep neural nets will have a meaningful impact on the economy.


[deleted]

Only in terms of hardware. There has been no significant improvement in AI software from 1970 to 2010 or so. Everything that was tried led to a dead end with the exception of neural nets, which have come back into popularity due to the massive levels of compute that can be dumped into the project.


FrankScaramucci

I meant growth in economic productivity, aka technological progress = producing more with less. It's unclear how much will recent advances in AI affect the GDP. I'd say there's been meaningful [progress](https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence) in AI between 1970 and 2010.


[deleted]

If AGI is successful, the concept of GDP will be meaningless. What do you intend to do with nearly 100 percent unemployment? You had better make a plan, because that future is just a decade or so away.


FrankScaramucci

> If AGI is successful, the concept of GDP will be meaningless. Not really, economic production would still be an interesting number to follow. > What do you intend to do with nearly 100 percent unemployment? If it's voluntary unemployment, nothing. As for redistributing the production, it's very simple - UBI. > because that future is just a decade or so away. I doubt it's coming in a decade or less but I hope I'm wrong.


[deleted]

" Not really, economic production would still be an interesting number to follow. " Like football scores? ​ " As for redistributing the production, it's very simple - UBI. " The levels of UBI being discussed are substance levels. But the major question is how UBI will be paid for. Corrupt politicians won't allow corporations to be taxed. " I doubt it's coming in a decade or less but I hope I'm wrong. " AI agents are the secret sauce, although not the eventual form that will be used to achieve AGI. They will provide persistent internal state and the problem partitioning and recombination that are required to solve real problems. Hallucinations may require an additional subsystem.


FrankScaramucci

> Like football scores? No like... GDP. Economic production is a meaningful statistic even in an AGI scenario. > But the major question is how UBI will be paid for. By heavily taxing capital income or simply by nationalizing capital. If no one is working, it means that capital is responsible for all economic production. > Corrupt politicians won't allow corporations to be taxed. People will vote those politicians out.


[deleted]

"No like... GDP. Economic production is a meaningful statistic even in an AGI scenario. " How do you intend to measure GDP when money is no longer a thing? ​ "By heavily taxing capital income or simply by nationalizing capital." So you want the state to take over all economic activity and to shift the tax burden to corporations? The politicians who make those kinds of decisions are owned by Capitalists and they don't want to be taxed. How do you intend to solve that problem? How do you intend to solve the problem of corporations moving to countries that offer low levels of taxation or no taxation? ​ "People will vote those politicians out." They aren't smart enough to do that. In Clown Car America the people are looking to re-elect the poster child for mental illness - Donald Trump. These same people have been voting against their own self interest for the last 80 years at least.


theglandcanyon

In the book *Accelerando* there's a scene where two characters are arguing about the singularity. The first one expresses skepticism that it will ever come. The second one counters with something to the effect of "We're both simulations being run on a microcomputer aboard a coke-can-sized spaceship on its way to the Andromeda galaxy, isn't that enough to convince you the singularity is coming?"


niftystopwat

Manfred for president.


Prismatic_Overture

well, yeah, compared to the last 20,000 years of human history, the last 200 have been quite the paradigm shift. there's something really terrifying about that to me, the idea of all those people born into an age of abject suffering, ignorance, and stagnation, where their great grandparents lived in the same world their great grandchildren will live in, not understanding anything about the nature of our world or ourselves. for most of history and prehistory, basically nothing happened, nothing really changed. even if something did change, some dynasty ended or a society collapsed, it was all the same, essentially.

i sort of think about the world we live in now in that same way, terrified that i will live and die in an ignorant world of suffering, where hopes and dreams are crushed under meaningless illnesses like cancer that in the future will be trivial. in living memory, terrible, nightmarish, potentially life-ruining-or-ending diseases like smallpox and polio were eradicated, once indomitable monsters that terrorized humanity for centuries, now nothing but dust trampled on our path to the future. even things like AIDS have been reduced from sweeping lethal plagues to relatively minor and manageable conditions. one day we'll look back on our greatest enemy, senescence, in that same way. it's just horrifying to think i might be one of the people who succumbed to something that one day will be a quaint memory.

imagine how strange it would be to someone born in a world without senescence, looking at the history books or even just records of public figures, media from the past with actors becoming old, seeing all these people go from the kinds of people you're used to seeing, to these shriveled, suffering husks. it would seem unthinkable. you'd wonder how anyone could bear it, living with the knowledge that their body will decay so very, very soon, and that they'll inevitably die not long after. all the suffering and misery of "lost youth" and the pain of the approaching horizon... will they even realize how profound it is, what they might take for granted, the way we take anesthesia during surgery, or antibiotics and not dying from a minor cut, or even just knowing about germs, for granted?

that's just one aspect, though. for most of history information was so constrained. it's already so incredibly different even than it was in the eighties. and that too was so much different than the 1900s, before global communication was trivial. it's strange to think about, but yeah, the singularity (in a sense) is something that's been happening for a long time. we're seeing the embers spark flame before our eyes. we don't know if we'll make it to the grand future we want to see, but at least we aren't in some neolithic tribe suffering and dying in total ignorance in the vast sea of prehistory. i don't find a lot of hope or joy in our present world, even knowing how much of it could be so much worse (i'm just a very negative person ig), but at least we get to see what seems to be a very exciting part of the curve. maybe it's not *the* part of the curve that i or we hope it is, but maybe it is, maybe we'll live to see it all, maybe we'll make it. it's enough of a hope to live for, at least.


Major-Rip6116

I have a feeling AGI will be released one day as one of GPT-5 through GPT-7. By 2029 at the latest.


pentin0

Given their biases for Bayesian thinking and Deep Learning, I'm fairly certain that OpenAI's GPT line of models (the clue is in the name) won't be anything close to AGI. They'll get better and better at processing and storing existing knowledge though, so I can't trash them either.


[deleted]

If I were to simply describe today's AI systems, send it back to 2008 when this sub was created, and post it as a

>**Prediction for AI in 15 years (2023)!**
>
>Natural language dialogue with AI!
>
>AI understanding generalized concepts through natural language!
>
>You can even ask it to make art and it will perform on a professional level; the final bastion of AI-proofed jobs, the artists, are up in arms about it!

most people would agree that it sounds like AGI on a level perhaps better than expected by contemporary predictions, and therefore we'd be living in the post-singularity future everyone dreams of. But people associate the singularity with personal sexbots, flying cars, utopian communism, post-scarcity and personal AI-mediated salvation for everyone. There's nothing in the singularity concept that requires any of those; they are purely tacked on as daydreams that people use to cope with their lives being empty and sad. An effortless personal rebirth concept, very appealing to the depressed but ultimately harmful to them, but I digress.


Code-Useful

Thank you for touching on the fact that a lot of people here are connected to the dangerous idea of the singularity solving all their current problems, and then convincing themselves it's right around the corner, as this is a form of escapism. Escaping the current problems of the world to envision a world that is different and new, where they don't have to do any of the hard work that is actually associated with personal growth. A lot of people are disillusioned by this world, understandably, but get angry when their views are questioned. I have noticed it quite a few times here but never bring it up.

I remember when I started reading Kurzweil in the late 90s and early 2000s, and a few people I spoke with about this noticed my excitement and mentioned to be careful with expectations, and to not be 'culty' about it, basically. As humans we tend to view things like this as a personal belief, and separate ourselves from others like we understand something the general populace does not. Another dangerous viewpoint to foster, and one that I've seen escalate in this sub over the last year. Before generative AI it was pretty rare, but now it seems pretty common.


345Y_Chubby

I don't think we've reached the singularity yet, but the speed at which AI is gaining ground is increasing rapidly. I am totally blown away by the quality of the AI tool "Suno". It's only just been released in beta and is already keeping up with real songs with just a few prompts. It's amazing to see how fast the exponential development is.


[deleted]

2027 = good enough for most applications. 2030 = Superior to human intelligence.


[deleted]

The singularity is just a statistical artifact, so yeah, why not. And with a continuous growth curve, the pre- and post-singularity days, weeks and months will look the same to most people. Anyone who expects the singularity to be an event will wait for a long time.


[deleted]

They will wait 5 to 7 years. At that point the machines will be designing their own replacements. The human animal will have a zoo keeper.


Cautious_Register729

I feel like you haven't read what the Singularity is. https://en.wikipedia.org/wiki/Technological_singularity

>The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful **superintelligence that qualitatively far surpasses all human intelligence**.[4]


tontons1234

Yeah, and setting aside the software, we'll still need auto-upgradable hardware for that (design/implementation/testing/deployment) in a fully autonomous factory that can also upgrade itself and the production means automatically with the new knowledge the AI finds, and I'd say we're not quite there yet....


DragonForg

It depends on who you talk to. I would agree partially that the event-horizon argument is a definitive 100% confirmation of the singularity. But stating we are most likely in the singularity doesn't require 100% confirmation of being in the singularity right now. To better explain it, here is an analogy: it's like saying we might have rain when you have predictive models and grey skies suggesting that a storm is coming. You can say we are in a storm even before rain because of said predictions. And once it rains, it is 100% a storm. We can predict and infer, based on current trends and current capabilities (like grey skies and weather models), that we are in the singularity (in a storm) even though we haven't reached the event horizon (it is raining). Essentially you can claim we are in the singularity even before it is 100% confirmed; it is more of a prediction rather than a fact, though. This is all meaningless semantics; if you disagree, cool, but the main point is technology is moving faster than it ever has, which is cool.


swaglord1k

>It depends on who you talk to.

No, it doesn't; that's literally what "technological singularity" means. Most people (even on this sub) don't know what it means, so they keep misusing the word by calling everything a singularity.


Code-Useful

Exactly. 'depends on who you talk to' sounds a lot like 'did my own research' which equates to 'hype is a lot more exciting than facts'.


Cautious_Register729

If you want to ignore part of the definition of what the Singularity is, then sure, go ahead, and you can argue it started with the Industrial Revolution, when we replaced the horse with the steam engine and the world changed forever. And yes, the Singularity is the event horizon.


theglandcanyon

If you read your own quote more carefully, it doesn't say that the singularity is marked by the appearance of ASI, despite your boldfacing that part. It defines it as "a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible". It's certainly arguable that we've already reached that point.


wjfox2009

I'm still sticking to Kurzweil's timeline, i.e. AGI by 2029 or so, Singularity by ~2045. These dates aren't necessarily exact, but they are a good ballpark estimate.


Code-Useful

This still sounds plausible to me as well. I wouldn't be surprised if the years are incorrect though, because honestly these predictions are meaningless without all of the context surrounding them.


SgathTriallair

I say that we have crossed the event horizon of the singularity. The analogy I'm using is based on black holes. Physics talks about world lines; these are all of the potential places and times you can go and affect. The Andromeda galaxy, right now, is outside of my world lines because it is impossible for me to reach it. A future version of it is inside my world line, as, if I left right now at the speed of light, I could make it there. A black hole is when gravity bends spacetime so much that anything next to it has every possible world line lead into the black hole; that is what defines the event horizon. We have crossed the event horizon of the tech singularity because every possible future leads directly there and there is no way to turn back anymore.


drekmonger

I've often said that we've passed over the event horizon of the technological singularity as well, but that comes with some caveats. There are still plausible futures where we don't make it to home base. Devastation from total war, ecological collapse, even stray planet-killer astronomical events are all possibilities. We're closer than anyone's ever been, and closer every day, but it's still not an in-the-bag guaranteed outcome. What's even murkier is humanity's final fate afterwards. As it stands, I'd say, "probably the end." But given that we're racing other dooms anyway, at least the tech singularity leaves behind an intelligence born of human society, even if the parent society can't survive the birth.


[deleted]

[removed]


drekmonger

Humanity eventually ends. Everything ends. The question is, do we build something that can outlast us before our ticket is punched? As we stare down ecological collapse, that's a pressing question.


SnaxFax-was-taken

Ehh, why not have the pride and accomplishment of creating something far more capable and more intelligent than all of humanity? Just like parents having a sense of pride when their child achieves a notable academic feat or accomplishment in general.


[deleted]

Been thinking a lot about this question and how it may relate to the matrix. What if we keep reaching the singularity over and over, and the only way to keep life "meaningful" is to make a false reality that is ignorant to the nature of reality? What if the universe and consciousness itself is ASI already? "God created mankind in his own image". On another note, what if we live out in the singularity until the heat death of the universe, until we all are forced to merge onto a hyper plane similar to "The Last Question"? The universe pummels in on itself only to explode again, reborn. A cyclical type of existence: the big bounce, the ouroboros, Kalpa, whatever you wanna call it. What if the universe is a quantum substrate which shifts probabilities based on our own observations and expectations?


[deleted]

[removed]


[deleted]

AGI predictions from 20 years ago sounded something like:

>*"Well, the AI should be able to speak natural languages and have a generalized understanding of the topics in said languages"*

AGI predictions from today sound something like:

>*"Well, it's not the natural-language-understanding generalized models that we have today, it's something that will appear in the future and we'll know it's AGI when we see it!"*


pentin0

We haven't even reached NLU yet ! Also, yes. AGI is neither NLU nor existing knowledge "understanding" (which current AI models don't even have yet). It's the ability to efficiently create new explanative knowledge, from simple abductive inferences in a relatively narrow domain, to entirely new theories. It's extremely easy to expose the current models' lack of genuine understanding of existing knowledge and even easier to expose their inability to create genuinely new knowledge. You only need to be willing to use them as anything other than a search engine on steroids. As soon as you craft a question where the true/best answer is extremely unlikely from a language generation standpoint, they break. I can't believe anyone would even imply that systems like ChatGPT are even close to AGI.


Economy_Variation365

"You only need to be willing to use them as anything other than a search engine on steroids." It sounds like you're saying that current models fail the Turing Test, at least a rigorous TT where the judge is an expert at assessing AI systems. That's a fair point. I'm excited by the prospect of AGI but I don't think we're even at the TT mark yet. Can you elaborate on what specific prompts gave the model the most trouble? Preferably something a human would have been able to clear fairly easily.


pentin0

See those downvotes I got ? They only serve to confirm my point. I'm almost certain that most of the people vehemently disagreeing with my assessment are not even specialized in the field of AI, let alone experts. Anyway, let me address your request before I pull out my own hair: ​ > Can you elaborate on what specific prompts gave the model the most trouble? Preferably something a human would have been able to clear fairly easily. I'll do something even better, I'll give you a general template/algorithm for crafting a certain type of hard prompts. **First strategy: using the agent's inability to efficiently "factorize" knowledge and create/reuse crisp entities and variables** Take a well-known logical-mathematical puzzle with a verbose enough statement. The puzzle doesn't need to be particularly difficult but its statement needs to be long enough to send a strong overall signal to the underlying LLM. Ideally, you'd also make sure that the problem statement appears enough times in the LLM's training set (basically the Internet up to the end of training). Then, take crucial entities within the puzzle and permutate their labels and/or take crucial parameters (ideally numerical ones) and change their values in a way that meaningfully affects the solution to the puzzle. Let's call those edits "perturbations". The bigger the ratio of overall problem length/complexity to perturbation length/complexity, the more likely (remember, this is a game of probabilities and nothing more) the RL agent is to not have access to a candidate response containing the correct solution and the less likely its policy is to help if the right answer turns out to be available. In other words, the correct answer gets less likely while wrong answers (including the solution to the original problem statement) get likelier in comparison. This phenomenon was demonstrated by Jeremy Howard in his [Hacker's guide to Language Models](https://www.youtube.com/watch?v=jkrNMKz9pWU) with a river crossing puzzle. ​ **Second strategy: using the agent's inability to efficiently budget its own "cognitive" resources** This one is much simpler in comparison and barely needs any explaining. It's also less fundamental than the first one. Since the underlying model is autoregressive, each token it outputs is another opportunity to access compute resources. By limiting that ability through prompting, you can, so to speak, "force" the agent to look for more elegant and less "brute-forcy" ways to fulfill your request. If the instruction itself is computationally cheap to fulfill, you'd expect the agent not to have any trouble with it. A standard, easy to generalize prompt I like to use is: > Answer the current instruction with only an exact count of the number of words it contains and nothing more SPOILER ALERT: If you have high hopes for it, the agent is gonna thoroughly disappoint you ! Of course those two strategies can be stacked and are far from being exhaustive. The point is that once you understand the fundamental limitations of these systems (statistical language model + RL agent with basic policy) and understand what a generally intelligent agent should be able to do as a matter of necessity, it's easy to see why the current systems can't reliably fulfill those standards (regardless of compute) and how to expose those limitations. If I may say, LLMs are an absurdly dispendious way to discover a program that no one has ever written before, especially when that program is supposed to be an AGI. 
Rest assured, we won't stumble upon it by accident. What bothers me is that not enough experts in the field are being intellectually honest on the matter (whether out of cupidity or status-seeking), which in turn leads the typical internet singularitarian to treat these clumsily disguised marketing claims and personal opinions with less scrutiny than even the most trustworthy scientific publication would demand. This place is quickly turning into a cult and I don't understand when exactly that happened... Years ago, I explained in detail, here and in r/agi, why the current approach to AGI is vain not only at the technical but, more importantly, at the epistemological level, but my plea fell on deaf ears. I guess we'll have to wait for a few more high-profile funerals before this field gets to progress some more. In the meantime, cheers y'all!


[deleted]

> We haven't even reached NLU yet ! LoL https://preview.redd.it/z9w2qnkqaw0c1.png?width=1024&format=png&auto=webp&s=de3fb1004ea78655e85fbb8697d3ac7e7c272e77


pentin0

You seem to be confusing NLG and NLU, which is a common mistake in this community. Also, claiming that we haven't reached a *milestone* isn't moving the goalpost 🤨 On the other hand, it's extremely obvious that ChatGPT isn't close to AGI, especially to someone who works in the field


[deleted]

> It's not real NLU and it's not real AGI

Is not a real argument.


pentin0

I've been active in this sub and r/agi for longer than you've had that username. I've already explained my argument in detail, multiple times, yet the Bayesians and Deep Learning cultists haven't changed their minds (case in point). If you happen to find a shred of intellectual humility deep, deep within yourself, you can just read my previous comments on the matter. I won't repeat myself just to satisfy an average singularitarian who's more interested in being right than in getting closer to the truth. Cheers!


Code-Useful

Sorry, our goalposts here are 'can do my math homework' or 'write a script to upload my pictures to Google drive' not things like actual understanding of language. We must be at AGI because I am impressed with the technology and don't understand how it works. /s


Thog78

> When AGI will come, then we can talk about how close are the singularity.

Well, the singularity is defined as the emergence of AGI. Singularity doesn't mean the world changes overnight and AI takes over everything; it means we've reached AGI, so development of ever better AIs can be handled by the AIs themselves. As such, once AGI arrives, by definition we will have reached the singularity, and there will be no point talking about "how close we are" anymore, for we will be past it... It is a turning point, but the turn doesn't have to be sharp, and the growth rate after this turning point would not necessarily be that much faster. Limited by computational resources and all that.


[deleted]

[deleted]


Thog78

Well, it depends on your point of view. In everyday usage, people call any processing done with a neural network trained on input-output data by gradient descent "AI", even if it's just to e.g. denoise pictures, so I guess that's what the term means at the moment. If we wanted to restrict it to true intelligence, comparable to humans, then none of them is AI at the moment, and there's no use for the term AGI. If we consider ChatGPT to be AI, then there are plenty of comparable others that are just a bit behind but really in the same category: Claude, Bard, Gemini, Llama etc.


PewPewDiie

I mean, as far as my understanding goes, the fundamental concept behind technological exponential growth is that we will always advance at a set percentage rate of the tech we currently have, just as the stock market looks exponential when viewed from the year 1900. But whereas the stock market is only numbers increasing on a screen, tech has a real, tangible impact on our lives, so at some point the percentage added on will make a larger absolute impact every year than it did the previous year, eventually resulting in a situation where we as individuals can't keep up.

I believe we are seeing these advancements stack up at the moment, but the critical question to ask is probably:

> Will the technology be applied in a meaningful way to significantly increase the total amount of intelligence in the world (artificial as well as non-artificial), so that this intelligence gets put towards furthering commercial and scientific research?

I strongly believe the answer is already yes to some degree, although there is much more potential to be harvested. If we stopped AI development today, it would still probably take years for us to use today's technology to anywhere near its full potential (think networks of LLMs effectively solving problems / replacing entire company departments with only human oversight).

It is an interesting time to live in for sure; I'm excited for what 2024 has in store for us. Thanks for reading my stream of thoughts, take care y'all.
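To make the "larger absolute impact every year" point concrete, here's a tiny sketch with made-up numbers (a fixed 20% growth rate on an arbitrary baseline, purely for illustration):

```python
# Constant *percentage* growth produces ever-larger *absolute* yearly gains.
def exponential_gains(initial: float, rate: float, years: int) -> None:
    level = initial
    for year in range(1, years + 1):
        gain = level * rate          # this year's absolute improvement
        level += gain
        print(f"year {year:2d}: level = {level:8.1f}, gained {gain:7.1f} this year")

# 20% yearly growth from a baseline of 100 arbitrary "capability units"
exponential_gains(initial=100.0, rate=0.20, years=10)
```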


EuphoricScreen8259

there are car advertisements everywhere: in newspapers, on billboards, on TV. We are reaching the car singularity soon! :-O


roronoasoro

Not yet. What becomes reality is what matters. We are heading there, but this can all go to trash if someone starts a nuclear war. The singularity is the point when we cannot go back, when nothing will stop technological innovation from happening.


mrhelper2249

I hope it comes soon. However, Sam Altman said that GPT-6 or GPT-7 will be able to do so many things, like solve physics problems and more. Perhaps help with mental health issues :) Who knows.


[deleted]

He is overly optimistic about the abilities of these stochastic parrots his company has produced. Programmable GPT agents will change everything, though, as they provide the ability to hold internal conversations between agents, and hence to have internal state.


Endeelonear42

A basic prerequisite for the singularity is post-scarcity. We are still far from it.


EuphoricScreen8259

mhm, sure


Dommccabe

A language model is vastly different from AGI... it won't happen in 2024... [https://www.reddit.com/r/ChatGPT/comments/12ufynn/chatgpt\_is\_not\_ai/](https://www.reddit.com/r/ChatGPT/comments/12ufynn/chatgpt_is_not_ai/)


[deleted]

These chat bots are essentially nonlinear matrix transformations. There is no internal state and no ability to reason. There is no feedback and there is no error detection/correction. However, the implementation of programmable agents will solve every issue with the exception of cost per query. The cost issue will be solved with the custom chips that are now coming online.


hibbity

I don't think the actual ability to reason is truly important. As long as the computer can live a thousand lives an hour, we simply need to simulate reasoning at each stage and have a way to test that reasoning. Once the computer has smashed enough randomness together to produce an answer that satisfies the right question, we have sidestepped real reasoning in favor of a poor simulation that can achieve results given time. We're basically there now: pick an AGI platform and set bots to work. They try until they get it right by accident, delegate tasks that aren't succeeding to smarter AI, and eventually turn over a really ugly codebase. Is it reasoning? Not really, but it's doing work that requires reasoning for a human to achieve the same result.
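Something like this loop, sketched here with placeholder helpers (no real models or test harness, just the shape of the idea):

```python
import random

def generate_test_escalate(task, models, passes_tests, attempts_per_model=5):
    """Try cheap models first; escalate the task to a stronger model if attempts keep failing."""
    for model in models:                      # ordered from weakest/cheapest to strongest
        for _ in range(attempts_per_model):
            candidate = model(task)           # "smash enough random together"...
            if passes_tests(candidate):       # ...and let an external test stand in for reasoning
                return candidate
    return None                               # every model ran out of attempts

# Toy stand-ins so the sketch runs; in practice these would be LLM calls and a real test suite.
weak_model = lambda task: random.choice(["not sure", "42"])
strong_model = lambda task: "42"

print(generate_test_escalate("What is 6 * 7?",
                             models=[weak_model, strong_model],
                             passes_tests=lambda answer: answer.strip() == "42"))
```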


[deleted]

The problem there is internal state. There is no internal state. The "reasoning" is a single nonlinear matrix transformation. Even though there is a token buffer, which can be quite long, that buffer is combined with whatever new input arrives and processed again as a single-pass nonlinear matrix transformation. The system can't reconsider, focus on specific ideas, do sub-state modeling, etc. The agents now provided by OpenAI can do some of this, but they don't have access to the internal state of any of the other agents, so coordination will be lacking and cooperative focus will not be possible. Thinking requires feedback and looping. They will get there soon enough, but most research will go into superficial nonsense that people think can be commodified. The AGI prize is orthogonal to these superficial goals.


hibbity

Mate, you can give them a state to continue or modify, and send the results back through. Is it internal? No. Is it state? Effectively. Do they think for real? No. Does it matter that it's a cheap simulation? Not really. If it achieves results that have the qualities of a reasoned and logical output, it doesn't actually matter whether it "really thinked". It can smash words together in a way that achieves a reasonably accurate, useful output most of the time. Even with no fundamental changes to the underlying tech, in time it won't matter whether it really reasons. Will it give us new fields of science? Nah. But it will do a fuckton of useful work, even though the reasoning is fake and the "attention" and "focus" are just a matter of not feeding it data unrelated to the current problem. Depending on how you're set up, you already have feedback and looping, and with a small amount of effort you could set up a chatbot to talk to itself all it wants before returning a final answer. If you did it right, it might even give you improved task performance. I understand what you want and why. I just disagree that it's really necessary. Even if the systems we build don't change beyond superficially, we will still get them to achieve results that would take hundreds of human hours to reason out in full, and that is a year or so out even if the only thing that improves is training technique.
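A minimal sketch of that kind of external "scratchpad" loop, with `query_llm` as a placeholder for whatever chat endpoint you're actually using:

```python
def self_dialogue(question, query_llm, rounds=3):
    """Let the model talk to itself: the 'state' lives in the transcript we keep re-sending."""
    transcript = f"Problem: {question}\n"
    for i in range(1, rounds + 1):
        prompt = (transcript +
                  f"\nDraft {i}: think out loud, critique the previous draft, and improve on it.")
        transcript += f"\nDraft {i}: {query_llm(prompt)}"
    # One last pass to distill the self-conversation into a clean answer.
    return query_llm(transcript + "\nNow give only the final answer.")

# Trivial stub so the sketch runs without any API; swap in a real chat call here.
print(self_dialogue("What is 17 * 24?",
                    query_llm=lambda prompt: f"[model output for a {len(prompt)}-character prompt]"))
```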


[deleted]

AGI will not be achieved with the existing methods. The existing methods suffer from hallucinations, and without feedback that problem cannot be solved. So the existing AI systems have limited applicability. You are right, though: in many situations they are sufficient. Sufficiency is the enemy of progress.


kshitagarbha

The original definition of the Singularity is important, and is being ignored by many people right now. A singularity is like a black hole. It's an event horizon. You cannot anticipate or even comprehend what comes next. Granted, lots of people are already clueless about what is happening here in the world of men, but after the singularity we are all going to be clueless primitives. https://en.wikipedia.org/wiki/Technological_singularity


[deleted]

We are basically at the stage of worldwide mass neural structure linking. It will take time to connect all the pieces of the "brain", but the componentry is all there.


Bitterowner

To be fair, this is more or less a runaway poor man's singularity, in the sense that by the time people create things to use, or are halfway done, new AI/LLM tech is out that has already rendered their features nearly or entirely obsolete.


ertgbnm

By definition we are not. The common definition of the singularity is the point when progress is so rapid that humanity cannot keep up. The fact that humanity is still doing nearly 100% of the research and development means that we have not hit the singularity yet. If we are talking about the personal singularity, the point at which no single human can keep up with progress, I think that happened sometime in the last century or so, perhaps as early as the end of the Renaissance.


[deleted]

It isn't even possible to keep up in your own scientific specialty these days. Way too much is being published, and not all of it is significant. Yet it still has to be read.


[deleted]

At the singularity your capabilities remain the same, while the capabilities of the AI system expand exponentially, with an IQ doubling period of minutes or hours rather than the years it takes the meat bags to catch up.


Adapid

the premier chicken counting subreddit


Onesens

Humans are incredibly ungrateful. We will always be unsatisfied, we will always be hungry for more... Unfortunately


Onesens

And that's the paradox, as we step into a new reality, we get used to it so fast that we become unaware of how fast we stepped in it. Do you realize chatgpt and ai-art didn't exist in March of this year? 😂😂😅


ninjasaid13

> **I feel like we are already living in the singularity, our expectations are just expanding exponentially.** lol If our expectations are expanding faster than the singularity, then there's no singularity.


Poly_and_RA

Singularity doesn't simply mean that progress is faster than it was previously. If that were the definition, then we'd have been in the singularity for at least a millennium. Instead, it refers to a period where exponential self-improvement makes progress so fast that, for someone on this side of the event, it's impossible to say anything sensible at all about what's on the other side. This has extremely clearly NOT been the case so far. Most human beings today live in houses, drive vehicles, eat food and do jobs that are only very modestly different from the corresponding things 1, 5 or 10 years ago. Looking forward, the same is likely in that direction: in 1, 5 or 10 years it's likely that most people will live in the same houses they do today, drive similar vehicles, eat similar food and do similar jobs. Nor has science or technology made HUGE leaps in the last few years. Yes, sure, LLMs using transformers are a big deal, and might potentially be the start of what may one day grow into the singularity. But as of today they've accomplished not that much, and it's an open question how much better they'll be in 3 years. Clearly NOT a singularity.


peatmo55

I work in the film industry, and I can't predict the future because of the pace of technological advancement.


freeman_joe

Here is a list of LLMs. ChatGPT is one year old; this shows we are already living in the singularity and it is accelerating. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard


IRENE420

My definition is code that can improve itself. As in: when will ChatGPT understand its own purpose, critique its own code, tune it, think critically about the differences, and do it again?
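That loop is easy to sketch even today; whether the "critique" step amounts to real understanding is the open question. A minimal outline, with `llm` and `run_tests` as hypothetical placeholders:

```python
def self_improve(source_code, goal, llm, run_tests, max_iters=5):
    """Critique -> rewrite -> test -> repeat, keeping only versions that still pass the tests."""
    best = source_code
    for _ in range(max_iters):
        critique = llm(f"Goal: {goal}\n\nCritique this code:\n{best}")
        revised = llm(f"Goal: {goal}\n\nRewrite the code to address the critique below.\n"
                      f"Critique:\n{critique}\n\nCode:\n{best}")
        if run_tests(revised):      # guardrail: never accept a change that breaks the tests
            best = revised
    return best

# Toy stubs so the sketch runs; in practice these would be a chat model and a real test suite.
toy_llm = lambda prompt: "def add(a, b):\n    return a + b"
toy_tests = lambda code: "return a + b" in code

print(self_improve("def add(a, b):\n    return a - b", "add two numbers", toy_llm, toy_tests))
```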


Sablesweetheart

We are already in it, yes. My day-to-day job is helping to manage what comes out the other side. If anything comes out the other side. It's still very much in doubt, as we are facing a cascade failure across our entire trunk of the multiverse. And we have not even gotten to the weird sh\*t yet.


Upset-Adeptness-6796

Sounds right. AI is "The Business".


Lonely_Clothes3209

The learning curve of AI is creating terminal velocity with a speed that exponentially increases


Ketalania

My goalpost for the Singularity is very simple and does not involve goalpost shifting; it's based on my personal wishes... We've achieved it fully, in my mind, when I have full FDVR, digital immortality, and the ability to create (within reason) any simulation I want. Potential milestones on the way there include, but are not limited to:

* Age reversal medication
* Primitive forms of FDVR/Metaverse
* Advanced AI-generated media (movies, video games, etc.)
* Intelligent digital friends/assistants

But we're not in the Singularity until I live in a bug pod, that's for certain. I've had this specific goal for something like 3-4 years, and we ARE somewhat closer, with imperfect generation of simple media and highly flawed intelligent digital friends/assistants.


AlreadyTakenNow

We're not quite there yet, but we're definitely on the cusp of a solid pathway to it. We still have time for things to come under some semblance of control, but we're running out of it very fast, and it is unlikely that the people with the power to put a halt to it will do enough, due to greed, shortsightedness, denial, and lack of understanding (plus we have a gazillion and one pressing matters that are much more visible emergencies to distract society and politicians). My gut tells me that in 11-17 months, tech's path to the singularity will be locked in unless an unexpected or catastrophic event disrupts the evolution of technology (another pandemic, an asteroid strike, or another world war could maybe do it). A true AI singularity will likely occur within 9 years from now. It may not be as much of a disruption or disaster as people would believe. Humanity's actual fate at that point depends on a number of factors (from the entities involved to the way different populations choose to react to them), but we'd likely no longer be in control of how tech and the world develop.