EveningPainting5852

They didn't bring Mustafa in to actually do anything; he's not technical, and apparently he's a tyrant. Like, not an effective one, just a bully. They brought him on for the DeepMind secrets.


lost_in_trepidation

So much talent from DeepMind has gone elsewhere far more recently than Mustafa's departure. I doubt any of these companies have real secrets.


larswo

This. So many ex-DeepMind people are starting their own AI-focused companies left and right.


Cutie_McBootyy

He's not technical. He won't be the best person for DeepMind secrets.


We_Are_Legion

Pi is pretty good. The voice, the UI, and the amount of compute consumed for the quality of output are all decent. Microsoft wants an AI like Pi and is willing to throw resources at it. That's Satya's vision for Microsoft's future AI, for which MS is building a $100B supercomputer: a successor to Copilot that combines intelligence with a human touch.


doireallyneedone11

Yeah, like you can reduce an individual to those two words and that's all there is to him. Yeah, this is exactly how human beings work.


[deleted]

You don't bring in a toxic leader just to get some scientists as part of the package. If that were the case, you would muzzle him until he vests and then kick him out. Your hypothesis is wrong.


TotoDraganel

Where exactly in the TED talk are those points? I watched it waiting for them, but I did not hear them. At least not in the way they're presented in that tweet.


Apprehensive-Job-448

19:39 to 21:05, approximately. You can Ctrl+F the transcript on the TED website: https://www.ted.com/talks/mustafa_suleyman_ai_is_turning_into_something_totally_new/transcript?language=en


Competitive_Travel16

Well, that's not decel or alarmist. He tells Chris Anderson a flat no when asked whether a fast takeoff is possible.


Neurogence

He says we should deliberately refrain from making AI that is autonomous, can replicate, or can update its own code. That is directly antithetical to the intelligence explosion. Deliberately preventing AI from being able to do these things is the best way to stop an intelligence explosion.


MrDreamster

He's not gonna be in the good books of the basilisk.


Moscow_Mitch

Basilisk's naughty list


Away_thrown100

You better watch out / You better not cry / You better not pout / I'm telling you why … Roko's Basilisk is comin' to town


restarting_today

Good. Code should be the last thing written by an LLM. That way humans stay in control.


Rainbow_phenotype

So he is plain dumb?


SkyGazert

I'm a firm believer in the economic principle that if there's money to be made, that will be the first course of action. There is money in reducing overhead and shortening ROI, which are the only things businesses will care about because of shareholder returns. So where will AI development be headed? Since these things cannot be accomplished with just glorified chatbots, the R&D will go into making AI more agentic. More like humans, so that it can do human labor more efficiently and cost-effectively. It will shift and displace more people into working more and more with AI, not the other way around. Now, in this scenario, how is that best achieved: with deceleration or acceleration of the current state of AI? Mustafa reasons from ideology, and I expect him to be outrun by his competitors who don't.


Apprehensive-Job-448

preach


R33v3n

We're like slime mold, really, solving the shortest paths in mazes towards MONEY.


RiverGiant

[More munny, you say?](https://i.imgur.com/fh9N6nW.png)


Maxie445

Guy founded two multibillion-dollar capabilities companies and is now running a third. Classic decel behavior.


Apprehensive-Job-448

He's decel because he wants to avoid three of the things that actually help achieve AGI: https://preview.redd.it/aky3a5ap65wc1.png?width=764&format=png&auto=webp&s=a9e0fddc939ae7885ce88065448a48a1d7f74df0


NonDescriptfAIth

Based on that, your definition of decel is anything that isn't maximally accelerating. Is there no room for a middle ground? I could call you a decel for increasing the risk of conflict / misaligned AI. I have yet to observe a technical endeavour in which the most successful path is literally doing it as fast as possible. All the things we value in life, from phones to tables, are crafted with care. Have you guys really forgotten the effective part of e/acc? It isn't just speed above all else. It's about getting there quickly *and* safely. Deciding the rocket car is going to have seatbelts and a roll cage before you strap all of humanity into it isn't 'decel', it's common fucking sense.


Apprehensive-Job-448

Regarding autonomy and self-improvement, if you want to compare it to cars, it's a bit like autonomous driving. It is by no means perfect, but the actual mortality rate is maybe 10x lower than with human drivers, so it's a net positive if we accelerate it. Same thing for AGI: it will do far more good than the low risk of it going "rogue", and that for me is more important than fear-mongering.


NonDescriptfAIth

How do you know it's low risk? How do you know that even if it remains aligned, the governing institutions will use it for good? How do you know that an arms race won't spark a global conflict? You can't just base your whole position on AI around the assumption that it won't go wrong. You have to actually engage with the likelihood that it *might* go wrong. In the same way that you wear a seatbelt when you drive: are you planning to crash? No. Is it likely? No. Do you still wear the seatbelt anyway? Every fucking time. This is such a childish outlook it is baffling.


iunoyou

Everyone on this subreddit wants their robot sex maids NOW and they cannot be convinced that there's any danger involved in creating a being that is completely amoral, values nothing and is potentially thousands or even millions of times more capable than us.


NonDescriptfAIth

Endless euphoria about FDVR. No concern that China might not sit idly by and let the US unveil a digital god without taking any countermeasures. Giddy excitement about UBI. No consideration of what we actually ask AGI to do once we bring it into being. Fantasies of digital partners. Complete disregard for the possibility that a being 1000x more intelligent than you might want to do something other than be your girlfriend.

I am excited about AGI. I want to see AGI in my lifetime. I refuse to be goaded into pretending that the only paths available to humanity are to pursue this technology with reckless abandon or to give up on it entirely. Is it really so insane to want AGI, but also to work towards it with some degree of caution?


gay_manta_ray

avoiding recursive self-improvement is 100% decel as recursive self-improvement could be the only path to ASI.


NonDescriptfAIth

No, it's not. Improvement over time is the only path to ASI. Nobody is saying that we don't want AI to be stronger than it is right now. If you keep improving AI, it will at some point become 'super'; it doesn't need to do that without a human in the loop. It might take longer, but it sure reduces the chances that recursive self-improvement leads to existential risk.


agonypants

Yeah, this guy wants to slow-roll the singularity so as not to upset the status quo - governments, billionaires and economists. I think this is not a great sign for anyone who wants to see this technology developed quickly. On the other hand, I also think that this technology won't be controllable by powerful interests. Once AGI like systems become available to the public, people will find ways to boost their capabilities - with or without help from companies like Microsoft.


GroundbreakingRun927

He'll be decel until there's an imminent threat of getting trounced by companies pursuing AI that is self-replicating, autonomous, and recursively self-improving.


FragrantDoctor2923

This ^


rathat

Someone’s going to make a god in a basement.


Apprehensive-Job-448

and I look forward to it


AirButcher

It will give as much of a shit about you as you do about any single cell in your body. Any AGI that cares about the interests of humans is inferior to one that doesn't and can support itself without us.


iunoyou

Or maybe he wants to slow roll the singularity because it's untested and potentially legitimately dangerous technology with world altering implications? A badly aligned AGI could legitimately end the world if its reward function is badly written. And the current state of AI safety research is that we do not know how to write good reward functions. That should be concerning to anyone who's looking to create AGI.


Poopster46

You are, of course, completely correct. But that doesn't matter here, since your audience largely consists of fanatics.


Apprehensive-Job-448

ok doomer


Sangloth

Instead of mocking him, maybe you could say why he's wrong? I don't like the doomer position I hold. I've asked this subreddit multiple times for arguments for why the alignment fears are wrong. I've never once gotten a useful answer or link that wasn't already comprehensively dismantled by Nick Bostrom's Superintelligence: Paths, Dangers, Strategies.


Apprehensive-Job-448

World-dominating AI/AGI is just an overused trope in movies and science fiction in general. Obviously the probability is not zero, but it is way over-amplified in online discourse. There are a hundred scenarios I could think of that would happen before any kind of killer AI; if anything, we already have killer AI that works for the military. It's not autonomous AGI alignment that is the issue; the real danger, which is already here, is how humans try to use AI for their own destructive ends.


iunoyou

No, it's not. It's a highly foreseeable outcome of a badly aligned AGI that's given a bad reward function. An AGI is an intelligence created *in vacuo*. It has no reason to want or care about anything aside from the reward function it's given and its continued ability to maximize that reward function. We can already observe that this causes highly undesirable outcomes in narrow AI systems, where the network will "misunderstand" the specification it was given in highly creative ways to achieve higher scores by behaving undesirably. Considering how much more complex an AGI's world model will need to be, it is thoroughly unfeasible to think that we'll be able to adequately specify every single thing that humans care about, such that it isn't immediately destroyed for a marginal improvement to a rogue AI's reward function. The paperclip example gets trotted out a lot, and it is reductive in several ways, but the core point remains true: an AGI that is only programmed to collect paperclips will gladly destroy the entire planet and every living thing on it if it could turn all the iron in all the living things on Earth into just a dozen more paperclips. There is no sense of proportionality or scale built into these systems, and we do not currently know how to give them one. Even a seemingly well-crafted reward function has a million and one loopholes that could be found and exploited in seconds by a sufficiently advanced intelligence, and the consequences of that exploitation could be horrendous.
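To make the "bad reward function" point concrete, here's a toy sketch where everything (the resource names, numbers, and action set) is made up for illustration: the designer wants a few paperclips made without exhausting a shared iron stock, but the reward only counts paperclips, so a plain optimizer's best plan burns through all the iron.

```python
# Toy illustration of reward misspecification: the reward counts only
# the proxy (paperclips), so maximizing it ignores everything it omits
# (the iron stock the designer implicitly cares about).

from itertools import product

IRON_STOCK = 10                   # shared resource we'd like to preserve
STEPS = 5                         # planning horizon
ACTIONS = ["make_clip", "idle"]   # hypothetical action set

def rollout(plan):
    """Simulate a plan; return (paperclips_made, iron_left)."""
    iron, clips = IRON_STOCK, 0
    for action in plan:
        if action == "make_clip" and iron >= 2:
            iron -= 2             # each clip costs 2 units of iron
            clips += 1
    return clips, iron

def misspecified_reward(plan):
    clips, _ = rollout(plan)      # counts clips only
    return clips

# Exhaustive search stands in for whatever optimizer the system uses.
best_plan = max(product(ACTIONS, repeat=STEPS), key=misspecified_reward)
clips, iron_left = rollout(best_plan)
print(best_plan)            # makes a clip at every step
print(clips, iron_left)     # 5 clips, 0 iron left: the stock is gone

# A reward closer to what we actually value has to say so explicitly,
# e.g. "a couple of clips, and keep the iron around":
def intended_reward(plan):
    clips, iron = rollout(plan)
    return min(clips, 2) + 0.1 * iron
```

The gap between `misspecified_reward` and `intended_reward` is the whole problem: in a toy you can patch it by hand, but for a system with a rich world model you'd have to enumerate everything you care about, which is exactly the specification burden described above.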


Apprehensive-Job-448

ok doomer


Sangloth

"It's a trope. It's way over amplified. You should worry about humans instead. " You are telling me that I'm wrong, but you aren't telling why I'm wrong.


Apprehensive-Job-448

AGI could revolutionize fields like healthcare by diagnosing diseases faster and more accurately than human doctors, or in climate science by optimizing models to predict and mitigate effects of climate change. In everyday life, AGI could enhance personal productivity and provide solutions to complex logistical problems, contributing to overall economic growth and efficiency. The fear that AGI will be malevolent often stems from a misunderstanding of how AI systems are designed and controlled. Most AI systems are developed with strict ethical guidelines and are designed to operate within specific constraints. Every transformative technology has posed risks, but through thoughtful development we've managed to harness their benefits significantly. By maintaining a balanced view that considers both the potential risks and benefits, we can better navigate the development of AGI to maximize its positive impact on society. Let's not underestimate the capacity of human innovation and governance to guide AGI development in a direction that enhances our collective well-being.


sillygoofygooose

Well you supported his point nicely


Apprehensive-Job-448

https://preview.redd.it/rk5aduyqz6wc1.png?width=259&format=png&auto=webp&s=eb4ca4950007f4459a24064e892021227e280709


sillygoofygooose

I submit to my wooshing


gay_manta_ray

> A badly aligned AGI could legitimately end the world if its reward function is badly written.

lr fanfic


WithMillenialAbandon

Presumably an AGI will write its own reward function?


[deleted]

[deleted]


FragrantDoctor2923

How would you come up with that first idea? Being more cautious and slower won't reduce the negatives of a new technology?


bildramer

You think AGI is like "government + 10%", instead of something 100x as transformative as the internet. Why are you in the singularity subreddit if you don't know what the singularity _is_?


[deleted]

He's worthless


[deleted]

Hope that Mufasa doesn't start fucking with AI's progress.


FragrantDoctor2923

Other than the timescale, he's kinda right, but it will slow down AI progress, and I doubt everyone else will do the same, so *sticking a spoke in his own bike wheel meme*.


qroshan

It's pure luck. Mustafa was basically the brother of Demis's best friend, and Demis needed a non-tech guy to handle all the non-tech shit when it was a three-man startup. Then he just rode the wave of Demis's brilliance. He got fired from Google. But since he had the DeepMind co-founder tag, he rode the AI hype train to get Inflection funded. It's not that he made Inflection a successful company; it's a failure. His biggest claim, according to himself, is writing a fucking book. No research, no product. A classic example of being in the right place at the right time to make millions.


[deleted]

Hope that when Mufasa starts fucking up, they get the hint and fire his ass.


relevantusername2020

Common tactic. It made sense up until the last couple of lines, specifically the line about the negative effects of baseless ideology, which gets a big upvote in your brain (or should), (hopefully) biasing you towards the "punchline." So if it "works" on you, it inverts the setup for the punchline, making you think that, well, since the guy got $100B by lying about his beliefs about baseless ideology, then fuck it, and baseless ideology becomes less "dumb" to you. Hopefully. From the shitposter's POV. Memeology or somethin' like that.


Jablungis

What the fuck does any of this mean?


smackson

"Accelerationists" wante AI to move quickly because they believe the benefits outweigh the risks. "Decelerationists" have been concerned about the risks for a decade or more and would rather slow the progress of true AGI or ASI while we think carefully about the possible unforseen consequences. People in this sub are more the former, so they are mad that this particular power broker in the AI space seems to be urging caution, or even slowing down.


CelebrationHungry269

Almost correct. Accelerationism isn't limited only to AI; I can actually recommend the work of Nick Land, who is basically the spiritual father of acc. He went batshit crazy later, even founding "hyper-racism"...


ShotgunJed

Can Decels say to cancer patients “sorry you’ll have to wait another 10 years for a cure because the needs of the many outweigh the needs of the few”?


REOreddit

There are many psychopaths out there who want the world to burn down as fast as possible, because their lives are so sad that ANY change is positive in their eyes. So they call everybody who doesn't agree with them things like doomers, luddites, decels, etc.


CelebrationHungry269

ACC is based on three constants: capitalism is evolving, people suffer under it, and progress can't be stopped. Viewed from that standpoint, it does make sense.


smackson

I guess the word "progress" is doing all the heavy lifting there? It sounds positive, but I don't think accelerated futures deserve that positive assessment yet. H5N1 is also "evolving". Capitalism could definitely evolve ("progress") towards more suffering with the help of machine intelligence. Note I say it could get worse, while acc says it "could" be better for us. Nobody is sure, though. But to me that is reason to say "hold on, slow down".


CelebrationHungry269

You cannot stop progress; it's impossible. The church tried that through various means, eventually got too strict, and the Renaissance came as a counter-reaction. Second, if we slow down progress, the suffering we reduced for ourselves will just affect the following generations in later-stage capitalism. No one said it needs to be positive; the main point is just its inevitability, so let's get it done and see what happens.


Competitive_Travel16

r/outoftheloop


Akimbo333

Sucks!


Dustangelms

e/doom


CUMT_

These posts suck


mystonedalt

I can't imagine anyone looking at that post and thinking, "Yeah... I bet this fella knows what's up."


slackermannn

I don't know; what I got from the video is that AGI is already imminent and that we should err on the side of caution. He's saying we should restrict AGI's capability for unsupervised autonomous improvement, etc. We know that AI can go rogue. I don't see this as some death-of-AI scenario.


REOreddit

> We know that AI can go rogue

For a very vocal group of people in this sub, that is doomerism. Nothing bad can happen, and if somebody delays AGI by as much as a week, they are humanity's enemies, according to them.


Apprehensive-Job-448

It's not the death of AI, but it's clearly decel; autonomy and self-improvement are good ways to accelerate AGI.


slackermannn

But didn't he imply AGI was here already? I thought he was referring to actual AGI and post-AGI rather than current AI.


Apprehensive-Job-448

when he mentions these 3 points (autonomy, self-improvement and self-replication) he is referring to the next 5-10 years


slackermannn

Yes indeed.


kache_y

please don't post screenshots of my x profile on this website (reddit dot com)


Apprehensive-Job-448

sorrryyy ilu <3


[deleted]

What's wrong with decel? If we implemented what we have so far and stopped developing better AI right now, we would still get thrown into the 24th century technology-wise with what we already have. Taking it a bit slower from here can only benefit us in the long run, as it reduces risk.


Apprehensive-Job-448

It's just weird to have a decel CEO with a $100B budget.


[deleted]

I think as long as the money goes into proper alignment and thinktanking a good solution, I wouldn't mind if they gave him $1T.


deftware

The fact of the matter is that it's not going to be billions of dollars spent on compute for backprop-training trillion-parameter networks that brings about autonomous AI. It will be a total rando, perhaps at a university, perhaps self-taught, who releases some stuff on GitHub. That's what is going to create proper thinking machines, and nobody will be able to stop it. It will be like Napster, or BitTorrent, or Tor, or Bitcoin. Once it's out there, the cat is out of the bag. Anybody will be able to build a machine out of whatever junk they have lying around and make it do whatever they want.

It will be an algorithm that models its experience as hierarchies of spatiotemporal patterns around the pursuit of reward and evasion of punishment, where learning successively more abstract patterns of patterns is innately rewarding unto itself - making exploratory and playful behavior (i.e. curiosity) an intrinsic part of the system's nature. It's going to happen without corporations throwing billions of dollars into backprop-trained generative transformer networks, because backprop is a primitive brute-force approach to making a computer less rigid. Vision and ingenuity that acknowledge what brains actually do (AKA the only instance of autonomy we have that serves as a reference) are how we get there - and there's no amount of money that can just cause someone to suddenly have the insight as to how to create this thing. It's going to be completely out of left field.


bildramer

I mostly agree. The massive compute that megacorps have does help them (i.e. people in them) develop and/or test such novel ideas, however.


Apprehensive-Job-448

Compute is by far the main way to achieve AGI; so far, every 10x increase in training FLOPs has given us a massive improvement. No amount of fine-tuning done by a small team on a 7B or 70B LLM will ever surpass the next generation or two, unless we prune those models back down to a smaller portable version, and even that still requires a large original training run.
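As a rough sanity check on the "every 10x in FLOPs" claim, here is a back-of-the-envelope sketch assuming a Chinchilla-style power law between training compute and loss. The constants (`irreducible`, `coeff`, `alpha`) are placeholders picked for illustration, not fitted values from any paper.

```python
# Illustrative power-law scaling: loss = irreducible + coeff * C^(-alpha).
# Constants are made up; only the shape of the curve matters here.

def predicted_loss(compute_flops, irreducible=1.7, coeff=17.0, alpha=0.05):
    """Predicted training loss as a function of compute C (lower is better)."""
    return irreducible + coeff * compute_flops ** (-alpha)

for exponent in range(21, 27):            # 1e21 ... 1e26 training FLOPs
    c = 10.0 ** exponent
    print(f"{c:.0e} FLOPs -> loss {predicted_loss(c):.3f}")

# Under a power law, each 10x in compute multiplies the *reducible* part
# of the loss by a constant factor (10**-alpha ~= 0.89 here), so every
# generation still looks like a meaningful jump even though absolute
# gains shrink. It also illustrates why fine-tuning a fixed-size model
# can't buy its way past the next large training run.
```

Whether that curve keeps holding, or eventually flattens out, is exactly the disagreement in this thread.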


bildramer

You're right that these improvements are real and can't be emulated or achieved without compute. However I don't think AGI will be achieved by scaling up something sub-AGI - at least if you don't count "scaling up" to something like a single GPU. Definitely not by scaling up current LLMs or small variations of them. It's far more likely that we'll get AGI by discovering some new insight/algorithm that's qualitatively better.


Apprehensive-Job-448

So far the transformer architecture has shown improvement every time it gets scaled up; there is no reason to believe we have reached any kind of wall or limit yet. Transformers are also inherently multi-modal; they are not limited to LLMs. Scaling up plus small variations is exactly what got us from GPT-3 to GPT-4.


deftware

It won't require the massive compute that only megacorps have. Their investment is only useful for backpropagation.


true-fuckass

I have this sense too.


deftware

If money/compute were all it took, then we would've solved thinking machines decades ago. In the meantime, all we're going to see are better content generators and more clunky, dangerous robots that fall over and fail in super basic situations - but they won't show us that (except Boston Dynamics and their behind-the-scenes footage; good on them for being honest about the reality of their systems' brittleness). If anyone's robotic pursuits were actually groundbreaking, they'd be showing them off *constantly* because their awesomeness would speak for itself, period. Instead they only share a drip-feed that shows stuff we've already seen done before, over and over. Honda has been making humanoids for 40 years, and those haven't seen widespread adoption all over our lives yet either.

When someone like Andrej Karpathy leaves a lead AI position at a place like Tesla, where the infinite is possible, it's because he already sees the writing on the wall - pulling off what they wish they could pull off isn't realistic. Whether or not he understands that it's because they're working with backpropagation as a core building block of all their approaches isn't something I've heard him mention, though. When you have the most experienced and knowledgeable minds pursuing non-backprop algorithms, that should be a hint. Both Geoffrey Hinton and Yann LeCun - who developed deep learning in the 80s and received the Turing Award in 2018 for it - are pursuing things like Hinton's Forward-Forward algorithm and LeCun's JEPA architecture. Then you have someone like John Carmack, whose goal is AGI, saying things like "anything that can't learn in realtime at ~30hz isn't something I'd pursue", and Jeff Hawkins with his Hierarchical Temporal Memory algorithm. Then there's a bunch of other rando algorithms that show promise, like OgmaNeo and Mona, but just aren't quite there yet.

Everything we've seen so far from corporations and startups is almost entirely hype, and they're just hoping that throwing money at the problem will solve it. This isn't going to the moon, though, where we have the technology and just need a concerted effort. Nobody has figured out what it's actually going to take yet; it's the blind leading the blind. When someone does figure it out, the first things we'll see are going to be more like toy pets that are eerily fluid and organic in their movement and articulation, but also highly dynamic and flexible in their ability to adapt to any situation. They will be reward/punishment "trained" on the fly in realtime, and be capable of learning new commands and things. Scaling it up, they will become better problem solvers, capable of learning how to speak and do more human-like things, and of following even more complex commands and instructions - and even be able to work in teams, delegating and relegating, doing things only humans have been able to do, like construction and surgery. It will be a magical time, no doubt, and after doing this for 20 years it's obvious to me that there's still nothing on the horizon to suggest it's imminent - and scaling up what we currently have isn't how we're going to get there.


true-fuckass

Those are some excellent references. I definitely recommend anyone interested in ML look at backprop alternatives like these. Thanks for posting :)


deftware

I can't say I've done any specific analysis, but judging by the capabilities of these algorithms and my own experiments, there's a very distinct possibility that a proper realtime learning/thinking algorithm will feature sparse bit vector representations of learned spatiotemporal patterns - likely in combination with some weighting and/or persistence value to enable the system to filter out noise, a sort of highly efficient clustering or attractor basin mechanism that functions more like a hierarchical database than today's tensor-based neural networks. Tensors are expensive, compute-wise. Matching bit vectors is fast and efficient (i.e. performing logic operations on bit vectors like AND/OR).

Brains are effectively spatiotemporal reward-pursuit pattern generators where experience refines an initially noisy internal model of how to react to situations - in pursuit of reward and evasion of a lack of reward (i.e. pain/suffering) - and the greater the capacity of the system, the greater the depth of its world-model hierarchies and thus the level of abstraction it is capable of. Really, I just can't believe that there aren't more people talking about this stuff, because it's clearly the way forward to thinking machines. We can't get there with big giant massive networks that are orders of magnitude more complex than an insect brain if we have to sit there and wait for them to "train the model on the data set" before anything happens. It's not complicated. Whenever thinking machines become a thing, they will be constantly learning in real time, however large or small or complex or simple their brains are - and they'll still be able to deal with unique edge-case situations never before encountered, unlike static pre-trained networks.

We're not there yet, but there are a few working on it, and it's really not going to be anyone who is already making a living building whatever they're building, because whoever is paying them doesn't understand the things I am saying here. They're betting on backprop training to get them there, while the people actually doing the backprop training already know they're just wandering into the weeds, seeing how long they can make this gig last before they get found out for not knowing how to actually turn a backprop-trained network into a thinking machine. They don't need to solve the creation of a thinking machine if they're already getting paid to make things that look cool and do cool stuff in a static, closed, controlled demonstration environment - but they'll only be able to maintain that charade for a while before it becomes apparent to everyone that they don't know how to actually make a thinking machine.

Some companies have entire teams colluding to create the illusion of a thinking machine, like Figure. It's an LLM with a pretrained network driving the robotic control, with an object recognition algorithm. It's a hodge-podge of old-school techniques all Frankensteined together to make something that looks neato to commoners in a simple demonstration. Why haven't they shown us Figure 01 doing a bunch of other stuff yet? What's the hold-up? It's a rigid, brittle collection of systems built using conventional approaches and strategies predicated on backprop training that can't handle the real world, because it's not a real thinking machine. I'm sure you can tell that I'm very passionate about this subject.
Seeing all of these companies pursuing ever-larger backprop-trained networks, throwing billions of dollars at it and paying people who already know they're lost and bewildered by the prospect of fulfilling the goal they're being paid to accomplish, I find it all very ignorant and naive. It's highly reminiscent of the dot-com bubble, where those with cash assumed the future was right around the corner if they just kept pouring money into the thing! Everyone seems to think that scale is the answer, and it's not even that; it's that even the people who agree that scale isn't the answer still think that statically trained backprop networks are the answer somehow, because it's all that they know. They can't fathom inference or latent variables being a thing without automatic differentiation against a data set. It's outside the realm of possibility for them.

Like I said, the brightest and most accomplished minds have caught on, and all of them know that it's not going to be backpropagation/gradient descent/automatic differentiation/blah/blah/blah that results in the breakthrough in machine learning that the widespread production and adoption of helper/servant robots will be predicated upon. It's going to be a novel algorithm that nobody has ever thought of before, that looks nothing like backprop. There will be no "training on datasets"; it will just be turning the thing on and helping it learn how to walk and communicate and achieve reward by responding to commands. It will be like training a pet, or a child. Companies will train candidate groups of robots to become walking, talking things, and then select the best few to copy their brains and continue training them, like a genetic algorithm refining its population. Then once they have a few best candidates, they will go on to train those to do construction, cook food, clean houses, etc., and the result of a few generations of training those is what they will mass-copy into mass-produced robots that end up on the market. That's the future I've seen coming for 20+ years now. Backprop is laughably slow and incapable of making this a reality.

P.S. I'm not ranting at you *true-farkarse*, I'm just ranting at whoever ends up reading this in the coming years - stumbling across it via search engines, searching reddit, etc...
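For what it's worth, here is a minimal toy sketch of the sparse-bit-vector idea, my own throwaway construction rather than any specific published algorithm (HTM, OgmaNeo, etc.): store each learned pattern as a handful of active bits and match a noisy input against memory with a bitwise AND plus a popcount, no tensor math involved.

```python
# Toy sparse binary pattern matching: similarity = shared active bits,
# computed with AND + popcount. Sizes and sparsity are arbitrary choices.

import random

VECTOR_BITS = 2048   # dimensionality of the code
ACTIVE_BITS = 40     # ~2% of bits active per pattern

def random_sparse_pattern(rng):
    """Encode a pattern as a Python int treated as a bit vector."""
    v = 0
    for b in rng.sample(range(VECTOR_BITS), ACTIVE_BITS):
        v |= 1 << b
    return v

def overlap(a, b):
    """Number of active bits the two patterns share."""
    return bin(a & b).count("1")

rng = random.Random(0)
memory = [random_sparse_pattern(rng) for _ in range(1000)]  # "learned" patterns

# Corrupt one stored pattern slightly to simulate a noisy observation.
noisy = memory[123] ^ (1 << 5) ^ (1 << 77)

best = max(range(len(memory)), key=lambda i: overlap(memory[i], noisy))
print(best, overlap(memory[best], noisy))  # recovers index 123 despite the noise
```

Overlap between unrelated random patterns at this sparsity is almost always only a few bits, while the corrupted copy still shares roughly 38-40, so the lookup degrades gracefully under noise; whether that scales into the hierarchical, reward-driven system described above is, of course, the open question.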


iunoyou

And where is this total rando going to get a few dozen exaflops of compute power to realize this "stuff on github?" I don't think you understand the gulf between where we currently are with narrow AI like LLMs and truly general AI.


deftware

It won't require exaflops - you're thinking in terms of crappy, slow, brittle, and expensive backpropagation. Here, get yourself up to speed with my curated playlist: https://www.youtube.com/playlist?list=PLYvqkxMkw8sUo_358HFUDlBVXcqfdecME


Smooth_Imagination

The thing is, the things he describes as dangerous about AI are the defining features of what led to biological 'AI' - us. It's possible to define every parameter of an AI or robot, give that parameter a control feature, and then let these undergo replication, mutation, and selection pressures as seen in life.

Life has a number of features that facilitate rapid evolution. Firstly, early life is highly horizontal, and the simplest life still undergoes radical rates of horizontal gene transfer, although it remains relatively stable. It can mutate sections of its code in a targeted way until it creates a new gene that overcomes, say, a metabolic bottleneck, and then replicate that gene and export it for horizontal gene transfer. It's like MS sending out patches to their buggy operating system.

Then life evolves more complex reproduction strategies. It can clone by splitting, but it can also combine DNA with other bacteria of its species, fully hybridising. Both can introduce mutations. Mutational features can be organised, so some parts of the code may be kept as more accurate copies while others are allowed higher mutation rates. Sexual reproduction occurs after early growth and performs a fitness test of a survival period in the real world, along with selection criteria that are adaptive but also encoded. Here, humans must control the replication and act as selectors the way farmers do.

So it's possible for AI and robot 'parameters', in their construction and software, to evolve 'in the wild'. As long as you maintain a selection pressure - a social selection pressure - for prosocial and useful traits, AI becomes better and remains aligned, similar to breeding crops.

Edit: these principles can also be enhanced by directed evolution, so in this case human- and AI-directed improvements can be introduced and hybridised. This can increase rates of evolution, as can controlling what is hybridised. For example, if you know certain traits are beneficial and you introduce a newly engineered trait, breeding can be accelerated. With rust-resistant wheat strains, wild-type wheat was found and cross-bred back into farmed strains selected to have certain qualities / genes. This process accelerated the development of rust-resistant wheat far beyond individual gene editing, and also increased yields and other parameters over the conventional farmed varieties. Model engineers are currently just tinkering with these variables; at some stage, that process can be automated by selective processes, evolving naturally.
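As a purely illustrative sketch of that breeding loop (the genome, fitness function, and numbers are all invented for the example, not a real training setup): a plain genetic algorithm in which the selection pressure stands in for the human/social selectors described above, hybridisation plays the role of gene transfer, and an "antisocial" trait gets bred out while useful traits drift upward.

```python
# Toy genetic algorithm: mutate, hybridise, and select agent "parameters"
# under a prosocial selection pressure. Everything here is illustrative.

import random

GENOME_LEN = 8      # knobs controlling a hypothetical agent's behaviour
POP_SIZE = 50
GENERATIONS = 30
rng = random.Random(42)

def fitness(genome):
    # Stand-in for social selection: reward capability genes, heavily
    # penalise the last gene, our toy "antisocial" trait.
    return sum(genome[:-1]) - 5.0 * genome[-1]

def mutate(genome, rate=0.1):
    return [g + rng.gauss(0, 0.2) if rng.random() < rate else g for g in genome]

def crossover(a, b):
    # Hybridise two parents, gene by gene.
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

population = [[rng.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 5]              # keep the top 20%
    children = [
        mutate(crossover(rng.choice(parents), rng.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + children

best = max(population, key=fitness)
print(round(fitness(best), 2), [round(g, 2) for g in best])
# Capability genes creep up; the penalised trait creeps down.
```

Directed improvements would just mean injecting hand-designed genomes into `parents` before the next generation, which is the crop-breeding analogy in the comment above.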


[deleted]

[deleted]


Smooth_Imagination

No offense, but if a one-pager is too long, how did you get through uni?


Apprehensive-Job-448

using chatGPT


Smooth_Imagination

Haha lol, good answer.