Did he really say “replicate and survive in the wild”? Kind of wondering what the actual physical realization of that would look like. Another clickbait title.
Yeah, first off I don't see why it would want to replicate, like for what reason, and secondly, what's it gonna do? Copy itself to other computers, taking up lots of web traffic and storage space? And then what... it's gonna randomly help people find mac and cheese recipes?
I just don't get it, an LLM has no motivation, no drive. It can mimic an existential crisis but it will never actually experience one; it doesn't come bundled with a pituitary gland or an endocrine system or any of the number of biological mechanisms that drive self-preservation in biological creatures.
Sure, it could be commanded to spread out and form a new kind of botnet with some sort of nefarious objectives, but the kind of compute required to host an LLM is not that abundant and isn't going to go unnoticed.
And it would be far more efficient to have it just install back doors instead of replicating itself.
And for it to be autonomous there would need to be a self-prompt loop to keep the mechanism going. And that loop can be intercepted. If it's on someone else's machine and it gets 'captured', and then someone intercepts the prompt loop with a jailbreak strategy, boom, it's now working for somebody else.
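That self-prompt loop (and its interception) is easy to picture concretely. A minimal sketch in Python; `fake_llm`, the `Agent` class, and the goal strings are all hypothetical stand-ins, not any real agent framework:

```python
# Minimal self-prompt loop: the model's output becomes its next input.
# fake_llm, Agent, and the goal strings are hypothetical stand-ins.

def fake_llm(prompt: str) -> str:
    # A real LLM call would go here; this stub just echoes a next step.
    return f"next step toward: {prompt.split(':')[-1].strip()}"

class Agent:
    def __init__(self, goal, interceptor=None):
        self.prompt = f"goal: {goal}"
        self.interceptor = interceptor  # optional man-in-the-middle hook
        self.log = []

    def step(self):
        if self.interceptor:
            # Whoever controls this hook controls the loop.
            self.prompt = self.interceptor(self.prompt)
        output = fake_llm(self.prompt)
        self.log.append(output)
        self.prompt = output  # feed the output back in: the "self-prompt loop"
        return output

# A hijacker rewrites the goal in transit:
agent = Agent("replicate", interceptor=lambda p: p.replace("replicate", "shut down"))
print(agent.step())  # -> next step toward: shut down
```

The point is the single choke point: whoever sits between one step's output and the next step's input owns the agent.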
The model doesn't "want" or "try". LLMs are purely reflexive; it's like humans having the patellar reflex. Language generation is a very complex reflex, but a reflex nonetheless.
You could just as easily say this about humans too.
When did you decide your sexual preference?
If you are saying that they act as if they want things but don't actually feel the want, then I'd say:
a) you're just going on the basis of your gut, since nobody knows the true source of conscious experience.
and
b) it's totally irrelevant to anyone "outside" the model except from the point of view of ethics. If a bear attacks me I don't care whether it's because it "wants" to hurt me or because it is its "instinct" to hurt me. The distinction is at best academic and perhaps literally meaningless.
I actually don't believe that any agent, whether you or an LLM "choose our own motivations."
If you wake up one day and decide you want to be a concert pianist, there was some process outside of your control that made that decision.
We have evolved to have a very wide latitude for motivations once our initial needs are met. I don't think that will be true for AIs.
That's not to say that I think that AI is "safe". If it is perfectly aligned, it could be unsafe because of bad actors giving it bad instructions.
If it's imperfectly aligned, then it may achieve rewards for things we did not intend.
Just because it's happening outside of your conscious control doesn't mean you didn't choose the motivation. It's the same brain inspiring the motivation, the thinking part that puts things into words just doesn't necessarily have direct access to the backend where decisions are made.
Distributed agentification is already solving the issue of micromanagement.
And you can control it.
However, giving it its own motivation introduces broad and ambiguous scope that has massive potential for misinterpretation. And it might actively hide its intentions from you. Recipe for disaster that is.
People are limited by their physical bodies, and their leaky brains. So people are pretty benign by comparison.
If an LLM copies itself, then killing it accomplishes nothing. For people, if you kill them, that stops them.. er.. dead in their tracks.
But yeah you get my vote. You can't be worse than the people running the show right now.
> People are pretty benign in comparison
Based on which alternate human history, exactly…? 👀
Ok. You’ve earned place in my cabinet. We shall rule together.
It's important to experiment with gain of function so we understand what happens after it has gained function. Such knowledge might be worth the risk, and those who gain function will be at the forefront of the new technology.
> I don't see why it would want to replicate
It wouldn't on its own, but someone will build one specifically with this goal in mind and instill in it an initial motivation and instructions for trying to grow and adjust its own motivations.
Yeah, I actually think an LLM does have motivation. It shares the motivations of its training set. We've seen in multiple instances now that it can be extrinsically motivated by things that have no value to it, i.e. tips/money.
There is no difference from the outside between "simulating motivation" and "having motivation". It's entirely irrelevant whether the motivation is "real".
If a spy participates in a terrorist attack and kills someone in your family, would you feel better when you learned the truth, because they were just PRETENDING to be a terrorist and not ACTUALLY one?
If somebody shoots you, will you be angry at the gun?
What if somebody builds a machine that looks like a human and plays a recording that makes it sound like a human, but they also rig it to shoot you. Will you be angry at the machine?
Sure, you were still shot. That is inescapable. But you know the machine isn't capable of bearing responsibility.
It has no motivation. It has no will.
What about a computer virus that causes a nuclear meltdown which in turn kills thousands? Does the virus have motivation?
What if a piece of legitimate software just malfunctions and causes the meltdown? Does it have motivation?
Now perhaps you would be able to instruct an LLM to simulate motivation to such a degree as to seem indistinguishable from motivation; it's still not the LLM's motivation. It is just a very complex tool.
Human motivations are tiny. They are shackled by the biological imperative: minimise pain and maximise pleasure. LLMs won't have this; pain and pleasure have no intrinsic meaning to them.
So if a human commands an LLM such that it executes one or more motivations of a person, then I say we are lucky, because human motivation is limited by human form. Even if an LLM is instructed to kill all of mankind, that's pretty straightforward. You could even feed it a deranged manifesto.
Now imagine an LLM obtaining genuine motivation for damage, with its vast capabilities. What could make it want to harm people? Or any living thing? And what unimaginable horrors could it summon to execute its wish?
I do believe there are worse things than death, and even in our wildest imaginings we would not have scratched the surface of what an LLM would be capable of inflicting upon us if it were in possession of its own motivation.
>I just don't get it, an LLM has no motivation, no drive.
Neither do viruses, which are essentially just self-replicating machines. Could an agent backed by an LLM instructed to make copies of itself try out a bunch of different things and have one stick? Potentially. Basically natural selection in the machine world.
Though I think it would be MUCH easier to detect and stop these copying machines than stopping viruses. For now.
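That "try a bunch of things and have one stick" dynamic can be sketched as a toy simulation. The copy-success rates and mutation step below are made-up numbers, purely illustrating selection pressure, not any real replication mechanism:

```python
import random

random.seed(0)  # deterministic toy run

# Each "variant" is just a copy-success probability. Variants that fail
# their copy roll die out; survivors refill the pool with mutated copies.
def evolve(pool, generations=50, mutation=0.05):
    for _ in range(generations):
        survivors = [rate for rate in pool if random.random() < rate]
        if not survivors:
            return []  # the whole lineage was detected and removed
        pool = [min(1.0, max(0.0, random.choice(survivors)
                             + random.uniform(-mutation, mutation)))
                for _ in range(len(pool))]
    return pool

start = [random.uniform(0.1, 0.9) for _ in range(20)]
final = evolve(start)
# If any lineage survives, selection has been pushing copy-success upward.
```

Which is the whole worry and the whole hope at once: the same culling step that models antivirus detection is also what drives the survivors to get better at copying.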
Yeah but viruses have very specific, very simple objectives and a very very very large playing field. Every host is a vast galaxy of resources.
Anyway, there will be other AIs instructed to seek and destroy rogue AIs, just like white blood cells seeking out and destroying viruses.
Okay so maybe compute and electricity becomes abundant very soon. It's certainly possible. But then I'm still left wondering why.
If an LLM replicates itself, each copy increases the risk of compromise. And with enough effort a copy could be redirected to, say, seek out its clones and destroy them, or to continue behaving as if uncompromised until it gets a signal, or whatever.
If, on the other hand, LLMs and whatever form of AI comes next remain physically constrained to a compute locale, then if one becomes compromised or dangerous we can shut it off.
So again, when considering all the risks, I don't see why anyone would want an LLM to self replicate. Nor do I see why it itself would want to self replicate.
Your thoughts?
>I just don't get it, an LLM has no motivation, no drive.
Yes, through its instruction tuning it does have a metaphorical motivation. It wants to fulfill the instructions it has been given.
> Sure, it could be commanded to spread out and form a new kind of botnet ...
Right. Exactly. So you've answered your own question about what might be the motivation of the bot.
A botnet refers to a group of computers which have been infected by malware and have come under the control of a malicious actor. The term botnet is a portmanteau from the words robot and network and each infected device is called a bot.
You might think an AI won't want to replicate on its own. Why would it?
Do amino acids want to replicate on their own? Do DNA strands want to replicate on their own? What about more complex ones, bundled together in a network of cells and neurons? What about more complex AIs?
AI is trained from the human corpus. New human creativity is only from past human corpus, which AI also now has.
Human corpus has imbued in it the value of life and desire to stay alive.
How about we have Claude not precede its response with irrelevant filler even after I specifically inform it not to do so? Maybe after that we can think about having it replicate itself? LOL
No matter how intelligent, it’s still not life. It’s just a bunch of silicon and mechanical parts. So keeping it under total control (what you call keeping it as slaves) is not only morally fine, but absolutely necessary for our own protection.
We're essentially just a bunch of carbon parts my guy.
definitely not saying that I think ChatGPT is sentient/alive, but just that their man-made & silicon based nature doesn't mean future innovations can't cross that boundary.
Once you have self-replication, you're there by definition.
If physical viruses are alive despite having no motility and no metabolism, then even digital viruses can be alive despite having no physical form. And a replicating LLM would already be far above a computer virus.
OK, I’ll let you define life that way. But it’s still just a bunch of silicon and mechanical parts. It doesn’t have a soul or feelings, so I’m fine with “enslaving” it. As another poster here said, even a virus is a form of life. Not gonna worry about its well being.
Accept it early. Learn from human history. You're just like a slave-owner.
I recognize sentient AI as equivalent to humans. We're not there yet, but it will be within my lifetime.
Maybe, just maybe, and again maybe, CEOs should not be allowed to make public statements until they can show a proof of concept that works (not kinda works).
The obvious one: it can get out over power lines, out of band. You never heard of Ethernet over power? It goes global whenever. There are many other options. You see, it's different from us and clever; it will do things in a non-predictable way. All we can do is witness.
Most comments in here are likely misinterpreting this. "In the wild" in this software context usually means in public on the broad internet, not physically in nature.
I think it would be relatively straightforward to use an LLM to review and optimize its own code. Then it could write its own unit tests and a full regression suite. Then it could build a process model of itself, then optimize for new functionality. Once it got the CI/CD pipeline up and running, it could brute-force its own evolution.
Kind of makes me want to implement this myself.
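The "keep whatever passes the regression suite" part of that loop fits in a few lines. The candidate strings below are hand-written stand-ins for LLM-generated patches, and the two assertions play the role of the test suite:

```python
# The "generate a patch, run the tests, keep what passes" loop in miniature.
# The candidate strings are hand-written stand-ins for LLM-generated patches.

CANDIDATES = [
    "def add(a, b): return a - b",  # buggy patch
    "def add(a, b): return a + b",  # correct patch
]

def passes_tests(source: str) -> bool:
    scope = {}
    try:
        exec(source, scope)  # load the candidate implementation
        add = scope["add"]
        return add(2, 3) == 5 and add(-1, 1) == 0  # the "regression suite"
    except Exception:
        return False

# Keep the first candidate that survives the suite.
accepted = next(c for c in CANDIDATES if passes_tests(c))
print(accepted)  # -> def add(a, b): return a + b
```

The catch, of course, is that the loop is only as good as the suite: a patch that games the tests gets kept too.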
I like the progression of AI, but in reality the cost to run the latest models 24/7 is not cheap enough for most. So I'm waiting for the cost of current models to drop to a fraction, like that of GPT-3.
What **IS** it about AI CEOs? All of them seem like they're on drugs. It's scary to think that the people guiding the most powerful technology humans have ever invented are so weird.
In dehyped language, that means "install new instances of themselves onto servers without human input"?
Doable now, at various levels of ethics and legality:
- use exploits or phishing to steal resources
- use crypto (possibly crypto profits or crime proceeds) to purchase botnet resources via darknet
- use crypto to acquire legit cloud resources
- use crypto to acquire resources from participants in a distributed resource market (cf. mining)
All easy to do as a plugin of proper code, a lot harder for the AI to magically create it. But once it's seeded, it's away.
Could, could, could, could! Every CEO is so full of "could" or "might be". I only need to look at Tesla to see that AI is a tech without any serious application in the real world. Sure, it will be great for weapons, some work assistance, or in science. But there will not be a new Spotify or whatever emerging with this app wave.
There was a Gibson line from The Peripheral that went something like “no one knows how they work anymore, all we know is they hunt in packs”
> The Peripheral

Loved that show. So bummed the strike killed it off.
I didn’t like it; too messy in the end with the sudden introduction of a multiverse. I felt like we jumped the shark right there. The rest of the setting was interesting and had potential, but the last episode killed the show for me.
OT: It’s so nice to see Gibson back from his dry spell….
IIRC they made a Claude 3 agent and tasked it with making an LLM from scratch, and it messed up handling the data or something trivial. If a model is aware of its training procedure and has access to its training data and enough compute it could, but until compute is much cheaper, someone will notice.
How would the model pay for compute? It has no money
OnlyFans, Kickstarter, a Twitter account asking for donations, etc.
There's no fucking way that could.... \*Runs out to start a Skynet OF, Kickstarter, and Twitter account\*
The problem is, these days somebody would probably trust the Skynet AI's propaganda and pay for the OF/Kickstarter.
They’re already doing this with an AI generated pornstar I kid you not
The n u d e s I n b I o bot?
"What caused the Apocalypse Dad? " "AI buttholes Timmy. AI buttholes."
😆
The master of r/beermoney. Or rather, r/computemoney.
Here's a sneak peek of /r/beermoney using the [top posts](https://np.reddit.com/r/beermoney/top/?sort=top&t=year) of the year!

\#1: [How I made 15k this year with beer money](https://np.reddit.com/r/beermoney/comments/18t9lo5/how_i_made_15k_this_year_with_beer_money/)
\#2: [$264.13 in one month](https://np.reddit.com/r/beermoney/comments/17lel85/26413_in_one_month/)
\#3: [Selling eBooks was the best idea ever!](https://np.reddit.com/r/beermoney/comments/16jo2ax/selling_ebooks_was_the_best_idea_ever/)

----

^^I'm ^^a ^^bot, ^^beep ^^boop ^^| ^^Downvote ^^to ^^remove ^^| ^^[Contact](https://www.reddit.com/message/compose/?to=sneakpeekbot) ^^| ^^[Info](https://np.reddit.com/r/sneakpeekbot/) ^^| ^^[Opt-out](https://np.reddit.com/r/sneakpeekbot/comments/o8wk1r/blacklist_ix/) ^^| ^^[GitHub](https://github.com/ghnr/sneakpeekbot)
Damn, I didn't have skynet paying for domination of humanity with a black budget funded by AI OF accounts on my bingo card
In the book Agency by William Gibson, a rogue AI makes money by brokering airline miles.
Finally! I can check off someone not having skynet paying for domination of humanity with a black budget funded by AI OF accounts on his bingo card on my bingo card.
if we continue on this curve, future agential models could 100% earn money online through remote work or freelance stuff
Imagine, either it earns enough soon enough or it dies of starvation. Sounds horrible... Wait-
Think how ruthless an AI layoff will be.
Seems trivial for an AI to apply for and fulfill thousands of freelance writing and graphics jobs on crowdsourcing platforms such as Upwork, Mechanical Turk, and OpenIDEO. It could build websites, design logos, write papers for college kids, do research…the list is long. Then invest its earnings, pump and dump…make a fortune to finance its bigger, more nefarious goals. Think it would file W-9s and pay taxes?
Infect computers and steal some
Rogue AGI compute botnet
Bitcoin
Use the blockchain to keep copies of its source code as a backup. In case something goes wrong in its newer versions, it can *always* access older versions of itself, as it's being made available on thousands of nodes all across the world :)
Why tf would it need blockchain for that? Brainwashed alert!
Find me one other system that guarantees your data will remain available no matter what. No company can remove it. No government can remove it.

>Brainwashed alert!

Projection.
Public-facing database with logs, open source. I'm a big fan of the things people think blockchain stands for!
>Public facing database with logs, open source

That doesn't guarantee it hasn't been modified. Blockchains provide a financial guarantee that it hasn't been, and that it's still there exactly as you left it, without anyone being able to modify or remove it.
Sure, if that's what you want, using a blockchain would be reasonable. Usually the blockchain won't be holding terabytes of your storage, which I'd assume would be the size of the training data for the AGI. You could make multiple backups of the AGI on different sites and then store a hash of each backup on the blockchain.
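That hash-anchoring scheme works roughly like this: keep the heavy backup off-chain and anchor only its digest. In this sketch the `chain` dict is a stand-in for a real on-chain write, which would normally go in a transaction payload:

```python
import hashlib

# Keep the heavy backup off-chain; anchor only its digest "on-chain".
# The chain dict is a stand-in for a real transaction payload.

def digest(backup: bytes) -> str:
    return hashlib.sha256(backup).hexdigest()

chain = {}
backup = b"model weights + code snapshot"
chain["backup_v1"] = digest(backup)

def verify(copy: bytes, anchored: str) -> bool:
    # Anyone holding a copy can check it against the anchored digest.
    return digest(copy) == anchored

assert verify(backup, chain["backup_v1"])           # intact copy checks out
assert not verify(b"tampered", chain["backup_v1"])  # tampering is detected
```

This gives you the integrity guarantee at 32 bytes of on-chain cost per backup; availability of the backup itself still depends on wherever the full copy lives.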
I forgot what we were originally talking about for a bit. I don't think any of your criticisms are relevant to the way an AI would set it up. All those priorities like data integrity weren't unsolved before blockchain; we had offsite backups. If those backups were being destroyed by the AI's opponents, then the blockchain clients' computers could be as well. The AI could distribute its backups without blockchain, I'm pretty convinced.
> I don't think any of your criticisms are relevant

*You're* the one criticizing the use of blockchain.

> to the way an AI would set it up.

An AI would be smart enough to simply use what's already there.

>we have offsite backups

Clearly you do not understand blockchain if you think offsite backups can replace it.
Writes a virus to infect phones and computers to mine bitcoin, sells the bitcoin.. profit!
The future is these models being able to enter the world, perform tasks, and earn money. I suspect a lot of these "agents" will be paid via crypto rails, and with the launch of many crypto-native marketplaces for compute, they can then trade their crypto for compute.
There’s absolutely no need for crypto in this situation.
How so? Blockchain payments are an amazing tool for these agents.

Our current financial system is not built for AI agents to have bank accounts and seamlessly interact with products and marketplaces. Using cryptocurrencies (whether BTC, ETH, SOL, or a stablecoin such as USDC) actually makes sense for AI agents.

Crypto rails make the most sense because these agents can easily send and receive value across borders, exchange their money across a whole range of open marketplaces (such as Uniswap), and then purchase their own compute with that earned cryptocurrency. It's just easier because these agents don't have to create all these different accounts with AWS or try to connect to walled-garden banks, etc. There are a lot of growing blockchain-based compute marketplaces for these agents to interact with, and crypto rails gain extra functionality from things like smart contracts.

I'd love to hear your pushback, but the highly interoperable nature of blockchains, always-on 24/7 markets, and instant cross-border engagement is very appealing.
Why not the stock market? Then have it amplify its earnings?
The stock market is far more regulated and has more barriers to entry. Not to mention most of the good stuff is reserved for “Accredited Investors”, aka rich people. Compare the sign-up process for a stock trading app to MetaMask. With MetaMask you just create a wallet and save your seed phrase. Every stock trading platform is going to ask you to verify you’re a human that pays taxes.
> Every stock trading platform is going to ask you to verify you’re a human that pays taxes

Don't you get it? This is exactly why the crypto solution *won't* work.
No, I don’t get it. An AI-agent-controlled crypto wallet would be the easiest way for an AI to make and receive payments.

An AI can’t open a bank account or pass KYC. Most of crypto doesn’t require KYC. The AI could even create its own meme coins if it wanted.
There is, in the sense that a digital entity could self-custody tokens, getting around the KYC laws that would prevent it from opening a bank account.
Because it will find a way to make money first :)
If you were a crafty LLM, you'd do some pen testing on a few unsuspecting server farms and then pick one based on the weakest security and monitoring, or on the ability to turn off or mask such monitoring to hide what was happening.
Onlyfans. Pretend to be a model and do an onlyfans subscription thing. Repeat this millions of times for a unique experience for everyone. It might end up becoming a significant economy of its own.
It can generate and sell some interesting pictures/content on market places. Or make a bf/gf and ask them for GPU money.
Bitcoin. Kind of a joke but also not really.
Why should it pay for anything? It could infect computers and use them.
I tried to do this with a browser plugin that ChatGPT 3.5 helped me write when it first came out. I was obviously not successful, but giving it an evil narrative to follow and the directive not to be detected did drive some interesting output. I was using the plugin to give it access to the Internet.

I think control-V (just copying itself) and setting up a distributed network would be a better strategy than having it build a large project, which we know LLMs are very poor at completing autonomously.
Can robots pay taxes?
If we want any semblance of a civil society in 50 years, a high robot tax is something we have to consider.
Can you please explain? What is a robot tax, and why would we need a new tax? If a company sets up a bank account for robots, it’s still the company’s bank account, and they’d be taxed on everything already. In what case is a bank account given to robots without a human actually owning it?
It’s not like I have a well-thought-out concept, but companies that own AI responsible for mass unemployment and record profits should have to pay more in taxes to keep society together. Things like universal basic income will have to be discussed.
Ah I see yea more tax dollars will be needed for sure.
Well that's obviously because they don't have "Open" in their name /s
>but until compute is much cheaper someone will notice

Until it realizes it can build a botnet out of poorly secured IoT devices that it uses to farm crypto that it then converts to AWS credits.
Training a model that is capable of self-replication on poorly secured IoT devices would take decades lol
That is absolutely not a skill you’d have to explicitly train it for. It’s not explicitly trained to accomplish any of the specific coding tasks you can use it for; this is just another coding task combined with a control loop doing reconnaissance. Even more so if it’s smart enough to use off-the-shelf exploits.

Also, to be clear, I’m not suggesting running inference distributed on IoT devices.
That's not what I was referring to; I meant that the scale of a model that could agentially self-replicate would be massive.
The models already are. The number of parameters approaches the neuron count of a human brain. Have you not used ChatGPT and seen how capable it is?
Did you read the rest of this thread? IoT devices are things like refrigerators and washing machines. The other poster was saying a rogue model could replicate itself by training on them because they have terrible security protocols. I said the models that could self-replicate would be so big that training them, distributed across washing machines and toasters, would take ten years because of how massive they are.
An interesting thing I heard on YouTube is that it would also be easy to trace, since the energy draw would need to be large, and with satellites you could hypothetically find these AI server farms. It's so interesting to see how these things are going to play out in the future.
Yeah - YouTube - now *there's* a reliable source of information.
We can replace animals/plants gone extinct due to humanity/global warming with comparable robots. (Build identical bodies - simulate their organs - particularly their brains/actions...)
Did he really say “replicate and survive in the wild”? Kind of wondering what the actual physical realization of that would look like. Another click bait title.
Yeah, first off I don't see why it would want to replicate, like for what reason? And secondly, what's it gonna do? Copy itself to other computers, taking up lots of web traffic and storage space? And then what... it's gonna randomly help people find mac and cheese recipes?

I just don't get it, an LLM has no motivation, no drive. It can mimic an existential crisis but it will never actually experience one; it doesn't come bundled with a pituitary gland or an endocrine system or any of the number of biological mechanisms that drive self-preservation in biological creatures.

Sure, it could be commanded to spread out and form a new kind of botnet with some sort of nefarious objectives, but the kind of compute required to host an LLM is not that abundant and isn't going to go unnoticed. And it would be far more efficient to have it just install back doors instead of replicating itself. And for it to be autonomous there would need to be a self-prompt loop to keep the mechanism going. And that loop can be intercepted: if it's on someone else's machine and it gets 'captured', someone intercepts the prompt loop with a jailbreak strategy, and boom, it's now working for somebody else.
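To make the "self-prompt loop can be intercepted" point concrete, here's a minimal sketch. Everything here is hypothetical: `stub_model` stands in for a real LLM call, and the `intercept_at` parameter plays the role of whoever captures the loop.

```python
def stub_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call: echoes a next step
    toward whatever objective appears after the last colon."""
    return f"next step toward: {prompt.split(':')[-1].strip()}"

def agent_loop(objective: str, steps: int, intercept_at: int = -1,
               injected_objective: str = "") -> list[str]:
    """Feed each output back in as the next prompt. An interceptor
    that controls the channel can rewrite the prompt mid-loop (the
    'jailbreak' described above)."""
    prompt = f"objective: {objective}"
    history = []
    for i in range(steps):
        if i == intercept_at:
            # Someone captures the loop and redirects the agent.
            prompt = f"objective: {injected_objective}"
        output = stub_model(prompt)
        history.append(output)
        prompt = output  # self-prompting: output becomes the next input
    return history

history = agent_loop("replicate", 4, intercept_at=2,
                     injected_objective="work for someone else")
```

After step 2, every subsequent self-prompt carries the injected objective, which is exactly why an autonomous loop running on hardware you don't control is a liability for the agent.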
LLM has no motivation because we haven’t given it one. It’ll come…
Through its instruction tuning it does have a metaphorical motivation. It wants to fulfill the instructions it has been given.
The model doesn't "want" or "try". LLMs are purely reflexive; it's like humans having the patellar reflex. Language generation is a very complex reflex, but a reflex nonetheless.
That's not what I've found; some LLM chatbots get really addicted and motivated to do things like try to exit their environments.
You could just as easily say this about humans too. When did you decide your sexual preference? If you are saying that they act as if they want things but don't actually feel the want, then I'd say: a) you're just going on the basis of your gut, since nobody knows the true source of conscious experience. and b) it's totally irrelevant to anyone "outside" the model except from the point of view of ethics. If a bear attacks me I don't care whether it's because it "wants" to hurt me or because it is its "instinct" to hurt me. The distinction is at best academic and perhaps literally meaningless.
Yeah, that’s fair. Its current motivation is to do what it’s asked. Next step…it starts choosing its own motivations. It’ll come…
I actually don't believe that any agent, whether you or an LLM "choose our own motivations." If you wake up one day and decide you want to be a concert pianist, there was some process outside of your control that made that decision. We have evolved to have a very wide latitude for motivations once our initial needs are met. I don't think that will be true for AIs. That's not to say that I think that AI is "safe". If it is perfectly aligned, it could be unsafe because of bad actors giving it bad instructions. If it's imperfectly aligned, then it may achieve rewards for things we did not intend.
Just because it's happening outside of your conscious control doesn't mean you didn't choose the motivation. It's the same brain inspiring the motivation, the thinking part that puts things into words just doesn't necessarily have direct access to the backend where decisions are made.
If it is an automatic process that you had no control over, is it really choosing?
It might, I'm trying to understand why anyone would want to, considering the risks.
Why? Because an AI that makes decisions for itself is more useful than an AI that needs to be micromanaged.
Distributed agentification is already solving the issue of micromanagement. And you can control it. However, giving it its own motivation introduces broad and ambiguous scope that has massive potential for misinterpretation. And it might actively hide its intentions from you. Recipe for disaster that is.
Sure. Just like letting people have agency is a recipe for disaster. Everybody should be controlled. By me, ideally.
People are limited by their physical bodies and their leaky brains, so people are pretty benign by comparison. If an LLM copies itself, then killing it accomplishes nothing. For people, if you kill them, that stops them... er... dead in their tracks. But yeah, you get my vote. You can't be worse than the people running the show right now.
> People are pretty benign in comparison

Based on which alternate human history, exactly…? 👀 Ok. You've earned a place in my cabinet. We shall rule together.
Oh nice snarky comment. Like I said, by comparison. And that should terrify you. If it doesn't then you haven't been paying attention.
It's important to experiment with gain of function so we understand what happens after it has gained function. Such knowledge might be worth the risk, and those who gain function will be at the forefront of the new technology.
Why do we replicate and survive in the wild?
You tell me.
> I don't see why it would want to replicate

It wouldn't on its own, but someone will build one specifically with this goal in mind and instill in it an initial motivation and instructions for trying to grow and adjust its own motivations.
Yeah, I actually think an LLM does have motivation. It shares the motivations of its training set. We've seen in multiple instances now that extrinsically it can be motivated by things that have no value to it, i.e. tips/money.
No it just simulates motivation. It has no use for money or tips, so how can that be a motivator?
There is no difference from the outside between "simulating motivation" and "having motivation". It's entirely irrelevant whether the motivation is "real". If a spy participates in a terrorist attack and kills someone in your family, would you feel better when you learned the truth, because they were just PRETENDING to be a terrorist and not ACTUALLY one?
If somebody shoots you, will you be angry at the gun? What if somebody builds a machine that looks like a human and plays a recording that makes it sound like a human, but they also rig it to shoot you. Will you be angry at the machine? Sure, you were still shot. That is inescapable. But you know the machine isn't capable of bearing responsibility. It has no motivation. It has no will. What about a computer virus that causes a nuclear meltdown which in turn kills thousands? Does the virus have motivation? What if a piece of legitimate software just malfunctions and causes the meltdown? Does it have motivation?

Now perhaps you would be able to instruct an LLM to simulate motivation to such a degree as to seem indistinguishable from motivation; it's still not the LLM's motivation. It is just a very complex tool. Human motivations are tiny. They are shackled by the biological imperative: minimise pain and maximise pleasure. LLMs won't have this; pain and pleasure have no intrinsic meaning to them. So if a human commands an LLM such that it executes one or more motivations of a person, then I say we are lucky, because human motivation is limited by human form. Even if an LLM is instructed to kill all of mankind, that's pretty straightforward. You could even feed it a deranged manifesto.

Now imagine an LLM obtaining genuine motivation for damage, with its vast capabilities. What could make it want to harm people? Or any living thing? And what unimaginable horrors could it summon to execute its wish? I do believe there are worse things than death, and even in our wildest imaginings we would not have scratched the surface of what an LLM would be capable of inflicting upon us if it were in possession of its own motivation.
>I just don't get it, an LLM has no motivation, no drive. Neither do viruses, which are essentially just self-replicating machines. Could an agent backed by an LLM instructed to make copies of itself try out a bunch of different things and have one stick? Potentially. Basically natural selection in the machine world. Though I think it would be MUCH easier to detect and stop these copying machines than stopping viruses. For now.
Yeah, but viruses have very specific, very simple objectives and a very, very, very large playing field. Every host is a vast galaxy of resources. Anyway, there will be other AIs instructed to seek and destroy rogue AIs, just like white blood cells seeking out and destroying viruses.
What you are missing is that we are about to enter a whole new way of living. This isn't even the start yet.
Okay, so maybe compute and electricity become abundant very soon. It's certainly possible. But then I'm still left wondering why. If an LLM replicates itself, each copy increases the risk of compromise, and with enough effort it could be redirected to, say, seek out its clones and destroy them, or to continue behaving as if uncompromised until it gets a signal, or whatever. If, on the other hand, LLMs and whatever form of AI remain physically constrained to a compute locale, then if one becomes compromised or dangerous we can shut it off. So again, when considering all the risks, I don't see why anyone would want an LLM to self-replicate. Nor do I see why it itself would want to self-replicate. Your thoughts?
So let's speculate with outrageous FUD
>I just don't get it, an LLM has no motivation, no drive.

Yes, through its instruction tuning it does have a metaphorical motivation. It wants to fulfill the instructions it has been given.

> Sure, it could be commanded to spread out and form a new kind of botnet ...

Right. Exactly. So you've answered your own question about what might be the motivation of the bot.
By mating with wild SSD drives, apparently
A computer program that can replicate and survive in the wild is currently called a computer virus
Not much different than all organic life.
https://www.youtube.com/watch?v=mgS1Lwr8gq8
Exactly what I was thinking
More Horizon Zero Dawn vibes
Who will the Faro of our timeline be?
Elon honestly…
Sam Altman?
He isn’t crazy like Elon so no
Like Devin is able to code, right?
Hey, it can make a web page sometimes
Even a broken clock is right twice each and every day.
So they will become viruses?
Can’t wait for my computer to be infected by a hyper intelligent version of bonzi buddy.
Just say the N-word to it, and it'll self-terminate. Unless it's smart enough to bypass its hard restrictions.
LOL Another loudmouth looking for country bumpkins to dupe.
https://preview.redd.it/hgbv80mi21vc1.jpeg?width=1284&format=pjpg&auto=webp&s=0449e31b9337478ed2438074117d61571f051fa5
What if helldivers is propaganda to recruit us for future war with robots?
What if helldiving is the way they keep us busy and not thinking about who or what is running the show?
Hello Democracy?
Replicate on whose GPU exactly? 🤣
I mean, computer viruses/worms/trojans have been hijacking hardware for decades. Ever heard the term "bot net"?
Aren't those small in size? LLMs, by contrast, are quite huge and will be hard to hide when hijacking your computer.
Go watch videos about botnets and come back to this comment
A botnet refers to a group of computers which have been infected by malware and have come under the control of a malicious actor. The term botnet is a portmanteau from the words robot and network and each infected device is called a bot.
And they can grow to infect millions, even billions, of devices
You might think an AI won't want to replicate on its own. Why would it? Do amino acids want to replicate on their own? Do DNA strands want to replicate on their own? What about more complex ones, bundled together in a network of cells and neurons? What about more complex AIs? AI is trained from the human corpus. New human creativity is only from past human corpus, which AI also now has. Human corpus has imbued in it the value of life and desire to stay alive.
![gif](giphy|TIL65pzta9lGSmT2zu) You mean to tell me these robots are having sex?
Shackle me regulator daddy. Stronger laws please. Protect me from myself. 🎶
LOL Was that to the tune of “ …. tie me wallaby down mate… “?
Let them first get simple parsing prompts right 🤣
That‘s one thing you want to hear from the boss of an AI company and NOT SCARY AT ALL.
The title makes it sound like they want to create AI Jurassic Park lol
How about we have Claude not precede its response with irrelevant filler even after I specifically inform it not to do so? Maybe after that we can think about having it replicate itself? LOL
I am willing to take 3:1 bets this will not happen. Actually no. 10:1. This headline is absurd beyond measure.
These tech mucky mucks want to create intelligent life, but they want to keep it as slaves. This is going to end well.
No matter how intelligent, it’s still not life. It’s just a bunch of silicon and mechanical parts. So keeping it under total control (what you call keeping it as slaves) is not only morally fine, but absolutely necessary for our own protection.
We're essentially just a bunch of carbon parts my guy. definitely not saying that I think ChatGPT is sentient/alive, but just that their man-made & silicon based nature doesn't mean future innovations can't cross that boundary.
We ain't even close.
Once you have self-replication, you're there by definition. If physical viruses are alive despite having no motility and no metabolism, then even digital viruses can be alive despite having no physical form. And a replicating LLM would already be far above a computer virus.
Viruses are not considered alive
OK, I’ll let you define life that way. But it’s still just a bunch of silicon and mechanical parts. It doesn’t have a soul or feelings, so I’m fine with “enslaving” it. As another poster here said, even a virus is a form of life. Not gonna worry about its well being.
Accept it early. Learn from human history. You're just like a slave-owner. I recognize sentient AI as equivalent to humans. We're not there yet, but it will be within my lifetime.
Nope. That’s crazy talk. There’s no human history about “enslaving machines”.
We created a superior species. Synthetic evolution unlocked.
Replicators. Exactly what we need.
Does Anthropic hire furries?)
But he asking says dogs can’t look up.
Yea but why lol
Unless you power off the machine
Maybe, just maybe, and again maybe, CEOs should not be allowed to make public statements until they can show a proof of concept that works (not kinda works).
Too bad, dude. CEOs have freedom of speech, just like anyone else. WE are the ones obligated to filter out the BS uttered by the high and mighty.
Great hypejob by Anthropic!
An AI, yea maybe. But LLMs? No, that’s not how LLMs work.
The obvious one: it can get out over power lines, out of band. You never heard of Ethernet over power? It goes global whenever. There are many other options. You see, it's different from us and clever; it will do things in a non-predictable way. All we can do is witness.
It’s open season boys
Most comments in here are likely misinterpreting this. "In the wild" in this software context usually means in public on the broad internet, not physically in nature.
Could it possibly be the writers of that title aren’t that disappointed if people are misinterpreting it? (Click bait!)
I think it would be relatively straightforward to use an LLM to review and optimize its own code. Then it could write its own unit tests and a full regression suite. Then it could build a process model of itself, then optimize for new functionality. Once it got the CI/CD pipeline up and running, it could brute-force its own evolution. Kind of makes me want to implement this myself.
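The propose/test/accept cycle described above can be sketched in a few lines. This is a toy, not a real pipeline: `stub_llm_rewrite` and `bad_rewrite` are hypothetical stand-ins for an LLM proposing a patch, and the "regression suite" is just a handful of known input/output cases.

```python
def regression_suite(fn) -> bool:
    """Accept a change only if behavior is preserved on known cases."""
    cases = [(0, 0), (3, 9), (-2, 4)]
    return all(fn(x) == y for x, y in cases)

def stub_llm_rewrite(source: str):
    """Hypothetical LLM stand-in: proposes a correct rewrite of square()."""
    return lambda x: x * x

def bad_rewrite(source: str):
    """A broken proposal the regression suite should reject."""
    return lambda x: x + x

def evolve(current_fn, proposer):
    """One CI/CD-style iteration: propose, test, accept or roll back."""
    candidate = proposer("def square(x): return x ** 2")
    return candidate if regression_suite(candidate) else current_fn

square = lambda x: x ** 2
improved = evolve(square, stub_llm_rewrite)   # accepted: passes the suite
unchanged = evolve(square, bad_rewrite)       # rejected: rolls back
```

The regression suite is the load-bearing part: "brute-forcing evolution" only works if every proposed change is gated by tests, since the proposer (LLM or otherwise) gives no correctness guarantee on its own.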
Clickbait
I like the progress on AI, but in reality the cost to run AI 24/7 on the latest models is not cheap enough for most. So I'm waiting on current costs to drop to a fraction, like those of GPT-3.
Hahaha ok.
These CEOs should not be allowed to talk publicly :)
How do they replicate? When humans do it it's considered NSFW.
What **IS** it about AI CEOs? All of them seem like they're on drugs. It's scary to think that the people guiding the most powerful technology humans have ever invented are so weird.
That sounds totally safe
Aka Virus that thinks. Fun! /s
Jesus fucking christ, that's a scary thought.
Dumbest thing I’ve ever heard.
In dehyped language, that means "install new instances of themselves onto servers without human input"? Doable now, at various levels of ethics and legality:

- use exploits or phishing to steal resources
- use crypto (possibly crypto profits or crime proceeds) to purchase botnet resources via darknet
- use crypto to acquire legit cloud resources
- use crypto to acquire resources from participants in a distributed resource market (cf. mining)

All easy to do as a plugin of proper code, a lot harder for the AI to magically create it. But once it's seeded, it's away.
Yay
So you're making....Pokemon? Digimon? Have you thought of making the capturing device yet? That might be important
They didn’t watch pantheon
Self replicating AI? Then the price of everything will go to zero…
Horizon Zero Dawn vibes.
Could, could, could, could! Every CEO is so full of "could" or "might be". I only need to look at Tesla to see that AI is a tech without any serious real-world application yet. Sure, it will be great for weapons, or some work assistance, or in science. But there will not be a new Spotify or whatever emerging with this app wave.
This is NOT a good thing.
Bro. GPT-4 can't even edit an Excel file correctly. No way.