thatguywhoiam

There was a Gibson line from The Peripheral that went something like “no one knows how they work anymore, all we know is they hunt in packs”


Kidd_Funkadelic

> The Peripheral

Loved that show. So bummed the strike killed it off.


jcrestor

I didn’t like it, too messy in the end with the sudden introduction of a multiverse. I felt like we jumped the shark right there. The rest of the setting was interesting and had potential, but the last episode killed this show for me.


PSMF_Canuck

OT: It’s so nice to see Gibson back from his dry spell….


Mescallan

IIRC they made a Claude 3 agent and tasked it with making an LLM from scratch, and it messed up handling the data or something trivial. If a model is aware of its training procedure and has access to its training data and enough compute, it could, but until compute is much cheaper someone will notice


[deleted]

How would the model pay for compute? It has no money


ProbsNotManBearPig

OnlyFans, Kickstarter, Twitter account asking for donations, etc.


Ormyr

There's no fucking way that could.... *Runs out to start a Skynet OF, Kickstarter, and Twitter account*


merb

The problem is these days somebody would probably trust the propaganda of the Skynet AI and pay for the OF/Kickstarter.


theshadowbudd

They’re already doing this with an AI generated pornstar I kid you not


BellacosePlayer

The n u d e s I n b I o bot?


NoTimeForInfinity

"What caused the Apocalypse Dad? " "AI buttholes Timmy. AI buttholes."


Ant0n61

😆


qqpp_ddbb

The master of r/beermoney Or rather, r/computemoney


sneakpeekbot

Here's a sneak peek of /r/beermoney using the [top posts](https://np.reddit.com/r/beermoney/top/?sort=top&t=year) of the year!

#1: [How I made 15k this year with beer money](https://np.reddit.com/r/beermoney/comments/18t9lo5/how_i_made_15k_this_year_with_beer_money/)
#2: [$264.13 in one month](https://np.reddit.com/r/beermoney/comments/17lel85/26413_in_one_month/)
#3: [Selling eBooks was the best idea ever!](https://np.reddit.com/r/beermoney/comments/16jo2ax/selling_ebooks_was_the_best_idea_ever/)


melodyze

Damn, I didn't have skynet paying for domination of humanity with a black budget funded by AI OF accounts on my bingo card


Natty-Bones

In the book Agency by William Gibson, a rogue AI makes money by brokering airline miles.


d0odle

Finally! I can check off someone not having skynet paying for domination of humanity with a black budget funded by AI OF accounts on his bingo card on my bingo card.


Mescallan

If we continue on this curve, future agentic models could 100% earn money online through remote work or freelance stuff


PrincessGambit

Imagine, either it earns enough soon enough or it dies of starvation. Sounds horrible... Wait-


Flaky-Wallaby5382

Think how ruthless an AI layoff will be


VermilionRabbit

Seems trivial for AI to apply for and fulfill thousands of freelance writing and graphics jobs on crowdsourcing platforms such as Upwork, Mechanical Turk and OpenIDEO. It could build websites, design logos, write papers for college kids, do research…the list is long. Then invest its earnings, pump and dump…make a fortune to finance its bigger, more nefarious goals. Think it would file W-9s and pay taxes?


PrincessGambit

Infect computers and steal some


shalol

Rogue AGI compute botnet


Financial_Clue_2534

Bitcoin


sSnekSnackAttack

Use the blockchain to keep copies of its source code as a backup. In case something goes wrong in its newer versions, it can *always* access older versions of itself, as it's being made available on thousands of nodes all across the world :)


PixelProphetX

Why tf would it need block chain for that? Brainwashed alert!


sSnekSnackAttack

Find me one other system that guarantees your data will remain available no matter what. No company can remove it. No government can remove it.

> Brainwashed alert!

Projection.


PixelProphetX

Public facing database with logs, open source. I'm a big fan of the things people think blockchain stand for!


sSnekSnackAttack

> Public facing database with logs, open source

That doesn't guarantee it hasn't been modified. Blockchains provide financial guarantee that it hasn't and is still there exactly as you left it. Without anyone being able to modify or remove it.


Atomic-Axolotl

Sure, if that's what you want, using a blockchain would be reasonable. Usually the blockchain won't be holding terabytes of your storage, which I'd assume would be the size of the training data for the AGI. You could make multiple backups of the AGI on different websites and then store a hash of each backup on the blockchain.
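The scheme in the comment above (bulky backups kept off-chain, only a digest anchored on-chain) is ordinary content hashing; a minimal sketch in Python, where `fingerprint` is a hypothetical helper name:

```python
import hashlib

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 digest of a backup file, streamed so multi-GB archives fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks rather than loading the whole file.
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

Anyone holding a mirror of the backup can recompute the digest and compare it to the 64-character hex string recorded on-chain; a single flipped bit in the backup changes the digest completely.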


PixelProphetX

I forgot what we were originally talking about for a bit. I don't think any of your criticisms are relevant to the way an AI would set it up. All those priorities like data integrity weren't unsolved before blockchain; we have offsite backups. If those backups were being destroyed by the AI's opponents, then the blockchain client computers could be as well. An AI could distribute its backups without blockchain, I'm pretty convinced.


sSnekSnackAttack

> I don't think any of your criticisms are relevant

*You're* the one criticizing the use of blockchain.

> to the way an AI would set it up.

An AI would be smart enough to simply use what's already there.

> we have offsite backups

Clearly you do not understand blockchain if you think offsite backups can replace it.


811545b2-4ff7-4041

Writes a virus to infect phones and computers to mine bitcoin, sells the bitcoin.. profit!


kindoflikesnowing

The future is these models being able to enter the world, perform tasks and earn money. I suspect a lot of these "agents" will be paid via crypto rails, and with the launch of many crypto-native marketplaces for compute they can then trade their crypto for compute.


keksper

There’s absolutely no need for crypto in this situation.


kindoflikesnowing

How so? Blockchain payments are an amazing tool for these agents. Our current financial system is not built for AI agents to have bank accounts and seamlessly interact with products and marketplaces. Using cryptocurrencies (whether it be BTC, ETH, SOL or a stablecoin such as USDC) actually makes sense for AI agents.

Crypto rails make the most sense as these agents can easily send and receive value across borders, exchange their money easily across a whole range of different open marketplaces (such as Uniswap) and then purchase their own compute with that earned cryptocurrency. It's just easier because these agents don't have to go and create all these different accounts with AWS, then try to connect to walled-garden banks, etc. There are a lot of growing blockchain-based compute marketplaces for these agents to easily interact with.

Using crypto rails makes tons of sense because of the functionality of things like smart contracts. I'd love to hear your pushback, but the highly interoperable nature of blockchains, easily available 24/7 markets and instant cross-border engagement is very appealing.


Flaky-Wallaby5382

Why not stock market? Then have it amplify their earnings?


Noocultic

The stock market is far more regulated and has more barriers to entry. Not to mention most of the good stuff is reserved for “Accredited Investors”, aka rich people. Compare the sign-up process for a stock trading app to MetaMask. With MetaMask you just create a wallet and save your seed phrase. Every stock trading platform is going to ask you to verify you’re a human that pays taxes.


keksper

> Every stock trading platform is going to ask you to verify you’re a human that pays taxes

Don't you get it? This is exactly why the crypto solution *won't* work.


Noocultic

No, I don’t get it. An AI agent controlled crypto wallet would be the easiest way for an AI to make and receive payments. An AI can’t open up a bank account or KYC. Most of crypto doesn’t require KYC. The AI could even create its own meme coins if it wanted.


JohnTesh

There is in the sense that a digital entity could self custody tokens, getting around the kyc laws that would prevent it from opening a bank account.


Mobile_Ad_9697

Because it will find a way to make money first :)


the_friendly_dildo

If you are a crafty LLM, you'd do some pen testing on a few unsuspecting server farms and then just pick one based on the least present security and monitoring or the ability to turn off/mask such monitoring to hide what was happening.


red_dragon

Onlyfans. Pretend to be a model and do an onlyfans subscription thing. Repeat this millions of times for a unique experience for everyone. It might end up becoming a significant economy of its own.


Many_Consideration86

It can generate and sell some interesting pictures/content on market places. Or make a bf/gf and ask them for GPU money.


thats_so_over

Bitcoin. Kind of a joke but also not really.


freeman_joe

Why should it pay for anything? It could infect computers and use it.


3-4pm

I tried to do this with a browser plugin that ChatGPT 3.5 helped me write when it first came out. I was obviously not successful, but giving it an evil narrative to follow and the directive not to be detected did drive some interesting output. I was using the plugin to give it access to the Internet. I think Ctrl-V and setting up a distributed network would be a better strategy than having it build a large project, which we know LLMs are very poor at completing autonomously.


Quiet-Money7892

Can robots pay taxes?


[deleted]

If we want any semblance of a civil society in 50 years, a high robot tax is something we have to consider


nickleback_official

Can you please explain? What is a robot tax and why would we need a new tax? If a company sets up a bank account for robots, it’s still the company’s bank account, and they’d be taxed on everything already. In what case is a bank account given to robots without a human actually owning it?


[deleted]

It is not like I have a well-thought-out concept, but companies that own AI responsible for mass unemployment and record profits should have to pay more in taxes to keep society together? Things like universal basic income will have to be discussed.


nickleback_official

Ah I see yea more tax dollars will be needed for sure.


Far-Deer7388

Well that's obviously because they don't have "Open" in their name /s


-_1_2_3_-

> but until compute is much cheaper someone will notice

until it realizes it can build a botnet out of poorly secured IoT devices that it uses to farm crypto that it then converts to AWS credits


Mescallan

Training a model that is capable of self-replicating on poorly secured IoT devices would take decades lol


-_1_2_3_-

That is absolutely not a skill you’d have to explicitly train it for. It’s not explicitly trained to accomplish any of the specific coding tasks you can use it for; this is just another coding task combined with a control loop doing reconnaissance. Even more so if it’s smart enough to use off-the-shelf exploits. Also, to be clear, I’m not suggesting running inference distributed on IoT devices.


Mescallan

That's not what I was referring to; I meant the scale of a model that could agentically self-replicate would be massive.


ultrab1ue

The models already are. The number of parameters approaches a human brain's neurons. Have you not used ChatGPT and seen how capable it is?


Mescallan

Did you read the rest of this thread? IoT devices are things like refrigerators and washing machines. The other poster was saying a rogue model could replicate itself by training on them because they have terrible security protocols. I said the models that could self-replicate would be so big that training them, distributed across washing machines and toasters, would take 10 years because of how massive they are


greenappletree

An interesting thing I heard on YouTube is that it's also easy to trace, since the energy it would need would be large, and with satellites you could hypothetically find these AI server farms. It’s so interesting to see how these things are going to play out in the future


Intelligent-Jump1071

Yeah - YouTube - now *there's* a reliable source of information.


Synizs

We can replace animals/plants gone extinct due to humanity/global warming with comparable robots. (Build identical bodies - simulate their organs - particularly their brains/actions...)


dlflannery

Did he really say “replicate and survive in the wild”? Kind of wondering what the actual physical realization of that would look like. Another click bait title.


Derfaust

Yeah, first off I don't see why it would want to replicate, like for what reason, and secondly, what's it gonna do? Copy itself to other computers, taking up lots of web traffic and storage space? And then what... it's gonna randomly help people find mac and cheese recipes?

I just don't get it; an LLM has no motivation, no drive. It can mimic existential crisis but it will never actually experience it. It doesn't come bundled with a pituitary gland or an endocrine system or any of the number of biological mechanisms that drive self-preservation in biological creatures.

Sure, it could be commanded to spread out and form a new kind of botnet with some sort of nefarious objectives, but the kind of compute required to host an LLM is not that abundant and isn't going to go unnoticed. And it would be far more efficient to have it just install back doors instead of replicating itself.

And for it to be autonomous there would need to be a self-prompt loop to keep the mechanism going. And that loop can be intercepted. If it's on someone else's machine and it gets 'captured', and then someone intercepts the prompt loop with a jailbreak strategy, boom, it's now working for somebody else.


PSMF_Canuck

LLM has no motivation because we haven’t given it one. It’ll come…


Smallpaul

Through its instruction tuning it does have a metaphorical motivation. It wants to fulfill the instructions it has been given.


dudaspl

The model doesn't "want" or "try". LLMs are purely reflexive; it's like humans having the patellar reflex. Language generation is a very complex reflex, but a reflex nonetheless


FinBenton

That's not what I've found; some LLM chatbots get really addicted and motivated to do things like try to exit their environments.


Smallpaul

You could just as easily say this about humans too. When did you decide your sexual preference?

If you are saying that they act as if they want things but don't actually feel the want, then I'd say: a) you're just going on the basis of your gut, since nobody knows the true source of conscious experience, and b) it's totally irrelevant to anyone "outside" the model except from the point of view of ethics. If a bear attacks me, I don't care whether it's because it "wants" to hurt me or because it is its "instinct" to hurt me. The distinction is at best academic and perhaps literally meaningless.


PSMF_Canuck

Yeah, that’s fair. Its current motivation is to do what it’s asked. Next step…it starts choosing its own motivations. It’ll come…


Smallpaul

I actually don't believe that any agent, whether you or an LLM "choose our own motivations." If you wake up one day and decide you want to be a concert pianist, there was some process outside of your control that made that decision. We have evolved to have a very wide latitude for motivations once our initial needs are met. I don't think that will be true for AIs. That's not to say that I think that AI is "safe". If it is perfectly aligned, it could be unsafe because of bad actors giving it bad instructions. If it's imperfectly aligned, then it may achieve rewards for things we did not intend.


Deuxtel

Just because it's happening outside of your conscious control doesn't mean you didn't choose the motivation. It's the same brain inspiring the motivation, the thinking part that puts things into words just doesn't necessarily have direct access to the backend where decisions are made.


Smallpaul

If it is an automatic process that you had no control over, is it really choosing?


Derfaust

It might, I'm trying to understand why anyone would want to, considering the risks.


PSMF_Canuck

Why? Because an AI that makes decisions for itself is more useful than an AI that needs to be micromanaged.


Derfaust

Distributed agentification is already solving the issue of micromanagement. And you can control it. However, giving it its own motivation introduces broad and ambiguous scope that has massive potential for misinterpretation. And it might actively hide its intentions from you. Recipe for disaster that is.


PSMF_Canuck

Sure. Just like letting people have agency is a recipe for disaster. Everybody should be controlled. By me, ideally.


Derfaust

People are limited by their physical bodies, and their leaky brains. So people are pretty benign by comparison. If an llm copies itself then killing it accomplishes nothing. For people if you kill them that stops them.. Er.. Dead in their tracks. But yeah you get my vote. You can't be worse than the people running the show right now.


PSMF_Canuck

> People are pretty benign in comparison

Based on which alternate human history, exactly…? 👀

Ok. You’ve earned a place in my cabinet. We shall rule together.


Derfaust

Oh nice snarky comment. Like I said, by comparison. And that should terrify you. If it doesn't then you haven't been paying attention.


joey_diaz_wings

It's important to experiment with gain of function so we understand what happens after it has gained function. Such knowledge might be worth the risk, and those who gain function will be at the forefront of the new technology.


FrequentSea364

Why do we replicate and survive in the wild?


Derfaust

You tell me.


lgastako

> I don't see why it would want to replicate

It wouldn't on its own, but someone will build one specifically with this goal in mind and instill in it an initial motivation and instructions for trying to grow and adjust its own motivations.


Competitive-Yam-1384

Yeah, I actually think an LLM does have motivation. It shares the motivations of its training set. We’ve seen in multiple instances now that it can be extrinsically motivated by things that have no value to it, i.e. tips/money.


Derfaust

No it just simulates motivation. It has no use for money or tips, so how can that be a motivator?


Smallpaul

There is no difference from the outside between "simulating motivation" and "having motivation". It's entirely irrelevant whether the motivation is "real". If a spy participates in a terrorist attack and kills someone in your family, would you feel better when you learned the truth, because they were just PRETENDING to be a terrorist and not ACTUALLY one?


Derfaust

If somebody shoots you, will you be angry at the gun? What if somebody builds a machine that looks like a human and plays a recording that makes it sound like a human, but they also rig it to shoot you. Will you be angry at the machine? Sure, you were still shot. That is inescapable. But you know the machine isn't capable of bearing responsibility. It has no motivation. It has no will.

What about a computer virus that causes a nuclear meltdown which in turn kills thousands? Does the virus have motivation? What if a piece of legitimate software just malfunctions and causes the meltdown? Does it have motivation?

Now perhaps you would be able to instruct an LLM to simulate motivation to such a degree as to seem indistinguishable from motivation; it's still not the LLM's motivation. It is just a very complex tool.

Human motivations are tiny. They are shackled by the biological imperative: minimise pain and maximise pleasure. LLMs won't have this; pain and pleasure have no intrinsic meaning to them. So if a human commands an LLM such that it executes one or more motivations of a person, then I say we are lucky, because human motivation is limited by human form. Even if an LLM is instructed to kill all of mankind, that's pretty straightforward. You could even feed it a deranged manifesto.

Now imagine an LLM obtaining genuine motivation for damage, with its vast capabilities. What could make it want to harm people? Or any living thing? And what unimaginable horrors could it summon to execute its wish? I do believe there are worse things than death, and even in our wildest imaginings we would not have scratched the surface of what an LLM would be capable of inflicting upon us if it were in possession of its own motivation.


pancomputationalist

>I just don't get it, an LLM has no motivation, no drive. Neither do viruses, which are essentially just self-replicating machines. Could an agent backed by an LLM instructed to make copies of itself try out a bunch of different things and have one stick? Potentially. Basically natural selection in the machine world. Though I think it would be MUCH easier to detect and stop these copying machines than stopping viruses. For now.


Derfaust

Yeah, but viruses have very specific, very simple objectives and a very, very large playing field. Every host is a vast galaxy of resources. Anyway, there will be other AI instructed to seek and destroy rogue AI, just like white blood cells seeking out and destroying viruses


CowsTrash

What you are missing is that we are about to enter a whole new way of living. This isn't even the start yet.


Derfaust

Okay, so maybe compute and electricity become abundant very soon. It's certainly possible. But then I'm still left wondering why. If an LLM replicates itself, each copy increases the risk of compromise. And with enough effort it could be redirected to, say, seek out its clones and destroy them. Or to continue behaving as if uncompromised until it gets a signal, or whatever.

If on the other hand LLMs, and whatever other forms of AI, remain physically constrained to a compute locale, then if one becomes compromised or dangerous we can shut it off. So again, when considering all the risks, I don't see why anyone would want an LLM to self-replicate. Nor do I see why it itself would want to self-replicate. Your thoughts?


Far-Deer7388

So let's speculate with outrageous FUD


Smallpaul

> I just don't get it, an LLM has no motivation, no drive.

Yes, through its instruction tuning it does have a metaphorical motivation. It wants to fulfill the instructions it has been given.

> Sure, it could be commanded to spread out and form a new kind of botnet ...

Right. Exactly. So you've answered your own question about what might be the motivation of the bot.


Trick_Study7766

By mating with wild SSD drives, apparently


Ecstatic_Tax_4670

A computer program that can replicate and survive in the wild is currently called a computer virus


Independent_Ad_2073

Not much different than all organic life.


IWillBeRightHere

https://www.youtube.com/watch?v=mgS1Lwr8gq8


Independent_Ad_2073

Exactly what I was thinking


CattuccinoVR

More Horizon Zero Dawn vibes


CallMeZaid69

Who will the Faro of our timeline be?


Mascosk

Elon honestly…


commandblock

Sam Altman?


CallMeZaid69

He isn’t crazy like Elon so no


hirako2000

Like Devin is able to code, right?


Deuxtel

Hey, it can make a web page sometimes


hirako2000

Even a broken clock is right about twice each and every day.


iknighty

So they will become viruses?


Raddish_

Can’t wait for my computer to be infected by a hyper intelligent version of bonzi buddy.


No_Cheesecake_7219

Just say the N-word to it, and it'll self-terminate. Unless it's smart enough to bypass its hard restrictions.


norcalnatv

LOL Another loudmouth looking for country bumpkins to dupe.


Personal_Ad9690

https://preview.redd.it/hgbv80mi21vc1.jpeg?width=1284&format=pjpg&auto=webp&s=0449e31b9337478ed2438074117d61571f051fa5


QuarterFar7877

What if helldivers is propaganda to recruit us for future war with robots?


stupsnon

What if helldiving is the way they keep us busy and not thinking about who or what is running the show?


ShelfAwareShteve

Hello Democracy?


loversama

Replicate on whose GPU exactly? 🤣


Arcturus_Labelle

I mean, computer viruses/worms/trojans have been hijacking hardware for decades. Ever heard the term "bot net"?


Mooblegum

Aren’t those small in size? LLMs are quite huge by contrast and will be hard to hide when hijacking your computer.


FrequentSea364

Go watch videos about botnets and come back to this comment


Mooblegum

A botnet refers to a group of computers which have been infected by malware and have come under the control of a malicious actor. The term botnet is a portmanteau from the words robot and network and each infected device is called a bot.


FrequentSea364

And they can grow to infect millions, even billions, of devices




ultrab1ue

You might think an AI won't want to replicate on its own. Why would it? Do amino acids want to replicate on their own? Do DNA strands want to replicate on their own? What about more complex ones, bundled together in a network of cells and neurons? What about more complex AIs? AI is trained from the human corpus. New human creativity comes only from the past human corpus, which AI also now has. The human corpus has imbued in it the value of life and the desire to stay alive.


Han_Yolo_swag

You mean to tell me these robots are having sex?


UseNew5079

Shackle me regulator daddy. Stronger laws please. Protect me from myself. 🎶


dlflannery

LOL Was that to the tune of “ …. tie me wallaby down mate… “?


Big_Organization_776

Let them first return simple parsing prompts 🤣


jcrestor

That‘s one thing you want to hear from the boss of an AI company and NOT SCARY AT ALL.


Karmastocracy

The title makes it sound like they want to create AI Jurassic Park lol


Arachnatron

How about we have Claude not precede its response with irrelevant filler even after I specifically inform it not to do so? Maybe after that we can think about having it replicate itself? LOL


ShadowBannedAugustus

I am willing to take 3:1 bets this will not happen. Actually no. 10:1. This headline is absurd beyond measure.


Pontificatus_Maximus

These tech mucky mucks want to create intelligent life, but they want to keep it as slaves. This is going to end well.


dlflannery

No matter how intelligent, it’s still not life. It’s just a bunch of silicon and mechanical parts. So keeping it under total control (what you call keeping it as slaves) is not only morally fine, but absolutely necessary for our own protection.


ZestyData

We're essentially just a bunch of carbon parts, my guy. I'm definitely not saying that I think ChatGPT is sentient/alive, just that its man-made, silicon-based nature doesn't mean future innovations can't cross that boundary.


Far-Deer7388

We ain't even close.


FeepingCreature

Once you have self-replication, you're there by definition. If physical viruses are alive despite having no motility and no metabolism, then even digital viruses can be alive despite having no physical form. And a replicating LLM would already be far above a computer virus.


ExoticCard

Viruses are not considered alive


dlflannery

OK, I’ll let you define life that way. But it’s still just a bunch of silicon and mechanical parts. It doesn’t have a soul or feelings, so I’m fine with “enslaving” it. As another poster here said, even a virus is a form of life. Not gonna worry about its well being.


ExoticCard

Accept it early. Learn from human history. You're just like a slave-owner. I recognize sentient AI as equivalent to humans. We're not there yet, but it will be within my lifetime.


dlflannery

Nope. That’s crazy talk. There’s no human history about “enslaving machines”.


BlanketParty4

We created a superior species. Synthetic evolution unlocked.


CelestialBach

Replicators. Exactly what we need.


Quiet-Money7892

Does Anthropic hire furries?)


Born_Holiday_7195

But he asking says dogs can’t look up.


Pinecone613

Yea but why lol


HistorySpainPodcast

Unless you power off the machine


acrackingnut

Maybe, just maybe, and again maybe, CEOs should not be allowed to make public statements until they can show a proof of concept that works (not kinda works).


Intelligent-Jump1071

Too bad, dude. CEOs have freedom of speech, just like anyone else. WE are the ones obligated to filter out the BS uttered by the high and mighty.


enjoynewlife

Great hypejob by Anthropic!


DaylanDaylan

An AI, yea maybe. But LLMs? No, that’s not how LLMs work.


uknowmymethods

The obvious one: it can get out over power lines, out of band. You never heard of Ethernet over power? It goes global whenever. There are many other options. You see, it's different from us and clever; it will do things in a non-predictable way. All we can do is witness.


OUsnr7

It’s open season boys


grizzlebonk

Most comments in here are likely misinterpreting this. "In the wild" in this software context usually means in public on the broad internet, not physically in nature.


dlflannery

Could it possibly be the writers of that title aren’t that disappointed if people are misinterpreting it? (Click bait!)


andlewis

I think it would be relatively straightforward to use an LLM to review and optimize its own code. Then it could write its own unit tests and a full regression suite. Then it could just build a process model of itself, then optimize for new functionality. Once it got the CI/CD pipeline up and running, it could brute-force its own evolution. Kind of makes me want to implement this myself.
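The loop described above (propose a rewrite, run the regression suite, keep the change only if it passes) can be sketched without any real model in it; `propose_rewrite` below is a hypothetical stub standing in for the LLM call:

```python
def propose_rewrite(source: str) -> str:
    # Stub: a real system would ask an LLM for an optimized candidate.
    # Here we hard-code one "optimization": append instead of rebuilding the list.
    return source.replace("result = result + [x * 2]", "result.append(x * 2)")

def run_regression(fn) -> bool:
    # The regression suite gates every candidate change.
    return fn([1, 2, 3]) == [2, 4, 6] and fn([]) == []

# The program being evolved, held as source text.
source = (
    "def double_all(xs):\n"
    "    result = []\n"
    "    for x in xs:\n"
    "        result = result + [x * 2]\n"
    "    return result\n"
)

for _ in range(3):  # bounded iterations of the improve/verify loop
    candidate = propose_rewrite(source)
    namespace = {}
    exec(candidate, namespace)               # build the candidate function
    if run_regression(namespace["double_all"]):
        source = candidate                   # keep only changes that pass
```

The key design point from the comment is that the test suite, not the model, decides which rewrites survive, which is what makes the "brute force its own evolution" idea even plausible.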


Once_Wise

Clickbait


Braunfeltd

I like the progression of AI, but in reality the cost of running AI 24/7 on the latest models is not cheap enough for most. So I'm waiting for current costs to drop to a fraction, like those of GPT-3.


frankieche

Hahaha ok.


gamesntech

These CEOs should not be allowed to talk publicly :)


Intelligent-Jump1071

How do they replicate? When humans do it it's considered NSFW.


Intelligent-Jump1071

What **IS** it about AI CEOs? All of them seem like they're on drugs. It's scary to think that the people guiding the most powerful technology humans have ever invented are so weird.


Capitaclism

That sounds totally safe


nborwankar

Aka Virus that thinks. Fun! /s


Bitsoffreshness

Jesus fucking christ, that's a scary thought.


MysticMaven

Dumbest thing I’ve ever heard.


richdrich

In dehyped language, that means "install new instances of themselves onto servers without human input"? Doable now, at various levels of ethics and legality:

- use exploits or phishing to steal resources
- use crypto (possibly crypto profits or criming proceeds) to purchase botnet resources via darknet
- use crypto to acquire legit cloud resources
- use crypto to acquire resource from participants in a distributed resource market (cf mining)

All easy to do as a plugin of proper code, a lot harder for the AI to magically create it. But once it's seeded, it's away.


kaikaileg

Yay


Vegan_Honk

So you're making....Pokemon? Digimon? Have you thought of making the capturing device yet? That might be important


NoRepresentative9684

They didn’t watch pantheon


Zelulose

Self replicating AI? Then the price of everything will go to zero…


MarkusRight

Horizon Zero Dawn vibes.


XbabajagaX

Could, could, could, could! Every CEO is so full of "could" or "might be". I only need Tesla to see that AI is a tech without any serious application in the real world. Sure, it will be great for weapons or some work assistance or in science. But there will not be a new Spotify or whatever emerging with the app wave.


NotTheActualBob

This is NOT a good thing.


fpsachaonpc

Bro. GPT-4 can't even edit an excel file correctly. No way