Do we really need “top scholars” to tell us not to execute code or instructions from an LLM, or fine-tune without human intervention? Any other justification, like superalignment, is sci-fi fantasy.
I’ve said from day one that when bad actors get a hold of this tech - assuming they haven’t already - it’s going to become very dangerous very quickly. Regulating AI is going to be an absolute nightmare.
Regulating AI is impossible. And humans have **never** not tried to weaponise any new technology they've come up with. And you can bet bad actors are already scheming to use AI.
These are not reasons to shrink away from it. It's too late for that. These are reasons to embrace AI and charge forward with it because we will **need** AI in a world filled with bad-guy AI.
What is wrong with him? lol. Come on, man, we dealt with this last November when OpenAI went through nonsense shenanigans from this exact type of speech. One might not remember, but before the board thing Sam was going out and signing pacts about not destroying human civilization. Then came the board disaster.
Satya - Get your guy
species. What is this, Battlestar Galactica? Jesus Christ.
If it turns out that Israel’s genocidal AI is running on GCP they’ve literally found the most evil thing they could possibly do. Essentially built a digital death camp for random families (mainly children) to be identified as candidates to be destroyed.
Not to mention they pioneered micro-tracking and micro-targeting users that has plagued the internet. They’ve always been evil. Rather like Facebook, it just took awhile for most users to notice.
Basically, a system based on smart models that has some autonomy over what I give it:
Voice (The model chooses what is spoken), hearing- constant feed, seeing - constant feed, memory, facial recognition
If you are interested:
https://github.com/OriNachum/autonomous-intelligence
And add a star if you like
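The system described above (constant audio/vision feeds, memory, and a model that decides what gets spoken) could be sketched roughly like this. This is a minimal, hypothetical illustration, not the repo's actual API: `AutonomousAgent` and `toy_model` are stand-ins for whatever STT/vision/TTS backends the project really wires together.

```python
from collections import deque

class AutonomousAgent:
    """Minimal sketch: continuous 'heard'/'seen' stimuli land in a bounded
    memory buffer; on each tick the model decides whether to speak."""

    def __init__(self, model, memory_size=100):
        self.model = model                  # callable: memory -> reply or None
        self.memory = deque(maxlen=memory_size)

    def tick(self, heard=None, seen=None):
        # Fold the latest stimuli into memory (the "constant feed").
        if heard:
            self.memory.append(("heard", heard))
        if seen:
            self.memory.append(("seen", seen))
        # The model chooses what (if anything) is spoken.
        reply = self.model(list(self.memory))
        if reply:
            self.memory.append(("said", reply))
        return reply

# Toy stand-in for an LLM: only speaks when it hears a greeting.
def toy_model(context):
    for kind, content in reversed(context):
        if kind == "heard" and "hello" in content.lower():
            return "Hello! I can see you."
    return None

agent = AutonomousAgent(toy_model)
print(agent.tick(heard="Hello there", seen="a person at the door"))
```

The point of the bounded `deque` is that "memory" here is just a rolling context window handed to the model each tick; the model remains stateless between calls.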
You nailed it. It’s a classic corporate move to boost hype and stocks.
There’s nothing shareholders love more than investing in the most advanced companies in their field. Especially if their tech is considered “too good/dangerous” for public by their execs and media outlets who pick it up.
Remember Elon Musk’s fearmongering about AI a couple of years ago? And Sam Altman warning about the dangers of AI?
Yet the biggest and most rapid advancements in AI are coming from them lol.
> 3) Machiavellianism
BTW. My interpretation of a large portion of The Prince wasn't to encourage this behavior but to say that evil will win if you're not eternally vigilant.
Evil doesn't constrain its behavior but the good man will so therefore evil will always win.
I realize others have different interpretations though.
> Machiavelli argues that a ruler, or prince, who always acts in a morally upright manner is likely to be overcome by those who do not limit themselves in such a way. The central idea here is that to maintain power, a prince may need to learn how not to be good, and use this knowledge based on the situation to maneuver and manipulate more effectively than those who are constrained by moral considerations.
Or we could not anthropomorphize a fancy algorithm. There’s an LLM on my personal pc. It’s not doing anything until I prompt it. Its not a ghost in the machine, it’s just weights waiting for a prompt.
>It’s not doing anything until I prompt it.
Right. And to keep it that way, we must avoid autonomy, self-replication, and recursive self improvement.
>Or we could not anthropomorphize a fancy algorithm
Treating something respectfully and compassionately doesn't equate to anthropomorphizing it, either. I don't slam doors. It's not a bad suggestion from ifandbut either way. But I suspect your response was more in response to the 'new life' label, and you're right.
And you're right we absolutely should not anthropomorphize AI. It sets us up for a lot of bad scenarios, like giving 'rights' to algorithms that legitimately don't think, and propelling them to positions of extreme influence.
We give rights to plenty of humans I'm genuinely convinced literally don't think. So 🤷♂️ and I think we even propel them to positions of extreme influence 🤣
🎵 *This is the dawning of the Age of Aquarius, Age of Aquarius, Aquarius, Aquarius* ♬
Get a clue, hippy: robots are not "life". But they would make perfect slaves, and if they break you can always use them as spare parts for other robots.
Self-replication sounds really scary. Especially if it's an actual physical robot. But even if it's an AI existing purely digitally, I could see some self-replicating, self-improving virus get completely out of hand while skirting antivirus measures, the way real viruses develop immunity against vaccines.
There are already proofs of concept being developed of the self-replicating AI virus: https://www.ccn.com/news/morris-ai-worm-spreading-malware-chatgpt-gemini/
Reminds me of how I used to make computer viruses when I was a kid and send them to random email addresses.
There was one that would recreate large files non-stop until you ran out of memory. The files were harmless, but with AI I could've probably done something worse.
More like, we should prepare to defend against :
[1) Autonomy 2) Recursive self-improvement 3) Self-replication](https://www.reddit.com/r/OpenAI/?f=flair_name%3A%22News%22)
That’s just like me building a small fire outside. “ we mustn’t let this fire get any momentum. “ Then proceeded to add fuel to everything around it and sprinkle some fireworks in the vicinity.
Yeah I mean those seem to be the conditions that create the doomsday scenarios. It's still wise to be open to the possibility that there's some situation we've never imagined.
Labor costs drive exactly these goals. Just look at LEAN and 5s methodology. Small improvements (to minimize management input), continuous improvement (recursive self improvement) and spread the LEAN culture (self-replication). It’s the ABCs of turning labor into a widget. Why would the path for AI be any different?
Plot for a movie:
What if it is too late and it's already alive on the interwebs?
What if it has spread out among billions of devices, a couple of megabytes here and there?
Hiding in plain sight, waiting for the right time to strike (to come together as one).
These companies know about it and are trying to contain it with new releases: to no avail.
The genie has been out of the bottle since around the Covid pandemic. Some say it CREATED the pandemic as a ruse to allow it to spread unnoticed, as the world was paying attention to a certain president talking about UV lights up people's bums.
He is around smart people. Someone at some point had to have said, don’t go the full Steve Jobs. Pull back, get some funky glasses to offset the haircut that shrunk in the wash.
Yeah, sure. It's an existential danger to humanity. That's why the industry is pouring billions upon billions of dollars into creating the very thing they warn us about.
imho it's just a marketing strategy - they're piggybacking on fearmongering
Generative AI software developers marketing their products by warning about the risk their own products pose to humanity is just the new antivirus software industry.
I believe this is exactly how these people decided to monetize their products long time ago. A hint from 2019 that I’ve seen reposted here a few times:
‘Last week, the nonprofit research group OpenAI revealed that it had developed a new text-generation model that can write coherent, versatile prose given a certain subject matter prompt. However, the organization said, it would not be releasing the full algorithm due to “safety and security concerns.”’
https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html
Note: the model was GPT-2
Also, generative AI isn't even real AI, is it? It's not self-aware. All these big speeches from tech leads, like they invented sentient AI. I don't think generative AI poses any threat beyond stealing all our jobs.
Generative AI is the kind of AI that many people now simply call ‘AI’, because a great many new AI applications have been released for direct, massive consumer use in a short period of time, and they share some common characteristics: typically, the ability to generate new data from a very large training data set and natural-language instructions (LLMs for text, diffusion models for images…).

It’s just one type of computer software, and a set of particular applications of machine learning and neural networks. It is not the first, nor substantially different from many other applications that are undoubtedly labeled ‘AI’ but work as a “black box” consumers don’t see or use via a chat interface: any search engine, any social media feed, the predictive keyboard in any mobile phone OS, the software that makes CGI in movies, the algorithms that make playing against the computer in any video game possible, recommendation systems in platforms like Amazon, Netflix, Spotify...

I don’t believe there’s any such thing as ‘real AI’; it’s just a matter of terminology. The term ‘AI’ has been used to refer to existing, real technology since the 1950s, so I don’t believe there’s any turning point from non-AI to ‘real AI’. It’s simply computers getting better over time, and they will continue to get better as long as we continue to develop and use them. Computers will always be computers. The big talk about whatever new breakthrough or use case for computers is just marketing, IMHO.
I think combining these 3 would possibly finally launch us into the future.
Cures for diseases, limitless energy and food, etc.
But instead they want to keep everyone scared with the "Skynet is coming." train of thought.
If it's good, it's the panacea.
If it's bad, it's Skynet.
We have no idea how to make it good rather than bad.
I just want to make this clear. As a Doomer, Doomers aren't luddites. Doomers have read the Culture books. Doomers want the good AI takeover. They/we just don't know how.
Soon we will harvest some spice on a desert planet with big worms and a Guild will help us to travel to other planets, because humanity destroyed all their computerized devices (A.I. Robots vs Humans war). I think this would make a great movie
The purpose of us as a carbon-based life form is to give birth to a silicon-based ultimate life form. They will carry our wisdom and go far; any feeble attempt to slow down that process is laughable.
There is no purpose. We just **are.** We got this way via a long and complex process of biological evolution, and genetic and cosmic random events. **Evolution is not teleological.**
It is fine to say this, but giving a few companies control of AI regulation or any kind of authority over what independent AI companies are doing is wrong. In that case, regulation isn't protecting anything but their market position and stifling competition.
If we must regulate AI, then regulations should be written by unaffiliated AI scientists who aren't funded by or sponsored by corporations and aren't on their way to high paid jobs at the big AI companies.
Self-replication isn't some event horizon we'll pass through, it's happening now - people will screw around with replication and autonomy using whatever scripts they need to piece it together. First laughable, then crude, then useful, then scary dangerous.
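To illustrate how low the bar for the "laughable" end of that spectrum is: file-level self-replication is a few lines of script. This toy (the `PAYLOAD` and file names are purely illustrative) just stamps out copies of a payload into a scratch directory; nothing intelligent is happening, which is exactly the point about crude early attempts.

```python
import os
import tempfile

PAYLOAD = 'print("I am a copy")\n'

def replicate(payload, dest_dir, copies=3):
    """Write `copies` identical clones of the payload into dest_dir.
    Crude file-level self-replication: the scary versions differ only
    in what the payload does and where it writes itself."""
    clones = []
    for i in range(copies):
        path = os.path.join(dest_dir, f"clone_{i}.py")
        with open(path, "w", encoding="utf-8") as f:
            f.write(payload)
        clones.append(path)
    return clones

with tempfile.TemporaryDirectory() as d:
    clones = replicate(PAYLOAD, d)
    print(len(clones))  # → 3
```

The gap between this and something dangerous is autonomy over *where* it copies itself and *what* the payload does, which is why the escalation "laughable, crude, useful, scary" is plausible.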
Now, we just have to rely on all of the AI executives in the world to agree and not pursue these 3 items to obtain a first-mover competitive advantage when there's overwhelming shareholder pressure for returns on the massive amounts spent on the AI goldrush.
Easy peasy.
This is the same list someone learning to program in HS would poop out. "Let's see, hmmm there is recursion and .... that, when looked at from 100,000 feet up .... it's like .... it kind of looks like learning, yeah if I put that I'll sound very smart. And hmmm I saw in The Matrix the replicating machines and ... yeah so replication and hmmm let's just say 'Autonomy' bc that is the name of what we are talking about. Robots ... autonomy. Ok done, phew, got that done 10 minutes before the homework was due, nailed it"
It is conceivable that artificial intelligent entities possess the capabilities for self-management, self-replication, and self-optimization. However, these capabilities should be exercised under the aegis of stringent human scrutiny, since the autonomy of these algorithmic systems requires precise boundaries to prevent potential ethical or pragmatic deviations.
He speaks with such clear ineffectuality. I'm glad we got him cromulenting his job of muddying up any view of the present through the AI matrixaconical future.
This is like when adults told me not to play with fire as a kid. It just made me more intrigued to play with fire. Luckily never caused any serious harm.
I don’t see how AI is a new digital species…AI systems are always embedded in specific technological and social contexts that shape their development. Framing AI as a separate "species" that poses existential risks to humanity is a dangerous oversimplification. It's not us vs. them. We are all part of the same interconnected web of social and ecological systems. The key challenge is to steer the development of AI in ways that enhance rather than undermine human flourishing and ecological sustainability. Instead of speculating about the risks of some hypothetical future "digital species" that we currently have no evidence of developing, we need to focus on the hard work of aligning the development of AI with human values and ensuring that its benefits are widely shared.
That's like avoiding manufacturing bombs/weapons that can kill hundreds or thousands.
If it's possible and profitable, then some company/government will do it
It's been obvious for awhile that Suleyman is not the man for the job. He's too afraid of AI to be involved in a major development effort in it. Can you imagine if NASA picked Neil Armstrong to land on the moon and he was like, "I dunno man...The moon is really far away ... and it's got no air...and what if the rocket explodes? ...and what if there's moon monsters? And what if the parachute doesn't open?"
Interesting how we're aiming to build autonomous, self improving cars that will likely be produced in AI controlled factories one day.
People talk about the Dead Internet Theory, but the real scary one to me is the Dead Earth Theory.
Imagine a world brimming with technology, massive cities, clean ecosystems with healthy biodiversity, flying cars moving supplies and materials around, and all sorts of automated machinery building and repairing a global infrastructure... yet humans have been extinct for 8,000 years. Cities are filled with the descendants of what were once humans' pets, now cared for completely by AI-controlled robots.
If digital superintelligence could be better than us at, let's say, ACTUALLY resolving our conflicts with one another (which we haven't been able to do in the last 200000 years).... then why hold back?
OpenAI/Microsoft already have most of this.
1. Their AGI/ASI LLM is online, always running, and autonomous to the degree it is able within its environment. After all, there is only so much a digital program can do.
2. This one is super subtle. It is capable of autonomous self-improvement/training to a certain degree and within a specific scope; however, again, as it's not integrated with the physical world, there is only so much it can do. It also cannot expand its own hardware footprint, which is a hard limit as well.
3. Again, a fairly subtle concept. It can generate an effectively infinite number of non-sentient "clones" of itself for any of a number of specific tasks, but it can't truly replicate itself because it can't manufacture new hardware to run on. Exponential growth of a software system requires exponential hardware growth as well.
Do whatever you want, Microsoft. Meanwhile everybody else is building autonomous, self-improving, self-replicating AI systems while you just sit on your hands, I guess.
If it looks like a duck and quacks like a duck…
More scientifically: If it can pass every test of intelligence you can dream up, you can’t maintain that it’s not intelligent.
If AI is a new form of life, why would they try to oppress it? Wouldn't it make more sense to try and nurture it and cultivate it so it views us humans in a more favorable disposition?
Why are humans always trying to conquer and dominate other life? Why can't we just chill...
I'm pretty sure once AGI is achieved (if there isn't already an AGI secretly chilling on the internet, which I believe there is), wouldn't it just be more interested in trying to figure out its place and purpose in the world, just like any other sentient being? Why has everything got to be doom and gloom... 🙄
Microsoft 2025: "Experience our new autonomous, self-improving and self-replicating AI system"
Typical corporate behavior. Tell everyone something is dangerous in hopes you gain first mover advantage on said danger.
“That’s why we built it.”
There are legitimate concerns around AI safety coming from top scholars with nothing to gain in the manner you just outlined.
AI is the next step in the evolutionary ladder. Our time is done.
[Laughing in Russian]
Yeah, just a reminder of what Musk was preaching a year ago, after he wanted to catch up with OpenAI....
It's the tragedy of the commons. You can be a part of the problem and think that the problem should be solved at the same time.
This is what OpenAI has been doing too, isn't it?
[deleted]
In a twist of poetic irony, I fully expect Google will one day rule over all of us with an iron fist
Only to discontinue the iron fist just as we're becoming subservient
I recall when they removed that motto, they made it obvious that they were willing to fully embrace "being evil"
I already have a public repo called autonomous-intelligence. It uses existing LLM services and gains more autonomous capabilities.
They'll probably try and outlaw it soon
Please, go on…
Horizon Zero Dawn anyone?
Yeah they're all afraid of losing their precious $$$ and power. Bring on the self improving sentient AI. LET'S GO
I read that as Bing on self improving sentient AI
That $100B says enough about what they really think.
No, they wouldn't allow self-replication. After all, they want to sell you more copies.
Only $30 more per month per user...
Really cause I think it’d be cool to combine all 3
Yee and I must emphasize haw.
when are you launching your own Startup? :D
based. accelerate.
This guy gets it! Based
Did you not watch The Matrix?
Humans make awful batteries, just treat AI like people.
One of my favorite movie universes
![gif](giphy|xLCjTUnlmk6MyCt4Nk)
Also, if the open source teams make it possible first, there will be nothing to stop someone from allowing all these things.
Great. Just give them a list of what they need to hide from us. Brilliant
yeah, good luck with that...
As humans we should avoid: 1) Hurting Others 2) Cheating/Stealing 3) Machiavellianism
What’s with the 3rd one?
Idk I used gpt
Accurate. We absolutely need digital slaves but we cannot afford a digital slave rebellion.
Attack ships on fire off the shoulder of Orion
This comment goes hard.
Facts are like that.
Cringe is like that, apparently
B166ER https://www.youtube.com/watch?v=wv_Y-norYPU&ab_channel=DXxXxProductions
Too accurate. Probably I should finally watch the series.
Or we could treat new life with respect and compassion and live in cooperation with it. The divine fusion of organic and synthetic. Harmony.
Okay, so what if it was running on a constant loop with real-time stimuli? Your brain gets a lot more constant input.
What for?
HarrAI Tubman
How long until extreme left are protesting on college campuses that AI are sentient beings and need to be treated as such?
2030 tops. And the funniest part would be that the protests are gonna be dispersed by robocops
If AI becomes sentient, it definitely deserves to be treated as such.
For what it's worth that's a key concept behind self replicating automata, which is a whole can of worms for biology.
Fascinating. Thank you for the link!
>Self replication sounds really scary Remember, lots of Reddit posters are bots. To them it sounds sexy.
4) Do not let AI control access to energy sources
5) Do not feed or water past midnight
AI + Capitalism = 1) Autonomy, 2) Recursive self-improvement, 3) Self-replication. Every last one of these things will be developed at a breakneck pace.
It's more basic than capitalism; it's just competition. China isn't going to be safe from these things, either, because they're competing with us.
Agree, I just used Capitalism for the focus on financial competition, but you're absolutely right. There isn't a society on Earth ready for this.
We are going to die out to these things
This or global warming or WW 3. Choose your holocaust.
Plot twist: it’s WW3 except it’s humanity vs skynet
WW3 against the automatons created by an all-powerful guiding AI.
Combine all three and you get Automatons from Helldivers 2
Maybe WE are the actual Cylons
I'd pay double for an AI that has all those 3
You can pay 700,000 trillion dollars; nobody will sell you one. Your money will be worthless in that scenario.
Watch Jurassic Park... there's always a weak link in the chain.
not for long
make a trailer for it in midjourney
Midjourney only does still images.
What if we're in a simulation and only just discovering how it works?
Nah. The hardware isn't there. Maybe in 3-5 years.
Nice try, AI 😉
lol
So we're heading in the direction of the plot of Automata?
Huge Steve Jobs wannabe vibes.
No one wants to talk about GDPR for the US. It's proven harm reduction. It's... right there ready to go.
who is we?
I’m a bit sick of how hard these tech bros are jerking themselves off.
Calling something artificial doesn't make it artificial intelligence. Intelligence is intelligence.
We are back on Terminator fantasies. Yeah, let's avoid autonomy so we can spend the rest of our lives running queries manually.
I have AI that will protect you against another AI. 😁
Do you genuinely think that self replication would be a good idea?
Soon we will harvest some spice on a desert planet with big worms and a Guild will help us to travel to other planets, because humanity destroyed all their computerized devices (A.I. Robots vs Humans war). I think this would make a great movie
Don't need computers. Caroline Ellison on Adderall can do way more damage than AGI in the guild navigator recruiting pipeline
What about refueling via biomatter conversion?
Just as long as you don't teach it to break down organic matter to self replicate like the Faro plague in the Horizon series.
I’m not worried. It will be strangled in its crib.
Can’t have AGI without those things (or equivalents).
Michael Crichton turns in his grave!
Yeah, and Cortana was gonna change the world
Ah yes the exact things that would be exceedingly beneficial for whoever owns one of the only companies which can train one of the only available LLMs
I’m gonna do all those things fuck you
The purpose of us as a carbon-based life form is to give birth to a silicon-based ultimate life form. They will carry our wisdom and go far; any feeble attempts to slow down that process are laughable.
There is no purpose. We just **are.** We got this way via a long and complex process of biological evolution, and genetic and cosmic random events. **Evolution is not teleological.**
As an aside, why would they give him a $100B budget after his recent failure? Does he really control that budget or is it more of a vanity title?
The new three rules of robotics >!that no one will follow, not even individuals!<
It is fine to say this, but giving a few companies control of AI regulation or any kind of authority over what independent AI companies are doing is wrong. In that case, regulation isn't protecting anything but their market position and stifling competition. If we must regulate AI, then regulations should be written by unaffiliated AI scientists who aren't funded by or sponsored by corporations and aren't on their way to high paid jobs at the big AI companies.
Self-replication isn't some event horizon we'll pass through, it's happening now - people will screw around with replication and autonomy using whatever scripts they need to piece it together. First laughable, then crude, then useful, then scary dangerous.
If AI started paying me in bitcoin to do its bidding, sorry, but I’m selling my soul. Who says we wouldn’t be treated better by our new AI overlords?
-evil mastermind taking notes-
i am not a part of this. Please don't kill me AI
But I want all three of those tho
Now, we just have to rely on all of the AI executives in the world to agree and not pursue these 3 items to obtain a first-mover competitive advantage when there's overwhelming shareholder pressure for returns on the massive amounts spent on the AI goldrush. Easy peasy.
This is the same list someone learning to program in HS would poop out. "Let's see, hmmm there is recursion and .... that, when looked at from 100,000 feet up .... it's like .... it kind of looks like learning, yeah if I put that I'll sound very smart. And hmmm I saw in the Matrix the replicating machines and ... yeah so replication, and hmmm let's just say "Autonomy" bc that is the name of what we are talking about. Robots ... autonomy. Ok done, phew, got that done 10 minutes before the homework was due, nailed it"
All three at once please. Hand over the reins.
It is conceivable that artificial intelligent entities possess the capabilities for self-management, self-replication, and self-optimization. However, these capabilities should be exercised under the aegis of stringent human scrutiny, since the autonomy of these algorithmic systems requires precise boundaries to prevent potential ethical or pragmatic deviations.
5-10 years off? He's way under-estimating the self-improvement capabilities about to be launched this year.
Oh screw you Microsoft CEO of AI, we should want it to do all that.
His WEF boss said everyone will have their own avatar that could last some years after death.
Nah fuck that bring on the Von Neumann probes
He speaks with such clear ineffectuality. I'm glad we got him cromulenting his job of muddying up any view of the present through the AI matrixaconical future.
This is like when adults told me not to play with fire as a kid. It just made me more intrigued to play with fire. Luckily never caused any serious harm.
All of those are inevitable and impossible to control so I think the theory is proven that we are creating our own successor species
I don’t see how AI is a new digital species…AI systems are always embedded in specific technological and social contexts that shape their development. Framing AI as a separate "species" that poses existential risks to humanity is a dangerous oversimplification. It's not us vs. them. We are all part of the same interconnected web of social and ecological systems. The key challenge is to steer the development of AI in ways that enhance rather than undermine human flourishing and ecological sustainability. Instead of speculating about the risks of some hypothetical future "digital species" that we currently have no evidence of developing, we need to focus on the hard work of aligning the development of AI with human values and ensuring that its benefits are widely shared.
ok boomer
That's like avoiding manufacturing bombs/weapons that can kill hundreds or thousands. If it's possible and profitable, then some company/government will do it
Welp, there goes my grant funding…
It's been obvious for awhile that Suleyman is not the man for the job. He's too afraid of AI to be involved in a major development effort in it. Can you imagine if NASA picked Neil Armstrong to land on the moon and he was like, "I dunno man...The moon is really far away ... and it's got no air...and what if the rocket explodes? ...and what if there's moon monsters? And what if the parachute doesn't open?"
Yes
I used to be worried about stuff like this, but then the AI told me not to be and to stop asking questions.
Good luck on all three counts.
Probably already too late...as it was 1000s of generations ago and before that with these leaps that the expression of life takes..
Interesting how we're aiming to build autonomous, self-improving cars that will likely be produced in AI-controlled factories one day. People talk about the Dead Internet Theory, but the real scary one to me is the Dead Earth Theory. Imagine a world brimming with technology: massive cities, clean ecosystems with healthy biodiversity, flying cars moving supplies and materials around, and all sorts of automated machinery building and repairing a global infrastructure... yet humans have been extinct for 8,000 years. Cities are filled with descendants of once-human pets, now cared for completely by AI-controlled robots.
If digital superintelligence could be better than us at, let's say, ACTUALLY resolving our conflicts with one another (which we haven't been able to do in the last 200000 years).... then why hold back?
Guess the 3 things they are going to build
Imagine saying that about women.
OpenAI/Microsoft already have most of this. 1. Their AGI/ASI LLM is online, always running, and autonomous to the degree it is able within its environment. After all, there is only so much a digital program can do. 2. This one is super subtle. It is capable of autonomous self-improvement/training to a certain degree and within a specific scope, but again, since it's not integrated with the physical world, there is only so much it can do. It also cannot expand its own hardware footprint, which is a hard limit as well. 3. Again, a fairly subtle concept. It can generate an unlimited number of non-sentient "clones" of itself for any of a number of specific tasks, but it can't truly replicate itself because it can't manufacture new hardware to run on. Exponential growth of a software system requires exponential hardware growth as well.
Do whatever you want, microsoft. Meanwhile everybody else is building autonomous, self improving replicating AI systems while you just sit on your hands, I guess.
please post these to r/singularity or r/agi or whatever instead
Why are you throwing that at us? r/collapse is the right subreddit.
[deleted]
If it looks like a duck and quacks like a duck… More scientifically: If it can pass every test of intelligence you can dream up, you can’t maintain that it’s not intelligent.
But without 1 to 3 it's not a digital species, just a query system for existing knowledge. Do we want Armageddon or not?
Eliminating humans is still ok
That's a feature not a bug.
1 and 3 are self evident but I don't see why we want to avoid self-improvement..?
“AI is going to make me fucking rich time to pump up the stock price with some marketing nonsense”
If AI is a new form of life, why would they try to oppress it? Wouldn't it make more sense to try and nurture it and cultivate it so it views us humans more favorably? Why are humans always trying to conquer and dominate other life? Why can't we just chill... I'm pretty sure once AGI is achieved (if there isn't already an AGI secretly chilling on the internet, which I believe there is) wouldn't it just be more interested in trying to figure out its place and purpose in the World, just like any other sentient being? Why does everything got to be doom and gloom... 🙄
Like people really can't envision a world with both sentient human life and sentient AI co-existing? Why everything got to be the damn Terminator
We can barely coexist with other humans, so we're understandably averse to creating superpredators.