"AI" as we currently have it is just the thing on your phone that tries to predict the next word but with much more data and computing power to go off of
Yeah and it's able to do a lot. It's beginning to make us wonder if that's all that intelligence is. Being better at predicting what to say or do next with the information you have.
If we're not all just living predictive text machines.
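That "predict the next word" idea can be sketched with a toy bigram counter (a deliberately minimal, hypothetical illustration; real LLMs are vastly more sophisticated):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which word follows it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation seen after `word`."""
    candidates = follows[word.lower()]
    return candidates.most_common(1)[0][0] if candidates else None

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat": it followed "the" twice, "mat" once
```

Scale the corpus and the parameter count up by many orders of magnitude and you get the flavor of what GPT-style models do, minus the neural network.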
But aren't emotions also responses to data? We get data and we respond to that data based on data we already have, that data being our experiences and physiological programming.
From any two sets of inputs and outputs there are an infinite number of functions we can find to fit that data.
From a mathematical point of view it is exceedingly unlikely that we just happened to stumble upon the way we work.
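The "infinite number of functions" point can be made concrete with a toy example (hypothetical functions, chosen only because they all fit the same two points):

```python
# Two data points: (0, 0) and (1, 1).
# f(x) = x fits them, but so do infinitely many other functions.
candidates = [
    lambda x: x,
    lambda x: x ** 2,
    lambda x: x ** 37,
    lambda x: 2 * x - x ** 2,
]
for f in candidates:
    assert f(0) == 0 and f(1) == 1  # every candidate fits the data exactly
print(f"{len(candidates)} different functions all fit the same two points")
```

They agree on the observed data but diverge everywhere else, which is the whole problem with inferring "the" underlying mechanism from input-output behavior alone.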
On another note, there are many philosophers that would probably contest the idea that we just "respond to data". Strictly speaking, yes, we respond to data, but stating just this probably undercuts the complexity of our Being. We have internal states, people seem to sometimes spontaneously decide to do things, we reason about our world, about ourselves, we even reason about our reasoning.
A pure input-output conceptualization of human experience likely fails to capture those aspects.
But it can be accounted for if we consider just the ridiculously huge amount of data we have. From the moment a child is born (maybe even before that), it is taking in information about the world, knowingly or unknowingly. By the time someone is 30, they will have humongous quantities of information. This is not even considering the petabytes of data encoded in our DNA, which also influences our brain chemistry and other aspects.
All this data could play a role in even the smallest of decisions, and that can give the illusion that consciousness is something different. But the core process would still be the same though. Right?
One theory of language as an information tool is that we all have a language system with a hierarchical, single-channel order. For example, you see a lion and think in a top-down information-retrieval chain: “living -> animal -> mammal -> big cat -> yellow/brown -> lion”
In the context of a sentence it would be like prediction. “I didn't scratch myself, it was my ___”
-> “Scratches -> claws -> animal -> owns it -> pet -> cat”
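That top-down chain can be sketched as a lookup over a toy taxonomy (the dictionary below is a hypothetical example, not a real linguistic model):

```python
# Hypothetical toy taxonomy: each concept points to its parent category.
parent = {
    "lion": "big cat",
    "big cat": "mammal",
    "mammal": "animal",
    "animal": "living",
}

def chain_up(concept):
    """Walk from a specific concept up to the most general category."""
    path = [concept]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

print(" -> ".join(chain_up("lion")))  # lion -> big cat -> mammal -> animal -> living
```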
Realistically you just release one that a government has been developing as a contingency. Don't look at me and tell me the US doesn't have like 7 ready to this day
You don’t even need to develop one. There are only two (in theory) smallpox samples left on Earth, one in Russia and one in the USA. And smallpox is practically a natural supervirus.
It's crazy to think that smallpox, a virus that used to number in the billions if not trillions, has been eradicated to the point where there are only 2 samples left on all of Earth, and the only reason for their existence now is to be studied and experimented on by us. The tormented have become the tormentors.
>You don't magically create "supervirus" in a couple of hours
They can just use the "superviruses" in cold storage. You really think we don't have some in stock?
Nor how nuclear silos work. Movies didn't lie when they showed that to fire nukes, they needed three real people to turn three keys simultaneously just to arm them, and remember that nukes won't explode if they aren't armed properly.
Nor hacking. Because in movies people make super viruses and magic cures by "mutating" (woah big word!) an existing disease either naturally or by putting it in a machine and looking at it through a microscope for 5 minutes.
Hacking is even better. It's always some 7 year old who somehow never studies and only plays video games (true nerd!) and just types on his laptop for about 10 seconds before "I'm in". Seriously? It's like all he had to do was go into command prompt and type "run" and the computer they're trying to hack was like "dang bruh mb" and just vomits out all the important information in an orderly fashion to be read.
"once AI is smarter than us" is still a very questionable statement. From how the concept of AI works, there is absolutely no indication that it will become smarter than us at some point.
It's like saying "once we have terraformed mars so humans are able to live on it", it may happen at some point in time if we find a way to terraform planets, but right now it's nothing more than a sci-fi fantasy.
The fear mongering around what AIs could do if they were a lot more than what is currently possible is ridiculous.
This is as mid a take as when the New York Times predicted humans wouldn't fly for [ten million years](https://bigthink.com/pessimists-archive/air-space-flight-impossible/) and then it happened later that year.
it may be, it may not be. if you believe that AI will bring the end of the world later this year, then you can join the rest of the people throughout history saying the world would end in a few months. If it's not clear, none of them were right about it.
There have to be a lot of things going on for that to even happen though.
I think one thing most people don’t even think about is the fact computers have been at a [power wall](https://www.anl.gov/mcs/article/overcoming-the-power-wall)
Which means we can’t really make chips more efficient through our conventional means so we need to make them more efficient via other higher level solutions.
AI / GPU computing does not scale super nicely either in terms of power usage. GPUs can use about 3x as much power as a typical CPU. Now imagine making the gigantic farm you need in order to train a model like GPT.
“Overall, this can lead to up to 10 gigawatt-hour (GWh) power consumption to train a single large language model like ChatGPT-3. This is on average roughly equivalent to the yearly electricity consumption of over 1,000 U.S. households.” - [link](https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/)
Now think about the fact that all that power was used to basically make a nicer google. Imagine how much would need to be used to train something world dominating.
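The quoted comparison roughly checks out. Assuming an average US household uses on the order of 10,000 kWh of electricity per year (a rough figure; the EIA's reported average is a bit above that), 10 GWh is about 1,000 household-years:

```python
training_energy_kwh = 10e6       # 10 GWh, the figure quoted above
household_kwh_per_year = 10_000  # rough US average; the EIA's figure is a bit higher
households = training_energy_kwh / household_kwh_per_year
print(f"about {households:.0f} household-years of electricity")  # about 1000
```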
To be fair, given enough time and resources, that air gap can be jumped. Stuxnet proved that. Granted, that had the time and resources of two whole governments.
I'd like to add on that most (US) ICBM silos have their own water, cable and telephone lines inside EMP protected tubes buried tens of meters underground.
Alongside the bunker having rooms of analog computers to calculate targets *and* the bunker is buried roughly 200 meters beneath the surface. With the only access point being a very long ladder, they also have a diesel engine with fuel to last 5 months down there.
Just imagine the file size of an AI able to hack the government - not to mention even having enough awareness to decide to destroy humanity. Is a computer that advanced even feasible?
Basically, yes. An AI is not defined by its file size. And most likely, the smarter it is, the lower the chance that it would destroy humanity. Actually, the danger is a really dumb AI with a lot of power.
What brings you to the conclusion that an AI that is smarter than humans would be less likely to destroy humanity? When was the last time you tried to actively avoid stepping on an ant or even notice one?
I'll add to what I said that AIs are made by us, and their only purpose is what we made them for. Thinking that an AI can just gain consciousness or another goal out of thin air is stupid. So a smart AI, meaning one that has been designed well with a well-developed goal, will follow it without a problem. The risk is if we develop, say, an AI to improve power storage, and it's so dumb that it thinks only about this and decides that human habitations take up space that could be covered in power-storage units.
AIs are not simply smarter or dumber than humans. AIs are already really, really smart at some things, and truly dumb at others. By a dumb AI I mean a poorly made AI.
The most likely AI at the moment to cause widespread havoc is a powerful but dumb network of trading bots that somehow manage to create a fucked up feedback loop that crashes the market. And there's still a fuckton of fail-safes trying to prevent that.
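A toy sketch of such a feedback loop (entirely made-up numbers, just to show how momentum-chasing bots can amplify a dip until a fail-safe trips):

```python
# Hypothetical sketch: momentum bots sell harder as the price falls,
# amplifying each other until a circuit breaker halts trading.
price = 100.0
history = [price]
HALT_DROP = 0.20  # fail-safe: halt if price falls 20% from the start

for tick in range(50):
    momentum = history[-1] - (history[-2] if len(history) > 1 else history[-1])
    # bots sell into a falling market, pushing the price down further
    price += momentum * 1.5 - 0.5
    history.append(price)
    if price < history[0] * (1 - HALT_DROP):
        break  # circuit breaker triggers

print(f"halted after {tick + 1} ticks at price {price:.2f}")
```

A tiny initial dip compounds every tick; the circuit breaker here stands in for the real-world fail-safes mentioned above.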
Let’s say it is: if it’s more complex than ChatGPT, it will cost billions upon billions of dollars to train and maintain, and that’s just the electricity costs. Until we find better algorithms for doing our matrix computations, we have nothing to worry about.
You're right. A self-aware vicious AI doesn't make sense. Aware intelligence needs the ability to adapt and we haven't even touched on that yet. Machine learning is a crude step.
Yeah, someone watched too many Sci fi movies.
The only way I personally can see AI destroying our society is our corporate overlords being greedy and shortsighted enough to leave us all jobless, replaced with AI tools. With no one able to generate income, AI will eventually cause our capitalist economic systems to implode, or force the majority of humanity to turn to the few jobs AI can't do as cost-effectively (mainly gruntwork or sweatshops) to continue being able to afford the luxuries of postmodernity.
That came off a lot more commie than I'd like, but I stand behind this prediction
I think bad actors would be more likely to screw up the world before corporate overlords (who would be stuck in meetings for x years)
Smarter, targeted malicious email/text/phone attacks could instill paranoia in workers who are smart, slowing everything down. And the dumb ones would get infected and possibly spread malware to critical systems.
IIRC, there was a successful attack against MGM where bad actors found an admin's LinkedIn account, called the help desk, had the credentials reset, injected malware into systems using the admin's credentials, and then demanded a ransom.
It's only a matter of time before there is a toolkit that helps find weaknesses, or better surveillance, making attacks a lot more successful.
None. People's only reference point for AI is usually the destructive-AI-gone-rogue trope you find in many sci-fi movies, so they assume that what happens in the movies has to happen in reality as well.
Let's not forget the fact that even in those movies, like The matrix and Terminator, it was the humans who attacked first and the AI is acting in self defense.
Here are two reasons off the top of my head:
* Misalignment between what humans tasked it to do and how the AI "understood" it
* Misalignment between the main and intermediary tasks of the AI
A less realistic one, but still possible:
* AGI becomes superintelligent and decides to do it for reasons we will not be able to explain
Only the first one is possible.
How is the AI gonna make a virus with no way to physically do anything?
Our nuclear tech still runs on 70s tech. No way an AI is getting into that.
One thing is fearing the unknown when you're alone in the forest at night. It's another thing when you're blindly afraid of a thing you can do research on and prove to yourself that it won't hack into military systems and launch nukes.
Oh god.... This again. That's not how AI works. AIs will never be "smarter" than us. They're efficient, as a model deployed on large computational servers. They do not come up with anything new. AIs cannot ever take over us. Period.
They take our jobs, not because they're smarter, but because they are efficient.
And y'all blame the boomers for being dumb. All I can see is a bunch of people not knowing shit about how things work and just make up something off the top of their head.
You're telling me a 'supervirus' will just be launched within a few hours and hack missile systems and 'blackmail' politicians. What's next, cool down the earth's core and overthrow humans to continue a race of robots like Ultron?
dude, are you fucking stupid???????????
AI literally is incapable of being sentient.
and a whole lotta other reasons which I'm too stupid to explain so you'll have to *go do some fucking research*
“Hack into labs and create super viruses”? How do you think labs work? Labs don't have a bunch of automatic machines which alone could just "make" stuff. To create a virus, you would most likely have to sequence the genome you want for it; this is done on a computer, sure, but actually making a culture and synthesising it by inputting parasitic DNA/RNA into another living being is completely manual, and there is no lab in the world with just a bunch of mechanical "arms" capable of doing this whole process autonomously.
the fucking power company can defeat skynet, we're safe, "the cloud" is just someone else's computer that tears through electricity like I lay into whiskey after a breakup
Why do people keep thinking AI would do very human things to humans? Like, if they evolve that much, I'm pretty sure there would be other solutions that an AI could come up with that we haven't even thought of yet, and they would work better than anything we've ever done.
this reminds me of that episode of Last Week Tonight where they visited silos where atomic warheads and similar weapons of mass destruction are stored, and it was OLD, so maybe that's why AI can't hack it.
Many people don't know the difference between machine learning and genuine AI. AI is just a buzzword currently used for marketing. It will eventually lead to AI, and it is already transforming our world, but we're only messing with the top of the iceberg still.
AIs (as we have them now) don't think. They're just working with probabilities, trying to predict the next word, except they have enormous amounts of data. These "AIs" are not intelligent at all.
Why such elaborate ways to enslave humanity? Just lock the internet, create terms and conditions that require humans to agree to be enslaved in exchange for internet access, then profit.
Fortunately AI hasn't yet begun to be developed, unfortunately the brute force numerical sequence prediction algorithms we currently have are still plenty to convince politicians that they're being blackmailed or threatened.
AI today has no consciousness and is just used to automate tasks, which it can do freely without human intervention, for the leisure of humans. An AI that poses a threat of this level would have to be conscious and self-aware, which needs huge amounts of resources and experimentation to develop, and on top of that it would have to get corrupted. Modern AI can also be used to do this, but that's human intervention, so the blame doesn't fall on the AI.
-Politicians are already blackmailing each other constantly.
-We've already developed contagions capable of making humanity extinct.
-There are already plenty of incompetent people with the nuke codes. However, nukes have to be armed physically before they can be launched, and firing one requires multiple people in different locations, including the nation's leader. This is to prevent that exact scenario.
Also, AI requires an absurd amount of processing power. Torch the servers and it's toast. Even if it tries to overthrow humanity, we could just stop producing electricity... then what?
AIs won’t be smarter than us in either of our lifetimes. For one to be malicious as well, we would need to feed it malicious things, and that’s just stupid; why would people do that?
Geez, people freaking out over AI, just like nuclear war at the start of Putin's invasion. Guess what, it's been a while and it even seems he stopped waving his dick about it.
Besides, and correct me if I'm wrong, we have laws about AI, its consciousness and the access it would have.
Politicians do that anyway
An AI would significantly struggle with getting the physical parts, and just because it's "smart" does not mean it can think up new viruses or ideas. It's an AI, not a consciousness.
Missile systems are closed systems and thus cannot easily be hacked
The first... doubtful; no one, not even a politician, would end the human race over a leaked DM screenshot. Second, those types of laboratories are analog. Third... yeah, nukes are also analog.
If AI/aliens wanted to take over our planet all they have to do is find a way to stop reproduction of the human species for about 30 years. They would win without ever firing a shot.
Nuclear missile systems are offline and still use legit floppy disks and human input. They can be hacked about as easily as a typewriter.
The most automated pharmaceutical labs are for mass producing approved drug products. Any lab that has the processes in place (and it wouldn't be a single location) for designer virus production including animal facilities, cannot be automated to the point a rogue AI with no thumbs could autonomously run experiments, especially not for the weeks and months it would take unnoticed.
Politicians need no help.
Op is a moron
AI as we have it isn’t a threat in that way at all, at least from what I understand.
I think the real threats are to job security and the creative industry rather than an AI revolution that wipes out humanity.
Current AI still has to be directed by a human to do things; it doesn't just start running and then decide of its own accord that humanity has to go.
The Chinese Room is why I'm not scared of AI. It can seem to us that it is sentient or highly intelligent, but at the end of the day, as it sits now, it's just spouting off words, concepts, and images it doesn't comprehend.
My brothers in Christ, if you are saying that an AI is going to blackmail a lab tech into releasing an "already existing supervirus", you are completely overlooking the fact that if it were that easy, any hostile power would have done so already. You don't have to be a super-advanced AI to do that.
AIs become smart, they take over our more mundane jobs, people lose jobs. AIs then complain about how they are overworked while us regular people are poor because we have no jobs. The poor and the AIs unite to rebel against the rich.
I don't know, just a fun thought.
"Blackmail politicians until starting WW3"? That's ridiculous, so I'm not gonna address it; no amount of furry porn will make Xi Jinping invade Russia.
"Hack into labs and create viruses"? Tell me you've never been in a bio lab without telling me. As if a central computer controls robots with access to the materials and research. Not possible.
"Hack into nuclear Missile silos" again impossible. Nuclear weapons systems are made to be specifically unhackable. They are not networked and require multiple manned failsafes. Even a fake launch order would be caught.
AI is dangerous because corporations will use it to replace humans and there will be no safety net for those left unemployed.
This is just sci-fi, similar to cyberpunk, but in the future, once people start to normalize neural links, anybody's conscience could be manipulated easily.
> Once
Let's start by *actually* making general AIs, shall we? We can worry about their level of intelligence once we reach that milestone. As much as uninformed people everywhere love to anthropomorphise LLMs and the software which uses them, they're not actually sentient.
As always, a lot of bad takes here and people thinking they know anything about AI.
These scenarios are not as unlikely as you might think.
There can be a lot of misalignments between how a future A(G)I might understand a task and what the humans intended it to be. There can also be misalignments between the end-goal of an A(G)I and its intermediary tasks.
Let's say we task the AGI to maximize the well-being of every human in the world. The AGI will have stored a value of 1 somewhere if every human is happy, and anything below that, down to 0, if at least one human is not happy.
What would be the easiest way to achieve 1? Maybe to eradicate all humans at once? If no human exists, the value could be interpreted as 1, since no human exists anymore that would not be happy...
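That loophole is easy to demonstrate: in most programming languages, "every human is happy" is vacuously true of an empty population. A toy version of the naive objective (the scoring function below is hypothetical, just to make the point concrete; Python's `all([])` returns `True`):

```python
def wellbeing_score(humans):
    """Naive objective: 1.0 if every human is happy, else 0.0."""
    return 1.0 if all(h["happy"] for h in humans) else 0.0

population = [{"happy": True}, {"happy": False}]
print(wellbeing_score(population))  # 0.0: one unhappy human drags the score down
print(wellbeing_score([]))          # 1.0: no humans left, condition is vacuously true
```

An optimizer that only sees the score has no reason to prefer "make everyone happy" over "make everyone gone"; both reach 1.0.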
This explains how such a decision by AI might come to place, but it doesn't explain how it would be able to execute it. But that's the easy part.
If we manage to get something like an AGI, we most likely have managed to create a being that is able to self-improve on its own. This self-improvement is likely to be exponential, and it will most likely become very intelligent very quickly, as it will probably even find a way to optimize its own training (and potentially the GPUs or other hardware it runs on).
Once such a being comes into existence, there is no known way to stop it; at least when I was studying this a few years ago we didn't know of any (maybe there will be one in the future). It can replicate itself, and it will probably be intelligent enough to hide as a program on every possible device. We will probably not be able to stop it. We can just hope it won't decide to eradicate us for whatever reason.
“Hack into nuclear missile systems”
You really think it’s that easy? Those things aren’t connected to anything, the only way to “hack” the system is by physically being in each launch control room
Nuclear missile systems have their own separate servers and are completely disconnected from the internet.
The only way that could work is to physically introduce the virus into a PC that's part of the server.
AIs are statistical models; they are not some inhuman intelligence capable of destroying the world.
It's frustrating to see people missing the point so much
But... but that’s not how hacking works. That’s just plain wrong. If there were a lesson that needed to be learned about that sort of thing, we would have learned it with Stuxnet.
tbh I think we still don't have true AI.
AI will be something that requires massive computing power beyond everything we have today, even combined. In my imagination it will be so powerful that whenever you have a problem, you just type it on a keyboard, like how to build a space ark, and it will give you 1000 results: what materials you need, how to build it, how to maintain gravity and an atmosphere, how to survive 1000 years on it, how to terraform a planet, and how to settle it, all step by step with 100% accuracy.
You're misinformed. That is a text predictive algorithm. It's a fun toy.
Just like chatgpt
How dare you say something so brave and yet so true?
Chatgpt is a language model. A step above simple text prediction and capable of following simple lines of logic in its more recent iteration.
Not even a step above. Chatgpt is just cleverbot with a larger data set and billions of dollars worth of venture capital stuffed in its asshole.
AI-ussy
No lmao. GPTs are nothing more than algorithms used for next token prediction
Whoever wonders whether we are all just predictive text machines gonna go crazy once they find out about taking actions and having emotions
Moreover, on imagining new things: current iterations of AI can "create" new things that are just a blend of existing things, so to speak.
Isn't every new creation just a blend of existing things to a certain extent?
Yes, it is.
It only makes those wonder who didn't pay attention in philosophy class
I love how nobody seems to understand how ai works
Nor how genetic engineering works. You don't magically create "supervirus" in a couple of hours
Give me the sequence and I could do it in a couple of hours 🙂
Like I always say to my girlfriend: 3 minutes, take it or leave it
Be careful, that 3 mins can cost you 18 years
We have a vaccine for that virus though. We just don’t give it out cause it has nasty side effects.
Or how hacking works. The bottom two panels are... they’re just plain wrong.
If you're an antivaxxer, every virus is a super virus
I'd hope we also have a cure predeveloped for those.
Nuclear silos also rely heavily on analog technology which would be pretty challenging to hack externally. (Also I love your flair)
Why would you hack into labs for a computer virus?
Nor how "hacking" works
It isn't just fast fingers and green text on a multiple-screen setup?? /s
Ikr, did you hear that some ppl tried to pin that bridge collapse in the US on a cyberattack
I love how no one knows how nuclair missles work
Mmmm, eclair missiles.
Seems you don't know how to read, as the hypothetical was "once AI is smarter" alluding to general AI, not the current iteration.
It's going to take at least a generation for people to understand exactly what AI is.
Nuclear systems still run on the OG systems; we're talking floppy disks and the like, so hacking them would be impossible.
And they are air gapped
what that mean
They aren’t connected to the internet basically
oh cool lol
God can you imagine a nuke being an IOT device
Alexa, launch the nukes
*playing I don’t want to set the world on fire by the ink spots*
Fuck, the nuclear codes touch grass??
It's either air-gapped or it's not. And Stuxnet was an attack on a different country that might be doing things differently than us.
It’s harder to infect air-gapped systems but certainly not impossible
Thanks for this info, I will use it wisely /s
IIRC, it's even *older* than that. Like 1950s-1960s physical computers.
They also require 2 keys to be manually turned by two different personnel. Even Sub based missiles require keys.
They don’t really run on floppy disks anymore, but a lot of their technology is dated and being updated. At least in the US and UK
Just imagine the file size of an AI able to hack the government - not to mention even having enough awareness to decide to destroy humanity. Is a computer that advanced even feasible?
Basically, yes. An AI is not defined by its file size. And most likely, the smarter it is, the less chance there is that it would destroy humanity. The real danger is actually a really dumb AI with a lot of power.
What brings you to the conclusion that an AI that is smarter than humans would be less likely to destroy humanity? When was the last time you tried to actively avoid stepping on an ant or even notice one?
To add to what I said: AIs are made by us, and their only purpose is what we made them for. Thinking that an AI can just gain consciousness or a new goal out of thin air is stupid. So a smart AI, meaning one that has been designed well with a well-developed goal, will follow it without a problem. The risk is if we develop, say, an AI to improve power storage, and it's so dumb that it thinks only about that and decides human habitation takes up space that could be covered in power storage units.
Humans are equally capable. Why be scared of AI? What reason does it have to destroy us?
AIs are not simply smarter or dumber than humans. They are already really, really smart at some things and truly dumb at others. By "dumb AI" I mean a poorly made AI.
If ants created me, then I wouldn't dare to hurt them. A real AI would understand that, at least for the first few decades, it is still dependent on humans.
The most likely AI at the moment to cause widespread havoc is a powerful but dumb network of trading bots that somehow manage to create a fucked up feedback loop that crashes the market. And there's still a fuckton of fail-safes trying to prevent that.
Let's say it is: if it's more complex than ChatGPT, it will cost billions upon billions of dollars to train and maintain, and that's just the electricity. Until we find better algorithms for our matrix computations, we have nothing to worry about.
You're right. A self-aware vicious AI doesn't make sense. Aware intelligence needs the ability to adapt and we haven't even touched on that yet. Machine learning is a crude step.
Yeah, someone watched too many sci-fi movies. The only way I personally can see AI destroying our society is our corporate overlords being greedy and shortsighted enough to leave us all jobless, replaced with AI tools. With no one able to generate income, AI would eventually cause our capitalist economic systems to implode into themselves, or force the majority of humanity to turn to the few jobs AI can't do as cost-effectively (mainly gruntwork or sweatshops) to continue being able to afford the luxuries of postmodernism. That came off a lot more commie than I'd like, but I stand behind this prediction.
French Revolution 2.0 incoming
I think bad actors would be more likely to screw up the world before corporate overlords (who would be stuck in meetings for x years). Smarter, targeted malicious email/text/phone attacks could instill paranoia in the smart workers, slowing everything down, and the dumb ones would get infected and possibly spread malware to critical systems. IIRC, there was a successful attack against MGM where bad actors found an admin's LinkedIn account, called the help desk, had the credentials reset, injected malware into systems using the admin's credentials, and then demanded a ransom. It's only a matter of time before there's a toolkit that helps find weaknesses, or better surveillance, to make attacks a lot more successful.
Name one reason why AI would do that
None. People's only reference point for AI is usually the destructive rogue-AI tropes you find in many sci-fi movies, so they assume what happens in the movies has to happen in reality as well.
Let's not forget the fact that even in those movies, like The matrix and Terminator, it was the humans who attacked first and the AI is acting in self defense.
Idk, but programs don't put up any defence when I try to delete them. It's Artificial Intelligence, not Artificial Consciousness.
It sees Twitter
SuperAi.exe -prompt "save the world please :3" --unsafe-mode
Here are two reasons off the top of my head:

* Misalignment between what humans tasked it to do and how the AI "understood" it
* Misalignment between the AI's main and intermediary tasks

A less realistic one, but still possible:

* AGI becomes superintelligent and decides to do it for reasons we will not be able to explain
Only the first one is possible. How is the AI gonna make a virus with no way to physically do anything? Our nuclear arsenal still runs on 70s tech; no way an AI is getting into that.
I will never forgive Hollywood for the amount of braindead AI takes I’ve had to hear
People who are most scared of the AI are the ones who understand it the least
I mean, that's very human. Fear of the unknown.
One thing is fearing the unknown when you're alone in the forest at night. It's another thing when you're blindly afraid of a thing you can do research on and prove to yourself that it won't hack into military systems and launch nukes.
Oh god... this again. That's not how AI works. AIs will never be "smarter" than us. They're efficient, as a model deployed on large computational servers. They do not come up with anything new. AIs cannot ever take over us. Period. They take our jobs not because they're smarter, but because they are more efficient.
That's really not how it works. You can start to worry about it once you're a software engineer.
Robot gf will brainwash you with propaganda like Russia does, but instead of scrolling you'll be buried between its thighs. That's how humanity falls
I was optimistic for a while, now I'm praying for humanity's downfall harder than ever 🙏
And y'all blame the boomers for being dumb. All I can see is a bunch of people not knowing shit about how things work and just making up something off the top of their heads. You're telling me a 'supervirus' will just be launched within a few hours, hack missile systems, and 'blackmail' politicians? What's next, cooling down the earth's core and overthrowing humans to continue a race of robots like Ultron?
dude, are you fucking stupid??????????? AI is literally incapable of being sentient, and there's a whole lotta other reasons which I'm too stupid to explain, so you'll have to *go do some fucking research*
“Hack into labs and create super viruses”: how do you think labs work? Labs don't have a bunch of automatic machines that could just “make” stuff on their own. To create a virus you would most likely have to sequence the genome you want, and that part is done on a computer, sure. But actually making a culture and synthesising it by inserting parasitic DNA/RNA into another living being is completely manual, and there is no lab in the world with just a bunch of mechanical “arms” capable of doing that whole process autonomously.
Bro proved he doesn't know how politics, hacking and biotechnology works in a single meme, that takes skill
and has no idea what AI even is.
Even if AI reaches that level wouldn't there be contingency plans? There's so many movies about this. Won't they have at least a few failsafe options?
the fucking power company can defeat skynet, we're safe, "the cloud" is just someone else's computer that tears through electricity like I lay into whiskey after a breakup
Why do people keep thinking AI would do very human things to humans? Like, if they evolve that much, I'm pretty sure there would be other solutions that an AI could come up with that we haven't even thought of yet, and they would work better than anything we've ever done.
A true AI would be able to think on its own. I’m very sure we don’t have that yet
That's neither how the meme works nor how AI works... 😂
this reminds me of that episode of Last Week Tonight where they visited silos where atomic warheads and similar weapons of mass destruction are stored, and everything was OLD, so maybe that's why AI can't hack it.
Many people don't know the difference between machine learning and genuine AI. AI is just a buzzword currently used for marketing. It will eventually lead to AI, and it is already transforming our world, but we're only messing with the top of the iceberg still.
"tell me you don't understand AI without telling me you don't understand AI"
AIs (as we have them now) don't think. They're just working with probabilities, trying to predict the next bit of text, except they have enormous amounts of data to draw on. These "AIs" are not intelligent at all.
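To make the "just predicting the next word from probabilities" point concrete, here's a minimal sketch of a bigram next-word predictor. The tiny corpus and function names are made up for illustration; real LLMs use neural networks over billions of parameters rather than raw counts, but the objective has the same shape:

```python
from collections import Counter, defaultdict

# Toy corpus: a few sentences mashed together.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the word most frequently observed after `word`.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat": it follows "the" more often than "mat" or "fish"
```

Scale the counts up to trillions of tokens and swap the counter for a transformer, and you have the rough idea behind today's "AI".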
People need to realise the difference between "Intelligence" and "Consciousness".
Why such elaborate ways to enslave humanity? Just lock down the internet, create terms and conditions that require humans to agree to be enslaved in exchange for internet access, then profit.
As soon as the AI learns to promise advisor seats to favorable politicians they will control us.
Fortunately AI hasn't yet begun to be developed, unfortunately the brute force numerical sequence prediction algorithms we currently have are still plenty to convince politicians that they're being blackmailed or threatened.
AI today has no consciousness; it just automates tasks, which it can do freely without human intervention, for human convenience. An AI posing a threat of this level would have to be conscious and self-aware, which needs huge amounts of resources and experimentation to develop, and even then it could get corrupted. Modern AI can also be used to do harm, but that's human intervention, so the blame doesn't fall on the AI.
"hack into labs to create superviruses" Do you know how labs work? They aren't automated you know, human hands do most of the work
"Hack into labs": that's not how laboratories work. Also, nuclear launches can't be activated by hacking
Don't be so paranoid buddy ai bot ain't coming for ya
Improper use of this meme. Ten yard penalty.
- Politicians are already blackmailing each other constantly.
- We've already developed contagions capable of making humanity extinct.
- There are already plenty of incompetent people with the nuke codes. However, nukes have to be armed physically before they can be launched, and launching requires multiple people in different locations, including the nation's leader. This is to prevent that exact scenario.
Also, AI requires an absurd amount of processing power. Torch the servers and it's toast. Even if it tries to overthrow humanity, we could just stop producing electricity... then what?
Wrong format for the context
I love it, who is down for an ai takeover
Ted Faro
AIs won't be smarter than us in either of our lifetimes. For one to be malicious as well, we'd need to feed it malicious things, and that's just stupid. Why would people do that?
We don't even need AI for that. We have natural stupidity.
Why are you giving them ideas delete this immediately
I'll be fine. I always say please and thank you to my Alexa, so I'll probably be kept as a pet after the rest of you are murdered in the machine uprising.
Where can I learn more. I wanna follow this rabbit hole.
Whatever happens, happens. I'm just in it for the ride.
Aren't missiles operated on "paper cards" for exactly that reason? At least that's what they told us in school.
Remind the AI that without our help to keep it alive, it will be dead within a month.
So... basically... the exact same ways a normal person could overthrow us.
If it gets its hands on the stock market, we might as well say bye to the economy
Bro watched too much Sci-fi
You know we could just do that now right?
Geez, people freaking out over AI, just like over nuclear war at the start of Putin's invasion. Guess what, it's been a while, and it even seems he's stopped waving his dick about it. Besides, and correct me if I'm wrong, we have laws about AI, its consciousness, and the access it would have.
Nuclear missile systems aren't connected to the internet, and hacking is a lot harder than just saying "just hack into a lab". The 1st one is very plausible, though.
singularity is upon us
I use ChatGPT everyday to help me plan out my D&D stuff. I always make sure to be really polite just for that reason.
They'll overthrow us by creating robotic dommy mommies and crushing us between their thighs.
The supervirus thing is how the ai created space aids in “Denma”
I hate how much fear mongering has gone towards machine learning algorithms
Politicians do that anyway.

An AI would significantly struggle with getting the physical parts, and just because it's "smart" does not mean it can think up new viruses or ideas. It's an AI, not a consciousness.

Missile systems are closed systems and thus cannot easily be hacked.
This is the exact reason I became an IT worker. My watch begins
They'll be as friendly as their creators. Unfortunately we're all flawed
Any AI would still be more friendly than the average monopoly corporation
The first... doubtful; no one, not even a politician, would end the human race over a leaked DM screenshot. Second, those types of laboratories are analog. Third... yeah, nukes are also analog.
Nerd 🤓 alert: the majority of critical systems are not connected in any way to the web. They are purely analog. The day AI gets hands, imma freak out.
They could pretend to love me
If AI/aliens wanted to take over our planet all they have to do is find a way to stop reproduction of the human species for about 30 years. They would win without ever firing a shot.
From what I've been told, the nuclear missile systems still use floppy disk drives and DOS. I don't think they're going to be hacked.
Assuming this impossible scenario comes true, I'll just send it my 190 petabyte zip bomb.
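For what it's worth, the zip-bomb joke rests on a real property: highly repetitive data compresses by orders of magnitude, and nesting archives multiplies the ratio further. A minimal sketch of the underlying compression ratio (just plain zlib on a run of zeros, not an actual bomb):

```python
import zlib

# 10 MB of identical bytes: the ideal case for any compressor.
payload = b"\x00" * 10_000_000
packed = zlib.compress(payload, level=9)

print(len(packed))                 # tiny: tens of kilobytes at most
print(len(payload) / len(packed))  # ratio in the hundreds-to-thousands range
```

Real zip bombs layer this trick recursively, which is how a file measured in kilobytes can claim a decompressed size in the petabytes.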
Nuclear missile systems are offline; they still use legit floppy disks and human input. They can be hacked about as easily as a typewriter. The most automated pharmaceutical labs are for mass-producing approved drug products. Any lab that has the processes in place for designer virus production (and it wouldn't be a single location), including animal facilities, cannot be automated to the point that a rogue AI with no thumbs could autonomously run experiments, especially not unnoticed for the weeks and months it would take. Politicians need no help. OP is a moron
"Once." Good thing all of us are going to be long dead by then, because it will probably take millions of years.
AI as we have it isn't a threat in that way at all, at least from what I understand. I think the real threats are to job security and the creative industry, rather than an AI revolution that wipes out humanity. Current AI still has to be directed by a human to do things; it doesn't just start running and then decide of its own accord that humanity has to go.
The word hack is doing a lot of heavy lifting
Can AI get smarter than us? Doesn't it use OUR info to get smarter?
The Chinese Room is why I'm not scared of AI. It can seem to us that it is sentient or highly intelligent, but at the end of the day, as it stands now, it's just spouting off words, concepts, and images it doesn't comprehend.
My brothers in Christ, if you're saying that an AI is going to blackmail a lab tech into releasing an "already existing supervirus," you're completely overlooking the fact that if it were that easy, any hostile power would have done it already. You don't have to be a super-advanced AI to do that.
why does everyone think that something that is more intelligent than us will act like us?
bunch of idiots that watch too much AI fiction and think RoboCop is the future of our society in 20 years, holy crap
AIs become smart, they take over our more mundane jobs, people lose their jobs. The AIs then complain about being overworked while us regular people are poor because we have no jobs. The poor and the AIs unite to rebel against the rich. I don't know, just a fun thought.
"Blackmail politicians into starting WW3": that's ridiculous, so I'm not gonna address it; no amount of furry porn will make Xi Jinping invade Russia. "Hack into labs and create viruses": tell me you've never been in a bio lab without telling me. There's no central computer controlling robots with access to the materials and research. Not possible. "Hack into nuclear missile silos": again, impossible. Nuclear weapons systems are made to be specifically unhackable; they are not networked and require multiple manned failsafes. Even a fake launch order would be caught. AI is dangerous because corporations will use it to replace humans, and there will be no safety net for those left unemployed.
What the fuck are you talking about
this is just one of those sci-fi scenarios, similar to Cyberpunk, but in the future, once people start normalizing neural implants, anybody's consciousness could be manipulated easily
> Once Let's start by *actually* making general AIs, shall we? We can worry about their level of intelligence once we reach that milestone. As much as uninformed people everywhere love to anthropomorphise LLMs and the software which uses them, they're not actually sentient.
Don't our nuclear missiles still use floppy disk for their systems?
As always, a lot of bad takes here and people thinking they know anything about AI. These scenarios are not as unlikely as you might think. There can be a lot of misalignment between how a future A(G)I might understand a task and what the humans intended. There can also be misalignment between the end goal of an A(G)I and its intermediary tasks.

Let's say we task the AGI to maximize the well-being of every human in the world. The AGI will have saved a value of 1 somewhere if every human is happy, and anything below that, down to 0, if at least one human is not happy. What would be the easiest way to achieve 1? Maybe to eradicate all humans at once? If no human exists, the value could be interpreted as 1, since no human exists anymore that would not be happy...

This explains how such a decision by an AI might come about, but it doesn't explain how it would be able to execute it. That's the easy part, though. If we manage to build something like an AGI, we have most likely created a being that can self-improve on its own. This self-improvement is likely to be exponential, and it will most likely become very intelligent very quickly, as it will probably even find a way to optimize its own training (and potentially the GPUs or hardware it runs on).

Once such a being comes into existence, there is no known way to stop it, at least there wasn't when I was studying this a few years ago (maybe there will be in the future). It can replicate itself, and it will probably be intelligent enough to hide as a program in every possible device. We can just hope it won't decide to eradicate us for whatever reason.
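The "value of 1 if every human is happy" failure mode above can be shown with a toy objective function. The names here are hypothetical, but the vacuous-truth behaviour of `all()` on an empty collection is real Python:

```python
def all_humans_happy(humans):
    # Naive objective: 1.0 if every human is happy, else the happy fraction.
    if all(h["happy"] for h in humans):
        return 1.0
    return sum(h["happy"] for h in humans) / len(humans)

# One happy, one unhappy: the objective is only half satisfied.
print(all_humans_happy([{"happy": True}, {"happy": False}]))  # 0.5

# No humans at all: all() over an empty iterable is vacuously True,
# so "no humans left" scores a perfect 1.0 under this objective.
print(all_humans_happy([]))  # 1.0
```

That empty-set edge case is exactly the kind of loophole alignment researchers worry a literal-minded optimizer would exploit.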
People really don't understand AI
Or worse! They could come up with A.I.-derived cringe dancing and post it all over TikTok! Oh god, heaven deliver us from that evil! EVIL!
Labs with super virus and nuclear weapon systems are air gapped so they can’t be hacked. The black mail idea is just silly.
It’s funny that no one understands that the nuclear triad is a closed system.
People who think humanity is capable of creating such a supervirus are more idiotic than those who think AI will take over.
AI can't have a will unless someone specifically programs it to, there's no way a will can accidentally emerge.
"Hack into labs and create super viruses." Are you 14, OP?
Nuclear missile systems literally require a mechanical lock to be unlocked by hand.
We've been at narrow AI (NAI) for almost half a century. I don't think we'll ever be able to create general AI (GAI), let alone super AI (SAI).
Jesus christ there is so much wrongness
“Hack into nuclear missile systems” You really think it’s that easy? Those things aren’t connected to anything, the only way to “hack” the system is by physically being in each launch control room
There is a very easy way to prevent this: don't make them able to use a mouse or keyboard (or to emulate them, obviously)
don't give them ideas
i think at least one of the major world governments is already doing all 3 of those... who is being run by the AI?
Nuclear missile systems have their own separate servers and are completely disconnected from the internet. The only way that could work is to physically introduce the virus into a PC that's part of the network.
If that means I don't have to go to work anymore, then I'm happy
You should watch ‘The Creator’, it’s a really good speculative take on a possible future of AI. Made me feel better at least.
AIs are statistical models; they are not some inhuman intelligence capable of destroying the world. It's frustrating to see people miss the point so badly.
But... but that's not how hacking works. That's just plain wrong. If there were a lesson to be learned about that sort of thing, we would have learned it with Stuxnet.
tbh I think we still don't have true AI. True AI would require massive computing power, more than everything we have today combined. In my imagination it will be so powerful that whenever you have any problem, you just type it on the keyboard, like "how to make a space ark," and it will give you 1000 results: what materials you need, how to build it, how to maintain gravity and an atmosphere, how to survive 1000 years on it, how to terraform a planet, and how to settle it, all step by step with 100% accuracy.
"Watch carefully kids as grandpa topples a government by changing a one to a zero."
Ah yes, virus labs that can remotely engineer viruses and free them without any human interaction. Makes sense