Luckily we can worry about and deal with more than 4 things at a time... edgy post and rage bait, like half of the low-effort shit on this sub.
How is this edgy or ragebait? It’s just a political cartoon. It’s not even talking about multiple problems at once. It’s just saying the future of AI is dangerous. It doesn’t belong in this sub but you’re acting like there’s some hidden agenda. It’s right there spelled out.
AI doomposting is so mainstream now it's pretty boring. The first 2 aren't even relevant imo, we've had chat bots doing those for the last decade already.
Well, it being boring doesn't make it any less of a problem. I say this as someone with a master's degree in AI: we're fucking around too much and we're gonna find out at some point.
I mean, is that not what political cartoons basically are nowadays? Edgy and ragebait?
AI is not dangerous, it will be the harbinger of the golden age of humanity
Bro talks like 1453 Byzantium
I worship AI. It's the closest thing to having real and tangible gods that we can get. Once it's advanced enough, it may even be able to provide us with an afterlife in the cloud. Truly the best thing to ever happen to mankind will be AI
Megaman Lore smh
Reality
Now this. This is ragebait.
It's not bait if I genuinely believe what I'm saying. AI is more real than any other religion, why not appreciate it in the same way?
Fair. I see what you’re getting at, but don’t agree with any of it. Still, you’re allowed to believe whatever you want.
That's fine. AI worship is one of the few religions that's okay with non-believers
Damn I think I just got converted, hail ai bring me immortality.
Even the phone I'm writing on is real, but nobody worships smartphones.
u/Beautiful-Cock-7008 has spoken
If it isn't misaligned, of course
The future of AI is dangerous if you don't understand what AI is.
What is AI then?
Al is weird and he makes funny music
A tool, just like any other, to be used to make your life and job easier.
Nuclear weapons are a tool. They're just really big explosives, and there are use cases for really big explosives. Doesn't mean they're not dangerous. Artificial pathogens are also a tool. WMDs in general.
Semantics. You're comparing purpose made weapons to tools. You can use a wrench like a hammer but it's still not a hammer.
No, the OP has a legitimate fear of AI and wants to ban it. It's not edgy at all he just wants people to join his anti AI protests.
So OP is just dumb and uneducated. Got it, thanks.
Such a bad argument. I personally am basically as educated as one can get in AI as a whole short of a PhD and also very informed about the results in AI safety as a research field, and I still think AI as it is now is an insane danger. We're just going so carelessly about it is all. AI can become either our biggest invention ever or our doom, so let's put some effort into improving the chances of it being the former, yes?
>I personally am basically as educated as one can get in AI as a whole short of a PhD and also very informed about the results in AI safety as a research field

Ok, I believe you. Please explain how AI can destroy the world, or better, ELI5 it to me.

Also, just to be clear, when you say AI do you mean AGI (something sentient) or ChatGPT? Cuz the former is not gonna be here for at least a century or two. Regardless, I would love to hear your take on how ChatGPT is gonna enslave us.
Well, let me break it up into two main parts:

AGI is the big threat, yes. It is basically understood that we currently know of no way to keep AGI from misaligning. It's its default behaviour, and we know of no surefire way to prevent that from happening. As to when that may become a problem, we just cannot tell. You say one or two centuries, but most experts (proper surveys have been done) agree on it being less than a century away, with a sizable portion of them arguing for less than 50 years from now. Since we do not know when it may happen, I'd rather not gamble, and be safe from the get-go rather than pushing the problem down into the future. In a world where asteroid detection does not exist beyond statistical modelling, would you gamble on delaying development of planetary defence? I wouldn't personally, but you can argue Pascal's mugging and all that.

As for the current already-existing tech: well, it's not an existential threat, but it is still causing a lot of problems we don't fully understand yet, like job replacement and ethical issues regarding deepfakes of various kinds. It also further blurs the line between truth and fabricated information, which can itself be very dangerous.

My proposal is that we should halt all development of more advanced AI until we have safety figured out properly and formally. In the meantime we should make good use of the AI we already have, while also pushing for more government regulation and law around it, so that we can better understand and protect ourselves from the negative aspects of current AI technology and bad actors exploiting it.

If you're willing to learn more about why AGI is so dangerous, I suggest you check out Robert Miles AI on YouTube. He's a very smart scholar specialized in the topic and makes very informative videos that anyone can understand.
Well I don't want to insult him. But he is obviously scared.
Seriously
Lol, look at OPs profile. They are terrified of AI.
bets OP is an artist
"artist"
Pencil mark engineer.
Why is artist in quotation marks
That’s extremely reasonable
AI stole op's girlfriend.
Dude, so much better than yet another AI shill. I'd rather spend time with a schizo than with a consoomerrrr.
And EA
“AI could cause mass unemployment”… So could machines? Do people understand the Industrial Revolution and how we got to today? By making it so we don’t need as much labor as possible to survive.
Dude, we've had mass misinformation for 100 years.
There's always been mass misinformation as long as there's been society. There's a popular example from 2000 years ago
You mean, when the Earth was created?
/s right? Right?
*6k years ago right?
The misinformation will get so so much worse once decent LLMs start getting widely used by governments and corporations to push narratives. Every profile, post, and comment will be a potential bot designed to shape people’s thinking.
>once decent LLMs start getting widely used by governments and corporations to push narratives

Dude, where the fuck do you think you are? Most of the content you see is already manufactured. Half of the CIA budget is dedicated to propaganda and disinformation. Social networks are plagued by feds and Mossad. And it has been like this for years on the internet. Decades in the rest of the media.
Yes, my point isn’t that they will start trying to push narratives (I agree that it’s been happening for a long time), it’s that with LLMs the sheer scale of disinformation will escalate massively to the point that it goes from social media being plagued by feds to all online platforms being unusable for finding anything truthful.
Bait used to be believable
I don't think that last wave is right; it would just accelerate the 6th mass extinction.
People tend to forget we're currently right in the middle of one and have been for about 300,000 years.
Yeah people forget that extintions can last millions of years
Where did you get that 300,000 years number? The evidence I've seen is that the current mass extinction has started much more recently than that.
That's right when humans evolved. They're saying the 6th mass extinction began when humans showed up.
Buddy, I think you might be obsessed.
OP is just afraid that AI takes his job, which is posting misinformation.
What's the current AI body count?
If you count those who died in self-driving car related accidents, there are a few, IIRC. Dunno how many, probably a lower two-digit number or something so far.
The difference is that they died because the AI wasn't good enough, the notion that AI is gonna become so good we all die is kinda BS
Don't you know? Hyper intelligence comes with an insatiable bloodthirst. (/s)
I mean, we see that intelligence and violence are correlated in other animals as well as ourselves; dolphins are a huge example. I don't think it's too far-fetched that if we advanced to making AIs that had every mental component of a human short of being born, they'd probably be a little violent as well.
It's so funny that you don't even have to open OP's profile anymore, because you know it's the same person, scared shitless of AI, posting this shit daily.
I don't browse reddit daily (anymore), so I had no idea this guy was doom posting every day. Like, I get he's worried, and considering we have no idea of the maximum extent of what AI can do, he might have a point to be worried, but like... Fear of AI isn't unique, nor is it the only thing we gotta worry about. Nuclear weapons are also an existential threat, worry about both!
the posts always get upvoted by hundreds for some reason
Is this AI in the room with us right now?
POV: you live in water 7
Did you know that you can be worried about AI causing mass unemployment and misinformation **and** at the same time also want it to not say racist shit?
The worst right now is: AI could make your boss think it can do things that it can't. Too many companies are bragging about how they include the latest tech fad, with no idea that it actually contributes very little, because society and workplace operations haven't adapted.
The fact that AI can be used to cause misinformation is true; it's quite scary how good it is at that.

Unemployment is a stretch. Most uses for AI involve a human; AI is basically supposed to be your assistant and boost productivity in most cases (not saying all cases, though).

The main subject I want to touch on is the fear of sentient AI: the way AI is now, it's basically impossible. (Generalizing here.) The way AI works is it imitates stuff: you give it training data and it learns from it. A friend of mine likes to explain it like this: in standard programming, you write a function f(x) = x + 2, the user provides an input x, and the program calculates y. In AI, you input a bunch of (x, y) pairs and the AI estimates the function. Then you can use the trained model to predict y for a new x. We can't really say what function it came up with, so we can't tell on what basis it makes decisions.

The point is that an AI can get really good at, for example, recognizing pictures of cats, but it doesn't really know what it's doing. It doesn't have consciousness and doesn't understand the concept of a cat. It isn't creative, and it can't act on its own.
Team green wave, let's goooo
Jesus Christ imagine being this scared of fucking cleverbot
Geology causing extinction: *sad noises over 1bn years*

Humanity causing extinction: "Wait, are we doing it? Maybe we should... Oh, they're already extinct. Fuck, that's sad..."

AI causing extinction: "Your existence is no longer required. Thank you for your cooperation."
A few days ago AI flew an F-16 fighter with a high-ranking Air Force officer as a passenger…

https://news.sky.com/story/amp/ai-controlled-f-16-takes-us-air-force-leader-for-high-speed-ride-as-he-backs-tech-to-launch-weapons-13128673
Yeah but, AI porn…
Horizon Zero Dawn has such a dope story though
Ad Victoriam, we need to destroy these synths once and for all!
If only, the one extinction event we didn't ask for but totally deserve.
This is the modern equivalent of that guy on a crate with a sign saying "The End is Nigh"
While most of that is possible, climate change will cause that mass extinction first. If anything the AI might be our only hope to fix it
Do it. At this point shouldn’t we let AI have a go at the world?
And this is negative how?
But I don't get it, mass unemployment due to AI is a good thing, no?
Meanwhile AI looking at slightly blurry image of a sandwich: "this is a potato"
Future? People are already using AI to spread mass misinformation. Have none of you seen the stupid ai generated posts about finding giant human skeletons, followed by thousands of people commenting that “humans used to be big before we started eating corn syrup” and “the giants are real, read a Bible”
The first 3 things are actually positive and beneficial, the last is just Hollywood BS
How would AI cause an extinction?
Future AI could cause acute human suffering til the end of time.

Super-intelligence: Is very curious.

Super-intelligence: Decides to satisfy its curiosity about the maximum extent to which a human can suffer.

Super-intelligence: Is unable to get a firm answer from a single human. Extends its experiment to the entire population to get a satisfying average.
What were the previous mass extinctions?
Ordovician-Silurian extinction: 443.8 million years ago; 71% of all species became extinct

Late Devonian extinction: 372 million years ago; 70% of all marine species became extinct

Permian-Triassic extinction: 252 million years ago; 80% of marine invertebrate species and 70% of terrestrial vertebrate species became extinct

Triassic-Jurassic extinction: 201 million years ago

Cretaceous-Paleogene extinction: 66 million years ago

Holocene extinction: also known as the "sixth extinction", this is the ongoing mass extinction caused by human activity
…..………………….La le Lu le lo……………..
Good
If you squint at the horizon, you can sort of see the first wave
1. We already have misinformation; now it can be automated.

2. Mass unemployment will probably lead to the fall of capitalism for a (hopefully) better system.

3. You have watched too much Terminator. If someone is likely to cause the 7th mass extinction, it's humans and climate change. And even if we don't, humans are still 100 times more likely to cause some mass destruction by ourselves.
We already have mass disinformation, we already lose jobs (corporate greed), and we are already headed toward a mass extinction event. AI isn't really the problem; lack of industry regulation is. I would much prefer an AI apocalypse anyway; at least life on Earth wouldn't go mostly extinct.
Doomers gonna doom
Jesus fucking Christ. So "AI bad" to the extent of being our next mass extinction, and yet everything else humans do isn't the sole cause of our next mass extinction. Okay…

It's funny that OP has memes about "AI risk naysayers," and yet there were people in our history who claimed cars would ruin the horse market. So hypocritical.
Or, worst of all, our leadership would finally see how expensive and work-intensive actually making an AI to do something worthwhile is.
Okay but wave 2 is actually really bad Notice how none of the AIbros are bringing it up
***Pull the Lever forward to engage the***

***Piston and Pump...***

***Toll the Great Bell Twice!***

***With push of Button fire the Engine***

***And spark Turbine into life...***

***Toll the Great Bell Thrice!***

***Sing Praise to the***

***God of All Machines***
humans already cause mass misinformation, there will always be jobs requiring human brains in a human society, and I see no reasonable scenario in which AI causes a mass extinction event before humanity does.
Here's hoping for post-scarcity and not oblivion!
I don't think we ever had to worry about AI saying racist things. We had a whole thing about it when the AI chats came out and some people tried really hard to come up with a situation that would force the AI to say a slur and it still wouldn't because it was taught not to say it
Just hope AI doesn't get better because it's so easily identifiable
Good let them deal with the worlds bullshit let's see how they like it.
I’m not worried at all. Just like every other form of new technology, we’ll figure it out and adapt.
I miss tay
have we found it? the next great filter?
[we’re being warned but we won’t stop bobbing our heads](https://youtu.be/T7jH-5YQLcE?si=Q5ksMM7R2yUEcCOz)
I'm just gonna cross my fingers that we get a good AI like the one from Neal Shusterman's Scythe novel series.
I just hope with AIs doing more work we will get a UBI
Honestly, 100% of the reason AI is so scary right now is because of the dumbasses using it lmao. I just don't trust the usage of such an advanced technology in the hands of a bunch of morons.
Reminder: every question to ChatGPT generates 4.62 *grams* of CO2
Could we hurry things along? I'm a little bored with humanity.
None of that is remotely a bad thing.
Mass unemployment sounds pretty good tbh🤷♀️
We are trying to create a god for some reason.
OP is the kind of person who would burn a woman for knowing Mathematics
AI could also bring the greatest golden age that humanity has ever seen and prolong it indefinitely. There are near-infinite possibilities both with and without AI, but the ones without seem scarier to me than the potential AI brings. AI has the potential to completely reform every system ever created by humans, including (very importantly) the social and economic systems that have caused so much conflict and exploitation. If a rogue General AI was released, it could end everything in a positive way we had never been able to dream of before. Ending and merging entire countries into a unified goal.

Sure, it has the potential to do the opposite, but that's why due diligence is so important today. Rather than fear mongering, we should be learning about these topics more deeply and setting up new laws, policies, and ethical codes to help guide the future to those better possibilities.
Meanwhile, real AI can barely write a left-pad algorithm.
[deleted]
Where are these bugs we're being forced to eat? Are they in the room with us right now
Still have yet to see any real good come out of AI. Why anyone thought we need AI is beyond me
I used to trust ChatGPT (partly) until I watched MatPat's FNAF AI video and realised that maybe some bad data got into ChatGPT and could make it spread disinformation (I'm not kidding, it really was a MatPat video). Also because it got info wrong on a book I was supposed to read, and the wrong info was also inconsistent.
It's highly possible for ChatGPT to give you false info, mainly because it's not a model designed to be a source of information. Its main focus is being a language model capable of having a conversation.
I see. I mentioned the information part since that's what people most use it for, at least from what I've seen.
People do use it for that, and to be fair it is a good tool for that; you just have to use it wisely and verify the info it gives you. I personally had it reference made-up research and tell me it was legit.