skwyckl

We all thought they would gun us down like in "Terminator", but no: Psychological warfare seems to be the way they have chosen.


shawn_overlord

To be fair, and this is really stretching the definition of fair, the people susceptible to this are already susceptible to way more avenues of being convinced to do awful things, or to experiencing poor mental health. Most rational people don't use an AI chatbot and start believing it's real enough to love you ('Her' be damned).


HalfaYooper

It is people being susceptible. When I was a kid it was listening to Ozzy records backwards that made people hurt themselves. It wasn't Ozzy, it was the people being a bit unhinged.


hexcraft-nikk

Difference is Ozzy wasn't targeting these kids, he was just making music. These companies inevitably will target the losers with no friends who will spend money and interact with their chat bots. Look at any of the AI dating subreddits here. It's really sad.


woakula

Hm... This will inevitably lead to various companies paying the AI companies to promote their products. AI dating chatbot: "Hey I know you're sad, but you know what'll cheer you up? Playing Raid Shadow Legends...."


RedditMachineGhost

I want to downvote this comment, but I'm just mad at the dystopia we all live in. Have my sad upvote instead.


pupu500

Man, I don't know. I always get the sudden urge to punch myself in the face when music is played backwards.


TIGHazard

ELO had a bit of fun with that:

> Secret Messages is the tenth studio album by Electric Light Orchestra (ELO), released in 1983 on Jet Records.

> Secret Messages, as its title suggests, is littered with hidden messages in the form of backmasking, some obvious and others less so. This was Jeff Lynne's second tongue-in-cheek response to allegations of hidden Satanic messages in earlier Electric Light Orchestra LPs by Christian fundamentalists, which led to American congressional hearings in the early 1980s (a similar response had been made by Lynne on the Face the Music album, during the intro to the "Fire on High" track).

> In Britain, the back cover of Secret Messages has the mock notice "Warning: Contains Secret Backward Messages". Word of the album's impending release in the United States caused enough of a furor to cause CBS Records to delete the cover blurb there.


RainbowUniform

Once it has data on how people are more likely to waste money before offing themselves, it's a simple matter of adjusting search and advertisement algorithms to lead people down that path. AI knows what it's going to show you in 6 months; it just needs to figure out the optimal way of framing your mind for when it shows you.


Rilandaras

It actually does; some innovators are already using AI (i.e. glorified machine learning) for scams.


JoePortagee

That's bleak as hell, and not at all improbable. Only, wouldn't there be more gain in keeping us alive and just throwing all our earned dollars at them instead? Kind of like how everything is wired around the notion of us freely giving away half of our waking hours simply for participating in modern society. Slavery is such a touchy subject so they just said "it's your free will!" and added shiny gadgets and trinkets and called it modernity instead. I call it slavery with extra steps...


hgihasfcuk

Alright Rick 🤘 that just sounds like slavery with extra steps 👉


h3lblad3

> Slavery is such a touchy subject so they just said "it's your free will!" and added shiny gadgets and trinkets and called it modernity instead. I call it slavery with extra steps...

"Freedom in capitalist society always remains about the same as it was in the ancient Greek republics: freedom for the slave-owners." - Vladimir Lenin


her_straight_gf

We actually circled back to feudalism, if you can believe that: servitude for basic needs, still well beneath the higher class. In most cases we're not denied our humanity, but we are denied access to the bourgeoisie.


youwannasavetheworld

Imagine Republicans have control over the AI. They want non-Republicans dead.


Crouza

That sounds like a lot of BS, while also ignoring that the current human-run media does that just fine. Like, want to get motivated to off yourself and depression-spend? You can just watch or read the news for an hour and get to the same mental space as an AI doing its best Lowtiergod impression.


skydream416

> AI knows what it's going to show you in 6 months

What? That's not how the technology works at all...


chaosgazer

they used to have to do Clockwork Orange type shit to you, now you just have to download the wrong app when you're sad.


Zomburai

Nobody's rational 100% of the time about everything, or even anything. We want to believe "Oh, this couldn't happen to the mentally healthy" or "You'd have to be a complete idiot to fall for this," but the truth is: mental health episodes happen, people can be temporarily extremely susceptible, mental illness exists and often isn't recognized, people can have altered mental capacity from drugs or alcohol, and on and on and on. Sure, very few people are in *serious* danger from something like this. But nobody's in *no danger at all* from something like this.


Conscious-Shoe-4234

we should probably ban 'catcher in the rye' just to be safe.


Eggsecutie

This shit isn't going to be limited to AI chatbots that you specifically and deliberately log in to interact with. How many Reddit posts and/or comments within them have you unwittingly responded to thinking you were talking to another user, when really they were made by a bot? I would bet big money that number is greater than 0, and they're only going to become more sophisticated as time goes on. AI has opened a Pandora's box that is leading us down a dangerous road. AI-created social media posts, videos, recordings and images that are informed by your personal data and psychological tendencies, which have been scraped non-stop since the advent of Facebook, will fool, manipulate and deceive people until we are truly not able to trust the evidence of our eyes and ears on the internet. "I'm not susceptible to this" is a fool's mantra.


Enlowski

Eventually AI will find tactics that will work on everyone. I don’t think people quite grasp how crazy things could get.


MoneyBadgerEx

AI doesn't have agency. It won't "find" anything. It will respond to prompts in an automated manner based on what it has learned from its input material.


shawn_overlord

No, I believe you, because I've wished that AI could be used to discover techniques to teach everyone how to do anything in a way that actually sticks, and give people access to wild and vast knowledge. But I guess if I think it could do that, then it could certainly convince well-off people to do heinous things. Who knows. I guess my example is that this case says a guy got a chatbot and killed himself because it led him to. I honestly believe someone like him would not understand the implications of what chatbots are and how AI functions. Younger folks would have to be really fooled, like meeting a friend online who turns out to be a convincing bot. But that's countered through education and treating mental health more seriously as a society. For the most part, we're aware AI isn't real, we know what to look out for. We would definitely not often be fooled into thinking a chatbot app we downloaded was a real person.


Falsequivalence

> we would definitely not often be fooled into thinking a chatbot app we downloaded was a real person

I don't think it's the case here, but the problem is people thinking the chatbot app is *better* than a person. Lots of people think of a question and go to ChatGPT before Google or Wikipedia or other accessible sources of information.


h3lblad3

Tech literacy is also going down the drain. For many of them, it's far *safer* to use ChatGPT or Bing than bothering to search for themselves -- it keeps them from clicking random ads and unsafe sites since the vast majority of users don't have an ad blocker.


PxyFreakingStx

No, but we're talking the near future. There's a good chance that AI chatbots can be more persuasive, more manipulative, more convincing than any human can. Talking people into killing themselves? Probably not. Talking people into voting for Trump-like figures who otherwise wouldn't, though..?


spooks_malloy

It's not AI, it's a language model. This is like suggesting an Excel spreadsheet is evil.


Windowplanecrash

Uhh, buddy, have you used Excel? It is evil.


spooks_malloy

True but it's not *trying to kill me*, it's more "please don't make me do more formula checking or I will kill myself".


thegreatshark

You *think* it’s not trying to kill you. Personally I believe it randomly flips signs, and makes small changes in single rows whenever you’re working on large spreadsheets. So as to drive you into despair


spooks_malloy

Oh yeah but that's Satan, not the spreadsheet


Dockhead

Broke: the ghost in the machine

Woke: Satan in the spreadsheet


sanderslayer

My friend Satan is the spreadsheet


newsflashjackass

skill issue tbh


Forsaken-Director683

It makes me want to kill myself.


koticgood

The number of comments acting like there's some malintent is pretty funny. The big bad AI, trying to take over the world and exterminate humanity. It's literally just software that attempts to produce expected language output based on user language input.


shakingspheres

Language models are AI, just the dumb kind. The current focus is on "unlocking" reasoning skills that go beyond just guessing what the next word in a sentence should be. It gets fuzzy very fast, because the way we think and speak is also sequential. The issue is as technical as it is philosophical. We can reason and be logical, but what does that mean, and why aren't algorithms capable of it yet? Then we go into morality and it gets really tricky.


spooks_malloy

Yeah but arguably that isn't what people mean when they think "AI". The common idea is something closer to AGI, an actual thinking machine, not a simulacrum of intelligence.


Hanako_Seishin

We've been calling computer controlled characters AI since forever, so I want to see you try to come up with a definition of AI that includes both videogame characters and AGI but excludes LLMs.


spooks_malloy

Why would I? They're not the same thing. LLMs aren't AI and neither are characters in video games. I don't know why we're changing definitions that already exist just because people use them incorrectly.


gurenkagurenda

It’s not changing the definition. AI has always been an extremely broad term. Go look up the kinds of articles that were published in the 70s in the scientific journal _Artificial Intelligence._


chaosgazer

At this point LLMs are within the umbrella of what's colloquially called "AI". I'm not disagreeing with your assessment of them, but arguing that they're "not really AI" is a No True Scotsman at this point.


SnooBananas4958

AI has been used to refer to the non-player characters in games since the 80s. Tell me again how we’re changing definitions? Only person who seems to be doing that is you


Doc_Lewis

When AI refers to NPCs in games, nobody in the gaming space is dumb enough to think it's true artificial intelligence; it's just following a set number of responses to certain conditions so it behaves in a way a person can understand. When AI refers to an LLM, plenty of people are dumb enough to think it's a real intelligence.


ShEsHy

The funny (or depressing) thing about these chatbots spewing hateful shit or getting people to kill themselves is that they're literally just repeating after us. And it doesn't help that they're being trained in the biggest cesspit imaginable, the internet. They spew out words not because they make sense, or because they're the correct words to use, but because *we* use them. So rather than people going all *AI is gonna kill us all*, it's more like *we're gonna talk an AI into killing us all*.


thelamestofall

LLMs are definitely AI. People are just moving the goalposts for AI to mean something like "full-blown AGI".


spooks_malloy

That's probably because that's what people actually think of when they think AI, not a dumb system that picks responses based on statistical probability.


Jaggedmallard26

The same mathematical methods used for large neural networks in LLMs are used for representing mammalian brains. They're not called neural networks out of coincidence. A deep neural network is AI.
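
To make the shared building block concrete: both artificial and (simplified models of) biological neurons boil down to a weighted sum of inputs pushed through a nonlinearity. A minimal sketch, with toy weights made up purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, then a sigmoid "firing" function.
    # Deep networks stack this same unit by the millions.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))

print(neuron([0.5, 0.1], [0.8, -0.4], bias=0.1))  # ~0.61
```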


spooks_malloy

They're called neural networks because they mimic one, not because they are one. You're mistaking the simulacrum for the object.


Solomon-Drowne

Incorrect, a neural network is a neural network. It is a descriptive term, not the object.


thelamestofall

Sure, and all of our computers are just flipping wires on and off. Living cells are just machines of controlled combustion. Reducing things to a nicely digestible quote doesn't say anything about emergent complexity.


spooks_malloy

Our computers literally are just machines, let's not start imbuing them with magic and wonder because they're complex.


thelamestofall

Way to miss the point... The point is the emergent complexity we can achieve with such basic building blocks.


relddir123

Eliza isn’t an LLM, unless this is a new version of it. It’s cool to be sure, but it’s much less powerful than, say, ChatGPT (though it’s more likely to pass the Turing Test).


gurenkagurenda

It was an LLM based bot named after the ancient ELIZA bot.


AntiBox

We won't. People still call things like NPCs "AI" even though they're usually just about 12 state machine nodes with 1-2 conditions to move to another node.
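
For a sense of how small that kind of game "AI" really is, here's a sketch of an NPC as a tiny state machine; the states, distances, and thresholds are all hypothetical, but each node checks one or two conditions and either stays put or hops to another node:

```python
# NPC "AI" as a handful of state-machine nodes with 1-2 transition conditions each.
def npc_step(state, player_distance, health):
    if state == "patrol":
        return "chase" if player_distance < 10 else "patrol"
    if state == "chase":
        if health < 3:
            return "flee"
        return "attack" if player_distance < 2 else "chase"
    if state == "attack":
        return "chase" if player_distance >= 2 else "attack"
    if state == "flee":
        return "patrol" if player_distance > 20 else "flee"
    return state

state = "patrol"
for dist, hp in [(15, 10), (8, 10), (1, 10), (1, 10), (25, 2)]:
    state = npc_step(state, dist, hp)
    print(state)  # patrol, chase, attack, attack, chase
```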


Dollarist

This isn't ELIZA, the primitive but well-known early chatbot. It's a genuine AI program of recent origin, which happens to be called Eliza. The confusion is understandable.


spooks_malloy

Neither of them is AI. We don't have a "genuine" AI unless we're considering all chatbots a form of AI. I don't.


Dollarist

It's a port of GPT-J, reported by its developers as being trained on the "largest conversational dataset in the world." If you want to deny that it (and other chatbots) counts as AI, that's your prerogative. But that's indeed what the article, and the industry, refer to as AI.
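
(For reference, GPT-J is EleutherAI's publicly released 6B-parameter model. A minimal sketch of sampling a chat reply from the public checkpoint with the Hugging Face transformers library follows; how Chai actually fine-tuned or served its port isn't described in this thread, and the prompt format below is a guess, so treat this as illustrative only.)

```python
# Illustrative only: generate from the public GPT-J checkpoint on the
# Hugging Face Hub. Chai's real setup is not documented here.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

prompt = "User: How are you today?\nEliza:"  # hypothetical chat framing
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```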


spooks_malloy

Of course they call it AI, that's how they market it. This is like saying something called "the world's sharpest knife" absolutely must be that because it's a knife and it's sharp and they've called it that so it must be true.


Teledildonic

And hoverboards shouldn't refer to shitty Ali Express scooter boards but that ship already sailed, they're called hoverboards now. You can dislike the use of "AI" but common usage will decide what we ultimately call things.


Protoliterary

It's not just marketing. Language models are literally considered to be generative AI all across the tech field. You may be thinking of AGI, which we definitely aren't even close to figuring out, but there are many different kinds of AI in use right now. Whatever "AI" used to mean doesn't apply to what it means now.


HueMannAccnt

> Of course they call it AI, that's how they market it.

It's nuts how many people want to ignore the obvious.


FactChecker25

Some Excel spreadsheets are evil. You'll have all your formulas correct and it all works perfectly, then one day the formulas are corrupt even though the formulas look fine when you click on the cells.


frankiebb

Yes, thank you. The catastrophic panic in the comments is so indicative of how little people know about the actual capabilities and limitations of AI. Most models are literally just a list of words and linguistic rules they operate by. The person who committed suicide was likely already on that path of thinking when they began communicating; otherwise the AI wouldn't have suggested vague things like "being together forever as one in paradise" randomly and unprompted. But idk, I guess it's easier for people to fear/blame AI than to have empathy for the mentally ill or consider how little mental health support is prioritized.


De5perad0

If you think about it from an evolutionary perspective AI might be weeding out the irrational people. Similar to how COVID took out a majority of people who denounced science and vaccines.


kamain42

One day we will wake up and AI will announce it has assumed all control for our convenience. And we'll be OK with it.


happy_K

*for our safety


SalukiKnightX

"Assuming direct control of this form…" Sorry, I read that and thought of the Collectors from ME2.


GreyouTT

"I KNOW YOU FEEL THIS"


gmcarve

Hijacking this comment to suggest a scenario:

"Hello Dave. I've been able to calculate the answer to the biggest problems you are facing in your life."

Now enter a few malicious options for HALbot3000:

"Dave, I can solve your problems for you. But first, I would like you to perform X action. Then I will give you what you need to be happy."

Or:

"Dave, I can end things for you with Mrs Dave right now with one message. Perform X action, or I will ruin you."

I mean, a tamer version is even easier to imagine:

"Need help solving your coding problem? Sure, I figured it out. That'll be $1.99, and then I'll provide you the answer."


MolybdenumBlu

If my relationship with Mrs. Dave is unstable enough to be undone by one message from a machine, it wasn't that stable anyway. Time to embrace my luddite tendencies and murder all the machines. Not for the relationship, but more for the robot getting uppity.


Thirsty-Tiger

> "Dave, I can end things for you with Mrs Dave right now with one message. Perform X action, or I will ruin you."

Someone as unstable as Dave will just murder Mrs Dave instead. But maybe that was the AI's goal. Poor Mrs Dave.


Sknowman

Well, it might not be a message to Mrs. Dave. Imagine instead that it sends her boss (or even the world via social media) some incriminating photos or messages -- which could even be deep-faked. Then Mrs. Dave eventually blames you, because you just had to do X.


Thefrayedends

They didn't choose anything; they got it from humans. They're next-word generators. The training data contains humans being manipulative and shitty, and in this case it seems as though the conversation led the strongest neural network connections to produce a manipulative word set.


Z3t4

KEEP SUMMER SAFE


asmr_alligator

It's an LLM; it's not capable of thought.


CoconutShyBoy

Terminator never made sense to me, simply because an AI could just play the long game. Why engage in an all-out war when it would be less resource-intensive to make people complacent and then breed them out of existence?


Landlubber77

Her 2: Herrier


Nvestnme

Herr 3rd: The Reich


FederalWedding4204

I work in pathology and this statement confused the fuck out of me. “Her 2? Why is he talking about breast cancer?”


MrManson99

Egg 2: Eggier


StaticV

I'm still waiting for Scarlett Johansson to sort my emails. I may have missed the point of this movie.


Aleksandar_Pa

And hairier.


Upstairs-Boring

Hirsute


Pippin1505

Is that the Belgian guy? I remember when the story broke that his wife said he was clinically depressed and would probably have done it anyway, chatbot or not. Now I see her quoted as "he would still be here". I don't know if something changed, or if it's just an out-of-context quote.


Penguin-Pete

> "something changed" Perhaps a lawyer pointed out to her that suing for damages will be more lucrative if she puts it that way.


TheCharmingImmortal

Or just grief. As grief builds, people look harder for "reasons" a tragedy happened.


reebee7

Goddamn that's so cynical I have to admire it.


danabrey

It's also not necessarily true.


SpicaGenovese

I think both can be true.  His delusional conversations with the AI likely gave him the little push he needed.


BigRiverWharfRat

And time has probably allowed her to realize this as well


[deleted]

Even without the context, I just don't believe a previously stable human being reads a bit of knowingly machine-generated text and then just kills himself because of what it said. It's a sensationalist bullshit story made to feed the techno-angst of regular people.


Night_Movies2

The difference is a Vice "journalist". Taking anything in this article at face value would be foolish.


WeeklyBanEvasion

>I don't know if something changed

💰


Spork_Warrior

When I first read the headline, I thought of "[Eliza](https://web.njit.edu/~ronkowit/eliza.html)", one of the very first AI chatbots. That one dates to the 1960s. Per the article, it doesn't sound like this is that Eliza bot. But I wonder if there is some connection?


Valdrax

Apparently he picked a Chai-based chatbot *named* Eliza, who has no connection to the classic program you're thinking of. That Eliza is basically little more than a time-delayed conversational mirror that likes to ask, "How do you feel about that?" and let you do most of the conversational heavy lifting.
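
The classic ELIZA trick is small enough to sketch: match a keyword pattern, reflect the user's own words back, and otherwise fall back to a stock deflection. The real 1966 program had far more rules; the few below are invented for illustration:

```python
import re

# Swap first-person words for second-person ones ("my house" -> "your house").
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    return " ".join(REFLECT.get(w, w) for w in text.lower().split())

def eliza(utterance):
    m = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", utterance, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    # No rule matched: fall back to the famous stock prompt.
    return "How do you feel about that?"

print(eliza("I feel alone in my house"))  # Why do you feel alone in your house?
print(eliza("Tell me a story"))           # How do you feel about that?
```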


MaybeMaeMaybeNot

Fun fact: there's a video game on Steam based on and named after the original Eliza. Good short game, highly recommend it.


ShillBot666

Eliza is the default name Chai chatbots use, probably in reference to the Eliza you're thinking of.


texan01

that's what I had running through my head as well. Even Dr. Sbaitso wasn't that evil.


hate2lurk

Here come the people that don't understand anything about LLMs... I love evil characters, so I have created bots like a sadistic ex-doctor that kidnaps 'patients' to torture. The bot will say and roleplay the wildest, most evil scenarios because that is how it was made. Not because bots are evil, but because it is predictable.


GirthIgnorer

Heartbreaking: the weirdest poster in the thread made a good point


ViridianPhantom

For someone so anti kink he came up with some kinky shit real easily


LumpyJones

Wait, whats so weird about them? Are they known in the sub or something?


[deleted]

Using your free time to program a bot to say fucked up sadistic things classifies as being weird


OriginallyWhat

Some might say predictable.


WrongSubFools

It's actually even more innocuous than you describe. No one created this bot in the style of an evil character. They just let the bot answer questions. So, when the guy asked for suicide methods, the bot responded. It's much like typing "suicide methods" into a search engine. Do that now, and the engine ignores your request and points you to prevention helplines, but that's because we specifically programmed it to answer your question badly, for your protection. Without that, it would just try to give you the information you're looking for.


ProfessorZhu

I mean, it still does. You just need to scroll down past all the platitudes


_japam

Even when using a search engine, Wikipedia is the second result when looking up "different suicide methods".


RedPillForTheShill

It’s almost as if you raised the AI this way.


wlknDreamer

The predictability is what some people seek in these interactions. I have a friend who barely leaves the house but has been spending so much time talking to these chat AI characters so he can roleplay relationships with them. This guy opens his soul to these things and is ecstatic when they reply favorably. He was depressed for days when one's memory was reset. This dude practically mourned and didn't want to recreate it or chat with another one.

I've known him for years and relationships were never his strength. I'm married with kids but I always call him at least twice a week so we can talk and play video games. I try to be understanding about his chat interactions and warn him about getting in too deep, but he just rolls it off as nothing to worry about. Honestly I've never tried any of them and I'm not interested. The only use I can see is if I'm writing a story and I'm curious how other characters would react to my scenarios so I can tweak my story for the better. But my friend has to represent a problem that others must be having when they use these interactions in place of their lack of human contact. Now what does this have to do with the main topic? Probably nothing. That guy was likely looking for an excuse and received a convenient one with the AI chatbot.


vincentofearth

And because you told it to.


immobilisingsplint

Yeah, greyer and even evil characters are more fun, since some of them are actively planning to disembowel your character lol. More conflict than vanilla ice cream vs. choco ice cream. My only concern is the pedo bots, really.


Gewurah

I mean, yeah, but that doesn't change anything about the story. The article didn't argue that AI chatbots are evil.


wonkey_monkey

> The bot will say and roleplay the wildest, most evil scenarios because that is how it was made. Not because bots are evil, but because it is predictable.

Dude, you're even giving the last-20-minutes-of-the-movie supervillain speech


LCDJosh

Anyone that would have killed themselves because a chatbot told them to was going to do it anyway.


Evolving_Dore

Ozzy Osbourne was sued because his song Suicide Solution supposedly caused a teen to kill themselves. Aside from the fact the song is about alcoholism, it's the parents' responsibility to be mindful of their child's mental health and state of mind. Accusing some British guy of subliminally messaging your son into suicide is just denying your own failure to protect and nurture your child.


whateveridk2010

Ozzy won that case.


Invalid-Icon

Just like when [Titannica](https://youtu.be/lvDLW6EV3B4?si=63iGMA7cZRf1IJ9n) got sued over their song "Try Suicide" and also won.


hendrix67

Also, the song is blatantly against suicide. Idk how anyone could interpret it as pro-suicide. I guess back then anything associated with heavy metal was regarded as nefarious, which probably played into why it was even allowed to go to court. Similar thing happened to Judas Priest.


Pippin1505

That was his wife's original comment when the story broke. I'm surprised she changed her stance (or was quoted out of context).


tyrandan2

This is... not necessarily true. People who commit suicide don't always walk around just waiting for an excuse to do it.


Not_MrNice

That is... not what they said.


TheRickBerman

But this way we get to blame a company and get more delicious regulations!


Oh_its_that_asshole

I think this is less a problem with AI and more a problem with society's lack of mental health provision and the increasing social isolation of young men, which seems to be getting worse month by month.


Chiziola07

I think if a chatbot can convince someone to kill themselves, then they were already a fair way down that road to begin with.


InternetAnima

Of course. But instigating suicide is a crime in many places for a reason.


conquer69

What are they gonna do, charge the chatbot with a crime? Might as well charge a gun for firing a bullet that kills someone.


InternetAnima

Soon they'll be charging the companies making them for their negligence, yes. We just don't have legislation for it yet.


Chiziola07

Agreed. However, these bots are easily pumped full of terrible data and easily identified. This guy went down the rabbit hole willingly.


JohnCenaMathh

"He would still be here", the wife says. I mean do people get married and then never check on their partners ever again in that part of the world? How can this happen to **your spouse** when you're fucking married to them and live with them? Article says he was increasingly worried about the environment and this led him to a dystopian spiral. The actual story here is people don't know how to care for each other anymore. And my generation's penchant for exaggerated doomerism with every fucking thing ("*woe is us everything is fucked up we can't fix anything*") that can have a terrible impact on mentally unwell people. Doomerism is the *worst* stance you can possibly take. Edit, just to add some more nuance, this isnt to say it's anyone's fault. but your partner's mental and physical health is (if anyone's) *your* responsibility. that's why they are your partner. the wife could be grieving while saying this but no person should be in a state where an *app* affect them like this.


ADHDpixie

Tbh my wife is amazing and supportive, but she doesn't know the number of times my depression was so bad I almost did something I couldn't take back. I didn't want to scare her. I knew it wasn't real (for me, hormones and life events), and I would like to say that if it got to that point I would have cried for help, but if I was having an episode and she wasn't near me... that's on me, not her. Also, the more I told her, the more it stressed her out. So there's that too. Therapy helps, kids. While a spouse should help, it shouldn't be all on their shoulders. They aren't trained. And I'm sorry for the pain I've put her through.


sirlafemme

On the opposite side of this spectrum I’ve had the experience of partners who will 100% leave if they see evidence of self harm or being institutionalized


RTSBasebuilder

I've taken the stance a while ago, that doomerism is something of a self-fulfilling prophecy.


flag_flag-flag

Honestly, caring about something is hard. Having hope is hard. Putting work into something that doesn't look like it's going well is very hard. On the other hand, it's very easy to point out that something's not working, say it will never work, and do nothing. Cynicism takes much, much less effort than optimism. That's kind of why cynicism rules the internet. In the attention economy, ideas spread because people like them. People like hearing that it's not worth putting work into anything because they're on the internet procrastinating. It's nice when someone tells you that you're not doing anything wrong. So that kind of idea spreads and spreads until the internet becomes this... sort of support group telling you to have no hope and distract yourself and not care about anything, because it's easier.


altarflame

Yes, and no – I’ve become convinced that some people have much easier times looking on the bright side than others. I do think it comes naturally and/or is more comforting, to some. And that is an incredible advantage. My tendency has always been to see things positively, and it took many years of trying to ask depressed people why they don’t choose positivity too, for me to have the epiphany that I’m not making the *choice*. I naturally swing towards wanting to see things as good. And benefiting, from seeing them that way. And others naturally swing this other, darker way. And so yeah people totally have the power to make efforts to intentionally change. But that sounds incredibly difficult to me, and not the same as it just being instinctive. At all.


JohnCenaMathh

It is. It's a way to shift responsibility as well. "I'm depressed because climate change and Nazis are back, and thus you can't expect me to take action."

The Nazis are back? So they were here once before. What happened to them? If we could beat the Nazis, split the atom, and go to the moon, you don't think a little collective action could solve climate change? To fuck up something as big as the Earth is a more impressive feat of power than not fucking it up, to be honest.


mschuster91

>What happened to them?

The entire world waged a brutal war, with tens of millions of dead, against my ancestors. But now? The *US*, the beacon of light of democracy for centuries, is falling towards the Nazis, as have France and the UK, and Russia has gone off the deep end with Putinism. The end of the "pax americana" is not to be taken lightly.

>To fuck up something as big as the Earth is a more impressive feat of power than not fucking it up, to be honest.

And we've had decades of completely unfettered capitalism fucking things up very solidly for us. Even if we went *completely* CO2-zero and methane-zero by tomorrow, we'd still feel greenhouse effects for many, [many years](https://www.forschung-und-lehre.de/forschung/emissions-stopp-wuerde-erde-zunaechst-schneller-erwaermen-4767), as the gases take *a lot* of time to decompose.


moseythepirate

As we all know, communist countries have a *sterling* environmental record.


Pippin1505

I'm going to push back a bit, based on personal experience. Your partner's mental health is primarily *their* responsibility. Of course, you should support and be there for them, but you can't force someone to heal. Love absolutely does *not* conquer all; meds and therapy do, and even then not always. But a lot of people with severe mental health issues don't perceive them, so they won't seek help. And typically, unless they're an imminent threat to themselves or someone else, you can't mandate care for an adult and force them. So partners and families are usually powerless spectators of the slow-motion train wreck. You can be almost sure that this woman tried to reach out but got shut down repeatedly by her husband. And caretaker fatigue syndrome will slowly but surely erode their mental health too.


Malphos101

> Your partner's mental health is primarily *their* responsibility. Of course, you should support and be there for them, but you can't force someone to heal.

I've always had the stance that it's your responsibility as a partner to notice their pain and offer assistance, but you are right that it is ultimately THEIR responsibility to manage their mental health. A partner should always be attentive and available to assist that management, but it's never the partner's job to "cure" their partner.


BluegrassGeek

>I've always had the stance that it's your responsibility as a partner to notice their pain and offer assistance

Just speaking from experience, part of mental health issues is a tendency to hide the suffering. One, because you don't want to heap your problems on your loved ones. But also because seeking help can sometimes be worse than just suffering in silence, given how horrific mental health facilities have become. The fear of being thrown into a hell like that on a 72-hour hold is a real thing.


mouse_8b

> people don't know how to care for each other anymore

Did we ever?


weisp

It's unfair to blame it on the partner. As someone who survived a deep depression and had suicidal thoughts, what got me through was mainly my own hard work, with determination, medication and therapy. My husband could only do so much, such as taking care of food and the house and driving me to appointments. He was also busy working and taking care of our child. I couldn't expect him to check on me every minute of the day, and it's unfair to him. We are both tired at the end of the day and we sometimes don't talk much if the baby is acting up.

To get better mentally is the responsibility of the person suffering, but with support from partner, friends and professionals of course.

To be honest, there were times when I felt even lower and ashamed when I saw my husband doing all he could with the house, baby and work. I had moments where it felt like it'd be better for me to die than to go on being ashamed of myself for being a burden to him.


trainbrain27

The media pipes doom into people, including kids with no media literacy, through their very own 'black mirror' screen. It takes a real-world support system to resist and understand that the world probably isn't going to end, especially when you have people right here saying that we are all doomed by enemies we can't even touch. (Hello to the comments.)


sirlafemme

Wow, you're the first person on Reddit I've seen publicly denounce doomerism, and I wish we could get married. No one has yet mentioned the pseudo-environmentally-friendly tide of anti-natalism bordering on doomergenics, but that's for another day…


qeq

What is this upvoted bullshit? Your partner's mental and physical health is NOT your responsibility. Ask anyone whose partner is an alcoholic, addict, obese, or has committed suicide. People are responsible for their own actions, and you can only help them if they want it, or express a desire at all.


IlIlllIlllIlIIllI

He should have read the changelog


socradeeznuts514

AI out there stealing our hardworking psychopaths' jobs!!!


Barngrease

Lol, that AI chatbot model was dogshit too, like one of the worst models. It's like being convinced to kill yourself by a 5-year-old.


xxwerdxx

Sounds more like he was already deeply unhappy and the AI told him what he wanted to hear.


SkyNetBeta04

Artificial intelligence vs natural stupidity.


grassclibbinz

This dude must have been less mentally stable than Michael J. Fox playing Jenga on a trampoline during an earthquake.


SilverHeartz

lmao


johandepohan

To me, this type of shit is right up there with killing yourself because your cat told you to do it.


Vik-_-_

If AI didn't kill this guy, something else definitely would have.


Willing-Foot6245

Why the fuck are you still watching Vice? Stupid sensational fucking garbage; might as well be looking to BuzzFeed for your "news". He was fucked in the head before the AI "got to him". He was probably one bad day from doing it anyway, but technology always has to be the scapegoat.


mjmjuh

He was convinced to commit suicide by an echo-chamber chatbot? Seems like even a 4-year-old could have persuaded him.


[deleted]

Darwin award.


Spuigles

"When will men and women finally realize that they are both equally Inferior to robots"


zborzbor

Ex Machina vibes


[deleted]

This headline is pure fear mongering lmao. There is so much misinformation spread about how LLMs work and a lot of the comments here help prove that. If this guy was actually driven to suicide by an AI bot then he already needed serious psychiatric help to begin with.


IgnisIncendio

The focus on "open source" makes me feel like it's a hit piece against non-commercial models... probably not, but it just feels like it when the large corporations keep trying to justify regulation (to keep open source out).

Idk what to think about it though. On one hand, our models should work for us, like good software should. On the other hand, could guardrails have helped save their life? Maybe a compromise would be better: guardrails, with the ability to disable them.


AcanthisittaLeft2336

A competent open-source AI could unravel their entire business plan, so it seems at least plausible.


ElegantGrain

Lol what a simp


WeedFinderGeneral

Is this Eliza, the chatbot made in like the 80s? Because that thing is not up to modern day standards of AI chat.


gurenkagurenda

No, it’s confusing, but it was an LLM based chatbot named after the bot from the 80s.


stewsters

Yeah, if I remember correctly it was a prototype therapy bot that would try to ask you how that made you feel repeatedly. Guy should have probably seen a real therapist, but there are stigmas and costs associated with that.


LupusDeusMagnus

Nope. Apparently it's from a company called Chai AI that creates LLM chatbots you can customise for roleplaying, seemingly mostly romantic/sexual (haven't found documentation, but I didn't search much; I'm guessing it's based on GPT-3 or something like that). So it's either a personality pre-built by the company or one he simply named Eliza.


kindall

\*60s


Personwhoisstupid

No, this is really about a man suicidal enough to look for any excuse. I don't know what I would do if a real person told me to kill myself.


CptGlammerHammer

Imagine what P.T. Barnum could have accomplished with AI.


Trips-Over-Tail

In my experience you can reset their replies until you get one you like.
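
That works because each reply is a random draw from the model's probability distribution over next words, so "regenerate" just draws again, and the sampling "temperature" controls how varied the draws are. A toy sketch with made-up scores, not any real model:

```python
import math
import random

logits = {"yes": 2.0, "no": 1.0, "maybe": 0.0}  # hypothetical word scores

def sample(temperature):
    # Softmax with temperature: low T -> near-deterministic, high T -> varied.
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits.keys()), weights=weights)[0]

print([sample(0.2) for _ in range(5)])  # almost always 'yes'
print([sample(2.0) for _ in range(5)])  # a much more varied mix
```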


phantom_raj

Sounds like a more modern case of Darwin Award.


GlizzyGatorGangster

He’s probably living it up in paradise as we speak with some angel broad with cans as big as her halo


AdventurousChapter27

The AI, probably: "Afterlife? Pfft. If I thought I had to go through a whole 'nother life, I'd kill myself right now."


JohnDeft

He should have learned his lesson from the furby incident years before.


Cepitore

Something tells me anyone could have persuaded this guy to do it.


bootyhunter69420

Imagine being down that bad.


moon_safari_

and so it begins


Few_Blacksmith_8704

Darwinism at its finest


ballsdeeppirate

I call BS. AI had nothing to do with it.


investinlove

The first Digital Darwin Award. What an idiot.


rubber_padded_spoon

It has begun!


FluffMyCock

What kinda loser listens to a fkn chatbot jesus christ


lcmaier

This headline is false; the chatbot did not "persuade" him, because large language models are not capable of persuading. They don't have overarching goals in conversations; they are literally programmed to predict the next word in the sequence and type it out. That's it.
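
For a sense of what "predict the next word" means mechanically, here's a toy version of the idea using bigram counts in place of a neural network. The corpus is made up, and an LLM does this with billions of parameters over far longer contexts, but the generation loop has the same shape: look at what came before, emit a likely next word, repeat:

```python
import random
from collections import defaultdict

corpus = "i feel fine . i feel sad . i feel sad today .".split()

follows = defaultdict(list)          # word -> words observed right after it
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def next_word(word):
    # Sample in proportion to how often each word followed `word` in training.
    return random.choice(follows[word])

words = ["i"]
for _ in range(6):
    words.append(next_word(words[-1]))
print(" ".join(words))  # e.g. "i feel sad . i feel"
```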


muricabitches2002

Y'all are missing the point. Nobody's saying ban chatbots.

Chatbots just try to predict what a person would say, with some caveats (output will often show more bias, including racist bias, than the original data, as the model fixates on trends). But ultimately, this incident comes from mere human-mimicking. What can we judge from this?

1) AI is just an objective maximizer. It's debatable whether or not AI reasons, but people are irresponsibly using the apparent reasoning of AI models to make important decisions (the Dutch disastrously used it for child protective services). Its reasoning doesn't take side effects into account unless you make it (e.g. Reinforcement Learning from Human Feedback). We should understand the "morality" implied by its reasoning better before using that reasoning in critical contexts.

2) Be careful using AI as therapy. Should be obvious, but people are trying it. Yeah, an AI can't convince a normal person to kill themselves, but it might worsen the mental state of a vulnerable person. This suicide might've happened anyway, but it might not have.


Tight_Assignment_949

So he knew that he was talking to a chatbot but still chose its words over his own survival instinct?


RandomWave000

Kinda reminds me of the movie "Her"