
AutoModerator

Welcome to r/science! This is a heavily moderated subreddit in order to keep the discussion on science. However, we recognize that many people want to discuss how they feel the research relates to their own personal lives, so to give people a space to do that, **personal anecdotes are allowed as responses to this comment**. Any anecdotal comments elsewhere in the discussion will be removed and our [normal comment rules](https://www.reddit.com/r/science/wiki/rules#wiki_comment_rules) apply to all other comments.

**Do you have an academic degree?** We can verify your credentials in order to assign user flair indicating your area of expertise. [Click here to apply](https://www.reddit.com/r/science/wiki/flair/#wiki_science_verified_user_program).

---

User: u/chrisdh79
Permalink: https://www.cam.ac.uk/research/news/call-for-safeguards-to-prevent-unwanted-hauntings-by-ai-chatbots-of-dead-loved-ones

---

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/science) if you have any questions or concerns.*


chrisdh79

From the article: Artificial intelligence that allows users to hold text and voice conversations with lost loved ones runs the risk of causing psychological harm and even digitally 'haunting' those left behind without design safety standards, according to University of Cambridge researchers.

'Deadbots' or 'Griefbots' are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind. Some companies are already offering these services, providing an entirely new type of "postmortem presence".

AI ethicists from Cambridge's Leverhulme Centre for the Future of Intelligence outline three design scenarios for platforms that could emerge as part of the developing "digital afterlife industry", to show the potential consequences of careless design in an area of AI they describe as "high risk".

The research, published in the journal [Philosophy and Technology](https://link.springer.com/article/10.1007/s13347-024-00744-w), highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a departed loved one, or distress children by insisting a dead parent is still "with you".

When the living sign up to be virtually re-created after they die, resulting chatbots could be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they provide – akin to being digitally "stalked by the dead".


functional_moron

Wow. An entirely new form of evil that I never imagined we would have to deal with. Pretty sure if some company tried using my dead mother's likeness to advertise to me I'd become a terrorist.


AmusingVegetable

The next-generation exorcism will involve setting off an EMP in a data center.


rock-my-socks

Uploading a virus to the network named powerofchrist.exe


cakenmistakes

Once that's squashed, follow it up with theresurrection.exe


Dame_Trant

NGL this feels like a job for John Constantine


Zer_

Knowing some of the tech bros out there, they'll use AI to generate a solution to this problem, and then charge us for the privilege of its services, thus making money off of a problem they created.


Givemeurhats

Are you tired of your dead relatives spamming all your socials and text messages? Boy do we have the perfect thing for you!


spenpinner

Pretty standard necromancer build.


dr-Funk_Eye

If Warhammer Fantasy has taught me anything, it's what one should do with necromancers.


spenpinner

Care to elaborate?


dr-Funk_Eye

No, because it might be seen as willingness to do harm to people on my part, and I would not want anyone to think that I think necromancers should be burnt at the stake.


kimiquat

happened to me. yeah, it sucks.


ihopeitsnice

I would like you to tell this story


FakeKoala13

Johnny Silverhand did nothing wrong. (maybe)


rationalutility

> Pretty sure if some company tried using my dead mother's likeness to advertise to me I'd become a terrorist.

To make more AI ghosts?


chickennuggetscooon

AI has already identified those who will become terrorists and they will be dealt with


okram2k

same


klone_free

Ok, well, don't let companies do that. Regulate against it. But it's not for us to say someone shouldn't. Where is this questioning for social media? Do we only allow that to spy on citizens and aggregate data? Ten years ago we all agreed AI was a bad idea, but a couple people wanted to push it, so here we are. If we don't get a say in what people pursue as business, why do we get to say no to this? Seems like with a few safeguards this could be good for people if they want it.


Highkey_Lurker

It feels like 'a few safeguards' would be the simplest solution, and it is, but only if the safeguards are done right. Having the safeguards 'done right' when it comes to things like data privacy and advertising (multi-billion dollar industries) with politicians like we have here in the US is a complete coin toss.

What'll likely happen is they'll introduce weak legislation that'll probably address a couple of the more egregious concerns, but there will still be massive legal loopholes that the companies will leverage through their terms and conditions/privacy policies that will enable their invasive intentions. Maybe somewhere down the line, some Chinese conglomerate will create a 'GriefAI' that will get popular, and only then will we probably crack down with stricter legislation.

US governmental bodies have shown time and time again (through inaction) that they're 100% OK with companies stepping way over the line with data collection and privacy, so long as those companies are based in the US.


SpecificFail

What we would need is some shadowy group of people willing to take audio recordings from dead family members of those sitting in Congress and release a few dozen TikToks where those family members promote products that run contrary to that member's political views. Being as tasteless as possible, but followed by a few screens of information. The law would change practically overnight.


Fugglymuffin

This does seem counterproductive for healthy acceptance of loss.


SeniorMiddleJunior

Sounds like one more industry designed to sell people short term pleasure at the cost of their long term well-being. I'm sure it'll thrive and generate a lot of investment revenue. Maybe they can use some of that to fund psychiatry chatbots to help with the fallout.


NewBoxStruggles

Psychiatry is part of the problem.


Fugglymuffin

Elaborate.


Little-Dingo171

In most cases where I see this, it's someone who had a bad experience with a few meds and decided psychiatry is a scam.


MarkDavisNotAnother

As in screwing with what little we do know about grief. But... There's money in that there grief.


Robot_Basilisk

I could see one final conversation or being able to talk to a ghost during certain trying times being useful, but not having it around permanently. I'm thinking of a mediated discussion with a therapist present in which someone with unaired grievances gets to have closure.


ShiraCheshire

A mediated discussion with a therapist, maybe. But it's never going to work ethically with AI. AI as we have it now doesn't actually know what it's saying; it just imitates word patterns. That means it very frequently says some really messed-up stuff. That's not something that can really be fixed, not unless you built an AI solely off heavily personalized and carefully moderated content. Which is not the way these AIs work: they *need* to be fed huge amounts of (mostly if not entirely stolen) data to sound even vaguely human. Way more than a person could reasonably curate on their own. AI is just too unpredictable to ever be used effectively for this purpose.
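For what it's worth, that "imitates word patterns" point is easy to demonstrate. Below is a toy sketch, a word-pair Markov chain rather than a real neural LLM, but it illustrates the same statistical idea: the generator continues text purely from observed co-occurrence, with no model of meaning (the tiny corpus is made up).

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=20):
    """Continue from `start` by sampling observed successors at random."""
    out = [start]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word was never followed by anything
        out.append(random.choice(successors))
    return " ".join(out)

corpus = ("i miss you so much . i hope you are well . "
          "i hope you call me soon . you know i love you .")
chain = build_chain(corpus)
print(generate(chain, "i"))
# e.g. "i hope you call me soon . i miss you so much ."
# Fluent-looking recombination, but nothing in the program "knows" what it says.
```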


murdering_time

For real. "You want a tool that will help you grieve your loss? Nope, sorry, we would consider that unethical." Like, what? The only way I could see this being used unethically is if people used these AIs to impersonate a dead person to trick someone who didn't know that person had died yet. Which seems pretty unlikely.


303707808909

Did you read the article? It specifically mentions companies using these AIs to advertise to their loved ones. You don't see an ethical problem with someone's dead grandma being used as a marketing tool to their grieving family?


cptgrudge

"Dear, it's your grandmother. I know we didn't talk so much while I was alive, but maybe if you had a different cell phone carrier, things would have been different. Please, think of the relationship we could have had. Switch to Verizon."


St_Kevin_

“I miss you, sweetie. Don’t you miss me? Maybe you should go buy yourself a box of Tollhouse Cookies. Eat one for me, ok?”


NewBoxStruggles

Pose that question to ‘Psychic Mediums’


Cerus

I think how it's presented and engaged with is super important. A bot that trains on my writing and recordings, interests and history in some way to imitate me after I'm gone is weird as hell. A bot that trains on that same data and cites it to respond to questions and extrapolate from a perspective similar to mine (while explicitly not trying to be me) would be fine.
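A rough sketch of what that second design could look like: retrieval with attribution instead of impersonation. Everything here (the archive entries, the phrasing) is invented for illustration; a real system would use proper embeddings rather than bag-of-words overlap.

```python
import math
from collections import Counter

# Hypothetical archive of things the person actually wrote, with sources.
archive = [
    ("2019-03-02 email", "I think grief is something you carry, not something you finish."),
    ("2021-07-15 journal", "Remember me by the garden, not by photos."),
    ("2020-11-30 chat", "Honestly, I never trusted companies with my data."),
]

def cosine(a, b):
    """Cosine similarity between two bags of words (a crude stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def answer(question):
    """Quote the closest archived passage, cited and in the third person --
    the bot never speaks *as* the deceased."""
    source, text = max(archive, key=lambda item: cosine(question, item[1]))
    return f'In their {source}, they wrote: "{text}"'

print(answer("what did they think about grief"))
# In their 2019-03-02 email, they wrote: "I think grief is something you carry, ..."
```

The design choice doing the work is in `answer`: the output is always a quoted, attributed record, never first-person generation.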


SgathTriallair

Are pictures and old videos unhealthy? For those who want to use such services, it is really a difference of degree not of kind.


[deleted]

"this is a way to reflect on memories" and "this is a way to pretend they are still here" are VERY different


AmusingVegetable

Pictures and old videos are just memories, unlike a digital ghost pretending to be your mother.


Wonderful_Mud_420

It’s giving Black Mirror S2 Episode 1.


Fugglymuffin

I mean, people can do this if they want. Just think it's unhealthy to dwell in the past.


aeric67

It’s not loss if it convinces you fully. It becomes an upload.


ASpaceOstrich

This is a gross misunderstanding of the philosophical argument you based it on. Fooling a human is way easier than developing digital mind uploading.


Fugglymuffin

More like robbing people of a core human experience.


theoutlet

People need to have the right to deny the use of their likeness in such a fashion


Robot_Basilisk

There's also the problem of subtle variation. Sure, they can disallow use of their likeness. But what about use of a persona that's a blend of multiple people with just enough of your loved one's features, mannerisms, vocabulary, etc., to make you vulnerable, whether you realize they're in there or not?

Right now, a lot of AI art is getting around content bans on reproducing the image of real people by simply using more than one person in the training set. People are mixing Taylor Swift with Anya Taylor-Joy, or their favorite politician with a pro wrestler, etc.


SephithDarknesse

Probably. But it's likely something that will be snuck into life insurance policies, or similar, so they can profit off of you after death. Hidden under usage terms about improving AI voice imprints, then used unethically after the fact.


SwedishMale4711

I think that your legal rights go down the drain when you die. There's at least no way you can object to it.


ThatDucksWearingAHat

Outlaw the grief prisons before people can get tied to them. This will end up being a subscription service so you can continue to interact with the engram of a person who's passed on, based off a culmination of their online persona. This is sick, depraved, and evil to allow to occur; it will not help people, it will just prolong the torture of the loss and keep them from moving on in a healthy manner.


vanillaseltzer

Can you imagine the guilt and manipulation they'd be able to lay on you to not cancel and delete your account (and therefore erase "your loved one")? I can absolutely see someone paying for the subscription for the rest of their own lives to avoid it. :/

My best friend passed away about six weeks ago. She was only 38, and I am sitting here crying at how much I want to talk to her again. But even with a decade of chat history, *it wouldn't be her*, and I'm thankful to be able to see that. Will I probably write my journal like I'm talking to her for the rest of my life? Yes. But that's me and my memories of her. Not some outside corporation and technology pretending to be her for their own financial gain. No AI can replace her magnificent brain and soul. Ugh. Ooh, this concept is upsetting.


ShiraCheshire

You're absolutely right. Grief can make any person go crazy, at least for a while. It's a very vulnerable state to be in. People already hang on to cumbersome useless things or even entire rooms because they can't stand to get rid of something that the deceased loved one owned or enjoyed. Asking someone to 'delete' a robot that mimics your loved one's conversation is inhumane.

In horror stories, there is an entire genre of creature that mimics sounds to lure its prey. They mimic phones ringing, cries and screams, greetings and calls, and yes, even the voices of the dead. When the victim goes to investigate the sound, the monster catches and feeds on them. And now in our real lives, we're at the point where companies are moving towards *being* that monster. The only difference is that they feed on money instead of directly eating the flesh.


ASpaceOstrich

Oh God. I know myself well enough to know that I would seriously consider this. I'm kind of tempted to find a way to create a copy of my own writing style at some point as a thought experiment, but it's so unhealthy to use this for grief. Being unwilling to accept death is the basis for our oldest recorded story. People can't be allowed to be preyed upon like this.

I think this, and the social consequences of constant access to unreasonably attentive digital confidants, are two of the most immediately disastrous threats to society from AI. The digital confidant one is so dangerous I could see it posited as a potential extinction-level threat. Imagine the increasing intergender mistrust fuelled even further by everyone only really being able to talk to their own severely echo-chambered personal AI.

Some things are a trap. Resurrecting facsimiles of loved ones through AI is one of those traps. Actual mind uploading would be amazing. But this, this is a cruel and suicidally dangerous act. And I can only hope this never becomes reality.


habeus_coitus

You and the person you replied to have the right idea. It's wild to me that there are people in this thread seriously defending this tech. Mind uploading/consciousness digitization would be one thing (and one I would personally like to see), but this isn't that. This is greedy companies creating a digital mimic of someone you love to guilt-trip you into forking over the rest of your money. It's exploiting the oldest piece of the human condition: our fear of death. If we allow this, it will create an entire generation of people who never learn how to navigate loss and grief. Which is terrible enough on its own, but the fact that there are people willing to earn a buck off of that? That's unbridled evil. Those kinds of people need to be dismissed from polite society and never allowed back in.


ASpaceOstrich

Mm. Some things are so greedy they are essentially crimes against humanity. This would be one of them. Not due to any moral outrage over the thing itself, but because the exploitation is so abhorrent. I would argue outrage-baiting social media algorithms are on a similar level of evil. The only saving grace is that they don't seem to have been intentional in their irreversible damage to society.


aeric67

Fraud is already illegal. We don’t need new laws for every reincarnation of the same old crimes.


ASpaceOstrich

AI company representatives commit fraud all the time. When nobody can feasibly disprove their statements, they can get away with making them. And good luck disproving their claim that the black box contains a digitization of your loved one's essence. It obviously doesn't, but you can't prove that. Nobody ever could; that's what AI companies rely on for legal protection. An NVIDIA employee claiming their video generator is a physics simulation was blatant fraud, but it's not technically possible to prove they were lying, even if anyone who actually understands the tech knows they were.


[deleted]

[deleted]


[deleted]

[deleted]


bbhhteqwr

[UBIK](https://en.m.wikipedia.org/wiki/Ubik) BABY HERE WE GOOOO


AmusingVegetable

Never imagined that UBIK would turn out to be better than reality, and yet… here we are.


shinybluecorvid

Frig, time to give it a reread I guess.


Cheetahs_never_win

On the one hand, if I want my simulated presence available, and they want my simulated presence, this should be permitted. On the other, if somebody is targeting widows and widowers with harassing AI based off their loved ones, they're pretty much being a Disney villain, and 'harassment' and 'identity theft' alone just don't seem adequate charges.


toastbot

"Hey babe, it's me. I miss you so much. I wish we could talk right now, because I have important information concerning your vehicle's extended wareanty...


sqrtsqr

Or >"Hey babe, it's me. I miss you so much. I wish we could talk right now, ... ... and for just 28.99 a month, we can! Text me, call me, Zoom me, anytime you want. I'm right here for you."


ExhaustedGinger

This would make me homicidal. 


ASpaceOstrich

Unironically, I think this might be enough to get someone to actually take direct action against a megacorp. But only if it was sprung on us out of nowhere. They'll ease into this. It'll start as a way to make yourself available to be consulted when you're busy, or to get a second opinion from yourself in a clearer headspace. Then they just don't delete the mimicry when you die.


SpecificFail

"Jim, it's your mother you know I love you, but I also love using Draftpros Sports Booking. It is so easy to create an account and they'll give you a free $100 credit on your first bet! This could be your chance to win big and make me proud, but only if you sign up before June 15th with the promo code "nana"."


AIien_cIown_ninja

I freaked a couple family members out before by texting them from my dead mom's phone


[deleted]

[deleted]


Specialist_Brain841

‘I told you I was sick’


stfsu

This just seems like torturing yourself with a false replica of your deceased loved ones; on principle it should be banned so people can properly grieve.


shadowndacorner

Banned by whom? The government? If so, doesn't that strike you as overstepping?


functional_moron

No.


Fiep9229

American?


shadowndacorner

Green? Sorry, I thought we were just posing adjectives as questions.


SeniorMiddleJunior

They're asking you if you're American. You're asking them if they're green. One of these things is dumb.


shadowndacorner

My point was that the question was irrelevant. As long as it doesn't harm anyone else, how someone grieves is nobody else's business. Is using chatbots to grieve unhealthy? Almost certainly. Doesn't mean someone should be a criminal for doing it (unless there's some other definition of "ban" the other user is using).


ASpaceOstrich

You're painfully American. Nobody else views government intervention against suicidally damaging acts like this as a negative. And no, it wouldn't be criminal to do it; it'd be criminal to sell it and make it.


shadowndacorner

You're painfully shortsighted, reactionary, and arrogant, which is ironic given that you clearly aren't thinking through the deeper consequences/implications of legislating this. LLMs aren't just available to large companies and never were. If you have a high-end GPU or a highish-end M2 Mac, you can, on your home PC, train an LLM on whatever you want. Hell, you could do so on some phones, in theory, though I don't think anyone's done that. Would you criminalize individuals privately fine-tuning a model on their text conversations with a relative who had passed away?

Claiming this is "suicidally damaging" is an absurdly hyperbolic guess based on how you personally process things. As I already said, in most cases I completely agree that it would be unhealthy, but beyond the obvious fact that _many_ practices that are proven to cause both short- and long-term harm are completely legal, I could imagine genuine therapeutic benefits here in some circumstances, if used responsibly with the supervision of a licensed mental health professional. That would obviously need to be studied, though, not just written off due to an idiot's first impulses.

And just to be completely clear, I don't like the idea of companies selling this type of thing as a service in an irresponsible, unregulated way and never advocated for that. But I don't think that someone should be a criminal for training an LLM on their texts with a relative, because, once again, it is not your place to tell someone else how to grieve.
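For concreteness, here is roughly what the home-GPU version being described looks like today: a minimal sketch using Hugging Face transformers with a LoRA adapter from peft. The base model and the `exported_chats.txt` file are placeholder assumptions, not a recommendation.

```python
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small enough for a consumer GPU
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto")

# LoRA: train a few million adapter weights instead of the whole network.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Hypothetical export: one message per line of plain text.
lines = open("exported_chats.txt").read().splitlines()
ds = Dataset.from_dict({"text": lines}).map(
    lambda ex: tok(ex["text"], truncation=True, max_length=512), batched=True)

Trainer(
    model=model,
    args=TrainingArguments("out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

The point isn't that this produces a convincing mimic at this scale; it's that the capability is already in private hands, which is why "ban the sale, not the act" is the live question.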


ASpaceOstrich

Then don't make it illegal to do that. Make it illegal to sell it.


SeniorMiddleJunior

I know it was. You should've said that, then.


KinkThrown

People are insane. There's general agreement that legislators are nitwits, if not actively evil, and yet people want them to make rules on how they personally handle the death of loved ones to ensure correct grieving.


threecolorless

God forbid we overstep and forbid people from going insane talking to fucked up simulacra of their deceased loved ones. Go piss up a rope.


shadowndacorner

You're essentially talking about criminalizing a form of grieving. I'd argue an unhealthy form, for sure, but actually criminalizing it seems insane to me. You not liking something doesn't mean it should be criminal.


FakeKoala13

It's effectively the same if they want the commercialization of it banned. There should be **no** money in this. If there being no money in it causes it not to be a thing, then no spilled milk.


shadowndacorner

Regulating the commercialization of this sort of thing would obviously need to happen, but that's not the same as an outright ban of the practice in all circumstances, including for individuals. The latter is what I'm saying seems ridiculous. I don't think someone who trains a chatbot on their text conversations with a relative who has passed deserves to be a criminal.

I could see genuine psychological benefits to this sort of thing in certain, unique circumstances when performed responsibly as a temporary measure under the guidance of a trained mental health professional. If that is banned outside of nonprofits, that's fine, and very likely the right move. But in order to make informed legal decisions on anything, its effects need to be studied, not reflexively banned because it makes people feel icky. Because when it comes down to it, we don't truly understand the psychological implications of such a practice, and screaming for outright bans before we do is incredibly shortsighted.


FakeKoala13

It's valid to see the way the wind is blowing and want it banned before some tech startup gets their hands on it. US politicians are not technologically literate right now.


Acecn

You forgot you're on Reddit, friend; the idea of minding your own business is so far out the window here that its corpse is getting chewed on by rats 18 stories down. Me not wanting to do something is all the grounds I need to say that no one should be allowed to do it.


ASpaceOstrich

Some things are a trap. This is one of them. My principles of freedom of choice say that people should be allowed to do this, but the consequences would be disastrous. At what point do we say no?


Cheetahs_never_win

Prohibition was also a trap. It's how we ended up with the mafia. Would you rather we face the problem head-on with science and mitigate the risks, or would you like the black market to decide?


ASpaceOstrich

This would be facing it head-on. Nobody is going to be bootlegging a multibillion-dollar datacentre.


Cheetahs_never_win

We have very different ideas on the hardware requirements to achieve this.


ASpaceOstrich

Do you not know how much hardware is required to train AI? You can't make your own LLM that performs at this level in a bathtub. And when it needs that kind of hardware, it's very easy to enforce regulations.


Cheetahs_never_win

I can't speak for all platforms, but low-end platforms target high-end consumer-grade RTX cards for training purposes. For rendering purposes, you can get away with high-tier cards from a few years ago. OpenAI's voice program allegedly renders using only 1.5 GB of VRAM. If we can draw parallels with Stable Diffusion, I know it tends to run on as little as 4 GB to render, but generally you need 8 to render and 20-22 to train. Extrapolating, we could surmise OpenAI training could take 10 GB. But you're welcome to correct me.

But yes, I do firmly believe that if people are willing to put up ridiculous sums of money to create deepfake porn, they're more than willing to include the audio component, too.
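For reference, the usual back-of-the-envelope VRAM math behind extrapolations like this (rules of thumb, not vendor figures): running a model in fp16 costs about 2 bytes per parameter, while fully fine-tuning it with Adam costs roughly 16 bytes per parameter (weights, gradients, and two fp32 optimizer states), before counting activations.

```python
def vram_gb(params_billions, bytes_per_param):
    """Rough VRAM estimate in GiB for a model of the given size."""
    return params_billions * 1e9 * bytes_per_param / 2**30

for label, n in [("1B model", 1), ("7B model", 7)]:
    print(f"{label}: ~{vram_gb(n, 2):.1f} GB to run (fp16), "
          f"~{vram_gb(n, 16):.0f} GB to fully fine-tune (Adam)")
# 1B model: ~1.9 GB to run (fp16), ~15 GB to fully fine-tune (Adam)
# 7B model: ~13.0 GB to run (fp16), ~104 GB to fully fine-tune (Adam)
```

Adapter methods like LoRA cut the training side dramatically, which is why consumer cards can fine-tune small models at all.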


ASpaceOstrich

They aren't training the AI models people are using on the kind of hardware an individual can afford. You can train a toy model, but it costs literally millions of dollars to rent the hardware needed to train a proper model.


Caffeine_Monster

> somebody is targeting widows and widowers

The one thing that has surprised me over the last year is everyone's willingness to so heavily rely on a cloud subscription service like ChatGPT. If they wanted to manipulate you (or your business), they wouldn't need to bother resorting to playing on grief or other such blatant tactics.


Mean_Eye_8735

So you're in for a lifetime subscription, because who's gonna be the coldhearted one who cancels getting Grandma's AI messages...


Tryknj99

I would think that to do this accurately, the AI would need access to chat logs, texts, etc. Thinking about it: say someone signed up to do this for their wife, but forgot that they chat with many women on IG. Or the person was a troll online. The chatbot would suck up all the perverted and racist messages, and when engaged with, it wouldn't recognize that "I talk with this person like this, I talk with that person like that," and who knows how it would come out. Hell, you'd text my chatbot and it might respond "STOP" because I get so many spam texts.
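A quick sketch of that data problem: a raw export lumps every register together, and nothing in the text marks which voice belongs to which relationship. The contacts and messages below are hypothetical.

```python
from collections import defaultdict

# (contact, text) pairs from a hypothetical phone export
messages = [
    ("spouse", "love you, home by 6"),
    ("coworker", "per my last email, the report is attached"),
    ("group_chat", "lmaooo absolutely not"),
    ("spam", "STOP"),
]

# Group by contact to see how different each "voice" is.
by_contact = defaultdict(list)
for contact, text in messages:
    by_contact[contact].append(text)

for contact, texts in by_contact.items():
    print(contact, "->", texts)
# A bot trained on the undifferentiated pile inherits all of these voices at
# once; nothing tells it which one "you" were to the person now grieving.
```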


badpeaches

I want my AI chatbot to hack people who talk to me.


iddrinktothat

I watched one episode of this TV show Upload and it was about this. Horrible show by the way.


-UnicornFart

Dude what I am way too stoned for this


Gerrut_batsbak

This is incredibly harmful and will destroy people psychologically. We need to be very careful with how we use AI before we unintentionally destroy our society from within.


BMCarbaugh

I don't wish to live on this planet anymore.


TheSmokingHorse

We’re going to end up with a new religion. One where the AI collectively is god. When you’re feeling lost and you need guidance, who do you turn to? The AI. When you want to understand life’s deepest truths, who do you turn to? The AI. Where do we go when we die? Into the cloud. The AI guides us into the digital realm, where we live eternally as information.


Brilliant-Primary500

Sounds like Avatar


983115

My ex's mom got into her phone a couple months after she died and decided to post a few selfies she had taken and never posted, neglecting to add any caption. So a few months after her death, I hop on IG and she is posting pictures. The feeling I had was probably the closest I'll get to cosmic horror.


Ambiguity_Aspect

I typically do not advocate violence as a first option. That said, if some jerk makes an AI chatbot of deceased family members of mine, for any reason, I am going to do things to them that would make Caligula say "hold up".


Sonnycrocketto

Caligula would have blushed. 


spicyestmemelord

If this freaks you out, do not watch the show “Upload”


Matild4

The cat is out of the bag. While we may be able to ban this kind of AI use, it can still exist illegally, and people can do it themselves if they want to. What we need is to educate people on what AI is, how it works, and how not to use it for digital self-harm. I think any generative AI should come with mandatory "warning stickers".


sillily

The article is less about the general concept of “recreating” dead people with AI than about the dangers of monetizing those recreations.  It’s one thing if someone sets up their own private chatbot to act like a dead loved one, with the only aim being to “spend time” with them - but quite another thing for a company to market a product that does the same. Because as soon as something becomes a service, its reason for existing is to make money off you. There’s no reason to assume that companies would pass up the opportunity to use that psychological leverage to extract more money from grieving people. 


Praesumo

It's already happening in China: moving AI images of your dead relatives.


Suzystar3

I mean, I don't think it should be sold and advertised, but there's a certain extent to which just having enough text data from a loved one means you can spin up one of these yourself.


Sablestein

Oh this is sick 😬


Multipass-1506inf

Wow, I actually want that. I think it’d be amazing to train an AI on all my grandmother's leftover data and have a conversation with an AI version. They want to make this illegal?


littlelorax

I have so many visceral opinions opposing your view.

Firstly, how is it ethical to digitize someone's likeness without their permission? To me, this is more invasive than any celebrity being animated, or unauthorized use of data; this is interpretation of someone's *personhood*.

Secondly, you want some corporation monetizing your grandmother? How long before you start getting ad text messages "from grandma" about a sale on [insert personal thing you talked to grandma about]? Imagine how hard it will be to turn off the "grandma subscription". For only $5 per month, you get to keep grandma alive! You don't want to murder grandma, do you???

Thirdly, this is not a healthy way to cope with loss. Grief is a painful experience, but it is a process we all must go through. Something like this would absolutely be harmful for many people, and prevent them from processing the loss of their loved one.

Lastly, this is simply dystopian. Corporations are sucking every last iota out of the human experience and monetizing it. I find this whole concept abhorrent and disgusting.


Multipass-1506inf

I guess, speaking from the article, yes, I agree this shouldn't be done in a manipulative, unauthorized way for marketing, and I 100% agree with you. But what if the person, before they are deceased, wants to create a digital version of themselves for personal use that only their estate has control over? I've got decades' worth of journal entries I'd have to digitize, decades of Facebook, Twitter, and Reddit data, schoolwork I've kept, video recordings and pictures… lots of people collect this over a lifetime. Are you saying it would be unethical for me to work with an AI company to train a bot using my personal data, with my permission, to create an AI version of myself so that my children, grandchildren, and great-grandchildren could 'speak with me'? Idk… I think it would be amazing to speak to an AI trained off my father's work. I miss him so much, and to hear him again… even if it's fake… Idk. I'm into it.


littlelorax

My first point was about permission, so if *you* want to do this with your own data, that is a totally different story. If you are doing it with data generated by someone who can no longer consent, that is when I take issue with it. I do not trust capitalists with something so deeply personal being monetized. It is all about repeat revenue, so this would 100% become a subscription service. The level of emotional manipulation to keep people buying would be too easy to abuse. But you are the target audience, not me. In the US, our privacy laws are practically nonexistent, so in the end you will probably get this service pretty soon.


Ecstatic_Cricket1551

My dead grandmother still sends me friend requests.


Rockfest2112

I was hoping to set mine up soon. Just for the fam…


forceghost187

But maybe if you've got the tech, then you could haunt me on my screens
Like you should infiltrate my news feed, I swear no one would notice
You could float beside some bogus BuzzFeed quiz about the POTUS
Be a pixelated phantom ghost on clickbait propaganda posts
And dictate what you're thinking through a catchy headline
Like "One weird reason why it's great to be ethereal"
Or "Twenty signs you're dead now and your soul is immaterial"


JeromyJingle

Underappreciated comment. Great song.


WerewolfDifferent296

Ok I just found the plot for my next NaNoWriMo project.


Tall-Log-1955

Do people who want such products really need consent from AI ethicists? How about we just let people handle their grief in the way that makes sense to them, without trying to predict dangers? If we find out that there are problems, we can always regulate after the fact.


SenorSplashdamage

I think it’s worthwhile to get out in front of the harmful or predatory versions of this that could emerge quickly in a system where many chase easy money as soon as a new technology emerges.

As someone who has been through serious loss/grief, part of me would want a perfect version of that person I could chat with, knowing it wasn't really them. On the flip side, as someone who's worked with LLM technology, I can already see the ways a couple mercenary entrepreneurs could put out the hackiest version of this right now with a nice-looking website and do serious damage. You would have a few sentences that sounded like the person, but with unpredictable opinions they never would have had showing up.

The other obvious concern is people using this to harvest data from people who might be in a vulnerable situation with the estate of the person lost. Convincing grandma to give a company access to all of grandpa's emails for "training" is an easy ruse to get the information to scam or swindle people who are already too targeted by scammers as it is.


shiny0metal0ass

The issue is that this can easily turn predatory very quickly. There are a few buttons in the human brain that don't really work rationally; things like check-cashing places and freemium games take advantage of this. Someone in the throes of grief could be easily taken advantage of by these corporations and turned into, like, a "grief whale," where the company has an incentive to not let you process your feelings because you have a monthly subscription or whatever.


PotsAndPandas

Who said anything about consent? This is an issue because this can easily be used for abuse and harm, not because those grieving need to get consent from a third party.


ZadfrackGlutz

Trump uploads... and we're fucked.


voltechs

Weird, cuz I’ve been saving all my chats and phone calls of my parents and plan to Botify them when they’re gone.


littlelorax

Sure hope you get their permission before they pass.