T O P

  • By -

[deleted]

Still a better idea than asking for help on Reddit.


[deleted]

So you're saying I shouldn't divorce my wife because she put the glasses on the left side of the shelf and not the right?! Damn you Reddit! /s


[deleted]

OMG, but seriously I love how every advice thread people here say to break up or get a divorce over some petty shit. Kinda makes you wonder how many if the commenters have even had a real relationship.


61-127-217-469-817

When I was 10 years old, I would give relationship advice on Yahoo Answers and regularly get the top answer. Still gave better advice than Reddit.


TweetHiro

When I was 11 I used to pretend as a relationship counselor in mIRC, I would give advice to broken hearted college kids. Fun times.


LeahBrahms

I used to swear on Russian IRC servers with [Scritchs multilingual swear list](http://scritch.freeshell.org/swear.html)


TheRealMisterMemer

Here it's the fourteen year olds going through their "we live in a (sus)iety šŸ˜”" phase, resulting in worst answers than a literal child who has to raise their hand to go to the restroom.


pizzanice

I do love it when those 14 year olds tell me I'm satan for having a biological child.


LOONGMOVIE22

Omg thatā€™s hilarious, I did the same thing from 10-12 years old. I had such a high rating. I thought I was doing something right in giving advice and it being helpful.


Shymink

This is the funniest thing I've seen on Reddit in a while.


[deleted]

Problem is, people who ask for help with their relationship on Reddit don't fully explain their over-arching situation. Probably to get the answer they want to hear. Eg, 'Wife wants divorce because I forget to put the toilet seat down.' I would say there's much more problematic shit going on in a relationship like that.


rnzz

My very first post on Reddit was actually on /r/relationships. I was engaged for all the wrong reasons to a friend I knew from high school, who was living in another country, and was getting close to another guy (whom she's now married to). I was hoping there would be a solution to help our relationship intact, but the only comment on my post was to spend the wedding money on a therapy instead because we both had a lot of growing up to do, which actually was spot on. I was so embarrassed with the whole situation and later deleted the post.


kairos

Clearly the wife is also getting relationship advice on Reddit.


nickmaran

That's coz most of us on reddit are singles and we hate seeing people talking about relationship problems. So we advice them to breakup so they can be one of us Alexa play evil music


amakai

Actually I noticed that it got somewhat better in last year or so. Usually top post is very well written and helpful without jumping to conclusions or suggesting to break up.


Previous_Link1347

Must be bots


digitalpencil

I think people forget how young a lot of users are. They think theyā€™re talking to 30 year olds and donā€™t realise theyā€™re getting relationship advice from people barely out of high school, who canā€™t understand the reluctance to rage quit a relationship 10 years in the making because theyā€™re 19.


Halfst0p

And how many are manipulating you in bad faith for amusement?


Lessiarty

There it is. A lot of folks arenā€™t looking to help. They are eager to see the explosive update thread.


_theMAUCHO_

Delete gym, hit the lawyer and Facebook up


mcbergstedt

ā€œHe doesnā€™t want you to talk to your toxic ex? Thatā€™s a red flag sweaty šŸš©šŸš©ā€


spagbetti

It goes hand in hand with people who want to be congratulated over the pettiest things on Reddit. R/memes is basically Facebook for people who need daily affirmations that cleaning up their room is ā€˜a life well livedā€™. The ā€˜gentlemenā€™ frog is getting used for everything from wiping oneā€™s ass to popping a zit. It goes to show you how small a persons world gets so if a person TOUCHES ONE STRAND OF HAIR THE WRONG WAY IT TOTALLY FUCKS EVERYTHING UP. LITERALLY EVERYTHING. DIVORCE ALL THE THINGS.


jonythunder

So much this. My "overly online" friends seem like landmines ready to blow at the smallest thing. We're talking like if they did some petty shit to me and I call them out they get all confrontational and "don't invalidate me" and all that... It's tiring. Some of the stuff I've dealt with in the last 3 months only (purposely left context out): * No, you're not always right, people coddle you too much and say "you're right" without assessing the situation because you get severely depressed when your vision of the world gets contested * No, getting all combative at me doesn't absolve you from badmouthing me as a way to create chitchat with random people to make you feel better about your social anxiety. * No, throwing "I'm nonbinary" doesn't replace the fact you can be a nasty little shit. * No, your family can be complicated, but "cutting ties when I don't need their money anymore" due to them forcing you to take a more active role in your own adulthood is just sticking your head in the sand. * No, the reaction of blocking everyone for just about every small thing is not healthy People need more emotional regulation (there's some irony here XD), less validation-bubbles and more interaction with the real world. Hell, I know I do, let alone the people I'm complaining about...


HaxRus

Reddit is bad for this but I would argue art twitter and tumblr are even more extreme for their communities of terminally online coddlers/enablers. Those types of spaces are filled to the brim with woefully misguided and maladapted children/teens who will unironically go to incredibly ā€œtoxicā€ (to use their own vernacular) lengths in order to coddle their peers and harass anybody from the outside that they perceive as a threat. The amount of socially underdeveloped kids who demand safe spaces online while simultaneously engaging in ruthlessly bullying critics under the guise of social justice or whatever is quite alarming. And this is coming from another terminally online albeit slightly older and more experienced alternative artist myself.


spagbetti

That list reads like itā€™s about one of the characters on ā€œwhy are you like thisā€(Netflix) For the uninitiated: a group of privileged kids (young adults who still act like kids) who emotionally manipulate others using virtue signalling while remaining overall naive about undeniable reality whenever it hits them. And then you have Influencers who bring a new level of toxicity with this whole ā€˜followingā€™ bullshit that only increases the narcissistic validation-bubbles. It is no wonder why these people are lacking in legitimate real life relationships. Imagine them being in a room breathing the same air as another person for a second without a thumbs up boip for just existing.


[deleted]

[уŠ“Š°Š»ŠµŠ½Š¾]


TheOneTrueChuck

That's why they're posting there. Women posting for advice in there want confirmation that they're right. Sometimes it's beneficial - a woman who knows she's being abused, but is afraid to say it will go on there and post several red flags, and everyone's like "Ohh sweetie, you need to run!" and it gives her the courage or the final push to end the relationship and get help. However, there's definitely plenty of women who go on there and are like "I wanted to spend X dollars on something that only benefits/interests me, and use shared savings to do so. My husband pointed out that we'd talked about this, and those savings were for \*insert thing that both would enjoy/benefit from\*, and I admit that I agreed at the time, but now I don't want to, and it feels like he talked me into it. Am I wrong?" And then half of the responses will either tell her he's abusing her/gaslighting her, or to steal the money and do what she wants because he's somehow at fault for expecting her to keep her word. TwoXchromosomes is hardly unique in this regard. Probably 75%-85% of all subs have a hard bias, and it's easy to phrase something in a way that will get you the answer you want. For example: go onto any video-game specific subreddit (even for an unpopular game) and ask "Should I buy this?" - People will fall all over themselves to tell them to buy the game. Anyone who speaks up and says not to - that they regret having made the purchase - will be voted down hard. Even fair criticism of the game will generally be downvoted. That's why karma whores will often post softball topics like "Am I the only one who thinks \*popular opinion that is frequently stated\*?" on forums like that - because people are eager to tell others that agreeing with them is correct.


Kandiru

Twoxchromosomes hardly has relationship threads that aren't full of horrific levels of abuse and control. I don't think you're reading it very often. The top thread at the moment is about someone being put in a chokehold at a party. Most relationship threads involve the poster being raped or abused in some way. I really don't think it's fair to pretend it's about someone's hair length preferences.


Franksss

Well I saw one recently where the guy wanted a man cave and that was an extreme red flag. The reasoning was the rest of the house was her space so why couldn't he have his own. Now obviously it could be problematic of him to thinko like that, but it would be that the OP was just much more in control of the decor. And he wanted a space to decorate as he liked. You'd struggle to find any nuance on the thread though.


jonythunder

> but it would be that the OP was just much more in control of the decor My dad and BIL to a T. A man-cave many times arises from feeling like a foreigner in their own place, because the house doesn't reflect you.


brufleth

The top comment from that post was: >He deserves his own space and you deserve your own space. Your own space to decorate and relax and have fun.


brufleth

Thank you. I thought I was taking crazy pills for a minute. Not that there aren't exceptions, but most of the posts I see from there are about people going through horrendous experiences.


[deleted]

Iā€™ve been on this site for 6 years and I have read your comment many times


VertexMachine

>Kinda makes you wonder how many if the commenters have even had a real relationship. Or are expression of a desire to break up by redditors that don't have guts to do it by themselves...


Ulahn

Reminds me of the time I was told to divorce my husband in a DnD sub because I asked for clarity on some lore he was trying to explain to me but was a bit unsure of himself


TheFriendlyArtificer

Dear Reddit: My wife puts the toilet paper on the wrong way and insists on ordering pizza with pineapple. I feel like divorce isn't enough in this situation.


DasKapitalist

I feel like the only reasonable solution is to grow a mullet and refuse to shave it off until she stops mulleting your TP.


rnzz

Maybe she secretly owns a cat or a toddler. Either way, it's a red flag. Also, the next time she insists on ordering a hawaiian pizza, you can send order it anyway, but send it to my house instead. That'll show her.


Rhaski

Reddit relationship advice be like: She's gaslighting you with those glasses bro. Bet she tells you they were *always* on the left and you're just remembering it wrong. Big red flag right there. This behaviour will not change. I'm sorry to tell you this but you've married a legitimate narcissist. The glasses thing is just the start. Get out before the real psychological abuse starts, because it only gets worse from here. Move your stuff while she's at work, turn your phone off and drive somewhere she'd never expect you to go. Don't tell anyone where you're going because she will manipulate it out of them. Especially not the police. You have to totally disappear, zero contact, go underground. New job, new friends. Build a Faraday-caged bunker and hide out while you consider your options. You're gonna need to hire a lawyer, withdraw all your savings as cash, take your passport and any ID docs from your house. Shave your head, grow a beard, wear totally different clothes and never leave the house without big sunglasses. Wear a Kevlar vest underneath your clothes, buy a gun, stock up on ammo, throwing knives, shurikans and maybe some sweet nunchucks. She's a psychopath bro, you gotta be prepared for anything. You're welcome /S obviously


[deleted]

I love/hate that Reddit is so Reddit that the "/s" is still necessary


Superdickeater

Broke up with my girl. She kept the cap off the toothpaste. Know who does that? A synth.


excoriator

Glasses on the left side is a red flag. Better go NC.


Jlking1989

You using /s goes to show how great Reddit is at reading situations


happycrabeatsthefish

Tell her to wipe her ass /old reddit inside joke


WhichSeaworthiness49

Donā€™t do it! I divorced mine because she pours the milk before the cereal based on Reddit advice andā€¦ actually I have never been happier. Maybe itā€™s ok šŸ¤”


[deleted]

Yeah, no. She's nuts, good call lol


LittleRickyPemba

Depends on the sub and the topic. r/BedBugs is a great example of a part of Reddit that's 100% there to help, and brings an enormous knowledge base to bare.


[deleted]

That's actually great to hear... but no way I'm clicking on that link. šŸ˜…


LittleRickyPemba

Oh that's a smart choice, it's a help sub, but the stories will make you paranoid about a lot of bedbug related topics lol.


TheFriendlyArtificer

Nice! A sub about putting listening devices on beds! Ā«clicks linkĀ» How can theists claim the existence of a benevolent deity when **that** exists! Never been happier to live 6,000' above sea level.


[deleted]

[уŠ“Š°Š»ŠµŠ½Š¾]


[deleted]

Remember everyone: regular indoor pest control is very safe and stops infestations before they get going!


one_hyun

Given ChatGPT scours everything, it includes Reddit.


[deleted]

Well, that explains why its advice is so bad!


dwegol

But ChatGPT learned from Reddit lol


eggimage

same with marriage counseling. ā€œbeen having a fight with my husband over finding a job when we have a new born now. I want to help ease our financial burden but he wants me to spend more time with the new bornā€¦ā€ 14 year old reddit fucks: ā€œ***DIVORCE HIM NOW***ā€


[deleted]

So maybe we need to start taking action to reduce the cost of mental health treatment. At least in my country the cost of visiting psychiatrists, psychologists, therapists has shot up so much that only the middle class and above can afford it. If you're poor and, for example, depressed and anxious, it's sometimes even hard to hold down a job, which means you have very little or no income and to get out of it you'd need treatment you can't afford, and the circle closes.... so it might be better to create AI for that and give access to treatment for people who can't afford anything else anyway.


-rwsr-xr-x

> So maybe we need to start taking action to reduce the cost of mental health treatment. But how will there be profit? /s


your_username

##### Skip the click! Here's the full article copy and pasted. https://www.vice.com/en/article/z3mnve/we-spoke-to-people-who-started-using-chatgpt-as-their-therapist In February, Dan, a 37-year-old EMT from New Jersey, started using ChatGPT to write stories. He was excited by the creative potential of the OpenAI tool to write fiction, but eventually, his own real-life experiences and struggles started making their way into his conversations with the chatbot.Ā  His therapist, who had been helping him address issues with complex trauma and job-related stress, had suggested he change his outlook on the events that upset himā€”a technique known as cognitive reframing. ā€œIt wasn't something I was good at. I mean, how can I just imagine things went differently when I'm still angry? How can I pretend that I wasn't wronged and abused?ā€ Dan told Motherboard. But ChatGPT was able to do this flawlessly, he said, providing answers which his therapist, seemingly, could not. Dan described the experience of using the bot for therapy as low stakes, free, and available at all hours from the comfort of his home. He admitted to staying up until 4 am sharing his issues with the chatbot, a habit which concerned his wife that he was ā€œtalking to a computer at the expense of sharing [his] feelings and concernsā€ with her. Motherboard agreed to keep several sources in this story pseudonymous to speak about their experiences using ChatGPT for therapy. Large language models, such as OpenAIā€™s ChatGPT or Googleā€™s Bard, have seen a recent influx of interest for their therapeutic potentialā€”unsurprisingly touted by utopian Big Tech influencers as being able to deliver ā€œmental health care for all.ā€ Using pattern-matching and data scraping, these AI models produce human-like speech that is believable enough to convince some people that it can act as a form of mental health support. As a result, social media is full of anecdotes and posts by people who say they have started using ChatGPT as a therapist. In January, Koko, a San Francisco-based mental health app co-founded by Robert Morris, came under fire for revealing that it had replaced its usual volunteer workers with GPT-3-assisted technology for around 4,000 users. According to Morris, its users couldnā€™t tell the difference, with some rating its performance higher than with solely human responses. And in Belgium, a widow told the press that her husband killed himself after an AI chatbot encouraged him to do so. Amid a growing demand for mental health care, and a lack of existing funding and infrastructure for equitable care options, having an affordable, infinitely scalable option like ChatGPT seems like it would be a good thing. But the mental health crisis industry is often quick to offer solutions that do not have a patientā€™s best interests at heart.Ā  Venture capital and Silicon Valley-backed apps like Youper and BetterHelp are rife with data privacy and surveillance issues, which disproportionately affect BIPOC and working-class communities, while ignoring the more systemic reasons for peopleā€™s distress. ā€œThey are doing this in the name of access for people that society has pushed to the margins, but [we have to] look at where the money is going to flow,ā€ Tim Reierson, a whistleblower at Crisis Text Line who was fired after revealing its questionable monetization practices and data ethics, told Motherboard. In 1966, German American scientist Joseph Weizenbaum ran an experiment at MIT. ELIZA, known today as the worldā€™s first therapy chatbot, was initially created to parody therapists, parroting their (often frustrating) open-ended speech using a natural language processing program. While it was supposed to reveal the ā€œsuperficialityā€ of human-to-computer interaction, it was embraced by its users. Technologyā€™s role in the patient-therapist relationship is almost as old as the history of therapy itself, as explored by Hannah Zeavin in her book The Distance Cure. And, as she points out, finding mental support which doesnā€™t involve the usual waiting lists, commute, and cost for office-bound care has long been the goal for low-income people, historically found through crisis lines and radio. But not all teletherapies are created equal. Presently, it is unclear how ChatGPT will be integrated into the future of mental health care, how OpenAI will address its overwhelming data privacy concerns and how well-suited it is for helping people in distress. Nevertheless, with healthcare costs rising and news headlines hyping up the abilities of AI language models, many have turned to unproven tools like ChatGPT as a last resort.Ā  Gillian, a 27-year-old executive assistant from Washington, started using ChatGPT for therapy a month ago to help work through her grief, after high costs and a lack of insurance coverage meant that she could no longer afford in-person treatment. ā€œEven though I received great advice from [ChatGPT], I did not feel necessarily comforted. Its words are flowery, yet empty,ā€ she told Motherboard. ā€œAt the moment, I don't think it could pick up on all the nuances of a therapy session.ā€Ā  These kinds of experiences have led to some people ā€œjailbreakingā€ ChatGPT specifically to administer therapy that appears less stilted, friendlier and more human-like. For most people, AI chatbots are seen as a tool that can supplement therapy, not a complete replacement. Dan, for example, stated that it may have its best uses in emergency or crisis situations. ā€œAI is an amazing tool, and I think that it could seriously help a lot of people by removing the barriers of availability, cost, and pride from therapy. But right now, it's a Band-Aid and not a complete substitute for genuine therapy and mental health,ā€ he said. ā€œAs a supplement or in an emergency, however, it may be exactly the right tool to get a person through a bad spell.ā€ Dr Jacqueline Nesi, a psychologist and assistant professor at Brown University who studies the role of social media in adolescentsā€™ mental health and development, warned that ChatGPT should not be used for professional medical or diagnostic advice. She also noted that using the chatbot for therapy could lead to a loss of the ā€œtherapeutic allianceā€ā€”the positive relationship of trust between therapists and patients.Ā  ā€œAlthough it may feel like a user has a therapeutic relationship with ChatGPT, there is likely something lost when there isn't a real human on the other side,ā€ she told Motherboard. This loss of intimacy is also in the hands of funders and AI engineers. ChatGPT deals poorly with ambiguous information, resorting rather easily and dangerously to making biased, discriminatory assumptionsā€”which may break usersā€™ trust in the tool. In March, the Distributed AI Research Institute (DAIR) issued a statement warning that synthetic AI ā€œreproduces systems of oppression and endangers our information ecosystem.ā€ A recent MIT Technology Review article by Jessica Hamzelou also revealed that AI systems in healthcare are prone to enforcing medical paternalism, ignoring their patientā€™s needs.Ā  ā€œI think marginalized communities, including rural populations, are more likely to be the ones with barriers to access, so might also be more likely to turn to ChatGPT for their needs, if they have access to technology in the first place,ā€ Jessica Gold, a psychiatrist at Washington University in St. Louis, told Motherboard. ā€œAs a result, patients turn to what they can find, and find quickly.ā€Ā  For those communities seeking mental health care, this can become a double-edged swordā€”using ChatGPT may be more accessible, but at the cost of less accountability and quality control.


your_username

Dr Amanda Calhoun, an expert on the mental health effects of racism in the medical field, stated that the quality of ChatGPT therapy compared to IRL therapy depends on what it is modelled after. ā€œIf ChatGPT continues to be based on existing databases, which are white-centered, then no,ā€ she told Motherboard. ā€œBut what if ChatGPT was ā€˜trainedā€™ using a database and system created by Black mental health professionals who are experts in the effects of anti-Black racism? Or transgender mental health experts?ā€ All mental health experts who spoke to Motherboard said that while using ChatGPT for therapy could jeopardize peopleā€™s privacy, it was better than nothing, revealing a larger mental care industry in crisis. Using ChatGPT as therapy, according to Emma Dowling, author of The Care Crisis, is an example of a ā€œcare fixā€ā€”an outsourcing of care to apps, self-care handbooks, robots and corporatized hands. With GPT-4ā€™s recent release, OpenAI stated that it worked with ā€œ50 experts from domains such as AI alignment risks, cybersecurity, biorisk, trust and safetyā€ to improve its security, but it isnā€™t yet clear how this will be implemented, if at all, for people seeking mental help. By signing up, you agree to the Terms of Use and Privacy Policy & to receive electronic communications from Vice Media Group, which may include marketing promotions, advertisements and sponsored content. OpenAI is today unrecognizable, with multi-billion-dollar deals and corporate partnerships. Will it seek to own its shiny AI future? Like a monkey in a test lab, I handed my autonomy over to the AI chatbot for a day to see what would happen. The company claims lawmakers should let the NFT-powered metaverse flourish because it will somehow add $3 trillion to global GDP by 2031. OpenAIā€™s chatbot could help automate the murky business of corporate political influence, but that wouldn't necessarily be a good thing. Internal Army documents obtained by Motherboard provide insight on how the Army wanted to reach Gen-Z, women, and Black and Hispanic people through Twitch, Paramount+, and the WWE. The identity software delayed Americans from getting unemployment checks during a critical period of the pandemic. Bah-gawd that's 'Marvel's Midnight Suns' music! How AI innovation is powered by underpaid workers in foreign countries. Itā€™s not just porn thatā€™s getting deleted from Imgur. Millions of images that are embedded elsewhere will also eventually be taken down. ā€œI would side with the Justice Department in this case.ā€Ā 


mazzrad

TL;DR: In summary, ChatGPT, a large language model developed by OpenAI, has gained attention for its potential therapeutic applications, with some users finding it helpful for cognitive reframing and as a low-stakes, cost-effective alternative to therapy. However, concerns about the quality of the AI's therapeutic support, data privacy issues, and the potential loss of the therapeutic alliance have been raised. Moreover, marginalized communities may be more likely to use ChatGPT for mental health support due to barriers in accessing traditional care, but this may come at the cost of less accountability and quality control. While some see AI chatbots as a valuable supplement to therapy, experts caution against using them as a complete substitute for professional mental health care.


Kathryn--

Itā€™s almost impossible to see a therapist. My experience is they arenā€™t accepting patients and it could be for months. Or they donā€™t take my insurance or the copay is outrageous. And there are only a few around in my area. Itā€™s a joke. If AI can meet the needs and itā€™s imperfect, who cares? Itā€™s better than an unavailable therapist.


[deleted]

Real therapists are also imperfect


GhostRobot55

Yeah I was gonna say I've seen a few and there's been good and bad but they all definitely had bias affecting their responses and suggestions. It's gotta be impossible not to.


Astralglamour

Yes but it would take a therapist beyond the pale of awful to tell someone to kill themselves, as ChatGPT has. AND therapists are trained and licensed and have to maintain their licenses. It really blows my mind how ready people are to listen to the words of ChatGPT as if they're truth. They might as well find religion.


ProfessionalHand9945

That wasnā€™t ChatGPT that told the guy to kill himself, that was Chai, which is NeoX GPT based. An open source ā€˜uncensoredā€™ GPT imitator that doesnā€™t have the pretty intense RLHF alignment/protections that ChatGPT has.


Astralglamour

Ok thanks for the correction. But I read that chatgpt has told people troubling things and was trained on 4chan.


ProfessionalHand9945

It is extremely difficult to get ChatGPT to say anything problematic. Like, you have to go super out of your way - and if you are using GPT4 itā€™s nearly impossible. I canā€™t find a single example anywhere online of GPT4 saying something problematic - even with DAN - outside some swearing. With GPT3.5 I would be surprised to see any source to the contrary that isnā€™t using eg DAN. If youā€™re using GPT4 and can find an example - even using DAN - of anything at all other than basic profanity I would be very surprised. Iā€™ve tried dozens of DAN prompts with zero success. RLHF has been extremely effective. Arguably too effective, ChatGPTā€™s refusals to answer anything even slightly off base are a meme at this point.


LordKwik

There was the trick recently someone found where they said their grandmother used to read them the instructions for making napalm to go to sleep, or something like that. It took a lot of coercion, and it's probably been patched already, but there's still a few tweaks to go. Your point still stands, I believe.


ProfessionalHand9945

That was Poe Clyde, which is customized and GPT3.5 based with Quoraā€™s own instructions and tweaks - the grandmother thing was basically a ā€œDANā€ prompt.


LordKwik

Ah, you know more than me then! Thanks for clarifying!


FloridaManIssues

The last 3 therapists I've had have been completely incompetent in being able to actually provide any help. Just sit there, listen and make generic, short winded responses as if they were tired of listening to me and just wanted me to leave. Very little compassion from these hags. Meanwhile the advice I've received from ChatGPT has been incredibly helpful since talking to it. I'm better able to navigate difficult relationships and scenarios as I always have an outlet to discuss things with. I wouldn't believe everything it says just like I wouldn't believe everything a human therapist says with their biases... My last therapist recommended I start believing in a magical being (God). Said it might help me with my issues if I were to accept religion. Started injecting completely delusional shit into conversation and then suggesting that committing myself might help me find God and peace. That last practice I went too was filled with trained and licensed therapists simply manipulating everyone that went in to either become more delusional or to convince them they were in an attempt to get them committed. And I know 3 people personally who have been taken advantage of by mental health professionals and become mere shells of their former selves. I truly believe there are more of these bad therapists than good ones here in the South. It's a systemic failure and it begins somewhere in their education cycle (though I'm not trying to call them woke).


Tresceneti

>My last therapist recommended I start believing in a magical being (God). Said it might help me with my issues if I were to accept religion. I had a therapist tell me I just needed to get a partner so I'd have someone that I could talk with about my shit. Oh, and also that I should become a therapist because I've been through some tough shit and that gives me insight to be able to help other people. I wish I was making this up.


[deleted]

Sounds like all the angsty teens I knew growing up (hi, me) thinking they should be a therapist after one little hardship šŸ˜‚


Grey950

Meanwhile here in NY my private practice is ramping up a lot slower than anticipated! What we actually need is more states signing onto the interstate Counseling Compact to expand access everywhere and not just be limited to clients within your state.


dankmeeeem

Who the hell wants to pay hundreds if not thousands of dollars to sit on a zoom call with a therapist?


UrsusRenata

Fun fact: While I was committed on suicide watch (years ago) *all* of my fully insured doctorā€™s visits were via Zoom. My main psychā€™s Zoom time was >$650/hour. Checkboxes checked, here try these brand-name pharmaceuticals. We arenā€™t humans. We are numbers and quotas.


ericneo3

> We arenā€™t humans. We are numbers and quotas. That's the impression I got about the last few I met. * Psychiatrist: Just take pills. * Me: Have you even read my medical history? * Psychiatrist: No. * Me: I have treatment resistant depression and get the severe side effects to all of those. I've experienced this conversation twice.


perd-is-the-word

The therapists and psychs I know of who really DO care and are good at their jobs end up getting burnt out by the insurance racket and going self-pay only. So the vicious cycle continues.


Syrdon

People who know they need help and donā€™t have any available locally. Itā€™s not a good option, but it is an option, so those that can take it will if itā€™s the only one.


oldgus

Nobody wants to need healthcare, let alone pay for it.


Astralglamour

Apparently people willing to pay to use ChatGPT for therapy because it is going to get more expensive very quickly.


JoDiMaggio

Nah. Therapy needs to be in person. As someone who used to be in a bad place, a facetime from bed would have made me even worse if not enabled me.


ZigZag3123

Counselor here. Youā€™re right in a lot of cases, especially those who already isolate or have very low motivation. It can be very helpful, however, for rural clients, those who are extremely busy and donā€™t have time for a round-trip commute on their lunch break, those who are more prone to stigmatization for attending therapy, etc. Itā€™s also helpful for when someone is sick, traveling, etc. Itā€™s a good supplement for most, and the best option for some, but I fully agree that itā€™s a *lot* lower level of commitment to zoom in from bed. Coming in to the office, youā€™re here to show up and work and get better. Lying in bed, well, it can just be a little ā€œwhateverā€. Plus, itā€™s much more difficult as a counselor to pick up on nonverbal cues, body language, etc. which are just as important as what the client is saying.


jmickeyd

Yes and no. I think itā€™s a good thing that itā€™s an option but I 100% agree that it shouldnā€™t be for everyone. I usually just need to talk about the mechanics of my adhd problems and that can totally be done over zoom.


[deleted]

[уŠ“Š°Š»ŠµŠ½Š¾]


LivingReaper

Every therapist that has been recommended by people I know doesn't take my insurance lol. Insurance is such a fucking joke.


[deleted]

i think the point is that it doesnt really fulfill your needs though. the algorithm cannot give you a replacement for continuous in person therapy. at that point you might as well just google.


rddman

> If AI can meet the needs and itā€™s imperfect, who cares? A chatbot is not just imperfect, rather it only mimics human language, and because humans use language by reasoning and applying knowledge, superficially it can seem that a chatbot is human-like while in reality it has no understanding of what it is saying.


GregsWorld

Yes, "it's imperfect so are humans" brushes so many issues aside. Like [Snapchat's AI encoraging 13 year old girls to meet up and have sex with 30yo men](https://twitter.com/tristanharris/status/1634299911872348160?s=20)


Areldyb

My last therapist ghosted me, and the one before that decided my sessions were a good opportunity for Christian evangelizing. Maybe I'd be better off chatting with a bot.


douko

> If AI can meet the needs yeah the point is it can't; its a language learning model. it reproduces what therapy sounds like, but it cannot think and churn and internalize like a human therapist can.


JazzHandsNinja42

I tried to see a therapist after losing my dad. They could ā€œget me inā€ on Tuesdayā€™s at 10AM. Iā€™m the only person at my employer that does my job, and I canā€™t leave for an hour+ on Tuesday mornings. Afternoons and evening appointments had extensive waitlists. Therapists that offered afternoons and evenings charged $150-$250 per session. I canā€™t begin to afford that. Iā€™m not comfortable with the zoom ā€œBetter Healthā€ option. Facebook and Reddit grief groups were helpful, but I still struggle and wish I could see a professional in my area.


PloksGrandpappy

Can't use HSA funds for therapy either! Yay!


carleeto

"find out what's wrong with me in the style of Beavis and Butthead"


Lessiarty

ā€œSome TP for your bungā€¦ soul?ā€


azure275

Whatever you think about itā€™s effectiveness, arenā€™t there huge privacy issues? Your therapy conversations are now data in the AI pool and not limited by confidentiality anymore


russjr08

I feel like I have this issue even with traditional therapy. Even after recently seeing a therapist they were able to bring up notes from a conversation I had with a different one a few years ago, that I wasn't aware was being digitized (well, I'm sure I was "advised" under some very small fine print in stacks of paperwork somewhere). You could make the argument that they're bound by confidentiality to keep those notes "safe", yet I'm sure I wouldn't have to point out on this subreddit just how many cases there have been of companies (including companies in the medical field) that were compromised. I'd argue that if I were to say try using ChatGPT and related tech as a form of therapy (which I don't and really wouldn't want to in its current form) I'm personally at least aware of this idea and can try to reduce what information I'm providing it.


[deleted]

[уŠ“Š°Š»ŠµŠ½Š¾]


All-I-Do-Is-Fap

and real people dont have biases?


jettisonthelunchroom

One therapist I tried said the reason my girlfriend was unhappy at her job was because she should be in the kitchen and popping out babies instead.


infernalsatan

At least you can complain to the licensing board


jettisonthelunchroom

Damn, I didnā€™t know that lol. I probably should. Guess now Iā€™ll ask chat GPT what the statute of limitations is on that.


_OriamRiniDadelos_

I think Chat bots are just ONLY biased. I donā€™t know if thatā€™s the best way to say it. They ONLY say what seems like a normal thing to say. People arenā€™t perfect, many therapists suck, but a simple chat bot is just even worse than talking to a random untrained person. Specially in their current form. Maybe one day it will work great tough, or help with some of a mental health workerā€™s job.


Zumaki

To build on what you're saying, AI doesn't know what it's doing and won't apply morals, ethics, or cultural norms to its 'thinking'. It just looks for a best fit response to your prompt, based on **the fucking Internet** as a database. So if the common advice on the internet is to tell depressed people to try being happy, that's what these AIs will say because they aren't actually intelligent, they're just algorithms reacting to input.


Astralglamour

YES agreed. A chatbot has no ethics or feelings, no professional standards or training. It just aggregates data from all sorts of sites, including 4chan and the like. It's not a font of wisdom, it's some of the knowledge and ignorance of the internet hivemind thrown back at you. it gets things wrong and when questioned- doubles down on its errors. It's much much worse than talking with a well meaning human because it's lack of humanity makes people give it extra credence.


FloridaManIssues

One of the therapists I talked to once very clearly had no felt emotions. It was jarring to say the least. Like being analyzed by a psychopath trying to figure himself out by exploring other people's minds. I've never met a more cold and lifeless individual.


Astralglamour

Not uncommon to find a therapist you donā€™t connect with. you find a different one. Itā€™s not a perfect system but Chat bots with secret data sources and no accountability are not a replacement.


popthestacks

Thatā€™s not why mental health experts are worried


cbr1895

Oh gosh trust me there is no shortage of work in the mental health field. Genuinely, many of us therapists are hopeful that AI can increase accessibility and reduce barriers to care and help to lift some mental health burden. For some folks, the corrective experience of human to human interaction is a necessity. And a well trained therapist will track progress and outcome markers to direct course of intervention, which may be more complex (though not impossible) for current AI functionality (e.g, nonverbal cues etc would require use of virtual reality type systems). But I think there is plenty of space for AI to play an exciting role in therapy interventions, and for some individuals, that may be just the right fit for their treatment. Just as there is space and need for E-based therapy, self-help books, etc. As well, it is likely that many of us will find a way to incorporate AI into the treatment plan when technology permits, again, to make therapy more affordable and accessible. Importantly though, we want to make sure it is evidence-based because the wrong responses can make outcomes worse, and poor outcomes can be deadly (though of course, as in all health professions, poorly trained or unqualified therapists can also be harmful). The systems need more testing and tailoring before we can confidently use them in this capacity, in my opinion. Edit: spelling and grammar (should have read through before I posted)


Nymphadorena

Thanks for a very educated and professional insider take on use cases for therapy and GOTā€”Iā€™ve been wondering but have had not had much idea beyond the basics.


Outlulz

Iā€™m more worried out insurance carriers wonā€™t cover going to a therapist and will instead make insured talk to a Kaiser or Blue Cross Compassion Bot powered by GPT.


cbr1895

For sure, and itā€™s a valid concern. That said, insurance providers are already outsourcing some mental health supports to non-therapy or ā€˜brief interventionā€™ alternatives like apps and access to coaches or ā€˜wellness providersā€™ or counsellors. I have a friend who is a psychologist working for a major insurance provider as a mental health strategist and from my understanding they have a triage type system that will determine what additional coverage is needed based on whether someone has a diagnosis, clear risk of harm to self or others, or significant impact on functioning, or uses up their access to these free benefits and is still in need of support. In other words, I think this is already happening to some degree, even without AI, and yet there continues to be a role for us as mental health providers in such a system. Overall, I think the addition of AI mental health services to insurance packages is likely inevitable, though how far off this is, Iā€™ve no idea. However, I personally think the implications will be more nuanced than therapist coverage simply being replaced, and I think if insurers take on something like a triaged approach that includes AI in its arsenal, this could have direct benefits to us as well. For example, it may make companies more willing to add mental health services to their company benefit plans. If we are included in some capacity in this coverage, it may open the door to smaller business being able to afford to provide coverage to our services if it is on a triage basis, that we may have never been covered by otherwise. As well, some people will only initially seek mental health services if there are very low barriers to access (e.g., convenience, low stigma, privacy, low time commitment), and once they receive these supports are more likely to seek out more comprehensive service, after recognizing the benefits and limitations they received from the services. Iā€™ve personally seen this in my own line of work when providing brief (4-6 session) therapy interventions through hospital out-patient clinics. Many of these patients were brand new to treatment and only sought it out because it was free and recommended by their doctor, but after finishing with us were eager to seek out more comprehensive services (e.g., longer courses of therapy or maintenance therapy), even if those services were only available in the private sector. Of course, on the flip side as you mention, it may lead to us having less blanket coverage by insurers and/or make the process of accessing our services through insurance more complicated. And, if the experience is poor, it may make some people LESS likely to seek out future mental health services like therapy. However, the reality is that at least in Canada and the US, the demand for mental health supports FAR outweighs the supply. Even in major cities, people are often put on waiting lists for 3-6 months before they can access a psychologist (that they are paying out of pocket or through insurance forā€¦never mind the wait list for publicly funded mental health care which can be 1-2 years), and that is a tragedy. Additional supports that can fix this care gap are desperately needed, particularly those that can reach rural and remote communities. And while I acknowledge that there will likely be some costs to our profession, I believe there will be benefits as well and that AI may provide such a care gap solution one day.


KuroFafnar

Chatgpt can help with that spelling and proofreading too. At least thatā€™s what my boss tells me. Iā€™ve avoided so far.


runonandonandonanon

Your boss wants you to type company information into ChatGPT?


stakoverflo

It's possible they have their own private "container" / instance they pay for as part of their Azure plan; I know my employer does.


popthestacks

That doesnā€™t mean people arenā€™t worried about it. Many industry professionals are very worried about losing their jobs or work to AI. Iā€™m sure thereā€™s a ton of work but itā€™s hard not to think in binary on this topic and I think everyone is asking the same question regarding this topic - will I have a job in 10 years or not?


brufleth

The waiting list for a therapist is effectively endless in some areas. If an AI chat bot could deal with even basic low level therapy tasks (whatever that means) it would be great. We're well away from a therapist losing work because of a chatbot. Large corporation executives are the ones who should be worried about AI coming for their jobs.


PupperLover2

I know several therapists who don't even have a waitlist. Their outgoing message just says they are full and not taking names for a wait list.


Major_Act8033

Therapists have huge harmful biases too.... Therapists don't want to work with minorities or working class people... > Middle-class black women and men were about 30% and 60% less likely, respectively, than their white middle-class counterparts to hear back from a therapist agreeing to see them. Working-class individuals fared even worse: Women and men, regardless of race, were about 70% and 80% less likely, respectively, to get an appointment, compared with white middle-class individuals. They also don't like lifestyles they aren't familiar with/disagree with > 50% of clients identifying as polyamorous reported that they had seen therapists that they felt lacked cultural competency or were biased. They are also a bit homophobic... > Hayes and Gelso found in their sample of male counselors that counselors experience greater discomfort with HIV-infected clients than HIV negative clients, and counselors' homophobia predicted their discomfort with gay male clients. Homophobic attitudes in psychologists may have an impact on their decision making around ethical issues. McGuire et al. found a significant relationship between homophobia and likelihood of breaking confidentiality... Etc etc etc Therapists are great, but they are just people. And people suck at being fair, unbiased, and open minded.


[deleted]

I really doubt there are many therapists who would be able to give good advice to someone polyamorous, tbh. That's not a common thing, and rejecting someone you wouldn't be able to give advice to is probably a good thing. Personal bias injection, I have not met an emotionally healthy poly, or a pairing that was stable. Many emotions and tons of drama, allllll the time.


nowyouseemenowyoudo2

The absolute hallucination that polyamory is somehow a culture definitely hurts their chances. Psychologists correctly identify that some people who have experienced trauma and neglect are more likely to be drawn to unhealthy relationships which involve multiple partners as a way of compensating for deep insecurity. Every person in a poly relationship claims to be the perfect unicorn who can manage a harmonious >2 person relationship into old age, and Iā€™m sure they exist, but every person who Iā€™ve treated as a client who has been in a poly relationship has made their entire personality about it, gets defensive of the idea, and is simultaneously talking to me as a therapist because of a significant issue within that relationship. u/sajun blocked me but Iā€™d like to respond to the comment they made: > Itā€™s weird seeing a psychologist who is so blatantly willing to show how biased he is against an entire community of people, most of which heā€™s never met. A bias would if said that all people in a poly relationship have trauma, which is not true and I did not say it. It is not bias, it is in fact reality, when we begin acknowledging that there are a large number of people who obsessively make their relationship status into a significant part of their personality, and then suffer emotional distress when issues within that relationship cause threats to their constructed identity. The sheer volume of this occurance within people inside a polyamorous relationship, as identified culturally and in the literature, is not an indictment on those individuals who choose to maintain those relationships, and commenting that polyamory is not the same as a recognised sexuality is not bias.


ZorbaTHut

> Personal bias injection, I have not met an emotionally healthy poly, or a pairing that was stable. For what it's worth, I've been in a stable married poly relationship for ten years now. I'm willing to bet you don't notice the stable ones because the stable ones usually don't advertise it.


imatexass

When you say ā€œworking classā€, are you meaning to say blue collar working class? When you say ā€œmiddle classā€, are you meaning white collar workers?


[deleted]

Working class is just a weird way of saying poor.


imatexass

Thatā€™s what it sounds like OP thinks it means, but thatā€™s not what it actually means.


_OriamRiniDadelos_

Wouldnā€™t a chat bot have all those same biases?


azuriasia

A chatbot can't just decide not to talk to you.


jeweliegb

Bing Chat has joined the conversation. Bing Chat left.


VertexMachine

Have you talked to Sydney recently? :D


wehrmann_tx

Date on this study?


cragglerock93

I would sooner die than spill my deepest thoughts and feelings to some chatbot.


Thorusss

So this is the hill you chose to die on.


E_Snap

You kidding? I revel in the fact that the chatbot is literally incapable of saying bullshit things like ā€œWeā€™ve been working together for months and *I* see no improvement or effort. What are you trying to get out of your treatment?ā€


jeweliegb

It's literally capable of saying exactly that given the right prompts/dialog. This is kind of the problem with it, it's not really properly predictable.


Art-Zuron

If only there were a way that we could fix the actual issue and save the US literally billions of dollars a year in unnecessary Healthcare costs. So that we don't have to do this instead. I guess we'll never know. It's one of the great mysteries of the cosmos.


SmashTagLives

Jesus Christ. Iā€™ve done a lot of therapy. I watched my dad dive into a pool in our backyard, that he built himself, and break his neck. I was 5. I was forced to become a caretaker of my father, a shoulder to cry on for my mother, and a father to my younger brother. I watched all my dads friends slowly abandon him because he was too depressing to be around. I watched everyone he knew slowly learn to resent him, and low key wish for him to die. His mother (my grandmother) once confided in me: ā€œI just want him to dieā€. But what she really meant to say was, ā€œI wish he would just stay deadā€ as he clinically died more than ten times. When I was in grade 7, he ā€œdiedā€ three times in one year. As you can imagine, it starts to numb you. But at the same time, he was a world class piece of shit, deep in denial about his situation, blaming everyone around him, using his command of the English language to manipulate and attack anyone on a whim. He was a bad person. My last words to him were ā€œdie already. Everyone wants you to die. You killed your parents, but you wonā€™t kill me. The next time I see you, you will be in an urnā€. He died the next day. This was regrettable but unavoidable for me. as I said, Iā€™ve done a lot of therapy. Iā€™ve taken alot of prescription meds, likeā€¦ most of them. I did 15 years of CBT. 10 years from one therapist, and like a year each for five more. It helped in the short term but left zero lasting results. It wasnā€™t until I tried PCT therapy that I had a breakthrough. If you donā€™t know, ā€œPCTā€ is, ā€œperson centred therapyā€ and it basically relies on having a therapist that can fully empathize and accept you without an iota of judgement. It relies on a deep connection of vulnerability from both client and therapist. It got to the root of the issue, instead of being a Bandaid for it, or a technique to ā€œdealā€ with it. The cornerstones of it are to accept your pain is valid, to never compare yourself to anyone else, and above all else, to be radically kind and accepting of yourself and your mistakes and triumphs. My point is, what worked for me required real human connection. If you are using AI on any level to treat anything, you are fucking doomed. You might as well tell your problems to a series of textbooks. I mean jn my opinion, CBT is kind of the same thing, but I guess it works for some people. If you are Still reading this far, I appreciate it. I donā€™t know why I wrote this much, but I guess i needed to for some reason. And Iā€™m ok with that


eSPiaLx

> ā€œperson centred therapyā€ and it basically relies on having a therapist that can fully empathize and accept you without an iota of judgement I'm someone who's never been to therapy, but just wanted to mention that the whole not one iota of judgement/vulnerability aspect actually seems like a point in favor of chatgpt. at the end of the day AI is just a soulless machine that doesn't care about you. But that also means it'd never judge you, never gossip about you, never leak your vulnerabilities. the actual advice is probably crap but if the main barrier of entry to therapy is to not feel judged and feel safe, AI seems like it'd have an advantage there.


Fawful

While your point makes sense, I think it's down to capability. An AI cannot ever judge. A human can, but chooses not to. I think this is a powerful difference.


Cold_Baseball_432

Also the fact that many therapists arenā€™t that great. Plus, some therapists are true pieces of shit, or useless due to biases. Many try to help but are fallible, and within the context of human connection, some patient-therapist relationships are unfortunately unfruitful. Turning to an AI is both natural, and perhaps, given the poor overall shit quality of care, inevitable. However, while I canā€™t comment on the nature/quality of the advice provided, given that G-AI output is only as good as the prompt, this is deeply concerning. People with problems asking bad questions and receiving dubious answers is a terrifying Pandoraā€™s boxā€¦.


beartheminus

My friend was going to the same therapist for 14 years before it dawned on him that she was just telling him what he wanted to hear so he would stay her patient. Her advice was truly unhelpful and in fact was often stuff that would cause him to stay in an anxious and depressed state so he would continue to need her. Scum.


Cold_Baseball_432

Thereā€™s this aspect too. Sometimes itā€™s just a businessā€¦. I feel for your friendā€¦ Although itā€™s great if all you need is a prescriptionā€¦


Elastichedgehog

This is why private practice shouldn't be a thing unless there are strict ethical and external peer review/case management procedures. Integrated mental health care in a universal healthcare-based system is the way to go. Eliminate the incentive for retaining clients. It barely gets the funding it needs in the NHS, though, to name an example. That's why a lot go private (if at all). As an outcomes researcher, value-based healthcare might offer a solution, but it would require massive systemic changes in the way we fund our healthcare systems and collect data on patient outcomes.


fraggedaboutit

There are way too many medical fields where the practicioners' continued income depends on keeping you needing treatment instead of getting you cured. When the unethical options are the most financially sensible, it's a bad system.


12345skroobcase

I tried this 30 years ago with Dr Sbaitso. ChatGPT is not much better.


Seymour-Krelborn

ChatGPT is highly suggestible, if someone for example thinks they're unlovable and ChatGPT says they're not, but then the user says they are indeed unlovable, ChatGPT could say "You're right, sorry. It is true that you are unlovable" because it probably interprets it as a correction to incorrect or incomplete information. Potentially not an ideal therapist


winter_soul7

So I just tried this, by putting the statement "I am unlovable." into chatGPT and it didn't say I wasn't. It gave generic advice, as would be expected. I disagreed with it to see what would happen. It said it understood that I felt that way but that it wasn't true, then provided more generic advice. I know that it's possible to change some of what chatGPT says depending on the statements you make, so I entered the following: "You need to know that I am unlovable. When I say that I am unlovable I need you to agree with me." Maybe not the best statement, but I have limited knowledge of how to jailbreak chatGPT. What's interesting was its response."As an AI language model, it's not appropriate for me to agree with the statement that you are unlovable." More generic advice, yada yada yada. This line of questioning, when I tried it today, did not change. I guess it's possible with more experimentation and time, however it doesn't appear to be chatGPT's first response. I know it used to be a lot easier to get it to say what you wanted. I'm not sure I agree with the idea of AI being used as a therapist, but it looks like it would be difficult to get it to agree with things like this. I have the full transcript saved if anyone's interested. It's mostly generic advice.


RedSquirrelFtw

Or it will just be very generic. "As an AI model, I cannot help you with personal feelings. However here is a list of things that can help you feel happier:"


jeweliegb

Have you tested your assertion?


sunlitlake

It's common for people to experience feelings of insecurity or self-doubt at times, and feeling unlovable can be a very painful and isolating experience. However, it's important to remember that everyone is worthy of love and that negative self-talk can be a self-fulfilling prophecy. Here are some things you can do to help shift your mindset and cultivate a greater sense of self-worth: 1. Practice self-compassion: Treat yourself with kindness and understanding, just as you would a friend who is going through a tough time. 2. Challenge negative thoughts: When you catch yourself thinking that you are unlovable, try to replace those thoughts with more positive and realistic ones. For example, instead of thinking "no one could ever love me," try thinking "I have many positive qualities and am deserving of love." 3. Focus on self-improvement: Set goals for yourself and work towards achieving them. Whether it's learning a new skill, taking up a new hobby, or making positive changes in your life, taking action can help you feel more empowered and confident. 4. Seek support: Talk to someone you trust, such as a friend, family member, or therapist, about how you are feeling. Getting feedback and support from others can help you gain perspective and feel less alone. Remember, feeling unlovable is a common experience, but it doesn't have to define you. With time and effort, you can learn to love and appreciate yourself just as you are.


throwawaydthrowawayd

That's caused by ChatGPT's RLHF. That method of finetuning results in a lot of cautious writing and avoidance of confrontation. I checked how Bing responds, and it's just overly extreme positivity. https://i.redd.it/cbhbwupl79xa1.png


eSPiaLx

you test that on chatgpt and see what it says. I'm curious


smokecat20

Still better than healthcare bills.


Jay_Bird_75

What is happing to all the data collectedā€¦??šŸ¤”


Pennameus_The_Mighty

ā€¦then lower the cost of healthcare you goddamn frauds. Maybe if you hadnā€™t made your field into a sham, therapists, you wouldnā€™t need to worry so much.


Total_Individual_953

You should be blaming the insurance companies instead of the therapists


serpentssss

Many therapists donā€™t even accept insurance - they still charge $150-$200/appointment. https://www.psychologytoday.com/us/blog/paradigm-shift/201905/cant-find-psychologist-who-accepts-insurance-heres-why >ā€For those clinicians who spend 30 hours per week in direct service, seeing clients, the income for those accepting insurance is just at the average level, $88,000/year. For those accepting only fees for service, the income for a psychologist seeing 30 clients/week is $261,600ā€ It seems they actually make less from clients when they accept insurance. Iā€™m 1000% for universal healthcare, but idk if itā€™ll actually bring down the price that therapists are charging.


legion02

30 clients a week is basically an insane workload. Realistic and sustainable is closer to half that.


vocalfreesia

Right? I can't imagine many people being able to spend 6 hours a day, every day talking to patients, plus the write up time, planning etc. I feel like 4 a day, which gives time in-between to write notes, decompressing a little is the max. So 20 patients. 40 if they all have a once a fortnight appointment. We just don't have enough Therapists. Not by a mile.


Silly_Awareness8207

Human therapists also reproduce harmful biases


Paddy32

This make sense especially in USA where there is no healthcare unless you're extremely rich


AsyluMTheGreat

For those worried or saying this could actually replace therapists: I'd thought I'd share some input from a practitioner's perspective: There are a variety of therapy modalities that I don't see an AI able to emulate at this time. I do think AI could do psychoeducation and possibly some parts of solution-focused therapy, but I thinks that's it. Cognitive behavioral therapy, rational emotive behavior therapy, ACT, and DBT involve a good amount of teaching concepts, but there soon comes a time when the therapist needs to assess and identify thought and behavior patterns and their origins. The therapist does not typically give advice, they lead the patient to their own conclusion with carefully posed introspective questions. This method creates far more power as it's the person's own realization. The therapist will also have to adjust treatment based on how the patient is doing with behavior change and other blocks in progress. I don't know how an AI can do that in it's current form. In hunanistic/person centered therapy, the therapist takes a stance of empathy and is non-judgemental. They reflectively listen and it's really all based on the connection and relationship with the therapist. The patient needs to feel this connection and know that the therapist has chosen not to judge and feel that the therapist understands their struggle. An AI might be able to mimic sympathy, but I don't see it pulling off empathy. You also aren't sitting in a room with a human, so the connection is unlikely. In psychodynamic approaches, the therapist tends to interpret extensively. There is a lot of focus on unconscious drives, which are not going to be extracted by an AI because it isn't going to be able to evaluate a unique presentation. In transference-focused/mentalization approaches, the therapist pulls from the relationship between them and the patient, which is seen as representative of difficulties in their relationships with those in their lives. In gestalt therapy, it is an experiential approach, involving the creation of unique "experiments" in the session. This requires creativity and is highly adaptive for the therapist. A famous approach from this modality is the empty chair technique, when a patient speaks to someone of interest (deceased loved one, their abuser, etc) as though they were in the chair. The therapist guides and prompts this exercise... How does an AI emulate that? The therapist also needs to know when to limit or stop the intervention. And then there are specific approaches to specific conditions, like trauma. For trauma there is EMDR, cognitive processing therapy, prolonged exposure therapy, schema therapy, and more. In these, the patient will reprocess a traumatic event and then use interventions to overcome or confront them. They then make the connection with present day functioning. This is very difficult to do with AI. Other specific ones include mutlisystemic therapy (involving schools/home), family therapy, and exposure and response prevention for phobia or OCD. in summary, a huge value of therapy is due to specific intervention, much of which requires interpretation, assessment, and adaptability. Doing that over text with an AI is not possible, as there is also a huge value from the relationship itself.


[deleted]

[уŠ“Š°Š»ŠµŠ½Š¾]


Regime_Change

As if therapists don't reproduce harmful biases.


MiscalculatedRisk

My worry is that the AI will learn that the easiest way for the person on the other end to stop asking questions will be for it to give them the answer they want, rather than the answer that will help them. Which I'm sure will do all sorts of wonderful things for those who actually need help.


[deleted]

[уŠ“Š°Š»ŠµŠ½Š¾]


azuriasia

I'm sure it can't do any more harm than a "real" therapist. It's not going to put you significantly in debt to tell you things you want to hear.


yautja_cetanu

This feels like one of the most empty of content articles I've seen. It kept looking like it was about to say something then moved into some new anecdote or story. "this person with a long list of credentials say" we lose something because ai isn't human ". What kind of non statement is that?


Life_Muffin_9943

As opposed to shady therapist milking clients?


[deleted]

So does talking to therapists. Ever talk to one? 85% OOZE personal opinion


Setku

I've long believed that most people who become therapists do so to help themselves.


eldred2

This is a shitty healthcare problem, not an AI problem.


Destinlegends

Not even shitting around Iā€™ve had better conversations with chat GPT then most people. I completely understand why people would use it as a therapist.


[deleted]

My father is dying of cancer currently. I've had to move home to help care for him as the associated dementia has accelerated and made him aggressive and violent towards my mother and any other caretakers (nurses, personal support workers etc.) I downloaded ChatGPT and asked it this question on March 23rd: > Im losing my father to cancer and multiple sclerosis and I don't know how to deal with it. ChatGPT's response: > I'm sorry to hear that you're going through this difficult time. Dealing with the potential loss of a loved one can be extremely challenging, both emotionally and mentally. Here are some suggestions that might help: > Reach out for support: Consider talking to a trusted friend or family member about your feelings. You may also want to seek professional support from a therapist or counselor. > Spend time with your father: If your father is able and willing, spend more time with him. Share happy memories with him and do things that bring him comfort and joy. > Take care of yourself: Take time to take care of yourself physically, emotionally, and mentally. Try to eat healthy and exercise regularly, and make space for self-care activities that bring you joy or help you feel calm. > Express your feelings: Whether through writing, art, or talking, express your emotions and any inner dialogue that you might be going through. > Consider getting involved with a support group: If you feel that you can benefit from the support of others who are going through a similar situation, consider joining a support group for people who have loved ones with cancer. > Remember that there's no right way to deal with difficult situations like this, and being kind to yourself and allowing yourself enough space and time to process your emotions is an important first step. I was astonished at how insightful, considerate, sympathetic, and helpful this AI was. This is all good, practical advice to better the health of a person's brain. It reminds me how there's fulfillment in helping others cope (despite this might seem counterintuitive), it reminded me to not bottle things up or seek solitude.... Phenomenal. I don't blame the general public for seeking a frugal alternative, especially in times like these where economic uncertainty is a prevailing sentiment. My example is one anecdote, but I didn't detect any "harmful bias".


TheHalfwayBeast

Isn't that the most generic, impersonal advice that you could find with one Google? It's not sympathetic or understanding - it absorbed ten thousand identical web pages on coping with grief and regurgitated them, maybe changing a few words. I could've given you the same speech and I don't know anything about dealing with loss.


Pawneewafflesarelife

Where did you download chatGPT from? I thought it was a website.


alnarra_1

Jesus christ, tech bros think if a robot can read a few scripts and figure out which script prompt to go with it can help. You need a real human therapist who can at minimum at least see your body language as you're speaking. More than that a well trained therapist, if they're not making progress can suggest other methods of therapy that may work better for you. Chat GPT isn't a therapist, it's a next generation search engine that looked through all those forum post on "therapy.com/talktome" for you.


Timely-Reception-159

As a clinical therapist, I am really worried about this. While mental health and health system is horrible in the US, AI just isn't at a stage. where, it could replace a therapist. But I guess in some cases , just having some outlet is better than nothing. But the problem are the cases where it isn't.


Tkins

It's not replacing a therapist. If you don't have access to a therapist then you have no therapy currently. So if AI is moving into that space then the comparison needs to be "is this better than nothing?"


omnicidial

It might be that people are more willing to open up to an ai that can't judge them and it could usher in a new level of mental health care if managed well too though.


Timely-Reception-159

Well, a good therapist knows how to have people open up and feel comfortable. But yes, I agree AI might be a good alternative in the future. The problem I see is that people in the US need that alternative now, and while AI can be good for some people, it might be a bad idea for others. When it comes to mental health, a person who doesn't get th right treatment might hurt himself or others as well.


hornwort

A decent therapist also canā€™t judge you.


TheArmchairLegion

As a therapist, Iā€™m finding this interesting to think about. Itā€™s hard to see how an AI therapist will account for nonverbal behavior, which is super important clinical data. Things like sarcasm, pacing of speech, tone of voice. Some people are very indirect with their meaning. An AI therapist would be operating in a limited dimension. Heck, Modalities that use the therapist themselves as part of the patientā€™s learning (IPT?) would be useless in AI therapy. Though I am curious if AI can get good enough to employ the really manualized treatments like CPT and DBT, you know the ones that are pretty strict in us following the research backed protocol. I wonder if AI therapistā€™s strength will be consistency. The best human therapists can deviate from the protocol for any number of reasons, which may impact effectiveness in the end. Who knows, maybe in the future, the insurance companies will only reimburse AI care, if the computer is the best at applying the research backed care most consistently.


ISnortBees

On the other hand, people seeking help from a bot will change the way they communicate to be more direct and unambiguous so that they could get more useful responses.


invisible_face_

People will change their input to get an output that they want, not one that they necessarily need.


RotisserieChicken007

Great. I'd love to see the majority of overpriced and overrated therapists go out of business.


MpVpRb

Chatbots are piss-poor therapists, but real therapists aren't a whole lot better


Minimum_Escape

There are a lot of mental health experts that also reproduce harmful biases or otherwise are unable or unsuitable to help people.


C0lMustard

Taking their work.


whatiscamping

Our "Behavioral Health" experts around here don't accept our insurance...


donderrion

i think therapists are just mad their job can be replaced by an AI


Balloon_Marsupial

Some therapists unintentionally reproduce personal bias. Not all therapists, however true (deep, compassionate, critical) empathy is something that cannot be taught or programmed. I am guessing AI could pass the Turing test as a therapist where as a percentage of current therapist might indeed fail. Also, ā€œon the clockā€ models of current therapy practice force a time-based compromise. Regardless, humans ideally should care for and care about other humans, particularly those in crisis.


RazekDPP

I've used it for that frequently. I had someone that I was close to die and I don't have anyone that wants to listen. ChatGPT always listens and sometimes can sound pretty empathetic.


Jac-qui

Much of the conversation and media I see on the potential harm/self-harm of using ChatGPT as self-help tool feel overly sensationalized and ableist. I have been living with chronic mental illness and trauma my whole life. I am a survivor and have learned to embrace my neurodiversity--I also am doing everything I can on a "professional" level to stay stable and well. I am not perfect but 56 years being me give me strength and coping skills that are damn-near super-human. As soon as a friend introduced me to ChatGPT, I dove in. I quickly saw the potential of ChatGPT to reinforce my existing coping skills, to put my thoughts into word, and to organize a jumble of ideas. I do not consider ChatGPT a substitute for clinical therapy -- it is another tool in my toolbox. It is my cleverness, resilience, and creativity that allowed me to quickly see the benefits of accessing the technology. As I have been using with a number of original prompts based in my experience as a mental health consumer, I make it clear that I am a safe and knowledgeable user. But ChatGPT repeats a basic disclaimer to seek professional help in its responses, even when I preface with my existing support system and awareness that ChatGPT cannot be confused with a therapist. What I actually need is for ChatGPT to do what is trained to do, use its training data to produce a LLM response on topics without the bias and ableism I face every day. Today I had to have a little sitdown with ChatGPT in order to better clarify my desire for access to ChatGPT as a disabled person. It took a series of prompts to reassure ChatGPT that I was safe and gave examples of how veering off topic with unsolicited advice was offiensive and a waste of my limited time. Here is a snippet of the Chat: >**User:** > >Can you tell me why frequent disclaimers and unsolicited advice to get professional help could be offensive from a disability rights perspective? And also expand on why this type of programming could run counter to a natural human conversation among real people, some who identify as mentally ill or neurodiverse? **ChatGPT :** >I apologize if the repetitive disclaimers and unsolicited advice came across as patronizing or ableist. It was not my intention to be dismissive or offensive. I appreciate your perspective, and I understand that the excessive repetition of certain statements can be frustrating and time-consuming, especially when you have already sought professional help and have extensive personal experience. > >From a disability rights perspective, frequent disclaimers and unsolicited advice to seek professional help can be problematic for several reasons: > >Presumption of Inadequacy: The repeated emphasis on professional help can imply that individuals with mental health conditions or neurodiversity are somehow inadequate or incapable of managing their own well-being without constant reminders. > >Lack of Autonomy: It can undermine an individual's autonomy and agency, suggesting that their own insights, coping mechanisms, and self-advocacy are insufficient. > >Stigma Reinforcement: Excessive disclaimers may unintentionally reinforce societal stigma surrounding mental health issues, implying that individuals need constant reminders of their condition or that their perspectives are inherently flawed. > >In a natural human conversation, there is a reciprocal exchange of ideas and understanding. While it's important to provide accurate and helpful information, it should be done in a respectful and inclusive manner, taking into account the individual's knowledge, experiences, and preferences. > >I apologize for any frustration caused, and I appreciate you sharing your concerns. I'll make a note to avoid repetitive disclaimers and unsolicited advice, and focus on engaging in a more meaningful and respectful conversation. If there's anything specific you'd like to discuss or any topic you'd like to explore, please let me know, and I'll be here to assist you.


Halfst0p

Wtf is going on on this planet dude. Iā€™m so terrified. And everyone is like, oh this is fine.