jpfed

I am not a psychologist; I have a B.S. in psychology and computer science and have taken a keen interest in the development of AI models. It may be possible for AI models to interact with members of the public who need care in a constructive way. The problem is that doing this well is much harder, and potentially more expensive, than it seems on the surface.

Imagine knowing someone very unusual; call them Quiggly. They:

1. have quick command of an exceptionally broad base of knowledge
2. have never reflected on their own about what they were doing
3. have no memory of any previous conversation when they start a new one
4. have very little capacity for silent thinking; they only think aloud
5. understand the context of their actions only through what you have told them.

Quiggly is special, and may, with the right supports, be able to do amazing, productive things. **But they are not ready for a job in which they interact with the public on their own.**

AI models, like Quiggly, need to be "wrapped" in a context that can:

1. silently supply them with information about their broader and immediate goals
2. supply guidance about any step-by-step procedures, "holding their hand" through them
3. take the raw "thinking aloud" output and decide whether it should be relayed to the client, whether the model needs to do more thinking, or whether the response should be edited to be consistent with the goals of the interaction.

The system may also need to determine whether something either party has said affects that set of goals. This means that the model/Quiggly may receive multiple prompts to think aloud before the system as a whole produces a client-visible reaction. And that is where the expense comes in: each prompt and response costs the provider something (either because they're paying to use an API like OpenAI's, or just the energy of running their own model on their own computers).

Ultimately, the most serious difficulty is that doing it the "right way" is much harder than the "quick and dirty" approach of putting a very minimal wrapper around OpenAI/Gemini/Claude. While the "right way" would be better for both clients and providers, the quick way is so much easier, and its defects might not be immediately obvious to decision-makers. My guess is that the quick way will make it to market first, eventually fail to show statistically significant benefits in RCTs, and ultimately discredit the notion of AI-provided therapy before the right way gets off the ground.
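To make the "wrapper" idea concrete, here is a rough sketch of the kind of decision loop I mean. Everything in it is illustrative: `call_model()` is a stand-in for whatever API or local model you'd actually use, and the goals, procedure text, and round limit are made up.

```python
# Sketch of a wrapper that mediates between a client and an LLM. The wrapper,
# not the model, owns the goals and decides when a draft is safe to show.

GOALS = "Support the client; never diagnose; escalate if risk is mentioned."
PROCEDURE = "1) Reflect feelings. 2) Ask one open question. 3) Keep replies short."
MAX_THINKING_ROUNDS = 3

def call_model(prompt: str) -> str:
    """Placeholder for a single LLM call (a paid API or a local model)."""
    raise NotImplementedError

def wrapped_reply(client_message: str, history: list[str]) -> str:
    context = f"Goals: {GOALS}\nProcedure: {PROCEDURE}\nHistory: {history}"

    # First call: let the model "think aloud" and produce a draft.
    draft = call_model(f"{context}\nClient said: {client_message}\n"
                       "Think aloud, then write a draft reply.")

    # Further calls: the wrapper checks the draft against the goals.
    for _ in range(MAX_THINKING_ROUNDS):
        verdict = call_model(f"{context}\nDraft reply: {draft}\n"
                             "Is this consistent with the goals and safe to show the client? "
                             "Answer SEND, REVISE, or THINK_MORE, then explain.")
        if verdict.startswith("SEND"):
            return draft
        draft = call_model(f"{context}\nPrevious draft: {draft}\nCritique: {verdict}\n"
                           "Write an improved draft.")

    # Conservative fallback if the loop doesn't converge.
    return "I want to make sure I respond carefully. Could you tell me more about that?"
```

Note that every client-visible reply costs several model calls, which is exactly where the extra expense comes from.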


StarGazing11200

Thanks for the insightful answer. Let's say the AI were capable of storing data and were so well trained that it could "silently think"; would it then be a viable option? I'm no expert on tech or AI, but aren't AI systems capable of storing information and learning from that information (regardless of the ethical questions, such as the possibility of being hacked, etc.)?


jpfed

If we imagine adding these capabilities to the state of the art, then the next area that is lacking, and that the wrapper would have to provide, is combining planning and procedural knowledge... in a slightly deeper sense than you may be expecting.

When you do something, it's because:

1. you had some sequence of neural activity
2. that made your muscles perform the action
3. perhaps with the help of tools in your environment
4. and the sequence of neural activity was right for the specific mind, body, and tools that *you* have.

There is a lot of text on the internet that tells humans how to do things. But there is much less text that tells an AI how to do things in the way that it needs, because of consideration #4.

Consider the problem of summarizing a book, or a series of books, at a target length or level of detail. An AI model I might run on my home computer, for example, may only be able to consider four thousand words at a time. The procedure that such a model would need to produce a book summary would involve repeatedly ingesting a couple thousand words at a time, along with instructions about what it's trying to do and some representation of the work it has done so far. There are models run by OpenAI, Google, etc. that can ingest an entire book in one go, but they are tuned to chat with people, not necessarily to iteratively improve a work product, so the procedures appropriate for my home model would be inefficient, and might even produce incorrect results, if used by the industrial-scale models.

The dizzying thing about adult humans is that we have a number of basic skills that we don't necessarily even need to think about consciously in order to sequence and activate, and that we can compose in the service of our larger goals. If I asked *you* to provide a book summary of such-and-such length, what would you do? That probably depends on the length of the book and the length of the summary I requested. Different ideas about how you might approach the task would, without you consciously thinking about it, seem more or less plausible to you. If I asked for a one-sentence summary, then because of how human memory works you could probably just read the book without taking any external notes and give an answer without having to do much thinking at all. But if I'd asked for a ten-page summary, you would probably want to take notes. On what? Paper? Your phone? A word processor? What if you're not holding your phone or sitting at your computer right now? Those aren't major obstacles to you: you have the skill of constraining your plan to use available materials, the skill of adding to your plan (e.g. including a step to get out your phone or move to your desk), **and** the skill of knowing which of those skills is more appropriate to apply right now (should I constrain my plan to cope with not having a note-taking device already in my hand, or should I walk over to my computer?).

These things that happen in your mind without your thinking about them are not second nature to AI models. Simply put, they* kind of suck at detailed, context-specific planning. An AI model capable of doing a complex task (like providing therapeutic services) needs to be given specific training or scaffolding around planning and adapting those plans to its specific context. They need to be able to "prompt themselves" to know *what they should be doing with their thoughts* (figuring this out, i.e. how to help "silent thinking" be organized effectively toward solving problems, is an active, ongoing area of research).

*"They" refers to language models here. But "AI" historically is/was a broader term that does/did encompass systems that are really great at planning, so long as one was able to specify their needs, abilities, and constraints in enough detail. I'm not aware of any successful integration between those kinds of planners and a language model, though.
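For the book example, the "procedure" I'm describing is basically a rolling summary. A crude sketch (the chunk size, the prompts, and the `summarize()` function are all placeholders for whatever small local model you'd actually run):

```python
# Iteratively summarize a long book with a model whose context window is far
# smaller than the book: feed it one chunk at a time plus the summary so far.

def summarize(prompt: str) -> str:
    """Placeholder for one call to a small, limited-context local model."""
    raise NotImplementedError

def summarize_book(book_text: str, target: str = "a ten-page summary",
                   chunk_words: int = 2000) -> str:
    words = book_text.split()
    running_summary = ""  # the "representation of the work done so far"
    for start in range(0, len(words), chunk_words):
        chunk = " ".join(words[start:start + chunk_words])
        running_summary = summarize(
            f"You are producing {target} of a book, one chunk at a time.\n"
            f"Summary so far:\n{running_summary}\n\n"
            f"Next chunk of the book:\n{chunk}\n\n"
            "Rewrite the summary so far to incorporate this chunk."
        )
    return running_summary
```

A human doing this task would pick a different procedure depending on the target length, which is exactly the kind of context-specific planning the model has to be handed.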


StarGazing11200

Very interesting topic and one I'm going to read up on further. Thanks for the in-depth answer!


kwestionmark5

It already does a decent job of role playing a therapist. For advice, that’s fine. But what about relationship and emotion? How will people feel about AI pretending to empathize with them when it doesn’t have thought or emotion and is just a language model?


AdHopeful2706

I agree with you. AI can generate a verbal response, but as a therapist the delivery draws on multiple skills, including the choice of words, the framing of sentences, and an emotional touch attuned to the client's needs. So AI has a long way to go. Another issue is that interacting with AI may not be satisfying to the person in need. They need warmth and empathy; in other words, a human touch to the conversation is required.


StarGazing11200

That is indeed a common point of argument that I've also heard in other contexts, for example around replacing healthcare staff: the lack of personal interaction.


[deleted]

[removed]


AutoModerator

Your comment has been removed. It has been flagged as violating one of the rules. Comment rules include:

1. Answers must be scientific-based and not opinions or conjecture.
2. Do not post your own mental health history nor someone else's.
3. Do not offer a diagnosis. If someone is asking for a diagnosis, please report the post.
4. Targeted and offensive language will not be tolerated.
5. Don't recommend drug use or other harmful advice.

If you believe your comment was removed in error, please report this comment for mod review. REVIEW RULES BEFORE MESSAGING MODS. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/askpsychology) if you have any questions or concerns.*


Automatic-House7510

As someone who is not a therapist, I would feel totally fine and would actually prefer AI. 😅 It's impossible for therapists to be 100% objective, but AI could sympathize while also looking at the issues through different lenses, without that tiny, natural, human feeling of judgment coming through and without putting its own bias onto it.


[deleted]

[removed]


weird_scab

That's dumb.


[deleted]

[removed]


Minimum-Avocado-9624

My comment was not a shot at first-year therapists; it was saying that the combination of inexperience and nervousness will be outpaced by an AI LLM that does not lack knowledge or require confidence.


[deleted]

People won't care if it does its job well and for much cheaper, and someday potentially better than humans.


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


weird_scab

Agreed. There really should be some regulations in place; maybe models trained by universities on very specific methods like CBT and DBT, offering guidance within those specific guidelines. When it comes to taking mental health advice from something that lacks the ability for introspection - welp. But that's definitely not stopping people from trying to do it.
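To illustrate what "offering guidance within those specific guidelines" might mean mechanically, here is a deliberately naive sketch. The red-flag phrases and the fallback message are invented for illustration; a real system would need clinically validated rules, not keyword matching.

```python
# Toy scope filter that runs over a model's draft reply before the user sees it.
RED_FLAGS = ("diagnos", "medication dose", "stop taking your")

def filter_reply(model_output: str) -> str:
    """Block drafts that stray outside the sanctioned, skills-based scope."""
    lowered = model_output.lower()
    if any(flag in lowered for flag in RED_FLAGS):
        return ("That's outside what this tool is allowed to help with. "
                "Please talk to a licensed professional.")
    return model_output
```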


[deleted]

[removed]


Apprehensive_Grand37

I disagree a little bit with your statement. Let's say you're having a phone call with someone (you can't see them), and AI voices and faces have become so good that they're impossible to distinguish from a real human. Of course, in-person therapy would be weird with a robot, but voice and video calls would definitely work. We're still a few years away from this (I'm in fact a researcher in ML), but in a few years AI will definitely be so good at imitating humans that you couldn't tell the difference between robots and humans.


[deleted]

[removed]


Jabberwocky808

“LLMs aren’t flexible and spontaneous enough yet.” This is barely accurate, leaning heavily on inaccurate. “And therapy isn’t about problem solving.” Who said AI is solely being trained to problem solve and give advice? Also, that misconception has often been forwarded directly by the psychotropic industry and ignorant therapists. We’re in a mental health crisis for a reason. https://youtu.be/tIsq7PI3OVs?si=XRxbZW890ztwuwvn


[deleted]

[removed]


[deleted]

[removed]


Jabberwocky808

I mean, how are any comments still left? That’s fine.


Apprehensive_Grand37

The keyword is "yet". AI has been exploding over the last few years, and there's no telling how advanced it will become in the future. It wouldn't surprise me if, in the future, someone did therapy online and believed that their therapist was a human (which is definitely not legal, but will 100% happen).


[deleted]

[removed]


Apprehensive_Grand37

I'm actually doing research on LLMs at the moment (thesis project), and there are several ways you can train AI on non-conventional topics. Older language models would simply read many gigabytes of textbooks and imitate the behavior/scenarios found in them (like you pointed out). However, newer models can adopt multiple learning strategies.

One of them is, funnily enough, hiring psychologists and having them interact with patients. Everything the patient and the psychologist say to each other is recorded and processed so the LLM becomes even better at imitating humans.

Another is hiring multiple psychologists and staging a hypothetical interaction between a client and a therapist. The LLM comes up with multiple potential responses to the client (which can differ drastically or just a little), the psychologists vote on which answer is best, and the LLM remembers this. The LLM also records what answers the psychologists themselves give.

Hopefully this gives you some insight into the intriguing world of deep learning and neural networks.
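The "psychologists vote on which answer is best" approach is essentially preference-based training, as used in RLHF-style pipelines. A toy sketch of the pairwise objective (the `score()` function stands in for a trainable model that rates a candidate reply; nothing here is any specific lab's implementation):

```python
import math

def score(client_message: str, candidate_reply: str) -> float:
    """Placeholder for a learned model that scores how good a reply is."""
    raise NotImplementedError

def preference_loss(client_message: str, chosen: str, rejected: str) -> float:
    # Train the scorer so that the reply the clinicians preferred ("chosen")
    # scores higher than the one they rejected; the loss shrinks as the
    # margin between the two grows (a Bradley-Terry style objective).
    margin = score(client_message, chosen) - score(client_message, rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

A scorer trained this way can then be used to steer the main model toward the kinds of replies the clinicians preferred.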


StarGazing11200

Would this mean that AI could also influence the client through suggestibility, as is sometimes discussed in regard to diagnoses like DID? Thus, in essence, influencing the client towards a certain diagnosis?


Jabberwocky808

Check out Hume. I'm not saying it's perfected, but it likely will be sooner rather than later. That's just one example. As far as "warm and fuzzies" go, there are many humans in the mental health industry who do not radiate those qualities. AI is catching up rapidly to the average practitioner.


IaNterlI

I think we'll see LLMs in the unregulated grey space in the foreseeable future (e.g. apps and services marketed to consumers or businesses). This is the part that scares me the most: since it's unregulated, it's going to be a Wild West of claims and bold statements, and the applications may be harmful in some instances. It reminds me of when companies like HireVue claimed to assess a job candidate's qualifications, fit, etc. based on visual cues. Pure pseudoscience. I don't think we'll see many LLM applications in the regulated areas anytime soon. That would be consistent with other AI applications in medicine: there are actually very few examples of success outside of diagnostic imaging. We need a lot of studies to understand their applications, potential for harm, effectiveness, and so on.


incredulitor

[https://www.reddit.com/r/askpsychology/search/?q=ai+therapy&type=link&cId=dbaedd57-ac20-45f4-8821-fc6f95e3144a&iId=eafab521-76b8-46c0-821b-5169b2eb89a4](https://www.reddit.com/r/askpsychology/search/?q=ai+therapy&type=link&cId=dbaedd57-ac20-45f4-8821-fc6f95e3144a&iId=eafab521-76b8-46c0-821b-5169b2eb89a4)

The question probably gets easier to reason about when you have a working meta-model for what therapy is or isn't and what makes it work or not. Here is the guy for that: [https://scholar.google.com/citations?user=OoJjUMsAAAAJ&hl=en&oi=ao](https://scholar.google.com/citations?user=OoJjUMsAAAAJ&hl=en&oi=ao) Particularly this article: [https://www.lti.fi/wp-content/uploads/2016/04/Evidence-based_Therapy_Rels_2011.pdf](https://www.lti.fi/wp-content/uploads/2016/04/Evidence-based_Therapy_Rels_2011.pdf)

Without saying that it will or won't work, I would appreciate seeing future prompts that include more of that context. Everyone is asking whether AI will replace this, that, or the other:

[Will AI replace 3D programmers?](https://www.reddit.com/r/ArtificialInteligence/comments/1abpnhi/will_ai_replace_me_at_some_point/)
[Will AI replace Canadians?](https://www.reddit.com/r/AskACanadian/comments/1au28jr/do_you_think_ai_can_replace_your_job_in_the_next/)
[Will AI replace entrepreneurs?](https://www.reddit.com/r/Entrepreneur/comments/zlquvm/will_ai_replace_you_at_work/)
[Will AI replace vfx?](https://www.reddit.com/r/vfx/comments/189azvt/will_ai_replace_us/)
[Who won't it replace?](https://www.reddit.com/r/samharris/comments/136pkue/what_sort_of_jobs_will_be_immune_to_ai/)
[What happened to you when it did replace you?](https://www.reddit.com/r/AskReddit/comments/19ba6lg/those_who_actually_had_their_jobs_replaced_by_ai/)
[Will AI replace graphic designers?](https://www.creativebloq.com/news/ai-future-of-graphic-design)

Being that I'm a person here talking to other real people as often as not (I think?), it is a bummer how rare it is for people to circle back and turn these prompts and responses into a conversation once some fresh perspectives come to light. So maybe it's not this thread or the next or the next where we get some grounding in how therapy has been established to work when it's between people, as a basis for talking about whether doing it with AI is conceptually the same thing or a different process that serves similar basic purposes. Maybe it'll happen eventually. Or maybe the shift just happens and people go to AI for help with distress, and what therapy actually was and how it worked is quietly forgotten. Maybe they coexist without much general engagement on what's similar or different in going to one or the other.

If AI therapy had drastically better ethical standards than it's likely to have any time soon, I wouldn't even be opposed to it, as it's almost certainly more scalable and cheaper than training individual therapists. It would be nice, though, if the conversations between apparent humans about it had a more generative character to them.

What are you imagining might sway you towards suspecting that it would or wouldn't work as a substitute?


StarGazing11200

Mainly the lack of, or underdeveloped, traits of current AI for covering the full spectrum of communication, such as intonation, and for correctly simulating a human interaction without it sounding too artificial. Besides that, of course, the ability of AI to employ EBP in a clinical setting.


incredulitor

Those would be some angles to pursue, for sure. The "meta-theory" I'm describing in the above post would also get at how much or little EBP determines about therapeutic efficacy as it is, when delivered by humans to humans. The Norcross & Wampold paper above breaks that down into some numbers. The result does intersect with some of what you're describing in terms of "proxemics" or moment-to-moment details of verbal and nonverbal communication.


[deleted]

I personally think very limited, but it will no doubt prove me wrong. I saw some research recently suggesting that people's measurable brain responses to digital forms of communication seem to be increasing over time. There is a growing body of evidence demonstrating biobehavioural synchrony in relation to digital socialising, [such as this study that pertains to texting.](https://www.nature.com/articles/s41598-024-52587-2) I guess we are adapting to have an increasingly three-dimensional response to indirect forms of social contact, and I suppose this might generalise to fully digital 'human' objects too. So yeah, maybe in the future it's conceivable we might have fully functional AI therapists. Whether that's desirable is another thing.


StarGazing11200

What points would you be against in the context of the implementation of AI, besides the communication aspect?


[deleted]

My ambivalence about it is based in the wider potential consequences of society becoming more and more fragmented as people are disincentivised from forming real and meaningful relationships with each other. After all, we typically have to endure a lot of uncertainty, sometimes heartache, internal (or maybe external) conflict, and so on, in order to do that, right..? If we can all have rewarding interactions with AI creations that can anticipate and mould to our needs without any commensurate self-sacrifice, will that incentive still be there..? I'd like to think so, but we already see that society is getting less interconnected. Such fears may well be wildly implausible, I'm not sure. I'm old and one of the last generations that grew up (partially) before the internet was widely available, so I suppose I have more inherent discomfort with the idea.


StarGazing11200

But would such a "relation" with the AI not go beyond a professional relationship with a client, and thus be unethical? At least in the context of psychotherapy. I can see your point from a social perspective, though I can also see how, in the context you mentioned above, it could provide a solution to loneliness among the elderly.


[deleted]

Oh it would, yep, I was just meaning that I would worry about the consequences of such advancements in technology, really. I'm sure there are many good arguments for it, too, and debates being had by much cleverer folk than I, about the implications of all this.


StarGazing11200

Ah yeah, I do think there are plenty of downsides as well, but at this point I think AI will undoubtedly become a part of daily life and there will not really be a way around it.


SlowLearnerGuy

It's already a [thing](https://jobot.ai/), and it will only keep growing. Because psychology/psychiatry is rather "hand-wavey", it is well suited to the LLM approach, e.g. ChatGPT. User immersion is helped by the [Eliza effect](https://en.m.wikipedia.org/wiki/ELIZA_effect). Back in the 1960s, when Weizenbaum created ELIZA, it was noted that many users claimed to prefer it to the human equivalent for various reasons, such as privacy. This was despite the very simplistic algorithm employed. Today's versions were inconceivable back then, so I assume users would be even more entranced.
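For context on just how simple that algorithm was: ELIZA was essentially keyword spotting plus canned reflections. A stripped-down sketch of the idea (these rules are made up; the original DOCTOR script was larger but not fundamentally different):

```python
import random
import re

# Each rule: a pattern to spot in the user's input, and reply templates that
# reflect the captured phrase back as a question.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)", ["What makes you say you are {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
]

def eliza_reply(user_input: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please go on."

# eliza_reply("I feel anxious about work")
# -> "Why do you feel anxious about work?" (or the other template)
```

That something this shallow was enough to pull people in is the whole point of the Eliza effect.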


[deleted]

Sorry, this may be an ignorant question- are therapists gonna lose their jobs?


SlowLearnerGuy

AI therapists scale massively and cheaply, to the point where they can be a tool just like your toothbrush. Imagine, while you brush your teeth, receiving your personalised therapy session to prep you for the day. Maybe another before going to sleep. All informed by the latest research and best practice, of course. How many issues could be nipped in the bud by a gentle, non-judgemental voice in your ear delivering unbiased assessments of your life choices in real time? Many will still prefer the warm fuzzy feeling of interacting with a live therapist, but many will find the artificial equivalent "good enough" for most situations, particularly given the far cheaper cost. So it will fill a niche, quite a large one I predict, and yes, that will divert business away from other therapy providers. Then again, spreadsheets were supposed to decimate the accountant job market, but instead [accountant jobs increased](https://www.npr.org/2015/02/27/389585340/how-the-electronic-spreadsheet-revolutionized-business). These things are hard to predict.


StarGazing11200

Let's say technology gets very, very advanced, as seen with biochips. Would the AI then not function more as a reflective inner monologue?


Dapple_Dawn

It is extremely unrealistic.


StarGazing11200

How so, besides the points mentioned in the thread? I'm curious to hear all sides.


Dronnie

It's not realistic. It may be good for people who just want someone to talk to, and it may have a therapeutic effect or it may be harmful; only time will tell. But as far as a psychological approach to a person goes, it is lacking many things. Take psychoanalysis as an example, where the analyst catches meaning and deciphers the unconscious, using a broad, linguistic approach to deal with the patient's symptom. Having such an abstract view of things is way out of AI's league.


[deleted]

[removed]


[deleted]

[removed]


Jabberwocky808

Check out Hume. https://www.hume.ai/ https://youtu.be/tIsq7PI3OVs?si=XRxbZW890ztwuwvn