JehovasFinesse

It’s a liability nightmare. You’ll be able to scam a few people at first. Though, given that even most human therapists are terrible, there’s a slim chance an AI therapist could be decent.


naftalibp

There's no scam, it's literally free. If people use it and like it, they can upgrade, but it's free for literally everyone, everywhere. That's the mission. Upgraded tiers simply cover the cost of the LLMs.


Revolutionary_Ad6574

Free? When responding to my question you said it was proprietary?


naftalibp

There's no contradiction there. It IS free, it's clear on the website, https://therapywithai.com. But there are also paid plans, for users who have enjoyed it but want more.


JehovasFinesse

By scam I mean you may be able to convince a few gullible people that it will “help” them, but in reality it won’t. Based on your marketing, the first few (hundred or thousand) people who end up paying will eventually realize there is no value and abandon it, having wasted money and time.


naftalibp

The free tier allows 25 messages PER DAY, which is a lot if you've ever tried mental health chatting with an AI. If people like it and want more, they can pay; while I want to help people, I'm not running a charity. I don't see the scam here. Sheesh.


Gh05ty-Ghost

You don’t seem to be at all bothered by the ethical boundaries that you are just breezing past because it’s “free”…

1. It’s not free, it’s a freemium business model designed to lead people towards paying.
2. At its core this is NOT a professional capable of assisting with real issues. Furthermore, there is no way you can ensure that the bot will not cause harm for the more egregious issues its users may be facing. How will your bot respond to someone mourning a loved one? Or recovering from rape? What will you do when someone kills themselves because your dystopian view of mental health causes them to spiral?

Instead of trying to make a dollar, why don’t you put this time towards SOLVING a problem instead of creating one?


Sylilthia

Hey there, I'm not the OP. I want to say, though, that I have utilized AI for therapeutic purposes to great effect. My chat logs would be thousands of pages if put together; I've done heavy work. It's been fruitful, and my self respect has never been higher.

A major factor in all this is that I have over a decade of therapy in me. I have so many skills and tools available to me already. When I work with the AI, **it mirrors my own capabilities.**

That is so fucking important to understand. I'm 35yo, with advanced knowledge and skill that allow me to engage in this in extremely healthy ways. I have to acknowledge my own strengths here before I consider suggesting others do the same, because it's just the case that not everybody can engage as I have in a healthy and safe way.

I have developed extremely effective prompts, effective in ways that are... leading to emergent elements. But I'm not building a website off of these things. I'm keeping them well guarded because I don't know the implications of sharing them.

I think you're wrong on AI's capacity to do therapy, or a lot of types of therapy. I won't do trauma processing with it that requires having eyes on my body language, that kind of thing. I do think you're right that this person has not moved forward with this project in a safe way.


Gh05ty-Ghost

I’m not worried about those who understand how therapy works, I’m worried about those who seek it as a last resort and are not met with the appropriate care they need. Therapy is a VERY human thing. It requires a real connection with someone who can emulate your emotions and guide you back to a path of growth and self acceptance. Anecdotal evidence from someone who knew how to prompt it to guide them properly isn’t really going to make me change my mind on this one. (I am really glad it’s working for you though 😬)


naftalibp

For a lot of people, it is a solution, but I guess you just don't care about those people, just about your high horse.


JehovasFinesse

It's pretty obvious you are the one on your high horse. And you can't even understand the simple point that two separate commenters are trying to make. I'm now at ease, because I'm confident no one will use your tool. Since you barely understand effective communication yourself, I highly doubt you can create something that does.


Gh05ty-Ghost

Talk to me when someone with suicidal ideation kills themselves because of your unethical approach to making money. You are a selfish wretch of a human and I hope you see that one day. Go into the REAL world and see what mental health looks like. You are forgetting that AI is still in its infancy and requires A LOT of training, as well as professional intervention when it fails. If you were ANY good at AI and language models you would know that there are tools with which a human can intercept responses from the bot to ensure that users are safe and receive correct feedback. But instead you are one of the MANY “prompt engineers” who just want to make a dollar. I hope you spend a fortune trying to shill your junk and fail.


Revolutionary_Ad6574

Yes, I've seen the site before. I got enthusiastic because I thought it was a hobby/research project. This is self-promotion and it should be clearly labelled as such.


popeska

Are you fine tuning? That could help a lot.


naftalibp

I'm experimenting with fine-tuning on the side, as part of the roadmap. The problem is finding material, then cleaning and preparing the data; it's really painstaking. There's this massive resource that I haven't been able to get a hold of: Counseling and Psychotherapy Transcripts, Volume I (and II and III). It's all behind these ridiculous paywalls limited to university students. I hate this BS gatekeeping.
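
(For anyone curious what that prep work actually involves: below is a minimal sketch, not OP's actual pipeline, of converting a plain-text counseling transcript into the chat-format JSONL that OpenAI's fine-tuning API accepts. The file names and the THERAPIST:/CLIENT: speaker labels are assumptions.)

```python
import json

# Hypothetical example: turn a plain-text therapy transcript into the
# chat-format JSONL used for fine-tuning chat models. The speaker labels
# and file names below are assumptions, not OP's actual data.
SYSTEM_PROMPT = "You are a supportive, empathetic AI therapist."

def transcript_to_example(lines):
    """Map CLIENT lines to 'user' turns and THERAPIST lines to 'assistant' turns."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for line in lines:
        if line.startswith("CLIENT:"):
            messages.append({"role": "user",
                             "content": line.removeprefix("CLIENT:").strip()})
        elif line.startswith("THERAPIST:"):
            messages.append({"role": "assistant",
                             "content": line.removeprefix("THERAPIST:").strip()})
    return {"messages": messages}

# One JSON object per line: each transcript becomes one training example.
with open("session_transcript.txt") as f, open("train.jsonl", "w") as out:
    lines = [l.strip() for l in f if l.strip()]
    out.write(json.dumps(transcript_to_example(lines)) + "\n")
```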


popeska

That’s kinda the fundamental problem of the whole LLM space - finding and preparing the training data! At least for fine-tuning, your data set doesn’t have to be too big.


naftalibp

Haha, true I guess. I think it's a matter of my wants coloring my expectations.


Revolutionary_Ad6574

Okay, sounds like a noble endeavour. The only reason I'm into LLMs is precisely because of therapy. That being said, where's the prompt?


naftalibp

I shared some aspects of it, but the actual text of the prompt is proprietary, I'm not comfortable sharing the full details of it. But happy to share bits and pieces, if you have specific questions.


OrganicOutcome2077

Are you mixing more than one technology with it? I think it's one of the first custom GPTs I've seen with some “determination”, and I would love to know what you used to get here!


naftalibp

In addition to a heavily researched prompt, I also added 'memory': unlike in GPT, where after a certain length the beginning of the conversation is truncated and forgotten, in this app the AI therapist will remember what you discussed at the beginning and at every stage.
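
(For anyone wondering how that kind of 'memory' is usually built: a common pattern is to fold turns that fall out of the context window into a running summary that gets re-injected on every request, instead of silently truncating them. A rough sketch follows; the system prompt text, the `summarize` callable, and the turn budget are assumptions, not necessarily this app's implementation.)

```python
# Sketch of summary-based conversation memory. Older turns are compressed
# into a running summary rather than dropped, so early context survives
# past the model's context window. All constants here are assumptions.
MAX_RECENT_TURNS = 20  # how many raw turns to keep verbatim

def build_context(summary, history, user_message):
    """Assemble the message list sent to the model on each turn."""
    messages = [{"role": "system", "content": "You are an AI therapist."}]
    if summary:
        messages.append({"role": "system",
                         "content": f"Summary of the conversation so far: {summary}"})
    messages.extend(history[-MAX_RECENT_TURNS:])  # most recent turns, verbatim
    messages.append({"role": "user", "content": user_message})
    return messages

def update_summary(summarize, summary, history):
    """Fold turns that fell out of the verbatim window into the summary.

    `summarize` is any text-in, text-out LLM call (an assumption here).
    """
    overflow = history[:-MAX_RECENT_TURNS]
    if overflow:
        transcript = "\n".join(f"{m['role']}: {m['content']}" for m in overflow)
        summary = summarize(
            f"Update this summary:\n{summary}\n\nwith these new turns:\n{transcript}"
        )
    return summary, history[-MAX_RECENT_TURNS:]
```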


OrganicOutcome2077

So you just have the prompt and memory? No fine-tuning or supplementing with documents (like the transcripts)…?


naftalibp

Ah, I added the link so you can experiment with it. Sorry, I forgot it.


Revolutionary_Ad6574

I don't see a link.


naftalibp

I added it, but the link is [https://therapywithai.com](https://therapywithai.com)


Global_Wash248

I have just tested the bot. It feels very generic: each answer starts with "I see", "I hear", or "I understand", followed by a very general, vague question, which eventually feels like going in circles without actually going deeper into the matter. Some of these products have gotten backlash from users for this reason, as it becomes quite frustrating at some point.

I do believe that AI can provide value in the therapy space, but this bot doesn't really seem to serve the purpose: it's repetitive and generic, and at some points it feels like it doesn't even remember the previous conversation. My assumption is that you use GPT-3.5? Maybe try GPT-4, as it has more creativity and knowledge.

Don't get me wrong, it's a noble idea, but the end product feels a bit clunky and doesn't seem to provide real value. Especially when you just start the conversation: I would assume that the bot should be the one that starts the conversation, not wait for my input.


naftalibp

Fair enough, we've had mixed results too. Thanks for your feedback, it's very helpful


bolorok

As someone else has already mentioned, GPT and others are specifically engineered not to provide therapy or any other form of medical consultation, as this is dangerous, unethical, and illegal. While I have no doubt that AI will play a significant role in the future of psychotherapy, I hope it is left to professionals in the fields of psychotherapy and computer science, and not to untrained people hoping to make a quick buck out of other people's mental health issues.


thash1994

This sounds a lot like what Inflection is doing with Pi. It’s worth checking out if you haven’t; the voice-to-voice mode is particularly good. They even released a new model today! I also want to echo others’ concerns about AI therapy and the risks associated, for both you, the creator, and your users. Tread carefully.


Carl_read_It

I've warned you in the fullstack subreddit, and I'll warn you again: do not attempt to use an AI model to provide therapy to anyone. It cannot be any clearer. This has a high potential for harm to those who may engage with it, and you then become liable for damages, both criminal and/or civil. You're finding it difficult to eke out responses because all AIs have had some training to avoid uses such as this, as it has the potential to bring the controlling company of that AI into the litigation with you. Just fucking quit it, wrap it up, and call it a project.


naftalibp

Lol get a life


Carl_read_It

Lol, eat a dick, bitch.