AI therapy is a TERRIBLE idea #ai

Comments

“But I didn’t use OpenAI, so it’s fine”
No, you just used an application that almost certainly calls OpenAI’s API under the hood, so now two tech companies have your deepest, darkest secrets.
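For context, "calls OpenAI's API" usually means something like the following. This is a hypothetical sketch of a typical wrapper app, assuming the official `openai` Python SDK; the model name and system prompt are made up for illustration. The point is that your message leaves the app and travels to OpenAI's servers, so both companies see it.

```python
# Hypothetical sketch of a typical "AI therapist" wrapper app.
# Assumes the official openai Python SDK; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def therapy_reply(user_message: str) -> str:
    # Your "private" session is just an HTTPS request to OpenAI's servers:
    # the wrapper app sees your message, and so does OpenAI.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a supportive therapist."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```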

tyrosinetek

IMO, the bigger problem with AI therapists is the _very real_ possibility that when you tell them your problems, they send you directions to the nearest bridge.

GSBarlev

We're basically back at the start of social media again: people posting anything because it doesn't feel real yet. The sooner they learn, the better.

jordanmcgrory

Anyone stealing info from my AI therapy sessions is gonna need therapy too. That’s retribution enough for me.

kiaart

I love the permanent look in your eye of "what the hell did I just hear people are doing with AI"

TheMotlias

I remember a story about a computer scientist in the 1960s who created a chatbot therapist (Joseph Weizenbaum's ELIZA). All it would do was ask how XYZ made you feel, no matter what you wrote. He eventually shut down the project because testers were getting addicted to talking with the bot, and he was alarmed by the psychological ramifications.
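The "how does X make you feel" trick is just pattern matching plus pronoun swapping, with a deflection as the fallback. Here is a minimal sketch of the idea in Python; the rules are illustrative, not Weizenbaum's actual DOCTOR script.

```python
import re

# Minimal ELIZA-style responder: regex matching plus canned reflection.
# These rules are illustrative, not Weizenbaum's original DOCTOR script.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(message: str) -> str:
    m = re.search(r"i (?:feel|am) (.+)", message, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.search(r"my (.+)", message, re.IGNORECASE)
    if m:
        return f"Tell me more about your {reflect(m.group(1))}."
    # Default: deflect back to the user. No understanding required.
    return "How does that make you feel?"

print(respond("I am anxious about work"))
# -> "Why do you feel anxious about work?"
```

A few dozen rules like these were enough to make 1960s testers open up to the program, which is exactly what worried Weizenbaum.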

Countessdunne

I’m okay with the whole chat gpt model knowing my ex is terrible.

michaelwinter

Fun fact: one of the first chatbots, ELIZA (1966), was built to be a virtual therapist

the.duck.is.ronin.

the word "author" doing some heavy lifting here

Unifather

There are so many things said here that reflect a complete misunderstanding of how these systems work:

1.) Messages you send _might_ get used as training data, but the model is not retrained after every comment you make. Within a session, your messages only live in the context, and the context window is only so big (see the sketch after this list). Other people's messages are not included in YOUR context, but yes, you should assume that your context is not private.

2.) LLMs don't simply "spit out their training data" on demand; that's not how the system works. A model *can* reproduce training data verbatim when given the right prompt, but you'd have to lead it by the nose to get that result. The easiest way to manufacture a study showing this is to train a LoRA on a single article, prompt the model with the start of that article, and watch it "predict" the rest. That's like taking the Mona Lisa, turning it into a Photoshop brush, clicking once on a blank canvas, and saying "wow, it output the Mona Lisa!"

3.) As point 1 says, your context window is "private" in the sense that another user won't get a response generated from your context. Yes, messages you send may be used to train a future model, but that requires a whole new training run. So messages you sent to GPT-3 could end up in GPT-4's training data, but messages you send to GPT-3 won't change GPT-3 itself. Also keep in mind that a future model "learning" from your messages is learning from everybody else's at the same time, which ultimately dilutes and obfuscates your data.
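Point 1 is easy to make concrete: a chat "remembering" you is just the client resending your recent messages, and once the window fills up, the oldest ones silently fall off. A toy sketch of that truncation logic follows; the token counting (whitespace words) and the 50-token limit are fake stand-ins, since real systems use proper tokenizers and far larger windows.

```python
# Toy illustration of point 1: "memory" is just a sliding context window.
# Token counting here is fake (whitespace words) and the limit is tiny;
# real models use tokenizers and larger limits, but the idea is the same.
CONTEXT_LIMIT = 50  # pretend the model accepts at most 50 tokens

def count_tokens(message: str) -> int:
    return len(message.split())

def build_context(history: list[str], new_message: str) -> list[str]:
    """Keep the newest messages that fit; silently drop the oldest."""
    context, used = [], count_tokens(new_message)
    for msg in reversed(history):
        used += count_tokens(msg)
        if used > CONTEXT_LIMIT:
            break  # everything older than this never reaches the model
        context.append(msg)
    return list(reversed(context)) + [new_message]
```

Anything that falls outside the window is gone as far as the model is concerned; the "learning" only happens if those logs are later swept into a future training run.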

SherrifOfNottingham

Yeah, it sucks, but the reality is that therapy in private practice is expensive and waiting lists are long. If I’m truly struggling and my options are venting my unexceptional problems and worries to someone who might sell the data, or going through a self-harm relapse, I’m choosing the former. Most of what I vent about is ordinary stress and grief, and those topics are mundane enough that I don’t much care if they’re sold. Most people go through them.

clarelim

If you can’t get real therapy, you’re willing to take risks

AirventOS

they sold my trauma dumping and depression

suongoh

Damn, there might be a book about my struggle to socialize like a normal human being? That's pretty cool tbh

evelieningels

*uploads childhood trauma directly to huggingface*

someonewithaguitar

Every Google search, every piece of media you have consumed, every communication you have sent: every bit of it can be traced back to you. I really don’t care anymore. If someone wants to trudge through that endless sea of data to find something that makes me look bad, they deserve to ruin me at that point, and I probably deserve it for pissing them off badly enough to do it.

NoNo-xhru

I literally saw a comment about doing this on a video about four shorts before this one; they said the feedback helps them process and that no one can access their words. I definitely did scroll back and reply that they shouldn’t do that. Maybe we shouldn’t have given the public access to AI without informing people how it works!

Kat-vzin

The average person makes less than $3,000 a month, and therapy can easily cost $200–600 a month. At a certain point, people don’t care about their data being in a pool of billions, especially if they’re fighting depression or suicidal thoughts.

d-rey

"ChatGPT, tell me more about unhinged Alberta"

zeetotal

Oh no a fake writer is going to write about my completely unoriginal mommy issues 😔

nesyk