- OpenAI CEO Sam Altman says using ChatGPT for therapy carries serious privacy risks
- Your private chats could be exposed if OpenAI faces a lawsuit
- Feeding your private thoughts to an opaque AI is a risky move, too
One downside of having an artificial intelligence (AI) assistant like ChatGPT everywhere is that people start relying on it for things it was never intended to do. According to OpenAI CEO Sam Altman, that includes therapy and personal life advice, and it could lead to all kinds of privacy problems down the line.
In a recent episode of the This Past Weekend with Theo Von podcast, Altman explained a key difference between speaking with a human therapist and using an AI for mental health support: "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it."
No such privilege covers ChatGPT conversations. One potential consequence, Altman said, is that OpenAI could be legally obliged to hand over those conversations if it faced a lawsuit. Without the legal confidentiality you get when talking to a doctor or a registered therapist, there would be relatively little to stop your private concerns from being aired in public.
Altman added that ChatGPT is already being used this way by many people, especially younger users, who could be particularly vulnerable to that kind of exposure. But whatever your age, the subjects of these conversations are not the sort of content most people would be happy to see shared with the world at large.
A risky endeavor
The risk of having your private conversations opened up to scrutiny is only one privacy danger facing ChatGPT users.
There is also the problem of feeding your deeply personal worries and concerns into an opaque algorithm like ChatGPT's, with the possibility that they will be used to train OpenAI's models and resurface when other users ask similar questions.
That is one reason many companies have licensed their own ring-fenced versions of AI chatbots. Another alternative is an AI like Lumo, which is built by privacy stalwart Proton and features a high level of encryption to protect everything you write.
Of course, there is also the question of whether an AI like ChatGPT can replace a therapist in the first place. While there may be some benefits to the practice, any AI simply regurgitates the data it was trained on; none is capable of original thought, which limits the effectiveness of the advice it can give.
Whether or not you opt for ChatGPT, it is clear that a privacy minefield surrounds AI chatbots, whether that means a lack of legal confidentiality or the danger of having your deepest thoughts used as training data for an inscrutable algorithm.
It will take a lot of work, and a lot of clarity, before enlisting an AI therapist becomes a significantly less risky endeavor.