If there were ever evidence that people are developing a deep emotional dependency on ChatGPT, it’s probably OpenAI’s new Trusted Contact feature.
Speaking at Sequoia Capital’s AI Ascent event last May, OpenAI CEO Sam Altman said young people were using ChatGPT as an operating system for their lives, not only for productivity but also for important personal decisions.
“I mean, I think all of that is cool and impressive,” Altman said. “And there’s another thing where they don’t actually make life decisions without asking ChatGPT what they should do.”
The feature is still rolling out, so Trusted Contact isn’t available to everyone yet. To find it, click or tap your profile name in ChatGPT and look in Settings. You can nominate a trusted adult as your contact, and they must accept the role before the feature goes live.
If ChatGPT’s automated systems detect conversations that may indicate a serious risk of self-harm, the user is warned that their trusted contact may be notified and is encouraged to reach out to that person themselves first.
A specially trained human review team then assesses the situation before any alerts are sent. If reviewers believe there is a genuine safety concern, the trusted contact receives a notification via email, text message, or in-app alert encouraging them to check in with the user.
OpenAI says alerts do not include chat transcripts or detailed conversation history to protect user privacy, and you can delete or change your trusted contact at any time.
Reassuring or disturbing?
OpenAI says Trusted Contact was developed with input from mental health experts, suicide prevention specialists, and a global network of more than 260 doctors across 60 countries. Taken together with the parental controls OpenAI has already introduced and the safety guardrails already in place, Trusted Contact is another sign that the company recognizes ChatGPT as something that can affect users emotionally, not just a piece of technology.
OpenAI’s recent product announcements have played down ChatGPT’s role as an emotional companion and put more emphasis on productivity, particularly with the Codex coding tool. At the same time, though, the company keeps adding safety features aimed at the emotional well-being of ChatGPT users.
The idea that ChatGPT is now monitoring us also worries some. When my colleague Becca Caddy recently interviewed Amy Sutton of Freedom Counseling for a piece on AI monitoring tools in the workplace, Sutton noted that knowing your AI is watching you, especially at work, could actually make the very problem you’re trying to solve worse: “With mental health stigmas still rife, AI observation would likely lead to greater efforts to hide evidence of struggles. This could create a dangerous spiral, where the greater our efforts to hide low mood or anxiety, the worse it will be.”
Whether Trusted Contact feels reassuring or unsettling probably depends on how you view AI and ChatGPT. But the feature is another example of AI companies recognizing that their products are not just tools for productivity and information, but systems people increasingly rely on emotionally during some of the most vulnerable times in their lives.