- ChatGPT will begin estimating users' ages based on conversation patterns
- Users identified as teenagers will be diverted to a filtered experience designed specifically for adolescents
- Parents will get new tools to link accounts, set usage limits, and receive alerts about their teens' mental state
OpenAI is making ChatGPT act like a bouncer at a club, estimating your age before deciding whether to let you in. The AI won't rely on your (possibly invented) date of birth, but on how you interact with the chatbot.
If the system suspects you are under 18, it will automatically switch you to a more restricted version of the chatbot specifically designed to protect adolescents from inappropriate content. And if it isn't sure, it will err on the side of caution. If you want the adult version of ChatGPT back, you may have to show you're old enough to buy a lottery ticket.
The idea that generative AI shouldn't treat everyone the same is certainly understandable. Especially with adolescents increasingly turning to AI, OpenAI has to consider the unique set of risks involved. The teen-specific ChatGPT experience will limit discussion of topics like sexual content and handle issues like depression and self-harm more delicately. And although adults can still talk about these subjects in context, teenagers will see many more "I'm sorry, I can't help with that" messages when wading into sensitive areas.
To work out your age, ChatGPT will comb through your conversations, looking for patterns that indicate age, specifically that someone is under 18. Its guesses could come from the types of questions you ask, your writing style, how you respond to being corrected, or even which emoji you prefer. If you set off its teenage alarm bells, into the age-appropriate mode you go.
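OpenAI has not published how its classifier works, so the following is purely a toy illustration of the idea described above: scoring a message against hypothetical teen-associated signals (slang, emoji, topic patterns) and defaulting to the restricted mode when the score crosses a threshold. Every pattern and weight here is invented for the example.

```python
# Toy sketch of signal-based age estimation. This is NOT OpenAI's actual
# system; the signals, weights, and threshold are all hypothetical.
import re

# Hypothetical patterns the article's logic suggests: question topics,
# slang, and emoji choices, each with a made-up weight.
TEEN_SIGNALS = {
    r"\bhomework\b": 2,
    r"\bprom\b": 3,
    r"my mom won'?t let me": 3,
    r"(😭|💀|✨)": 1,                      # emoji preference as a signal
    r"\bfr\b|\bngl\b|\blowkey\b": 2,      # slang as a signal
}

def minor_score(message: str) -> int:
    """Sum the weights of every teen-associated pattern found in the text."""
    text = message.lower()
    return sum(w for pattern, w in TEEN_SIGNALS.items() if re.search(pattern, text))

def route(message: str, threshold: int = 3) -> str:
    """Err on the side of caution: at or above the threshold, use teen mode."""
    return "teen_mode" if minor_score(message) >= threshold else "adult_mode"

print(route("ngl my mom won't let me go to prom 😭"))   # teen_mode
print(route("Please summarize this quarterly report."))  # adult_mode
```

A real system would use a trained model over many conversations rather than regex heuristics on a single message, but the routing decision, with ambiguity resolved toward the restricted experience, follows the behavior the article describes.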
You may be 27 and asking about career-change anxiety, but if you write like a moody high schooler, you might be told to talk to your parents about your spiraling worries.
OpenAI has admitted that there could be errors, since "even the most advanced systems will sometimes struggle to predict age." In those cases, it will default to the safer experience and offer ways for adults to prove their age and regain access to the adult version of ChatGPT.
Emotionally safe models
This new age-prediction system is the centerpiece of the next phase of OpenAI's teen-safety improvements. New parental controls are also arriving at the end of this month. These tools will let parents link their own accounts with their children's, restrict access during certain hours, and receive alerts if the system detects what it calls "acute distress."
Depending on how serious the situation seems, and if parents can't be reached, OpenAI may even contact law enforcement based on the conversation.
Making ChatGPT a guidance counselor for teens through built-in content filters is a notable change on its own. Doing it without the user opting in is an even bigger swing, since it means the AI not only decides how old you are, but also that your experience should differ from an adult's ChatGPT conversation.
So if ChatGPT starts acting more cautious or oddly sensitive, you might want to check whether it has suddenly labeled you a teenager. You may simply have a creative or youthful writing style, but you'll still have to prove you're legally an adult if you want to have edgier discussions.
Maybe just complain about your back for no reason, or about how music isn't as good as it used to be, to convince the AI of your aged credentials.