- ChatGPT's new age prediction feature has rolled out globally ahead of its upcoming Adult Mode
- Some adult users are being misidentified as teenagers
- Frustrated users worry that the verification needed to bypass inaccurate restrictions invades their privacy
ChatGPT’s new age prediction AI model is rolling out globally, but it seems a little overzealous in its attempts to detect who is under 18 to set “teen mode” content filters automatically.
The goal of using AI to identify underage users and route them to their own version of the chatbot has its appeal, especially with ChatGPT's adult mode coming soon. OpenAI believes its AI models can infer a user's likely age from behavior and context.
But ChatGPT doesn't appear to be applying those protections only to users under 18. More than a few adult subscribers have found themselves talking to the teen mode version of ChatGPT, with restrictions preventing them from engaging with more mature topics. It has been a persistent issue since OpenAI started testing the feature a couple of months ago, but that hasn't stopped a wider rollout.
The technical aspect of this feature is murky. OpenAI says the system uses a combination of behavioral signals, account history, usage patterns, and occasionally language analysis to make an age estimate. In cases of uncertainty, the model errs on the side of caution. In practice, this means that newer accounts, users with late-night usage habits, or those asking about topics relevant to teens may find themselves caught in the safety net even if they have subscribed to the Pro version of ChatGPT for a long time.
Confirming your age to an AI
At first glance, it seems like a classic case of good intentions met with heavy-handed implementation. OpenAI clearly wants to create a safer experience for younger users, especially given the tool's growing reach in education, family settings, and teens' creative projects.
For users flagged incorrectly, the company says the fix is straightforward: you can confirm your age through a verification tool in Settings. OpenAI uses a third-party service, Persona, which in some cases may ask users to submit a government ID or a selfie video to confirm who they are. But for many, the biggest problem is not the extra click. It's that a chatbot has misjudged them, and they have to hand over more personal details to overturn the accusation.
"We're implementing age prediction in ChatGPT to help determine when an account is likely to belong to someone under 18, so we can apply appropriate experience and protections for teens. Adults who are incorrectly placed in the teen experience can confirm their age in Settings > Account," OpenAI wrote in a post on January 20, 2026.
Asking for ID, even if it is optional and anonymized, raises questions about data collection, privacy, and whether this is a backdoor to more aggressive age verification policies in the future. Some users now believe OpenAI is testing the waters for full identity confirmation under the guise of teen safety, while others are concerned the model could be trained in part on submitted materials, even if the company insists that is not the case.
"Great way to force people to upload selfies," wrote one Redditor. "Yeah, [if OpenAI] asks me for a selfie, I will unsubscribe and delete my account," wrote another. "I understand why you are doing this, but please find a less invasive way."
In a statement on its help site, OpenAI clarified that it never sees the ID or the image itself. Persona simply confirms whether the account belongs to an adult and returns a yes or no result. The company also says that all data collected during this process is deleted after verification, and that the only goal is to correct the misclassification.
The tension is on display between OpenAI's push for personalized AI and its need to layer on safety mechanisms that don't alienate users. And the company may not satisfy everyone with its explanations of how much it can infer about someone from behavioral cues.
YouTube, Instagram, and other platforms have tested similar age estimation tools, and all have faced complaints from adults accused of being underage. But now that ChatGPT is a regular companion in classrooms, home offices, and therapy sessions, the idea of an invisible AI filter suddenly treating you like a kid feels especially personal.
OpenAI says it will continue to refine the model and improve the verification process based on user feedback. But the average user asking for wine pairing ideas and being told they're too young to drink might leave ChatGPT in disgust. No adult is happy to be mistaken for a child, especially by a chatbot.