OpenAI is introducing a new safety system in ChatGPT that aims to identify underage users—not by asking them to self-report their age, but by predicting it through behaviour.
According to TechCrunch, the company has deployed an “age prediction” feature that analyses how an account is used to determine whether it likely belongs to someone under the age of 18. Accounts flagged as belonging to minors will automatically be placed under stricter safety controls to limit access to sensitive content.
The system relies on what OpenAI describes as “behavioural and account-level signals.” These include factors such as the user’s stated age (if provided), how long the account has existed, usage patterns, and even the times of day the account is most active.
When an account is identified as likely belonging to a minor, ChatGPT shifts that user into a more restricted experience, applying additional safeguards around topics such as sex, violence, and other material deemed inappropriate for younger users.
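OpenAI has not published how these signals are combined, but the general shape of such a system can be sketched in a few lines. The signal names, weights, and thresholds below are illustrative assumptions, not OpenAI's method.

```python
# Illustrative sketch only: OpenAI has not published its model, features, or thresholds.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class AccountSignals:
    stated_age: Optional[int]   # self-reported age, if the user gave one
    account_age_days: int       # how long the account has existed
    active_hours: List[int]     # hours of day (0-23) when the account is most active
    usage_score: float          # 0..1 summary of usage patterns (purely illustrative)


def estimate_minor_probability(s: AccountSignals) -> float:
    """Combine account-level signals into a rough 'likely under 18' score."""
    score = 0.0
    if s.stated_age is not None and s.stated_age < 18:
        score += 0.6                              # a stated minor age is a strong signal
    if s.account_age_days < 30:
        score += 0.1                              # very new accounts get a small bump
    if any(15 <= h <= 21 for h in s.active_hours):
        score += 0.1                              # after-school activity hours (assumed)
    score += 0.2 * s.usage_score                  # behavioural usage component
    return min(score, 1.0)


def choose_experience(s: AccountSignals, threshold: float = 0.5) -> str:
    """Pick the experience tier, defaulting to 'restricted' when age is uncertain."""
    p = estimate_minor_probability(s)
    if s.stated_age is None and p >= 0.25:
        return "restricted"   # age can't be confidently determined: stricter default
    return "restricted" if p >= threshold else "standard"


# Example: a new account with no stated age and teen-typical activity hours.
signals = AccountSignals(stated_age=None, account_age_days=10,
                         active_hours=[16, 17, 20], usage_score=0.7)
print(choose_experience(signals))  # -> "restricted"
```

In practice this would be a trained model rather than hand-set weights; the one design choice grounded in OpenAI's own statements is the default to the stricter experience whenever a user's age cannot be confidently determined.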
Why OpenAI Is Moving Now
The rollout comes amid growing global pressure on technology companies to strengthen online protections for children and teenagers, especially as generative AI tools become increasingly embedded in classrooms, households, and everyday digital life.
In a separate report, Reuters said OpenAI is deploying the age prediction system worldwide as part of broader plans to introduce an “adult mode” for verified users in early 2026. According to Reuters, users who are incorrectly flagged as under 18 will be able to restore full access by verifying their identity through a selfie-based check using Persona, a third-party identity verification service.
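Reuters did not describe the technical integration, but the appeal path it reports (flag first, verify, then restore access) could look roughly like the sketch below. The `verify_age_with_selfie` function is a hypothetical stand-in for the Persona check; the real API and flow are not public.

```python
# Hypothetical sketch of the appeal path for accounts wrongly flagged as under 18.
# verify_age_with_selfie() stands in for the third-party (Persona) check; the real
# integration details have not been published.

def verify_age_with_selfie(user_id: str) -> bool:
    """Placeholder for the selfie-based identity check; always passes in this sketch."""
    return True


def restore_full_access(account: dict) -> dict:
    """Lift the restricted experience once the user verifies they are an adult."""
    if account.get("mode") == "restricted" and verify_age_with_selfie(account["user_id"]):
        account["mode"] = "standard"      # remove minor-specific safeguards
        account["age_verified"] = True    # remember the result to avoid re-flagging
    return account


# Example: an adult account that was incorrectly placed in the restricted tier.
flagged = {"user_id": "u_123", "mode": "restricted"}
print(restore_full_access(flagged))
# -> {'user_id': 'u_123', 'mode': 'standard', 'age_verified': True}
```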
OpenAI has been signalling this shift for months. In earlier policy updates, the company outlined a long-term strategy to tailor ChatGPT’s experience based on age, defaulting to stricter protections whenever a user’s age cannot be confidently determined.
A Wider Industry Shift Toward Algorithmic Age Checks
OpenAI’s move reflects a broader trend across the tech industry, where platforms are increasingly turning to algorithmic age estimation instead of relying on easily bypassed age checkboxes.
Social media companies, in particular, have expanded similar systems in regions with tougher online safety regulations. The Guardian recently reported that TikTok has strengthened its age-verification efforts across the European Union by analysing profile information, posted content, and behavioural signals to detect younger users.
For ChatGPT, the change marks a notable shift in how consumer AI products manage safety. Rather than relying on static settings, protections are increasingly built into real-time, adaptive systems designed to respond to who is using the tool and what they may be vulnerable to seeing.