
OpenAI has started deploying an age prediction system across its consumer ChatGPT plans to identify accounts that may belong to users under 18 and automatically apply stricter safety controls. The initiative is intended to create a more age-appropriate experience for younger users, while ensuring adults can continue to access ChatGPT with fewer restrictions. Announcing the update in a blog post, OpenAI said, “Young people deserve technology that both expands opportunity and protects their well-being.”
The new system does not rely on a single data point but instead uses a combination of behavioural and account-level signals to estimate a user’s age. These signals include how long an account has existed, patterns of usage over time, typical hours of activity, and the age a user may have stated while signing up. “When the age prediction model estimates that an account may belong to someone under 18, ChatGPT automatically applies additional protections,” the company said.
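OpenAI has not published how the model weighs these signals, but a minimal sketch can illustrate the general idea of combining weak account-level and behavioural cues into a single likelihood. Everything below is hypothetical: the signal names, thresholds, and weights are illustrative assumptions, not OpenAI's implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AccountSignals:
    created_at: datetime             # when the account was opened
    stated_age: int | None           # age entered at signup, if any
    median_session_minutes: float    # typical session length
    late_night_use_share: float      # fraction of activity between 22:00 and 06:00


def estimate_minor_likelihood(signals: AccountSignals) -> float:
    """Combine weak signals into a rough 0-1 likelihood that the user is under 18.

    Illustrative only: weights and cut-offs are invented for this sketch.
    """
    score = 0.0

    # A self-reported age under 18 is the strongest single signal.
    if signals.stated_age is not None and signals.stated_age < 18:
        score += 0.6

    # Very new accounts carry little history, so weight them toward caution.
    account_age_days = (datetime.now(timezone.utc) - signals.created_at).days
    if account_age_days < 30:
        score += 0.15

    # Usage patterns: short sessions and little late-night activity
    # are treated here as (weak) hints of a younger user.
    if signals.median_session_minutes < 10:
        score += 0.1
    if signals.late_night_use_share < 0.05:
        score += 0.15

    return min(score, 1.0)


if __name__ == "__main__":
    example = AccountSignals(
        created_at=datetime(2025, 1, 5, tzinfo=timezone.utc),
        stated_age=None,
        median_session_minutes=8.0,
        late_night_use_share=0.02,
    )
    if estimate_minor_likelihood(example) >= 0.5:
        print("apply under-18 protections")
    else:
        print("default adult experience")
```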
Once these protections are triggered, ChatGPT limits access to certain categories of content considered inappropriate or harmful for minors. This includes restrictions on graphic violence, sexual or violent role play, depictions of self-harm, viral challenges that encourage risky behaviour, and content that promotes extreme beauty standards or unhealthy dieting habits. The goal, OpenAI said, is to reduce exposure to material that could negatively affect a young person’s safety or mental health.
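In practice this amounts to gating certain content categories behind the under-18 flag. The short sketch below shows one way such a gate could look; the category labels mirror the announcement, but the function and its behaviour are an assumption for illustration, not OpenAI's moderation pipeline.

```python
# Content categories the announcement says are restricted for flagged accounts.
RESTRICTED_FOR_MINORS = {
    "graphic_violence",
    "sexual_or_violent_roleplay",
    "self_harm_depiction",
    "risky_viral_challenge",
    "extreme_beauty_or_dieting",
}


def is_allowed(content_categories: set[str], under_18: bool) -> bool:
    """Return False when a flagged account requests restricted content."""
    if not under_18:
        return True
    return not (content_categories & RESTRICTED_FOR_MINORS)


# A request tagged as violent role play is blocked only for flagged accounts.
assert is_allowed({"sexual_or_violent_roleplay"}, under_18=True) is False
assert is_allowed({"sexual_or_violent_roleplay"}, under_18=False) is True
```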
At the same time, the company acknowledged that automated systems are not always perfect. Users who believe they have been incorrectly identified as under 18 will be able to regain full access by confirming their age. This will be done through a selfie-based age verification process using Persona, a third-party identity verification service. OpenAI said this approach helps ensure accuracy while preserving user choice.
The company framed the broader objective as balancing protection with autonomy. According to OpenAI, age prediction allows it to "treat adults like adults" and let them "use our tools in the way that they want, within the bounds of safety." The system is expected to evolve over time, with refinements guided by ongoing research into child development and online safety.
OpenAI said it has consulted with a range of expert organisations while designing the feature, including the American Psychological Association, ConnectSafely, and the Global Physicians Network. The rollout will continue gradually, with the age prediction system set to launch in the European Union in the coming weeks, reflecting the company’s effort to align with regional regulations and global best practices for youth safety online.




