OpenAI has quietly introduced a powerful new safeguard inside ChatGPT: age prediction.
And it could fundamentally change how young users experience AI.
As global concern over AI’s influence on children continues to grow, OpenAI is rolling out an “age prediction” system designed to identify minors automatically and apply stricter content controls without requiring users to declare their age upfront.
This move comes after mounting criticism, regulatory pressure, and high-profile controversies surrounding how AI tools interact with young people.
Over the past few years, OpenAI has faced intense scrutiny over ChatGPT’s impact on minors, with critics and regulators raising alarms about the chatbot’s potential effects on young users.
In response, OpenAI has steadily tightened safeguards. The new age prediction feature is the most proactive step yet — shifting from reactive moderation to automated risk detection.
According to OpenAI, the system uses an AI model that analyzes multiple “behavioral and account-level signals” to estimate whether a user may be under 18.
Importantly, OpenAI says the system does not rely on a single signal. Instead, it evaluates patterns to reduce false positives.
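OpenAI has not published how its model works, but the general approach it describes — combining several weak signals instead of acting on any single one — can be sketched in a few lines. Everything below (the signal names, weights, and thresholds) is a hypothetical illustration, not OpenAI's actual system:

```python
# Hypothetical sketch of multi-signal risk scoring. None of the signal
# names, weights, or thresholds here come from OpenAI; they only
# illustrate why pattern-based aggregation reduces false positives.

def estimate_minor_risk(signals: dict[str, float]) -> float:
    """Combine per-signal scores (each in [0, 1]) into one weighted score."""
    weights = {  # illustrative weights that sum to 1.0
        "writing_style": 0.40,
        "topic_patterns": 0.35,
        "account_metadata": 0.25,
    }
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

def flag_account(signals: dict[str, float], threshold: float = 0.7) -> bool:
    """Flag only when the combined score clears the threshold AND more
    than one signal is strong, so no single signal can trigger a flag."""
    strong = [score for score in signals.values() if score > 0.5]
    return estimate_minor_risk(signals) >= threshold and len(strong) >= 2
```

The key design point is the second condition: an account with one anomalous signal (say, an unusual writing style) stays unflagged unless other, independent signals agree.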
If an account is flagged as potentially underage, ChatGPT automatically activates stricter content filters that limit access to sensitive or age-restricted topics.
OpenAI acknowledges that no age prediction system is perfect.
If an adult user is mistakenly classified as under 18, there is a clear appeals process. Users can verify their age by submitting a selfie ID check through OpenAI’s verification partner, Persona, to restore full account access.
This move aligns with OpenAI’s broader push toward tighter user controls and platform governance, especially as the company prepares for monetization — a shift we recently explored in detail in our breakdown of OpenAI ads, who sees them, and how privacy rules will work.
This update signals a broader shift in how AI platforms may operate going forward.
For OpenAI, this is about more than safety; it’s about trust, compliance, and long-term adoption, especially as ChatGPT becomes embedded in schools, devices, and daily workflows.
Age prediction inside ChatGPT may feel intrusive to some users, but it reflects a growing reality:
AI companies can no longer treat safety as optional.
As governments, parents, and educators demand stronger protections, features like this are likely to become the industry standard, not the exception.
And for users? It means ChatGPT is becoming not just smarter, but more controlled.
For more daily AI news, trends, and insights, check out the AI News hub on FutureTools, where we cover the biggest developments shaping the industry every day.

Highly recommended for AI fans!