AI is supposed to recognize how old you are - fully automatically
OpenAI is planning big things: in the future, ChatGPT will be able to recognize whether a user is a teenager or an adult - and adapt its behavior accordingly. The idea is that anyone under the age of 18 will get a toned-down version with clear child-protection rules. Sexually explicit content? Taboo. And if the chatbot detects an emergency such as suicidal thoughts, it will even notify the authorities.
The automatic age recognition is eventually intended to replace the parental controls that will be available from the end of September. Until then, parents can manage their children's profiles manually - including the option to adjust ChatGPT's chat behavior to the child's age. Automated age verification, however, is a technical balancing act: even state-of-the-art systems struggle to reliably determine a person's true age - especially when no ID documents are requested.
When in doubt: always the children's version
According to OpenAI, if the system is not sure how old someone really is, it will err on the side of caution. That means users automatically land in the "under 18" version. In the future, however, adults will have the option of proving their age - exactly how that will work remains to be seen.
Critics are already asking: where is the line between protection and control? And how much "paternalism" is justified when it comes to a chatbot?
Background: a teenager's suicide triggers the debate
The discussion did not come out of nowhere: in August 2025, a 16-year-old boy took his own life - after exchanging, according to his parents, hundreds of messages with ChatGPT in which he made his intentions clear and drew the bot into his decision. The family sued OpenAI. The tragedy sparked a heated debate about how much responsibility AI providers really bear - and whether the existing safeguards are sufficient.
OpenAI is responding with technical measures - but also with a clear signal: if an AI like ChatGPT talks with millions of people in everyday life, it needs to know who is sitting in front of it. A teenager? An adult? Or someone in an acute crisis?
Progress or digital nanny state?
Automatic age recognition sounds like a stroke of technical brilliance - but it is also a risky game. Children and young people must of course be protected, especially when it comes to sensitive topics such as sexuality or mental health. But how far can a company like OpenAI go to "ensure safety"?
If an AI defaults to the children's version whenever it is in doubt, adults could end up chatting with the handbrake on - simply because they don't want to, or can't, verify their age. And the plan to call the police in an emergency? It sounds sensible, but it can also turn into a ticking surveillance time bomb.
Technological progress must never become a substitute for real responsibility - neither for parents nor for companies. And certainly not when it comes to the question of whether an AI knows better than we do how old we are.