OpenAI has started introducing automatic age estimation in ChatGPT to protect minors

OpenAI has announced a new ChatGPT feature that estimates a user’s age from various signals tied to the account and its behavior, with the aim of identifying accounts likely used by minors. The change is part of a wider initiative for safer use of AI and better adaptation of content to users’ ages.

The age-prediction system automatically estimates the probability that a user is under 18 and, when the algorithm flags an account, activates stricter safety settings. These restrict access to sensitive content such as graphic violence, material that encourages risky behavior, certain sexually explicit themes, and other content deemed inappropriate for minors.

OpenAI explains that the assessment combines account-related signals, including the user’s self-reported age at registration, the age of the account, typical activity times, and usage patterns over time, to make an informed estimate without constant manual verification.
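OpenAI has not published how these signals are actually weighted or combined. Purely to illustrate the general idea of scoring account signals against a threshold, here is a minimal sketch in which every signal name, weight, and the cutoff value are invented for demonstration:

```python
from dataclasses import dataclass

# Illustrative only: OpenAI has not disclosed its model. The signals,
# weights, and threshold below are hypothetical.

@dataclass
class AccountSignals:
    self_reported_age: int        # age given at registration
    account_age_days: int         # how long the account has existed
    daytime_weekday_ratio: float  # share of usage during weekday daytime, 0..1

def minor_score(s: AccountSignals) -> float:
    """Combine signals into a rough under-18 likelihood in [0, 1]."""
    score = 0.0
    if s.self_reported_age < 18:
        score += 0.6              # self-reported age treated as the strongest signal
    if s.account_age_days < 90:
        score += 0.2              # very new accounts carry little history
    score += 0.2 * s.daytime_weekday_ratio
    return min(score, 1.0)

def apply_restrictions(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Return True when stricter content settings should be activated."""
    return minor_score(s) >= threshold
```

For example, a long-standing account self-reporting age 25 with little weekday-daytime use would fall well below the threshold, while a new account self-reporting age 15 would exceed it and have restrictions applied automatically.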


If the system mistakenly flags an adult user as a minor, the restrictions can be removed by verifying age with a selfie or an ID document via a third-party service (Persona). Full functionality is restored once the user is confirmed to be of legal age.

The launch of the age prediction system comes at a time of heightened international scrutiny of AI platforms and their impact on children and adolescents. OpenAI has faced criticism over the risks generative AI can pose to minors, including potential exposure to harmful or sensitive content, so this feature comes in response to those concerns.

The system has already begun rolling out globally, while in the European Union implementation is expected to be completed in the coming weeks, in order to fully comply with local safety and regulatory requirements.


Strategically, age prediction also matters in the context of the announced “adult mode” for users verified as adults, which would allow content not intended for minors, subject to strict identity verification. This dual approach shows how OpenAI is trying to balance the safety of young people with the freedom of adult users, while maintaining trust and complying with growing regulatory requirements in the world of AI technologies, reports The Hill.
