Anthropic is facing fresh controversy after introducing additional identity checks for users of its AI assistant Claude, a move that has drawn a strongly negative reaction from part of the community.
The new system performs selective identity verification through a partnership with the Persona platform. It does not apply to all users; instead, it is triggered in specific situations, such as access to advanced features or when the system detects suspicious behavior.
Anthropic Claude introduces identity verification
In these cases, the user must submit a valid personal document, such as a passport or driver's license, along with a live selfie. The process takes a few minutes, and digital copies or informal forms of identification are not accepted.
The measure is intended to prevent abuse, control access from restricted regions, and enforce the minimum age of 18. The system also targets users who violate the terms of use or attempt to bypass security mechanisms.
However, the approach has raised privacy concerns. Some users are unwilling to share biometric data and personal documents with AI companies, especially since competing platforms such as ChatGPT and Gemini do not currently require this level of verification for standard use.
Anthropic has tried to mitigate the criticism by stating that it does not store photos of documents on its servers and that the data is not used to train AI models. Still, some analysts believe such measures could become an industry standard, similar to KYC verification in the banking sector, which could change how AI services are used in the long run, Android Headlines reports.