Anthropic Rolls Out Identity Verification for Claude Users, Using Persona


Artificial intelligence startup Anthropic has quietly begun implementing identity verification for users of its popular Claude chatbot. While the company frames the move as a standard safety measure, its choice of verification vendor—Persona, a company previously embroiled in controversy over alleged data sharing with ICE and other U.S. government bodies—has reignited debates over data privacy in the AI sector.

The Rollout: Verification on a “Case-by-Case” Basis

Anthropic recently updated its support documentation to indicate that identity verification is being rolled out for Claude users. Rather than a blanket requirement for all accounts, the checks are “currently being applied selectively”.

According to the updated help page, users “might see a verification prompt when accessing certain capabilities.” Anthropic cites “routine platform integrity checks” and “other safety and compliance measures” as the primary drivers for these prompts.

In practice, this means Claude users could suddenly be asked to verify their identity at any time, for any of a wide range of operational or legal reasons determined by Anthropic.

“Identity verification helps us prevent abuse, enforce our usage policies, and comply with legal obligations,” the company stated in the new support language.

Anthropic’s Privacy Promises

Anticipating user pushback regarding data security, Anthropic has outlined several privacy guardrails:

  • No Model Training: Identity data will not be used to train Anthropic’s AI models.
  • Data Minimization: The company claims it will only collect the “minimum information required” to verify a user’s identity.
  • Restricted Sharing: Identity data will not be shared with anyone outside of Anthropic and its chosen vendor, except when explicitly required to respond to valid legal processes.

Anthropic notes that the vendor is the entity actually collecting the selfie images and snapshots of identity documents. The AI company maintains that it sets strict rules on how this data is handled, stating: “Persona is contractually limited in how they can use your data: only to provide and support verification and to improve their ability to prevent fraud.” While Anthropic claims it can set its own retention period for the data processed by the vendor, the company has not publicly stated exactly how long that period is.

The Vendor: Why “Persona” is Raising Eyebrows

The vendor Anthropic has partnered with to handle these sensitive documents is Persona Identities—a name that may sound familiar to privacy advocates.

Persona previously made headlines when the social discussion platform Discord selected it as an age verification partner. The partnership sparked massive controversy after a security researcher reported that Persona’s front end was exposed on a government server, leading to speculation of a broader government surveillance scheme.

While Persona denied those allegations at the time, the public uproar was intense enough that Discord ultimately delayed its age-check plans and subsequently dropped Persona for ostensibly unrelated reasons.

The reaction to Persona’s involvement with Anthropic has been similarly swift. Discussions across platforms like Reddit show immediate displeasure, with some users threatening to cancel their Claude subscriptions.

And as people discuss this new issue, the ID/Face scan cancer spreads.

Source: Claude, The Register
