Anthropic starts checking ID for some Claude users • The Register


Anthropic may check your ID before letting you access certain Claude features, and the verification vendor it has picked is the same outfit that sparked controversy when Discord tested similar checks.

Anthropic quietly updated its support page on identity verification for Claude users this week to indicate that it’s rolling the process out on a case-by-case basis. According to the help page, Anthropic is rolling out identity verification for “a few use cases,” and users “might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures.”

In short, expect to be suddenly asked for verification at any time, for pretty much any reason Anthropic can come up with. 

“Identity verification helps us prevent abuse, enforce our usage policies, and comply with legal obligations,” the company said in its new support language. 

To further assuage user fears over the privacy of their data, Anthropic notes that it won’t use any identity data to train its models, will collect only “the minimum information required to verify your identity,” and won’t share identity data with anyone other than Persona and Anthropic itself, except where required to respond to valid legal process.

You may recognize the name Persona Identities if you follow privacy news.

Discord previously chose Persona as its age verification partner when the social discussion platform announced plans for a verification system similar to Anthropic’s. But a security researcher reported finding Persona’s front end exposed on a government server, then speculated that this was part of a broader government surveillance scheme. Persona convincingly denied those allegations in discussions with The Register, but the uproar was enough for Discord to delay its plans to implement age checks. Discord later cast Persona aside for ostensibly unrelated reasons.

This time around, users were quick to voice displeasure with Persona’s involvement in Anthropic’s identity verification plans, with some on Reddit saying they planned to cancel their subscriptions.

Others pointed to a February blog post by an individual who dug into Persona after discovering it was LinkedIn’s identity verification partner. As that post noted, Persona lists a number of subprocessors that help with various parts of its identity verification process, including AWS, Confluent, Google, OpenAI, Stripe, Twilio, and potentially even Anthropic, among others.

Anthropic claims on the help page that Persona is the party collecting selfie images and snapshots of identity documents for verification, and that Anthropic exercises tight controls over how Persona handles that data and what it can do with it.

“We set the rules for how it’s used and how long it’s kept,” Anthropic states. “Persona is contractually limited in how they can use your data: only to provide and support verification and to improve their ability to prevent fraud.” Anthropic also made multiple mentions of being able to set its own retention period for Claude users’ data processed by Persona, but did not state what that period is.

The larger point? Newly gathered information often passes through a whole chain of providers. If any one of those providers has sneaky intentions or lax data security practices, that information may end up in hands you never expected, when all you wanted to do was write some new code faster or ask a chatbot for relationship advice.

Questions to Anthropic went unanswered. ®


