OpenAI’s KYC Partner Persona Faces Allegations of Sharing User Crypto Data with US Authorities

A major controversy has surfaced at the intersection of artificial intelligence and cryptocurrency privacy. Persona, the firm responsible for conducting identity verification for OpenAI’s premium ChatGPT features, stands accused of forwarding sensitive user information—including linked cryptocurrency wallet addresses—directly to US federal agencies. Security researchers operating under the handles vmfunc (also known as Celeste), MDL, and Dziurwa published their findings on February 18, 2026.

They uncovered publicly available code within Persona’s systems that appears to route collected KYC data to the Financial Crimes Enforcement Network (FinCEN), the U.S. Treasury bureau tasked with combating financial crime.

The researchers highlighted the dual role of the company: it processes passport photos, selfies, and facial videos for everyday users seeking access to advanced AI tools, yet allegedly maintains backend infrastructure that supports government-style suspicious activity reporting.

“The same company that takes your passport photo when you sign up for ChatGPT also operates a government platform that files Suspicious Activity Reports with FinCEN and tags them with intelligence program codenames,” the investigators wrote.

They added a pointed warning: uploading a selfie for a chatbot could result in that biometric data being cross-checked against expansive databases of global political figures and their families.

Independent cybersecurity experts have largely validated the technical claims. Pseudonymous researcher Tanuki42, affiliated with incident response groups SEAL911 and zeroShadow, told reporters that the referenced government domains exist and likely run on Persona-hosted infrastructure.

However, significant gaps remain regarding the platform’s exact purpose, user base, and operational intent.

Persona CEO Rick Song pushed back sharply on social media. In public posts and shared email exchanges with the researchers, he insisted his company has no active partnerships with federal agencies.

Song expressed frustration that the team published without first contacting him, noting his admiration for vmfunc’s technical expertise.

Neither OpenAI nor Persona has issued a formal response to media inquiries, and blockchain analytics firm Chainalysis—whose tools are referenced in the code—has also remained silent.

According to the investigation, the disputed functionality has existed since at least November 2023.

When users complete verification for OpenAI services, their documents undergo standard checks against sanctions lists, facial matching, and crime-related databases.

Yet the code reportedly enables far more: direct filing of reports to FinCEN and its Canadian counterpart, application of classified intelligence tags, and integration with Chainalysis for ongoing crypto address scrutiny.

This includes risk scoring, transaction graph analysis, fund valuation, and owner identification.

A particularly alarming element is the “native crypto address watchlist system.” Once flagged, addresses are not simply queried once but monitored indefinitely against evolving blockchain data clusters.

Retention policies also appear inconsistent—OpenAI publicly states biometric data is kept for up to one year, yet the code suggests a three-year maximum, with government identification records potentially stored permanently.

The revelations have intensified long-standing worries among privacy advocates.

Crypto communities, rooted in cypherpunk ideals of anonymity and resistance to surveillance, view mandatory KYC as a gateway to unchecked monitoring.

Critics argue that such systems risk creating opaque watchlists without user notification or appeal mechanisms.

Past data mishandling scandals at other verification providers underscore the vulnerability of centralized biometric repositories to breaches or abuse.

While standard KYC aims to curb illicit finance, the allegations raise fresh questions about transparency and consent in the rapidly expanding AI sector.

As millions rely on tools like ChatGPT, the extent to which routine identity checks feed into broader government intelligence networks remains unclear.
