AI Agents Can Unmask Anonymous Online Accounts, ETH Zurich and Anthropic Study Finds
Researchers demonstrate that large language model-powered agents can automatically link pseudonymous accounts across platforms by analyzing writing style, interests, and digital breadcrumbs, raising fresh questions about online privacy.
ZURICH — A new study from researchers at ETH Zurich and Anthropic shows that AI agents can successfully deanonymize pseudonymous online accounts at scale, challenging long-held assumptions about internet anonymity. The work, which has not been peer-reviewed, demonstrates how large language models can connect disparate accounts through analysis of linguistic patterns, topics of interest, and publicly available information.
The researchers, working with the Machine Learning Alignment and Theory Scholars (MATS) program, built an automated system of AI agents that can search the web and synthesize information much like a human investigator. The underlying models were not disclosed, but the system reportedly achieved high accuracy in linking anonymous accounts across platforms including Reddit and X (formerly Twitter).
According to multiple reports of the study, the researchers tested the system on realistic scenarios, including a hypothetical user who maintained separate anonymous accounts: one for venting about their career on Reddit’s r/cscareerquestions, one for movie discussions on r/horror, and one for academic topics. The AI agents reportedly identified that these accounts belonged to the same individual, even though they used different usernames and shared no real names or photos.
How the AI Deanonymization System Works
The system leverages the growing capabilities of large language models to perform cross-platform analysis that would be time-consuming or impractical for humans. By examining writing style, recurring themes, specific terminology, and subtle behavioral patterns, the AI agents can build profiles that link seemingly unrelated accounts.
The study highlights that this capability operates automatically, relatively cheaply, and at scale — a significant departure from previous manual or semi-automated deanonymization techniques. While earlier methods often required substantial human effort, the new AI-driven approach can potentially process large numbers of accounts simultaneously.
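The paper’s exact pipeline has not been published, but the general idea behind linking accounts by writing style can be illustrated with a short sketch. The example below is a simplification and is not drawn from the study itself: it compares character n-gram profiles of posts from two accounts using TF-IDF and cosine similarity, and the account names, sample texts, and threshold are all hypothetical.

```python
# Illustrative sketch only: character n-gram stylometry with TF-IDF,
# not the ETH Zurich / Anthropic pipeline. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical posts from two pseudonymous accounts.
account_a_posts = [
    "Honestly the on-call rotation at my company is brutal, thinking of switching teams.",
    "Does anyone else feel like promotion cycles reward visibility over actual work?",
]
account_b_posts = [
    "Honestly the third act of that remake was brutal, thinking the original holds up better.",
    "Does anyone else feel like modern horror rewards jump scares over actual dread?",
]

# Character n-grams capture punctuation, word choice, and spelling habits
# that persist across topics better than whole-word features do.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
matrix = vectorizer.fit_transform(
    [" ".join(account_a_posts), " ".join(account_b_posts)]
)

score = cosine_similarity(matrix[0], matrix[1])[0, 0]
print(f"Stylistic similarity: {score:.2f}")

# A real system would combine many signals (topics, timing, shared links)
# and use a learned threshold; 0.5 here is an arbitrary placeholder.
if score > 0.5:
    print("Accounts may belong to the same author.")
```

The system described in the reporting goes well beyond surface statistics like these, using LLM agents to search the web and reason over what they find, which is what makes the approach cheap to run at scale.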
Despite the concerning findings, researchers and experts caution that anonymity is not entirely dead. As quoted in The Verge, one expert noted that despite years of incremental progress in unmasking techniques, “the identity of Satoshi Nakamoto, the inventor of Bitcoin, remains a mystery after more than a decade.” Whistleblowers can still communicate safely with journalists, and tools like Signal continue to provide strong privacy protections.
Implications for Online Privacy
The research arrives at a time when millions of users maintain “finstas” (secondary, semi-private Instagram accounts), Reddit alts, anonymous professional complaint accounts on Glassdoor, and other pseudonymous identities. These accounts allow people to express opinions, share sensitive experiences, or discuss controversial topics without linking them to their real-world identities.
The ability to automatically unmask such accounts could have significant consequences for users who rely on anonymity for personal, professional, or safety reasons. Employees criticizing management, individuals discussing health issues, or activists operating in restrictive environments may face increased risk of identification.
However, the study also underscores current limitations. The AI systems still depend on sufficient public data across platforms and may struggle with highly disciplined users who maintain strict separation in their posting habits, language, and topics.
Industry Context and Competitive Landscape
The collaboration between ETH Zurich and Anthropic is notable given Anthropic’s focus on AI safety and alignment research. The involvement of the Machine Learning Alignment and Theory Scholars program further ties the work to broader efforts to understand both the capabilities and potential societal impacts of advanced AI systems.
This research adds to a growing body of work examining the dual-use nature of large language models, which can simultaneously enable beneficial applications and create new privacy challenges. Similar studies have explored AI’s ability to detect AI-generated content, analyze sentiment at scale, and perform other forms of large-scale behavioral analysis.
What’s Next
The study has not yet undergone peer review, meaning its specific methodologies, success rates, and limitations require further validation by the broader research community. Additional details about the exact models used and quantitative results are expected to emerge as the paper progresses through the academic review process.
For users concerned about deanonymization risks, experts suggest maintaining stricter separation between accounts, varying writing styles, and limiting the total volume of personal information shared across platforms. Some online discussions have already proposed browser-based AI anonymizers that could rewrite posts to obscure stylistic fingerprints.
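None of the proposed anonymizer tools are named in the reporting, but the underlying idea, asking a language model to paraphrase a post so its wording no longer resembles the author’s usual style, is simple to sketch. The example below is hypothetical and is not a product referenced by the researchers: it assumes the Anthropic Python SDK, an API key in the environment, and a placeholder model name.

```python
# Hypothetical sketch of a style-obscuring rewrite step, not an actual
# browser anonymizer. Assumes the `anthropic` Python package and an
# ANTHROPIC_API_KEY environment variable; the model name is a placeholder.
import anthropic

client = anthropic.Anthropic()

def obscure_style(post: str) -> str:
    """Ask a model to paraphrase a post while preserving its meaning."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": (
                "Rewrite the following post so it keeps the same meaning but "
                "uses different phrasing, sentence structure, and vocabulary:\n\n"
                + post
            ),
        }],
    )
    return response.content[0].text

print(obscure_style("Honestly the on-call rotation at my company is brutal."))
```

Whether paraphrasing of this kind meaningfully defeats automated linking remains an open question, since stylistic habits are only one of the signals such systems draw on.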
As AI capabilities continue to advance, the tension between powerful analytical tools and individual privacy is likely to intensify. The ETH Zurich and Anthropic research serves as an early warning that the technical barriers to large-scale deanonymization are falling faster than many anticipated.
Details of the study are so far available mainly through The Verge, which first reported on the findings. Further technical information is expected as the researchers publish their complete methodology and results.

