Grammarly is using our identities without permission
Breaking News · Mar 8, 2026 · 4 min read

Grammarly’s ‘Expert Review’ Feature Uses Real Identities Without Permission

SAN FRANCISCO — Grammarly has come under fire for its new “Expert Review” feature, which generates writing feedback “inspired by” named subject-matter experts — including living journalists who say they never consented to having their identities used.

The feature, launched in August, promises users AI-generated advice modeled after prominent voices in various fields. However, when Verge reporters tested it, the system produced comments attributed to The Verge’s editor-in-chief Nilay Patel, editor-at-large David Pierce, and senior editors Sean Hollister and Tom Warren. None of the journalists was contacted, and none granted permission for their names and professional personas to be used, according to a report published Wednesday by The Verge.

The controversy highlights growing concerns about AI companies leveraging public figures’ identities for commercial products without consent. Grammarly now operates under the Superhuman name, which the writing-assistance company adopted after acquiring the email startup of the same name.

How the Feature Works

According to The Verge’s testing, the Expert Review tool surfaces specific individuals’ names and then delivers AI-generated commentary presented as being “inspired by” their work. The system also attempted to link users to source material so they could “explore more deeply,” but those links frequently led to spammy mirror sites or archived copies rather than the experts’ actual published works. The feature itself reportedly crashed multiple times during testing.

In a statement to The Verge, Alex Gay, vice president of product and corporate marketing at Superhuman, defended the approach. “The Expert Review agent doesn’t claim endorsement or direct participation from those experts; it provides suggestions inspired by works of experts and points users toward influential voices whose scholarship they can then explore more deeply,” Gay said.

When asked whether the company considered notifying the named individuals or seeking permission beforehand, Gay replied that the experts appear “because their published works are publicly available and widely cited.”

Broader Implications for AI and Consent

The incident arrives amid heightened scrutiny over how generative AI tools handle real people’s names, likenesses, and professional reputations. While training on publicly available text is common practice across the industry, explicitly naming living individuals and simulating their expertise in a commercial product raises fresh questions about identity rights and implied endorsement.

The Verge noted that the feature has also referenced recently deceased professors, adding an additional layer of ethical complexity. Critics argue that presenting feedback “inspired by” named experts without their knowledge could mislead users and potentially damage the reputations of those whose names are invoked if the AI-generated advice is inaccurate or poorly reasoned.

Impact on Developers, Users, and the Industry

For individual users and writers relying on Grammarly’s premium features, the revelation may erode trust in the platform’s claims of expert-level guidance. Enterprise customers, in particular, may reconsider deployment of similar AI tools that lack transparent consent mechanisms.

The controversy could influence how other AI companies design “expert” or “persona”-based features going forward. Several large language model providers have already begun implementing stricter policies around named individuals, especially public figures, following earlier backlash over deepfakes and unauthorized voice cloning.

Developers building on top of Grammarly’s API or similar writing assistants may now face pressure to audit how expert identities are sourced and surfaced in their applications.

What’s Next

Grammarly and Superhuman have not announced any immediate changes to the Expert Review feature. The company has not commented publicly beyond the statement provided to The Verge.

The story is likely to fuel ongoing debates and potential regulatory interest in AI consent practices. Legal experts have previously suggested that right-of-publicity laws, which vary significantly by jurisdiction, could apply when a person’s professional identity is commercially exploited without authorization.

Users seeking more information about the feature can read The Verge’s full investigation, titled “Grammarly is using our identities without permission.”

As the AI industry continues its rapid expansion into productivity tools, cases like this underscore the tension between innovation and respecting individual agency over personal and professional identities. Further updates are expected if more named experts come forward and as the companies respond to the growing criticism.

Original Source

theverge.com
