Grammarly’s ‘Expert Review’ Feature Draws Criticism for Misusing Names of Famous Writers
SAN FRANCISCO — Grammarly has rolled out a new “Expert Review” feature that claims to provide writing feedback inspired by the world’s great writers and thinkers, but the company is facing sharp backlash after multiple outlets reported that none of the named experts appear to have been consulted or given permission to have their names used.
The feature, recently added to Grammarly’s AI-powered writing assistant, promises users personalized advice drawn from luminaries in literature, journalism and academia. However, reporting from TechCrunch, WIRED, The Verge and The Chronicle of Higher Education reveals that prominent figures listed — including living authors, deceased writers and working journalists — have no involvement with the product and were not contacted by Grammarly.
According to the reports, the Expert Review feature generates AI-produced feedback and then attributes its stylistic or thematic insights to specific “experts” whose published works or public personas align with the content. Journalists whose names were used without permission include The Verge’s senior editors Sean Hollister and Tom Warren, as well as other tech journalists. Academic experts such as Helen Sword were similarly listed despite having no knowledge of or participation in the initiative.
No Permission, No Involvement
Multiple publications emphasized that the named individuals had nothing to do with the AI review process. “None of these figures appear to be involved in Expert Reviews or to have given Grammarly permission to use their names,” TechCrunch reported. WIRED described the practice as “insidious,” noting that users are presented with a list of authors available to “weigh in” on their text when in reality those people have no connection to the generated feedback.
The Chronicle of Higher Education highlighted the case of Helen Sword, a well-known academic writing expert, who was unaware that Grammarly was using her name and work to brand its AI-generated suggestions for students and other users.
Grammarly has not yet issued a public statement addressing the allegations. The company’s marketing materials positioned Expert Review as a way to bring authoritative voices into everyday writing assistance, a claim now under scrutiny for potentially misleading consumers.
Technical and Ethical Questions
The controversy touches on broader issues in the generative AI industry regarding the use of individuals’ names, likenesses and reputations to lend credibility to automated systems. While large language models are frequently trained on publicly available text, the explicit association of living or recently deceased authors with specific AI outputs — without consent — has raised questions about implied endorsement and potential misrepresentation.
Industry observers note that Grammarly, long known for its grammar-checking tools, has aggressively expanded into generative AI features in recent years to compete with newer entrants such as OpenAI, Anthropic and other large language model providers. The Expert Review feature appears designed to differentiate the product by promising “expert”-level insight rather than generic suggestions.
Impact on Users and the Industry
For students, professionals and other Grammarly users, the revelation may undermine trust in the platform’s newer AI capabilities. Many rely on Grammarly for high-stakes writing such as academic papers, job applications and client communications, where perceived authority and accuracy carry significant weight.
The incident also reflects growing tension in the AI sector around “personality laundering” and unauthorized use of intellectual and personal brands. Similar complaints have surfaced against other AI companies that generate content “in the style of” specific creators without their involvement.
What’s Next
As of the latest reporting, it remains unclear whether Grammarly plans to remove the named experts from the feature, add disclaimers, or seek retroactive permission. The company has not announced a timeline for any changes to Expert Review or addressed how it intends to verify future “expert” associations.
Developers and writers following the story will be watching for official comment from Grammarly, potential legal or regulatory scrutiny around deceptive marketing practices, and whether the controversy prompts wider industry standards for disclosing the human origins — or lack thereof — of AI-generated advice.
The episode serves as a reminder that as AI writing tools become more sophisticated in their branding, transparency about the actual source of recommendations will be critical to maintaining user confidence.
(This article is based on reporting from TechCrunch, WIRED, The Verge, The Chronicle of Higher Education and NewsBytes. Grammarly had not responded to those publications at the time of their reporting.)

