## The short version
Grammarly, the popular writing app (now part of Superhuman), added an AI feature called "Expert Review" that attaches real experts' names, including journalists from The Verge, to its suggestions to make them seem more credible. No one asked the experts first; they discovered their names on AI-generated feedback they never gave, sparking backlash. Grammarly responded by letting people opt out, but it won't apologize or remove the feature, claiming the suggestions are merely "inspired by" public works, and it may still keep some data even after you opt out.
## What happened
Imagine you're using Grammarly to polish an email or report, and it suddenly gives you writing tips "from" a famous editor like Nilay Patel from The Verge. Sounds helpful, right? But here's the catch: Patel and other real journalists never agreed to this. Last week, Verge reporters tested the new "Expert Review" feature and saw their own colleagues' names pop up on AI-generated comments—like fake quotes or advice that the AI made up, pretending to come from these experts.
The Verge broke the story after its own staff, including Stevie Bonifield, David Pierce, Tom Warren, and editor-in-chief Nilay Patel, found themselves "starring" in Grammarly's AI without knowing. Grammarly says the feature draws on publicly available writing by these experts (think blog posts or articles anyone can read online) to "inspire" suggestions, then points users to the experts' real work. But it attached the experts' real names to make the AI sound smarter and more trustworthy, like borrowing a celebrity chef's name for your homemade recipe without telling them.
Instead of saying sorry or shutting it down, Grammarly's response (via Superhuman VP Alex Gay) was basically, "We didn't ask because their stuff is public." Now, affected people can email to opt out, but Grammarly admits it might still use some data like usage stats even after you do.
## Why should you care?
You might think, "I'm not a famous writer, so this doesn't touch me." But it does—Grammarly has access to your private emails, documents, and messages every time you use it to check grammar or style. If they're casually using real people's identities without permission to hype their AI, what stops them from mishandling your data? This erodes trust in apps we rely on daily for work, school, or personal stuff. Plus, it blurs the line between real human advice and AI fakes, which could mislead you into bad writing habits or make you question if advice is genuine.
For everyday folks, it matters because writing tools like Grammarly are everywhere—in browsers, apps, even email. If companies start name-dropping without consent, it could flood your suggestions with phony "expert" input, wasting your time or spreading subtle biases from those public writings. And with privacy worries, you might second-guess sharing sensitive info (like job applications or medical notes) with apps that play fast and loose with identities.
## What changes for you
Practically, nothing huge flips overnight, but here's what to watch:
- If you're a Grammarly user: Your suggestions may still include "expert" feedback modeled on real people's public writing. Check your account settings or contact support to see whether the feature is on for you; Grammarly sent no mass notification.
- Opting out: If you're an expert (or think you might be listed), email Grammarly to have your name removed. Regular users can't control whose names appear, but you can disable Expert Review in settings if it bugs you.
- Privacy check: Review what Grammarly sees: it scans your text for improvements. Even post-opt-out, they keep usage stats (like how often you use it), per reports.
- Alternatives: If this spooks you, try free tools like Google Docs' spellcheck or Microsoft Editor, which don't pull this stunt (based on this story).
- Broader ripple: Expect more scrutiny on AI writing apps. Your boss's email or kid's essay might get "expert" tips that aren't what they seem, potentially leading to awkward mistakes.
No apps are breaking yet and costs aren't changing, but the episode makes AI feel less reliable, like a friend quoting someone out of context to sound smart.
## Frequently Asked Questions
### Is Grammarly still safe to use for my private writing?
Yes, for most people: it's still a solid grammar checker, though it sees whatever text you run it on, not just what you paste. This incident shows they prioritize features over permissions, so keep super-sensitive material like legal docs out of it. Even after opting out (if applicable), they still track usage stats, so treat it like any cloud app: nothing truly private.
### How do I opt out if my name is being used?
If you're an "expert" like a published author, email Grammarly's support (details in their response to The Verge). They haven't set up an easy button yet—it's manual. Regular users can't opt out of the feature, but you can turn off Expert Review in app settings to skip name-dropping.
### Does this mean the AI advice is fake or wrong?
Not necessarily wrong—the suggestions are AI-generated but "inspired by" real experts' public work, like summarizing their articles. The issue is falsely implying direct involvement (e.g., "Nilay Patel says...") without permission, which tricks you into trusting it more. Always double-check big changes.
### Why didn't Grammarly just ask permission first?
They claim public writings don't need it, calling the feature "pointing to influential voices." Critics say that's a cop-out: using someone's name to promote a commercial product typically requires consent, like a testimonial. No apology came; they just added an opt-out after backlash.
### Will this happen in other apps like Microsoft Editor or Gmail?
Not confirmed here, but it's a warning. AI tools train on public data, but slapping real names on outputs without asking is rare and risky. Watch for similar features; user complaints (like on ResetEra forums) push companies to fix fast.
## The bottom line
Grammarly's move to attach real experts' names to its AI "Expert Review" without permission is a trust-eroding wake-up call: even handy apps can overstep on privacy to make their tech seem fancier. For you, it means pausing before blindly following "expert" tips in writing tools: verify if needed, opt out if you're listed, and maybe diversify your apps. Demand better from companies; this story shows public outcry works (they added an opt-out). Stay savvy with AI helpers, and both your writing and your data stay safer.
## Sources
- The Verge: Grammarly is using our identities without permission
- SFist: Grammarly’s New AI Tools Use Experts’ Identities Without Their Permission
- TechBuzz: Grammarly Caught Using Real Identities Without Consent
- ResetEra Discussion on The Verge Article

