Chad Hetherington

Writer’s note: Grammarly reportedly disabled Expert Reviews while I was actively writing this blog. Even though it’s no longer available, it still warrants a conversation about AI ethics, which you’ll find here.

Writer’s note #2: A class action lawsuit against Grammarly is underway for presenting editing suggestions from experts without their consent. More details below.

I’ve seen the rise, peak and seemingly inevitable fall of many products and services over the years. It’s become such an unfortunately predictable pattern.

I can’t say for certain that we’re witnessing the ‘inevitable fall’ stage of that trajectory with Grammarly right now, but it kind of feels like it.

The smart spellchecker turned AI writing assistant introduced a new feature called “Expert Reviews” back in August 2025. But it wasn’t until recently that it began boiling the tech discourse waters. It’s given rise to many articles with headlines like “Grammarly’s ‘expert review’ is just missing the actual experts,” and “Grammarly is using our identities without permission.”

If you hadn’t heard of the feature until right now, don’t worry — I hadn’t either. Let’s talk about what Expert Reviews was and why it’s so controversial among tech journalists, scholars and writers.

What Is (Was) Grammarly’s Expert Review Feature?

Expert Review was meant to provide feedback “inspired by subject matter experts so you can craft stronger arguments the way an expert would,” writes Grammarly on its AI Agents landing page.

Here’s how Expert Review was intended to work:

  • You open your document in Grammarly and run the Expert Review agent.
  • The AI analyzes your writing and identifies relevant experts based on the topic.
  • It then provides revision suggestions framed as insights from those experts’ perspectives (e.g., how they might refine clarity, argument, tone or structure).
  • You can choose which experts to include or remove, and accept or ignore the suggestions.

It goes without saying, but the “experts” were not actually reviewing the document. The feedback was generated by AI using publicly available work associated with those experts. That said, it was made to sound like it came directly from the source, which didn’t sit well with just about anyone.

The Problems with Expert Review

Grammarly quickly deactivated Expert Review due to the widespread backlash.

I hadn’t originally planned to frame this section this way, because when I began writing this blog, Expert Review was still live. So, rather than covering the feature’s reported problems in the abstract, this section is now about the problems that led to its demise. Dramatic, I know. But also warranted.

As a side note, we recently published a refresher article on AI ethics. Well, the timing couldn’t have been better for a story like this to break, because the reasons Grammarly pulled the product all represent what most people have come to understand as the wrong ways to do AI.

For example:

  • No opt-in: Cited experts didn’t give their permission to be featured in the tool. They could opt out once they realized — or if they realized — the AI agent was pulling from their work and essentially posing as them, but that’s a backwards way to do it.
  • Inaccuracy: Many experts who found themselves unwillingly cited by the tool reached out to Grammarly directly to express their concerns about how “the agent misrepresented their voices,” according to a Yahoo! Tech article reporting on the matter.

Some experts’ workplaces, positions and titles as they appeared in Grammarly were also outdated, reported Stevie Bonifield, news writer for The Verge, which could have been avoided if the company had simply “asked those people for their permission to reference their work,” he writes. Attribution is well and good (expected, even) but it has to be accurate. Otherwise, what’s the point?

  • Insensitivity: Some reports indicated that a number of experts emulated by Grammarly’s tool were recently deceased.

I understand that when people pass, their work doesn’t disappear from the internet, and people certainly don’t stop referencing it. However, beyond all of the other problems with Expert Review highlighted here and everywhere, it’s easy to understand how this slip-up might be jarring for friends, family and colleagues of the deceased who might’ve encountered this in Grammarly — especially since suggestions sometimes looked like real feedback.

So, inaccurate information, misrepresentation and insensitivity: all things AI can perpetuate unless there are humans in the loop to course correct. That didn’t seem to be the case here.

The Lawsuit

On Wednesday, March 11, 2026, award-winning investigative journalist and NYT bestselling author Julia Angwin posted on LinkedIn that she was joining a class action lawsuit against Superhuman (Grammarly’s parent company) and Grammarly.

Angwin cites a relevant New York law: The “Right of Publicity.” This states clearly that individuals have an “inherent right to control the commercial use of his or her personal characteristics.” This goes for living or deceased individuals, in order to “protect against the commercial exploitation, or unauthorized use” of their name or likeness.

Seems pretty cut and dried to me.

An AI Ethics Review

These ethics are pretty self-explanatory, which makes it all the more puzzling that Grammarly or Superhuman greenlit Expert Review, or thought it wouldn’t be met with serious opposition. I imagine if they’d asked experts beforehand about using their names to offer AI writing feedback, most would’ve said no, meaning they would’ve had to rethink the product before it ever launched.

Well, that might’ve been preferable now that a class action lawsuit is underway.

Using AI for almost anything warrants:

  • Disclosure if it’s something you plan to publish.
  • Permission, if it’s a tool that relies on other people’s work or actions.
  • Forethought about how your use of AI might impact someone else, directly or indirectly.
  • A plan or policy for approaching AI — whether content, tools or workflows — to mitigate and ideally prevent everything from bias to inaccuracy to hallucinations.

Without ethical frameworks to rein it in, AI can absolutely do more harm than good. And that’s bad for everybody.

Final Thoughts

Wow, this train derailed quickly. This was originally supposed to be a simple critique of Grammarly’s Expert Review feature — what it was and why it felt icky. And in less than 24 hours, Superhuman deactivated Expert Review and got slammed with a class action lawsuit for essentially cloning experts without consent.

I mean, this has turned into the quickest crash course on how not to do AI, and on the consequences of poor or absent ethics. Whatever the verdict of the class action, I imagine Grammarly’s reputation will be permanently tarnished.

AI in marketing is inevitable, but marketers must always operate with a calibrated moral compass. Be transparent, and think critically about how AI supports your work — what it can realistically help with, and where you need to keep your human feet planted in your processes.