Chad Hetherington

Deloitte, the world’s largest professional services network by revenue, is at the center of the latest AI blunder, and it cost them.

Let’s unpack what happened and how, the consequences of waiving human review of AI-generated work, and the lessons marketers can take away to avoid a similar PR nightmare in their own AI content efforts.

What Happened & How? Bogus Facts and Footnotes in a Government Report

In December 2024, the Department of Employment and Workplace Relations (DEWR) in Australia commissioned Deloitte Australia to conduct an independent review of the Targeted Compliance Framework (TCF) and its IT system, which automatically imposes penalties on welfare recipients who miss certain obligations. Deloitte was awarded the A$440,000 (≈ US$290,000) contract and published the report in July 2025.

After publication, a researcher at the University of Sydney flagged multiple serious errors and AI hallucinations, including citations to non-existent academic papers and fabricated quotes attributed to real professors and researchers, among other faults.

Deloitte later admitted it had used generative AI to produce parts of the report and confirmed that some footnotes and references were indeed incorrect or fabricated outright. The firm agreed to repay a portion of the contract total to the Australian government.

A revised version of the report was later published, with the bogus citations and fabrications corrected or removed. Even so, Deloitte’s final recommendations to the department remained unchanged in the corrected version.

Why This Slip-Up Is Unfortunate on Multiple Fronts

Mistakes, no matter how honest, often have consequences. For a company as large as Deloitte, the fallout from this slip-up will assuredly be far-reaching. It also serves as a reminder to everyone using AI to slow down, triple-check and edit thoroughly:

  • Reputational risk: For Deloitte, this blunder raises questions about how rigorously they supervise AI use in client deliverables.
  • Transparency about AI use: The fact that the initial version of the report did not disclose the involvement of an AI tool raises policy questions.
  • Cost vs. value debate: Because Deloitte is refunding only part of the fee, questions remain about whether the Australian government is getting full value for its money.

Deloitte is a market force, but every business is vulnerable to outcomes like these when AI use goes unguided, whether the cause is a missing corporate policy or something else.

At the very least, blunders like these reinforce lessons about ethical AI adoption and use that, ideally, businesses will begin applying.

3 Key Takeaways for Marketers

By now, AI blunders like this one all point to the same surface-level lesson: always review, review, review and edit, edit, edit.

For this one, I want to think outside the box a bit and see what other lessons there are to uncover, starting with authenticity.

1. Authenticity Is an Asset, Not Just a Value

AI can mimic authority, but it cannot be truly authentic — and audiences can feel that difference immediately when something seems off.

Authenticity doesn’t just mean content is “human-written”; it means content has real provenance: Who said this, where did it come from and why should I trust it? Brands that clearly communicate how AI assists their work (rather than pretending things are handcrafted when they aren’t) will increasingly stand out as more credible. In the new year, transparency about how we make content will become a big differentiator.

2. Polish Must Go Beyond the Surface Level

On the surface, Deloitte’s report looked credible. It was formatted, structured and cited, yet it began to collapse under light scrutiny. That’s a powerful lesson for content marketing: Surface-level polish doesn’t signal truth anymore.

The next phase of marketing differentiation won’t be about how good something looks but about how verifiable it actually is. I hope that, one day, we have ways to tag content so it’s clear at a glance what we’re reading or seeing: Is something genuinely human-created, AI-assisted or completely generated? Until official methods are available, though, marketers should take their own approaches to trust signaling to make their content more credible now and in the future.

3. There’s Opportunity To Turn Your Processes Into a Story

As the lines blur between authentic human-created content and content AI has had a hand in making, don’t be afraid to make your creative process part of your content and brand stories.

For example:

  • Show your editorial workflow.
  • Explain how humans and AI collaborated on a project.
  • Position your quality assurance and ethics as part of your brand narrative.

The more people know about your processes, the easier it is to prove your integrity.

Where To Go From Here?

It feels like every other week, another major player or beloved business is under fire for unethical AI use that clashes with audience expectations. I don’t imagine that will stop anytime soon, but I do have hope that marketers are taking note of these big mistakes and adjusting their strategies accordingly.

AI use is inevitable, but public scrutiny stemming from negligence, laziness or poor foresight can largely be prevented if you have an AI playbook in place, think critically about how you use the technology and are transparent with your audience and customers.