Florian Fuehren

Remember that Monty Python scene? The one where villagers, absolutely convinced they’ve caught a witch, offer up such irrefutable evidence as “She turned me into a newt!” (Spoiler: he got better) and the undeniable fact that she weighs the same as a duck?

Well, welcome to the content marketing world, circa right now. With the explosion of AI-generated content, a similar witch hunt is on, where everyone’s jumpy, accusations fly and frankly, nobody’s entirely sure what makes a piece of content “AI-generated” anymore, but they’re darn sure they’ve found it.

It’s time to put down the pitchforks and oversized scales, folks. Let’s talk about what’s really going on.

The Great AI Content Conundrum: Misunderstandings and Mayhem

The digital town square is buzzing with complaints, doubt and accusations about AI content. Is it a miracle cure for content creator burnout? A harbinger of SEO doom? A cheap knock-off that’ll fool no one? The truth, as always, is a bit more nuanced than a yes/no tick box on a villager’s parchment. But a few key misunderstandings are muddying the waters.

Misunderstanding #1: “My Trusty AI Detector Said So!”

Ah, the detection tools. The digital equivalent of checking if someone floats. Here’s the not-so-secret secret: These tools are incredibly unreliable, and I say that as someone who used to take pride in catching plagiarizing students before the age of AI. Back then, the laziest cases were pretty obvious — term papers that even copied the original’s font and chapter numbering.

Today, it’s a bit trickier, since you (or that student) can tell a chatbot to rewrite a source while ignoring data points 2-4 and turning it into a Shakespearean sonnet. A tool built to flag passages copied word for word has little chance against that, which is why we’ve seen everything from the Declaration of Independence to the Bible flagged as AI-generated.

The problem is that these tools still rely on fairly simplistic rules, which simply don’t reflect today’s world. Yes, you can still catch the most mundane type of word-by-word plagiarism, but today, you’re most likely dealing with false positives (or negatives) because of the way the very process of writing has changed. Relying solely on these tools is like trusting that Monty Python peasant to identify the witch — well-intentioned, perhaps, but ultimately flawed.
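To make the point concrete, here is a toy sketch of the kind of word-by-word matching older plagiarism checkers relied on. This is my own illustration, not any vendor’s actual algorithm: it counts how many five-word sequences from a submission appear verbatim in a source, so it catches an exact copy perfectly and misses even a clumsy paraphrase entirely.

```python
def ngram_overlap(source: str, submission: str, n: int = 5) -> float:
    """Fraction of the submission's n-word sequences found verbatim in the source."""
    def shingles(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    src, sub = shingles(source), shingles(submission)
    return len(src & sub) / len(sub) if sub else 0.0

source = "The quick brown fox jumps over the lazy dog near the river bank"
verbatim = "The quick brown fox jumps over the lazy dog near the river bank"
paraphrase = "A speedy russet fox leaps above an idle hound by the riverside"

print(ngram_overlap(source, verbatim))    # 1.0 — flagged
print(ngram_overlap(source, paraphrase))  # 0.0 — invisible to the check
```

The paraphrase says the same thing as the source, yet shares not a single five-word run with it — which is exactly the blind spot a “rewrite this for me” prompt exploits.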

Misunderstanding #2: “AI Content Will Tank My SEO!”

This one’s a biggie, often whispered by nervous SEO vendors of yore. The fear is that Google and its search engine brethren can sniff out AI content and will promptly banish your site to the digital cornfield. 

Now, let’s be clear: Google has stated it prioritizes high-quality, helpful content, regardless of how it’s produced. It’s also true that humans in general can recognize a difference between human-crafted and generated content (to a certain extent). The problem, though, is that we all have certain assumptions about what AI content might look like, which can change how we perceive content.

The fact that we need one AI tool to tell us whether a different algorithm generated the copy in the first place says enough about our shared paranoia. I, as a content writer, might notice certain cliché analogies, whereas a data scientist might notice a weirdly common typo in a formula. But that could mean that the author has read too many blogs using that cliché, or that they simply copied over that typo.

Poor content, whether painstakingly typed by human hands or spat out by a Large Language Model in seconds, will hurt your SEO. Conversely, valuable, engaging content that serves user intent can boost your rankings, AI-assisted or not. Search engines are far more interested in the what and why of your content than the how.

Client Relationships in the Crosshairs: Dealing With AI Accusations

Certainly, we’ve all felt our share of AI anxiety, but in B2B relationships, it seems to be particularly palpable. Clients, quite understandably, want to know they’re getting what they paid for — expert human insight, not just something they “could’ve generated themselves.” So, how do you approach this issue from both sides without the relationship going up in smoke?

Before the First Spark: Radical Transparency Is Your Best Defense

Don’t wait for the awkward email asking, “Did a robot write this?” Be proactive.

  • Define your stance: Make it crystal clear what your organization’s policy on AI is. Do you use it? How do you use it? What tools are in play? What guardrails are in place?
  • Communicate it clearly: Don’t bury it in fine print. Talk about it. Put it in your proposals, your contracts, your kickoff calls, your FAQs. Transparency builds trust.

When You ARE Using AI (The Good Way): Own It, Explain It

For context: I got my storytelling chops leafing through medieval manuscripts. Still, even I have a hard time imagining many modern content production workflows that don’t rely on some type of AI. 

That spellcheck with Grammarly? — AI. Overviews at the top of your Google search? — AI. Perplexity reports about recent surveys, keyword density recommendations, auto-transcribed expert interviews? — You got it. AI.

We’re way past the point where it’s realistic (or in many cases, desirable) not to use AI. The question is how your team’s using it. If AI is part of your process, your clients and business partners should be aware of it, and that means you should be upfront about it.

There’s no reason to use AI for the sake of it. Focus on value, and explain how AI enhances your human-led services (or makes them superfluous, if that’s your jam). Perhaps it aids in research, helps generate initial drafts or assists in outlining. We as humans are collectively learning how we experience generated content in real time, so if you’re transparent about your processes, stakeholders might even appreciate the opportunity to learn about certain workflows’ advantages.

Falsely Accused? Keep Calm and Refer to the Policy

So, you got that email. The one with the screenshot from an AI detector claiming your lovingly crafted, 100% human-written masterpiece is 73% robot. Deep breaths.

  • Have a game plan: Don’t be caught off guard.
  • Acknowledge and educate: Thank them for their diligence. Then gently explain the known unreliability of AI detection tools (possibly even with the help of this blog post).
  • Reiterate your process: Remind them of your agency’s AI disclaimers (which you’ve already shared, right?). Highlight the human hours, expertise and review stages involved.
  • Focus on quality: Shift the conversation back to the quality of the work, its alignment with their goals and guidelines, and the results the content will drive.

Suspect Your Vendor Isn’t Being Forthright? Inquire, Don’t Accuse

The shoe might be on the other foot. You might suspect a vendor is passing off AI work as entirely human-crafted, especially if the quality dips or it feels … off. What do you do?

  • Remember the detector flaw: Don’t immediately leap to conclusions based on a detection tool.
  • Inquire politely: Instead of an accusation (“Gotcha! You used AI, didn’t you?!”), try a more inquisitive approach. “Can you walk us through your content creation process?” or, “We’ve noticed a shift in our search position. Could you provide some insight into how these deliverables are being developed?” Notice the focus on outcome, not scores.
  • Clarify expectations: This is a good opportunity to reiterate your stance and expectations on AI use in the content you’re paying for.
  • Value proposition under scrutiny: The core issue often boils down to value, and there’s no one right answer. If content is produced by AI, is it inherently cheaper? If a human did a spellcheck but no fact check, is it worth more? When does it become more valuable for your business objectives? 

Ditch the Torches. Demand Quality.

The current panic around AI-generated content feels a lot like trying to determine witchcraft by duck-dunking. It’s messy, often wrong and distracts from what truly matters: the quality, accuracy and value of the content itself. Whether a sentence was first drafted by a human brain or a neural network matters far less than whether it was refined by human expertise and checked for accuracy; that should be the ultimate test.

The unavoidable truth is that AI is here, and its features are increasingly embedded in the tools we use. So unless we want to argue for going back to quill and parchment, let’s advocate for clear communication, ethical AI usage and an unwavering focus on creating content that’s valuable and reliable. Granted, that’s less dramatic and exciting. But unlike a newt, it’s something that won’t just “get better” on its own.