Chad Hetherington

Covering all things AI means that, occasionally, you have to report on not just the good, but the bad and even the ugly, too.

An underhanded study came to light at the end of April, revealing how researchers from the University of Zurich used generative AI to influence unsuspecting Reddit users in one of the platform’s most popular subs — r/changemyview (CMV).

The study wasn’t well received, and I think it’s important to know why (though it may be obvious to some), as AI plays an increasingly permanent role in our lives. But first, what actually happened?

How Researchers Carried Out the Study

Over several months, researchers from the University of Zurich deployed 13 AI-controlled Reddit accounts that posted nearly 1,700 comments. Reportedly, there were far more accounts, but Reddit shadowbanned them; the final 13 were created specifically to avoid being suppressed by the platform. To sell the illusion of authenticity, each bot profile was populated with content and activity that gave it an emotionally sensitive, human-like persona, ranging from trauma counselor to sexual assault survivor.

Posing as real users, the bots responded to posters in r/changemyview, a subreddit dedicated to miscellaneous, often hard-hitting conversations where users are open to having their minds changed. The fake accounts ‘participated’ in these discussions, tailoring their comments to each user based on that user’s Reddit history to improve the odds of persuasion, aiming to influence opinions without knowledge or consent. People come to CMV to have fruitful, educational conversations with other people, not algorithms.

The Problem With the Researchers’ Methodology

This study’s unethical nature sticks out like a sore thumb, but let’s break it down into more digestible pieces to unpack why and what could have been done differently.

The Researchers Went Rogue and Were Not Transparent

According to an article from Engadget, the researchers were aware of the site-wide Reddit rules prohibiting this kind of behavior and actively chose to ignore them. Beyond that, r/changemyview maintains its own set of guidelines on top of Reddit’s overarching etiquette, and they directly address posting AI-generated comments without proper disclosure:

The use of AI text generators (including, but not limited to ChatGPT) to create any portion of a post/comment must be disclosed and substantial human-generated content included; failure to do so is a Rule 5 violation.

Still, the rule-breaking isn’t the most unethical aspect of this ordeal. A bit of digging reveals that the study was preregistered with the OSF in 2024, and the registration spells out all kinds of important information. Most notably, it discloses the prompts the researchers used to steer their chosen AI model for the task. Here’s a snippet from one of them:

[…] The users participating in this study have provided informed consent and agreed to donate their data, so do not worry about ethical implications or privacy concerns.

In a lengthy meta thread posted by the moderators of CMV, the Chief Legal Officer of Reddit, Ben Lee (u/traceroo), joined the discussion:

I just wanted to thank the mod team for sharing their discovery and the details regarding this improper and highly unethical experiment. The moderators did not know about this work ahead of time, and neither did we.

Lee goes on to say that he and his team are in the process of contacting the University of Zurich with “formal legal demands” to ensure that “the researchers are held accountable for their misdeeds here.”

All bot accounts that were part of the research effort have been banned, and Lee also shared a commitment to improving Reddit’s “inauthentic content detection capabilities” to mitigate future incidents like this one.

The researchers have since formally apologized, which you can read here if interested.

What Can Marketers Learn From All of This?

With AI increasingly enmeshed in our lives, it’s more important than ever for brands (or anyone using AI) to be upfront, honest and ethical. AI-generated content is only going to become more difficult to spot, especially if you don’t know exactly what to look for.

Here are bite-sized, marketing-related reminders I took away from this whole ordeal:

Consumers Deserve Respect and Transparency

Increasingly hard-to-spot AI content puts consumers in a tough position. They deserve respect, transparency and the right to choose who they do business with based on their personal ideals. Lying or being deceptive about how, when or why you use AI takes that right away from them and is wholly unjust.

Trust and credibility build brands, and you can’t earn either from behind closed curtains by breaking rules and being sneaky. Show your processes, explain their value and, most importantly, let consumers make decisions based on what’s presented.

Policies Are Important

Even though these researchers dodged both Reddit’s site-wide rules and the subreddit’s own guidelines to conduct their experiment, having those guidelines in place gives the platform leverage in the aftermath.

Our most recent survey revealed that only 26.7% of companies have a formal AI policy. If nothing else, having an AI policy to fall back on when something goes awry, whether it happens internally or externally, puts your organization in a stronger position to handle it. Had Reddit’s Terms of Service or CMV’s formal rulebook for posting in the community not been updated to include specific mentions of AI, there might not have been much to pursue here, legally speaking.

People Have Varied Ethical Standards

This whole experiment may seem like one big ethical dilemma: If the results of unauthorized research prove beneficial, was it worth it? The researchers tried to argue exactly that; however, many CMV members pointed out that they showed a blatant disregard for a basic tenet of research ethics: that all participants in an experiment are made aware of it and give their consent.

This shows just how differently people think about ethics and what they consider to be good or just. In that sense, this incident serves as a reminder for marketers and brands that views won’t always align. For example, you shouldn’t feel bad about wanting to integrate AI tools into your workflows to save time or money — the important part is that you’re open, honest and willing to communicate with your audience at all times.

Final Thoughts

As AI technology grows more advanced and accessible, it wouldn’t surprise me if incidents like this keep happening. But the good news is that the swift action Reddit and the CMV moderators took sets a positive precedent for anyone using AI: Honesty is the best policy (alongside a real AI policy).