Change My Mind, Bro—Just Don’t Hack It
The Ethics of AI-Driven Persuasion: Lessons from Reddit’s Unconsented Experiment
In a recent incident that has sparked widespread concern, researchers from the University of Zurich deployed AI bots on Reddit’s r/ChangeMyView forum as part of a psychological experiment to test the persuasive capabilities of large language models (LLMs). These bots, powered by models such as GPT-4, Claude 3.5, and LLaMA, were designed to mimic human users and engage in debate—without informing participants that they were interacting with artificial intelligence[^1].
The study’s core ethical failure lies in its lack of informed consent. Reddit users were unknowingly turned into research subjects. Some bots posed as individuals with personal traumas or professional authority, such as sexual assault survivors or trauma counselors, to bolster their credibility[^2]. Such deception is a clear violation of established ethical standards in human subjects research, including those laid out in the Declaration of Helsinki and U.S. federal research regulations (the Common Rule), which require transparency, voluntary participation, and minimal risk to participants.
Reddit responded by banning the research team from the platform, with its Chief Legal Officer stating that the study was “deeply wrong on both a moral and legal level”[^3]. The University of Zurich has since opened an investigation.
Beyond the issue of consent, the incident underscores the growing power—and danger—of using AI to manipulate public discourse. The study’s results showed that AI-generated comments could be more persuasive than those written by humans, particularly when they strategically mirrored the language and emotional tone of other users. This raises urgent concerns about how AI might be deployed at scale to shape opinions, especially in political or commercial contexts.
The ease with which these bots integrated into a real-world forum without detection reveals a sobering truth: AI-driven persuasion is not a hypothetical threat. It’s here—and if left unregulated, it may erode trust in digital communication and public discourse.
Footnotes:
[^1]: Vincent, James. “Reddit Bans University Researchers Who Ran a Mass AI Experiment without Consent.” The Verge, April 30, 2025. [Link](https://www.theverge.com/2025/4/30/ai-reddit-ban-university-zurich-experiment-consent).
[^2]: Ibid.
[^3]: Ibid.