
AI Ethics Under Fire: Reddit Users Deceived in University Experiment
A storm of controversy has erupted over an experiment conducted by researchers from the University of Zurich, who secretly deployed AI-generated messages on the subreddit r/ChangeMyView (CMV). The study, aimed at assessing the persuasiveness of large language models (LLMs), has been heavily criticized for its ethical breaches, particularly the lack of informed consent from participants.
CMV is a community where users post opinions to invite discussion and challenge their own viewpoints. Moderators revealed that the researchers, who remained anonymous, used AI to generate comments, some portraying fabricated personas such as a “rape victim” or a “Black man opposed to Black Lives Matter.” The bots posted more than 1,700 comments in total, and some personalized their responses by analyzing a user’s posting history, using another LLM to infer that person’s gender, age, ethnicity, location, and political orientation.
The revelation sparked outrage. Casey Fiesler, an information scientist at the University of Colorado, called it “one of the worst violations of research ethics I’ve ever seen.” Sara Gilbert of Cornell University’s Citizens and Technology Lab suggested the study has damaged the community’s trust: “Are people going to trust that they aren’t engaging with bots? And if they don’t, can the community serve its mission?” Trust is at the core of the issue, a point echoed in numerous comments on the original Reddit post.
According to moderators, the researchers violated Reddit’s rules against impersonation and AI-generated content. In response, Reddit’s Chief Legal Officer Ben Lee condemned the actions as “deeply wrong on both a moral and legal level.” Reddit has banned all accounts associated with the research and is contemplating legal action against the university.
The University of Zurich, while acknowledging the situation, defended its ethics commission’s approval, stating that the project provided “important insights” and that the risks were “minimal.” It issued a formal warning to the principal investigator but resisted calls to suppress publication, arguing that suppression would not be commensurate with the importance of the insights the study yielded. Despite the ethical concerns, the university maintains that the research is valuable for understanding how AI can shape online discourse.

The r/ChangeMyView moderators emphasized their willingness to collaborate with researchers who approach them beforehand and obtain consent. As one moderator stated, “People do not come here to discuss their views with AI or to be experimented upon.” The incident raises critical questions about the ethical boundaries of AI research and the need for transparency and user consent in online studies.
What are your thoughts on this experiment? Should universities be allowed to conduct research like this without explicit user consent? Share your opinions in the comments below.