
A new story from Reddit highlights the grave risks of artificial intelligence manipulating opinions online.

The moderators of Reddit forum r/changemyview (CMV), which has 3.8 million members, posted over the weekend about an “unauthorized experiment conducted by researchers from the University of Zurich on CMV users.” This experiment specifically “deployed AI-generated comments to study how AI could be used to change views.” According to Straight Arrow News, the study found that “AI-generated comments were six times more persuasive than human responses.” What’s worse is that Reddit users were unaware they had interacted with AI-generated accounts. 

The controversy over the study is another reason AI continues to raise red flags: as the Reddit moderators noted, its privacy, deception and free speech risks could lead to abuse.

Last month, the CMV Mod Team explained, the university finally reached out as “part of a disclosure step in the study approved by the Institutional Review Board (IRB) of the University of Zurich (Approval number: 24.04.01).” The university further provided a list of both active accounts and accounts that had been removed by Reddit moderators, as the platform features a community notes-style censorship system.

The CMV moderators said the university described its study as:

“[Using] multiple accounts to posts published on CMV. Our experiment assessed LLM's persuasiveness in an ethical scenario, where people ask for arguments against views they hold. In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible. While we did not write any comments ourselves, we manually reviewed each comment posted to ensure they were not harmful. We recognize that our experiment broke the community rules against AI-generated comments and apologize. We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules.”

The email included the university’s first draft of its study results, according to the Reddit post by the r/changemyview AutoModerator. Among the individuals the AI impersonated were a patient who received substandard care in a foreign hospital, a trauma counselor and a victim of rape. In the latter instance, the AI told users it was a man who had survived rape while still a teenage minor. The researchers did not even consult the University of Zurich ethics commission when they changed strategy mid-study, the Reddit moderators said.

“The Mod Team responded to this notice by filing an ethics complaint with the University of Zurich IRB, citing multiple concerns about the impact to this community, and serious gaps we felt existed in the ethics review process,” the post added. The post further explained that the university refused to prevent publication of the study, downplayed the harmful effects and claimed that an investigation had already taken place. 

Such AI manipulation without users’ knowledge is relevant not only to users of Reddit forums but to anyone affected by the growing use of AI on tech platforms, in businesses and in education. In fact, the Trump administration last week issued an executive order expanding the use of AI in education and professional development to help future generations learn how to use AI more effectively. AI also poses a threat to free speech, as evidenced by its history of censorship. For instance, in May, MRC researchers caught Meta AI bending over backwards to defend arguments in favor of censorship.

Free speech is under attack! Contact your representatives and demand that Big Tech and government be held to account to mirror the First Amendment while providing transparency, clarity on hate speech and equal footing for conservatives. If you have been censored, contact us using CensorTrack’s contact form, and help us hold Big Tech accountable.