Have you ever tried to convince a conspiracy theorist that the moon landing wasn’t a hoax? You probably didn’t succeed, but ChatGPT might have better luck, according to a study by David Rand, a professor at the MIT Sloan School of Management, and Thomas Costello, a psychology professor at American University who conducted the study during his postdoctoral fellowship at MIT Sloan.
In a new paper, “Durably reducing conspiracy beliefs through dialogues with AI,” published in Science, the researchers demonstrate that large language models can effectively reduce individuals’ beliefs in conspiracy theories, and that these reductions last for at least two months. The finding offers new insight into the psychological mechanisms behind the phenomenon, as well as potential tools to combat the spread of conspiracies.
Going down the rabbit hole
Conspiracy theories, which claim that certain events are the result of secret plots by influential actors, have long aroused fascination and concern. Their persistence in the face of contrary evidence has led to the conclusion that they appeal to deeply held psychological needs, making them impervious to facts and logic. According to this conventional wisdom, once someone has “fallen down the rabbit hole,” it is virtually impossible to get out.
But Rand, Costello and their co-author, Cornell University professor Gordon Pennycook, who have conducted extensive research on the spread and adoption of disinformation, say that conclusion seems unfounded. Instead, they suspect a simpler explanation.
“We wondered if it was possible that people simply hadn’t been exposed to compelling evidence that refuted their theories,” Rand explained. “Conspiracy theories come in many forms: the details of the theory and the arguments used to support it differ from believer to believer. So if you’re trying to disprove the conspiracy but you haven’t heard those particular arguments, you’re not going to be prepared to disprove them.”
Effectively debunking conspiracy theories would require two things: personalized arguments and access to vast amounts of information, both of which are now readily available thanks to generative AI.
Conspiracy Conversations with GPT-4
To test their theory, Costello, Pennycook, and Rand harnessed the power of GPT-4 Turbo, OpenAI’s most advanced large language model at the time, to engage more than 2,000 conspiracy believers in personalized, evidence-based dialogues.
The study used a unique methodology that allowed for a deeper dive into participants’ individual beliefs. They were first asked to identify and describe a conspiracy theory they believed in using their own words, along with the evidence supporting their belief.
GPT-4 Turbo then used this information to generate a personalized summary of the participants’ beliefs and initiate a dialogue. The AI was tasked with persuading users that their beliefs were false, adapting its strategy based on each participant’s unique arguments and evidence.
These conversations, which lasted an average of 8.4 minutes, allowed the AI to directly address and refute specific evidence supporting each individual’s conspiracy beliefs, an approach that was impossible to test at scale before the technology was developed.
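The dialogue procedure described above can be sketched as a simple loop: seed the conversation with the participant's own statement of their belief and supporting evidence, then alternate model counterarguments with participant replies. This is an illustrative reconstruction only, not the study's actual code; the stubbed model function and all names below are assumptions standing in for a real chat-completion call such as GPT-4 Turbo.

```python
def stub_model(messages):
    """Placeholder for a chat-completion call (e.g., GPT-4 Turbo via an API).

    Returns a canned counterargument; a real implementation would send
    `messages` to a language model and return its reply.
    """
    return "Here is evidence addressing the specific claims you raised..."

def debunking_dialogue(belief, evidence, user_turns, model=stub_model):
    """Run a personalized, evidence-based dialogue against one belief.

    belief:     the participant's conspiracy theory, in their own words
    evidence:   the evidence the participant says supports it
    user_turns: the participant's replies, one per round of dialogue
    """
    # Seed the conversation with the participant's own summary, so the
    # model can tailor counterarguments to their specific claims.
    messages = [
        {"role": "system",
         "content": "Persuade the user, using accurate evidence, that the "
                    "following belief is unsupported: " + belief},
        {"role": "user", "content": "My evidence: " + evidence},
    ]
    transcript = []
    for turn in user_turns:
        reply = model(messages)  # model's tailored counterargument
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": turn})
        transcript.append(reply)
    return transcript

# Example: a three-round dialogue with the stubbed model.
log = debunking_dialogue(
    belief="The moon landing was staged.",
    evidence="The flag appears to wave in a vacuum.",
    user_turns=["But what about the shadows?",
                "Okay, that makes some sense.",
                "I hadn't seen that evidence before."],
)
print(len(log))  # one model reply per participant turn
```

The key design point mirrored here is personalization: the participant's own wording enters the prompt, so each counterargument targets their specific claims rather than a generic version of the theory.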
A significant and lasting effect
The results of the intervention were striking. On average, the AI-led conversations reduced participants’ belief in their chosen conspiracy theory by about 20%, and about one in four participants – all of whom had believed in the conspiracy theory beforehand – disavowed it after the conversation. The effect proved durable, remaining intact even two months after the conversation.
The effectiveness of the AI conversations was not limited to specific types of conspiracy theories. The approach successfully challenged beliefs across a broad spectrum, including conspiracies of political and social significance, such as those involving COVID-19 and fraud in the 2020 U.S. presidential election.
Although the intervention was less successful among participants who said the conspiracy was central to their worldview, it still had an impact, with little variation across demographic groups.
It is worth noting that the impact of AI-led dialogues is not limited to simple changes in beliefs. Participants also showed changes in their behavioral intentions related to conspiracy theories. They reported being more likely to “unfollow” people who advocate conspiracy theories online and more willing to engage in conversations that challenge these conspiratorial beliefs.
The Opportunities and Dangers of AI
Costello, Pennycook, and Rand are careful to emphasize the need to continue the responsible deployment of AI, as the technology could potentially be used to convince users to believe in conspiracies as well as to abandon them.
Nevertheless, the potential for positive applications of AI to reduce belief in conspiracies is considerable. For example, AI tools could be integrated into search engines to provide accurate information to users searching for conspiracy-related terms.
“This study shows that evidence is much more important than we thought, provided that it is actually related to people’s beliefs,” Pennycook said. “This has implications that go far beyond conspiracy theories: any belief based on flimsy evidence could, in theory, be undermined using this approach.”
Beyond the study’s specific findings, its methodology also highlights how large language models could revolutionize social science research, said Costello, who noted that the researchers used GPT-4 Turbo not only to conduct conversations, but also to filter respondents and analyze the data.
“Psychology research used to rely on graduate student interviews or interventions with other students, which was inherently restrictive,” Costello said. “Then we moved to online survey and interview platforms that gave us breadth but removed nuance. AI allows us to have both.”
These findings fundamentally challenge the idea that conspiracy theorists are beyond the reach of reason. Instead, they suggest that many of them are willing to change their minds when confronted with compelling, personalized counterevidence.
“Before we had access to AI, conspiracy research was largely based on observation and correlation, which gave rise to theories that conspiracies fulfilled psychological needs,” Costello said. “Our explanation is more mundane: Most of the time, people simply didn’t have the right information.”
Additionally, members of the public interested in this ongoing work can visit a website and try the intervention for themselves.
More information:
Thomas H. Costello et al., Durably reducing conspiracy beliefs through dialogues with AI, Science (2024). DOI: 10.1126/science.adq1814. www.science.org/doi/10.1126/science.adq1814
Provided by MIT Sloan School of Management
Citation: Conversations with AI can successfully reduce belief in conspiracy theories (2024, September 12)
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.