Political observers have been troubled by the rise of misinformation online, a concern that has grown as Election Day approaches. But while the spread of fake news can pose a threat, a new study finds that its influence is not universal. Rather, users with extreme political views are more likely than others to encounter and believe fake news.
“Misinformation is a serious problem on social media, but its impact is not uniform,” says Christopher K. Tokita, the lead author of the study, which was conducted at New York University's Center for Social Media and Politics (CSMaP).
The results, which appear in PNAS Nexus, also indicate that current methods for combating the spread of misinformation are likely unsustainable, and that the most effective way to address the problem is to deploy interventions quickly and target them at the users most likely to be receptive to these falsehoods.
“Because these extreme users also tend to encounter misinformation very early, current interventions on social platforms often struggle to limit its impact: they are generally too slow to prevent exposure of the most receptive people,” adds Zeve Sanderson, executive director of CSMaP.
Existing methods for assessing exposure to and the impact of online misinformation rely on measuring views or shares. However, these measures fail to capture misinformation's true impact, which depends not only on how widely it spreads but also on whether users actually believe it.
To address this gap, Tokita, Sanderson and their colleagues developed a new approach using data from Twitter (now “X”) to estimate not only how many users were exposed to a specific news story, but also how many were likely to believe it.
“What is particularly innovative about our approach in this research is that the method combines social media data that tracks the spread of real news and misinformation on Twitter with surveys that assess whether Americans believed the content of these articles,” says Joshua A. Tucker, co-director of CSMaP, professor of politics at NYU, and one of the authors of the article. “This allows us to track both the susceptibility to believing false information and the spread of that information for the same articles in the same study.”
Methodology
The researchers collected 139 popular news articles published between November 2019 and February 2020 – 102 of which were rated true and 37 of which were rated false or misleading by professional fact-checkers – and tracked how these articles circulated on Twitter from the moment of their first publication.
This sample of popular articles was drawn from five types of news sources: mainstream left-leaning publications, mainstream right-leaning publications, low-quality left-leaning publications, low-quality right-leaning publications, and low-quality publications with no apparent ideological leaning. To establish the veracity of the articles, each article was sent to a team of professional fact-checkers within 48 hours of publication; the fact-checkers rated each article as “true” or “false/misleading.”
To estimate exposure to and belief in these articles, the researchers combined two types of data. First, they used Twitter data to identify which Twitter users were potentially exposed to each of the articles; they also estimated the ideological positioning of each potentially exposed user on a liberal-conservative scale using an established method that infers a user’s ideology from the news and political accounts they follow.
Second, to determine how likely these exposed users were to believe an article was true, they fielded real-time surveys as each article spread online. The surveys asked Americans who are regular internet users to rate the article as true or false and to provide demographic information, including their ideology.
Using the survey data, the authors calculated the proportion of individuals within each ideological category who believed each article to be true. Combining these estimates with the exposure data, they could calculate how many Twitter users were both exposed to an article and likely to believe it was true.
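The core calculation is straightforward to illustrate. Below is a minimal sketch, not the study's actual pipeline: the ideological bins, exposure counts, belief rates, and variable names are all hypothetical. For each ideological group, the estimated number of potentially exposed users is multiplied by that group's survey-based belief rate, and the products are summed to estimate how many exposed users were likely to believe the article.

```python
# Minimal sketch (hypothetical numbers, not the study's data): combine per-group
# exposure counts with survey-based belief rates to estimate how many exposed
# users were likely to believe a given article.

# Estimated number of potentially exposed Twitter users per ideological bin
exposed_by_ideology = {
    "far_left": 12_000,
    "center_left": 45_000,
    "center": 30_000,
    "center_right": 40_000,
    "far_right": 15_000,
}

# Share of survey respondents in each bin who rated the article "true"
belief_rate_by_ideology = {
    "far_left": 0.35,
    "center_left": 0.20,
    "center": 0.15,
    "center_right": 0.25,
    "far_right": 0.40,
}

# "Exposed and receptive" users: exposure count times belief rate, per bin
receptive_by_ideology = {
    group: exposed_by_ideology[group] * belief_rate_by_ideology[group]
    for group in exposed_by_ideology
}

total_receptive = sum(receptive_by_ideology.values())
print(receptive_by_ideology)
print(f"Estimated exposed-and-receptive users: {total_receptive:,.0f}")
```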
Findings and conclusions
Overall, the results showed that while fake news reached users across the political spectrum, those with more extreme ideologies (both conservative and liberal) were much more likely to see and believe it. Crucially, these users who are most receptive to misinformation tend to encounter it early in its spread on Twitter.
The research design also allowed the study's authors to simulate the impact of different types of interventions designed to stop the spread of misinformation. One takeaway from these simulations is that the earlier interventions were applied, the more effective they were likely to be. Another was that “visibility” interventions – in which a platform makes posts flagged as misinformation less likely to appear in users' feeds – appeared more effective at reducing misinformation's reach among receptive users than interventions aimed at making users less likely to share misinformation, as the toy simulation below illustrates.
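The following is an illustrative toy simulation, not the authors' model: the cascade dynamics, rates, and parameters are all hypothetical. It shows why timing matters when receptive users are concentrated early in a cascade, and how a “visibility” intervention (downranking a flagged post) cuts exposure from the moment it is applied.

```python
# Illustrative simulation (hypothetical parameters, not the study's model):
# compare how the timing of a "visibility" intervention changes the cumulative
# exposure of receptive users to a spreading article.
import random

def simulate_spread(intervention_step=None, visibility_factor=0.2,
                    steps=20, seed=42):
    """Toy cascade: each step, current sharers expose new users; an intervention
    at `intervention_step` downranks the post, scaling new exposures by
    `visibility_factor`. Returns the cumulative count of receptive users exposed."""
    rng = random.Random(seed)
    sharers = 10                 # users currently spreading the article
    receptive_exposed = 0        # running count of exposed, receptive users
    for step in range(steps):
        reach = sharers * rng.randint(5, 15)           # users exposed this step
        if intervention_step is not None and step >= intervention_step:
            reach = int(reach * visibility_factor)     # post downranked in feeds
        # Assume receptive (ideologically extreme) users are overrepresented
        # early in the cascade, as the study reports.
        receptive_share = max(0.05, 0.4 - 0.02 * step)
        receptive_exposed += int(reach * receptive_share)
        sharers = max(1, int(reach * 0.1))             # a fraction reshare
    return receptive_exposed

for label, when in [("no intervention", None), ("late (step 10)", 10), ("early (step 2)", 2)]:
    print(f"{label:>18}: {simulate_spread(intervention_step=when):,} receptive users exposed")
```

Under these assumptions, the earlier the downranking begins, the smaller the cumulative exposure of receptive users, because their share of new exposures is highest in the first steps of the cascade.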
“Our research indicates that understanding who is likely to be receptive to misinformation, not just who is exposed to it, is essential for developing better strategies to combat misinformation online,” says Tokita, now a data scientist in the technology industry.
More information:
Christopher K. Tokita et al., Measuring receptivity to misinformation at scale on a social media platform, PNAS Nexus (2024). DOI: 10.1093/pnasnexus/pgae396
Provided by New York University
Citation: Online misinformation most likely to be believed by ideological extremists, study finds (September 30, 2024), retrieved October 1, 2024 from
This document is subject to copyright. Except for fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for informational purposes only.