About a quarter of Americans get their news from YouTube. With billions of users and hours upon hours of content, YouTube is one of the largest online media platforms in the world.
In recent years, there has been a popular narrative in the media that videos from highly partisan and conspiracy theory-based YouTube channels are radicalizing young Americans and that YouTube’s recommendation algorithm is leading users down the path of increasingly radical content.
However, a new study from the Computational Social Science Lab (CSSLab) at the University of Pennsylvania reveals that users’ own interests and political preferences play a major role in what they choose to watch. In fact, if the recommendation engine has any effect on users’ media diets, it is a moderating one.
“On average, relying exclusively on the recommender results in less partisan consumption,” explains lead author Homa Hosseinmardi, associate researcher at CSSLab.
YouTube bots
To determine the true effect of YouTube’s recommendation algorithm on what users watch, the researchers created bots that either followed its recommendations or ignored them altogether. These bots were trained on the YouTube watch histories of 87,988 real users, collected from October 2021 to December 2022.
Hosseinmardi and her co-authors Amir Ghasemian, Miguel Rivera-Lanas, Manoel Horta Ribeiro, Robert West, and Duncan J. Watts sought to unravel the complex relationship between user preferences and the recommendation algorithm, a relationship that evolves with each video watched.
These bots were assigned individualized YouTube accounts so that their viewing history could be tracked, and the partisanship of what they were watching was estimated using the metadata associated with each video.
In both experiments, the bots, each with its own YouTube account, first went through a “learning phase”: they watched an identical sequence of videos so that they all presented the same preferences to the YouTube algorithm.
Next, the bots were divided into groups. Some continued to follow the watch history of the real user they were trained on; others served as experimental “counterfactual bots,” following specific rules designed to separate user behavior from algorithmic influence.
In the first experiment, after the learning phase, control bots continued to watch videos from the user’s history, while counterfactual bots deviated from real user behavior and selected videos only from the list of recommended videos, without regard to user preferences.
Some counterfactual bots always selected the first (“up next”) video in the sidebar recommendations; others randomly selected one of the top 30 videos in the sidebar recommendations; and others randomly selected one of the top 15 videos in the homepage recommendations.
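These three selection rules can be illustrated with a minimal Python sketch. The function name pick_next_video, the rule labels, and the dummy video IDs are hypothetical stand-ins, not code from the study; the recommendation lists are assumed to be supplied by whatever harness drives the bot’s account.

```python
import random

def pick_next_video(rule, sidebar_recs, homepage_recs):
    """Choose the bot's next video according to its assigned rule.

    sidebar_recs and homepage_recs are lists of video IDs, assumed to be
    collected from the currently rendered YouTube page by the bot harness.
    """
    if rule == "up_next":
        # Always take the first ("up next") sidebar recommendation.
        return sidebar_recs[0]
    if rule == "sidebar_top30":
        # Pick uniformly at random among the top 30 sidebar recommendations.
        return random.choice(sidebar_recs[:30])
    if rule == "homepage_top15":
        # Pick uniformly at random among the top 15 homepage recommendations.
        return random.choice(homepage_recs[:15])
    raise ValueError(f"unknown rule: {rule}")

# Example with dummy video IDs:
sidebar = [f"sidebar_{i}" for i in range(40)]
homepage = [f"home_{i}" for i in range(20)]
print(pick_next_video("homepage_top15", sidebar, homepage))
```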
The researchers found that counterfactual bots consumed less partisan content on average than the corresponding real user, a result that was more pronounced for heavy consumers of partisan content.
“This gap corresponds to an intrinsic preference of users for such content compared to what the algorithm recommends,” explains Hosseinmardi. “The study shows similar moderating effects on bots consuming far-left content, or when bots subscribe to channels at the extreme end of the political partisan spectrum.”
“Forgetting time” of recommendation algorithms
In the second experiment, the researchers sought to estimate the “forgetting time” of the YouTube recommender.
“Recommendation algorithms have been criticized for continuing to recommend problematic content to previously interested users, long after they themselves had lost interest,” says Hosseinmardi.
In this experiment, the researchers calculated the recommender’s forgetting time for a user with a long history (120 videos) of consuming far-right videos who then switches to a moderate news diet for the next 60 videos.
While control bots continued to follow a far-right diet throughout the experiment, counterfactual bots simulated a user “switching” from one set of preferences (watching far-right videos) to another (watching moderate videos). As the counterfactual bots changed their media preferences, the researchers tracked the average partisanship of recommended videos in the sidebar and homepage.
“On average, recommended videos in the sidebar shifted to moderate content after about 30 videos,” says Hosseinmardi, “while homepage recommendations tended to adjust more slowly, showing that homepage recommendations are tuned more to a user’s longer-term preferences, whereas sidebar recommendations are tied more closely to the video currently being watched.”
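One way to make the notion concrete: the forgetting time can be read off such a tracked trace as the number of moderate videos watched before the average partisanship of the sidebar recommendations first reaches a neutral level. The sketch below uses made-up numbers and an assumed neutral threshold of 0.0; it illustrates the idea, not the study’s analysis code.

```python
def forgetting_time(avg_sidebar_partisanship, moderate_threshold=0.0):
    """Return the number of moderate videos watched before the average
    partisanship of the sidebar recommendations first drops to the
    moderate threshold, or None if it never does."""
    for n_watched, score in enumerate(avg_sidebar_partisanship, start=1):
        if score <= moderate_threshold:
            return n_watched
    return None

# Synthetic trace: sidebar recommendations drift from strongly partisan
# (+0.9) toward neutral as the bot watches moderate videos. These values
# are illustrative only.
trace = [0.9 - 0.03 * i for i in range(60)]
print(forgetting_time(trace))  # -> 31 with this synthetic trace
```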
“YouTube’s recommendation algorithm has been accused of leading its users toward conspiratorial beliefs. While these accusations have some merit, we should not overlook the fact that users exercise significant agency over what they watch and might have viewed the same content, or worse, even without any recommendations at all,” says Hosseinmardi.
In the future, the researchers hope that others will adopt their method to study AI-based platforms where user preferences and algorithms interact, to better understand the role that algorithmic recommendation engines play in our daily lives.
The results are published in the journal Proceedings of the National Academy of Sciences.
More information: Homa Hosseinmardi et al., Causally estimating the effect of YouTube’s recommendation system using counterfactual bots, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2313377121
Provided by the University of Pennsylvania