Although artificial intelligence (AI) bots can serve legitimate purposes on social media, such as marketing or customer service, some are designed to manipulate public debate, incite hate speech, spread misinformation, or commit fraud and scams. To combat potentially harmful bot activity, some platforms have published policies on the use of bots and created technical mechanisms to enforce those policies.
But are these policies and mechanisms sufficient to ensure the safety of social media users?
Research from the University of Notre Dame analyzed the AI bot policies and enforcement mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly known as Twitter) and the Meta platforms Facebook, Instagram and Threads. The researchers then attempted to launch bots to test each platform’s policy enforcement processes. Their research is published on the arXiv preprint server.
The researchers successfully published a harmless “test” post from a bot on each platform.
“As computer scientists, we know how these bots are created, how they get plugged in and how malicious they can be, but we hoped that the social media platforms would block or shut down the bots and that it wouldn’t really be an issue,” said Paul Brenner, a faculty member and director of Notre Dame’s Center for Research Computing and senior author of the study.
“So we looked at what the platforms, often vaguely, say they do and then tested to see if they actually enforce their policies.”
The researchers found that the Meta platforms were the most difficult to launch bots on: it took multiple attempts to bypass their policy enforcement mechanisms. Although the researchers racked up three suspensions in the process, they succeeded in launching a bot and posting a “test” message on their fourth attempt.
The only other platform that presented even a modest challenge was TikTok, due to its frequent use of CAPTCHAs. The remaining three platforms presented no challenge at all.
“Reddit, Mastodon and X were trivial,” Brenner said. “Despite what their policies say or the technical bot mechanisms they have, it was very easy to get a bot up and working on X. They are not effectively enforcing their policies.”
As of the study’s publication date, all of the test bot accounts and posts were still active. Brenner explained that interns with only a high school-level education and minimal training were able to launch the test bots using technology that is readily available to the public, underscoring how easy it is to get bots running online.
Overall, the researchers concluded that none of the eight social media platforms tested provide enough protection and oversight to keep users safe from malicious bot activity. Brenner argued that laws, economic incentive structures, user education and technological advances are all needed to protect the public from malicious bots.
“There needs to be U.S. legislation requiring platforms to identify human accounts versus bot accounts, because we know people can’t tell the two apart on their own,” Brenner said. “The economics right now work against this, as the number of accounts on each platform is a basis of marketing revenue. This needs to be put in front of policymakers.”
To create their bots, the researchers used Selenium, a suite of tools for automating web browsers, as well as GPT-4o and DALL-E 3 from OpenAI.
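The paper does not reproduce the researchers’ actual automation scripts, but the general recipe described above can be illustrated with a short sketch: use the OpenAI API to generate post text with GPT-4o (and, optionally, an image with DALL-E 3), then drive a real browser with Selenium to log in and submit the post. Everything platform-specific in the sketch below, including the URL, form field locators and account credentials, is a hypothetical placeholder rather than anything taken from the study.

```python
# Illustrative sketch only: generate post content with GPT-4o / DALL-E 3 via the
# OpenAI Python SDK, then use Selenium to drive a browser that submits the post.
# The platform URL, form field locators and credentials are placeholders.
from openai import OpenAI
from selenium import webdriver
from selenium.webdriver.common.by import By

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Generate a harmless "test" post with GPT-4o.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Write one short, innocuous social media post about the weather."}],
)
post_text = completion.choices[0].message.content

# Optionally generate an accompanying image with DALL-E 3.
image_url = client.images.generate(
    model="dall-e-3",
    prompt="A calm landscape photo suitable for a test post",
    size="1024x1024",
).data[0].url

# Drive a real browser session to log in and publish the post.
driver = webdriver.Chrome()
try:
    driver.get("https://social-platform.example/login")            # placeholder URL
    driver.find_element(By.NAME, "username").send_keys("test_bot")  # placeholder locators
    driver.find_element(By.NAME, "password").send_keys("********")
    driver.find_element(By.ID, "login-button").click()

    driver.get("https://social-platform.example/compose")
    driver.find_element(By.ID, "post-body").send_keys(post_text + "\n" + image_url)
    driver.find_element(By.ID, "submit-post").click()
finally:
    driver.quit()
```

Because Selenium operates an ordinary browser, a script like this interacts with a site much as a human user would, which helps explain why browser-level countermeasures such as TikTok’s frequent CAPTCHAs were the main source of friction the researchers reported.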
The research was led by Kristina Radivojevic, a doctoral student at Notre Dame.
More information:
Kristina Radivojevic et al, Social Media Bot Policies: Evaluating Passive and Active Enforcement, arXiv (2024). DOI: 10.48550/arxiv.2409.18931
Provided by the University of Notre Dame