The American group Meta on Thursday denounced an increase in misleading online propaganda campaigns originating in China, linked to elections due in 2024 in the United States and elsewhere in the world.
The multinational, whose portfolio includes Facebook and Instagram, said it had dismantled five coordinated influence networks originating in China and aimed at audiences outside the country this year.
“Foreign threat actors are trying to influence voters on the internet ahead of next year’s elections, and we must remain on guard,” said Ben Nimmo, head of the American giant’s global threat intelligence division, as he presented the company’s latest security report.
Meta says it removed 4,789 fake Facebook accounts that were part of a campaign about US domestic politics and relations with China.
These accounts praised China, attacked critical voices and copied and pasted genuine posts by US politicians that could fuel partisan divisions, the report said.
“As the election campaign picks up pace, we should expect to see foreign influence operations attempting to take advantage of real political groups and debate rather than creating original content,” Nimmo said.
“If relations with China become an election issue, we expect China-based influence operations to begin targeting those debates,” he added.
Meta places the source of these networks in China but does not attribute them to the government itself.
Russia remains the most prolific source of such networks, which focus mainly on the war in Ukraine, according to Meta.
Websites linked to Russia-based campaigns have recently begun using the war between Hamas and Israel to tarnish the image of the United States, according to the report.
Meta’s team of security experts expects to see efforts to influence upcoming elections using fake “leaks” of allegedly hacked material.
“We hope people will try to think carefully before sharing political content on the internet,” Nimmo said. “For political groups, it is important to be aware that heightened partisan tensions can be exploited by foreign threat actors.”
The false propaganda campaigns extend beyond Meta’s platforms to other social networks, blogs, forums and websites, according to the report.
Artificial intelligence (AI) tools such as ChatGPT are being used to produce convincing fake content for propaganda campaigns, Meta’s head of security policy, Nathaniel Gleicher, explained during the presentation.
“Threat actors can use AI to create higher volumes of persuasive content, even if they don’t have the cultural or linguistic knowledge to speak to their audiences,” according to Gleicher.
“With the number of elections expected around the world in 2024, this means we all need to prepare for a greater volume of virtual content and our defenses must continue to evolve to meet this challenge.”