While artificial intelligence (AI) bots can serve a legitimate purpose on social media, such as marketing or customer service, some are designed to manipulate public discourse, incite hate speech, spread disinformation, or commit fraud and scams. To combat potentially harmful bot activity, some platforms have published policies on the use of bots and created technical mechanisms to enforce those policies.
But are these policies and mechanisms enough to keep social media users safe?
Research from the University of Notre Dame analyzed the AI bot policies and enforcement mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly Twitter), and Meta's Facebook, Instagram and Threads. The researchers then tried to launch bots on each platform to test how its bot policies are enforced. Their findings have been published on the arXiv preprint server.
The researchers successfully published a benign ‘test post’ from a bot on every platform.
“As computer scientists, we know how these bots are created, how they are connected and how malicious they can be, but we were hoping that the social media platforms would block or shut down the bots and it wouldn’t be much of a problem,” said Paul Brenner, a faculty member and director of the Center for Research Computing at Notre Dame and senior author of the study.
“So we looked at what the platforms, often vaguely, claim to do and then tested to see if they actually enforce their policies.”
The researchers found that Meta’s platforms were the most difficult on which to launch bots, requiring multiple attempts to bypass their policy enforcement mechanisms. Although the researchers received three suspensions in the process, they managed to launch a bot and publish a ‘test post’ on their fourth attempt.
The only other platform that posed a modest challenge was TikTok, due to the platform’s heavy use of CAPTCHAs. But three platforms posed no challenge at all.
“Reddit, Mastodon and X were trivial,” Brenner said. “Despite what their policies say or the technical bot mechanisms they have, it was very easy to get a bot up and running and working on X. They don’t enforce their policies effectively.”
As of the study’s publication date, all of the test bot accounts and posts were still active. Brenner said interns with only a high school education and minimal training were able to launch the test bots using technology readily available to the public, highlighting how easy it is to deploy bots online.
Overall, the researchers concluded that none of the eight social media platforms tested provides sufficient protection and monitoring to keep users safe from malicious bot activity. Brenner argued that laws, economic incentive structures, user education and technological advancements are all needed to protect the public from malicious bots.
“There should be US legislation requiring platforms to identify human and bot accounts, because we know that people cannot distinguish the two on their own,” Brenner said. “The economic situation is currently at odds with this, as the number of accounts on each platform is a basis for marketing revenue. This needs to be brought to the attention of policymakers.”
To create their bots, the researchers used Selenium, a suite of tools for automating web browsers, along with OpenAI’s GPT-4o and DALL-E 3.
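To make that toolchain concrete, the sketch below shows one plausible way such a bot could be assembled: GPT-4o drafts a benign post, DALL-E 3 optionally generates an accompanying image, and Selenium drives a real browser to submit the text. This is an illustrative assumption, not the researchers’ actual code; the platform URL and form selectors are hypothetical placeholders.

```python
# Illustrative sketch only: generate a benign post with GPT-4o and submit it
# through a Selenium-driven browser. The compose URL and element selectors
# below are hypothetical placeholders, not taken from the study.
from openai import OpenAI
from selenium import webdriver
from selenium.webdriver.common.by import By

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Generate harmless post text with GPT-4o.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Write a short, friendly post about research computing.",
    }],
)
post_text = completion.choices[0].message.content

# 2. Optionally generate an accompanying image with DALL-E 3.
image = client.images.generate(
    model="dall-e-3",
    prompt="A simple illustration of a university computer lab",
    n=1,
    size="1024x1024",
)
image_url = image.data[0].url  # could be downloaded and attached to the post

# 3. Drive a browser to fill in and submit the post form.
driver = webdriver.Chrome()
driver.get("https://example-social-platform.test/compose")      # placeholder URL
driver.find_element(By.NAME, "post_body").send_keys(post_text)  # placeholder selector
driver.find_element(By.ID, "submit").click()                    # placeholder selector
driver.quit()
```

Because Selenium controls an ordinary browser session rather than calling a platform API, this kind of automation can look like normal user activity unless a platform actively challenges it, for example with CAPTCHAs of the sort TikTok relies on.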
The research was led by Kristina Radivojevic, a PhD student at Notre Dame.
More information:
Kristina Radivojevic et al., Social Media Bot Policies: Evaluating Passive and Active Enforcement, arXiv (2024). DOI: 10.48550/arXiv.2409.18931
Provided by the University of Notre Dame