Researchers from the University of Sheffield have developed an AI-based algorithm that predicts which Twitter users will share disinformation before they do so.
The team, led by Yida Mu and Dr Nikos Aletras from the university’s Department of Computer Science, has created a new natural language processing method that can accurately predict whether a social media user is likely to share content from unreliable news sources.
In the research, published in the journal PeerJ, the team analysed over one million tweets from approximately 6,200 Twitter users. The study grouped users into two categories: those who had shared content from unreliable news sources and those who had only shared stories from reliable news outlets.
Data from the study was used to train a machine-learning algorithm that predicts whether a user will repost content from an unreliable source in the future. The algorithm was reportedly 79.7% accurate in its predictions.
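The article does not specify which classifier or features the researchers used, but the basic setup, labelling each user's tweets and fitting a binary text classifier, can be sketched with a simple Naive Bayes model built on word counts. Everything below (the toy tweets, the labels, the Naive Bayes choice) is an illustrative assumption, not the paper's actual method.

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a Naive Bayes classifier on (text, label) pairs.
    Label 1 = user who shares unreliable sources, 0 = reliable only.
    This is an illustrative stand-in for the study's unspecified model."""
    class_counts = Counter()
    word_counts = {0: Counter(), 1: Counter()}
    for text, label in docs:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = set(word_counts[0]) | set(word_counts[1])
    priors = {c: math.log(class_counts[c] / len(docs)) for c in (0, 1)}
    totals = {c: sum(word_counts[c].values()) for c in (0, 1)}
    # Add-one (Laplace) smoothing so unseen words don't zero out a class
    loglik = {
        c: {w: math.log((word_counts[c][w] + 1) / (totals[c] + len(vocab)))
            for w in vocab}
        for c in (0, 1)
    }
    return priors, loglik, vocab

def predict(model, text):
    """Return the most probable class for a new tweet."""
    priors, loglik, vocab = model
    scores = dict(priors)
    for w in text.lower().split():
        if w in vocab:
            for c in (0, 1):
                scores[c] += loglik[c][w]
    return max(scores, key=scores.get)

# Toy training data echoing the word patterns the study reports
docs = [
    ("the liberal media and the government", 1),
    ("government media coverage is biased", 1),
    ("so excited for my birthday", 0),
    ("in a good mood wanna see friends", 0),
]
model = train_nb(docs)
```

In practice a study at this scale would use richer features and a stronger model, but the pipeline shape, labelled user histories in, a probability of future unreliable sharing out, is the same.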
Nikos Aletras, lecturer in natural language processing at the University of Sheffield, said: “Social media has become one of the most popular ways that people access the news, with millions of users turning to platforms such as Twitter and Facebook every day to find out about key events that are happening both at home and around the world.
“However, social media has become the primary platform for spreading disinformation, which is having a huge impact on society and can influence people’s judgment of what is happening in the world around them.”
The study found that Twitter users who shared stories from unreliable sources were more likely to tweet about politics or religion and to use impolite language. Common words in such tweets were ‘liberal’, ‘government’ and ‘media’.
The study also found that users who share stories from reliable news sources often tweet about personal subjects including emotions and interactions with friends. This group used words such as ‘mood’, ‘wanna’, ‘gonna’, ‘excited’ and ‘birthday’.
The researchers said findings from the study could help social media companies develop ways to tackle the spread of disinformation online. The findings could also help social scientists and psychologists better understand such user behaviour at scale.