“The coronavirus pandemic has sparked what the World Health Organization has called an ‘infodemic’ of misinformation,” said Dr. John W. Ayers, a scientist who specializes in public health surveillance. “But bots, like those used by Russian agents during the 2016 American presidential election, have been overlooked as a source of COVID-19 misinformation.”

A new study published in JAMA Internal Medicine, led by Dr. Ayers, Co-Founder of the Center for Data Driven Health and Vice Chief of Innovation within the Division of Infectious Diseases at the University of California San Diego, in collaboration with the George Washington University and Johns Hopkins University, suggests bots are the primary pathogen of COVID-19 misinformation on social media.

Identifying Bot Influence on Facebook Groups: A Case Study of Masks and COVID-19

The team identified public Facebook groups that were heavily influenced by bots. To do so, they measured how quickly the same URLs (or links) were re-shared across a sample of about 300,000 posts made to Facebook groups, which together shared 251,655 links.

When URLs are repeatedly shared by multiple accounts within seconds of one another, it indicates these are bot accounts controlled by automated software that coordinates their actions. The team found that the Facebook groups most influenced by bots averaged 4.28 seconds between shares of identical links, compared to 4.35 hours for the Facebook groups least influenced by bots.
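The study itself does not publish code, but the timing heuristic described above can be sketched in a few lines: group a Facebook group's posts by URL, sort each URL's share timestamps, and average the gaps between consecutive shares of the same link. The data below is hypothetical, chosen only to show how second-scale versus hour-scale gaps separate the two kinds of groups.

```python
from collections import defaultdict
from statistics import mean

def mean_intershare_seconds(posts):
    """Average gap in seconds between consecutive shares of the same URL.

    posts: list of (url, timestamp_in_seconds) tuples for one group.
    Returns None if no URL in the group was shared more than once.
    """
    times_by_url = defaultdict(list)
    for url, ts in posts:
        times_by_url[url].append(ts)

    gaps = []
    for times in times_by_url.values():
        times.sort()
        # Differences between each share and the next share of the same URL.
        gaps.extend(b - a for a, b in zip(times, times[1:]))
    return mean(gaps) if gaps else None

# Hypothetical groups: identical links recurring seconds apart look
# bot-coordinated; gaps of hours look like organic sharing.
bot_like = [("http://example.test/a", 0),
            ("http://example.test/a", 4),
            ("http://example.test/a", 9)]
organic = [("http://example.test/b", 0),
           ("http://example.test/b", 15_660)]  # ~4.35 hours later

print(mean_intershare_seconds(bot_like))  # seconds-scale average
print(mean_intershare_seconds(organic))   # hours-scale average
```

Ranking all monitored groups by this average and taking the extremes of the distribution would yield the "most influenced" and "least influenced" sets the study compares; the exact thresholds used by the authors are not specified here.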

Among Facebook groups least or most influenced by bots, the team monitored posts that shared a link to the Danish Study to Assess Face Masks for the Protection Against COVID-19 Infection (DANMASK-19) randomized clinical trial published in the Annals of Internal Medicine. “We selected DANMASK-19 for our study because masks are an important public health measure to potentially control the pandemic and are a source of popular debate,” said Dr. Davey Smith, study coauthor and Chief of Infectious Diseases at UC San Diego.

Thirty-nine percent of all posts sharing the DANMASK-19 trial were made to the Facebook groups most influenced by bots, compared with just 9 percent made to the groups least influenced by bots.

Twenty percent of posts sharing the DANMASK-19 trial made to the Facebook groups most influenced by bots claimed masks harmed the wearer, contrary to scientific evidence. For example, one post read “Danish study proves…the harmfulness of wearing a mask.” Fifty percent of posts promoted conspiracies, such as “corporate fact checkers are lying to you! All this to serve their Dystopian #Agenda2030 propaganda.”

Posts sharing the DANMASK-19 trial made to the Facebook groups most influenced by bots were 2.3 times more likely to claim masks harm the wearer, and 2.5 times more likely to make conspiratorial claims, than posts made to the Facebook groups least influenced by bots.

The Threat of Automated Misinformation

“COVID-19 misinformation propaganda appears to be spreading faster than the virus itself,” said Dr. Eric Leas, study coauthor and Assistant Professor at UC San Diego. “This is fueled by bots that can amplify misinformation at a rate far greater than ordinary users.”

“Bots also appear to be undermining critical public health institutions. In our case study, bots mischaracterized a prominent publication from a prestigious medical journal to spread misinformation,” said Brian Chu, study coauthor and medical student at the University of Pennsylvania. “This suggests that no content is safe from the dangers of weaponized misinformation.”

“The amount of misinformation from bots we found suggests that bots’ influence extends far beyond our case study,” added Dr. Smith. “Could bots be fostering vaccine hesitancy or amplifying Asian discrimination too?”

The team noted that the effect of automated misinformation is likely larger due to how it spills over into organic conversations on social media.

“Bots sharing misinformation could inspire ordinary people to propagate misinformed messages,” said Zechariah Zhu, study coauthor and research associate with the Center for Data Driven Health at UC San Diego. “For example, bots may make platforms’ algorithms think that automated content is more popular than it actually is, which can then lead to platforms actually prioritizing misinformation and disseminating it to an even larger audience,” added Dr. David A. Broniatowski, Associate Director of the GW Institute for Data, Democracy, and Politics, and study coauthor.

A Call to Action to Address Automated Misinformation

“We must remember that unknown entities are working to deceive the public and promote misinformation. Their dangerous actions directly affect the public’s health,” said Dr. Mark Dredze, the John C. Malone Associate Professor of Computer Science at Johns Hopkins University and study coauthor.

Yet solutions to eliminate bots and their misinformation campaigns are at hand, the team notes.

“Our work shows that social media platforms have the ability to detect, and therefore remove, these coordinated bot campaigns,” added Dr. Broniatowski. “Efforts to purge deceptive bots from social media platforms must become a priority among legislators, regulators, and social media companies who have instead been focused on targeting individual pieces of misinformation from ordinary users.”

“If we want to correct the ‘infodemic,’ eliminating bots on social media is the necessary first step,” concluded Dr. Ayers. “Unlike controversial strategies to censor actual people, silencing automated propaganda is something everyone can and should support.”