Published Apr 14 2021

Fact-checking the fact-checkers: Platforms are failing the misinformation test

Tuesday, 16 March 2021 was a dark day, yet oddly familiar. A gunman murdered eight people in metropolitan Atlanta in the US, six of whom were Asian women. That same day, 1,245 Americans died of COVID-19.

I’ve written about the need for special intervention to combat targeted social media misinformation stoking racial hatred in an era of random acts of terror.  

The backdrop of the global pandemic, and yet another terror-inducing mass murder, amplifies the need for aggressive fact-checking, content moderation, and a preparedness to fact-check the fact-checkers. COVID-19 misinformation on social media is exceptionally dangerous, destabilising forces are likely to exploit it, and the platforms' current safeguards against it are inadequate.

Without adequate and robust fact-checking, social media misinformation about COVID-19 can become rampant and stoke racial hatred – if it isn’t already. 

Of specific concern is misinformation about the origin of COVID-19, and posts denigrating preventative measures. Origin conspiracy theorists insist that COVID-19 isn't zoonotic. Instead, they argue that the virus was caused by 5G radio waves and/or engineered in a lab in Wuhan.

The latter fable was reinforced by Republican politicians in the US, including Senator Tom Cotton, former secretary of state Mike Pompeo, and former president Donald Trump. Instead of the stab-in-the-back myth (the Dolchstoßlegende blaming Germany’s WWI defeat on the Jews), it’s a sneeze-in-the-back myth, with similar xenophobic overtones and attendant threats to domestic minorities.  

Reports of violence against Asians and individuals of Asian descent, especially women, are unsurprisingly increasing worldwide, including in Australia.  

Undermining preventative measures

Misinformation may also undermine preventative measures and instead advocate ineffective or deadly substitutes. False posts about oxygenation rates, mask usage, and hydroxychloroquine constitute an “infodemic”. 

Anti-vaccine theories range from the more pedestrian concerns of permanent DNA reprogramming and the persistent autism libel, to the more overtly antisemitic argument that vaccines will allow Jewish businessmen to engage in global mind control. 

The antisemitic and anti-Chinese lab-origination slanders merged quite quickly, as indicated by a false claim on 15 March, 2020, that George Soros – a Hungarian-born American billionaire – owns the Wuhan lab where COVID-19 was supposedly developed. 

It’s obvious from these examples that COVID-19 misinformation is exceptionally dangerous, as it may encourage deadly behaviour, be it in the form of non-compliance or violence. 

It’s also obvious that destabilising actors, foreign and domestic, are weaponising these lies, contributing to a sort of biological warfare on the cheap. 

Reuters reported in March 2020 that Russia was deploying COVID-19 disinformation against the West. In May 2020, studies showed that roughly half of the Twitter accounts discussing COVID-19 were likely bots. Recently, the Wall Street Journal noted that websites linked to Russian intelligence services were falsifying information to denigrate vaccines’ safety and effectiveness. This follows similar documented patterns in Russian bot behaviour in the lead-up to the 2016 US presidential election. 

Too long to act on misinformation

Social media policies have taken far too long to respond to this climate of misinformation. 

After allowing anti-vaccine groups and content to proliferate on its platform, Facebook only recently announced a tougher stance on anti-vaccine misinformation. While Facebook’s fact-checking program came about in 2016, anti-vaccine ads weren’t targeted for reduction until 2019, and anti-vaccine groups weren’t singled out until February 2021 – a year into the current pandemic. 

Prior to late 2020, Twitter’s standing policies did reach some categories of COVID-19 misinformation. But these didn’t specifically target anti-vaccine misinformation until December 2020, with further expansion of policy in March 2021. 




My research has critiqued Facebook’s fact-checking, as it’s vulnerable to political interference, and may occasion clandestine deviation to avoid offence to powerful groups. 

It’s essential to fact-check the fact-checkers to ensure they’re not wantonly pulling down information or ignoring misinformation. Facebook does employ a sort of self-regulating “high court”, but it’s designed to address only the former, while ignoring the latter.

Facebook’s oversight board has been praised as a step towards fairer and more transparent content moderation. This praise seems to miss the mark, if one appreciates the paramount danger of misinformation during a time of plague and random acts of terrorism. 

The oversight board, to date, has no authority to order Facebook to remove content. Instead, it only considers content that Facebook has already removed – the oversight board can order the restoration of this controversial content, but cannot safeguard the public by removing deadly lies. 

The oversight board therefore functions to either endorse Facebook’s prior removals, or punish Facebook by asking that traffic-growing misinformation be restored. Do better, Facebook, or we’ll force you to make more money!

Of the board’s first five substantive decisions, four ordered the restoration of content. One of these resurrected posts was advocacy for the debunked (and potentially lethal) COVID-19 hydroxychloroquine “cure”, described as a “harmless drug”. 

Perhaps, in the fullness of time, the oversight board may grow into a more influential body, but we don’t have that luxury. 




Governments can accelerate the development of our informational immune system by publicly and repeatedly noting social media’s continuing failure to address this problem.

Politicians should continue to drag social media executives before committees, highlight the effects of toxic misinformation, and shame the inadequate policies offered up by billion-dollar entities. 

In the contest between COVID-19 variants and current vaccination efforts, and against the backdrop of racially motivated violence, we must not defer to deception.

About the Authors

  • Andrew Moshirnia

    Senior Lecturer, Monash Business School

    Andrew is a magna cum laude graduate of Harvard Law School, and gained his PhD in educational technology, with a concentration in statistics, from the University of Kansas. He's published several articles in law reviews and education technology journals, including a co-authored behavioural economics study of whether legal argument matters in decision-making. His current research focuses on the intersection of intellectual property rights and national security.
