The spread of false information online about the coronavirus, whether inadvertent (misinformation) or deliberate (disinformation), has been termed an ‘infodemic’. False information about the virus circulates on the Internet and reaches more people than information coming directly from public health authorities such as the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC).
As a consequence of this mass spread of misinformation about the virus, the social media and technology giants Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube issued a joint statement on the matter on March 17th, stating that they would “elevate authoritative content” on each of their respective platforms.
Facebook, Google, and YouTube gave free advertising to public health messages, while the International Fact-Checking Network and health authorities cooperated with Facebook to flag and remove false information about the virus. There were also reports of numerous ads that clearly exploited the coronavirus by touting cures, alongside commerce listings for masks, hand wash, and similar goods.
However, online platforms came to rely more heavily on automated content moderation to deal with the wave of misinformation. As a result, users and creators saw increased removals of content that did not necessarily violate any of the platforms’ policies; there were even reports of automated monitoring systems flagging Covid-19 content from reputable sources as spam.
This situation has led to calls for online platforms to be more transparent with the public about their fact-checking and content moderation processes. The European Commission even called on platforms to voluntarily publish monthly transparency reports.
- Social media platforms in the private sector need to gather evidence systematically and issue regular transparency updates about Covid-19 misinformation published on their services. They also need more effective due process in content moderation, one that neither exploits human fact-checkers nor over-relies on automated systems, and they should proactively ban ads that exploit the pandemic.
- CEOs of social media platforms need to present a united front in publishing their content moderation guidelines, so that users understand how their social media presence will be affected, and to work alongside public health officials to strengthen moderation, flag misleading content, and provide and promote reliable information to members of civil society.
- Policy makers should ensure that members of civil society are active agents in their social media engagement and that there is continuous multi-stakeholder political dialogue between platforms, users, legislators, and governments.
- Members of civil society can combat misinformation by learning to recognize the four ‘Ds’ of disinformation: Dismiss, Distort, Distract, Dismay. Learn more at: https://ai.umich.edu/blog/spotting-fake-news-ben-nimmo-disinformation-misinformation-fake-news-teach-out/
Relevant principles: respect and protection of human dignity, responsibility and accountability, privacy, transparency, awareness and literacy.
Learn more about this case:
- “Combatting Covid-19 disinformation on online platforms”, OECD, https://www.oecd.org/coronavirus/policy-responses/combatting-covid-19-disinformation-on-online-platforms-d854ec48/#section-d1e305
- “Online platforms’ responses to Covid-19 mis- and disinformation”, EU DisinfoLab, https://www.disinfo.eu/wp-content/uploads/2020/04/20200403_platforms-responses-covid-19.pdf
Additional resources:
- “How is the EU combatting disinformation?”, European Training Foundation, https://www.etf.europa.eu/en/news-and-events/news/how-eu-combatting-disinformation
- “Combatting disinformation with the Four Ds”, The University of Michigan, https://ai.umich.edu/blog/spotting-fake-news-ben-nimmo-disinformation-misinformation-fake-news-teach-out/