The International Forum for Responsible Media Blog

The role of social media platforms and users in tackling Covid misinformation – Mathilde Groppo

“Plandemic”. This was the name of a 26-minute video spreading misinformation about Covid-19, which went viral (pun intended) after it was posted in early May 2020 on Facebook, YouTube, Vimeo and a separate website set up to share it. The New York Times reports that it had been viewed 8 million times within a week of its release.

That is not the only example of Covid-19 fake news to have received a significant level of attention. Other examples include Donald Trump’s claim that children are practically immune to the virus (made in a post which was removed from his Facebook page) and countless reports linking Covid-19 with 5G technology in the absence of any scientific evidence.

Broadly speaking, there are three ways in which governments have tackled misinformation about Covid-19: by providing guidance to social media companies on taking down contentious pandemic content, by establishing special units to combat disinformation and by introducing new laws criminalising malicious coronavirus falsehoods, including in relation to public health measures. At the end of March 2020, the UK government adopted the second of these measures, creating a rapid response unit, whose aim was to work with social media firms to remove fake news and harmful content. This received a fair amount of media coverage at the time it was set up, but there is little information available online about what it has achieved since.

In the meantime, various online platforms have developed their own rules to tackle Covid misinformation. YouTube, for instance, has set up a COVID-19 Medical Misinformation Policy, which tackles content that contradicts World Health Organization (WHO) or local health authorities’ guidance on the treatment, prevention, diagnosis or transmission of Covid-19. In April, Facebook announced that it would directly warn users who had been exposed to false coronavirus content on its platform; earlier this month it reiterated its commitment to combating misinformation across its apps, including Instagram and WhatsApp.

Facebook is taking a slightly different approach from YouTube, in that it does not have a specific policy with regard to Covid-19 misinformation (although such content would fall within the scope of Facebook’s broader false news policy). Instead, its approach, as described by its vice president of integrity in a blog post published at the end of April, is to work with fact-checking organisations whose role is to identify fake news content on Facebook. Once this content has been identified, Facebook works to reduce its distribution and shows warning labels with more context. It also generates a message inviting the user to consult the WHO’s webpage debunking myths about Covid-19. A similar approach has been taken by Twitter, which introduced a label leading to a Twitter-curated webpage or an external trusted source containing additional information on the claims made within the Tweet. These platforms can also take down or hide relevant posts with a warning about “spreading misleading and potentially harmful information”, as they did recently in relation to Donald Trump’s claim that Covid-19 was “less lethal than flu”.

The platforms are also proving reactive and adaptable as the landscape around the virus evolves. As Europe braces itself for a second wave and governments reimpose restrictions on individual freedoms with a view to slowing the spread of the virus, researchers around the world are working to develop a vaccine against Covid-19. More than 170 candidate vaccines are now tracked by the WHO, 11 of them in Phase 3 trials, in which the vaccine is given to thousands of people to confirm its safety and effectiveness. On 14 October 2020, YouTube amended its COVID-19 Medical Misinformation Policy, which now provides that videos containing Covid-19 vaccine misinformation will be removed. It justified this move by reference to the fact that a Covid-19 vaccine may be imminent. Facebook has similarly announced that it will ban ads discouraging the use of vaccines.

These measures are not akin to takedown requests, because the content is not flagged by a platform user but is managed by the platform itself. A system of takedown requests may be the next step in regulating Covid misinformation: users on some forums have already asked whether this is possible, and in the meantime such content can be reported under more general existing policies, such as Facebook’s false news policy. Overall, in tackling the spread of fake news, social media platforms and their users all have a role to play.

Mathilde Groppo is an Associate at Carter-Ruck

2 Comments

  1. zrpradyer

    In an ever-changing world of immense diversity, how is it decided, and who decides, what benefits one person but may harm another?
    And who fact checks the fact checkers?
    For clues, I suggest following the power and the money.

  2. Ricardo

    Hello, thank you very much for sharing this valuable information. It definitely caught my attention, since at the moment many people believe everything that is shown on social media; they are surrounded by a lot of misinformation and are spreading it at the same time. I would like to know what measures to take when encountering fake news and how to prevent it from spreading.
