The COVID-19 pandemic represents a challenge for researchers and policymakers not only in the fields of medicine, international relations and economics, but also in media and communications.

The ethics of tracking the spread of the virus through personal data, the dynamics of political communication and journalism conducted exclusively through digital interfaces such as Facebook Live, Zoom, or Google Hangouts, and cultural representations across the globe of both social distancing and the virus itself are but a few topics that will need research and investigation. For the last few weeks, I’ve been focusing on one in particular: online coronavirus misinformation, or as the World Health Organization has called it, the COVID-19 ‘infodemic’.

On 21 April 2020, Ofcom published the latest results from its new weekly survey into COVID-19 news consumption. With the caveat that it is, understandably in the current circumstances, an online survey, and therefore unable to be truly representative of the whole of the UK (see methodology here), 39% of over 2,000 adult respondents aged 16+ said they use social media as a news source, and 50% said they had seen false or misleading information about COVID-19 – a figure rising to 59% amongst those aged 16 to 24. That means that in all likelihood, many of those reading this blog will have seen it: forwarded WhatsApp messages, YouTube videos, or posts in public or private Facebook groups, that share claims purporting to be from NHS staff, or even close friends or family, whilst promoting fake causes and origins of the virus, or false reports about government policies and reactions to it.

This has real-life consequences. Fears that 5G is responsible for the virus and many other health risks, fuelled by renowned conspiracy theorist David Icke as well as by more mainstream media personalities, have led to assaults on telecommunications staff and arson attacks on masts in the UK. Hopes that hydroxychloroquine may cure it have led to the death of a man in the US who ingested chloroquine phosphate. We shouldn’t be surprised: as LSE’s Shakuntala Banaji and Ram Bhat highlighted while studying the links between misinformation circulating on WhatsApp and mob violence in India, if the sender is considered a trusted source, ‘then even the most implausible, or fake-looking messages, are accepted as accurate, and passed on’.

Platforms have, to a certain extent, pleasantly surprised online harms campaigners by stepping up in ways they refused to when it came to elections. In response to calls from Avaaz, Facebook has announced it will now promote WHO’s myth busters page to users who have ‘liked, reacted or commented on harmful misinformation about COVID-19 that we have since removed’. YouTube told BBC News that it’s stepping up its prohibited content policies, declaring that not only will content promoting medically unsubstantiated treatments instead of medical attention be removed, but also ‘any content that disputes the existence or transmission of Covid-19, as described by the WHO and local health authorities’. And if a message has already been forwarded more than five times, WhatsApp users can now only send it to one contact at a time.

Whether these measures will be effective remains to be seen. The absence of a statutory regulator able to investigate, challenge and guarantee implementation of these policies means that we may not find out for a while. In the meantime, many organisations are taking the view that we need to clearly rebut false and/or misleading content, not just slow down its propagation. Indeed, Oxford University’s Professor Philip Howard, Professor Rasmus Kleis Nielsen, Nic Newman and Dr J. Scott Brennen write that the Oxford Martin Programme on Misinformation, Science and Media has identified a rise of over 900% in English-language fact-checks between January and March 2020.

In this context, my boss Damian Collins – Member of Parliament for Folkestone and Hythe, and former Chair of the House of Commons Digital, Culture, Media and Sport Select Committee – has joined forces with the team at Iconic Labs (who made UNILAD a media giant) to create a new, free-to-use fact-checking website: Infotagion. Infotagion invites social media users to submit screenshots or links to suspicious content they’ve received or scrolled past; identifies the main claims and compares them to official advice from the NHS, WHO, UK and other governments; and publishes the results, labelling the content either true, unconfirmed, misleading or false.

Since launching on 30 March 2020, Infotagion has published 50 factchecks: on fake or unconfirmed treatments ranging from onion poultices to exposure to malaria; on fake public policies, such as helicopters spraying pesticides at night over the UK or ambulance services refusing to take 999 calls; and on fictitious origin stories, for example accusing the US military or Bill Gates of creating the virus. The initiative is supported by renowned campaigners and experts, including parliamentarians who sat on the International Grand Committee on Disinformation and Fake News, the Oxford Internet Institute’s Professor Philip Howard, Imran Ahmed of the Center for Countering Digital Hate, and ‘The Age of Surveillance Capitalism’ author Shoshana Zuboff; and accompanied by a podcast series discussing the wider issues at play in the infodemic, available on Apple, Spotify and Libsyn.

Whilst Infotagion’s main aim is to counter in real time potentially harmful stories, addressing the media consumption challenges described above, it’s also a public record of COVID-19 related content being viewed and shared – potentially helping to inform both future research and public policy. If you’re interested in knowing more, or would like to help out, please get in touch.

This post originally appeared on the LSE Media Policy Project blog and is reproduced with permission and thanks.