There are currently very few laws around fake news, but lots of debate about their introduction. To try to stave off legislation at an EU level, several of the main platforms created and signed up to a voluntary Code of Practice on Disinformation in 2018. This contains a range of measures, focussed around five overall pillars or goals.

A lot of good work has been done by the platforms to pursue these goals but, in its recent assessment of the Code, the European Commission pointed to varying standards of implementation across a) the different platforms and b) the five pillars. It identified the platforms’ cooperation with researchers and the empowerment of users to recognise fake news as particularly sporadic, and expressed concern that the metrics used to assess platform performance focussed on output rather than impact.

A further report, focussing specifically on actions taken in response to the Covid-19 “infodemic”, recognised some significant steps taken by the Code signatories to combat this particular problem, but again highlighted the need for more harmonised implementation, and for more signatories to the Code.

It appears then that self-regulation is likely to be replaced by legislation of some form. In the UK, disinformation has been wrapped up with a wide range of other issues in the Online Harms proposals making their way, rather slowly, through legislative channels.  There are, however, a number of thorny issues for law-makers to wrestle with in this area. A brief summary of two particularly tricky points is set out below:

  • When talking about notice and takedown obligations relating to disinformation, you quickly run into the age-old clash with freedom of speech. Some content will obviously be unlawful. A large amount of fake news, however, will not be, but may still be causing significant harm to individuals or groups of people – and you are then in a complex grey area where difficult judgments have to be made. Someone has to decide whether the right to freedom of expression and information, a critical right in any democracy, overrides the rights of the individual in the particular situation. There is a real danger, if you get this balancing exercise wrong, of a chilling effect on internet media. Countries that have already dabbled in fake news legislation have first-hand experience of this problem: the German government has recently proposed amendments to its NetzDG law, which commentators suggest have been driven in part by the over-removal of content thus far; and the proposed French law on fake news was in large part struck down as unconstitutional by France’s courts this summer. Any law, then, which passes the burden of making these difficult calls down to the platforms is likely to run into challenges.
  • The second problem is how any proposed legislation would interact with the protections given to platforms under the E-Commerce Directive.

Whilst the Online Harms White Paper appears to propose a regulatory system that focusses on platforms’ systems and processes, rather than individual content removal, buried within the proposals is an unspoken expectation that platforms will find ways to remove more content than they currently do. Driving this expectation, in part, is a misconception that tech companies have a magic wand: that they already have the perfect technology to solve this problem and are simply reluctant to use it. In my experience, this isn’t the case; the platforms really do not want this content on their sites – it hurts them both commercially and reputationally. The reality is that no technology currently exists which can identify and remove all disinformation on a global scale.

What does make platforms hesitate before deploying what technology they do have to monitor and assess content on a wholesale basis is concern about how such action may affect the protections from liability they are entitled to under Articles 12 to 14 of the E-Commerce Directive. These protections are premised on platforms playing no active role in the publication or distribution of the content they host and, indeed, Article 15 expressly rejects any obligation on companies to undertake general monitoring of the content they host. If, however, platforms are now being encouraged to take more active steps to organise and filter content, law-makers need to create an environment in which platforms that do more get more protection, not less.

We’ve seen this sort of “good Samaritan” approach work in the context of s.230 of the US CDA. Without something similar in Europe, platforms are going to be left in the unenviable position of choosing either a) to disobey regulatory requirements and face sanctions (but preserve their protection from damages claims); or b) to take more active steps but expose themselves to content litigation from individuals as a result. And in the context of the UK’s Online Harms proposals, there’s a real likelihood that, akin to what we’ve witnessed since the introduction of the GDPR, claimant law firms could swarm around a regulator’s findings of failings and use those findings to bring mass negligence actions against platforms.

Whilst both the German and French laws mentioned above were, in the main, concerned with notice and takedown obligations on platforms, carefully crafted so as to align with the current intermediary liability regime, the UK appears to be proposing a different approach which could encroach significantly on platforms’ traditional legal protections. As things stand, then, platforms look unable to win if legislation in this area proceeds as proposed.

I noted above that technology in this area is not yet perfect. To take a simple example: you may have heard about the doctored video circulated in the US in October which appeared to show Joe Biden addressing a campaign rally in Florida by saying “Hello Minnesota”. Some sterling fact-checking subsequently revealed that the video had in fact been filmed at a rally in Minnesota, with the stage then edited in the video to show Florida signage. Two points demonstrate well the limitations of the technology:

  • It was not a machine which identified this video as fake news. It was a human being at The Associated Press – and one of the giveaways that enabled them to identify the video as fake was that Mr Biden was wearing a very thick coat, necessary for an outdoor rally in chilly Minnesota in October, but not so for sunny Florida.  This sort of observation is strikingly “human”.
  • The AP published a fact-checking article on its site, which included photo stills of the doctored video. I’m not an expert in AI, but I would assume a machine would struggle to differentiate between content hosting the original video and this article, so that, if tasked with removing “fake” news, it would risk removing both (a point illustrated by the sketch after this list).
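To illustrate that second point, here is a minimal, purely hypothetical sketch, not anything the platforms are known to deploy, of a naive duplicate detector based on a simple perceptual “average hash”. The file names and matching threshold below are assumptions. The point is that near-identical pixels produce near-identical hashes, so stills embedded in a debunking article match a known fake frame just as readily as a malicious repost does, because the matcher has no notion of the surrounding context.

```python
# Hypothetical sketch: a naive perceptual-hash matcher cannot tell a repost of
# a doctored video frame from a still embedded in a fact-checking article.
# Assumes Pillow is installed; the image file names are placeholders.

from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Compute a simple perceptual (average) hash of an image."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        # Each pixel contributes one bit: brighter than average or not.
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def looks_like_known_fake(path: str, known_fake_hashes: list[int],
                          threshold: int = 10) -> bool:
    """Flag an image whose hash is close to any known fake frame."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold
               for known in known_fake_hashes)


if __name__ == "__main__":
    # Hypothetical inputs: a frame from the doctored video, a repost of it,
    # and a still embedded in the debunking article.
    known_fakes = [average_hash("doctored_video_frame.png")]
    for candidate in ["reposted_fake_frame.png", "fact_check_article_still.png"]:
        print(candidate, looks_like_known_fake(candidate, known_fakes))
    # Both are likely to be flagged: the matcher sees near-identical pixels
    # and cannot distinguish "spreading" the fake from "debunking" it.
```

Real moderation systems are of course far more sophisticated than this, but the underlying difficulty of distinguishing spreading from debunking by looking at the media alone is the same one the AP example exposes.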

Away from legislative counter-measures, however, it strikes me that there are other areas where a lot more could be done to tackle disinformation, with less friction. One such area is reducing the effects of fake news – or, as one respondent to the Online Harms White Paper put it, “increasing our societal resilience” to it. Education and awareness are obviously key to this, and there is still a big role for platforms to play within that, but one which they can undertake in a manner which allows them to harmonise their efforts across borders and which doesn’t expose them to a flood of potential claims if they fail.

The recent Covid infodemic has produced some good examples of how this can be done, such as the UK government’s Don’t Feed the Beast campaign and its public health campaigns, and the determined efforts of Facebook and other platforms to redirect users to reliable sources of health information. Indeed, there is a range of steps that can help to recondition public awareness, from ambitious media literacy programmes through to seemingly simple tweaks to the tools offered by platforms – Twitter, for example, is currently trialling a prompt which pops up when users try to share a link they haven’t opened, encouraging them to read the content before passing it on.

Another example, which made me smile and which I’ll leave with you as a parting thought, is the Democratic candidate in the recent US election who openly released a campaign video containing a deepfake version of his opponent in order to warn voters, stating: “If our campaign can make a video like this, imagine what Putin is doing right now.” (In fact, if he had wanted to point the finger at fake news sources both before and during the election, he could have looked much closer to home….)

Studies carried out since the election, analysing the impact of disinformation on voter behaviour, appear thus far to share the general consensus that it had less impact on this election than was the case four years ago, despite the public being exposed to more sophisticated fake stories, and to a greater volume of them. Many attribute this to the US electorate having, to a greater extent, woken up to the prevalence and nature of this media phenomenon. Public education, then, appears to be a key countermeasure in this ongoing fight, and one that all stakeholders can and should invest time and resources in.

This is an edited version of a talk given to Westminster Media Forum policy conference in November 2020

Bryony Hurst is a partner in the Dispute Resolution Group at Bird & Bird