The attack on two mosques in Christchurch, New Zealand, last week saw the perpetrator publish footage of his violent attack online, as it was happening. That footage was then republished on various platforms, as well as by some news media outlets.

It is very likely the perpetrator could not have released this footage, live and to a global audience, without the resources provided by online platforms.

Forcing platforms to act – legal duty of care

Quite rightly, questions are being raised about platform responsibility, both in the aftermath of the Christchurch attack and before it, given the dominance of the internet in information flows and the dominance of platforms on the internet. Law reform is already being pursued, including by imposing legal duties on platforms: a duty of care imposed directly upon platforms has been suggested by parliamentary committees and commentators. Such a duty could oblige platforms to block or remove certain content.

There are non-legal ways to force platforms to act: consumers and companies could boycott platforms, withholding personal use and advertising spend, both to channel their outrage at the footage of the Christchurch attack not having been blocked and to hit these platforms where it hurts most: their profits.

Nevertheless, given the degree of outrage at, and the impact of, the publication of the Christchurch attack footage, there is a place for discussion about law reform, alongside market-driven pressures, to force platforms to put their advanced capacities (whether algorithmic or human) to use in blocking and removing footage of violent attacks.

Which content does the duty cover?

With any discussion about law reform, especially reform imposing stronger liability, care must be taken in justifying and delineating new, stricter laws. Any new or adjusted law must be normatively coherent and not overbroad. Graham Smith has recently made this point. A major question arising in the context of a new duty of care on platforms is: exactly what is the content that platforms will be duty-bound to block or remove?

It is no good imposing duties upon platforms without first specifying what content is unlawful and covered by such duties. The duty must be capable of being discharged in a practical sense – platforms must be able to know which content to target.

This is also a matter of principle, and fundamental rights. The right to freedom of expression is central to this type of law reform. If the law is to prohibit certain content, there must be a justification for that prohibition that is consistent with the normative underpinnings of the right to freedom of expression, or provides a strong enough reason for setting aside those normative protections. The importance of freedom of expression to individual liberty and representative democracy is well-known, and it is one of the most fundamental constitutional protections against authoritarianism. That is why whenever new expression-limiting laws are being developed, the right to freedom of expression should be at the centre of the justification for and framing of such laws.

A new law prohibiting publication of footage of a violent attack might extend beyond the intended target: it might catch depictions of violence in films and other art media, or in video games. Deciding whether such an extension is desirable must take into account freedom of expression. How that new law is framed to include publications based upon extremist ideological motivations but exclude blockbuster films must also take into account freedom of expression. Freedom of expression is not a trump card that prevents development of new laws prohibiting harmful expression. But its normative importance must be accounted for meaningfully whenever such new laws are developed, including for a platform duty of care.

Existing laws prohibiting publication of certain content could apply to footage of violent attacks, and these laws could ground the new duty of care imposed directly upon platforms. That does not mean existing laws all necessarily sit comfortably with the right to freedom of expression, but we ought at least to explore existing categories of unlawful expression before creating a new one.

One area of law that could be applied to footage of violent attacks is obscenity or objectionable publications: e.g., the Obscene Publications Act 1959 (UK) and the Films, Videos, and Publications Classification Act 1993 (NZ). UK law prohibits “obscene” publications, defined as those tending to deprave and corrupt persons likely to see them. NZ law prohibits “objectionable” publications, defined as those depicting crime, cruelty or violence in such a manner that the availability of the publication is likely to be injurious to the public good. The Christchurch attack footage very likely falls into these categories. However, in terms of law reform and a generally applicable duty of care, the difficult legal question of whether content is obscene or objectionable (tending to deprave, or likely to be injurious) might not rest so easily in the hands of platforms or their algorithms. Platforms would be duty-bound to act quickly, and with as little doubt as possible as to which content, as a matter of law, qualifies as objectionable. There is a risk, therefore, that they will err on the side of caution, which could chill online expression.

Another area of law potentially applicable to footage of violent attacks is incitement to religious or ethnic hatred: the Public Order Act 1986 (UK) and the Human Rights Act 1993 (NZ). As long as the footage is threatening and intended to stir up religious hatred (UK law), or is threatening, abusive or insulting and likely to excite hostility on grounds of ethnic origins (NZ law), as the Christchurch attack footage was, it should be caught by this existing category of unlawful publication. But, as with objectionable publications, this category involves making complex, nuanced judgments about whether something is intended to or will likely incite hatred. And allowance must be made for satire or political speech (Wall v Fairfax [2018] NZHC 104). It is doubtful whether platform algorithms could ever be programmed to mimic such value-judgments when they are being used to decide which material to suppress and which to allow.

Could privacy help?

Privacy or data protection law could be a more user-friendly category of unlawful publication, as the basis of a new duty of care. Any depictions of real (as opposed to simulated or performed) death or grievous bodily injury could be categorised as an invasion of privacy tout court: a misuse of private information, or, more broadly, an intrusion upon private or family life. Causes of action for invasion of privacy exist in English and NZ law: Campbell v MGN ([2004] 2 AC 457) and Hosking v Runting ([2005] 1 NZLR 1). Such footage could also be “personal data” under the GDPR and Data Protection Act 2018 (UK), or “personal information” under the Privacy Act 1993 (NZ) as long as the individuals are “identifiable” (which is not a high threshold). Such a clear category of unlawful publication, not based upon nuanced value-judgments but upon whether there is a depiction of real death or grievous bodily injury to an individual, could be easier for platforms to incorporate into their algorithmic or human operational mechanisms for blocking or removing content.

This type of content (depictions of harm to body or life) has been treated as private information: Peck v UK (2003) 36 EHRR 41, concerning the broadcast of CCTV footage of an attempted suicide, and Andrews v TVNZ ([2009] 1 NZLR 220), concerning the broadcast of at-the-scene footage of the victims of a road accident (though note that this claim failed for not meeting the “highly offensive” threshold in the NZ privacy tort).

The individuals whose privacy is invaded by publication of such footage would be the attack victims and their family members. This is not without complications. First, do deceased individuals still have a right to privacy, or is a claim in privacy covering an individual’s dying moments valid after their death? Secondly, can a depiction of a family member’s death be covered by privacy (at least by the ECHR article 8 right to a private and family life)? Neither of those questions should be answered with an automatic ‘no’; both should be considered carefully in developing a platform duty of care covering footage of violent attacks.

The countervailing ‘public interest’ or ECHR article 10 claim (in England), or ‘legitimate public concern’ defence (in NZ), permitting invasions of privacy, must also be considered. It is difficult to see how the privacy-invasive publication of a violent attack against an individual could be in the ‘public interest’ or of ‘legitimate public concern’, at least where it is not part of news reportage. Even in news reportage, media do not have absolute freedom over what or how they publish, even if the news story itself is in the ‘public interest’: Richard v BBC ([2018] EWHC 1837 (Ch)) and A v Fairfax ([2011] NZHC 71).

A complication with using privacy in a platform duty of care to block or remove footage of violent attacks is the different thresholds of liability for privacy invasion across different jurisdictions: platforms operate across borders, and the harmful effects of online publications are closely linked to their international dissemination. A germane example in the immediate context is the requirement in NZ tort law that the privacy intrusion be “highly offensive”; that requirement is absent in English law. Data protection laws also differ across borders: the GDPR and the UK’s Data Protection Act are more prescriptive than NZ’s Privacy Act. Harmonisation in a platform duty of care context could involve adjusting some countries’ privacy laws, or creating a model cross-border platform duty of care through an international convention.

Principled and measured law reform

In developing a platform duty of care, lawmakers must first explore existing grounds of unlawfulness of publication that could cover footage of violent attacks. If new categories of unlawful publication are created instead, lawmakers must comprehensively and meaningfully assess how those categories affect the right to freedom of expression. That right must not be overlooked in this process, and existing privacy law might be helpful in this regard.

Jelena Gligorijević, PhD candidate in Law; Trinity College, Cambridge.