The fundamental issues with the government’s White Paper proposals have been exhaustively discussed on previous occasions. Reminiscent of a sheriff in the Wild West, to which the internet is so often likened, Ofcom would enlist deputies – social media platforms and other intermediaries acting under a legal duty of care – to police the unruly online population. Unlike its Wild West equivalent, however, Ofcom would get to define its territory and write the rules, as well as enforce them.

The introduction of a general definition of harm would tie Ofcom’s hands to some degree in deciding what does and does not constitute harmful speech. Limiting the scope of ‘harm’ to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals goes some way towards aligning the proposed duty of care more closely with analogous offline duties of care, which are specifically safety-related.

Nevertheless, when applied in the context of speech there remain significant problems.

  1. What is an adverse psychological impact? Does it have to be a medically recognised condition? If not, how wide is it meant to be? Is distress sufficient? The broader the meaning, the closer we come to a limitation that could mean little or nothing more than being upset or unhappy. The less clear the meaning, the more discretion would be vested in Ofcom to decide what counts as harm, and the more likely that providers would err on the side of caution in determining what kinds of content or activity are in scope of their duty of care.
  2. The difficulty, not to say virtual impossibility, of the task faced by the regulator and providers should not be underestimated. Thus, for the lawful but harmful category, the government has said that it will include online abuse as a priority category in secondary legislation. However, on the basis of these proposals that must be limited to abuse that falls within the general definition of harm – i.e. abuse that presents a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals. The provider’s actions under the duty of care should relate only to such harmful abuse. Where, concretely, is the dividing line between abuse that does and does not carry a foreseeable risk of adverse psychological impact? What content falls on either side of the line?

The provider would also have to take into account the proposed obligation not to remove controversial viewpoints and the possibility of user redress for unduly restricting their freedom of expression. Coincidentally, the Divisional Court in Scottow v CPS has in the last few days issued a judgment in which it referred to “the well-established proposition that free speech encompasses the right to offend, and indeed to abuse another”.

These issues illustrate the care that has to be taken when using terms such as ‘online abuse’ to cover everything from strong language, through insults, to criminal threats of violence.

  3. What is the threshold to trigger the duty of care? Is it the risk that someone, somewhere, might read something and claim to suffer an adverse psychological impact as a result? Is it a risk gauged according to the notional attributes of a reasonably tolerant hypothetical user, or does the standard of the most easily upset apply? How likely does it have to be that someone might suffer an adverse psychological impact if they read it? Is a reasonably foreseeable, but low, possibility sufficient?

The Media Minister John Whittingdale, writing in the Daily Mail on the morning of the publication of the Final Response, said:

“This is not about an Orwellian state removal of content or building a ‘woke-net’ where causing offence leads to instant punishment. Free speech includes the right to offend, and adults will still be free to access content that others may disapprove of.”

If risk and harm thresholds are sufficiently low and subjective, that is what would result.

  4. Whatever the risk threshold might be, would it be set out in tightly drawn legislation or left to the discretion of Ofcom? It will not be forgotten that Ofcom, in a 2018 survey, suggested to respondents that ‘bad language’ is a harmful thing. A year later it described “offensive language” as a “potential harm”.
  5. Lastly, in the absence of deliberate intent an author owes no duty to avoid causing harm to a reader of their work, even though psychological injury may result from reading it. That was confirmed by the Supreme Court in Rhodes. The government’s proposals would therefore mean that an intermediary would have a duty to consider taking steps in relation to material for which the author themselves has no duty of care.

These are difficult issues that go to the heart of any proposal to impose a duty of care. They ought to have been the subject of debate over the last couple of years. Unfortunately they have been buried in the rush to include every conceivable kind of harm – however unsuited it might be to the legal instrument of a duty of care – and in discussions of ‘systemic’ duties of care abstracted from consideration of what should and should not amount to harm.

It should be no surprise that the government’s proposals became bogged down in a quagmire resulting from the attempt to institute a universal law of everything, amounting to little more than a vague precept not to behave badly online. The White Paper proposals were a castle built on quicksand, if not thin air.

The proposed general definition of harm, while not perfect, gives some shape to the edifice. It at least sets the stage for a proper debate on the limits of a duty of care, the legally protectable nature of personal safety online, and its relationship to freedom of speech – even if that should have taken place two years ago. Whether regulation by regulator is the appropriate way to supervise and police an appropriately drawn duty of care in relation to individual speech is another matter.

The post originally appeared on the Cyberleagle blog and is reproduced with permission and thanks.