Strand 4 involves the creation of new and reformed criminal offences that would apply directly to users. In parallel with the government’s proposals for an online duty of care, the Law Commission has been conducting two projects looking at the criminal law as it affects online and other communications: Modernising Communications Offences (Law Com No 399, 21 July 2021) and Hate Crime Laws (Law Com No 402, 7 December 2021).

The communications offences report recommended:

  • A new harm-based communications offence to replace S.127(1) Communications Act 2003 and the Malicious Communications Act 1988
  • A new offence of encouraging or assisting serious self-harm
  • A new offence of cyberflashing
  • New offences of sending knowingly false, persistent or threatening communications, to replace S.127(2) Communications Act 2003

It also recommended that the government consider legislating to criminalise maliciously sending flashing images to known sufferers of epilepsy. It was not persuaded that specific offences of pile-on harassment or glorification of violent crime would be necessary, effective or desirable.

The hate crime report made a complex series of recommendations, including extending the existing ‘stirring up’ offences to cover hatred on grounds of sex or gender. It recommended that, if the draft Online Safety Bill becomes law, inflammatory hate material should be included as ‘priority illegal content’, and that the stirring up offences should not apply to social media companies and other platforms in respect of user-to-user content unless intent to stir up hatred on the part of the provider could be proved.

It also recommended that the government undertake a review of the need for a specific offence of public sexual harassment (covering both online and offline conduct).

The government has said in an interim response to the communications offences report that it proposes to include three of the recommended offences in the Bill: the harm-based communications offence, the false communications offence and the threatening communications offence. The remainder are under consideration. The hate crime report awaits an interim response.

From the point of view of the safety duties under the Online Safety Bill, the key consequence of new offences is that the dividing line between the illegality duty and the ‘legal but harmful’ duties would shift. However, the ‘reasonable grounds to believe’ threshold would not change, and would apply to the new offences as it does to existing offences.

The Petitions Committee acknowledged concerns over how the proposed harm-based offence would intersect with the illegality duties:

“The Law Commission is right to recommend refocusing online communications offences onto the harm abusive messages can cause to victims. We welcome the Government’s commitment to adopt the proposed threatening and ‘harm-based’ communications offences. However, we also acknowledge the uncertainty and hesitation of some witnesses about how the new harm-based offence will be interpreted in practice, including the role of social media companies and other online platforms in identifying this content—as well as other witnesses’ desire for the law to deal with more cases of online abuse more strongly.”

It recommended that the effectiveness of the offences be monitored, and that the government publish an initial review of the workings and impact of any new communications offences within the first two years after they come into force.

The Joint Committee supported the Law Commission recommendations. It also suggested that concerns about ambiguity and the context-dependent nature of the proposed harm-based offence could be addressed through the statutory public interest requirement discussed above. [135]

Annex: What is a duty of care?

In its proper legal sense a duty of care is a duty to take reasonable care to avoid injuring other people – that is why it is called a duty of care. It is not a duty to prevent other people breaking the law. Nor (other than exceptionally) is it a duty to prevent other people injuring each other. Still less is it a duty to prevent other people speaking harshly to each other.

A duty of care exists in the common law of negligence and occupier’s liability. Analogous duties exist in regulatory contexts such as health and safety law. A duty of care does not, however, mean that everyone owes a duty to avoid causing any kind of harm to anyone else in any situation. Quite the reverse. The scope of a duty of care is limited by factors such as kinds of injury, causation, foreseeability and others.

In particular, for arm’s-length relationships such as property owner and visitor (the closest analogy to platform and user), the law carefully restricts safety-related duties of care to objectively ascertainable kinds of harm: physical injury and damage to property.

Objective injury v subjective harm Once we move into subjective speech harms the law is loath to impose a duty. The UK Supreme Court held in Rhodes that the author of a book owes no duty to avoid causing distress to a potential reader of the book. It said:

“It is difficult to envisage any circumstances in which speech which is not deceptive, threatening or possibly abusive, could give rise to liability in tort for wilful infringement of another’s right to personal safety. The right to report the truth is justification in itself. That is not to say that the right of disclosure is absolute … . But there is no general law prohibiting the publication of facts which will cause distress to another, even if that is the person’s intention.” [77]

That is the case whether the author sells one book or a million, and whether the book languishes in obscurity or is advertised on the side of every bus and taxi.

The source of some of the draft Bill’s most serious problems lies in the attempt to wrench the concept of a safety-related duty of care out of its offline context – risk of physical injury – and apply it to the contested, subjectively perceived claims of harm that abound in the context of speech.

In short, speech is not a tripping hazard. Treating it as such propels us ultimately into the territory of claiming that speech is violence: a proposition that reduces freedom of expression to a self-cancelling right.

Speech is protected as a fundamental right. Some would say it is the right that underpins all other rights. It is precisely because speech is not violence that Berkeley students enjoy the right to display placards proclaiming that speech is violent. The state is – or should be – powerless to prevent them, however wrong-headed their message.

Quite how, on the nature of speech, a Conservative government has ended up standing shoulder to shoulder with those Berkeley students is one of the ineffable mysteries of politics.

Causing v preventing Even where someone is under a duty to avoid causing physical injury to others, that does not generally include a duty to prevent them from injuring each other. Exceptionally, such a preventative duty can (but does not necessarily) arise, for instance where the occupier of property does something that creates a risk of that happening. Serving alcohol on the premises, or using the property for a public golf course, would be examples. Absent that, or a legally close relationship (such as teacher-pupil) or an assumption of responsibility, there is no duty. Even less would any preventative duty exist for what visitors say to each other on the property.

The duty proposed to be imposed on UGC platforms is thus doubly removed from offline duties of care. First, it would extend far beyond physical injury into subjective harms. Second, the duty consists in the platform being required to prevent or restrict how users behave to each other.

It might be argued that some activities (around algorithms, perhaps) are liable to create risks that, by analogy with offline, could justify imposing a preventative duty. That at least would frame the debate around familiar principles, even if the kind of harm involved remained beyond bounds.

Had the online harms debate been conducted in those terms, the logical conclusion would be that platforms that do not do anything to create relevant risks should be excluded from scope. But that is not how it has proceeded. True, much of the political rhetoric has focused on Big Tech and Evil Algorithm. But the draft Bill goes much further than that. It assumes that merely facilitating individual public speech by providing an online platform, however basic that might be, is an inherently risk-creating activity that justifies imposition of a duty of care. That proposition upends the basis on which speech is protected as a fundamental right.

Safety by design It may be suggested that by designing in platform safety features from the start it is possible to reduce or eliminate risk, while avoiding the problems of detecting, identifying and moderating particular kinds of illegal or harmful content.

It is true that some kinds of safety feature – a reporting button, for instance – do not entail any kind of content moderation. However, risk is not a self-contained concept. We always have to ask: “risk of what?” If the answer is “risk of people encountering illegal or harmful content”, at first sight that takes the platform back towards trying to distinguish permissible from impermissible content. However, that is not necessarily so.

A typical example of safety by design concerns amplification. It is suggested that platforms should be required to design in ‘friction’ features that inhibit sharing and re-sharing of content, especially at scale.

The problem with a content-agnostic approach such as this is that it inevitably strikes at all content alike (although it would no doubt be argued that the overall impact of de-amplification is skewed towards ‘bad’ content, since that is more likely to be shared and re-shared).

However, the content-agnostic position is rarely maintained rigorously, often reverting to discussion of ways of preventing amplification of illegal or harmful content (which takes us back to identifying and moderating such content). An example of this can be seen in Joint Committee recommendation 82(e):

“Risks created by virality and the frictionless sharing of content at scale, mitigated by measures to create friction, slow down sharing whilst viral content is moderated, require active moderation in groups over a certain size…”

Criticism of amplification is encapsulated in the slogan ‘freedom of speech is not freedom of reach’. As a matter of human rights law, however, interference with the reach of communications certainly engages the right of freedom of expression. As the Indian Supreme Court held in January 2020:

“There is no dispute that freedom of speech and expression includes the right to disseminate information to as wide a section of the population as is possible. The wider range of circulation of information or its greater impact cannot restrict the content of the right nor can it justify its denial.”

Broadcast regulation The model adopted by the draft Bill is discretionary regulation by regulator, rather than regulation by the general law. Whether discretionary broadcast-style regulation is an appropriate model for individual speech is a debate in its own right.


The full post originally appeared on the Cyberleagle Blog and is reproduced with permission and thanks.