The Rule of Law and the Online Harms White Paper – Graham Smith

12 05 2019

Before the publication of the Online Harms White Paper on 8 April 2019 I proposed a Ten Point Rule of Law test to which it might usefully be subjected.

The idea of the test is not so much to evaluate the substantive merits of the government’s proposal – you can find an analysis of those here – as to determine whether it would satisfy fundamental rule of law requirements of certainty and precision, without which something that purports to be law descends into ad hoc command by a state official.

Here is an analysis of the White Paper from that perspective. The questions posed are whether the White Paper demonstrates sufficient certainty and precision in respect of each of the following matters.

  1. Which operators are and are not subject to the duty of care

The White Paper says that the regulatory framework should apply to “companies that allow users to share or discover user-generated content, or interact with each other online.”

This is undoubtedly broad, but on the face of it is reasonably clear.  The White Paper goes on to provide examples of the main types of relevant service:

  • Hosting, sharing and discovery of user-generated content (e.g. a post on a public forum or the sharing of a video).
  • Facilitation of public and private online interaction between service users (e.g. instant messaging or comments on posts).

However, these examples introduce a significant element of uncertainty. For instance, how broad is ‘facilitation’? The White Paper gives a clue when it mentions ancillary services such as caching. Yet it is difficult to understand the opening definition as including caching.

The White Paper says that the scope will include “social media companies, public discussion forums, retailers that allow users to review products online, along with non-profit organisations, file sharing sites and cloud hosting providers.”  In the Executive Summary it adds messaging services and search engines into the mix. Although the White Paper does not mention them, online games would clearly be in scope as would an app with social or discussion features.

Applicability to the press is an area of significant uncertainty. Comments sections on newspaper websites, or a separate discussion forum run by a newspaper such as in the Karim v Newsquest case would on the face of it be in scope. However, in a letter to the Society of Editors the Secretary of State has said:

“… as I made clear at the White Paper launch and in the House of Commons, where these services are already well regulated, as IPSO and IMPRESS do regarding their members’ moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”

This exclusion is nowhere stated in the White Paper. Further, it does not address the fact that newspapers are themselves users of social media. They have Facebook pages and Twitter accounts, with links to their own websites. As such, their own content is liable to be affected by a social media platform taking action to suppress user content in performance of its duty of care.

The verdict on this section might have been ‘extremely broad but clearly so’. However the uncertainty introduced by ‘facilitation’, and by the lack of clarity about newspapers, results in a FAIL.

  2. To whom the duty of care is owed

The answer to this appears to be ‘no-one’. That may seem odd, especially when Secretary of State Jeremy Wright referred in a recent letter to the Society of Editors to “a duty of care between companies and their users”, but what is described in the White Paper is not in fact a duty of care at all.

The proposed duty would not provide users with a basis on which to make a damages claim against the companies for breach, as is the case with a common law duty of care or a statutory duty of care under, say, the Occupiers’ Liability Act 1957.

Nor, sensibly, could the proposed duty do so, since its conception of harm strays beyond the established duty of care territory of risk of physical injury to individuals, into the highly contestable region of speech harms and then on into the unmappable wilderness of harm to society.

Thus in its introduction to the harms in scope the White Paper starts by referring to online content or activity that ‘harms individual users’, but then goes on: “or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

In the context of disinformation it refers to “undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”

Whatever (if anything) these abstractions may mean, they are not the kind of thing that can properly be made the subject of a legal duty of care in the offline world sense of the phrase.

The proposed duty of care is something quite different: a statutory framework giving a regulator discretion to decide what should count as harmful, what kinds of behaviour by users should be regarded as causing harm, what rules should be put in place to counter it, and which operators to prioritise.

From a rule of law perspective the answer to the question posed is that it does seem clear that the duty would be owed to no one. In that limited sense it probably rates a PASS, but only by resisting the temptation to change that to FAIL for the misdescription of the scheme as creating a duty of care.

Nevertheless, the fact that the duty is of a kind that is owed to no-one paves the way for a multitude of FAILs for other questions.

  3. What kinds of effect on a recipient will and will not be regarded as harmful

This is an obvious FAIL. The White Paper has its origins in the Internet Safety Strategy Green Paper, yet does not restrict itself to what in the offline world would be regarded as safety issues.  It makes no attempt to define harm, apparently leaving it up to the proposed Ofweb to decide what should and should not be regarded as harmful. Some examples given in the White Paper suggest that effect on the recipient is not limited to psychological harms, or even distress.

This lack of precision is exacerbated by the fact that the kinds of harm contemplated by the White Paper are not restricted to those that have an identifiable effect on a recipient of the information, but appear to encompass nebulous notions of harm to society.

  4. What speech or conduct by a user will and will not be taken to cause such harm

The answer appears to be, potentially, “any”. The White Paper goes beyond defined unlawfulness into undefined harm, but places no limitation on the kind of behaviour that could in principle be regarded as causing harm. From a rule of law perspective of clarity this may be a PASS, but only in the sense that the kind of behaviour in scope is clearly unlimited.

  5. If risk to a hypothetical recipient of the speech or conduct in question is sufficient, how much risk suffices and what are the assumed characteristics of the notional recipient

FAIL. There is no discussion of either of these points, beyond emphasising many times that children as well as adults should be regarded as potential recipients (although whether the duty of care should mean taking steps to exclude children, or to tailor all content to be suitable for children, or a choice of either, or something else, is unclear). The White Paper makes specific reference to children and vulnerable users, but does not limit itself to those.

  6. Whether the risk of any particular harm has to be causally connected (and if so how closely) to the presence of some particular feature of the platform

FAIL. The White Paper mentions, specifically in the context of disinformation, the much discussed amplification, filter bubble and echo chamber effects that are associated with social media. More broadly it refers to ‘safety by design’ principles, but does not identify any design features that are said to give rise to a particular risk of harm.

The safety by design principles appear to be not about identifying and excluding features that could be said to give rise to a risk of harm, but more focused on designing in features that the regulator would be likely to require of an operator in order to satisfy its duty of care.

Examples given include clarity to users about what forms of content are acceptable; effective systems for detecting and responding to illegal or harmful content, including the use of AI-based technology and trained moderators; easy ways for users to report problem content; and an efficient triage system to deal with reports.

  7. What circumstances would trigger an operator’s duty to take preventive or mitigating steps

FAIL. The specification of such circumstances would be left to the discretion of Ofweb, in its envisaged Codes of Practice or, in the case of terrorism or child sexual exploitation and abuse, to the discretion of the Home Secretary via approval of Ofweb’s Codes of Practice.

The only concession made in this direction is that the government is consulting on whether Codes of Practice should be approved by Parliament. However it is difficult to conclude that laying the detailed results of a regulator’s ad hoc consideration before Parliament for approval, almost certainly on a take it or leave it basis, has anything like the same democratic or constitutional force as requiring Parliament to specify the harms and the nature of the duty of care with adequate precision in the first place.

  8. What steps the duty of care would require the operator to take to prevent or mitigate harm (or a perceived risk of harm)

The White Paper says that legislation will make clear that companies must do what is reasonably practicable. However that is not enough to prevent a FAIL, for the same reasons as 7. Moreover, it is implicit in the White Paper section on Fulfilling the Duty of Care that the government has its own views on the kinds of steps that operators should be taking to fulfil the duty of care in various areas. This falls uneasily between a statutorily defined duty, the role of an independent regulator in deciding what is required, and the possible desire of government to influence an independent regulator.

  9. How any steps required by the duty of care would affect users who would not be harmed by the speech or conduct in question

FAIL. The White Paper does not discuss this, beyond the general discussion of freedom of expression in the next question.

  10. Whether a risk of collateral damage to lawful speech or conduct (and if so how great a risk of how extensive damage), would negate the duty of care

The question of collateral damage is not addressed, other than implicitly in the various statements that the government’s vision includes freedom of expression online and that the regulatory framework will “set clear standards to help companies ensure safety of users while protecting freedom of expression”.

Further, “the regulator will have a legal duty to pay due regard to innovation, and to protect users’ rights online, taking particular care not to infringe privacy or freedom of expression.” It will “ensure that the new regulatory requirements do not lead to a disproportionately risk averse response from companies that unduly limits freedom of expression, including by limiting participation in public debate.”

Thus consideration of the consequences of a risk of collateral damage to lawful speech is left to the decision of a regulator, rather than to the law or a court. The regulator will presumably, by the nature of the proposal, be able to give less weight to the risk of suppressing lawful speech that it considers to be harmful. FAIL.

Postscript. It may be said against much of this analysis that precedents exist for appointing a discretionary regulator with power to decide what does and does not constitute harmful speech.

Thus, for broadcast, the Communications Act 2003 does not define “offensive or harmful” and Ofcom is largely left to decide what those mean, in the light of generally accepted standards.

Whatever the view of the appropriateness of such a regime for broadcast, the White Paper proposals would regulate individual speech. Individual speech is different. What is a permissible regulatory model for broadcast is not necessarily justifiable for individuals, as was recognised in the US Communications Decency Act case (Reno v ACLU) in 1997. The US Supreme Court found that:

“This dynamic, multi-faceted category of communication includes not only traditional print and news services, but also audio, video and still images, as well as interactive, real-time dialogue. Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer. As the District Court found, ‘the content on the internet is as diverse as human thought’ … We agree with its conclusion that our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.”

In these times it is hardly fashionable, outside the USA, to cite First Amendment jurisprudence. Nevertheless, the proposition that individual speech is not broadcast should carry weight in a constitutional or human rights court in any jurisdiction.

This post originally appeared on the Cyberleagle blog and is reproduced with permission and thanks.

