The most heavily debated aspect of the government’s proposals has been Strand 2, the ‘legal but harmful content’ duty. In the draft Bill this comes in two versions: a substantive duty to mitigate user content harmful to children; and a transparency duty in relation to user content harmful to adults. That, at any rate, appears to be the government’s political intention. As drafted, the Bill could be read as going further and imposing a substantive ‘content harmful to adults’ duty (something that at least some of the Committees want the legislation explicitly to do).
Compared with an illegality duty, the legal but harmful duty is conceptually closer to a duty of care properly so called. As a species of duty to take care to avoid harm to others, it at least inhabits approximately the same universe. However, the similarity stops there. It is a duty of care detached from its moorings (risk of objectively ascertainable physical injury) and then extended into a duty to prevent other people harming each other. As such, like the illegality duty, it has no comparable equivalent in the offline world; and again, as with the illegality duty, any concept of risk-creating activity by providers is stretched and homeopathically diluted to encompass mere facilitation of individual public speech.
Those features make the legal but harmful duty a categorically different kind of obligation from analogous offline duties of care; one that – at least if framed as a substantive obligation – is difficult to render compliant with a human rights framework, due to the inherently vague notions of harm that inevitably come into play once harm is extended beyond risk of objectively ascertainable physical injury.
This problem has bedevilled the Online Harms proposals from the start. The White Paper (Harm V.1) left harm undefined, which would have empowered Ofcom to write an alternative statute book to govern online speech. The Full Consultation Response (Harm V.2) defined harm as “reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”. The draft Bill (Harm V.3) spans the gamut, from undefined (for priority harmful content) to physical or psychological harm (general definition) to a complex cascade of definitions starting with the “adult (or child) of ordinary sensibilities” for residual non-priority harmful content.
If harm includes subjectively perceived harm, then it is likely to embody a standard of the most easily offended reader and to require platforms to make decisions based on impossibly vague criteria and unascertainable factual context.
The debate has not been helped by a common tendency to refer to ‘risk’ in the abstract, without identifying what counts as harm and, just as importantly, what does not. Everyday expressions such as ‘harm’, ‘abuse’, ‘trolling’ and so on may suffice for political debate. But legislation has to grapple with the uncomfortable question of what kinds of lawful but controversial and unpleasant speech should not qualify as harmful. That is a question that a lawmaker cannot avoid if legislation is to pass the ‘clear and precise’ rule of law test.
Even when a list is proposed it still tends to be pitched at a level that can leave basic questions unanswered. The Joint Committee, for instance, proposes a list including ‘abuse, harassment or stirring up of violence or hatred based on the protected characteristics in the Equality Act 2010’, and “content or activity likely to cause harm amounting to significant psychological distress to a likely audience (defined in line with the Law Commission offence)”.
On that basis does blasphemy count as legal but harmful content? Does the Committee’s proposed list of specific harms answer that question? Some would certainly claim to suffer significant psychological distress from reading blasphemous material. Religion or belief is a protected characteristic under the Equality Act. How would that be reconciled with the countervailing duty to take into account the importance of freedom of expression within the law or, as the Joint Committee would propose for high risk platforms, to assess the public interest in high value speech under the guidance of Ofcom?
If none of these provides a clear answer, the result is to delegate the decision-making to Ofcom. That prompts the question whether such a controversial decision as to what speech is or is not permissible online should be made, and made in clear terms, by Parliament.
While on the topic of delegation, let us address the proposition that the draft Bill’s ‘legal but harmful to adults’ duty delegates a state power to platforms. The Joint Committee report has an entire section entitled ‘Delegation of decision making’ ([165] to [169]).
At present, service providers have freedom to decide what legal content to allow or disallow on their platforms, and to make their own rules accordingly. That does not involve any delegation of state power, any more than Conway Hall exercises delegated state power when it decides on its venue hiring policy. Unless and until the state chooses to take a power via legislation, there is no state power capable of delegation.
Clause 11 (if we take at face value what the government says it is) requires platforms to provide users with information about certain of their decisions, and to enforce their rules consistently. Again, the state has not taken any power (either direct or via Ofcom) to instruct providers what rules to make. No state power, no delegation.
It is only when (as at least some Committees propose) the state takes a power to direct or govern decision-making that delegation is involved. Such a power would be delegated to Ofcom. Providers are then obligated to enforce the Bill’s and Ofcom’s rules against users. That involves providers in making decisions about what content contravenes the rules. There is still no delegation of rule-making, except to the extent that latitude, vagueness or ambiguity in those rules results in de facto delegation of rule-making to the providers.
Current state of play

None of the Committees has accepted the submissions from a number of advocacy groups (and the previous Lords Committee Report on Freedom of Expression in the Digital Age) that ‘legal but harmful to adults’ obligations should be dropped from the legislation.
However, each Committee has put forward its own alternative formulation:
- The Joint Committee’s list of reasonably foreseeable risks of harm that providers should be required to identify and mitigate (replacing the draft Bill’s transparency duty with a substantive mitigation duty) ([176]), as part of an overall package of recommended changes
- The Petitions Committee’s recommendation that the primary legislation should contain as comprehensive an indication as possible of what content would be considered harmful to adults or children; and that abuse based on characteristics protected under the Equality Act and hate crime legislation should be designated as priority harmful content in the primary legislation. This Committee also considers that the legal but harmful duty should be a substantive mitigation duty. ([46], [67])
- The DCMS Committee’s recommendation (similar to the Joint Committee) that the definition of (legal) content that is harmful to adults should be reframed to apply to reasonably foreseeable harms identified in risk assessments ([20]). This sits alongside a proposal that providers be positively required to balance their safety duties with freedom of expression ([19]); and that providers should be required to assess and take into account context, the position of the speaker, the susceptibility of the audience and the content’s accuracy ([20]). This Committee also appears, at least implicitly, to support conversion into a substantive duty.
The DCMS Committee also recommends that the definition of legal content harmful to adults should: “explicitly include content that undermines, or risks undermining, the rights or reputation of others, national security, public order and public health or morals, as also established in international human rights law.”
On the face of it this is a strange proposal. The listed items are aims in pursuance of which, according to international human rights law, a state may, if it so wishes, restrict freedom of expression – subject to the restriction being prescribed by law (i.e. by clear and certain rules), necessary for the achievement of that aim, and proportionate.
The listed aims do not themselves form a set of clear and precise substantive rules, and are not converted into such by the device of adding ‘undermines, or risks undermining’. The result is an unfeasibly vague formulation. Moreover, it appears to suggest that every kind of speech that can legitimately be restricted under international human rights law should be. It is difficult to believe that the Committee really intends that.
The various Committee proposals illustrate how firmly the draft Bill is trapped between the twin devils of over-removal via the blunt instrument of a content-oriented safety duty; and of loading onto intermediaries the obligation to make ever finer and more complex multi-factorial judgements about content. The third propounded alternative of safety by design has its own vice of potentially interfering with all content, good and bad alike.
Strand 3 – Reduce the discretion of large social media platforms to decide what content should and should not be on their services
Until very late in the consultation process the focus of the government’s Online Harms proposals was entirely on imposing duties on providers to prevent harm by their users, with the consequent potential for over-removal of user content mitigated to some degree by a duty to have regard to the importance of freedom of expression within the law. This kind of proposal sought to leverage the abilities of platforms to act against user content.
When the Full Response was published a new strand was evident: seeking to rein in the ability of large platforms to decide what content should and should not be present on their services. It is possible that this may have been prompted by events such as the suspension of then-President Trump’s Twitter account.
Be that as it may, the Full Response and now the draft Bill include provisions, applicable to Category 1 U2U providers, conferring special protections on journalistic content and content of democratic importance. The most far-reaching protections relate to content of democratic importance. For such content the provider must not only ensure that it has systems and processes designed to ensure that the importance of free expression of such content is taken into account when making certain decisions (such as takedown, restriction or action against a user), but must also ensure that those systems and processes apply in the same way to a diversity of political opinion. Whatever the merits and demerits of such proposals, they are far removed from the original policy goal of ensuring user safety.
Current state of play

As noted above, the Joint Committee proposes that the journalistic content and content of democratic importance protections be replaced by a single statutory requirement to have proportionate systems and processes to protect ‘content where there are reasonable grounds to believe it will be in the public interest’ ([307]). The DCMS Committee’s recommendation on the scope of legal but harmful content includes taking democratic importance and journalistic nature into account when considering the context of content ([23]).
Although the Committee’s discussion is about protecting ‘high value speech’, there is a risk involved in generalising this protection into the kind of single statutory safeguard for ‘content in the public interest’ envisaged by the Committee. The risk is that in practice the safeguard would be turned on its head – with the result that only limited categories of ‘high value speech’ would be seen as presumptively qualifying for protection from interference, leaving ‘low value’ speech to justify itself and, in reality, shorn of protection.
That is the error that Warby LJ identified in Scottow, a prosecution under s.127 of the Communications Act 2003:
“The Crown evidently did not appreciate the need to justify the prosecution, but saw it as the defendant’s task to press the free speech argument. The prosecution argument failed entirely to acknowledge the well-established proposition that free speech encompasses the right to offend, and indeed to abuse another. The Judge appears to have considered that a criminal conviction was merited for acts of unkindness, and calling others names, and that such acts could only be justified if they made a contribution to a “proper debate”. … It is not the law that individuals are only allowed to make personal remarks about others online if they do so as part of a “proper debate”.”
In the political arena, the presumption that anything unpleasant or offensive is prima facie to be condemned can be a powerful one. The 10 December 2021 House of Lords debate on freedom of speech was packed with pleas to be nicer to each other online: hard to disagree with as a matter of etiquette. But if being unpleasant is thought of itself to create a presumption against freedom of expression, that does not reflect human rights law.
The risk of de facto reversal of the presumption in favour of protection of speech when we focus on protecting ‘high value’ speech is all the greater where platforms are expected to act in pursuance of their safety duty proactively, in near real-time and at scale, against a duty-triggering threshold of reasonable grounds to believe.
That is without even considering the daunting prospect of an AI algorithm that claims to be capable of assessing the public interest.
Part I of this post was published here and Part III will be published later this week.
The full post originally appeared on the Cyberleagle Blog and is reproduced with permission and thanks.