It is time – in fact it is overdue – to take stock of the increasingly imminent Online Safety Bill. The two months before and after Christmas saw a burst of activity: reports from the Joint Parliamentary Committee scrutinising the draft Bill, from the Commons DCMS Committee on the ‘Legal but Harmful’ issue, and from the House of Commons Petitions Committee on Tackling Online Abuse.
Several Parliamentary debates took place, and recently the DCMS made two announcements: first, that an extended list of priority illegal content would be enacted on the face of the legislation, as would the Law Commission’s recommendations for three modernised communications offences; and second, that age verification would be extended to apply to non-user-to-user pornography sites.
Most recently of all, the Home Secretary is reported to have gained Cabinet support for powers for Ofcom (the regulator that would implement, supervise and enforce the Bill’s provisions) to require use of technology to proactively seek out and remove illegal content and legal content harmful to children.
As the government’s proposals have continued to evolve under the guidance of their sixth Culture Secretary, and with Parliamentary Committees and others weighing in from all directions, you may already be floundering if you have not followed, blow by blow, the progression from the 2017 Internet Safety Strategy Green Paper, via the April 2019 Online Harms White Paper and the May 2021 draft Online Safety Bill, to the recent bout of political jousting.
If you are already familiar with the legal concept of a duty of care, the significance of objective versus subjective harms, the distinction between a duty to avoid causing injury and a duty to prevent others causing injury, and the notion of safety by design, then read on. If not, or if you would like a recap, it’s all in the Annex.
In brief, the draft Bill would impose a new set of legal obligations on an estimated 24,000 UK providers of user-to-user services (everyone from large social media platforms to messaging services, multiplayer online games and simple discussion forums) and search engines. The government calls these obligations a duty of care.
This post is an unashamedly selective attempt to put in context some of the main threads of the government’s thinking, explain key elements of the draft Bill and pick out a few of the most significant Parliamentary Committee recommendations.
The government’s thinking
The proposals bundle together multiple policy strands. Those include:
- Requiring providers to take steps to prevent, inhibit or respond to illegal user content
- Requiring providers to take action in respect of ‘legal but harmful’ user content
- Limiting the freedom of large social media platforms to decide which user content should and should not be on their services.
The government also proposes to enact new and reformed criminal offences for users. These are probably the most coherent aspects of the proposed legislation, yet still have some serious problems – in their own right, in the case of the new harm-based offence, and also in how offences interact with the illegality strand of the duty of care.
Protection of children has been a constant theme, sparking debates about age verification, age assurance and end-to-end encryption. Overall, the government has pursued its quest for online safety under the Duty of Care banner, bolstered with the slogan “What Is Illegal Offline Is Illegal Online”.
That slogan, to be blunt, has no relevance to the draft Bill. Thirty years ago there may have been laws that referred to paper or post, or that in some other way excluded electronic communication and online activity. Those gaps were plugged long ago. With the exception of election material imprints (a gap that is being fixed by a different Bill currently going through Parliament), there are no criminal offences that do not already apply online (other than jokey examples like driving a car without a licence).
On the contrary, the draft Bill’s Duty of Care would create novel obligations for both illegal and legal content that have no comparable counterpart offline. The arguments for these duties rest in reality on the premise that the internet and social media are different from offline, not that we are trying to achieve offline-online equivalence.
Strand 1: Preventing and Responding to Illegality
Under the draft Bill, all 24,000 in-scope UGC providers would be placed under a duty of care (so-called) in respect of illegal user content. The duty would be reactive or proactive, depending on the kind of illegality involved. Illegality for this purpose means criminal offences.
The problem with applying the duty of care label to this obligation is that there is no necessary connection between safety (in the duty of care sense of risk of personal injury) and illegality. Some criminal law is safety-related and some is not. We may be tempted to talk of being made safe from illegality, but that is not safety in its proper duty of care sense.
In truth, the illegality duty appears to stem not from any legal concept of a duty of care, but from a broader argument that platforms have a moral responsibility to take positive steps to prevent criminal activity by users on their services. That contrasts with merely being incentivised to remove user content on becoming aware that it is unlawful. The latter is the position of a host under the existing intermediary liability regime, with which the proposed positive legal duty would co-exist.
That moral framing may explain why the DCMS Minister was able to say to a recent Parliamentary Committee:
“I think there is absolute unanimity that the Bill’s position on that is the right position: if it is illegal offline it is illegal online and there should be a duty on social media firms to stop it happening. There is agreement on that.” (1 Feb 2022, Commons DCMS Sub-Committee on Online Harms and Disinformation)
It is true that the illegality safety duty has received relatively little attention compared with the furore over the draft Bill’s ‘legal but harmful’ provisions. Even then, the consensus to which the Minister alludes may not be quite so firm. It may seem obvious that illegal content should be removed, but that overlooks the fact that the draft Bill would require removal without any independent adjudication of illegality. That contradicts the presumption against prior restraint that forms a core part of traditional procedural protections for freedom of expression. To the extent that the duty requires hosts to monitor for illegality, that departs from the long-standing principle embodied in Article 15 of the eCommerce Directive prohibiting the imposition of general monitoring obligations.
It is noteworthy that the DCMS Committee Report recommends ([21]) that takedown should not be the only option to fulfil the illegality safety duty, but measures such as tagging should be available.
So an unbounded notion of preventing illegality does not sit well on the offline duty of care foundation of risk of physical injury. Difficult questions arise as a result. Should the duty apply to all kinds of criminal offence capable of being committed online? Or, more closely aligned with offline duties of care, should it be limited strictly to safety-related criminal offences? Or perhaps to risk of either physical injury or psychological harm? Or, more broadly, to offences for which it can be said that the individual is a victim?
That the government’s proposals have, over time, fluctuated between several of these varieties of illegality perhaps reflects the difficulty of shoehorning this kind of duty into a legal box labelled ‘duty of care’.
Moving on from the scope of illegality, what would the draft Bill require U2U providers to do? Under the draft Bill, for ‘ordinary’ illegal content the safety duty would be reactive – to remove it on receiving notice. For ‘priority’ illegal content the duty would in addition be preventative: as the DCMS described it in their recent announcement of new categories of priority illegal content:
“To proactively tackle the priority offences, firms will need to make sure the features, functionalities and algorithms of their services are designed to prevent their users encountering them and minimise the length of time this content is available. This could be achieved by automated or human content moderation, banning illegal search terms, spotting suspicious users and having effective systems in place to prevent banned users opening new accounts.”
These kinds of duty prompt questions about how a platform is to decide what is and is not illegal, or (apparently) who is a suspicious user. The draft Bill provides that the illegality duty should be triggered by ‘reasonable grounds to believe’ that the content is illegal. It could have adopted a much higher threshold: manifestly illegal on the face of the content, for instance. The lower the threshold, the greater the likelihood of legitimate content being removed at scale, whether proactively or reactively.
The draft Bill raises serious (and already well-known, in the context of existing intermediary liability rules) concerns of likely over-removal through mandating platforms to detect, adjudge and remove illegal material on their systems. Those are exacerbated by adoption of the ‘reasonable grounds to believe’ threshold.
Current state of play
The government’s newest list of priority offences (those to which the proactive duty would apply) mostly involves individuals as victims but also includes money laundering, an offence which does not do so. The list includes revenge and extreme pornography, as to which the Joint Scrutiny Committee observed that the first is an offence against specific individuals, whereas the second is not.
Given how broadly the priority offences are now ranging, it may be a reasonable assumption that the government does not intend to limit them to conduct that would carry a risk of physical or psychological harm to a victim.
The government intends that its extended list of priority offences would be named on the face of the Bill. That goes some way towards meeting criticism by the Committees of leaving that to secondary legislation. However, the government has not said that the power to add to the list by secondary legislation would be removed.
As to the threshold that would trigger the duty, the Joint Scrutiny Committee has said that it is content with ‘reasonable grounds to believe’ so long as certain safeguards are in place that would render the duty compatible with an individual’s right to free speech; and so long as service providers are required to apply the test in a proportionate manner set out in clear and accessible terms to users of the service.
The Joint Committee’s specific suggested safeguard is that Ofcom should issue a binding Code of Practice on identifying, reporting on and acting on illegal content. The Committee considers that Ofcom’s own obligation to comply with human rights legislation would provide an additional safeguard for freedom of expression in how providers fulfil this requirement. How much comfort one should take from that, when human rights legislation sets only the outer boundaries of acceptable conduct by the state, is debatable.
The Joint Committee also refers to other safeguards proposed elsewhere in its report. Identifying exactly which it is referring to in the context of illegality is not easy. Most probably, it is referring to those listed at [284], at least insofar as they relate to the illegality safety duty.
The Committee proposes these as a more effective alternative to strengthening the ‘have regard to the importance of freedom of expression’ duty in Clause 12 of the draft Bill:
- greater independence for Ofcom ([377])
- routes for individual redress beyond service providers ([457])
- tighter definitions around content that creates a risk of harm ([176] (adults), [202] (children))
- a greater emphasis on safety by design ([82])
- a broader requirement to be consistent in the application of terms of service
- stronger minimum standards ([184])
- mandatory codes of practice set by Ofcom, who are required to be compliant with human rights law (generally [358]; illegal content [144]; content in the public interest [307])
- stronger protections for news publisher content ([304])
It is not always obvious how some of these recommendations (such as increased emphasis on safety by design) qualify as freedom of expression safeguards.
For its part, the DCMS Committee has suggested ([12]) that the definition of illegal content should be reframed to explicitly add the need to consider context as a factor. How providers should go about obtaining such contextual information – much of which will be outside the contents of user posts – is unclear. The recommendation also has implications for the degree of surveillance and breadth of analysis of user communications that would be necessary to fulfil the duty.
Content in the public interest
The Joint Committee recommends a revised approach to the draft Bill’s protections for journalistic content and content of democratic importance ([307]). At present these qualifications to the illegality and legal but harmful duties would apply only to Category 1 service providers. However, the Committee also recommends (at [246]) replacing strict categories based on size and functionality with a risk-based sliding scale, which would determine which statutory duties apply to which providers. (The government has told the Petitions Committee that it is considering changing the Category 1 qualification from size and functionality to size or functionality.)
The Joint Committee relies significantly on this recommendation, under the heading of ‘protecting high value speech’. It proposes to replace the existing journalism and content of democratic importance protections with a single statutory requirement to have proportionate systems and processes to protect ‘content where there are reasonable grounds to believe it will be in the public interest’ ([307]). It gives the examples of journalistic content, contributions to political or societal debate and whistleblowing as being likely to be in the public interest.
Ofcom would be expected to produce a binding Code of Practice on steps to be taken to protect such content and guidance on what is likely to be in the public interest, based on their existing experience and caselaw.
As with the existing proposed protections, the ‘public interest’ proposal appears to be intended to apply across the board to both illegality and legal but harmful content (see, for instance, the Committee’s discussion at [135] in relation to the Law Commission’s proposed new ‘harm-based’ communications offence). This proposal is discussed under Strand 3 in Part II of this post.
Parts II and III of this post will be published later this week
The full post originally appeared on the Cyberleagle Blog and is reproduced with permission and thanks