Two Commons Committees – the Home Affairs Committee and the Digital, Culture, Media and Sport Committee – have recently held evidence sessions with government Ministers discussing, among other things, the government’s proposed Online Harms legislation. These sessions proved to be as revealing about the government’s intentions as its February 2020 Initial Response to the White Paper, if not more so.

As a result, on some topics we know more than we did, but the picture is still incomplete. Some new issues have surfaced. Other areas have become less clear than they were previously.

Above all, nothing is set in stone. The Initial Response was said to be indicative of a direction of travel and to form an iterative part of a process of policy development. The destination has yet to be reached – if, that is, the government ever gets there at all. It may yet hit a roadblock somewhere along the way, veer off into a ditch, or perhaps undergo a Damascene conversion should it finally realise the unwisdom of creating a latter-day Lord Chamberlain for the internet. Or the road may eventually peter out into nothingness. At present, however, the government is pressing ahead with its legislative intentions.

I’m going to be selective about my choice of topics, in the main returning to some of the key existing questions and concerns about the Online Harms proposals, with a sprinkling of new issues added for good measure. Much more ground than this was covered in the two sessions.

Borrowing from the old parlour game, each topic starts with what the White Paper said; followed by what the Initial Response said; then what the Ministers said; and lastly, the Consequence. The Ministers are Oliver Dowden MP (Secretary of State for Digital, Culture, Media and Sport); Caroline Dinenage MP (Minister for Digital and Culture) and Baroness Williams (Lords Minister, Home Office).

Sometimes the government’s Initial Response to Consultation recorded consultation submissions, but came to no conclusion on the topic. In those instances the Initial Response is categorised as saying ‘Nothing’. Some repetitive statements have been pruned.

Since this is a long read, here is a list of the selected topics:

  1. Will Parliament or the regulator decide what “harm” means?
  2. The regulator’s remit: substance, process or both?
  3. For “lawful but harmful” content seen by adults, will the regulator be interested only in whether intermediaries are enforcing whatever content standards they choose to put in their T&Cs?
  4. Codes of Practice for specific kinds of user content or activity?
  5. Search engines in scope?
  6. Everything from social media platforms to retail customer review sections?
  7. Will journalism and the press be excluded from scope?
  8. End-to-end encryption
  9. Identity verification
  10. Extraterritoriality

Points 1 to 3 are dealt with in this post; points 4 to 10 will be dealt with in Part 2.

  1. Will Parliament or the regulator decide what “harm” means?

The White Paper said:

“… government action to tackle online content or activity that harms individual users, particularly children, or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”

“This list [Table 1, Online harms in scope] is, by design, neither exhaustive nor fixed. A static list could prevent swift regulatory action to address new forms of online harm, new technologies, content and new online activities.”

The Initial Response said:

Nothing.

The Ministers said:

Oliver Dowden: “The only point that I have tried to make is that I am just keen on this proportionality point because it is often the case that regulation that starts out with the best of intentions can, in its interpretation if you do not get it right, have a life of its own. It starts to get interpreted in a way that Parliament did not intend it to be in the first place. I am just keen to make sure we put those kinds of hard walls around it so that the regime is flexible but that in its interpretation it cannot go beyond the intent that we set out in the first place in the broad principles.” (emphasis added)

Caroline Dinenage: “For what you might call the “legal but harmful” harms, we are not setting out to name them in the legislation. That is for the simple reason that technology moves on at such a rapid pace that it is very likely that we would end up excluding something….  We want to make sure that this piece of legislation will be agile and able to respond to harms as they emerge. The legislation will make that clearer, but it will be for the regulator to outline what the harms are and to do that in partnership with the platforms.” (Q.554) (emphasis added)

The Consequence: It is difficult to reconcile the desire of the Secretary of State to erect “hard walls”, in order to avoid unintended consequences, with the government’s apparent determination to leave the notion of harm undefined, delegating to the regulator the task of deciding what counts as harmful. This kind of approach has serious implications for the rule of law.

Left undelineated, the concept of harm is infinitely malleable. The Home Office Minister Baroness Williams suggested in the Committee session that 5G disinformation could be divided into “harmless conspiracy theories” and “that which actually leads to attacks on engineers”, as well as a far-right element. One Committee member (Ruth Edwards MP) responded that she did not think that any element of the conspiracy theory could be categorised as ‘harmless’, because “it is threatening public confidence in the 5G roll-out” — a proposition with which the DCMS Minister Caroline Dinenage agreed.

Harm is thus equated with people changing their opinion about a telecommunications project. This unbounded sense of harm is on a level with the notorious “confusing our understanding of what is happening in the wider world” phraseology of the White Paper.

Statements such as the concluding peroration by Baroness Williams: “I, too, want to make the internet a safer place for my children, and exclude those who seek to do society harm” have to be viewed against the backdrop of an essentially unconstrained meaning of harm.

When harm can be interpreted so broadly, the government is playing with fire. But it is we – not the government, the regulator or the tech companies – who stand to get our fingers burnt.

  2. The regulator’s remit: substance, process or both?

The White Paper said:

“In particular, companies will be required to ensure that they have effective and proportionate processes and governance in place to reduce the risk of illegal and harmful activity on their platforms, as well as to take appropriate and proportionate action when issues arise. The new regulatory regime will also ensure effective oversight of the take-down of illegal content, and will introduce specific monitoring requirements for tightly defined categories of illegal content.” (6.16)

The Initial Response said:

“The approach will be proportionate and risk-based with the duty of care designed to ensure companies have appropriate systems and processes in place to improve the safety of their users.”

“The focus on robust processes and systems rather than individual pieces of content means it will remain effective even as new harms emerge. It will also ensure that service providers develop, clearly communicate and enforce their own thresholds for harmful but legal content.”

“The kind of processes the codes of practice will focus on are systems, procedures, technologies and investment, including in staffing, training and support of human moderators.”

“As such, the codes of practice will contain guidance on, for example, what steps companies should take to ensure products and services are safe by design or deliver prompt action on harmful content or activity.”

“Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

“In fact, the new regulatory framework will not require the removal of specific pieces of legal content. Instead, it will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

“Of course, companies will be required to take particularly robust action to tackle terrorist content and online Child Sexual Exploitation and Abuse. The new regulatory framework will not remove companies’ existing duty to remove illegal content.”

The Ministers said:

Caroline Dinenage: “the codes of practice are really about systems and processes, rather than naming individual harms in the legislation. There are two exceptions to that: there will be codes of practice around child sexual exploitation and terrorist content, because those are both illegal.” (Q554)

“It is for the regulator to set out codes of practice, but they won’t be around individual harms; they will be around systems and processes—what we expect the companies to do. Rather than focusing on individual harms, because we know that the technology moves on so quickly that there could be more, it is a case of setting out the systems and processes that we would expect companies to abide by, and then giving the regulator the opportunity to impose sanctions on those that are not doing so.” (Q.556)

Q562 Stuart C. McDonald: “…if the regulator feels that algorithms are working inappropriately and directing people who have made innocent searches to, say, far-right content, will they be able to order, essentially, the company to make changes to how its algorithms are operating?

Caroline Dinenage: Yes, I think that they will. That is clearly something that we will set out in the full response. The key here is that companies must have clear transparency, they must set out clear standards, and they must have a clear duty of care. If they are designing algorithms that in any way put people at risk, that is, as I say, a clear design choice, and that choice carries with it a great deal of responsibility. It will be for the regulator to oversee that responsibility. If they have any concerns about the way that that is being upheld, there are sanctions that they can impose.

The Consequence: As with the specific issue around the status of terms and conditions for “lawful but harmful” content (see below), it is difficult to see how a bright line can be drawn between substance and process.  Processes cannot be designed, risk-assessed or their effectiveness evaluated in the abstract — only by reference to goals such as improving user safety and reducing risk of harm. A duty of care evaluated without reference to the kind of harm intended to be guarded against makes no more sense than the smile without the Cheshire Cat.

In Caparo v Dickman Lord Bridge cautioned against discussing duties of care in the abstract:

“It is never sufficient to ask simply whether A owes B a duty of care. It is always necessary to determine the scope of the duty by reference to the kind of damage from which A must take care to save B harmless.”

Risk assessment is familiar in the realm of safety properly so-called: danger of physical injury, where there is a clear understanding of what constitutes objectively ascertainable harm. It breaks down when applied to undefined, inherently subjective harms arising from users’ speech. If “threatening public confidence in the 5G roll-out” (see above) can be labelled an online harm within scope of the legislation, that goes far beyond any tenable concept of safety.

The government appears to be taking different approaches to illegal content and to “legal but harmful” content, the latter avowedly restricted to process (although see the next topic as to how far that can really be the case).

In passing, the Initial Response is technically incorrect in referring to “companies’ existing duty to remove illegal content”. No such general duty exists. Hosting providers lose the protection of the eCommerce Directive liability shield if they do not remove unlawful content expeditiously upon gaining actual or (for damages) constructive knowledge of the illegality. Even then, the eCommerce Directive does not oblige them to remove it. The consequence is that they become exposed to the risk of possible liability (which may or may not exist) under the relevant underlying law (see here for a fuller explanation). In practice that regime strongly incentivises hosting providers to remove illegal content upon gaining relevant knowledge. But they have no general legal obligation to do so.

  3. For “lawful but harmful” content seen by adults, will the regulator be interested only in whether intermediaries are enforcing whatever content standards they choose to put in their T&Cs?

The White Paper said:

“As indication of their compliance with their overarching duty of care to keep users safe, we envisage that, where relevant, companies in scope will:

    • Ensure their relevant terms and conditions meet standards set by the regulator and reflect the codes of practice as appropriate.
    • Enforce their own relevant terms and conditions effectively and consistently. …”

“To help achieve these outcomes, we expect the regulator to develop codes of practice that set out: 

    • Steps to ensure products and services are safe by design.
    • Guidance about how to ensure terms of use are adequate and are understood by users when they sign up to use the service. …
    • Steps to ensure harmful content or activity is dealt with rapidly. …
    • Steps to monitor, evaluate and improve the effectiveness of their processes.”

The Initial Response said:

“We will not prevent adults from accessing or posting legal content, nor require companies to remove specific pieces of legal content. The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour is acceptable on their sites and then for platforms to enforce this consistently.”

“To ensure protections for freedom of expression, regulation will establish differentiated expectations on companies for illegal content and activity, versus conduct that is not illegal but has the potential to cause harm. Regulation will therefore not force companies to remove specific pieces of legal content. The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content.”

“Recognising concerns about freedom of expression, the regulator will not investigate or adjudicate on individual complaints. Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently.”

The Ministers said:

Oliver Dowden: “The essence of online harms legislation is holding social media companies to what they have promised to do and to their own terms and conditions. My focus in respect of those is principally on two things: underage harms and illegal harms. Clearly, the trickiest category is legal adult harms. In respect of that, we are looking at how we tighten the measures to ensure that those companies actually do what they promised they would do in the first place, which often is not the case.” (Q20) (emphasis added)

“Clearly, in respect of legal adult harms, that is the underlying principle anyway in the sense that what we are really trying to do is say to those social media companies and tech firms, “Be true to what you say you are doing. Just stick by your terms and conditions”. We would ask the regulator to make sure that it is enforcing them, and then have tools at our disposal to require it to do so.” (Q89) (emphasis added)

Caroline Dinenage: “A lot of this is about companies having the right regulations and standards and duty of care, and that will also be in the online harms Bill and online harms work. If we can have more transparency as to what platforms regard as acceptable—there will be a regulator that will help guide them in that process—I think we will have a much better opportunity to tackle those things head-on.” (Q513) (emphasis added)

“With regard to our role in DCMS, it is more as a co-ordinator bringing together the work of all the different Government Departments and then liaising directly with the platforms to make sure that their standards, their regulations, are reflective of some of the concerns that we have—make sure, in some cases, that harmful content can be anticipated and therefore prevented, and, where that is not possible, where it can be stopped and removed as quickly as possible.” (emphasis added) (Q525)

Baroness Williams: “There is obviously that which is illegal and that which breaches the CSPs’ terms of use. It is that latter element, particularly in the area of extremism, on which we have really tried to engage with CSPs to get them to be more proactive.” (emphasis added) (Q.527)

The Consequence: This is now one of the most puzzling areas of the government’s developing policy. The White Paper expected that codes of practice would ensure that terms and conditions meet “standards set by the regulator” and that terms of use are “adequate”. These statements were not on the face of them limited to procedural standards and adequacy. They could readily be interpreted as encompassing standards and adequacy judged by reference to harm reduction goals determined by the regulator (which, as we have seen, would be able to decide for itself what constitutes harm) – in other words, extending to the substantive content of intermediaries’ terms and conditions.

When the Initial Response was published, great play was made of the shift to a differentiated duty of care: that it would be up to the intermediary to decide – for lawful content for adults – what standards to put in its terms and conditions.

The remit of the regulator would be limited to ensuring those standards are clearly stated and enforced “consistently and transparently” (or “effectively, consistently and transparently”, depending on which part of the Initial Response you turn to; or “effectively and consistently”, according to the White Paper). Indeed the Secretary of State said in evidence that “The essence of online harms legislation is holding social media companies to what they have promised to do and to their own terms and conditions”.

But it seems from the other Ministers’ responses that the government has not disclaimed all interest in the substantive content of intermediaries’ terms and conditions. On the contrary, the government evidently sees it as part of its role to influence (to put it at its lowest) what goes into them. If the regulator’s task is to ensure enforcement of terms and conditions whose substantive content reflects the wishes of a government department, that is a far cry from the proclaimed freedom of intermediaries to set their own standards of acceptable lawful content.

Ultimately, what can be the point of emphasising how, in the name of upholding freedom of speech, the role of an independent regulator will be limited to enforcing the intermediaries’ own terms and conditions, if the government considers that part of its own role is to influence those intermediaries as to what substantive provisions those T&Cs should contain?

This is one aspect of an emerging issue about division of responsibility between government and the regulator. It is tempting to think that once an independent regulator is established the government itself will withdraw from the fray. But if that is not so, then reducing the remit of the independent regulator concomitantly increases the scope for the government itself to step in.

That is especially pertinent in the light of the government’s desire to cast itself as a ‘trusted flagger’, whose notifications of unlawful content the intermediaries should act upon without question. Thus Caroline Dinenage appears to regard the platforms as obliged to remove anything that the government has told them it considers to be illegal (with no apparent requirement of prior due process such as independent verification), and would like them to take seriously anything else that the government notifies to them:

“We have found that we have become – I forget the proper term, but we have become like a trusted flagger with a number of the online hosting companies, with the platforms. So when we flag information, they do not have to double-check the concerns we have. Clearly, unless something is illegal, we cannot tell organisations to take it down; they have to make their own decision based on their own consciences, standards and requirements. But clearly we are building up a very strong, trusted relationship with them to ensure that when we flag things, they take it seriously.” (Emphasis added)

This post originally appeared on the Cyberleagle Blog. Part 2 will be published later this week.