This is Part 2 of a post dealing with evidence given by government Ministers to two Commons Committees – the Home Affairs Committee and the Digital, Culture, Media and Sport Committee – discussing, among other things, the government’s proposed Online Harms legislation. Part 1 was published on 28 May 2020.

The issues which arise have been dealt with under ten separate topics.

  1. Will Parliament or the regulator decide what “harm” means?
  2. The regulator’s remit: substance, process or both?
  3. For “lawful but harmful” content seen by adults, will the regulator be interested only in whether intermediaries are enforcing whatever content standards they choose to put in their T&Cs?
  4. Codes of Practice for specific kinds of user content or activity?
  5. Search engines in scope?
  6. Everything from social media platforms to retail customer review sections?
  7. Will journalism and the press be excluded from scope?
  8. End-to-end encryption
  9. Identity verification
  10. Extraterritoriality

Topics 1 to 3 were dealt with in Part 1; topics 4 to 10 are dealt with in this post.

Each topic starts with what the White Paper said; followed by what the Initial Response said; then what the Ministers said; and lastly, the Consequence. The Ministers are Oliver Dowden MP (Secretary of State for Digital, Culture, Media and Sport); Caroline Dinenage MP (Minister for Digital and Culture) and Baroness Williams (Lords Minister, Home Office).

Topics

  4. Codes of Practice for specific kinds of user content or activity?

The White Paper said:

“[T]he White Paper sets out high-level expectations of companies, including some specific expectations in relation to certain harms. We expect the regulator to reflect these in future codes of practice.”

It then set out a list of 11 harms, accompanied in each case by a list of areas in relation to that harm that it expected the regulator to include in a code of practice. For instance, in relation to disinformation the list of 11 specific areas included:

“Steps that companies should take in relation to users who deliberately misrepresent their identity to spread and strengthen disinformation.”; and

“Promoting diverse news content, countering the ‘echo chamber’ in which people are only exposed to information which reinforces their existing views.”

The Initial Response said:

“The White Paper talked about the different codes of practice that the regulator will issue to outline the processes that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users. … We do not expect there to be a code of practice for each category of harmful content, however, we intend to publish interim codes of practice on how to tackle online terrorist and Child Sexual Exploitation and Abuse (CSEA) content and activity in the coming months.”

The Ministers said:

“Caroline Dinenage: I think I need to clear up a bit of a misunderstanding about the White Paper. The 11 harms that were listed were really intended to be an illustrative list of what we saw as the harms. The response did not expect a code of practice for each one, because the codes of practice are really about systems and processes, rather than naming individual harms in the legislation. There are two exceptions to that: there will be codes of practice around child sexual exploitation and terrorist content, because those are both illegal.” (Q.554) (emphasis added)

The Consequence: The different approach to CSEA and terrorism probably owes more to the different areas of responsibility of the Home Office and the DCMS than to any dividing line between illegality and non-illegality. The White Paper covers many more areas of illegality than those two alone.

  5. Search engines in scope?

The White Paper said:

“… will apply to companies that allow users to share or discover user-generated content, or interact with each other online.” (emphasis added)

“These services are offered by… search engines” (Executive Summary)

The Initial Response said:

“The legislation will only apply to companies that provide services or use functionality on their websites which facilitate the sharing of user generated content or user interactions, for example through comments, forums or video sharing” (emphasis added)

The Ministers said:

“Caroline Dinenage: Again, we are probably victims of the fact that we published an interim response, which was not as comprehensive as our full response will be later on in the year. The White Paper made it very clear that search engines would be included in the scope of the framework and the nature of the requirements will reflect the type of service that they offer. We did not explicitly mention it in the interim response, but that does not mean that anything has changed. It did not cover the full policy. Search engines will be included and there is no change to our thoughts and our policy on that.” (Q.560)

The Consequence: Notwithstanding the Minister’s explanation, the alterations in wording between the White Paper and the Initial Response (omitting “discover”, adding “only”) had the appearance of a considered change. The lesson for the future is perhaps that it would be unwise to parse too closely the text of anything else said or written by the government.

  6. Everything from social media platforms to retail customer review sections?

The White Paper said:

“… companies of all sizes will be in scope of the regulatory framework. The scope will include… social media companies, public discussion forums, retailers that allow users to review products online, along with non-profit organisations, file sharing sites and cloud hosting providers.” (emphasis added)

The Initial Response said:

“To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions.”

The Ministers said:

Oliver Dowden: “We are a Europe leader in this. I have seen, as I am sure you have seen, the unintended consequences of good-intended legislation then having bureaucratic implications and costs on businesses that we want to avoid.

For example, in respect of legal online harms for adults, if you are an SME retailer and you have a review site on your website for your product and people can put comments underneath that, that is a form of social media. Notionally, that would be covered by the online harms regime as it stands. The response to that is they will go through this quick test and then they will find it does not apply to them. My whole experience of that for SMEs and others is that it is all very well saying that when you are sat [there and] have no idea what this online harms thing is, this potentially puts a big administrative burden on you. (emphasis added)

Are there ways in which we can carve out those sorts of areas so we focus on where we need to do it? Those kinds of arguments pertain less to illegal harms and harms to children. I hope that gives you a flavour of it.” (Q.88)

Q89 Damian Hinds: “Yes, quite so. I think in the previous announcement there was quite a high estimate of the number of firms or proportion of total firms that would somehow be counted in the definition of an online platform, which was rather a disturbing thought. It would be very welcome, what you can do to limit the scope of who counts as a social media platform.”

The Consequence: This exchange does shine a light on the expansive scope of the proposed legislation. The Secretary of State said that SME retailers with review sections were “notionally” covered. However, there was nothing notional about it. Retailer review sections were expressly included in the White Paper, as were companies of all sizes.

As the Secretary of State suggests, it is little comfort for an SME to be told “don’t worry, you’ll be low risk so it won’t really apply to you” if: (a) you are in scope on the face of it, and (b) it is left to the regulator to decide whether the duty of care should bear less heavily on some intermediaries than others.

There are, of course, many other kinds of non-social media platform intermediary who are in scope as well as SME retailers with review sections: apps, online games, community discussion forums, non-profits and many other online services. The Initial Response said: “Analysis so far suggests that fewer than 5% of UK businesses will be in scope of this regulatory framework.” Whether 5% is considered to be small or large in absolute terms (not to mention the apparent indifference to non-UK businesses), there has been no indication of the assumptions underlying that estimate.

  7. Will journalism and the press be excluded from scope?

The White Paper said:

Nothing. In a subsequent letter to the Society of Editors the then DCMS Secretary of State Jeremy Wright said:

“… as I made clear at the White Paper launch and in the House of Commons, where these services are already well regulated, as IPSO and IMPRESS do regarding their members’ moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”

The Initial Response said:

Nothing. It limited itself to general expressions of support for freedom of expression, such as:

“…freedom of expression, and the role of a free press, is vital to a healthy democracy. We will ensure that there are safeguards in the legislation, so companies and the new regulator have a clear responsibility to protect users’ rights online, including freedom of expression and the need to maintain a vibrant and diverse public square.”

The Ministers said:

“Caroline Dinenage: Obviously, we know that a free press is one of the pillars of our society, and the White Paper, I must say from the outset, is not seeking to prohibit press freedom at all, so journalistic and editorial content is not in the scope of the White Paper. Our stance on press regulation has not changed.” (Emphasis added)

“As for what has been in the papers recently, the Secretary of State wrote a letter to the Society of Editors, and this was about what you might call the below-the-line or comments section. They were concerned that that might be regulated. I think what the Secretary of State is saying is that, where there is already clear and effective moderation of that sort of content, we do not intend to duplicate it. For example, there is IPSO and IMPRESS activity on moderated content sections. Those are the technical words for it. This is still an ongoing conversation, so we are working at the moment with stakeholders to develop proposals on how we are going to reflect that in legislation, working around those parameters.” (Q.558)

“Stuart C. McDonald: But there is no suggestion that below-the-line remains unregulated. It is where that regulation should lie that is the issue.

Caroline Dinenage: Exactly.” (Q.559)

The Consequence: There are three distinct issues around inclusion or exclusion of the press from the regulatory scope of the Bill:

  1. User comments on newspaper websites.  On the face of it, news organisations would be subject to the duty of care as regards user comments on their websites. The position of the government appears to be that whether the duty of care would apply would depend on whether the comments are already subject to another kind of regulation (or at least the existence of “clear and effective moderation”). Potentially, therefore, newspapers that are not regulated by IPSO or IMPRESS would be in scope for this purpose. Whether this demarcation would be achieved by a hard scope exclusion written into the Bill is not clear.
  2. Journalistic or editorial material. Whilst the Minister may say that the government’s stance on press regulation has not changed, her statement that journalistic and editorial content is not “in the scope” of the White Paper is new — at least if we are to understand that as meaning that the Bill would contain a hard scope exclusion for journalistic or editorial content. Previously the government had said only that such content would not be affected by the regulatory framework. A general exclusion of journalistic or editorial material would on the face of it go much wider than newspapers and similar publications. It would be no surprise to find this statement being “clarified” at some point in the future.
  3. Newspaper social media feeds and pages. Newspapers and other publications maintain their own pages, feeds and blogs on social media and other platforms. Newspapers would not themselves be subject to a duty of care in relation to their own content. But as far as the platforms are concerned the newspapers are users, so that their pages and feeds would fall under the platforms’ duty of care. As such, they would be liable to have action taken against their content by a platform in the course of fulfilling its own duty of care.

The government has said nothing about whether, and if so how, such press content would be excluded from scope. A general exclusion of “journalistic or editorial” material, if the government is serious about one, would achieve this. However, that would create immense difficulties around whether a particular feed or page is or is not journalistic or editorial material (what about this Cyberleagle blog, or the Guido Fawkes blog, for instance?), and around how a platform is supposed to decide whether any particular item of content is or is not in scope.

  8. End-to-end encryption

The White Paper said:

Nothing. (Although the potential for the duty of care to be applied to prevent the use of end-to-end encryption was evident.)

The Initial Response said:

Nothing.

The Ministers said:

Baroness Williams: “[Facebook] then announced that they were going to end-to-end encrypt Messenger. That, for us, is gravely worrying, because nobody will be able to see into Messenger. I know there is going to be a Five Eyes engagement next week, and I do not know if the Committee knows, but the Five Eyes wrote to Mark Zuckerberg last year, so worried were we about this development.” (Q.538)

Q566 Chair: “On that basis, does end-to-end encryption count as a breach of duty of care?

Baroness Williams: It is criminal activity that would breach the duty of care. Allowing criminal activity to happen on your platform would be the breach of duty of care. End-to-end encryption, in and of itself, is not a breach of duty of care.

Chair: Presumably, for this regulation to have any bite at all, they will have to be able to take some enforcement against the policies that fail to prevent criminal activity. On that logic, introducing the end-to-end encryption, if it knowingly stops the company from preventing illegal activity—for example, the kind of online child abuse you have talked about—that would surely count as a breach of duty of care.

Baroness Williams: I fully expect that that is what some of the Five Eyes discussions, which will be happening very shortly, will look at.”

The Consequence: This is the first indication that the government is alive to the possibility that a regulator might be able to interpret a duty of care so as to affect the ability of an intermediary to use end-to-end encryption. The “in and of itself” phraseology used by the Minister appears not to rule that out. This issue is related to the question of how the legislation might apply to private messaging providers, a topic on which the government has consulted but has not yet published a conclusion.

  9. Identity verification

The White Paper said:

“The internet can be used to harass, bully or intimidate. In many cases of harassment and other forms of abusive communications online, the offender will be unknown to the victim. In some instances, they will have taken technical steps to conceal their identity. Government and law enforcement are taking action to tackle this threat.”

“The police have a range of legal powers to identify individuals who attempt to use anonymity to escape sanctions for online abuse, where the activity is illegal. The government will work with law enforcement to review whether the current powers are sufficient to tackle anonymous abuse online.”

“Some of the areas we expect the regulator to include in a code of practice are:

Steps to limit anonymised users abusing their services, including harassing others. …

Steps companies should take to limit anonymised users using their services to abuse others.”

The Initial Response said:

Nothing.

The Ministers said:

Q25 John Nicolson: Would you like to see online harms legislation compel social media companies to verify the identity of users, not of course to publish them but simply to verify them before the accounts are up and running?

Oliver Dowden: There is certainly a challenge around, as you mentioned, bots, which are sometimes used by hostile state activity, and finding better ways of verifying to see whether these are genuine actors or whether it is co-ordinated bot-type activity. That is through online harms but there is obviously a national security angle to that as well.

Q530 Ms Abbott: “Finally, would you consider changing the regulation, so you could post anonymously on a website or Twitter or Facebook, but the online platform would have your name and address? In my experience, when you try to pursue online abuse, you hit a brick wall because the abuser is not just anonymous when they post, the online platform doesn’t have a name and address either.

Caroline Dinenage: That is a really interesting idea. It is definitely something that we have been discussing. With regard to the online harms legislation that we are putting together at the moment, we have said very clearly that companies need to be much more transparent. They need to set out standards and they need to clarify what their duty of care is and to have a robust complaints procedure that people can use and can trust in. That is why we are also appointing a regulator that will set out what good looks like and will have expectations but also powers to be able to demand data and information and to be able to impose sanctions on those that they do not feel are abiding by them.

Q531 Chair: What does that actually mean? Does that mean that you think that the regulator should have the power to say that social media companies should not allow people to be … [a]nonymous to the platform?

Caroline Dinenage: This is something that we are considering at the moment. There are a number of things here. In the online harms legislation, the regulator will set out their expectations.

Chair: We can’t devolve everything to the regulator. Something like this is really important—should social media companies be allowed to not know who it is that is using their platforms? That feels like a big question that Parliament should take a view on, not something we just hand over to a regulator and say, “Okay, whatever you think,” later on.

Caroline Dinenage: Yes, exactly. That is why we are considering it at the moment, as part of the online harms legislation, and that, of course, will come before Parliament.”

Q545 Tim Loughton: “… If I want to set up a bank account and all sorts of other accounts, I must prove to the bank or organisation who I am by use of a utility bill and other things like that. It is quite straightforward. What is the downside of a similar requirement being enforced by social media platforms before you are allowed to sign up for an account? This is an issue that we have looked at before on the Committee. Many of us have suggested that we should go down that route. I gather that it already happens in South Korea. You say that you are looking at it, Minister Dinenage. What, in your view, is the downside of having such scrutiny?

Caroline Dinenage: You make a very compelling argument, Mr Loughton. A lot of what you said is extremely correct. The only thing we are mulling over and trying to cope with is whether there is any reason for anonymity for people who are victims, who want to be able to whistleblow, and who may be overseas and might not want to identify themselves because they fear for their lives or other harm. There are those issues of anonymity and protecting someone’s safety and ability to speak up. That is what we are wrestling with.

Q546 Tim Loughton: By the same token, you could have somebody with a fake identity who is falsely whistleblowing or pushing around propaganda, so it cuts both ways. I fail to see the downside of having a requirement that you have to prove who you are—not least because we know what happens when people are caught and have their sites taken down. Five minutes later, they set up another new anonymous site peddling the same sort of false information.

Caroline Dinenage: You make a very compelling argument. This is such an important piece of legislation, and we have to get it right. As I say, it is world-leading. Everybody is looking at us to see how we do it. We need to make sure that we have taken into consideration every angle, and that is what we are doing at the moment.”

The Consequence: Identity verification is evidently an issue that is bubbling to the surface. The most fundamental objection is that the right of freedom of expression secured by Article 19 of the Universal Declaration of Human Rights is not conditioned upon identity verification. It does not say:

“Everyone has the right to freedom of opinion and expression upon production of any two of the following: driving licence, passport, recent utility or council tax bill…”.

In South Korea, legislation imposing online identity verification obligations was declared unconstitutional in 2012.

The Home Affairs Committee raised, to the best of my knowledge for the first time in any Parliamentary deliberation on the Online Harms project, the question of what should be decided by Parliament and what delegated to a regulator. That is not limited to the question of identity verification. It is an inherent vice of regulatory powers painted with such a broad brush that many concrete issues will lie hidden behind abstractions, to surface only when the regulator turns its light upon them – by which time it is far too late to object that the matter should have been one for Parliament to decide. That vice is compounded when the powers affect the individual speech of millions of people.

  10. Extraterritoriality

The White Paper said:

“The new regulatory regime will need to handle the global nature of both the digital economy and many of the companies in scope. The law will apply to companies that provide services to UK users.” (6.9) (emphasis added)

“We are also considering options for the regulator, in certain circumstances, to require companies which are based outside the UK to appoint a UK or EEA-based nominated representative.” (6.10)

The Initial Response said:

Nothing of relevance.

The Ministers said:

“Q569: Andrew Gwynne: Presumably the regulations will apply to all content visibly available in the UK—is that correct?

Baroness Williams: Yes.”

The Consequence: Charitably, perhaps we should assume that the Minister misspoke. There is a vast difference between providing services to UK users and mere visibility in the UK. Given the inherent cross-border nature of the internet, asserting a country’s local law against content on a mere visibility basis is tantamount to asserting worldwide extraterritoriality.

It would be more consistent with the direction in which internet jurisdictional norms have moved over the last 25 years to apply a test of whether the provider is targeting the UK.

This post originally appeared on the Cyberleagle Blog.