The Online Harms White Paper proposes to subject companies that allow users to share or discover user-generated content or interact with each other online (“platforms”) to a new “duty of care”. The intention is to make platforms take more responsibility for protecting users against a variety of “online harms”.

A number of commentators have raised issues about the utility of the concept of a “duty of care” in this context. Although I share a number of these concerns, the issues have been fully discussed elsewhere (see, for example, Graham Smith’s posts here and here).

However, I wish to deal with two issues concerning the approach of the White Paper to “harms”: the distinction between “individual” and “societal” harms and the approach to privacy harms.

Individual and Societal Harms

The first issue concerns the distinction between “individual” and “societal harms”. Table 1 lists the “Online Harms in scope”. Some of these harms concern conduct directed at particular individuals (for example, “Revenge Pornography” or “Cyberstalking and trolling”). Other harms are “societal” in nature (for example, “Content illegally uploaded from prison” or “Promotion of FGM”).

The White Paper’s failure to draw a clear distinction between these two types of harms has led to a number of concerns about its general approach. These can be avoided if a clear distinction is drawn.

“Societal harms” generally depend on the objective nature of the content and can, therefore, be clearly defined in a code.

“Individual harms” are, by contrast, fact-specific – whether someone is engaged in harassment depends on an assessment of the quality of their conduct and its impact on the “victim”. It is, therefore, difficult to lay down general regulatory rules about protection against “individual harms”. Such protection is usually achieved by fast and effective complaints mechanisms. The legislation should spell out the different approaches to be taken to the two types of harm.


Privacy Harms

The second issue does not appear to have been the subject of any substantial comment to date. This is the apparent exclusion of “invasion of privacy” from the harms against which users should be protected. The White Paper lists four kinds of harm which are excluded from scope. The second of these is “all harms suffered by individuals as a result of breaches of the data protection legislation”.

Harms caused to individuals as a result of invasions of privacy do not, strictly speaking, “result directly” from breaches of data protection legislation: they result from the fact that private information is made available to the public. Nevertheless, this “exclusion” has been generally understood to exclude “invasions of privacy” from the scope of the regulation proposed in the White Paper and I will approach it on this basis.

This exclusion is particularly concerning because harms resulting from “invasions of privacy” are among the most serious harms caused by online activity. The most extreme case is that of “revenge porn” (the posting of private sexual images with intent to cause distress). However, there are many other cases of invasion of privacy by the posting of other private information or photographs. Such posting is often part of a course of conduct aimed at causing distress to an individual, in which case it may constitute “harassment” or “trolling”. But the posting of a single piece of private information may be extremely damaging to an individual and may not be sufficient to constitute harassment, trolling or cyberbullying. Private messages may be made public; private information about health or finance may be disclosed; private sexual information of a non-photographic nature may be revealed. All of these are, apparently, outside the scope of the White Paper.

The answer in the White Paper to these points is, apparently, that it is unnecessary for a new Online Regulator to deal with these issues because they fall under “data protection law” and are, therefore, regulated by the Information Commissioner’s Office (“ICO”).

In my view, there are a number of problems with this approach:

  • It is inconsistent. A number of the other “harms” identified in the White Paper may also constitute breaches of data protection law: for example, revenge pornography, harassment and cyberstalking, cyberbullying and trolling, and coercive behaviour or intimidation. The first of these will inevitably involve breaches of data protection law and the others are likely or very likely to do so. Yet these are included within the scope of platform regulation, whilst invasions of privacy are, apparently, excluded.
  • It risks different regulatory standards being applied to similar conduct. If some types of misuse of personal data are subject to the Online Regulator and some are subject to the ICO, there is a clear risk that the two regulators will take different approaches to similar types of conduct. It makes regulatory and practical sense for a single regulator to deal with all the “harm” issues arising out of the operation of the platforms.
  • The ICO has a huge regulatory remit and limited resources. The regulation of privacy breaches by platforms has, perhaps understandably, not been a priority area. Although the GDPR has been in force for over a year the ICO has taken no regulatory action in relation to individual privacy issues concerning platforms. It appears that the only action it has taken in this area concerns the misuse of private information in the context of political campaigning.

The White Paper proposes that the Online Regulator will have an obligation to protect the right to privacy of users (para 5.12). It appears that this obligation will not extend to protecting the right to privacy of those whose privacy is invaded by users. This is a serious omission which should be rectified in the legislation.

Hugh Tomlinson QC is a member of the Matrix Chambers media and information practice group.