1. The draft Online Safety Bill is nothing if not abstract. Whether it is defining the adult (or child) of ordinary sensibilities, mandating proportionate systems and processes, or balancing safety, privacy, and freedom of speech within the law, the draft Bill resolutely eschews specifics.
2. The detail of the draft Bill’s preliminary design is to be filled in, in due course, by secondary legislation, with Ofcom guidance and Codes of Practice to follow. Even at that point, there is no guarantee that the outcome would be clear rules enabling a user to determine on which side of the safety line any given item of content might fall.
3. Notwithstanding its abstract framing, the impact of the draft Bill (should it become law) would be on individual items of content posted by users. But how can we evaluate that impact where legislation is calculatedly abstract, and before any of the detail is painted in?
4. We have to concretise the draft Bill’s abstractions: test them against a hypothetical scenario and deduce (if we can) what might result. This post is an attempt to do that.
A concrete hypothetical
Our scenario concerns an amateur blogger who specialises in commenting on the affairs of his local authority. He writes a series of blogposts (which he also posts to his social media accounts) critical of a senior officer of the local authority, who has previously made public a history of struggling with mental health issues. The officer says that the posts have had an impact on her mental health and that she has sought counselling.
5. This hypothetical scenario is adapted from the Sandwell Skidder case, in which a council officer brought civil proceedings for harassment under the Protection from Harassment Act 1997 against a local blogger, a self-proclaimed “citizen journalist”.
6. The court described the posts in that case as a “series of unpleasant, personally critical publications”, while noting that they were not factually untrue. It emphasised that nothing in the judgment should be taken as holding that the criticisms were justified. Nevertheless, and not doubting what the council officer said about the impact on her, in a judgment running to 92 paragraphs the court held that the proceedings for harassment stood no reasonable prospect of success and granted the blogger summary judgment.
7. In several respects the facts and legal analysis in the Sandwell Skidder judgment carry resonance for the duties that the draft Bill would impose on a user to user (U2U) service provider:
a. The claim of impact on mental health.
b. The significance of context (including the seniority of the council officer; the council officer’s own previous video describing her struggle with mental health issues; and the legal requirement for there to have been more than a single post by the defendant).
c. The defendant being an amateur blogger rather than a professional journalist (the court held that the journalistic nature of the blog was what mattered, not the status of the person who wrote it).
d. The legal requirement that liability for harassment should be interpreted by reference to Art 10 ECHR.
e. The significance for the freedom of expression analysis of the case being one of publication to the world at large.
f. The relevance that similar considerations would have to the criminal offence of harassment under the 1997 Act.
8. Our hypothetical potentially requires consideration of service provider safety duties for illegality and (for a Category 1 service provider) content harmful to adults. (Category 1 service providers would be designated on the basis of being high risk by reason of size and functionality.)
9. The scenario would also engage service provider duties in respect of some or all of freedom of expression, privacy, and (for a Category 1 service provider) journalistic content and content of democratic importance.
10. We will assume, for simplicity, that the service provider in question does not have to comply with the draft Bill’s “content harmful to children” safety duty.
The safety duties in summary
11. The draft Bill’s illegality safety duties are of two kinds: proactive/preventative and reactive.
12. The general proactive/preventative safety duties under S.9(3)(a) to (c) apply to priority illegal content designated as such by secondary legislation. Although these duties do not expressly stipulate monitoring and filtering, preventative systems and processes are to some extent implicit in e.g. the duty to ‘minimise the presence of priority illegal content’.
13. It is noteworthy, however, that an Ofcom enforcement decision cannot require steps to be taken “to use technology to identify a particular kind of content present on the service with a view to taking down such content” (S.83(11)).
14. Our hypothetical will assume that criminally harassing content has been designated as priority illegal content.
15. The only explicitly reactive duty is under S.9(3)(d), which applies to all in-scope illegal content. The duty sits alongside the hosting protection in the eCommerce Directive, but is cast as a positive obligation to remove in-scope illegal content upon gaining awareness of its presence, rather than (as in the eCommerce Directive) merely exposing the provider to potential liability under the relevant substantive law. The knowledge threshold appears to be lower than that in the eCommerce Directive.
16. There is also a duty under S.9(2), applicable to all in-scope illegality, to take “proportionate steps to mitigate and effectively manage” risks of physical and psychological harm to individuals. This is tied in some degree to the illegal content risk assessment that a service provider is required to carry out. For simplicity, we shall consider only the proactive and reactive illegality safety duties under S.9(3).
17. Illegality refers to certain types of criminal offence set out in the draft Bill, which would include the harassment offence under the 1997 Act.
18. The illegality safety duties apply to user content that the service provider has reasonable grounds to believe is illegal, even though it may not in fact be illegal. As the government has said in its Response to the House of Lords Communications and Digital Committee Report on Freedom of Expression in the Digital Age:
“Platforms will need to take action where they have reasonable grounds to believe that content amounts to a relevant offence. They will need to ensure their content moderation systems are able to decide whether something meets that test.”
19. Under the draft Bill’s definition of illegal content, that test applies not only to content actually present on the provider’s service, but also to kinds of content that might hypothetically be present on its service in the future.
20. That would draw the service provider into some degree of predictive policing. It also raises questions about the level of generality at which the draft Bill would require predictions to be made and how those should translate into individual decisions about concrete items of content.
21. For example, would a complaint by a known person about a known content source that passed the ‘reasonable grounds’ threshold concretise the duty to minimise the presence of priority illegal content? Would that require the source of the content, or content about the complainant, to be specifically targeted by minimisation measures? This has similarities to the long-running debate about ‘stay-down’ obligations on service providers.
22. The question of the required level of generality or granularity, which also arises in relation to the ‘content harmful to adults’ duty, necessitates close examination of the provisions defining the safety duties and the risk assessment duties upon which some aspects of the safety duties rest. It may be that there is not meant to be one answer to the question; that it all comes down to proportionality, Ofcom guidance and Codes of Practice. However, even taking that into account, some aspects remain difficult to fit together satisfactorily. If there is an obvious solution to those, no doubt someone will point me to it.
23. The “content harmful to adults” safety duty requires a Category 1 service provider to make clear in its terms and conditions how such content would be dealt with and to apply those terms and conditions consistently. There is a question, on the wording of the draft Bill, as to whether a service provider can state that ‘we do nothing about this kind of harmful content’. The government’s position is understood to be that that would be permissible.
24. The government’s recent Response to the House of Lords Communications and Digital Committee Report on Freedom of Expression in the Digital Age says:
“Where harmful misinformation and disinformation does not cross the criminal threshold, the biggest platforms (Category 1 services) will be required to set out what is and is not acceptable on their services, and enforce the rules consistently. If platforms choose to allow harmful content to be shared on their services, they should consider other steps to mitigate the risk of harm to users, such as not amplifying such content through recommendation algorithms or applying labels warning users about the potential harm.”
25. If the government means that considering those “other steps” forms part of the Category 1 service provider’s duty, it is not obvious from where in the draft Bill that might stem.
26. In fulfilling any kind of safety duty under the draft Bill a service provider would be required to have regard to the importance of protecting users’ right to freedom of expression within the law. Similarly it has to have regard to the importance of protecting users from unwarranted infringements of privacy. (Parenthetically, in the Sandwell Skidder case privacy was held not to be a significant factor in view of the council officer’s own previous published video.)
27. Category 1 providers would be under further duties to take into account the importance of journalistic content and content of democratic importance when making decisions about how to treat such content and whether to take action against a user generating, uploading or sharing such content.
Part 2 of this post, dealing with the implementation of the duties, will be published tomorrow.
This post originally appeared on the Cyberleagle Blog and is reproduced with permission and thanks.