This post is the second of two on the draft Online Safety Bill. The first post, found here, set out the mechanics of the proposed bill in detail. This post summarises some of the civil society responses since the publication of the draft bill and attempts to evaluate how reasonable those responses are in light of the available information.

Does the bill go too far?

A recent report on freedom of expression online from the House of Lords, ‘Free for All? Freedom of Expression in the Digital Age’ (found here), recommends that the draft bill drop the duty to protect adults from contentious “legal but harmful” content. As detailed in the previous post, “category 1” services would have a duty under the draft bill to identify how their systems could cause adults to come into contact with user-generated content that is legal but nonetheless considered harmful. Further to that duty, they would be required to take proportionate steps to mitigate the risk of exposure to that harmful content. Given the possibility of adverse impacts on freedom of expression, especially from the potential for overzealous policing of this provision by category 1 services seeking to avoid liability, this has become one of the most controversial elements of the current draft bill.

The House of Lords report recommends that s. 11, implementing the adult safety duty, be dropped from the draft bill. As things stand, there are two ways in which content can be caught by the adult safety duty. First, under s.46(2), the relevant secretary of state can designate certain types of content as “priority content” by regulation. Second, under s.46(3)–(5), content for which there is a “material risk” of having a “significant adverse physical or psychological impact on an adult of ordinary sensibilities” is also considered “content that is harmful to adults”. Category 1 services must take proportionate steps to mitigate the likelihood of adults using their service coming into contact with these types of content.

In paragraph 182 of the report, the House of Lords voices concerns that s.11 is not “workable” and could cause “unjustifiable and unprecedented interference with freedom of expression”. Their primary recommendation is that where a type of content is sufficiently harmful, it ought to be “defined and criminalised through primary legislation”. This is in keeping with the Law Commission’s recent recommendations on reforming online communication offences. The Law Commission recommends introducing a new offence based on an intention to cause harm through communication, where harm is defined as “psychological harm, amounting to at least serious distress”.

The definitional concepts underpinning the adult safety duty itself have also been criticised. The notion of the adult of “ordinary sensibilities” has been developed from caselaw on privacy and the misuse of private information, as was clarified in Caroline Dinenage MP’s response to questions posed by Lord Gilbert of Panteg, chair of the Communications and Digital Committee in the House of Lords (and one of the authors of the report). However, as the report makes clear, the definition of the adult of ordinary sensibilities omits the requirement of reasonableness found in the caselaw. In paragraph 183, the House of Lords report suggests amending the legislation as it stands to define the at-risk adult as a “reasonable person of ordinary sensibilities”.

Furthermore, as the government currently interprets the bill, the definition of psychological harm does not require a clinical basis (as was also made clear in Dinenage’s letter). Thus, the responsibility falls entirely on the service provider to determine whether there is a risk that certain content will cause a somewhat nebulously defined “adverse” psychological impact. The report expresses fears that this could lead to the kind of over-zealous policing occasionally seen with Germany’s Network Enforcement Law (NetzDG).

Does the bill go far enough?

However, a number of civil society groups and charities have also expressed concerns that the Online Safety Bill fails to do enough to address online abuse. Glitch are a charity focused on combatting online abuse, particularly against women and marginalised people. They have been a significant voice calling for action on online abuse since their foundation in 2017, and have been involved in the process since the original Online Harms White Paper.

Along with Hope not Hate, Reset, the Antisemitism Policy Trust, Impress, Clean up the Internet, Demos, and Catch 22, Glitch has co-authored a report which welcomes the “legal but harmful” provisions in the bill but asks that the government do more to recognise the intersectional nature of online abuse. An intersectional perspective focuses on the way in which different marginalised communities experience abuse differently, and the way in which different characteristics can intersect to produce unique, and uniquely damaging, experiences of online abuse.

Evidence for the intersectional nature of abuse experienced online has been found in a number of polls and reports. For example, 41% of black respondents in the UK reported being the recipient of abusive emails; white respondents were around four times less likely to report being targeted in this way. Further, a 2017 poll for Amnesty International found that 21% of UK women had experienced online abuse or harassment. Of those women, 10% went on to leave social media platforms, 20% stopped sharing opinions on platforms, and 34% made their social media accounts private. These issues are researched and discussed in detail in Glitch’s recent report with the End Violence Against Women Coalition, The Ripple Effect, which looked at the effect of the pandemic on online abuse from an intersectional perspective.

These kinds of statistics arguably throw the debate over how the bill impacts on freedom of expression into a new light. As Graham Smith, author of Internet Law and Regulation and of Counsel at Bird and Bird, notes in ‘Speech vs Speech’, freedom of expression under the ECHR cannot simply be understood as a limit on the coercive power of the state. Recent ECtHR caselaw recognises that free expression rights are horizontal, meaning that governments have a positive obligation to protect individuals’ free expression rights from undue interference by other individuals.

The government has recognised this positive obligation in its full response to the Online Harms White Paper. Glitch’s particular concern is the draft bill’s failure to recognise the specifically intersectional nature of horizontal restrictions on free expression. Where online abuse disproportionately targets certain marginalised groups, causing them to stop exercising their free expression rights (for example by leaving social media or not sharing opinions), Glitch argue that this should be reflected in the Online Safety Bill.

Legal to Say, Legal to Type Campaign

Concern about over-policing of speech has been front and centre in the Index on Censorship’s Legal to Say, Legal to Type campaign. A strongly critical report argues that the introduction of the ‘duty of care’ model adopted by the bill would “mark the most significant change in the role of the state over free speech since 1695”.

The campaign’s fundamental concern is the asymmetry between the policing of online and offline speech. For example, as noted by Heather Burns here, universities could be obliged to host incendiary speakers due to freedom of expression requirements; yet a social media site that hosted a recording of the speaker’s talk could be obliged to take it down due to the risk of psychological harm from the content of the talk.

The report also expresses concern about the dangers of algorithmic content moderation. Large social media sites already rely heavily on algorithmic moderation to enforce their content policies, and the Online Safety Bill would undoubtedly increase its use. Algorithmic content moderation is infamous for lacking sensitivity to context. Furthermore, as the report notes, it tends to disproportionately target members of marginalised groups. This would likely lead to adverse downstream consequences for freedom of expression.

But this concern is rooted in a more fundamental reversal of the traditional structure of the policing of offensive speech, which Index on Censorship sees as inherent in the bill. Traditionally, restrictions on offensive speech are enforced through the courts after the speech has been delivered. Index on Censorship argues that the ‘duty of care’ model reverses this, placing the onus on social media companies to proactively remove speech that might be harmful. This would only be exacerbated by proactive decisions being taken by AI moderation systems with limited understanding of the meaning and context of the speech in question.

This reversal seems to be in line with the language used in the bill. Whereas the adult safety duty requires category 1 services to demonstrate how legal but harmful content will be “dealt with” (s. 11(2)), category 1 services have a duty merely to “have regard to” (s. 12(2)) the importance of freedom of expression and privacy.

Press Carve Out

The bill contains a significant exception for the press, such that “news publisher content” is exempt from the safety duties imposed on category 1 services. This has sparked concerns that the Online Safety Bill will indirectly create a regulatory regime for the press, determining which organisations are considered to create news publisher content and which are not.

A number of administrative and substantive conditions must be met for an organisation to be considered a producer of news publisher content. These require such organisations to be incorporated in the UK, to publish news content, to be subject to a standards code, and to have policies in place for dealing with complaints. As things stand, this would not include journalists expressing opinions outside such an organisation (such as by posting a tweet) or citizen journalists. A three-part post on Inforrm detailing how the press carve-out was secured can be found here.

Next Steps

The bill is currently undergoing scrutiny from a parliamentary pre-legislative scrutiny committee, the details of which can be found here. DCMS has also launched its own enquiry into the bill, focussing on its development and omissions (find details here). The deadline for the pre-legislative scrutiny committee’s report is 10 December 2021.

Rafe Jennings is an aspiring barrister with an interest in freedom of expression and privacy online

This post originally appeared on the UK Human Rights Blog and is reproduced with permission and thanks