The much-anticipated Online Safety Bill 2022 is currently making its way through Parliament. At the time of writing, the Bill is at the Report Stage in the House of Lords. It is not yet known when it will come into force.

The Online Safety Bill, as it currently stands, will impose a duty on user-to-user services, defined as ‘any service that enables content generated, uploaded or shared by one user to be encountered by another user’, to remove illegal content from their sites.

Although it is not yet fully known who will fall within the scope of the Online Safety Bill, with the Secretary of State still to set the boundaries, it is likely that many of the social media platforms we use today will have to adhere to the regulations set out in the Bill.

Under the Bill, platforms will be under an obligation to interpret the criminal law of the United Kingdom and ensure that anything illegal in the UK is not displayed on their sites. There is, however, a hierarchy: ‘priority illegal content’, such as terrorist material and child sexual exploitation material, imposes a higher obligation on sites to ensure such content is not available to view. The Online Safety Bill also creates several ‘new’ criminal offences, including the prohibition of sending threatening or false material online. An individual who sends such material could be liable for a fine or, at worst, imprisonment. A further criminal offence which has been mooted, though not yet accepted, is the criminalisation of encouraging another to self-harm.

Under the proposed new offence, if accepted in the amendment put forward by the House of Lords, individuals could face imprisonment for sending content which ‘intentionally encourages another to self-harm’. Though this could be considered a positive step forward in protecting vulnerable people online, particularly in light of the tragic death of Molly Russell, we need to be careful not to criminalise or judge those in distress. These concerns have been raised by the National Survivor User Network:

‘We think that several key aspects of online self-harm content, related to malicious intent, peer support, and the nature of sharing one’s own experiences, have been under-explored and inadequately considered.’

There is a thin line between content that actively encourages another to self-harm and content in which a person shares their own story of survival.

Arguably, the proposed offence does acknowledge this with the inclusion of the word ‘intent’: the police, the Crown Prosecution Service, magistrates and judges would need to be satisfied that the aim or purpose of the material sent was to encourage another to self-harm. And although ‘intent’ is found in many criminal offences in the UK, it has proven incredibly difficult for those working in the criminal justice system to interpret. How, then, do we expect social media platforms or AI technology to interpret this offence? Under the Bill as it currently stands, they will be under an obligation to do so.

For example, would AI or human moderators be able to distinguish an image of a person with self-harm scars from content that could be said to encourage another person to self-harm? What about content that discusses another person’s experiences of self-harm? Or content that offers peer-to-peer support? Intent is incredibly difficult to establish in the criminal justice system, where evidence is gathered before a decision is made. This takes time. Time which social media platforms will not have under the Online Safety Bill. Decisions will need to be made instantly. Currently, human moderators have seconds to decide if content should remain on a site or be removed. AI also has its issues, with Facebook coming under criticism in the past for removing the American Declaration of Independence from its platform after AI flagged it as hate speech.

In situations where AI or moderators are asked to consider whether content related to self-harm breaches the criminal law framework of the UK, the intent behind the sharing of such material might not always be obvious. Consequently, there is a real risk that those sharing their experiences of self-harm online will have their content removed by platforms, for fear that leaving such content up could result in heavy fines. This imposes a moral judgement on the individual. As a society, we have come a long way in discussions around mental health. We need to be sure that we are not taking a step backwards. We also need to be sure that we do not push vulnerable people to smaller sites, creating echo chambers where no moderation exists.

Laura Higson-Bliss, Lecturer in Law, Keele University