Non-consensual pornography, commonly referred to as image-based sexual abuse, is easily and freely available on pornography websites. Yet, to date, responses to this problem have been partial and fragmented. That may be about to change: proposals to impose obligations on pornography sites to reduce the extent of this unlawful material have been inserted into the European Parliament’s negotiating position on the Digital Services Act (DSA) as Article 24b.

This will now be discussed in the trilogue meetings between the European Parliament, the Council and the European Commission to finalise the text of the instrument.

Similar in scope to the UK’s Online Safety Bill, the DSA would introduce new rules to ensure social media and other large internet companies better manage the content on their sites. The proposed DSA will apply to intermediary services including online platforms, social media services, marketplaces and internet service providers, with different obligations applying to the different types of service. In addition to the general measures applying to all social media platforms, these proposals would introduce specific requirements for platforms primarily used for sharing user-generated pornographic content, with the aim of reducing the amount of non-consensual pornography on those sites.

In this post, we assess these proposals. Our comments comprise two parts:

  • why this is important and why regulation is justified – looking at the prevalence and impact of image-based sexual abuse; and
  • the provision on user-generated pornography and how it fits within the DSA and the EU fundamental rights framework.

Why is this important?

Prevalence

In understanding why action is needed, the prevalence of abuse is a critical factor. Studies from across the world are revealing the alarming extent of image-based sexual abuse, a term used to cover all forms of taking or sharing intimate or sexual images without consent, including threats to share such images and altered images, often known as ‘fakeporn’ or ‘deepfakes’. One survey covering Australia, New Zealand and the United Kingdom, for example, found that 1 in 3 participants had experienced at least one form of image-based sexual abuse, with 1 in 5 experiencing threats to share.

This maps onto experiences across the EU. In 2020 in Ireland, over 100,000 images of women and girls were leaked online, with similar websites uncovered in Italy where thousands of users shared sexual images without consent. Unsurprisingly, a survey from HateAid found that 30% of women across the EU fear that fake intimate images of them may be shared without their consent. For some, the situation is even worse, with victimisation rates higher among younger people (aged 18-25), sexual minorities, disabled people, and black and minority ethnic individuals.

Image-based sexual abuse is part of the broader range of online abuses facing women and girls. The EU’s Agency for Fundamental Rights, for example, reported that 1 in 10 women have already experienced a form of cyber violence since the age of 15. As with intimate image abuse, younger women and girls are particularly affected. The 2020 World Wide Web Foundation report, surveying 180 countries, found that 52% of young women and girls had experienced online abuse, including intimate image abuse.

This situation has worsened during the COVID-19 pandemic. In the UK, reports of abuse to the Revenge Porn Helpline doubled in 2020, and online abuse more generally increased during the pandemic, with the prevalence and harms worse for black and minoritised women.

These studies demonstrate that online abuse, including intimate image abuse, is pervasive and is a particularly gendered phenomenon. Women and girls experience higher levels of online abuse and image-based abuse and the vast majority of perpetrators of image-based sexual abuse are men.

Life-shattering and life-threatening harms  

Image-based sexual abuse can result in life-shattering and potentially life-threatening harms. This impact is experienced as more significant by women, and by victims from minority ethnic and religious communities, and those identifying as LGBTQI. Many victims describe their experiences as a form of sexual assault; the trauma and adverse impacts are similar to those of survivors of sexual violence.

Some women describe their experiences as a form of ‘social rupture’ – a devastating rupture of their lives with impacts that are all-encompassing and pervasive, radically altering their life experiences, relationships and activities, with deep and long-lasting psychological impacts. They divide their lives in terms of ‘before’ and ‘after’ the abuse. Victims talk about how ‘my whole world just crumbled’; of ‘a nightmare … which destroyed everything’; and of how the abuse ‘obviously does define my life now … it has completely changed my life in horrific ways’. Victims experience profound isolation from friends and family due to breach of trust and victim-blaming attitudes.

The abuse also has profound impacts on women’s professional and economic lives, as many withdraw from online activities and social media to try to protect themselves from further abuse and harassment. Some victims are sacked from employment and struggle to find new work; others go so far as to change their names to avoid further harassment and abuse. Women also self-censor online, restricting their online activities, limiting their contributions to civil society and compromising their professional lives.

Overall, all women are adversely affected: they are held responsible for managing the risks of abuse and must alter their behaviours to try to prevent abuse and/or to report it.

Non-consensual imagery on mainstream pornography platforms

The potentially devastating harms often stem from the abusive material being ever-present on the internet and being distributed on mainstream pornography platforms accessed by millions every day. Despite the large pornography platforms stating that they have policies against non-consensual material, such material is easily and freely available, organised into genres and categories such as ‘upskirting’, ‘spycams’, ‘hidden cams’, ‘revenge porn’, ‘leaked’ and ‘stolen’, among many other terms. This easy availability of non-consensual pornography on mainstream sites legitimises and normalises this form of abuse.

In the largest study to date of online pornography, 1 in 8 titles on the front page of the most popular websites described sexually violent material, including image-based sexual abuse. This research reviewed the landing pages – the material promoted to a first-time user, including young teenagers – meaning that this is the material that the porn companies are actively choosing to showcase to new users. The easy availability of this material is in direct contravention of the platforms’ own Terms & Conditions, showing the need for greater regulation.

Swift removal of non-consensual imagery from the internet can significantly reduce the harms and harassment experienced by victims. Yet currently, victims report significant delays in getting material removed from porn sites, or being ignored entirely. The time is ripe, therefore, to consider whether more can be done to reduce the prevalence of intimate image abuse, especially on pornography sites.

Regulating pornography platforms used for user-generated porn

The Commission’s proposals for the DSA addressed neither the gendered nature of online abuse nor the extent of non-consensual pornography available online.

Amendments were agreed by the European Parliament to strengthen the DSA in relation to user-generated pornography, resulting in a new clause: Article 24b DSA. This provision introduces particular obligations on online platforms primarily used for the dissemination of user-generated pornographic content. It requires those platforms to take technical and organisational measures to ensure that:

  1. those disseminating such content have identified themselves by email and mobile phone number;
  2. the platform has professional, appropriately trained human moderators; and
  3. there is an additional notification mechanism whereby victims may notify platforms of the dissemination of such content, and the content is removed without ‘undue delay’.

The following points can be made about Article 24b DSA.

Targeted measure specific to porn platforms

Article 24b is targeted at a sub-set of online platforms that are central to the problems identified above, due to the high risk of non-consensual pornography being disseminated via these sites. It does not cover sites on which the sharing of user-generated pornography is not a large part of the content available. Significantly, the provision does not regulate content. Rather, it provides safeguards for the well-being of those who are the subject of pornographic content and thereby protects against image-based abuse.

Limiting dissemination of non-consensual material

Article 24b introduces friction into the process of uploading and disseminating material: in other words, it makes uploading material slightly more time-consuming and difficult, with the aim of reducing the dissemination of non-consensual material. Such a measure recognises the business models of these sites, which incentivise them “to keep the process of uploading video content friction-free and to minimise moderation”.

Risk assessments

One of the DSA’s mechanisms for regulating online platforms is – in relation to very large online platforms (VLOPs) – to require risk assessments to be carried out, with the aim of identifying risks that are then to be mitigated. In practice, these new provisions will provide a minimum standard for some parts of those risk assessments, where the online pornography platform is a VLOP, and may help avoid debates over whether specific platforms have responded appropriately to risk assessment obligations.

Verification processes assist law enforcement

The obligation to verify identity via email and mobile phone number is limited: it applies only to those who upload or share pornographic material, not to all users. The controls are thus imposed on those who contribute to the problem by uploading non-consensual material.

Requiring those uploading content to verify themselves through this double opt-in strengthens the ability of victims (and, where relevant, the police) to assert their rights by increasing the likelihood that some contact information for an account disseminating relevant material will be available. Difficulty in identifying users is one of the reasons given for weak police enforcement of relevant laws. Such a requirement does not mean that relevant due process safeguards found in national law for users would or should be subverted.

Nonetheless, the obligation will also affect those who legitimately disseminate pornography, including sex workers. In this context, the following points should be noted. The requirement is not for a real name policy, nor for the email/mobile number to be made public. Moreover, there remains the possibility within the wording for a user to identify themselves through a secondary or business email and/or mobile number.

Requiring trained content moderation assists identification of non-consensual imagery

Quantitative and qualitative concerns have been raised about platforms’ moderation systems. Article 24b(b) responds to both of these issues, underlining the necessity for investment in human moderation. At this stage of development, automated solutions are unlikely to be able to identify non-consensual images; AI is notoriously weak at assessing context, and weaker still on images and emojis than on text.

In many problem domains (e.g. abuse of minorities and hate speech), civil society groups have highlighted the problems of inadequate training, with those acting as moderators failing to recognise content that is illegal or that contravenes platform rules. Further, systems relying on user complaints are unlikely to be effective: users searching for borderline legal material are unlikely to recognise or report illegal material.

Strengthened notification procedures and take-down

News reporting and academic research indicate that platforms’ notice and take-down processes are deficient, both in the absence of processes that specifically recognise intimate image abuse and in the slowness and ineffectiveness of responses.

While measures in the DSA are designed to improve notice and take-down, they apply to illegal content, and the illustrations of such material include ‘unlawful non-consensual sharing of private images’. This wording suggests that some non-consensual sharing of images may constitute ‘lawful’ content. There is great divergence amongst member states as to which forms of image abuse are criminalised, as well as differences regarding image rights.

The new Article 24b addresses these limitations by ensuring effective and swift notification and take-down relating to all non-consensual imagery. It covers content that contains images of the complainant, as well as fake images, reflecting the fact that ‘deepfakes’ are a serious and growing problem.

Human rights compliant

Finally, in assessing the suitability of measures to regulate online activity, it is vital that any proposals respect fundamental rights. Concerns about the freedom of expression and the privacy of a person using pornographic online platforms are clear; the rights of the subject of the intimate images must also be considered. In other words, priority cannot automatically be given to a specific right – such as uploaders’ freedom of expression – but the appropriate balance between conflicting rights must be determined in the light of relevant facts.

The EU Charter of Fundamental Rights refers back to the European Convention on Human Rights.  There, the protection of the right to privacy in Article 8 is very broad. It includes an individual’s physical and social identity, including the right to personal autonomy and personal development. States also have positive obligations to ensure respect for individuals’ psychological integrity which could include taking action against a range of harms such as bullying and intimate image abuse. The weight of this jurisprudence makes clear that not only is Article 8 engaged, but that intimate image abuse would be regarded by the Court as “a serious, flagrant and extraordinarily intense invasion of her private life”.

Of course, freedom of expression and privacy are both limited rights. An interference with either right can be justified if the intrusion is lawful, pursues a legitimate aim and is necessary in a democratic society. This last element is essentially a proportionality analysis, and in the context of Article 24b there are many factors to be weighed in the balance, including:

  • the speech is not of a highly protected nature (not political speech or journalism, but in some instances commercial speech);
  • an essential aspect of the victim’s private life is in issue and the breach is serious;
  • the victim’s own rights to freedom of expression are likely curtailed;
  • the measure is narrowly focussed in terms of platforms in scope;
  • it does not prohibit pornography, or prevent consensual sharing of images;
  • the identifiability obligation does not require the user to be named on the platform, and applies only to those who disseminate content – not all users;
  • while the moderation and take down obligations are a significant intrusion into speech, given the possibility for multiple further intrusions on each viewing, it is hard to imagine a remedy short of take down that could be appropriate.

In sum, and taking into account all of these features, it is likely this proposed provision is human rights compliant.

Conclusion: Reducing and preventing image-based sexual abuse

These measures in the EU’s Digital Services Act are a welcome recognition of the prevalence and harms of image-based sexual abuse. In particular, in seeking to reduce, and ultimately prevent, many cases of such abuse, these provisions tackle the core of the problem. They do so in a targeted way, focusing on those who may disseminate non-consensual pornography.

While criminal justice processes are vital in providing redress for many victims, in practice they can only address a small fraction of cases of image-based sexual abuse and necessarily only come into play once the abuse has taken place.

Ultimately, therefore, these measures, by seeking to reduce the incidence of abuse, have the potential to make a significant difference to the lives of victims. They may also have positive benefits across society as a whole, with women in particular fearing less the threat of having images taken or shared of them online and feeling freer to express themselves online.

Clare McGlynn is a Professor of Law at Durham University and Lorna Woods is Professor of Internet Law at the University of Essex.  The full text of their expert opinion prepared for HateAid examining these measures is available here.

This post originally appeared on the LSE Media Policy Project Blog and is reproduced with permission and thanks.