On 7 July 2022, the Law Commission published its final report on intimate image abuse (IIA), recommending the creation of a comprehensive criminal offence that provides effective protection against all manifestations of IIA. The report should be commended for addressing the existing patchwork of IIA offences, which are not fit for purpose and leave many instances of abuse unpunished.

For example, under current law, upskirting is a criminal offence but downblousing is not; nor is sharing an intimate image that has been altered (aka “deepfake porn”). This comment points to one particular inconsistency in the report as it relates to deepfake intimate image abuse (DFIIA).

DFIIA is the digital creation of sexual photographs or videos where the facial or bodily features of a victim-survivor are mapped on to the face and body of someone engaging in a sexual act, usually an adult actress. This form of abuse violates the sexual privacy and autonomy of both the victim-survivor whose face is used and the adult actress whose face is erased.

Despite acknowledging that the making of non-consensual deepfakes is a “violation of the subject’s sexual autonomy,” the report concludes that the act of making intimate images without consent should not fall within the scope of the criminal law because the level of harm is not serious enough [4.176]. The report contends that the harm crystallises only when such an image is shared [4.9; 4.282]. It therefore recommends that DFIIA be criminalised as a sharing offence: the behaviour is deemed criminal only when the non-consensual deepfake pornographic material is shared.

Drawing on the relevance of knowledge, the report finds that victim-survivors of DFIIA who are unaware that an image has been made of them suffer little or no harm. This, apparently, is distinct from being unaware that an intimate image has been taken, because, according to the report, taking violates privacy in a more tangible way and violates sexual autonomy regardless of knowledge [4.177].

The apparent lack of harm associated with the simple making of non-consensual deepfake pornography is then balanced against the perpetrator’s right to freedom of expression; the conclusion is that a making offence would be a disproportionate interference given the harm (or lack thereof) caused [4.215].

So, the lack of harm and the perpetrator’s freedom of expression rights are the justifications given for not creating a making offence.

The inconsistency emerges from the later recommendation that proof of actual harm should not be an element of intimate image offences (Recommendation 31). As a subjective concept, harm is difficult to define, particularly in the context of IIA where it is likely to be non-physical [9.8]. Including a harm element would serve as a barrier to successful prosecutions [9.28]. Through this recommendation, the report indirectly asserts that the wrong is the act of IIA, not the harm caused.

The making of DFIIA breaches the victim-survivor’s right to choose who sees them as a sexual being in a tangible way, regardless of their knowledge. The harm is inherent in the gendered context within which DFIIA operates, where women’s bodies are chopped and changed to serve the maker’s sexual gratification or to exert power and control over the victim-survivor. More than a simple fantasy, DFIIA requires a level of involvement by the maker that is worthy of criminalisation. Following the report’s own logic, the wrong is in the act, not the harm that ensues.

The argument that a simple making offence would be difficult to enforce is unconvincing. Criminal laws are tools to secure justice for victims. A making offence would simply equip victim-survivors with options. It would also set a standard that the disregard for sexual autonomy reflected in DFIIA will not be tolerated, just as the making of an indecent image of a child is condemned under section 1 of the Protection of Children Act 1978.

Moreover, there is an inherent risk that, once made, DFIIA material can be shared or leaked, via hacking or otherwise. The maker is responsible for this increased risk. Their acts expose the victim-survivor, which warrants liability.

The perpetrator’s freedom of expression rights are justifiably infringed by a making offence because the non-consensual making of deepfake pornographic material is, by the report’s own logic, inherently wrong. Furthermore, the perpetrator’s freedom of expression rights must be balanced against those of the victim-survivors. The report fails to consider how DFIIA impacts the free speech rights of women and minority voices. McGlynn and Rackley note that the harassment, abuse and victim-blaming typically directed at women in instances of intimate image abuse risk extinguishing the consensual production of private intimate images as a form of sexual expression (at 15).

In the deepfake context, the risk of self-censorship is even more pronounced. DFIIA weaponises everyday online expression: innocent pictures uploaded to social media can be manipulated to create explicit, graphic and violent scenes. The reality of DFIIA may well produce chilling effects as female and minority voices withdraw from the online environment. Jonathon Penney has shown that, in the face of online abuse, women are more chilled in their speech than men (at 11). Where intimate images are shared without consent, women are likely to self-censor and change how they express themselves online: they become less controversial, far more muted, and connect with fewer individuals (at 59). They are also more likely to withdraw from online activities altogether, including shutting down their accounts. The practice and reality of DFIIA thus has a major impact on the victim-survivor’s right to express themselves – both as a sexual being to those they choose, and as a citizen online. The report does not give this enough attention.

The report is a welcome attempt to patch up the piecemeal laws that are failing victim-survivors of intimate image abuse. But its recommendations on DFIIA fall short of the report’s own framework and fail to appreciate the extent of this form of abuse. England and Wales could be the first jurisdiction in the world to create a making offence, criminalising the creation of non-consensual deepfake pornographic material rather than just the sharing or distribution of that content. In doing so, it would set a standard that establishes society’s respect for the sexual autonomy, privacy and expressive rights of the women and minority voices who are most frequently targeted by this form of abuse.

Colette Allen is the host of Newscast on Dr Thomas Bennett and Professor Paul Wragg’s The Media Law Podcast (@MediaLawPodcast).