Following on from our post in December 2018 which looked at ‘deepfake’ technology, below we consider the challenges posed by the use of such technology to create fake pornographic content and, in particular, what legal and practical recourse might be available to victims.
What is a deepfake?
A deepfake (being a portmanteau of ‘deep learning’ and ‘fake’) refers to video or audio recordings that are edited, using readily available technology, to create a fake clip of an individual saying or doing something which never happened.
This technology can be exploited for a multitude of sinister purposes: from doctoring videos of politicians so that they appear to take extreme stances, which could derail a campaign or create social unrest, to fabricating announcements by FTSE chairmen that could send share prices plummeting. In the case of some celebrities, it has been used to create fake pornographic material, known as “morph porn” (or “parasite porn”).
How does deepfake technology work?
At its simplest, deepfake technology involves feeding an individual’s data (such as existing photographs and video clips) into artificial intelligence and machine learning models, which learn to generate new data bearing the realistic characteristics of the original. Similar to Snapchat filters (which use face-morphing technology), the user can then use the software to combine and superimpose an individual’s face onto other content. The individual’s voice can also be edited using the same technology to create a convincing deepfake.
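Real deepfake systems use trained neural networks to synthesise the face itself, but the final superimposition step described above is, conceptually, just a masked blend of two images. The sketch below illustrates that compositing step only; the function name and toy images are our own illustration and do not reflect any particular app’s internals.

```python
import numpy as np

def superimpose(base, overlay, mask):
    """Alpha-composite an overlay (e.g. a generated face) onto a base image.

    base, overlay: H x W x 3 float arrays with values in [0, 1]
    mask: H x W float array in [0, 1]; 1 = take the overlay pixel, 0 = keep base
    """
    alpha = mask[..., np.newaxis]              # broadcast mask across colour channels
    return alpha * overlay + (1.0 - alpha) * base

# Toy 2x2 images: base is all black, overlay is all white.
base = np.zeros((2, 2, 3))
overlay = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0],
                 [0.5, 0.0]])                  # replace, keep, half-blend, keep

result = superimpose(base, overlay, mask)
print(result[0, 0])  # fully replaced pixel -> [1. 1. 1.]
```

In a genuine deepfake pipeline the `overlay` would be a frame-by-frame synthetic face and the `mask` would track the face region in each video frame, which is what makes the result convincing.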
This technology is widely accessible, with free and user-friendly deepfake apps available online, and whole communities dedicated to developing these technologies and keeping the deepfake movement alive. Today, creating a deepfake is almost as straightforward as creating a selfie.
Furthermore, the rate at which deepfake software is being developed and refined is far outpacing the technologies being developed to detect such content. This poses significant issues for the public, as the quality of deepfakes is becoming increasingly realistic and harder to identify. The ease of creating a deepfake is, therefore, contributing to their ever-increasing prevalence online.
“Morph porn”
The use of deepfakes to create fake pornographic content of non-consenting individuals is on the rise.
By way of example, at the date of this article, when searching for “deepfake”, the last result on the first page of Google directs you to a website dedicated to pornographic deepfake videos of celebrities. A direct search for such content returns numerous sites hosting morph porn.
The creation of morph porn is fundamentally derogatory and an unquestionable breach of the individual’s human right to privacy, and is commonly viewed as another form of revenge porn. Celebrities such as Scarlett Johansson, Emma Watson and, most recently, Gal Gadot are just a handful of those who have fallen victim to this form of online sexual abuse.
Yet the criminal offence of revenge porn, as set out in sections 33–35 of the Criminal Justice and Courts Act 2015, does not necessarily cover this content. This is because content which is private or sexual only by virtue of the alteration (i.e. where a non-private photograph of a celebrity’s face is superimposed onto a pornographic image) will, controversially, not be deemed private and sexual under section 35(5) of the 2015 Act.
What can be done?
Whilst there is no specific legislation dealing with deepfakes in the UK, lawyers have long advised clients on forged content. Below are a few of the legal remedies available to a victim of deepfakes.
Misuse of private information
A claim in misuse of private information may be viable. An individual has a legally recognised expectation that information which purports to concern activity of a sexual nature will remain private. Morph porn would almost certainly engage the individual’s Article 8 rights, even though the ‘private information’ is, for all intents and purposes, false (as established in McKennitt v Ash [2006] EWCA Civ 1714). There cannot be said to be any genuine public interest in that subject matter, and the law should be capable of being used to address any misuse of such information.
Harassment
The law of harassment covers repeated attempts to impose unwanted communications and contact upon a victim in a manner that could be expected to cause distress or fear in any reasonable person. A person who repeatedly creates and posts deepfake content online may therefore be liable for harassment. This was the case with city worker Davide Buccheri, who in 2018 was sentenced to 16 weeks’ imprisonment and ordered to pay £5,000 in compensation to his victim. He was convicted of harassment for photoshopping pictures of his victim, an intern at his firm, onto pornographic images and uploading them to adult websites after she turned down his romantic advances.
Defamation
Morph porn is created without the consent of the individual whose face is used and is inherently humiliating to the victim. Given the rapid development of deepfake technology and, in particular, its increasing quality, one can well envisage such content carrying a defamatory imputation capable of causing serious harm to the individual’s reputation.
General Data Protection Regulation (the “GDPR”)
The information contained within a deepfake would be considered personal data of the victim under Article 4(1) of the GDPR as it “relat[es] to an identified or identifiable natural person”. On this basis, the victim could enforce their rights as a data subject in accordance with GDPR, which might well include the right to rectification, the right to erasure and the right to restriction of processing.
Copyright
There may be grounds for asserting copyright over the image used in the deepfake, which is something that we considered in detail in our earlier article.
Issues with dealing with deepfakes
It is often extremely difficult to identify who is behind the creation or dissemination of the material, and online platforms are reluctant to hand over such information without some intervention from the court.
Furthermore, given the global nature of the internet, there will be jurisdictional issues to consider where, for example, content has been published, or is being hosted online, in a jurisdiction which does not have effective legal tools to deal with deepfakes.
Even if the details of the perpetrator are identified, he or she may not be the actual creator of the content, and may not be the only individual circulating the material online. The time and expense of trying to identify each publisher of the information can, therefore, escalate quickly.
Who is vulnerable?
Anyone can be the victim of a deepfake video: increasingly sophisticated software means that only a handful of photos is needed to make a convincing clip. Those in the public eye, such as politicians and celebrities, are most at risk. However, any public content from social media accounts can also be used, and ordinary members of the public have fallen victim to deepfakes and morph porn as well.
The consequence of being a victim of morph porn can be devastating, and the impact on the victim can vary greatly. As Scarlett Johansson pointed out in her interview with the Washington Post: “this doesn’t affect me as much because people assume it’s not actually me in a porno”. This raises an interesting point, as the majority of other victims may not be given the benefit of the doubt.
What can be done?
Whilst legislators globally are playing catch up to the emergence of fake news (including deepfakes), it is crucial that online platforms have adequate policies in place to prevent and respond to morph porn.
Online platforms such as Twitter, Tumblr and PornHub have already banned deepfakes, which has made reporting and removing content easier. Google has also added “involuntary synthetic pornography” to its ban list, meaning anyone can request that deepfakes of themselves be removed from the search engine. However, these tools all require a user to actively report the deepfakes.
This is, regrettably, something we have already had to manage for a client. Crudely photoshopped images, in which a female client’s face had been superimposed onto hardcore pornographic photographs, caused understandable distress even long after we had them removed.
Both legislators and platform providers face the key challenge of distinguishing between genuine and fake content, and often victims themselves are unaware that such content exists before it has been widely shared.
For victims of deepfakes, a comprehensive strategy utilising both legal and PR tools will be essential to help manage the situation.
This post originally appeared on the Himsworth Scott website and is reproduced with permission and thanks.