Deepfakes: all is not what it seems – Lorna Caddy

This piece looks at machine learning methods being used to create a deepfake (a portmanteau of ‘deep learning’ and ‘fake’). While the advances in the technology are exciting news for the film industry, the potential for misuse is significant. Within a very short time frame, the technology allows a film to be created of an individual appearing to say and do things that she has never said or done.

Commentators on the technology have thus far mainly concentrated on the potential for fake news in the political arena. But there are commercial implications too. What if the technology were used to create a fake announcement by the chairperson of a listed company? The ensuing havoc could be hard to contain, particularly given the ease with which such a film could be posted on social media and quickly percolate via the “share” button. Share prices could well be affected.

Equally, what if the technology were used to contrive a situation where a well-known person appeared to be cheating on their partner?

Developments in technology

It has long been possible to manipulate film featuring living human beings to create fictional content. You may recall the scene in ‘Forrest Gump’ where Forrest appears to talk with President Kennedy. Obviously, Tom Hanks and President Kennedy were never in a room together. This scene was created in the early 1990s through painstaking editing of footage of the former President, combining it with footage of Tom Hanks. Since then, technology has greatly improved both the quality of video content and the ease with which it can be created and edited. With this new technology (particularly machine learning, a field of artificial intelligence) comes the rise of the deepfake.

In the early Autumn, three members of the US Congress (Adam B. Schiff, Stephanie Murphy and Carlos Curbelo) described deepfakes as “[c]onvincing deceptions of individuals doing or saying things they never did, without their consent or knowledge”. Their main concern, raised in a letter to the Director of National Intelligence, is that “[b]y blurring the line between fact and fiction, deep fake technology could undermine public trust in recorded images and videos as objective depictions of reality.”

Computer-generated film featuring seemingly real replicas of living human beings can now be created relatively easily, in some cases in real time, using machine learning based technologies. Special effects and dubbing are no longer principally the domain of film studios. Some of these tools are readily available to the public online. It has never been so easy to create fictional yet realistic content.

Earlier this year, the Max Planck Institute for Informatics reported on its work on “deep video portraits”, a system using artificial intelligence to edit the facial expressions of actors so that they accurately match dubbed voices. The developments have been pursued with the film industry in mind. Hyeongwoo Kim from the Max Planck Institute for Informatics explains how the technology works: “It works by using model-based 3D face performance capture to record the detailed movements of the eyebrows, mouth, nose, and head position of the dubbing actor in a video. It then transposes these movements onto the ‘target’ actor in the film to accurately sync the lips and facial movements with the new audio.”
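
For readers who want a concrete picture of the pipeline Kim describes, here is a minimal sketch in Python. It is an illustration only: every function below is a hypothetical placeholder standing in for a real capture or rendering component, not the Max Planck Institute’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class FaceParams:
    """Per-frame face description recovered by model-based 3D capture."""
    head_pose: tuple    # head rotation and translation
    expression: tuple   # coefficients for eyebrows, mouth, eyelids, etc.

def capture_face(frame) -> FaceParams:
    # Hypothetical placeholder: fit a parametric 3D face model to the frame
    # to recover pose and expression (the "performance capture" step).
    return FaceParams(head_pose=(0.0, 0.0, 0.0), expression=(0.0,) * 64)

def render_target(params: FaceParams, target_frame):
    # Hypothetical placeholder: a renderer trained on the target actor's
    # footage turns the transferred parameters into a photorealistic frame.
    return target_frame

def reenact(dubbing_video, target_video):
    """Drive the target actor's face with the dubbing actor's performance."""
    output = []
    for src_frame, tgt_frame in zip(dubbing_video, target_video):
        src = capture_face(src_frame)  # capture the dubbing actor's movements
        # Keep the target's identity and scene; swap in the source performance.
        output.append(render_target(src, tgt_frame))
    return output
```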

While all of this is great news for the film industry, enabling all sorts of possibilities for post-production film editing, there are some worrying potential uses for this technology. Of particular concern is the potential for audiences not to recognise content as fictional, particularly where it is deployed by a competitor or a rival, or indeed as political propaganda.

Recent deepfake videos

The deepfake which has attracted the most media attention is one in which Barack Obama appears to call President Trump a “dipshit”. In a video published by Buzzfeed earlier this year, Jordan Peele sits in front of a camera, speaking and moving his face. Using machine learning, a computer synthesises the same facial movements in real time on an existing video of Barack Obama, with the result that Mr Obama appears to be saying the words spoken by Jordan Peele.

More disturbing has been the rise of “morph porn” created using machine learning technology. One Reddit user, going by the name “Deepfakes”, posted digitally manipulated pornographic videos on the Reddit site, superimposing the faces of celebrities onto the bodies of actors in pornographic videos. Reddit has since banned such content from its site.

Professor Hany Farid, a computer science professor at Dartmouth College in the United States, commented in an interview with the Wall Street Journal: “This is a big deal…You can literally put into a person’s mouth anything you want.” He is concerned that the development of fakes is outpacing the development of methods for detecting them.

Of most concern to democracy is the possibility of deep video portrait technology being exploited by creators of “fake news”. The Members of Congress mentioned above identified this possibility earlier this Autumn in their letter to the Director of National Intelligence, asking for a “thorough review by the Intelligence Community” with a report back to Congress by the middle of December 2018. They recognise the impact a carefully timed deepfake, harnessed for ill gain, could have on the democratic process.

Legal issues

Lawyers have been advising on forged content for years. While the legal issues are not new, the rapid rise and ease of use of these technologies is new, bringing with it a rise in instances of fake content and, in turn, more clients asking their lawyers to assist in removing it.

In terms of ease of removal on legal grounds, at the easier end of the scale are pornographic deepfakes, which involve a clear misuse of private information as well as other potentially criminal conduct. At the harder end of the scale are parodies where it is obvious to the audience that the person depicted did not say or do the things shown, particularly if the parody is created purely for comedy value on a not-for-profit basis.

In terms of pornographic pieces and “fake news”, it is unlikely that a complainant would need to set out arguments based on misuse of private information and/or any relevant intellectual property rights in order to get material taken down by most of the big social media platforms. Most of their terms of use cater for the removal of instances of impersonation and pornographic content. Removal is reasonably straightforward if the material falls into one or both of these categories.

Where a deepfake amounts to a parody, it is less likely that a social media platform will take the material down and the legal position becomes more interesting. Often, in these cases, it will not be a clear case of impersonation and it may be necessary for the complainant to deploy legal argument, rather than simply relying on the platform’s terms of use and codes of practice. It is in these situations that copyright laws could be relevant, particularly where the complainant owns the copyright, has performers’ rights related to the underlying copyright work or is in a position to join forces with the copyright owner to complain.

If the deepfake takes a substantial part of an existing copyright work, there could well be an actionable infringement of copyright. This is a qualitative test, rather than a quantitative one – has the maker of the deepfake taken the intellectual creation of the author of the copyright work? While there may be infringement, the creator of the deepfake may also have a defence. English copyright law was amended in October 2014 to include a fair dealing exception for parody, pastiche and caricature: under section 30A of the Copyright, Designs and Patents Act 1988, fair dealing with a copyright work for the purposes of caricature, parody or pastiche does not infringe copyright in the work. Equally, under US law, there are fairly wide “fair use” defences which cater for and permit certain parodies.

As far as English law is concerned, it may be open to the maker of a deepfake to rely on this relatively new fair dealing exception. While its parameters are still to be properly tested by the courts, we know that there are two things a court would consider: (1) does the deepfake amount to a parody? (2) if it does, does the use amount to fair dealing?

In terms of defining what is meant by a parody, we have the Court of Justice of the European Union’s definition in Deckmyn v Vandersteen (C-201/13). In that case, the CJEU ruled that a parody must (1) evoke an existing work while being noticeably different from it; and (2) constitute an expression of humour or mockery.

In terms of the second question, there must be fair dealing with the original copyright work. Relevant factors include how much of the copyright work is taken, whether the use of the work competes with the copyright owner, and whether the deepfake is being used in an advertising context. If the maker of the deepfake takes more of the original than is necessary, or uses it for advertising or in a commercial manner, particularly in competition with the copyright owner, there is unlikely to be fair dealing and the defence will fail. Obviously, each situation will turn on its specific facts.

What about a situation where the deepfake does not take a substantial part of the underlying film of the complainant so that there is no copyright infringement and/or the person replicated in the deepfake does not own any sort of interest in the underlying footage?

It could be that the maker of the deepfake only needs to take a very small amount of the original film of the complainant in order to create a deepfake. In these circumstances, the complainant’s moral rights under copyright legislation could usefully come into play even where there is no copyright infringement claim.

The words spoken or the actions depicted in a deepfake are likely to form new copyright works even where they are created using artificial intelligence. There is still human intellectual input worthy of protection. Taking Jordan Peele’s Obama deepfake, the words were scripted and spoken by an actor, albeit that the technology rendered them as if they were spoken by Mr Obama. The script containing the words spoken attracts copyright protection.

Assuming that the words spoken and/or actions done are understood by the audience to have been genuinely spoken by the complainant, there could be scope for a false attribution argument. Under English copyright legislation, a person has the right not to have a work falsely attributed to him as its author.

It would also be in these sorts of circumstances, where the words spoken or actions depicted in the deepfake have caused or are likely to cause serious harm to the reputation of the claimant, that a defamation claim could be appropriate. This is more likely to be so in the case of a deepfake which is intended to be taken seriously.

With the rise in use of artificial intelligence to create content, the next couple of years could well see courts getting to grips with the copyright issues prompted by the technology. We wait to see whether those judicial efforts will be in the context of a deepfake.

Keeping the faith

The most heavily reported concern about this technology is that its misuse could add to a growing distrust of information available to us via the internet. It is most likely to be technology itself, rather than legal tools, which is best placed to meet this concern.

The Max Planck Institute for Informatics is working on detection methods for fake content. It reports that its research team is using the same technology used for creating deep video portraits. In a video made for the Wall Street Journal, Professor Farid talks about the work being done on detection methods, for example using technology to review film for the subtle changes in facial colour that occur as the heart pumps blood in and out of the face.
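
To make that detection idea concrete, the toy sketch below (Python, run on synthetic data) scores how much of a face region’s colour signal sits in the human heart-rate band: a genuine face should show a faint periodic pulse, while a synthesised one often will not. The frame rate, frequency band and signal model here are illustrative assumptions, not the researchers’ actual method.

```python
import numpy as np

FPS = 30  # assumed frame rate of the footage under review

def pulse_strength(green_means: np.ndarray) -> float:
    """Given the mean green-channel value of a face region in each frame,
    return the share of signal energy in the human heart-rate band
    (roughly 0.7-4 Hz, i.e. about 42-240 beats per minute)."""
    signal = green_means - green_means.mean()      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2    # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FPS)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return spectrum[band].sum() / spectrum.sum()

# Synthetic "real" face: a faint 1.2 Hz pulse buried in sensor noise.
t = np.arange(10 * FPS) / FPS
real = 120 + 0.3 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
fake = 120 + np.random.normal(0, 0.2, t.size)    # no pulse component at all

print(f"real video pulse score: {pulse_strength(real):.2f}")
print(f"fake video pulse score: {pulse_strength(fake):.2f}")
```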

The advances in detection methods will be of importance to media businesses whose activities include the distribution of content, particularly news. If we are to improve, or at least not further erode, public trust in the news, it will become increasingly important for these businesses to harness detection methods and ensure that those systems remain as up to date with the technology as possible. The aspiration must be that, in time, detection tools will be readily available to the public to assess the authenticity and provenance of content.

The issue is being seriously considered in the United States, with the Director of National Intelligence likely to report back to Congress in December 2018 with thoughts on how the technology could be, and has been, deployed for ill gain. The United States will not be the only country thinking about deepfakes at a political level. 2019 will see policymakers and technology businesses grappling with how to harness the good that comes from these new technologies while at the same time guarding against the bad.

Lorna Caddy is a media and IP lawyer at Himsworth Scott

This post originally appeared on Himsworth Scott Insights and is reproduced with permission and thanks
