It was the first week of February 2020 and the Delhi elections were around the corner. A day before polling, two videos appeared online in which a politician was seen speaking in two different languages, appealing to voters from different linguistic backgrounds to support him.
On closer inspection it was revealed that the videos were not real; they were part of his party's 'positive campaigning', created using Deepfake technology. A few months earlier, in October 2019, a student in Mumbai was arrested for making a deepfake porn video of his girlfriend in order to threaten her.
These are just two instances from the wide array of uses arising out of Deepfake technology. Deepfakes use a form of artificial intelligence called deep learning to fabricate, from scratch, images, videos, and even the voices of real people so as to depict fake events, hence the name 'Deepfake'.
There has been unprecedented growth in the use of Deepfakes in recent years, and various jurisdictions have taken steps to curtail their misuse. For example, the House of Representatives in the USA introduced the "Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019" to regulate the use of Deepfakes, and a similar law was introduced in California. India, on the other hand, has no separate law dealing with Deepfakes; however, one can seek protection under various regulations currently in force, such as the Copyright Act, 1957 for copyright infringement resulting from Deepfakes, Section 499 of the Indian Penal Code for defamation through Deepfakes, or the Information Technology Act, 2000. Though these statutes protect against the consequent harms caused by Deepfakes, they do not regulate the use of Deepfake technology itself in the first place, hence the need for a separate law.
If anything, India's privacy laws may prove the most effective in controlling the use of Deepfakes, since privacy has been recognised as a fundamental right by the Supreme Court. In furtherance of its aim to protect the data and privacy of individuals, the legislature introduced the Personal Data Protection Bill, 2019 (PDP Bill). Under the PDP Bill, the consent of the data principal is necessary for processing personal data. Personal data includes one's image or picture, since the data principal is identifiable from it, and the superimposition and publication of such a picture can be interpreted as 'processing' under Section 3(31) of the Bill. The perpetrator can be a data fiduciary, defined under Section 3(13) as anyone who determines the means and purpose of processing, and the remedy available to the victim under the Bill is the right to be forgotten, which can be claimed against a data fiduciary.
The fact remains that the PDP Bill is not yet an Act. However, the right to be forgotten has been recognised by the High Courts of Delhi, Karnataka and Kerala. Even the Apex Court viewed the seven-year data archiving mandated by certain provisions of the Aadhaar (Authentication) Regulations, 2016 as severely affecting the right to be forgotten.
Unlike France, India's PDP Bill lacks provisions protecting the privacy of deceased persons. This calls for a change in the Bill whereby the consent of the family members of a deceased person must be obtained before their data is processed any further. Moreover, such family members should be given the right to take action against those who use the data of the deceased without consent. Recognising that the dead also have a right to privacy would be the surest way to protect it. The need for this change is further substantiated by the Supreme Court's decision in Pt. Parmanand Katara, Advocate v. Union of India, wherein the Apex Court held that the right to dignity is available not only to a living person but also extends to their dead body. Such a change would help ensure that the data of the deceased is not processed to create and disseminate doctored content.
Another way of tackling the spread of Deepfakes is to extend the liability of Internet Service Providers (ISPs) for the transmission of such content through their platforms. For obvious reasons, the person who creates a Deepfake should be held liable; however, it is often difficult to trace the origin of such videos because of the anonymity offered by ISPs, hence the need to extend their liability. In India, Section 79 of the Information Technology Act, 2000 limits the liability of ISPs. The section is loosely worded, providing a window for ISPs to escape liability: for example, it states that ISPs will incur no liability if they have exercised 'all due diligence', yet what constitutes 'all due diligence' is nowhere defined. Increasing the liability of ISPs would establish a checkpoint for content uploaded online, thereby regulating the flow of Deepfakes.
Social media platforms should be mandated to have policies against Deepfakes. A prominent case involving a deepfake video on a social media platform emerged in 2019, when a doctored video of Mark Zuckerberg appeared on Instagram; despite being informed of the inauthentic nature of the video, Instagram declined to take it down, citing the lack of an anti-deepfake policy. A similar case occurred in 2019, when Facebook refused to remove a doctored video of US politician Nancy Pelosi, which consequently ended up drawing millions of views. Such policies could include bans or warnings on the accounts of users who regularly upload doctored content, along with cooperation with ISPs to bring publishers of such content under review. This would ensure that other users remain sceptical of the content such a person uploads and proceed with caution. There must also be standards for situations where doctored content or Deepfakes cannot be differentiated from real content, so as to balance the freedom of expression.
The foregoing discussion covers the legislative changes that can be made before a consolidated law on Deepfakes is put in place. The biggest challenge posed by Deepfakes, however, is their detection. Research points to an ever-evolving deepfake technology, in which every flaw in the previous generation is cured by a better one. It is therefore essential that government authorities double-check the authenticity of content circulated online and invest more in technology that can detect Deepfakes. The worst consequence Deepfakes inflict on society is that they make real content unbelievable; many therefore suggest that a key to fighting Deepfakes is real and authentic content.
For a country like India, with an illiteracy rate of 36%, the spread of misinformation remains a huge cause for concern, and if left unregulated, Deepfakes could fuel a surge in misinformation campaigns. In an age where politics runs on narratives and cognitive bias, Deepfakes are a potent tool. The use of Deepfake technology need not be completely curtailed, since it also has positive uses; however, if left unchecked, Deepfakes can not only stir up the political climate in India but also lead to invasions of privacy, copyright infringement, and even harassment.
Parth Tyagi is a student at National Law Institute University, Bhopal; Achyutam Bhatnagar is a student at National Law University Odisha.