The Rise Of Deepfake Technology: A Threat To Evidence In Arbitration?
Yash Dahiya
22 Nov 2023 9:37 AM IST
In recent weeks, a video of Rashmika Mandanna circulated online that was later proven to be a deepfake. The video caused the actress immense humiliation, and she took to Instagram to express her displeasure. The actor Amitabh Bachchan also took to social media to express his views on the matter. Deepfakes have come to public attention in recent times and are now proving to be dangerous. A fictitious video circulated in which Russian President Vladimir Putin purportedly announced a full-scale war against Ukraine. Russian television aired this fake video, sowing confusion and escalating geopolitical tensions.[1] In the midst of the 2024 US presidential campaign, social media is overrun with deepfake videos intended to influence public opinion.[2] Even though big social media companies like Facebook and Twitter have made efforts to forbid and delete this kind of content, the efficacy of those measures is still debatable.
Deepfakes have come into the limelight in the Indian landscape as well. For instance, during the Delhi elections, two videos surfaced online in which a politician is seen appealing to voters from different linguistic backgrounds in two different languages. On closer examination, it became evident that the videos were fake and had been made, using deepfake technology, as part of his party's "positive campaigning".[3]
Consider the various pornographic sites on which famous actresses' faces have been deepfaked and misused. In 2020, a student in Mumbai was arrested for making a deepfake pornographic video of his girlfriend in order to threaten her. Fraudsters have also adopted the technology, impersonating victims' friends and loved ones on video calls, claiming to be in dire need, and scamming them out of their money.[4]
Videos have likewise used the faces of public figures and celebrities to show them doing something out of character. All of this has made deepfakes an issue that now needs to be addressed. A deepfake is a kind of AI technology that creates synthetic media, such as images, videos, and audio, using machine learning algorithms, especially generative adversarial networks (GANs). The generator network produces artificial data that mimics the real data in the training set, such as a synthetic image. The discriminator network then evaluates how genuine the synthetic data looks and gives the generator feedback on how to improve its output. This process is repeated many times, with the discriminator and generator learning from one another, until the generator produces synthetic data that is remarkably realistic and difficult to distinguish from the real data.
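To make the generator and discriminator mechanics concrete, the following is a minimal sketch of that adversarial training loop in Python using PyTorch. It is illustrative only: the tiny fully connected networks, the dimensions, and the random stand-in "real" data are assumptions made for brevity, whereas production deepfake systems train deep convolutional GANs on large face datasets.

```python
# Minimal GAN training loop (illustrative sketch, not a production model).
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: maps random noise to a synthetic sample (e.g. a flattened image).
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1_000):
    real = torch.randn(batch, data_dim)       # stand-in for real training data
    fake = G(torch.randn(batch, latent_dim))  # synthetic data from the generator

    # 1. The discriminator learns to label real data 1 and fake data 0.
    opt_D.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_D.step()

    # 2. The generator uses the discriminator's feedback to look more "real".
    opt_G.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_G.step()
```

Repeated over thousands of iterations, this feedback cycle is what pushes the generator's output towards material that is hard to distinguish from a genuine recording.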
Using this training, deepfakes can be produced and applied to images and videos in a variety of ways:
- Face swapping: substituting one person's face for the face in the video;
- Attribute editing: altering the subject's appearance, such as hair colour or style;
- Face re-enactment: projecting one person's facial expressions onto the subject in the target video; and
- Fully synthetic material: real footage is used to train the model on people's appearances, but the final image is entirely artificial.
Concerns
- The apparent rise and popularization of deepfakes has become a concern for courts, arbitral tribunals, and other adjudicatory bodies because of the technology's potential misuse during the adjudicatory process, particularly at the evidence stage. In 2019, an attorney representing a Dubai-based father in a UK custody dispute successfully challenged audio evidence that seemed to show the father as violent and confrontational. After forensic specialists obtained access to the audio files, they were able to demonstrate that the recording was a "deepfake", created by the mother with the assistance of online forums. Compared to video and photos, audio evidence can still be convincingly faked quickly, and high-quality mass-market applications that can create "voice clones" are already available for free.[5] This raises the question: is audio-video evidence still reliable?
Authentication is fundamental to the admissibility of evidence. If an arbitral tribunal admits evidence that is falsified, it can cause grave injustice to the other party and thereby violate natural justice. Authenticating evidence requires experts, which means extra costs incurred and time spent.
- Defendants have contested the veracity of video evidence in a number of high-profile cases in recent years, such as the Kenosha trial of Kyle Rittenhouse, by claiming that the videos could have been manipulated by artificial intelligence (AI).[6] Deepfake media is becoming more and more common, and this gives litigants more opportunity to challenge accepted video and photo evidence. Deepfakes make it simpler for parties to contest the veracity of genuine evidence, regardless of whether they have a good reason to do so. This mistrust in evidence will give rise to what Rebecca Delfino has called "the deepfake defense".[7] Additionally, law professors Bobby Chesney and Danielle Citron have termed this phenomenon, in which individuals exploit the existence of deepfakes to challenge the authenticity of genuine media by claiming it is forged, the "liar's dividend".[8]
“Deepfakes will make it easier for liars to deny the truth in distinct ways. A person accused of having said or done something might create doubt about the accusation by using altered video or audio evidence that appears to contradict the claim,” their paper states.
Additionally, deepfakes raise the possibility that credible but fabricated evidence will be used to support an unfair conclusion. In the trials arising from the invasion of the US Capitol, defendants are using this very tactic to undermine the evidence brought forward by the prosecution. Even if the court brings in experts to authenticate the evidence, doing so only delays the matter and, in the end, serves the defendant.
- The third and most troubling problem concerns the taking of oral testimony over videoconference. Deepfakes jeopardise the legitimacy of remote interactions during arbitrations: by impersonating a witness, a deepfake model can give the false impression that the right person is testifying when they are not.
The report on ‘virtual justice’ by New York-based privacy group Surveillance Technology Oversight Project (STOP) noted that parties to online court proceedings may be asked to verify their identity by providing sensitive personal information, biometric data, or facial scans – in the state of Oregon, judges sign into their virtual court systems using facial recognition.[9]
Detecting Deepfakes: An Impossible Venture?
To identify deepfakes, there are a number of technical options available, such as the following:[10]
- Software for AI Output Detection: This kind of software examines digital traces left by AI-generated material to ascertain whether a picture, video, or audio file has been altered.
- AI-Powered Watermarking: This method entails tagging an image or text with a special code that indicates where it came from. This facilitates tracking and tracing the origin of media content, thereby aiding in the assessment of its legitimacy (a simplified sketch follows this list).
- Content Provenance: This tactic seeks to identify the sources of digital media, both synthetic and natural. Keeping track of a piece of media's history and sources makes it easier to identify instances of manipulation.
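To give a concrete sense of the watermarking approach above, the following is a simplified sketch in Python using NumPy: a short provenance code is embedded in the least-significant bits of an image's pixels and read back later for verification. The sample tag and function names are hypothetical, and real provenance schemes rely on robust, cryptographically signed metadata rather than raw pixel manipulation; this only demonstrates the tag-and-verify principle.

```python
# Naive LSB watermarking sketch: embed and recover a provenance tag.
import numpy as np

def embed_watermark(image: np.ndarray, code: bytes) -> np.ndarray:
    """Hide `code` in the least-significant bits of the first pixels."""
    bits = np.unpackbits(np.frombuffer(code, dtype=np.uint8))
    flat = image.flatten()  # flatten() returns a copy, so the input is untouched
    if bits.size > flat.size:
        raise ValueError("image too small to hold this code")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bytes: int) -> bytes:
    """Read back `n_bytes` of hidden data from the pixel LSBs."""
    bits = image.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Stand-in greyscale image; a real workflow would load an actual file.
img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
tagged = embed_watermark(img, b"origin:studio-42")
assert extract_watermark(tagged, 16) == b"origin:studio-42"  # verification
```

A naive mark like this survives only while the pixels remain untouched; re-encoding or cropping destroys it, which is one reason practical proposals pair watermarking with the content-provenance records described above.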
As we move towards greener arbitration, fears about the use of video conferencing as a means of recording evidence are genuine. But as we as a society adopt new practices and new ways of doing things, new problems will inevitably arise. What needs to be done is to formulate solutions to these new problems rather than restrict progress. Currently in India, deepfake-related crimes are prosecuted under the following provisions of the IT Act.
- Section 66E of the Information Technology Act, 2000 addresses the infringement of privacy that occurs when someone's image is captured, published, or transmitted in the media. Infringers risk a fine of up to ₹2 lakh or up to three years in prison.[11]
- Under Section 66D of the IT Act, maliciously using communication devices or computer resources to impersonate someone else is punishable by a fine of up to ₹1 lakh and up to three years in prison.[12]
However, these laws are not enough, and we need a more elaborate provision to deal with deepfake crimes. As the Digital India Act, 2023 is slated to replace the IT Act, we will have to wait and see how it addresses deepfakes; the new law is expected to deal with current technologies and the crimes committed with them. There is also a need for lawyers to be trained in artificial intelligence: proper training can enable arbitrators and counsel to detect the use of deepfake technology in evidence. Video evidence should also be verified by experts before admission so as to establish its genuineness, and the necessary protocols and guidelines need to be established. The use of deepfakes in evidence has not yet become widespread, but in the future it will, and we as a community need to be ready. These concerns are not restricted to arbitration; they extend to court litigation as well.
The author is an Advocate practicing in Goa. Views are personal.
[1] Basak Akmese, The Artificial Intelligence Dimension of Digital Manipulation Deep Fake Videos: The Case of Ukrainian-Russian People, 2 CONTEMPORARY ISSUES OF COMMUNICATION 80, 76-85 (2023).
[2] Richard W. Painter, Deepfake 2024: Will Citizens United and Artificial Intelligence Together Destroy Representative Democracy?, JOURNAL OF NATIONAL SECURITY LAW & POLICY 25, 23-30 (2023).
[3] Nilesh Christopher, We've Just Seen the First Use of Deepfakes in an Indian Election Campaign, VICE (Nov. 8, 2023, 11:15 AM), https://www.vice.com/en/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp.
[4] Maria Pawelec, Deepfakes and Democracy (Theory): How Synthetic Audio-Visual Media for Disinformation and Hate Speech Threaten Core Democratic Functions, DIGITAL SOCIETY JOURNAL 25, 1-37 (2022).
[5] Molly Mullen, A New Reality: Deepfake Technology and the World Around Us, 48 MITCHELL HAMLINE LAW REVIEW 224, 211-234 (2022).
[6] Matthew Ferraro & Brent Gurney, The Other Side Says Your Evidence Is a Deepfake. Now What?, WILMERHALE (Nov. 9, 2023, 11:15 PM), https://www.wilmerhale.com//media/files/shared_content/editorial/publications/documents/2022-12-21-the-other-side-says-your-evidence-is-a-deepfake-now-what.pdf.
[7] Rebecca Delfino, The Deepfake Defense: Exploring the Limits of the Law and Ethical Norms in Protecting Legal Proceedings from Lying Lawyers, LOS ANGELES LEGAL STUDIES RESEARCH PAPER No. 2023-02, 55-56 (2023).
[8] Robert Chesney & Danielle Keats Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 CALIFORNIA LAW REVIEW 1753, 1745-1760 (2019).
[9] Albert Fox Cahn & Melissa Giddings, Online Courts During COVID-19, VIRTUAL JUSTICE (Nov. 10, 2023, 10:45 AM), https://static1.squarespace.com/static/5c1bfc7eee175995a4ceb638/t/5f1b23e97ab8874a35236b67/1595614187464/Final+white+paper+pdf.pdf.
[10] Leonardo F. Souza, Arbitration Tech Toolbox: Deepfakes and the Decline of Trust, KLUWER ARBITRATION BLOG (Nov. 10, 2023, 10:35 AM), https://arbitrationblog.kluwerarbitration.com/2023/10/04/arbitration-tech-toolbox-deepfakes-and-the-decline-of-trust/.
[11] The Information Technology Act, 2000, § 66E, No. 21, Acts of Parliament, 2000 (India).
[12] The Information Technology Act, 2000, § 66D, No. 21, Acts of Parliament, 2000 (India).