Mitigating Deepfake Threats: How Existing Laws Can Tackle Misuse

Ravi Goyal And Heba Ajaz

16 July 2024 9:12 PM IST


    The controversies surrounding the morphed media of influential figures such as Alia Bhatt, Katrina Kaif, Rashmika Mandanna, and Sachin Tendulkar are rooted in the misuse of deepfake technology. Such incidents have become commonplace, raising alarms about the unethical and illegal uses of deepfake technology. Despite government warnings, the problem persists, prompting concerns about the integrity of the information we see online and the rights of affected parties.

    To recap, on December 23, 2023, the Ministry of Electronics and Information Technology [MeitY] issued an advisory directing all intermediaries to comply with their obligations under the Information Technology [Intermediary Guidelines and Digital Media Ethics Code] Rules, 2021 [IT Rules]. This directive was issued in response to the alarming rise in deepfakes and AI-enabled misinformation, and required intermediaries to clearly and precisely inform users about prohibited content as specified in Rule 3[1][b] of the IT Rules, including as part of the terms of service, privacy policy and user agreements. This includes notifying users against hosting, displaying, uploading, modifying, or sharing any content belonging to another person that is grossly harmful, defamatory, obscene, pornographic, paedophilic, or otherwise unlawful, to name a few. Specifically, Rule 3[1][b][v] prohibits the dissemination of misinformation or information which is patently false.

    The latest advisory, dated March 15, 2024, reiterated this obligation and appeared to introduce an additional parameter for intermediaries that facilitate or permit AI-generated content. It advises that such intermediaries should label or embed permanent unique metadata or identifiers on synthetically created text or media produced through their services. This measure aims to identify whether deepfakes were created using the intermediary's resources and to trace the user of the intermediary responsible for them. While not legally enforceable, it will be interesting to observe how these guidelines influence the legal landscape surrounding deepfakes.
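    The advisory does not prescribe any particular labelling mechanism. As a purely illustrative sketch, one way an intermediary might bind a permanent identifier to a piece of synthetic media is to record a unique provenance ID alongside a cryptographic hash of the content. The field names and the `label_synthetic_media` helper below are hypothetical assumptions for illustration, not drawn from the advisory or from any standard.

```python
# Illustrative sketch only: field names and functions are hypothetical,
# not prescribed by the MeitY advisory or any standard.
import hashlib
import uuid


def label_synthetic_media(media_bytes: bytes, service_name: str) -> dict:
    """Produce a provenance record binding a unique identifier
    to a hash of the synthetic content."""
    return {
        "ai_generated": True,                 # explicit synthetic-content flag
        "generator": service_name,            # which service produced the media
        "provenance_id": str(uuid.uuid4()),   # permanent unique identifier
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # content fingerprint
    }


def verify_label(media_bytes: bytes, record: dict) -> bool:
    """Check that a provenance record still matches the media it labels."""
    return record["sha256"] == hashlib.sha256(media_bytes).hexdigest()
```

    A production scheme would more likely embed such identifiers inside the media file itself, for example as image metadata or an imperceptible watermark, so that the label travels with the content rather than sitting in a separate record.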

    However, given the potential risks posed by deepfake technology, it is increasingly relevant to understand what deepfakes entail and how existing legal provisions may aid in curbing their misuse.

    A 'Deepfake', as defined in the recently passed European Union Artificial Intelligence Act [EU AI Act], is “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.” To simplify, deepfakes are akin to an advanced form of photoshopping, but created using artificial intelligence.

    Deepfakes pose a serious threat to society due to their potential to inflict harm through various means, including the dissemination of fabricated news stories and videos, manipulation of elections by disparaging political candidates, and financial fraud via impersonation, to mention a few.

    India's legal framework lacks specific measures to address deepfake threats, but existing laws such as the Information Technology Act, 2000 [IT Act], the Indian Penal Code, 1860 [IPC] and the IT Rules offer potential remedies.

    Deepfake Pornography

    One of the most significant threats posed by deepfakes is the exponential increase in the creation of non-consensual pornographic content, especially targeting women. It may be noted that the publication or transmission of sexually explicit or pornographic content is punishable under Section 67A [Punishment for publishing or transmitting of material containing sexually explicit act, etc., in electronic form] of the IT Act. On first conviction, the penalty can be imprisonment for a term which may extend to five years and with fine which may extend to Ten Lakh Rupees. In case of a second or subsequent conviction, the punishment is harsher, extending to imprisonment for a term which may extend to seven years and with fine which may extend to Ten Lakh Rupees.

    In its less aggravated form, the publication or transmission of content that includes an act which causes sexual arousal, but is not explicitly pornographic, is deemed obscene and is penalised under Section 67 [Punishment for publishing or transmitting obscene material in electronic form] of the IT Act. This offence is punishable with imprisonment for a term which may extend to three years and with fine which may extend to Five Lakh Rupees. In the event of a second or subsequent conviction, the punishment extends to imprisonment for a term which may extend to five years and also with fine which may extend to Ten Lakh Rupees.

    Deepfakes and Personality Rights

    Deepfakes primarily impact celebrities, about whom a wealth of information is available in the public domain. Celebrities possess legal rights over their identity, including their name, image, and likeness. When others use a celebrity's voice, photos, and videos without permission for commercial purposes, such as to create alterations through deepfakes, it amounts to an unauthorised exploitation of their persona. This infringement of personality rights can lead to legal consequences for those misusing and unlawfully profiting from the distinctive attributes associated with the celebrity, as was held in several recent cases, including Anil Kapoor vs. Simply Life India & Ors. [CS (Comm) 652/2023] and the Jaikishan Kakubhai Saraf Alias Jackie Shroff vs. The Peppy Store and Ors [CS (Comm) 389/2024] 'Bhidu' matter. Moreover, personality rights are also tied to one's right to privacy.

    In another instance, where the voice and image of renowned Bollywood actor Mr Amitabh Bachchan were used without his consent for commercial purposes, legal action was taken against those responsible, and takedown notices were issued to address the misuse of Mr Bachchan's identity [Amitabh Bachchan vs. Rajat Negi, CS(COMM) 819/2022].

    The practical application of available legal provisions to report deepfakes that exploit one's personality rights can also be observed in the case of Ms Rashmika Mandanna's deepfake.

    The Delhi Police registered an FIR against the individuals responsible, citing Section 465 [Punishment for forgery] and Section 469 [Forgery for purpose of harming reputation] of the IPC. Additionally, Sections 66C and 66E of the IT Act were invoked to address offences related to computer resources and information. It is to be noted that Section 66C specifically deals with identity theft, while Section 66E addresses the punishment for the violation of privacy.

    Moreover, in the recent case involving Mr Sachin Tendulkar, Section 500 [Punishment for Defamation] of the IPC was invoked against the gaming site owner responsible for propagating the deepfake video. It is to be noted that while the malicious use of deepfakes can potentially constitute criminal defamation under Section 500 of the IPC, the legal position becomes murky when the deepfake depicts the actual individual themselves. The essential ingredients of defamation, such as making a defamatory statement concerning another person and harming their reputation, are arguably not met when the deepfake is of one's own likeness. This was precisely the situation in the Tendulkar case, where he himself was depicted endorsing a brand; this remains an interesting grey area that may require a legislative update.

    Deepfakes and Public Riots

    The 2024 Lok Sabha elections presented a unique challenge in mitigating the threats posed by deepfakes. A potential scenario that was often discussed was deepfake videos or audio clips portraying political candidates making inflammatory remarks against a specific community, circulated with the intention of stoking communal tensions or provoking riots. Such actions could disrupt the electoral process and also pose a threat to social harmony and public order. In such cases, employing deepfakes to instigate riots could be considered an offence under Section 153A [Promoting enmity between different groups on grounds of religion, race, place of birth, residence…], and Section 505 [Statements conducing to public mischief] of the IPC.

    While various existing legal provisions could indeed be instrumental in addressing the misuse of deepfakes, it is apparent that the current framework is still lacking in many respects. The advisories from MeitY offer a temporary head-start, but the extent to which they may evolve into enforceable law remains to be seen; it is for this reason that specific legislation for AI should be seriously considered.

    Authors: Ravi Goyal (Partner) and Heba Ajaz (Associate) at Scriboard. Views are personal.
