Misinformation And Deepfakes In The Age Of AI

    The increasing innovations in technology have led to the development of new facets of generative AI and deepfakes. With major developed countries bringing in legislation to regulate their misuse, the authors analyse the Indian landscape and suggest measures for the same.

    In the recently concluded Lok Sabha elections, one might have encountered videos on the internet of deceased politicians campaigning for the electoral candidates of different political parties. In a similar instance, a video of Delhi BJP Chief Manoj Tiwari appealing to voters in Haryanvi, a language he does not speak, went viral. These videos and audio messages were made with the help of AI software. Many such deepfake videos and audio clips have been employed by political parties to personalise their outreach while simultaneously catering to a large audience.

    While it may amuse some, a closer analysis reveals a rising tool which, if left unregulated, may jeopardise the overall neutrality of the electoral process. It becomes necessary to address these issues in order to realise the mandate of the Constitution of India to conduct free and fair elections.

    The Problem Of Deepfakes And The Legal Void In India

    The European Union Artificial Intelligence Act defines a deepfake as "AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful".

    Deepfake videos and audio are engineered in such a way that a prudent person cannot differentiate them from the original. While these tools are sometimes used to swap the faces of public figures for entertainment, they are increasingly prone to misuse in spreading misinformation during elections. The effect may be exacerbated because a majority of people in the country are unfamiliar with these innovations in AI.

    Currently, India lacks specific legislation to regulate AI-engineered videos, audio and text; the closest legislation regulating deepfakes is the Information Technology Act, 2000 (the "IT Act"). Section 66D of the IT Act prescribes punishment for cheating by personation by means of a computer resource, with imprisonment of up to three years and a fine of up to one lakh rupees, and Section 66E addresses the breach of a person's privacy by capturing, transmitting or publishing his or her image without consent. These sections are invoked in the absence of any legislation specifically dealing with the containment of misinformation spread through deepfakes and AI-engineered videos. A close analysis of these provisions reveals a lack of understanding of these new 'innovations', and their regulatory effect is therefore haphazard. Moreover, they lack a preventive mechanism, i.e., a means to stop such videos and audio before they reach the general public.

    MeitY issued an advisory to social media intermediaries on the perceived violation of the due diligence obligations outlined under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ("IT Rules"). It advised platforms and intermediaries to label the 'possible inherent fallibility or unreliability of the output generated' and to ensure that the integrity of the electoral process is not threatened by the use of AI models or generative AI. However, an advisory by itself cannot attract penal action. Rule 3(1) of the IT Rules stipulates the due diligence responsibilities of intermediaries: it obligates them to make reasonable efforts to prevent users from hosting, modifying, publishing, transmitting, storing, updating or sharing any information that spreads misinformation or is patently false and untrue. The Delhi High Court, in Gaurav Bhatia v. Naveen Kumar[1], ordered intermediaries to remove deepfake videos available in the public domain as they could severely harm the image of the plaintiff.

    The IT Rules, however, only establish the responsibilities of social media platforms and intermediaries. To penalise those who create such content, the executive often resorts to the Indian Penal Code, 1860, particularly Section 465 (forgery) and Section 469 (forgery for the purpose of harming the reputation of any party). Moreover, the Representation of the People Act, 1951 has also been invoked to contain this problem; Section 123 of that Act penalises, as a corrupt practice, the promotion of enmity between classes of citizens on grounds such as religion, race, caste, community or language.

    However, as discussed further, these provisions are scattered and do not holistically cover the problem of deepfake videos and audio, especially their use to spread misinformation.

    International Regulations

    The rising menace of deepfakes has caused nations and international organisations to ponder the issue and arrive at solutions that promote sustainable development and inclusive economic growth. The gravity of the situation can be gauged from the fact that the World Economic Forum's Global Risks Report 2024 identified AI-generated misinformation and disinformation as second among the risks that respondents believe are most likely to present a material crisis on a global scale in 2024.

    Recently, 28 countries, including India, along with the EU, participated in the AI Safety Summit held on 1-2 November 2023. A declaration released at the summit, the Bletchley Declaration, focussed on preventing the misuse of AI technologies.

    The EU, which is considered to have the most advanced jurisprudence on technology law, recently passed a comprehensive law, the Artificial Intelligence Act, to regulate the use of artificial intelligence. The EU's AI Act was passed with the objective of promoting innovation while ensuring that such innovations benefit humanity and are trustworthy. The Act adopts a tiered, risk-based classification of AI systems depending on the risk they pose to users.

    Article 50 of the EU's AI Act provides for transparency obligations for providers and deployers of certain AI systems. Under Article 50(4), deployers, as defined in Article 3(4) of the Act, must disclose that content has been artificially generated or manipulated where an AI system generates or manipulates image, audio or video content constituting a deepfake.

    The Deep Fakes Accountability Act, introduced in the USA in 2019, requires creators of deepfakes to disclose that such content has been created, and also contains provisions prohibiting the circulation of deepfakes with the intention of deceiving viewers or harming the reputation of an individual. Violations of the Act are punishable by fine or imprisonment. The Act also establishes a task force under the Department of Homeland Security to focus on the effects of deepfakes on national security.

    Approach To Be Adopted By India

    In a more concerning finding of the study, conducted in January and February, McAfee found that nearly one in four (22 per cent) Indians surveyed said they had encountered a political deepfake, whether a video, image or recording of a candidate, which they had initially thought to be genuine.[2]

    In January this year, the then Union Minister of State for Information Technology, Rajeev Chandrasekhar, indicated a possible amendment to the IT Rules to cover the regulation of deepfakes and algorithmic bias on technology platforms. While the amendment has not yet become a reality, this reflects the narrow approach adopted by the Indian Government. This narrow approach, while it may curb the ongoing incidents of deepfakes, may fail in the longer run, for two reasons. Firstly, these legislations were enacted at a time when the threat of deepfakes was absent. Secondly, even if the rising incidents are regulated through amendments to these legislations, holistic coverage may not be possible in the future.

    The Supreme Court, in the landmark case K.S. Puttaswamy & Anr v. Union of India & Ors[3], held that the right to privacy is a fundamental right forming an intrinsic part of Article 21 and of the freedoms guaranteed in Part III of the Indian Constitution.

    Following the Supreme Court's observation in the above-mentioned landmark case, it becomes imperative to recognise that deepfakes also constitute a gross violation of a person's right to privacy. Though the Digital Personal Data Protection Act, 2023 covers the right to privacy, its holistic application cannot yet be presumed, as the rules governing its implementation are still to be notified. Moreover, subsuming deepfakes within the DPDP Act, 2023 or the IT Act, 2000 is of little significance, as both acts have different objectives.

    The Delhi High Court, in Anil Kapoor v. Simplify Life India & Ors.[4], has also observed that the technological tools now freely available make it possible for any illegal and unauthorised user to use, produce or imitate any celebrity's persona, using any tools including Artificial Intelligence.

    Much like the EU, India needs to pass comprehensive legislation to deal with this problem. All stakeholders should be consulted on the myriad aspects associated with deepfakes and on the legislative solutions. It is pertinent to note that the law will require revision as the field is progressing rapidly. What stands out is the inherently opportunistic nature of the problem: deepfakes can surface in any sector and through any medium. A sector-specific approach will therefore limit the effect of any legislation passed.

    The advancements in the field of Artificial Intelligence cannot be underestimated. Even if international organisations and nations come together to frame rules and regulations governing AI, those rules will be subject to the rapid changes happening in the field. The advancements in AI must be closely monitored to ensure that humanity witnesses the brighter side of AI. There is also a problem for countries that are not technologically advanced: they may be unable to assess the challenges posed by deepfakes, which can in turn affect their political stability.

    Additionally, the Secretary-General of the United Nations, António Guterres, has suggested that there could be an artificial intelligence agency much like the International Atomic Energy Agency (IAEA).

    What the government should do is continuously strive to eliminate the undue influence that may arise from the changing technological landscape. Deepfake videos and audio are only one branch of this fast-paced landscape. Generative AI can serve many beneficial purposes, but it also has many harmful uses directed at an increasingly polarised and vulnerable population; hence, comprehensive legislation is required.

    Views are personal


    [1] Gaurav Bhatia v. Naveen Kumar, 2024 LiveLaw (Del) 459.

    [2] Team, W.W. (ed.) (2024) Around 75% Indians have encountered deep fake content in last one year: Report, WION. Available at: https://www.wionews.com/india-news/over-75-indians-were-exposed-deepfakes-as-many-struggled-to-distinguish-between-fake-and-real-report-715440 (Accessed: 13 June 2024).

    [3] K.S. Puttaswamy & Anr v. Union of India & Ors, (2017) 10 SCC 1.

    [4] Anil Kapoor v. Simplify Life India & Ors, 2023 LiveLaw (Del) 857.
