Synthetic Media And Legal Quagmires: Unveiling Deep Fakes In The Indian Legal Context

Update: 2024-01-23 10:07 GMT

The media landscape is experiencing a profound transformation driven by AI-generated content, marking a decisive shift in how media is produced and consumed. The creation of media has transcended physical boundaries, evolving into a fully digital process and allowing for innovative approaches to content generation. This newly evolved media, known as 'synthetic media', encompasses a broad spectrum of techniques for the creation, alteration, and manipulation of data and media through automated processes. The umbrella term covers various methods, prominently those leveraging artificial intelligence algorithms. The objective of employing such technology is often to deceive or to alter the original meaning of the content, highlighting the transformative impact of automated processes on the generation and modification of media.

In a market flooded with deepfake content, the quest for truth encounters substantial hurdles. As we stand on the verge of this transformative era, crucial questions arise about our readiness to confront impending threats to national security, elections, privacy, and reputations. The interplay of technology, legal frameworks, and societal norms will be instrumental in mitigating the destructive impact of deepfakes. In this dynamic landscape, where media creation and dissemination exceed traditional confines, addressing the diverse challenges posed by deepfakes requires a comprehensive strategy: the effectiveness of technological advancements, the adaptability of legal structures, and the evolution of societal norms will together determine how prepared we are to navigate this new era of media dynamics.

The rising prevalence of “deep fake” videos has caused heightened apprehension among legislators, tech companies, and the general public. These digitally manipulated videos, enabled by recent strides in machine learning and artificial intelligence, are proliferating across social media platforms. Although manipulation of media content is not a new problem, the current environment permits swift, cost-effective, and highly convincing alterations to both images and videos. The overarching concern is that a surge in realistic deepfakes may inundate platforms, leaving internet users grappling with the challenge of distinguishing reality from falsehood. Such a future, in which discerning truth becomes an arduous task, poses a substantial threat to democracies worldwide. Beyond misinformation, entities such as human rights organizations, journalists, and governments rely heavily on accurate information to shed light on rights abuses and hold wrongdoers accountable, and the national security community is contending with the intricate challenges deepfakes pose for intelligence gathering.

Deepfakes, inherently deceptive, act as an epistemic contaminant and invite intervention through measures necessary to address potential harm. The risks they pose extend beyond personal injury to broader threats to social stability: malicious actors can exploit these fabrications to accentuate social divides, subvert democratic conversation, and corrode trust in governmental institutions. Rather than contributing to the pursuit of truth, convincing deepfakes are comparable to spreading 'false statements of fact', disrupting the truth-seeking function of the marketplace of ideas. In the realm of democratic governance, deepfakes matter because they instigate skepticism toward established authorities. Moreover, the unauthorized use of deepfakes infringes upon individual autonomy and self-fulfillment by depicting individuals without their consent. These considerations reinforce the argument that persuasive, credible, and malevolent deepfakes warrant only limited protection under the freedom of speech and expression guaranteed by Article 19(1)(a) of the Indian Constitution.
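Distinguishing authentic from manipulated footage is, in practice, treated as a machine-learning classification problem. The sketch below is a minimal, illustrative Python example and not a real detector: the hand-crafted frame statistics and the randomly generated "videos" and labels are stand-ins (actual systems train deep networks on facial regions of real and manipulated footage), and it assumes NumPy and scikit-learn are available.

```python
# Toy sketch of deepfake detection as binary classification.
# Features and data are synthetic placeholders, purely so the sketch runs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def frame_features(frames: np.ndarray) -> np.ndarray:
    """Summarize a video by per-frame mean/std, averaged over frames (toy stand-in)."""
    return np.stack([frames.mean(axis=(1, 2)),
                     frames.std(axis=(1, 2))], axis=-1).mean(axis=0)

# Synthetic stand-in data: 200 "videos" of 16 grayscale 32x32 frames each,
# with random authentic/manipulated labels.
videos = rng.normal(size=(200, 16, 32, 32))
labels = rng.integers(0, 2, size=200)  # 1 = manipulated, 0 = authentic

X = np.array([frame_features(v) for v in videos])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("toy hold-out accuracy:", clf.score(X_test, y_test))
```

On random data the classifier does no better than chance, which is the point: detection is only as good as the features and training corpus behind it, and detectors must be continually retrained as generation techniques improve.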

Legislators are increasingly advocating regulation to address the deepfake dilemma, raising a host of rights-related issues, including content moderation and free speech. The emergence of synthetic media through deepfakes represents an evolving and critical challenge in the digital realm, one that demands careful examination and widespread consultation to formulate effective responses. The multifaceted nature of the issue underscores the need for a comprehensive, well-informed approach to the complex landscape of synthetic media and its implications for society at large.

There have been conflicting ideological positions on whether information intermediaries, by endorsing unrestricted expression online, were effectively exporting principles (such as the Miltonian marketplace of ideas) to other nations in a colonial manner. On one view, regulating speech is a sovereign prerogative, and the autonomy of non-liberal regimes' governing authorities should be acknowledged. Others contend that freedom of speech and access to information are intrinsic entitlements that should be available universally; international agreements and standards treat certain values as global for a reason. Some further stress that deferring to the censorship criteria of authoritarian regimes would harm vulnerable populations, including political dissidents. While online discourse can be a catalyst for positive societal transformation, some, particularly within the human rights community, worry that large tech platforms may be used by extremist factions or exploited for political maneuvering. There may also be tension between governments striving to safeguard democratic principles within their own borders and championing democracy globally.

Curbing intermediary liability protections is one potential means of mitigating hate speech and media or political manipulation, and could address concerns about the impact of online discourse on democracy in specific nations or regions. However, this may contradict the objective of safeguarding democracy globally, as intermediary liability protections are indispensable for platforms to nurture open discourse in areas lacking robust free-expression safeguards. Germany's NetzDG legislation illustrates how a domestic statute governing online discourse can have adverse effects on global freedom of speech: among other concerns, it lacks a clear definition of hate speech and contains no exemptions for speech acts such as parody and commentary. Similarly, proposals like the U.K.'s suggested two-hour timeframe for removing extremist content impose impractical burdens on technology platforms. When a legal framework demands stringent adherence, platforms are likely to remove more content than necessary to avoid the risk of substantial fines or legal repercussions. That scenario could damage free speech and expression, especially in nations governed by authoritarian regimes, where minority groups and political dissidents might become, as one participant described it, “collateral damage” of the prevailing global trend towards tighter restrictions on online speech.
Given the contentious and intricate nature of online speech guidelines, particularly at a global scale, one approach to achieving consensus is to concentrate on relatively neutral standards (for instance, regulations on bots, automated content, and political advertisements). Lawmakers might also contemplate varying levels of regulation for different categories of information intermediaries (infrastructure versus content platforms, small enterprises versus large corporations, and so forth). It is essential to distinguish between “information intermediaries” and “platforms” and to recognize the importance of this distinction in policy discussions. Information intermediaries serve as intermediaries in the legally defined sense, whereas the term “platform” carries various meanings. Within the platform spectrum there is a multitude of roles, ranging from sizable corporations to small startups, non-profit or hybrid corporate models, and centralized or decentralized management systems. An all-encompassing definition of platforms may lack utility, but a taxonomy could be beneficial: categorizing the diverse types of platforms would underscore their variety in the face of regulations that aim to apply standards uniformly. For instance, it could be feasible to implement content-neutral regulations for infrastructure intermediaries that primarily handle non-content functions.

The Cyberspace Administration of China has implemented new regulations aimed at restricting the use of deep synthesis technology and combatting disinformation. These policies mandate that any manipulated content created through such technology be explicitly labeled and traceable to its origin. Providers of deep synthesis services are obligated to adhere to local laws, uphold ethical standards, and ensure alignment with the 'correct political direction and correct public opinion orientation'.
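The "label and trace to origin" requirement is, mechanically, a provenance problem: bind an explicit "synthetic" label and an originator identity to a hash of the content, and make the binding verifiable. The Python sketch below illustrates one minimal way to do this with an HMAC signature; the field names, key handling, and scheme are assumptions for illustration only, not any format prescribed by the Chinese regulations (real provenance standards, such as C2PA, are far richer).

```python
# Minimal sketch: attach a signed provenance record to generated media.
import hashlib
import hmac
import json
import time

SECRET_KEY = b"provider-signing-key"  # hypothetical provider key, for illustration

def label_synthetic_media(media_bytes: bytes, provider_id: str) -> dict:
    """Return a provenance record binding the content hash to its origin."""
    record = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "provider_id": provider_id,       # traceability to the originator
        "synthetic": True,                # explicit "AI-generated" label
        "created_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return record

def verify_label(media_bytes: bytes, record: dict) -> bool:
    """Check that the signature is valid and the hash matches the media."""
    claimed = dict(record)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SECRET_KEY, payload, "sha256").hexdigest())
    ok_hash = claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return ok_sig and ok_hash

sample = b"...generated video bytes..."
rec = label_synthetic_media(sample, provider_id="provider-42")
print(verify_label(sample, rec))  # True; any tampering flips this to False
```

A scheme like this makes tampering detectable, but labels can simply be stripped before redistribution, which is why regulators pair labeling mandates with platform-side detection obligations.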

Examining the worldwide legal landscape: within the United States, Texas led the way in 2019 by prohibiting deepfakes designed to influence electoral processes. In the same year, Virginia enacted legislation against deepfake pornography, while California barred the generation of manipulated content featuring politicians within 60 days of an election; notably, injunctions against deepfakes in California are granted only under specific circumstances, such as obscenity and copyright infringement. At the federal level, the Preventing Deepfakes of Intimate Images Act, a federal bill reintroduced by a New York congressman, aims to criminalize the dissemination of fabricated nude images. Originally proposed in May 2023, the legislation proposes amendments to the Violence Against Women Act Reauthorization Act of 2022, concentrating on shielding individuals from the unauthorized exposure of digitally manipulated intimate images, commonly referred to as deepfakes. The bill explicitly criminalizes the disclosure of non-consensual intimate deepfakes with the intent to harass, harm, or alarm the victim. Penalties include fines and up to two years of imprisonment, escalating to ten years if the disclosure could affect government functions or incite violence. Importantly, defenses based on unauthorized deepfakes or lack of victim participation are expressly prohibited. Also at the federal level, the DEEP FAKES Accountability Act, introduced in 2019, aims to compel deepfake creators to disclose their use of the technology, to prevent the dissemination of misleading deepfakes during elections or those intended to damage an individual's reputation, and to establish fines and imprisonment for violators. The legislation also proposes a task force within the Department of Homeland Security to assess and mitigate the impact of deepfakes on national security, along with increased funding for research into detecting deepfakes and mitigating the harm they cause.

In Europe, the European Commission seeks to address online disinformation, including deepfakes, through measures such as a Code of Practice on Disinformation for online platforms. The European Union has strengthened the Code to compel major social media platforms such as Google, Meta, and Twitter to identify and flag deepfake content, with non-compliant platforms potentially facing substantial fines. While introduced as a voluntary self-regulatory framework in 2018, the Code now carries legal weight through the Digital Services Act, which intensifies oversight of digital platforms and curbs various forms of misuse. Additionally, the proposed EU AI Act sets out transparency and disclosure requirements for providers of deepfake technology.

Amid these debates, in India the Ministry of Electronics and Information Technology (MeitY) issued a formal advisory on December 26, 2023, urging all intermediaries to adhere to the due diligence obligations outlined in the recently notified IT Amendment Rules, 2023. The primary focus of the advisory is on ensuring rigorous compliance in identifying and promptly removing misinformation, false or misleading content, and material impersonating others, with particular emphasis on the rising concerns around deepfakes.

In the advisory, intermediaries were reminded of the legal consequences of non-compliance with the IT Rules, 2021: they risk losing the exemption from liability provided under Section 79(1) of the Information Technology Act (IT Act), 2000. The amendments notified in 2022 expanded the responsibilities of intermediaries under Rule 3(1)(b) to “make reasonable efforts to cause the users” not to post specified categories of content. Notably, sub-clauses (i) to (ix) of Rule 3(1)(b), which set out the grounds on which intermediaries must act against content, underwent significant revision. Criticism was directed in particular at Rule 3(1)(b)(v), which concerns the intentional communication of misinformation or patently false and misleading information. Its ambiguous phrasing and vague definitions raised concerns, especially given the consequences of private entities acting as arbiters of permissible speech, and it was noted to be in violation of the directions set forth by the Hon'ble Supreme Court in Shreya Singhal vs Union of India, AIR 2015 SC 1523.

The term 'false' itself implies the existence of an objectively identifiable truth, establishing a 'true-false' binary for all online content. The dynamic nature of content in a complex digital space, however, resists such simplistic categorization. The provisions, ostensibly introduced to offer additional recourse to aggrieved users, were argued to lack the force of law, being ultra vires the safe harbor framework under Section 79 of the IT Act, 2000.

Technology platforms ought to prioritize preventive measures over reactive ones, steering away from actions such as removing content after publication. For example, platforms can integrate safeguards that stop users from sharing specific types of content in the first place, rather than taking content down once it has been posted (a minimal sketch of such a pre-publication check appears after this passage). Comprehensive pre-publication review, however, becomes unattainable once a platform reaches a certain size or user base, and preemptive measures that hinder users from posting certain content might amplify content censorship, a concern raised by many. Critics argue that tech platforms already engage in excessive content censorship, and that introducing proactive measures would only exacerbate a responsibility which, they contend, should rest with governmental entities. Alternatively, platforms could intensify efforts to make their Terms of Service clearer to non-expert users; this anticipatory measure could influence user conduct before problematic content is posted. Closer cooperation between the public and private sectors could also improve lawmakers' digital literacy and promote a more nuanced understanding of the practicalities of online content moderation.

Both platforms and government entities should share responsibility for upholding speech and democratic values. Legislation mandating platforms to remove content potentially erodes a layer of due process to which citizens are entitled regarding their speech. Conventional speech-regulation frameworks are neither straightforward nor seamless, a historical attribute that has served to safeguard speech rights; consequently, to protect individual rights, governments should assume a more significant role in decisions on speech takedowns. The authority to enforce public policies online has long rested with digital platforms. Governing their digital realms, these platforms have enjoyed considerable discretion in determining how, or even whether, to implement the laws of individual countries, assuming quasi-public roles autonomously and operating without direct oversight by public authorities. The primary concern for governments is not merely the delegation of power but its legitimacy and necessity: is the delegation of enforcement functions to online platforms justifiable and essential? Integrating social media into regulatory frameworks presents an opportunity not only to ensure enforcement against unlawful speech but also to curb the potential overreach of private companies regulating speech without due process. Given the challenges governments face in regulating illicit content on these platforms, several approaches can be considered. One option is reliance on self-regulation, with platforms independently managing and enforcing compliance.
Another approach involves the establishment of a more comprehensive monitoring and tracking system, providing a structured framework for oversight. A third alternative is the adoption of a meta-regulatory framework that sets broader guidelines for the platforms' self-regulation efforts. However, establishing a detailed monitoring system carries the risk of expanding the State's power. The associated challenges include significant financial costs and the potential for a surveillance system that intrudes upon the operations of platforms. Striking a balance between regulation and censorship, as well as between fostering free markets and preventing undue social control, remains a delicate task. Nations that prioritize values such as free speech, a diversity of opinions, and democratic principles must be mindful of this delicate balance in their pursuit of effective online governance.
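To make the pre-publication safeguard mentioned above concrete, the following Python sketch screens a post against pluggable checks before accepting it, instead of removing it after the fact. Everything here is a toy assumption: the blocked-term list, the stubbed "synthetic media" score, and the threshold stand in for whatever policy and classifiers a platform actually uses (Python 3.10+ for the type syntax).

```python
# Minimal sketch of a pre-publication safeguard: posts are screened by
# pluggable checks before acceptance, rather than removed after posting.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str
    media_synthetic_score: float = 0.0  # stubbed classifier output in [0, 1]

BLOCKED_TERMS = {"impersonation-kit"}   # toy policy list, for illustration
SYNTHETIC_THRESHOLD = 0.9               # assumed flagging threshold

def keyword_check(post: Post) -> str | None:
    """Reject posts containing any blocked term; return the reason or None."""
    hits = {t for t in BLOCKED_TERMS if t in post.text.lower()}
    return f"blocked terms: {sorted(hits)}" if hits else None

def synthetic_media_check(post: Post) -> str | None:
    """Require a label when attached media looks machine-generated."""
    if post.media_synthetic_score >= SYNTHETIC_THRESHOLD:
        return "media flagged as likely synthetic; label required"
    return None

CHECKS: list[Callable[[Post], str | None]] = [keyword_check, synthetic_media_check]

def submit(post: Post) -> bool:
    """Accept the post only if every pre-publication check passes."""
    for check in CHECKS:
        if (reason := check(post)) is not None:
            print(f"rejected: {reason}")  # in practice: ask the user to revise
            return False
    print("accepted")
    return True

submit(Post("alice", "hello world"))                    # accepted
submit(Post("bob", "get the impersonation-kit here"))   # rejected
```

The trade-off the surrounding discussion identifies is visible even in this toy: every check added before publication is also a point at which lawful speech can be blocked, which is why the calibration of such gates, and who sets them, is itself a due-process question.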

The author is an Academic Coordinator at School of Law - CHRIST (Deemed to be University), Delhi NCR. Views are personal.

