Free Speech v. Digital Disinformation: How Is State Intervention In Social Media Content Moderation Being Navigated In USA & India

Padmakshi Sharma

4 July 2024 6:06 AM GMT


    Recently, the United States (US) Supreme Court in Moody v. NetChoice, LLC (2024) refrained from issuing what could have been a landmark judgment on a pivotal issue: whether states can enact laws that limit the power of social media companies to moderate content. This case revolved around the constitutionality of laws enacted by Texas and Florida that aimed to restrict social media platforms from censoring user content or deplatforming individuals based on the platforms' guidelines. The lower courts reached conflicting conclusions regarding these laws. One court upheld an injunction against the Florida law, preventing it from taking effect, while another court rejected an injunction against the similar Texas law, allowing it to be enforced. The US Supreme Court did not make a definitive ruling on the substantive issue of whether states can impose such restrictions on social media companies. Instead, the Court vacated both the lower court rulings and remanded the cases for further proceedings. The rationale provided was that neither court had properly applied established case law when reviewing the free speech issues involved.

    Moody comes amidst a surge of disinformation and biased perspectives on social media platforms worldwide. It underscores the ongoing global debate on balancing the protection of free speech with the need for responsible content moderation in the digital age. A similar debate has gained momentum in India as well, particularly with discussions around India's IT Rules and their recent amendments. India's IT Rules, introduced in 2021 and amended in 2022 and 2023, impose obligations on social media platforms to remove unlawful content within a specified timeframe and require greater transparency in their content moderation processes. Critics argue these rules could lead to overreach and stifle free speech, while the Indian government believes they are necessary to curb the spread of misinformation and harmful content.

    This article aims to explore how free speech can be balanced with social media regulation, shedding light on the challenges faced by democracies worldwide in navigating the digital landscape and safeguarding public discourse.

    Understanding Moody v. NetChoice, LLC (2024): Case Overview

    The Communications Decency Act (1996) has been crucial to the growth of the internet in the US. Section 230 of this Act grants social media platforms immunity from legal liability for user-generated content and allows them to moderate objectionable material. This protection provided by Section 230 faced challenges after the 2020 US elections. During this time, there was a rise of misinformation, including claims of election fraud and COVID-19 conspiracy theories, particularly from conservative sources in the US. The increase in misinformation prompted platforms like YouTube, Twitter, and Facebook to flag or remove such content. Because much of the removed content came from conservative politicians, Republican party members, including then-President Donald Trump, accused the platforms of engaging in politically motivated censorship.

    In response, Florida and Texas passed laws in 2021, SB 7072 and HB 20 respectively, aimed at restricting social media content moderation. Florida's SB 7072 imposed fines on internet firms for banning a political official for more than 60 days and applied similar penalties to "journalistic enterprises" operating in Florida with either 100,000+ monthly users or 50,000+ subscribers. Texas's HB 20 forbade platforms from censoring user-submitted content based on viewpoint and required detailed transparency reports on content moderation policies.

    Both these laws faced legal challenges. The Eleventh Circuit upheld the injunction against Florida's SB 7072, while the Fifth Circuit ruled against an injunction for Texas's HB 20, creating a circuit split. The US Supreme Court agreed to jointly hear challenges against both laws to determine whether they violated the First Amendment.

    On July 1, 2024, the US Supreme Court issued its decision. All nine justices agreed to vacate the Fifth and Eleventh Circuit rulings and remand the cases to those courts, finding that neither court had followed established case law in reviewing the First Amendment issues.

    Current Landscape in India: Overview of Issue With IT Rules 2023

    In 2023, the Indian Ministry of Electronics and Information Technology (MeitY) introduced the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Rules 2023). These regulations impose stringent obligations on social media intermediaries, including a requirement to remove any news related to the "business of the Central Government" that the Union Government's fact-checking unit (FCU) deems "fake, false, or misleading." Failure to comply would result in the loss of safe harbour immunity, which protects intermediaries from legal liability for user-generated content.

    On March 21, 2024, the Supreme Court of India stayed the Union's notification establishing the FCU under the IT Rules 2023. This stay does not reflect the Court's stance on the merits of the challenge against the Rules, which is currently pending before the Bombay High Court. However, the Court did indicate that the challenge against the IT Rules 2023 raised "serious constitutional questions."

    Those arguing against the FCU asserted that its establishment constituted a violation of free speech. They contended that allowing the Central Government to determine what constitutes "fake, false, or misleading" news would enable it to control the information available to the public, thus becoming the arbiter in its own case. This concentration of power was seen as potentially stifling dissent and limiting the democratic discourse by allowing the government to suppress unfavorable or critical information under the guise of combating misinformation.

    Per contra, the Union Government argued that such measures were necessary to curb the spread of misinformation and harmful content, which could have serious societal and political consequences. It was asserted that a fact-checking unit can provide a standardized approach to identifying and addressing false information, thereby protecting public discourse from manipulation and ensuring that citizens have access to accurate information.

    State Intervention in Content Moderation: Criticism in India and the US

    At first glance, it may appear that journalists and human rights activists in India and the US are advocating for contradictory causes. In the US, activists are concerned about state restrictions preventing social media platforms from removing content considered misinformation. Conversely, in India, the concern lies with state directives compelling social media platforms to remove content labelled as misinformation.

    However, upon closer scrutiny, it becomes evident that the core concern shared by critics in both countries lies with governmental intervention in content moderation processes. In the US context, critics argue that the editorial decisions of social media platforms deserve protection as free speech. They contend that these platforms operate as private entities distinct from government control. Therefore, as private enterprises, these platforms should retain the autonomy to determine the content allowed on their platforms based on their own policies and community standards.

    Similarly, in India, critics do not argue for allowing misinformation to proliferate unchecked. Instead, their concern centers on the mechanism through which misinformation is identified and acted upon. Specifically, critics oppose the establishment of a state-appointed fact-checking unit empowered to decide what constitutes misinformation, particularly concerning governmental entities. They argue that such a setup could potentially undermine freedom of expression by allowing the government to influence and control public discourse.

    The crux of the debate boils down to the principle that free speech thrives best with minimal state intervention. As regulatory frameworks continue to evolve, the challenge remains to safeguard free speech while addressing the complexities of misinformation and digital governance. The outcomes in both countries will likely shape future policies and legal precedents, influencing the global landscape of online freedom and responsibility.
