Technology And Human Rights

Justice Chandrakant V. Bhadang, President, ITAT; Justice R.V. Easwar, former judge, Delhi High Court; Shri S.K. Tulsiyan, President, Kolkata IT Bar Association; Smt. Smita Srivastava, Pr. CCIT, W.B. & Sikkim; Shri Raj Pal Yadav, Vice President, ITAT, Kolkata Zone; Dr. N.K. Chakraborty, Vice Chancellor, N.U.J.S., Kolkata; Shri J.P. Khaitan, Senior Advocate, Calcutta High Court, my dear friends of the bar, ladies and gentlemen.

I am honoured to be asked to speak in the memory of a giant, Devi Pal. He started practice when I was barely one year old, and went on to achieve great heights as a counsel, a statesman, and a visionary. A glance at his achievements would lead one to wonder how a man could pack so much into a lifetime: Member of Parliament, Union Minister for Finance, Member of the Finance Commission, founder of the Cancer Society of India. To be in Kolkata today, speaking of the man, fills me with poignancy, because on the last occasion when I was asked by the Kolkata Income Tax Bar to speak, Dr. Pal was present.

I want to start my lecture with a modified version of the famed trolley problem. Are some of you familiar with it? In the traditional trolley problem, there's a runaway trolley headed towards a group of people, and the moral dilemma involves deciding whether to divert the trolley onto another track, potentially sacrificing one person to save the group.

Today, I offer an amendment. Imagine a self-driven trolley approaching a fork in the track. On one track, there's a group of people. On the other, there's a single individual. The trolley's AI system is programmed to make split-second decisions based on pre-set algorithms and ethical guidelines. It detects that if it continues straight, it will harm the group of people. However, it can be diverted onto the other track, which would lead to the sacrifice of the single individual. The decision rests with the programming of the AI: either continue on the current path, potentially harming multiple people, or switch tracks, causing harm to one person.
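The point can be made concrete with a minimal, purely hypothetical sketch: the Outcome class, the choose_track function and the policy names below are invented for illustration, not drawn from any real autonomous-vehicle system. The machine’s ‘choice’ is nothing more than whichever rule its programmers committed to in advance.

```python
# A purely illustrative, hypothetical sketch of a pre-set decision rule.
from dataclasses import dataclass

@dataclass
class Outcome:
    track: str
    people_at_risk: int

def choose_track(outcomes: list[Outcome], policy: str = "utilitarian") -> Outcome:
    """Pick a track according to a hard-coded moral rule. Whoever writes this
    rule has, in effect, decided the ethics of the machine in advance."""
    if policy == "utilitarian":
        # Minimise the number of people harmed.
        return min(outcomes, key=lambda o: o.people_at_risk)
    # "non-interventionist": never divert, whatever the consequences.
    return outcomes[0]

straight = Outcome(track="straight", people_at_risk=5)
diverted = Outcome(track="diverted", people_at_risk=1)

print(choose_track([straight, diverted], policy="utilitarian").track)          # -> diverted
print(choose_track([straight, diverted], policy="non-interventionist").track)  # -> straight
```

Changing a single argument flips who is sacrificed, which is exactly why the question of whose moral framework is encoded matters.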

This scenario is just an illustration of the many ethical questions that arise as to how AI should navigate such moral dilemmas and whose moral framework should guide its decision-making. In today’s talk, I hope to discuss briefly some of the frictions that such technological advancements cause to human rights and their protection.

The Universal Declaration of Human Rights was adopted by the United Nations General Assembly 75 years ago, affirming the various rights of the individual in a free and fair world. This, however, came long before the technological revolution - what is now referred to as the ‘Fourth Industrial Revolution’ - which has changed the very basics of the way we live, work and interact with one another. Technology has become ubiquitous in every aspect of human life. It has taken our world by storm, centering itself as either the core of, or an influential force in, various tussles of power - whether in relation to labour and employment, governance, security and surveillance, innovation and scientific advancement, journalism and disinformation, online violence and harassment, or digital identities and privacy.

  I. Overview of challenges

The problems are clearly many. The State’s response has to be a balanced approach that protects innovation while upholding the human rights of the individual. This, however, is easier said than done. The avenues that technology opens up evolve faster than laws are able to keep up with them. Technology’s transformational potential to empower individuals and communities also comes with its own risks of misuse and harm. Balancing these considerations and regulating the use of technology falls to the technology laws that govern data protection, e-commerce, aspects of intellectual property, information technology, and the like.

This mushrooming of technology in all aspects of life has raised various concerns. The first is privacy: surveillance and data collection by governments, corporations or malicious actors, and data breaches and hacking that compromise personal information. The second is the freedom of expression, which is threatened by censorship and content control by governments or platforms, and by online harassment and cyberbullying, which have increased sharply behind the cover of anonymity. The third is discrimination and bias, exacerbated by algorithmic biases - since AI systems can inherit biases from their creators or datasets - and by the digital divide, that is, unequal access to technology, which affects various rights and opportunities of vulnerable and marginalized communities. Another extremely important dimension is the threat to democratic processes: we’re increasingly seeing, both in the domestic and the international arena, how misinformation can manipulate public opinion, thus impacting fair and transparent democratic processes, while cyberattacks, fake news and data manipulation have had the effect of eroding public trust in institutions and frameworks. Two other areas in which technology poses a threat are employment and labour rights, wherein advancements like automation threaten job security and exacerbate economic disparities, and the ethical dilemmas relating to technologies like AI, biotechnology, facial recognition, etc. Lastly, but perhaps most pertinently for this era, is the issue of health and mental wellbeing, which is severely affected by the overuse of technology. In my lecture today, I will briefly address some of these challenges.

  II. CHALLENGES
  1. Right to access the internet and the digital divide

The right to access the internet was considered at length by the Supreme Court in Anuradha Bhasin v. Union of India, and its role in facilitating access to information and the freedom of speech and expression is spoken of widely and perhaps requires little elaboration. The Supreme Court focussed on these aspects and decried the practice of internet shutdowns as violative of Article 19 of the Constitution. However, an aspect I wish to highlight further is the resulting situation wherein access to the internet and technology has been made a pre-requisite for the government’s basic amenities and socio-economic entitlements.

The move to a ‘Digital India’ has come with its own challenges that have laid bare the digital divide that exists in India. While a large section of the population has rapidly adapted to it - which is a remarkable feat - the reliance on the internet has rendered others, for whom the internet is still a distant reality, more vulnerable. This, of course, assumes that access to the internet is continuous and unimpeded. Reports, however, indicate differently.

Any discussion on technology and human rights is based almost entirely on the scope of internet use and access. With over 900 million internet users, India is considered the second largest online market after China. Data reveals that India has 467 million active social media users and 692 million active internet users. This number, however, requires nuanced consideration. While accessibility has increased tremendously in both rural and urban India, internet penetration remains just below 50% across the country. Compare this with Norway, Saudi Arabia or the UAE, which have the highest internet penetration rates, at 99%. Along gender lines, 61% of men own a mobile phone, while only 31% of women do. So while India numerically has the largest number of users, this is more a result of our sheer population, and there remains considerable work to be done in improving accessibility for a large section of our populace that already suffers from the existing socio-economic inequality in the country.

The COVID-19 pandemic most starkly brought out this digital divide. The 2022 Oxfam Inequality Report highlights this dichotomy, demonstrating how the privileged and the rich continued to enjoy uninterrupted access to the internet and the benefits of being online, while those who were not digitally connected were left in the lurch, deepening their social and material inequalities. The factors affecting access are gendered social norms, affordability, geographical location, income, and levels of digital literacy, which together determine who has access to the required gadgets. Only 31% of rural India uses the internet, while roughly 67% of the urban population is connected. This digital divide had exclusionary impacts on the right to education, health, and matters of finance. As per the National Sample Survey Office (2017-18), only 9% of students enrolled in any course had access to a computer with internet, and only 25% had access to the internet through any kind of device. This is further complicated by factors such as caste: the Oxfam Report concluded that only 4% of students from the ST community and 4% of SC students have access to a computer with an internet facility.

Internet shutdowns have been rampant across India, and arguably in many instances carried out in a manner that is inconsistent with the exceptions laid out by the Supreme Court. Authorities claim they are imposed to maintain law and order, but newspaper reports paint a different story: they are often used to prevent cheating in examinations or to curb protests. They have a disproportionate impact on people’s lives, businesses and work. A recent report prepared by the Internet Freedom Foundation and Human Rights Watch, titled ‘No Internet Means No Work, No Pay, No Food - Internet Shutdowns Deny Access to Basic Rights in ‘Digital India’’, sheds some light on how access to the internet affects other essential amenities offered by the government. For instance, MNREGA, which offers job security to millions of households in rural India, is now tech-enabled - i.e., attendance checks, wage payments, etc. are digitised and require internet access.

Any discussion on technology and human rights in India is perhaps incomplete without reference to Aadhaar - the world’s largest biometric database, to which a slew of socio-economic entitlements are tied, including access to something as basic as food grain and ration. While the Supreme Court has already considered concerns relating to privacy and the constitutionality of the Aadhaar project, much remains to be seen and monitored in its impact. Reports of large database leaks and the sale of personal data on the dark web are deeply concerning because of the penetration of Aadhaar into most aspects of life, including even banking. Lastly, the transition to digital payments and the proliferation of UPI - wherein even the street cart vendor outside your court premises will simply have a QR code, rather than deal with providing change for a larger denomination - is another aspect of life that is both affected by internet shutdowns and susceptible to data breach concerns in the absence of stringent regulation of fintech apps.

So while the move to digitise India is laudable, we must remain cognizant of the high socio-economic inequality that prevails in this country, which needs to be accommodated and considered so as to mitigate further inequality or exclusion in those aspects of life dependent on access to the internet.

  2. Algorithms and biases

With the scaling up and integration of AI technology in various sectors in India, there is a risk of replicating and exacerbating existing societal biases, including caste-based discrimination, gender biases against women, and the marginalisation of minorities. If AI is not developed and deployed carefully, it could inherit biases from historical data or societal prejudices, leading to discriminatory outcomes. Facial recognition technology, in addition to raising serious concerns relating to privacy and policing specifically, also carries the danger of increased surveillance of Muslims, Dalits, indigenous Adivasis, transgender persons, and other marginalised groups. The linking of databases to a national ID system and the use of AI for loan approvals, hiring, background checks, etc. are some of the ways in which such discrimination may play out.

The digital divide in India that I spoke of earlier, relating to smartphone ownership and social media use, means that about half of India’s population - comprising women, rural communities, and Adivasi communities - lacks access to the internet and is thus missing or misrepresented in datasets, leading to “wrong conclusions and residual unfairness”. This is best seen in the impact on healthcare, wherein lifestyle diseases like diabetes receive widespread attention, research, funding, etc., whereas diseases more common among poorer citizens, like tuberculosis, receive far less priority. Safety apps that use data mapping and user-driven inputs similarly mark poorer neighbourhoods and urban ghettos as unsafe, leading to over-policing and mass surveillance. The true dangers will set in when AI based on predictive technologies is integrated into policing and the criminal justice system.
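To illustrate how such bias creeps in, consider a minimal, hypothetical sketch. The groups, dataset and numbers below are invented purely for illustration; the point is only that a model which learns from biased historical decisions will faithfully reproduce them.

```python
# A minimal, hypothetical sketch of how historical bias leaks into an AI model.
import random

random.seed(0)

def make_historical_record(group):
    """Simulate past loan decisions in which Group B was approved less often
    than Group A for the same underlying creditworthiness (biased labels)."""
    creditworthy = random.random() < 0.6                    # same true rate for both groups
    if group == "A":
        approved = creditworthy                             # fair decisions for Group A
    else:
        approved = creditworthy and random.random() < 0.5   # biased decisions for Group B
    return {"group": group, "approved": approved}

history = [make_historical_record(g) for g in ("A", "B") for _ in range(10_000)]

def learned_approval_rate(group):
    """A 'model' that simply learns the historical approval rate per group
    reproduces the discrimination baked into its training data."""
    records = [r for r in history if r["group"] == group]
    return sum(r["approved"] for r in records) / len(records)

for g in ("A", "B"):
    print(f"Group {g}: learned approval rate = {learned_approval_rate(g):.2f}")
# Typical output: Group A near 0.60, Group B near 0.30 - the bias is inherited, not corrected.
```

Nothing in the code is malicious; the discrimination enters entirely through the training data, which is precisely why the representativeness of datasets matters.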

An insightful book by Caroline Criado Perez, "Invisible Women: Exposing Data Bias in a World Designed for Men", explores the pervasive gender data gap that exists in various aspects of society. The author argues that much of the world - from urban planning to medical research, and even the design of everyday products - is structured around male-centric data, neglecting or ignoring the specific needs and experiences of women. For instance, she highlights how medical research historically focused on male bodies, leading to misdiagnosis and inappropriate treatments for women. Similarly, urban planning often centres on men's travel patterns and needs, neglecting the safety concerns and mobility challenges faced by women. The lack of gender-disaggregated data affects policy-making, design, and societal structures, producing a world that is inherently biased against women and a cycle of inequality. From the gender pay gap to the biases embedded in algorithms and technology, it is clear that these oversights have far-reaching consequences, impacting women's safety, economic opportunities, overall well-being, health, transportation and disaster relief efforts.

Ignoring women's perspectives and experiences in data collection and analysis results in perpetuating inequalities and failing to address the unique challenges faced by women. While her book focuses on the gendered implications of data bias, the same is certainly true of other lived experiences that are marginalised in society.

  3. Big data and influencing consumers’ rights and choice

The combination of big data with tools such as pricing algorithms is increasingly diffused into our daily lives. A growing number of business entities are using algorithms to improve their pricing models, customise services and predict market trends. These models provide significant benefits to firms as well as consumers in terms of new, better and more tailored products and services. This is changing the competitive landscape in which many businesses operate, and the way in which they make commercial decisions. The widespread use of algorithms has also raised concerns of possible anti-competitive behaviour, as they can make it easier for firms to achieve and sustain collusion without any formal agreement or human interaction. Broadly, AI poses three types of challenges to fair competition: market foreclosure and related exclusionary practices, novel forms of collusion, and new strategies to effectuate discrimination in pricing; the sketch below illustrates the collusion concern.
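A deliberately simplified, hypothetical sketch (all names and numbers are invented) shows how two sellers, each independently running a "match the highest competitor price" rule, can sustain high prices with no agreement or human contact at all.

```python
# Hypothetical illustration of tacit algorithmic collusion; not a real pricing system.

def reprice(competitor_prices: list[float], cost: float) -> float:
    """Match the highest observed competitor price, but never go below cost."""
    return max(max(competitor_prices), cost)

prices = {"seller_1": 100.0, "seller_2": 120.0}   # invented starting prices
cost = 80.0                                        # invented unit cost

for day in range(1, 6):
    for seller in prices:
        rivals = [p for s, p in prices.items() if s != seller]
        prices[seller] = reprice(rivals, cost)
    print(f"day {day}: {prices}")
# Both sellers lock in at the higher starting price (120.0) and never compete
# down towards cost - an outcome reached without any explicit cartel.
```

Neither seller ever communicates with the other; the "coordination" is an emergent property of two independently reasonable-looking rules, which is exactly what makes it difficult for competition authorities to address.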

For many data-based businesses, the ability to collect and commercialise data marks their success. AI and machine learning have enabled certain firms to radically extend the type, volume and sources of data they hold, giving them a competitive edge. Many businesses collect and use large, unique datasets. Commonly, this involves consumers voluntarily providing a firm with personal data in return for a free product or service (e.g. access to a social media or price comparison platform), which is then financed by selling the data on to other customers (e.g. advertisers). The saying “if something is free, you are the product” rings truer than ever in today’s digital world. Google’s search engine and Facebook’s social network are two prominent examples of businesses that have employed big data to achieve sky-rocketing profits.

In the light of growing developments in technology and their impacts on the market, competition authorities globally are taking note of the effects of AI and big data. The UK authority has set up a cross-disciplinary Data Unit to understand the impact that data, machine learning and other algorithms have on people and markets. The German, Canadian and French anti-trust authorities have similarly published reports on the subject in the last decade, highlighting the potential for coordination and collusive behaviour and the need for a monitoring policy.

  4. The ethical dilemmas

The starkest example of futurism, and its allure, is perhaps the sci-fi depiction of travel and transportation. Be it in films or in books, an imaginative indication of the future was always the depiction of this way of life: flying cars, levitating trams, you name it. This is not unique to our century; even when transport was largely powered by horse or steam, numerous modes of transport we now take for granted were seen as unattainably futuristic.

The ethical dilemmas relating to AI-driven transportation are what I started this discussion with: the deterministic effect that the algorithms would have, with all the prejudices they have learnt from those who built them and from the data points available. The lack of accountability is an aspect that runs as a common thread through any discussion of such tech innovations. But another challenge, one that exists even if we stop short of the self-driven car, is the privacy nightmare that modern cars now bring with them. I use this example of the ‘smart car’ as an analogy to speak of the myriad ways in which data collection has been commercialised, and the perils that come with it.

Research conducted by the Mozilla Foundation found that the privacy terms of car brands allowed them to collect a wide range of customer data - facial expressions, phone contacts, voice data, locations, and when, where, and how people drive. This information was then being sold to third parties. Some of the uses of this data were to disclose it to others for ‘customer research’, to profile the consumer and the products they are interested in, and for ‘data analysis’ by related companies. While the usage of this data has some legitimate purposes - making driving safer, navigation more accurate, driving more enjoyable, and so on - the potential for misuse, especially once the data is passed on to third parties, is unimaginable. This is true even of appliances inside the house, from infotainment systems to smart kitchen appliances. Most starkly, though, these privacy concerns and the influencing of choice are discernible from our use of the smartphone and the creeping dependence it has created in all facets of modern life.

  5. Mental health and negative impacts

That social media serves as a powerful tool for amplifying voices and democratizing communication, enabling individuals to share diverse perspectives and advocate for social change, is well documented. However, recent academic research and statistics underscore the dual impact of social media, revealing its detrimental effects on mental health - including body image issues and the development of eating disorders - especially among younger users.

Studies, such as a report in the American Journal of Preventive Medicine, highlight a concerning correlation between excessive social media use and increased rates of depression, anxiety, and feelings of loneliness among users. According to the latest statistics, around 70% of young adults cite social media as a source of stress, indicating its profound impact on mental well-being. While social platforms empower marginalized voices and foster community engagement, the constant exposure to curated content, cyberbullying, and the pressure for perfection exacerbates mental health issues, especially among vulnerable demographics.

Last month, various states in the US filed lawsuits against Meta (earlier known as Facebook), alleging that the company had deliberately designed its products - Facebook and Instagram - with features that harm teens and young adolescents, and, despite knowing so, continued to profit from it. An apt comparison that is often made is to cigarettes, which in the 1950s were already known to be carcinogenic but continued to garner massive profits in the absence of regulation and, thus, with the non-interference of the State.

Now, while this negative impact on health is limited in effect to the class of youth that has access to the internet in the first place, it still requires our attention. Some research has pointed out that the more a young adult used Instagram on a given day, the worse their mood and life satisfaction were. Balancing the positive influence of social media in amplifying voices with the need to address its adverse effects on mental health emerges as a critical concern in contemporary society.

The anonymity provided by social media platforms has significantly contributed to the rise of hate speech and hostility online, presenting a multifaceted challenge in India. Research conducted by the Centre for Social Research reveals that nearly three-quarters of Indian internet users have experienced some form of online harassment, with anonymity often amplifying this behavior. While anonymous platforms can empower marginalized voices and enable discussions on sensitive issues, they also serve as breeding grounds for unchecked hate speech and toxicity. The lack of accountability allows individuals to engage in cyberbullying, the dissemination of harmful content, and the spread of divisive rhetoric. However, recognizing that hate speech stems from multifaceted societal issues, including polarization and social divisions, addressing this challenge in India requires a nuanced approach that balances the preservation of anonymity for genuine expression with measures to prevent its misuse, fostering a safer and more respectful online space for all users.

  6. Undermining democratic processes

While there is ongoing research on the impact of technology on democratic processes in India and globally, empirical evidence and studies suggest several threats that technology poses to the country's democracy. Academic research, such as reports by the Centre for Internet and Society (CIS) in India, the Stanford Internet Observatory and the Oxford Internet Institute, has highlighted the proliferation of misinformation and disinformation campaigns on social media platforms. These campaigns, often driven by bots and coordinated efforts, manipulate public opinion, sow division, and undermine trust in democratic institutions. Additionally, research from the University of Maryland found that cyberattacks targeting electoral infrastructure and disinformation tactics during elections have increased significantly, posing serious threats to the integrity of democratic processes globally.

The use of social media and messaging apps to spread divisive content and incite violence during political events or communal incidents has also been well documented. The Cambridge Analytica scandal, which involved the unauthorized harvesting of personal data for targeted political advertising, further exemplifies how technology can be exploited to manipulate voter behavior and influence electoral outcomes. All these underscore the urgent need for increased transparency in digital political campaigning, and regulation to safeguard democratic processes.

  7. Employment, automation and right to work

A different dimension, but one that is equally relevant when discussing technological advancement and its impact on human rights - is the effect of automation. Automation significantly impacts the right to work by reshaping labour markets and altering the nature of employment itself. While automation introduces efficiency and innovation, it simultaneously poses challenges by potentially displacing human workers. As tasks are automated, certain jobs become obsolete, leading to workforce disruptions and job losses in specific sectors. The right to work, as envisioned, encompasses the right to gainful employment and fair conditions. However, automation's rapid advancement raises concerns about job security, income inequality, and access to suitable employment opportunities. It necessitates a reassessment of labour policies and the need for reskilling or upskilling programs to ensure that individuals can adapt to changing job requirements and continue to access meaningful work in an increasingly automated world. Balancing the benefits of automation with the protection of workers' rights becomes crucial in navigating this evolving landscape.

Take, for instance, our own line of work. Automation has substantially influenced the legal profession, introducing both opportunities and challenges. The integration of AI in legal practice has streamlined numerous tasks, such as document review, contract analysis, and legal research, enhancing efficiency and reducing the time spent on routine work. However, the implementation of AI in legal settings brings forth significant challenges. One major obstacle lies in the ethical and regulatory implications surrounding AI's decision-making capabilities, particularly in contexts where critical legal decisions are involved. Ensuring transparency, accountability, and fairness in AI algorithms presents a challenge due to the complexity of legal reasoning and the need for comprehensive understanding and interpretation of laws. With the increasing use of technology to assimilate and review contracts, legal documentation, etc., the focus, as ever, should be on the lawyer-client confidentiality requirement, and accommodations should be made for the exercise of human agency to take into account each client’s specific needs and offer the most responsible legal advice. Additionally, there is a concern about job displacement and the potential impact on the workforce within the legal sector as AI assumes certain tasks traditionally performed by lawyers. Striking a balance between leveraging AI's capabilities to augment legal work and preserving the core ethical and professional standards of the legal profession remains a crucial challenge in embracing automation in legal practice.

The practice of law and the traditional functioning of courts have much to gain from the use of technology. During the pandemic, we saw how useful video platforms were for online hearings, enabling most courts in the higher judiciary to function practically throughout lockdowns everywhere. The hybrid format continues to offer greater flexibility to lawyers and litigants. The availability of e-files also makes it easier for judges to read and prepare for hearings, and access to online databases and case law search platforms is useful not just for research - I have personally seen its functionality during hearings, in the courtroom itself. Technology can also be used in efficient ways to manage judicial dockets, identify priority case types for urgent listing and case management, and enhance access to justice. In all this, our policies must be considerate of litigants and lawyers who may not have access to such technology, either to make online filings or to appear virtually in the courtroom. Accommodations must be made for such contingencies, and the technology used must be as accessible as possible. Ultimately, technology must aid judicial functioning and accessibility, and not create ground for further exclusion.

  III. Conclusion: The Positives and the Way Forward

It would be unfortunate if the takeaway from my lecture today is that tech innovation and development has in general been bad for rights. It remains a space of immense opportunity, but one that requires careful consideration and regulation every step of the way. In many spheres, it has dramatically enhanced the protection of rights. Take, for instance, the use of big data in relation to environmental conditions. Cloud computing has meant that key trends can be analysed in advance, and steps can be taken before humanitarian disasters occur. Satellite imagery and weather prediction technology have allowed easier evacuations and the prevention of damage in the face of natural calamities. In fact, the Centre for Humanitarian Data, established in the Netherlands by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA), was set up with the objective of enhancing the use of data in humanitarian work.

The United Nations Secretary General’s Roadmap for Digital Cooperation has a succinct 11-point list. It encourages states to place human rights at the centre of regulatory frameworks and legislation on digital technologies; to provide greater guidance on the application of human rights standards in the digital age; to address protection gaps created by evolving digital technologies; to refrain from blanket internet shutdowns and the generic blocking and filtering of services; and to adopt human rights-based domestic laws and practices for the protection of data privacy. It also promotes clear and specific action to protect privacy rights and other human rights, the adoption and enhancement of safeguards relating to digital identity, the protection of people from unlawful or unnecessary surveillance, human rights-based laws and approaches to address illegal and harmful online content, the ensuring of safe online spaces, and accountable content governance frameworks that protect the freedom of expression.

International law is replete with provisions that extend protection to human rights affected by technology and the digital world. For instance, the Universal Declaration of Human Rights asserts the right to privacy as applicable to both physical and digital realms, and emphasises freedom of expression and the right to information. Similarly, the International Covenant on Civil and Political Rights (ICCPR) affirms the right to privacy, including protection against unlawful or arbitrary interference, and the right to seek, receive and impart information.

There is a growing acknowledgment of states' responsibility to ensure cybersecurity while upholding human rights, with measures that avoid disproportionate limitations on rights. These reforms and challenges are being faced by countries across the spectrum. For instance, in Australia, sweeping privacy law reforms are underway which include more nuanced definitions of terms like ‘personal information’ and higher standards for ‘consent’ that protect consumers from intrusive and manipulative data practices. A test that has garnered the support of privacy advocates is the ‘fair and reasonable’ test, which examines practices on the basis of substantive fairness.

Domestically, the courts have read these rights into those enumerated in the text of the Constitution. We also have legislation that governs these aspects, with the Information Technology Act, 2000 being the primary enactment. Under this Act, numerous Rules and Regulations tackle the various manifestations of technology in our lives. Earlier this year, the Digital Personal Data Protection Act, 2023 was passed, after various rounds of deliberation and in the backdrop of the Supreme Court’s decision in Puttaswamy, which recognised privacy as a fundamental right. The Act has come under criticism for the scope of its exceptions on grounds of national security, and will perhaps be the subject of further judicial scrutiny.

While there exist palpable concerns when it comes to automation and the rights of workers, there is no denying that it also has immense scope for increasing the efficiency of work itself. Automation has revolutionised various spheres of life for the better. I read recently that in Peru there now exists a solar-powered robot that plants seeds and helps with reforestation efforts in the Amazon rainforest. Its function is simply to automate highly repetitive tasks (like planting seeds), thereby accelerating and expanding operations. Reports indicate that, with about 20% of the Amazon rainforest already lost, conservation would come to a standstill without the use of technology. Examples such as these highlight the value of technology in enhancing our conservation efforts and facilitating the right to a clean environment that is guaranteed in international and domestic law.

As we navigate the integration of automation and technology into various facets of our society, it becomes imperative to ensure that this evolution augments, rather than undermines, the right to work and enhances the lives of workers. Striking a balance between technological advancement and the preservation of meaningful employment is essential in fostering a sustainable future. By harnessing automation to complement human capabilities rather than replace them, we can create environments where innovation amplifies productivity while empowering workers with new skills and opportunities. This necessitates proactive measures such as reskilling and upskilling programs that equip individuals with the tools to adapt to changing job landscapes. By championing a collaborative relationship between technology and the workforce, we can pave the way for a future where technological progress aligns with the enhancement of livelihoods, preserving the dignity and agency of workers in an increasingly automated world.

Similarly, there is an urgent need for ethical considerations, oversight mechanisms, and inclusive approaches in AI development, to mitigate the replication of biases and ensure fair and equitable outcomes in India's AI initiatives. The importance of incorporating disaggregated data into research and decision-making processes cannot be overstated. By doing so, society can create more equitable and inclusive systems that address data gaps relating to gender, class, geographical location, etc., and take proactive steps to rectify them.

All of this, however, will be the metaphorical cart before the horse if the State does not first prioritise efforts to increase access to the internet, equitably and fairly, for all sections of our society.

I would conclude this talk by quoting a genius who transformed our lives no less, Albert Einstein, who said: "It has become appallingly obvious that our technology has exceeded our humanity."

Another modern genius, Steven Spielberg, put it differently:

“Technology can be our best friend, and technology can also be the biggest party pooper of our lives. It interrupts our own story, interrupts our ability to have a thought or a daydream, to imagine something wonderful, because we are too busy bridging the walk from the cafeteria back to the office on the cell phone.”

The sooner mankind wakes up to this reality and keeps technology as a tool, rather than starting to serve it, the better.

This is the lecture delivered by Justice Ravindra Bhat, former Judge of the Supreme Court, at NUJS, Kolkata.

