Navigating A Janus-Faced Problem: Big Data, Rights And The State



Big data seems to have finally made it to mainstream political debate in India. The Kerala Government's decision to use the services of a data management company and the "Aarogya Setu" app launched by the Government of India have both been the subject matter of controversy and litigation. Concerns centre around data protection, the privacy of users and the potential for surveillance. These criticisms notwithstanding, scientists have made a strong case for the use of these advanced computational models, as they hold considerable promise for tracking the spread of the disease and for contact tracing. It is also argued that these models can aid in the gradual lifting of lockdowns, or in planning the logistics for the movement of men and medicines to places where the infection is expected to spread faster. We are then once again caught in the middle of a debate that is crafted as a classic one: the individual's right to privacy versus the greater welfare (of the greatest number) achieved by managing the pandemic.

Big data debates have often followed this paradigm. On one hand, there are a large number of promising advances that the use of big data has brought to the table for citizen welfare: AI and big data-based systems have been outperforming doctors at detecting several medical conditions. There are claims that they can increase access to credit by improving credit scoring mechanisms. Perhaps the most important of these uses relates to how they can improve government efficiency, especially when it comes to the delivery of social welfare. It has also been suggested that they can be deployed for fact-finding and to promote human rights. More governments are resorting to the use of data to develop predictive analytics to find new opportunities against crime (i.e., "predictive policing"). Overall, it is claimed that AI and big data have immense potential that can be harnessed for the public good. The United Nations has also recognized the value and potential in these technologies and has instituted a dedicated channel of work titled "Big Data for Sustainable Development". A report of the Independent Expert on the rights of older persons has pointed out the opportunities and challenges in deploying AI and automation in the care of older persons, and another by the High Commissioner for Human Rights reflects on the potential that these technologies hold to promote women's health.

On the other hand, critics have pointed out that big data and AI-based solutions impact individual human rights. AI systems ultimately require the collection and use of vast quantities of data, which in turn impacts the individual's right to privacy. It has also been observed that "AI can easily perpetuate existing patterns of bias and discrimination, since the most common way to deploy these systems is to 'train' them to replicate the outcomes achieved by human decision-makers", thus calling into question any improved outcomes that they promise. Researchers have also pointed out that there are risks of discrimination through the replication or exacerbation of bias in AI systems, particularly when it comes to 'predictive policing' methods.

These debates have also found their way into the Charter-based human rights system at the UN, especially the Human Rights Council. The Special Rapporteur on Extreme Poverty and Human Rights, Professor Philip Alston, has produced three reports that reflect on these themes: two of them after country visits to the United Kingdom and the United States (States that have been increasingly deploying big data-based solutions in their social welfare systems), and a dedicated report on the use of digital technologies in the welfare state, prepared through a consultative process in which States, academics and CSOs made submissions. The report covers human rights concerns across a range of issues – such as digital identification, automated programmes to assess eligibility and to calculate and pay benefits, and fraud prevention and detection using algorithms to score risks and classify needs.

Professor Alston observes that the "digital state" is an emerging reality, and that it is accompanied by significant reductions in welfare budgets through the elimination of services, the shrinking of the pool of beneficiaries and the imposition of more stringent conditions, many of which are intrusive in nature. More worryingly, these measures also include state goals of behavioural modification. Across these developments, he sees a more fundamental change in the making – a "complete reversal of the traditional notion that the State should be accountable to the individual". These measures are advertised as "scientific and neutral" and as meant to promote efficiency, prevent leakages and improve targeting. However, as he argues in his report, they often operate on values that are antithetical to human rights. He identifies a specific set of risks that arise in various contexts: putting vulnerable individuals at a greater risk of exclusion; dehumanizing the process and eliminating opportunities for clarification; and rolling out rigid systems that do not take into account the needs of particular sections of the population, that cannot respond to emergencies, and that impair the dignity of recipients by eliminating human values such as compassion. He also notes that the populations meant to receive these benefits are relatively more vulnerable, and are hence forced to accept forms of intrusiveness that better-off individuals would never have accepted. He has also reflected on the impact that these programmes have on the civil and political rights of individuals, specifically the real risk of beneficiaries being effectively forced to give up their rights to privacy and data protection in order to receive their right to social security.

While these concerns in the realm of individual human rights are important, perhaps the most significant of the dangers flagged by Professor Alston is the potential that technology now offers for behavioural modification and large-scale surveillance – both of which have the potential to alter the landscape of our polity as we know it. This is a good time to recollect the Cambridge Analytica scandal – an instance of how a private big data company was able to harness the power of tailor-made advertisements targeted on social media to influence the results of the Brexit referendum as well as the US Presidential election. Professor Michal Kosinski, who developed the basic psychometric techniques that were used by Cambridge Analytica, has posited that computer-based assessments of human personality are more accurate than those made by humans, and that with big data it becomes easier to make such assessments at scale. For instance, he notes that facial recognition can tell us what a person's politics and IQ are, that our voice can reveal a lot about our personality, and that even our physical movements (which our mobile phones can capture) can be similarly revealing. Such knowledge can then be harnessed to modify our behaviour, thoughts and attitudes. The proliferation of automated and semi-autonomous bots on social networks must be viewed in this context of attempts at large-scale modification of our thinking and consequent changes in our actions. The Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, in his report to the General Assembly on AI and its impact on freedom of opinion and expression, has acknowledged these concerns and has opined that "forced neurological interventions, indoctrination programmes…or threats of violence designed to compel individuals to form particular opinions or change their opinions" violate the human rights obligations imposed by the ICCPR.

The same data sets that we create for targeting welfare, making delivery more efficient, or contact tracing amidst the COVID-19 pandemic can thus be used for a variety of other purposes that impact our individual rights and the future of our political organization. Additionally, we now live in a world where private, corporate actors possess and use these powerful tools while making a case for lighter regulation so that innovation is not stifled. While the language and tools of human rights have been deployed to face these challenges, it is doubtful whether they alone can provide the answers that we seek. For instance, the responses to many of these concerns centre around calls for stronger data protection laws, greater individual control over data, a consent-based framework for the use of data, and action within human rights principles. Human rights are notorious for being too individualistic, and one is left wondering whether responses built entirely around the capabilities of individuals would be adequate to protect interests that span our social and democratic organization.

Perhaps it is time to move the conversation beyond the binary of group welfare (or other promises) versus individual rights, to how the deployment of these technologies impacts our collective democratic and political values. The conversation thus needs to move on from the language of a technical debate on efficiency in delivering welfare (and its potential impact on the rights of individuals) to one that touches on democracy and political decision-making at large. While the technology itself may be neutral, it is not immune from providing advantages to one or another form of political or economic organization – a factor that needs to be a part of any conversation that we have when we deploy it. As Professor Alston has observed, "digital welfare state technologies are not the inevitable result of scientific progress, but instead reflect political choices made by humans. Assuming that technology reflects preordained or objectively rational and efficient outcomes risks abandoning human rights principles along with democratic decision-making."

Mahesh Menon is an Assistant Professor at the Daksha Fellowship. Views are personal.
