AI Washing In The Securities Market: Navigating Regulatory Challenges And Unlocking Potential

Update: 2024-11-01 05:39 GMT

Change is the only constant. Mankind is all too familiar with this saying. After all, our insatiable curiosity and boundless ambition have continued propelling us towards new frontiers that redefine the contours of our world. Now, with the burgeoning integration of artificial intelligence ('AI') into our lives, we once again stand at the threshold of a frontier that promises to bring about tectonic disruptions in established business norms.

Such a bold assertion is not unfounded; AI optimizes operations, equips corporations with a distinct competitive advantage by quickly processing and analysing data, and enhances predictions that uncover 'hidden' market trends. These abilities, when set against the dynamic nature of the securities market, make AI a formidable tool. It is therefore unsurprising to see the financial world harness its potential to enhance risk management and fraud detection, provide investment advice, facilitate algorithmic trading, and more.

Yet, the rapid strides in aligning AI with the financial world have been accompanied by an increasingly disturbing practice of firms 'riding' the AI-frenzy wave by making unsubstantiated claims about their product's or service's affinity and affiliation with AI. This in turn captures the attention of the average retail investor, who then proceeds to blindly invest in such firms.

This behaviour, driven by a malicious intent to profit from the AI craze, alarmingly resembles the infamous dot-com bubble, where investors, enamoured with the unprecedented exponential growth of the IT sector, made irrational investments in IPOs at inflated valuations, without truly understanding the viability and profitability of such companies' business models.

To avoid repeating this mistake, it is imperative to recognize the need for pre-emptive measures to address this unscrupulous behaviour, which commentators have labelled 'AI washing'. This article attempts to shed light on the evolving discourse surrounding AI washing and briefly examines recent developments that have a direct bearing on regulating this behaviour.

What is AI Washing?

The term 'AI washing' is derived from the expression 'greenwashing'; it refers to the practice of firms making exaggerated, false or misleading claims about the use of AI in their products or services.

Amazon's flagship program, 'Just Walk Out',[1] is said to be an example of AI washing. Amazon claims that Just Walk Out lets customers shop in its stores without having to stand and wait in a queue, with AI-based sensors automatically billing customers for the products they select. However, a report revealed that, rather than relying solely on AI, Just Walk Out employed roughly 1,000 workers in India who manually reviewed around 700 out of every 1,000 sales.[2]

Amazon was quick to term the report “erroneous”, asserting that the use of human reviewers in high-accuracy AI systems is common.[3] However, its inflated claims about the program's capabilities showcase the willingness of even established titans to capitalize on the AI obsession.

What is fueling AI washing?

The persistence of AI washing stems from the absence of a globally accepted definition of AI. This void emboldens firms to continue making outlandish claims of AI usage, since there are no legislatively prescribed parameters against which such claims can be tested.

Now, calling for the codification of an AI definition is easy. The difficult part is drafting a comprehensive definition that is flexible enough not to impede future innovation and competition, is in consonance with technological neutrality (giving equal treatment to technologies with equivalent capabilities) and does not need constant amendment. For starters, AI is an umbrella term that encompasses a wide variety of technologies such as machine learning ('ML'), deep learning and neural networks, which in turn are subsets of one another.

Secondly, AI mirrors defining aspects of technology already in use. To put things into perspective, one can draw a parallel between ML and a calculator. In ML, programmers gather relevant data and use it to train a machine learning model to either explain what has happened, predict what can happen, or suggest what action can be taken.[4] The better the quantity, quality and precision of the data input, the better the model. This is no different from how we use a calculator, where the final output depends entirely on the numerical input.
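To make this gather-data, train, predict cycle concrete, the short sketch below is a purely illustrative example, assuming Python and the scikit-learn library (neither of which is referenced in the sources cited here) and entirely hypothetical figures:

# A minimal, illustrative sketch of the train-then-predict workflow described above.
# The data and library choice are assumptions for illustration only.
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: advertising spend (input) and resulting sales (output).
ad_spend = [[10], [20], [30], [40], [50]]   # data gathered by the programmer
sales    = [25, 45, 65, 85, 105]            # observed outcomes used for training

model = LinearRegression()
model.fit(ad_spend, sales)                  # 'training' the model on past data

# The trained model can now predict what may happen for a new input,
# much as a calculator's output depends entirely on the numbers fed in.
print(model.predict([[60]]))                # predicted sales for a spend of 60

The quality of the prediction is only as good as the data fed in, which is precisely the point of the calculator analogy.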

Working towards a regulatory regime that defines AI

Notably, many international organisations, such as UNESCO,[5] the EU,[6] the Council of Europe[7] and the OECD,[8] as well as countries such as the USA[9] and Canada,[10] have proposed or introduced influential definitions of AI.

While there is a lack of uniformity amongst these definitions, most of them acknowledge that an AI system is a “machine-based system” with “varying levels of autonomy” that can, based on data inputs, generate recommendations, predictions or decisions to “influence real or virtual environments”.

Pertinently, these definitions have broad connotations. They have been framed whilst considering the impact AI has/will have on a spectrum of issues such as human rights, ethics, health, racial equality, innovation, consumer rights, etc. Given AI's multifaceted applications across a wide range of fields, such definitions are indeed warranted.

Accordingly, it may be more suitable not to place this complex responsibility upon SEBI – a task which requires careful calibration and consideration of sectors beyond its purview. Instead, I submit that the relevant Central Government agency ('CGA'), perhaps the Ministry of Electronics and Information Technology, should come out with a 'primary legislation' laying down a broad framework that defines and regulates AI.

The relevant CGA, by virtue of its overarching position of authority, possesses the resources required to undertake extensive consultation with the public, industry experts, regulators and other material stakeholders, thereby enabling the creation of a definition that harmoniously aligns with the socio-economic and legal realities of India.

Thereafter, SEBI can prepare a 'secondary legislation' that is 'quasi-complementary' to the primary legislation: while the overarching definition of AI and other related concepts prescribed by the primary legislation remains constant, the secondary legislation functions as lex specialis, addressing specific nuances within its subject-matter field.

Such a framework would give the regulator sufficient leeway to flexibly deal with instances of AI washing and aid in reconciling the definition of AI with the area-specific demands of the securities market.

Adoption of a risk-based regulatory regime

Additionally, in alignment with global practices, the authorities can consider formulating these laws with a focus on risk-based regulation. A risk-based regulation approach is a form of legislative governance where the enforcement of the law is “calibrated” to proportionately tackle actual, concrete risks.[11] This process involves three stages: risk assessment and categorization, impact assessment and risk management.

The first stage involves assessing and categorizing the various risks AI poses or may pose. This enables users and providers of AI to prioritize their efforts and allocate resources effectively, in proportion to the risk posed by the AI systems they use. The second stage goes one step further and seeks to evaluate the wider impact AI systems have on various stakeholders. Finally, the third stage involves determining, evaluating and ranking the various risks pertaining to AI, as well as putting policies in place to reduce, track and manage the possibility of unforeseen events.

The EU's Regulation (EU) 2024/1689[12] ('Act') provides a clear and concise outline of this approach. The drafters of the Act have assessed the various risks posed by AI and categorised them into four risk categories: unacceptable risk (prohibited under Chapter II of the Act), high risk (regulated under the provisions of Chapter III), transparency risk (governed by Chapter IV) and minimal risk (no regulation).
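For illustration only, the tiered logic described above can be pictured as a simple lookup; the Python sketch below paraphrases the Act's structure in plain terms and is not drawn from its text:

# Purely illustrative mapping of the four risk tiers described above to the
# regulatory treatment attached to them (a sketch, not legal text).
RISK_TIERS = {
    "unacceptable": "prohibited (Chapter II)",
    "high":         "regulated - risk-management and compliance duties (Chapter III)",
    "transparency": "disclosure obligations (Chapter IV)",
    "minimal":      "no specific regulation",
}

def treatment(tier: str) -> str:
    """Return the regulatory treatment for a given risk tier."""
    return RISK_TIERS.get(tier, "unknown tier - classify before deployment")

print(treatment("high"))

The point of such a structure is that the compliance burden scales with the assessed risk, rather than applying uniformly to every AI system.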

Thereafter, Article 9 of the Act mandates providers of high-risk AI systems to carry out an impact assessment to identify and analyse any “reasonably foreseeable risks” posed by their system that may be “reasonably mitigated or eliminated”, and accordingly, adopt “appropriate and targeted” risk management measures to address the identified risks.

Such a system would allow for more efficient resource allocation, while also empowering service providers – who know their systems best – to implement management measures against all material threats they deem probable. If these measures fail due to any misrepresentation or omission, the providers can be held accountable qua caveat venditor.

Key Domestic and International Developments

Internationally, regulatory action against AI washing has started to gain momentum. On 18.03.2024, the U.S. securities market regulator, the Securities and Exchange Commission ('SEC'), became the first regulator in the world to act against Delphia[13] and Global Predictions Inc.[14] for engaging in AI washing.

Delphia, an investment adviser ('IA'), was accused of making false and misleading statements in its regulatory filings, advertisements and social media posts regarding the use of AI to provide personalized investment advice to clients based on their spending history and social media data. Global Predictions, another IA, was accused of making false and misleading claims regarding its status as the “first regulated AI financial advisor”. Both firms agreed to settle the SEC's charges and pay a combined $400,000 in civil penalties.

On the domestic front, SEBI appears to be closely monitoring the use and misuse of AI in the securities market. In 2019, it issued two Circulars seeking information regarding the use of AI/ML in the applications and systems used by Mutual Funds[15] ('MFs') and Market Intermediaries[16] ('MIs'). MFs and MIs were directed to fill out a quarterly questionnaire detailing the specific areas in which AI/ML has been used, past or pending claims against these systems, safeguards implemented to mitigate any abnormal behaviour of AI/ML, and so on.

Thereafter, in August 2024, SEBI floated a consultation paper[17] seeking a review of the regulatory framework governing IAs and Research Analysts ('RAs'). The paper emphasized that IAs/RAs utilizing AI to provide financial services must fully disclose the extent of their reliance on such tools to clients and prospective clients, allowing them to make an informed decision on whether to continue availing the services. Concerns were also raised about AI tools potentially compromising client data security and failing to meet clients' personal needs or financial goals.

More recently, in its Board Meeting[18] dated 30.09.2024, SEBI clarified that IAs/RAs would be held responsible for the use of AI in their services, irrespective of the scale and scenario of its use.

These developments suggest that SEBI is likely to introduce amendments to existing regulations, or potentially unveil an entirely new AI-centric regulation, aimed at curbing the rise of AI washing in the securities market.

The AI age is here. Yet for its full potential to be realized, steps must be taken to build a sense of trust and credibility around it. Unfortunately, the current regulatory gap (as of 24.10.2024) has encouraged companies and start-ups to engage in AI washing. This not only misleads investors and erodes their confidence in AI, but also hinders the development and integration of truly transformative innovations into our daily lives.

SEBI's acknowledgement of AI washing in the market is encouraging. However, there is a pressing need to implement a legal framework to regulate such practices. Doing so will instill a sense of comfort for investors in the Indian securities market and allow market participants to responsibly utilize AI tools without being held back by legal uncertainty.

In the interim, it would be prudent for investors to exercise caution and conduct their own due diligence when dealing with AI products and services. Moreover, IAs, RAs and MFs must be mindful of their advertisements, disclosures and any other representations made to the public, ensuring accurate and truthful depictions of AI's role in their products and services.

In this regard, investors may refer to the guidance note[19] released by the USA's antitrust and consumer protection regulator, the Federal Trade Commission ('FTC'). While not legally applicable to Indian markets, the note lists critical questions regarding false and misleading AI claims that I encourage all market participants to reflect upon. To conclude, when dealing with the 'unknown', transparency is key; all stakeholders must be mindful of this.

Views are personal.


[1] (Frictionless checkout – just walk out technology – amazon web services) accessed 25 October 2024.

[2] Bitter A, 'Amazon's Just Walk out Technology Relies on Hundreds of Workers in India Watching You Shop: Business Insider India' (Business Insider, 3 April 2024) accessed 25 October 2024.

[3] Kumar D, 'An Update on Amazon's Plans for Just Walk out and Checkout-Free Technology' (About Amazon, 17 April 2024) accessed 25 October 2024.

[4] Brown S, 'Machine Learning, Explained' (MIT Sloan, 21 April 2021) accessed 25 October 2024.

[5] 'Recommendation on the Ethics of Artificial Intelligence' (Unesdoc.unesco.org, 2022) accessed 25 October 2024.

[6] (Regulation (EU) 2024/1689 of the European Parliament and of the Council, 13 June 2024) accessed 25 October 2024.

[7] (Draft framework convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law) accessed 25 October 2024.

[8] 'Recommendation of the Council on Artificial Intelligence' (OECD Legal Instruments, 22 May 2019) accessed 25 October 2024.

[9] 'Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence' (The White House, 30 October 2023) accessed 25 October 2024.

[10] 'Directive on Automated Decision-Making' (Canada.ca, 24 August 2017) accessed 25 October 2024.

[11] Ebers M, 'Truly Risk-Based Regulation of Artificial Intelligence - How to Implement the EU's AI Act' (SSRN, 26 June 2024) accessed 25 October 2024.

[12] Regulation (n 6).

[13] Release No. 6573 / March 18, 2024.

[14] Release No. 6574 / March 18, 2024.

[15] Sharma J, (Reporting for Artificial Intelligence (AI) and machine learning (ML) applications and systems offered and used by mutual funds, 9 May 2019) accessed 25 October 2024.

[16] Bandyopadhyay D, (Reporting for Artificial Intelligence (AI) and machine learning (ML) applications and systems offered and used by market intermediaries, 4 January 2019) accessed 25 October 2024.

[17] 'Consultation Paper on Review of Regulatory Framework for Investment Advisers and Research Analysts' (SEBI, 6 August 2024) accessed 25 October 2024.

[18] 'SEBI Board Meeting' (SEBI, 30 September 2024) accessed 25 October 2024.

[19] Atleson M, 'Keep Your AI Claims in Check' (Federal Trade Commission, 27 February 2023) accessed 25 October 2024.

