AI Hallucinations And The Non-Delegable Responsibility Of Lawyers
Nehanshu Rao
25 March 2025 11:32 AM
The Bengaluru Bench of the Income Tax Appellate Tribunal recently made headlines for incorrectly passing an Order relying on imaginary judgements that were factually and legally inapplicable to the case at hand.
Of these four judgements – one from the Madras High Court and three from the Supreme Court – two were entirely non-existent, one carried a citation that led to an unrelated judgement, and the fourth was completely irrelevant to the legal and factual issues under consideration.
Although the Order has since been withdrawn on account of “inadvertent errors”, the episode has underscored a broader concern about placing blind reliance on AI in legal research and highlighted its propensity to generate convincing yet fictitious information to match user expectations – a phenomenon commentators have dubbed 'AI hallucinations'.
As the legal world strives to harmonize the multifaceted capabilities of AI with the rigorous demands of the profession, advocates and law firms must remain vigilant about AI hallucinations and the ramifications they may have for their duty towards clients, the bench and the bar.
What are AI Hallucinations?
AI hallucinations are “incorrect or misleading results”[1] generated by AI, typically caused by a variety of factors, including “insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.”[2]
In the context of the legal profession, AI hallucinations can, at the very least, lead to the following erroneous outcomes:
· Outcome 1: AI may accurately describe the law whilst simultaneously generating non-existent judicial precedents or fabricating fictitious case laws (which may or may not be accompanied by seemingly legitimate citations) that do not actually support its assertions.
· Outcome 2: AI may create new legal doctrines or principles or court procedures which do not exist or are inapplicable in a particular jurisdiction.
· Outcome 3: AI may incorrectly summarize case laws, statutes or subordinate legislations, thereby leading to misleading interpretations.
· Outcome 4: AI may incorrectly apply obsolete or repealed laws, or even correctly reference existing legal provisions but arrive at a flawed legal conclusion.
Notably, these outcomes are particularly evident in areas where the law is in constant flux, making it even more difficult for AI to provide reliable information.
A Stanford study[3] has identified three crucial challenges that inhibit AI from being an effective and trustworthy tool in legal research.
Firstly, many AI training datasets contain questions with clear and straightforward answers that can be found in the source database. Legal queries, however, rarely have direct or clear-cut answers: laws can be open to interpretation, different judgements may contain conflicting rulings, and legal reasoning often depends on the factual context of the case.[4] Deciding what information to retrieve from the dataset and what answer to generate is therefore a challenging task in a legal setting.
Secondly, most AI systems identify relevant documents based on “some kind of textual similarity”.[5] While this may work in other fields, in law it can lead AI systems to identify and generate answers derived from sources that appear textually relevant but are, in reality, “irrelevant or distracting”.[6]
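By way of illustration only, the short Python sketch below (not drawn from the Stanford study) shows how ranking documents purely by surface-level textual similarity can place a textually similar but legally irrelevant passage above a more pertinent one. The corpus, the query and the use of TF-IDF are all simplifying assumptions standing in for the embedding-based retrieval that a commercial legal research tool would actually perform.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-corpus: two snippets share many words with the query,
# but only the first and third bear on the actual legal issue.
corpus = [
    "Limitation period for filing an appeal under the Income Tax Act.",
    "Limitation period for filing a consumer complaint about a defective product.",
    "Principles governing condonation of delay in tax appeals before the Tribunal.",
]
query = "What is the limitation period for filing an income tax appeal?"

# Rank documents purely by surface-level word overlap (TF-IDF cosine similarity).
vectorizer = TfidfVectorizer().fit(corpus + [query])
scores = cosine_similarity(vectorizer.transform([query]), vectorizer.transform(corpus))[0]

for score, text in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {text}")

# The consumer-complaint snippet can outrank the (more pertinent) condonation-of-delay
# snippet simply because it repeats "limitation period for filing", a toy illustration
# of why textual similarity alone is a weak proxy for legal relevance.
```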
Thirdly, generating legal texts or summarizing legal documents is an intricate task which extends beyond simple text retrieval. Such activities require lawyers to rely on their pre-existing command over the relevant law to critically “synthesize facts, holdings, and rules from different pieces of text while keeping the appropriate legal context in mind.”[7] An effective AI system would be expected to emulate this nuanced analytical process in order to generate precise and reliable responses.
However, these challenges should not deter the legal community from embracing AI in their day-to-day activities. As I have previously argued, AI can be a formidable ally[8] – provided it is used in a responsible manner. This brings me to the next limb of the article, where I will examine the duty and responsibility of advocates and law firms in navigating the risks posed by AI hallucinations.
Responsibility arising out of AI Hallucinations
Indian Courts have not yet had the opportunity to adjudicate on AI hallucinations and their implications for the responsibility that advocates and law firms bear towards their clients and the Court.
However, the U.S. District Court for the District of Wyoming, in Stephanie Wadsworth and Matthew Wadsworth v. Walmart Inc. and Jetson Electric Bikes, LLC[9] (hereinafter referred to as “Stephanie”), recently passed an Order dated 24.02.2025 directly engaging with AI hallucinations and the corresponding duties of legal professionals.
Briefly put, the facts of the case are as follows: on 06.02.2025, the Court issued an “Order to Show Cause Why Plaintiffs' Attorneys Should Not Be Sanctioned or Other Disciplinary Action Should Not Issue” against three Respondents – R1, R2 and R3, all attorneys – for citing eight non-existent cases in their Motions in Limine.
In response, the Respondents, while collectively acknowledging that the cited cases were fictitious and the result of AI hallucinations, submitted as follows:
· R1, acting under the direction of R2, drafted the Motions while relying on their law firm's in-house AI platform, which generated the non-existent judgements. R1 did not verify the contents of the cited decisions.
· R2 had limited involvement in the drafting process, inasmuch as he merely suggested that a term used during their client's deposition be excluded from the final version of the Motions.
· R3, who was the local counsel in this matter, was not involved in drafting the Motions at all.
· Neither R2 nor R3 was provided a copy of the Motions to review prior to filing. Nonetheless, both affixed their e-signatures at the bottom of the Motions.
· R2 and R3 placed their trust in R1's reputation and experience, assuming that he had fulfilled the duty of conducting a reasonable inquiry into the facts and law.
However, the Court was not persuaded by these submissions. Instead, it held that the Respondents had a “non-delegable responsibility”[10] to read the document and “conduct a reasonable inquiry”[11] into the existing law. As a rule of thumb, attorneys cannot place blind reliance on one another; an attorney who permits another to sign a document on their behalf, without reviewing the facts or law contained therein, may be in violation of a “state's ethical rule of competence”.[12]
By neglecting to check the veracity of the AI-generated information, the Respondents not only squandered the Court's valuable time, but also wasted their client's time and money, ultimately depriving them of “their opportunity to file meritorious motions.”[13]
The Court further observed that such conduct causes “potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct.”[14] This in turn fosters cynicism about the legal profession and the judicial system, while creating a dangerous precedent of tempting future litigants “to defy a judicial ruling by disingenuously claiming doubt about its authenticity”.[15]
Consequently, R1's pro hac vice status (a temporary permission allowing an out-of-state lawyer to appear before a court in a jurisdiction where they are not otherwise licensed) was revoked and he was fined $3,000, while R2 and R3, who had little or no role in the drafting, were each fined $1,000.
Interestingly, the Court decided not to hold the Respondents' law firms jointly responsible for these omissions. The firm of R1 and R2 had presented evidence showing that it had never trained its employees to use its in-house AI system in the manner R1 had.[16] Additionally, the firm submitted that it had implemented a remedial safeguard requiring users to “independently verify any AI-generated information before using or relying on it.”[17] Conversely, R3's law firm was not held liable since R3 neither used AI in her practice nor participated in drafting the Motions.[18]
Although not a judgement authored by an Indian Court, Stephanie has brought much-needed clarity to the evolving discourse on the expanding responsibilities of the legal community in an era driven by AI.
As Indian law firms increasingly adopt in-house AI systems, and in light of the Supreme Court's recent emphasis on the non-delegable responsibility of an advocate-on-record to ascertain the veracity of the facts and laws cited in an SLP,[19] the observations made in Stephanie are just as relevant in India as they are in the United States. Thus, it would be prudent for legal professionals across the spectrum to be mindful of its implications and the responsibilities it underscores.
Views are personal.
[1] “What Are AI Hallucinations?” (Google Cloud) https://cloud.google.com/discover/what-are-ai-hallucinations accessed March 5, 2025.
[2] ibid.
[3] Magesh V and others, “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools” (2024) https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf accessed March 5, 2025, pg. 6.
[4] ibid.
[5] ibid.
[6] ibid.
[7] ibid, pg. 6-7.
[8] Rao N, “AI Washing In The Securities Market: Navigating Regulatory Challenges And Unlocking Potential” Live Law (November 1, 2024) https://www.livelaw.in/articles/ai-washing-securities-market-navigating-regulatory-challenges-unlocking-potential-274051 accessed March 5, 2025.
[9] Case No. 2:23-CV-118-KHR.
[10] ibid, pg. 8.
[11] ibid.
[12] ibid.
[13] ibid, pg. 13.
[14] ibid.
[15] ibid.
[16] ibid, pg. 16.
[17] ibid.
[18] ibid.
[19] Jitender @Kalla v. State (Govt. of NCT of Delhi) & Ors., 2025 INSC 249 [20-21, 44].