Assessing The Reliability & Admissibility Of AI Expert Opinion Reports

Siddharth Singh

19 Aug 2024 8:16 AM GMT


    The integration of artificial intelligence (AI) into the workings of various professions continues to yield efficiency gains. However, the applicability of AI systems in the legal domain remains debated. One such debated aspect is the admissibility of expert opinion reports generated by AI. Under the Indian Evidence Act, 1872, courts rely on the opinion of experts on issues concerning fields of science, art, and other areas in which the court lacks expertise. An expert who gives an informed opinion is not required to hold any particular qualification or degree; rather, the expert must possess the skill to determine the issue at hand. There have, however, been instances where an AI-generated report was presented as an expert report in a court of law. One such instance is People v. Wakefield before the New York Court of Appeals. A DNA analysis was required and was carried out by TrueAllele, an AI software that uses statistical methods to perform DNA analysis. The defendant raised concerns against the AI expert report produced by TrueAllele, and the court addressed them.
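
    To make the statistical core of such a tool concrete, the sketch below illustrates the likelihood-ratio reasoning that probabilistic genotyping software of this kind is generally understood to rest on. The single-locus numbers and the simplified model are illustrative assumptions for this piece, not TrueAllele's proprietary implementation, which models DNA mixtures with far more elaborate Bayesian statistics.

    ```python
    # A minimal sketch of the likelihood-ratio logic that probabilistic
    # genotyping tools report. The figures below are hypothetical; the
    # actual TrueAllele model is proprietary and considerably richer.

    def likelihood_ratio(p_evidence_given_prosecution: float,
                         p_evidence_given_defence: float) -> float:
        """LR = P(evidence | suspect is the source) /
                P(evidence | an unrelated person is the source)."""
        return p_evidence_given_prosecution / p_evidence_given_defence

    # Hypothetical single-locus example: the suspect's genotype matches
    # the crime-scene profile (probability 1 under the prosecution
    # hypothesis), while that genotype occurs in roughly 1 in 1,000
    # people in the relevant population (defence hypothesis).
    lr = likelihood_ratio(1.0, 0.001)
    print(f"Likelihood ratio: {lr:,.0f}")  # -> Likelihood ratio: 1,000
    # An LR of 1,000 means the evidence is 1,000 times more probable if
    # the suspect is the source than if an unrelated person is.
    ```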

    With the emergence of trained AI models, there may be a future wherein AI models give their expert opinions to assist courts of law, particularly in adjudicating issues with greater efficiency. Integrating such technology may enhance the efficiency of the Indian justice delivery system. However, such technology comes with contingencies and issues of reliability that must be addressed. In this piece, I analyse whether AI possesses the requisite 'skill' to give an expert report, what factors must be considered while adjudging the admissibility of an AI expert report, and whether using AI to obtain an expert opinion actually makes adjudication by courts of law more efficient.

    Breaking Down AI's Expertise

    In order to determine the admissibility of AI's expert opinion, it is crucial to understand the process and workings of an AI model. Most AI models work by being trained on a dataset using defined principles and methodologies. The AI model processes the provided dataset through its coded methodologies in order to yield results. The accuracy of such results is heavily dependent on the validity and correctness of the dataset provided, the methodologies used by the AI model, and the reliable application of those methodologies to the relevant facts. This forms the premise of the admissibility of an AI model's expert opinion. In order to rely upon an AI expert opinion, it is pertinent to understand the kind of data and the computational methodologies that the AI model has used to arrive at that opinion.
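
    To illustrate this dependence, the toy sketch below (which stands in for no particular expert system) applies an identical methodology, a simple nearest-centroid classifier, to a clean dataset and to the same dataset with a single mis-entered value. The methodology is unchanged, yet the conclusion flips, which is precisely why a court must scrutinise the data as well as the method.

    ```python
    # A minimal, hypothetical sketch: same methodology, different data,
    # different conclusion. Not drawn from any real forensic system.
    from statistics import mean

    def train_centroids(samples):
        """Methodology: compute the mean measurement per class label."""
        by_label = {}
        for value, label in samples:
            by_label.setdefault(label, []).append(value)
        return {label: mean(vals) for label, vals in by_label.items()}

    def classify(value, centroids):
        """Assign the label whose centroid lies nearest to the value."""
        return min(centroids, key=lambda lbl: abs(value - centroids[lbl]))

    clean = [(1.0, "A"), (1.2, "A"), (0.9, "A"),
             (5.0, "B"), (5.3, "B"), (4.8, "B")]
    # The same records, except one value mis-entered as 10.0 instead of 1.0:
    corrupted = [(10.0, "A")] + clean[1:]

    for name, data in (("clean", clean), ("corrupted", corrupted)):
        print(name, "->", classify(4.5, train_centroids(data)))
    # clean     -> B   (correct)
    # corrupted -> A   (identical methodology, one bad datum, wrong opinion)
    ```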

    To adjudge the admissibility of an AI expert opinion, the court must go through the methodology and satisfy itself that the computation follows a logical and scientific process. This was emphasised in Daubert v. Merrell Dow Pharmaceuticals, Inc., wherein it was stated that “the focus must be solely on principles and methodology, not on the conclusions generated”. However, if the court is unable to understand the computations that reached a certain conclusion from the provided dataset, or if “there is simply too great an analytical gap between the data and the opinion proffered”, as held in General Electric Co. v. Joiner, then the AI expert opinion should be excluded and set aside. While admissibility is to be adjudged by a court of law, the AI models must disclose their methodology and computational processes with utmost transparency. This may attract the contention of keeping methodologies secret and the surrounding debate on 'closed-source software'. Although this issue warrants a dedicated discussion of its own, it is briefly addressed below.

    As discussed above, the reliability of the computational methodology used is crucial and must be examined in detail to determine the admissibility of the AI opinion. However, the dataset, i.e. the facts and other relevant data, must also be sufficient and relevant to the context of the issue at hand. Even if a court concludes that the computational methodology of the AI model follows a scientific process, any defect or incorrectness in the provided data or facts will lead to incorrect outcomes. This would seriously undermine the admissibility and validity of the AI opinion, and the court would have to exclude such a conclusion. For this reason, a court must also determine the reliability and validity of the data provided, thereby ensuring the legitimacy of the outcome. After ensuring the validity of the methodology used and the facts provided, a court must ascertain whether those methods and principles were validly applied to the given facts. The context in which the issue arises is crucial, and the application of the methods to the contextual facts is necessary for a reliable conclusion. As mentioned above, there must not be “too great an analytical gap” between the applied methods and the facts from which the conclusion is drawn; otherwise, the conclusion will be excluded.

    Legislative Intervention

    Considering India's digital revolution combined with its expertise in software technology, AI is being integrated at a significant pace. Although statutes like the Information Technology Act, 2000 exist, the widespread application of AI may require the legislature to enact an altogether separate law to regulate it. The European Union has already acted on this by introducing the AI Act, which regulates the use of AI in Europe. The Act requires that AI systems allow “appropriate traceability and explainability”, thereby ensuring transparency. This addresses the contention of 'closed-source software' in the context of presenting computational methodology in a court of law for assessment.
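
    As a rough illustration of what such traceability could look like in practice, the sketch below logs every model invocation together with its inputs, output, and model version, so that a conclusion can later be reconstructed and audited. The function names and the log format are assumptions made for illustration, not requirements drawn from the text of the Act.

    ```python
    # A hypothetical sketch of traceability-oriented record-keeping:
    # every prediction is appended to an audit log with enough context
    # to reconstruct how the conclusion was reached.
    import json
    import time

    def predict_with_audit_log(model_version, inputs, model_fn,
                               log_path="audit.log"):
        """Run the model and append a reconstructable record of the event."""
        output = model_fn(inputs)
        record = {
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return output

    # Hypothetical usage with a trivial stand-in model:
    result = predict_with_audit_log(
        "v1.0", {"locus_match": True},
        lambda x: "inclusion" if x["locus_match"] else "exclusion")
    print(result)  # -> inclusion, with the full event recorded in audit.log
    ```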

    Similar to the AI Act of the European Parliament, the Indian government must take proactive steps to formalise the regulation of AI in India. Such a law must deal with the issues posed by AI and implement checks on its methodologies to foster reliability. Such regulation will also legitimise the use of AI systems in various areas, including the legal system and, particularly, courts of law.

    Regardless of how significantly AI has developed and integrated into the world around us, its application in the legal system must be grounded in, and limited by, current legal jurisprudence. In this particular case, the AI expert opinion must be based on the rules of evidence and the current Indian law of evidence. The reason can be attributed to the fact that AI systems are still at a rudimentary stage. They do not possess the expertise to determine the nuanced aspects of a legal case such that judges could rely on them without assessing their reliability. AI systems are limited in their functioning and reliability, and this requires a court to determine whether the AI expert report has used valid methodology and a valid set of facts, perhaps with the help of another expert. One of the intentions behind integrating AI is to enhance the efficiency of the process. However, AI systems at their current rudimentary stage are in no position to be relied upon on their own. In a court of law, either a judge or another expert will be required to determine the validity of the AI expert report. This poses questions about the efficiency of AI expert opinions. Until reliable, advanced AI systems emerge, the current technology will remain surrounded by doubt.

    Views are personal.

