Dr Keri Grieman is a research associate in the Centre for Commercial Law Studies at Queen Mary University of London and an associate member of the Department of Computer Science at the University of Oxford. She is a qualified lawyer in Ontario, Canada, and the author of Law, Death, and Robots: The Regulation of Artificial Intelligence in High-Risk Civil Applications.
______________________________
As the wheel of technological innovation spins ever faster, how will the judiciary be affected?
In critical applications of AI, the biggest questions lie in how systems are made, trained, tested, and used. AI is a field that combines mathematics, statistics, and computer science - often to do incredible things, but by math rather than magic. This is important, because it means that AI doesn’t have human motivation: to lie, to obfuscate, to deliberately confuse. However, it also means that AI has no actual understanding - of what we hold dear any more than of what we discard. AI does not understand the concepts of justice, fairness, or the rule of law. It is, however, very good at sounding like it does. This distinction is crucial: applications such as large language models are trained on vast datasets and complete a sentence by predicting the most probable next word or string of words. This can result in situations like the Avianca case, in which a lawyer who asked a large language model (LLM) for relevant cases was given cases that did not exist. While this was something the lawyer could have checked, it raises greater questions about what the use of AI means for the justice system.
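To make the ‘probable next word’ point concrete, the sketch below shows how a language model ranks possible continuations of a legal-sounding sentence. It is illustrative only, and assumes the open-source Hugging Face transformers library and the small, publicly available gpt2 model; the prompt is hypothetical. The point is that the ranking is purely statistical: nothing in it checks whether the continuation refers to a real case.

```python
# Illustrative sketch of next-word prediction (assumes the Hugging Face
# "transformers" library and the public "gpt2" model; the prompt is hypothetical).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The leading authority on this point is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# Convert the scores for the next position into probabilities
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  p={prob.item():.3f}")
# The model will happily continue with something that sounds like a citation,
# whether or not any such case exists.
```

The model sees only patterns of words; it has no register of real judgments against which to verify its output, which is why fabricated citations can look so convincing.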
Why does AI appeal to the justice system? For very valid reasons: it is a tool that promises to reduce workload and aid in the running of a system that is often overworked, overburdened, underfunded, and still exceedingly important. It also appeals for a variety of more concerning reasons: the idea that it is somehow an unbiased, neutral entity; the perception that it provides the purity and accuracy of mathematics with a capacity for decision-making immune from human flaws. As stated, AI doesn’t lie. That does not mean, however, that it is inherently correct. Large language models are the most likely to appear in and around the court, as they can create novel language-based content. However, they also have ‘hallucinations’, producing information that is factually incorrect or misleading. A model does not intend these results to be wrong, because it cannot intend anything: to be correct, incorrect, or some mixture of both.
There are three ways in which AI has the potential to touch the justice system and judicial integrity: in front of the court, by the court, and in examining the court. While there will inevitably be new applications of and approaches to AI, these are likely to be the broad categories that apply.
AI appearing in front of the court - i.e. as part of a matter before the court - will be an interesting progression in legal tradition. AI may function as an intermediary decision maker, and legal systems will have to develop ways to cope with this, including by finding a corpus of expertise on which to call in complex AI-related cases, to determine whether the AI has functioned adequately or not. While AI in front of the court will no doubt be a fascinating subject, it does not inherently impact the function of the court itself.
AI use by the court would mean the judiciary, or elements of it, incorporating AI. This can happen in a broad variety of ways, with different levels of potential impact: scheduling and other administrative applications, translation, and even judgments. Areas like scheduling are important - they affect the courts and the public - but potential errors, such as overlapping hearings, allotting too little time, or listing the wrong judge, are likely to become apparent quickly and are capable of rectification. Administrative errors occur without AI, and while ideally AI use will reduce such incidents, it is unlikely to change the level of harm that can occur. Such applications are unlikely to reduce judicial integrity.
Areas such as translation have a greater potential impact on the case: individuals’ ability to communicate is directly affected. The potential harms, however, are similar to those posed by human translators: that something is misheard or incorrectly translated. While translation tools should be heavily vetted for accuracy across all the languages they purport to cover, they do not radically change the type of harm that may occur. Use of AI in judgments, however, is a different story.
The judiciary of England and Wales has introduced cautious approval for judges to use AI in their work. It has stressed, however, that while AI may be carefully used in writing judgments, it should not be used for legal research or analysis, because of the potential for inaccurate or misleading information; nor should judges enter any private or confidential information into such tools. Taken at face value, this means that judges should have already reached their conclusions, based on independent legal research, before writing begins. This is not necessarily unreasonable. It does mean, however, that judges must be particularly careful about vetting the accuracy of AI-produced text: hallucinations and differences of language may, over time, alter the way judgments are written, even where that text is not correct. While judges are those best placed to, appropriately, ‘judge’ such AI-produced content, that content may subtly alter the shape of judicial language and judgments over time: judicial language may drift towards a more globalised register of communication, as the large language model draws on a huge variety of content rather than a localised dataset. Additionally, judges are human: while an initial distrust of, or heightened sensitivity to, differences in writing may be present, humans tend to fall victim to overtrust over time, checking a source less carefully the more they come to believe in its accuracy and helpfulness.
Lastly, AI comes into play in examining the court: by profiling the judiciary and its members. Legal technology companies have created applications that draw on datasets of a specific judge’s past decisions, allowing prediction of that judge’s choices and decisions in future cases. French law now bans such applications. Accurate or not, these AI applications may put a great deal of pressure on judicial integrity, directly or indirectly. It would be an unusual individual indeed who could face a statistical prediction of their decision and then act without being affected in any way. This is not to say that their decisions would necessarily change, but that they must immediately face doubt: have they been swayed to agree with their simulacrum? Or have they overcorrected to disagree with it? Would the reasons for their decision have been the same, or do they now reflect agreement or argument with the AI’s predictions? Ultimately, do they trust that the AI made the same decision they would have, and for the same reasons?
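For illustration, the sketch below shows the basic mechanics such profiling tools rely on: fit a statistical model to a particular judge’s past decisions, then output a probability for a new case. Everything here is hypothetical - the feature names, the figures, and the data bear no relation to any real judge or product - and it assumes the open-source scikit-learn library; real tools are far more elaborate, but the character of the output is the same: a precise-looking number with no reasoning attached.

```python
# Hypothetical sketch of a 'judicial analytics' predictor using scikit-learn.
# Feature columns (all invented): claim value in £k, whether the claimant is a
# company (1/0), and hearing length in days. Outcome: claimant won (1) or lost (0).
from sklearn.linear_model import LogisticRegression

past_cases = [
    [50, 1, 1],
    [200, 0, 3],
    [15, 1, 1],
    [400, 0, 5],
    [80, 1, 2],
    [120, 0, 2],
]
outcomes = [1, 0, 1, 0, 1, 0]

# Fit a simple model to one judge's (invented) past decisions
model = LogisticRegression().fit(past_cases, outcomes)

# Predict a new case before the same judge
new_case = [[90, 1, 2]]
prob_claimant_wins = model.predict_proba(new_case)[0][1]
print(f"Predicted probability the claimant wins: {prob_claimant_wins:.0%}")
```

The figure it prints is exactly the kind of statistical prediction a judge might be confronted with: confident in appearance, and silent on reasons.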
In summary, AI and the judiciary have an interesting future ahead of them, with some distinct choices to be made. Risk analysis must be undertaken, but with a critical eye on how the AI actually functions, the extent to which we understand how its outputs are produced, and, perhaps most importantly, how the decisions made by the AI may affect human decision makers, even when those decision makers are not replaced by AI.