Artificial Intelligence in the Justice System: Navigating the Nuances of Natural Language Processing and ChatGPT

The integration of artificial intelligence (AI) in the justice system has been a groundbreaking development in recent years. This year, there has been ample discussion around the implications of the use of AI—often without considering the nuances of the types of AI at play and the manner in which they are deployed.

Today, we’ll explore the differences between generative AI and other forms of AI, particularly natural language processing for speech-to-text technology. We will also delve into how these technologies are already impacting court proceedings and the wider justice system. Finally, we will evaluate the trends within the discourse relating to the deployment of AI within the justice space—and the need for further nuance within these discussions.

What is AI anyway?

Merriam-Webster defines AI as “the capability of computer systems or algorithms to imitate intelligent human behavior.” According to Microsoft, AI refers to “the capability of a computer system to mimic human cognitive functions such as learning and problem-solving. Through AI, a computer system uses math and logic to simulate the reasoning that people use to learn from new information and make decisions.”

As AI evolves and becomes increasingly sophisticated, its myriad applications are becoming more apparent. AI is generally divided into several overlapping subsets:

  • Machine learning
  • Deep learning
  • Robotics
  • Neural networks
  • Natural language processing

While the deployment of AI systems often transcends these subfields or categories, they do provide a useful framework for understanding the current state of AI and where it’s likely to go in the future.

Is AI already in use in the justice space?

Many legal technology solutions have been underpinned by AI in recent years—and likely will be for many years to come. For courts, AI has been deployed at various stages of judicial proceedings.

These deployments range from e-discovery and contract management solutions to timekeeping and scheduling tools. For instance, in the pre-trial period, attorneys have already been using AI tools for research and brief building, including Casetext’s CARA AI, which takes a legal brief or complaint as its basis and delivers customized results for the most relevant cases and case law based on the context of the research.

For judges, AI can also be deployed to automatically distribute and schedule cases. The recent adoption of legal automation programs, including self-help chatbots, has been found to remove some structural barriers to the justice system for self-represented litigants.

During deliberations, AI can be used to produce a summary of the facts of a case to aid in a final judgment. Additionally, modern speech-to-text tools increase the accessibility of audio/video recordings through real-time feeds, allowing judges and attorneys to immediately search the court record and revisit testimony as required.

Interestingly, these modern speech-to-text tools aren’t the first examples of AI being deployed in the court record capture process. Voice writing with a stenomask is a hybrid solution in which an individual “re-voices” everything they hear, and a speech recognition system that leverages AI then transcribes that audio—a practice that has been in use since the 1990s.

Generative AI versus natural language processing

A problem with the current discourse around the deployment of AI within the justice system is that it rarely distinguishes between the types of AI actually under discussion. Without defining the specifics of the AI, we lean into a general mistrust that may undermine its usefulness, without considering why we should be mistrustful in the first place. Without a clear understanding, we may hinder the future development of AI and, with it, meaningful progress.

When we talk about AI today, most people are referring to generative AI—even though we have been dealing with AI, both knowingly and unknowingly, in the justice system for many years.

Among the various forms of AI, generative AI has stood out as a powerful and transformative technology since ChatGPT’s wide public release in 2022. Generative AI describes systems (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos.

ChatGPT is a large language model trained on extensive amounts of text data, enabling it to generate human-like responses to user prompts. Ultimately, the tool reads input information and predicts probabilities—i.e., it predicts what a coherent and plausible human response to a given prompt (or query) would be, based on the vast input data the system has available.
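To make this concrete, here’s a minimal sketch, in Python, of the underlying idea: pick the most probable next word given the word that came before. The tiny “training corpus” and the resulting probabilities are invented for illustration; real large language models learn these relationships with neural networks trained on vast text collections.

```python
from collections import Counter, defaultdict

# A tiny, invented training corpus; real models learn from vast text collections.
corpus = (
    "the court is now in session . the court will hear the motion . "
    "the witness is sworn in . the motion is granted ."
).split()

# Count how often each word follows each other word (a simple "bigram" model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    best, freq = counts.most_common(1)[0]
    return best, freq / sum(counts.values())

word, prob = predict_next("the")
print(f"After 'the', the model predicts '{word}' (p = {prob:.2f})")
```

A real model conditions on far more than one preceding word, but the principle is the same: the output is the statistically plausible continuation of the input, not a verified fact.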

Speech-to-text technology, also known as automatic speech recognition, primarily relies on the machine learning and natural language processing subfields of AI. Where machine learning is concerned with algorithms that can effectively generalize and perform tasks without explicit instructions, natural language processing leverages models to process and understand existing human language. Applications of natural language processing include voice assistants, customer service chatbots, and speech-to-text software.

Speech-to-text systems analyze the audio input and make an informed estimate by prioritizing the most probable word. Contemporary applications such as FTR RealTime then take a second pass over the audio, evaluating the coherence of the initial word choice in relation to the preceding context.

Consequently, speech-to-text systems have evolved to weigh the nuances of language context, refining the final output through probabilistic processing.
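FTR hasn’t published its internal design, but the general two-pass pattern (generate candidate transcriptions, then rescore them against the surrounding language context) can be sketched roughly as follows. The candidate phrases, acoustic scores, and context scores here are all invented for illustration.

```python
# Hypothetical first-pass output: candidate transcriptions with acoustic scores.
# (The candidates and all scores are invented for illustration.)
candidates = [
    ("the witness was sworn in", 0.71),
    ("the witness was worn in", 0.74),   # acoustically plausible, linguistically odd
]

# A toy "language model": how plausible a word is after its predecessor.
# A real system would use a statistical or neural model trained on large corpora.
context_scores = {
    ("was", "sworn"): 0.9,
    ("was", "worn"): 0.1,
}

def rescore(sentence, acoustic_score):
    """Combine the acoustic score with a context score over the word sequence."""
    words = sentence.split()
    context = 1.0
    for pair in zip(words, words[1:]):
        context *= context_scores.get(pair, 0.5)  # neutral score for unseen pairs
    return acoustic_score * context

best = max(candidates, key=lambda c: rescore(*c))
print("Chosen transcription:", best[0])
```

The second pass is what lets the system prefer “sworn in” over the acoustically similar “worn in”: context outweighs the raw audio match.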

It’s a matter of accuracy

But these speech-to-text systems still aren’t 100% accurate—and they are unlikely ever to be. Similarly, certified transcripts produced by stenographers are highly unlikely ever to be entirely accurate, because they rely on a human—who is also susceptible to limitations—to interpret correctly what was said.

Furthermore, generative AI models are not trained to understand the “truth”; they’re trained to understand the relationships in language. If the training data is biased, incomplete, inconsistent, or ambiguous, the model will likely struggle to grasp the full context and will produce inaccurate or incomplete outputs.

So, if generative AI or large language models, speech-to-text systems, and stenographic transcripts all face accuracy limitations, how do we leverage these tools effectively? Aside from technology companies continuing efforts to diversify and enhance training data, there is another crucial component: a source of truth.

With large language models like ChatGPT, providing a reference text that you know to be true and using prompts for summarization or comparative analysis will yield more useful outputs than relying on the model for open-ended research, for instance.
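As a rough sketch of that grounding pattern, assuming the openai Python package (the model name and reference text below are placeholders, not a prescribed setup), the verified text is supplied alongside the instruction rather than left to the model’s training data:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# A reference text you know to be true (placeholder content for illustration).
reference = """
[Insert the verified transcript excerpt or document text here.]
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; use whichever model is available
    messages=[
        {"role": "system",
         "content": "Summarize only from the provided reference text. "
                    "If the answer is not in the text, say so."},
        {"role": "user",
         "content": f"Reference text:\n{reference}\n\nSummarize the key points."},
    ],
)
print(response.choices[0].message.content)
```

The instruction to answer only from the supplied text is what anchors the output to a source of truth instead of the model’s statistical guesswork.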

Consider FTR RealTime, which is up to 95% accurate. Its text output accuracy can easily be measured or confirmed with the synced audio and video recordings, which serve as the source of truth.
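One common way to quantify that accuracy is word error rate (WER): compare the system’s output against a reference transcript verified from the recording. A minimal sketch, with invented sample sentences:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with the standard edit-distance dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dist[i][j] = min(substitution, dist[i - 1][j] + 1, dist[i][j - 1] + 1)
    return dist[len(ref)][len(hyp)] / len(ref)

# Invented example: verified transcript vs. automatic output.
reference = "the motion to suppress is denied"
hypothesis = "the motion to suppress his denied"
print(f"WER: {word_error_rate(reference, hypothesis):.0%}")  # one error in six words
```

A claim like “up to 95% accurate” can only be tested this way because the synced recording makes a verified reference transcript possible in the first place.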

This ability to verify against a source of truth is a key difference from stenographically produced transcripts, which usually cannot be checked in this way because the stenographer has no multi-channel recording to refer to for verification.

Concerns around AI

Concerns about AI in courtrooms encompass various areas, such as diminishing human involvement in favor of automated decision-making and the generation of legal submissions by “robots”. These concerns also extend to issues of privacy, data rights, biases within input data sets, and security considerations, including the potential threat posed by deepfakes. In upcoming segments of our AI series, we will explore these concerns in detail. However, today our focus will center on the one that has been most prominently observed: the replacement of humans in legal processes.

Within the current discourse, AI is often raised in the context of judges being replaced by robots, with algorithms generating sentences in their place. While this is undoubtedly a development that should be analyzed and its impact evaluated, it’s important to note that AI-based decision-making is not the current norm—nor is it likely to become so any time soon.

It’s clear more nuance in this conversation is required. Speaking to the impact of AI on courts, Judge Scott Schlegel from the Louisiana Fifth Circuit Court of Appeal—who also chairs the Louisiana Supreme Court Technology Commission and is on the Advisory Council of the American Bar Association Task Force on the Law and Artificial Intelligence—recently noted in a webinar hosted by the National Center for State Courts, “I’m not sure that a judge should be using these tools when it comes to decision making…that to me is possibly an abdication of my judicial decision making process and my responsibilities.”

The current discourse doesn’t necessarily note that these technologies—and the majority of the ways in which they are deployed—are meant to assist and enhance human decision-making, boost efficiencies, and increase accessibility, rather than replace the critical role of legal professionals and judges.

Judge Schlegel agrees, suggesting other ways that tools such as ChatGPT can be useful without veering into an abdication of judicial decision-making, such as presenting the fact pattern and the conclusion to the tool so that it can summarize them or write up the opinion the legal professional has already reached on their own. He notes, “You are using these tools to enable you to increase your productivity and to do a better job.” As such, the way forward is clear: leverage these tools with a source of reference that you know to be true.

Narrowing the perspective: how does natural language processing AI differ from generative AI within the justice context?

This is where speech-to-text technology and natural language processing stand apart in the current discourse around the deployment of AI in the justice space. The critical difference lies in the purpose and potential impact of these AI technologies.

Natural language processing primarily assists in understanding and converting existing information. Generative AI, while remarkable in its capabilities, raises concerns due to its potential to create new, potentially unverifiable content, which could influence decision-making or perpetuate biases without easy oversight.

The aim of speech-to-text systems is to accurately process and transcribe what has already been spoken or written. Speech-to-text tools cannot predict any output if no input has been processed—i.e., they cannot “hallucinate” or generate text (inaccurate or otherwise) before an audio input is received.

As mentioned earlier, FTR RealTime processes the audio and then has two passes at generating the most probable output based on the context of the preceding sentences. Maintaining the link between the audio and video recordings and the speech-to-text output increases the transparency of proceedings because the converted source of truth remains accessible, clear, and verifiable.
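One way to picture that link is as a simple data structure in which each text segment carries the time offsets of the recording it was derived from, so any passage can be replayed and verified. This is a hypothetical sketch, not FTR’s actual record format:

```python
from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    """One speech-to-text segment, linked back to its place in the recording."""
    start_seconds: float   # offset into the synced audio/video recording
    end_seconds: float
    text: str

# Invented example segments.
record = [
    TranscriptSegment(132.4, 135.1, "Please state your name for the record."),
    TranscriptSegment(135.8, 137.2, "Jane Doe."),
]

def find_segments(record, keyword):
    """Search the text and return the recording offsets, so the audio
    (the source of truth) can be replayed to verify the transcription."""
    return [(s.start_seconds, s.end_seconds, s.text)
            for s in record if keyword.lower() in s.text.lower()]

for start, end, text in find_segments(record, "name"):
    print(f"{start:>7.1f}s-{end:.1f}s  {text}")
```

Because every segment points back into the recording, a disputed passage is never a dead end: the original audio can always be consulted.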

What’s next?

AI within the justice system isn’t new: various types of AI-based tools are already making a significant impact on processes, enhancing efficiency, accuracy, and access to legal resources.

In dealing with the accuracy limitations of generative AI, speech-to-text systems, and stenographic transcripts, the effective utilization of these tools involves not only ongoing efforts by technology companies to improve training data, but also the incorporation of a reliable source of truth.

The discourse surrounding AI in the justice system requires further nuance to ensure we do not perpetuate misunderstandings and thereby reduce the potential for progress. It’s important to note that not all AI is generative, and not all AI will end in an abdication of judicial decision-making.