Posted by Mahi Patel.
Artificial Intelligence (AI) is transforming practically every sector and industry, with some businesses and professions being revolutionized more quickly and more profoundly than others. Unlike the industrial revolution, which mechanized physically demanding work and “replaced muscles with hydraulic pistons and diesel engines” (Stepka), the AI-powered era is automating cerebral activities. While AI may only be optimizing certain blue-collar jobs, it is also affecting many white-collar jobs that were once expected to be immune to automation. Some of these occupations are being fundamentally transformed by AI to accomplish things that were previously impossible, augmenting and even substituting for their human counterparts in the workplace. As a result, AI is having a significant impact on legal practice and procedure. AI is already being used to examine contracts, uncover important documents in the discovery process, and undertake legal analysis and research, though it is more likely to assist than replace attorneys for now. Still, continued advances in the technology make it a real possibility that AI could one day replace attorneys and lawyers. AI is already being used to help draft contracts, predict legal outcomes, and even advise judges on sentencing and bail, allowing for faster research and decision-making. The prospective applications of AI in law are substantial: it has the potential to boost attorney efficiency while eliminating costly errors. Nevertheless, AI is not yet prepared to replace human professional judgment in the practice of law.
One such application advising judges on bail decisions is Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). In several jurisdictions, criminal courts employ COMPAS and comparable AI tools to assess the recidivism risk of defendants or convicted people when making decisions about pre-trial detention, sentencing, or early release. The legitimacy and accuracy of these systems are fiercely disputed. A ProPublica investigation found that such assessment systems appeared to be biased against Black defendants, classifying them as considerably more likely to reoffend than white defendants. Equivant, the company that created COMPAS, disputed ProPublica’s findings and dismissed its claims of racial bias.
In order for AI to draft legal contracts, it will need to be trained like a qualified lawyer. This requires the AI’s developer to gather legal-performance data on many types of contract language (a process known as “labeling”). The AI is then trained on this labeled data to learn how to construct a sound contract. The legal performance of a contract, however, is frequently context-specific and varies by jurisdiction and its laws. Furthermore, because most contracts are never tested at trial, the effectiveness of their terms remains unknown and confidential to the parties. Generative AI systems trained on contracts therefore risk amplifying both excellent and poor legal drafting. As a result, it is difficult to see AI contract writers improving significantly anytime soon. AI tools simply lack the subject-matter understanding and linguistic precision required to function autonomously. While these tools may be helpful for drafting language, the result must still be reviewed by humans before it is used.
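To make the “labeling” workflow above concrete, here is a minimal sketch in Python. The clauses, labels, and scoring approach are all invented for illustration (a tiny bag-of-words naive Bayes classifier); real contract-drafting systems are far more sophisticated, but the basic idea is the same: an expert labels example language, and a model learns statistical patterns from those labels.

```python
# Toy sketch of the "labeling" workflow: an expert labels contract
# clauses, and a simple bag-of-words model learns from them.
# All clauses and labels here are invented for illustration.
from collections import Counter, defaultdict
import math

# Hypothetical labeled data: (clause text, expert's label)
labeled_clauses = [
    ("seller shall indemnify buyer against all claims", "enforceable"),
    ("party may terminate with thirty days written notice", "enforceable"),
    ("liability is unlimited forever regardless of cause", "risky"),
    ("buyer waives all rights to any remedy whatsoever", "risky"),
]

def train(examples):
    """Count words per label (a minimal naive Bayes fit)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Score each label by log-likelihood with add-one smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log(
                (word_counts[label][word] + 1) / (total + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, lc = train(labeled_clauses)
print(classify("buyer waives any remedy", wc, lc))  # → risky
```

Note how completely the model depends on the labels it was given: if the expert’s labels were wrong, or if “risky” clauses never appeared in the training set because they never reached a court, the model would confidently reproduce those gaps, which is exactly the limitation described above.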
Lawgeex, for example, offers a service that can review contracts more quickly and precisely than humans. Law firms across the United States use AI-powered discovery services from companies like CS Disco, which recently went public. Quick Check, an AI-powered feature of Westlaw Edge, analyzes a draft argument to surface additional insights or find important authority that may have been overlooked. Quick Check can even flag when a case you have cited has been overturned in a way that is not immediately apparent.
AI is a welcome instrument in the pursuit of justice as a way to make the legal process faster and freer of discrepancies. AI may be a more efficient means of deciding civil matters while also boosting consistency without posing a systemic risk. However, when AI is used to substitute for human judgment, especially in the context of criminal law, the stakes become far higher. For a variety of reasons, AI is not ready for this. For one thing, any bias in the training data will be reinforced and entrenched by the machine-learning (ML) models trained on it. Researchers may yet overcome this challenge; in fact, the process of removing distortion from our training data may lead us to recognize and address some of our legal system’s intrinsic racism and bigotry.
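The claim that models entrench training-data bias can be shown with a deliberately simplified sketch. The “historical” records below are invented: group B was flagged as high-risk more often than group A in the past, and a naive model that predicts the majority historical label for each group simply reproduces that disparity rather than correcting it.

```python
# Toy illustration of how bias in training data carries into a model.
# The records are invented: group B was historically flagged as
# high-risk more often, independent of the behavioral feature.
from collections import defaultdict

# (group, prior_offenses, historically_flagged_high_risk)
history = [
    ("A", 0, False), ("A", 1, False), ("A", 2, True), ("A", 0, False),
    ("B", 0, True),  ("B", 1, True),  ("B", 2, True), ("B", 0, False),
]

# A naive "model": predict the majority historical label per group.
flags_by_group = defaultdict(list)
for group, _offenses, flagged in history:
    flags_by_group[group].append(flagged)

predictions = {g: sum(v) / len(v) >= 0.5 for g, v in flags_by_group.items()}
print(predictions)  # → {'A': False, 'B': True}
```

Even though members of both groups have identical prior-offense profiles in this made-up data, the model flags group B and not group A, because that is what the historical labels taught it. This is the core of the ProPublica critique of risk-assessment tools discussed above.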
Mahi is a marketing major in the Stillman School of Business, Seton Hall University, Class of 2025.