Judges in England and Wales have been given approval for “careful use” of artificial intelligence (AI) to help produce rulings, but experts remain divided over how extensively judges, or the wider legal profession, should use the technology.

“I would say AI is probably appropriate to cast a wide net to gather as much information as possible,” William A. Jacobson, a Cornell University Law professor and founder of the Equal Protection Project, told Fox News Digital. 

“That might inform your decision, but I don’t think it is at a place now – and I don’t know if it ever will be – that it can actually do the sorting … and make the sort of decisions and determinations that you need to make, whether it’s as a judge or a lawyer,” Jacobson said. 

The Courts and Tribunals Judiciary, the body representing judges, magistrates, tribunal members and coroners in England and Wales, decided that judges may use AI to help write opinions, and only opinions, with no leeway to use the technology for research or legal analysis, because AI can fabricate information and produce misleading, inaccurate and biased output.


Caution over AI’s use in the legal field partially stems from a few high-profile blunders in which lawyers experimenting with the technology submitted court filings that cited fictional cases, fabrications known as “hallucinations.”

Attorney Steven A. Schwartz, who filed a case on behalf of an injured client against the Colombian airline Avianca, used the AI platform ChatGPT to hunt for legal precedents; when he could not find the cited cases himself, he assumed the platform had access to sources he did not. Schwartz admitted he had no idea that the platform could invent cases.

More recently, former President Trump’s onetime lawyer, Michael Cohen, admitted to filing a motion that included fake legal cases generated by Google’s AI platform Bard, a service he believed was simply a “supercharged” search engine.

Despite these serious blunders, AI experts remain confident that the platforms can be used reliably as long as humans remain squarely in the frame and in considerable control of the process.


“This is going to be a problem that is going to persist in this generation of the technology for a long time,” Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation, told Fox News Digital.

“Given the rules that [the government] put around it, the way I would characterize what the judges will have at their disposal … you should treat it like a superintelligent teenager with no judgment,” Siegel said. “You can ask it to write your brief for you, but if you don’t check the work and you don’t make sure that it came back with everything being accurate – with cases being referred to that are real and so forth – then you are going to continue to get gibberish and it’s going to be a problem.”

Siegel said putting the responsibility on judges will push them to go the extra mile if they use AI, likening the arrangement to the familiar clerk system, in which knowledgeable but inexperienced law students or fresh law school graduates often draft opinions for a judge.

“If you’re a judge that has a law clerk with no experience and you don’t check their work, I think you get what you deserve,” Siegel said. 


Michael Frank, a senior fellow with the Wadhwani Center for AI and Advanced Technologies, said the technology has already cut down on the volume of hallucinations in AI models, even as he acknowledged that, given the impact and weight of legal decisions, the public would likely accept nothing short of near-zero hallucination rates.

Citing a recent hallucination rate of 3% in GPT-4 as “really, really good,” Frank told Fox News Digital that “obviously, for something as sensitive as sentencing and writing legal opinions, that’s still too high, and essentially there is always going to be this problem with this paradigm of AI.”

“The way these models work is they predict the next word in a sequence, and they’ve gotten very good at doing that,” Frank said. “But we cannot pinpoint exactly why these models come to the conclusion that they come to. It’s technically impossible as technology exists today.” 
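The next-word prediction Frank describes can be illustrated with a deliberately simple sketch. The toy below is a bigram counter, not a real language model (actual systems use neural networks over subword tokens and a vastly larger corpus), but it shows the core idea: given the current word, predict the word that most often followed it in training data. The corpus and function names here are illustrative inventions.

```python
# Toy illustration of next-word prediction: a bigram model that counts,
# for each word in a tiny corpus, which word most often follows it.
# Real AI models learn these statistics with neural networks instead.
from collections import Counter, defaultdict

corpus = "the court issued the ruling and the court upheld the ruling".split()

# Count how often each word is followed by each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("issued"))  # "the" is the only word ever seen after "issued"
```

A model like this can only report what its training data made statistically likely; when asked about a word it has never seen, it has nothing grounded to say, which is the root of the hallucination problem at a much larger scale.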

But Frank said the technology has improved enough to give the English judiciary confidence in some uses. And, as in other industries, AI will help shoulder labor-intensive tasks and address the “pain points” of the legal system.

“There needs to be a recognition that the goal should not be to completely remove humans in the loop,” Frank said. “It should understand what is the role of a judge, what are the pain points in their job right now, and how can you deploy these models to resolve those pain points.”

Jacobson noted that the skepticism may derive partly from generational differences, pointing to younger generations’ greater adoption of and familiarity with technology, and AI specifically. That led him to speculate that the technology’s use will grow over time even if it remains significantly inaccurate.

“You can’t hold back the floodgates of technology, and anybody who tries to ignore this, I think will do it to their disadvantage or disadvantage in court,” Jacobson said, “their disadvantage in running a law firm or in running a corporation. It’s here. We’ve got to deal with it.”

Fox News Digital’s Brie Stimson and the Associated Press contributed to this report.