British Judges Get Permission to Use AI to Help with Writing


15 January 2024

Judges in England and Wales have been given permission to use artificial intelligence (AI) tools to help them write legal opinions.

The Courts and Tribunals Judiciary is the organization that oversees court systems and judges in England and Wales. It issued new guidance on the use of AI last month.

While approving the use of AI tools to support basic duties, the judiciary warned judges never to use such tools to carry out case research or legal analysis. The guidance said AI should not be used for those activities because the technology can produce false, misleading or biased information.

FILE - Judges cross the road toward Parliament after a Service at Westminster Abbey for the opening of the new legal year in London, Friday, Oct. 1, 2021. (AP Photo/Frank Augstein)

Geoffrey Vos is Head of Civil Justice in England and Wales. He told The Associated Press the new rules aim to permit "the careful use of AI" to help judges with parts of their jobs. Vos added, however, that judges "must ensure that they protect confidence and take full personal responsibility for everything they produce."

Vos told Reuters that judges are already well-equipped to differentiate between real arguments and those prepared by AI when considering evidence. "Judges are trained to decide what is true and what is false and they are going to have to do that in the modern world of AI just as much as they had to do that before," he said.

Vos said he even thinks the technology might be used in the future to help resolve low-level legal disputes. He noted this could one day help reduce the large number of unresolved cases in the justice system. "I rule nothing out as to what may be possible."

But Vos said he does not feel that people and businesses currently have the confidence to trust AI to independently resolve legal issues.

Ryan Abbott is a law professor at Britain's University of Surrey and the author of the book The Reasonable Robot: Artificial Intelligence and the Law. He told the AP there is currently a lot of debate over how AI should be legally restricted.

Abbott added that many people in the legal field are concerned about possible misuses of AI by lawyers and judges. "So I do think AI may be slower disrupting judicial activity than it is in other areas..." he added.

Court officials in some other places issued rules on AI years ago. In 2019, the European Commission for the Efficiency of Justice, a Council of Europe body that oversees court systems, issued ethical guidelines on the use of AI in courts. While the document is not current with the latest technology, it does offer guidance on accountability and ways to reduce risks linked to AI tools.

In the United States, the federal court system has not established any official guidance on the use of AI. But U.S. Supreme Court Chief Justice John Roberts did recently speak about the pros and cons of the technology in a year-end report on the high court's activities in 2023.

However, individual courts and judges at both the federal and local levels have likely been setting their own rules, said Cary Coglianese. He is a law professor at the University of Pennsylvania.

Coglianese told the AP the rules approved by Britain's judiciary represent "the first, published set of AI-related guidelines in the English language." But he said he suspects "many judges" have already warned employees about the dangers of AI producing false information and possibly violating the privacy of individuals.

The new British guidance states that judges and court workers with heavy workloads who must write long decisions can find AI a valuable tool. This is especially true for judges and workers required to write background material or summarize existing information.

In addition to using the technology for emails or presentations, judges were told they could also use it to quickly find material they already know well.

But the guidelines warn that AI tools should not be used to find new information that cannot be independently confirmed. The judiciary's guidance also said the technology should not be used to provide detailed analysis or reasoning.

I'm Bryan Lynn.

The Associated Press and Reuters reported on this story. Bryan Lynn adapted the reports for VOA Learning English.

____________________________________________

Words in This Story

biased – adj. showing unfair support for or opposition to someone or something because of personal opinions

confidence – n. the feeling or belief that someone or some group is good or able to succeed at something

disrupt – v. to interfere with a normal activity

ethical – adj. relating to what is right or wrong

accountable – adj. having to be responsible for what you do and able to explain your actions

pros and cons – n. arguments for and against something

analysis – n. the process of closely examining something