England’s 1,000-year-old legal system has taken a cautious step into the future by permitting judges to use artificial intelligence to help produce rulings. Still, the Courts and Tribunals Judiciary stressed that judges shouldn’t use AI for research or legal analysis because the technology can fabricate information and produce misleading, inaccurate, or biased results.
“Judges do not need to shun the careful use of AI…But they must ensure that they protect confidence and take full personal responsibility for everything they produce.”
Geoffrey Vos, Master of the Rolls
The move comes at a time when scholars and legal experts are pondering a future in which AI could replace lawyers, help select jurors, or even decide cases. For a profession slow to embrace technological change, it’s a proactive step as government, industry, and society at large react to a rapidly advancing technology alternately portrayed as a panacea and a menace.
In its effort to maintain the court’s integrity while moving forward, the guidance is rife with warnings about the technology’s limitations and the problems that can arise if a user is unaware of how it works.
At the top of the list is a warning about chatbots, such as ChatGPT, the conversational tool that exploded into public view last year. It has generated the most buzz over the technology because of its ability to swiftly compose everything from term papers to songs to marketing materials.
The pitfalls of the technology in court are already infamous after two New York lawyers relied on ChatGPT to write a legal brief that quoted fictional cases. Because chatbots remember questions they are asked and other information provided to them, judges in England and Wales were told:
“Do not enter any information into a public AI chatbot that is not already in the public domain…Any information you input into a public AI chatbot should be seen as being published worldwide.”
Other warnings include being aware that much of the legal material that AI systems have been trained on comes from the internet and is often based mainly on U.S. law.
But jurists who have large caseloads and routinely write decisions dozens — even hundreds — of pages long can use AI as a secondary tool, mainly when writing background material or summarizing information they already know, the courts said.
In addition to using the technology for emails or presentations, judges were told they could use it to quickly locate material they are familiar with but don’t have within reach. But it shouldn’t be used for finding new information that can’t independently be verified, and it is not yet capable of providing convincing analysis or reasoning, the courts said.
“I asked ChatGPT if you could give me a summary of this area of law, and it gave me a paragraph, …I know the answer because I was about to write a paragraph that said that, but it did it for me, and I put it in my judgment. It’s there, and it’s jolly useful.”
Lord Justice Colin Birss
Source: ABC News