Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive

Large language models (LLMs) increasingly used in legal work, such as ChatGPT and PaLM, are prone to producing content that deviates from established legal facts and precedents. A new preprint study by researchers at the Stanford RegLab and the Institute for Human-Centered AI shows that these legal hallucinations are pervasive and disturbing: hallucination rates range from 69% to 88% in response to specific legal queries for state-of-the-art language models. The study also found that LLMs cannot yet perform the kind of legal reasoning attorneys use when assessing the precedential relationship between cases, a core task of legal research. The authors note that LLMs tend to struggle with localized legal knowledge, which is often crucial in lower-court cases, and they question claims that LLMs will reduce longstanding access-to-justice barriers in the United States.

Source: Stanford

