Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive

The emergence of large language models (LLMs) like ChatGPT, PaLM, Claude, and Llama is transforming the legal industry. These models are being used for a range of legal tasks, from reviewing discovery documents to drafting legal memoranda and developing case strategies. A growing concern, however, is their tendency to generate content that diverges from actual legal facts or from well-established legal principles and precedents, a phenomenon known as legal hallucination.

A new preprint study by researchers from Stanford RegLab and the Institute for Human-Centered AI shows that legal hallucinations are both pervasive and troubling: hallucination rates for state-of-the-art language models range from 69% to 88% in response to specific legal queries. Moreover, the study finds that these models perform worse on more complex tasks that require a nuanced understanding of legal issues and interpretation of legal texts.

Source: Stanford HAI

