A study conducted by Stanford University has warned that AI-powered chatbots, such as those from OpenAI, Google, and Meta, can produce inaccurate and misleading responses to legal questions, posing a particular risk to people who cannot afford to hire a lawyer.
The research revealed that large language models, including OpenAI’s ChatGPT 3.5, Google’s PaLM 2, and Meta’s Llama 2, can hallucinate roughly 75% of the time when answering questions about a court’s core ruling. These error rates matter because, according to the Legal Services Corporation, around 92% of low-income Americans who need legal assistance do not receive adequate help.
The hope is that AI could fill this gap, but the study suggests that AI’s inaccuracies could undermine this goal. The study recommends caution when deploying AI chatbots for legal purposes and suggests that models designed for such use may be more effective.
However, it warns that even those models may not be entirely accurate and that lawyers should carefully verify any information AI provides. Finally, the researchers found that AI models struggle with core legal research tasks, such as determining whether two court cases agree or disagree with one another, performing no better than random guessing on that task.
Source: Bloomberg Law