The Truth About Hallucinations in Legal Research AI: How to Avoid Them and Trust Your Sources

Hallucinations are outputs from LLMs and other generative AI that look coherent but are factually wrong or absurd. They may come from errors or gaps in the training data ("garbage in, garbage out").

But just as importantly, hallucinations may arise from the nature of the task we give the model. The objective during text generation is to produce human-like, coherent, and contextually relevant responses; the model does not check those responses for truth. And simply asking the model whether its responses are accurate is not sufficient, because that answer comes from the same generation process that produced the error. A sketch of the kind of external check that does help appears below.
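
To make the point concrete, here is a minimal, hypothetical sketch of checking model-cited authorities against an external source of record instead of asking the model to grade itself. The `KNOWN_CITATIONS` set and the sample citations are placeholders standing in for an authoritative citator or docket lookup, not any real product's API.

```python
# Hypothetical sketch: verify citations against an external authority,
# not against the model's own self-assessment.
# KNOWN_CITATIONS stands in for a citator or docket database.

KNOWN_CITATIONS = {
    "347 U.S. 483",   # present in the stub database
    "410 U.S. 113",   # present in the stub database
}

def verify_citations(cited: list[str]) -> dict[str, bool]:
    """Return True/False for each citation depending on whether it
    resolves in the external source, independent of the model's claims."""
    return {c: c in KNOWN_CITATIONS for c in cited}

# One citation that resolves and one plausible-looking string that does not.
model_output_citations = ["347 U.S. 483", "512 F.3d 999"]
print(verify_citations(model_output_citations))
# {'347 U.S. 483': True, '512 F.3d 999': False}
```

The design point is simply that the verification signal comes from outside the generation loop; whatever lookup service a library actually uses would replace the stub set.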

In the coming months, as legal research generative AI products become increasingly available, librarians will need to adapt and develop methods for assessing accuracy. Currently, there appear to be no benchmarks for comparing hallucination rates across platforms. Knowing librarians, that won't be the case for long, at least where legal research is concerned. A rough sketch of what such a benchmark might measure follows.
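
As an illustration only, here is a hypothetical outline of the kind of cross-platform benchmark the post anticipates: run the same research questions on each product, collect the citations returned, and report the share that cannot be verified. The `ask_platform`, `verify`, and platform names are assumed placeholders, not real vendor APIs.

```python
# Hypothetical benchmark sketch: share of unverifiable citations per platform.
from typing import Callable

def hallucination_rate(
    ask_platform: Callable[[str], list[str]],  # returns citations for a query
    verify: Callable[[str], bool],             # external citation check
    queries: list[str],
) -> float:
    """Fraction of returned citations that fail external verification."""
    cited = [c for q in queries for c in ask_platform(q)]
    if not cited:
        return 0.0
    unverified = sum(1 for c in cited if not verify(c))
    return unverified / len(cited)

# Usage sketch: score several platforms on the same question set.
# rates = {name: hallucination_rate(fn, verify_citation, benchmark_queries)
#          for name, fn in platforms.items()}
```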

Source: AI Law Librarians

