RAG Systems Can Still Hallucinate

Retrieval-augmented generation (RAG) has not entirely eliminated hallucinations in Lexis+ AI when answering legal questions, but there are ways to reduce the risk.

When using generative AI tools, ask your vendor which sources are included and limit your questions to those that can be answered from that data. Keep in mind that these research products may not automatically have access to the vendor's other data, such as Shepard's, litigation analytics, or PACER, and that integrating those sources may take time.

Always read the cases yourself rather than relying solely on AI-generated summaries or editor-written headnotes, which may be inaccurate or incomplete. Be especially cautious if a summary refers to a case that is not linked; this can indicate that the AI has incorrectly summarized the linked source.

Finally, ask your questions neutrally and get a dispassionate summary of the law before launching into an argument, even if you ultimately plan to use the authorities in one. Following these tips will help you use generative AI tools more effectively and avoid potential pitfalls.

The tools available for legal research are constantly improving, and their creators are very open to feedback. However, errors like those in Mata v. Avianca remain possible. All of us need to be aware that hallucinations can still occur, even with systems connected to actual cases, and that there are ways to interact with these systems that reduce them.

Source: AI Law Librarian





