How To Manage The Hallucination Problem Of Legal AI

AI can be an excellent tool in the legal sphere. But reliability matters a lot, which means that we AI-powered legal tech providers must be conscientious, cautious and ethical in our deployment of this technology. Safety matters. Truth matters.

We’re in a phase where we know that AI will transform law for the better, but we’re also at risk of overestimating the technological status quo. We’re in a murky place somewhere between a passed bar exam and sanctions from the bar association.

So what do we do? How do we leverage the technology without causing harm?

The first step: let the pros do the prompting in high-stakes environments. Prompting is a difficult skill to master, and clear, specific prompts are crucial to avoiding hallucinations, so be careful with products where you have to do all the prompting yourself.
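
As a hypothetical illustration of what "clear and specific" can look like in practice, the sketch below builds a prompt that pins down the task, restricts the model to the supplied text, and demands an explicit fallback answer. The function name and wording are invented for the example, not taken from any particular product.

```python
def build_clause_prompt(contract_text: str, clause_type: str) -> str:
    """Build a narrow, explicit prompt for locating one type of clause.

    The instructions name the task, limit the allowed source material,
    and require a fixed fallback reply, leaving the model little room
    to improvise -- a common trigger for hallucinations.
    """
    return (
        f"You are reviewing a contract. Quote, verbatim, the clause that "
        f"covers '{clause_type}'. Use ONLY the contract text below; do not "
        f"rely on outside knowledge. If no such clause exists, reply "
        f"exactly: NOT FOUND.\n\n"
        f"--- CONTRACT ---\n{contract_text}\n--- END CONTRACT ---"
    )

prompt = build_clause_prompt("The parties agree to ...", "termination")
```

The point is not the exact wording but the structure: task, scope, and an escape hatch so the model never has to invent an answer.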

The second step is to train AI models on diverse, balanced and well-structured data. Don’t use a free subscription to a large general-purpose model to write your contracts, for example. Use narrower, specialised models with predefined formats: that minimises irrelevant results and ensures the output aligns with prescribed guidelines. For creative work this matters little; for reliability, it makes all the difference.
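
One way to enforce a predefined format downstream is to reject any model output that doesn’t match it exactly. A minimal sketch, assuming the model has been asked to return JSON with a fixed set of fields (the field names here are invented for illustration):

```python
import json

# Hypothetical fixed schema the model was instructed to follow.
EXPECTED_KEYS = {"party_a", "party_b", "effective_date", "governing_law"}

def parse_predefined_format(raw_output: str) -> dict:
    """Accept model output only if it is JSON with exactly the expected keys.

    Anything outside the predefined format is rejected rather than passed
    downstream, which keeps irrelevant or invented fields out of the result.
    """
    data = json.loads(raw_output)  # raises ValueError on malformed output
    if set(data) != EXPECTED_KEYS:
        raise ValueError(f"unexpected fields: {set(data) ^ EXPECTED_KEYS}")
    return data

good = ('{"party_a": "Acme", "party_b": "Beta", '
        '"effective_date": "2024-01-01", "governing_law": "NY"}')
parsed = parse_predefined_format(good)
```

A stricter pipeline would also validate field values (dates, jurisdictions), but even this key check stops a whole class of off-format output from reaching a lawyer’s desk.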

Finally, limit your use of AI to specific and well-defined use cases. AI can help you identify simple clauses in a contract, extract specific data points and summarise documents. But it is not a general intelligence, so please don’t use it as a search engine or research tool. Deploy AI mainly in low-stakes situations where it does a specific job and the outcome is predictable.
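
For extraction-style jobs like the ones above, one simple guard that keeps the outcome predictable is to verify that whatever the model returns actually appears verbatim in the source document. A minimal sketch (the function name and sample contract are illustrative):

```python
def is_grounded(extracted: str, source: str) -> bool:
    """Return True only if the extracted text appears verbatim in the source.

    Whitespace is normalised so line-wrapping differences don't cause false
    rejections; any other difference is treated as a potential hallucination.
    """
    def normalise(s: str) -> str:
        return " ".join(s.split())
    return normalise(extracted) in normalise(source)

contract = """The Supplier shall deliver the goods within 30 days.
Either party may terminate this agreement with 60 days' written notice."""

# A verbatim extraction passes the check...
assert is_grounded("terminate this agreement with 60 days' written notice", contract)
# ...while an invented "clause" is flagged.
assert not is_grounded("The Supplier owes liquidated damages of 5%", contract)
```

This doesn’t make the model smarter; it just turns a hallucinated clause into a detectable failure instead of a silent one.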

Source: Forbes





