Lawyers or Machines: Who do you blame for GenAI Hallucinations?

The use of GenAI assistants in legal research has sparked a debate among lawyers. While some believe these tools can do no wrong, others think they are dangerous and could lead to serious miscarriages of justice. The core problem is that GenAI models produce legal hallucinations between 69% and 88% of the time when queried about a legal matter. The report behind these figures finds that most GenAI models fare no better than random guessing, and that when answering queries about a court's core ruling, models hallucinate at least 75% of the time. Lawyers and pro se litigants should therefore not rely on GenAI for legal research, and a clear message needs to be sent that there are consequences for misusing it.

Source: Thetimeblawg
