Journalists Figure Out This Morning That ChatGPT Screws Up Legal Questions

The prevalence of legal errors made by ChatGPT and other large language models has been reported by various media outlets, including The Hill, The Register, and Bloomberg. Legal professionals should use tools specifically designed for legal work, and consumers should be warned of the risks of relying on these bots for legal help. A recent Stanford study found that LLMs make more errors on case law from lower courts than from higher courts, suggesting they struggle with local legal knowledge and casting doubt on claims that LLMs will reduce access-to-justice barriers in the US. Still, when paired with trusted datasets, legal-minded guardrails, and professionally vetted tools, LLMs could make pro bono and low bono work less burdensome for attorneys, offering a different path toward bridging the justice gap.

Source: Above the Law

