Generative artificial intelligence (GenAI) tools, such as ChatGPT and Lexis+ AI, are transforming legal work by accelerating drafting and research. Yet, recent English High Court decisions reveal the dangers of using them without proper verification.
The Problem: Fabricated Case Law in Court
In Frederick Ayinde v London Borough of Haringey and Hamad Al-Haroun v Qatar National Bank QPSC, the High Court reprimanded a trainee barrister and a solicitor for submitting documents that cited non-existent or inaccurate cases generated by AI. The judges made it clear: “freely available generative artificial intelligence tools are not capable of conducting reliable legal research.”
The Court held that all lawyers—regardless of experience—remain personally responsible for the accuracy of their submissions.
These incidents exemplify AI “hallucinations,” where systems confidently produce false information. Studies show hallucination rates of 58–88% for legal questions, meaning such errors are not rare. In Ayinde, an invented case was attached to a genuine case number. In Al-Haroun, 18 of 45 authorities were fabricated or irrelevant. The Court treated these lapses not as innocent mistakes but as potential professional misconduct, referring both the individuals and their supervisors to the professional regulators.
A Profession Under Pressure
The cases reflect a broader issue: overworked practitioners and powerful but unreliable tools. Junior lawyers, often experimenting with AI out of necessity or curiosity, face career-threatening risks if they cannot detect hallucinations. This raises concerns about whether current supervision and training adequately prepare new lawyers for the pitfalls of AI.
Education as the Solution
The judgment indicates that sanctions alone are insufficient. Law schools and regulators must integrate AI literacy into professional education. This includes teaching:
- Why hallucinations occur and how to detect them.
- How to verify AI outputs against reliable legal databases (a minimal illustrative sketch follows this list).
- When using AI for legal work is inappropriate altogether.
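To make the verification point concrete, here is a minimal sketch of what an automated first-pass citation check might look like. It is illustrative only: the set of verified authorities and the citation pattern are hypothetical placeholders, not a real legal research API, and a tool like this supplements rather than replaces checking each authority against an official source.

```python
import re

# Hypothetical set of verified authorities. In practice this check would
# run against a trusted database such as an official law report service.
VERIFIED_AUTHORITIES = {
    "[2023] EWHC 1234 (KB)",
    "[2019] UKSC 38",
}

# Crude pattern for UK neutral citations, e.g. "[2023] EWHC 1234 (KB)".
NEUTRAL_CITATION = re.compile(r"\[\d{4}\]\s+\w+\s+\d+(?:\s+\([A-Za-z]+\))?")

def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return citations in the draft that are absent from the trusted set.

    Absence does not prove fabrication, but it means the authority must
    be confirmed by hand before the document is filed.
    """
    found = NEUTRAL_CITATION.findall(draft_text)
    return [c for c in found if c not in VERIFIED_AUTHORITIES]

draft = (
    "The claimant relies on Smith v Jones [2023] EWHC 1234 (KB) "
    "and Doe v Roe [2024] EWHC 9999 (Ch)."
)
for citation in flag_unverified_citations(draft):
    print(f"UNVERIFIED: {citation} -- confirm against an official source")
```

Even a simple screen like this would have surfaced the invented citations in Ayinde and Al-Haroun for human review; the professional duty to verify remains with the lawyer.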
The integrity of justice depends on verified citations and ethical advocacy. As AI becomes embedded in practice, AI competence must become a core professional skill, just like research and citation methods.
Broader Lessons
The same problem extends beyond legal professionals. In a Manchester civil case, a self-represented litigant relied on ChatGPT, submitting one fabricated case and three genuine ones with false quotations. The judge accepted it was an innocent error but noted the serious risks of unverified AI outputs entering court records.
Key Takeaway
The message from Ayinde and Al-Haroun is clear: AI does not lessen a lawyer’s duty of care—it heightens it.
Future lawyers must be trained to use generative AI responsibly, with an emphasis on verification, transparency, and ethical awareness. Courts will no longer excuse reliance on hallucinated case law; the burden now lies with legal professionals and educators to ensure AI enhances justice rather than undermining it.
Source: The Conversation