As GenAI tools like ChatGPT become embedded in legal workflows, a quiet but critical problem is emerging: verification drift. This term, coined in a recent empirical study at the University of Wollongong, describes the gradual erosion of caution as users begin to trust AI-generated content too readily—despite its known tendency to fabricate or misrepresent sources.
The study, conducted in a law school elective on emerging technologies, offered students hands-on training in responsible GenAI use. They were encouraged to integrate AI tools into their research on autonomous vehicle regulation, supported by practical sessions on prompt design, critical evaluation, and source verification.
The results were mixed. While many students used GenAI effectively—to clarify complex ideas, refine arguments, or explore structure—others submitted assignments that included confidently phrased, subtly inaccurate content. These inaccuracies were not always obvious. In one instance, a GenAI-assisted claim suggested autonomous vehicles could remove 90% of cars in urban areas—a distortion of the source material, which referred to a very different transport model. In another, a statistic was reproduced accurately, but a specific timeframe was erroneously added that appeared nowhere in the source.
What’s striking is that these errors weren’t the result of negligence or ignorance. Students had been trained. They had guidance. The issue wasn’t a lack of awareness—it was cognitive. As the study revealed, even well-informed users can drift toward misplaced trust when AI outputs appear coherent, fluent, and authoritative. This is the essence of verification drift.
Legal Tech’s Double-Edged Sword
Since ChatGPT’s rise in 2022, legal professionals have seen both the potential and peril of GenAI. Courtroom embarrassments involving hallucinated case citations have been widely reported, yet similar incidents continue to surface. The legal community is not unaware of the risks—but awareness doesn’t always translate into discipline, especially when the content looks right.
Verification is both essential and exhausting. Reviewing AI-generated claims against cited sources requires meticulous cross-checking and a level of scrutiny that’s mentally taxing and time-intensive—sometimes more so than conducting the legal research from scratch. This is particularly burdensome for time-poor practitioners, who may rely on AI’s polish without sufficiently interrogating its accuracy.
The Odysseus Strategy
To explain how to resist GenAI’s allure, the study draws on the myth of Odysseus and the Sirens—who lured sailors to ruin with their enchanting songs. Odysseus knew the danger and took precautions: he had his crew plug their ears and tied himself to the mast. Legal professionals need their own version of this strategy—acknowledging GenAI’s persuasive power while building systems to prevent being swayed by it.
The study’s recommendations offer a roadmap for responsible AI use in legal practice:
- Move beyond guidelines: PDF checklists and ethical statements aren’t enough. Legal professionals need interactive training that builds practical, critical engagement with GenAI.
- Make AI literacy essential: Every lawyer will encounter GenAI in their work. Understanding its limitations and risks should be part of baseline professional competence—possibly even a CPD requirement.
- Learn from real mistakes: Case studies of AI misuse in actual litigation are powerful tools. Understanding how seasoned professionals went astray can foster a more vigilant mindset.
- Challenge the performance hype: Claims that GenAI outperforms law graduates (e.g. scoring in the 90th percentile of the U.S. Bar Exam) need to be scrutinised. Australian empirical studies suggest these tools often underperform compared to average students. Without robust, transparent benchmarking, performance claims should be treated with scepticism.
Conclusion
GenAI tools offer immense promise for legal research, drafting, and education—but they also demand new skills, habits, and safeguards. Verification drift is not a technical flaw; it’s a human vulnerability. To use these tools wisely, legal professionals must resist their polish, question their precision, and approach every output with critical intent.
In legal tech, speed and efficiency are tempting metrics—but in law, accuracy still reigns supreme.
source: LSJ