Verification Drift in Legal Education and Practice

An empirical study at the University of Wollongong highlights “verification drift”: the tendency for initial caution with GenAI outputs to erode into misplaced trust because of their polished, authoritative tone. While verifying AI-generated content is crucial, the process is mentally demanding and often more time-consuming than traditional legal research, undermining the efficiency gains GenAI promises.

Legal Risks and Persistent Misuse

Since the launch of ChatGPT in 2022, courts worldwide have seen lawyers submit fabricated or inaccurate AI-generated material, despite widespread warnings about “AI hallucinations.” This persistence raises the question: are legal professionals unaware of GenAI’s risks, or is its apparent authority too persuasive to resist?

UOW Classroom Study

In the elective subject Law and Emerging Technologies, 72 students were asked to research autonomous vehicle regulation using AI tools. Of the 28 who consented to participate in the study:

  • Positive findings: Students effectively utilised GenAI for brainstorming, structuring arguments, clarifying writing, and integrating it with traditional resources.
  • Concerning findings: Despite training, some included inaccurate, irrelevant, or subtly misleading AI content—often where genuine sources were cited but misrepresented. Examples showed small shifts in wording that significantly altered meaning.

Verification Drift Explained

Even trained students fell into verification drift: they began cautiously but gradually came to trust AI outputs without rigorous checking. This reflects a cognitive bias: AI’s fluent, authoritative style creates a false sense of reliability. Verifying even simple summaries requires detailed cross-checking, which is time-intensive and cognitively draining.

Implications for Legal Practice

For lawyers, exhaustive verification of GenAI outputs may be less efficient than conventional research. The persistence of AI misuse suggests that guidelines alone (e.g., from the Supreme Court and Law Society of NSW) are insufficient; a more systemic response is required.

Recommendations for Responsible AI Use

The study proposes four steps:

  1. Hands-on AI training – Beyond written guidelines, professionals need interactive, skills-based learning.
  2. Mandatory AI literacy – Given its inevitability in practice, lawyers must understand GenAI’s limits to avoid severe consequences.
  3. Case study engagement – Reviewing past instances of fabricated submissions can build scepticism and caution.
  4. Critical evaluation of AI claims – Treat vendor performance claims (e.g., exam results) with scepticism until independently benchmarked in legal contexts.

Conclusion

GenAI can be a valuable tool, but it also carries significant risks of overreliance. Like Odysseus resisting the Sirens, lawyers must consciously prepare to resist AI’s persuasive style. Responsible use demands scepticism, rigorous verification, and ongoing literacy training.

Source: LSJ.com.au
