The array of judicial policies on AI use in and by courts

Generative AI in the Courts: A Call for Vigilance and Reform

In a 2024 address at Durham University, Chief Justice Bell reaffirmed the judiciary’s primary duty: to uncover the truth using the best evidence available. But that duty is increasingly challenged by the rise of generative AI (GenAI)—tools capable of producing seemingly coherent and authoritative content that may, in fact, be fabricated or misleading.

To address this, the Supreme Court of NSW issued a Practice Note and accompanying judicial guidelines that impose strict controls on the use of GenAI. These prohibit judges from using GenAI to assist with drafting judgments or analysing evidence. Legal practitioners must not rely on AI-generated evidence without the Court’s permission. Chief Justice Bell warned against the creeping reliance on AI, noting that efficiency must not come at the cost of accuracy or integrity.

Other jurisdictions—including Victoria, New Zealand, Hong Kong, the UK, and the US—have introduced more flexible guidance, largely emphasising that any AI use must uphold the administration of justice. NSW’s response is unique in its proscriptive stance, particularly its blanket prohibition on judicial reliance on GenAI in judgment writing.

This crackdown comes amid a wave of high-profile cases in which reliance on AI-generated content has led to findings of professional misconduct. Courts in Australia and abroad—including in Mata v Avianca Inc (US) and Dayal (Australia)—have sanctioned lawyers for submitting fake or misleading authorities generated by AI. In Toyota Finance Australia Limited v Islam, a self-represented litigant was declared vexatious partly because of his improper use of GenAI. And in DPP v Khan [2024] ACTSC 19, a judge placed little weight on a character reference suspected to be AI-generated.

One key concern is “AI hallucination”—a term now widely used to describe when GenAI outputs plausible-sounding but false information. Stanford research found that leading chatbots hallucinated between 58% and 82% of the time in legal contexts. Even advanced platforms like DeepSeek suffer from this flaw.

The issue is not simply technological but epistemological: if courts or practitioners begin to trust tools that can so easily produce “2+2=5”-style outputs—plausible on the surface but fundamentally false—the rule of law may erode. The Orwellian risk is not that machines lie, but that humans may come to accept and rely on their fabrications.

The path forward requires more than just professional accountability. It demands widespread AI literacy, cautious procedural reform, and above all, a commitment to ensuring that truth—not convenience—remains at the heart of justice.

Source: LSJ.com.au
