Judicial officers must ensure that any use of generative AI (whether by themselves, their staff, or court users) aligns with their obligation to deliver equal justice under the law. As lawyers and litigants increasingly use AI tools, these guidelines clarify the associated risks and responsibilities. They apply across Queensland’s major courts and tribunals.
Key definitions include AI, generative AI, chatbots (e.g., ChatGPT, Claude, Gemini, Copilot), large language models (LLMs), and prompts.
Core Guidelines
1. Understand AI and Its Limits
- Generative AI is not “intelligent” in the human sense: its responses are probabilistic predictions of likely text, not reasoned answers (see the sketch following this list).
- Outputs may be inaccurate, biased, or misleading, as chatbots cannot distinguish fact from opinion.
- Public models are trained on broad, often unreliable internet data, with limited coverage of Australian law.
- Even with careful prompting, outputs remain prone to error and should never be treated as definitive.
- Commercial legal AI tools may be more reliable, but still require caution.
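To illustrate what “probabilistic prediction” means in practice, the sketch below is a deliberately toy model: the vocabulary and probabilities are invented for demonstration, whereas a real LLM computes its distribution from billions of learned parameters.

```python
# A minimal sketch of how a generative language model produces text:
# it repeatedly samples the next token from a probability distribution,
# rather than reasoning toward an answer. The vocabulary and weights
# below are invented purely for illustration.
import random

def next_token(context: str) -> str:
    # A real model derives these probabilities from its parameters and
    # the context; here they are hard-coded to make the point.
    distribution = {
        "is": 0.4,
        "was": 0.3,
        "held": 0.2,
        "banana": 0.1,  # low-probability tokens can still be sampled
    }
    tokens = list(distribution)
    weights = list(distribution.values())
    return random.choices(tokens, weights=weights, k=1)[0]

context = "The court"
for _ in range(3):
    context += " " + next_token(context)
print(context)  # fluent-sounding output, produced without any reasoning
```

Because each token is sampled, the same prompt can yield different answers on different runs, and a confident tone is no indicator of accuracy.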
2. Confidentiality, Suppression, and Privacy
- Never enter confidential, private, or suppressed information into public AI tools, as inputs may be stored, reused, or disclosed (a redaction sketch follows this list).
- Treat inputs as if they were publicly available.
- Disable chat histories where possible and report accidental disclosures.
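As a purely illustrative precaution, the sketch below screens text before it could reach a public tool. The redaction patterns and the file-number format are assumptions chosen for demonstration, not an endorsed or complete safeguard.

```python
# A minimal sketch (not an endorsed tool) of screening text before it
# reaches a public AI service: redact obvious identifiers and refuse
# material flagged as suppressed. All patterns here are illustrative.
import re

REDACTION_PATTERNS = [
    (re.compile(r"\b[A-Z][a-z]+ v [A-Z][a-z]+\b"), "[CASE NAME]"),  # party names
    (re.compile(r"\b\d{4}/\d{3,6}\b"), "[FILE NUMBER]"),            # hypothetical file-number format
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
]

def screen_for_public_tool(text: str) -> str:
    if "SUPPRESSED" in text.upper():
        raise ValueError("Suppressed material must never be entered into a public AI tool.")
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(screen_for_public_tool(
    "Summarise Smith v Jones, file 2024/01234, contact smith@example.com"
))
# -> "Summarise [CASE NAME], file [FILE NUMBER], contact [EMAIL]"
```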
3. Accountability and Accuracy
- Always verify AI outputs against trusted sources before relying on them (see the citation-checking sketch after this list).
- Risks include fabricated cases, fake citations, misleading legal principles, and factual errors.
- AI can assist with summaries or overviews, but it is not a substitute for trusted sources (e.g., case law databases, academic texts).
- Effective prompting improves results, but it does not eliminate risk.
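Part of the verification step can be mechanised. In the sketch below, lookup_in_authorised_database is a hypothetical stand-in for a query against a trusted case-law service (such as an AustLII or commercial database search), and the citations themselves are invented examples.

```python
# A minimal sketch of citation verification: extract medium-neutral
# citations from an AI-generated passage and flag any that cannot be
# confirmed in a trusted source. The database lookup is hypothetical.
import re

def lookup_in_authorised_database(citation: str) -> bool:
    # Hypothetical: replace with a query to a trusted case-law service.
    known_citations = {"[2020] QSC 123"}
    return citation in known_citations

def unverified_citations(text: str) -> list[str]:
    # Matches medium-neutral citations such as "[2020] QSC 123".
    cited = re.findall(r"\[\d{4}\] [A-Z]+ \d+", text)
    return [c for c in cited if not lookup_in_authorised_database(c)]

draft = "The principle was confirmed in [2020] QSC 123 and [2019] QCA 999."
print(unverified_citations(draft))  # -> ['[2019] QCA 999'], to be checked by hand
```

A citation that passes such a check still needs human reading: a real case can be cited for a proposition it does not support.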
4. Ethical Issues
- AI outputs reflect the gaps and biases in their training data, raising the risks of bias, copyright infringement, and plagiarism.
- Using AI to summarise or reformat material may infringe copyright unless handled carefully.
- Practitioners should verify the accuracy of AI-generated content used in speeches or writing and properly cite it where applicable.
5. Security
- Use work devices and work email accounts.
- Paid subscriptions are more secure than free public tools.
- Report suspected breaches to the relevant authorities.
6. Responsibility
- Judicial officers remain personally accountable for all material issued in their name.
- AI may be used as a supplementary research or preparatory tool, but not for judicial reasoning or drafting decisions.
- Staff use of AI should be discussed and monitored to ensure compliance with core judicial values (open justice, impartiality, fairness).
7. Awareness of AI Use by Court Users
- AI is already embedded in everyday tools (e.g., predictive text, disclosure review software).
- The chief concern is AI-generated submissions or documents, which may read persuasively yet contain serious errors.
- Lawyers must verify citations and arguments, while self-represented litigants may rely uncritically on flawed AI outputs.
- Warning signs include fictitious cases, inconsistent authorities, or unnatural language.
- Judges should be vigilant in addressing AI-assisted expert reports and the risks associated with deepfakes or fabricated evidence.
Source: Queensland Courts