Fake Law, Real Trouble: The Risks Of AI Errors In Court Submissions

Halton (Regional Municipality) v Rewa, 2025 ONSC 4503

The Ontario Superior Court’s decision in Halton (Regional Municipality) v Rewa highlights growing judicial unease over the misuse of artificial intelligence in court proceedings. The self-represented defendant, Mr Rewa, filed submissions containing fabricated case citations generated by AI tools. He acknowledged relying on AI due to his inability to afford counsel, but the Court identified multiple false authorities that bore the hallmarks of “AI hallucination.”

This was not Mr Rewa’s first infraction: he had previously been cautioned about citing non-existent cases. The judge criticised his failure to verify his sources or to alert the Court once the errors were discovered, describing his conduct as reflecting a “let’s see if I get caught” attitude rather than an honest mistake.

The Court reaffirmed that misleading submissions, whether from lawyers or self-represented litigants, undermine the integrity of the justice system and waste judicial resources. Although Mr Rewa was allowed to amend his motion, he was ordered to pay costs, with the Court warning that future reliance on fictitious AI-generated authorities would not be tolerated.

Key Takeaways

  • AI cannot replace human legal research or judgment.
  • Submissions citing non-existent or irrelevant AI-generated authorities damage credibility and risk cost penalties.
  • Courts expect all litigants, regardless of representation, to confirm the existence and relevance of cited cases.
  • Canadian courts are increasingly alert to AI misuse and are setting clearer boundaries for its appropriate role in legal practice.

Source: Mondaq
