US courtrooms are increasingly relying on video, but scholars caution that the justice system is not ready for a future in which AI can fabricate highly realistic footage.
A new report from the University of Colorado Boulder says the country needs uniform standards to govern how courts verify and interpret AI-generated or AI-enhanced video.
The authors point out that judges and juries receive almost no training in recognising manipulated recordings, even though video evidence appears in more than 80% of cases.
Anxieties have intensified as deepfakes become easier to create. In September, a California civil case collapsed after a judge found that a witness video had been fabricated, and researchers expect similar disputes to multiply as tools like Sora 2 make it possible to generate convincing simulations in minutes.
Experts also highlight the rise of the “deepfake defence,” where lawyers seek to undermine authentic footage by suggesting it has been faked.
AI is likewise being used to enhance legitimate recordings and to link surveillance images to suspects. While these methods can sharpen evidence, they risk exacerbating inequalities when only some litigants can afford them.
Well-publicised errors involving facial recognition have already led to wrongful arrests, underscoring the need for more explicit rules on digital evidence.
The report recommends specialised training for judges, improved systems for managing video files, and stronger safeguards to help viewers detect manipulation without endangering whistleblowers.
Researchers hope the recommendations will drive reforms that embed scientific rigour into courtroom practices as digital evidence becomes increasingly shaped by AI.
Source: Digiwatch