Deepfakes and AI in the courtroom: Report calls for legal reforms to address a troubling trend
From cell phone footage to bodycam and surveillance clips, U.S. courtrooms are awash in video these days: more than 80% of court cases hinge to some degree on video evidence.
But in the age of artificial intelligence, the legal system is ill-prepared to distinguish deepfakes from real footage and to handle other AI-enhanced evidence equitably, according to a new University of Colorado Boulder-led report.
“Courts in the United States, both at the state and federal level, lack clear guidelines for the use of video as evidence in general, and this picture is only going to get more complicated with the rise of AI and deepfakes,” said senior author Sandra Ristovska, associate professor of media studies and director of CU’s new Visual Evidence Lab. “We felt that something needed to be done.”

Sandra Ristovska, director of the Visual Evidence Lab at CU Boulder.

In this social media post, a Washington Post reporter demonstrates how easy it is to use Sora 2 to create fake bodycam footage.

A screenshot of a deepfake video testimony presented in an Alameda County, California, court earlier this year. The judge threw the case out and sanctioned the plaintiffs for “intentionally submitting false evidence.”
The 26-page report, compiled by 20 experts from around the country, comes as new AI video generators have made it remarkably easy to create lifelike clips, including fraudulent witness testimonies and crime scene footage. Meanwhile, AI is increasingly used to enhance real video footage, making poor-quality recordings easier to see and hear, and to match security camera clips with suspects, sometimes in error.
Among other reforms, the report calls for specialized training for judges and jurors to help them critically evaluate AI-enhanced or AI-generated footage, as well as national standards governing what kind of AI is permissible.
“Judges, attorneys and jurors treat video in highly varied ways around the country, and if the playing field isn’t equal, that could lead to unfair renderings of justice,” said Ristovska.
The rise of deepfakes
On Sept. 9, 2025, a judge in Alameda County, California, threw out a civil case and recommended sanctions against the plaintiffs after determining that a videotaped witness testimony was a deepfake. It was among the first known instances of a deepfake being deliberately used in the courtroom.
With the technology advancing rapidly, Ristovska suspects there will be more.
She points to a recent social media post in which a reporter demonstrated how easy it is to abuse the new Sora 2 video generation technology: In less than a minute, he was able to get it to make a video showing “bodycam footage of cops arresting a dark-skinned man in a department store.”
"This shows how AI-generated videos are becoming misleadingly persuasive and how they can be exploited to incriminate and further marginalize racial and ethnic minorities," she said.
Right now, she’s more concerned about a different problem: what she calls the “deepfake defense,” in which attorneys paint real video footage as fake.
For instance, in a 2023 lawsuit brought by the family of a man who died when his Tesla crashed while its self-driving feature was engaged, the company’s defense counsel unsuccessfully sought to have a video excluded by claiming it was a deepfake.
“If this continues, jurors will accord little or no weight to authentic footage that they really should be paying attention to,” she said.
AI enhancement
Deepfakes aside, lawyers are increasingly submitting authentic videos that have been enhanced by AI to make the sound or visuals clearer. But not everyone can afford these technologies, said Ristovska, and while some judges allow them, others don’t.
“There is a real concern that AI enhancement may exacerbate already existing inequalities in access to justice,” she said.
AI is also routinely used to match surveillance video with potential suspects through facial recognition technology. But the system is far from foolproof.
One recent Washington Post investigation found that at least eight people have been wrongly arrested after being identified by facial recognition software.
“People are so accustomed to thinking that the technological solution is the trusted solution that even if it is a low-quality image or video, if it is run through AI, people will trust that it is an accurate match,” she said.
Educating judges and jurors
Ristovska has studied video evidence and its impact on human rights for most of her career. She founded the Visual Evidence Lab in April to bring together experts from multiple disciplines to address the shifting landscape around video use in the courtroom.
In addition to new training for judges and jurors, the report calls for the establishment of a new system for storing and retrieving evidentiary videos, which are far harder for journalists and the public to access than text court records.
It also calls on technology companies to develop tools that make it easier for viewers to detect deepfakes without jeopardizing videographers who want to remain anonymous, such as whistleblowers and activists.
“Our hope is that this report inspires legal reforms, policy proposals based on science and more research,” Ristovska said. “This is just the beginning.”