A California housing dispute, *Mendones v. Cushman & Wakefield, Inc.*, brought to light the first known instance of a deepfake video being submitted as evidence in court. Judge Victoria Kolakowski dismissed the case after detecting the AI-generated video, and the episode has judges worried about the threat of hyperrealistic fake evidence. Legal experts warn that advances in generative AI could erode trust in courtrooms; various solutions are being considered, but the future of digital evidence remains uncertain.
AI-generated evidence showing up in court alarms judges, and for good reason. It’s a bit of a “duh” moment, really: an “unforeseen” consequence that was entirely foreseeable. We’re talking about the integrity of the justice system, and the stakes couldn’t be higher. It’s like something straight out of a dystopian movie: a judge having to decide whether the evidence in front of her is real or a cleverly crafted digital illusion. Remember the deepfake video used to frame someone? Straight out of Judge Dredd. That kind of deception shouldn’t end in a mere dismissal; it should be treated as a serious criminal offense.
The speed at which AI is improving is also concerning, especially when you consider the average age of judges. How many fabricated exhibits will slip through the cracks unnoticed? It’s a legitimate worry, and it feels like something drastic is needed, a full reset of how courts handle digital evidence. Submitting falsified evidence should be a felony with real consequences, and we should mandate some kind of watermark or identifier for all AI-generated content: something that’s difficult to remove and easy for courts to detect.
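To make that concrete, here is a minimal sketch of what a first-pass check could look like, assuming a world where generators embed a standard provenance marker (something along the lines of a C2PA “Content Credentials” manifest) in every file they produce. The marker byte strings and the filename below are illustrative assumptions, and real tooling would parse the full manifest rather than scanning raw bytes.

```python
# Crude first-pass check for an embedded provenance marker.
# Assumption: generators embed a C2PA-style manifest. The byte strings
# below are illustrative; production tools parse the manifest itself
# instead of grepping raw bytes like this.
from pathlib import Path

PROVENANCE_MARKERS = (b"c2pa", b"jumb")  # assumed marker labels


def has_provenance_marker(path: str) -> bool:
    """Return True if any known marker byte string appears in the file."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in PROVENANCE_MARKERS)


if __name__ == "__main__":
    # "exhibit_a.mp4" is a hypothetical filing, used only for illustration.
    print(has_provenance_marker("exhibit_a.mp4"))
```

Of course, a check like this only tells you a marker is present or absent; it says nothing about whether an unmarked file is genuine, which is why the watermark would need to be mandatory and hard to strip.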
It’s tempting to dismiss this with a laugh, but the implications are terrifying. The video in this case was obviously fake, but what about the ones that aren’t? The ones that are meticulously crafted and seamlessly woven into a case? We need a system, much like the anti-counterfeiting measures built into currency, that helps identify AI-generated content. We need an agreed-upon standard so that when a court is presented with information, particularly video, there’s a reliable way to verify its authenticity. Otherwise, what can we trust?
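Here is a hedged sketch of what such a standard might look like in practice: the capture device or generating tool signs a hash of the file at creation time, and the court later verifies that signature against a published public key. It uses Ed25519 from the `cryptography` package; the workflow and filenames are assumptions for illustration, not an existing court or industry standard, though it is the rough shape of what C2PA-style provenance schemes do.

```python
# Sketch of signature-based provenance: the producing device or tool
# signs the SHA-256 digest of a file, and anyone holding the published
# public key can later confirm the file has not been altered.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def file_digest(path: str) -> bytes:
    """SHA-256 digest of the file's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()


def sign_file(path: str, private_key: Ed25519PrivateKey) -> bytes:
    """Producer side: sign the file's digest at creation time."""
    return private_key.sign(file_digest(path))


def verify_file(path: str, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Court side: check the signature against the file as submitted."""
    try:
        public_key.verify(signature, file_digest(path))
        return True
    except InvalidSignature:
        return False
```

Any edit made after signing, even a single swapped frame, changes the digest and the verification fails. The unsolved part is institutional, not cryptographic: who issues the keys, and which registry the courts agree to trust.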
Unfortunately, the genie is already out of the bottle. AI models are readily available, even on personal computers, and trying to regulate them out of existence is simply impossible; there will always be people who find ways to exploit new technology. But that doesn’t mean we shouldn’t try to mitigate the damage. Requiring AI developers to ship detectors for their own models’ output could be a step in the right direction. Detection is an arms race, and it’s one we’ll have to run forever.
Maybe we should also consider the language we use. Terms like “AI slop” might be catchy, but they’re not exactly conducive to serious discussion of the very real threat AI poses. And couldn’t AI tools simply label their own output as AI-generated? Isn’t there an easy fix here?
Of course, the debate around regulation raises First Amendment questions. Where do we draw the line? At what point does regulation become an infringement on free speech? And while we’re talking about the law, it’s worth noting that falsifying evidence is already a crime. The real problem isn’t the technology itself; it’s the people who misuse it, and proving that someone lied is hard. We need significant, consistently enforced penalties for submitting false information, whether it’s AI-generated or not.
The existing legal framework needs to adapt to this challenge. All evidence must be authenticated, and those who submit it must attest to its legitimacy under penalty of perjury. It’s about ensuring due process and holding people accountable. A judge faced with suspected fabricated evidence would ideally refer it to the district attorney, who would then decide whether to bring charges.
