Deepfakes and criminal proceedings: the digital evidence crisis

Criminal justice was built on a basic assumption: documentary evidence faithfully represents reality. Deepfake digital evidence has broken that assumption. Synthetic video and audio files can be generated in minutes with tools anyone can access, and the output is often indistinguishable from authentic footage. Consider the scale: deepfake content is growing at more than 900% annually, with projections of 8 million synthetic files in circulation by 2025. When a judge cannot trust what they see and hear, reasonable doubt stops being a safeguard for the accused and becomes a mechanism for procedural paralysis. The answer is not to chase fakes with detection algorithms that grow less reliable with each generative leap. It is to certify authenticity at the source, before digital content ever enters a proceeding. This analysis examines why, drawing on the broader framework of admissibility requirements for digital evidence in modern legal systems.

This insight is part of our guide: Admissibility of digital evidence: a complete guide to requirements, standards and best practices

Why deepfakes undermine documentary evidence

Deepfake digital evidence makes the presumption of genuineness obsolete. That presumption underpins documentary proof in both common law and civil law jurisdictions, and it fails once any video, audio recording, or photograph introduced in a criminal proceeding could plausibly be synthetic. The burden of proof effectively inverts: the producing party must now demonstrate original authenticity, rather than the challenging party having to demonstrate falsity. With AI-enabled fraud losses projected to reach $40 billion by 2027, this is not a theoretical concern. Courts are already grappling with it.

The authenticity presumption that no longer holds

Documentary evidence in criminal proceedings has traditionally rested on an implicit causal link between representation and reality. A photograph was the optical-chemical product of what the lens captured. A recording was the acoustic trace of what the microphone registered. Under the U.S. Federal Rules of Evidence (Rules 901 and 1001-1004), authentication requires evidence "sufficient to support a finding that the item is what the proponent claims it is." EU procedural frameworks operate on similar assumptions. An image produced by a generative adversarial network breaks that causal chain at the root: it represents nothing; it simulates everything. Traditional authentication methods worked when fabrication required physical manipulation of a tangible medium. They cannot address a forgery that exists as a mathematically plausible but entirely synthetic construction.

A new type of forgery: neither material nor ideological

Legal systems have historically distinguished between material forgery (alteration of the physical medium) and ideological forgery (false content on a genuine medium). Deepfakes fit neither category. The medium is authentic: a real digital file with coherent metadata and valid encoding. The content is entirely synthetic: it does not depict an event that occurred, but simulates one in a way indistinguishable from the real thing. The EU AI Act, Article 50, will impose machine-readable labeling obligations for AI-generated content starting August 2, 2026. This is a step forward, but labeling is circumventable by design, offers no solution for content produced before the regulation takes effect, and does not cover content generated outside EU jurisdiction. For now, the law still lags behind what the technology makes possible.


Use case

Certified digital evidence for litigation

See how TrueScreen certifies digital evidence with legal value for civil and criminal litigation.

The Liar's Dividend and judicial paralysis

The most damaging threat deepfakes pose to criminal justice is not the fabrication of false evidence. It is the delegitimization of real evidence. The Liar's Dividend, theorized by Chesney and Citron in their 2019 California Law Review article (107 Cal. L. Rev. 1753), describes how a party can contest the authenticity of genuine evidence simply by invoking the theoretical possibility that it is a deepfake. The court, lacking definitive verification tools, has no solid ground to rule either way.

Contesting genuine evidence by exploiting the existence of fakes

The Liar's Dividend works as a strategic weapon in litigation. During the Capitol breach prosecutions, the defense in United States v. Reffitt sought to discredit video recordings by suggesting they could be deepfakes, despite the footage having been acquired from verified sources. In civil litigation, Mendones v. Cushman & Wakefield raised similar authentication challenges for digital content. Tesla's own lawyers have argued that recorded statements by Elon Musk could be deepfakes in an attempt to keep them out of evidence. The damage runs in both directions: the guilty contest authentic evidence, and the innocent can be accused on the basis of synthetic content. Each successful or semi-successful challenge chips away at judicial confidence in audiovisual proof as a category. Not because fakes are everywhere, but because the possibility of fakes is now enough to cast doubt on anything.

Judicial discretion vs delegation to technical experts

When the question becomes whether a video is real or synthetic, judges find themselves forced to hand the entire assessment to a technical forensic consultant. Judicial discretion, the cornerstone of evidence evaluation, is hollowed out in practice, replaced by a technical delegation the judge cannot independently verify. A forensic expert's conclusion that content is "probably authentic" or "likely synthetic" is probabilistic by nature. Under the reasonable doubt standard for criminal conviction (grounded in the U.S. due process tradition and in Article 6 of the European Convention on Human Rights), a probabilistic technical opinion may not be enough. If the expert cannot rule out the synthetic nature of evidence, that residual uncertainty becomes reasonable doubt.

Parameter | Deepfake detection | Source certification
Timing of intervention | Post-hoc: analyzes content after production | Preventive: certifies at the moment of acquisition
Accuracy | Drops to 50% on "in the wild" content; cross-dataset degradation >15% | Deterministic: a cryptographic hash makes any modification detectable
Probative value | Probabilistic opinion from a technical expert | Documentary evidence with a verifiable chain of custody
Scalability over time | Chases GAN advances in an asymmetric race | Independent of generation technology
Resistance to Liar's Dividend | None: residual doubt fuels contestation | Provenance is attested before any challenge arises

Source certification: the answer detection cannot provide

Digital forensics applied to deepfake detection has a structural problem that no algorithmic advance can fix: it analyzes content after the fact, locked in an asymmetric race with generation technologies. The deepfake digital evidence problem is resolved by inverting the logic: rather than asking whether content is fake, source certification guarantees its authenticity at the moment of acquisition.

Why digital forensics alone is not enough

Detection operates post-hoc on deepfake digital evidence already produced, with error margins incompatible with criminal proceedings. CNN architectures commonly used for detection show accuracy degradation exceeding 15% when applied to datasets different from their training set. On "in the wild" content, accuracy drops to 50%: no better than a coin flip. Deepfake fraud attempts have grown by 2,137% in three years, expanding the volume of contestable evidence at a pace no detection system can match. A rigorous forensic examination still produces a probabilistic opinion. Under the reasonable doubt standard, that is not enough to sustain a conviction or to definitively authenticate evidence. The inherent limits of detection are not a temporary gap waiting for better models. They are a permanent architectural constraint.

Certified acquisition and native chain of custody

TrueScreen, the Data Authenticity Platform, works on the opposite principle: it certifies digital content at the moment of acquisition, applying a digital seal with an eIDAS-compliant timestamp and generating a SHA cryptographic hash that makes any subsequent modification detectable. The digital chain of custody is born with the data, not reconstructed after the fact. Evidence built this way satisfies authentication requirements under the Federal Rules of Evidence (Rule 901) and equivalent EU procedural standards, without requiring a forensic examination to prove authenticity. The process follows ISO 27037 and eIDAS 2.0 (including qualified electronic seal and qualified archival service), producing a certified report with verifiable digital provenance that can be attached directly to proceedings. The point is not to detect the fake, but to guarantee the authentic. The comprehensive guide to digital evidence admissibility standards provides an operational starting point. Legal professionals can integrate a digital signature compliant with eIDAS into the acquisition workflow, eliminating the contestability problem at its root.
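To make the tamper-evidence mechanism concrete, here is a minimal sketch in Python of hash-based sealing at acquisition time and later verification. It is illustrative only, not TrueScreen's implementation: the file name, the record fields, and the choice of SHA-256 are assumptions, and a real workflow would additionally have the record sealed and timestamped by an eIDAS trust service so the record itself cannot be backdated or rewritten.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def seal_at_acquisition(path: str) -> dict:
    """Build a custody record at the moment of acquisition.

    Illustrative only: in production this record would carry a
    qualified electronic seal and timestamp from a trust service.
    """
    return {
        "file": path,
        "sha256": sha256_of(path),
        "acquired_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(path: str, record: dict) -> bool:
    """A hash mismatch proves the file changed after acquisition."""
    return sha256_of(path) == record["sha256"]

# Stand-in for acquired evidence (hypothetical file and contents).
with open("evidence.mp4", "wb") as f:
    f.write(b"captured video bytes")

record = seal_at_acquisition("evidence.mp4")
print(json.dumps(record, indent=2))
print(verify("evidence.mp4", record))   # True: file unchanged

with open("evidence.mp4", "ab") as f:   # simulate tampering
    f.write(b"!")
print(verify("evidence.mp4", record))   # False: modification detected
```

The deterministic property the comparison table above refers to lives in that last check: verification does not estimate a probability of manipulation, it either matches or it does not. What detection tries to infer statistically, a sealed hash establishes by construction.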

FAQ: deepfakes and digital evidence in criminal proceedings

Can a deepfake be used as evidence in court?
A deepfake can be submitted in court like any other digital document. Whether it holds up as reliable evidence depends on the ability to demonstrate its authenticity. Without source certification and a verifiable chain of custody, the opposing party can challenge its genuineness by invoking the Liar's Dividend, rendering the evidence effectively unusable.
How do you certify digital evidence for criminal proceedings?
Three elements are needed: acquisition of the content with a cryptographic seal and timestamp at the moment of collection, generation of a hash that makes any subsequent modification detectable, and maintenance of a complete digital chain of custody. Lawyers and forensic consultants use TrueScreen to acquire screenshots, photos, and videos with certified probative value, compliant with ISO 27037 and eIDAS.
What is the Liar's Dividend?
The Liar's Dividend is the advantage a party gains by contesting the authenticity of real evidence, exploiting the mere existence of deepfake technology. Theorized by Chesney and Citron in their 2019 California Law Review article, it describes how the spread of synthetic content lets anyone cast doubt on any piece of audiovisual evidence, even when it is entirely genuine.
Can forensic analysis detect a deepfake with certainty?
No. Current detection techniques achieve variable accuracy and degrade significantly on uncontrolled content. A forensic examination delivers a probabilistic opinion, not deterministic certainty. Source certification offers a more robust approach: it does not need to detect the fake, because it guarantees the authentic at the moment of acquisition.

Certify your digital evidence at the source

Protect the probative value of your digital content with certified acquisition, native chain of custody, and eIDAS-compliant digital signature.
