The liar’s dividend: when authentic content gets dismissed as fake

Every day, organizations of all sizes produce photos, videos, documents, and digital communications that end up in legal proceedings, business negotiations, and internal reports. Until a few years ago, the main concern was straightforward: verifying that content had not been tampered with. Today, the problem has flipped.

With the spread of generative AI and deepfakes, anyone can claim that authentic content is fake. And that claim has become plausible. Researchers Robert Chesney and Danielle Citron coined the term "liar's dividend": the benefit dishonest actors gain from the mere existence of synthetic content. They don't need to create deepfakes. They just need to invoke their existence to cast doubt on any real evidence.

For organizations, this is an operational risk that is starting to bite. The answer cannot be deepfake detection, because no detection tool offers legal certainty. What is needed is an infrastructure that makes the authenticity of evidence independently verifiable from the moment of creation: Digital Provenance.

What the "liar's dividend" is and why it matters for organizations

The concept was introduced in 2019 by Robert Chesney and Danielle Citron in a paper published in the California Law Review. The idea is straightforward: in a world where deepfakes exist, anyone can deny the truthfulness of inconvenient content by claiming it was AI-generated. And that denial has become credible.

UNESCO, in its publication "Deepfakes and the crisis of knowing", frames the phenomenon as a "crisis of knowing itself." Deepfakes do not merely introduce falsehoods into the information ecosystem. They erode the mechanisms by which societies build a shared understanding of reality. The concern is not just that fake content exists. It is that the mere possibility of synthetic content changes how all content is perceived, including authentic content.

The World Economic Forum's Global Risks Report 2025 ranked misinformation and disinformation as the number one short-term global risk for the second consecutive year. The liar's dividend is the less discussed side of this crisis: it is not about the impact of fake content, but about the effect that its mere existence has on the credibility of real content.

The awareness paradox

There is an irony worth considering. The more people learn about deepfakes, the stronger the liar's dividend becomes. Anyone looking to deny authentic evidence benefits directly from growing media literacy on the subject. "You know as well as I do that anything can be faked nowadays" is the perfect argument to dismiss any inconvenient proof. Media education alone is not enough. Paradoxically, it risks amplifying the problem.

Real-world consequences in the most exposed sectors

The liar's dividend is not an abstract academic concept. It produces measurable effects across multiple sectors, with real costs for those whose authentic evidence gets challenged.

Legal proceedings and disputes

In courtrooms, the challenge is already concrete. Under the eIDAS regulation, electronic documents with qualified seals carry a presumption of integrity. But digital evidence without a verifiable chain of custody can be challenged simply by raising the possibility of AI manipulation. No proof of tampering is required: the doubt alone is enough to undermine the evidence.

In the United States, the phenomenon has already reached trial courts. In a wrongful-death lawsuit involving Tesla's Autopilot, the company's lawyers argued that recorded statements by Elon Musk about self-driving safety could not be used as evidence because they were potentially AI-generated. They did not have to prove the recordings were deepfakes. Raising the doubt was enough.

Corporate communication and reputational crises

For companies, the risk operates on two levels. The first is well known: a competitor or malicious actor creates fake content to damage reputation. The second is less obvious but equally dangerous. When a company produces authentic documentation to defend itself, that documentation can also be dismissed as "probably AI-generated." The defense itself becomes attackable.

Political and institutional contexts

Authentic footage of events gets denied by political actors who invoke deepfake risk. The EPRS (European Parliamentary Research Service) estimated that 8 million deepfakes would circulate by 2025, up from 500,000 in 2023. Europol has projected that 90% of online content could be synthetically generated by 2026. Plausible deniability has become the cheapest tool for evading accountability.

The fundamental asymmetry: challenging costs nothing, proving authenticity is expensive

The mechanism works because it exploits a deep economic asymmetry. Saying "it could be a deepfake" costs nothing. No evidence, expertise, or investment required. Anyone can say it in any context.

Proving that content is authentic is a different matter entirely. It requires forensic analysis, metadata examination, chain of custody testimony, and technical consulting. And even after all of this, the result remains probabilistic. No expert can guarantee with absolute certainty that a file has not been manipulated, unless an immutable cryptographic trace was generated at the moment of creation.
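The "immutable cryptographic trace" mentioned above rests on a simple property of cryptographic hash functions: any change to a file, however small, produces a completely different fingerprint. A minimal illustration in Python (the file contents here are invented placeholders, not real evidence):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 fingerprint of a byte string."""
    return hashlib.sha256(data).hexdigest()

# At the moment of creation: record the fingerprint alongside the file.
original = b"contract scan, 2024-05-01"   # placeholder for real file bytes
recorded_hash = sha256_of(original)

# Later, during a dispute: recompute and compare.
print(sha256_of(original) == recorded_hash)                    # unchanged file matches
print(sha256_of(b"contract scan, altered") == recorded_hash)   # any edit breaks the match
```

The catch, and the reason post-hoc forensics stays probabilistic, is that the comparison only proves integrity *since the hash was recorded*. If the fingerprint was not captured and sealed at creation, nothing stops a manipulated file from being hashed after the fact.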

Those who lie have it easy. Those who tell the truth pay the price of uncertainty created by others.

Why detection is not the answer

The instinctive reaction to deepfakes is detection: tools that analyze content and determine whether it is authentic or AI-generated. But detection has concrete limitations that make it inadequate against the liar's dividend.

The technology gap between generation and detection

The quality of synthetic content improves faster than the ability to detect it. Each new generative model outperforms existing detectors. It is an asymmetric race, and the generators always start with the advantage.

The absence of legal certainty

A detection tool that returns "probably authentic" does not eliminate reasonable doubt. In legal proceedings, a probability is not proof. The liar's dividend thrives in the space between "probably true" and "certainly true." As long as that space exists, the liar has room to maneuver.

The dependence on context

Detectors analyze the file, but they cannot attest where, when, and by whom that file was created. Without verifiable context, even content declared "authentic" by a detector remains vulnerable.

The real answer: certifying authenticity at the source

If proving content authenticity after the fact is costly, uncertain, and always contestable, the logical approach is to guarantee authenticity at the moment of creation. This approach is called Digital Provenance: a verifiable trail of the origin, integrity, and history of every piece of digital content.

Digital Provenance does not attempt to identify what is fake. It guarantees what is real. It shifts from authenticating the user to authenticating the data itself.

How source certification works

Content certified at the source carries verifiable and immutable metadata: the creator's digital signature, a timestamp issued by a qualified third party, verified GPS coordinates, a cryptographic hash of the file, and device metadata. Together, these form a chain of custody that anyone can verify independently, without needing to trust whoever produced the content.
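The structure of such a certification can be sketched in a few lines of Python. This is an illustration of the general pattern only, not TrueScreen's actual implementation: the HMAC key below stands in for a qualified electronic seal, and in practice the signature and timestamp would be issued by a Qualified Trust Service Provider, not held locally.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical demo key; a real deployment never holds this itself --
# a Qualified Trust Service Provider issues the seal and timestamp.
SIGNING_KEY = b"demo-key-not-a-real-qtsp-credential"

def seal(content: bytes, creator: str, gps: tuple) -> dict:
    """Bundle provenance metadata into a manifest and seal it
    (HMAC stands in for a qualified electronic seal in this sketch)."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "gps": gps,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["seal"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Independently re-check both the seal and the content fingerprint."""
    claimed = {k: v for k, v in manifest.items() if k != "seal"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["seal"])
            and hashlib.sha256(content).hexdigest() == manifest["sha256"])

photo = b"\x89PNG...raw bytes of a certified photo"   # placeholder content
m = seal(photo, creator="inspector@example.com", gps=(45.4642, 9.1900))
print(verify(photo, m))          # intact content, valid seal
print(verify(photo + b"x", m))   # any modification is detectable
```

The key design property is that verification needs no trust in whoever produced the content: the check runs against the sealed manifest, and tampering with either the file or the metadata invalidates it.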

TrueScreen operates with this approach. The platform acquires and certifies digital content (photos, videos, documents, emails, screenshots, web browsing) using a forensic methodology compliant with ISO/IEC 27037 standards and the eIDAS regulation. Every file is sealed with a digital signature and qualified timestamp issued by an international Qualified Trust Service Provider, in compliance with ISO/IEC 27001.

When someone claims "it could be a deepfake," the organization can respond with cryptographically verifiable proof. Doubt alone is no longer enough: the challenger must produce qualified counter-evidence. The liar's dividend has no room left to operate.

From plausible deniability to cryptographic certainty

Under the eIDAS regulation, qualified electronic seals carry a legal presumption of integrity and correctness of data origin. Content certified with a digital signature, qualified timestamp, and verifiable metadata answers every potential challenge preemptively. Integrity is guaranteed by cryptographic hash. The timeline is attested by a certified third-party timestamp. Provenance is verified through the creator's identity. Context is documented by GPS coordinates and device metadata.

Digital evidence certified to these standards is not "probably authentic." It is legally presumed authentic until qualified counter-evidence is presented. The space in which the liar's dividend operates closes.

FAQ: the liar's dividend and the digital trust crisis

What is the "liar's dividend"?
The liar's dividend is the advantage gained by dishonest actors in a world where synthetic content exists: they can deny the authenticity of any real evidence by claiming it could be AI-generated, without having to prove it.
Why doesn't deepfake detection solve the liar's dividend problem?
Because detection tools return probabilistic results, not legal certainties. In legal or business contexts, a probability does not eliminate reasonable doubt and leaves room for challenge.
How can you prove that digital content is authentic?
Through source certification with digital provenance: digital signature, qualified timestamp, cryptographic hash, and verifiable metadata created at the moment of acquisition make authenticity independently verifiable by anyone.
Which sectors are most exposed to the liar's dividend risk?
The legal, insurance, corporate communication, and institutional sectors are particularly exposed because they rely on the credibility of digital evidence in decision-making, disputes, and legal proceedings.
What legal frameworks support certified digital evidence?
The eIDAS regulation establishes that qualified electronic seals carry a legal presumption of integrity and data origin correctness. ISO/IEC 27037 provides guidelines for digital evidence handling, and the Budapest Convention addresses cybercrime evidence standards across jurisdictions.

Protect the value of your digital evidence

With TrueScreen, every photo, video, and document is certified at the source with a digital signature, qualified timestamp, and verifiable metadata. The liar’s dividend does not work against cryptographically authentic evidence.
