When Real Looks Fake: the Authenticity Paradox in the AI Era

In March 2026, an authentic video of military operations in Iran was shared on social media. Within hours, thousands of users dismissed it as a deepfake. Grok, the AI chatbot built by xAI and integrated into X, gave three contradictory explanations for the same footage over the course of a single day: it was from Pakistan in 2014, then Kabul in 2021, then Iran in 2026. The video was real. The damage was done.

That same month, a University of East Anglia study examined over 400 public comments beneath images posted by 17 NGOs, including Amnesty International, WHO, and WWF. Fewer than 1 in 5 comments focused on the humanitarian message. The rest debated whether the image was authentic or AI-generated. Even when the images were real. Even when they were properly labelled.

This is the authenticity paradox of the generative AI era: genuine content must prove it is real, while fake content is believed by default. Cryptographic certification at the source is the only structural answer to this reversal, because it eliminates the need to convince anyone: the content carries its own proof of authenticity.

The liar's dividend: what it is and why it targets authentic content

The liar's dividend is the mechanism by which the mere existence of deepfakes allows anyone to challenge any authentic evidence simply by claiming it is fake. The term was coined by legal scholars Bobby Chesney and Danielle Citron to describe a side effect of the proliferation of synthetic content: no technical expertise is needed to discredit genuine evidence. Doubt alone is enough. A study by Yale University researchers published in the American Political Science Review demonstrated that falsely claiming an authentic video is a deepfake can actually improve public perception of the political leader involved. The liar's dividend works both ways: it shields liars and harms truth-tellers.

How systematic doubt works as a weapon

The mechanism is straightforward. A politician caught on camera making embarrassing statements claims the video is a deepfake. A company confronted with authentic documents argues they were manipulated with AI. A government accused of documented violence labels the evidence "AI slop". There is no need to prove the content is actually fake: sowing doubt is sufficient. During the Iran conflict in March 2026, state-affiliated accounts shared fabricated thermal maps to argue that authentic protest videos were AI-generated, a tactic Foreign Policy called "forensics cosplay".

The UEA study: fewer than 1 in 5 comments address the actual message

Researchers David Girling and Deborah Adesina analysed 171 AI-generated images used by humanitarian organisations and over 400 public comments. Across 17 organisations, the comment distribution tells a precise story: 141 comments focused on AI ethics and authenticity, 122 critiqued technical execution, and only 80 (fewer than 20%) engaged with the humanitarian message. The most telling finding concerns transparency: roughly 85% of the images were correctly labelled as AI-generated, yet the disclosure did not protect the organisations from public backlash. Labelling is not enough: when the public is trained to doubt, it doubts everything.

The reversal of the burden of proof in the generative AI era

The burden of proof for digital content has been reversed. Until recently, visual content was presumed authentic unless proven otherwise. Today, the opposite holds: any photograph, video, or digital document is potentially suspect until someone demonstrates it is genuine. This reversal disproportionately affects those who produce authentic content to document facts: NGOs, investigative journalists, organisations communicating verifiable data.

When a real video is dismissed as a deepfake

The cases keep accumulating. The Brennan Center for Justice documented a pattern of incidents: Spanish politician Alfonso Dastis called images of police violence in Catalonia "fake photos"; US Mayor Jim Fouts called verified audio recordings "phony, engineered tapes" despite forensic expert confirmation; Tesla's legal team suggested that Elon Musk's own safety statements might be deepfakes. In every case, the strategy is identical: do not challenge the content on its merits, challenge its very existence.

The reputational cost for organisations documenting contestable facts

For organisations documenting uncomfortable facts, the liar's dividend is an operational risk. An NGO publishing photographs of a humanitarian crisis finds itself defending the authenticity of its images before it can even communicate its message. A company presenting favourable environmental data gets accused of greenwashing with manipulated content. The cost goes beyond reputation: it is communicative paralysis. When every piece of content becomes potentially contestable, fact-based communication loses its function.

Proposed FRE 901 and Rule 707: a regulatory signal from the US legal system

The US legal system is already responding. The proposed amendment to Federal Rule of Evidence 901(c) would raise the evidentiary standard for digital content authenticity when a deepfake objection is raised: from a sufficiency standard to preponderance of evidence (more likely than not). In parallel, proposed Rule 707 (August 2025) introduces specific standards for machine-generated evidence. The signal is unambiguous: the authenticity of digital content can no longer be taken for granted, not even in court.

Why deepfake detection does not solve the authenticity paradox

The intuitive response to deepfakes is to detect them. But deepfake detection, however sophisticated, is structurally inadequate to solve the authenticity paradox: not because the technology is insufficient today, but because the model itself is wrong.

The structural limits of deepfake detection

Every deepfake detection tool operates after the fact: it analyses content that has already been produced and attempts to determine whether it was generated or manipulated by AI. The problem is that generative models improve faster than detection tools: an asymmetric race where synthetic content creators hold a structural advantage. The University of Edinburgh study on AI digital fingerprints showed that traces left by generative models are vulnerable and can be circumvented.

From post-hoc analysis to certification at the source

The paradigm shift is conceptual before it is technological. Instead of asking "is this content fake?", the right question is "does this content have proof of authenticity?" Digital provenance reverses the approach: rather than analysing content after creation, it certifies content at the moment of acquisition. Content certified at the source does not need to pass an authenticity test because it carries cryptographic proof of its own origin.

What is cryptographic certification at the source of digital content

Cryptographic certification at the source is the process that binds digital content to its origin at the moment of acquisition, creating mathematical proof of authenticity that cannot be replicated or altered afterwards. Unlike deepfake detection, which operates by exclusion ("does not appear synthetic"), certification operates by inclusion ("has verifiable proof of authenticity"). TrueScreen, the Data Authenticity Platform, implements this process through forensic acquisition of any digital content: photos, videos, audio, documents, emails, web pages, screen recordings, and online meetings. At the moment of creation, it applies a digital signature, a qualified timestamp, and a complete chain of custody.
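To make the general technique concrete, here is a minimal sketch in Python. It is an illustration of the certification-at-the-source idea, not TrueScreen's implementation: the record fields and key handling are assumptions, and a production system would obtain its timestamp from a qualified trust service provider rather than a local clock.

```python
# Illustrative sketch of certification at the source (NOT TrueScreen's
# implementation). Requires the `cryptography` package.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def certify_at_source(content: bytes, device_key: Ed25519PrivateKey) -> dict:
    """Bind content to its origin at the moment of acquisition."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),         # digital fingerprint
        "acquired_at": datetime.now(timezone.utc).isoformat(),  # stand-in for a qualified timestamp
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = device_key.sign(payload).hex()        # binds hash + time to the signer
    return record


device_key = Ed25519PrivateKey.generate()
certificate = certify_at_source(b"<photo bytes>", device_key)
```

The essential design choice is that the proof is created at acquisition time, not reconstructed later: once the signature exists, neither the content nor the timestamp can be changed without invalidating it.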

Cryptographic hash and qualified timestamp

A cryptographic hash is a unique digital fingerprint calculated on the content at the moment of acquisition. Any subsequent modification, even a single bit, produces a completely different hash. The qualified timestamp, issued by a qualified trust service provider under the eIDAS Regulation, certifies the exact moment the content was acquired with legal presumption of accuracy across the entire European Union. The combination of hash and timestamp creates a mathematical bond between content, time, and the identity of the acquirer that cannot be contested.
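The avalanche property is easy to see in practice. A minimal demonstration, assuming SHA-256 as the hash function (the specific algorithm used in production is an implementation detail):

```python
import hashlib

original = b"field photograph, 2026-03-14"
tampered = b"field photograph, 2026-03-15"  # a single character changed

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tampered).hexdigest())  # a completely different digest
```

Because the two digests share nothing recognisable, even the smallest edit is immediately evident.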

GPS metadata and complete chain of custody

Every piece of content certified with the TrueScreen app includes verified GPS metadata at the moment of acquisition, documenting where the content was created. The chain of custody records every access and every transfer of the data from creation to presentation, following the guidelines of the ISO/IEC 27037 standard for handling digital evidence. The output is a complete evidentiary package: what was acquired, when, where, by whom, and mathematical proof that it has not been altered.
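Chain-of-custody records are typically made tamper-evident by hash-chaining: each entry commits to the hash of the one before it. A sketch of that general idea follows; the field names are hypothetical, not the TrueScreen schema.

```python
import hashlib
import json


def custody_entry(prev_hash: str, event: dict) -> dict:
    """Append-only custody record: each entry commits to the previous one."""
    body = json.dumps({"prev": prev_hash, **event}, sort_keys=True)
    return {"prev": prev_hash, **event,
            "entry_hash": hashlib.sha256(body.encode()).hexdigest()}


acquired = custody_entry("0" * 64, {"action": "acquired", "actor": "device-01"})
shared = custody_entry(acquired["entry_hash"], {"action": "transferred", "actor": "analyst-07"})
# Editing `acquired` after the fact breaks the chain: its recomputed
# entry hash no longer matches the `prev` field stored in `shared`.
```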

Scenario: an NGO documenting a crisis with certified photographs

A humanitarian organisation documenting a crisis uses the TrueScreen app to capture photographs in the field. Each image is certified at the moment of capture: cryptographic hash, qualified timestamp, verified GPS coordinates. When the photographs are published and someone challenges them as AI-generated, the organisation does not need to convince anyone: the certificate autonomously demonstrates that those images were acquired at that location, at that time, from that device. The liar's dividend loses its effectiveness because doubt collides with verifiable cryptographic proof.
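Verification mirrors acquisition. Continuing the earlier sketch (again an illustration under the same assumptions, not the platform's actual verifier), anyone holding the device's public key can check the certificate without trusting the publisher:

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_certificate(content: bytes, cert: dict, public_key: Ed25519PublicKey) -> bool:
    """Return True only if the content matches the certified hash and signature."""
    if hashlib.sha256(content).hexdigest() != cert["sha256"]:
        return False  # the content was altered after certification
    payload = json.dumps(
        {"sha256": cert["sha256"], "acquired_at": cert["acquired_at"]},
        sort_keys=True,
    ).encode()
    try:
        public_key.verify(bytes.fromhex(cert["signature"]), payload)
        return True   # the hash-time-signer binding is intact
    except InvalidSignature:
        return False
```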

Three concrete benefits for organisations certifying at the source

Certifying content at the source delivers three measurable benefits for organisations exposed to the liar's dividend. These are direct consequences of attaching cryptographic proof of authenticity to every critical piece of content.

| Benefit | Without certification | With certification at the source |
| --- | --- | --- |
| Burden of proof | The organisation must prove the content is authentic | The certificate autonomously proves authenticity |
| Reputational protection | Every piece of content is contestable; communication is paralysed | Certified content withstands systematic doubt |
| Legal standing | No recourse against those who cry "it is fake" | Cryptographic proof enforceable in legal proceedings |

Eliminating the burden of proof on authenticity

The most immediate benefit is the shift of the burden of proof. Content certified at the source does not require the organisation to convince the public, the media, or a court of its authenticity. The certificate is the proof. The cryptographic hash, qualified timestamp, and chain of custody form a self-sufficient evidentiary package that speaks for itself, regardless of the context in which the content is presented.

Reputational protection for contestable facts

NGOs in crisis zones, companies under environmental scrutiny, public bodies facing media investigations: for all these organisations, certification at the source is structural protection. Consider a concrete case: a company accused of greenwashing presents environmental data certified at the source through the TrueScreen platform. The data carries proof of the time, location, and conditions of acquisition. The challenge shifts from "these are fake data" to the substance of the information.

Legal standing against those who abuse the liar's dividend

Certification creates concrete legal standing to challenge those who discredit genuine evidence. The proposed FRE 901(c) in the United States would raise the evidentiary standard for digital authenticity: those presenting evidence must demonstrate by a preponderance of the evidence that it is authentic. Content certified with a digital signature and qualified timestamp meets even this more rigorous standard, and it opens the possibility of acting against those who abuse doubt to discredit documented facts.

EU AI Act, Digital Services Act, and ISO 27037: the regulatory framework

The European and international regulatory framework is converging on a precise point: digital content must have verifiable proof of authenticity. The EU AI Act, the Digital Services Act, and ISO standards define obligations and guidelines that make certification at the source not just a competitive advantage, but a compliance requirement.

Transparency and traceability obligations for content

Article 50 of the EU AI Act, fully enforceable from 2 August 2026, requires providers of AI systems to ensure machine-readable marking and detectability of AI-generated or manipulated content. But the regulation addresses only half the problem: it labels synthetic content without providing a mechanism to certify authentic content. The ISO/IEC 27037 standard partially fills this gap by defining guidelines for identification, collection, acquisition, and preservation of digital evidence with forensic value. Certification at the source is the point of convergence: it meets ISO standards for chain of custody and anticipates AI Act requirements for traceability.

Platform responsibility in content verification

The Digital Services Act imposes specific obligations on Very Large Online Platforms (VLOPs) regarding the management of illegal content, including deepfakes, and the mitigation of AI-related risks. The Grok incident during the Iran conflict demonstrates how far these obligations are from being effective: an AI chatbot integrated into the platform amplified confusion rather than reducing it. Certification at the source offers an alternative: content with cryptographic proof of authenticity does not depend on platforms' ability to distinguish real from fake.

FAQ: the authenticity paradox and the liar's dividend

What is the liar's dividend?
The liar's dividend is the advantage bad actors gain from the mere existence of deepfakes: they can discredit any authentic content by claiming it was AI-generated. The term was coined by legal scholars Bobby Chesney and Danielle Citron. No technical expertise is required: doubt alone neutralises genuine evidence.
Why is deepfake detection not sufficient?
Deepfake detection tools operate after the fact, analysing content that has already been produced. Generative models improve faster than detection tools, creating an asymmetric race. Certification at the source bypasses this problem entirely: it does not try to distinguish real from fake but creates proof of authenticity at the moment of acquisition.
How does cryptographic certification at the source work?
Certification at the source acquires digital content and applies a cryptographic hash (unique digital fingerprint), a qualified timestamp issued by an eIDAS trust service provider, verified GPS metadata, and the acquirer's digital signature. Any subsequent modification to the content produces a non-matching hash, making tampering evident.
Does the EU AI Act solve the liar's dividend problem?
The EU AI Act (Article 50, enforceable from 2 August 2026) mandates the labelling of AI-generated content, but it addresses only half the problem: it labels synthetic content without providing a mechanism to certify authentic content. Certification at the source completes the picture by proving the authenticity of genuine content.
Which organisations are most exposed to the liar's dividend?
The most vulnerable organisations are those documenting contestable facts: NGOs in crisis zones, investigative journalists, companies subject to environmental or social scrutiny, and public bodies facing media investigation. For these organisations, certifying content at the source is a form of structural reputational protection.

Certify your content at the source

In the era of the liar's dividend, authentic content needs cryptographic proof. TrueScreen certifies photos, videos, documents, and communications at the moment of acquisition, with digital signature and qualified timestamp.
