On 6 January 2026, the European Commission published the first draft of the “Code of Practice on the Transparency of AI‑Generated Content” under Article 50 of the EU AI Act.

The goal is to make transparency for AI-generated or AI-manipulated content measurable, introducing practical obligations such as labeling, traceability, and verifiability, including a common EU icon for deepfakes.

From 2 August 2026, these provisions become binding for organizations that publish AI content in, or into, the European Union.

But why is this not yet the solution to the real problem?

The real issue: a label cannot protect content integrity

According to the Code of Practice, a label is meant to make it visible to the user that a piece of content is generated or manipulated by AI, so it can be interpreted more consciously.

The real limitation of this proposal is structural: in practice, the label is metadata.

And metadata, by its nature, does not “seal” the content. It can be removed in seconds, modified without leaving traces visible to the human eye, or forged outright by a malicious actor.
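
To see how fragile that is, consider a minimal Python sketch using the Pillow imaging library (the file names and the ai_generated tag are hypothetical, purely for illustration): re-saving only an image’s pixels silently discards any label stored in its metadata.

  # Hypothetical example: strip a metadata-based "AI-generated" label.
  # Only the pixels are copied; every metadata field, including the
  # label, is dropped in the process.
  from PIL import Image

  img = Image.open("labeled_deepfake.png")
  print(img.info.get("ai_generated"))      # e.g. "true" -- the transparency label

  clean = Image.new(img.mode, img.size)
  clean.putdata(list(img.getdata()))       # copy pixel data only
  clean.save("unlabeled_copy.png")         # the label is gone, pixels unchanged

Nothing in the copied file betrays that a label ever existed.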

As a result, a label correctly applied today does not guarantee the authenticity of that content tomorrow. And declared transparency is not the same as transparency that can be proven and verified over time.

In a realistic scenario, a malicious actor can:

  • generate AI content and label it correctly, then modify it after publication;
  • use the label as “proof of compliance” to legitimize altered content;
  • exploit the false sense of security conveyed by the label to bypass more rigorous checks.

In other words, if treated as an end point rather than a starting point, the label risks becoming an accelerator of unearned trust.

The difference between “transparency” and “proof”

The Code of Practice makes transparency a managed, measurable requirement subject to controls.

This is certainly an important regulatory step forward, but by its nature it remains at the level of a declaration.

To build digital trust in processes, an additional layer is needed: proof of origin, integrity, and traceability.

  1. Proof of origin: the ability to demonstrate, in a verifiable way, who generated the content, when, and in what acquisition or production context. In critical processes, origin is what enables accountability, reconstructs a timeline, and reduces disputes.
  2. Proof of integrity: the ability to demonstrate that the content has not been modified after creation. This is where technical concepts such as the cryptographic fingerprint, or hash, come into play: a compact identifier computed from the content itself. If even a single detail changes, the hash changes completely (see the sketch after this list). This makes integrity verifiable rather than perception-based.
  3. Proof of traceability: the ability to reconstruct the “history” of the content: handoffs, access, versions, and transfers. In other words, a digital chain of custody that preserves traceability throughout distribution and operational use.
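
As a minimal illustration of point 2 (the payloads are invented), the following Python snippet computes the SHA-256 fingerprint of two strings that differ by a single character: the resulting hashes have nothing in common, which is exactly what makes post-creation edits detectable.

  # Changing one character produces a completely different fingerprint.
  import hashlib

  original = b"Contract total: 10,000 EUR"
  tampered = b"Contract total: 90,000 EUR"   # a single character changed

  print(hashlib.sha256(original).hexdigest())
  print(hashlib.sha256(tampered).hexdigest())  # bears no resemblance to the first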

Only when these three elements are present is transparency truly verifiable and measurable.

A label is not the solution, but only the starting point

We insist on this point: a label cannot be the solution to the huge and complex problem of trust in AI-generated content.

Companies operating in high-risk contexts cannot limit themselves to asking whether they can label AI content in a compliant way.

They must ask whether they can demonstrate, objectively and verifiably, that the content they publish, receive, or use is authentic, intact, and traceable.

Answering that second question requires technologies and processes that go beyond labeling:

  • cryptographic verification systems for origin and integrity;
  • timestamps and digital seals that make content verifiable over time;
  • a digital chain of custody for critical content (a minimal sketch follows this list);
  • operational controls for provenance and change management embedded into workflows (approvals, archiving, audit).
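
As a rough illustration of the chain-of-custody idea (a sketch only, with invented actors and actions, not how any specific product implements it), each event record can embed the fingerprint of the previous one, so that rewriting or reordering history breaks the chain:

  # Hedged sketch: a hash-linked digital chain of custody.
  import hashlib, json, time

  def add_event(chain, action, actor):
      prev_hash = chain[-1]["hash"] if chain else "0" * 64
      record = {"action": action, "actor": actor,
                "ts": time.time(), "prev": prev_hash}
      record["hash"] = hashlib.sha256(
          json.dumps(record, sort_keys=True).encode()).hexdigest()
      chain.append(record)

  custody = []
  add_event(custody, "acquired", "field-operator")
  add_event(custody, "reviewed", "compliance-team")
  add_event(custody, "archived", "records-system")

  # Verification: recompute every hash and check each back-link.
  for i, rec in enumerate(custody):
      body = {k: v for k, v in rec.items() if k != "hash"}
      expected = hashlib.sha256(
          json.dumps(body, sort_keys=True).encode()).hexdigest()
      assert rec["hash"] == expected
      assert rec["prev"] == (custody[i - 1]["hash"] if i else "0" * 64)

Any edit to a past record changes its hash and invalidates every later back-link, which is what makes the history auditable.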

TrueScreen enables you to certify authenticity and protect content integrity

TrueScreen is a Data Authenticity Platform that allows you to acquire and certify digital content, ensuring integrity, authenticity, and traceability throughout its lifecycle. In this way, transparency becomes a governance choice to reduce fraud, speed up verification, decrease disputes, and improve the quality of data exchanged with customers and partners:

  • Integrity verification through cryptographic fingerprints that make any subsequent changes immediately detectable;
  • Certified timestamp and digital seal to anchor content to a time reference and strengthen its verifiability over time, aligned with the requirements and trust services under the eIDAS framework;
  • Digital chain of custody: traceability and auditability of relevant steps, useful in cross-functional and multi-stakeholder processes;
  • Forensic technical report explaining in detail how content was acquired and certified.

FAQ: the most common questions about AI content transparency and the EU AI Act

1) What does the EU AI Act require on transparency for AI content?

In general, the framework pushes toward measurable transparency obligations for AI-generated or AI-manipulated content, such as labeling and the ability to trace and verify it, especially when published in or into the EU. Operational details depend on implementation and guidance.

2) Is the “AI-generated” label proof that the content is transparent and trustworthy?

No. It is useful information, but it does not prove integrity or provenance. If the label can be removed or altered, it does not guarantee that the content was not manipulated later.

3) Why can a label be removed or forged so easily?

Because it often lives as metadata or a presentation element: conversions, exports, screenshots, and re-uploads can remove or change it without leaving any immediately visible evidence for the user.

4) What is the difference between transparency and proof?

Transparency declares “this content is AI-generated/manipulated.” Proof demonstrates, in a verifiable way, origin, integrity, and traceability throughout the content lifecycle.

5) Can transparency become a competitive advantage?

Yes, if it is verifiable: it reduces disputes, accelerates audits, increases trust between companies, and improves the quality of processes based on digital content.

Make your digital evidence indisputable

TrueScreen is a Data Authenticity Platform that helps companies and professionals protect, verify, and certify the origin, history, and integrity of any digital content, turning it into evidence with legal value.
