EU AI Act: transparency obligations for businesses from August 2026

European businesses face a paradigm shift. AI adoption in operational processes is growing at an unprecedented pace: according to McKinsey's State of AI report, 88% of organizations already use AI in at least one business function. In Italy alone, ISTAT data shows AI adoption among businesses doubled from 8.2% in 2024 to 16.4% in 2025.

Yet from August 2026, Regulation (EU) 2024/1689, known as the EU AI Act, will make stringent transparency requirements fully enforceable for anyone developing or deploying AI systems. Article 50 imposes specific obligations around labeling AI-generated content, documenting data provenance, and informing users. The problem: most businesses are not ready.

The answer is not last-minute compliance patches. It lies in building a Digital Provenance infrastructure that certifies the origin and authenticity of digital content at the source, turning compliance from a reactive burden into a structural competitive advantage.

EU AI Act: the first comprehensive AI regulation

The EU AI Act is the world's first comprehensive regulatory framework for artificial intelligence. It entered into force on August 1, 2024, and introduces obligations progressively over a three-year timeline.

Key milestones:

  • February 2025: AI literacy obligations (Art. 4) and prohibition of unacceptable-risk practices (Art. 5) already in force
  • August 2025: obligations for general-purpose AI model (GPAI) providers, designation of national competent authorities, penalty regime activated with fines up to EUR 35 million or 7% of global turnover
  • August 2026: transparency obligations (Art. 50) and requirements for high-risk AI systems (Annex III) become fully enforceable

The enforcement structure relies on national competent authorities in each Member State, coordinated through the European AI Board. Penalties apply to both EU and non-EU companies offering AI systems in the European market, making this a truly extraterritorial regulation with teeth.

Article 50: transparency requirements in detail

Article 50 of the AI Act introduces differentiated obligations for providers and deployers of AI systems, with particular focus on AI-generated content.

Obligations for AI system providers

Developers of AI systems must ensure three fundamental requirements:

  • AI systems interacting directly with people must inform users they are communicating with an artificial intelligence, unless obvious from the context
  • Generated content (text, audio, images, video) must be marked in a machine-readable format and detectable as artificially generated
  • Free interfaces, APIs, or publicly accessible verification tools must be provided so third parties can verify content provenance
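In practice, "marked in a machine-readable format" usually means attaching structured provenance metadata to the generated output. Below is a minimal sketch using a plain JSON sidecar record; the field names are illustrative assumptions, not the schema of C2PA or of the forthcoming Code of Practice.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_sidecar(content: bytes, generator: str) -> str:
    """Build a minimal machine-readable provenance record for a piece of
    AI-generated content. Field names are illustrative only, not a C2PA
    or Code of Practice schema."""
    record = {
        "ai_generated": True,                            # Art. 50 disclosure flag
        "generator": generator,                          # tool that produced the content
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),   # ties the record to the exact bytes
    }
    return json.dumps(record, indent=2)

sidecar = make_provenance_sidecar(b"Example AI-generated text", "demo-model")
```

The hash binds the disclosure to one specific file: if the content is edited after generation, the sidecar no longer matches and the mismatch is detectable by any third party.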

Obligations for deployers

Organizations deploying AI systems in their processes carry equally specific responsibilities:

  • Disclose that deepfake content has been artificially generated or manipulated
  • Label AI-generated text when published on matters of public interest
  • Implement persistent visual indicators for images and video, opening disclaimers for live video, and audible disclaimers for audio content
  • Train staff on identifying AI-generated content and establish human oversight mechanisms

The Transparency Code of Practice

The European Commission is finalizing a Code of Practice on transparency and watermarking for AI content. The first draft, published in December 2025, prescribes a multi-layered approach: metadata embedding with provenance standards (including C2PA), imperceptible watermarking at the pixel level, and logging systems where other techniques prove insufficient. The final version is expected by May-June 2026, shortly before enforcement begins.

The readiness gap across European businesses

The data on European business readiness reveals a concerning gap. A Deloitte survey of 500 managers found that only 35.7% feel adequately prepared for AI Act compliance, while 19.4% describe themselves as poorly prepared. Just 26.2% have actually started concrete compliance activities.

Structural gaps are widespread:

  • Over 50% of organizations lack a basic inventory of their AI systems
  • Only 18% have implemented a complete AI governance framework
  • 70% have no formalized AI governance model
  • Only 28% have CEO-level oversight and 17% have board-level oversight of AI

The penalty regime, already active for prohibited practices, provides for fines of up to EUR 35 million or 7% of global annual turnover for the most serious violations, up to EUR 15 million or 3% for other non-compliance (including transparency obligations), and up to EUR 7.5 million or 1% for providing inaccurate information to authorities.

The challenge for small and mid-size enterprises

While large corporations are accelerating AI adoption (53.1% in Italy, according to ISTAT), SMEs lag significantly behind at 15.7%. The gap widened by 12 percentage points in a single year. For SMEs, the dual challenge of limited AI expertise and compliance complexity creates a particularly difficult environment. Among the 60% of companies that evaluated AI investments without proceeding, the primary barrier cited is a lack of adequate skills.

Data provenance: the structural approach to compliance

Complying with the AI Act is not about adding labels to AI-generated content after the fact. It requires building an infrastructure capable of tracing, verifying, and certifying the origin of every piece of digital content throughout its entire lifecycle.

This approach, grounded in Digital Provenance, follows a clear logic: in a world where any content can potentially be generated or manipulated by AI, trying to detect what is fake becomes increasingly futile. What works is guaranteeing what is authentic at the source, making every piece of data immediately verifiable and trustworthy.

How it works in practice

TrueScreen operates exactly in this direction. As a data authenticity platform, it enables organizations to acquire, verify, and certify any digital content (photos, videos, documents, emails, web pages) with legal and probative value.

The process is built on a forensic methodology compliant with ISO/IEC 27037 and ISO/IEC 27001 standards. Upon acquisition, each file is sealed with an electronic seal and qualified timestamp issued by a Qualified Trust Service Provider (QTSP) under the eIDAS Regulation (EU 910/2014). Cryptographic hashing algorithms ensure that any subsequent modification is immediately detectable.

The certified output includes a complete package: the original files, a human-readable PDF report, a machine-readable JSON report, and an XML file containing the QTSP certification with digital signature and timestamp. This package creates a complete, verifiable audit trail: precisely the kind of provenance documentation that Article 50 requires.
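The tamper-evidence described above rests on cryptographic hashing: if the certified record stores the file's digest at sealing time, any later modification produces a different digest and is immediately detectable. A minimal sketch of that check using SHA-256 (the actual algorithms and record format used by TrueScreen may differ):

```python
import hashlib

def file_is_unmodified(file_bytes: bytes, certified_sha256: str) -> bool:
    """Compare a file's current SHA-256 digest with the digest recorded
    at certification time. Any change to the bytes yields a different
    digest, so a mismatch means the file was altered after sealing."""
    return hashlib.sha256(file_bytes).hexdigest() == certified_sha256

original = b"contract scan v1"
sealed_digest = hashlib.sha256(original).hexdigest()  # stored in the certified record

assert file_is_unmodified(original, sealed_digest)             # untouched file verifies
assert not file_is_unmodified(original + b".", sealed_digest)  # any edit is detected
```

In a real QTSP package, the digest is additionally covered by a digital signature and qualified timestamp, so the record itself cannot be back-dated or silently replaced.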

From reactive compliance to proactive infrastructure

The fundamental difference between surface-level compliance and a structural strategy lies in sustainability. Adding watermarks or metadata on a case-by-case basis is fragile, generating rising costs and coverage gaps. Integrating source certification into existing operational workflows through APIs and SDKs transforms compliance into an automated, scalable process.

TrueScreen already supports compliance with major European and international regulatory frameworks: EU AI Act, eIDAS, GDPR, NIS2, DSA, and the Data Act. A single provenance infrastructure that covers multiple regulatory requirements, reducing the fragmentation of compliance tools across the organization.

Operational roadmap: preparing before August 2026

With months remaining before full enforcement of transparency obligations, here are the five concrete steps every organization should take.

1. Map all AI systems in use: create a comprehensive inventory of AI systems deployed across the organization. For each system, identify: the provider, the type of output generated (text, images, audio, video), the risk level, and the business processes involved. Over 50% of European organizations have not yet completed this fundamental first step.
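Even a lightweight structured inventory beats none. A sketch of the fields step 1 calls for, as a simple Python dataclass (the schema is illustrative, not mandated by the Act):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the organization's AI system inventory. Fields mirror
    the checklist above; the schema is illustrative only."""
    name: str
    provider: str               # who supplies the system
    output_types: list          # e.g. ["text", "images"]
    risk_level: str             # "unacceptable" | "high" | "limited" | "minimal"
    business_processes: list    # where it is used

inventory = [
    AISystemRecord("support-chatbot", "Acme AI", ["text"], "limited", ["customer service"]),
    AISystemRecord("cv-screening", "HRTech", ["text"], "high", ["recruitment"]),
]

# Systems needing the heaviest compliance work surface immediately.
high_risk = [s.name for s in inventory if s.risk_level == "high"]
```

Once this exists as data rather than tribal knowledge, the classification and traceability steps below become queries instead of audits.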

2. Classify by risk level: the AI Act distinguishes four risk levels: unacceptable (prohibited), high, limited, and minimal. High-risk systems (Annex III) require technical documentation, automatic logging, and human oversight. Limited-risk systems fall under the transparency obligations of Article 50. Correctly classifying systems is the prerequisite for defining the required compliance actions.
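The classification in step 2 can be encoded as a lookup from risk tier to required compliance actions. The mapping below condenses the obligations named in this article and is illustrative, not an exhaustive reading of the Act:

```python
# Illustrative, non-exhaustive mapping of AI Act risk tiers to obligations,
# condensed from the four-tier scheme described above.
OBLIGATIONS = {
    "unacceptable": ["prohibited practice: must be decommissioned"],
    "high": ["technical documentation", "automatic logging", "human oversight"],
    "limited": ["Art. 50 transparency disclosures"],
    "minimal": ["no mandatory obligations (voluntary codes encouraged)"],
}

def required_actions(risk_level: str) -> list:
    """Return the compliance actions for a classified system."""
    return OBLIGATIONS[risk_level]
```

Driving this lookup from the inventory's risk classification gives each system a concrete, reviewable action list.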

3. Implement content traceability: for every piece of content generated or modified by AI, ensure origin traceability. Adopt data provenance solutions that certify content origin with verifiable metadata, digital signature, and qualified timestamp. Integrate these solutions into existing workflows via APIs for scalable, automated compliance.

4. Build internal governance and training: Article 4 of the AI Act already mandates AI literacy obligations for staff. Define clear roles and responsibilities, implement internal policies on AI usage, train teams on managing AI-generated content, and establish human oversight mechanisms.

5. Monitor regulatory developments continuously: the regulatory landscape continues to evolve. The Transparency Code of Practice will be finalized by summer 2026, and the Commission's Digital Omnibus package may adjust certain deadlines. A proactive approach requires ongoing monitoring and timely process adaptation.

FAQ: EU AI Act transparency obligations for businesses

When do the EU AI Act transparency obligations take effect?
The transparency obligations under Article 50 of the EU AI Act become fully enforceable on August 2, 2026. However, some related obligations are already in force: AI literacy requirements (Art. 4) have applied since February 2025, and obligations for general-purpose AI models since August 2025.
Which businesses are subject to the AI Act transparency requirements?
The obligations apply to both providers (who develop AI systems) and deployers (who use them in their processes) operating in the European market, regardless of where the company is headquartered. SMEs benefit from reduced penalty thresholds but are not exempt from the substantive obligations.
What does data provenance mean in the context of the AI Act?
Data provenance is the ability to document the origin, history, and transformations of digital content throughout its entire lifecycle. In the AI Act context, it means being able to demonstrate whether content was generated or modified by an AI system, when, by whom, and with which tool, through verifiable and traceable metadata.
What penalties does the EU AI Act impose for non-compliance?
The penalty regime has three tiers: up to EUR 35 million or 7% of global annual turnover for prohibited practice violations; up to EUR 15 million or 3% for other non-compliance (including transparency obligation violations); and up to EUR 7.5 million or 1% for providing inaccurate information to authorities.
How can a business start preparing for the transparency obligations?
The fundamental steps are: map all AI systems in use, classify them by risk level, implement data provenance solutions for AI-generated content traceability, train staff, and define internal governance. Integrating source certification into operational workflows via APIs ensures structural and scalable compliance.

Protect the authenticity of your digital content

The EU AI Act sets new transparency standards. TrueScreen provides the data provenance infrastructure to certify the origin of your data and ensure compliance.
