AI Act Article 50: EU Rules on Labelling Synthetic Content from 2 August 2026


On 2 August 2026, Article 50 of Regulation (EU) 2024/1689, the EU AI Act, becomes applicable. From that date, providers and deployers of AI systems must make it immediately clear to users when audio, video, images or text have been generated or manipulated by artificial intelligence, with a reinforced disclosure obligation for deepfakes. The rule introduces a single European standard for synthetic content transparency, ending a fragmented landscape in which voluntary watermarking and provider-specific labels coexisted with unlabelled outputs. For AI compliance officers, media leads and editorial teams using generative AI, the countdown is no longer hypothetical: workflows, vendor contracts and publishing pipelines need to be aligned now.

Labelling what is synthetic is only half of the problem; the other half is proving that an original photo, video or document is authentic and untouched. Ex-post transparency on AI outputs works when it is paired with ex-ante verifiability of originals, so that authentic material is not mistaken for manipulated content.

What Article 50 of the AI Act requires

Article 50 sits inside the transparency obligations of the AI Act and imposes duties on both providers (who develop or place AI systems on the EU market) and deployers (who use them professionally). The common principle: users and recipients of content must know when they are interacting with AI, or consuming output generated or altered by AI.

Clear labelling for AI-generated audio, video, image and text content

Providers of AI systems that generate synthetic audio, image, video or text must ensure that outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. The technical solutions must be effective, interoperable, robust and reliable as far as technically feasible, taking into account the state of the art. In parallel, deployers of AI systems that generate or manipulate text published to inform the public on matters of public interest must disclose that the text has been artificially produced, unless the text has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication. The logic is consistent: whenever content reaches the public, the public must know its origin. For newsrooms and corporate communications teams, this translates into explicit labelling rules inside content management systems, editorial checklists and publishing templates.
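As a minimal illustration of what a machine-readable marker could look like, the sketch below attaches a JSON provenance record, keyed to a SHA-256 hash of the output, to an AI-generated asset. This is an assumption-laden simplification: the real format will be driven by the Commission's guidance and industry standards such as C2PA manifests, and every name here (make_ai_marker, the field names) is illustrative rather than prescribed by the Act.

import hashlib
import json
from datetime import datetime, timezone

def make_ai_marker(content: bytes, generator: str, model: str) -> str:
    """Build a machine-readable provenance record for an AI-generated asset.

    Illustrative only: real deployments would use a standardised format
    (e.g. C2PA manifests) rather than this ad-hoc JSON sidecar.
    """
    record = {
        "ai_generated": True,  # the Article 50 disclosure flag
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # provider / system identifier
        "model": model,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# Example: mark a generated image before it enters the publishing pipeline.
image_bytes = b"...synthetic image bytes..."
marker = make_ai_marker(image_bytes, generator="acme-genai", model="imagegen-v2")
print(marker)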

Deepfakes: reinforced disclosure obligation

Deepfakes receive a dedicated paragraph. Deployers of AI systems that generate or manipulate image, audio or video content constituting a deepfake must disclose that the content has been artificially generated or manipulated. The disclosure must be clear and distinguishable, no later than at the time of first interaction or exposure. This goes beyond a generic watermark: it is a duty of active communication toward the audience. The rule applies regardless of the artistic quality of the output and regardless of whether the deepfake is benign or malicious. Reference: Regulation (EU) 2024/1689.

Code of Practice from the Commission

Article 50 is complemented by guidance and a Code of Practice that the European Commission, supported by the AI Office, is finalising. The Code translates the legal obligations into operational requirements on marking techniques, metadata, detectors and cross-platform interoperability. Compliance teams should monitor updates on digital-strategy.ec.europa.eu and align internal policies with the final text once published, which is expected ahead of the 2 August 2026 application date.

Who is obligated

The scope of Article 50 is wide and covers the full value chain of synthetic content, from model providers to enterprises that integrate GenAI into their daily operations.

Providers of general-purpose AI systems (GPAI)

Providers of GPAI systems capable of generating synthetic content must embed marking mechanisms at the model or system level, so that outputs carry machine-readable signals identifying them as AI-generated. The marking must survive reasonable post-processing (re-encoding, format conversion) and must be documented in technical documentation made available to downstream deployers. In practice, vendors of foundation models and generative services need to expose, through API and product documentation, how their marking works and how deployers can keep it intact.
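To see why robustness matters, consider how a downstream deployer might verify a marker like the JSON sketch above: recomputing the recorded hash proves the asset is still the one the provider marked, but any re-encoding changes the bytes and breaks a sidecar-style marker, which is exactly why the Act pushes providers toward robust, in-band techniques such as watermarking. A hypothetical check, reusing the record format from the earlier sketch:

import hashlib
import json

def marker_still_valid(content: bytes, marker_json: str) -> bool:
    """Check that an asset still matches the hash in its provenance record.

    If the asset was re-encoded or format-converted, the hash no longer
    matches: a detached marker alone is fragile, hence the requirement
    that marking survive reasonable post-processing.
    """
    record = json.loads(marker_json)
    return (
        record.get("ai_generated") is True
        and hashlib.sha256(content).hexdigest() == record.get("content_sha256")
    )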

Deployers: enterprises and newsrooms using GenAI

Any organisation that uses generative AI to produce content for the public is a deployer. This includes media companies publishing AI-assisted articles, marketing departments generating product images or videos, legal and compliance teams drafting synthetic summaries, and public administrations using chatbots on matters of public interest. Each of them must ensure that outputs are labelled and, where deepfakes are involved, disclosed to the audience. Internal policies need to cover tooling selection, editorial workflows, disclosure templates and staff training.
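One concrete way to operationalise this inside a content management system is a pre-publish gate that refuses to release AI-assisted items without a disclosure. The sketch below is hypothetical: field names such as ai_generated, deepfake and disclosure are assumptions standing in for whatever schema an editorial workflow actually uses.

def check_before_publish(item: dict) -> None:
    """Block publication of AI-generated content that lacks a disclosure.

    Hypothetical CMS hook: 'item' is assumed to carry flags set upstream
    by the editorial workflow.
    """
    if item.get("ai_generated"):
        if not item.get("disclosure"):
            raise ValueError("AI-generated item has no audience disclosure")
        if item.get("deepfake") and not item.get("disclosure_prominent"):
            raise ValueError("Deepfake requires a clear, distinguishable disclosure")

# Example item produced by the editorial workflow.
check_before_publish({
    "title": "Quarterly results explained",
    "ai_generated": True,
    "disclosure": "This article was generated with AI assistance.",
})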

Platforms and sharing services

Online platforms and sharing services remain governed by the Digital Services Act, but Article 50 creates a stronger substrate of labelled content that platforms can surface to users. Where platforms integrate generative features, they act as both providers and deployers and must honour both roles. Expect platform-level UI changes: badges, disclosure banners and labels on synthetic media are likely to become the default. For a broader view on how these duties combine with the rest of the regulation, see our analysis of AI Act transparency obligations.

Exceptions and sanctions regime

Transparency duties are not absolute. Article 50 provides for targeted exceptions, while the Act's general penalties regime scales sanctions to the size of the operator.

Obviously artistic, satirical, or fictional content

When synthetic content is part of an evidently creative, satirical, fictional or analogous work, the disclosure duty is adapted so that it does not hamper the display or enjoyment of the work itself. The disclosure must still be present, but in a form and place that does not impair creative freedom. The exception is narrow: it applies to content whose artistic nature is obvious to a reasonable viewer, not to deepfakes disguised as news reports or testimonials.

Sanctions up to 3% of global annual turnover

Non-compliance with Article 50 falls under the general sanctions framework of the AI Act. Breaches of transparency obligations can reach fines of up to EUR 15 million or up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. National market surveillance authorities, coordinated by the AI Office, enforce these sanctions. The financial exposure is material enough to justify board-level attention, formal risk assessments and dedicated budget for compliance tooling.
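The "whichever is higher" rule means the percentage prong dominates for any group with more than EUR 500 million in worldwide turnover. A quick illustration of the arithmetic:

def max_article_50_fine(turnover_eur: float) -> float:
    """Upper bound of the fine under the AI Act's penalties framework:
    EUR 15 million or 3% of worldwide annual turnover, whichever is higher."""
    return max(15_000_000, 0.03 * turnover_eur)

print(max_article_50_fine(400_000_000))    # 15,000,000 -> the fixed floor applies
print(max_article_50_fine(2_000_000_000))  # 60,000,000 -> the 3% prong dominates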

Ex-post transparency and ex-ante verifiability: the role of TrueScreen

Article 50 addresses synthetic content after it has been generated: it tells the audience that what they see or read is artificial. This is necessary, but it does not prove that an original photo, video or document is authentic. The mirror-image question becomes urgent: how do we certify that a real asset has not been altered and that it was captured at a specific time and place?

TrueScreen, the Data Authenticity Platform, does not detect deepfakes. It produces cryptographic digital seals on original content at the moment of acquisition. Photos, videos and documents are captured or ingested through TrueScreen interfaces (mobile app, web portal, Forensic Browser, Chrome Extension, API, SDK) and receive a qualified timestamp, hash and metadata bound to the acquisition session. The resulting package has legal value and can be verified independently at any later moment.
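Conceptually, a digital seal binds the asset's fingerprint to a trusted time and the acquisition context. The sketch below shows the idea using only Python's standard library; it is not TrueScreen's actual API, and a production seal would obtain a qualified timestamp from a trusted authority (RFC 3161-style) rather than reading the local clock.

import hashlib
import json
from datetime import datetime, timezone

def build_seal(asset: bytes, session: dict) -> dict:
    """Toy model of a content seal: hash + timestamp + acquisition metadata.

    Not TrueScreen's API. A real seal uses a qualified timestamp from a
    trusted authority, not the local system clock.
    """
    return {
        "sha256": hashlib.sha256(asset).hexdigest(),
        "sealed_at": datetime.now(timezone.utc).isoformat(),
        "session": session,  # device, operator, geolocation, etc.
    }

def verify_seal(asset: bytes, seal: dict) -> bool:
    """Later, anyone can recompute the hash to prove the asset is untouched."""
    return hashlib.sha256(asset).hexdigest() == seal["sha256"]

photo = b"...original photo bytes..."
seal = build_seal(photo, {"device": "mobile-app", "operator": "inspector-42"})
assert verify_seal(photo, seal)             # the unmodified asset verifies
assert not verify_seal(photo + b"x", seal)  # any alteration breaks the seal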

For a compliance strategy aligned with Article 50, the combination is straightforward. On one side, deployers label every AI-generated output and disclose deepfakes. On the other side, teams that need authenticity on originals (legal evidence, inspections, journalism, regulatory filings, insurance claims) adopt TrueScreen to certify the asset at the source, making it immutable from the moment of capture. Labelling the synthetic and certifying the original are complementary layers: together, they protect against both undisclosed manipulation and later tampering.

FAQ

When does AI Act Article 50 start applying?
Article 50 becomes applicable on 2 August 2026, 24 months after the AI Act entered into force. From that date, providers and deployers of AI systems placed on the EU market or used in the EU must comply with the labelling and disclosure duties set out in the Regulation.
Who must label AI-generated content?
Two categories are directly obligated. Providers of AI systems that generate synthetic audio, image, video or text must mark outputs in machine-readable formats. Deployers who publish AI-generated or AI-manipulated content to the public, including deepfakes and text on matters of public interest, must disclose the artificial nature of the content to the audience; for text, the duty is lifted where a human has reviewed the content and assumed editorial responsibility.
What about deepfakes?
Deepfakes carry a reinforced disclosure duty. Deployers must clearly communicate to the audience that image, audio or video content has been artificially generated or manipulated, in a form that is immediately distinguishable, at the time of first interaction. The duty applies regardless of the intent behind the deepfake and is adapted only for obviously artistic, satirical or fictional works.
How does mandatory labelling combine with source certification?
Labelling tells the audience that a given output is synthetic. Source certification proves that a given original is authentic and untouched. The two layers answer different questions and apply to different assets. TrueScreen produces cryptographic seals on originals at acquisition, so that authentic photos, videos and documents are verifiable and legally robust, while Article 50 ensures that AI-generated content is always recognised as such.

AI Act Article 50 compliance meets source certification.

See how TrueScreen certifies original photos, videos and documents at acquisition, complementing AI Act labelling with verifiable proof of authenticity.
