EU AI Act Record-Keeping Requirements: A Practical Guide for Businesses

The EU AI Act (Regulation 2024/1689) enters full application for high-risk AI systems on 2 August 2026. For companies that develop or deploy these systems, the countdown to EU AI Act compliance has already started. Article 12 is the provision that will hit operations hardest: it mandates automatic event logging throughout the entire lifecycle of every high-risk AI system. The regulatory text, though, is precise on objectives and thin on operational detail. What events must be recorded? In what format? Who bears responsibility for storage and integrity? As discussed in our guide on AI data certification, governance and compliance, the answer goes beyond technical logging. Organizations that combine automatic logging with a minimum six-month retention period and legal-grade certification of their records will exceed the regulatory floor and build a defensible audit trail.


What Article 12 of the AI Act requires for AI system logging

Article 12 of the EU AI Act requires every high-risk AI system to be designed with automatic event logging capabilities. Logs must allow full traceability of the system’s operation from deployment through decommissioning, covering every algorithmically driven decision. Article 12(2) spells out three purposes: identifying risk situations under Article 79, supporting post-market monitoring under Article 72, and tracking operational performance under Article 26(5). Manual recording does not count. The system itself must generate the records without operator intervention.
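To make the "automatic, not manual" requirement concrete, here is a minimal sketch of how a deployer-side wrapper might emit a structured log entry for every inference call without operator intervention. Everything here is an illustrative assumption, not regulatory text: the decorator name `log_events`, the in-memory `AUDIT_LOG` (a stand-in for durable, append-only storage), and the toy `credit_scoring_model` are all hypothetical.

```python
import functools
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # stand-in for durable, append-only log storage

def log_events(system_id: str):
    """Decorator that records every invocation automatically (sketch)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            AUDIT_LOG.append({
                "system_id": system_id,
                "event": func.__name__,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "inputs": repr((args, kwargs)),
                "output": repr(result),
            })  # written by the system itself, with no operator action
            return result
        return wrapper
    return decorator

@log_events(system_id="credit-scoring-v2")  # hypothetical high-risk system
def credit_scoring_model(income: float, debt: float) -> str:
    """Toy stand-in for an Annex III credit-scoring model."""
    return "approved" if income > 2 * debt else "review"
```

The point of the pattern is that logging is wired into the call path itself: an operator cannot forget, skip, or postpone the record.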

Article 12(1), Regulation 2024/1689: “High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system.” Article 12(2) then ties those logs to three regulatory purposes: risk identification under Article 79, post-market monitoring under Article 72, and deployer oversight under Article 26(5).

What events to log and for how long

Deployers of high-risk AI systems must retain automatically generated logs for a minimum of six months from the date each log is created, per Article 26(6) of Regulation 2024/1689. This six-month floor applies per log entry, not per deployment cycle. Sectoral legislation, including the GDPR where logs contain personal data, may extend the retention period. The storage infrastructure must let national market surveillance authorities access and examine logs on request. Retention alone is not enough, either: logs must remain readable, intact, and correlatable for the entire period.
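As a back-of-the-envelope illustration of the per-entry rule, the sketch below computes the earliest permissible deletion date for a single log entry as its creation date plus six calendar months. The function name and the calendar-month interpretation are assumptions for illustration; the regulation states only the six-month floor.

```python
import calendar
from datetime import date

def retention_expiry(created: date, months: int = 6) -> date:
    """Earliest permissible deletion date for one log entry: creation date
    plus `months` calendar months, clamping the day for shorter months."""
    m = created.month - 1 + months
    year = created.year + m // 12
    month = m % 12 + 1
    day = min(created.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)
```

Each entry carries its own clock: a log written on 2 August 2026 may not be deleted before 2 February 2027, regardless of when the deployment cycle ends.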

The specific events to be logged depend on the system category. Article 12(2) requires any information needed to reconstruct the system’s behavior in relation to identified risks. For remote biometric identification systems listed under Annex III, point 1(a), Article 12(3) gets more granular: timestamps for each use, the reference database consulted, input data that produced a match, and identification of the personnel who verified the results. For other high-risk systems, no exhaustive event list exists. The guiding principle is full reconstructability of algorithmic decisions and a complete audit trail of every relevant interaction.
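For the biometric case, the Article 12(3) items above can be sketched as a structured record. Every field name here (and the `system_id` default) is an illustrative assumption, not regulatory terminology; a real schema would follow the provider's logging architecture.

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class BiometricLogEntry:
    """One event record in the spirit of Article 12(3) (illustrative)."""
    session_start: str       # start of each use, ISO 8601 timestamp
    session_end: str         # end of each use
    reference_database: str  # database against which input data was checked
    match_input: str         # input data for which the search led to a match
    verifying_person: str    # natural person who verified the result
    system_id: str = "rbi-gate-7"  # hypothetical deployment identifier

entry = BiometricLogEntry(
    session_start="2026-09-01T08:00:00+00:00",
    session_end="2026-09-01T08:00:04+00:00",
    reference_database="watchlist-eu-v12",
    match_input="face-embedding-sha256:ab12...",
    verifying_person="operator-4411",
)
record = json.dumps(asdict(entry))  # serialized for append-only storage
```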

Which AI systems qualify as high-risk

Annex III of the AI Act identifies eight areas: biometric identification, management of critical infrastructure, education and vocational training, employment and worker management, access to essential services (credit, insurance, public services), law enforcement, migration management, and administration of justice. Systems used as safety components of products already covered by EU harmonized legislation (medical devices, machinery, toys, among others) also fall within scope. Classification depends on use case and deployment context, not on the underlying technology.

Provider obligations vs deployer obligations

The regulation draws a clear line between two roles. The provider develops the AI system or places it on the market. Providers must design the logging architecture, define which events get recorded, and produce logs in formats suitable for analysis. The deployer uses the system in its own operational context. Deployers must retain logs for at least six months, make them available to surveillance authorities on request, and monitor that the system performs within the provider’s specifications.

And here is where it gets uncomfortable for many buyers: purchasing a high-risk AI system from an external vendor does not exempt you from any of this. The deployer remains accountable for log retention and accessibility regardless of who built the system. Solutions such as TrueScreen provide automated certification workflows that integrate with existing AI logging infrastructure, enabling deployers to meet Article 12 record-keeping obligations with a continuous, tamper-proof audit trail.

Organizations preparing for EU AI Act compliance should also account for the interaction between Article 12 and two adjacent provisions. Article 11 requires providers to produce technical documentation before the system reaches the market. Article 18 obliges providers to keep that documentation at the disposal of national competent authorities for ten years after the AI system is placed on the market or put into service. Where Article 12 mandates automatic event logs with a six-month minimum retention, Article 18 covers the broader technical documentation and conformity assessment records with a ten-year horizon. Both obligations require proof of integrity on demand, a challenge that becomes more acute in agentic AI systems where autonomous agents make decisions across organizational boundaries.

Fines and timeline: what non-compliance costs after August 2026

Failing to meet the AI Act’s record-keeping obligations triggers penalties of up to EUR 15 million or 3% of global annual turnover, whichever is higher. That is the second of three penalty tiers under Article 99. The first tier, reserved for prohibited AI practices under Article 5, reaches EUR 35 million or 7% of turnover. The third, for providing inaccurate information to authorities, carries fines of up to EUR 7.5 million or 1% of turnover. SMEs and startups pay the lower of the two amounts in each tier.

| Deadline | Scope | Legal reference |
| --- | --- | --- |
| 2 February 2025 | Ban on unacceptable AI practices (social scoring, subliminal manipulation) | Art. 5 |
| 2 August 2025 | General-purpose AI model obligations and governance | Arts. 51-56 |
| 2 August 2026 | Full application for high-risk AI systems (Annex III), including Article 12 record-keeping | Arts. 6-49 |
| 2 August 2027 | AI systems under pre-existing sectoral legislation (medical devices, automotive) | Art. 113 |
| Tier | Violation | Maximum penalty |
| --- | --- | --- |
| Tier 1 | Prohibited AI practices (Art. 5) | EUR 35M or 7% of global annual turnover |
| Tier 2 | High-risk system obligations, including record-keeping (Art. 12) | EUR 15M or 3% of global annual turnover |
| Tier 3 | Inaccurate or incomplete information to authorities | EUR 7.5M or 1% of global annual turnover |

Tying penalties to global turnover follows the GDPR playbook and scales the risk to company size. There is also a wrinkle many companies miss: providing incomplete or unverifiable logs to market surveillance authorities is itself a separate third-tier violation, even if the underlying system is otherwise compliant.

TrueScreen certified digital evidence for litigation


TrueScreen turns digital evidence into legally enforceable documents for court proceedings.


How to ensure log integrity with legal evidentiary value

Article 12 tells you what to record. It says nothing about how to protect those records from tampering after the fact. Technical logging, by itself, does not guarantee integrity over time. A log file can be modified, overwritten, or deleted without leaving a trace unless it is secured through certification mechanisms independent of the system that generated it. If those records need to hold up as admissible evidence in judicial or regulatory proceedings, the question becomes one of chain of custody. Each event must be acquired, timestamped with an eIDAS-compliant qualified timestamp, and made immutable by a third party independent of both the provider and the deployer. Without that step, a log is an internal document. With it, it becomes evidence.
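To make the tamper-evidence point concrete, here is a minimal hash-chain sketch: each log entry commits to its predecessor, so altering any record breaks verification from that point onward. This illustrates the principle only; it is not eIDAS-qualified timestamping, which additionally requires an independent qualified trust service provider, and the function names are assumptions.

```python
import hashlib
import json

def chain_logs(entries: list[dict]) -> list[dict]:
    """Link each log entry to its predecessor via a SHA-256 hash."""
    chained, prev_hash = [], "0" * 64  # fixed genesis value
    for entry in entries:
        payload = json.dumps(entry, sort_keys=True) + prev_hash
        prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        chained.append({**entry, "chain_hash": prev_hash})
    return chained

def verify_chain(chained: list[dict]) -> bool:
    """Recompute every link; any altered record breaks the chain."""
    prev_hash = "0" * 64
    for record in chained:
        entry = {k: v for k, v in record.items() if k != "chain_hash"}
        payload = json.dumps(entry, sort_keys=True) + prev_hash
        prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        if record["chain_hash"] != prev_hash:
            return False
    return True
```

Note that the chain only detects tampering; anchoring it to a third party (for example via a qualified timestamp over the latest chain hash) is what prevents the whole chain from being silently rebuilt.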

The compliance gap most organizations miss: Article 12 mandates what to log, but not how to protect those logs from tampering. A standard database record can be altered without trace. Only independent, third-party certification with eIDAS-qualified timestamps closes the gap between technical compliance and legal enforceability.

TrueScreen, the Data Authenticity Platform, enables organizations to certify AI system logs with legal evidentiary value, turning each record into third-party enforceable proof. Through its API, logs are acquired at the source and certified with eIDAS-compliant qualified timestamps following the ISO/IEC 27037 standard for digital evidence collection and preservation. The data becomes immutable from the moment of recording. For organizations running high-risk AI systems, the practical upside is clear: certified logs document the full chain of custody from event generation all the way to presentation during an inspection or in court.

A management framework like ISO/IEC 42001 for AI systems can sit alongside this certification architecture, while the forthcoming standard prEN ISO/IEC 24970 on AI system event logging will define implementation-level requirements for log format, granularity, and storage. Together, they create an environment where AI data governance and compliance is traceable and verifiable. Article 12 compliance can be supported by TrueScreen, the digital provenance platform that documents the complete chain of custody of every log.

FAQ

FAQ: EU AI Act record-keeping requirements

What events must a high-risk AI system log under Article 12?

The regulation requires automatic logging of all events necessary to trace the system’s operation throughout its lifecycle. What counts as a “relevant event” varies by system type. For remote biometric identification, Article 12(3) is specific: timestamps, the reference database used, input data that produced matches, and identification of verifying personnel. For other high-risk systems, no closed list exists. The standard is whether the logs allow risk identification, post-market monitoring, and operational performance verification.

How long must AI system logs be retained?

Six months from the date of creation, per Article 26(6). That is the minimum. Sectoral or national regulations may push the period longer. Throughout that time, logs must stay intact and accessible so market surveillance authorities can examine them on request.

What are the penalties for non-compliance with record-keeping obligations?

Up to EUR 15 million or 3% of global annual turnover, whichever is higher. That is the second penalty tier under Article 99, and record-keeping violations fall squarely within it. SMEs and startups pay the lower of the two amounts. On top of the fine, national market surveillance authorities can suspend non-compliant AI systems entirely.

What is the difference between technical logging and legally certified records?

Technical logging writes events to local files or databases. It checks the minimum Article 12 box, but nothing stops someone from altering or deleting those logs after the fact. Legally certified records close that gap. An independent third party acquires each log, applies an eIDAS-qualified timestamp, and locks the data so it cannot be changed. You end up with a verifiable chain of custody that carries evidentiary weight in court and before regulators.

What is the difference between Article 12 and Article 18 of the AI Act?

Article 12 governs automatic event logging: high-risk AI systems must generate records of their operation throughout the lifecycle, and deployers must retain those logs for at least six months. Article 18 addresses broader documentation keeping: providers must hold the technical documentation at the disposal of national competent authorities for ten years after the system is placed on the market or put into service. In practice, Article 12 covers the operational audit trail, while Article 18 covers the conformity assessment and design documentation. Both carry Tier 2 penalties of up to EUR 15 million or 3% of global turnover for non-compliance.

Certify your AI system logs

Turn logging into legally enforceable evidence. Contact us to learn how TrueScreen certifies high-risk AI system data.
