Agentic AI Liability: Who Is Responsible When the Agent Fails
Enterprises are deploying autonomous AI agents to handle complex decision-making processes: from credit risk assessment to automated claims approval, from regulatory compliance to supply chain management. These agents do not merely suggest: they decide, act, and produce legal effects. When they cause harm, agentic AI liability becomes the defining legal question for every enterprise deploying autonomous systems.
So who bears that responsibility? The answer is far from simple: the agentic AI legal issues enterprises face extend well beyond traditional software liability. The liability chain fragments across developer, deployer, and end user, and the European regulatory framework is shifting fundamentally in 2026. The Product Liability Directive 2024/2853 extends strict liability to AI software, while the withdrawal of the AI Liability Directive leaves a regulatory gap for fault-based claims. In this landscape, certifying the agent’s data at every operational level becomes the only viable defensive strategy for reconstructing the causal chain and proving compliance.
This insight is part of our guide: AI Agent Data Certification: Governance, Audit and Compliance
The Liability Chain: Developer, Deployer, User
An AI agent has no legal personality. It cannot be sued, holds no assets, and bears no legal responsibility for its actions. Liability necessarily falls on the human and organizational actors who created, configured, and deployed it. The AI Act (EU Regulation 2024/1689) and the Product Liability Directive 2024/2853 define three distinct roles with specific obligations.
| Role | Key obligations | Regulatory source |
|---|---|---|
| Developer (provider) | AI system conformity, risk management, technical documentation, automatic logging, post-market monitoring | AI Act Art. 16, PLD 2024/2853 |
| Deployer | Use in accordance with instructions, human oversight, log retention, DPIA, transparency to exposed users | AI Act Art. 26, GDPR Art. 22 |
| End user | Proper use of the system, reporting malfunctions, compliance with terms of use | PLD 2024/2853 Art. 12 |
The critical point is that these liabilities are cumulative, not alternative. Managing agentic AI risks requires understanding this overlap. In a case of damage caused by an AI agent, the injured party can simultaneously pursue the developer for product defect, the deployer for inadequate oversight, and the user for improper use. Without certified documentation of every step, each actor in the chain risks being unable to demonstrate compliance with their obligations.
For a comprehensive analysis of governance and audit obligations, see the full guide on AI agent data certification and compliance.
The European Regulatory Framework in 2026
2026 marks a turning point in the regulation of AI liability across Europe. Two regulatory developments reshape the landscape: one entering into force, the other conspicuously absent.
Product Liability Directive 2024/2853: AI software as a product
The revised Product Liability Directive, adopted in 2024 with a transposition deadline of 9 December 2026, represents the most significant change. For the first time, software, AI systems, machine learning models, and large language models are explicitly classified as “products” subject to strict liability.
This means the injured party does not need to prove the manufacturer’s negligence: it is sufficient to demonstrate the product defect, the damage suffered, and the causal link between them. For highly complex AI systems, the directive also introduces a presumption of causality: if the product fails to meet mandatory safety requirements and the type of damage is consistent with the defect, the causal link is presumed until proven otherwise.
The impact on agentic AI is direct: every autonomous decision by an AI agent that causes personal or financial harm can trigger a strict liability action against the developer. The principal defence is to demonstrate the absence of defect at the time of placing on the market, which requires comprehensive and verifiable documentation of the entire development and deployment lifecycle. Organizations use TrueScreen to certify that lifecycle end to end, producing exactly the verifiable documentation this defence depends on.
The AI Liability Directive gap
The proposed AI Liability Directive, put forward by the European Commission in 2022, would have harmonised national rules on fault-based claims related to AI. The proposal was formally withdrawn in October 2025 from the Commission’s work programme, on the grounds that there was “no foreseeable agreement.”
The practical consequence is that fault-based liability actions (i.e., all cases not covered by strict product liability) remain governed by the national laws of each Member State, without EU harmonisation. A company operating across multiple Member States must therefore navigate different liability regimes for the same AI agent.
Insurance and bounded autonomy: the emerging standard
Insurers are increasingly requiring verifiable proof of bounded autonomy before providing AI liability insurance coverage for autonomous agent operations. Bounded autonomy refers to documented constraints on what an AI agent can decide and execute independently, with clear escalation thresholds triggering human review.
Major insurers now mandate that enterprises demonstrate continuous monitoring and certified logging of agent decision-making as a precondition for coverage. This shifts the insurance model from reactive (compensating after damage) to preventive (requiring proof of controls before issuing a policy). For deployers, certifying agent operations is no longer solely a regulatory obligation under the AI Act: it is also a precondition for insurance coverage, making agentic AI compliance a commercial necessity as well as a legal one. Every tool call, reasoning step, and output decision must be documented with tamper-proof timestamps to satisfy both the AI Act’s logging mandate and insurers’ bounded autonomy criteria.
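As an illustration of what bounded autonomy and tamper-evident logging can look like in practice, the sketch below records each agent action as a hash-chained, timestamped entry and escalates decisions above a configurable value to human review. It is a minimal, hypothetical example: the class names, fields, and threshold are assumptions for illustration, not any insurer's or regulator's specification.

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal, hypothetical sketch of bounded autonomy with tamper-evident logging.
# Field names, thresholds, and escalation rules are illustrative assumptions.

ESCALATION_THRESHOLD_EUR = 5_000  # decisions above this value require human review


class AgentAuditLog:
    """Append-only, hash-chained log of agent decisions."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event_type: str, payload: dict) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,          # e.g. "tool_call", "reasoning_step", "output"
            "payload": payload,
            "prev_hash": self._last_hash,
        }
        # Hash the canonical JSON of the entry so any later alteration is detectable.
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(serialized).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry


def decide_claim(amount_eur: float, recommendation: str, log: AgentAuditLog) -> str:
    """Apply a bounded-autonomy rule: auto-execute small decisions, escalate large ones."""
    log.record("output", {"recommendation": recommendation, "amount_eur": amount_eur})
    if amount_eur > ESCALATION_THRESHOLD_EUR:
        log.record("escalation", {"reason": "amount above autonomy threshold"})
        return "pending_human_review"
    return recommendation


log = AgentAuditLog()
print(decide_claim(12_000, "settle", log))  # -> pending_human_review
```

A local hash chain of this kind only makes tampering detectable; to carry evidentiary weight, the resulting entries still need to be sealed with a qualified timestamp from a trust service, which is where a certification platform comes in.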
How Data Certification Protects Against AI Agent Liability
TrueScreen, the Data Authenticity Platform, enables organizations to certify every data point in the AI agent lifecycle, creating the forensic-grade audit trail needed to rebut the presumption of causality introduced by the Product Liability Directive 2024/2853. Under the PLD’s strict liability regime, the burden of proof is effectively reversed: it is not the injured party who must prove fault, but the developer and deployer who must demonstrate the absence of defect and regulatory compliance. TrueScreen addresses this by certifying AI agent data across four distinct levels:
- Level 1 (Knowledge Base): certification of the data feeding the agent, ensuring the integrity and traceability of information sources
- Level 2 (Prompts and Instructions): certification of system prompts, user prompts, and configurations that define agent behaviour
- Level 3 (Operations): certification of tool calls, reasoning steps, and intermediate decisions the agent makes during execution
- Level 4 (Output): certification of the results produced by the agent and delivered to the user or downstream system
Each certification produces an origin verification, a structured report, indexed organisation of the certified data, and a qualified digital seal with eIDAS-compliant timestamps. Integration is available via REST API and MCP (Model Context Protocol), compatible with major agentic AI frameworks.
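To make the four levels concrete, the sketch below shows how an agent runtime might submit evidence for each level to a certification endpoint over REST. The endpoint URL, payload schema, and authentication header are purely hypothetical placeholders, not TrueScreen's documented API; actual integration should follow the platform's own API reference.

```python
import hashlib
import json
from urllib import request

# Hypothetical sketch: the endpoint, payload schema, and auth header are
# illustrative placeholders, NOT the platform's documented API.
CERTIFY_URL = "https://api.example.com/v1/certifications"
API_KEY = "YOUR_API_KEY"


def certify(level: int, label: str, content: dict) -> dict:
    """Submit one certification level (1=knowledge base, 2=prompts, 3=operations, 4=output)."""
    payload = {
        "level": level,
        "label": label,
        # Hash the evidence locally so the request itself documents content integrity.
        "content_sha256": hashlib.sha256(
            json.dumps(content, sort_keys=True).encode()
        ).hexdigest(),
        "content": content,
    }
    req = request.Request(
        CERTIFY_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


# One call per level for a single agent decision, for example:
# certify(1, "knowledge_base_snapshot", {"source": "claims_kb", "version": "2026-01-15"})
# certify(2, "prompt_configuration", {"system_prompt_id": "claims-v3"})
# certify(3, "operations_trace", {"tool_calls": [], "reasoning_steps": []})
# certify(4, "final_output", {"decision": "settle", "amount_eur": 3200})
```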
Practical scenario: an AI agent in insurance
An insurance company deploys an AI agent for automated claims assessment. The agent analyses photographic documentation, cross-references data with the client’s history, and produces a recommendation to settle or deny the claim. One day, the agent denies a legitimate claim based on corrupted data in the knowledge base. The client suffers harm and sues the insurer.
Without certification, the insurer (deployer) cannot prove that the defect lay in the knowledge base supplied by the developer rather than in their own agent configuration. With TrueScreen certification across all four levels, the causal chain becomes reconstructable: Level 1 certifies the state of the knowledge base at the time of the decision, Level 2 demonstrates that the prompt was compliant, Level 3 documents the agent’s reasoning, and Level 4 crystallises the output produced. The insurer can thus prove the defect was in the product, not in its use, and the developer can verify whether the corrupted data was already present at source or was altered after delivery.
This reconstructability of the causal chain is precisely what PLD 2024/2853 requires to overcome the presumption of causality and what AI Act Art. 12 record-keeping requirements mandate as an automatic logging obligation for high-risk systems.
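To illustrate why such a trail is reconstructable after the fact, the snippet below verifies a hash-chained log of the kind sketched earlier: if any entry was altered after certification, the check fails and pinpoints the first broken link. This is a simplified illustration; a qualified seal under eIDAS additionally involves a trust service provider, not just a local hash check.

```python
import hashlib
import json


def verify_chain(entries: list[dict]) -> tuple[bool, int | None]:
    """Return (True, None) if the hash chain is intact, else (False, index of first broken entry)."""
    prev_hash = "0" * 64  # must match the genesis value used when logging
    for i, entry in enumerate(entries):
        if entry["prev_hash"] != prev_hash:
            return False, i
        # Recompute the hash over everything except the stored hash itself.
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["hash"] != expected:
            return False, i
        prev_hash = entry["hash"]
    return True, None
```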
FAQ: Agentic AI Liability
Who is liable when an AI agent makes a mistake?
An AI agent has no legal personality, so liability falls on the actors in the chain: the developer (provider) is liable for product defects under the Product Liability Directive 2024/2853, the deployer for inadequate oversight and non-compliant use under AI Act Article 26, and the end user for improper use. These liabilities are cumulative, meaning the injured party can pursue all actors simultaneously. Under the PLD, the developer faces strict liability without need to prove negligence. The deployer must demonstrate effective human oversight, log retention, and use in accordance with the provider’s instructions. The only reliable defence for each actor is comprehensive, certified documentation proving compliance at each step of the AI agent lifecycle. Without this evidence, the presumption of causality under the PLD works against the defendant.
Does the Product Liability Directive apply to AI software?
Yes. The Product Liability Directive 2024/2853, with transposition due by 9 December 2026, explicitly classifies software, AI systems, machine learning models, and large language models as “products” subject to strict liability. The injured party needs only to prove defect, damage, and causal link, without demonstrating negligence. For complex AI systems, the directive introduces a rebuttable presumption of causality: if the product fails to meet mandatory safety requirements and the type of damage is consistent with the defect, the causal link is presumed until the manufacturer proves otherwise. This presumption effectively reverses the burden of proof, placing the onus on the developer to demonstrate that the AI system was free from defects at the time of placing on the market. Free and open-source software developed outside a commercial activity is excluded from this regime.
What are deployer obligations under the AI Act?
The AI Act (EU Regulation 2024/1689) Article 26 imposes specific obligations on deployers of high-risk AI systems: use the system in accordance with the provider’s instructions, maintain effective human oversight with appropriately trained personnel, retain automatically generated logs for the period specified by the provider or at least six months, conduct a data protection impact assessment (DPIA) where applicable under GDPR, and provide transparent information to users exposed to the AI system’s decisions. The deployer must also monitor the system for risks and report serious incidents to both the provider and relevant authorities. Failing to meet these obligations exposes the deployer to direct liability and administrative fines up to 15 million EUR or 3% of total worldwide annual turnover, whichever is higher.
What happened to the AI Liability Directive?
The European Commission withdrew the proposed AI Liability Directive in October 2025 from its work programme, citing “no foreseeable agreement” among Member States. The proposal, first introduced in September 2022, would have harmonised national rules on fault-based civil liability for damage caused by AI systems, including a right of access to evidence and a rebuttable presumption of causality for fault-based claims. With its withdrawal, fault-based AI liability claims remain governed by divergent national laws across the 27 EU Member States. This creates legal fragmentation: an enterprise deploying the same AI agent across multiple countries faces different liability rules in each jurisdiction. The Product Liability Directive 2024/2853 partially fills this gap for strict liability claims, but fault-based claims lack any EU-level harmonisation.
What are the main agentic AI risks for enterprises?
The primary agentic AI risks for enterprises fall into three categories: legal liability, operational unpredictability, and regulatory non-compliance. Legal liability arises because AI agents make autonomous decisions that produce legal effects, yet lack legal personality to bear responsibility for those decisions. Operational risk stems from the agent’s ability to chain multiple actions without human approval, meaning a single flawed reasoning step can cascade into significant harm before detection. Regulatory risk is acute in 2026 because the AI Act’s obligations for high-risk systems are entering enforcement while the Product Liability Directive extends strict liability to AI software. Enterprises that deploy agents without certified audit trails, bounded autonomy controls, and documented human oversight face exposure on all three fronts simultaneously.

