Synthetic identity fraud: defending onboarding from AI-driven attacks

Financial institutions have invested years into digital onboarding: selfies, ID scans, real-time video KYC. The goal was to open accounts and underwrite policies without requiring branch visits. That model, built on the assumption that the document shown was genuine and that a real person sat behind the camera, is showing its cracks.

Synthetic identity fraud combines stolen real data with fabricated data to produce an identity that matches no living individual. With the latest generative models, organised crime rings produce faces, voices, identity documents and even live selfies that are convincing at near-zero cost. The result is an identity that sails through traditional KYC, opens accounts, obtains credit, buys policies and launders proceeds without any real person being involved.

The answer is not to chase every new deepfake technique. It is to move verification upstream: certify documents, selfies and customer communications at the source, at the very moment they are produced, with electronic seal, qualified timestamp and forensic metadata that make it impossible to substitute them with AI-generated artefacts. TrueScreen builds this certification layer for banks, insurers and fintech firms, embedded into KYC, anti-money laundering and claims workflows.

What synthetic identity fraud is and how it differs from identity theft

Synthetic identity fraud is a scheme where an attacker combines real stolen elements (tax ID, date of birth, address) with fabricated ones (invented name, AI-generated document, deepfake selfie) to build a "new" identity that does not exist in civil records yet is treated as legitimate by verification systems. Unlike classic identity theft, there is no victim flagging the abuse: any real fragments usually belong to a minor, a deceased individual or an elderly person who does not use digital services. The identity can live for months or years before being flagged as fraudulent, accumulating credit lines, banking history and credit reputation.

The operational difference is material for onboarding teams. Identity theft gets intercepted because the real owner complains. Synthetic identity fraud, in contrast, leaves systems convinced they have welcomed a genuine customer: no early behavioural anomalies, formally valid documents, liveness checks passed. The hole opens at bust-out time, when the attacker maxes out credit and disappears.

How generative AI is scaling the problem

Three capabilities of generative AI, available today as a service, change the scale of the problem:

  • Document generation: diffusion models produce driver's licenses, national IDs and passports with simulated holograms, microprinting and coherent MRZ codes. Template datasets circulate in underground forums as monthly subscriptions.
  • Biometric deepfakes: GANs and video-generation models produce synthetic faces that pass liveness checks based on motion or expression. The attack is enhanced with voices cloned from a few seconds of audio.
  • Injection attacks: instead of showing a deepfake to the webcam, the attacker injects the synthetic video directly into the data stream of the KYC session, bypassing any control based on the device camera.

Combined, these three techniques break the equation "selfie + document = verified identity". Not because biometrics fails in principle, but because the data reaching the verification system is no longer trustworthy by construction.

What the numbers show: 2026 is the inflection year

Three recent reports capture the speed at which this threat is scaling. On 1 April 2026, the American Bankers Association, the Better Identity Coalition and the Financial Services Sector Coordinating Council released a joint plan to fight AI identity attacks: biometric deepfakes are up 58%, injection attacks up 40%, and for the first time the financial sector is asking policymakers for specific tools (cryptographic credentials, mobile driver's licenses, broader access to the SSA verifier) because it recognises that current controls no longer hold.

The second data point comes from the insurance industry. The Verisk report of 17 March 2026 surveyed 1,000 U.S. consumers and 300 claims professionals: 98% of insurers agree that AI editing tools are fuelling an increase in digital fraud, 99% have already received altered or AI-generated materials, and 36% of consumers admit they would consider digitally altering a photo or document to maximise their claim. The figure rises to 55% among Gen Z and 49% among millennials.

The third reference is regulatory. The Financial Action Task Force published in January 2026 a horizon scan dedicated to AI and deepfakes in the AML/CFT/CPF context, identifying three critical areas: the spread of facial and video KYC creates attack surface for deepfakes, AML systems are not equipped to distinguish synthetic content, and the interconnection of global systems amplifies the damage. Recommendations include multi-layer verification tools and specialised training for prosecutors.

Why traditional KYC controls no longer hold

The verification processes deployed across financial services rely on three families of controls: visual document inspection, biometric selfie-to-document matching, and liveness detection. Each of these families was designed for a world where producing a credible fake document required specialist skills and access to specific materials. Generative AI has removed both barriers: the person producing the artefact no longer needs to know how to forge anything, because the model does it for them.

The limits of OCR and template matching

Document verification systems work by extracting fields via OCR, checking known patterns (MRZ, fonts, microprinting) and matching against official templates. A document generated by a diffusion model trained on real examples passes most of these controls, because every field is formally correct and the simulated security patterns are pixel-perfect. The system has no way of knowing that the document never existed physically.

The limits of liveness and dynamic selfies

Liveness detection pushed attackers to sharpen their techniques: instead of showing a photo, they show a video. Instead of a recorded video, they generate live video with reactions to prompts (turn your head, smile, read these numbers). The latest models execute these actions in real time on synthetic faces. Injection attacks remove even the need for a physical face in front of the camera: the video is injected via driver or emulator into the feed reaching the KYC application.

The limit of "document as static object"

The underlying logic of traditional KYC is: the document is an object the customer possesses, they show it, I inspect it. That logic works as long as the act of showing is expensive to replicate. Today, replicating a document is essentially free. The question has to change from "does this document look genuine?" to "was this document produced by a process certified at the source, at a verifiable moment, on a device tied to this identity?" The answer requires moving trust from the document to the process that generates it.

What is certification at the source for KYC processes

Certification at the source is a forensic methodology that captures the document, selfie or communication at the very moment it is produced, generating a non-repudiable technical proof composed of content hash, electronic seal from a Qualified Trust Service Provider, qualified timestamp and forensic metadata (geolocation, device, network). The result is a digital object that anyone can verify independently to establish that that content existed in that exact form at that moment and came from that device. Applied to KYC processes, it moves verification from "recognising the fake" to guaranteeing the real: if the document does not originate from a certified process, it does not enter the onboarding file.
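As an illustration, the proof object described above can be sketched as a hashed, timestamped record. This is a minimal sketch with hypothetical field names, not TrueScreen's actual format: a production system would replace the plain UTC timestamp with a qualified timestamp token and apply a QTSP electronic seal over the whole record.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_capture_proof(content: bytes, device_id: str, lat: float, lon: float) -> dict:
    """Sketch of a source-certification proof record (hypothetical fields).

    In production, 'captured_at' would be a qualified timestamp from a
    Qualified Trust Service Provider, and the record would carry a QTSP
    electronic seal; both are omitted here.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds the proof to the exact bytes captured
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "device_id": device_id,
        "geolocation": {"lat": lat, "lon": lon},
    }

proof = build_capture_proof(b"selfie-bytes", "device-123", 45.46, 9.19)
print(json.dumps(proof, indent=2))
```

Any later modification to the captured file changes its SHA-256 digest, so the proof record no longer matches and substitution with an AI-generated artefact becomes detectable.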

This stance, technical and legal together, is grounded in Digital Provenance, the principle that every digital artefact used in evidentiary contexts must be able to declare where it came from, who produced it, when and under which conditions. TrueScreen embeds this chain of custody inside the apps and portals used in KYC, anti-money laundering and claims processes.

Certified capture of documents and selfies

Instead of uploading a file to the portal, the customer captures it through the TrueScreen App or an SDK embedded in the bank's application. The capture produces, in the same moment, the document image, the selfie, the device metadata, geolocation, network signature, and applies the QTSP seal and qualified timestamp. What gets uploaded is no longer a PDF or JPG the system has to "trust": it is a technical proof that the system verifies cryptographically.

API-based integration into existing workflows

Certification at the source does not require rewriting the KYC process. The TrueScreen API slots in as the capture layer inside the existing flow: the antifraud system continues to read document and selfie as before, but it now receives them as signed, traceable objects, with no possibility that they were generated elsewhere by an AI model. Typical integration takes weeks rather than months and does not touch core systems.

Independent verification and evidentiary value

Every certified document can be independently verified by a third party: supervisory authority, judge, forensic expert, counterparty. The file content and the QTSP seal carry everything needed to reconstruct the chain of custody. In a dispute, the financial institution holds a proof built with forensic methodology, instead of defending itself with generic application logs. This level of proof already matches the language required by AMLD6 and eIDAS 2.0.
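The independent-verification step can be sketched as recomputing the content hash and comparing it with the proof record. This is an assumption-laden simplification: a real verifier would also validate the QTSP electronic seal and the qualified timestamp token against the trust service provider's certificates, steps omitted here.

```python
import hashlib

def verify_capture_proof(content: bytes, proof: dict) -> bool:
    """Recompute the SHA-256 digest of the file and compare it with the
    digest recorded in the (hypothetical) proof record.

    Seal and qualified-timestamp validation are intentionally omitted
    from this sketch.
    """
    return hashlib.sha256(content).hexdigest() == proof.get("content_sha256")
```

Because the check uses only the file and the proof record, any third party (authority, judge, forensic expert) can run it without access to the institution's systems.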

Operational scenarios: banking, insurance, fintech

Certification at the source changes concretely how three types of institutions manage identity. The examples below reflect real adoption scenarios in the European financial market.

Sector | Exposed process | How certification at the source applies
Retail banking | Digital onboarding, account opening, credit applications | Selfie and document captured via certified app; QTSP seal and timestamp applied before the antifraud system scores the profile
Insurance | Policy opening, claim submission with photos and videos | Photos of damage and vehicle captured via a certified process; AI-generated or edited images are excluded before submission
Fintech / neobanks | Mass cross-border onboarding, wallets, crypto-asset services | Mobile SDK for certified capture embedded in the app; independent verification of the selfie-document pair aligned with AMLD6
Payment institutions (PSD2) | Strong Customer Authentication, disputes, chargebacks | Forensic traceability of every authentication event; technical proof usable in dispute resolution

Banks: onboarding and credit applications

In retail banking the most immediate use is the onboarding phase. When the customer opens the account, capture of the document and selfie happens through a certified component: the back office does not receive neutral files but signed, dated artefacts. The same principle extends to credit-limit increases, refinancing, and every request a synthetic identity attacker could use to maximise credit. Similar scenarios are described in certified client onboarding.

Insurance: policy opening and claims handling

The insurance side combines two fraud vectors: synthetic identity at policy inception and manipulated documentation at claim time. 99% of insurers report having already received AI-altered content. The answer is to capture photos of damage, vehicle and location through a certified process: if the photo does not originate from a TrueScreen capture, it does not enter the file. The same logic applies to insurance claims certification.

Fintech and crypto: high volumes, synthetic identities at scale

Fintech platforms handle onboarding volumes several times higher than a traditional bank and are the preferred target of organised synthetic fraud: hundreds of identities spun up in parallel, each with an AI-generated document and deepfake selfie. By embedding the TrueScreen SDK in the app, every inbound identity carries a technical proof of certified capture, and those without one are filtered out. The antifraud control works on a dataset cleaned at the source.

The regulatory frame: AMLD6, PSD2 SCA, eIDAS 2.0, EUDI Wallet

European and international regulators are aligning to the need for strong proof on digital identity. The language used in recent texts converges on three concepts: enhanced verification, traceability, evidentiary value. Certification at the source is the technical way to answer all three at once.

AMLD6 and enhanced due diligence

The Sixth Anti-Money Laundering Directive (AMLD6) requires obliged entities to adopt a risk-based approach with customer due diligence measures proportionate to the risk. When the risk is "AI-generated identity", traditional measures are no longer proportionate. Supervisors expect robust documentary evidence, not screenshots of passed liveness checks. Certification at the source produces exactly the kind of documentary evidence expected.

PSD2 and Strong Customer Authentication

PSD2 imposes SCA as a requirement for every electronic payment operation. Payment institutions have historically worked on the three factors (knowledge, possession, inherence). Inherence (biometrics) is today the factor most exposed to synthetic fraud. Embedding certified capture into SCA flows adds a fourth layer: not just "I recognise you", but "I recognise you starting from a certified data point nobody could have substituted with a deepfake".

eIDAS 2.0 and EUDI Wallet

EU Regulation 2024/1183 updates eIDAS by introducing the European Digital Identity Wallet (EUDI Wallet). The goal is to give every EU citizen a tool to present certified attributes (name, age, qualifications) to public and private services. EUDI solves verified-at-source identity for those who use it. But for years to come, millions of onboarding events will still rely on national documents. For those, certification at the source is the bridge between today's world and the EUDI world.

FATF recommendations and retention obligations

AMLD6 imposes retention of KYC data for at least five years after the end of the relationship. Retention, to be meaningful, must preserve integrity and authenticity. A file kept without QTSP seal and timestamp offers no evidentiary value in court. FATF Recommendations and the January 2026 Horizon Scan further confirm that multi-layer verification and forensic readiness are the direction of travel. Certification at the source ensures every retained file is already in the form required to respond to an inspection or litigation.

What changes for compliance officers and fraud teams

For those working in compliance, AML and fraud teams, moving to certification at the source involves concrete changes in how onboarding and documentation are managed. It is not a pure technology project: it is a posture shift in what counts as "sufficient evidence".

  • Acceptance policy: update internal procedures to require that inbound documents and selfies be captured through a process certified at the source, not uploaded as neutral files.
  • KRI review: traditional key risk indicators (percentage of rejected KYC documents, onboarding time, biometric match score) do not catch synthetic fraud. Add indicators based on the presence or absence of source certification.
  • Team training: investigators and fraud analysts must be able to verify QTSP seal and timestamp without delegating verification to the system vendor.
  • Vendor audit trail: review contracts with video-KYC providers to ensure the capture flow can integrate with a certification layer.
  • Dispute handling: when a customer contests an operation, the institution must be able to produce not just application logs but technical proof of the identity certified at onboarding. The same principle applies to MiFID II certified communications and to certified contact center communications.
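The KRI point above can be made concrete with a simple indicator: the share of onboarding files that arrive without a source-certification proof. The function name and file shape below are hypothetical, a sketch of one possible indicator rather than a prescribed metric.

```python
def uncertified_share(onboarding_files: list[dict]) -> float:
    """Hypothetical KRI: fraction of onboarding files lacking a
    source-certification proof record (key name 'proof' is assumed)."""
    if not onboarding_files:
        return 0.0
    missing = sum(1 for f in onboarding_files if not f.get("proof"))
    return missing / len(onboarding_files)
```

A rising value of this indicator flags channels or partners still accepting neutral file uploads, which is where synthetic-identity attempts concentrate.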

The benefit is not measured only in fraud avoided. A KYC process with certification at the source shortens response times to supervisory inspections, reduces manual workload in second-line checks, and makes digital onboarding growth sustainable without expanding fraud teams in proportion.

FAQ: synthetic identity fraud and certification at the source

What is synthetic identity fraud?
Synthetic identity fraud is a scheme in which a criminal combines stolen real data (tax ID, date of birth) with AI-fabricated elements (name, face, document) to create an identity that belongs to no existing person yet passes KYC checks. That identity is used to open accounts, obtain credit and commit fraud without a legitimate holder flagging abuse.
Why are video KYC and dynamic selfies no longer enough against AI?
Generative models now produce live selfie videos with coherent expressions, and injection attacks feed the deepfake straight into the data stream of the KYC session, bypassing the device camera. According to a plan released in April 2026 by the American Bankers Association and the Better Identity Coalition, biometric deepfakes and injection attacks have grown by 58% and 40% respectively.
What does it mean to certify a document at the source?
Certifying at the source means capturing the document or selfie at the very moment it is produced and immediately applying hash, QTSP electronic seal, qualified timestamp and forensic metadata. The resulting artefact is independently verifiable by anyone and cannot be replaced by an AI-generated file without verification failing.
Does certification at the source replace current antifraud systems?
No, it strengthens them. The antifraud system continues to score the customer profile, but it starts from certified inputs rather than neutral files. This removes the risk that scoring is fooled by AI-generated documents or selfies, and reduces false positives on legitimate customers. Integration happens via API without touching core systems.
What legal value does a document certified at the source have in the EU?
A document carrying a qualified electronic seal and qualified timestamp benefits in the EU from the presumption of integrity and accurate time reference under the eIDAS Regulation. For AMLD6, this type of evidence meets due diligence and retention obligations. In litigation, it qualifies as technical proof built with forensic methodology.
How long does it take to integrate certification at the source into an existing KYC process?
A typical integration via API or SDK takes weeks rather than months and does not affect the core systems of the bank or fintech. The certified capture layer is inserted at the point where the customer currently uploads documents and selfies. The rest of the antifraud workflow remains untouched and receives higher-quality artefacts.

Harden your KYC process against synthetic identity fraud

Embed source certification for documents, selfies and customer communications into KYC, AML and claims workflows. Evidentiary value, AMLD6 and eIDAS 2.0 alignment, API and SDK integration.
