Deepfake corporate fraud: why source certification is the real defense
A finance director receives a video call from the CEO. The face matches, the voice is familiar, the request is urgent: authorize a $25.6 million wire transfer to a new supplier. Everything looks legitimate. The money is gone within minutes, and the deepfake is discovered hours later. This scenario, which hit engineering firm Arup in 2024, was not a one-off. It has become the new baseline for deepfake corporate fraud, and the numbers behind it are accelerating well beyond what most security teams planned for.
The instinctive response has been to invest in detection: tools that analyze pixels, audio frequencies, and behavioral patterns to flag synthetic content. But detection operates after the content already exists. It is reactive by design, and its accuracy drops every time generative AI models improve. The alternative works in the opposite direction: instead of chasing fakes, organizations certify authentic communications at the source, so any uncertified content becomes immediately suspect. This is the shift from recognizing the fake to guaranteeing the real, and it is already changing how enterprises address the digital trust gap in corporate communications.
How deepfakes are transforming corporate fraud
Deepfake-enabled fraud is no longer a theoretical risk debated at cybersecurity conferences. It is an operational reality draining corporate accounts at scale. Trend Micro's 2025 threat report documented a 300% increase in deepfake fraud attempts over the previous year, while aggregate losses from AI-driven corporate fraud in the United States alone reached $1.1 billion. These are confirmed figures, not projections, and they point to a threat outpacing defenses.
CEO fraud and voice cloning: the 2025 numbers
The most damaging vector is CEO fraud: business email compromise (BEC) enhanced with synthetic media. The attack follows a predictable pattern. Criminals clone the voice or likeness of a senior executive and use it to authorize financial transactions, override internal controls, or extract sensitive data.
Voice cloning, specifically, has become the fastest-growing channel. Deepstrike.io reports a 680% surge in voice cloning fraud since 2023, fueled by the fact that convincing voice synthesis now needs less than ten seconds of sample audio. The UK energy company case from 2019, where a CEO voice clone extracted EUR 220,000 in a single phone call, was an early warning. By 2025, similar attacks happen daily across industries, and the average amount per incident keeps climbing.
What makes these attacks so effective is their target: the trust channels companies depend on for routine work. Phone calls between executives. Video conferences with remote teams. Voice messages confirming approvals. Every one of these channels was built on a simple assumption: seeing or hearing a person confirms their identity. That assumption no longer holds.
From email to video: the escalation of attack channels
Traditional BEC relied on spoofed emails with urgency cues and authority signals. Deepfake technology has blown that attack surface wide open, extending it to every communication channel a business uses. The Arup incident involved a full video call with multiple synthetic participants, convincing enough to override the target's initial skepticism. The World Economic Forum flagged the case as a watershed: a manipulated email might trigger scrutiny, but a live video call bypasses it.
The escalation follows a clear path. Email impersonation led to voice cloning. Voice cloning led to real-time video deepfakes. The next wave will combine all three in coordinated campaigns. Deloitte projects AI-driven fraud losses across all channels will reach $40 billion globally by 2027.
Why detection is not enough to protect businesses
Detection tools have a role in cybersecurity. But treating them as the primary defense against deepfake corporate fraud creates a serious blind spot. The issue is not that detection is useless: it is that detection degrades exactly when threats grow most sophisticated.
The technical limits of detection
Current systems work by spotting artifacts: pixel inconsistencies, unnatural blinking patterns, audio spectral anomalies. Each new generation of generative AI reduces those artifacts. Research from iProov found that only 0.1% of people can accurately spot high-quality deepfakes. Automated systems do not perform dramatically better against the latest synthetic media.
| Criteria | Detection | Source certification |
|---|---|---|
| Timing | After content is received | At the moment of creation |
| Reliability over time | Degrades as generators improve | Stable: independent of deepfake quality |
| Legal value | Expert testimony, challengeable | Timestamped and digitally signed evidence |
| False positives | Frequent: legitimate content flagged as fake | None: certified content is authentic by definition |
| Scalability | Requires per-content analysis | Integrated into business workflows |
The deeper problem is asymmetry. Attackers get unlimited attempts. They have access to the same detection tools defenders use, so they iterate until their output passes every check. Defenders need to be right every single time: one missed deepfake can mean millions lost. That math does not improve with time.
The time problem: reacting after the damage
Even when detection works, it works late. A deepfake video call that authorizes a wire transfer does its damage in real time. By the time a detection system flags the recording, the money has already moved through multiple accounts. In the Arup case, the fraud succeeded not because detection failed technically, but because no verification existed at the moment of decision.
This is the gap that matters. Detection analyzes content after creation. Corporate fraud moves at the speed of a phone call. Closing that gap means moving the verification point from after the communication to before or during it.
Certifying at the source: the alternative to chasing fakes
Source certification inverts the problem. Instead of asking "is this content fake?" after receiving it, certification establishes "this content is verified authentic" at the moment of creation or transmission. Anything without certification becomes suspect by default, no matter how convincing it looks.
How corporate communication certification works
This is how TrueScreen operates as a Data Authenticity Platform. Data is captured and certified at origin through a process combining forensic acquisition, digital signature, and timestamping. The result is not a label applied after the fact, but a chain of custody that starts the moment a communication is generated.
Take a practical scenario. A company needs every email sent to clients, partners, or regulators to be verifiable as authentic. With email certification, each outgoing message gets a cryptographic signature and a qualified timestamp at the moment of sending. The recipient, or any third party, can independently confirm the email came from the claimed sender, that nothing was altered, and exactly when it was sent.
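The mechanics of that verification can be sketched in a few lines. This is a hedged illustration, not TrueScreen's implementation: the `certify_email` and `verify_email` functions are hypothetical names, the shared HMAC key stands in for the asymmetric key pair a real system would keep in an HSM, and the Unix timestamp stands in for a qualified timestamp issued by a trusted authority. The point is the workflow: hash the content, bind it to sender and time, sign the bundle, and let any holder of the record re-check it.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key; production systems use asymmetric keys (e.g. in an HSM)
SIGNING_KEY = b"demo-secret-key"

def certify_email(sender: str, recipient: str, body: str) -> dict:
    """Produce a certification record: content hash, timestamp, signature."""
    record = {
        "sender": sender,
        "recipient": recipient,
        "sha256": hashlib.sha256(body.encode()).hexdigest(),
        # Stand-in for a qualified timestamp from a time-stamping authority
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_email(body: str, record: dict) -> bool:
    """Recompute hash and signature; any mismatch means tampering."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(body.encode()).hexdigest() != unsigned["sha256"]:
        return False  # content was altered after certification
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)

cert = certify_email("cfo@example.com", "bank@example.com", "Approve wire #42")
assert verify_email("Approve wire #42", cert)      # authentic, untouched
assert not verify_email("Approve wire #99", cert)  # altered content fails
```

Note the default-deny logic: verification does not try to judge whether content "looks fake". It only checks whether a valid certification record exists, which is what makes the check stable regardless of how good generative models become.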
The same principle works across channels. Video recordings, documents, and data files can all be certified at capture through TrueScreen's management platform, building an organizational layer of digital provenance that covers every channel deepfakes can exploit. The difference from detection is temporal: certification happens before or during the communication, not after an attack has already done its damage.
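The chain-of-custody idea mentioned above can also be sketched concretely. Again, this is an illustrative toy, not the platform's actual data model (the field names and helper functions are invented for this example): each certified capture stores a hash of the previous record, so retroactively editing any record, of any channel, breaks verification for the whole chain.

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder "previous hash" for the first record

def add_record(chain: list, channel: str, content: bytes) -> dict:
    """Append a capture record linked to the previous one by hash."""
    prev_hash = chain[-1]["record_hash"] if chain else GENESIS
    record = {
        "channel": channel,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev_hash": prev_hash,
        "timestamp": int(time.time()),
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def chain_intact(chain: list) -> bool:
    """Recompute every record hash and check each back-pointer."""
    prev = GENESIS
    for rec in chain:
        if rec["prev_hash"] != prev:
            return False  # link to previous record is broken
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["record_hash"]:
            return False  # record was modified after the fact
        prev = rec["record_hash"]
    return True

chain = []
add_record(chain, "video", b"board-call recording bytes")
add_record(chain, "email", b"quarterly approval")
assert chain_intact(chain)
chain[0]["content_sha256"] = "f" * 64  # retroactive tampering
assert not chain_intact(chain)
```

This is why certification scales where detection does not: integrity is checked with one cheap hash comparison per record, instead of a per-content forensic analysis that must be redone every time generators improve.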
NIS2 and the AI Act: regulation pushing toward certification
Regulation is pushing in the same direction. The EU's NIS2 Directive, effective since October 2024, requires essential and important entities to implement risk management measures covering supply chain security, incident handling, and business continuity. For organizations dealing with sensitive communications, that translates to a concrete obligation: demonstrate that data integrity controls are proactive, not just reactive.
The AI Act adds specific requirements around transparency and traceability for AI-generated content. Organizations relying only on detection to manage deepfake risk may struggle to show the proactive governance regulators now expect. NIS2-compliant certification delivers documented, auditable proof that communications were authentic at origin: the kind of evidence that holds up in both regulatory reviews and courtrooms.
As generative AI narrows the gap between synthetic and real content, the burden of proof is shifting. Organizations will need to prove what is real, because trying to detect what is fake will not scale. Source certification is how that proof gets built. Both the NIS2 Directive and the AI Act are pushing the market toward this model as the trust layer that closes the digital trust gap.

