You don’t need a camera, a pencil, or a paintbrush to create an image or artwork: as of April 2022, all you need is DALL-E 2, the AI-powered image generator capable of creating any scenario we can imagine from natural language text. A few words, or even a single emoji, are enough to describe the image we have in mind and give it shape in seconds thanks to artificial intelligence.

DALL-E, a name derived from the fusion of Salvador Dalí and Pixar’s WALL-E, was first released by the OpenAI research lab in 2021. Although it was already a remarkable tool, it is the accuracy achieved by the second version, released in 2022, that elicits a general sense of awe, especially at the speed with which artificial intelligence has been able to progress.

However, the spread of this “conversational photoshop,” as DALL-E is called in a Washington Post article, is also accompanied by a strong sense of danger. A future in which technology and artificial intelligence take over had long been foreseen, but few imagined it would arrive so soon, or be so easily accessible: five months after release, DALL-E 2 had 1.5 million users generating about 2 million images per day.

Why it is dangerous to use an image generator

Giving everyone access to image generators ignites debates on several issues: first of all the origin of creativity and the meaning of art and authorship, but also mass misinformation.

The risks are so numerous that we have to ask whether the advantages image generators bring are a valid enough counterbalance. As stated by Hany Farid, a UC Berkeley professor specializing in digital forensics, computer vision, and disinformation, “We’re no longer in the early days of the Internet, where we can’t see what the damage is.”

Negative phenomena have followed every evolution of technology: each new tool or system, while introducing greater efficiency, has brought potential harm with it. It is sufficient to consider how advances in artificial intelligence gave rise to deepfakes, a broad term that encompasses all media synthesized by AI, from doctored videos to strikingly realistic photos of people who never existed. Indeed, when the first deepfakes appeared, experts had already warned that they would be used to undermine political processes.

To allay these concerns, the creators of DALL-E have enforced restrictions on the use of the system: first by removing violent and sexually explicit content from the data used to train DALL-E, then by adding controls against targeted harassment, bullying, and exploitation.

From DALL-E 2 to the generative AI explosion: what changed by 2026

When DALL-E 2 launched in 2022, it was a novelty. By 2026, the landscape of AI image generation has transformed beyond recognition. Midjourney, Stable Diffusion, Adobe Firefly, and dozens of open-source alternatives now produce images that are virtually indistinguishable from photographs. Video generation has followed the same trajectory: tools can now create realistic clips from text descriptions, and real-time face-swapping operates seamlessly during live video calls.

The accessibility barrier has vanished entirely. What once required technical knowledge and GPU clusters is now available through mobile apps and browser-based tools, often for free. The volume of AI-generated content online has grown exponentially: industry estimates suggest that by 2026, a significant portion of images shared on social media platforms are either fully generated or substantially modified by AI.

This creates a trust crisis that extends far beyond social media. In insurance, fabricated damage photos can support fraudulent claims. In legal proceedings, manipulated screenshots and images undermine the reliability of digital evidence. In journalism, AI-generated visuals blur the line between reporting and fabrication. The question is no longer whether AI-generated images will be misused, but how organizations and individuals can protect themselves in a world where any image might be synthetic.

The sectors most exposed to synthetic image risks

While AI image generation affects everyone, certain industries face particularly acute risks:

  • Insurance and claims management: photos documenting vehicle damage, property conditions, or workplace incidents can be generated or altered to inflate claims. Without certified visual evidence, insurers face growing fraud exposure.
  • Legal and judicial proceedings: courts rely on digital evidence, including photos, screenshots, and video recordings. AI-generated content threatens the admissibility and reliability of this evidence, prompting new authentication requirements in multiple jurisdictions.
  • Real estate and construction: property inspection photos, construction progress documentation, and condition reports are critical for contracts and disputes. Manipulated images can misrepresent property conditions or project status.
  • E-commerce and marketplaces: product images can be enhanced or fabricated to misrepresent quality, condition, or features, eroding buyer trust and increasing return rates.
  • Media and corporate communications: AI-generated images of public figures, events, or corporate assets can fuel disinformation campaigns with reputational consequences.

Across all these sectors, the common thread is clear: organizations need a reliable way to prove that their visual content is authentic and unaltered. This is not a future problem; it is a present operational requirement.

If anyone can generate images, who certifies the real ones?

DALL-E, Midjourney, Stable Diffusion, and the ever-growing landscape of generative AI tools have fundamentally changed the relationship between images and truth. When photorealistic images can be conjured from a text prompt in seconds, the traditional assumption that “seeing is believing” no longer holds. Detection tools will always lag behind generation capabilities: the real solution is not to chase fakes, but to certify what is authentic from the start.

TrueScreen addresses this challenge at its root. As a cybersecurity platform for digital content certification, TrueScreen captures photos, videos, screenshots, and documents with an immediate digital signature, certified timestamp, and forensic metadata. This creates a tamper-proof chain of custody with legal value, making it possible to prove that a specific piece of content was captured at a given time and place, and has not been altered since. For professionals in insurance, law, construction, real estate, and any field where visual evidence matters, this is the difference between content that can be questioned and content that stands up in court.
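
To make the mechanism concrete, here is a minimal Python sketch of the general principle behind capture-time certification: hash the content the moment it is captured, bind the hash to a timestamp and device metadata, and sign the result. This is an illustration only, not TrueScreen’s actual implementation; the function names, key handling, and metadata fields are assumptions made for the example.

```python
# Minimal sketch of capture-time certification (illustrative only;
# NOT TrueScreen's actual implementation). The principle: hash the
# content at capture, bind the hash to time and device metadata,
# and sign the whole record so any later change is detectable.
import hashlib
import json
import time

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def certify_capture(content: bytes, device_id: str, gps: str) -> dict:
    """Return a signed record binding content, capture time, and metadata."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "captured_at": int(time.time()),  # a real system would use a
        "device_id": device_id,           # trusted timestamping authority
        "gps": gps,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    key = Ed25519PrivateKey.generate()    # in practice: a protected device key
    record["signature"] = key.sign(payload).hex()
    # In a real platform the public key would be registered and certified,
    # not simply carried alongside the record.
    record["public_key"] = key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    ).hex()
    return record


proof = certify_capture(b"<raw image bytes>", device_id="phone-1234",
                        gps="45.4642,9.1900")
```

Because the signature covers both the content hash and the metadata, changing a single byte of the image, or the claimed time or location, invalidates the proof.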

Certify your photos and videos with legal value

TrueScreen guarantees the authenticity and integrity of digital content at the source: no detection needed, just certified proof from the moment of capture.

Learn more
Try for free

FAQ: AI Image Generators and Authenticity

Are AI-generated images a threat to visual evidence?
Yes. Tools like DALL-E, Midjourney, and Stable Diffusion can produce photorealistic images that are indistinguishable from real photographs. This creates serious risks in legal proceedings, insurance claims, journalism, and any context where the authenticity of visual content matters. Without a certified chain of custody, there is no reliable way to prove whether an image depicts reality or was generated by AI.
Can detection tools reliably identify AI-generated images?
Detection accuracy is declining as generation models improve. Current detectors achieve variable accuracy rates and frequently produce false positives and negatives. More importantly, detection is fundamentally reactive: it tries to catch fakes after they exist. The more sustainable approach is proactive certification, which guarantees authenticity at the source rather than attempting to identify manipulations after the fact.
How does TrueScreen protect content from being confused with AI-generated media?
TrueScreen certifies content at the moment of capture by applying a digital signature, a certified timestamp, and comprehensive forensic metadata. This creates an immutable record that proves the content was captured by a real device, at a specific time and GPS location, and has not been modified since. Unlike detection, this method is future-proof: it works regardless of how advanced AI generators become, because it verifies the origin rather than analyzing the output.
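
As a companion to the signing sketch above (again, an illustration of the principle rather than TrueScreen’s real protocol), verification recomputes the content hash and checks the signature, so any alteration to the image or its signed metadata makes the check fail:

```python
# Verify a record produced by the certify_capture sketch above.
# Any change to the content bytes or to the signed metadata causes
# verification to fail (illustrative only, not TrueScreen's protocol).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_capture(content: bytes, record: dict) -> bool:
    """Return True only if content and metadata match the signed record."""
    if hashlib.sha256(content).hexdigest() != record["sha256"]:
        return False  # the content was altered after capture
    unsigned = {k: v for k, v in record.items()
                if k not in ("signature", "public_key")}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    key = Ed25519PublicKey.from_public_bytes(bytes.fromhex(record["public_key"]))
    try:
        key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # the metadata or the signature was tampered with
```

Note that this check never inspects what the image depicts: it only proves where, when, and by what device the content was captured, which is why the approach keeps working no matter how convincing AI generators become.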