You don’t need a camera, a pencil, or a paintbrush to create an image or artwork: as of April 2022, all you need is DALL-E 2, the AI-powered image generator capable of creating any scenario we can imagine from typed natural language text. A few words, even a single emoji, are enough to describe the image we have in mind and give it shape in seconds, thanks to artificial intelligence.
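To give a concrete sense of how little input is needed, here is a minimal sketch of generating an image from a one-line prompt with OpenAI’s pre-1.0 Python SDK; the prompt text and output size are illustrative assumptions, not details from this article.

```python
import openai

# Assumes an API key is available in the OPENAI_API_KEY environment variable.
# A single short sentence is enough to describe the desired scene.
response = openai.Image.create(
    prompt="an astronaut riding a horse in a photorealistic style",
    n=1,               # number of images to generate
    size="1024x1024",  # output resolution
)

# The API responds with a temporary URL for each generated image.
print(response["data"][0]["url"])
```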

DALL-E, a name derived from the fusion of Salvador Dalí and Pixar’s WALL-E, was first released by the OpenAI research lab in 2021. Although the first version was already a remarkable tool, it is the accuracy achieved by the latest version, released this year, that elicits a general sense of awe, especially at the speed with which artificial intelligence has been able to progress.

However, the spread of this “conversational Photoshop,” as a Washington Post article calls DALL-E, is also accompanied by a strong sense of danger. A future in which technology and artificial intelligence take over had already been foreseen, but no one imagined it would arrive so soon, or be so easily accessible: five months after release, 1.5 million users were generating about 2 million images per day.

Why it is dangerous to use an image generator

Giving everyone the opportunity to use image generators ignites debates on several issues: first of all the origin of creativity, the meaning of art, and authorship, but also mass misinformation.

The risks are so many that we have to wonder whether the advantages image generators bring are a valid enough counterbalance. As Hany Farid, a UC Berkeley professor specializing in digital forensics, computer vision, and disinformation, puts it: “We’re no longer in the early days of the Internet, where we can’t see what the damage is.”

Negative phenomena have followed every evolution of technology: each new tool or system, while introducing greater efficiency, has brought with it potential damage. It is sufficient to consider how advances in artificial intelligence gave rise to deepfakes, a broad term that encompasses all media synthesized by artificial intelligence, from doctored videos to strikingly realistic photos of people who never existed. Indeed, when the first deepfakes were released, experts had already warned that they would be used to undermine politics.

To allay these concerns, the creators of DALL-E have enforced restrictions on the use of the system: first by removing violent and sexually explicit content from the data used to train DALL-E, then by monitoring for targeted harassment, bullying, and exploitation.

How an app can ensure the authenticity of images

Protecting and ensuring the authenticity of images becomes critical to meeting these new challenges. It therefore becomes necessary, and inevitable, to use an equally accessible and easy-to-use tool that provides a safe and secure method of capturing media files.

TrueScreen is a mobile solution capable of proving, by means of a forensically valid report issued by an official certifying body, the content and the certain date of creation of a creative work.
The app verifies, in a matter of seconds, the integrity of deposited files, enabling creative works to be preserved and protected, shielding them from the risk of counterfeiting and the crime of plagiarism. With this acquisition method, protecting the result of one’s work from the moment of its creation becomes quick and easy.
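TrueScreen’s exact pipeline is not described here, but integrity checks of this kind typically rest on cryptographic hashing: a fingerprint of the file is computed at capture time, and any later modification changes that fingerprint. The sketch below is a minimal illustration under that assumption, with the timestamping step simplified to a local UTC clock (a certifying body would instead rely on a trusted timestamp authority):

```python
import hashlib
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def certify(path: str) -> dict:
    """Record a file's fingerprint together with a capture timestamp.

    Illustrative only: a forensic service would have this record signed
    and timestamped by a trusted third party, not the local clock.
    """
    return {
        "file": path,
        "sha256": fingerprint(path),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(path: str, record: dict) -> bool:
    """Check that the file still matches its certified fingerprint."""
    return fingerprint(path) == record["sha256"]
```

If even a single byte of the file changes after certification, `verify` returns False, which is what makes the fingerprint usable as evidence of integrity.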

Advanced cybersecurity tools in your hands

Certifying with legal value has never been easier: from mobile apps to websites, you can always protect your content.