Differentiating between a real image and a deepfake was practically impossible. Until now, according to Adobe


Perhaps you remember the case. Just under a year ago, in May 2021, Jonas Bendiksen, a Norwegian photojournalist, published a book titled ‘The Book of Veles’. On the surface, it was a catalog of images taken in the city of Veles, in North Macedonia. Curious and very evocative, yes; but little else. Only on the surface, of course. In reality, Bendiksen had faked the photos with software, even inserting characters generated with 3D models similar to those used in video games, in an attempt to open a debate on manipulation.

To Bendiksen’s astonishment, however, the book circulated like a typical work of photojournalism, a series of completely genuine images showing the reality of the former Yugoslav republic. It even passed the filters of the prestigious Visa pour l’Image international festival in France without anyone noticing the slightest deception. Only at the very end did he reveal the ruse himself.

What he wanted to demonstrate, he would explain later, is something that the media, the political class and, increasingly, society itself have internalized: manipulating images and videos is very easy. And effective. So much so that it is becoming a disturbingly common problem.

A Sensity study concluded last year that in a matter of six months the number of deepfakes in circulation had doubled. Worse still, only 7% of them had been made for “entertainment”. Obama and Trump themselves have been the protagonists of a type of manipulation that is frequently put to purposes as questionable and illicit as generating pornography by superimposing the faces of unwitting victims, or carrying out scams.

Objective: trust

In an attempt to tackle the problem, or at least to create tools that allow the media and individuals to uncover manipulations, some of the big firms in the technology sector (Adobe, Arm, BBC, Intel, Microsoft and Truepic) came together some time ago to launch the Coalition for Content Provenance and Authenticity (C2PA). The coalition brings together the work of the Content Authenticity Initiative (CAI), championed by Adobe, and Project Origin, promoted in turn by Microsoft and the BBC, and has just borne its first fruit: the initial version of a standard, a kind of “seal of guarantee” that vouches for genuine material and, in this way, helps hunt down deepfakes.


The idea is to create an open standard that can be integrated into any software, device or online platform and that helps prove the origin of material: defining, for example, what information is associated with each image, video or audio file, and how that information is stored.

“C2PA allows authors to securely link provenance data claims using their unique credentials. These statements are called assertions by C2PA. They can include statements about who created the content and how, when and where it was created, as well as when and how it was edited. The content author, and the publisher if they are the author of the source data, always remain in control of whether provenance data is included, as well as which assertions are added,” the coalition of technology companies explains on its official website.
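To make that idea more tangible, here is a minimal sketch in Python of the kind of assertion set the coalition describes. The structure and field names are simplified illustrations, not the normative C2PA labels; the real standard serializes these claims into a manifest embedded in the asset itself:

```python
# Illustrative model of C2PA-style provenance "assertions".
# Field names are simplified for readability and are NOT the
# normative labels defined by the C2PA specification.

from datetime import datetime, timezone

assertions = [
    {
        "label": "who",        # author identity claim
        "data": {"author": "Jane Doe", "credential_id": "example-credential-001"},
    },
    {
        "label": "when_where",  # capture context claim
        "data": {
            "captured_at": datetime(2022, 3, 1, 12, 0, tzinfo=timezone.utc).isoformat(),
            "location": "Oslo, Norway",
        },
    },
    {
        "label": "edits",       # editing history claim
        "data": {"actions": ["cropped", "color_adjusted"]},
    },
]

# The author chooses which assertions to include; the set is then
# cryptographically signed so any later tampering becomes detectable.
manifest = {"assertions": assertions, "signature": None}  # signed in a later step
```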

Those responsible for the project warn that, today, creators who want to include metadata about their work (authorship, to name the most obvious) “cannot do it in a secure, tamper-proof and standardized way across all platforms”, which deprives publishers of the “critical context” they need to trust the authenticity of the material. To solve this, Adobe and the rest of the firms involved in C2PA seek to offer “authenticity indicators” that help establish, when necessary, who has altered a photo and exactly what has changed.

“This ability to provide provenance is essential to facilitating trust,” reflects C2PA, which clarifies that its goal is “to enable the global and voluntary adoption of digital provenance techniques through the creation of a rich ecosystem of applications.”


Data that makes up the C2PA architecture. Image: C2PA

The million-dollar question: how do you achieve it?

That is, how does C2PA intend to establish that trust?

The key lies largely in the use of unique signing credentials that, thanks to a certificate authority (CA), certify that a person is who they claim to be. The process is not very different from the one already used on the World Wide Web, and its objective is to prevent attackers from impersonating someone else. “For example, before issuing a certificate for https://c2pa.org/, the CA verified that the requestor did in fact control the C2PA domain name,” the coalition itself explains.
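For readers curious about the mechanics, the following is a hedged sketch of the underlying sign-and-verify idea, using the widely used third-party Python `cryptography` package. C2PA itself specifies X.509 certificate chains rather than bare key pairs, so this only illustrates the principle:

```python
# Minimal sketch of signing and verifying a provenance claim.
# Requires: pip install cryptography

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# In the real ecosystem this key pair would be bound to an identity
# by a certificate authority (CA); here we simply generate one locally.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

provenance_claim = b'{"author": "Jane Doe", "created": "2022-03-01T12:00:00Z"}'

# The author signs the provenance data with their credential...
signature = private_key.sign(provenance_claim, ec.ECDSA(hashes.SHA256()))

# ...and anyone holding the (CA-vouched) public key can verify it.
try:
    public_key.verify(signature, provenance_claim, ec.ECDSA(hashes.SHA256()))
    print("Provenance claim is authentic and unaltered.")
except InvalidSignature:
    print("Claim was tampered with or signed by someone else.")
```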

“The provenance data and the asset are two parts of the same puzzle, a unique puzzle. The chance of any other piece matching, whether by coincidence or deliberate forgery, is so low that it would be practically impossible. This is known as a hard binding. In other words, any alteration of the asset or the provenance, however insignificant, would change the result of the mathematical algorithm, the shape of the puzzle piece, in such a way that the two would no longer match.”
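The “puzzle piece” in that quote is, in essence, a cryptographic hash. A few lines of Python with the standard `hashlib` module show why even a one-character change breaks the match:

```python
# Demonstration of the "hard binding" idea: a one-byte change to the
# asset produces a completely different digest.

import hashlib

original_asset = b"...raw bytes of a photo..."
tampered_asset = b"...raw bytes of a Photo..."  # single character changed

print(hashlib.sha256(original_asset).hexdigest())
print(hashlib.sha256(tampered_asset).hexdigest())
# The two digests share no resemblance, so provenance data bound to the
# original hash can no longer "match" the altered file.
```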

What is that guarantee good for in practice? Basically, for having a precise idea of who created a file and whether it has been modified. Imagine, for example, that a friend sends you a video with controversial content. If it complies with the C2PA standard, you will be able to check, with a C2PA-enabled application, whether it has been validated by an organization you trust. You can do this because the video will have been captured with an enabled camera that, among other things, will have created a manifest containing information about the device itself along with cryptographic hashes.
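Putting those pieces together, the check performed by a C2PA-enabled application might conceptually look like the sketch below. The function and manifest fields are hypothetical simplifications; a real implementation would validate full certificate chains and signatures as defined by the specification:

```python
# Hedged sketch of what a C2PA-enabled app might check before telling
# you a video can be trusted. Names here are illustrative, not spec-defined.

import hashlib

def verify_asset(asset_bytes, manifest, trusted_signers):
    """Return True only if the hash matches and the signer is trusted."""
    # 1. Hard binding: the asset must hash to the value in the manifest.
    if hashlib.sha256(asset_bytes).hexdigest() != manifest["asset_sha256"]:
        return False
    # 2. Trust: the manifest must be signed by an organization we trust
    #    (in reality, validating a certificate chain and a cryptographic
    #    signature, as in the earlier sketch).
    return manifest["signer"] in trusted_signers

manifest = {"asset_sha256": hashlib.sha256(b"video bytes").hexdigest(),
            "signer": "Example News Agency"}
print(verify_asset(b"video bytes", manifest, {"Example News Agency"}))   # True
print(verify_asset(b"edited bytes", manifest, {"Example News Agency"}))  # False
```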


Something similar would happen on a social network. Thanks to C2PA provenance, the platform can verify whether a photo really comes from the source that published it. The objective: “establish trust”.

“This is a monumental step for creators, publishers and consumers around the world,” reflected Andy Parsons of Adobe’s CAI shortly after the C2PA launch. “We will continue to drive industry awareness of the importance of provenance and will work to seek broad adoption to combat the rise of inauthentic content.” The challenge now is precisely to achieve widespread use of the standard.

It is not the first time Adobe has shown interest in the effects of deepfakes. Last fall, for example, it presented Project Morpheus, a tool that can adjust the appearance of people in videos, changing their facial expressions. It has interesting applications for publishing professionals, for example; but it could also be used in political propaganda.