A DeepFake video can now be identified with 99% accuracy


Videos in which one face is swapped for another by computer are becoming more and more common. Deepfakes are dangerous because they can fool many people into believing that someone said something specific, when in fact it was someone else digitally "camouflaged".

The problem is that it is increasingly difficult to tell whether a video is fake, since deepfakes, which use artificial intelligence to seamlessly adapt the shape and color of the face as well as the movements of the lips and facial muscles, are now within anyone's reach, not just "pranksters" with powerful computers.

Now there is a new method that can help spot such a fake: a technique developed by computer scientists at UC Riverside that detects manipulated facial expressions more accurately than existing methods.

Current methods do a reasonable job of identifying when one face has been replaced with another person's, but they often fail when only someone's facial expression has been changed. If a video is modified so that a subject goes from sad to happy, for example, current tools would not flag it, and the result could be just as damaging (imagine one politician appearing to laugh at a threat from another).

Using manipulated videos to influence political or social opinion is becoming more common, and that is what led Amit Roy-Chowdhury, professor of electrical and computer engineering at the Bourns College of Engineering, and his colleagues to work on a more reliable method.

Their approach divides the task between two components of a deep neural network. The first distinguishes facial expressions and passes along information about the regions that carry the expression, such as the mouth, the eyes, or the forehead. The second takes that data and detects and localizes the manipulations.
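To make the two-component idea concrete, here is a minimal NumPy sketch of that kind of pipeline. It is not the UC Riverside implementation: the region names, feature sizes, and scoring functions are all illustrative assumptions, standing in for the trained expression branch and the manipulation-detection branch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical facial regions the first component reports on
# (names are illustrative, not taken from the paper).
REGIONS = ["mouth", "eyes", "forehead"]

def expression_encoder(frame, weights):
    """First component (sketch): map a flattened frame to one feature
    vector per facial region, standing in for the expression branch."""
    return {region: np.tanh(frame @ W) for region, W in zip(REGIONS, weights)}

def manipulation_detector(region_features, probe):
    """Second component (sketch): score each region's features and return
    a per-region probability that the region was manipulated."""
    return {
        region: float(1.0 / (1.0 + np.exp(-(features @ probe))))
        for region, features in region_features.items()
    }

# Toy usage on a random "frame" with random stand-in weights.
frame = rng.standard_normal(64)                              # flattened face crop
weights = [rng.standard_normal((64, 16)) for _ in REGIONS]   # one projection per region
probe = rng.standard_normal(16)                              # detection head

features = expression_encoder(frame, weights)
scores = manipulation_detector(features, probe)

# Localization: keep only the regions whose score crosses a threshold.
flagged = [region for region, score in scores.items() if score > 0.5]
print(scores, flagged)
```

The design choice the sketch mirrors is the hand-off: the detector never sees raw pixels, only region-wise expression features, which is what lets the second stage both detect a manipulation and say where it occurred.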

They’ve called it “Expression Manipulation Detection,” and it can detect and locate the specific regions within an image that have been altered.

The system was presented at the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV); now it just needs to make its way into the everyday software we have at home, so we can spot a fake without calling in university researchers.

Brian Adam
Professional blogger, vlogger, traveler and explorer of new horizons.