Intel Introduces Deepfake Detector With 96% Accuracy


The tools capable of creating deepfakes are becoming available to everyone, so the amount of fake news generated using the faces of politicians and celebrities is likely to grow considerably over the next few years.

Soon it will be possible to create, from a phone, a video of a politician saying anything in his or her own voice, so it is important to be prepared for this.

Now Intel has introduced FakeCatcher, the first real-time deepfake detector, designed to tell whether an image or video is real or computer-generated, with a claimed accuracy rate of 96%.

It does this by analyzing video pixels and returning results in milliseconds. The system was created by Ilke Demir, a research scientist at Intel Labs, together with Umur Ciftci of the State University of New York at Binghamton. The product uses Intel hardware and software, runs on a server, and is accessed through a web-based platform.

How FakeCatcher works

The program focuses on clues within real videos, using photoplethysmography (PPG), a method that measures the amount of light absorbed or reflected by blood vessels in living tissue. With FakeCatcher, those PPG signals are collected from 32 locations on the face, and maps are then created from their temporal and spectral components.
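As a loose illustration of the idea (not Intel's actual pipeline, whose details are unpublished), the sketch below approximates PPG-style features on synthetic data: average pixel intensity per face region over time gives a temporal signal, and its Fourier magnitude gives the spectral component. The region-splitting scheme and signal model here are assumptions for demonstration only.

```python
import numpy as np

def extract_ppg_features(frames, num_regions=32):
    """Toy PPG-style feature extraction: mean intensity per face region
    over time (temporal), plus each region's magnitude spectrum (spectral)."""
    T = frames.shape[0]
    # Split the (flattened) face area into num_regions patches
    strips = np.array_split(frames.reshape(T, -1), num_regions, axis=1)
    # Temporal component: one intensity time series per region -> (num_regions, T)
    temporal = np.stack([s.mean(axis=1) for s in strips])
    # Spectral component: magnitude spectrum of each mean-removed time series
    centered = temporal - temporal.mean(axis=1, keepdims=True)
    spectral = np.abs(np.fft.rfft(centered, axis=1))
    return temporal, spectral

# Synthetic 64-frame clip with a faint periodic "pulse" added to noise
rng = np.random.default_rng(0)
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * np.arange(64) / 30)  # ~72 bpm at 30 fps
frames = rng.normal(128, 2, (64, 16, 16)) + pulse[:, None, None]

temporal, spectral = extract_ppg_features(frames)
print(temporal.shape, spectral.shape)  # (32, 64) (32, 33)
```

A real detector would feed maps like these into a trained classifier; a synthetic face, lacking the subtle pulse-driven color changes, yields a different signature.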

A computer-generated face has no blood vessels and therefore produces no such signals, which is why this approach makes sense.

FakeCatcher is the first deepfake detection algorithm to use heart-rate signals.

Intel technologies allow the analysis to run in real time, with up to 72 simultaneous detection streams. This makes FakeCatcher one of the centerpieces of Intel's research initiative called Trusted Media, which works on the detection of manipulated content and deepfakes, responsible generation, and media provenance.

The team is also working on other authenticity clues, such as gaze detection, so that combining them all yields a more reliable result.

How could FakeCatcher be used?

Once it reaches the market, it could be used for:

– Detection tools built into the editing software used by content creators and broadcasters.
– Screening of news footage by media outlets and broadcasters, particularly content from third parties.
– Screening on social networks as part of a user-generated content moderation process.
– AI for Social Good: democratizing deepfake detection through a common platform, allowing any person or organization to verify the authenticity of a video.

The team is now working on improving the AI's training with large PPG datasets, which do not yet exist. For now they only have data from about 40 people, so the model cannot generalize very far.

