
How Your Veins Could Help Spot a Deepfake—Meet FakeCatcher

Official introduction video of FakeCatcher by Intel (captured from Intel’s official website)

Every time the heart beats, blood flows through the veins and subtly alters the color of the skin. Artificial intelligence (AI) can scan a video's pixels for facial areas where these minute color changes are missing. FakeCatcher, a detection technology built on this principle, identifies synthetic videos by tracking blood flow.

According to recent reports from the tech industry on Tuesday, major companies are now deploying FakeCatcher, a real-time detection system, to counter the growing threat of deepfakes—AI-generated manipulated videos.

FakeCatcher analyzes video pixels for blood flow patterns and detects shifts in vein coloration on the face. While authentic videos of human faces naturally exhibit these subtle circulatory signals, doctored videos lack this physiological signature.

The AI behind FakeCatcher examines tiny blood flow variations in the original video’s pixels, using these as clues to identify inconsistencies in potentially fake content.
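Intel has not published FakeCatcher's full pipeline, but the underlying idea described here is remote photoplethysmography (rPPG): the heartbeat produces a faint periodic ripple in skin color that real footage contains and synthetic footage often does not. The sketch below is a minimal, illustrative Python version of that first step only; the function name, the fixed heart-rate band, and the assumption that a face region is already available come from this illustration, not from Intel.

```python
import numpy as np

def rppg_pulse_strength(frames, face_box, fps=30.0):
    """Toy remote-photoplethysmography (rPPG) check, for illustration only.

    frames   : sequence of RGB frames, each shaped (H, W, 3), dtype uint8
    face_box : (top, bottom, left, right) pixel bounds of the face region,
               assumed to come from any face detector
    fps      : frame rate of the clip

    Returns the fraction of (non-DC) spectral power that falls in the
    plausible human heart-rate band, 0.7-4 Hz (~42-240 bpm). Real faces
    tend to show a clear peak there; fully synthetic faces often do not.
    """
    top, bottom, left, right = face_box

    # Mean green-channel intensity of the skin region per frame;
    # the green channel carries the strongest blood-volume signal.
    signal = np.array(
        [frame[top:bottom, left:right, 1].mean() for frame in frames],
        dtype=np.float64,
    )

    # Remove the slow illumination trend so only the pulse ripple remains.
    signal -= np.convolve(signal, np.ones(15) / 15, mode="same")

    # Frequency analysis of the remaining ripple.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    band = (freqs >= 0.7) & (freqs <= 4.0)
    return spectrum[band].sum() / (spectrum[1:].sum() + 1e-9)
```

A low score from a check like this would only be one weak clue; production systems such as FakeCatcher reportedly feed many such physiological signals into a trained classifier rather than relying on a single threshold.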

Intel Corporation introduced this technology in 2022. A research paper from the Institute of Information & Communications Technology Planning & Evaluation (IITP) indicates that FakeCatcher-based systems can detect deepfake videos with 96% accuracy.

However, this accuracy rate is based on the dataset used during the initial training phase. Since then, generative AI has advanced significantly, with a broader range of datasets now available for training.

Professor Woo Simon Sung-il from Sungkyunkwan University’s Software Department states, “Using data different from what was originally trained on can lead to drastically different accuracy results. The landscape of generative AI has evolved considerably since the technology’s first release.”

Researchers, government institutions, and private-sector companies consistently refine their strategies to combat deepfakes as generative AI continues to improve.

Companies in the European Union (EU) and the United States (U.S.) are developing deepfake detection technologies similar to FakeCatcher.

These advancements include multimodal AI capable of analyzing video and audio simultaneously, as well as AI algorithms designed to prevent deepfake generation altogether. Some algorithms under development can block prompts instructing AI to create deepfakes containing illicit content and restrict the synthesis of specific images.
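As a purely illustrative sketch of the multimodal idea rather than any specific product, such a detector might score the visual track and the audio track separately and then fuse the two results; the weighting below is an arbitrary assumption.

```python
def fused_deepfake_score(video_score: float, audio_score: float,
                         video_weight: float = 0.6) -> float:
    """Combine per-modality fakeness scores (each in [0, 1], higher = more
    suspicious) with a simple weighted average. The 60/40 weighting is an
    arbitrary illustration, not taken from any published detector."""
    return video_weight * video_score + (1.0 - video_weight) * audio_score


# Example: a clip whose visuals look clean but whose voice track is suspect.
print(fused_deepfake_score(video_score=0.2, audio_score=0.9))  # 0.48
```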

Woo notes, “Social media platforms are the primary breeding grounds for deepfakes. While screening every uploaded video with FakeCatcher might reduce user engagement, platform companies’ widespread adoption of such detection technologies could significantly curb deepfake-related crimes.”
