Taylor Lei of VidMage shares seven practical methods to protect yourself and your business from synthetic video fraud.
If you look at social media right now, you’ll see a popular claim: if you’re on a video call and think the person might not be real, ask them to hold up three fingers in front of their face. The idea is that AI deepfake systems will distort or misrender the fingers, exposing the fake.
It sounds appealing: a simple trick for a complex threat. But as someone who builds AI-powered video tools, I have to be honest: the reality is more complicated than a finger test.
This issue is urgent because most people are poor at spotting deepfakes unaided. A study by iProov found that people could correctly detect deepfakes only about 24.5% of the time, and they often overestimated their ability. Most people believe what they see and accept synthetic media as real instead of questioning it. Scammers exploit exactly that tendency.
The three-finger test (with major caveats)
This method works because older generative models struggled with the details of hand shapes. Hands are difficult to create realistically since each has 27 bones, 29 joints, and skin that changes with every movement. Early models would often glitch when a hand appeared near the face.
But this advantage is disappearing quickly. Today’s real-time deepfake systems are trained on much larger datasets and have special modules for hand reconstruction. On a compressed video call, you might not notice any difference. Even worse, security experts warn that viral tricks like this can give people a false sense of security, since bad actors work to evade popular detection methods. The test can help, but use it as just one clue, not a final answer.
Watch for micro-expressions
Deepfake models are good at replicating main expressions, such as smiles, nods, and raised eyebrows. But they still have trouble with quick, subtle movements, such as a flicker of doubt before someone answers, a slight twitch of the lip, or the way real eyes narrow just before a laugh.
Studies that focus on micro-expressions have achieved detection accuracy scores of up to 99% because synthetic faces tend to smooth out the natural messiness of emotions. Pay attention to how expressions change, not just the expressions themselves. If a face moves between emotions too smoothly, it could be a warning sign.
Check the edges of the face
The hairline, the area between the jaw and neck, and the spot where the face meets the ear are usually the hardest parts for generative models to render accurately. In good lighting, check for slight blurring, colour bands, or an airbrushed look around these edges. The face might look perfect, but the edges often do not.
Ask an unexpected, specific question
Real-time deepfakes require a live feed of the source person or a short reference clip. But they do not have that person’s knowledge, memories, or relationships. Ask something only the real person would know, like a shared memory, the name of a mutual friend’s pet, or a detail from a recent conversation. The synthetic face might look perfect, but it cannot make up a real personal history.
Check for audio-visual desynchronisation
Current deepfake systems process video and audio with separate models, meaning one AI creates the visuals and another generates the sound, then the two are synced together afterwards. When things get stressful, such as during fast speech, overlapping sounds, or strong emotions, this separation becomes noticeable.
Watch the lip sync closely when someone speaks quickly or with emotion. A slight lag, or lips that overcorrect, can be a sign that two systems are being stitched together rather than one real person speaking.
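For readers who like to see the idea concretely, here is a toy sketch of the principle, not a production detector and not part of any real tool: it estimates how far an audio loudness envelope lags behind a mouth-opening signal using a simple cross-correlation. The function name and the synthetic data are illustrative assumptions; a real system would derive the mouth signal from face landmarks and the envelope from the soundtrack.

```python
import numpy as np

def av_lag_frames(mouth_open, audio_env):
    """Estimate how many frames the audio envelope trails the mouth signal.

    Both inputs are plain 1-D arrays sampled once per video frame.
    """
    # Z-score both signals so the correlation peak reflects shape, not scale.
    m = (np.asarray(mouth_open) - np.mean(mouth_open)) / (np.std(mouth_open) + 1e-9)
    a = (np.asarray(audio_env) - np.mean(audio_env)) / (np.std(audio_env) + 1e-9)
    corr = np.correlate(a, m, mode="full")  # correlation at every possible offset
    lags = np.arange(-len(m) + 1, len(a))   # lag value for each offset
    return int(lags[np.argmax(corr)])       # positive = audio trails the video

# Synthetic demo: an audio envelope that trails the mouth by 3 frames.
rng = np.random.default_rng(0)
mouth = rng.random(200)
audio = np.roll(mouth, 3)
print(av_lag_frames(mouth, audio))  # prints 3
```

At 25 frames per second, a consistent lag of more than a frame or two (40 to 80 ms) is the kind of desynchronisation described above, which is why it becomes visible to the naked eye during fast or emotional speech.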
Request an unscripted physical action
Ask the person to turn fully to the side, not just tilt their head, but rotate 90 degrees so you can see their full profile. This disrupts most face-swap systems, which are designed for front or near-front views. You can also ask them to move close to a window or another natural light source and tilt their face upward. Big changes in angle and lighting are still hard for real-time generation systems.
Verify outside the call entirely
The most reliable method is also the simplest: end the call and reach out to the person through another verified channel before you do anything. Call them on a different number or send a message through a platform they control. Using multiple verification methods is considered best practice in fraud prevention, and feeling rushed is often a warning sign.
If someone says they are your bank, boss, or family member and wants you to act quickly, take a moment to check. Real people can wait a couple of minutes for you to confirm who they are.
The three-finger trick became popular because people wanted an easy, shareable answer to a complicated problem. Simple solutions can make people feel confident when they should be cautious. That is why, moving forward, making verification a habit is your best defence.
Remember one rule: if something feels urgent and something important is at risk, check through another channel before you do anything. Making thorough verification second nature is the habit that will help protect you from deepfakes now and in the future.
