Deepfake Detection

How FaceSign detects deepfakes with 99% or higher accuracy using interactive liveness analysis.

FaceSign detects deepfakes, face swaps, and presentation attacks at a rate of 99% or higher, meeting the iBeta Level 2 standard for presentation attack detection (PAD). This page explains how the system achieves that rate and why interactive liveness analysis defeats attacks that fool static checks.

Why static liveness checks fail

Traditional liveness detection asks the user to take a selfie or hold still for a moment. These approaches are vulnerable to:

  • Pre-recorded video replayed on a screen held in front of the camera
  • Real-time deepfake generators that render a synthetic face over a live camera feed
  • 3D-printed masks and silicone replicas
  • Photo injection attacks that bypass the camera entirely and inject a static image into the video stream

A single-frame analysis cannot distinguish a high-quality deepfake from a real face. The temporal dimension — how a face moves and responds over time — is what separates real from synthetic.
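
As a minimal illustration of that temporal dimension, the sketch below flags a feed whose frames never change, which is the signature of an injected static photo. The frame format, function names, and noise floor are assumptions for illustration, not FaceSign internals.

```ts
// Minimal sketch: flag a feed with no temporal variation, the signature of an
// injected static photo. Frames are grayscale pixel buffers of equal length;
// the noise floor is an illustrative assumption, not a FaceSign parameter.

type Frame = Float32Array;

// Mean absolute per-pixel difference between two frames.
function meanAbsDiff(a: Frame, b: Frame): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += Math.abs(a[i] - b[i]);
  return sum / a.length;
}

// A live face always produces micro-motion; if every consecutive pair of
// frames sits below the sensor noise floor, the stream is not live video.
function looksStatic(frames: Frame[], noiseFloor = 0.5): boolean {
  for (let i = 1; i < frames.length; i++) {
    if (meanAbsDiff(frames[i - 1], frames[i]) > noiseFloor) return false;
  }
  return frames.length > 1;
}
```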

Interactive liveness analysis

FaceSign uses interactive challenges during the LIVENESS_DETECTION node. Instead of asking for a static selfie, the avatar directs the user through randomized, unpredictable prompts that require real-time physical responses.

What the system checks

| Analysis layer | What it measures | What it catches |
| --- | --- | --- |
| Frame-by-frame temporal consistency | Micro-movements, skin texture flow, lighting response across consecutive frames | Pre-recorded video, looped playback, frame injection |
| Micro-expression timing | Involuntary facial muscle activation timing after stimulus | Deepfake generators that render expressions with artificial delay |
| Physiological response | Pupil dilation, blink patterns, blood flow indicators visible through skin | Masks, silicone replicas, static images |
| Interactive challenge response | Physical actions in response to randomized, unpredictable prompts | Pre-rendered deepfakes that cannot respond to unknown challenges |
| Depth and parallax | 3D facial geometry from motion parallax | Flat screens, printed photos, 2D image injection |

Why interactive challenges defeat deepfakes

Pre-rendered deepfakes are generated in advance. The attacker produces a synthetic video and replays it during the verification session. This approach fails against interactive challenges because:

  1. The challenge is randomized — The avatar selects from a pool of prompts at runtime. The attacker cannot predict what will be asked.
  2. The response must be immediate — The system measures the latency between the challenge and the user's physical response. Real-time deepfake generators introduce measurable processing delay.
  3. The response must be physiologically consistent — When a real person turns their head, hundreds of micro-movements occur in facial muscles, skin folds, and lighting angles. Deepfake generators approximate these but introduce statistical artifacts detectable across multiple frames.
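
As a concrete illustration of points 1 and 2, the sketch below pairs runtime challenge selection with a latency check. The challenge pool, threshold values, and all identifiers are hypothetical, not FaceSign's published parameters.

```ts
// Illustrative only: the pool contents, thresholds, and names below are
// hypothetical, not FaceSign's published parameters.

const CHALLENGE_POOL = ["turn-head-left", "blink-twice", "smile", "look-up"] as const;
type Challenge = (typeof CHALLENGE_POOL)[number];

// Point 1: the prompt is chosen at runtime, so it cannot be pre-rendered.
function pickChallenge(): Challenge {
  return CHALLENGE_POOL[Math.floor(Math.random() * CHALLENGE_POOL.length)];
}

// Point 2: a real-time generator adds a rendering stage, which shows up as
// extra latency between the prompt and the first responsive frame.
const HUMAN_MIN_MS = 150;  // below plausible human reaction time
const HUMAN_MAX_MS = 1500; // above this suggests an injected processing stage

function latencyIsPlausible(issuedAt: number, respondedAt: number): boolean {
  const latency = respondedAt - issuedAt;
  return latency >= HUMAN_MIN_MS && latency <= HUMAN_MAX_MS;
}
```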

iBeta PAD Level 2

FaceSign's deepfake detection meets the iBeta Presentation Attack Detection Level 2 standard:

| Metric | Requirement | FaceSign |
| --- | --- | --- |
| Attack detection rate | Varies by attack type | 99% or higher across all tested attack vectors |
| Tested attack types | Printed photos, screen replay, 3D masks, video injection | All passed |
| False rejection rate | Must remain usable for legitimate users | Below industry threshold |

PAD Level 2 is the higher of the two iBeta testing levels. It includes advanced attack types (3D masks, video injection) that Level 1 does not test.

Attack types and detection

| Attack | How it works | How FaceSign detects it |
| --- | --- | --- |
| Screen replay | Attacker plays a video of the victim on a phone or monitor | Temporal analysis detects screen refresh artifacts; interactive challenges go unanswered |
| Real-time deepfake | Software generates a synthetic face over a live feed | Processing latency on challenge-response; micro-expression timing anomalies |
| 3D mask | Physical mask worn by the attacker | Physiological signals absent (no blood flow, no pupil response); texture analysis |
| Photo injection | Image injected into the video stream, bypassing the camera | No temporal variation; fails all interactive challenges |
| Face swap app | Mobile app that swaps faces in real time | Edge artifacts at the face boundary; inconsistent lighting between swapped face and body |
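
The layers are complementary: each attack class fails at least one of them. The sketch below shows that idea as a simple aggregation rule. The signal names mirror the tables above; the strict reject-on-any-failure rule is an assumption for illustration, not FaceSign's decision logic.

```ts
// Sketch of combining independent liveness signals into a verdict.
// Field names mirror the analysis layers above; the all-must-pass rule
// is an illustrative assumption, not FaceSign's decision logic.

interface LivenessSignals {
  temporalConsistency: boolean; // natural frame-to-frame motion (vs. replay/injection)
  challengeAnswered: boolean;   // randomized prompt answered with plausible latency
  physiologyPresent: boolean;   // blinks, pupil response, blood-flow indicators
  depthConsistent: boolean;     // motion parallax implies real 3D geometry
}

function isLive(s: LivenessSignals): boolean {
  // Each attack class fails at least one layer: a screen replay may show
  // recorded blinks but cannot answer an unpredictable challenge, while a
  // 3D mask can answer challenges but shows no physiological signals.
  return (
    s.temporalConsistency &&
    s.challengeAnswered &&
    s.physiologyPresent &&
    s.depthConsistent
  );
}
```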

Integration

Deepfake detection runs automatically in two node types:

  • LIVENESS_DETECTION — Dedicated interactive liveness check with randomized challenges
  • FACE_SCAN — Liveness analysis runs alongside biometric capture

You do not need to configure deepfake detection separately. Adding either node to your verification flow activates the full analysis pipeline.
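
For example, a flow definition might look like the following. Only the node type names (FACE_SCAN, LIVENESS_DETECTION) come from this page; the surrounding structure is illustrative, not the FaceSign SDK.

```ts
// Hypothetical flow definition; the node type names come from this page,
// the rest of the shape is illustrative only.

const verificationFlow = {
  name: "kyc-onboarding",
  nodes: [
    { type: "FACE_SCAN" },          // biometric capture + liveness analysis
    { type: "LIVENESS_DETECTION" }, // dedicated interactive challenge step
  ],
};
```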
