# Deepfake Detection
How FaceSign detects deepfakes with 99% or higher accuracy using interactive liveness analysis.
FaceSign detects deepfakes, face swaps, and presentation attacks at a rate of 99% or higher, matching the iBeta PAD Level 2 standard for presentation attack detection. This page explains how the system achieves that rate and why interactive liveness analysis defeats attacks that fool static checks.
## Why static liveness checks fail
Traditional liveness detection asks the user to take a selfie or hold still for a moment. These approaches are vulnerable to:
- Pre-recorded video replayed on a screen held in front of the camera
- Real-time deepfake generators that render a synthetic face over a live camera feed
- 3D-printed masks and silicone replicas
- Photo injection attacks that bypass the camera entirely and inject a static image into the video stream
A single-frame analysis cannot distinguish a high-quality deepfake from a real face. The temporal dimension — how a face moves and responds over time — is what separates real from synthetic.
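The role of the temporal dimension can be illustrated with a toy check (a sketch for intuition, not FaceSign's actual algorithm): an injected static image produces near-zero frame-to-frame variation, while a live face produces continuous micro-movement.

```python
def mean_frame_delta(frames):
    """Average absolute pixel difference between consecutive frames.

    frames: list of equal-length lists of grayscale pixel values (0-255).
    A live face yields a small but nonzero delta from micro-movements;
    an injected static image yields a delta of exactly zero.
    """
    deltas = []
    for prev, cur in zip(frames, frames[1:]):
        deltas.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur))
    return sum(deltas) / len(deltas)

# Toy data: an injected still image vs. a face with micro-movement.
static_feed = [[100, 120, 130]] * 5
live_feed = [[100, 120, 130], [101, 119, 131], [99, 121, 129],
             [102, 120, 130], [100, 122, 128]]
```

A real system analyzes far richer signals per frame (texture flow, lighting response), but the principle is the same: identical consecutive frames are a strong injection indicator.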
## Interactive liveness analysis
FaceSign uses interactive challenges during the `LIVENESS_DETECTION` node. Instead of asking for a static selfie, the avatar directs the user through randomized, unpredictable prompts that require real-time physical responses.
### What the system checks
| Analysis layer | What it measures | What it catches |
|---|---|---|
| Frame-by-frame temporal consistency | Micro-movements, skin texture flow, lighting response across consecutive frames | Pre-recorded video, looped playback, frame injection |
| Micro-expression timing | Involuntary facial muscle activation timing after stimulus | Deepfake generators that render expressions with artificial delay |
| Physiological response | Pupil dilation, blink patterns, blood flow indicators visible through skin | Masks, silicone replicas, static images |
| Interactive challenge response | Physical actions in response to randomized, unpredictable prompts | Pre-rendered deepfakes that cannot respond to unknown challenges |
| Depth and parallax | 3D facial geometry from motion parallax | Flat screens, printed photos, 2D image injection |
### Why interactive challenges defeat deepfakes
Pre-rendered deepfakes are generated in advance. The attacker produces a synthetic video and replays it during the verification session. This approach fails against interactive challenges because:
- The challenge is randomized — The avatar selects from a pool of prompts at runtime. The attacker cannot predict what will be asked.
- The response must be immediate — The system measures the latency between the challenge and the user's physical response. Real-time deepfake generators introduce measurable processing delay.
- The response must be physiologically consistent — When a real person turns their head, hundreds of micro-movements occur in facial muscles, skin folds, and lighting angles. Deepfake generators approximate these but introduce statistical artifacts detectable across multiple frames.
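The first two points can be sketched as a challenge-response loop. Everything here is illustrative — the prompt pool, the latency threshold, and the function names are assumptions for the sketch, not FaceSign's API:

```python
import random
import time

# Illustrative prompt pool -- not FaceSign's actual challenge set.
CHALLENGE_POOL = [
    "turn_head_left", "turn_head_right", "blink_twice",
    "raise_eyebrows", "smile", "look_up",
]

# A real-time deepfake pipeline adds measurable processing delay.
# This threshold is a made-up placeholder for illustration.
MAX_RESPONSE_LATENCY_S = 1.5

def issue_challenge(rng=random):
    """Select an unpredictable prompt at runtime."""
    return rng.choice(CHALLENGE_POOL)

def evaluate_response(issued_at, responded_at, challenge, observed_action):
    """Pass only if the correct action arrived within the latency budget."""
    latency = responded_at - issued_at
    return observed_action == challenge and latency <= MAX_RESPONSE_LATENCY_S

challenge = issue_challenge()
t0 = time.monotonic()
# A pre-rendered deepfake cannot know `challenge` in advance, and a
# real-time generator responds late:
genuine = evaluate_response(t0, t0 + 0.4, challenge, challenge)
delayed = evaluate_response(t0, t0 + 3.0, challenge, challenge)
```

Because the prompt is drawn at runtime, an attacker replaying pre-rendered footage fails the action match, and a live generator fails the latency check.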
FaceSign's liveness detection is not a single check. It runs continuously throughout the `LIVENESS_DETECTION` node, analyzing every frame. The longer the interaction, the more data points the system collects and the higher the confidence.
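The "more frames, higher confidence" effect can be sketched by treating per-frame liveness scores as accumulating evidence. The combination rule below is illustrative, not FaceSign's scoring model:

```python
def session_confidence(frame_scores):
    """Combine per-frame liveness scores (each in [0, 1]) into a
    session-level confidence.

    Treating frames as independent evidence, the probability that
    every frame's liveness signal is a false alarm shrinks
    multiplicatively as more frames agree the face is live.
    """
    p_all_wrong = 1.0
    for score in frame_scores:
        p_all_wrong *= (1.0 - score)  # chance this frame's signal is wrong
    return 1.0 - p_all_wrong

short_session = session_confidence([0.6] * 3)   # few frames collected
long_session = session_confidence([0.6] * 10)   # longer interaction
```

With the same per-frame score, the longer session yields strictly higher confidence, which is why the check runs for the full duration of the node rather than on a single frame.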
## iBeta PAD Level 2
FaceSign's deepfake detection meets the iBeta Presentation Attack Detection Level 2 standard:
| Metric | Requirement | FaceSign |
|---|---|---|
| Attack detection rate | Varies by attack type | 99% or higher across all tested attack vectors |
| Tested attack types | Printed photos, screen replay, 3D masks, video injection | All passed |
| False rejection rate | Must remain usable for legitimate users | Below industry threshold |
PAD Level 2 is the higher of the two iBeta testing levels. It includes advanced attack types (3D masks, video injection) that Level 1 does not test.
## Attack types and detection
| Attack | How it works | How FaceSign detects it |
|---|---|---|
| Screen replay | Attacker plays a video of the victim on a phone/monitor | Temporal analysis detects screen refresh artifacts; interactive challenges go unanswered |
| Real-time deepfake | Software generates a synthetic face over a live feed | Processing latency on challenge-response; micro-expression timing anomalies |
| 3D mask | Physical mask worn by the attacker | Physiological signals absent (no blood flow, no pupil response); texture analysis |
| Photo injection | Image injected into the video stream bypassing the camera | No temporal variation; fails all interactive challenges |
| Face swap app | Mobile app that swaps faces in real-time | Edge artifacts at face boundary; inconsistent lighting between swapped face and body |
## Integration
Deepfake detection runs automatically in two node types:
- `LIVENESS_DETECTION` — Dedicated interactive liveness check with randomized challenges
- `FACE_SCAN` — Liveness analysis runs alongside biometric capture
You do not need to configure deepfake detection separately. Adding either node to your verification flow activates the full analysis pipeline.
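As a minimal sketch, a flow that activates the pipeline might be declared like this. The node type names come from this page, but the surrounding flow structure and field names are assumptions for illustration, not FaceSign's documented schema:

```python
# Hypothetical flow definition -- the node `type` values are from this
# page; the rest of the structure is illustrative only.
verification_flow = {
    "name": "account-opening",
    "nodes": [
        {"type": "LIVENESS_DETECTION"},  # interactive randomized challenges
        {"type": "FACE_SCAN"},           # biometric capture + liveness analysis
    ],
}

# No separate deepfake-detection configuration: the presence of either
# node type is what activates the full analysis pipeline.
LIVENESS_NODE_TYPES = {"LIVENESS_DETECTION", "FACE_SCAN"}
analysis_active = any(
    node["type"] in LIVENESS_NODE_TYPES
    for node in verification_flow["nodes"]
)
```

Either node alone is sufficient; including both runs the dedicated interactive check and then keeps liveness analysis active during biometric capture.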