Security Architecture

Six parallel AI defense layers, HSM tokenization, and zero raw video retention.

FaceSign's security architecture is built on a principle: tokenize first, analyze second. All biometric data passes through HSM tokenization at the point of ingestion, before it reaches the AI analysis layer. Raw video is never stored.

Six parallel AI models

Every verification session runs six AI models simultaneously. Each model analyzes a different dimension of the interaction, and their outputs are combined into a unified risk score.

| Model | What it analyzes | What it catches |
|---|---|---|
| Interactive liveness | Facial motion, depth, temporal consistency | Deepfakes, photos, pre-recorded video, masks |
| Coercion detection | Vocal stress, gaze, micro-expressions, response timing | Duress, coaching, elder abuse, forced transactions |
| Environmental analysis | Background, lighting, audio fingerprint | Virtual backgrounds, fraud call centers, spoofed locations |
| Behavioral analysis | Interaction patterns, sentiment, deviation from baseline | Compromised accounts, behavioral anomalies |
| Predictive risk | Device fingerprint, IP, geolocation, time-of-day | Unusual devices, impossible travel, high-risk geolocations |
| Adaptive conversation | Real-time dialogue analysis, topic coherence | Scripted responses, inconsistent answers, evasion |

The models run independently and cannot influence each other's analysis. Each produces its own confidence score. The platform combines these into a final risk assessment that your application receives via webhook.
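The exact aggregation logic is internal to the platform, but the idea of combining independent per-model scores can be sketched as a weighted sum. The model names below mirror the table above; the weights and decision threshold are illustrative assumptions, not FaceSign's actual parameters:

```python
# Hypothetical aggregation of the six independent model scores.
# Weights and threshold are illustrative assumptions only.
MODEL_WEIGHTS = {
    "interactive_liveness": 0.25,
    "coercion_detection": 0.20,
    "environmental_analysis": 0.15,
    "behavioral_analysis": 0.15,
    "predictive_risk": 0.15,
    "adaptive_conversation": 0.10,
}

def combine_risk(scores: dict[str, float]) -> float:
    """Combine per-model risk scores (0.0 = safe, 1.0 = risky)
    into a single weighted risk score."""
    return sum(MODEL_WEIGHTS[m] * scores[m] for m in MODEL_WEIGHTS)

def decision(risk: float, threshold: float = 0.5) -> str:
    """Map a combined risk score to an outcome."""
    return "reject" if risk >= threshold else "approve"
```

Because each model scores the session independently, a single high-risk signal (for example, coercion detected despite a passing liveness check) can still push the combined score over the threshold.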

Data flow

The path from user camera to your webhook follows a strict sequence designed to minimize exposure of raw biometric data:

| Stage | What happens | Raw data present? |
|---|---|---|
| 1. Capture | User's camera and microphone stream to the FaceSign widget over TLS 1.3 | Yes (in transit only) |
| 2. HSM tokenization | Video and audio frames enter the Hardware Security Module. Biometric features are extracted and tokenized. Raw frames are discarded. | Discarded after tokenization |
| 3. AI analysis | Six models analyze the tokenized biometric features in parallel | No |
| 4. Result delivery | Risk scores, transcript, and decision are delivered to your webhook | No |
| 5. Retention | One-way biometric fingerprint stored for future recognition. All other session data follows your configured retention policy. | No |
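On your side, stage 4 is the only touchpoint: a webhook body containing derived data. As a minimal sketch of a handler, assuming a hypothetical payload shape (the field names here are illustrative; consult the API reference for the actual schema):

```python
import json

def handle_result(raw_body: bytes) -> str:
    """Process a FaceSign result webhook. Field names are assumed
    for illustration; only derived data (scores, transcript, decision)
    ever reaches this handler -- never raw video or audio."""
    payload = json.loads(raw_body)
    risk = payload["risk_score"]
    if payload["decision"] == "approve" and risk < 0.5:
        return "approved"
    return "review"
```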

Encryption

| Layer | Standard |
|---|---|
| In transit | TLS 1.3 for all connections between client, FaceSign servers, and your webhooks |
| At rest | AES-256 encryption for all stored data, including tokenized biometric fingerprints |
| Key management | Hardware Security Modules (HSMs) for tokenization keys and biometric feature extraction |
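If you call FaceSign endpoints from your own backend, you can enforce the same in-transit floor on your side. A minimal sketch using Python's standard `ssl` module:

```python
import ssl

# Refuse anything older than TLS 1.3 for outbound connections,
# matching the in-transit standard in the table above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
# create_default_context() already enables certificate and
# hostname verification; pass ctx to your HTTP client.
```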

Data roles

FaceSign operates as a data processor under GDPR and a service provider under CCPA. Your organization is the data controller. This means:

| Responsibility | Who owns it |
|---|---|
| Deciding when to trigger a verification session | You (controller) |
| Processing biometric data during the session | FaceSign (processor) |
| Determining retention periods and deletion policies | You (controller) |
| Responding to data subject access requests | You (controller), with FaceSign support |
| Securing the processing infrastructure | FaceSign (processor) |

A Data Processing Agreement (DPA) is available on request. FaceSign also provides DPIA (Data Protection Impact Assessment) support materials for organizations that require them.

What FaceSign does not store

  • Raw video or audio recordings
  • Unencrypted biometric data
  • Personally identifiable information beyond what you explicitly send in the session payload
  • Session media after the tokenization stage

What FaceSign does store

  • One-way biometric fingerprint (for future RECOGNITION node matching)
  • Session metadata (timestamps, risk scores, node outcomes)
  • AI-generated transcript of the conversation
  • Tokenized biometric features used for analysis

All stored data is encrypted with AES-256 and subject to your configured retention policy. See Biometric Data Handling for retention periods.
