Account Recovery
Let users recover locked accounts through face verification instead of passwords or security questions.
Passwords get forgotten. Security questions get phished. Account recovery is one of the highest-friction, highest-risk flows in any application. FaceSign replaces "mother's maiden name" with real-time face recognition -- matching the user against their enrolled biometric and confirming their identity through an AI-powered conversation.
Requires enrolled faces -- The RECOGNITION node matches the user against faces already stored in your FaceSign database. You need to enroll users during onboarding or account setup before they can use face-based recovery. See Verification Patterns for enrollment flows.
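Enrollment itself does not need a dedicated API call: a face is enrolled automatically the first time it is seen in a session that includes liveness detection (see Troubleshooting below). A minimal sketch of an onboarding session payload, following the same node conventions as the recovery flow in this guide — the node IDs and prompt text here are illustrative, not prescribed by FaceSign:

```python
def build_enrollment_flow(client_reference_id):
    """Build a minimal onboarding session payload that enrolls a face.

    Assumption: enrollment happens automatically the first time a face is
    seen in a session with liveness detection, so a liveness node plus a
    short conversation is enough. Node IDs and prompts are illustrative.
    """
    return {
        "clientReferenceId": client_reference_id,
        "flow": [
            {"id": "start", "type": "start", "outcome": "liveness"},
            {
                "id": "liveness",
                "type": "liveness_detection",
                # Route every outcome forward to a conversation node,
                # never to end — matching the routing rule in this guide.
                "outcomes": {
                    "livenessDetected": "welcome",
                    "deepfakeDetected": "welcome",
                    "noFace": "welcome",
                },
            },
            {
                "id": "welcome",
                "type": "conversation",
                "doesNotRequireReply": True,
                "prompt": "Say: Thanks! You're all set up.",
                "outcomes": [
                    {"id": "w1", "targetNodeId": "end", "condition": ""}
                ],
            },
            {"id": "end", "type": "end"},
        ],
    }
```

Post this payload to the sessions endpoint during onboarding, exactly as shown for the recovery session below.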
How it works
- The user clicks "Forgot password" or "Can't access my account" in your application.
- Your backend creates a FaceSign session with recognition (1:N face matching), liveness detection, and a conversation node for knowledge-based verification.
- You embed clientSecret.url as an iframe src inside your recovery page — the verification runs inline on your domain, keeping the user in your app and branding. Redirecting the user to the hosted URL is the fallback when an iframe can't be used (e.g., strict parent-frame CSP, native mobile wrappers without a webview).
- FaceSign matches the user's face against your enrolled database and asks them to confirm account details.
- Your webhook handler receives the result -- if the face matches a known user, you grant access and let them reset their credentials.
The recognition branching pattern
The proven pattern for flows that use facial recognition is to branch recognized users and new users into separate paths, each with its own liveness check and conversation node. This avoids packing conditional logic into a single prompt -- the most common cause of broken flows.
// Recognition branches to different liveness nodes
{ id: 'recognition', type: 'recognition',
  outcomes: { recognized: 'liveness_known', newUser: 'liveness_new', noFace: 'liveness_new' } },

// Known user path
{ id: 'liveness_known', type: 'liveness_detection',
  outcomes: { livenessDetected: 'greet_known', deepfakeDetected: 'greet_known', noFace: 'greet_known' } },
{ id: 'greet_known', type: 'conversation',
  prompt: "Greet them by full name and ask DOB. Use their full name, not just first name.",
  outcomes: [{ id: 'done', condition: 'User responded', targetNodeId: 'closing' }] },

// New user path
{ id: 'liveness_new', type: 'liveness_detection',
  outcomes: { livenessDetected: 'ask_name', deepfakeDetected: 'ask_name', noFace: 'ask_name' } },
{ id: 'ask_name', type: 'conversation',
  prompt: "Say: Welcome! What's your name?",
  outcomes: [{ id: 'done', condition: 'User provided name', targetNodeId: 'ask_dob' }] },

Always route every liveness and recognition outcome forward -- liveness outcomes to a conversation node, recognition outcomes to a liveness node. Do not route any outcome to end or an error state. The video AI analysis handles deepfake scoring separately, out of band.
Build the flow
Define the verification flow
Create a session with recognition to match the user against enrolled faces, liveness detection to prevent photo/video replay attacks, and a conversation node to confirm account details.
{
  "clientReferenceId": "recovery-user-4421",
  "metadata": {
    "recoveryType": "forgot_password",
    "requestedAt": "2025-01-15T10:30:00Z"
  },
  "flow": [
    {
      "id": "start",
      "type": "start",
      "outcome": "greeting"
    },
    {
      "id": "greeting",
      "type": "conversation",
      "prompt": "Say: Hi! I need to verify your identity to help you recover your account. It only takes a few seconds. Ready?",
      "outcomes": [
        {
          "id": "g1",
          "targetNodeId": "recognition",
          "condition": "User has responded or acknowledged"
        }
      ]
    },
    {
      "id": "recognition",
      "type": "recognition",
      "outcomes": {
        "recognized": "liveness_known",
        "newUser": "liveness_new",
        "noFace": "liveness_new"
      }
    },
    {
      "id": "liveness_known",
      "type": "liveness_detection",
      "outcomes": {
        "livenessDetected": "confirm_identity",
        "deepfakeDetected": "confirm_identity",
        "noFace": "confirm_identity"
      }
    },
    {
      "id": "confirm_identity",
      "type": "conversation",
      "prompt": "Say: I've confirmed your face matches our records. For additional security, can you tell me the email address associated with your account?",
      "outcomes": [
        {
          "id": "identity_confirmed",
          "targetNodeId": "closing",
          "condition": "The user provided an email address"
        },
        {
          "id": "identity_not_confirmed",
          "targetNodeId": "closing",
          "condition": "The user could not provide an email address or refused"
        }
      ]
    },
    {
      "id": "liveness_new",
      "type": "liveness_detection",
      "outcomes": {
        "livenessDetected": "not_found",
        "deepfakeDetected": "not_found",
        "noFace": "not_found"
      }
    },
    {
      "id": "not_found",
      "type": "conversation",
      "prompt": "Say: I wasn't able to find a matching account. If you're a new user, you'll need to create an account first. If you believe this is an error, please contact our support team.",
      "doesNotRequireReply": true,
      "outcomes": [
        {
          "id": "acknowledged",
          "targetNodeId": "end",
          "condition": ""
        }
      ]
    },
    {
      "id": "closing",
      "type": "conversation",
      "doesNotRequireReply": true,
      "prompt": "Say: You're all set! We'll send you a password reset link shortly. Thanks!",
      "outcomes": [
        {
          "id": "c1",
          "targetNodeId": "end",
          "condition": ""
        }
      ]
    },
    {
      "id": "end",
      "type": "end"
    }
  ],
  "videoAIAnalysisEnabled": true
}

Create the session from your backend
When a user initiates account recovery, call the FaceSign API.
async function initiateRecovery(recoveryRequest) {
  const res = await fetch('https://api.facesign.ai/sessions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.FACESIGN_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      clientReferenceId: `recovery-${recoveryRequest.id}`,
      metadata: {
        recoveryType: recoveryRequest.type,
        requestedAt: new Date().toISOString(),
      },
      flow: recoveryFlow, // The flow defined above
      videoAIAnalysisEnabled: true,
    }),
  })
  const { session, clientSecret } = await res.json()

  // Track the recovery attempt
  await db.recoveryAttempts.create({
    id: recoveryRequest.id,
    facesignSessionId: session.id,
    status: 'pending',
    createdAt: new Date(),
  })

  return clientSecret.url
}

import os
import requests
from datetime import datetime

def initiate_recovery(recovery_request):
    res = requests.post(
        "https://api.facesign.ai/sessions",
        headers={
            "Authorization": f"Bearer {os.environ['FACESIGN_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "clientReferenceId": f"recovery-{recovery_request['id']}",
            "metadata": {
                "recoveryType": recovery_request["type"],
                "requestedAt": datetime.now().isoformat(),
            },
            "flow": recovery_flow,  # The flow defined above
            "videoAIAnalysisEnabled": True,
        },
    )
    data = res.json()
    session = data["session"]
    client_secret = data["clientSecret"]

    # Track the recovery attempt
    db.recovery_attempts.create(
        id=recovery_request["id"],
        facesign_session_id=session["id"],
        status="pending",
        created_at=datetime.now(),
    )

    return client_secret["url"]

Handle the webhook
When the session completes, check the recognition result and grant or deny access.
import crypto from 'node:crypto'

// Verify an HMAC-SHA256 signature of the form `t=<timestamp>,v1=<hex>`
// over `${timestamp}.${rawBody}`, using the webhook secret from the
// FaceSign dashboard. Reject any request whose signature doesn't match.
function verifyFacesignSignature(rawBody, signatureHeader, secret) {
  if (!signatureHeader) return false
  const parts = Object.fromEntries(
    signatureHeader.split(',').map((p) => p.split('='))
  )
  if (!parts.t || !parts.v1) return false
  const expected = crypto
    .createHmac('sha256', secret)
    .update(`${parts.t}.${rawBody}`, 'utf8')
    .digest('hex')
  try {
    return crypto.timingSafeEqual(
      Buffer.from(parts.v1, 'hex'),
      Buffer.from(expected, 'hex')
    )
  } catch {
    return false
  }
}

export async function POST(req) {
  // Read the raw body before parsing — signature is computed over the
  // exact bytes we received.
  const rawBody = await req.text()
  const signature = req.headers.get('x-facesign-signature')
  if (!verifyFacesignSignature(rawBody, signature, process.env.FACESIGN_WEBHOOK_SECRET)) {
    return new Response('Invalid signature', { status: 401 })
  }

  const event = JSON.parse(rawBody)
  if (event.type !== 'session.status') {
    return Response.json({ received: true })
  }

  // Fetch the full session. GET /sessions/:id returns
  // `{ session, clientSecret }` — unpack the inner session.
  const sessionRes = await fetch(
    `https://api.facesign.ai/sessions/${event.sessionId}`,
    {
      headers: {
        Authorization: `Bearer ${process.env.FACESIGN_API_KEY}`,
      },
    }
  )
  const { session } = await sessionRes.json()

  // Find the recovery attempt
  const recovery = await db.recoveryAttempts.findBy({
    facesignSessionId: event.sessionId,
  })
  if (!recovery) {
    return Response.json({ received: true })
  }

  // Check recognition + liveness outcomes
  const recognitionReport = session.report?.nodeReports
    ?.find((r) => r.nodeId === 'recognition')
  const livenessReport = session.report?.nodeReports
    ?.find((r) => r.nodeId === 'liveness_known')
  const recognized = recognitionReport?.outcome === 'recognized'
  const livenessPass = livenessReport?.outcome === 'livenessDetected'

  if (session.status === 'complete' && recognized && livenessPass) {
    // Face matched an enrolled user. The recognition report exposes the
    // enrollment profile fields (name, email, phone, company, jobTitle) —
    // not an internal FaceSign user ID. Look the customer up in your own
    // database by the enrolled email.
    const matchedEmail = recognitionReport.report?.email
    const user = matchedEmail
      ? await db.users.findByEmail(matchedEmail)
      : null

    if (user) {
      await db.recoveryAttempts.update(recovery.id, {
        status: 'approved',
        matchedUserId: user.id,
      })
      // Generate a password reset token
      const resetToken = await auth.createPasswordResetToken(user.id)
      await notifications.sendResetLink(user.id, resetToken)
    } else {
      await db.recoveryAttempts.update(recovery.id, {
        status: 'denied',
        reason: 'matched_face_has_no_local_user',
      })
    }
  } else {
    await db.recoveryAttempts.update(recovery.id, {
      status: 'denied',
      reason: !recognized ? 'face_not_recognized' : 'liveness_failed',
    })
  }

  return Response.json({ received: true })
}

import hmac, hashlib, json, os
from flask import Flask, request, jsonify, abort
import requests

app = Flask(__name__)

def verify_facesign_signature(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """HMAC-SHA256 of ``{timestamp}.{raw_body}`` must match ``v1`` in the header."""
    if not signature_header:
        return False
    try:
        parts = dict(p.split("=", 1) for p in signature_header.split(","))
    except ValueError:
        return False
    if "t" not in parts or "v1" not in parts:
        return False
    expected = hmac.new(
        secret.encode("utf-8"),
        f"{parts['t']}.".encode("utf-8") + raw_body,
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(parts["v1"], expected)

@app.route("/webhooks/facesign", methods=["POST"])
def handle_webhook():
    # Read the raw body before parsing — signature is computed over the
    # exact bytes we received.
    raw_body = request.get_data()
    signature = request.headers.get("X-FaceSign-Signature")
    if not verify_facesign_signature(
        raw_body, signature, os.environ["FACESIGN_WEBHOOK_SECRET"]
    ):
        abort(401, description="Invalid signature")

    event = json.loads(raw_body)
    if event["type"] != "session.status":
        return jsonify({"received": True}), 200

    # Fetch the full session. GET /sessions/:id returns
    # {"session": ..., "clientSecret": ...} — unpack the inner session.
    session_res = requests.get(
        f"https://api.facesign.ai/sessions/{event['sessionId']}",
        headers={
            "Authorization": f"Bearer {os.environ['FACESIGN_API_KEY']}",
        },
    )
    session = session_res.json()["session"]

    # Find the recovery attempt
    recovery = db.recovery_attempts.find_by(
        facesign_session_id=event["sessionId"]
    )
    if not recovery:
        return jsonify({"received": True}), 200

    # Check recognition + liveness outcomes
    node_reports = (session.get("report") or {}).get("nodeReports") or []
    recognition_report = next(
        (r for r in node_reports if r["nodeId"] == "recognition"), None
    )
    liveness_report = next(
        (r for r in node_reports if r["nodeId"] == "liveness_known"), None
    )
    recognized = (
        recognition_report
        and recognition_report.get("outcome") == "recognized"
    )
    liveness_pass = (
        liveness_report
        and liveness_report.get("outcome") == "livenessDetected"
    )

    if session["status"] == "complete" and recognized and liveness_pass:
        # Face matched an enrolled user. The recognition report exposes
        # the enrollment profile fields (name, email, phone, company,
        # jobTitle) — not an internal FaceSign user ID. Look the customer
        # up in your own database by the enrolled email.
        matched_email = (recognition_report.get("report") or {}).get("email")
        user = (
            db.users.find_by_email(matched_email) if matched_email else None
        )
        if user:
            db.recovery_attempts.update(
                recovery["id"],
                status="approved",
                matched_user_id=user["id"],
            )
            reset_token = auth.create_password_reset_token(user["id"])
            notifications.send_reset_link(user["id"], reset_token)
        else:
            db.recovery_attempts.update(
                recovery["id"],
                status="denied",
                reason="matched_face_has_no_local_user",
            )
    else:
        db.recovery_attempts.update(
            recovery["id"],
            status="denied",
            reason="face_not_recognized" if not recognized else "liveness_failed",
        )

    return jsonify({"received": True}), 200

Recognition outcomes
The RECOGNITION node returns one of three outcomes. Your recovery logic should handle each one:
| Outcome | Meaning | Recovery action |
|---|---|---|
| recognized | User's face matches a known enrollment | Proceed to liveness check, then grant access |
| newUser | No match found in the database | Deny recovery -- display a "no account found" message |
| noFace | No face detected in the camera frame | Prompt the user to retry with better lighting or camera positioning |
Never skip liveness after recognition. A recognition match alone does not prove the live person is present -- an attacker could hold up a photo of the enrolled user. Always chain RECOGNITION with LIVENESS_DETECTION.
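Taken together, the outcome table and the liveness rule reduce to a small decision function. A sketch (the function name and return values here are ours, not FaceSign's):

```python
def recovery_decision(recognition_outcome, liveness_outcome):
    """Map recognition + liveness outcomes to a recovery action.

    Returns 'grant', 'deny', or 'retry'. Access is granted only when the
    face is recognized AND the chained liveness check also passed.
    """
    if recognition_outcome == "noFace":
        return "retry"  # no face in frame: ask the user to fix lighting/camera
    if recognition_outcome != "recognized":
        return "deny"   # newUser: no enrolled match
    if liveness_outcome != "livenessDetected":
        return "deny"   # a match without liveness could be a held-up photo
    return "grant"
```

Your webhook handler can feed the recognition and liveness_known node outcomes from the session report straight into a function like this.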
Strengthening the flow
You can add additional verification steps depending on your security requirements:
| Addition | When to use | Node to add |
|---|---|---|
| Email OTP | You have the user's email on file | TWO_FACTOR_EMAIL after confirm_identity |
| Document scan | High-security accounts (banking, healthcare) | DOCUMENT_SCAN + FACE_COMPARE for document-to-selfie match |
| Multiple conversation nodes | Need to verify several knowledge factors | Add separate CONVERSATION nodes for each question |
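For example, an email OTP step could slot in between confirm_identity and closing. This is a hypothetical sketch: we are assuming a TWO_FACTOR_EMAIL node is configured like the other node types (lowercase type, an outcomes mapping, outcome names like codeVerified/codeFailed) -- check the FaceSign node reference for the real field names before using it:

```python
# Hypothetical node shape — verify against the FaceSign node reference.
email_otp_node = {
    "id": "email_otp",
    "type": "two_factor_email",
    "outcomes": {
        "codeVerified": "closing",    # OTP correct: continue to the happy path
        "codeFailed": "not_found",    # placeholder denial path; in practice
                                      # add a dedicated failure message node
    },
}

def add_email_otp(flow):
    """Return a copy of the flow with the OTP node inserted after
    confirm_identity, retargeting its success outcome to the OTP node."""
    out = []
    for node in flow:
        node = dict(node)
        if node["id"] == "confirm_identity":
            node["outcomes"] = [
                {**o, "targetNodeId": "email_otp"}
                if o["id"] == "identity_confirmed" else o
                for o in node["outcomes"]
            ]
        out.append(node)
        if node["id"] == "confirm_identity":
            out.append(email_otp_node)
    return out
```

Only the success branch (identity_confirmed) is retargeted; the failure branch still goes straight to the closing node, as in the base flow.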
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| Recognition always returns newUser | User was never enrolled | Ensure your onboarding flow includes a FaceSign session with liveness detection -- enrollment happens automatically when a face is first seen |
| Recognition returns recognized but wrong user | Low uniqueness threshold or poor enrollment photo | Re-enroll with a clearer photo; use liveness detection during enrollment to ensure quality |
| Recognition fails despite user being enrolled | Greeting too short for video accumulation | Make the greeting at least 5 seconds of avatar speech so recognition has enough video to work with |
| Conversation node does not wait for reply | doesNotRequireReply is set to true | Remove that field or set it to false for interactive conversation nodes |
| Session completes but webhook never fires | Webhook endpoint not registered, unreachable, or rejected (non-2xx) | Register the endpoint in your FaceSign dashboard (webhooks are account-level, not per-session), make sure it is publicly reachable over HTTPS, and respond 2xx. Temporarily point it at webhook.site to confirm delivery |