Step-Up from Anomaly Detection
Trigger FaceSign verification when your fraud system detects suspicious activity.
Your fraud detection platform already assigns risk scores to user actions. When that score crosses a threshold -- an unusual login location, a device change, or a spending spike -- you need to verify the user is who they claim to be without blocking legitimate activity. FaceSign provides a real-time step-up that integrates with any risk engine.
Post-login step-up, not login MFA -- This pattern assumes the user is already authenticated. FaceSign verifies identity after your anomaly detection system flags an in-session action as suspicious. It is not a replacement for login authentication.
How it works
- Your fraud engine (Silverfort, Alloy, Sardine, or custom rules) evaluates a user action and returns a risk score.
- If the score exceeds your threshold, your backend creates a FaceSign session.
- You embed `clientSecret.url` as an `iframe` `src` inside your app — the step-up runs inline on your domain, keeping the user in your product. Redirecting the user to the hosted URL is the fallback when an iframe can't be used (e.g., strict parent-frame CSP, native mobile wrappers without a webview).
- FaceSign runs liveness detection and an AI conversation to confirm the user's intent.
- Your webhook handler receives the result and feeds it back to your risk engine for the final decision.
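The embed-or-redirect decision in the steps above can be sketched as a small helper. This is illustrative only: the `canEmbed` flag and the returned attribute shape are assumptions for your own frontend code, not part of the FaceSign API.

```javascript
// Sketch: decide how to present the step-up. `verificationUrl` is the
// clientSecret.url your backend receives when it creates the session.
// `canEmbed` is your own check — false under a strict parent-frame CSP
// or in a native wrapper without a webview.
function buildStepUpEmbed(verificationUrl, canEmbed) {
  if (!canEmbed) {
    // Fallback: send the user to the hosted verification URL.
    return { mode: 'redirect', url: verificationUrl }
  }
  // Inline iframe keeps the user on your domain; liveness needs
  // camera/microphone permission delegated to the frame.
  return {
    mode: 'iframe',
    attrs: { src: verificationUrl, allow: 'camera; microphone' },
  }
}
```

Your frontend would then either create an `iframe` element from `attrs` or call `window.location.assign(url)` for the redirect case.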
Integration architecture
User Action
     |
     v
+-------------------+    risk score     +------------------+
| Your Application  | ----------------> |   Fraud Engine   |
+-------------------+                   |  (Silverfort,    |
     |                                  |  Alloy, custom)  |
     | score > threshold                +------------------+
     |                                           ^
     v                                           |
+-------------------+      webhook      +--------+---------+
| FaceSign API      | ----------------> |   Your Backend   |
| (step-up session) |                   | (decision logic) |
+-------------------+                   +------------------+

Build the integration
Define the trigger logic
Intercept user actions and check the risk score before allowing them to proceed. This pseudocode works with any risk engine that returns a numeric score.
const RISK_THRESHOLD = 75 // Score 0-100; tune for your risk tolerance

async function checkRiskAndStepUp(userId, action, context) {
  // Query your fraud engine
  const riskScore = await fraudEngine.evaluate({
    userId,
    action, // e.g., 'change_email', 'add_payee', 'export_data'
    ip: context.ip,
    deviceFingerprint: context.deviceId,
    geoLocation: context.geo,
    timestamp: Date.now(),
  })
  if (riskScore < RISK_THRESHOLD) {
    // Low risk -- allow the action
    return { allowed: true }
  }
  // High risk -- require FaceSign step-up
  const verificationUrl = await createStepUpSession(
    userId, action, riskScore, context
  )
  return {
    allowed: false,
    requiresStepUp: true,
    verificationUrl,
  }
}

import time
RISK_THRESHOLD = 75  # Score 0-100; tune for your risk tolerance

async def check_risk_and_step_up(user_id, action, context):
    # Query your fraud engine
    risk_score = await fraud_engine.evaluate(
        user_id=user_id,
        action=action,  # e.g., "change_email", "add_payee", "export_data"
        ip=context["ip"],
        device_fingerprint=context["device_id"],
        geo_location=context["geo"],
        timestamp=int(time.time() * 1000),
    )
    if risk_score < RISK_THRESHOLD:
        # Low risk -- allow the action
        return {"allowed": True}
    # High risk -- require FaceSign step-up
    verification_url = await create_step_up_session(
        user_id, action, risk_score, context
    )
    return {
        "allowed": False,
        "requires_step_up": True,
        "verification_url": verification_url,
    }

Create the FaceSign session
When the risk threshold is exceeded, create a verification session with liveness detection and a conversation node that describes the flagged action to the user.
async function createStepUpSession(userId, action, riskScore, context) {
  const user = await db.users.findById(userId)
  const res = await fetch('https://api.facesign.ai/sessions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.FACESIGN_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      clientReferenceId: `stepup-${userId}-${Date.now()}`,
      metadata: {
        userId,
        action,
        riskScore,
        triggerIp: context.ip,
        triggerGeo: context.geo,
      },
      providedData: {
        email: user.email,
      },
      flow: [
        { id: 'start', type: 'start', outcome: 'greeting' },
        {
          id: 'greeting',
          type: 'conversation',
          prompt:
            "Say: Hi! We noticed something unusual on your account and want to make sure it's really you. This will only take a moment — I'll take a quick look, then ask you to confirm.",
          outcomes: [{ id: 'next', targetNodeId: 'liveness', condition: '' }],
        },
        {
          id: 'liveness',
          type: 'liveness_detection',
          outcomes: {
            livenessDetected: 'confirm_action',
            deepfakeDetected: 'closing',
            noFace: 'closing',
          },
        },
        {
          id: 'confirm_action',
          type: 'conversation',
          prompt: `Say: You're attempting to ${describeAction(action)} from ${context.geo || 'an unrecognized location'}. Can you confirm this is you and you intend to take this action?`,
          outcomes: [
            {
              id: 'confirmed',
              targetNodeId: 'email_otp',
              condition: 'The user explicitly confirmed the action is theirs',
            },
            {
              id: 'denied',
              targetNodeId: 'closing',
              condition: 'The user denied the action, disputed details, or expressed concern',
            },
            {
              id: 'stall',
              targetNodeId: 'closing',
              condition: 'Conversation reached 4 exchanges with no clear answer',
            },
          ],
        },
        {
          id: 'email_otp',
          type: 'two_factor_email',
          otpLength: 6,
          expirySeconds: 300,
          outcomes: {
            verified: 'closing',
            delivery_failed: 'closing',
            failed_unverified: 'closing',
            cancelled: 'closing',
          },
        },
        {
          id: 'closing',
          type: 'conversation',
          prompt:
            'Thank the user for completing the verification and tell them the session is now complete. Keep it to one sentence.',
          doesNotRequireReply: true,
          outcomes: [{ id: 'done', targetNodeId: 'end', condition: '' }],
        },
        { id: 'end', type: 'end' },
      ],
      videoAIAnalysisEnabled: true,
    }),
  })
  const { session, clientSecret } = await res.json()
  // Store the pending step-up
  await db.stepUpRequests.create({
    userId,
    action,
    riskScore,
    facesignSessionId: session.id,
    status: 'pending',
  })
  return clientSecret.url
}

function describeAction(action) {
  const descriptions = {
    change_email: 'change your email address',
    add_payee: 'add a new payment recipient',
    export_data: 'export your account data',
    change_password: 'change your password',
    large_transfer: 'make a large transfer',
  }
  return descriptions[action] || action
}

import os
import time
import requests

ACTION_DESCRIPTIONS = {
    "change_email": "change your email address",
    "add_payee": "add a new payment recipient",
    "export_data": "export your account data",
    "change_password": "change your password",
    "large_transfer": "make a large transfer",
}

async def create_step_up_session(user_id, action, risk_score, context):
    user = await db.users.find_by_id(user_id)
    action_desc = ACTION_DESCRIPTIONS.get(action, action)
    res = requests.post(
        "https://api.facesign.ai/sessions",
        headers={
            "Authorization": f"Bearer {os.environ['FACESIGN_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "clientReferenceId": f"stepup-{user_id}-{int(time.time() * 1000)}",
            "metadata": {
                "userId": user_id,
                "action": action,
                "riskScore": risk_score,
                "triggerIp": context["ip"],
                "triggerGeo": context.get("geo"),
            },
            "providedData": {
                "email": user["email"],
            },
            "flow": [
                {"id": "start", "type": "start", "outcome": "greeting"},
                {
                    "id": "greeting",
                    "type": "conversation",
                    "prompt": (
                        "Say: Hi! We noticed something unusual on your account "
                        "and want to make sure it's really you. This will only "
                        "take a moment — I'll take a quick look, then ask you "
                        "to confirm."
                    ),
                    "outcomes": [
                        {"id": "next", "targetNodeId": "liveness", "condition": ""}
                    ],
                },
                {
                    "id": "liveness",
                    "type": "liveness_detection",
                    "outcomes": {
                        "livenessDetected": "confirm_action",
                        "deepfakeDetected": "closing",
                        "noFace": "closing",
                    },
                },
                {
                    "id": "confirm_action",
                    "type": "conversation",
                    "prompt": (
                        f"Say: You're attempting to {action_desc} from "
                        f"{context.get('geo', 'an unrecognized location')}. "
                        f"Can you confirm this is you and you intend to take this action?"
                    ),
                    "outcomes": [
                        {
                            "id": "confirmed",
                            "targetNodeId": "email_otp",
                            "condition": "The user explicitly confirmed the action is theirs",
                        },
                        {
                            "id": "denied",
                            "targetNodeId": "closing",
                            "condition": "The user denied the action, disputed details, or expressed concern",
                        },
                        {
                            "id": "stall",
                            "targetNodeId": "closing",
                            "condition": "Conversation reached 4 exchanges with no clear answer",
                        },
                    ],
                },
                {
                    "id": "email_otp",
                    "type": "two_factor_email",
                    "otpLength": 6,
                    "expirySeconds": 300,
                    "outcomes": {
                        "verified": "closing",
                        "delivery_failed": "closing",
                        "failed_unverified": "closing",
                        "cancelled": "closing",
                    },
                },
                {
                    "id": "closing",
                    "type": "conversation",
                    "prompt": (
                        "Thank the user for completing the verification and "
                        "tell them the session is now complete. Keep it to one sentence."
                    ),
                    "doesNotRequireReply": True,
                    "outcomes": [
                        {"id": "done", "targetNodeId": "end", "condition": ""}
                    ],
                },
                {"id": "end", "type": "end"},
            ],
            "videoAIAnalysisEnabled": True,
        },
    )
    data = res.json()
    session = data["session"]
    client_secret = data["clientSecret"]
    # Store the pending step-up
    await db.step_up_requests.create(
        user_id=user_id,
        action=action,
        risk_score=risk_score,
        facesign_session_id=session["id"],
        status="pending",
    )
    return client_secret["url"]

Handle the webhook and update your risk engine
When the session completes, feed the FaceSign result back to your fraud engine so it can update its risk model.
import crypto from 'node:crypto'

// Verify an HMAC-SHA256 signature of the form `t=<timestamp>,v1=<hex>`
// over `${timestamp}.${rawBody}`, using the webhook secret from the
// FaceSign dashboard. Reject any request whose signature doesn't match.
function verifyFacesignSignature(rawBody, signatureHeader, secret) {
  if (!signatureHeader) return false
  const parts = Object.fromEntries(
    signatureHeader.split(',').map((p) => p.split('='))
  )
  if (!parts.t || !parts.v1) return false
  const expected = crypto
    .createHmac('sha256', secret)
    .update(`${parts.t}.${rawBody}`, 'utf8')
    .digest('hex')
  try {
    return crypto.timingSafeEqual(
      Buffer.from(parts.v1, 'hex'),
      Buffer.from(expected, 'hex')
    )
  } catch {
    return false
  }
}

export async function POST(req) {
  // Read the raw body before parsing — signature is computed over the
  // exact bytes we received.
  const rawBody = await req.text()
  const signature = req.headers.get('x-facesign-signature')
  if (!verifyFacesignSignature(rawBody, signature, process.env.FACESIGN_WEBHOOK_SECRET)) {
    return new Response('Invalid signature', { status: 401 })
  }
  const event = JSON.parse(rawBody)
  if (event.type !== 'session.status') {
    return Response.json({ received: true })
  }
  // Fetch the full session. GET /sessions/:id returns
  // `{ session, clientSecret }` — unpack the inner session.
  const sessionRes = await fetch(
    `https://api.facesign.ai/sessions/${event.sessionId}`,
    {
      headers: {
        Authorization: `Bearer ${process.env.FACESIGN_API_KEY}`,
      },
    }
  )
  const { session } = await sessionRes.json()
  const stepUp = await db.stepUpRequests.findBy({
    facesignSessionId: event.sessionId,
  })
  if (!stepUp) {
    return Response.json({ received: true })
  }
  // Evaluate results
  const livenessReport = session.report?.nodeReports
    ?.find((r) => r.nodeId === 'liveness')
  const otpReport = session.report?.nodeReports
    ?.find((r) => r.nodeId === 'email_otp')
  const livenessPass = livenessReport?.outcome === 'livenessDetected'
  const otpPass = otpReport?.outcome === 'verified'
  const verified = session.status === 'complete' && livenessPass && otpPass
  // Feed result back to your fraud engine
  await fraudEngine.reportStepUpResult({
    userId: stepUp.userId,
    action: stepUp.action,
    originalRiskScore: stepUp.riskScore,
    facesignVerified: verified,
    facesignSessionId: session.id,
    livenessOutcome: livenessReport?.outcome,
  })
  if (verified) {
    // Allow the original action to proceed
    await db.stepUpRequests.update(stepUp.id, { status: 'verified' })
    await actionQueue.release(stepUp.userId, stepUp.action)
  } else {
    // Block the action and flag the account
    await db.stepUpRequests.update(stepUp.id, { status: 'failed' })
    await fraudEngine.escalate({
      userId: stepUp.userId,
      reason: 'step_up_verification_failed',
      livenessOutcome: livenessReport?.outcome,
    })
  }
  return Response.json({ received: true })
}

import hmac, hashlib, json, os
from flask import Flask, request, jsonify, abort
import requests

app = Flask(__name__)

def verify_facesign_signature(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """HMAC-SHA256 of ``{timestamp}.{raw_body}`` must match ``v1`` in the header."""
    if not signature_header:
        return False
    try:
        parts = dict(p.split("=", 1) for p in signature_header.split(","))
    except ValueError:
        return False
    if "t" not in parts or "v1" not in parts:
        return False
    expected = hmac.new(
        secret.encode("utf-8"),
        f"{parts['t']}.".encode("utf-8") + raw_body,
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(parts["v1"], expected)

@app.route("/webhooks/facesign", methods=["POST"])
def handle_webhook():
    # Read the raw body before parsing — signature is computed over the
    # exact bytes we received.
    raw_body = request.get_data()
    signature = request.headers.get("X-FaceSign-Signature")
    if not verify_facesign_signature(
        raw_body, signature, os.environ["FACESIGN_WEBHOOK_SECRET"]
    ):
        abort(401, description="Invalid signature")
    event = json.loads(raw_body)
    if event["type"] != "session.status":
        return jsonify({"received": True}), 200
    # Fetch the full session. GET /sessions/:id returns
    # {"session": ..., "clientSecret": ...} — unpack the inner session.
    session_res = requests.get(
        f"https://api.facesign.ai/sessions/{event['sessionId']}",
        headers={
            "Authorization": f"Bearer {os.environ['FACESIGN_API_KEY']}",
        },
    )
    session = session_res.json()["session"]
    step_up = db.step_up_requests.find_by(
        facesign_session_id=event["sessionId"]
    )
    if not step_up:
        return jsonify({"received": True}), 200
    # Evaluate results
    node_reports = (session.get("report") or {}).get("nodeReports") or []
    liveness_report = next(
        (r for r in node_reports if r["nodeId"] == "liveness"), None
    )
    otp_report = next(
        (r for r in node_reports if r["nodeId"] == "email_otp"), None
    )
    liveness_pass = (
        liveness_report
        and liveness_report.get("outcome") == "livenessDetected"
    )
    otp_pass = otp_report and otp_report.get("outcome") == "verified"
    verified = session["status"] == "complete" and liveness_pass and otp_pass
    # Feed result back to your fraud engine
    fraud_engine.report_step_up_result(
        user_id=step_up["user_id"],
        action=step_up["action"],
        original_risk_score=step_up["risk_score"],
        facesign_verified=verified,
        facesign_session_id=session["id"],
        liveness_outcome=liveness_report.get("outcome") if liveness_report else None,
    )
    if verified:
        # Allow the original action to proceed
        db.step_up_requests.update(step_up["id"], status="verified")
        action_queue.release(step_up["user_id"], step_up["action"])
    else:
        # Block the action and flag the account
        db.step_up_requests.update(step_up["id"], status="failed")
        fraud_engine.escalate(
            user_id=step_up["user_id"],
            reason="step_up_verification_failed",
            liveness_outcome=liveness_report.get("outcome") if liveness_report else None,
        )
    return jsonify({"received": True}), 200

Risk engine integration patterns
How you integrate depends on your fraud detection platform:
| Platform | Integration point | Pattern |
|---|---|---|
| Silverfort | Policy engine webhook | Silverfort flags the action, your middleware intercepts and creates a FaceSign session, then reports the result back via Silverfort's API |
| Alloy | Journey orchestration | Use Alloy's webhook step to trigger FaceSign session creation; report the outcome as a journey event |
| Sardine | Risk scoring API | Check Sardine's risk score in your middleware; create FaceSign session if score exceeds threshold |
| Custom rules engine | Your own middleware | Evaluate rules, trigger FaceSign, feed result back to your rules database |
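For the custom rules engine row, the "evaluate rules" step can be as simple as a weighted rule list summed into a 0-100 score. The rule names, weights, and event fields below are illustrative assumptions, not a recommendation for any particular platform:

```javascript
// Minimal custom rules engine sketch: each rule contributes a weight when
// it matches, and the total is capped at 100 so it plugs into the
// RISK_THRESHOLD check shown earlier. All rules/weights are illustrative.
const RULES = [
  { name: 'new_device',     weight: 30, match: (e) => e.deviceIsNew },
  { name: 'geo_mismatch',   weight: 35, match: (e) => e.geoCountry !== e.homeCountry },
  { name: 'high_value',     weight: 25, match: (e) => e.amount > 10000 },
  { name: 'velocity_spike', weight: 20, match: (e) => e.actionsLastHour > 20 },
]

function evaluateRisk(event) {
  const score = RULES
    .filter((rule) => rule.match(event))
    .reduce((sum, rule) => sum + rule.weight, 0)
  return Math.min(score, 100) // clamp to the 0-100 scale
}
```

A new device in an unfamiliar country scores 30 + 35 = 65 here; add a large amount and the score caps at 100, well past a threshold of 75.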
Choosing the right flow nodes
Every step-up flow starts with a short greeting conversation (so the camera has enough video for liveness) and ends with a closing conversation — these are required regardless of tier. What varies between tiers is what sits between them:
| Risk level | Actions | Verification nodes between greeting and closing |
|---|---|---|
| Medium (score 60-75) | Password change, email change | Liveness + Email OTP |
| High (score 75-90) | Add payee, data export | Liveness + confirmation conversation + Email OTP |
| Critical (score 90+) | Large transfer, account closure | Liveness + confirmation conversation + Email OTP, with videoAIAnalysisEnabled: true to pick up coercion signals in post-session analysis |
videoAIAnalysisEnabled can be left on for all tiers — the example flow sets it to true — but it's most valuable at the critical tier where the extra analysis cost is easily justified.
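The tier table can be expressed as a small selector your session-creation code calls before building the flow array. The node ids match the example flow in this guide; the threshold values mirror the table and should be tuned to your own scoring:

```javascript
// Map a risk score to the verification nodes that sit between the
// greeting and closing conversations. Thresholds follow the tier table
// above; adjust them for your fraud engine's calibration.
function nodesForScore(score) {
  if (score >= 90) {
    // Critical: full flow plus post-session video analysis for
    // coercion signals.
    return {
      nodes: ['liveness', 'confirm_action', 'email_otp'],
      videoAIAnalysisEnabled: true,
    }
  }
  if (score >= 75) {
    // High: liveness, confirmation conversation, and email OTP.
    return {
      nodes: ['liveness', 'confirm_action', 'email_otp'],
      videoAIAnalysisEnabled: false,
    }
  }
  // Medium (60-75): skip the confirmation conversation.
  return {
    nodes: ['liveness', 'email_otp'],
    videoAIAnalysisEnabled: false,
  }
}
```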
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| Step-up triggered on every action | Risk threshold is too low | Increase RISK_THRESHOLD; review your fraud engine's scoring calibration |
| Step-up never triggers | Risk threshold is too high or fraud engine is not evaluating the action | Lower the threshold; verify the action type is included in your risk evaluation |
| User completes verification but action stays blocked | Webhook handler does not release the queued action | Verify actionQueue.release() is called on successful verification |
| Fraud engine does not learn from FaceSign results | Result feedback is not implemented | Call your fraud engine's feedback API with the FaceSign outcome after each session |
| Deepfake detected but no alert fires | Missing escalation logic for deepfakeDetected | Add an explicit check for deepfakeDetected in your webhook handler and trigger an alert |
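For the last row, the missing escalation check is a one-liner over the node reports your webhook handler already fetches. The report shape follows the example handler in this guide; the `alerting.page` call in the comment is a placeholder for whatever incident tooling you use:

```javascript
// Sketch: detect a deepfakeDetected outcome in the session's node
// reports so the webhook handler can fire an explicit alert instead of
// only failing the step-up silently.
function shouldEscalateDeepfake(nodeReports) {
  const liveness = (nodeReports || []).find((r) => r.nodeId === 'liveness')
  return liveness?.outcome === 'deepfakeDetected'
}

// In the webhook handler, after fetching the session (hypothetical
// alerting hook shown):
//   if (shouldEscalateDeepfake(session.report?.nodeReports)) {
//     await alerting.page('deepfake_detected', { sessionId: session.id })
//   }
```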