Ship step-up verification in an afternoon.
FaceSign replaces insecure SMS codes and expensive call centers with a 20-second AI-led conversation. Six models verify both the person and their state of mind.
```shell
npm i @facesignai/api
```

Five steps to your first session
FaceSign exposes a Model Context Protocol server. Any MCP-capable client (Claude Code, Cursor, Zed) can build, preview, and export a full Next.js app through natural language.
A real flow, not a toy.
Greeting, recognition, liveness, closing. The complete happy path in one session create call.
```typescript
import Client, { FSNodeType } from '@facesignai/api'

const client = new Client({ auth: process.env.FACESIGN_API_KEY })

const { session, clientSecret } = await client.session.create({
  clientReferenceId: 'user-123',
  avatarId: '65f9e3c9-d48b-4118-b73a-4ae2e3cbb8f0',
  langs: ['en', 'it', 'de', 'fr', 'es'],
  defaultLang: 'en',
  videoAIAnalysisEnabled: true,
  flow: [
    { id: 'start', type: FSNodeType.START, outcome: 'greeting' },
    {
      id: 'greeting',
      type: FSNodeType.CONVERSATION,
      prompt: "Warmly greet the user. Ask them to confirm they're ready to verify.",
      outcomes: [{ id: 'ready', condition: 'user is ready', targetNodeId: 'recognition' }],
    },
    {
      id: 'recognition',
      type: FSNodeType.RECOGNITION,
      outcomes: {
        recognized: 'liveness_known',
        newUser: 'liveness_new',
        noFace: 'liveness_new',
      },
    },
    {
      id: 'liveness_known',
      type: FSNodeType.LIVENESS_DETECTION,
      outcomes: {
        livenessDetected: 'closing',
        deepfakeDetected: 'closing',
        noFace: 'closing',
      },
    },
    {
      id: 'liveness_new',
      type: FSNodeType.LIVENESS_DETECTION,
      outcomes: {
        livenessDetected: 'closing',
        deepfakeDetected: 'closing',
        noFace: 'closing',
      },
    },
    {
      id: 'closing',
      type: FSNodeType.CONVERSATION,
      prompt: "Say: You're all set. Thanks for verifying.",
      doesNotRequireReply: true,
      outcomes: [{ id: 'done', condition: '', targetNodeId: 'end' }],
    },
    { id: 'end', type: FSNodeType.END },
  ],
})

// Embed clientSecret.url in an iframe or redirect the user to it.
console.log(clientSecret.url)
```

Every outcome routes forward. Video AI analysis runs out-of-band after the session ends.
AI-Guided Flow Building
Describe your use case in plain English. The MCP interviews you, builds the verification flow, and exports a production-ready app.
10 Verification Nodes
Liveness, facial recognition, document scanning, coercion detection, KBA, and OTP — all composable in any verification flow.
Deploy Anywhere
Export to Next.js or static HTML. One environment variable. Zero vendor lock-in.
Seven live demos
Every reference demo is a working FaceSign integration you can try now and adapt. Each is built from the same MCP primitives.
Step-up auth
Before a sensitive action, promote the user through a short verification. Recognition + liveness + conversational check.
Coercion detection
Detects signs of duress, secondary voices, scripted responses, and environmental anomalies for high-risk transactions.
Account recovery
Self-serve recovery flow with knowledge-based questions, liveness, and recognition. Replaces call center KBA.
Workforce verification
Verify employees before accessing sensitive files, systems, or admin panels. Per-session identity confirmation.
Interview verification
Confirms the person in interview two is the same person from interview one. Mitigates identity fraud in remote hiring.
Gaming and gambling
Responsible gaming check-ins. Verify the player is the account holder and not under duress or impaired.
13 languages, zero extra work
Browser language detection drives both avatar speech and the wrapper UI automatically. You write prompts once. FaceSign handles the rest.
How it works
The session matches the end user's navigator.language against your langs whitelist. If it matches, that language is used for both avatar speech and the wrapper UI. Otherwise it falls back to defaultLang (default: en). BCP-47 normalization covers aliases like zh-Hans and base-language fallbacks like fr-CA. No extra localization code required.
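The matching logic described above can be sketched roughly as follows. This is an illustrative reconstruction, not the FaceSign internals: the function name `pickLang` and the exact normalization steps are assumptions based on the behavior described.

```typescript
// Hypothetical sketch of the language-selection behavior described above:
// exact match against the langs allowlist, then base-language fallback
// (fr-CA -> fr), then defaultLang. Real BCP-47 normalization also handles
// script aliases like zh-Hans.
function pickLang(
  navigatorLang: string,
  langs: string[],
  defaultLang: string = "en"
): string {
  const norm = navigatorLang.toLowerCase();

  // Exact match first (e.g. "it" in ["en", "it"]).
  const exact = langs.find((l) => l.toLowerCase() === norm);
  if (exact) return exact;

  // Base-language fallback: "fr-CA" matches "fr".
  const base = norm.split("-")[0];
  const baseMatch = langs.find((l) => l.toLowerCase() === base);
  if (baseMatch) return baseMatch;

  return defaultLang;
}
```

With `langs: ['en', 'it', 'de', 'fr', 'es']` from the earlier example, a visitor with `navigator.language === "fr-CA"` gets French, while `"pt-BR"` falls back to `defaultLang`.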
Test any language
Append ?lang=XX to any demo URL to override browser detection.
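A small helper makes the override mechanical; the demo URL below is a placeholder, not a real endpoint.

```typescript
// Append or replace the ?lang=XX override on a demo URL using the
// standard URL API.
function withLangOverride(demoUrl: string, lang: string): string {
  const u = new URL(demoUrl);
  u.searchParams.set("lang", lang);
  return u.toString();
}
```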
The full playbook
Flow rules, edge cases, deployment gotchas, and complete example flows. Written for engineers shipping to production.
Prefer to build by hand?
The MCP handles most integrations, but direct SDK and REST access are first-class for teams who need lower-level control.
TypeScript / Node SDK
Official package with full types. Works in Next.js, Express, serverless functions, and anywhere Node runs.
```shell
npm i @facesignai/api
```

REST API
The raw HTTP API if you're working in Python, Go, Ruby, or any other backend. Session create, session get, and webhook events are the three endpoints you'll use most.
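As a rough sketch of what a direct HTTP call looks like, the snippet below builds a session-create request with `fetch`. The base URL, `/sessions` path, and body shape are assumptions for illustration; check the API reference for the exact contract.

```typescript
// Illustrative request builder for the REST API. Endpoint path, auth
// header, and payload shape are assumptions, not the documented contract.
const API_BASE = "https://api.facesign.ai";

function buildCreateSessionRequest(apiKey: string, clientReferenceId: string) {
  return {
    url: `${API_BASE}/sessions`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ clientReferenceId }),
    },
  };
}

// Usage in any fetch-capable runtime:
// const { url, init } = buildCreateSessionRequest(process.env.FACESIGN_API_KEY!, "user-123");
// const res = await fetch(url, init);
```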
API reference

Frequently asked questions
How do I get an API key?
Request an sk_prod_ production key or sk_test_ sandbox key from your account dashboard. If you don't have an account yet, email developers@facesign.ai. Keys carry no setup fee.
Is there a sandbox or free tier?
Yes. Every account gets a sandbox environment with an sk_test_ key that includes free sessions for development. Sandbox sessions are functionally identical to production.
How does pricing work?
Usage-based, priced per verification session. No minimums, no implementation fees, and volume discounts kick in as you scale.
Which browsers and devices are supported?
Any modern Chromium-based browser, Safari 15+, or Firefox on desktop and mobile. Standard WebRTC with camera and microphone permissions. No native install.
Can I embed FaceSign in my own app?
Yes. Drop the session URL into an iframe or WebView, or use the JavaScript SDK for a programmatic embed. About 5 MB, one outbound port.
How do I receive session results?
Webhooks notify your backend when a session's state changes; your server then fetches the full payload via GET /sessions/:id with your API key. Polling the same endpoint works as a fallback.
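The polling fallback can be sketched as a simple loop. The `in_progress` state name and the shape of the session object here are illustrative assumptions; the fetcher is injected so the same loop works with the SDK or raw `fetch`.

```typescript
// Hedged sketch of polling GET /sessions/:id until the session leaves an
// assumed "in_progress" state. fetchSession is whatever calls your backend
// or the FaceSign API.
async function waitForSession(
  sessionId: string,
  fetchSession: (id: string) => Promise<{ state: string }>,
  { intervalMs = 2000, maxAttempts = 30 } = {}
): Promise<{ state: string }> {
  for (let i = 0; i < maxAttempts; i++) {
    const session = await fetchSession(sessionId);
    if (session.state !== "in_progress") return session;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`session ${sessionId} still in progress after ${maxAttempts} polls`);
}
```

Webhooks remain the primary mechanism; polling is only the fallback when you cannot expose an endpoint.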
Can I customize the avatar, voice, or language?
Yes. Choose from our avatar library or work with us on a branded custom avatar. 10+ languages supported.
What data does FaceSign store?
FaceSign is a data processor; you are the data controller. Video is processed in memory and discarded. You choose your data processing region.
Is FaceSign SOC 2 or GDPR compliant?
Architected for GDPR and CCPA from day one. SOC 2 Type II is in progress. Ask security@facesign.ai for details.
Where is data processed?
US and EU processing regions are available; other regions on request. Data residency is configured at the account level. Contact security@facesign.ai to set it up for your deployment.
How do webhooks work?
Register an HTTPS endpoint with a signing secret in your dashboard. FaceSign POSTs a compact event (id, type, sessionId, createdAt) on status, media, and analysis updates — verify the HMAC-SHA256 X-FaceSign-Signature header, then fetch the full state via GET /sessions/:id. Transient delivery failures are retried automatically.
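Signature verification on your endpoint can look roughly like this. It assumes the signature is a hex-encoded HMAC-SHA256 of the raw request body, which matches the description above but should be confirmed against the webhook docs.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of verifying the X-FaceSign-Signature header. Assumption: the
// header carries hex(HMAC-SHA256(signingSecret, rawBody)); the exact
// encoding may differ in practice.
function verifySignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Always verify against the raw body bytes, not a re-serialized JSON object, since re-serialization can reorder keys and break the HMAC.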
What compliance frameworks does FaceSign cover?
GDPR and CCPA from day one. SOC 2 Type II is in progress. For questions about compliance posture, specific certifications, or security documentation, email security@facesign.ai.
Ready to build?
Request an API key, wire up the MCP, and ship a working integration today.