Build with AI
Connect FaceSign to your AI coding tool and ship a working verification integration in minutes.
FaceSign exposes a Model Context Protocol (MCP) server that lets AI assistants create, test, and export verification sessions through natural language.
Instead of manually wiring nodes and writing integration code, you describe what you need — the MCP builds the verification flow, previews it locally, and exports a production-ready application.
Video is processed in memory and discarded at session end. No raw biometric data is retained by default. FaceSign operates as a data processor; you are the data controller. Aligned with GDPR, CCPA, and BIPA; SOC 2 Type II in progress. See Security for details.
Why MCP for step-up authentication?
| Traditional integration | MCP integration |
|---|---|
| Read docs, learn node types, write flow JSON | Describe the use case in plain English |
| Build frontend, configure webhooks, handle errors | MCP generates the full app with error handling |
| Days to first working session | Minutes to first working session |
| Changes require code edits and redeployment | Describe the change, MCP rebuilds |
Quickstart
By the end of this section, you'll have a working FaceSign verification session running from your AI assistant.
Prerequisites
You need two things:
1. An API key.
| Environment | Key prefix | What it's for |
|---|---|---|
| Sandbox | sk_test_... | Free development sessions, full feature parity with production |
| Production | sk_prod_... | Real user verification, per-contract limits |
If you don't have a key yet, email developers@facesign.ai with your company name and intended use case. Sandbox keys are free.
API keys are server-side only. Never embed them in client code, public repos, or browser environments. For frontend integration, create sessions on your backend and pass only the single-use clientSecret.url to the user. Full reference in Authentication.
2. An MCP-compatible client.
Claude Code, Claude Desktop, Cursor, or any client that supports the Model Context Protocol over Streamable HTTP.
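The key-prefix convention in the table above can be checked mechanically before any session is created. A minimal sketch, assuming nothing beyond the documented sk_test_ / sk_prod_ prefixes; the helper name is our own, not part of any FaceSign SDK:

```typescript
// Illustrative helper (not part of the FaceSign SDK): infer the target
// environment from a key's prefix before creating sessions server-side.
type FaceSignEnv = "sandbox" | "production";

function keyEnvironment(apiKey: string): FaceSignEnv {
  if (apiKey.startsWith("sk_test_")) return "sandbox";
  if (apiKey.startsWith("sk_prod_")) return "production";
  throw new Error("Unrecognized FaceSign API key prefix");
}
```

A guard like this at server startup makes it harder to point a development deployment at production, or vice versa.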
Step 1: Add FaceSign to your MCP config
Add the FaceSign server to your client's MCP configuration:
```json
{
  "mcpServers": {
    "facesign": {
      "type": "http",
      "url": "https://mcp.facesign.ai/mcp"
    }
  }
}
```

For Claude Code, add this to .mcp.json in your project root. For Claude Desktop, add it to claude_desktop_config.json. For Cursor, add it to .cursor/mcp.json. See Supported clients below for exact paths.
Step 2: Set your API key
Once connected, the MCP needs your API key to authenticate with FaceSign:
Tell the MCP:

```
"Set my FaceSign API key to sk_test_your_key_here"
```

The MCP calls set_api_key, loads the available avatars and languages, and confirms the connection.
Step 3: Describe what you want to build
Tell your AI assistant what verification flow you need. Be specific about the use case:
"Build a step-up verification for wire transfers. It should check liveness,
ask a security question about the transaction, and send an email OTP."The MCP will:
- Ask you clarifying questions (avatar choice, failure handling, etc.)
- Assemble the verification flow from your answers
- Validate the flow structure
Step 4: Preview locally
Once the flow is built, the MCP opens a browser preview:
"Launch a preview so I can test this flow"This calls launch_session_ui, which opens a local web page. Click Start Session to run through the verification with your camera and microphone.
Step 5: Export for production
When the flow looks right, export it as a deployable Next.js application:
"Export this as a Next.js app"This calls export_app, which generates a complete project — React frontend, API routes for session creation, .env.local template, and a README with deployment instructions. Set FACESIGN_API_KEY in your environment and deploy to any Node.js host (Vercel, Railway, Fly).
Session creation goes through the @facesignai/api package, which is server-side only — it uses your FACESIGN_API_KEY, which must never be exposed to the browser. Next.js gives us a runtime where the key stays on the server (API routes / server actions) while the verification UI runs in the client. That's why the export target is Next.js, not a static bundle.
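The server-side boundary described above can be made explicit in code. A sketch of the principle; the session object shape here is an assumption except for clientSecret.url, which is the one field the docs say to hand to the browser:

```typescript
// The Session shape below is illustrative, except clientSecret.url, which is
// the single-use link the docs say to pass to the user.
interface Session {
  id: string;
  clientSecret: { url: string }; // single-use link for the browser
  flow?: unknown;                // server-only details (illustrative)
}

// Return ONLY the single-use URL to the client. Everything else, including
// FACESIGN_API_KEY, stays on the server (API routes / server actions).
function toClientPayload(session: Session): { url: string } {
  return { url: session.clientSecret.url };
}
```

Keeping a single narrowing function like this at the server/client boundary makes it easy to audit exactly what the browser ever sees.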
Supported clients
FaceSign MCP works with any client that supports the Model Context Protocol. Pick yours:
Claude Code
Add FaceSign to your project config. Claude Code has full tool support with interactive prompts.
```json
{
  "mcpServers": {
    "facesign": {
      "type": "http",
      "url": "https://mcp.facesign.ai/mcp"
    }
  }
}
```

Or add it globally at ~/.claude.json to make it available in all projects.
Claude Desktop
Add FaceSign to your Claude Desktop configuration.
```json
{
  "mcpServers": {
    "facesign": {
      "type": "http",
      "url": "https://mcp.facesign.ai/mcp"
    }
  }
}
```

Config file location:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Cursor
Add FaceSign to your Cursor MCP settings.
```json
{
  "mcpServers": {
    "facesign": {
      "type": "http",
      "url": "https://mcp.facesign.ai/mcp"
    }
  }
}
```

Other MCP clients
Any MCP client that supports Streamable HTTP transport can connect to FaceSign at:

```
https://mcp.facesign.ai/mcp
```

The server is unauthenticated at the transport layer — the first tool call should be set_api_key to initialize the session with your FaceSign API key.
Tools
The FaceSign MCP server exposes five tools for building and managing verification sessions:
| Tool | Purpose |
|---|---|
| set_api_key | Authenticate with FaceSign |
| launch_session_ui | Preview a verification flow in a local browser |
| export_app | Generate a deployable Next.js application |
| get_session | Retrieve results for a specific verification session |
| list_sessions | Query past sessions with filters |
set_api_key
Authenticates the MCP server with your FaceSign API key. Must be called before any other tool. sk_test_ keys connect to the sandbox; sk_prod_ keys connect to production. On success the server loads the available avatars and languages.
"Set my FaceSign API key to sk_test_abc123"launch_session_ui
Opens a local web page where you can run through a verification session interactively. A new session is created each time you click Start Session, so page refreshes work correctly.
"Launch a preview with a liveness check followed by a security question"Flows must be linear with no loops or cycles. A node must never navigate back to a previous node. All paths move forward toward an END node.
export_app
Generates a standalone Next.js application from the current flow, ready for deployment. The exported project includes the frontend, API routes for session creation, an .env.local template, and a README. Set FACESIGN_API_KEY in your environment and deploy to any Node.js host.
"Export this flow as a Next.js app"get_session
Retrieves the full results of a verification session — transcript, AI analysis, node reports, and media references.
"Show me the results of session <id>"list_sessions
Queries past verification sessions with optional filtering by status, date range, client reference, or free-text search. Returns a cursor-paginated list.
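Cursor pagination can be drained with a small generic loop. A sketch with an injected page-fetcher; the items/nextCursor field names are assumptions for illustration, so check the actual list_sessions response shape before relying on them:

```typescript
// Generic cursor-pagination drain. The Page shape (items/nextCursor) is an
// assumption for illustration, not the documented list_sessions response.
interface Page<T> { items: T[]; nextCursor?: string }

async function listAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor); // first call passes no cursor
    all.push(...page.items);
    cursor = page.nextCursor;             // undefined cursor ends the loop
  } while (cursor);
  return all;
}
```

Injecting the fetch function keeps the loop testable without network access and reusable for any cursor-paginated endpoint.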
"Show me the last 10 completed sessions"Node types available in flows
FaceSign flows are assembled from these 13 node types:
| Node type | Purpose |
|---|---|
| START | Entry point for every flow |
| END | Terminal node — session ends here |
| PERMISSIONS | Request camera and/or microphone access with a custom prompt or branch |
| CONVERSATION | AI-led conversational turn with custom prompts and conditional outcomes |
| LIVENESS_DETECTION | Deepfake and liveness check on the live video feed |
| FACE_SCAN | High-quality face capture with an oval overlay |
| FACE_COMPARE | Compare two face images (session video, face scan, document photo, or provided image) |
| RECOGNITION | Match the user's face against previously registered faces |
| DOCUMENT_SCAN | Capture and extract data from identity documents |
| ENTER_EMAIL | Collect the user's email address without sending an OTP |
| TWO_FACTOR_EMAIL | Send and verify an email one-time passcode |
| TWO_FACTOR_SMS | Send and verify an SMS one-time passcode |
| DATA_VALIDATION | Validate collected data and branch the flow on the result |
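To make the table concrete, here is what a simple flow built from these node types might look like as a plain object. The field names and structure are hypothetical; the real flow JSON is assembled and validated by the MCP, so treat this purely as a reading aid:

```typescript
// Hypothetical flow object for illustration only. The real flow schema is
// produced by the MCP; field names here are our own.
const wireTransferFlow = {
  nodes: [
    { id: "start",    type: "START",              next: "liveness" },
    { id: "liveness", type: "LIVENESS_DETECTION", next: "confirm" },
    { id: "confirm",  type: "CONVERSATION",       next: "otp" },
    { id: "otp",      type: "TWO_FACTOR_EMAIL",   next: "end" },
    { id: "end",      type: "END",                next: null },
  ],
};
```

Note the shape of the graph: every node points strictly forward, ending at the END node, which matches the linearity rule launch_session_ui enforces.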
Recipes
Each recipe shows a natural-language prompt you give your AI assistant and what the MCP does behind the scenes. Pick one to see the full pattern.
Wire transfer with coercion detection. The most common FaceSign use case: verify the account holder before a high-value transfer, and detect if they are acting under duress.
Prompt:
```
Build a step-up verification for wire transfers over $5,000. Start with liveness
detection, then ask the user to confirm the transfer details in a conversation.
Include coercion detection. Send an email OTP as the final step.
```

What the MCP builds:
START — Initializes the verification session.
LIVENESS_DETECTION — Interactive deepfake check. The avatar asks the user to perform randomized actions to confirm a live human.
CONVERSATION — The avatar asks the user to confirm the recipient, amount, and purpose. Six AI models run in parallel; among them, coercion detection analyzes vocal stress, gaze patterns, and response timing throughout.
TWO_FACTOR_EMAIL — Sends a one-time passcode to the user's email for a second authentication factor.
END — Session completes. Results are available via webhook or get_session.
Coercion detection runs across every node, not as a separate step. The conversational node gives it the richest signal because the user is speaking freely.
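Once the session completes, your backend decides what to do with the transfer based on the get_session result. A sketch of that gating logic; the result field names below (status, livenessPassed, coercionSuspected) are assumptions for illustration, so map them to the actual get_session response:

```typescript
// Field names below are assumptions for illustration; read the actual
// get_session response shape before relying on them.
interface SessionResult {
  status: "completed" | "failed" | "abandoned";
  livenessPassed: boolean;
  coercionSuspected: boolean;
}

// Release the wire only when verification completed cleanly. A coercion
// signal routes to manual review rather than an automatic decline, which
// could tip off an attacker physically present with the user.
function transferDecision(r: SessionResult): "release" | "review" | "deny" {
  if (r.status !== "completed" || !r.livenessPassed) return "deny";
  if (r.coercionSuspected) return "review";
  return "release";
}
```

The key design choice is the three-way outcome: coercion is a safety signal, not a fraud verdict, so it should never short-circuit to a hard deny.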
Account recovery without passwords. Replace security questions with a face-based recovery flow. The user proves identity through biometric recognition instead of knowledge-based answers.
Prompt:
```
Build an account recovery flow. Collect the user's email, verify their face
against our stored biometric, then do a liveness check. If everything passes,
let them proceed to password reset.
```

What the MCP builds:
START — Initializes the session.
ENTER_EMAIL — Collects the user's email address to look up their account.
RECOGNITION — Compares the live face against the stored biometric fingerprint for that account.
LIVENESS_DETECTION — Confirms the person is physically present (not a photo or deepfake).
END — Session completes. Your app receives the verification result and can proceed with the password reset.
KYC-style onboarding with document scan. While FaceSign is designed for post-login step-up, you can build a first-time identity enrollment flow that captures a document and creates a biometric fingerprint for future recognition.
Prompt:
```
Build an onboarding flow for new users. Capture their government ID, scan their
face, do a liveness check, then have the avatar welcome them and explain how
future verifications will work.
```

What the MCP builds:
START — Initializes the session.
DOCUMENT_SCAN — Captures and validates the user's government-issued ID.
FACE_SCAN — Captures the user's face and creates a biometric fingerprint for future recognition.
LIVENESS_DETECTION — Confirms a live human is present.
CONVERSATION — The avatar welcomes the user and explains what to expect in future verification sessions.
END — Session completes. The biometric fingerprint is stored for future RECOGNITION nodes.
Export and deploy to production. After building and testing any flow, export it as a production-ready application.
Prompt:
```
Export this flow as a Next.js app. I want to deploy it to Vercel.
```

What the MCP does:
- Calls export_app with the current flow.
- Generates a complete Next.js project containing the React frontend component with the FaceSign widget, API routes for creating and managing sessions, an .env.local template, and deployment-ready configuration.
Deploy steps:
```shell
cd exported-app
npm install
echo "FACESIGN_API_KEY=sk_prod_your_key" > .env.local
vercel deploy
```

Troubleshooting
If something isn't working with the FaceSign MCP, start here. Each section covers a specific failure mode with symptoms and fixes.
Still stuck?
Contact developers@facesign.ai with your client name and version, the error message or unexpected behavior, and your MCP config (with the API key redacted).
When to use MCP vs. the API
| Use MCP when... | Use the API when... |
|---|---|
| You want to build a flow from a description | You have an exact flow spec already |
| You're prototyping or iterating quickly | You're integrating into an existing codebase |
| You want a full exported app | You need fine-grained control over every step |
| Your team uses AI assistants for development | Your team prefers traditional SDK integration |
Both paths produce the same result — a verification session powered by the same API. MCP is faster for getting started; the REST API gives you full control.
What's next?
Get production access
You've got sandbox working. Email us to move to sk_prod_ keys with your contract terms, rate limits, and data residency zone.
Share with your security team
Tokenization, retention model, and compliance posture. Self-contained pages you can forward to InfoSec.
Adapt to your use case
Battle-tested flow patterns for wire transfers, account recovery, anomaly step-up, and payment authentication.
Last updated: 2026-04-17 · MCP server: 2.8.0 · Changelog