FaceSign

Build with AI

Connect FaceSign to your AI coding tool and ship a working verification integration in minutes.

FaceSign exposes a Model Context Protocol (MCP) server that lets AI assistants create, test, and export verification sessions through natural language.

Instead of manually wiring nodes and writing integration code, you describe what you need — the MCP builds the verification flow, previews it locally, and exports a production-ready application.


Why MCP for step-up authentication?

| Traditional integration | MCP integration |
| --- | --- |
| Read docs, learn node types, write flow JSON | Describe the use case in plain English |
| Build frontend, configure webhooks, handle errors | MCP generates the full app with error handling |
| Days to first working session | Minutes to first working session |
| Changes require code edits and redeployment | Describe the change, MCP rebuilds |

Quickstart

By the end of this section, you'll have a working FaceSign verification session running from your AI assistant.

Prerequisites

You need two things:

1. An API key.

| Environment | Key prefix | What it's for |
| --- | --- | --- |
| Sandbox | sk_test_... | Free development sessions, full feature parity with production |
| Production | sk_prod_... | Real user verification, per-contract limits |

If you don't have a key yet, email developers@facesign.ai with your company name and intended use case. Sandbox keys are free.

2. An MCP-compatible client.

Claude Code, Claude Desktop, Cursor, or any client that supports the Model Context Protocol over Streamable HTTP.
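The key prefixes above can double as a cheap safety check in your own scripts. Here's a minimal sketch of a helper that infers the environment from a key; this is a hypothetical utility, not part of any FaceSign SDK:

```typescript
// Hypothetical helper: infer the FaceSign environment from an API key prefix.
// The sk_test_ / sk_prod_ convention comes from the environments table above.
type FaceSignEnv = "sandbox" | "production";

function classifyApiKey(key: string): FaceSignEnv {
  if (key.startsWith("sk_test_")) return "sandbox";
  if (key.startsWith("sk_prod_")) return "production";
  throw new Error("Unrecognized FaceSign API key prefix");
}

// Example: catch a sandbox key accidentally configured in production.
console.log(classifyApiKey("sk_test_abc123")); // "sandbox"
```

A check like this is useful in deploy scripts, where a sandbox key slipping into production config would otherwise fail silently.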

Step 1: Add FaceSign to your MCP config

Add the FaceSign server to your client's MCP configuration:

.mcp.json
{
  "mcpServers": {
    "facesign": {
      "type": "http",
      "url": "https://mcp.facesign.ai/mcp"
    }
  }
}

Step 2: Set your API key

Once connected, the MCP needs your API key to authenticate with FaceSign:

Tell the MCP: "Set my FaceSign API key to sk_test_your_key_here"

The MCP calls set_api_key, loads the available avatars and languages, and confirms the connection.

Step 3: Describe what you want to build

Tell your AI assistant what verification flow you need. Be specific about the use case:

"Build a step-up verification for wire transfers. It should check liveness,
ask a security question about the transaction, and send an email OTP."

The MCP will:

  1. Ask you clarifying questions (avatar choice, failure handling, etc.)
  2. Assemble the verification flow from your answers
  3. Validate the flow structure

Step 4: Preview locally

Once the flow is built, the MCP opens a browser preview:

"Launch a preview so I can test this flow"

This calls launch_session_ui, which opens a local web page. Click Start Session to run through the verification with your camera and microphone.

Step 5: Export for production

When the flow looks right, export it as a deployable Next.js application:

"Export this as a Next.js app"

This calls export_app, which generates a complete project — React frontend, API routes for session creation, .env.local template, and a README with deployment instructions. Set FACESIGN_API_KEY in your environment and deploy to any Node.js host (Vercel, Railway, Fly).
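The important property of the exported app is that FACESIGN_API_KEY stays server-side: the browser never sees it, and session creation happens in an API route. As a rough sketch of what that route's outbound request might look like — the endpoint path and body fields here are illustrative assumptions, not documented FaceSign API shapes (the exported README is the source of truth):

```typescript
// Sketch of the server-side session-creation request an exported app might
// build. Endpoint URL and body fields are assumptions for illustration only.
interface HttpRequestSketch {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
}

function buildCreateSessionRequest(apiKey: string, flowId: string): HttpRequestSketch {
  return {
    url: "https://api.facesign.ai/v1/sessions", // assumed endpoint
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // key read from FACESIGN_API_KEY on the server
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ flowId }),
  };
}

const req = buildCreateSessionRequest("sk_test_demo", "wire-transfer-v1");
console.log(req.method); // "POST"
```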


Supported clients

FaceSign MCP works with any client that supports the Model Context Protocol. Pick yours:

Claude Code

Add FaceSign to your project config. Claude Code has full tool support with interactive prompts.

.mcp.json
{
  "mcpServers": {
    "facesign": {
      "type": "http",
      "url": "https://mcp.facesign.ai/mcp"
    }
  }
}

Or add it globally at ~/.claude.json to make it available in all projects.

Claude Desktop

Add FaceSign to your Claude Desktop configuration.

claude_desktop_config.json
{
  "mcpServers": {
    "facesign": {
      "type": "http",
      "url": "https://mcp.facesign.ai/mcp"
    }
  }
}

Config file location:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Cursor

Add FaceSign to your Cursor MCP settings.

.cursor/mcp.json
{
  "mcpServers": {
    "facesign": {
      "type": "http",
      "url": "https://mcp.facesign.ai/mcp"
    }
  }
}

Other MCP clients

Any MCP client that supports Streamable HTTP transport can connect to FaceSign at:

https://mcp.facesign.ai/mcp

The server is unauthenticated at the transport layer — the first tool call should be set_api_key to initialize the session with your FaceSign API key.
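On the wire, Streamable HTTP carries JSON-RPC 2.0 messages, so after initialization the first request a client sends is a tools/call for set_api_key. The envelope below follows the MCP specification; the argument name api_key is an assumption about this server's tool schema:

```typescript
// Build the JSON-RPC 2.0 "tools/call" message an MCP client would POST to
// https://mcp.facesign.ai/mcp. The envelope shape is from the MCP spec; the
// "api_key" argument name is an assumed detail of the set_api_key tool.
function buildToolCall(id: number, name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const msg = buildToolCall(1, "set_api_key", { api_key: "sk_test_abc123" });
console.log(JSON.stringify(msg));
```

In practice your MCP client library constructs this for you; the sketch just shows why set_api_key must come first — every later tool call rides on the session it initializes.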


Tools

The FaceSign MCP server exposes five tools for building and managing verification sessions:

| Tool | Purpose |
| --- | --- |
| set_api_key | Authenticate with FaceSign |
| launch_session_ui | Preview a verification flow in a local browser |
| export_app | Generate a deployable Next.js application |
| get_session | Retrieve results for a specific verification session |
| list_sessions | Query past sessions with filters |

set_api_key

Authenticates the MCP server with your FaceSign API key. Must be called before any other tool. sk_test_ keys connect to the sandbox, sk_prod_ keys connect to production. On success the server loads the available avatars and languages.

"Set my FaceSign API key to sk_test_abc123"

launch_session_ui

Opens a local web page where you can run through a verification session interactively. A new session is created each time you click Start Session, so page refreshes work correctly.

"Launch a preview with a liveness check followed by a security question"

export_app

Generates a standalone Next.js application from the current flow, ready for deployment. The exported project includes the frontend, API routes for session creation, an .env.local template, and a README. Set FACESIGN_API_KEY in your environment and deploy to any Node.js host.

"Export this flow as a Next.js app"

get_session

Retrieves the full results of a verification session — transcript, AI analysis, node reports, and media references.

"Show me the results of session <id>"

list_sessions

Queries past verification sessions with optional filtering by status, date range, client reference, or free-text search. Returns a cursor-paginated list.

"Show me the last 10 completed sessions"
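Consuming a cursor-paginated listing like this follows a standard loop: request a page, collect its items, and repeat with the returned cursor until it is exhausted. The page shape below (items / nextCursor) is an illustrative assumption, not a documented FaceSign response schema, and the sessions API is replaced with an in-memory stand-in so the sketch runs anywhere:

```typescript
// Drain a cursor-paginated listing. Field names are assumptions for
// illustration; check the actual list_sessions response for the real shape.
interface Page<T> { items: T[]; nextCursor: string | null }

async function listAll<T>(fetchPage: (cursor: string | null) => Promise<Page<T>>): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | null = null;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor !== null);
  return all;
}

// In-memory stand-in for the sessions API, for demonstration only.
const pages: Record<string, Page<string>> = {
  start: { items: ["sess_1", "sess_2"], nextCursor: "p2" },
  p2: { items: ["sess_3"], nextCursor: null },
};
listAll((c) => Promise.resolve(pages[c ?? "start"])).then((ids) => console.log(ids.length)); // 3
```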

Node types available in flows

FaceSign flows are assembled from these 13 node types:

| Node type | Purpose |
| --- | --- |
| START | Entry point for every flow |
| END | Terminal node — session ends here |
| PERMISSIONS | Request camera and/or microphone access with a custom prompt or branch |
| CONVERSATION | AI-led conversational turn with custom prompts and conditional outcomes |
| LIVENESS_DETECTION | Deepfake and liveness check on the live video feed |
| FACE_SCAN | High-quality face capture with an oval overlay |
| FACE_COMPARE | Compare two face images (session video, face scan, document photo, or provided image) |
| RECOGNITION | Match the user's face against previously registered faces |
| DOCUMENT_SCAN | Capture and extract data from identity documents |
| ENTER_EMAIL | Collect the user's email address without sending an OTP |
| TWO_FACTOR_EMAIL | Send and verify an email one-time passcode |
| TWO_FACTOR_SMS | Send and verify an SMS one-time passcode |
| DATA_VALIDATION | Validate collected data and branch the flow on the result |
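The node types above can be captured as a union type, which also makes it easy to sketch the kind of structural check the MCP's validation step might perform. The rules here (a flow begins at START and terminates at END) are a toy approximation; the real validator runs server-side and is not documented here:

```typescript
// The 13 node types from the table above, as a TypeScript union.
type NodeType =
  | "START" | "END" | "PERMISSIONS" | "CONVERSATION" | "LIVENESS_DETECTION"
  | "FACE_SCAN" | "FACE_COMPARE" | "RECOGNITION" | "DOCUMENT_SCAN"
  | "ENTER_EMAIL" | "TWO_FACTOR_EMAIL" | "TWO_FACTOR_SMS" | "DATA_VALIDATION";

// Toy structural check, approximating (not reproducing) the MCP's validator:
// every flow must start at START and end at END.
function isStructurallyValid(flow: NodeType[]): boolean {
  return flow.length >= 2 && flow[0] === "START" && flow[flow.length - 1] === "END";
}

console.log(isStructurallyValid(["START", "LIVENESS_DETECTION", "TWO_FACTOR_EMAIL", "END"])); // true
```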

Recipes

Each recipe shows a natural-language prompt you give your AI assistant and what the MCP does behind the scenes. Pick one to see the full pattern.

Wire transfer with coercion detection. The most common FaceSign use case: verify the account holder before a high-value transfer, and detect if they are acting under duress.

Prompt:

Build a step-up verification for wire transfers over $5,000. Start with liveness
detection, then ask the user to confirm the transfer details in a conversation.
Include coercion detection. Send an email OTP as the final step.

What the MCP builds:

START — Initializes the verification session.

LIVENESS_DETECTION — Interactive deepfake check. The avatar asks the user to perform randomized actions to confirm a live human.

CONVERSATION — The avatar asks the user to confirm the recipient, amount, and purpose. Six AI models run in parallel; coercion detection analyzes vocal stress, gaze patterns, and response timing throughout the exchange.

TWO_FACTOR_EMAIL — Sends a one-time passcode to the user's email for a second authentication factor.

END — Session completes. Results are available via webhook or get_session.
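The five steps above can be written out as an ordered node list. This representation is purely illustrative — it is not the MCP's actual flow JSON schema — but it makes the shape of the recipe concrete:

```typescript
// The wire-transfer recipe as an ordered node list. Illustrative only;
// the real flow JSON produced by the MCP has its own (undocumented) schema.
const wireTransferFlow = [
  { type: "START" },
  { type: "LIVENESS_DETECTION" },
  { type: "CONVERSATION", note: "confirm recipient, amount, purpose; coercion detection runs throughout" },
  { type: "TWO_FACTOR_EMAIL" },
  { type: "END" },
];

console.log(wireTransferFlow.map((n) => n.type).join(" -> "));
// "START -> LIVENESS_DETECTION -> CONVERSATION -> TWO_FACTOR_EMAIL -> END"
```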

Account recovery without passwords. Replace security questions with a face-based recovery flow. The user proves identity through biometric recognition instead of knowledge-based answers.

Prompt:

Build an account recovery flow. Collect the user's email, verify their face
against our stored biometric, then do a liveness check. If everything passes,
let them proceed to password reset.

What the MCP builds:

START — Initializes the session.

ENTER_EMAIL — Collects the user's email address to look up their account.

RECOGNITION — Compares the live face against the stored biometric fingerprint for that account.

LIVENESS_DETECTION — Confirms the person is physically present (not a photo or deepfake).

END — Session completes. Your app receives the verification result and can proceed with the password reset.

KYC-style onboarding with document scan. While FaceSign is designed for post-login step-up, you can build a first-time identity enrollment flow that captures a document and creates a biometric fingerprint for future recognition.

Prompt:

Build an onboarding flow for new users. Capture their government ID, scan their
face, do a liveness check, then have the avatar welcome them and explain how
future verifications will work.

What the MCP builds:

START — Initializes the session.

DOCUMENT_SCAN — Captures and validates the user's government-issued ID.

FACE_SCAN — Captures the user's face and creates a biometric fingerprint for future recognition.

LIVENESS_DETECTION — Confirms a live human is present.

CONVERSATION — The avatar welcomes the user and explains what to expect in future verification sessions.

END — Session completes. The biometric fingerprint is stored for future RECOGNITION nodes.

Export and deploy to production. After building and testing any flow, export it as a production-ready application.

Prompt:

Export this flow as a Next.js app. I want to deploy it to Vercel.

What the MCP does:

  1. Calls export_app with the current flow.
  2. Generates a complete Next.js project containing the React frontend component with the FaceSign widget, API routes for creating and managing sessions, an .env.local template, and deployment-ready configuration.

Deploy steps:

cd exported-app
npm install
echo "FACESIGN_API_KEY=sk_prod_your_key" > .env.local
vercel deploy

Troubleshooting

If something isn't working with the FaceSign MCP, start with the basics: confirm the server URL in your MCP config, make sure set_api_key was the first tool call, and check that your key prefix (sk_test_ or sk_prod_) matches the environment you expect.

Still stuck?

Contact developers@facesign.ai with your client name and version, the error message or unexpected behavior, and your MCP config (with the API key redacted).


When to use MCP vs. the API

| Use MCP when... | Use the API when... |
| --- | --- |
| You want to build a flow from a description | You have an exact flow spec already |
| You're prototyping or iterating quickly | You're integrating into an existing codebase |
| You want a full exported app | You need fine-grained control over every step |
| Your team uses AI assistants for development | Your team prefers traditional SDK integration |

Both paths produce the same result — a verification session powered by the same API. MCP is faster for getting started; the REST API gives you full control.



Last updated: 2026-04-17 · MCP server: 2.8.0 · Changelog
