Build an MCP Server and Client with Xano

Model Context Protocol (MCP) opens up a powerful new way to connect AI to your backend logic — and with Xano's built-in MCP support, you can have a working server and client without leaving your workspace. Here's how to set it all up, from configuring your tools to executing them through an AI-powered chatbot.

Setting Up Your MCP Server

Xano makes launching an MCP server as simple as clicking a button inside the AI Tools section of your workspace. Once created, you can configure server-level instructions that help any connected AI understand the purpose and behavior of your tools. These instructions are delivered automatically when a client connects to your server.

Each tool you add to your server works just like an existing Xano function stack — no new concepts to learn. You define tool instructions, set up inputs, and add descriptions that the AI will use to determine when and how to call the tool. If you already have API endpoints built, you can convert them directly to MCP tools using the built-in "Convert to Tool" option, making your existing business logic instantly AI-accessible.
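Under the hood, an MCP tool is described by a name, a description, and a JSON Schema for its inputs. Here's a rough sketch of what a converted endpoint's definition might look like; the tool name, description, and fields are hypothetical, not Xano's actual output:

```python
# Hypothetical shape of an MCP tool definition, roughly what a converted
# Xano endpoint would expose to a connected client. Names are illustrative.
order_tool = {
    "name": "get_orders",
    "description": "Return the authenticated user's recent orders.",
    "inputSchema": {  # standard JSON Schema, per the MCP spec
        "type": "object",
        "properties": {
            "limit": {"type": "integer", "description": "Max orders to return"},
        },
        "required": [],
    },
}
```

The description and input schema are what the AI reads when deciding whether and how to call the tool, so it pays to write them as carefully as you would public API docs.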

Enabling User Authentication on Your Tools

One of the standout features here is that your MCP tools support Xano's native authentication system. This means each user gets a personalized experience — your tools only return data tied to the authenticated user making the request, just like a standard secured API endpoint. You can build a multi-user AI agent experience without any extra infrastructure.
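Conceptually, per-user access works the same way it does for a secured endpoint: the client presents the user's token when it connects. A minimal sketch, assuming a Bearer-token header over HTTP (the URL and token are placeholders, and the exact header your Xano instance expects may differ):

```python
# Sketch: attaching a user's Xano auth token when connecting to an MCP
# server over HTTP. URL and token are placeholders; check your instance's
# MCP connection details for the exact format Xano expects.
def build_connection(server_url: str, user_token: str) -> dict:
    """Return connection settings for a hypothetical HTTP MCP client."""
    return {
        "url": server_url,
        "headers": {"Authorization": f"Bearer {user_token}"},
    }

conn = build_connection("https://your-instance.xano.io/mcp", "USER_JWT")
```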

Building the MCP Client

The client lives inside a regular Xano API endpoint and uses two new function stack statements: MCP List Tools and MCP Call Tool. Here's the flow you'll build:

Step 1 — Retrieve and Transform Tools: Use MCP List Tools to pull all tool definitions from your server via its connection URL. Because OpenAI expects a slightly different format than the raw MCP response, you'll use Xano's template engine to convert the tool list into the structure OpenAI requires — mapping input schemas to parameters and adding the required type fields.
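The transform Step 1 describes can be sketched in Python. This assumes the raw MCP response uses the spec's name / description / inputSchema fields; the mapping into OpenAI's tools format is standard, but Xano's template-engine version of it will look different in practice:

```python
# Map raw MCP tool definitions into the structure OpenAI's chat
# completions API expects for its "tools" parameter.
def mcp_tools_to_openai(mcp_tools: list[dict]) -> list[dict]:
    openai_tools = []
    for tool in mcp_tools:
        openai_tools.append({
            "type": "function",  # the "type" field OpenAI requires
            "function": {
                "name": tool["name"],
                "description": tool.get("description", ""),
                # MCP's inputSchema is already JSON Schema, which is
                # what OpenAI's "parameters" field expects.
                "parameters": tool.get(
                    "inputSchema", {"type": "object", "properties": {}}
                ),
            },
        })
    return openai_tools
```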

Step 2 — Contact OpenAI: With your tools formatted correctly, you send the user's query, conversation history, and tool definitions to OpenAI. The AI responds not by executing the tool itself, but by returning the tool name and the arguments it wants to pass in.
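The request body for Step 2 might be assembled like this; the model name is illustrative, and the actual HTTP call (via Xano's external API request or the OpenAI SDK) is omitted:

```python
# Sketch of Step 2: building the request body for OpenAI's chat
# completions endpoint. "gpt-4o" is a placeholder for any tool-capable model.
def build_chat_request(user_query: str, history: list[dict],
                       tools: list[dict]) -> dict:
    return {
        "model": "gpt-4o",
        "messages": history + [{"role": "user", "content": user_query}],
        "tools": tools,
        # "auto" lets the model decide whether a tool call is needed
        "tool_choice": "auto",
    }
```

When the model decides to use a tool, the response's `choices[0].message.tool_calls` carries the tool name and a JSON-encoded arguments string rather than a normal text reply.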

Step 3 — Execute the Tool and Return a Response: Using the tool name and arguments from OpenAI, you call MCP Call Tool to actually run the tool on your Xano server. You then pass the tool result back to OpenAI in a follow-up request so it can generate a natural language response summarizing what happened.
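Step 3 might look like the following sketch. Here `call_mcp_tool` is just a stand-in for Xano's MCP Call Tool statement, and the appended message follows OpenAI's convention for returning tool results (a "tool" role message tied to the original call's id):

```python
import json

# Sketch of Step 3: parse the assistant's tool call, run the tool, and
# append the result to the conversation for the follow-up request.
def handle_tool_call(tool_call: dict, call_mcp_tool,
                     messages: list[dict]) -> list[dict]:
    name = tool_call["function"]["name"]
    # OpenAI returns arguments as a JSON-encoded string
    args = json.loads(tool_call["function"]["arguments"])
    result = call_mcp_tool(name, args)  # stands in for MCP Call Tool
    # Feed the result back so the model can summarize it in plain language.
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),
    })
    return messages
```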

Getting Started Quickly

Everything shown here is available as a downloadable snippet — free and paid accounts alike can install it directly. The only configuration you'll need is adding your OpenAI API key as an environment variable, and your chatbot will be ready to run. It's a complete starting point for building AI agents that interact with real user data through your own backend.
