Build an MCP Server: Complete Tutorial for Custom AI Agent Tools (2026)
The Model Context Protocol (MCP) is how you give AI agents real capabilities — database access, API calls, file operations, anything your business needs. Instead of hoping prompt engineering works, you build actual tools the agent can use.
This tutorial walks you through building a production MCP server from scratch. By the end, you'll have a working server that connects to Claude Desktop, Cursor, and any MCP-compatible client.
Why Build an MCP Server?
Before MCP, giving an AI agent access to your systems meant:
- Writing custom function-calling wrappers for every LLM provider
- Rebuilding integrations when switching from OpenAI to Claude to Gemini
- No standard way to share tools across projects or teams
MCP changes this. Build your tool server once, and it works with every MCP-compatible client — Claude Desktop, Cursor, Windsurf, Cline, your own apps, and hundreds more.
Think of MCP like a USB port for AI. You build the device (server), and any computer (LLM client) can use it.
MCP Architecture in 30 Seconds
The protocol gives a server three core primitives:
- Tools — Functions the AI can call (like API endpoints). "Search the database", "Send an email", "Create a ticket".
- Resources — Data the AI can read (like GET endpoints). "Current user profile", "Latest metrics", "Company knowledge base".
- Prompts — Reusable prompt templates. "Analyze this data using our standard format", "Write a response in brand voice".
The server exposes these over stdio (local, for desktop apps) or SSE/Streamable HTTP (remote, for web apps and teams).
Project Setup
We're building with TypeScript and the official @modelcontextprotocol/sdk. Here's the project structure:
my-mcp-server/
├── src/
│ ├── index.ts # Server entry point
│ ├── tools/
│ │ ├── database.ts # Database query tool
│ │ ├── api.ts # External API tool
│ │ └── files.ts # File operations tool
│ ├── resources/
│ │ └── metrics.ts # Live metrics resource
│ └── prompts/
│ └── analysis.ts # Analysis prompt template
├── package.json
├── tsconfig.json
└── README.md
1 Initialize the Project
mkdir my-mcp-server && cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node tsx
# Create tsconfig
cat > tsconfig.json << 'EOF'
{
"compilerOptions": {
"target": "ES2022",
"module": "Node16",
"moduleResolution": "Node16",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"declaration": true
},
"include": ["src/**/*"]
}
EOF
mkdir -p src/tools src/resources src/prompts
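While you're here, add scripts to the generated package.json — we'll use npm run build later for the Docker deploy. The exact script names and the "type": "module" field are just the setup this tutorial's ESM import style assumes:
// package.json — fields to add alongside what npm init generated
{
  "type": "module",
  "scripts": {
    "dev": "tsx src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js"
  }
}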
2 Build the Server Core
This is the entry point. It creates the MCP server and registers all tools, resources, and prompts:
// src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
name: "my-business-tools",
version: "1.0.0",
capabilities: {
tools: {},
resources: {},
prompts: {},
},
});
// ── Tool 1: Database Query ──────────────────────────
server.tool(
"query_customers",
"Search the customer database by name, email, or status",
{
query: z.string().describe("Search term"),
status: z.enum(["active", "churned", "trial"])
.optional()
.describe("Filter by customer status"),
limit: z.number().default(10).describe("Max results"),
},
async ({ query, status, limit }) => {
// Replace with your actual database call
const results = await searchCustomers(query, status, limit);
return {
content: [{
type: "text",
text: JSON.stringify(results, null, 2),
}],
};
}
);
// ── Tool 2: Create Support Ticket ───────────────────
server.tool(
"create_ticket",
"Create a new support ticket in the helpdesk system",
{
title: z.string().describe("Ticket title"),
description: z.string().describe("Detailed description"),
priority: z.enum(["low", "medium", "high", "urgent"]),
customer_email: z.string().email().describe("Customer email"),
},
async ({ title, description, priority, customer_email }) => {
const ticket = await createHelpdeskTicket({
title, description, priority, customer_email,
});
return {
content: [{
type: "text",
text: `✅ Ticket created: #${ticket.id}\nPriority: ${priority}\nAssigned to: ${ticket.assignee}`,
}],
};
}
);
// ── Tool 3: Fetch External API Data ─────────────────
server.tool(
"get_weather",
"Get current weather for a city (example external API)",
{
city: z.string().describe("City name"),
},
async ({ city }) => {
const res = await fetch(
`https://wttr.in/${encodeURIComponent(city)}?format=j1`
);
const data = await res.json();
const current = data.current_condition[0];
return {
content: [{
type: "text",
text: `Weather in ${city}: ${current.temp_C}°C, ${current.weatherDesc[0].value}. Humidity: ${current.humidity}%, Wind: ${current.windspeedKmph} km/h`,
}],
};
}
);
// ── Tool 4: File Operations ─────────────────────────
server.tool(
"read_report",
"Read a report file from the reports directory",
{
filename: z.string().describe("Report filename (e.g. 'q4-2025.md')"),
},
async ({ filename }) => {
const fs = await import("fs/promises");
const path = await import("path");
// Sanitize: prevent directory traversal
const safe = path.basename(filename);
const content = await fs.readFile(
path.join("./reports", safe), "utf-8"
);
return {
content: [{ type: "text", text: content }],
};
}
);
// ── Tool 5: Calculate Metrics ───────────────────────
server.tool(
"calculate_mrr",
"Calculate Monthly Recurring Revenue from subscription data",
{
period: z.enum(["current", "previous", "compare"])
.describe("Which period to calculate"),
},
async ({ period }) => {
const mrr = await calculateMRR(period);
return {
content: [{
type: "text",
text: JSON.stringify(mrr, null, 2),
}],
};
}
);
// ── Resource: Live Metrics ──────────────────────────
server.resource(
"metrics",
"metrics://dashboard/current",
async (uri) => ({
contents: [{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify({
mrr: 48500,
customers: 234,
churn_rate: 2.1,
nps: 72,
updated_at: new Date().toISOString(),
}),
}],
})
);
// ── Resource: Company Knowledge ─────────────────────
server.resource(
"handbook",
"docs://company/handbook",
async (uri) => ({
contents: [{
uri: uri.href,
mimeType: "text/markdown",
text: "# Company Handbook\n\n## Support Policy\n- Response time: < 4 hours for urgent...\n- Escalation path: L1 → L2 → Engineering...",
}],
})
);
// ── Prompt: Analysis Template ───────────────────────
server.prompt(
"analyze_customer",
"Analyze a customer account using our standard framework",
  { customer_id: z.string().describe("Customer ID to analyze") },
async ({ customer_id }) => ({
messages: [{
role: "user",
content: {
type: "text",
text: `Analyze customer ${customer_id} using this framework:
1. **Health Score** — Activity last 30 days, feature adoption, support tickets
2. **Revenue Risk** — Contract end date, usage trends, expansion potential
3. **Action Items** — Specific next steps for the account team
Use the query_customers and calculate_mrr tools to pull real data.
Be specific and data-driven. Flag any churn risks immediately.`,
},
}],
})
);
// ── Start Server ────────────────────────────────────
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("MCP server running on stdio");
}
main().catch(console.error);
// ── Helper Functions (replace with real implementations) ──
async function searchCustomers(query: string, status?: string, limit?: number) {
// Your database query here
return [
{ id: 1, name: "Acme Corp", email: "team@acme.com", status: "active", mrr: 299 },
{ id: 2, name: "TechStart", email: "hi@techstart.io", status: "trial", mrr: 0 },
].filter(c =>
c.name.toLowerCase().includes(query.toLowerCase()) &&
(!status || c.status === status)
).slice(0, limit);
}
async function createHelpdeskTicket(data: any) {
return { id: Math.floor(Math.random() * 10000), assignee: "support-team", ...data };
}
async function calculateMRR(period: string) {
return {
period,
mrr: 48500,
growth: 12.3,
new_mrr: 4200,
churned_mrr: 1100,
net_new: 3100,
};
}
The z.string().email() validation and path.basename() sanitization above are the kinds of small guardrails that keep LLM-supplied inputs from turning into injection or path-traversal bugs.
Connect to Claude Desktop
The fastest way to test your MCP server. Add it to Claude Desktop's config:
// ~/Library/Application Support/Claude/claude_desktop_config.json (macOS)
// %APPDATA%\Claude\claude_desktop_config.json (Windows)
{
"mcpServers": {
"my-business-tools": {
"command": "npx",
"args": ["tsx", "/absolute/path/to/my-mcp-server/src/index.ts"],
"env": {
"DATABASE_URL": "postgresql://...",
"API_KEY": "your-api-key"
}
}
}
}
Restart Claude Desktop. You'll see a 🔨 icon showing your tools are available. Ask Claude: "Search for customers named Acme" and watch it call your query_customers tool.
Connect to Cursor / Windsurf / Other Clients
Most MCP clients use the same config format. For Cursor, add to .cursor/mcp.json in your project root:
{
"mcpServers": {
"my-business-tools": {
"command": "npx",
"args": ["tsx", "./src/index.ts"],
"env": {}
}
}
}
Deploy as a Remote Server (SSE)
Local stdio is great for personal use. For teams or web apps, deploy as a remote HTTP server with Server-Sent Events (this variant also needs express: npm install express and npm install -D @types/express):
// src/remote.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express from "express";
const app = express();
const server = new McpServer({
name: "my-business-tools",
version: "1.0.0",
});
// ... register same tools, resources, prompts ...
// SSE endpoint for MCP clients
let transport: SSEServerTransport;
app.get("/sse", async (req, res) => {
transport = new SSEServerTransport("/message", res);
await server.connect(transport);
});
app.post("/message", async (req, res) => {
await transport.handlePostMessage(req, res);
});
// Health check
app.get("/health", (req, res) => {
res.json({ status: "ok", tools: 5, uptime: process.uptime() });
});
app.listen(3001, () => {
console.log("MCP server running on http://localhost:3001");
});
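One caveat: the single shared transport variable above only supports one connected client at a time — a second client connecting will clobber the first. For team use, keep one transport per session. A sketch replacing the /sse and /message handlers, assuming the SDK's convention of echoing the transport's session ID back as a sessionId query parameter on POSTs:
// One transport per connected client, keyed by the transport's session ID
const transports = new Map<string, SSEServerTransport>();

app.get("/sse", async (req, res) => {
  const transport = new SSEServerTransport("/message", res);
  transports.set(transport.sessionId, transport);
  res.on("close", () => transports.delete(transport.sessionId));
  await server.connect(transport);
});

app.post("/message", async (req, res) => {
  const sessionId = req.query.sessionId as string;
  const transport = transports.get(sessionId);
  if (!transport) {
    res.status(400).json({ error: "Unknown or expired session" });
    return;
  }
  await transport.handlePostMessage(req, res);
});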
5 Production Patterns
Pattern 1: Authentication Middleware
Protect your remote MCP server with API key or JWT auth:
// Middleware for remote server
function authMiddleware(req, res, next) {
const apiKey = req.headers["x-api-key"];
if (!apiKey || !validApiKeys.has(apiKey)) {
return res.status(401).json({ error: "Invalid API key" });
}
// Attach user context for tool-level permissions
req.user = getUserFromApiKey(apiKey);
next();
}
app.use("/sse", authMiddleware);
app.use("/message", authMiddleware);
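validApiKeys and getUserFromApiKey are placeholders — wire them to your own key store. The simplest possible version, assuming keys live in a (hypothetical) MCP_API_KEYS environment variable:
// Simplest possible key store: comma-separated keys in an env var.
// Replace with a database or secrets-manager lookup in production.
const validApiKeys = new Set(
  (process.env.MCP_API_KEYS ?? "").split(",").filter(Boolean)
);

function getUserFromApiKey(apiKey: string) {
  // Hypothetical lookup — map the key to whatever user context your tools need
  return { id: apiKey.slice(0, 8), roles: ["member"] };
}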
Pattern 2: Tool-Level Permissions
Not every user should access every tool. Gate sensitive operations:
server.tool(
"delete_customer",
"Permanently delete a customer record (admin only)",
{ customer_id: z.number() },
  async ({ customer_id }, extra) => {
    // Check permissions from the session context — with the SDK's auth support
    // this typically arrives via extra.authInfo, populated by your auth middleware
    if (!extra.authInfo?.scopes?.includes("admin")) {
return {
content: [{ type: "text", text: "❌ Requires admin role" }],
isError: true,
};
}
await deleteCustomer(customer_id);
return {
content: [{ type: "text", text: `Customer ${customer_id} deleted.` }],
};
}
);
Pattern 3: Rate Limiting
const toolCalls = new Map<string, number[]>();
const RATE_LIMIT = 60; // calls per minute
function checkRateLimit(toolName: string): boolean {
const now = Date.now();
const calls = toolCalls.get(toolName) || [];
const recent = calls.filter(t => now - t < 60_000);
if (recent.length >= RATE_LIMIT) return false;
recent.push(now);
toolCalls.set(toolName, recent);
return true;
}
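Then call it at the top of any tool handler — get_weather shown as an example; the wording of the error is up to you:
// Guard at the top of a tool handler:
if (!checkRateLimit("get_weather")) {
  return {
    content: [{ type: "text", text: "⏳ Rate limit hit for get_weather — try again in a minute." }],
    isError: true,
  };
}
// ... rest of the handler unchanged ...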
Pattern 4: Structured Error Handling
server.tool(
"query_database",
"Run a read-only SQL query",
{ sql: z.string() },
async ({ sql }) => {
try {
// Only allow SELECT statements
if (!sql.trim().toUpperCase().startsWith("SELECT")) {
return {
content: [{ type: "text", text: "❌ Only SELECT queries allowed" }],
isError: true,
};
}
const results = await db.query(sql);
return {
content: [{ type: "text", text: JSON.stringify(results.rows, null, 2) }],
};
} catch (error) {
return {
      content: [{ type: "text", text: `Query failed: ${error instanceof Error ? error.message : String(error)}` }],
isError: true,
};
}
}
);
Pattern 5: Caching Expensive Operations
const cache = new Map<string, { data: any; expires: number }>();
function cached<T>(key: string, ttlMs: number, fn: () => Promise<T>): Promise<T> {
const entry = cache.get(key);
if (entry && entry.expires > Date.now()) return entry.data;
return fn().then(data => {
cache.set(key, { data, expires: Date.now() + ttlMs });
return data;
});
}
// Usage in a tool:
const report = await cached(
`report-${period}`,
5 * 60 * 1000, // 5 minutes
() => generateExpensiveReport(period)
);
Real-World MCP Server Ideas
| Server | Tools | Use Case |
|---|---|---|
| CRM Server | search_contacts, create_deal, update_pipeline, log_activity | Sales agents that update your CRM automatically |
| DevOps Server | deploy_service, check_status, rollback, view_logs | AI-powered incident response |
| Content Server | search_assets, generate_image, publish_post, schedule | Content creation pipeline |
| Finance Server | get_invoices, create_expense, reconcile, forecast | Bookkeeping and financial analysis |
| HR Server | search_candidates, schedule_interview, send_offer | Recruitment automation |
| E-commerce Server | search_products, check_inventory, process_return | Customer support with real data |
🚀 Want the Complete Agent Playbook?
MCP servers are just the tools. The AI Employee Playbook shows you how to build the full agent — memory, decision-making, autonomy, and deployment.
Get the Playbook — €29
Testing Your MCP Server
Don't ship without testing. Here's how to test MCP servers properly:
Method 1: MCP Inspector
# Official MCP debugging tool
npx @modelcontextprotocol/inspector npx tsx src/index.ts
This opens a web UI where you can call each tool, inspect inputs/outputs, and debug issues.
Method 2: Programmatic Testing
// test/tools.test.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { InMemoryTransport } from "@modelcontextprotocol/sdk/inMemory.js";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
describe("MCP Tools", () => {
let client: Client;
beforeAll(async () => {
const server = createServer(); // your server factory
const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair();
await server.connect(serverTransport);
client = new Client({ name: "test", version: "1.0" });
await client.connect(clientTransport);
});
test("query_customers returns results", async () => {
const result = await client.callTool({
name: "query_customers",
arguments: { query: "Acme", limit: 5 },
});
expect(result.content[0].text).toContain("Acme");
});
test("create_ticket validates email", async () => {
await expect(client.callTool({
name: "create_ticket",
arguments: {
title: "Test",
description: "Test ticket",
priority: "low",
customer_email: "not-an-email",
},
})).rejects.toThrow();
});
});
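The test file assumes a runner that provides describe, test, and expect as globals — Vitest or Jest both work. With Vitest, for instance:
npm install -D vitest
npx vitest run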
Method 3: Claude Desktop Testing
The real test. Open Claude Desktop and try these prompts:
- "Search for all active customers" → should call
query_customers - "Create an urgent ticket for billing@acme.com about payment failure" → should call
create_ticket - "What's our current MRR and how does it compare to last month?" → should call
calculate_mrr - "Analyze customer 42 using the analysis framework" → should use the prompt template + tools
Deployment Options
| Platform | Transport | Best For | Cost |
|---|---|---|---|
| Local (stdio) | stdio | Personal use, Claude Desktop | Free |
| Railway | SSE | Quick remote deploy | ~$5/mo |
| Fly.io | SSE | Low-latency, global edge | ~$3/mo |
| Docker + VPS | SSE | Full control, self-hosted | ~$5/mo |
| Cloudflare Workers | Streamable HTTP | Serverless, global CDN | Free tier |
| AWS Lambda | Streamable HTTP | Enterprise, auto-scale | Pay per request |
Docker Deployment
# Dockerfile
FROM node:22-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY dist/ ./dist/
EXPOSE 3001
USER node
CMD ["node", "dist/remote.js"]
# Build and deploy
npm run build
docker build -t my-mcp-server .
docker run -p 3001:3001 \
-e DATABASE_URL=postgresql://... \
-e API_KEY=... \
my-mcp-server
7 Common MCP Server Mistakes
- No input validation — Always use Zod schemas. Never trust LLM-generated inputs blindly.
- Returning too much data — LLMs have context limits. Paginate, summarize, or limit results to what's needed.
- Missing error messages — Return isError: true with clear messages so the LLM can retry or explain the failure.
- No tool descriptions — The LLM decides which tool to use based on your descriptions. Vague descriptions = wrong tool calls.
- Exposing write operations without confirmation — Destructive tools should require explicit confirmation or be behind a permission gate.
- Ignoring rate limits on external APIs — Your MCP server will get called more than you expect. Cache and rate-limit everything.
- Hardcoded credentials — Use environment variables and validate them at startup (see the sketch below). Never commit API keys to the server code.
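Since Zod is already a dependency, one cheap way to enforce that last rule is validating process.env when the server starts — a sketch; the variable names are just the ones this tutorial's examples assume:
// src/env.ts — fail fast at startup if required credentials are missing
import { z } from "zod";

const envSchema = z.object({
  DATABASE_URL: z.string().min(1), // e.g. Postgres connection string
  API_KEY: z.string().min(1),      // e.g. external API credential
});

export const env = envSchema.parse(process.env);
// Import `env` instead of reading process.env directly inside your tools.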
60-Minute Quickstart
Here's your speedrun to a working MCP server:
0:00 – 0:10 → Setup
mkdir my-mcp-server && cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript tsx @types/node
0:10 – 0:30 → Build 3 Tools
Copy the server core code from above. Replace the helper functions with your actual data sources. Start with one real tool (e.g., query your database) and two simple ones.
0:30 – 0:40 → Test with MCP Inspector
npx @modelcontextprotocol/inspector npx tsx src/index.ts
Call each tool, verify the outputs, fix any schema issues.
0:40 – 0:50 → Connect to Claude Desktop
Add the config to claude_desktop_config.json, restart Claude Desktop, test with natural language prompts.
0:50 – 1:00 → Add Error Handling & Ship
Add try/catch to every tool, validate inputs, add rate limiting. Commit to git. Done.
📬 The Operator Signal
Weekly dispatch: AI agent patterns, MCP server examples, and automation strategies that actually work.
Subscribe Free
What's Next
You've built an MCP server. Here's where to go from here:
- Add more tools — Connect your CRM, helpdesk, analytics, or any API your business uses
- Build agent workflows — Chain multiple tool calls together for complex tasks
- Deploy remotely — Share your server with your team via SSE transport
- Add memory — Combine MCP tools with agent memory for context-aware operations
- Publish to MCP Hub — Open-source your server for the community
MCP is the standard for AI agent tooling. Every server you build today becomes more valuable as more clients adopt the protocol.
Build the tools. Let the AI use them. That's the whole game.