MCP: The Universal Protocol for AI Tool Servers
A deep dive into the Model Context Protocol—how it works, why it matters, and how to implement it for your AI agents.
Remember the chaos of proprietary phone chargers? Every device needed its own cable, creating a drawer full of incompatible connectors. The AI agent ecosystem faces the same fragmentation today—except instead of charging cables, it's tool integration protocols.
Enter the Model Context Protocol (MCP): Anthropic's answer to the AI integration chaos. Like USB-C revolutionized device connectivity, MCP promises to standardize how AI agents connect to tools, databases, and external systems. No more building custom integrations for every AI provider. No more maintaining multiple protocol implementations. Just one standard that works everywhere.
But MCP is more than just another protocol—it's becoming the foundation for the next generation of AI agent infrastructure. With adoption by OpenAI, Google, and Microsoft in 2025, MCP is rapidly becoming the industry standard. Here's everything you need to know to implement, test, and deploy MCP servers in production.
Traditional API integrations require AI providers to build custom connectors for every service. This creates an N×M problem: N AI providers times M services means N×M bespoke integrations to build and maintain. MCP collapses this to N+M with a universal protocol that any AI can speak and any service can implement.
MCP servers expose three types of capabilities to AI agents:
🔧 Tools - Functions that agents can execute
📚 Resources - Data sources agents can access
💬 Prompts - Pre-defined interaction templates
This trinity of capabilities transforms any application into an AI-ready tool server that any MCP-compatible agent can utilize.
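As a quick sketch of how a server declares these capabilities with the official TypeScript SDK (the server name here is a placeholder):

import { Server } from '@modelcontextprotocol/sdk/server/index.js';

// A server advertising all three capability types; the empty objects mean
// "supported, with no optional features such as listChanged or subscribe"
const server = new Server(
  { name: 'example-server', version: '1.0.0' },
  { capabilities: { tools: {}, resources: {}, prompts: {} } }
);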
At its core, MCP follows a client-server architecture built on JSON-RPC 2.0. Why JSON-RPC 2.0? Because it's lightweight, human-readable, transport-agnostic, and already has mature implementations in virtually every language. Every MCP message follows this structure:
{
"jsonrpc": "2.0",
"id": "unique-request-id",
"method": "method-name",
"params": {
// method-specific parameters
}
}
Simple, consistent, predictable—exactly what you want in a protocol.
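In TypeScript, that structure is small enough to capture in a couple of types (a sketch of the wire format, not the SDK's own definitions):

// Requests carry an id and expect a response
interface JsonRpcRequest {
  jsonrpc: '2.0';
  id: string | number;
  method: string;
  params?: Record<string, unknown>;
}

// Notifications omit the id and expect no response
interface JsonRpcNotification {
  jsonrpc: '2.0';
  method: string;
  params?: Record<string, unknown>;
}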
MCP supports multiple transport protocols, each optimized for different use cases:
The simplest and most secure transport. The client launches the server as a subprocess and communicates via stdin/stdout.
Perfect for local development, desktop applications, and single-user tools.
Configuration:
{
"command": "node",
"args": ["my-mcp-server.js"],
"env": {
"API_KEY": "your-key"
}
}
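With the official TypeScript SDK, a client can launch and talk to such a server in a few lines. A minimal sketch (the server path and env var are placeholders):

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Spawn the server as a subprocess and speak MCP over stdin/stdout
const transport = new StdioClientTransport({
  command: 'node',
  args: ['my-mcp-server.js'],
  env: { API_KEY: 'your-key' },
});

const client = new Client({ name: 'stdio-test-client', version: '1.0.0' });
await client.connect(transport); // performs the initialize handshake for you
console.log(await client.listTools());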
The go-to transport for network-accessible MCP servers. A single /mcp endpoint handles all operations, supporting both request-response and streaming. Perfect for remote deployments, cloud-hosted services, and servers with multiple concurrent clients.
Required headers:
POST /mcp HTTP/1.1
Content-Type: application/json
Accept: application/json,text/event-stream
Mcp-Session-Id: session-123
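For a sense of what that looks like from code, here's a raw fetch sketch with no SDK involved (it assumes a server at localhost:8000 and an already-established session):

const response = await fetch('http://localhost:8000/mcp', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Accept': 'application/json,text/event-stream',
    'Mcp-Session-Id': 'session-123',
  },
  body: JSON.stringify({ jsonrpc: '2.0', id: 2, method: 'tools/list', params: {} }),
});
// The server answers with plain JSON or an SSE stream, depending on the operation
console.log(await response.text());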
SSE as a standalone transport is deprecated as of protocol revision 2025-03-26, which introduced Streamable HTTP. Use Streamable HTTP instead; it incorporates SSE for streaming responses when needed.
Every MCP connection begins with a three-step handshake that establishes capabilities and protocol compatibility:
The client introduces itself and declares its capabilities:
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2024-11-05",
"capabilities": {
"roots": {
"listChanged": true
},
"sampling": {}
},
"clientInfo": {
"name": "my-ai-agent",
"version": "1.0.0"
}
}
}
The server responds with its capabilities:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"protocolVersion": "2024-11-05",
"capabilities": {
"tools": {
"listChanged": false
},
"resources": {
"subscribe": true,
"listChanged": true
},
"prompts": {
"listChanged": false
}
},
"serverInfo": {
"name": "example-server",
"version": "1.0.0"
}
}
}
The client confirms it's ready:
{
"jsonrpc": "2.0",
"method": "notifications/initialized",
"params": {}
}
Now the connection is established and both parties know each other's capabilities. Time to get to work!
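Version negotiation falls out of this exchange: per the spec, a server that supports the requested protocolVersion echoes it back, and otherwise responds with a version it does support. A sketch of that server-side logic:

const SUPPORTED_VERSIONS = ['2024-11-05'];

function negotiateProtocolVersion(requested: string): string {
  // Echo the requested version if we support it; otherwise offer our latest
  return SUPPORTED_VERSIONS.includes(requested)
    ? requested
    : SUPPORTED_VERSIONS[SUPPORTED_VERSIONS.length - 1];
}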
Once connected, clients can discover and use server capabilities through these core methods:
// Request
{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/list",
"params": {}
}
// Response
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"tools": [
{
"name": "calculate",
"description": "Perform mathematical calculations",
"inputSchema": {
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Mathematical expression to evaluate"
}
},
"required": ["expression"]
}
}
]
}
}
// Request
{
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "calculate",
"arguments": {
"expression": "2 + 2"
}
}
}
// Response
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"content": [
{
"type": "text",
"text": "4"
}
]
}
}
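With the TypeScript SDK, these discovery and invocation round trips collapse to two method calls. Continuing the connected client from the stdio sketch earlier:

// `client` is a connected Client instance, as in the stdio sketch above
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // e.g. ['calculate']

const result = await client.callTool({
  name: 'calculate',
  arguments: { expression: '2 + 2' },
});
console.log(result.content); // [{ type: 'text', text: '4' }]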
// List available resources
{
"jsonrpc": "2.0",
"id": 4,
"method": "resources/list",
"params": {}
}
// Read a specific resource
{
"jsonrpc": "2.0",
"id": 5,
"method": "resources/read",
"params": {
"uri": "file:///data/config.json"
}
}
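On the server side, those two operations map to two handlers. A minimal sketch with the TypeScript SDK (the file URI is a placeholder):

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
import { readFile } from 'node:fs/promises';

const server = new Server(
  { name: 'resource-server', version: '1.0.0' },
  { capabilities: { resources: {} } }
);

// Advertise the resources this server can provide
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    { uri: 'file:///data/config.json', name: 'App configuration', mimeType: 'application/json' },
  ],
}));

// Return the contents of a requested resource
server.setRequestHandler(ReadResourceRequestSchema, async (request) => ({
  contents: [
    {
      uri: request.params.uri,
      mimeType: 'application/json',
      text: await readFile(new URL(request.params.uri), 'utf8'),
    },
  ],
}));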
Let's build a simple MCP server that exposes a calculation tool. Here's a complete TypeScript implementation:
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  {
    name: 'calculator-server',
    version: '1.0.0',
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

// Advertise the calculate tool
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'calculate',
      description: 'Evaluate mathematical expressions',
      inputSchema: {
        type: 'object',
        properties: {
          expression: {
            type: 'string',
            description: 'Math expression to evaluate',
          },
        },
        required: ['expression'],
      },
    },
  ],
}));

// Implement the tool
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'calculate') {
    try {
      // WARNING: eval is dangerous! Use a proper math library in production
      const result = eval(String(request.params.arguments?.expression ?? ''));
      return {
        content: [
          {
            type: 'text',
            text: String(result),
          },
        ],
      };
    } catch (error) {
      return {
        content: [
          {
            type: 'text',
            text: `Error: ${error instanceof Error ? error.message : String(error)}`,
          },
        ],
        isError: true,
      };
    }
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Start the server over stdio
const transport = new StdioServerTransport();
await server.connect(transport);
Save this as calculator-server.ts and run it with:
npx tsx calculator-server.ts
Your MCP server is now ready to accept connections!
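Before anything production-bound, swap out eval, which will happily execute arbitrary JavaScript rather than just arithmetic. One option is a real expression parser; a sketch assuming the mathjs package is installed:

import { evaluate } from 'mathjs';

// mathjs parses the expression into its own AST and evaluates it,
// so input like "process.exit()" fails instead of executing
const result = evaluate('2 + 2');
console.log(result); // 4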
Testing is crucial for MCP development. Here's how to test a Streamable HTTP MCP server using curl (these examples assume one listening on localhost:8000):
curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json,text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2024-11-05",
      "capabilities": {},
      "clientInfo": {
        "name": "curl-test",
        "version": "1.0"
      }
    }
  }'

curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json,text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
    "params": {}
  }'

curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json,text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/list",
    "params": {}
  }'

curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json,text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
      "name": "calculate",
      "arguments": {
        "expression": "42 * 10"
      }
    }
  }'
Some servers require session management. With Streamable HTTP, the session ID is assigned by the server: it returns an Mcp-Session-Id header on the initialize response, and the client must echo that value on every subsequent request. Here's how to handle it:

# Initialize and capture the server-assigned session ID from the response headers
SESSION_ID=$(curl -s -D - -o /dev/null -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json,text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2024-11-05",
      "capabilities": {},
      "clientInfo": {
        "name": "curl-test",
        "version": "1.0"
      }
    }
  }' | grep -i '^mcp-session-id:' | cut -d' ' -f2 | tr -d '\r')
# Use the same session for subsequent requests
curl -X POST http://localhost:8000/mcp \
-H "Content-Type: application/json" \
-H "Mcp-Session-Id: $SESSION_ID" \
-d '{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/list",
"params": {}
}'
# Clean up the session when done
curl -X DELETE http://localhost:8000/mcp \
-H "Mcp-Session-Id: $SESSION_ID"
For stdio servers, use echo and pipes:
# Simple one-shot test
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}' | npx tsx calculator-server.ts
# Interactive testing with named pipes
mkfifo /tmp/mcp_in /tmp/mcp_out
npx tsx calculator-server.ts < /tmp/mcp_in > /tmp/mcp_out &
# Hold the pipe's write end open so the server doesn't exit on EOF after the first request
exec 3> /tmp/mcp_in
# Send requests
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}' >&3
# Read responses
cat /tmp/mcp_out
MCP uses standard JSON-RPC error codes plus custom ones for protocol-specific issues:
| Code | Meaning | When to Use |
|--------|-------------------|------------------------------|
| -32700 | Parse error | Invalid JSON |
| -32600 | Invalid request | Missing required fields |
| -32601 | Method not found | Unknown method name |
| -32602 | Invalid params | Wrong parameter types/values |
| -32603 | Internal error | Server-side failure |
{
"jsonrpc": "2.0",
"id": "request-id",
"error": {
"code": -32601,
"message": "Method not found",
"data": {
"method": "unknown/method",
"availableMethods": ["tools/list", "tools/call"]
}
}
}
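The TypeScript SDK ships McpError and an ErrorCode enum for exactly these codes, so a handler can produce the response above like this (the availableMethods detail is illustrative):

import { McpError, ErrorCode } from '@modelcontextprotocol/sdk/types.js';

// Throwing McpError inside a request handler yields a spec-compliant error response
function rejectUnknownMethod(method: string): never {
  throw new McpError(ErrorCode.MethodNotFound, 'Method not found', {
    method,
    availableMethods: ['tools/list', 'tools/call'],
  });
}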
The MCP ecosystem is growing rapidly with servers for popular services:
GitHub MCP Server
Slack MCP Server
Postgres MCP Server
Google Drive MCP Server
The community has built MCP servers for everything from weather APIs to cryptocurrency exchanges. Check out the comprehensive list at: https://github.com/cyanheads/model-context-protocol-resources
MCP servers handle sensitive operations. Follow these security guidelines:
Never expose an MCP server without authentication. For Streamable HTTP, the cleanest place to enforce it is the HTTP layer, before a request ever reaches the MCP handlers. A sketch using Express middleware (validateToken is a hypothetical helper):

app.use('/mcp', (req, res, next) => {
  // Reject unauthenticated requests before they reach the MCP server
  const token = req.headers['authorization'];
  if (!validateToken(token)) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  next();
});
Always validate and sanitize inputs:
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  // Validate the tool name against an explicit allowlist
  if (!ALLOWED_TOOLS.includes(name)) {
    throw new Error(`Unknown tool: ${name}`);
  }
  // Validate arguments against the tool's declared schema
  const schema = TOOL_SCHEMAS[name];
  if (!validateSchema(args, schema)) {
    throw new Error('Invalid arguments');
  }
  // Safe to proceed
  return executeTool(name, args);
});
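In practice, a schema library does the heavy lifting here. The TypeScript SDK itself validates with zod, and the hypothetical validateSchema above could be replaced by something like:

import { z } from 'zod';

// Declared once per tool; parse failures become client-facing errors
const CalculateArgs = z.object({
  expression: z.string().min(1).max(1000),
});

const parsed = CalculateArgs.safeParse({ expression: '2 + 2' });
if (!parsed.success) {
  throw new Error(`Invalid arguments: ${parsed.error.message}`);
}
console.log(parsed.data.expression); // typed as string from here on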
Prevent abuse with rate limiting:
const WINDOW_MS = 60_000;
const MAX_REQUESTS_PER_MINUTE = 60;
const rateLimiter = new Map(); // clientId -> { count, windowStart }

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const clientId = getClientId(request); // hypothetical: derive from session ID or client IP
  const now = Date.now();
  const entry = rateLimiter.get(clientId);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    // Start a fresh one-minute window for this client
    rateLimiter.set(clientId, { count: 1, windowStart: now });
  } else if (++entry.count > MAX_REQUESTS_PER_MINUTE) {
    throw new Error('Rate limit exceeded');
  }
  // Process request
});
For production, always use:
- The exact "jsonrpc": "2.0" version string in every message
- A unique id for every request (notifications carry no id)
- The Mcp-Session-Id header consistently across a session

MCP adoption has exploded in 2025 with major players embracing the standard:
- OpenAI - Native MCP support in GPT models
- Google - Gemini models speak MCP natively
- Microsoft - Azure AI services include MCP connectors
- Amazon - Bedrock supports MCP servers
This industry-wide adoption means you can build one MCP server and reach agents from every major provider.
While MCP standardizes connections, you still need visibility into what's happening. AgentFlare provides comprehensive observability for MCP servers:
Just point your MCP client through AgentFlare's proxy:
const client = new Client({
transport: new HTTPTransport({
baseUrl: "https://your-workspace.agentflare.com/proxy",
headers: {
"Authorization": `Bearer ${API_KEY}`,
"X-Target-Server": "your-mcp-server"
}
})
});
Now every MCP interaction is automatically monitored, giving you the insights needed to optimize performance, control costs, and debug issues.
Ready to implement MCP? Here's your action plan:
1. Start with a simple, well-defined tool that would benefit from AI integration.
2. Implement initialize and tools/list so clients can discover your server.
3. Implement tools/call for your specific tools.

MCP is more than a protocol—it's becoming the foundation for AI agent infrastructure. As the ecosystem matures, the companies that master MCP today will have a significant advantage as AI agents become central to business operations.
The Model Context Protocol represents a fundamental shift in how we think about AI integration. Instead of building custom connectors for every AI provider, we can now build once and support them all. Instead of proprietary protocols, we have an open standard. Instead of fragmentation, we have unity.
For developers, MCP means less time on integration and more time on innovation. For businesses, it means faster deployment and lower costs. For the industry, it means a more robust and interoperable AI ecosystem.
The USB-C moment for AI is here. The question isn't whether to adopt MCP—it's how quickly you can get started.
Ready to monitor your MCP servers? Start with AgentFlare for complete observability of your AI tool infrastructure.