
MCP Server Examples: Building Intelligent Integrations for AI Agents

Learn how to develop a Model Context Protocol (MCP) server that seamlessly integrates with OpenAI’s API, enabling powerful AI-driven tool execution and intelligent automation.
The Model Context Protocol (MCP) represents a paradigm shift in how artificial intelligence systems interact with external tools and data sources. When combined with OpenAI’s powerful language models, an MCP server becomes a gateway to intelligent automation, enabling AI systems to execute complex operations, retrieve real-time data, and integrate seamlessly with your existing infrastructure. This comprehensive guide walks you through the entire process of developing an MCP server that connects to OpenAI, from foundational concepts to production-ready implementation.
Whether you’re building a customer service automation platform, an intelligent data processing system, or a sophisticated business intelligence tool, understanding how to architect and implement an MCP server is essential for modern AI development. The integration between MCP servers and OpenAI creates a powerful ecosystem where AI models can reason about problems, decide which tools to use, and execute those tools with precision—all while maintaining security, reliability, and scalability.
The Model Context Protocol is an open standard that defines how AI models can discover and interact with external tools, services, and data sources. Rather than embedding all functionality directly into an AI model, MCP allows developers to create specialized servers that expose capabilities through a standardized interface. This separation of concerns enables better modularity, security, and scalability in AI applications.
At its core, MCP operates on a simple principle: the AI model (in this case, OpenAI’s GPT) acts as an intelligent orchestrator that can understand what tools are available, determine when to use them, and interpret their results. The MCP server acts as a provider of these tools, exposing them through a well-defined API that the AI model can discover and invoke. This creates a clean contract between the AI system and your custom business logic.
The beauty of MCP lies in its flexibility. Your server can expose tools for anything—database queries, API calls to third-party services, file processing, calculations, or even triggering complex workflows. The AI model learns about these capabilities and uses them intelligently within conversations, making decisions about which tools to invoke based on the user’s request and the context of the conversation.
The integration of MCP servers with OpenAI addresses a fundamental limitation of large language models: they have a knowledge cutoff and cannot directly interact with real-time systems or proprietary data. By implementing an MCP server, you extend the capabilities of OpenAI’s models far beyond their base training, enabling them to access current information, execute business logic, and integrate with your existing systems.
Consider these practical scenarios where MCP servers prove invaluable:

- A customer service chatbot that looks up order information and opens support tickets
- A data analysis assistant that queries internal databases on demand
- A content creation tool that searches your company’s knowledge base
- An operations assistant that checks inventory or initiates refunds
The architecture also provides significant advantages for development teams. Multiple teams can develop and maintain their own MCP servers independently, which are then composed together to create sophisticated AI applications. This modular approach scales well as your organization grows and your AI capabilities become more complex.
Before diving into implementation details, it’s crucial to understand the architectural flow of how an MCP server integrates with OpenAI. The process involves several key components working in concert:
The AI Model (OpenAI) initiates conversations and makes decisions about which tools to invoke. When the model determines that a tool call is necessary, it generates a structured request containing the tool name and parameters.
The MCP Client acts as a translator and intermediary. It receives tool invocation requests from OpenAI, translates them into the format expected by your MCP server, sends the request to the appropriate server, and returns the results back to OpenAI in the format the model expects.
The MCP Server is your custom application that exposes tools and capabilities. It receives requests from the MCP client, executes the requested operations (which might involve database queries, API calls, or complex computations), and returns structured results.
Tool Definitions are the contracts that define what tools are available, what parameters they accept, and what they return. These definitions are discovered by the MCP client and registered with OpenAI so the model knows what’s available.
This architecture creates a clean separation of concerns: OpenAI handles reasoning and decision-making, your MCP server handles domain-specific logic and data access, and the MCP client handles the communication protocol between them.
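To make this concrete, here is roughly the shape of a tool invocation as it appears in an OpenAI chat completions response — the model names the function and supplies JSON-encoded arguments (the values here are illustrative):

{
  "id": "call_abc123",
  "type": "function",
  "function": {
    "name": "get_customer_info",
    "arguments": "{\"customer_id\": \"cust_123\"}"
  }
}

The MCP client’s job is to unpack this structure, forward it to the MCP server, and feed the result back to the model as a tool message.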
The foundation of any successful MCP server is a clear definition of the tools you want to expose. This isn’t just a technical exercise—it’s a strategic decision about what capabilities your AI system needs to accomplish its goals.
Start by identifying the specific problems your AI system needs to solve. Are you building a customer service chatbot that needs to look up order information? A data analysis assistant that needs to query databases? A content creation tool that needs to access your company’s knowledge base? Each use case will have different tool requirements.
For each tool, define:

- A clear, descriptive name (e.g., get_customer_order_history, search_knowledge_base, execute_sql_query)
- The purpose it serves and when the model should invoke it
- The input parameters it accepts, including types and constraints
- The output format it returns

Here’s an example of well-defined tool specifications:
| Tool Name | Purpose | Input Parameters | Output Format | Use Case |
|---|---|---|---|---|
| get_customer_info | Retrieve customer details | customer_id (string) | JSON object with name, email, account_status | Customer service queries |
| search_orders | Find orders matching criteria | customer_id, date_range, status | Array of order objects | Order lookup and history |
| create_support_ticket | Open a new support case | customer_id, issue_description, priority | Ticket object with ID and confirmation | Issue escalation |
| check_inventory | Query product availability | product_id, warehouse_location | Inventory count and location details | Stock inquiries |
| process_refund | Initiate refund transaction | order_id, amount, reason | Transaction confirmation with reference number | Refund processing |
This table-based approach helps you think through the complete tool ecosystem before writing any code. It ensures consistency, clarity, and completeness in your tool definitions.
Creating an MCP server requires a solid development foundation. While MCP servers can be built in multiple languages, we’ll focus on the most popular approaches: TypeScript/Node.js and Python, as these have the most mature MCP libraries and community support.
For TypeScript/Node.js Development:
Start by creating a new Node.js project and installing the necessary dependencies:
mkdir mcp-server-openai
cd mcp-server-openai
npm init -y
npm install @modelcontextprotocol/sdk openai dotenv express cors
npm install --save-dev typescript @types/node ts-node
Create a tsconfig.json file to configure TypeScript:
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "lib": ["ES2020"],
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  }
}
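It’s also convenient to add build and run scripts to package.json, since the compiled entry point (dist/server.js) is what the MCP client will launch later. A minimal sketch — the script names are conventional, not required:

{
  "scripts": {
    "build": "tsc",
    "start": "node dist/server.js",
    "dev": "ts-node src/server.ts"
  }
}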
For Python Development:
Create a virtual environment and install dependencies:
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install mcp openai python-dotenv fastapi uvicorn
Regardless of your language choice, you’ll need:

- An OpenAI API key with access to models that support function calling
- A .env file to store sensitive credentials securely
- Access to whatever databases or external APIs your tools will use

The core of your MCP server is the server application that exposes your tools through the MCP protocol. This involves creating endpoints for tool discovery and tool execution.
TypeScript/Node.js Implementation:
Create a basic MCP server structure:
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Declare the tools capability so connecting clients know this server exposes tools
const server = new Server(
  { name: "openai-mcp-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Define your tools
const tools = [
  {
    name: "get_customer_info",
    description: "Retrieve customer information by ID",
    inputSchema: {
      type: "object",
      properties: {
        customer_id: {
          type: "string",
          description: "The unique customer identifier",
        },
      },
      required: ["customer_id"],
    },
  },
  // Add more tools here
];

// Handle tool listing requests
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return { tools };
});

// Handle tool execution requests
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === "get_customer_info") {
    const customerId = (args as { customer_id: string }).customer_id;
    // Query your database or API here; this static response is a placeholder
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            id: customerId,
            name: "John Doe",
            email: "john@example.com",
          }),
        },
      ],
    };
  }

  return {
    content: [{ type: "text", text: `Unknown tool: ${name}` }],
    isError: true,
  };
});

// Start the server over stdio
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main();
Python Implementation:
import asyncio
import json

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

app = Server("openai-mcp-server")

# Define your tools
TOOLS = [
    Tool(
        name="get_customer_info",
        description="Retrieve customer information by ID",
        inputSchema={
            "type": "object",
            "properties": {
                "customer_id": {
                    "type": "string",
                    "description": "The unique customer identifier"
                }
            },
            "required": ["customer_id"]
        }
    )
]

@app.list_tools()
async def list_tools() -> list[Tool]:
    return TOOLS

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "get_customer_info":
        customer_id = arguments.get("customer_id")
        # Implement your tool logic; this static result is a placeholder
        result = {
            "id": customer_id,
            "name": "John Doe",
            "email": "john@example.com"
        }
        return [TextContent(type="text", text=json.dumps(result))]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # MCP servers typically communicate over stdio rather than HTTP
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
The real power of your MCP server comes from the actual implementation of your tools. This is where you connect to databases, call external APIs, process data, and execute business logic.
Database Integration Example:
import { Pool } from "pg"; // PostgreSQL example

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

async function getCustomerInfo(customerId: string) {
  try {
    // Parameterized query prevents SQL injection
    const result = await pool.query(
      "SELECT id, name, email, account_status FROM customers WHERE id = $1",
      [customerId]
    );

    if (result.rows.length === 0) {
      return {
        error: "Customer not found",
        status: 404,
      };
    }

    return result.rows[0];
  } catch (error: any) {
    return {
      error: "Database query failed",
      details: error.message,
      status: 500,
    };
  }
}
External API Integration Example:
import axios from "axios";

async function searchExternalDatabase(query: string) {
  try {
    const response = await axios.get(
      "https://api.external-service.com/search",
      {
        params: { q: query },
        headers: {
          Authorization: `Bearer ${process.env.EXTERNAL_API_KEY}`,
        },
        timeout: 10000, // avoid hanging tool calls on a slow upstream
      }
    );
    return response.data;
  } catch (error: any) {
    return {
      error: "External API call failed",
      details: error.message,
    };
  }
}
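External services fail transiently, so it’s often worth wrapping calls like this in a retry helper with exponential backoff. A minimal sketch — the attempt count and delays are illustrative, not prescriptive:

async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Exponential backoff: wait 1s, then 2s, then 4s...
      const delayMs = 1000 * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

// Usage: const data = await withRetry(() => searchExternalDatabase("widgets"));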
Error Handling and Validation:
Robust error handling is critical for production MCP servers. Implement comprehensive validation and error handling:
function validateInput(args: any, schema: any): { valid: boolean; error?: string } {
  // Validate required fields
  for (const required of schema.required || []) {
    if (!(required in args)) {
      return { valid: false, error: `Missing required parameter: ${required}` };
    }
  }

  // Validate field types (typeof covers JSON Schema primitives like
  // "string", "number", and "boolean"; arrays and nested objects need
  // deeper checks or a full schema validator)
  for (const [key, property] of Object.entries(schema.properties || {})) {
    if (key in args) {
      const value = args[key];
      const expectedType = (property as any).type;
      if (typeof value !== expectedType) {
        return {
          valid: false,
          error: `Parameter ${key} must be of type ${expectedType}`,
        };
      }
    }
  }

  return { valid: true };
}
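Hand-rolled validation like this covers the basics, but a dedicated JSON Schema validator handles nested objects, arrays, formats, and other edge cases for you. A minimal sketch using the ajv package (one option among several):

import Ajv from "ajv";

const ajv = new Ajv();

function validateWithSchema(args: unknown, schema: any): { valid: boolean; error?: string } {
  const validate = ajv.compile(schema);
  if (validate(args)) {
    return { valid: true };
  }
  return { valid: false, error: ajv.errorsText(validate.errors) };
}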
The MCP client is the bridge between OpenAI and your MCP server. It handles the translation between OpenAI’s function-calling format and your MCP server’s protocol.
TypeScript/Node.js MCP Client:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import OpenAI from "openai";

class MCPOpenAIBridge {
  private mcpClient!: Client;
  private openaiClient: OpenAI;
  private availableTools: any[] = [];

  constructor() {
    this.openaiClient = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });
  }

  async initialize() {
    // Connect to the MCP server by spawning it as a child process
    const transport = new StdioClientTransport({
      command: "node",
      args: ["dist/server.js"],
    });
    this.mcpClient = new Client({
      name: "openai-mcp-client",
      version: "1.0.0",
    });
    await this.mcpClient.connect(transport);

    // Discover available tools
    const toolsResponse = await this.mcpClient.listTools();
    this.availableTools = toolsResponse.tools;
  }

  async executeWithOpenAI(userMessage: string) {
    const messages: any[] = [{ role: "user", content: userMessage }];

    // Convert MCP tool definitions to OpenAI's function-calling format
    const tools = this.availableTools.map((tool) => ({
      type: "function" as const,
      function: {
        name: tool.name,
        description: tool.description,
        parameters: tool.inputSchema,
      },
    }));

    let response = await this.openaiClient.chat.completions.create({
      model: "gpt-4",
      messages,
      tools,
    });

    // Keep executing tools until the model produces a final answer
    while (response.choices[0].finish_reason === "tool_calls") {
      const assistantMessage = response.choices[0].message;
      const toolCalls = assistantMessage.tool_calls || [];

      // The assistant message carrying the tool_calls must be appended
      // once, before the corresponding tool results
      messages.push(assistantMessage);

      for (const toolCall of toolCalls) {
        const toolName = toolCall.function.name;
        const toolArgs = JSON.parse(toolCall.function.arguments);

        // Execute the tool on the MCP server
        const toolResult = await this.mcpClient.callTool({
          name: toolName,
          arguments: toolArgs,
        });

        // Each tool result references the tool_call_id it answers
        messages.push({
          role: "tool",
          content: JSON.stringify(toolResult),
          tool_call_id: toolCall.id,
        });
      }

      // Get the next response from OpenAI
      response = await this.openaiClient.chat.completions.create({
        model: "gpt-4",
        messages,
        tools,
      });
    }

    return response.choices[0].message.content;
  }
}

// Usage (inside an async context)
const bridge = new MCPOpenAIBridge();
await bridge.initialize();
const result = await bridge.executeWithOpenAI(
  "What is the order history for customer 12345?"
);
console.log(result);
Security is paramount when building MCP servers that interact with sensitive data and external APIs. Implement multiple layers of security:
API Key Management:
import crypto from "crypto";

class APIKeyManager {
  private validKeys: Set<string> = new Set();

  constructor() {
    // Load valid API keys from environment or secure storage
    const keys = process.env.VALID_API_KEYS?.split(",") || [];
    this.validKeys = new Set(keys);
  }

  validateRequest(apiKey: string): boolean {
    return this.validKeys.has(apiKey);
  }

  generateNewKey(): string {
    return crypto.randomBytes(32).toString("hex");
  }
}
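If you expose your server over HTTP (using the Express dependency installed earlier), the manager plugs in as middleware. A sketch assuming clients send the key in an x-api-key header — the header name is a convention you can change:

import express from "express";

const app = express();
const keyManager = new APIKeyManager();

// Reject any request that doesn't carry a valid API key
app.use((req, res, next) => {
  const apiKey = req.header("x-api-key");
  if (!apiKey || !keyManager.validateRequest(apiKey)) {
    res.status(401).json({ error: "Invalid or missing API key" });
    return;
  }
  next();
});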
Request Signing and Verification:
import crypto from "crypto";

function signRequest(payload: any, secret: string): string {
  return crypto
    .createHmac("sha256", secret)
    .update(JSON.stringify(payload))
    .digest("hex");
}

function verifySignature(payload: any, signature: string, secret: string): boolean {
  const expectedSignature = signRequest(payload, secret);
  // timingSafeEqual throws if the buffers differ in length, so check first
  if (signature.length !== expectedSignature.length) {
    return false;
  }
  return crypto.timingSafeEqual(
    Buffer.from(signature),
    Buffer.from(expectedSignature)
  );
}
Rate Limiting:
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per windowMs
  message: "Too many requests from this IP, please try again later.",
});

app.use("/api/", limiter);
Input Sanitization:
function sanitizeInput(input: string): string {
  // Remove potentially dangerous characters
  return input
    .replace(/[<>]/g, "")
    .trim()
    .substring(0, 1000); // Limit length
}

function validateCustomerId(customerId: string): boolean {
  // Only allow alphanumeric characters and hyphens
  return /^[a-zA-Z0-9-]+$/.test(customerId);
}
Comprehensive testing ensures your MCP server works correctly and handles edge cases gracefully.
Unit Tests Example (Jest):
import { getCustomerInfo } from "../tools/customer";

describe("Customer Tools", () => {
  test("should return customer info for valid ID", async () => {
    const result = await getCustomerInfo("cust_123");
    expect(result).toHaveProperty("id");
    expect(result).toHaveProperty("name");
    expect(result).toHaveProperty("email");
  });

  test("should return error for invalid ID", async () => {
    const result = await getCustomerInfo("invalid");
    expect(result).toHaveProperty("error");
  });

  test("should handle database errors gracefully", async () => {
    // Mock database error
    const result = await getCustomerInfo("cust_error");
    expect(result).toHaveProperty("error");
    expect(result.status).toBe(500);
  });
});
Integration Tests:
describe("MCP Server Integration", () => {
let server: Server;
beforeAll(async () => {
server = new Server({ name: "test-server", version: "1.0.0" });
// Initialize server
});
test("should list all available tools", async () => {
const tools = await server.listTools();
expect(tools.length).toBeGreaterThan(0);
expect(tools[0]).toHaveProperty("name");
expect(tools[0]).toHaveProperty("description");
});
test("should execute tool and return result", async () => {
const result = await server.callTool({
name: "get_customer_info",
arguments: { customer_id: "cust_123" },
});
expect(result).toBeDefined();
});
});
FlowHunt provides a comprehensive platform for automating the entire lifecycle of MCP server development, testing, and deployment. Rather than manually managing each step of your MCP server workflow, FlowHunt enables you to create intelligent automation flows that handle repetitive tasks and ensure consistency across your development process.
Automated Testing and Validation:
FlowHunt can orchestrate your testing pipeline, running unit tests, integration tests, and end-to-end tests automatically whenever you commit code. This ensures that your MCP server tools are always functioning correctly before they’re deployed to production.
Continuous Integration and Deployment:
Set up FlowHunt workflows to automatically build, test, and deploy your MCP server whenever changes are pushed to your repository. This eliminates manual deployment steps and reduces the risk of human error.
Monitoring and Alerting:
FlowHunt can monitor your MCP server’s health, track API response times, and alert you to any issues. If a tool starts failing or performance degrades, you’ll be notified immediately so you can take action.
Documentation Generation:
Automatically generate API documentation for your MCP server tools, keeping your documentation in sync with your actual implementation. This ensures developers always have accurate, up-to-date information about available tools.
Performance Optimization:
FlowHunt’s analytics help you identify bottlenecks in your tool execution. You can see which tools are called most frequently, which ones have the highest latency, and where optimization efforts would have the most impact.
Deploying an MCP server to production requires careful planning and execution. Consider these deployment strategies:
Docker Containerization:
Create a Dockerfile for your MCP server:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
# Compile TypeScript (npm run build) before building the image
COPY dist ./dist
EXPOSE 8000
CMD ["node", "dist/server.js"]
Build and push to a container registry:
docker build -t my-mcp-server:1.0.0 .
docker tag my-mcp-server:1.0.0 myregistry.azurecr.io/my-mcp-server:1.0.0
docker push myregistry.azurecr.io/my-mcp-server:1.0.0
Kubernetes Deployment:
Deploy your containerized MCP server to Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
    spec:
      containers:
        - name: mcp-server
          image: myregistry.azurecr.io/my-mcp-server:1.0.0
          ports:
            - containerPort: 8000
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: openai-secrets
                  key: api-key
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-secrets
                  key: connection-string
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
Environment Configuration:
Use environment variables for all configuration:
# .env.production
OPENAI_API_KEY=sk-...
DATABASE_URL=postgresql://user:pass@host:5432/db
MCP_SERVER_PORT=8000
LOG_LEVEL=info
ENABLE_METRICS=true
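Load and validate these variables at startup so misconfiguration fails fast rather than surfacing as a confusing runtime error. A minimal sketch using the dotenv package installed earlier (the requireEnv helper is illustrative):

import dotenv from "dotenv";

dotenv.config();

// Fail fast at startup if a required variable is missing
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const openaiApiKey = requireEnv("OPENAI_API_KEY");
const databaseUrl = requireEnv("DATABASE_URL");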
As your MCP server grows in complexity, consider these advanced patterns:
Tool Composition:
Create higher-level tools that compose multiple lower-level tools:
// Helpers such as getOrderDetails, createRefund, and sendNotification are
// assumed to be implemented alongside getCustomerInfo
async function processCustomerRefund(customerId: string, orderId: string, amount: number) {
  // Get customer info
  const customer = await getCustomerInfo(customerId);

  // Get order details
  const order = await getOrderDetails(orderId);

  // Verify the order belongs to the customer
  if (order.customerId !== customerId) {
    throw new Error("Order does not belong to customer");
  }

  // Process the refund
  const refund = await createRefund(orderId, amount);

  // Notify the customer
  await sendNotification(customer.email, `Refund of $${amount} processed`);

  return refund;
}
Caching Strategy:
Implement caching to reduce latency and API calls:
import NodeCache from "node-cache";

const cache = new NodeCache({ stdTTL: 600 }); // 10 minute TTL

async function getCustomerInfoWithCache(customerId: string) {
  const cacheKey = `customer_${customerId}`;

  // Check cache first
  const cached = cache.get(cacheKey);
  if (cached) {
    return cached;
  }

  // Fetch from database
  const customer = await getCustomerInfo(customerId);

  // Store in cache
  cache.set(cacheKey, customer);
  return customer;
}
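Remember to invalidate cached entries whenever a write operation touches the underlying data — for example after a refund updates an order — so the model never reasons over stale results. A one-line sketch using NodeCache’s del:

// Call this from any tool that modifies customer data
function invalidateCustomerCache(customerId: string) {
  cache.del(`customer_${customerId}`);
}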
Async Job Processing:
For long-running operations, implement async job processing:
import Bull from "bull";

// Bull queues jobs in Redis (localhost:6379 by default)
const jobQueue = new Bull("mcp-jobs");

jobQueue.process(async (job) => {
  const { toolName, arguments: args } = job.data;
  // executeTool dispatches to the tool implementations shown earlier
  const result = await executeTool(toolName, args);
  return result;
});

async function executeToolAsync(toolName: string, args: any) {
  const job = await jobQueue.add(
    { toolName, arguments: args },
    { attempts: 3, backoff: { type: "exponential", delay: 2000 } }
  );
  return job.id;
}
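Because executeToolAsync returns only a job ID, the caller needs a companion lookup to retrieve the eventual result. A sketch using Bull’s getJob and getState APIs:

async function getToolJobResult(jobId: string) {
  const job = await jobQueue.getJob(jobId);
  if (!job) {
    return { error: "Job not found" };
  }
  const state = await job.getState();
  if (state === "completed") {
    return { status: state, result: job.returnvalue };
  }
  return { status: state };
}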
Production MCP servers require comprehensive monitoring and logging:
Structured Logging:
import winston from "winston";

const logger = winston.createLogger({
  format: winston.format.json(),
  transports: [
    new winston.transports.File({ filename: "error.log", level: "error" }),
    new winston.transports.File({ filename: "combined.log" }),
  ],
});

logger.info("Tool executed", {
  toolName: "get_customer_info",
  customerId: "cust_123",
  duration: 145,
  status: "success",
});
Metrics Collection:
import prometheus from "prom-client";

const toolExecutionDuration = new prometheus.Histogram({
  name: "mcp_tool_execution_duration_ms",
  help: "Duration of tool execution in milliseconds",
  labelNames: ["tool_name", "status"],
  // Default buckets are tuned for seconds; supply millisecond buckets instead
  buckets: [50, 100, 250, 500, 1000, 2500, 5000],
});

const toolExecutionCounter = new prometheus.Counter({
  name: "mcp_tool_executions_total",
  help: "Total number of tool executions",
  labelNames: ["tool_name", "status"],
});

async function executeToolWithMetrics(toolName: string, args: any) {
  const startTime = Date.now();
  try {
    const result = await executeTool(toolName, args);
    const duration = Date.now() - startTime;
    toolExecutionDuration.labels(toolName, "success").observe(duration);
    toolExecutionCounter.labels(toolName, "success").inc();
    return result;
  } catch (error) {
    const duration = Date.now() - startTime;
    toolExecutionDuration.labels(toolName, "error").observe(duration);
    toolExecutionCounter.labels(toolName, "error").inc();
    throw error;
  }
}
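Prometheus collects these metrics by scraping an HTTP endpoint, so expose the default registry — a sketch using Express alongside prom-client:

import express from "express";
import prometheus from "prom-client";

const metricsApp = express();

// Serve the default registry for Prometheus to scrape
metricsApp.get("/metrics", async (_req, res) => {
  res.set("Content-Type", prometheus.register.contentType);
  res.end(await prometheus.register.metrics());
});

metricsApp.listen(9090);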
Building an MCP server that connects to OpenAI represents a significant step forward in creating intelligent, integrated AI applications. By following the architectural patterns, implementation strategies, and best practices outlined in this guide, you can create robust, scalable MCP servers that extend OpenAI’s capabilities far beyond their base training.
The key to success lies in careful planning of your tool definitions, thorough implementation of security measures, comprehensive testing, and continuous monitoring in production. Start with a simple set of tools, validate that they work correctly with OpenAI, and gradually expand your server’s capabilities as you gain confidence and experience.
Remember that your MCP server is not a static artifact—it’s a living system that will evolve as your business needs change and as you discover new ways to leverage AI in your operations. Build with modularity and extensibility in mind, document your tools thoroughly, and maintain clear separation between your business logic and the MCP protocol implementation.
The combination of MCP servers and OpenAI’s powerful language models creates unprecedented opportunities for automation, intelligence, and integration. By mastering this technology, you position yourself and your organization at the forefront of AI-driven innovation.
Frequently Asked Questions

What is the Model Context Protocol?
The Model Context Protocol is a standardized framework that enables AI models like OpenAI's GPT to discover, understand, and execute tools and functions provided by external servers. It acts as a bridge between AI models and custom business logic.

What do I need from OpenAI to connect an MCP server?
You need a valid OpenAI API key with appropriate permissions. The MCP server itself doesn’t require special permissions from OpenAI—it communicates through standard API calls. However, you should implement proper authentication and authorization on your MCP server.

Which programming languages can I use to build an MCP server?
MCP servers can be built in any language that supports HTTP/REST APIs or WebSocket connections. Popular choices include Python, TypeScript/Node.js, Java, C#/.NET, and Go. The language choice depends on your existing infrastructure and team expertise.

How do I handle rate limits and manage load?
Implement rate limiting on your MCP server side, cache frequently requested results, use exponential backoff for retries, and monitor your OpenAI API usage. Consider implementing a queue system for tool requests to manage load effectively.