# Building MCP Servers
MCP (Model Context Protocol) servers extend IntelligenceBox with custom tools. Your MCP server can connect to external APIs, databases, or use the Box’s GPU for AI operations.
## What is MCP?
MCP is a protocol that allows AI models to call external tools. When you build an MCP server for IntelligenceBox:
- Your tools become available in the chat interface
- The AI can automatically invoke your tools based on user requests
- Tools can access the Box’s GPU services for AI-heavy operations
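Under the hood, a tool invocation arrives as a JSON-RPC 2.0 request over the MCP transport. A `tools/call` request for a `get_weather` tool might look like this (the `id` and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Berlin" }
  }
}
```

Your server's job is to map `params.name` to an implementation and return the result as `content` blocks; the Box constructs and dispatches these requests for you.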
## Architecture

```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│ IntelligenceBox │────▶│ Your MCP Server │────▶│  External APIs  │
│      (Box)      │     │    (Docker)     │     │   / Databases   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 │
                                 ▼
                        ┌─────────────────┐
                        │   GPU Service   │
                        │  (on the Box)   │
                        └─────────────────┘
```

## Quick Start
### 1. Create MCP Server

Create a new directory with this structure:

```
my-mcp-server/
├── Dockerfile
├── package.json
├── src/
│   └── index.ts
└── manifest.json
```

### 2. Define Your Tools
`manifest.json` - Describes your tools:

```json
{
  "id": "my-mcp-server",
  "name": "My MCP Server",
  "description": "Custom tools for my workflow",
  "tools": [
    {
      "name": "get_weather",
      "description": "Get current weather for a city",
      "input_schema": {
        "type": "object",
        "properties": {
          "city": {
            "type": "string",
            "description": "City name"
          }
        },
        "required": ["city"]
      }
    }
  ]
}
```

### 3. Implement the Server
`src/index.ts` - Handle tool calls:

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  {
    name: "my-mcp-server",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

// List available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_weather",
      description: "Get current weather for a city",
      inputSchema: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name" },
        },
        required: ["city"],
      },
    },
  ],
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  if (name === "get_weather") {
    // Your implementation here
    const weather = await fetchWeather(args.city);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(weather),
        },
      ],
    };
  }
  throw new Error(`Unknown tool: ${name}`);
});

// Start server
const transport = new StdioServerTransport();
await server.connect(transport);
```

### 4. Containerize
`Dockerfile`:

```dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
CMD ["node", "dist/index.js"]
```

### 5. Build and Register
```bash
# Build the image
docker build -t my-mcp-server:1.0.0 .

# Push to a registry (optional)
docker tag my-mcp-server:1.0.0 ghcr.io/your-org/my-mcp-server:1.0.0
docker push ghcr.io/your-org/my-mcp-server:1.0.0
```

Register in IntelligenceBox via the app or API.
## Using GPU Services
Your MCP server can call the Box’s GPU service for AI operations like embeddings, image analysis, or custom models.
### GPU Service Endpoints
The GPU service runs on the Box at `http://gpu-service:8000` (internal Docker network).
**Embeddings:**

```bash
curl -X POST "http://gpu-service:8000/embed" \
  -H "Content-Type: application/json" \
  -d '{"texts": ["Hello world", "How are you?"]}'
```

**Image Analysis (ColPali):**

```bash
curl -X POST "http://gpu-service:8000/analyze" \
  -H "Content-Type: application/json" \
  -d '{"image_base64": "...", "query": "What is in this image?"}'
```

### Example: MCP Server with GPU
```typescript
// CallToolRequestSchema comes from "@modelcontextprotocol/sdk/types.js"

// Call GPU service from your MCP tool
async function analyzeImage(imageBase64: string, query: string) {
  const response = await fetch("http://gpu-service:8000/analyze", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image_base64: imageBase64, query }),
  });
  return response.json();
}

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "analyze_document") {
    const result = await analyzeImage(
      request.params.arguments.image,
      request.params.arguments.question
    );
    return {
      content: [{ type: "text", text: result.answer }],
    };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});
```

## MCP Manifest Reference
Full manifest structure:

```json
{
  "id": "unique-server-id",
  "name": "Human Readable Name",
  "description": "What this server does",
  "version": "1.0.0",
  "tools": [
    {
      "name": "tool_name",
      "description": "What the tool does",
      "input_schema": {
        "type": "object",
        "properties": {
          "param1": {
            "type": "string",
            "description": "Parameter description"
          }
        },
        "required": ["param1"]
      }
    }
  ],
  "resources": [],
  "settings": {
    "auth": "bearer",
    "env_vars": ["API_KEY", "DATABASE_URL"]
  }
}
```

## Environment Variables
Pass secrets to your MCP server via environment variables:
```bash
docker run -e API_KEY=secret123 my-mcp-server:1.0.0
```

Access in your code:

```typescript
const apiKey = process.env.API_KEY;
```

## Best Practices
- **Version your images** - Use semantic versioning, avoid `latest`
- **Validate inputs** - Always validate tool arguments
- **Handle errors gracefully** - Return meaningful error messages
- **Keep it focused** - One server per domain (e.g., weather, CRM, database)
- **Document your tools** - Clear descriptions help the AI use them correctly
- **Use GPU wisely** - GPU operations are powerful but resource-intensive
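As a sketch of the "validate inputs" practice, a handler can check its arguments before doing any work. The helper name and error messages below are illustrative, not part of the MCP SDK:

```typescript
// Validate the arguments for a get_weather call before using them.
// Throwing here surfaces a meaningful error instead of a crash deeper in.
function assertWeatherArgs(args: unknown): { city: string } {
  if (typeof args !== "object" || args === null) {
    throw new Error("get_weather: expected an arguments object");
  }
  const city = (args as Record<string, unknown>).city;
  if (typeof city !== "string" || city.trim() === "") {
    throw new Error("get_weather: 'city' must be a non-empty string");
  }
  return { city };
}
```

Calling `assertWeatherArgs(args)` at the top of your `tools/call` handler turns malformed requests into clear error messages the AI can relay to the user.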
## Testing Locally
Test your MCP server before deploying:
```bash
# Run locally
npm run dev

# Test with the MCP Inspector
npx @modelcontextprotocol/inspector
```

## Publishing to Registry
Contact IntelligenceBox to publish your MCP server to the public registry, making it available to all users.