# Model Context Protocol (MCP)
Learn how to integrate microsandbox with AI tools using the Model Context Protocol for seamless code execution and sandbox management.
## Overview

The Model Context Protocol (MCP) is an open standard that enables AI applications to securely connect to external data sources and tools. microsandbox implements MCP as a built-in server, making it compatible with AI tools such as Claude and other MCP-enabled applications.
## Connection Details

- **Endpoint:** `http://127.0.0.1:5555/mcp`
- **Protocol:** Streamable HTTP
- **Authentication:** Bearer token (required unless the server runs in dev mode)
**Transport Support:** The microsandbox server only supports the Streamable HTTP transport protocol.
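
If you want to verify connectivity without a full MCP client, you can probe the endpoint with a plain HTTP request. The sketch below is a hedged example of a standard JSON-RPC `initialize` handshake over Streamable HTTP; the protocol version string and response shape are assumptions about a typical MCP server, and the `Authorization` header can be omitted entirely when the server runs in dev mode.

```python
# Minimal connectivity probe for the MCP endpoint (sketch, not an official client).
import requests

MCP_URL = "http://127.0.0.1:5555/mcp"
API_KEY = "your-api-key"  # hypothetical token; not needed with `msb server start --dev`

response = requests.post(
    MCP_URL,
    headers={
        "Content-Type": "application/json",
        # Streamable HTTP servers may answer with JSON or an SSE stream.
        "Accept": "application/json, text/event-stream",
        "Authorization": f"Bearer {API_KEY}",
    },
    json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "connectivity-probe", "version": "0.1.0"},
        },
    },
    timeout=10,
)
print(response.status_code)
print(response.text)
```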
## Tools

microsandbox exposes tools through the MCP interface for complete sandbox lifecycle management.
### Sandbox Management Tools

#### sandbox_start

Start a new sandbox with a specified configuration. This creates an isolated environment for code execution.
##### Configuration Options

Basic configuration:

```json
{
  "sandbox": "my-python-env",
  "namespace": "default"
}
```

Full configuration with resource limits and environment variables:

```json
{
  "sandbox": "data-analysis",
  "namespace": "research",
  "config": {
    "image": "microsandbox/python",
    "memory": 1024,
    "cpus": 2,
    "envs": ["PYTHONPATH=/workspace"]
  }
}
```

Node.js sandbox with a custom image and memory limit:

```json
{
  "sandbox": "node-env",
  "namespace": "development",
  "config": {
    "image": "microsandbox/node",
    "memory": 512
  }
}
```
**Important:** Always stop the sandbox when done to prevent it from running indefinitely and consuming resources.
#### sandbox_stop

Stop a running sandbox and clean up its resources.

```json
{
  "sandbox": "my-python-env",
  "namespace": "default"
}
```
**Critical:** Always call this when you're finished with a sandbox to prevent resource leaks and indefinite running. Failing to stop sandboxes will cause them to consume system resources unnecessarily.
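
To make the start/stop pairing concrete, here is a hedged sketch of driving both tools from a script using the official `mcp` Python SDK's Streamable HTTP client (`pip install mcp`). The tool names and argument shapes mirror the payloads above; the SDK client calls are an assumption about your setup and are not part of microsandbox itself.

```python
# Sketch: start a sandbox, do work, and always stop it afterwards.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = "http://127.0.0.1:5555/mcp"  # dev-mode server, no auth header needed
SANDBOX = {"sandbox": "my-python-env", "namespace": "default"}


async def main() -> None:
    async with streamablehttp_client(MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            await session.call_tool("sandbox_start", arguments=SANDBOX)
            try:
                ...  # run code or commands here (see the tools below)
            finally:
                # Stop the sandbox even if something above fails.
                await session.call_tool("sandbox_stop", arguments=SANDBOX)


asyncio.run(main())
```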
### Code Execution Tools

#### sandbox_run_code

Execute code in a running sandbox environment.
Python example:

```json
{
  "sandbox": "my-python-env",
  "namespace": "default",
  "code": "import math\nresult = math.sqrt(16)\nprint(f'Square root: {result}')",
  "language": "python"
}
```

Node.js example:

```json
{
  "sandbox": "node-env",
  "namespace": "development",
  "code": "const fs = require('fs');\nconst data = { message: 'Hello from Node.js!' };\nconsole.log(JSON.stringify(data, null, 2));",
  "language": "nodejs"
}
```
**Prerequisites:** The target sandbox must be started first using `sandbox_start`; the call will fail if the sandbox is not running. Code execution is synchronous and may take time depending on complexity.
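
As a rough sketch of what an MCP client does with this tool, the snippet below sends a Python payload like the one above through the `mcp` Python SDK and prints whatever text content comes back. It assumes the `my-python-env` sandbox has already been started and that the SDK's Streamable HTTP client is available; both are assumptions about your client, not microsandbox APIs.

```python
# Sketch: execute a small Python snippet in an already-running sandbox.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def run_code() -> None:
    async with streamablehttp_client("http://127.0.0.1:5555/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "sandbox_run_code",
                arguments={
                    "sandbox": "my-python-env",
                    "namespace": "default",
                    "code": "import math\nprint(math.sqrt(16))",
                    "language": "python",
                },
            )
            # Tool output is returned as content items; text items carry the output.
            for item in result.content:
                print(getattr(item, "text", item))


asyncio.run(run_code())
```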
#### sandbox_run_command

Execute shell commands in a running sandbox.

Simple command:

```json
{
  "sandbox": "my-python-env",
  "namespace": "default",
  "command": "ls"
}
```

Command with arguments:

```json
{
  "sandbox": "my-python-env",
  "namespace": "default",
  "command": "ls",
  "args": ["-la", "/workspace"]
}
```

Installing packages:

```json
{
  "sandbox": "data-analysis",
  "namespace": "research",
  "command": "pip",
  "args": ["install", "pandas", "numpy", "matplotlib"]
}
```
**Prerequisites:** The target sandbox must be started first using `sandbox_start`; the call will fail if the sandbox is not running. Command execution is synchronous and may take time depending on complexity.
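
The call pattern is the same as for `sandbox_run_code`; only the tool name and arguments change. The helper below is a hypothetical sketch that reuses an already-initialized `ClientSession` (as in the earlier lifecycle example) to send the pip-install payload shown above.

```python
# Sketch: run a shell command with arguments in a running sandbox.
from mcp import ClientSession


async def install_packages(session: ClientSession) -> None:
    # `session` is an initialized ClientSession connected to the MCP endpoint.
    result = await session.call_tool(
        "sandbox_run_command",
        arguments={
            "sandbox": "data-analysis",
            "namespace": "research",
            "command": "pip",
            "args": ["install", "pandas", "numpy", "matplotlib"],
        },
    )
    for item in result.content:
        print(getattr(item, "text", item))
```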
### Monitoring Tools

#### sandbox_get_metrics

Get metrics and status for sandboxes, including CPU usage, memory consumption, and running state.
Metrics for a specific sandbox:

```json
{
  "sandbox": "my-python-env",
  "namespace": "default"
}
```

All sandboxes in a namespace:

```json
{
  "namespace": "development"
}
```

All sandboxes in all namespaces (wildcard):

```json
{
  "namespace": "*"
}
```
Returns a JSON object with metrics including:

- `name` - Sandbox name
- `namespace` - Sandbox namespace
- `running` - Boolean running status
- `cpu_usage` - CPU usage percentage
- `memory_usage` - Memory usage in MiB
- `disk_usage` - Disk usage in bytes
**Usage:** This tool can check the status of any sandbox regardless of whether it is running, making it useful for monitoring and cleanup operations.
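
For example, a cleanup script could poll metrics for every namespace with the wildcard form and decide which sandboxes to stop. The sketch below assumes an initialized `ClientSession` as in the earlier examples and simply prints whatever text content the server returns.

```python
# Sketch: fetch metrics for all sandboxes across all namespaces.
from mcp import ClientSession


async def print_all_metrics(session: ClientSession) -> None:
    result = await session.call_tool("sandbox_get_metrics", arguments={"namespace": "*"})
    # The metrics come back as text content (JSON); print each entry as-is.
    for item in result.content:
        print(getattr(item, "text", item))
```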
## Setting Up microsandbox with an Agent

Let's use Agno to build an AI agent that can execute code in microsandbox.
### Prerequisites

- Install Agno and its dependencies:

  ```bash
  pip install agno openai
  ```

- Start the microsandbox server:

  ```bash
  msb server start --dev
  ```
### Integration Example

```python
import asyncio

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.mcp import MCPTools


async def main():
    # Connect to the microsandbox MCP server
    server_url = "http://127.0.0.1:5555/mcp"

    async with MCPTools(url=server_url, transport="streamable-http") as mcp_tools:
        # Create an agent with microsandbox tools
        agent = Agent(
            model=OpenAIChat(id="gpt-4o"),
            tools=[mcp_tools],
            description="AI assistant with secure code execution capabilities",
        )

        # Use the agent with microsandbox integration
        await agent.aprint_response(
            "Create a Python sandbox and calculate the first 10 fibonacci numbers",
            stream=True,
        )


# Run the example
asyncio.run(main())
```
## Other MCP-Compatible Tools

microsandbox works with any MCP-compatible application:

- **Claude** - AI chat application
- **Custom MCP clients** - Build your own integrations
## Examples

### Complete Workflow

- Start the server:

  ```bash
  msb server start --dev
  ```

- Configure Claude Desktop with the MCP server.
- Test the integration by asking Claude: "Can you start a Python sandbox and run a simple calculation?"
Claude will:

- Call `sandbox_start` to create a new Python environment
- Call `sandbox_run_code` to execute your calculation
- Return the results in a natural language response
- Call `sandbox_stop` to clean up the sandbox when finished
## Next Steps