MCP Code Executor MCP Server
Run Python code, install dependencies, and manage isolated environments directly inside your FlowHunt flows with the MCP Code Executor MCP Server.

What does “MCP Code Executor” MCP Server do?
The MCP Code Executor is an MCP (Model Context Protocol) server that enables large language models (LLMs) to execute Python code within a designated Python environment, such as Conda, virtualenv, or a UV-managed virtualenv. By connecting AI assistants to real, executable Python environments, it lets them perform development tasks that require code execution, library management, and dynamic environment setup. The server supports incremental code generation to work around token limits, allows on-the-fly installation of dependencies, and enables runtime configuration of the execution environment. Developers can use it to automate code evaluation, experiment with new packages, and keep computation inside a controlled and secure environment.
List of Prompts
No explicit prompt templates are listed in the repository or documentation.
List of Resources
No specific resources are described in the repository or documentation.
List of Tools
- execute_code
- Executes Python code in the configured environment. Suitable for running short code snippets and scripts.
- install_dependencies
- Installs specified Python packages in the current environment, enabling dynamic inclusion of libraries as needed.
- check_installed_packages
- Checks which Python packages are currently installed within the environment.
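The tools above are invoked by the model as structured calls with JSON arguments. As a rough sketch of what those payloads might look like (the field names `code` and `packages` are illustrative assumptions — consult the server's published tool schema for the exact parameter names):

```python
# Hypothetical argument payloads for the three tools.
# Field names ("code", "packages") are assumptions for illustration --
# check the server's tool schema for the real parameter names.
execute_code_args = {
    "code": "print(sum(range(10)))",     # Python source to run
}
install_dependencies_args = {
    "packages": ["requests", "numpy"],   # packages to install on the fly
}
check_installed_packages_args = {
    "packages": ["numpy"],               # packages to look up
}

print(execute_code_args["code"])
```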
Use Cases of this MCP Server
- Automated Code Evaluation
- LLMs can execute and test Python code snippets directly, which is useful in educational, review, or debugging contexts.
- Dynamic Dependency Management
- Installs required packages on the fly, allowing LLMs to adapt the execution environment for specialized tasks or libraries.
- Environment Isolation
- Runs code inside isolated Conda or virtualenv environments, ensuring reproducibility and preventing conflicts between dependencies.
- Incremental Code Generation
- Supports incremental code execution, making it possible to handle large code blocks that might exceed token limits in a single LLM response.
- Data Science and Analysis
- Enables AI agents to perform data analysis, run simulations, or visualize results by executing code with common scientific Python libraries.
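The incremental-execution idea from the list above can be illustrated locally: successive code chunks are run against a shared namespace, so later chunks can reference names defined by earlier ones. This is a minimal sketch of the concept, not the server's actual implementation:

```python
# Sketch: execute code chunks one at a time against a shared namespace,
# the idea behind incremental code generation for large code blocks.
# This illustrates the concept only -- not the server's implementation.
shared_ns = {}

chunks = [
    "import statistics",
    "data = [2, 4, 4, 4, 5, 5, 7, 9]",
    "mean = statistics.mean(data)",
    "stdev = statistics.pstdev(data)",
]

for chunk in chunks:
    exec(chunk, shared_ns)  # each chunk sees names from earlier chunks

print(shared_ns["mean"], shared_ns["stdev"])
```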
How to set it up
Windsurf
- Ensure Node.js is installed.
- Clone the MCP Code Executor repository and build the project.
- Locate your MCP servers configuration file.
- Add the MCP Code Executor server using the following JSON snippet:
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": ["/path/to/mcp_code_executor/build/index.js"],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}
```
- Save the file and restart Windsurf. Verify the server is reachable.
Securing API Keys (Environment Variables Example)
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env",
        "MY_SECRET_API_KEY": "${MY_SECRET_API_KEY}"
      },
      "inputs": {
        "apiKey": "${MY_SECRET_API_KEY}"
      }
    }
  }
}
```
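The `${MY_SECRET_API_KEY}` placeholder above is resolved from your shell environment at launch time, so the secret never has to live in the config file itself. The substitution pattern can be sketched with Python's `string.Template` (the variable name and value here are the example's own):

```python
import os
from string import Template

# Normally exported in your shell, e.g. `export MY_SECRET_API_KEY=...`;
# set here only so the sketch is self-contained.
os.environ["MY_SECRET_API_KEY"] = "sk-example-123"

# A config fragment containing a ${VAR} placeholder, as in the JSON above.
raw_config = '{"apiKey": "${MY_SECRET_API_KEY}"}'

# Resolve the placeholder against the process environment.
resolved = Template(raw_config).substitute(os.environ)
print(resolved)
```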
Claude
- Ensure Node.js is installed.
- Build the MCP Code Executor following repository instructions.
- Open the configuration file for Claude’s MCP servers.
- Insert the following configuration:
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": ["/path/to/mcp_code_executor/build/index.js"],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}
```
- Save and restart Claude. Confirm the server is listed.
Cursor
- Install Node.js.
- Clone and build the MCP Code Executor repository.
- Edit Cursor’s MCP configuration.
- Add:
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": ["/path/to/mcp_code_executor/build/index.js"],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}
```
- Save and restart Cursor. Test by running a sample code execution.
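For the sample execution in the last step, a short, dependency-free script makes a good smoke test to paste into `execute_code` — it confirms the interpreter runs and shows which environment answered:

```python
# Quick smoke test: reports the interpreter version and location,
# useful for verifying that ENV_TYPE / CONDA_ENV_NAME point at the
# environment you expect.
import sys
import platform

print("python:", sys.version.split()[0])
print("platform:", platform.platform())
print("executable:", sys.executable)
```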
Cline
- Make sure Node.js is available.
- Build the MCP Code Executor using the README instructions.
- Locate Cline’s configuration file for MCP servers.
- Add:
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": ["/path/to/mcp_code_executor/build/index.js"],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}
```
- Save and restart Cline. Verify the MCP server is active.
Note: You can also use Docker. The provided Dockerfile is tested for the `venv-uv` environment type:
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp-code-executor"
      ]
    }
  }
}
```
How to use this MCP inside flows
Using MCP in FlowHunt
To integrate MCP servers into your FlowHunt workflow, start by adding the MCP component to your flow and connecting it to your AI agent:

Click on the MCP component to open the configuration panel. In the system MCP configuration section, insert your MCP server details using this JSON format:
```json
{
  "mcp-code-executor": {
    "transport": "streamable_http",
    "url": "https://yourmcpserver.example/pathtothemcp/url"
  }
}
```
Once configured, the AI agent can use this MCP as a tool with access to all of its functions and capabilities. Remember to change “mcp-code-executor” to the actual name of your MCP server and to replace the URL with your own MCP server URL.
Overview
| Section | Availability | Details/Notes |
| --- | --- | --- |
| Overview | ✅ | |
| List of Prompts | ⛔ | No prompt templates found |
| List of Resources | ⛔ | No explicit resources described |
| List of Tools | ✅ | execute_code, install_dependencies, check_installed_packages |
| Securing API Keys | ✅ | Example provided with env inputs |
| Sampling Support (less important in evaluation) | ⛔ | Not specified |
Our opinion
This MCP server provides essential and robust functionality for code execution with LLM integration, along with clear setup instructions and tooling. However, it lacks prompt templates, explicit resources, and information on roots or sampling support. For a code-execution-focused MCP, it is very solid, scoring high for practical utility and ease of integration, but loses some points for missing advanced MCP features and documentation completeness.
MCP Score
| Has a LICENSE | ✅ (MIT) |
| --- | --- |
| Has at least one tool | ✅ |
| Number of Forks | 25 |
| Number of Stars | 144 |
Frequently asked questions
- What is the MCP Code Executor MCP Server?
It is a Model Context Protocol (MCP) server that allows language models to execute Python code in secure, isolated environments (like Conda or venv), manage dependencies, and configure runtime environments. Ideal for code evaluation, data science, automated workflows, and dynamic environment setup with FlowHunt.
- Which tools does this MCP server provide?
It provides tools for executing Python code (`execute_code`), installing dependencies on the fly (`install_dependencies`), and checking installed packages (`check_installed_packages`).
- How do I integrate this server with FlowHunt?
Add the MCP Code Executor as an MCP component in your flow, then configure it with your server's URL and transport method. This enables your AI agents to use its code execution and environment management capabilities inside FlowHunt.
- Can I isolate code execution and manage environments?
Yes, the server supports running code in isolated Conda or virtualenv environments, ensuring reproducibility and preventing conflicts between dependencies.
- Does it support incremental code execution for large code blocks?
Yes, the server can execute code incrementally, which is useful for handling code that exceeds LLM token limits.
- Is it possible to use Docker instead of Node.js?
Yes, you can use the provided Dockerfile and configure the MCP server to run inside a Docker container for additional isolation.
Try MCP Code Executor with FlowHunt
Empower your flows with secure, automated Python code execution. Integrate the MCP Code Executor MCP Server and unlock dynamic workflows for data science, automation, and more.