
Run Python code, install dependencies, and manage isolated environments directly inside your FlowHunt flows with the MCP Code Executor MCP Server.
The MCP Code Executor is an MCP (Model Context Protocol) server that lets large language models (LLMs) execute Python code within a designated Python environment, such as Conda, virtualenv, or UV virtualenv. By connecting AI assistants to real, executable Python environments, it enables a wide range of development tasks that require code execution, library management, and dynamic environment setup. The server supports incremental code generation to work around token limits, allows on-the-fly installation of dependencies, and lets the execution environment be configured at runtime. Developers can use it to automate code evaluation, experiment with new packages, and manage computation within a controlled and secure environment.
No explicit prompt templates are listed in the repository or documentation.
No specific resources are described in the repository or documentation.
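The incremental-execution idea mentioned above can be sketched in plain Python: code arrives in chunks (as an LLM would stream it), the chunks are accumulated into a single file in a storage directory, and only the final assembled script is executed. This is a conceptual sketch of the idea, not the server's actual implementation; the helper name and file layout are illustrative.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_incremental(chunks: list[str], storage_dir: str) -> str:
    """Accumulate code chunks into one script, then execute it.

    Mirrors the idea behind incremental code generation: each chunk is
    appended to a file, so no single request has to carry the whole
    program, working around LLM token limits.
    """
    script = Path(storage_dir) / "generated.py"
    with script.open("w") as f:
        for chunk in chunks:
            f.write(chunk.rstrip("\n") + "\n")
    # Execute the assembled script as a separate process.
    result = subprocess.run(
        [sys.executable, str(script)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        out = run_incremental(
            ["def add(a, b):", "    return a + b", "print(add(2, 3))"],
            d,
        )
        print(out.strip())  # -> 5
```

The real server additionally runs the assembled code inside the configured Conda/virtualenv environment rather than the host interpreter.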
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "node",
      "args": [
        "/path/to/mcp_code_executor/build/index.js"
      ],
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env"
      }
    }
  }
}
```
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "env": {
        "CODE_STORAGE_DIR": "/path/to/code/storage",
        "ENV_TYPE": "conda",
        "CONDA_ENV_NAME": "your-conda-env",
        "MY_SECRET_API_KEY": "${MY_SECRET_API_KEY}"
      },
      "inputs": {
        "apiKey": "${MY_SECRET_API_KEY}"
      }
    }
  }
}
```
Note: You can also use Docker. The provided Dockerfile is tested for the `venv-uv` environment type:
```json
{
  "mcpServers": {
    "mcp-code-executor": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp-code-executor"
      ]
    }
  }
}
```
Using MCP in FlowHunt
To integrate MCP servers into your FlowHunt workflow, start by adding the MCP component to your flow and connecting it to your AI agent:
Click on the MCP component to open the configuration panel. In the system MCP configuration section, insert your MCP server details using this JSON format:
```json
{
  "mcp-code-executor": {
    "transport": "streamable_http",
    "url": "https://yourmcpserver.example/pathtothemcp/url"
  }
}
```
Once configured, the AI agent can use this MCP as a tool with access to all its functions and capabilities. Remember to change “mcp-code-executor” to the actual name of your MCP server and to replace the URL with your own MCP server URL.
| Section | Availability | Details/Notes |
|---|---|---|
| Overview | ✅ | |
| List of Prompts | ⛔ | No prompt templates found |
| List of Resources | ⛔ | No explicit resources described |
| List of Tools | ✅ | `execute_code`, `install_dependencies`, `check_installed_packages` |
| Securing API Keys | ✅ | Example provided with env inputs |
| Sampling Support (less important in evaluation) | ⛔ | Not specified |
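To make the three tools concrete, here is a minimal in-process sketch of what two of them do. The real server runs code in a separate Conda/virtualenv process and installs packages with pip, so this stand-in (using `exec` and `importlib`) only illustrates the tool surface, not the actual implementation; `install_dependencies` is omitted because it would shell out to pip.

```python
import contextlib
import importlib.util
import io

def execute_code(code: str) -> str:
    """Run a Python snippet and capture its stdout (sketch only;
    the real server executes in an isolated environment)."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue()

def check_installed_packages(names: list[str]) -> dict:
    """Report which packages are importable in the current env."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

print(execute_code("print(sum(range(5)))").strip())  # -> 10
print(check_installed_packages(["json", "definitely_missing_pkg"]))
```

An agent would typically chain these: check whether a package is available, install it if not, then execute the generated code.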
This MCP server provides essential and robust functionality for code execution with LLM integration, along with clear setup instructions and tooling. However, it lacks prompt templates, explicit resources, and information on roots or sampling support. For a code-execution-focused MCP, it is very solid, scoring high for practical utility and ease of integration, but loses some points for missing advanced MCP features and documentation completeness.
| Criterion | Result |
|---|---|
| Has a LICENSE | ✅ (MIT) |
| Has at least one tool | ✅ |
| Number of Forks | 25 |
| Number of Stars | 144 |
It is a Model Context Protocol (MCP) server that allows language models to execute Python code in secure, isolated environments (like Conda or venv), manage dependencies, and configure runtime environments. Ideal for code evaluation, data science, automated workflows, and dynamic environment setup with FlowHunt.
It provides tools for executing Python code (`execute_code`), installing dependencies on the fly (`install_dependencies`), and checking installed packages (`check_installed_packages`).
Add the MCP Code Executor as an MCP component in your flow, then configure it with your server's URL and transport method. This enables your AI agents to use its code execution and environment management capabilities inside FlowHunt.
Yes, the server supports running code in isolated Conda or virtualenv environments, ensuring reproducibility and preventing conflicts between dependencies.
Yes, the server can execute code incrementally, which is useful for handling code that exceeds LLM token limits.
Yes, you can use the provided Dockerfile and configure the MCP server to run inside a Docker container for additional isolation.
Empower your flows with secure, automated Python code execution. Integrate the MCP Code Executor MCP Server and unlock dynamic workflows for data science, automation, and more.
The LLM Context MCP Server bridges AI assistants with external code and text projects, enabling context-aware workflows for code review, documentation generatio...
The MCP-Server-Creator is a meta-server that enables rapid creation and configuration of new Model Context Protocol (MCP) servers. With dynamic code generation,...
The Code Sandbox MCP Server provides a secure, containerized environment for executing code, enabling AI assistants and developer tools to run, test, and manage...