ZenML MCP Server Integration

Connect your AI agents to ZenML’s MLOps infrastructure using the ZenML MCP Server for real-time pipeline control, artifact exploration, and streamlined ML workflows.

What does the “ZenML” MCP Server do?

The ZenML MCP Server is an implementation of the Model Context Protocol (MCP) that acts as a bridge between AI assistants (such as Cursor, Claude Desktop, and others) and your ZenML MLOps and LLMOps pipelines. By exposing ZenML’s API via the MCP standard, it lets AI clients access live information about users, pipelines, pipeline runs, steps, services, and more from a ZenML server. Developers and AI workflows can query metadata, trigger new pipeline runs, and interact with ZenML’s orchestration features directly through supported AI tools. The ZenML MCP Server is especially useful for boosting productivity, connecting LLM-powered assistants to robust MLOps infrastructure and facilitating tasks across the ML lifecycle.

List of Prompts

No information found about prompt templates in the repository.

List of Resources

  • Users – Access information about ZenML users.
  • Stacks – Retrieve details on available stack configurations.
  • Pipelines – Query metadata about pipelines managed in ZenML.
  • Pipeline Runs – Get information and status about pipeline executions.
  • Pipeline Steps – Explore details of steps within pipelines.
  • Services – Information about services managed by ZenML.
  • Stack Components – Metadata on different components in the ZenML stack.
  • Flavors – Retrieve information about different stack component flavors.
  • Pipeline Run Templates – Templates for launching new pipeline runs.
  • Schedules – Data about scheduled pipeline executions.
  • Artifacts – Metadata about data artifacts (not the data itself).
  • Service Connectors – Information about connectors to external services.
  • Step Code – Access code related to pipeline steps.
  • Step Logs – Retrieve logs for steps (when run on cloud-based stacks).

List of Tools

  • Trigger New Pipeline Run – Allows triggering a new pipeline run if a run template is present.
  • Read Resources – Tools to read metadata and status from ZenML server objects (users, stacks, pipelines, etc.).

Use Cases of this MCP Server

  • Pipeline Monitoring and Management: Developers can use AI assistants to query the status of pipeline runs, retrieve logs, and monitor progress directly from ZenML.
  • Triggering Pipeline Executions: AI assistants can initiate new pipeline runs through the MCP server, streamlining experiment iterations and deployment cycles.
  • Resource and Artifact Exploration: Instantly retrieve metadata about datasets, models, and other artifacts managed by ZenML, enabling fast context retrieval for experiments.
  • Stack and Service Inspection: Quickly review stack configurations and service details, simplifying troubleshooting and optimization.
  • Automated Reporting: Use AI assistants to generate reports about ML experiments, pipeline history, and artifact lineage by querying the MCP server.

How to set it up

Windsurf

No explicit instructions for Windsurf found; use generic MCP configuration:

  1. Ensure Node.js and uv are installed.
  2. Clone the repository.
  3. Obtain your ZenML server URL and API key.
  4. Edit your Windsurf MCP configuration file to add the ZenML MCP server.
  5. Save and restart Windsurf.
{
  "mcpServers": {
    "zenml": {
      "command": "/usr/local/bin/uv",
      "args": ["run", "/path/to/zenml_server.py"],
      "env": {
        "LOGLEVEL": "INFO",
        "NO_COLOR": "1",
        "PYTHONUNBUFFERED": "1",
        "PYTHONIOENCODING": "UTF-8",
        "ZENML_STORE_URL": "https://your-zenml-server-goes-here.com",
        "ZENML_STORE_API_KEY": "your-api-key-here"
      }
    }
  }
}

Note: Secure your API keys by setting them in the env section as shown above.
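If you prefer not to hand-edit the file, the merge can be scripted. The stdlib-only sketch below (the config path, `uv` location, and server script path are assumptions you must adapt) inserts the `zenml` entry into an existing MCP config while pulling credentials from environment variables instead of hardcoding them:

```python
import json
import os


def add_zenml_server(config_path: str, server_script: str) -> dict:
    """Merge a 'zenml' entry into an MCP client config file.

    The server URL and API key are read from environment variables so
    they never have to be committed alongside the config file.
    """
    try:
        with open(config_path) as f:
            config = json.load(f)
    except FileNotFoundError:
        config = {}
    # Create the "mcpServers" section if the config is new or empty.
    config.setdefault("mcpServers", {})["zenml"] = {
        "command": "/usr/local/bin/uv",          # adjust to your uv install
        "args": ["run", server_script],
        "env": {
            "LOGLEVEL": "INFO",
            "NO_COLOR": "1",
            "PYTHONUNBUFFERED": "1",
            "PYTHONIOENCODING": "UTF-8",
            "ZENML_STORE_URL": os.environ.get("ZENML_STORE_URL", ""),
            "ZENML_STORE_API_KEY": os.environ.get("ZENML_STORE_API_KEY", ""),
        },
    }
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Run it once with `ZENML_STORE_URL` and `ZENML_STORE_API_KEY` set in your shell, then restart the client as described above.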

Claude

  1. Install Claude Desktop.
  2. Open ‘Settings’ > ‘Developer’ > ‘Edit Config’.
  3. Add the MCP server as shown below.
  4. Replace paths and credentials with your own.
  5. Save and restart Claude Desktop.
{
  "mcpServers": {
    "zenml": {
      "command": "/usr/local/bin/uv",
      "args": ["run", "/path/to/zenml_server.py"],
      "env": {
        "LOGLEVEL": "INFO",
        "NO_COLOR": "1",
        "PYTHONUNBUFFERED": "1",
        "PYTHONIOENCODING": "UTF-8",
        "ZENML_STORE_URL": "https://your-zenml-server-goes-here.com",
        "ZENML_STORE_API_KEY": "your-api-key-here"
      }
    }
  }
}

Note: Always store your API keys securely in the environment variables, as above.

Cursor

  1. Install Cursor.
  2. Locate Cursor’s MCP configuration file.
  3. Add the ZenML MCP server section as shown.
  4. Fill in the correct paths and credentials.
  5. Save and restart Cursor.
{
  "mcpServers": {
    "zenml": {
      "command": "/usr/local/bin/uv",
      "args": ["run", "/path/to/zenml_server.py"],
      "env": {
        "LOGLEVEL": "INFO",
        "NO_COLOR": "1",
        "PYTHONUNBUFFERED": "1",
        "PYTHONIOENCODING": "UTF-8",
        "ZENML_STORE_URL": "https://your-zenml-server-goes-here.com",
        "ZENML_STORE_API_KEY": "your-api-key-here"
      }
    }
  }
}

Note: API keys should be set using environment variables in the env section for security.

Cline

No explicit instructions for Cline found; use generic MCP configuration:

  1. Install any prerequisites required by Cline.
  2. Clone the MCP-ZenML repository.
  3. Obtain your ZenML server credentials.
  4. Edit the Cline MCP configuration file to include the ZenML MCP server.
  5. Save and restart Cline.
{
  "mcpServers": {
    "zenml": {
      "command": "/usr/local/bin/uv",
      "args": ["run", "/path/to/zenml_server.py"],
      "env": {
        "LOGLEVEL": "INFO",
        "NO_COLOR": "1",
        "PYTHONUNBUFFERED": "1",
        "PYTHONIOENCODING": "UTF-8",
        "ZENML_STORE_URL": "https://your-zenml-server-goes-here.com",
        "ZENML_STORE_API_KEY": "your-api-key-here"
      }
    }
  }
}

Note: Secure API keys in the env section as above.

Securing API Keys

Set your ZenML API key and server URL securely using environment variables in the env section of the config, as in the JSON examples above.
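On the server side, the same pattern means the process reads its credentials from the environment at startup and fails fast if they are missing. The variable names below match the config examples; the helper function itself is an illustrative sketch, not part of the ZenML MCP server:

```python
import os


def load_zenml_credentials() -> tuple[str, str]:
    """Read the ZenML server URL and API key from the environment.

    Failing fast with a clear message beats a confusing auth error
    later; this is an illustrative helper, not ZenML's own code.
    """
    url = os.environ.get("ZENML_STORE_URL", "")
    api_key = os.environ.get("ZENML_STORE_API_KEY", "")
    if not url or not api_key:
        raise RuntimeError(
            "Set ZENML_STORE_URL and ZENML_STORE_API_KEY in the MCP "
            "config's env section; never hardcode them in source."
        )
    return url, api_key
```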

How to use this MCP inside flows

Using MCP in FlowHunt

To integrate MCP servers into your FlowHunt workflow, start by adding the MCP component to your flow and connecting it to your AI agent:

FlowHunt MCP flow

Click on the MCP component to open the configuration panel. In the system MCP configuration section, insert your MCP server details using this JSON format:

{
  "zenml": {
    "transport": "streamable_http",
    "url": "https://yourmcpserver.example/pathtothemcp/url"
  }
}

Once configured, the AI agent can use this MCP as a tool with access to all its functions and capabilities. Remember to change “zenml” to the actual name of your MCP server and to replace the URL with your own MCP server URL.
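Because a typo in this snippet silently breaks the connection, it can help to sanity-check the entry before pasting it in. The validator below is a stdlib sketch; the two required keys (`transport`, `url`) follow the example above:

```python
def validate_mcp_entry(name: str, entry: dict) -> list[str]:
    """Return a list of problems with an MCP server entry (empty = looks OK).

    Illustrative helper for the FlowHunt-style config shown above; it
    only checks the two fields that example uses.
    """
    problems = []
    if not name:
        problems.append("server name must be non-empty")
    if entry.get("transport") != "streamable_http":
        problems.append("transport should be 'streamable_http'")
    url = entry.get("url", "")
    if not url.startswith(("http://", "https://")):
        problems.append("url must be an absolute http(s) URL")
    return problems
```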


Overview

Section | Availability | Details/Notes
Overview | ✅ | 
List of Prompts | ⛔ | Not found in repo
List of Resources | ✅ | Covers resources exposed by ZenML’s API
List of Tools | ✅ | Trigger pipeline, read metadata, etc.
Securing API Keys | ✅ | Example config provided
Sampling Support (less important in evaluation) | ⛔ | Not mentioned

Based on the tables above, the ZenML MCP server provides thorough documentation, clear setup guidance, and a wide range of exposed resources and tools. However, it lacks documentation for prompt templates and makes no explicit mention of sampling or roots support. The repository is active, with a modest number of stars and forks, but some advanced MCP features are not covered.


MCP Score

Has a LICENSE | ⛔ (not shown in available files)
Has at least one tool | ✅
Number of Forks | 8
Number of Stars | 18

Frequently asked questions

What is the ZenML MCP Server?

The ZenML MCP Server bridges AI assistants with your ZenML MLOps and LLMOps pipelines, exposing ZenML’s API via the Model Context Protocol. This enables AI tools to query pipeline metadata, manage runs, and interact with ZenML infrastructure directly.

What resources and tools does the ZenML MCP Server expose?

It provides access to users, stacks, pipelines, pipeline runs, steps, services, stack components, flavors, pipeline run templates, schedules, artifacts, service connectors, step code, and logs. It also enables triggering new pipeline runs and reading metadata from ZenML server objects.

How do I securely configure my ZenML MCP Server?

Always store your ZenML API key and server URL securely using environment variables in the `env` section of your MCP configuration, as shown in the setup examples for each client.

What are the main use cases for the ZenML MCP Server?

Typical use cases include pipeline monitoring and control, triggering new pipeline executions, exploring resources and artifacts, reviewing stack and service details, and generating automated reports via AI assistants.

Does the ZenML MCP Server support prompt templates or sampling?

Prompt template documentation and sampling features are not currently available in the ZenML MCP Server integration.

Boost Your AI Workflows with ZenML MCP

Enable your AI assistants to orchestrate, monitor, and manage ML pipelines instantly by connecting FlowHunt to ZenML’s MCP Server.

Learn more