Databricks MCP Server

Connect your AI agents to Databricks for automated SQL, job monitoring, and workflow management using the Databricks MCP Server in FlowHunt.


What does the “Databricks” MCP Server do?

The Databricks MCP (Model Context Protocol) Server is a specialized tool that connects AI assistants to the Databricks platform, enabling seamless interaction with Databricks resources through natural language interfaces. This server acts as a bridge between large language models (LLMs) and Databricks APIs, allowing LLMs to execute SQL queries, list jobs, retrieve job statuses, and obtain detailed job information. By exposing these capabilities via the MCP protocol, the Databricks MCP Server empowers developers and AI agents to automate data workflows, manage Databricks jobs, and streamline database operations, thus enhancing productivity in data-driven development environments.
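
For illustration, when an assistant decides to run a query, the request that reaches the server is a standard MCP tools/call message. A hypothetical example (the table name and SQL text are made up for illustration) might look like this:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_sql_query",
    "arguments": {
      "sql": "SELECT * FROM sales ORDER BY created_at DESC LIMIT 5"
    }
  }
}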

List of Prompts

No prompt templates are described in the repository.

List of Resources

No explicit resources are listed in the repository.

List of Tools

  • run_sql_query(sql: str)
    Execute SQL queries on the Databricks SQL warehouse.
  • list_jobs()
    List all Databricks jobs in the workspace.
  • get_job_status(job_id: int)
    Retrieve the status of a specific Databricks job by its ID.
  • get_job_details(job_id: int)
    Obtain detailed information about a specific Databricks job.
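
To make the tool behavior concrete, here is a minimal sketch of what run_sql_query could look like if it were implemented with the databricks-sql-connector package and the environment variables used in the configuration examples below. This is an illustrative assumption, not the repository's actual code:

import os

from databricks import sql  # provided by the databricks-sql-connector package


def run_sql_query(query: str) -> list[dict]:
    """Execute a SQL statement on a Databricks SQL warehouse and return rows as dicts."""
    with sql.connect(
        # Assumes DATABRICKS_HOST is the bare workspace hostname (no https:// prefix)
        server_hostname=os.environ["DATABRICKS_HOST"],
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as connection:
        with connection.cursor() as cursor:
            cursor.execute(query)
            columns = [col[0] for col in cursor.description]
            return [dict(zip(columns, row)) for row in cursor.fetchall()]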

Use Cases of this MCP Server

  • Database Query Automation
    Enable LLMs and users to run SQL queries on Databricks warehouses directly from conversational interfaces, streamlining data analysis workflows.
  • Job Management
    List and monitor Databricks jobs, helping users keep track of ongoing or scheduled tasks within their workspace.
  • Job Status Tracking
    Quickly retrieve the status of specific Databricks jobs, allowing for efficient monitoring and troubleshooting.
  • Detailed Job Inspection
    Access in-depth information about Databricks jobs, facilitating debugging and optimization of ETL pipelines or batch jobs.

How to set it up

Windsurf

  1. Ensure Python 3.7+ is installed and Databricks credentials are available.
  2. Clone the repository and install requirements with pip install -r requirements.txt.
  3. Create a .env file with your Databricks credentials.
  4. Add the Databricks MCP Server to your Windsurf configuration:
    {
      "mcpServers": {
        "databricks": {
          "command": "python",
          "args": ["main.py"]
        }
      }
    }
    
  5. Save the configuration and restart Windsurf. Verify setup by running a test query.

Example of securing API keys with environment variables:

{
  "mcpServers": {
    "databricks": {
      "command": "python",
      "args": ["main.py"],
      "env": {
        "DATABRICKS_HOST": "${DATABRICKS_HOST}",
        "DATABRICKS_TOKEN": "${DATABRICKS_TOKEN}",
        "DATABRICKS_HTTP_PATH": "${DATABRICKS_HTTP_PATH}"
      }
    }
  }
}
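
The configuration above only references the variables; a minimal .env sketch using the same names could look like this (all values are placeholders, not real credentials):

DATABRICKS_HOST=your-workspace.cloud.databricks.com
DATABRICKS_TOKEN=dapiXXXXXXXXXXXXXXXXXXXX
DATABRICKS_HTTP_PATH=/sql/1.0/warehouses/your-warehouse-id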

Claude

  1. Install Python 3.7+ and clone the repo.
  2. Set up the .env file with Databricks credentials.
  3. Configure Claude’s MCP interface:
    {
      "mcpServers": {
        "databricks": {
          "command": "python",
          "args": ["main.py"]
        }
      }
    }
    
  4. Restart Claude and validate the connection.

Cursor

  1. Clone the repository and set up a Python environment.
  2. Install dependencies and create .env with credentials.
  3. Add the server to Cursor’s configuration:
    {
      "mcpServers": {
        "databricks": {
          "command": "python",
          "args": ["main.py"]
        }
      }
    }
    
  4. Save the configuration and test the connection.

Cline

  1. Prepare Python and credentials as above.
  2. Clone the repository, install requirements, and configure .env.
  3. Add MCP server entry to Cline’s configuration:
    {
      "mcpServers": {
        "databricks": {
          "command": "python",
          "args": ["main.py"]
        }
      }
    }
    
  4. Save, restart Cline, and verify the MCP Server is operational.

Note: Always secure your API keys and secrets by using environment variables as shown in the configuration examples above.

How to use this MCP inside flows

Using MCP in FlowHunt

To integrate MCP servers into your FlowHunt workflow, start by adding the MCP component to your flow and connecting it to your AI agent:

FlowHunt MCP flow

Click on the MCP component to open the configuration panel. In the system MCP configuration section, insert your MCP server details using this JSON format:

{
  "databricks": {
    "transport": "streamable_http",
    "url": "https://yourmcpserver.example/pathtothemcp/url"
  }
}

Once configured, the AI agent can use this MCP server as a tool with access to all of its functions and capabilities. Remember to change “databricks” to the actual name of your MCP server and to replace the URL with the URL of your own MCP server.


Overview

| Section | Availability | Details/Notes |
| --- | --- | --- |
| Overview | ✅ | |
| List of Prompts | ⛔ | No prompt templates specified in repo |
| List of Resources | ⛔ | No explicit resources defined |
| List of Tools | ✅ | 4 tools: run_sql_query, list_jobs, get_job_status, get_job_details |
| Securing API Keys | ✅ | Via environment variables in .env and config JSON |
| Sampling Support (less important in evaluation) | ⛔ | Not mentioned |
| Roots Support | ⛔ | Not mentioned |


Based on the availability of core features (tools, setup and security guidance, but no resources or prompt templates), the Databricks MCP Server is effective for Databricks API integration but lacks some advanced MCP primitives. I would rate this MCP server a 6 out of 10 for overall completeness and utility in the context of the MCP ecosystem.


MCP Score

| Has a LICENSE | ⛔ (not found) |
| Has at least one tool | ✅ |
| Number of Forks | 13 |
| Number of Stars | 33 |

Frequently asked questions

What is the Databricks MCP Server?

The Databricks MCP Server is a bridge between AI assistants and Databricks, exposing Databricks capabilities like SQL execution and job management via the MCP protocol for automated workflows.

What operations are supported by this MCP Server?

It supports executing SQL queries, listing all jobs, retrieving job statuses, and obtaining detailed information about specific Databricks jobs.

How do I securely store my Databricks credentials?

Always use environment variables, for example by placing them in a `.env` file or configuring them in your MCP server setup, instead of hardcoding sensitive information.

Can I use this server in FlowHunt flows?

Yes, simply add the MCP component to your flow, configure it with your Databricks MCP server details, and your AI agents will be able to access all supported Databricks functions.

What is the overall utility score of this MCP Server?

Based on available tools, setup guidance, and security support, but lacking resources and prompt templates, this MCP Server rates a 6 out of 10 for completeness in the MCP ecosystem.

Supercharge Your Databricks Workflows

Automate SQL queries, monitor jobs, and manage Databricks resources directly from conversational AI interfaces. Integrate Databricks MCP Server into your FlowHunt flows for next-level productivity.

Learn more