mcp-vision MCP Server
Add computer vision to your AI workflows with mcp-vision: HuggingFace-powered object detection and image analysis as an MCP server for FlowHunt and multimodal assistants.

What does “mcp-vision” MCP Server do?
The “mcp-vision” MCP Server is a Model Context Protocol (MCP) server that exposes HuggingFace computer vision models—such as zero-shot object detection pipelines—as tools to enhance the vision capabilities of large language or vision-language models. By connecting AI assistants with powerful computer vision models, mcp-vision enables tasks like object detection and image analysis directly within development workflows. This allows LLMs and other AI clients to query, process, and analyze images programmatically, making it easier to automate, standardize, and extend vision-based interactions in applications. The server runs in both GPU and CPU environments and is designed for easy integration with popular AI platforms.
List of Prompts
No specific prompt templates are mentioned in the documentation or repository files.
List of Resources
No explicit MCP resources are documented or listed in the repository.
List of Tools
locate_objects
Detect and locate objects in an image using one of the zero-shot object detection pipelines available through HuggingFace. Inputs include the image path, a list of candidate labels, and an optional model name. Returns a list of detected objects in standard format.
zoom_to_object
Zoom into a specific object in an image by cropping the image to the bounding box of the object with the best detection score. Inputs include the image path, a label to find, and an optional model name. Returns a cropped image or None.
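The two tools correspond closely to HuggingFace's zero-shot object detection pipeline. The sketch below is not mcp-vision's actual implementation; it is only a rough illustration of what the tools do conceptually. The model name is just an example, and the function signatures are assumptions based on the tool descriptions above.

# Conceptual sketch only: roughly what locate_objects / zoom_to_object do under the hood.
# The model name is an example and the signatures are assumed, not mcp-vision's actual code.
from transformers import pipeline
from PIL import Image

detector = pipeline(
    "zero-shot-object-detection",
    model="google/owlvit-base-patch32",  # example model; mcp-vision may default to a different one
)

def locate_objects(image_path: str, candidate_labels: list[str]) -> list[dict]:
    # Returns a list of dicts with "label", "score", and "box" keys (HuggingFace pipeline output).
    return detector(Image.open(image_path), candidate_labels=candidate_labels)

def zoom_to_object(image_path: str, label: str) -> Image.Image | None:
    # Crop the image to the highest-scoring detection of `label`; return None if nothing is found.
    detections = [d for d in locate_objects(image_path, [label]) if d["label"] == label]
    if not detections:
        return None
    box = max(detections, key=lambda d: d["score"])["box"]  # {"xmin", "ymin", "xmax", "ymax"}
    return Image.open(image_path).crop((box["xmin"], box["ymin"], box["xmax"], box["ymax"]))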
Use Cases of this MCP Server
- Automated Object Detection in Images
Developers can use mcp-vision to programmatically detect and locate objects in images, streamlining tasks like image tagging, content moderation, and visual search.
- Vision-based Workflow Automation
Integrate object detection into larger workflows, such as sorting images by content, automating report generation based on detected items, or enhancing accessibility tools.
- Interactive Image Exploration
AI assistants can help users zoom in on specific objects within images, aiding tasks like quality inspection, medical imaging analysis, or product identification.
- Augmenting AI Agents with Visual Capabilities
LLMs can reason about and act on visual data, allowing for richer multimodal interactions and context-aware responses in applications like chatbots, digital assistants, and research tools.
How to set it up
Windsurf
No setup instructions for Windsurf are provided in the repository.
Claude
- Prerequisites:
Ensure you have Docker installed and, if using a GPU, an NVIDIA-enabled environment.
- Build or Use a Docker Image:
  - Build locally:
    git clone git@github.com:groundlight/mcp-vision.git
    cd mcp-vision
    make build-docker
  - Use the public image (optional): no build required.
- Edit Configuration:
Open claude_desktop_config.json and add the following under mcpServers:
  - For GPU:
    "mcpServers": {
      "mcp-vision": {
        "command": "docker",
        "args": ["run", "-i", "--rm", "--runtime=nvidia", "--gpus", "all", "mcp-vision"],
        "env": {}
      }
    }
  - For CPU:
    "mcpServers": {
      "mcp-vision": {
        "command": "docker",
        "args": ["run", "-i", "--rm", "mcp-vision"],
        "env": {}
      }
    }
  - For the public image (beta):
    "mcpServers": {
      "mcp-vision": {
        "command": "docker",
        "args": ["run", "-i", "--rm", "--runtime=nvidia", "--gpus", "all", "groundlight/mcp-vision:latest"],
        "env": {}
      }
    }
- Save and Restart:
Save the configuration and restart Claude Desktop.
- Verify Setup:
Confirm that mcp-vision appears as an available MCP server in the Claude Desktop UI (a minimal verification sketch follows this list).
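Before relying on the Claude Desktop UI, you can sanity-check the container from the command line with a small MCP client. The following is a minimal sketch using the official mcp Python SDK and the same CPU docker arguments as the configuration above; it assumes the mcp-vision image has already been built locally.

# Minimal sketch: launch the mcp-vision container over stdio (as Claude Desktop does) and list its tools.
# Assumes the `mcp` Python SDK is installed and the mcp-vision image was built with `make build-docker`.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="docker",
    args=["run", "-i", "--rm", "mcp-vision"],  # CPU variant from the configuration above
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # expect locate_objects and zoom_to_object

asyncio.run(main())

If the two tool names print, the same docker arguments should work once pasted into claude_desktop_config.json.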
Securing API Keys
- No API key requirements or examples are provided in the documentation.
Cursor
No setup instructions for Cursor are provided in the repository.
Cline
No setup instructions for Cline are provided in the repository.
How to use this MCP inside flows
Using MCP in FlowHunt
To integrate MCP servers into your FlowHunt workflow, start by adding the MCP component to your flow and connecting it to your AI agent:

Click on the MCP component to open the configuration panel. In the system MCP configuration section, insert your MCP server details using this JSON format:
{
"mcp-vision": {
"transport": "streamable_http",
"url": "https://yourmcpserver.example/pathtothemcp/url"
}
}
Once configured, the AI agent can use this MCP as a tool with access to all its functions and capabilities. Remember to change “mcp-vision” to the actual name of your MCP server and replace the URL with your own MCP server URL.
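If you want to confirm the server is reachable at that URL before wiring it into a flow, a quick probe over the streamable_http transport looks roughly like this. This is a sketch assuming a recent version of the mcp Python SDK (which ships a Streamable HTTP client); the URL is the placeholder from the configuration above.

# Hypothetical connectivity check for a Streamable HTTP MCP endpoint.
# Assumes a recent `mcp` Python SDK release that includes mcp.client.streamable_http.
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

URL = "https://yourmcpserver.example/pathtothemcp/url"  # placeholder from the configuration above

async def main() -> None:
    async with streamablehttp_client(URL) as (read, write, _get_session_id):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Reachable; tools:", [tool.name for tool in tools.tools])

asyncio.run(main())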
Overview
| Section | Availability | Details/Notes |
| --- | --- | --- |
| Overview | ✅ | HuggingFace computer vision models as tools for LLMs via MCP |
| List of Prompts | ⛔ | No prompt templates documented |
| List of Resources | ⛔ | No explicit resources listed |
| List of Tools | ✅ | locate_objects, zoom_to_object |
| Securing API Keys | ⛔ | No API key instructions |
| Sampling Support (less important in evaluation) | ⛔ | Not mentioned |
| Roots Support | ⛔ | Not mentioned |
Overall, mcp-vision provides useful, direct integration with HuggingFace vision models but lacks documentation on resources, prompt templates, or advanced MCP features like roots or sampling. Its setup is well-documented for Claude Desktop but not for other platforms.
Our opinion
mcp-vision is a focused and practical MCP server for adding visual intelligence to AI workflows, especially in environments that support Docker. Its primary strengths are its clear tool offerings and straightforward setup for Claude Desktop, but it would benefit from richer documentation, especially around resources, prompt templates, and support for additional platforms and advanced MCP features.
MCP Score
| Has a LICENSE | ✅ MIT |
| --- | --- |
| Has at least one tool | ✅ |
| Number of Forks | 0 |
| Number of Stars | 23 |
Frequently asked questions
- What is the mcp-vision MCP Server?
mcp-vision is an open-source Model Context Protocol server that exposes HuggingFace computer vision models as tools for AI assistants and LLMs, enabling object detection, image cropping, and more in your AI workflows.
- Which tools does mcp-vision provide?
mcp-vision offers tools like locate_objects (for zero-shot object detection in images) and zoom_to_object (for cropping images to detected objects), accessible via the MCP interface.
- What are the main use cases for mcp-vision?
Use mcp-vision for automated object detection, vision-based workflow automation, interactive image exploration, and augmenting AI agents with visual reasoning and analysis capabilities.
- How do I set up mcp-vision with FlowHunt?
Add the MCP component to your FlowHunt flow and insert the mcp-vision server details in the configuration panel using the provided JSON format. Ensure your MCP server is running and reachable from FlowHunt.
- Do I need an API key for mcp-vision?
No API key or special credentials are required to run mcp-vision according to current documentation. Just ensure your Docker environment is configured and the server is accessible.
Integrate mcp-vision with FlowHunt
Supercharge your AI agents with object detection and image analysis using mcp-vision. Plug it into your FlowHunt flows for seamless multimodal reasoning.