Rendervid AI Integration - Generate Videos with Claude Code, Cursor & MCP


Introduction: AI-Powered Video Generation

Creating videos programmatically has traditionally required deep knowledge of video codecs, animation frameworks, and rendering pipelines. Rendervid eliminates this complexity by accepting JSON templates and outputting finished videos. When you combine this with AI agents that understand natural language, you get something powerful: the ability to describe a video in plain English and receive a rendered MP4 in return.

Rendervid bridges the gap between AI language models and video production. Instead of writing code, designing keyframes, or learning a video editor, you tell an AI agent what you want. The agent generates a valid JSON template, validates it, and renders the final output through Rendervid’s engine. The entire process happens in a single conversation.

This integration is built on the Model Context Protocol (MCP), an open standard that allows AI tools to interact with external services through a structured interface. Rendervid’s MCP server exposes 13 tools covering rendering, validation, template discovery, and documentation, giving AI agents everything they need to produce professional video content autonomously.


What Is the Model Context Protocol (MCP)?

The Model Context Protocol is an open standard developed to give AI assistants structured access to external tools and data sources. Rather than relying on AI models to guess at API formats or generate code that calls REST endpoints, MCP provides a typed, discoverable interface that AI agents can query at runtime.

For video generation, MCP solves a critical problem: AI agents need to know what is possible before they can generate valid output. Without MCP, an AI model would need to be trained on Rendervid’s specific template format, know every available animation preset, and understand the constraints of each layer type. With MCP, the agent simply calls get_capabilities and receives a complete description of the system, including JSON schemas for every component.

Why MCP Matters for AI Video Generation

  • Runtime Discovery: AI agents learn what Rendervid can do at the moment they connect, not at training time. This means new features are immediately available without retraining.
  • Type Safety: Every tool has a defined input and output schema. The AI agent knows exactly what parameters are required and what types they must be.
  • Validation Before Rendering: Instead of submitting a template and hoping it works, the agent can validate the template first and fix any issues before spending time on rendering.
  • Tool Composability: AI agents can chain tools together, calling list_examples to find a starting template, modifying it, calling validate_template to check it, and then calling render_video to produce the output. All in a single conversation turn.
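As a sketch, the chain in the last bullet can be modeled with stub functions. The function names mirror the MCP tools, but the bodies below are placeholders; a real agent would issue tools/call requests instead, so only the control flow is meant literally.

```python
# Sketch of the compose-and-render chain an AI agent follows.
# All four tool functions are stubs standing in for real MCP tool calls.

def list_examples(category):
    return [{"name": "instagram-story", "category": category}]

def get_example(name):
    return {"outputSettings": {"width": 1080, "height": 1920}, "scenes": []}

def validate_template(template):
    return {"valid": template.get("scenes") is not None, "errors": []}

def render_video(template):
    return "https://cdn.example.com/render/abc.mp4"  # placeholder URL

# Chain: discover -> load -> modify -> validate -> render
examples = list_examples("social-media")
template = get_example(examples[0]["name"])
template["scenes"] = [{"duration": 10, "layers": []}]  # the agent's edits

report = validate_template(template)
output_url = render_video(template) if report["valid"] else None
```

The key point is that validation gates rendering: the agent only spends render time on templates that pass.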

MCP Server Tools Reference

Rendervid’s MCP server exposes 13 tools organized into three categories: Rendering, Validation & Discovery, and Documentation. Each tool is designed to give AI agents maximum autonomy when generating video content.

Rendering Tools

These tools handle the actual production of video and image output from JSON templates.

render_video

Generates a complete video file from a JSON template. This is the primary rendering tool for producing MP4, WebM, or MOV output.

Parameters:

  • template (object, required) – The complete JSON template defining scenes, layers, animations, and output settings.
  • inputs (object, optional) – Key-value pairs for template variable substitution.
  • output_format (string, optional) – Output format: mp4, webm, or mov. Defaults to mp4.

Example usage by an AI agent:

{
  "tool": "render_video",
  "arguments": {
    "template": {
      "outputSettings": {
        "width": 1080,
        "height": 1920,
        "fps": 30,
        "duration": 10
      },
      "scenes": [
        {
          "duration": 10,
          "layers": [
            {
              "type": "text",
              "text": "Summer Sale - 50% Off",
              "fontSize": 72,
              "fontFamily": "Montserrat",
              "color": "#FFFFFF",
              "position": { "x": 540, "y": 960 },
              "animations": [
                {
                  "type": "fadeInUp",
                  "duration": 0.8,
                  "delay": 0.2
                }
              ]
            }
          ]
        }
      ]
    },
    "output_format": "mp4"
  }
}

Returns: A URL or file path to the rendered video file.


render_image

Generates a single frame or still image from a JSON template. Useful for creating thumbnails, social media graphics, poster frames, and static marketing materials.

Parameters:

  • template (object, required) – The JSON template defining the image composition.
  • inputs (object, optional) – Template variable substitution values.
  • output_format (string, optional) – Output format: png, jpeg, or webp. Defaults to png.
  • frame (number, optional) – Which frame to render (for extracting a specific moment from an animated template).

When to use render_image vs render_video:

  • Use render_image for static output: thumbnails, banners, social media posts, presentation slides.
  • Use render_video for anything with motion: animations, transitions, audio, video clips.

start_render_async

Starts an asynchronous render job for long-duration videos (typically over 30 seconds). Instead of waiting for the render to complete synchronously, this tool returns a job ID that you can poll with check_render_status.

Parameters:

  • template (object, required) – The complete JSON template.
  • inputs (object, optional) – Template variable values.
  • output_format (string, optional) – Desired output format.

Returns: A job_id string that can be used with check_render_status and list_render_jobs.

When to use async rendering:

  • Videos longer than 30 seconds
  • Templates with many scenes or complex animations
  • Batch rendering workflows where you want to submit multiple jobs and collect results later
  • Cloud rendering environments where long-running synchronous requests may time out

check_render_status

Checks the current status of an asynchronous render job started with start_render_async.

Parameters:

  • job_id (string, required) – The job ID returned by start_render_async.

Returns: An object containing:

  • status – One of queued, rendering, completed, or failed.
  • progress – A percentage (0-100) indicating render progress.
  • output_url – The URL of the finished video (only present when status is completed).
  • error – Error message if the job failed.

Example polling workflow:

AI Agent:
1. start_render_async → job_id: "abc-123"
2. check_render_status("abc-123") → status: "rendering", progress: 35
3. check_render_status("abc-123") → status: "rendering", progress: 78
4. check_render_status("abc-123") → status: "completed", output_url: "https://..."
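The polling workflow above can be written as a small helper. In this sketch, check_render_status returns a scripted sequence of responses; a real client would issue the MCP tool call and sleep between polls.

```python
# Scripted responses standing in for real check_render_status calls.
_responses = iter([
    {"status": "rendering", "progress": 35},
    {"status": "rendering", "progress": 78},
    {"status": "completed", "progress": 100,
     "output_url": "https://cdn.example.com/render/abc-123.mp4"},
])

def check_render_status(job_id):
    return next(_responses)

def wait_for_render(job_id, max_polls=20):
    """Poll until the job completes or fails; returns the final status object."""
    for _ in range(max_polls):
        status = check_render_status(job_id)
        if status["status"] in ("completed", "failed"):
            return status
        # A real client would pause here, e.g. time.sleep(2), before re-polling.
    raise TimeoutError(f"Job {job_id} did not finish within {max_polls} polls")

final = wait_for_render("abc-123")
```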

list_render_jobs

Lists all asynchronous rendering jobs, both active and completed. Useful for monitoring batch rendering operations or reviewing recent output.

Parameters:

  • status_filter (string, optional) – Filter by status: queued, rendering, completed, failed, or all. Defaults to all.
  • limit (number, optional) – Maximum number of jobs to return.

Returns: An array of job objects, each with job_id, status, progress, created_at, and output_url (if completed).


Validation & Discovery Tools

These tools help AI agents understand what Rendervid can do and verify that templates are correct before rendering.

validate_template

Validates a JSON template before rendering. This tool checks template structure, field types, value constraints, and even verifies that media URLs (images, videos, audio files) are accessible. Running validation before rendering prevents wasted time on templates that would fail during the render process.

Parameters:

  • template (object, required) – The JSON template to validate.
  • check_urls (boolean, optional) – Whether to verify media URLs are accessible. Defaults to true.

Returns: An object containing:

  • valid – Boolean indicating whether the template is valid.
  • errors – Array of error objects with path, message, and severity for each issue found.
  • warnings – Array of warning objects for non-critical issues (e.g., unused variables, very large dimensions).

What validation catches:

  • Missing required fields (e.g., a scene without duration)
  • Invalid field types (e.g., a string where a number is expected)
  • Unknown layer types or animation presets
  • Broken or inaccessible media URLs (images, videos, audio files)
  • Out-of-range values (e.g., negative dimensions, fps above maximum)
  • Template variable syntax errors

Example validation response:

{
  "valid": false,
  "errors": [
    {
      "path": "scenes[0].layers[2].src",
      "message": "URL returned HTTP 404: https://example.com/missing-image.png",
      "severity": "error"
    },
    {
      "path": "scenes[1].duration",
      "message": "Scene duration must be a positive number",
      "severity": "error"
    }
  ],
  "warnings": [
    {
      "path": "outputSettings.width",
      "message": "Width 7680 is very large and may result in slow rendering",
      "severity": "warning"
    }
  ]
}
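A client can gate rendering on this response shape. The helper below, a minimal sketch assuming the valid/errors/warnings fields shown above, separates hard errors from non-blocking warnings.

```python
def summarize_validation(report):
    """Decide whether to render and collect human-readable issue lines.

    Assumes the response shape shown above: valid (bool) plus errors and
    warnings arrays of {path, message, severity} objects.
    """
    issues = [f"{e['path']}: {e['message']}" for e in report.get("errors", [])]
    notes = [f"{w['path']}: {w['message']}" for w in report.get("warnings", [])]
    return report.get("valid", False), issues, notes

report = {
    "valid": False,
    "errors": [{"path": "scenes[1].duration",
                "message": "Scene duration must be a positive number",
                "severity": "error"}],
    "warnings": [],
}
ok, issues, notes = summarize_validation(report)
# ok is False, so the agent fixes scenes[1].duration before rendering.
```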

get_capabilities

Returns a comprehensive description of everything Rendervid can do. This is typically the first tool an AI agent calls when starting a video generation task. The response includes available layer types, animation presets, easing functions, filters, output formats, and their JSON schemas.

Parameters: None.

Returns: A structured object containing:

  • layerTypes – All available layer types (text, image, video, shape, audio, group, lottie, custom) with their JSON schemas and configurable properties.
  • animations – All animation presets grouped by category (entrance, exit, emphasis, keyframe) with descriptions and configurable parameters.
  • easingFunctions – All 30+ easing functions with descriptions and usage examples.
  • filters – Available visual filters (blur, brightness, contrast, saturate, grayscale, sepia, etc.) with parameter ranges.
  • outputFormats – Supported output formats for video and image rendering with their constraints.
  • inputTypes – Template variable types and validation rules.
  • sceneTransitions – All 17 scene transition types with their parameters.

Why this tool is critical for AI agents:

The capabilities response is a self-describing API. An AI agent does not need to be pre-trained on Rendervid’s template format. It can call get_capabilities at runtime, receive the complete schema, and generate valid templates on its first attempt. When Rendervid adds new features, animations, or layer types, AI agents automatically gain access to them through this tool without any code changes.
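One way an agent can use the capabilities payload is to pre-check a planned layer before generating the full template. The payload below is a trimmed, illustrative subset of what get_capabilities returns, not its exact shape.

```python
# Illustrative subset of a capabilities payload.
capabilities = {
    "layerTypes": {"text": {}, "image": {}, "shape": {}},
    "animations": {"entrance": ["fadeInUp", "scaleIn"], "exit": ["fadeOut"]},
}

def layer_is_supported(layer, caps):
    """Check that a planned layer uses only known types and animation presets."""
    if layer["type"] not in caps["layerTypes"]:
        return False
    known = {name for names in caps["animations"].values() for name in names}
    return all(a["type"] in known for a in layer.get("animations", []))

planned = {"type": "text", "animations": [{"type": "fadeInUp"}]}
bogus = {"type": "text", "animations": [{"type": "spinWildly"}]}

ok = layer_is_supported(planned, capabilities)
bad = layer_is_supported(bogus, capabilities)
```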


get_example

Loads a specific example template by name. AI agents use this to retrieve a working template as a starting point, then modify it to match the user’s requirements.

Parameters:

  • name (string, required) – The example template name (e.g., instagram-story, product-showcase, animated-bar-chart).

Returns: The complete JSON template for the requested example, ready to render or modify.

Example:

AI Agent calls: get_example("instagram-story")
Returns: Complete 1080x1920 Instagram story template with text layers,
         background image, and entrance animations

list_examples

Browses the full catalog of 100+ example templates organized by category. AI agents use this to find relevant starting templates for the user’s request.

Parameters:

  • category (string, optional) – Filter by category (e.g., social-media, marketing, data-visualization, typography, e-commerce).

Returns: An array of example metadata objects, each with:

  • name – Template identifier for use with get_example.
  • category – Template category.
  • description – What the template creates.
  • dimensions – Output width and height.
  • duration – Template duration in seconds.

Documentation Tools

These tools provide detailed reference documentation that AI agents can consult when constructing templates.

get_component_docs

Returns detailed documentation for a specific component or layer type. Includes property descriptions, required vs optional fields, default values, and usage examples.

Parameters:

  • component (string, required) – The component/layer type name (e.g., text, image, video, shape, audio, group, lottie, custom, AnimatedLineChart, TypewriterEffect).

Returns: Comprehensive documentation including:

  • Property table with types, defaults, and descriptions
  • JSON schema for the component
  • Usage examples
  • Notes on browser vs Node.js rendering differences

get_animation_docs

Returns the complete animation effects reference, including all entrance, exit, emphasis, and keyframe animation presets.

Parameters:

  • animation (string, optional) – Specific animation name to get detailed docs for (e.g., fadeInUp, bounceIn, slideOutLeft). If omitted, returns the full animation catalog.

Returns: Animation documentation including:

  • Animation name and category (entrance, exit, emphasis, keyframe)
  • Description of the visual effect
  • Configurable parameters (duration, delay, easing)
  • Default values
  • Recommended use cases

get_component_defaults

Returns the default values and full JSON schema for a specific component type. AI agents use this to understand what a minimal valid component looks like and what properties they can override.

Parameters:

  • component (string, required) – The component/layer type name.

Returns: A JSON object with:

  • defaults – Complete default values for every property
  • schema – JSON Schema defining the component’s structure, types, and constraints
  • required – List of required properties

Example response for a text layer:

{
  "defaults": {
    "type": "text",
    "text": "",
    "fontSize": 24,
    "fontFamily": "Arial",
    "color": "#000000",
    "fontWeight": "normal",
    "textAlign": "center",
    "position": { "x": 0, "y": 0 },
    "opacity": 1,
    "rotation": 0,
    "animations": []
  },
  "required": ["type", "text"],
  "schema": {
    "type": "object",
    "properties": {
      "text": { "type": "string", "description": "The text content to display" },
      "fontSize": { "type": "number", "minimum": 1, "maximum": 500 },
      "fontFamily": { "type": "string", "description": "Google Font name or system font" },
      "color": { "type": "string", "pattern": "^#[0-9a-fA-F]{6}$" }
    }
  }
}
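An agent can overlay the user’s requested properties on these defaults to produce a complete layer. The sketch below uses a shallow merge, which suffices when nested objects such as position are replaced wholesale; the defaults dict is abbreviated from the example above.

```python
# Build a complete layer by overlaying user-supplied properties on the
# defaults returned by get_component_defaults (abbreviated here).
defaults = {
    "type": "text", "text": "", "fontSize": 24, "fontFamily": "Arial",
    "color": "#000000", "position": {"x": 0, "y": 0}, "animations": [],
}

def build_layer(defaults, overrides):
    """Shallow-merge overrides onto defaults (nested objects replaced whole)."""
    layer = dict(defaults)
    layer.update(overrides)
    return layer

layer = build_layer(defaults, {
    "text": "Summer Sale",
    "fontSize": 72,
    "position": {"x": 540, "y": 960},
})
```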

get_easing_docs

Returns the complete reference for all available easing functions. Easing functions control the acceleration curve of animations, determining whether they start slow, end slow, bounce, or follow an elastic curve.

Parameters:

  • easing (string, optional) – Specific easing function name for detailed documentation. If omitted, returns the full list.

Returns: Documentation for each easing function including:

  • Function name (e.g., easeInOutCubic, easeOutBounce, spring)
  • Mathematical description of the curve
  • Visual description of the motion feel
  • Recommended use cases
  • CSS equivalent (where applicable)

Setting Up AI Integration

Connecting Rendervid to your AI tool requires adding the MCP server to your tool’s configuration. The setup process varies slightly between tools, but the core concept is the same: point your AI tool at Rendervid’s MCP server entry point.

Prerequisites

Before configuring any AI tool, make sure you have:

  1. Node.js 18+ installed on your system
  2. Rendervid cloned and built from the GitHub repository:
git clone https://github.com/AceDZN/rendervid.git
cd rendervid
npm install
cd mcp
npm install
npm run build
  3. FFmpeg installed (required for video output):
# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt install ffmpeg

# Windows (with Chocolatey)
choco install ffmpeg

Claude Desktop / Claude Code

Add the Rendervid MCP server to your Claude Desktop configuration file.

Configuration file location:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

Configuration:

{
  "mcpServers": {
    "rendervid": {
      "command": "node",
      "args": ["/path/to/rendervid/mcp/build/index.js"],
      "env": {}
    }
  }
}

Replace /path/to/rendervid with the actual path to your Rendervid installation.

For Claude Code (CLI), add the same configuration to your project’s .claude/mcp.json file or your global Claude Code settings. Claude Code will automatically detect the MCP server and expose all 13 tools during your coding sessions.

After saving the configuration, restart Claude Desktop or Claude Code. You can verify the connection by asking Claude: “What Rendervid tools are available?” Claude should list all 13 MCP tools.

Cursor IDE

Add the Rendervid MCP server to Cursor’s MCP configuration.

Configuration file: .cursor/mcp.json in your project root (or global Cursor settings).

{
  "mcpServers": {
    "rendervid": {
      "command": "node",
      "args": ["/path/to/rendervid/mcp/build/index.js"]
    }
  }
}

After saving, restart Cursor. The Rendervid tools will be available in Cursor’s AI assistant, allowing you to generate videos directly from your editor.

Windsurf IDE

Windsurf supports MCP servers through its AI configuration. Add the Rendervid server to your Windsurf MCP settings:

{
  "mcpServers": {
    "rendervid": {
      "command": "node",
      "args": ["/path/to/rendervid/mcp/build/index.js"]
    }
  }
}

Consult Windsurf’s documentation for the exact configuration file location, as it may vary by version and operating system.

Generic MCP Setup

Any tool that implements the MCP client specification can connect to Rendervid’s MCP server. The server communicates over stdio (standard input/output), which is the default MCP transport.

To integrate with a custom MCP client:

  1. Spawn the MCP server process:
    node /path/to/rendervid/mcp/build/index.js
    
  2. Communicate over stdin/stdout using the MCP JSON-RPC protocol.
  3. Call tools/list to discover available tools.
  4. Call tools/call with the tool name and arguments to execute any tool.

The MCP server is stateless. Each tool call is independent, and the server can handle concurrent requests from multiple clients.
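The steps above can be sketched by constructing the JSON-RPC messages a client writes to the server’s stdin. Over the stdio transport, each message is one JSON object per line; the tools/list and tools/call method names follow the MCP specification, and a full session also begins with an initialize handshake, omitted here for brevity.

```python
import json

def jsonrpc(method, params=None, msg_id=1):
    """Build one newline-delimited JSON-RPC 2.0 message for the stdio transport."""
    msg = {"jsonrpc": "2.0", "id": msg_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# 1. Discover the available tools.
list_msg = jsonrpc("tools/list", msg_id=1)

# 2. Call a tool by name with its arguments.
call_msg = jsonrpc("tools/call", {
    "name": "validate_template",
    "arguments": {"template": {"outputSettings": {}, "scenes": []}},
}, msg_id=2)

# A real client would spawn the server with subprocess.Popen, write these
# lines to its stdin, and read responses line by line from its stdout.
```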


AI Workflow: End-to-End Examples

The following examples show how AI agents use Rendervid’s MCP tools to go from a natural language prompt to a finished video.

Example 1: Social Media Content Creation

User prompt: “Create a 10-second Instagram story promoting a summer sale with animated text and a gradient background”

AI agent workflow:

Step 1 – Discover capabilities:

The agent calls get_capabilities to learn about available layer types, animation presets, and output constraints. It discovers that text and shape layers are available, fadeInUp and scaleIn animations exist, and Instagram stories use 1080x1920 resolution.

Step 2 – Find a starting template:

The agent calls list_examples with category: "social-media" and finds an instagram-story template. It then calls get_example("instagram-story") to load the full template JSON.

Step 3 – Build the template:

Using the example as a reference, the agent constructs a custom template:

{
  "outputSettings": {
    "width": 1080,
    "height": 1920,
    "fps": 30,
    "duration": 10
  },
  "scenes": [
    {
      "duration": 10,
      "layers": [
        {
          "type": "shape",
          "shapeType": "rectangle",
          "width": 1080,
          "height": 1920,
          "gradient": {
            "type": "linear",
            "angle": 135,
            "stops": [
              { "color": "#FF6B35", "position": 0 },
              { "color": "#F72585", "position": 0.5 },
              { "color": "#7209B7", "position": 1 }
            ]
          }
        },
        {
          "type": "text",
          "text": "SUMMER SALE",
          "fontSize": 96,
          "fontFamily": "Montserrat",
          "fontWeight": "bold",
          "color": "#FFFFFF",
          "position": { "x": 540, "y": 700 },
          "animations": [
            { "type": "fadeInUp", "duration": 0.8, "delay": 0.3 }
          ]
        },
        {
          "type": "text",
          "text": "50% OFF EVERYTHING",
          "fontSize": 64,
          "fontFamily": "Montserrat",
          "color": "#FFE066",
          "position": { "x": 540, "y": 850 },
          "animations": [
            { "type": "fadeInUp", "duration": 0.8, "delay": 0.6 }
          ]
        },
        {
          "type": "text",
          "text": "Shop Now \u2192",
          "fontSize": 48,
          "fontFamily": "Montserrat",
          "color": "#FFFFFF",
          "position": { "x": 540, "y": 1200 },
          "animations": [
            { "type": "scaleIn", "duration": 0.6, "delay": 1.2 }
          ]
        }
      ]
    }
  ]
}

Step 4 – Validate:

The agent calls validate_template with the template JSON. The response comes back as valid: true with no errors.

Step 5 – Render:

The agent calls render_video with the validated template and receives a URL to the finished MP4 file.


Example 2: Marketing Video Automation

User prompt: “Generate a product showcase video for our new headphones. Use this product image: https://example.com/headphones.png. The product name is ‘SoundPro X1’ and the price is $299.”

AI agent workflow:

  1. get_capabilities – Learns about image layers, text styling, and animation options.
  2. list_examples – Finds a product-showcase template in the e-commerce category.
  3. get_example("product-showcase") – Loads the complete product showcase template, which uses template variables for product name, image, and price.
  4. Modifies the template – Updates the inputs with the user’s product data:
    {
      "inputs": {
        "productName": "SoundPro X1",
        "productImage": "https://example.com/headphones.png",
        "price": "$299",
        "tagline": "Premium Sound, Redefined"
      }
    }
    
  5. validate_template – Verifies the template and confirms that https://example.com/headphones.png is accessible.
  6. render_video – Produces the final product showcase video.

This workflow demonstrates how AI agents leverage template variables to create personalized content from reusable templates. The same product showcase template can generate hundreds of unique videos by swapping the inputs.
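The batch pattern described above, one template driven by many inputs objects, can be sketched like this. Here render_video is a stub and the product list is invented for illustration; a real agent would submit each pair through the MCP render tools.

```python
# One reusable template, many personalized renders.
def render_video(template, inputs):
    # Stub: a real call would return the URL of the rendered MP4.
    return f"render://{inputs['productName'].replace(' ', '-').lower()}.mp4"

showcase_template = {"outputSettings": {"width": 1080, "height": 1080},
                     "scenes": []}  # would come from get_example(...)

products = [
    {"productName": "SoundPro X1",
     "productImage": "https://example.com/headphones.png", "price": "$299"},
    {"productName": "SoundPro X2",
     "productImage": "https://example.com/headphones2.png", "price": "$349"},
]

urls = [render_video(showcase_template, inputs) for inputs in products]
```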


Example 3: Data Visualization Generation

User prompt: “Create an animated bar chart showing quarterly revenue: Q1: $1.2M, Q2: $1.8M, Q3: $2.1M, Q4: $2.7M”

AI agent workflow:

  1. get_capabilities – Discovers the custom layer type and the AnimatedLineChart built-in component.
  2. get_component_docs("AnimatedLineChart") – Reads the documentation for the chart component, learning about data format, color configuration, axis labels, and animation options.
  3. get_component_defaults("AnimatedLineChart") – Gets the default values and JSON schema to understand the minimum required configuration.
  4. Builds a template with a custom component layer:
    {
      "type": "custom",
      "component": "AnimatedLineChart",
      "props": {
        "data": [
          { "label": "Q1", "value": 1200000 },
          { "label": "Q2", "value": 1800000 },
          { "label": "Q3", "value": 2100000 },
          { "label": "Q4", "value": 2700000 }
        ],
        "colors": ["#4361EE", "#3A0CA3", "#7209B7", "#F72585"],
        "title": "Quarterly Revenue 2025",
        "yAxisLabel": "Revenue (USD)",
        "animationDuration": 2
      }
    }
    
  5. validate_template – Confirms the template structure is correct.
  6. render_video – Generates the animated chart video.

Self-Describing API: How Capabilities Make AI Agents Effective

The get_capabilities tool is the cornerstone of Rendervid’s AI integration. It implements a self-describing API pattern, where the system tells AI agents exactly what it can do, what parameters are required, and what values are valid. This eliminates the need for AI models to memorize or be trained on Rendervid’s specific API.

What the Capabilities Response Contains

When an AI agent calls get_capabilities, it receives a structured response covering every aspect of the rendering system:

Layer Types with JSON Schemas:

{
  "layerTypes": {
    "text": {
      "description": "Renders text with full styling control",
      "schema": {
        "properties": {
          "text": { "type": "string", "required": true },
          "fontSize": { "type": "number", "default": 24, "min": 1, "max": 500 },
          "fontFamily": { "type": "string", "default": "Arial" },
          "color": { "type": "string", "format": "hex-color" },
          "position": { "type": "object", "properties": { "x": {}, "y": {} } },
          "animations": { "type": "array", "items": { "$ref": "#/animations" } }
        }
      }
    },
    "image": { "..." : "..." },
    "video": { "..." : "..." },
    "shape": { "..." : "..." },
    "audio": { "..." : "..." },
    "group": { "..." : "..." },
    "lottie": { "..." : "..." },
    "custom": { "..." : "..." }
  }
}

Animation Presets:

The capabilities response lists every animation preset with its category, configurable parameters, and description. An AI agent receiving this data knows that fadeInUp is an entrance animation with duration, delay, and easing parameters, and that it moves the element upward while fading it in.

Easing Functions:

All 30+ easing functions are listed with descriptions, so the AI agent can select the right curve for each animation. For example, easeOutBounce is described as simulating a bouncing effect at the end of the animation, which the agent can recommend for playful or attention-grabbing content.

Filters and Effects:

Visual filters like blur, brightness, contrast, saturate, grayscale, and sepia are documented with their parameter ranges, letting the AI agent apply post-processing effects to any layer.

Why Self-Describing APIs Matter

Traditional APIs require documentation that AI models may or may not have seen during training. A self-describing API provides documentation at runtime, ensuring the AI agent always has current, accurate information. When Rendervid adds a new animation preset or layer type, every connected AI agent immediately sees it through get_capabilities. No documentation updates, no retraining, no version mismatches.


Best Practices for AI Video Generation

Follow these guidelines to get the best results when using AI agents to generate Rendervid videos.

1. Always Validate Before Rendering

Call validate_template before every render. Rendering is computationally expensive, and validation is nearly instant. The validation tool catches issues that would cause a render to fail or produce unexpected output:

  • Broken media URLs (images, videos, audio files that return 404)
  • Invalid JSON structure or missing required fields
  • Out-of-range values for dimensions, font sizes, or durations
  • Unknown animation presets or layer types

A typical AI workflow should always include validation as a step before calling render_video or render_image.

2. Start from Examples

Instead of building templates from scratch, AI agents should use list_examples and get_example to find a relevant starting template. Example templates are tested and known to produce good output. Starting from an example and modifying it is faster and less error-prone than generating an entirely new template structure.

Recommended approach:

  1. Call list_examples with a relevant category
  2. Call get_example for the closest matching template
  3. Modify the template to match the user’s specific requirements
  4. Validate and render

3. Use Descriptive Prompts

When requesting videos from an AI agent, be specific about:

  • Dimensions and platform – “1080x1920 Instagram story” is better than “a vertical video”
  • Duration – “10-second intro” is better than “a short video”
  • Style and mood – “dark background with neon text and bouncing animations” gives the AI agent clear direction
  • Content structure – “Three text lines appearing one after another with fade-in animations” is more actionable than “some animated text”

4. Iterate on Templates

Video generation is iterative. After the first render, review the output and ask the AI agent to adjust specific elements:

  • “Make the title text larger and change the color to gold”
  • “Slow down the entrance animations and add a 0.5-second delay between each line”
  • “Add a subtle blur filter to the background image”
  • “Change the easing from linear to easeOutCubic for smoother motion”

The AI agent can modify the existing template and re-render without starting over, making iteration fast and efficient.

5. Leverage Template Variables for Batch Production

If you need multiple variations of the same video (different products, different languages, different data), ask the AI agent to create a template with variables. This lets you render many videos from a single template by passing different inputs:

{
  "inputs": {
    "productName": "Running Shoes Pro",
    "productImage": "https://example.com/shoes.png",
    "price": "$149",
    "tagline": "Run Faster, Go Further"
  }
}

6. Use Async Rendering for Long Videos

For videos longer than 30 seconds or templates with complex animations, use start_render_async instead of render_video. This prevents timeouts and allows the AI agent to perform other tasks while the video renders in the background.


Template Discovery: Browsing 100+ Examples

Rendervid includes over 100 example templates spanning 32 categories, giving AI agents a rich library of starting points for any video generation task.

How AI Agents Discover Templates

The template discovery workflow uses two tools in sequence:

  1. list_examples – Browse the catalog with optional category filtering to find relevant templates.
  2. get_example – Load the full JSON template for a specific example.

Template Categories

AI agents can filter examples by category to quickly find relevant starting points:

Category           | Description                | Example Templates
social-media       | Platform-optimized content | Instagram story, TikTok video, YouTube thumbnail
e-commerce         | Product and sales content  | Product showcase, flash sale, price comparison
marketing          | Promotional materials      | Brand intro, testimonial, feature highlight
data-visualization | Charts and infographics    | Bar chart, line graph, pie chart, dashboard
typography         | Text-focused designs       | Kinetic text, quote cards, title sequences
education          | Learning materials         | Explainer video, step-by-step tutorial, diagram
presentation       | Slide-style content        | Pitch deck slides, conference intro, keynote
abstract           | Visual effects and art     | Particle systems, wave visualizations, gradients

Template Discovery in Practice

When a user asks for “an animated chart showing sales data,” the AI agent:

  1. Calls list_examples(category: "data-visualization") and receives a list of chart-related templates.
  2. Identifies animated-bar-chart as the best match based on the description.
  3. Calls get_example("animated-bar-chart") to load the complete template.
  4. Examines the template structure to understand how data is formatted.
  5. Replaces the example data with the user’s actual sales figures.
  6. Validates and renders.

This discovery-first approach means AI agents consistently produce well-structured templates because they are building on tested examples rather than generating template JSON from scratch.

Exploring All Available Templates

To see every available template, an AI agent can call list_examples without a category filter. The response includes metadata for all 100+ templates, allowing the agent to search across categories for the best match. Each entry includes the template name, category, description, dimensions, and duration, giving the agent enough information to make an informed selection.
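Selection over the metadata list can be sketched as a simple keyword match. The catalog entries below are illustrative stand-ins for a list_examples response; a real agent would score the full 100+ entries the same way.

```python
def pick_template(catalog, keywords):
    """Return the catalog entry whose name/description matches the most keywords."""
    def score(entry):
        text = f"{entry['name']} {entry['description']}".lower()
        return sum(1 for kw in keywords if kw.lower() in text)
    return max(catalog, key=score)

# Illustrative metadata entries, shaped like the list_examples response.
catalog = [
    {"name": "animated-bar-chart", "category": "data-visualization",
     "description": "Bars grow to their values with staggered animation"},
    {"name": "quote-card", "category": "typography",
     "description": "Static quote with author attribution"},
]

best = pick_template(catalog, ["animated", "chart", "sales"])
```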


Supported AI Tools

Rendervid’s MCP server works with any tool that implements the Model Context Protocol client specification. The following tools have been tested and confirmed to work with Rendervid:

| AI Tool | Type | MCP Support | Configuration File |
| --- | --- | --- | --- |
| Claude Desktop | Desktop app | Native | claude_desktop_config.json |
| Claude Code | CLI | Native | .claude/mcp.json |
| Cursor | IDE | Native | .cursor/mcp.json |
| Windsurf | IDE | Native | MCP settings |
| Google Antigravity | Cloud IDE | Native | MCP settings |

Because MCP is an open standard, any future tool that adds MCP client support will automatically be compatible with Rendervid’s MCP server. No changes to the server or its tools are required.


Next Steps

  • Rendervid Overview – Learn about all Rendervid features, output formats, and architecture.
  • Template System – Deep dive into JSON template structure, variables, and the input system.
  • Components Reference – Documentation for all layer types and custom React components.
  • Deployment Guide – Deploy Rendervid to AWS Lambda, Azure Functions, Google Cloud Run, or Docker for cloud-scale rendering.
  • GitHub Repository – Source code, issue tracker, and community contributions.

Frequently Asked Questions

How does Rendervid integrate with AI agents?

Rendervid provides an MCP (Model Context Protocol) server with 11 tools that AI agents can use to generate videos. AI agents like Claude Code, Cursor, and Windsurf can discover available features, browse example templates, validate templates, and render videos—all through natural language commands.

Which AI tools are compatible with Rendervid?

Rendervid works with any MCP-compatible AI tool, including Claude Desktop, Claude Code (CLI), Cursor IDE, Windsurf IDE, and Google Antigravity. The MCP server exposes a standardized interface that any MCP client can use.

How do I set up Rendervid with Claude Code?

Add the Rendervid MCP server to your project's .claude/mcp.json (or, for Claude Desktop, to claude_desktop_config.json) by specifying the path to the MCP server's index.js file. Once configured, Claude can automatically discover and use all 11 rendering tools.
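As a sketch, a Claude Desktop entry might look like the following; the server name and path are placeholders you would adapt to your own install location.

```json
{
  "mcpServers": {
    "rendervid": {
      "command": "node",
      "args": ["/path/to/rendervid-mcp/index.js"]
    }
  }
}
```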

Can AI agents validate templates before rendering?

Yes, the validate_template tool checks template structure, field types, and even validates media URLs to ensure they're accessible. This prevents rendering failures and helps AI agents catch errors before spending time on rendering.
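The validate-then-render gate an agent applies can be sketched as follows. The `validate_template` and `render` stubs and the `valid`/`errors` field names are illustrative assumptions; only the tool names come from the article.

```python
# Hypothetical validate-then-render gate. The stub implementations and
# response field names (valid, errors) are assumptions for illustration.

def validate_template(template):
    """Stand-in for the validate_template MCP tool."""
    errors = []
    if "layers" not in template:
        errors.append("missing 'layers'")
    return {"valid": not errors, "errors": errors}

def render(template):
    """Stand-in for the render MCP tool."""
    return "output.mp4"

def safe_render(template):
    report = validate_template(template)
    if not report["valid"]:
        # Surface errors to the agent instead of wasting a render pass.
        return {"status": "invalid", "errors": report["errors"]}
    return {"status": "rendered", "file": render(template)}

print(safe_render({}))              # caught before rendering
print(safe_render({"layers": []}))  # passes validation, renders
```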

What can I create with AI agents and Rendervid?

Anything from social media content (Instagram stories, TikTok videos, YouTube thumbnails) to marketing materials (product showcases, sale announcements), data visualizations (animated charts), educational content, and more. The AI agent creates the JSON template from your natural language description and renders it into a video or image.

How does the self-describing API help AI agents?

The get_capabilities tool returns complete information about available layer types, animation presets, easing functions, filters, input types, and output formats—all with JSON schemas. This allows AI agents to understand exactly what's possible and generate valid templates without hardcoded knowledge of the API.
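The shape of a get_capabilities response might resemble the following truncated sketch. The top-level keys mirror the categories named above, but every key and value shown is an illustrative assumption, not Rendervid's actual schema.

```json
{
  "layerTypes": ["text", "image", "video", "shape"],
  "animationPresets": ["fadeIn", "slideUp"],
  "easingFunctions": ["linear", "easeInOut"],
  "filters": ["blur", "grayscale"],
  "inputTypes": ["string", "number", "color"],
  "outputFormats": ["mp4", "gif", "png"]
}
```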

Let us build your own AI Team

We help companies like yours develop smart chatbots, MCP servers, AI tools, and other kinds of AI automation to replace humans in repetitive tasks within your organization.

Learn more
