Rendervid Deployment - Browser, Node.js, Cloud & Docker Rendering


Introduction

Rendervid is designed to render anywhere your workflow demands. Whether you need instant previews in the browser, production-grade video encoding on a server, or massively parallel rendering across cloud infrastructure, Rendervid provides a dedicated package for each environment. Every deployment target shares the same template system and component library, so a template that works in the browser works identically on AWS Lambda or in a Docker container.

This guide covers all four deployment environments, the rendering options available in each, and advanced features like motion blur, GIF export, and performance optimization. By the end, you will know exactly which deployment path fits your project and how to configure it.

                        +---------------------+
                        |   JSON Template     |
                        +----------+----------+
                                   |
              +--------------------+--------------------+
              |                    |                    |
      +-------v----------+  +------v---------+  +-------v---------+
      |      Browser     |  |    Node.js     |  |      Cloud      |
      | @rendervid/      |  | @rendervid/    |  | @rendervid/     |
      | renderer-browser |  | renderer-node  |  | cloud-rendering |
      +--------+---------+  +------+---------+  +-------+---------+
               |                   |                    |
         Canvas / WebM    FFmpeg / Playwright   Parallel Workers
               |                   |                    |
      +--------v---------+  +------v---------+  +-------v---------+
      | MP4, WebM, PNG,  |  | MP4, WebM, MOV,|  | AWS Lambda      |
      | JPEG, WebP       |  | GIF, H.265     |  | Azure Functions |
      +------------------+  +----------------+  | GCP Functions   |
                                                | Docker          |
                                                +-----------------+

Browser Rendering

The @rendervid/renderer-browser package handles client-side rendering entirely within the user’s browser. No server infrastructure is required. This makes it the fastest path from template to preview.

When to Use Browser Rendering

  • Real-time previews during template editing in the visual editor
  • Web applications that need to generate video or image assets on the fly
  • Prototyping new templates before committing to server-side rendering
  • Lightweight exports where MP4, WebM, PNG, JPEG, or WebP output is sufficient

Installation

npm install @rendervid/renderer-browser

How It Works

Browser rendering uses the HTML Canvas API to draw each frame of the template. The renderer walks through every scene and layer, applies animations and easing functions, composites the result onto a canvas element, and captures each frame. For video output, the frames are encoded using the browser’s built-in MediaRecorder API (WebM) or a WebAssembly-based MP4 encoder.
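
The frame walk described above amounts to a loop over evenly spaced timestamps. As a rough sketch (the helper names here are illustrative, not part of the public API):

```typescript
interface Scene { duration: number }
interface Template { width: number; height: number; fps: number; scenes: Scene[] }

// Total frames = sum of scene durations (seconds) x fps.
function totalFrames(template: Template): number {
  const seconds = template.scenes.reduce((sum, s) => sum + s.duration, 0);
  return Math.round(seconds * template.fps);
}

// The timestamps the renderer walks through, one per captured frame.
function frameTimes(template: Template): number[] {
  return Array.from({ length: totalFrames(template) }, (_, i) => i / template.fps);
}
```

For the 1080p/30fps/5-second template used later in this section, this yields 150 frames at t = 0, 1/30, 2/30, and so on.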

Supported Output Formats

Format | Extension | Notes
------ | --------- | -----
MP4    | .mp4      | H.264 via WebAssembly encoder
WebM   | .webm     | VP8/VP9 via MediaRecorder API
PNG    | .png      | Single frame or image sequence
JPEG   | .jpeg     | Single frame, configurable quality
WebP   | .webp     | Single frame, smaller file size

Code Example

import { BrowserRenderer } from "@rendervid/renderer-browser";

const renderer = new BrowserRenderer();

const template = {
  width: 1920,
  height: 1080,
  fps: 30,
  scenes: [
    {
      duration: 5,
      layers: [
        {
          type: "text",
          text: "Hello from the Browser",
          fontSize: 72,
          color: "#ffffff",
          position: { x: 960, y: 540 },
          animation: {
            entrance: { type: "fadeIn", duration: 1 },
          },
        },
      ],
    },
  ],
};

// Render to a canvas element for preview
const canvas = document.getElementById("preview") as HTMLCanvasElement;
await renderer.preview(template, canvas);

// Export as MP4
const mp4Blob = await renderer.render(template, {
  format: "mp4",
  quality: "standard",
});

// Export a single frame as PNG
const pngBlob = await renderer.renderFrame(template, {
  format: "png",
  frameNumber: 0,
});

Browser Rendering Limitations

  • No FFmpeg access, so H.265/HEVC and MOV are not available
  • GIF export requires the Node.js renderer for palette optimization
  • Maximum resolution depends on the browser’s Canvas size limits (typically 4096x4096 or 8192x8192)
  • Rendering speed depends on the client device’s CPU and GPU

Node.js Rendering

The @rendervid/renderer-node package provides server-side rendering with full FFmpeg integration. It uses Playwright or Puppeteer to render each frame in a headless browser, then pipes the frames to FFmpeg for professional-grade video encoding.

When to Use Node.js Rendering

  • Production video encoding with full codec support (H.264, H.265, VP9)
  • Batch processing hundreds or thousands of templates in automated pipelines
  • REST APIs that accept template JSON and return rendered video
  • CI/CD pipelines for automated content generation
  • GIF export with palette optimization and dithering control

Installation

# Install the renderer
npm install @rendervid/renderer-node

# Install Playwright (includes browser binaries)
npx playwright install chromium

# Install FFmpeg (required for video encoding)
# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt-get install ffmpeg

# Windows (via Chocolatey)
choco install ffmpeg

Supported Output Formats

Format | Extension | Codec         | Notes
------ | --------- | ------------- | -----
MP4    | .mp4      | H.264         | Universal compatibility
MP4    | .mp4      | H.265/HEVC    | 50% smaller files, newer devices
WebM   | .webm     | VP8/VP9       | Web-optimized
MOV    | .mov      | ProRes        | Professional editing workflows
GIF    | .gif      | Palette-based | Animated with optimization
PNG    | .png      | Lossless      | Image sequence or single frame
JPEG   | .jpeg     | Lossy         | Configurable quality
WebP   | .webp     | Lossy/Lossless | Modern web format

Rendering Quality Presets

Rendervid provides four quality presets that control encoding parameters:

Preset   | Bitrate | Use Case
-------- | ------- | --------
draft    | Low     | Fast previews during development
standard | Medium  | General-purpose output, good quality/size
high     | High    | Marketing materials, final deliverables
lossless | Maximum | Archival, further editing, no quality loss

GPU Acceleration

The Node.js renderer supports hardware acceleration to offload encoding to the GPU. This significantly reduces render time for complex templates with many layers, high resolutions, and effects.

const result = await renderer.render(template, {
  format: "mp4",
  quality: "high",
  outputPath: "/output/video.mp4",
  hardwareAcceleration: true,
});

GPU acceleration is available on systems with compatible NVIDIA (NVENC), AMD (AMF), or Intel (Quick Sync) hardware. FFmpeg must be compiled with the corresponding encoder support.
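
As a sketch of how such a selection might look, here is an illustrative mapping from GPU vendor to FFmpeg's standard H.264 hardware encoder names (`h264_nvenc`, `h264_amf`, and `h264_qsv` are real FFmpeg encoders; the helper itself is not a Rendervid API):

```typescript
// Vendor -> FFmpeg H.264 hardware encoder name.
const HW_ENCODERS: Record<string, string> = {
  nvidia: "h264_nvenc", // NVENC
  amd: "h264_amf",      // AMF
  intel: "h264_qsv",    // Quick Sync
};

// Pick a hardware encoder if the local FFmpeg build reports one
// (e.g. parsed from `ffmpeg -encoders`), else fall back to libx264.
function pickH264Encoder(availableEncoders: string[]): string {
  for (const encoder of Object.values(HW_ENCODERS)) {
    if (availableEncoders.includes(encoder)) return encoder;
  }
  return "libx264";
}
```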

Code Example

import { NodeRenderer } from "@rendervid/renderer-node";

const renderer = new NodeRenderer();

const template = {
  width: 1920,
  height: 1080,
  fps: 60,
  scenes: [
    {
      duration: 10,
      layers: [
        {
          type: "video",
          src: "/assets/background.mp4",
          fit: "cover",
        },
        {
          type: "text",
          text: "{{headline}}",
          fontSize: 64,
          color: "#ffffff",
          fontFamily: "Inter",
          position: { x: 960, y: 540 },
          animation: {
            entrance: { type: "slideInUp", duration: 0.8 },
            exit: { type: "fadeOut", duration: 0.5 },
          },
        },
      ],
    },
  ],
  inputs: {
    headline: {
      type: "text",
      label: "Headline",
      default: "Your Product, Elevated",
    },
  },
};

// Render with custom inputs
const result = await renderer.render(template, {
  format: "mp4",
  quality: "high",
  outputPath: "/output/promo.mp4",
  renderWaitTime: 2000, // Wait 2s for media to load
  inputs: {
    headline: "Summer Sale — 50% Off Everything",
  },
});

console.log(`Rendered: ${result.outputPath}`);
console.log(`Duration: ${result.duration}s`);
console.log(`File size: ${(result.fileSize / 1024 / 1024).toFixed(2)} MB`);

Batch Processing

For processing many templates in sequence, use the batch API:

import { NodeRenderer } from "@rendervid/renderer-node";

const renderer = new NodeRenderer();

const templates = [
  { template: socialTemplate, inputs: { name: "Alice" }, output: "alice.mp4" },
  { template: socialTemplate, inputs: { name: "Bob" }, output: "bob.mp4" },
  { template: socialTemplate, inputs: { name: "Carol" }, output: "carol.mp4" },
];

for (const job of templates) {
  await renderer.render(job.template, {
    format: "mp4",
    quality: "standard",
    outputPath: `/output/${job.output}`,
    inputs: job.inputs,
  });
}

For true parallel rendering on a single machine, see the Docker Local Rendering section below.


Cloud Rendering

The @rendervid/cloud-rendering package enables distributed, parallel rendering across cloud infrastructure. Instead of rendering frames sequentially on one machine, cloud rendering splits the work across many worker functions that render frames simultaneously, then merges them into the final output.

When to Use Cloud Rendering

  • High-throughput pipelines processing hundreds of videos per hour
  • Long-form content where sequential rendering is too slow
  • Time-sensitive workloads where a 10-50x speedup matters
  • Auto-scaling to handle unpredictable demand spikes

Architecture

+------------------+
|   Your App       |
|  (Coordinator)   |
+--------+---------+
         |
         | 1. Split video into frame chunks
         v
+--------+---------+
|   Chunk Splitter |
+--------+---------+
         |
         |  2. Distribute chunks to workers
         v
+------------+------------+------------+
|  Worker 1  |  Worker 2  |  Worker N  |
|  (Lambda/  |  (Lambda/  |  (Lambda/  |
|   Azure/   |   Azure/   |   Azure/   |
|   GCP)     |   GCP)     |   GCP)     |
+-----+------+-----+------+-----+------+
      |            |            |
      | 3. Each worker renders its frames
      v            v            v
+-----+------+-----+------+-----+------+
|  Frames    |  Frames    |  Frames    |
|  001-030   |  031-060   |  061-090   |
+-----+------+-----+------+-----+------+
      |            |            |
      +------------+------------+
                   |
                   v
          +--------+--------+
          |     Merger      |
          | (FFmpeg concat) |
          +--------+--------+
                   |
                   | 4. Combine into final video
                   v
          +--------+---------+
          |  Object Storage  |
          |  S3 / Blob / GCS |
          +--------+---------+
                   |
                   | 5. Download or serve
                   v
          +--------+---------+
          |   Final Output   |
          |   video.mp4      |
          +------------------+

How it works step by step:

  1. The coordinator analyzes the template and determines how many frames need to be rendered based on the total duration and FPS.
  2. The chunk splitter divides the total frame count into chunks (e.g., 30 frames per chunk for a 30fps video = 1 second per chunk).
  3. Each worker function receives a chunk assignment (start frame, end frame), renders those frames using the Node.js renderer, and uploads the rendered segment to object storage.
  4. The merger downloads all segments and concatenates them into the final video using FFmpeg.
  5. The final output is stored in the cloud provider’s object storage (S3, Azure Blob, or GCS) and optionally downloaded to the local filesystem.
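
The chunking arithmetic in step 2 can be sketched as follows (the real splitter is internal to @rendervid/cloud-rendering; this only illustrates the math):

```typescript
interface FrameChunk { startFrame: number; endFrame: number } // inclusive range

// Split a total frame count into contiguous chunks for the workers.
function splitFrames(totalFrames: number, framesPerChunk: number): FrameChunk[] {
  const chunks: FrameChunk[] = [];
  for (let start = 0; start < totalFrames; start += framesPerChunk) {
    chunks.push({
      startFrame: start,
      endFrame: Math.min(start + framesPerChunk, totalFrames) - 1,
    });
  }
  return chunks;
}
```

A 5-second 30fps video (150 frames) in 30-frame chunks yields 5 assignments covering frames 0-29, 30-59, ..., 120-149; the last chunk of an uneven split is simply shorter.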

Cloud Configuration

import { CloudRenderer } from "@rendervid/cloud-rendering";

const cloudRenderer = new CloudRenderer({
  provider: "aws", // "aws" | "azure" | "gcp" | "docker"
  quality: "standard", // "draft" | "standard" | "high"
  downloadToLocal: true,
  outputPath: "/output/final.mp4",
});

The full configuration interface:

interface CloudRenderConfig {
  provider: "aws" | "azure" | "gcp" | "docker";
  quality: "draft" | "standard" | "high";
  downloadToLocal: boolean;
  outputPath: string;

  awsConfig?: {
    region: string;
    s3Bucket: string;
    s3Prefix: string;
  };

  azureConfig?: {
    resourceGroup: string;
    storageAccount: string;
    containerName: string;
  };

  gcpConfig?: {
    projectId: string;
    bucketName: string;
    region: string;
  };

  dockerConfig?: {
    volumePath: string;
    workersCount: number;
  };
}

AWS Lambda Setup

AWS Lambda is the most common cloud deployment target. Each worker function runs in a separate Lambda invocation, enabling massive parallelism.

Prerequisites:

  • AWS account with Lambda and S3 access
  • AWS CLI configured
  • Node.js 18+ Lambda runtime

Configuration:

import { CloudRenderer } from "@rendervid/cloud-rendering";

const renderer = new CloudRenderer({
  provider: "aws",
  quality: "high",
  downloadToLocal: true,
  outputPath: "/output/video.mp4",
  awsConfig: {
    region: "us-east-1",
    s3Bucket: "my-rendervid-output",
    s3Prefix: "renders/",
  },
});

const result = await renderer.render(template);
console.log(`Rendered in ${result.renderTime}ms`);
console.log(`Workers used: ${result.workersUsed}`);
console.log(`Output: ${result.outputUrl}`);

Typical AWS Lambda configuration:

  • Memory: 1024-3008 MB (more memory = more CPU = faster rendering)
  • Timeout: 300 seconds (5 minutes)
  • Ephemeral storage: 512 MB - 10 GB
  • Concurrency: 100-1000 (adjust based on workload)

Azure Functions Setup

const renderer = new CloudRenderer({
  provider: "azure",
  quality: "standard",
  downloadToLocal: true,
  outputPath: "/output/video.mp4",
  azureConfig: {
    resourceGroup: "rendervid-rg",
    storageAccount: "rendervidstore",
    containerName: "renders",
  },
});

const result = await renderer.render(template);

Google Cloud Functions Setup

const renderer = new CloudRenderer({
  provider: "gcp",
  quality: "standard",
  downloadToLocal: true,
  outputPath: "/output/video.mp4",
  gcpConfig: {
    projectId: "my-project-id",
    bucketName: "rendervid-output",
    region: "us-central1",
  },
});

const result = await renderer.render(template);

Cost Comparison

Provider               | Cost per Minute | Cost per Hour | Notes
---------------------- | --------------- | ------------- | -----
AWS Lambda             | ~$0.02          | ~$1.00        | Pay per 1ms of compute
Azure Functions        | ~$0.02          | ~$1.00        | Consumption plan pricing
Google Cloud Functions | ~$0.02          | ~$1.00        | Pay per 100ms of compute
Docker (local)         | Free            | Free          | Uses your own hardware

All cloud providers offer free tiers that cover significant rendering workloads during development and low-volume production.

Performance Benchmarks

Cloud rendering achieves a 10-50x speedup compared to single-machine sequential rendering. The exact speedup depends on the number of workers, template complexity, and video duration.

Video Duration | Sequential (1 machine) | Cloud (50 workers) | Speedup
-------------- | ---------------------- | ------------------ | -------
30 seconds     | ~90 seconds            | ~5 seconds         | 18x
2 minutes      | ~6 minutes             | ~15 seconds        | 24x
10 minutes     | ~30 minutes            | ~45 seconds        | 40x
30 minutes     | ~90 minutes            | ~2 minutes         | 45x

Longer videos benefit more from parallelism because the overhead of worker startup and frame merging is amortized across more frames.
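
A toy model makes the amortization concrete. The per-frame render rate and fixed overhead below are assumed numbers for illustration, not measured Rendervid benchmarks:

```typescript
// Estimated wall-clock time for a parallel render: a one-time overhead
// (worker startup + merge) plus the frame work divided across workers.
function estimateCloudTime(
  frames: number,
  workers: number,
  secondsPerFrame: number,
  overheadSeconds: number
): number {
  return overheadSeconds + (frames * secondsPerFrame) / workers;
}

// Speedup relative to rendering every frame sequentially on one machine.
function speedup(frames: number, workers: number, spf: number, overhead: number): number {
  return (frames * spf) / estimateCloudTime(frames, workers, spf, overhead);
}
```

With an assumed 0.1 s/frame and 10 s of overhead, a 900-frame video on 50 workers takes ~11.8 s versus 90 s sequentially (~7.6x), while an 18,000-frame video takes ~46 s versus 1,800 s (~39x): the same overhead buys far more when spread over more frames.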


Docker Local Rendering

Docker-based rendering gives you the same parallel rendering architecture as cloud rendering, but running entirely on your local machine. It is completely free, requires no cloud accounts, and is ideal for self-hosted setups, development, and teams that want parallel rendering without cloud costs.

When to Use Docker Rendering

  • Free parallel rendering without cloud provider accounts
  • Self-hosted infrastructure behind a firewall
  • Development and testing of cloud rendering workflows locally
  • Small to medium workloads that benefit from parallelism but do not need auto-scaling

Installation

# Ensure Docker is installed and running
docker --version

# Install the cloud rendering package
npm install @rendervid/cloud-rendering

Configuration

import { CloudRenderer } from "@rendervid/cloud-rendering";

const renderer = new CloudRenderer({
  provider: "docker",
  quality: "high",
  downloadToLocal: true,
  outputPath: "/output/video.mp4",
  dockerConfig: {
    volumePath: "/tmp/rendervid-work",
    workersCount: 8, // Number of Docker containers to run in parallel
  },
});

const result = await renderer.render(template);
console.log(`Rendered in ${result.renderTime}ms using ${result.workersUsed} workers`);

Choosing workersCount: Set this to the number of CPU cores available on your machine. For example, an 8-core machine works well with 8 workers. Going beyond your core count adds overhead from context switching without improving throughput.
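
That guideline can be codified in a few lines (the helper is ours, not a Rendervid API; pass in the machine's core count, e.g. `os.cpus().length` in Node):

```typescript
// Clamp a requested worker count to the guideline above:
// at least 1 worker, and never more workers than CPU cores.
function chooseWorkersCount(requested: number, coreCount: number): number {
  return Math.min(Math.max(1, requested), Math.max(1, coreCount));
}
```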

Docker Architecture

+------------------+
|   Coordinator    |
|  (your process)  |
+--------+---------+
         |
   +-----+-----+-----+-----+
   |     |     |     |     |
+--v--+ +--v-+ +-v--+ +-v--+
| C1  | | C2 | | C3 | | C4 |  ... Docker containers
+--+--+ +--+-+ +-+--+ +-+--+
   |      |     |      |
   v      v     v      v
+--+------+-----+------+--+
|   Shared Volume         |
|   /tmp/rendervid-work   |
+------------+------------+
             |
             v
     +-------+--------+
     |     Merger     |
     +-------+--------+
             |
             v
     +-------+--------+
     | /output/video  |
     +----------------+

Each Docker container is a self-contained worker with Node.js, Playwright, and FFmpeg pre-installed. Workers read their frame assignments from the shared volume, render the frames, and write the results back. The coordinator then merges all segments into the final output.


Motion Blur

Rendervid supports motion blur through temporal supersampling. Instead of rendering a single instant per frame, the renderer captures multiple sub-frames at slightly different points in time and blends them together. This produces the natural blur that cameras create when objects move during an exposure.

Quality Presets

Preset | Samples per Frame | Render Time Multiplier | Visual Quality
------ | ----------------- | ---------------------- | --------------
low    | 5                 | 5x                     | Subtle smoothing
medium | 10                | 10x                    | Noticeable blur on fast motion
high   | 16                | 16x                    | Cinematic motion blur
ultra  | 32                | 32x                    | Film-grade, heavy blur

Configuration

const result = await renderer.render(template, {
  format: "mp4",
  quality: "high",
  outputPath: "/output/cinematic.mp4",
  motionBlur: {
    enabled: true,
    quality: "high", // 16 samples per frame
  },
});

How Temporal Supersampling Works

Frame N (no motion blur):          Frame N (with motion blur, 5 samples):

  Single instant:                    5 sub-frames blended:

  +--------+                         +--------+   +--------+   +--------+
  |    O   |                         |   O    | + |    O   | + |     O  |  ...
  +--------+                         +--------+   +--------+   +--------+
                                              |
                                              v
                                     +--------+
                                     |  ~O~   |  <- Blended result
                                     +--------+

Each sub-frame advances the animation timeline by a tiny increment (1/fps divided by the sample count). The sub-frames are then alpha-blended to produce the final frame. Objects that moved between sub-frames appear blurred along their motion path, while stationary elements remain sharp.
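
The sub-frame timing can be sketched as follows (illustrative only; the actual blending pipeline is internal to the renderer):

```typescript
// For a given output frame, return the `samples` evenly spaced instants
// within its 1/fps exposure window that get rendered and blended.
function subFrameTimes(frame: number, fps: number, samples: number): number[] {
  const frameDuration = 1 / fps;
  const step = frameDuration / samples; // (1/fps) divided by the sample count
  return Array.from({ length: samples }, (_, i) => frame * frameDuration + i * step);
}
```

For frame 0 at 30fps with 5 samples, the renderer samples t = 0, 1/150, 2/150, 3/150, and 4/150 seconds, then alpha-blends the five results into one frame.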

Performance Considerations

Motion blur multiplies render time proportionally to the sample count. A 10-second video at 30fps has 300 frames. With high quality (16 samples), the renderer must generate 4,800 sub-frames instead of 300. Use draft quality during development and switch to high or ultra for final exports only.

Cloud rendering and Docker parallel rendering work well with motion blur because the per-frame cost is distributed across workers. A 16x per-frame increase divided across 16 workers results in roughly the same total render time as a non-blurred render on one machine.


GIF Export

Rendervid’s GIF export goes far beyond a simple frame-to-GIF conversion. It uses FFmpeg’s palette generation pipeline to produce optimized, high-quality animated GIFs with configurable dithering, color counts, and file size constraints.

How GIF Optimization Works

Standard GIF encoding uses a single global palette of 256 colors, which often results in banding and poor color reproduction. Rendervid uses a two-pass approach:

  1. Pass 1 (palettegen): Analyze all frames to generate an optimal 256-color palette that best represents the video’s full color range.
  2. Pass 2 (paletteuse): Re-encode each frame using the optimized palette with optional dithering for smooth gradients.
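
The two passes are commonly expressed as FFmpeg filtergraphs. In the sketch below, `palettegen` and `paletteuse` (with its `dither` option) are real FFmpeg filters, but the builder helpers are illustrative, not the exact strings Rendervid generates:

```typescript
interface GifOptions {
  fps: number;
  width: number;
  colors: number;
  dithering: "floyd_steinberg" | "bayer" | "none";
}

// Pass 1: scale the frames and analyze them to build an optimal palette.
function paletteGenFilter(o: GifOptions): string {
  return `fps=${o.fps},scale=${o.width}:-1:flags=lanczos,palettegen=max_colors=${o.colors}`;
}

// Pass 2: re-encode the frames against the generated palette (second
// input stream) with the chosen dithering algorithm.
function paletteUseFilter(o: GifOptions): string {
  return `fps=${o.fps},scale=${o.width}:-1:flags=lanczos[x];[x][1:v]paletteuse=dither=${o.dithering}`;
}
```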

Optimization Presets

Preset | Resolution | Max Colors | Target Use Case
------ | ---------- | ---------- | ---------------
social | 480x480    | 256        | Instagram, Twitter, Slack
web    | 640x480    | 256        | Blog posts, documentation
email  | 320x240    | 128        | Email campaigns, newsletters

Dithering Options

Algorithm       | Quality | File Size | Description
--------------- | ------- | --------- | -----------
floyd_steinberg | Best    | Largest   | Error-diffusion dithering, smooth gradients
bayer           | Good    | Medium    | Ordered dithering, consistent pattern
none            | Lowest  | Smallest  | No dithering, flat color regions

Configuration

const result = await renderer.render(template, {
  format: "gif",
  outputPath: "/output/animation.gif",
  gif: {
    preset: "social",       // 480x480 resolution
    colors: 256,            // 2-256 color palette
    dithering: "floyd_steinberg",
    targetSizeKB: 5000,     // Auto-optimize to stay under 5MB
    fps: 15,                // Lower FPS = smaller file
  },
});

console.log(`GIF size: ${(result.fileSize / 1024).toFixed(0)} KB`);
console.log(`Estimated size was: ${result.estimatedSizeKB} KB`);

File Size Estimation and Auto-Optimization

When you set a targetSizeKB, Rendervid estimates the output file size before rendering and automatically adjusts parameters (color count, resolution, FPS) to meet the target. This is particularly useful for platforms with file size limits (e.g., Slack’s 50 MB limit, email’s typical 10 MB constraint).

// Auto-optimize to fit within a 2MB email constraint
const result = await renderer.render(template, {
  format: "gif",
  outputPath: "/output/email-banner.gif",
  gif: {
    preset: "email",
    targetSizeKB: 2000,
  },
});

Package Architecture

Rendervid is organized as a monorepo with 13 packages. Each package has a focused responsibility, and they compose together to support every deployment scenario.

@rendervid/
├── core                    Engine, types, validation, animation system
│   ├── Template parser and validator (AJV + JSON Schema)
│   ├── Animation engine (40+ presets, 30+ easing functions)
│   ├── Layer system (text, image, video, shape, audio, group, lottie, custom)
│   └── Scene management and transitions (17 types)
│
├── renderer-browser        Client-side rendering
│   ├── Canvas-based frame rendering
│   ├── MediaRecorder for WebM export
│   └── WebAssembly MP4 encoder
│
├── renderer-node           Server-side rendering
│   ├── Playwright/Puppeteer headless browser
│   ├── FFmpeg integration (fluent-ffmpeg)
│   ├── GPU acceleration
│   └── GIF optimization pipeline
│
├── cloud-rendering         Multi-cloud orchestration
│   ├── AWS Lambda provider
│   ├── Azure Functions provider
│   ├── Google Cloud Functions provider
│   ├── Docker local provider
│   ├── Chunk splitter and merger
│   └── Object storage adapters (S3, Blob, GCS)
│
├── player                  Video/template player component
├── editor                  Visual template editor (Zustand state)
├── components              Pre-built React components
│   ├── AnimatedLineChart
│   ├── AuroraBackground
│   ├── WaveBackground
│   ├── SceneTransition
│   └── TypewriterEffect
│
├── templates               Template definitions and examples (100+)
├── testing                 Testing utilities
│   ├── Vitest custom matchers
│   ├── Snapshot testing helpers
│   └── Visual regression utilities
│
├── editor-playground       Editor development environment
├── player-playground       Player development environment
├── mcp                     Model Context Protocol server
└── docs                    VitePress documentation site

How the Packages Connect

  • @rendervid/core is the foundation. Every other package depends on it for template types, validation, and the animation system.
  • @rendervid/renderer-browser and @rendervid/renderer-node both consume core templates but output through different pipelines (Canvas vs. FFmpeg).
  • @rendervid/cloud-rendering wraps renderer-node and distributes its work across cloud functions or Docker containers.
  • @rendervid/player and @rendervid/editor are React-based UI packages for playback and visual editing. The editor uses Zustand for state management.
  • @rendervid/components provides the pre-built React components (AnimatedLineChart, AuroraBackground, etc.) that can be used in templates.
  • @rendervid/testing provides Vitest matchers and snapshot testing helpers for validating templates.
  • mcp is the AI integration layer that exposes Rendervid’s capabilities to AI agents via the Model Context Protocol.

Technology Stack

Rendervid is built on a modern TypeScript stack chosen for reliability, performance, and developer experience.

Layer             | Technology                   | Purpose
----------------- | ---------------------------- | -------
Language          | TypeScript                   | Type safety across all 13 packages
Build             | tsup, Vite                   | Fast builds, tree-shaking, ESM/CJS output
Testing           | Vitest                       | Unit tests, snapshot tests, custom matchers
UI Framework      | React 18.3.1                 | Component rendering, template composition
State Management  | Zustand                      | Editor state (lightweight, no boilerplate)
Styling           | Tailwind CSS                 | Editor and player UI
Validation        | AJV with JSON Schema         | Template validation before rendering
Browser Rendering | HTML Canvas API              | Frame-by-frame drawing in the browser
Headless Browser  | Playwright, Puppeteer        | Server-side frame capture
Video Encoding    | FFmpeg (fluent-ffmpeg)       | H.264, H.265, VP9, ProRes, GIF encoding
3D Graphics       | Three.js (optional), CSS 3D  | 3D scenes and perspective transforms
Documentation     | VitePress                    | Package documentation site

Testing

Rendervid includes a dedicated testing package (@rendervid/testing) that provides custom Vitest matchers, snapshot testing helpers, and visual regression utilities for validating templates.

Vitest Custom Matchers

import { describe, it, expect } from "vitest";
import "@rendervid/testing/matchers";

describe("Product Showcase Template", () => {
  it("should be a valid template", () => {
    expect(template).toBeValidTemplate();
  });

  it("should have the correct dimensions", () => {
    expect(template).toHaveResolution(1920, 1080);
  });

  it("should contain at least one text layer", () => {
    expect(template).toContainLayerOfType("text");
  });

  it("should have animations on the headline", () => {
    expect(template.scenes[0].layers[0]).toHaveAnimation("entrance");
  });
});

Snapshot Testing

Snapshot testing renders a template to an image and compares it against a stored reference. Any visual change causes the test to fail, making it easy to catch unintended regressions.

import { describe, it } from "vitest";
import { renderSnapshot } from "@rendervid/testing";

describe("Template Visual Regression", () => {
  it("should match the reference snapshot at frame 0", async () => {
    const snapshot = await renderSnapshot(template, { frame: 0 });
    expect(snapshot).toMatchImageSnapshot();
  });

  it("should match the reference snapshot at the midpoint", async () => {
    const totalFrames = template.fps * template.scenes[0].duration;
    const snapshot = await renderSnapshot(template, {
      frame: Math.floor(totalFrames / 2),
    });
    expect(snapshot).toMatchImageSnapshot();
  });
});

Visual Regression Testing in CI

Integrate visual regression tests into your CI/CD pipeline to catch rendering changes before they reach production:

# .github/workflows/visual-regression.yml
name: Visual Regression Tests
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npx playwright install chromium
      - run: npm ci
      - run: npm run test:visual

Performance Optimization

Getting the fastest possible render times requires understanding where time is spent and which levers you can pull. Here are the most impactful optimization strategies.

1. Choose the Right Deployment Target

Scenario                              | Best Target
------------------------------------- | -----------
Quick preview during editing          | Browser
Single video, production quality      | Node.js
Batch of 10-100 videos                | Node.js or Docker
Batch of 100+ videos or time-critical | Cloud (AWS/Azure/GCP)

2. Optimize Template Complexity

  • Reduce layer count. Every layer is rendered independently. Fewer layers means fewer draw operations per frame.
  • Use draft quality during development and testing. Switch to high or lossless only for final exports.
  • Simplify animations during previewing. Complex keyframe sequences with many easing functions add computation per frame.

3. Use renderWaitTime Wisely

The renderWaitTime option pauses rendering to allow external media (images, videos, fonts) to load. Set this to the minimum value that ensures all assets are loaded. A value of 500-2000ms is typical. Setting it too high wastes time on every frame.

await renderer.render(template, {
  renderWaitTime: 1000, // 1 second is usually enough
});

4. Leverage Parallel Rendering

For any video longer than 10 seconds, parallel rendering (Docker or cloud) will be faster than sequential rendering. The break-even point depends on your hardware and cloud configuration, but as a rule of thumb:

  • < 10 seconds: Single Node.js renderer is fine
  • 10-60 seconds: Docker with 4-8 workers
  • 1-10 minutes: Docker with 8-16 workers or cloud
  • > 10 minutes: Cloud rendering with 50+ workers
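
Codified as a quick helper (the thresholds are this guide's rule of thumb; the function and its return labels are illustrative):

```typescript
type RenderTarget = "node" | "docker-4-8" | "docker-8-16-or-cloud" | "cloud-50plus";

// Map a video's duration to the recommended rendering setup above.
function recommendTarget(durationSeconds: number): RenderTarget {
  if (durationSeconds < 10) return "node";               // single Node.js renderer
  if (durationSeconds <= 60) return "docker-4-8";        // Docker, 4-8 workers
  if (durationSeconds <= 600) return "docker-8-16-or-cloud"; // Docker 8-16 or cloud
  return "cloud-50plus";                                 // cloud, 50+ workers
}
```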

5. Optimize GIF Output

GIFs are inherently large. To keep file sizes manageable:

  • Lower the FPS to 10-15. Most GIFs look fine at reduced frame rates.
  • Reduce resolution using presets (social, web, email).
  • Limit colors to 128 or fewer for simple animations.
  • Use targetSizeKB to let Rendervid auto-optimize parameters.
  • Avoid dithering (none) if file size matters more than gradient quality.

6. Enable GPU Acceleration

On machines with compatible GPUs, hardware-accelerated encoding can reduce render times by 2-5x for the encoding step. This is most impactful for high-resolution (4K+) and high-bitrate outputs.

7. Pre-load Assets

If your template references external images or videos, pre-download them to local storage before rendering. Network latency during rendering is the most common cause of slow or failed renders.
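
A pre-loading pass can start from a simple traversal that collects every `src` in the template so the assets can be downloaded up front. This sketch assumes the layer shape used in this guide's examples:

```typescript
interface Layer { type: string; src?: string; layers?: Layer[] }
interface Template { scenes: { layers: Layer[] }[] }

// Walk every scene and layer (including nested group layers) and
// collect the media sources referenced by the template.
function collectAssetUrls(template: Template): string[] {
  const urls: string[] = [];
  const visit = (layer: Layer) => {
    if (layer.src) urls.push(layer.src);
    layer.layers?.forEach(visit);
  };
  template.scenes.forEach((scene) => scene.layers.forEach(visit));
  return urls;
}
```

Download everything in this list to local storage (and rewrite the `src` values to local paths) before calling `renderer.render`.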


Next Steps

  • Get started with Rendervid: Visit the Rendervid overview for installation and first render
  • Learn the template system: Read the Template System documentation for JSON template structure, variables, scenes, layers, and animations
  • Explore components: Browse the Component Library for pre-built React components like AnimatedLineChart and AuroraBackground
  • Set up AI integration: See the AI Integration guide to connect Claude Code, Cursor, or Windsurf to Rendervid via MCP
  • View the source: Visit the Rendervid GitHub repository for the full source code and 100+ example templates

Frequently asked questions

What are the different ways to deploy Rendervid?

Rendervid supports four deployment options: browser-based rendering for client-side previews and web apps, Node.js rendering for server-side batch processing with FFmpeg, cloud rendering on AWS Lambda/Azure Functions/GCP for 10-50x parallel speedup, and Docker for free local parallel rendering.

How much does cloud rendering cost?

Cloud rendering costs approximately $0.02 per minute on AWS Lambda, Azure Functions, or Google Cloud Functions—roughly $1 per hour of rendering. Docker-based local rendering is completely free and provides the same parallel rendering benefits.

What is the cloud rendering architecture?

Cloud rendering uses a coordinator that splits videos into frame chunks, distributes them to worker functions (Lambda/Azure/GCP), each worker renders its assigned frames, a merger combines all frames into the final video, and the output is stored in object storage (S3/Azure Blob/GCS).

What are the system requirements for Rendervid?

For browser rendering, any modern browser with Canvas support works. For Node.js rendering, you need Node.js 18+, Playwright or Puppeteer, and FFmpeg installed. For cloud rendering, you need an AWS/Azure/GCP account or Docker installed locally.

Does Rendervid support GPU acceleration?

Yes, the Node.js renderer supports hardware acceleration for faster rendering. GPU acceleration can significantly speed up rendering, especially for complex templates with many layers, effects, and high resolutions.

How does motion blur work in Rendervid?

Rendervid implements motion blur using temporal supersampling, rendering multiple sub-frames per output frame and blending them together. Quality presets range from low (5 samples, 5x render time) to ultra (32 samples, 32x render time), producing cinematic smoothness.
