

Deploy Rendervid anywhere: browser-based rendering for previews, Node.js for server-side batch processing, or cloud rendering on AWS Lambda, Azure Functions, GCP, and Docker for 10-50x faster parallel rendering.
Rendervid is designed to render anywhere your workflow demands. Whether you need instant previews in the browser, production-grade video encoding on a server, or massively parallel rendering across cloud infrastructure, Rendervid provides a dedicated package for each environment. Every deployment target shares the same template system and component library, so a template that works in the browser works identically on AWS Lambda or in a Docker container.
This guide covers all four deployment environments, the rendering options available in each, and advanced features like motion blur, GIF export, and performance optimization. By the end, you will know exactly which deployment path fits your project and how to configure it.
                     +---------------+
                     | JSON Template |
                     +-------+-------+
                             |
         +-------------------+-------------------+
         |                   |                   |
+--------v---------+ +-------v--------+ +--------v--------+
|     Browser      | |    Node.js     | |      Cloud      |
|   @rendervid/    | |  @rendervid/   | |  @rendervid/    |
| renderer-browser | | renderer-node  | | cloud-rendering |
+--------+---------+ +-------+--------+ +--------+--------+
         |                   |                   |
   Canvas / WebM    FFmpeg / Playwright  Parallel Workers
         |                   |                   |
+--------v---------+ +-------v--------+ +--------v--------+
| MP4, WebM, PNG,  | | MP4, WebM, MOV,| |   AWS Lambda    |
| JPEG, WebP       | | GIF, H.265     | | Azure Functions |
+------------------+ +----------------+ |  GCP Functions  |
                                        |     Docker      |
                                        +-----------------+
The @rendervid/renderer-browser package handles client-side rendering entirely within the user’s browser. No server infrastructure is required. This makes it the fastest path from template to preview.
npm install @rendervid/renderer-browser
Browser rendering uses the HTML Canvas API to draw each frame of the template. The renderer walks through every scene and layer, applies animations and easing functions, composites the result onto a canvas element, and captures each frame. For video output, the frames are encoded using the browser’s built-in MediaRecorder API (WebM) or a WebAssembly-based MP4 encoder.
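To make this walk-through concrete, here is a minimal sketch (a hypothetical helper, not part of the @rendervid/renderer-browser API) of the first step: mapping a global frame number to the scene it falls in and the scene-local time at which animations are evaluated.

```typescript
// Sketch only: how a frame-by-frame renderer can locate a frame within
// back-to-back scenes. `sceneDurations` is in seconds.
function locateFrame(
  frame: number,
  fps: number,
  sceneDurations: number[],
): { sceneIndex: number; localTime: number } {
  let t = frame / fps; // global time in seconds
  for (let i = 0; i < sceneDurations.length; i++) {
    if (t < sceneDurations[i]) {
      return { sceneIndex: i, localTime: t };
    }
    t -= sceneDurations[i]; // move past this scene
  }
  throw new Error("frame is past the end of the template");
}

// Frame 90 at 30 fps is 3 seconds in: still inside a 5-second first scene.
console.log(locateFrame(90, 30, [5, 3])); // { sceneIndex: 0, localTime: 3 }
```

The renderer then evaluates each layer's animation at `localTime` before compositing the layer onto the canvas.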
| Format | Extension | Notes |
|---|---|---|
| MP4 | .mp4 | H.264 via WebAssembly encoder |
| WebM | .webm | VP8/VP9 via MediaRecorder API |
| PNG | .png | Single frame or image sequence |
| JPEG | .jpeg | Single frame, configurable quality |
| WebP | .webp | Single frame, smaller file size |
import { BrowserRenderer } from "@rendervid/renderer-browser";
const renderer = new BrowserRenderer();
const template = {
  width: 1920,
  height: 1080,
  fps: 30,
  scenes: [
    {
      duration: 5,
      layers: [
        {
          type: "text",
          text: "Hello from the Browser",
          fontSize: 72,
          color: "#ffffff",
          position: { x: 960, y: 540 },
          animation: {
            entrance: { type: "fadeIn", duration: 1 },
          },
        },
      ],
    },
  ],
};

// Render to a canvas element for preview
const canvas = document.getElementById("preview") as HTMLCanvasElement;
await renderer.preview(template, canvas);

// Export as MP4
const mp4Blob = await renderer.render(template, {
  format: "mp4",
  quality: "standard",
});

// Export a single frame as PNG
const pngBlob = await renderer.renderFrame(template, {
  format: "png",
  frameNumber: 0,
});
The @rendervid/renderer-node package provides server-side rendering with full FFmpeg integration. It uses Playwright or Puppeteer to render each frame in a headless browser, then pipes the frames to FFmpeg for professional-grade video encoding.
# Install the renderer
npm install @rendervid/renderer-node
# Install Playwright (includes browser binaries)
npx playwright install chromium
# Install FFmpeg (required for video encoding)
# macOS
brew install ffmpeg
# Ubuntu/Debian
sudo apt-get install ffmpeg
# Windows (via Chocolatey)
choco install ffmpeg
| Format | Extension | Codec | Notes |
|---|---|---|---|
| MP4 | .mp4 | H.264 | Universal compatibility |
| MP4 | .mp4 | H.265/HEVC | 50% smaller files, newer devices |
| WebM | .webm | VP8/VP9 | Web-optimized |
| MOV | .mov | ProRes | Professional editing workflows |
| GIF | .gif | Palette-based | Animated with optimization |
| PNG | .png | Lossless | Image sequence or single frame |
| JPEG | .jpeg | Lossy | Configurable quality |
| WebP | .webp | Lossy/Lossless | Modern web format |
Rendervid provides four quality presets that control encoding parameters:
| Preset | Bitrate | Use Case |
|---|---|---|
| draft | Low | Fast previews during development |
| standard | Medium | General-purpose output, good quality/size |
| high | High | Marketing materials, final deliverables |
| lossless | Maximum | Archival, further editing, no quality loss |
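As a rough mental model, each preset maps to a bundle of encoder settings. The CRF values and x264 preset names in this sketch are illustrative assumptions, not Rendervid's actual internals:

```typescript
// Illustrative lookup table only — the real settings behind each Rendervid
// preset are internal to the renderer.
type QualityPreset = "draft" | "standard" | "high" | "lossless";

const presetSettings: Record<QualityPreset, { crf: number; x264Preset: string }> = {
  draft:    { crf: 32, x264Preset: "ultrafast" }, // fast, visibly compressed
  standard: { crf: 23, x264Preset: "medium" },    // x264's default quality point
  high:     { crf: 18, x264Preset: "slow" },      // near visually lossless
  lossless: { crf: 0,  x264Preset: "veryslow" },  // no quality loss, large files
};

console.log(presetSettings["high"].crf); // 18
```

Lower CRF means higher quality and larger files, which is why `lossless` sits at 0 and `draft` at the high end.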
The Node.js renderer supports hardware acceleration to offload encoding to the GPU. This significantly reduces render time for complex templates with many layers, high resolutions, and effects.
const result = await renderer.render(template, {
  format: "mp4",
  quality: "high",
  outputPath: "/output/video.mp4",
  hardwareAcceleration: true,
});
GPU acceleration is available on systems with compatible NVIDIA (NVENC), AMD (AMF), or Intel (Quick Sync) hardware. FFmpeg must be compiled with the corresponding encoder support.
import { NodeRenderer } from "@rendervid/renderer-node";
const renderer = new NodeRenderer();
const template = {
  width: 1920,
  height: 1080,
  fps: 60,
  scenes: [
    {
      duration: 10,
      layers: [
        {
          type: "video",
          src: "/assets/background.mp4",
          fit: "cover",
        },
        {
          type: "text",
          text: "{{headline}}",
          fontSize: 64,
          color: "#ffffff",
          fontFamily: "Inter",
          position: { x: 960, y: 540 },
          animation: {
            entrance: { type: "slideInUp", duration: 0.8 },
            exit: { type: "fadeOut", duration: 0.5 },
          },
        },
      ],
    },
  ],
  inputs: {
    headline: {
      type: "text",
      label: "Headline",
      default: "Your Product, Elevated",
    },
  },
};

// Render with custom inputs
const result = await renderer.render(template, {
  format: "mp4",
  quality: "high",
  outputPath: "/output/promo.mp4",
  renderWaitTime: 2000, // Wait 2s for media to load
  inputs: {
    headline: "Summer Sale — 50% Off Everything",
  },
});
console.log(`Rendered: ${result.outputPath}`);
console.log(`Duration: ${result.duration}s`);
console.log(`File size: ${(result.fileSize / 1024 / 1024).toFixed(2)} MB`);
For processing many templates in sequence, use the batch API:
import { NodeRenderer } from "@rendervid/renderer-node";
const renderer = new NodeRenderer();
const templates = [
  { template: socialTemplate, inputs: { name: "Alice" }, output: "alice.mp4" },
  { template: socialTemplate, inputs: { name: "Bob" }, output: "bob.mp4" },
  { template: socialTemplate, inputs: { name: "Carol" }, output: "carol.mp4" },
];

for (const job of templates) {
  await renderer.render(job.template, {
    format: "mp4",
    quality: "standard",
    outputPath: `/output/${job.output}`,
    inputs: job.inputs,
  });
}
For true parallel rendering on a single machine, see the Docker Local Rendering section below.
The @rendervid/cloud-rendering package enables distributed, parallel rendering across cloud infrastructure. Instead of rendering frames sequentially on one machine, cloud rendering splits the work across many worker functions that render frames simultaneously, then merges them into the final output.
        +------------------+
        |     Your App     |
        |  (Coordinator)   |
        +--------+---------+
                 |
                 | 1. Split video into frame chunks
                 v
        +--------+---------+
        |  Chunk Splitter  |
        +--------+---------+
                 |
                 | 2. Distribute chunks to workers
                 v
+----------+----------+----------+
| Worker 1 | Worker 2 | Worker N |
| (Lambda/ | (Lambda/ | (Lambda/ |
|  Azure/  |  Azure/  |  Azure/  |
|  GCP)    |  GCP)    |  GCP)    |
+----+-----+----+-----+----+-----+
     |          |          |
     | 3. Each worker renders its frames
     v          v          v
+----+-----+----+-----+----+-----+
|  Frames  |  Frames  |  Frames  |
| 001-030  | 031-060  | 061-090  |
+----+-----+----+-----+----+-----+
     |          |          |
     +----------+----------+
                |
                v
        +-------+---------+
        |     Merger      |
        | (FFmpeg concat) |
        +-------+---------+
                |
                | 4. Combine into final video
                v
        +-------+---------+
        | Object Storage  |
        | S3 / Blob / GCS |
        +-------+---------+
                |
                | 5. Download or serve
                v
        +-------+---------+
        |  Final Output   |
        |   video.mp4     |
        +-----------------+
How it works, step by step:

1. The coordinator splits the video into frame chunks.
2. The chunks are distributed to worker functions (Lambda, Azure Functions, GCP, or Docker containers).
3. Each worker renders its assigned frames in parallel.
4. A merger combines the rendered segments into the final video with FFmpeg.
5. The output lands in object storage (S3, Azure Blob, or GCS), ready to download or serve.
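The chunk-splitting stage (step 1 in the diagram above) can be sketched as a small helper; this is illustrative, not the actual @rendervid/cloud-rendering implementation:

```typescript
// Divide totalFrames into contiguous, near-equal inclusive ranges,
// one per worker (hypothetical helper for illustration).
function splitFrames(totalFrames: number, workers: number): Array<[number, number]> {
  const chunks: Array<[number, number]> = [];
  const base = Math.floor(totalFrames / workers);
  let start = 0;
  for (let i = 0; i < workers; i++) {
    // Spread any remainder across the first few workers.
    const size = base + (i < totalFrames % workers ? 1 : 0);
    if (size === 0) break; // more workers than frames
    chunks.push([start, start + size - 1]);
    start += size;
  }
  return chunks;
}

// 90 frames across 3 workers, matching the diagram's three chunks:
console.log(splitFrames(90, 3)); // [ [ 0, 29 ], [ 30, 59 ], [ 60, 89 ] ]
```

Each worker then renders only its `[start, end]` range, which is what makes the wall-clock time scale down with the worker count.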
import { CloudRenderer } from "@rendervid/cloud-rendering";
const cloudRenderer = new CloudRenderer({
  provider: "aws", // "aws" | "azure" | "gcp" | "docker"
  quality: "standard", // "draft" | "standard" | "high"
  downloadToLocal: true,
  outputPath: "/output/final.mp4",
});
The full configuration interface:
interface CloudRenderConfig {
  provider: "aws" | "azure" | "gcp" | "docker";
  quality: "draft" | "standard" | "high";
  downloadToLocal: boolean;
  outputPath: string;
  awsConfig?: {
    region: string;
    s3Bucket: string;
    s3Prefix: string;
  };
  azureConfig?: {
    resourceGroup: string;
    storageAccount: string;
    containerName: string;
  };
  gcpConfig?: {
    projectId: string;
    bucketName: string;
    region: string;
  };
  dockerConfig?: {
    volumePath: string;
    workersCount: number;
  };
}
AWS Lambda is the most common cloud deployment target. Each worker function runs in a separate Lambda invocation, enabling massive parallelism.
Prerequisites: an AWS account with credentials configured locally, and an S3 bucket the workers can write rendered output to (referenced by awsConfig below).

Configuration:
import { CloudRenderer } from "@rendervid/cloud-rendering";
const renderer = new CloudRenderer({
  provider: "aws",
  quality: "high",
  downloadToLocal: true,
  outputPath: "/output/video.mp4",
  awsConfig: {
    region: "us-east-1",
    s3Bucket: "my-rendervid-output",
    s3Prefix: "renders/",
  },
});
const result = await renderer.render(template);
console.log(`Rendered in ${result.renderTime}ms`);
console.log(`Workers used: ${result.workersUsed}`);
console.log(`Output: ${result.outputUrl}`);
For Azure Functions, the configuration is analogous:

const renderer = new CloudRenderer({
  provider: "azure",
  quality: "standard",
  downloadToLocal: true,
  outputPath: "/output/video.mp4",
  azureConfig: {
    resourceGroup: "rendervid-rg",
    storageAccount: "rendervidstore",
    containerName: "renders",
  },
});

const result = await renderer.render(template);
And for Google Cloud Functions:

const renderer = new CloudRenderer({
  provider: "gcp",
  quality: "standard",
  downloadToLocal: true,
  outputPath: "/output/video.mp4",
  gcpConfig: {
    projectId: "my-project-id",
    bucketName: "rendervid-output",
    region: "us-central1",
  },
});

const result = await renderer.render(template);
| Provider | Cost per Minute | Cost per Hour | Notes |
|---|---|---|---|
| AWS Lambda | ~$0.02 | ~$1.00 | Pay per 1ms of compute |
| Azure Functions | ~$0.02 | ~$1.00 | Consumption plan pricing |
| Google Cloud Functions | ~$0.02 | ~$1.00 | Pay per 100ms of compute |
| Docker (local) | Free | Free | Uses your own hardware |
All cloud providers offer free tiers that cover significant rendering workloads during development and low-volume production.
Cloud rendering achieves a 10-50x speedup compared to single-machine sequential rendering. The exact speedup depends on the number of workers, template complexity, and video duration.
| Video Duration | Sequential (1 machine) | Cloud (50 workers) | Speedup |
|---|---|---|---|
| 30 seconds | ~90 seconds | ~5 seconds | 18x |
| 2 minutes | ~6 minutes | ~15 seconds | 24x |
| 10 minutes | ~30 minutes | ~45 seconds | 40x |
| 30 minutes | ~90 minutes | ~2 minutes | 45x |
Longer videos benefit more from parallelism because the overhead of worker startup and frame merging is amortized across more frames.
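A back-of-the-envelope model captures why: total time is the sequential work divided by the worker count, plus a roughly fixed overhead for worker startup and merging. The per-frame cost and overhead in this sketch are assumed numbers for illustration, not measurements:

```typescript
// Simple cost model: parallel time = (frames * perFrameCost) / workers + overhead.
function estimateParallelSeconds(
  videoSeconds: number,
  fps: number,
  secPerFrame: number, // assumed sequential render cost per frame
  workers: number,
  overheadSec: number, // assumed startup + merge overhead
): number {
  const frames = videoSeconds * fps;
  return (frames * secPerFrame) / workers + overheadSec;
}

const sequential = 600 * 30 * 0.1; // 10-min video at 0.1 s/frame: ~1800s (~30 min)
const parallel = estimateParallelSeconds(600, 30, 0.1, 50, 8); // ~44s with 50 workers
console.log((sequential / parallel).toFixed(0) + "x speedup");
```

With these assumed numbers the model lands near the ~40x figure in the table; for a 30-second video, the same fixed overhead eats a much larger share of the total, which is why short videos see smaller speedups.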
Docker-based rendering gives you the same parallel rendering architecture as cloud rendering, but running entirely on your local machine. It is completely free, uses no cloud accounts, and is ideal for self-hosted setups, development, and teams that want parallel rendering without cloud costs.
# Ensure Docker is installed and running
docker --version
# Install the cloud rendering package
npm install @rendervid/cloud-rendering
import { CloudRenderer } from "@rendervid/cloud-rendering";
const renderer = new CloudRenderer({
  provider: "docker",
  quality: "high",
  downloadToLocal: true,
  outputPath: "/output/video.mp4",
  dockerConfig: {
    volumePath: "/tmp/rendervid-work",
    workersCount: 8, // Number of Docker containers to run in parallel
  },
});
const result = await renderer.render(template);
console.log(`Rendered in ${result.renderTime}ms using ${result.workersUsed} workers`);
Choosing workersCount: Set this to the number of CPU cores available on your machine. For example, an 8-core machine works well with 8 workers. Going beyond your core count adds overhead from context switching without improving throughput.
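One way to apply this guidance is to derive workersCount from the host's logical core count at runtime; the cap of 16 in this sketch is an arbitrary safety limit, not a Rendervid requirement:

```typescript
import { cpus } from "node:os";

// Match the worker count to the machine's logical cores, per the guidance
// above, and cap it (assumed limit) so a very large host isn't over-provisioned.
const workersCount = Math.min(Math.max(1, cpus().length), 16);

console.log(`Using ${workersCount} Docker workers`);
```

This value can then be passed directly as `dockerConfig.workersCount`.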
          +------------------+
          |   Coordinator    |
          |  (your process)  |
          +--------+---------+
                   |
        +------+---+---+------+
        |      |       |      |
      +-v--+ +-v--+ +--v-+ +--v-+
      | C1 | | C2 | | C3 | | C4 |   ... Docker containers
      +-+--+ +-+--+ +--+-+ +--+-+
        |      |       |      |
        v      v       v      v
      +-+------+-------+------+-+
      |      Shared Volume      |
      |   /tmp/rendervid-work   |
      +------------+------------+
                   |
                   v
           +-------+--------+
           |     Merger     |
           +-------+--------+
                   |
                   v
           +-------+--------+
           | /output/video  |
           +----------------+
Each Docker container is a self-contained worker with Node.js, Playwright, and FFmpeg pre-installed. Workers read their frame assignments from the shared volume, render the frames, and write the results back. The coordinator then merges all segments into the final output.
Rendervid supports motion blur through temporal supersampling. Instead of rendering a single instant per frame, the renderer captures multiple sub-frames at slightly different points in time and blends them together. This produces the natural blur that cameras create when objects move during an exposure.
| Preset | Samples per Frame | Render Time Multiplier | Visual Quality |
|---|---|---|---|
| low | 5 | 5x | Subtle smoothing |
| medium | 10 | 10x | Noticeable blur on fast motion |
| high | 16 | 16x | Cinematic motion blur |
| ultra | 32 | 32x | Film-grade, heavy blur |
const result = await renderer.render(template, {
  format: "mp4",
  quality: "high",
  outputPath: "/output/cinematic.mp4",
  motionBlur: {
    enabled: true,
    quality: "high", // 16 samples per frame
  },
});
Frame N (no motion blur):      Frame N (with motion blur, 5 samples):

Single instant:                5 sub-frames blended:

+--------+                     +--------+   +--------+   +--------+
|   O    |                     | O      | + |  O     | + |   O    |  ...
+--------+                     +--------+   +--------+   +--------+
                                                 |
                                                 v
                                            +--------+
                                            |  ~O~   |  <- Blended result
                                            +--------+
Each sub-frame advances the animation timeline by a tiny increment (1/fps divided by the sample count). The sub-frames are then alpha-blended to produce the final frame. Objects that moved between sub-frames appear blurred along their motion path, while stationary elements remain sharp.
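The sub-frame timing just described can be written out directly. This helper is illustrative and not part of the Rendervid API:

```typescript
// For one output frame, produce the timeline positions of its sub-frame
// samples: each sample advances by (1 / fps) / samples, as described above.
function subFrameTimes(frameIndex: number, fps: number, samples: number): number[] {
  const frameStart = frameIndex / fps; // start of this frame's exposure window
  const step = 1 / fps / samples;      // tiny increment per sub-frame
  return Array.from({ length: samples }, (_, k) => frameStart + k * step);
}

// Frame 0 at 30 fps with 5 samples: 5 instants inside the first 1/30 s window.
console.log(subFrameTimes(0, 30, 5));
```

Rendering the template at each returned time and alpha-blending the results (with equal weights) produces the blurred output frame.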
Motion blur multiplies render time proportionally to the sample count. A 10-second video at 30fps has 300 frames. With high quality (16 samples), the renderer must generate 4,800 sub-frames instead of 300. Use draft quality during development and switch to high or ultra for final exports only.
Cloud rendering and Docker parallel rendering work well with motion blur because the per-frame cost is distributed across workers. A 16x per-frame increase divided across 16 workers results in roughly the same total render time as a non-blurred render on one machine.
Rendervid’s GIF export goes far beyond a simple frame-to-GIF conversion. It uses FFmpeg’s palette generation pipeline to produce optimized, high-quality animated GIFs with configurable dithering, color counts, and file size constraints.
Standard GIF encoding uses a single global palette of 256 colors, which often results in banding and poor color reproduction. Rendervid uses a two-pass approach: a first pass scans every frame and generates a palette tailored to the video's actual colors (FFmpeg's palettegen filter), and a second pass encodes the frames against that palette with the chosen dithering algorithm (paletteuse).
| Preset | Resolution | Max Colors | Target Use Case |
|---|---|---|---|
| social | 480x480 | 256 | Instagram, Twitter, Slack |
| web | 640x480 | 256 | Blog posts, documentation |
| email | 320x240 | 128 | Email campaigns, newsletters |
| Algorithm | Quality | File Size | Description |
|---|---|---|---|
| floyd_steinberg | Best | Largest | Error-diffusion dithering, smooth gradients |
| bayer | Good | Medium | Ordered dithering, consistent pattern |
| none | Lowest | Smallest | No dithering, flat color regions |
const result = await renderer.render(template, {
  format: "gif",
  outputPath: "/output/animation.gif",
  gif: {
    preset: "social", // 480x480 resolution
    colors: 256, // 2-256 color palette
    dithering: "floyd_steinberg",
    targetSizeKB: 5000, // Auto-optimize to stay under 5MB
    fps: 15, // Lower FPS = smaller file
  },
});
console.log(`GIF size: ${(result.fileSize / 1024).toFixed(0)} KB`);
console.log(`Estimated size was: ${result.estimatedSizeKB} KB`);
When you set a targetSizeKB, Rendervid estimates the output file size before rendering and automatically adjusts parameters (color count, resolution, FPS) to meet the target. This is particularly useful for platforms with file size limits (e.g., Slack’s 50 MB limit, email’s typical 10 MB constraint).
// Auto-optimize to fit within a 2MB email constraint
const result = await renderer.render(template, {
  format: "gif",
  outputPath: "/output/email-banner.gif",
  gif: {
    preset: "email",
    targetSizeKB: 2000,
  },
});
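Conceptually, the auto-optimizer behaves like the loop below. The size-estimation formula and the order in which parameters are reduced are assumptions made for this sketch, not Rendervid's actual algorithm:

```typescript
interface GifParams { width: number; height: number; fps: number; colors: number; }

// Crude, assumed size model: pixels per second scaled by palette depth.
function estimateSizeKB(p: GifParams, durationSec: number): number {
  const bitsPerPixel = Math.log2(p.colors) / 4; // assumed compression factor
  return (p.width * p.height * p.fps * durationSec * bitsPerPixel) / 8 / 1024;
}

// Reduce fps, then colors, then resolution until the estimate fits the target.
function fitToTarget(p: GifParams, durationSec: number, targetKB: number): GifParams {
  const out = { ...p };
  while (estimateSizeKB(out, durationSec) > targetKB) {
    if (out.fps > 10) out.fps -= 5;
    else if (out.colors > 32) out.colors = Math.floor(out.colors / 2);
    else if (out.width > 160) {
      out.width = Math.floor(out.width * 0.8);
      out.height = Math.floor(out.height * 0.8);
    } else break; // can't shrink further
  }
  return out;
}

const fitted = fitToTarget({ width: 480, height: 480, fps: 15, colors: 256 }, 5, 2000);
console.log(fitted, estimateSizeKB(fitted, 5).toFixed(0) + " KB");
```

The key idea is the same either way: estimate first, then trade off the parameters that hurt perceived quality least until the target is met.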
Rendervid is organized as a monorepo with 13 packages. Each package has a focused responsibility, and they compose together to support every deployment scenario.
@rendervid/
├── core                 Engine, types, validation, animation system
│   ├── Template parser and validator (AJV + JSON Schema)
│   ├── Animation engine (40+ presets, 30+ easing functions)
│   ├── Layer system (text, image, video, shape, audio, group, lottie, custom)
│   └── Scene management and transitions (17 types)
│
├── renderer-browser     Client-side rendering
│   ├── Canvas-based frame rendering
│   ├── MediaRecorder for WebM export
│   └── WebAssembly MP4 encoder
│
├── renderer-node        Server-side rendering
│   ├── Playwright/Puppeteer headless browser
│   ├── FFmpeg integration (fluent-ffmpeg)
│   ├── GPU acceleration
│   └── GIF optimization pipeline
│
├── cloud-rendering      Multi-cloud orchestration
│   ├── AWS Lambda provider
│   ├── Azure Functions provider
│   ├── Google Cloud Functions provider
│   ├── Docker local provider
│   ├── Chunk splitter and merger
│   └── Object storage adapters (S3, Blob, GCS)
│
├── player               Video/template player component
├── editor               Visual template editor (Zustand state)
├── components           Pre-built React components
│   ├── AnimatedLineChart
│   ├── AuroraBackground
│   ├── WaveBackground
│   ├── SceneTransition
│   └── TypewriterEffect
│
├── templates            Template definitions and examples (100+)
├── testing              Testing utilities
│   ├── Vitest custom matchers
│   ├── Snapshot testing helpers
│   └── Visual regression utilities
│
├── editor-playground    Editor development environment
├── player-playground    Player development environment
├── mcp                  Model Context Protocol server
└── docs                 VitePress documentation site
The cloud-rendering package builds on renderer-node and distributes its work across cloud functions or Docker containers.

Rendervid is built on a modern TypeScript stack chosen for reliability, performance, and developer experience.
| Layer | Technology | Purpose |
|---|---|---|
| Language | TypeScript | Type safety across all 13 packages |
| Build | tsup, Vite | Fast builds, tree-shaking, ESM/CJS output |
| Testing | Vitest | Unit tests, snapshot tests, custom matchers |
| UI Framework | React 18.3.1 | Component rendering, template composition |
| State Management | Zustand | Editor state (lightweight, no boilerplate) |
| Styling | Tailwind CSS | Editor and player UI |
| Validation | AJV with JSON Schema | Template validation before rendering |
| Browser Rendering | HTML Canvas API | Frame-by-frame drawing in the browser |
| Headless Browser | Playwright, Puppeteer | Server-side frame capture |
| Video Encoding | FFmpeg (fluent-ffmpeg) | H.264, H.265, VP9, ProRes, GIF encoding |
| 3D Graphics | Three.js (optional), CSS 3D | 3D scenes and perspective transforms |
| Documentation | VitePress | Package documentation site |
Rendervid includes a dedicated testing package (@rendervid/testing) that provides custom Vitest matchers, snapshot testing helpers, and visual regression utilities for validating templates.
import { describe, it, expect } from "vitest";
import "@rendervid/testing/matchers";
describe("Product Showcase Template", () => {
  it("should be a valid template", () => {
    expect(template).toBeValidTemplate();
  });

  it("should have the correct dimensions", () => {
    expect(template).toHaveResolution(1920, 1080);
  });

  it("should contain at least one text layer", () => {
    expect(template).toContainLayerOfType("text");
  });

  it("should have animations on the headline", () => {
    expect(template.scenes[0].layers[0]).toHaveAnimation("entrance");
  });
});
Snapshot testing renders a template to an image and compares it against a stored reference. Any visual change causes the test to fail, making it easy to catch unintended regressions.
import { describe, it } from "vitest";
import { renderSnapshot } from "@rendervid/testing";
describe("Template Visual Regression", () => {
  it("should match the reference snapshot at frame 0", async () => {
    const snapshot = await renderSnapshot(template, { frame: 0 });
    expect(snapshot).toMatchImageSnapshot();
  });

  it("should match the reference snapshot at the midpoint", async () => {
    const totalFrames = template.fps * template.scenes[0].duration;
    const snapshot = await renderSnapshot(template, {
      frame: Math.floor(totalFrames / 2),
    });
    expect(snapshot).toMatchImageSnapshot();
  });
});
Integrate visual regression tests into your CI/CD pipeline to catch rendering changes before they reach production:
# .github/workflows/visual-regression.yml
name: Visual Regression Tests
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npx playwright install chromium
      - run: npm run test:visual
Getting the fastest possible render times requires understanding where time is spent and which levers you can pull. Here are the most impactful optimization strategies.
| Scenario | Best Target |
|---|---|
| Quick preview during editing | Browser |
| Single video, production quality | Node.js |
| Batch of 10-100 videos | Node.js or Docker |
| Batch of 100+ videos or time-critical | Cloud (AWS/Azure/GCP) |
Use draft quality during development and testing; switch to high or lossless only for final exports.

Use renderWaitTime wisely. The renderWaitTime option pauses rendering to allow external media (images, videos, fonts) to load. Set it to the minimum value that ensures all assets are loaded; 500-2000 ms is typical. Setting it too high wastes time on every frame.
await renderer.render(template, {
  renderWaitTime: 1000, // 1 second is usually enough
});
For any video longer than about 10 seconds, parallel rendering (Docker or cloud) will beat sequential rendering; the exact break-even point depends on your hardware and cloud configuration.
GIFs are inherently large. To keep file sizes manageable:
- Use a size preset (social, web, email).
- Set targetSizeKB to let Rendervid auto-optimize parameters.
- Disable dithering (none) if file size matters more than gradient quality.

On machines with compatible GPUs, hardware-accelerated encoding can reduce render times by 2-5x for the encoding step. This is most impactful for high-resolution (4K+) and high-bitrate outputs.
If your template references external images or videos, pre-download them to local storage before rendering. Network latency during rendering is the most common cause of slow or failed renders.
Rendervid supports four deployment options: browser-based rendering for client-side previews and web apps, Node.js rendering for server-side batch processing with FFmpeg, cloud rendering on AWS Lambda/Azure Functions/GCP for 10-50x parallel speedup, and Docker for free local parallel rendering.
Cloud rendering costs approximately $0.02 per minute on AWS Lambda, Azure Functions, or Google Cloud Functions—roughly $1 per hour of rendering. Docker-based local rendering is completely free and provides the same parallel rendering benefits.
Cloud rendering uses a coordinator that splits videos into frame chunks, distributes them to worker functions (Lambda/Azure/GCP), each worker renders its assigned frames, a merger combines all frames into the final video, and the output is stored in object storage (S3/Azure Blob/GCS).
For browser rendering, any modern browser with Canvas support works. For Node.js rendering, you need Node.js 18+, Playwright or Puppeteer, and FFmpeg installed. For cloud rendering, you need an AWS/Azure/GCP account or Docker installed locally.
Yes, the Node.js renderer supports hardware acceleration for faster rendering. GPU acceleration can significantly speed up rendering, especially for complex templates with many layers, effects, and high resolutions.
Rendervid implements motion blur using temporal supersampling, rendering multiple sub-frames per output frame and blending them together. Quality presets range from low (5 samples, 5x render time) to ultra (32 samples, 32x render time), producing cinematic smoothness.