Demos written by your AI assistant.
Hand your agent the CaptureBeam skill bundle and an API key. It knows how to read the schema, probe a URL for real ARIA targets, author a YAML, render it, and self-correct on failure. Works with Claude Code Skills, Anthropic / OpenAI agent loops, and any generic agent runtime that supports tool calls.
Get the skill bundle
Eight files: a SKILL.md, a system prompt, a JSON Schema, four worked examples, and a README. All publicly downloadable.
The skill itself. Drop into ~/.claude/skills/capturebeam/SKILL.md for Claude Code; or use as context in any agent runtime.
system-prompt.md: Drop-in system prompt for OpenAI / Anthropic / generic agent loops. Pass it as the system message.
schema.json: JSON Schema (draft-07) for demo.yaml. The same content lives at /api/v1/schema; this is the static snapshot.
01-minimal.yaml: Three steps, no styling — the bare-minimum shape.
02-onboarding.yaml: A typical 7-step welcome flow. Captions, presets, cameraPan.
03-feature-tour.yaml: A 9:16 vertical clip with keyPress chords and hover. Social-ready.
04-storybook-component.yaml: A 1:1 square clip exercising a single Storybook component.
How to install the bundle. Curl-loop snippets for Claude Code Skills.
mkdir -p ~/.claude/skills/capturebeam/examples
cd ~/.claude/skills/capturebeam
curl -L https://capturebeam.com/agents/SKILL.md -o SKILL.md
curl -L https://capturebeam.com/agents/system-prompt.md -o system-prompt.md
curl -L https://capturebeam.com/agents/schema.json -o schema.json
for n in 01-minimal 02-onboarding 03-feature-tour 04-storybook-component; do
curl -L "https://capturebeam.com/agents/examples/$n.yaml" \
-o "examples/$n.yaml"
done
After install, ask Claude Code something like “render an onboarding demo for https://app.acme.com” and the skill kicks in automatically.
The agent loop
- Read the schema. Either fetch /api/v1/schema live, or read the static /agents/schema.json.
- Probe the URL. Hit POST /api/v1/probe with { url }. The response lists every interactive element (role, name, label, hint).
- Draft the YAML. Use NL targets like { role: "button", name: "Sign in" } — never CSS selectors. Keep demos short: 5–12 steps, ~30 seconds.
- Render. POST to /api/v1/renders with raw YAML or a project ID. Poll /api/v1/renders/{id} every 2-3 seconds.
- Self-correct. On failed, re-probe, identify the broken step from the per-step trace, patch, retry. This works without re-drafting the whole YAML.
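The loop above can be sketched in a few lines of Python. This is an illustrative sketch, not an official client: it assumes the endpoint shapes described on this page and a CAPTUREBEAM_KEY environment variable, and `draft_yaml` is a hypothetical placeholder for your agent's authoring step.

```python
import json
import os
import time
import urllib.request

BASE = os.environ.get("CAPTUREBEAM_BASE", "https://capturebeam.com")

def api(method, path, body=None):
    """Minimal authenticated JSON call against the API."""
    req = urllib.request.Request(
        BASE + path,
        method=method,
        data=json.dumps(body).encode() if body is not None else None,
        headers={
            "Authorization": f"Bearer {os.environ['CAPTUREBEAM_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_terminal(status):
    """A render job is done once it has succeeded or failed."""
    return status in ("succeeded", "failed")

def render_demo(url):
    elements = api("POST", "/api/v1/probe", {"url": url})  # real ARIA targets
    yaml = draft_yaml(elements)  # hypothetical: your agent authors the YAML
    job = api("POST", "/api/v1/renders", {"yaml": yaml})
    while True:  # poll every 2-3 seconds until the job reaches a terminal state
        r = api("GET", f"/api/v1/renders/{job['id']}")
        if is_terminal(r["status"]):
            return r.get("videoUrl") or r.get("error")
        time.sleep(3)
```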
Drop-in system prompt
Paste this into your agent's system prompt — or pull the current version directly from /agents/system-prompt.md.
You can produce demo videos using CaptureBeam.
Tools / endpoints (Bearer auth with the API key in $CAPTUREBEAM_KEY):
GET $BASE/api/v1/schema -> JSON Schema for demo.yaml
POST $BASE/api/v1/probe { url } -> list of clickable elements
POST $BASE/api/v1/renders { yaml } -> { id, status: "pending" }
GET $BASE/api/v1/renders/{id} -> { status, videoUrl?, error? }
To make a demo:
1. Read the schema (live or from /agents/schema.json).
2. Probe the deployed URL to identify real ARIA targets.
3. Author YAML matching the schema. Use NL targets like
{ role: "button", name: "Sign in" }
{ role: "textbox", placeholder: "Email" }.
4. POST the YAML to /api/v1/renders. Poll the job ID every 2-3s.
5. On success, return the videoUrl. On failure, read the error,
adjust the offending step (often: wrong target name or missing
wait), and re-render.
Notes:
- Each render is independent. No project setup required.
- Steps run in order. A failed step doesn't fail the whole render —
the runner records what it could and produces partial output.
- Use `waitAfterMs` after navigations and form submits.
- Add `caption` blocks to make the video self-explanatory.
- render.quality: "1080p" (default, fast) | "1440p" | "4k" (3-5x slower).
Complete worked example
From scratch — one shell script that drafts, renders, and prints the video URL. Replace $CAPTUREBEAM_KEY and the YAML body.
#!/usr/bin/env bash
set -euo pipefail
KEY=${CAPTUREBEAM_KEY:?}
BASE=${CAPTUREBEAM_BASE:-https://capturebeam.com}
YAML='title: Sign-up tour
subtitle: New user from scratch
render:
  quality: "1080p"
  preset: "midnight"
  aspect: "16:9"
  titleCard: true
steps:
  - type: goto
    url: https://app.example.com/signup
  - type: wait
    networkIdle: true
    ms: 1500
  - type: type
    target: { role: "textbox", placeholder: "Email" }
    text: "demo@example.com"
    caption: { text: "Enter your email" }
  - type: type
    target: { role: "textbox", placeholder: "Password" }
    text: "supersecret"
  - type: highlight
    target: { role: "button", name: "Create account" }
    durationMs: 700
  - type: click
    target: { role: "button", name: "Create account" }
    waitAfterMs: 2000
    caption: { text: "And you are in." }'
JOB=$(curl -sS -X POST "$BASE/api/v1/renders" \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: application/json" \
  -d "$(jq -nc --arg y "$YAML" '{yaml: $y}')" | jq -r .id)
echo "Render queued: $JOB"
while :; do
  R=$(curl -sS "$BASE/api/v1/renders/$JOB" -H "Authorization: Bearer $KEY")
  STATUS=$(echo "$R" | jq -r .status)
  case "$STATUS" in
    succeeded) echo "$R" | jq -r .videoUrl; break ;;
    failed)    echo "$R" | jq -r .error >&2; exit 1 ;;
    *)         sleep 3 ;;
  esac
done
Common patterns
Self-correcting agent
When a render fails on a missing target, the agent should probe the URL, find the closest matching element, and retry. The poll response includes a per-step steps array — index, type, status (ok/skipped/failed), optional error, and the resolved selector that was tried. That's enough for the agent to fix exactly the broken step instead of redrafting the whole YAML.
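One simple way to implement the "closest matching element" step is fuzzy matching on accessible names. A sketch using Python's stdlib difflib, assuming probe results shaped like { role, name } dicts as described above:

```python
from difflib import get_close_matches

def best_match(failed_target, elements):
    """Pick the probed element whose accessible name is closest to the
    name the failed step tried, preferring elements with the same role."""
    same_role = [e for e in elements if e.get("role") == failed_target.get("role")]
    pool = same_role or elements  # fall back to all elements if the role vanished
    names = [e.get("name", "") for e in pool]
    close = get_close_matches(failed_target.get("name", ""), names, n=1, cutoff=0.5)
    if not close:
        return None
    return next(e for e in pool if e.get("name") == close[0])
```

With a helper like this, the retry in the pseudo-code below only touches the one broken step.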
# Pseudo-code for a self-correcting agent
yaml = author_yaml_from_repo()
for attempt in range(3):
    job = post_renders(yaml)
    result = poll(job)  # blocks until done
    if result.status == "succeeded" \
       and not any(s.status == "failed" for s in result.steps or []):
        return result.videoUrl
    # Find the first failed step and ask: "what did the runner try?"
    first_fail = next(s for s in result.steps if s.status == "failed")
    print("step", first_fail.index, "failed:", first_fail.error)
    # Probe the page that step ran on, find a closest match by name.
    page = post_probe(deployed_url)
    fixed_target = best_match(first_fail, page.elements)
    yaml = patch_step(yaml, first_fail.index, fixed_target)
Per-PR demo bot
On every PR that touches a UI route, generate a demo video and comment with the embed. The agent reads the diff, identifies the changed page, and renders a flow that exercises the new code.
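A sketch of the route-detection half of such a bot. It assumes a Next.js-style file-based routing layout (app/&lt;route&gt;/page.tsx) purely for illustration — adapt the pattern to your repo's convention; the render-and-comment half reuses the /api/v1/renders flow shown in the worked example.

```python
import re

def changed_ui_routes(diff_files):
    """Map the files touched by a PR diff to the app routes they affect.
    Hypothetical convention: app/<route>/page.{tsx,jsx,ts,js}."""
    routes = set()
    for path in diff_files:
        m = re.match(r"app/(.*)/page\.(tsx|jsx|ts|js)$", path)
        if m:
            routes.add("/" + m.group(1))
    return sorted(routes)
```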
Onboarding videos from docs
For every "Getting started" markdown page in your docs, render a 30s walkthrough that demonstrates the steps. Re-render the whole set on a nightly cron — broken videos surface UI changes before users do.
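A sketch of the extraction half: pull the numbered steps out of a Getting-started page so the agent can turn each one into a demo step. The markdown-to-YAML authoring itself is the agent's job; this only handles the common `1. Do the thing` list convention.

```python
import re

def getting_started_steps(markdown):
    """Extract numbered-list items from a docs page:
    '1. Create an account' becomes 'Create an account'."""
    steps = []
    for line in markdown.splitlines():
        m = re.match(r"\s*\d+\.\s+(.*\S)", line)
        if m:
            steps.append(m.group(1))
    return steps
```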
API surface used by agents
- GET /api/v1/schema — JSON Schema + quick reference. No auth required.
- POST /api/v1/probe — discover elements at a URL. Bearer auth.
- POST /api/v1/renders — submit a render. Bearer + active subscription.
- GET /api/v1/renders/{id} — poll a render. Bearer auth.
Full reference + status codes: /docs/api.
Rate limits & soft caps
- Concurrency: max 3 renders in flight per account. The 4th returns 429 — agents should back off or queue.
- Yearly safety net: 5,000 successful renders/year per account. Marketing copy says “unlimited fair use” — this cap is anti-abuse only. Email us if you need it lifted.
- Probe: Bearer-authed but not subscription-gated, so an evaluating agent can use it before the user has subscribed.
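An agent that hits the 3-render concurrency cap should retry with exponential backoff rather than hammering the endpoint. A deterministic sketch (jitter omitted for clarity; `post_render` is a hypothetical callable returning a status code and body):

```python
import time

def backoff_delay(attempt, base=2.0, cap=60.0):
    """Delay in seconds before retry number `attempt` (0-based):
    2s, 4s, 8s, ... capped at 60s. Add jitter in production."""
    return min(cap, base * (2 ** attempt))

def submit_with_backoff(post_render, yaml, max_attempts=6):
    """Retry the render submission whenever it returns HTTP 429."""
    for attempt in range(max_attempts):
        code, body = post_render(yaml)
        if code != 429:
            return body
        time.sleep(backoff_delay(attempt))
    raise RuntimeError("still rate-limited after retries")
```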
What's coming next
- Synchronous render endpoint for short demos — saves the poll loop when an agent only cares about the result.
- Webhooks on render completion — for fully async flows (e.g. GitHub bots).
- Playwright-backed probe for SPA-heavy sites — current probe is static-HTML only.