AI Video Reference

Use the AI video hooks to submit long-running video jobs, poll for completion, and play back hosted MP4 results. Video generation is job-based so apps can survive reloads, slow model runs, and flaky networks.

Job-Based Video Generation

Video generation uses two hooks:

  • useSubmitVideoJob submits a request and returns a jobId.
  • useVideoJobStatus polls that job until it completes or fails.

Import both hooks from the AI hook module:

import * as React from "react";
import { useSubmitVideoJob, useVideoJobStatus } from "@/hooks/use-ai";
import { usePersistentItem } from "@/hooks/use-persistent-item";

export default function App() {
  const [prompt, setPrompt] = React.useState("");
  const [jobId, setJobId] = usePersistentItem<string | null>("videoJobId", null);
  const [videoUrl, setVideoUrl] = usePersistentItem<string | null>("videoUrl", null);
  const [videoError, setVideoError] = React.useState<string | null>(null);
  const { submitVideo, isSubmitting, error: submitError } = useSubmitVideoJob();

  const job = useVideoJobStatus(jobId, {
    onComplete: ({ url }) => {
      setVideoUrl(url);
      setJobId(null);
    },
    onError: (message) => {
      setVideoError(message);
      setJobId(null);
    },
  });

  const isGenerating = job.status === "pending" || job.status === "processing";

  const handleGenerate = async () => {
    if (!prompt.trim()) return;
    setVideoError(null);

    const result = await submitVideo({
      prompt: prompt.trim(),
      aspectRatio: "16:9",
      durationSeconds: 4,
    });

    if (result) setJobId(result.jobId);
  };

  const errorMessage = submitError?.message || videoError || job.error;

  return (
    <div>
      <input
        value={prompt}
        onChange={(event) => setPrompt(event.target.value)}
        placeholder="Describe the video you want..."
      />
      <button onClick={handleGenerate} disabled={isSubmitting || isGenerating}>
        {isSubmitting ? "Submitting..." : isGenerating ? "Generating..." : "Generate video"}
      </button>
      {errorMessage && <p className="text-destructive">{errorMessage}</p>}
      {videoUrl && <video src={videoUrl} controls className="w-full rounded-lg" />}
    </div>
  );
}
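The shape of the job object returned by useVideoJobStatus is not spelled out above. One plausible shape, sketched here as plain TypeScript types (the field names are assumptions for illustration, not the documented API), along with the terminal-state check that polling relies on:

```typescript
// Assumed shape of the polled job object; field names are illustrative only.
type VideoJobStatus = "pending" | "processing" | "completed" | "failed";

interface VideoJob {
  status: VideoJobStatus;
  resultUrl?: string; // hosted MP4, present once status is "completed"
  error?: string;     // message, present once status is "failed"
}

// A job is terminal when polling can stop and onComplete/onError has fired.
function isTerminal(status: VideoJobStatus): boolean {
  return status === "completed" || status === "failed";
}
```

The isGenerating flag in the example above is simply the negation of this check restricted to the non-terminal states.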

Video Utilities (Experimental)

The platform also includes client-side video helpers for workflows around generated videos. These hooks use browser media APIs and return local File objects that work with useDownload, useFileUpload, or video generation inputs.

useVideoFrame

Extract a still frame from a video URL, Blob, File, or { url } object.

import { useVideoFrame } from "@/hooks/use-video-frame";

const { extractFrame, isLoading, error, clearError } = useVideoFrame();

const frameFile = await extractFrame({
  source: previousVideoUrl,
  time: "last",
  format: "image/png",
});

time can be "first", "last", or a timestamp in seconds. Use the returned File directly as the image for the next submitVideo call.
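Conceptually, the "first"/"last"/seconds option resolves to a concrete timestamp against the clip's duration. This hypothetical helper (not part of the hook's public API) shows the intended semantics, including clamping out-of-range timestamps:

```typescript
// Resolve a `time` option to a timestamp in seconds.
// "first" → 0, "last" → clip duration, number → clamped into [0, duration].
function resolveFrameTime(
  time: "first" | "last" | number,
  durationSeconds: number
): number {
  if (time === "first") return 0;
  if (time === "last") return durationSeconds;
  return Math.min(Math.max(time, 0), durationSeconds);
}
```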

useVideoStitch

Concatenate two or more clips into one MP4 or WebM file.

import { useDownload } from "@/hooks/use-download";
import { useVideoStitch } from "@/hooks/use-video-stitch";

const { stitch, isProcessing, progress, cancel } = useVideoStitch();
const { download } = useDownload();

const result = await stitch({
  sources: [firstClipUrl, secondClipUrl],
  muted: false,
});

if (result) download(result);

stitch accepts sources, width, height, videoBitsPerSecond, and muted. Progress is reported from 0 to 1. All clips should use the same aspect ratio for best results.
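Because clips should share an aspect ratio, a quick pre-flight check can catch mismatches before calling stitch. This is an illustrative helper, not part of useVideoStitch:

```typescript
// Minimal clip dimensions needed for the check.
interface ClipSize {
  width: number;
  height: number;
}

// Return true when every clip has (approximately) the same aspect ratio.
function sameAspectRatio(clips: ClipSize[], tolerance = 0.01): boolean {
  if (clips.length < 2) return true;
  const ratio = clips[0].width / clips[0].height;
  return clips.every((c) => Math.abs(c.width / c.height - ratio) <= tolerance);
}
```

A 1920×1080 clip and a 1280×720 clip pass (both 16:9), while mixing in a square clip fails the check.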

useVideoAudioOverlay

Replace or mix audio on top of a video.

import { useVideoAudioOverlay } from "@/hooks/use-video-audio-overlay";

const { overlay, isProcessing, progress, cancel } = useVideoAudioOverlay();

const result = await overlay({
  video: videoUrl,
  audio: speechUrl,
  audioVolume: 1,
  videoVolume: 0,
});

videoVolume defaults to 0, so the overlay replaces the original audio. Set videoVolume above 0 to mix both tracks, and lower audioVolume when you want the overlaid track (for example, background music) to sit under the original audio.
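The two volume knobs can be captured as named presets. The intent names below are illustrative, not API values; only audioVolume and videoVolume come from the hook described above:

```typescript
// Map an editing intent to the { audioVolume, videoVolume } pair overlay() takes.
type OverlayIntent = "replace" | "mix" | "duckedMusic";

function volumesFor(intent: OverlayIntent): {
  audioVolume: number;
  videoVolume: number;
} {
  switch (intent) {
    case "replace":
      return { audioVolume: 1, videoVolume: 0 }; // new track only (the default)
    case "mix":
      return { audioVolume: 1, videoVolume: 1 }; // both tracks at full level
    case "duckedMusic":
      return { audioVolume: 0.3, videoVolume: 1 }; // music under original audio
  }
}
```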

Model Selection

When no model is specified, the platform chooses the best available model for the user's subscription tier. To expose model choice, pass a model ID and build selectors from listVideoModels():

import { listVideoModels, useSubmitVideoJob } from "@/hooks/use-ai";

const models = listVideoModels();
const [modelId, setModelId] = React.useState<string | undefined>(undefined);
const { submitVideo } = useSubmitVideoJob();

await submitVideo({
  prompt: "A drone shot over mountains",
  model: modelId,
});

listVideoModels() returns ModelInfo[] with id, displayName, provider, tier, description, and capabilities. Video model capabilities include:

  • image_input
  • reference_images
  • reference_videos
  • reference_audio
  • video_extension
  • video_editing
  • last_frame

Prompt-only text-to-video is implicit for video models and is not listed as a capability. Let the server validate exact combinations such as duration, aspect ratio, and incompatible inputs.
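To build a selector that only offers models supporting a given input, filter the ModelInfo list by capability. The sample data below is made up for illustration; only the field names come from the description above:

```typescript
// Minimal slice of ModelInfo needed for capability filtering.
interface ModelInfo {
  id: string;
  displayName: string;
  capabilities: string[];
}

// Keep only models that advertise the requested capability.
function modelsWithCapability(
  models: ModelInfo[],
  capability: string
): ModelInfo[] {
  return models.filter((m) => m.capabilities.includes(capability));
}

// Hypothetical sample data for illustration only.
const sample: ModelInfo[] = [
  { id: "vid-a", displayName: "Model A", capabilities: ["image_input"] },
  { id: "vid-b", displayName: "Model B", capabilities: ["image_input", "video_extension"] },
];

console.log(modelsWithCapability(sample, "video_extension").map((m) => m.id)); // ["vid-b"]
```

In an app, the same filter would run over the array returned by listVideoModels() to populate a dropdown.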

Best Practices

  • Persist the jobId until completion so polling survives reloads.
  • Use the completed job's hosted result URL (the url delivered to onComplete) for playback and persistence.
  • Treat job.status as the source of truth for loading UI.
  • Use onComplete and onError to copy terminal state into your own durable state.
  • Choose one video task per submission instead of mixing incompatible inputs.
  • Prefer frame extraction for cross-model continuation, then generate the next clip from the extracted File.
  • Use useVideoStitch to assemble multi-shot workflows into a downloadable file.
  • Use useVideoAudioOverlay for narration, background music, or replacing original audio.
  • Include subject, action, style, camera motion, composition, focus, and ambiance in video prompts.