Introduction
PureFrame is a video intelligence API. Upload your footage and PureFrame indexes every frame, sound, and spoken word — then find any moment with a natural language query or a reference image.
Access these core capabilities through simple REST APIs:
- Search — find specific moments using text or image queries, across visual content, speech, and audio
- Organize — group videos into collections and scope searches to what matters
- Integrate — give any AI agent vision over your video library via MCP or function calling
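As a rough sketch of how a search request might be composed in Python. The `/v1/search` path, the host, and the `query` and `collection_id` parameter names are assumptions for illustration, not the documented contract:

```python
# Illustrative sketch only: the /v1/search path, host, and parameter
# names (query, collection_id) are assumptions, not the real contract.
BASE_URL = "https://api.pureframe.example/v1"  # hypothetical host

def build_search_payload(query, collection_id=None):
    """Build the JSON body for a hypothetical text search request."""
    payload = {"query": query}
    if collection_id is not None:
        # Scope the search to one collection (see Organize above).
        payload["collection_id"] = collection_id
    return payload

# Sending it would look something like (requires the `requests` package):
#   resp = requests.post(f"{BASE_URL}/search",
#                        json=build_search_payload("red car at night"),
#                        headers={"Authorization": f"Bearer {API_KEY}"})
```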
Jump in
- Quickstart — upload your first video and run your first search in under 5 minutes.
- API reference — complete documentation for every endpoint.
- Agent integration — give Claude, GPT-4o, or any LLM the ability to search your video library.
- Collections — organize videos and scope searches to a specific library.
How PureFrame works
Upload
Send a video file to POST /v1/upload. PureFrame returns a job_id immediately — processing happens in the background.
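A minimal sketch of the upload step. The `POST /v1/upload` path and the `job_id` field come from the text above; the host, auth header, and multipart field name are assumptions:

```python
# Hedged sketch: only POST /v1/upload and job_id are stated in the docs;
# the host, the "file" field name, and the auth scheme are assumptions.
import json

def parse_upload_response(body):
    """Extract the job_id from the upload response body (JSON string)."""
    return json.loads(body)["job_id"]

# Actual call (requires the `requests` package and an API key):
#   with open("clip.mp4", "rb") as f:
#       resp = requests.post("https://api.pureframe.example/v1/upload",
#                            files={"file": f},
#                            headers={"Authorization": f"Bearer {API_KEY}"})
#   job_id = parse_upload_response(resp.text)
```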
Index
PureFrame extracts frames every 5 seconds, embeds them with CLIP, and transcribes speech with Whisper. The job status moves from processing → done.
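Because processing is asynchronous, a client typically polls until the status reaches `done`. A sketch of that loop, assuming a job-lookup endpoint returns a JSON object with a `status` field (the lookup path and field name are assumptions):

```python
# Polling sketch: the docs state status moves processing -> done;
# how the job is fetched (e.g. GET /v1/jobs/{job_id}) is an assumption.
import time

def poll(fetch, job_id, interval=2.0, timeout=600.0):
    """Call fetch(job_id) until the job reports status == "done"."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch(job_id)
        if job.get("status") == "done":
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} not done after {timeout}s")
```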
Base URL
All endpoints are versioned under /v1.
Response envelope
Every response shares the same structure:
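Purely as an illustration of what a shared envelope can look like — every key below is an assumption, not the documented contract:

```json
{
  "data": { "...": "endpoint-specific payload" },
  "request_id": "req_abc123"
}
```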
Errors return a non-2xx status with a machine-readable code:
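An illustrative error body — the field layout and the `code` value shown here are assumptions; the Errors page defines the real codes:

```json
{
  "error": {
    "code": "not_found",
    "message": "No video with that id exists."
  }
}
```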
See Errors for the full list of error codes.