Agentic Vision

Give any AI agent the ability to see and search your video library. PureFrame exposes its tools over MCP and a direct HTTP API so agents can find video moments the same way a human would — by describing what they’re looking for.

Available tools

| Tool | Description |
| --- | --- |
| `search_videos` | Search for moments matching a text query or image. Returns clips with timestamps and embedded frames. |
| `list_collections` | List all video collections in the library. |
| `get_collection` | Get details about a specific collection. |
| `get_video` | Get details and a presigned playback URL for a video. |

Visual responses

search_videos results include thumbnail_base64 — a base64-encoded JPEG of the matched frame. Vision-capable models like GPT-4o and Claude can see this directly without following a URL, making it possible to ask follow-up questions about the visual content of a clip.

MCP integration

The fastest path is installing the PureFrame MCP server. Add the following config to your client:

```json
{
  "mcpServers": {
    "pureframe": {
      "command": "npx",
      "args": ["-y", "pureframe-mcp"],
      "env": { "PUREFRAME_API_KEY": "pf_key_..." }
    }
  }
}
```
| Client | Config file |
| --- | --- |
| Claude Desktop | `~/Library/Application Support/Claude/claude_desktop_config.json` |
| Claude Code | `~/.claude.json` |
| Cursor | `.cursor/mcp.json` |
| Windsurf | `~/.codeium/windsurf/mcp_config.json` |

Remote MCP (no install)

For Claude.ai web or any client that supports remote MCP — no npx required:

```json
{
  "mcpServers": {
    "pureframe": {
      "url": "https://mcp.pureframe.ai",
      "headers": { "Authorization": "Bearer pf_key_..." }
    }
  }
}
```

HTTP function calling

For OpenAI, Gemini, or any LLM with function calling support — fetch the tool schema once and call tools directly over HTTP:

```python
import httpx

BASE = "https://api.pureframe.ai"
HEADERS = {"Authorization": "Bearer pf_key_..."}

# Get the OpenAI-compatible tool schema
schema = httpx.get(f"{BASE}/v1/agent/schema.json", headers=HEADERS).json()

# Call a tool
result = httpx.post(f"{BASE}/v1/agent/call", headers=HEADERS, json={
    "tool": "search_videos",
    "input": {
        "query": "person waving at the camera",
        "limit": 5
    }
}).json()
```

The schema endpoint returns an OpenAI-compatible tools array you can pass directly to client.chat.completions.create(tools=schema).
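To close the loop, each tool call the model emits has to be translated back into a `/v1/agent/call` request body. A minimal sketch of that bridge, assuming the standard OpenAI tool-call shape; `tool_call_to_payload` is an illustrative helper, not part of any PureFrame SDK:

```python
import json

def tool_call_to_payload(tool_call: dict) -> dict:
    """Map an OpenAI-style tool call onto the /v1/agent/call request body."""
    return {
        "tool": tool_call["function"]["name"],
        # OpenAI serializes arguments as a JSON string; the agent endpoint
        # expects a JSON object, so parse before forwarding
        "input": json.loads(tool_call["function"]["arguments"]),
    }
```

POST the returned dict to the `/v1/agent/call` endpoint with your API key header, then feed the JSON result back to the model as a tool message to continue the conversation.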

Example: search + vision with Claude

```python
import anthropic, httpx

client = anthropic.Anthropic()
pf = {"Authorization": "Bearer pf_key_..."}

# Search for a moment
clips = httpx.post("https://api.pureframe.ai/v1/agent/call",
    headers=pf,
    json={"tool": "search_videos", "input": {"query": "explosion scene", "limit": 1}}
).json()["data"]

# Pass the matched frame directly to Claude — no URL fetching needed
frame_b64 = clips[0]["segments"][0]["thumbnail_base64"]

response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {"type": "base64", "media_type": "image/jpeg", "data": frame_b64}},
            {"type": "text", "text": "Describe what's happening in this frame."}
        ]
    }]
)
print(response.content[0].text)
```