Agentic Vision
Give any AI agent the ability to see and search your video library. PureFrame exposes its tools over MCP and a direct HTTP API so agents can find video moments the same way a human would — by describing what they’re looking for.
Available tools
Visual responses
search_videos results include thumbnail_base64 — a base64-encoded JPEG of the matched frame. Vision-capable models like GPT-4o and Claude can see this directly without following a URL, making it possible to ask follow-up questions about the visual content of a clip.
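As a sketch of how an agent can use this, the inline JPEG can be attached to a follow-up vision request as a data URI. The field names other than thumbnail_base64 and the question text are illustrative assumptions, not documented API shapes:

```python
import base64

# Hypothetical search_videos result; only thumbnail_base64 is documented,
# the other fields are illustrative assumptions.
result = {
    "video_id": "vid_123",
    "timestamp": 42.5,
    "thumbnail_base64": base64.b64encode(b"\xff\xd8\xff...").decode(),
}

# Vision-capable chat APIs accept inline images as data URIs, so an agent
# can show the matched frame to the model without following any URL.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is happening in this frame?"},
        {
            "type": "image_url",
            "image_url": {
                "url": f"data:image/jpeg;base64,{result['thumbnail_base64']}"
            },
        },
    ],
}
```

Passing this message to a vision model like GPT-4o or Claude lets the agent answer questions about the clip's visual content directly from the search result.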
MCP integration
The fastest path is installing the PureFrame MCP server. Add the following config to your client:
Claude Desktop: ~/Library/Application Support/Claude/claude_desktop_config.json
Claude Code: ~/.claude.json
Cursor: .cursor/mcp.json
Windsurf: ~/.codeium/windsurf/mcp_config.json
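The exact config block is not reproduced here; a typical npx-based MCP server entry looks like the following sketch, where the package name pureframe-mcp is an assumption rather than a documented name:

```json
{
  "mcpServers": {
    "pureframe": {
      "command": "npx",
      "args": ["-y", "pureframe-mcp"]
    }
  }
}
```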
Remote MCP (no install)
For Claude.ai web or any client that supports remote MCP — no npx required:
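Remote-capable MCP clients point at a server URL instead of a local command. A sketch of such an entry follows; the host and path are placeholders, not a documented endpoint:

```json
{
  "mcpServers": {
    "pureframe": {
      "url": "https://your-pureframe-host/mcp"
    }
  }
}
```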
HTTP function calling
For OpenAI, Gemini, or any LLM with function calling support — fetch the tool schema once and call tools directly over HTTP:
The schema endpoint returns an OpenAI-compatible tools array you can pass directly to client.chat.completions.create(tools=schema).
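A minimal sketch of that flow, using only the standard library for the HTTP side. The base URL and endpoint paths below are illustrative assumptions, not documented routes; adjust them to your deployment:

```python
import json
import urllib.request

# Assumed deployment URL and endpoint paths -- placeholders, not documented.
BASE_URL = "http://localhost:8080"


def fetch_tool_schema(base_url=BASE_URL):
    """Fetch the OpenAI-compatible tools array once at startup."""
    with urllib.request.urlopen(f"{base_url}/tools/schema") as resp:
        return json.load(resp)


def call_tool(name, arguments, base_url=BASE_URL):
    """Execute a tool call the model requested, over plain HTTP."""
    req = urllib.request.Request(
        f"{base_url}/tools/{name}",
        data=json.dumps(arguments).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# The fetched schema plugs straight into function calling, e.g.:
#   schema = fetch_tool_schema()
#   response = client.chat.completions.create(
#       model="gpt-4o", messages=messages, tools=schema)
#   for tc in response.choices[0].message.tool_calls or []:
#       result = call_tool(tc.function.name, json.loads(tc.function.arguments))
```

The same loop works for Gemini or any other model with function calling: the model picks a tool and emits JSON arguments, and the agent forwards them to the HTTP endpoint and returns the result.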