Odock Documentation
Build governed AI products faster.
Odock sits between your applications and model or MCP providers, giving teams one place to manage access, safety, budgets, routing, usage, and observability.
Choose your path
The most useful starting points for evaluating, integrating, and operating Odock.
Start with the platform
Understand how Odock governs LLM and MCP traffic across the gateway, UI, and observability stack.
Run it locally
Bring up Odock with Docker Compose and get the core services ready for development.
Use the LLM gateway
Send OpenAI-compatible, Anthropic, Gemini, vLLM, and unified chat requests through one governed endpoint.
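As a minimal sketch of what a request through the gateway might look like: the snippet below uses the official OpenAI Python SDK pointed at a local gateway address. The base URL, API key, and model name are illustrative placeholders, not documented Odock defaults.

```python
# Sketch: send an OpenAI-compatible chat request through the gateway.
# The base URL, API key, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # assumed gateway address, not a confirmed default
    api_key="odock-api-key",              # an API key issued from the Odock control plane
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model the gateway is configured to route
    messages=[{"role": "user", "content": "Hello through the gateway"}],
)
print(response.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, existing SDKs only need their base URL and key swapped; routing, safety, and budget checks happen inside the gateway.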
Control access
Configure API keys, providers, models, MCP servers, teams, and policy boundaries from the control plane.
Set budgets and quotas
Reserve, settle, and enforce spend or usage limits across organisations, teams, users, and API keys.
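To make the reserve-and-settle model concrete, here is an illustrative sketch of the pattern, not Odock's actual API: a budget holds an estimated cost before a request is forwarded, then settles to the actual cost afterwards, so in-flight requests count against the cap before any provider bill arrives.

```python
# Illustrative sketch of the reserve/settle pattern; class and method names are hypothetical.
class Budget:
    def __init__(self, limit: float):
        self.limit = limit      # hard spend cap for this scope (org, team, user, or key)
        self.spent = 0.0        # settled spend
        self.reserved = 0.0     # in-flight reservations

    def reserve(self, estimate: float) -> bool:
        """Hold the estimated cost before forwarding; reject if it would exceed the cap."""
        if self.spent + self.reserved + estimate > self.limit:
            return False
        self.reserved += estimate
        return True

    def settle(self, estimate: float, actual: float) -> None:
        """Replace the reservation with the actual cost once the request completes."""
        self.reserved -= estimate
        self.spent += actual
```

A production implementation would also need atomic updates across concurrent requests; the sketch only shows why reserving first keeps enforcement correct when several requests arrive before any of them finishes.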
Operate with visibility
Track logs, metrics, traces, alerts, dashboards, request identity, usage, latency, and cost.
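As one concrete angle on per-request visibility: OpenAI-compatible responses carry token usage, so a client can correlate its own latency measurement with the usage reported back through the gateway. A hedged sketch, reusing the placeholder address and key from the gateway example above:

```python
# Sketch: record latency and token usage per request.
# Assumes the gateway returns standard OpenAI-compatible usage fields.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="odock-api-key")  # placeholders

start = time.monotonic()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "ping"}],
)
elapsed_ms = (time.monotonic() - start) * 1000

usage = response.usage  # prompt_tokens / completion_tokens / total_tokens
print(f"latency={elapsed_ms:.0f}ms "
      f"prompt={usage.prompt_tokens} completion={usage.completion_tokens}")
```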