Product Introduction
- Definition: CtrlAI is a protocol-level HTTP proxy designed for AI agent security. It operates transparently between AI agent SDKs (like OpenClaw) and Large Language Model (LLM) providers (e.g., Anthropic, OpenAI), intercepting and inspecting traffic without SDK modifications.
- Core Value Proposition: CtrlAI enforces configurable guardrail rules, audits all agent behavior, and blocks unsafe LLM tool calls in real time. Its zero-code deployment eliminates integration overhead while providing critical security for autonomous agents interacting with tools like shell commands, file systems, and APIs.
Main Features
Protocol-Level Interception:
- How it works: CtrlAI acts as a reverse proxy, routing requests from agent SDK → CtrlAI (:3100) → LLM provider. It buffers streaming responses (e.g., Anthropic Messages, OpenAI Chat Completions), reconstructs tool calls, and applies security rules before forwarding sanitized responses.
- Technology: Uses Go’s `net/http` for HTTP/S routing, SSE (Server-Sent Events) buffering, and JSON parsing for tool-call extraction.
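The routing step described above can be sketched as a small path-rewrite helper that a Go reverse proxy (for example, the director function of `net/http/httputil`'s `ReverseProxy`) would apply before forwarding. The provider host table and the exact path shape here are assumptions for illustration, not CtrlAI's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteTarget maps a CtrlAI-style path such as
// /provider/anthropic/agent/main/v1/messages to an upstream host and
// the path the provider expects. The host table is illustrative.
func rewriteTarget(path string) (host, upstreamPath string, err error) {
	parts := strings.SplitN(strings.TrimPrefix(path, "/"), "/", 5)
	// Expected shape: provider/<name>/agent/<agent>/<rest...>
	if len(parts) < 5 || parts[0] != "provider" || parts[2] != "agent" {
		return "", "", fmt.Errorf("unrecognized path: %s", path)
	}
	hosts := map[string]string{
		"anthropic": "api.anthropic.com",
		"openai":    "api.openai.com",
	}
	h, ok := hosts[parts[1]]
	if !ok {
		return "", "", fmt.Errorf("unknown provider: %s", parts[1])
	}
	return h, "/" + parts[4], nil
}

func main() {
	host, p, _ := rewriteTarget("/provider/anthropic/agent/main/v1/messages")
	fmt.Println(host, p) // api.anthropic.com /v1/messages
}
```

Keeping the agent name in the URL path is what lets the proxy attribute every request to a specific agent without touching SDK code.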
Configurable Guardrail Engine:
- How it works: Evaluates tool calls against YAML-based rules (`~/.ctrlai/rules.yaml`). Rules support glob patterns (`**/.env`), regex (`rm\s+-rf`), agent-specific policies, and 19+ built-in security rules (e.g., block SSH key access, destructive commands). First-match logic determines `allow` or `block`.
- Technology: Custom rule engine with condition matchers (AND/OR logic), built-in rule toggles, and CLI testing (`ctrlai rules test '{"name":"exec", ...}'`).
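As an illustration of the rule format described above, here is a hypothetical `~/.ctrlai/rules.yaml` sketch. Only `match`, `tool`, and `agent` are named in this document; the `name`, `path`, `command`, and `action` keys are assumptions based on the described behavior:

```yaml
# Hypothetical rules.yaml sketch; key names beyond match/tool/agent
# are illustrative assumptions.
rules:
  - name: block-env-files
    match:
      tool: read_file
      path: "**/.env"        # glob pattern
    action: block
  - name: block-destructive-shell
    match:
      tool: exec
      command: 'rm\s+-rf'    # regex
    action: block
  - name: intern-no-exec
    match:
      tool: exec
      agent: intern          # agent-scoped policy
    action: block
  - name: default-allow
    match:
      tool: "*"
    action: allow            # first match wins, so this is the fallback
```

Because evaluation is first-match, rule order matters: specific blocks go above the catch-all allow.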
Kill Switch & Agent Isolation:
- How it works: Instantly terminates agents via CLI (`ctrlai kill main`) or API (`POST /api/kill`). Persists state to `killed.yaml`, forcing the proxy to return synthetic `end_turn` responses. Supports per-agent isolation using URL paths (`/provider/anthropic/agent/main`).
- Technology: File-watched state management with atomic writes.
Tamper-Evident Audit Logging:
- How it works: Logs all tool calls with SHA-256 hash chaining for integrity verification. Stores entries in daily JSONL files plus a SQLite index. CLI tools enable querying (`ctrlai audit query --agent work --decision block`).
- Technology: Append-only logs with hash-linking, SQLite for indexed queries.
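Hash chaining as described can be sketched in a few lines of Go: each entry's hash covers the previous entry's hash plus the current record, so editing any line invalidates every hash after it. The `chainHash` and `verifyChain` helpers are illustrative, not CtrlAI's internals:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// chainHash links a record to its predecessor by hashing the previous
// hash concatenated with the current record.
func chainHash(prevHash, record string) string {
	sum := sha256.Sum256([]byte(prevHash + record))
	return hex.EncodeToString(sum[:])
}

// verifyChain recomputes the chain from an empty genesis value and
// returns the index of the first tampered entry, or -1 if intact.
func verifyChain(records, hashes []string) int {
	prev := "" // genesis
	for i, rec := range records {
		want := chainHash(prev, rec)
		if hashes[i] != want {
			return i
		}
		prev = want
	}
	return -1
}

func main() {
	records := []string{
		`{"tool":"exec","decision":"block"}`,
		`{"tool":"read_file","decision":"allow"}`,
	}
	var hashes []string
	prev := ""
	for _, r := range records {
		prev = chainHash(prev, r)
		hashes = append(hashes, prev)
	}
	fmt.Println(verifyChain(records, hashes)) // -1: chain intact
	records[0] = `{"tool":"exec","decision":"allow"}` // tamper with entry 0
	fmt.Println(verifyChain(records, hashes)) // 0: first broken entry
}
```

This is why the logs must be append-only: rewriting history would require recomputing every later hash, which the stored chain makes evident.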
Real-Time Dashboard & API:
- How it works: Web UI (`:3100/dashboard`) shows agent activity, audit trails, and kill controls. REST API (`/api/rules`, `/api/audit`) and WebSocket feed enable integrations.
- Technology: Go templating for UI, WebSocket broadcasts for live updates.
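A live feed like the one above implies a fan-out step: every audit event is pushed to each connected dashboard client. This single-goroutine sketch shows that pattern with plain channels (a real server would wrap each channel in a WebSocket connection and guard the map with a mutex; all names here are illustrative):

```go
package main

import "fmt"

// hub fans each audit event out to every subscribed client channel.
type hub struct {
	clients map[chan string]struct{}
}

func newHub() *hub { return &hub{clients: map[chan string]struct{}{}} }

// subscribe registers a new client and returns its event channel.
func (h *hub) subscribe() chan string {
	ch := make(chan string, 8)
	h.clients[ch] = struct{}{}
	return ch
}

// broadcast delivers an event to every client, dropping it for slow
// clients rather than blocking the proxy's request path.
func (h *hub) broadcast(event string) {
	for ch := range h.clients {
		select {
		case ch <- event:
		default: // client buffer full; drop
		}
	}
}

func main() {
	h := newHub()
	a, b := h.subscribe(), h.subscribe()
	h.broadcast(`{"agent":"main","decision":"block"}`)
	fmt.Println(<-a == <-b) // true: both clients got the same event
}
```

Dropping events for slow consumers is a deliberate trade-off: the durable record is the audit log, so the live feed can afford to be lossy.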
Problems Solved
- Pain Point: Uncontrolled tool execution by LLM agents (e.g., reading `.env`, running `rm -rf /`). CtrlAI enforces least-privilege access via centralized rules.
- Target Audience:
- AI Developers deploying agents like OpenClaw.
- Security Engineers needing compliance for AI workflows.
- Compliance Teams requiring audit trails for LLM actions.
- Use Cases:
- Blocking credential exposure via file-read tools in financial AI agents.
- Preventing production environment writes by developer assistants.
- Auditing tool calls in healthcare chatbots for HIPAA compliance.
- Terminating agents exhibiting suspicious behavior (e.g., brute-force attempts).
Unique Advantages
- Differentiation: Unlike SDK-based solutions (e.g., LlamaGuard), CtrlAI requires no code changes, works with any LLM provider, and operates at the network layer. Versus API gateways, it offers agent-aware policies and tool-call introspection.
- Key Innovation: Protocol-level tool-call interception with zero-trust evaluation. Combines regex/glob matching, hash-based auditing, and kill switches in a single lightweight proxy.
Frequently Asked Questions (FAQ)
How does CtrlAI deploy without SDK changes?
CtrlAI acts as an HTTP proxy. Developers reconfigure their agent’s LLM `baseUrl` to point to CtrlAI (e.g., `http://localhost:3100/provider/anthropic`), enabling transparent traffic interception.
Can I customize rules for specific agents?
Yes. Rules can target agents using the `agent` field (e.g., `match: tool: exec, agent: intern`). Each agent’s tool calls are evaluated against scoped policies.
What happens if CtrlAI blocks a tool call?
The entire LLM response is replaced with a block notice (e.g., `[CtrlAI] Blocked: Destructive command`). This prevents partial execution of interdependent tool chains.
Does CtrlAI support OpenAI’s GPT-4?
Yes. It fully supports OpenAI Chat Completions (`/v1/chat/completions`) and Anthropic Messages. Other providers use pass-through routing.
How is audit log integrity verified?
Run `ctrlai audit verify` to check the SHA-256 hash chains. Any modification breaks every subsequent hash, ensuring tamper evidence.
