
Linchpin

Open-source, self-hostable runtime for managed AI agents

2026-05-13

Product Introduction

  1. Definition: Linchpin is an open-source, self-hostable runtime platform for managed AI agents. Technically, it is a containerized orchestration system that provides a complete backend for developing, deploying, and managing autonomous AI agents in isolated environments.
  2. Core Value Proposition: Linchpin exists to give developers and organizations full control, privacy, and infrastructure ownership over their AI agent operations. It enables the deployment of production-ready agentic workflows on a single VM, eliminating reliance on proprietary, closed-source managed platforms and their associated data routing, lock-in, and cost opacity.

Main Features

  1. Multi-Model Runtime Adapter: The platform abstracts model provider APIs through a unified interface. It supports integration with OpenRouter for access to approximately 200 cloud-based models (like Anthropic Claude, OpenAI GPT, Google Gemini) and Ollama for running any locally hosted open-source LLM. Agents can be configured to use different providers, allowing for cost and performance optimization per task.
  2. Docker-Based Session Sandboxing: Every agent session is instantiated within a dedicated, ephemeral Docker container. Each sandbox comes pre-installed with a development toolset (Python, Node.js, git, ripgrep). Networking is policy-driven via environment definitions, offering a choice between completely isolated (none network) and internet-egress-enabled (open network) containers for security control.
  3. Built-in & Extensible Tool System: Agents have access to eight core, secure tools (bash, read, write, edit, glob, grep, web_fetch, web_search) that execute strictly within the session's container. The system is extensible via the Model Context Protocol (MCP) over stdio and standard HTTP endpoints, with the Linchpin connector managing subprocess lifecycle and secure credential injection.
  4. Encrypted Credential Vaults: Sensitive data like API keys are stored in a Fernet-encrypted vault. Secrets are referenced by name in agent configurations and are decrypted in-memory only at session startup. This ensures credentials are never persisted to disk in plaintext, significantly enhancing security for self-hosted deployments.
  5. Event-Driven Architecture with SSE Streaming: All agent interactions are recorded in an append-only, cursor-paginated event log per session. Clients can subscribe to real-time updates via Server-Sent Events (SSE), which automatically replays historical events from a given cursor before delivering live streams. This is engineered for building resilient, stateful user interfaces that can recover from disconnections.
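The credential-vault flow described above can be sketched with the `cryptography` library's Fernet primitive. The `VaultSketch` class and its method names are illustrative only, not Linchpin's actual API:

```python
from cryptography.fernet import Fernet

# Illustrative sketch of a Fernet-backed secret vault; class and method
# names are hypothetical, not Linchpin's real interface.
class VaultSketch:
    def __init__(self, key: bytes):
        self._fernet = Fernet(key)

    def seal(self, secret: str) -> bytes:
        # Only this ciphertext would ever be persisted to disk.
        return self._fernet.encrypt(secret.encode())

    def open(self, token: bytes) -> str:
        # Decryption happens in-memory, e.g. at session startup.
        return self._fernet.decrypt(token).decode()

key = Fernet.generate_key()
vault = VaultSketch(key)
token = vault.seal("sk-example-api-key")
# token is opaque ciphertext; the plaintext key never touches disk.
restored = vault.open(token)
```

The key itself still has to live somewhere (an environment variable or key file on the host), which is why this model suits self-hosted deployments where the operator already controls the machine.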
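The cursor-replay semantics of the event log can be illustrated in a few lines of Python. The event shape and the integer `cursor` field here are assumptions made for the sketch, not Linchpin's actual wire format:

```python
from typing import Iterable, Iterator

# Hypothetical sketch of SSE-style replay: emit historical events recorded
# after the client's cursor, then hand off to the live stream.
def subscribe(log: list[dict], live: Iterable[dict], cursor: int) -> Iterator[dict]:
    for event in log:
        if event["cursor"] > cursor:
            yield event          # replay history the client missed
    yield from live              # then continue with live events

log = [{"cursor": 1, "type": "message"}, {"cursor": 2, "type": "tool_call"}]
live = iter([{"cursor": 3, "type": "tool_result"}])
events = list(subscribe(log, live, cursor=1))
# A client reconnecting at cursor 1 sees event 2 (replayed) then 3 (live).
```

This is what makes the UI resilient: after a disconnect, the client resubscribes with its last-seen cursor and never observes a gap or a duplicate.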

Problems Solved

  1. Pain Point: Vendor lock-in and lack of control in hosted AI agent platforms, where user prompts, data, and agent logic are processed through a third-party's infrastructure, creating privacy, security, and portability concerns.
  2. Target Audience: DevOps engineers and backend developers building internal AI agent tools; startups and enterprises requiring private, auditable agent workflows; researchers and hobbyists experimenting with agentic AI who need a production-like local environment.
  3. Use Cases: Deploying internal coding assistants that interact with private repositories; running customer support analysis agents on sensitive ticket data; orchestrating multi-step research and content generation agents with custom tools; creating desktop automation agents that require executing shell commands safely.

Unique Advantages

  1. Differentiation: Unlike hosted agent platforms (e.g., the managed cloud offerings built around LangChain and CrewAI), Linchpin is entirely self-hosted and Apache-2.0 licensed. The control plane, data (Postgres), and compute (Docker) all reside on the user's infrastructure. Prompts are sent directly from the user's runtime to their chosen model provider, with no intermediary.
  2. Key Innovation: A full-stack architecture that deploys with a single "docker compose up", combining a FastAPI backend, session orchestration, per-session Docker sandboxing, and a React management console into one coherent system. This absorbs much of the complexity of building a secure, stateful agent platform from scratch.
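As a rough sketch of what that single-file deployment might look like (service and image names below are guesses mirroring the components this page lists, not the project's actual compose file):

```yaml
# Hypothetical docker-compose.yml sketch; image names are illustrative.
services:
  api:
    image: linchpin/api
    environment:
      DATABASE_URL: postgresql://linchpin:linchpin@postgres:5432/linchpin
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"  # to spawn session sandboxes
    depends_on:
      - postgres
  connector:
    image: linchpin/connector
  console:
    image: linchpin/console
    ports:
      - "8080:80"
  postgres:
    image: postgres:16
    volumes:
      - "pgdata:/var/lib/postgresql/data"
volumes:
  pgdata:
```

Note the Docker socket mount: the control plane needs access to the host daemon in order to create per-session sandbox containers.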

Frequently Asked Questions (FAQ)

  1. Is Linchpin truly open-source and self-hostable? Yes. Linchpin is licensed under the permissive Apache-2.0 license, and its entire codebase is available on GitHub. It is designed to be deployed end-to-end on your own infrastructure using Docker Compose, giving you full ownership of the database, API keys, and compute.
  2. How does Linchpin ensure agent safety and prevent misuse? Safety is enforced through Docker container sandboxing, which isolates each agent session. Tool execution policies (always_allow, always_ask) provide granular control over permissions. Furthermore, network egress can be completely disabled at the environment level, preventing unwanted external calls.
  3. What are the infrastructure requirements to run Linchpin? The primary requirement is a Linux server or virtual machine with Docker and Docker Compose installed. The platform itself runs as a set of containers (API, Connector, Console, Postgres). Adequate resources (CPU, RAM) must be allocated for the host Docker daemon to spawn additional agent session containers concurrently.
  4. Can I use local LLMs with Linchpin? Absolutely. Linchpin has native support for Ollama, allowing you to configure agents to use any model you have pulled and run locally on the same host, enabling fully offline, private AI agent workflows.
  5. How is Linchpin different from using LangChain or LlamaIndex directly? Linchpin is a runtime and orchestration platform, not just a framework. While LangChain provides libraries to build agent logic, Linchpin provides the production backend to deploy, execute, monitor, and secure those agents at scale, including session management, sandboxing, and a streaming API.
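The permission model mentioned in the safety answer (always_allow vs. always_ask) can be sketched as a simple gate. The function and type names here are illustrative, not Linchpin's code:

```python
from enum import Enum

# Hypothetical sketch of per-tool execution policies.
class Policy(Enum):
    ALWAYS_ALLOW = "always_allow"
    ALWAYS_ASK = "always_ask"

def authorize(tool: str, policies: dict[str, Policy], user_approved: bool = False) -> bool:
    # Unknown tools default to the conservative policy.
    policy = policies.get(tool, Policy.ALWAYS_ASK)
    if policy is Policy.ALWAYS_ALLOW:
        return True
    # always_ask: execution proceeds only with explicit user approval.
    return user_approved

policies = {"read": Policy.ALWAYS_ALLOW, "bash": Policy.ALWAYS_ASK}
read_ok = authorize("read", policies)                          # allowed outright
bash_blocked = authorize("bash", policies)                     # needs approval
bash_ok = authorize("bash", policies, user_approved=True)      # approved
```

The key design point is the conservative default: a tool without an explicit policy should require approval rather than run silently.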
