shutup-mcp

Zero-config MCP proxy that hides 99% of tools

2026-04-14

Product Introduction

  1. Definition: shutup-mcp is a zero-configuration Model Context Protocol (MCP) proxy and optimization layer designed to act as an intelligent gateway between AI agents and MCP servers. It functions as middleware that filters and prioritizes tool definitions based on real-time user intent, effectively serving as a semantic router for tool execution.

  2. Core Value Proposition: The primary purpose of shutup-mcp is to solve the "tool bloat" problem in agentic workflows where Large Language Models (LLMs) are overwhelmed by excessive tool definitions. By utilizing semantic search to present only the most relevant 3-5 tools instead of thousands, it achieves a 98% reduction in token consumption and significantly improves tool selection accuracy. It enables high-performance, privacy-conscious tool management without requiring external API keys or complex manual configuration.

Main Features

  1. Multi-Server Aggregation and Discovery: shutup-mcp automatically parses the standard Claude Desktop configuration file (claude_desktop_config.json) to identify all connected MCP servers. It connects to multiple disparate servers simultaneously, such as filesystem, GitHub, or database servers, and aggregates their toolsets into a single, unified searchable index. This eliminates the need for agents to manually navigate multiple server schemas.
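For illustration, the discovery step can be sketched in a few lines of Python. This is a simplified stand-in (the `discover_servers` helper and the example config are illustrative, not shutup-mcp's actual API); it relies only on the standard config layout, where launch specs live under the top-level `"mcpServers"` key.

```python
import json
import tempfile
from pathlib import Path

def discover_servers(config_path: Path) -> dict:
    """Return the MCP servers declared in a Claude Desktop config file."""
    with config_path.open() as f:
        config = json.load(f)
    # Launch specs live under the top-level "mcpServers" key.
    return config.get("mcpServers", {})

# Demo: write a minimal two-server config to a temp file and parse it.
example = {
    "mcpServers": {
        "filesystem": {"command": "npx",
                       "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]},
        "github": {"command": "npx",
                   "args": ["-y", "@modelcontextprotocol/server-github"]},
    }
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(example, f)

servers = discover_servers(Path(f.name))
```

A proxy would then launch each entry's `command` with its `args` and merge the tools every server advertises into one index.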

  2. Intent-Based Semantic Tool Filtering: Unlike standard proxies that pass through all tool definitions, shutup-mcp intercepts the tools/list request. Using a local vector embedding engine, it compares the user's specific "intent" or task description against the metadata and descriptions of every available tool. It then dynamically subsets the tool list, returning only the top-K (defaulting to 5) most relevant functions. This process uses RAG (Retrieval-Augmented Generation) principles applied specifically to function calling schemas.
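The filtering step above amounts to ranking tool descriptions by vector similarity to the intent and keeping the top-K. The sketch below uses a toy bag-of-words "embedding" purely so the example is self-contained; shutup-mcp would use a real embedding model (e.g. all-MiniLM-L6-v2) in place of `embed`, and the function names here are illustrative.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def filter_tools(intent: str, tools: dict, k: int = 5) -> list:
    """Rank tool descriptions against the intent and keep the top-k names."""
    q = embed(intent)
    ranked = sorted(tools, key=lambda name: cosine(q, embed(tools[name])),
                    reverse=True)
    return ranked[:k]

tools = {
    "read_file": "read the contents of a file on disk",
    "create_issue": "create a new issue in a github repository",
    "run_query": "run a sql query against a database",
}
top = filter_tools("open a file and read it", tools, k=1)
```

The proxy intercepts `tools/list`, runs this ranking, and returns only the surviving subset to the model, so the agent never sees the other definitions.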

  3. Privacy-First Local Embedding Backends: The product offers two primary modes for local data processing to ensure zero-leakage of sensitive tool schemas. By default, it utilizes sentence-transformers with the all-MiniLM-L6-v2 model (approximately 80MB) for high-speed local inference. Alternatively, users can leverage Ollama integration (using models like nomic-embed-text) for a fully self-hosted, offline embedding pipeline, ensuring that tool descriptions and user intents never leave the local environment.
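A backend factory for the two modes might look like the following. This is an illustrative sketch, not shutup-mcp's actual code: the `make_embedder` name is hypothetical, the sentence-transformers call and Ollama's local `/api/embeddings` endpoint are real interfaces, and imports are deferred so each dependency is only required when its backend is selected.

```python
import json
import urllib.request

def make_embedder(backend: str = "sentence-transformers"):
    """Return a text -> vector function for the chosen local backend (sketch)."""
    if backend == "sentence-transformers":
        def embed(text: str) -> list:
            # Lazy import: only needed when this backend is actually used.
            from sentence_transformers import SentenceTransformer
            model = SentenceTransformer("all-MiniLM-L6-v2")  # ~80 MB, runs locally
            return model.encode([text])[0].tolist()
    elif backend == "ollama":
        def embed(text: str) -> list:
            # Ollama's local embeddings endpoint; nothing leaves the machine.
            req = urllib.request.Request(
                "http://localhost:11434/api/embeddings",
                data=json.dumps({"model": "nomic-embed-text",
                                 "prompt": text}).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["embedding"]
    else:
        raise ValueError(f"unknown backend: {backend}")
    return embed
```

Either way, both tool schemas and user intents are embedded on the local machine, so no third-party embedding API ever sees them.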

  4. Dynamic Config Monitoring and Hot-Reloading: The proxy includes a file-watching mechanism that monitors the Claude Desktop configuration. When a user adds a new MCP server or modifies arguments in their JSON config, shutup-mcp automatically detects the change, reconnects to the servers, and rebuilds its internal vector index in real-time without requiring a manual restart of the agent session.
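The watch-and-reload loop can be approximated with simple mtime polling. This is a minimal sketch (the `ConfigWatcher` class is hypothetical; a production proxy might use OS-level file events instead), but it captures the behavior: detect a config edit, then trigger reconnection and index rebuilding.

```python
import os
import tempfile
from pathlib import Path
from typing import Callable

class ConfigWatcher:
    """Poll a config file's mtime and fire a callback when it changes (sketch)."""

    def __init__(self, path: Path, on_change: Callable[[], None]):
        self.path = path
        self.on_change = on_change
        self._mtime = path.stat().st_mtime if path.exists() else None

    def check(self) -> bool:
        """Return True (and run the callback) if the file changed since last check."""
        mtime = self.path.stat().st_mtime if self.path.exists() else None
        if mtime != self._mtime:
            self._mtime = mtime
            self.on_change()  # e.g. reconnect servers, rebuild the vector index
            return True
        return False

# Demo: watch a temp file and detect a simulated edit.
tmp = Path(tempfile.mkstemp(suffix=".json")[1])
events = []
watcher = ConfigWatcher(tmp, lambda: events.append("reload"))
unchanged = watcher.check()   # nothing edited yet
os.utime(tmp, (0, 0))         # simulate a config edit by bumping the mtime
changed = watcher.check()
```

Calling `check()` on a timer (or replacing it with inotify/FSEvents) gives the hot-reload behavior described above without restarting the agent session.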

Problems Solved

  1. LLM Context Window Exhaustion: Modern MCP ecosystems can expose thousands of tools, where every tool definition consumes precious context tokens. shutup-mcp addresses the "lost in the middle" phenomenon and context overflow by stripping away 99% of irrelevant noise, allowing the agent to operate within a much smaller, more efficient context window.

  2. Poor Tool Selection Accuracy: Benchmarks show that when faced with a vast array of tools, leading LLMs often fail to select the correct function. By narrowing the search space to only the most relevant candidates, shutup-mcp increases tool selection accuracy by over 2x, reducing "hallucinated" tool calls and execution errors.

  3. Target Audience: This tool is essential for AI Engineers building complex agentic systems, Power Users of Claude Desktop who utilize extensive MCP libraries, and Developers working in privacy-sensitive environments where sending tool metadata to third-party embedding APIs is prohibited.

  4. Use Cases: Ideal for scenarios involving massive filesystem structures, large-scale GitHub repository management, or cross-platform automation where an agent needs to switch context between terminal commands, database queries, and web searches without being confused by the sheer volume of available parameters.

Unique Advantages

  1. Zero-Config Implementation: shutup-mcp requires no modification to existing MCP server code or Claude Desktop settings. It wraps around the existing infrastructure, making it a "drop-in" performance booster for any MCP-compatible workflow.

  2. Extreme Efficiency Gains: The product delivers documented performance metrics including an 85% reduction in response latency and a 98% reduction in token overhead. This not only makes agents faster but significantly lowers the cost of using proprietary models like Claude 3.5 Sonnet or GPT-4o.

  3. Standalone Portability: Built in Python with minimal dependencies, it provides a lightweight CLI interface that can be easily integrated into shell scripts or automated development environments.

Frequently Asked Questions (FAQ)

  1. How does shutup-mcp improve Claude Desktop performance? shutup-mcp sits between Claude and your MCP servers. When Claude asks "what tools do you have?", the proxy looks at your current task, searches your local tool index, and only tells Claude about the 5 tools that actually help with that task. This prevents the "Too Many Tools" error and makes Claude respond much faster because it has less text to read.

  2. Does using shutup-mcp require an OpenAI API key for embeddings? No. shutup-mcp is designed for local privacy. It uses sentence-transformers or Ollama to create vector embeddings on your own machine. This means your tool names, descriptions, and task intents are processed locally and are never sent to a third-party embedding provider.

  3. Can I use shutup-mcp with multiple MCP servers at once? Yes. It is designed specifically for multi-server aggregation. It reads your entire claude_desktop_config.json, connects to every server listed (like Google Drive, Slack, and Filesystem), and creates a single searchable pool of tools from all of them combined.
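As a concrete (illustrative) example, a claude_desktop_config.json declaring several servers might look like this; the server and package names below are examples, not requirements:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/Documents"]
    },
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"]
    },
    "gdrive": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-gdrive"]
    }
  }
}
```

shutup-mcp reads every entry under "mcpServers", connects to each one, and indexes all of their tools together.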
