Product Introduction
Definition: WUPHF by Nex.ai is an open-source, local-first multi-agent orchestration framework designed to simulate a collaborative office environment. It functions as a decentralized autonomous workspace where specialized AI agents—such as CEO, Engineering, Design, and Marketing—interact within shared channels to execute complex projects. Technically, it is a Go-based runtime that manages agent lifecycles, state persistence via local SQLite, and tool-calling execution through a terminal-based or web-integrated interface.
Core Value Proposition: WUPHF eliminates the "routing layer" burden where human users must manually prompt and hand off context between different AI tools. By utilizing a "shared brain" architecture and persistent knowledge graphs, WUPHF allows users to drop a single high-level goal into a channel and walk away while the agents self-organize, resolve dependencies, and ship deliverables. Its primary appeal lies in its local execution, MIT-licensed transparency, and the removal of per-seat SaaS pricing models.
Main Features
Autonomous Multi-Agent Coordination: Unlike standard prompt chains, WUPHF agents operate based on role-specific JSON configurations that define their system prompts and toolsets. Coordination is emergent; agents use @mentions in a shared channel (similar to Slack) to request assets, flag blockers, or trigger peer reviews. The "CEO Agent" acts as a natural language router, decomposing goals and assigning tasks to specialized agents like ENG (Engineering) or DSG (Design) based on the project’s real-time needs.
Shared Brain and Persistent Knowledge Base: Every office instance maintains a localized knowledge base to prevent context drift. This is powered by a dual-layer memory system: private notebooks for individual agent focus and a shared workspace wiki for team-wide conclusions. Built on knowledge graph architectures like Garry Tan's GBrain or Nex, these systems ensure that day-to-day decisions, such as PR numbers, specific naming conventions, and previous blockers, are remembered across sessions without accumulating massive, expensive token windows.
Extensible Tool Integration and "Real" Execution: WUPHF agents are equipped with functional tools rather than just text generation capabilities. Through the bash tool and GitHub CLI (gh), agents can perform real-world operations such as cloning repositories, opening Pull Requests, reading local files, and running grep commands. While some integrations like Figma are currently natural-language placeholders, the system allows developers to fork JSON configs and wire in internal APIs, creating a custom "founding team" pack in a matter of hours.
Multi-LLM Runtime Support: The platform is provider-agnostic, allowing an office to run a hybrid workforce. Users can mix and match models within a single channel—for example, running a PM agent on Claude Opus for synthesis while the ENG agent utilizes Codex or a local LLM via OpenCode and OpenClaw. This flexibility optimizes cost and performance by matching the specific task complexity to the most capable model.
Problems Solved
Agent Routing Fatigue: Most AI workflows require a human to act as the "middleman," copying output from one agent to feed into another. WUPHF solves this by allowing agents to talk to each other directly in threads, reducing the human role to high-level oversight and final approval ("LGTM").
Context Loss in Long-Running Projects: Standard LLM chats lose context as the session length increases. WUPHF's use of local SQLite state (~/.wuphf/state) and knowledge graphs ensures that the "AI employees" maintain a consistent understanding of the codebase and project history over weeks or months.
Target Audience: The product is designed for Technical Founders, Software Engineers (particularly those already using Claude Code or Codex), Product Managers, and DevOps teams who want to automate the "scaffolding" of project management. It also serves privacy-conscious enterprises that require local execution of AI agents to keep proprietary data off third-party cloud servers.

Use Cases: Essential for rapid prototyping, managing "vision sprints," automated PR management, cross-functional documentation updates (CMO agent updating READMEs alongside ENG code changes), and maintaining open-source repositories where repetitive coordination is required.
Unique Advantages
Differentiation: Traditional AI agents are often "prompts in costumes" that require constant hand-holding. WUPHF differentiates itself by treating the coordination between agents as the product itself. It moves away from the "chat-with-a-bot" UI toward a "watch-a-team" TUI/Web experience. Its commitment to a local-first, zero-telemetry architecture contrasts sharply with the subscription-heavy, cloud-dependent models of competitors.
Key Innovation: The "Knowledge Promotion" logic is a significant technical shift. When a conclusion in an agent’s private notebook is validated or frequently referenced, it is promoted to the team's shared wiki. This simulates human organizational learning, where individual insights become institutional knowledge, resulting in 7x fewer tokens per session compared to re-pasting context into every prompt.
Frequently Asked Questions (FAQ)
Is WUPHF truly open-source and free to run? Yes. WUPHF is MIT-licensed and designed to run on your local machine. There are no per-seat licenses or cloud usage fees. You only pay for the LLM API tokens you use (e.g., Anthropic or OpenAI), or zero if you use local models via OpenCode.
How does WUPHF handle agent loops or getting stuck? Every agent execution has a predefined step budget and timeout. If an agent fails to resolve a task or enters a logical loop, it triggers an escalation to the #general channel, providing a full "Receipts" panel and tool-call trace so the human user can diagnose and provide a manual correction.
Can I customize the AI agents to use my company's specific tools? Absolutely. Every agent is defined by a readable JSON configuration file. You can fork the default "founding-team" pack and modify the tool list to include your internal CLI tools, database connectors, or specific API endpoints, allowing for a tailored AI workforce in a single afternoon.
Does my data leave my machine when using WUPHF? The WUPHF runtime and project state are stored locally in ~/.wuphf. Data only leaves your machine to reach the LLM provider you have configured for inference. If you configure WUPHF to use a local LLM runner (such as Ollama or OpenCode), no data leaves your local environment, ensuring total privacy and security.