Product Introduction
- Definition: ManePaw is a native macOS application leveraging local artificial intelligence (AI) for private document retrieval and multimodal chat. It operates as an offline RAG (Retrieval-Augmented Generation) system, processing text, code, images, and audio entirely on-device.
- Core Value Proposition: It eliminates cloud dependency for AI-powered knowledge management, ensuring zero data exposure, no mandatory accounts, and complete user privacy while enabling semantic search and contextual chat with personal files.
Main Features
Local AI Processing:
- How it works: Integrates Ollama to run open-source LLMs (e.g., qwen2.5) locally. All AI computations (embedding generation, inference, and transcription) execute on the user’s Mac hardware.
- Technologies: SwiftUI for native UI, Metal for GPU acceleration, Ollama for model orchestration.
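From the NestJS backend, a local Ollama call might look like the sketch below. Ollama's REST API listens on `localhost:11434` by default and exposes a `/api/generate` endpoint; the helper names and the default model are illustrative assumptions, not ManePaw's actual code.

```typescript
// Shape of the JSON body Ollama's /api/generate endpoint expects.
interface GenerateRequest {
  model: string;
  prompt: string;
  stream: boolean;
}

// Build the request payload (stream: false asks for a single JSON reply).
function buildGenerateRequest(model: string, prompt: string): GenerateRequest {
  return { model, prompt, stream: false };
}

// Hypothetical helper: the request never leaves localhost, so all
// inference stays on-device.
async function generate(prompt: string, model = "qwen2.5"): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildGenerateRequest(model, prompt)),
  });
  // Ollama returns the completion under the "response" key.
  const data = (await res.json()) as { response: string };
  return data.response;
}
```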
Multimodal Indexing:
- How it works: Automatically ingests and indexes diverse file types:
- Text/Code: Chunks documents (PDF, Markdown, code), generates semantic embeddings, and stores them in LanceDB.
- Images: Generates AI captions via local vision models.
- Audio: Transcribes speech to text using offline ASR (Automatic Speech Recognition).
- Supported Formats: .txt, .md, .py, .js, .png, .jpg, .mp3, .wav, and 20+ others.
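The text-chunking step above can be sketched as fixed-size windows with overlap, so adjacent chunks share context. The chunk size and overlap values here are illustrative defaults; ManePaw's actual parameters are not documented in this overview.

```typescript
// Split text into overlapping chunks for embedding. Adjacent chunks share
// `overlap` characters so no sentence is cut off without context.
function chunkText(text: string, chunkSize = 400, overlap = 50): string[] {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks: string[] = [];
  // Slide a window across the text, stepping by (chunkSize - overlap).
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // final chunk reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded and written to LanceDB alongside its source path, so search results can cite the original file.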
Project-Aware Semantic Search:
- How it works: Detects codebases via manifest files (package.json, Cargo.toml) and indexes function/class signatures. Uses vector similarity search in LanceDB to retrieve results by contextual meaning, not just keywords.
- Search Scope: Cross-file queries (e.g., "Find authentication middleware in my Node.js projects").
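Manifest-based project detection can be sketched as a simple lookup from well-known manifest filenames to project types. Only package.json and Cargo.toml are named in the source; the other entries are assumptions marked as such.

```typescript
// Map well-known manifest filenames to project types.
const MANIFESTS: Record<string, string> = {
  "package.json": "Node.js",
  "Cargo.toml": "Rust",
  "pyproject.toml": "Python", // assumption: not confirmed by the docs
  "go.mod": "Go",             // assumption: not confirmed by the docs
};

// Given the filenames at a directory root, report the detected project
// type, or null if no manifest is present.
function detectProjectType(files: string[]): string | null {
  for (const f of files) {
    if (f in MANIFESTS) return MANIFESTS[f];
  }
  return null;
}
```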
RAG-Powered Chat:
- How it works: For each query, retrieves relevant document snippets using semantic search, then feeds context to the local LLM to generate sourced responses. Citations link to original files.
- Use Case: Ask "Summarize my Rust API documentation" to get AI summaries with linked source files.
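The retrieve-then-generate step above can be illustrated as follows: rank stored chunks by cosine similarity to the query embedding, then assemble a prompt that cites each snippet's source file. In ManePaw the ranking happens inside LanceDB; this in-memory version only shows the idea, and all names are illustrative.

```typescript
// A stored chunk: its source file, text, and embedding vector.
interface Chunk {
  source: string;
  text: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Pick the k chunks most similar to the query embedding and format them
// as numbered, source-cited context for the local LLM.
function buildPrompt(query: string, queryEmb: number[], chunks: Chunk[], k = 3): string {
  const top = [...chunks]
    .sort((x, y) => cosine(queryEmb, y.embedding) - cosine(queryEmb, x.embedding))
    .slice(0, k);
  const context = top
    .map((c, i) => `[${i + 1}] (${c.source}) ${c.text}`)
    .join("\n");
  return `Answer using only the context below. Cite sources by number.\n\n${context}\n\nQuestion: ${query}`;
}
```

The citation markers in the prompt are what lets the answer link back to original files.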
Problems Solved
- Pain Point: Sensitive data exposure in cloud-based AI tools (e.g., ChatGPT, Gemini). ManePaw ensures confidential documents, proprietary code, or private media never leave the device.
- Target Audience:
- Developers needing private codebase Q&A.
- Researchers handling confidential data.
- Privacy-conscious professionals (legal, healthcare) requiring offline document analysis.
- Use Cases:
- Auditing code for security flaws without uploading to third parties.
- Searching meeting recordings via transcribed audio.
- Querying internal wikis on air-gapped networks.
Unique Advantages
- Differentiation vs. Competitors:
| Feature | ManePaw | Cloud Tools (e.g., Dropbox AI) |
| --- | --- | --- |
| Data Location | On-device | Remote servers |
| Internet Requirement | None | Mandatory |
| Pricing | One-time purchase (free) | Subscription-based |

- Key Innovation: Hybrid native-local architecture (SwiftUI frontend + NestJS backend + LanceDB) enables complex RAG workflows entirely offline. Native macOS APIs provide file system integration and Metal-accelerated AI.
Frequently Asked Questions (FAQ)
Is ManePaw truly private?
Yes. All data processing occurs locally: no telemetry, cloud uploads, or external servers. Files are stored in ~/Library/Application Support/ManePaw.
What file types can ManePaw search?
It supports text (.txt, .md), code (.js, .py, .rs), images (.png, .jpg), and audio (.mp3, .wav). See the README for the full list.
Which macOS versions are supported?
Requires macOS 14 (Sonoma) or later due to SwiftUI 5 and Metal 3 dependencies for on-device AI.
How do I use custom Ollama models with ManePaw?
Pull any Ollama-supported model (e.g., llama3, mistral), then configure the backend config.json to point to your local model.
Can ManePaw index entire code repositories?
Yes. It auto-detects projects via package.json (Node.js), Cargo.toml (Rust), and other manifests, indexing code structure for semantic queries like "Find database schema handlers."
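As a setup fragment, pulling a custom model for the custom-models answer above might look like this; the model name is an example, and the config.json key ManePaw reads is not documented here.

```shell
# Download a model into the local Ollama library (requires Ollama installed).
ollama pull llama3

# Confirm the model is available locally.
ollama list
```

Once the model appears in `ollama list`, reference its name in the backend's config.json.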
