GetProfile

User profiles and long-term memory for your AI agents

2025-12-21

Product Introduction

  1. Definition: GetProfile is an open-source, self-hosted middleware for AI agents that creates structured user profiles and long-term memory systems. It operates as an OpenAI-compatible proxy, injecting contextual user data into LLM prompts (illustrated in the sketch after this list).
  2. Core Value Proposition: It solves the "context blindness" of generic AI agents by transforming raw interactions into PostgreSQL-stored profiles with natural language summaries, confidence-scored traits, and importance-ranked memories—enabling personalized, context-aware responses without compromising data ownership.
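
To make the injection step concrete, here is a minimal sketch (in TypeScript) of what an enriched request could look like once GetProfile prepends profile context as a system message. The message wording and layout are illustrative assumptions, not the project's documented output format.

```typescript
// Hypothetical illustration of GetProfile's context injection.
// The client sends only the user's message; the proxy prepends a system
// message built from the stored profile before forwarding upstream.
// The exact wording and format of the injected message are assumptions.

const originalRequest = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "How should I structure my new service?" }],
};

const injectedContext = [
  "User summary: Experienced Python engineer exploring distributed systems.",
  "Trait - communication_style: technical (confidence 0.9)",
  "Memory - Uses Kubernetes at work (importance 0.7)",
].join("\n");

// What the upstream provider (e.g., OpenAI) actually receives:
const enrichedRequest = {
  ...originalRequest,
  messages: [
    { role: "system", content: injectedContext },
    ...originalRequest.messages,
  ],
};

console.log(enrichedRequest.messages);
```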

Main Features

  1. OpenAI-Compatible Proxy Gateway:
    • Routes LLM requests through a lightweight Hono-based proxy, injecting structured user context (summary/traits/memories) as system messages.
    • Maintains low latency by forwarding enriched requests to upstream providers (e.g., OpenAI) and asynchronously updating profiles post-response.
  2. Structured PostgreSQL Profiles:
    • Stores user data as JSON with three core components (modeled in the sketch after this feature list):
      • Natural Language Summary: AI-generated user synopsis (e.g., "Experienced Python engineer exploring distributed systems").
      • Typed Traits: Customizable attributes (e.g., communication_style: technical), each carrying a confidence score (0–1 scale) that reflects extraction certainty.
      • Memory System: Event/fact-based memories ranked by importance (e.g., "Uses Kubernetes at work" @ 0.7 importance).
  3. Customizable Trait Engine:
    • Allows defining trait schemas via JSON configuration, including:
      • valueType (enum/text/number), confidenceThreshold, and extraction rules.
      • Injection templates (e.g., "User prefers {{value}} communication") with priority weighting.
  4. Self-Hosted Docker Deployment:
    • Deploys via Docker Compose with minimal dependencies—PostgreSQL for storage and Hono for the proxy layer. Scales horizontally for enterprise workloads.
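
The sketch below models the profile structure and trait configuration described above as TypeScript types plus a sample trait definition. valueType, confidenceThreshold, the injection template, and priority mirror the options named in the feature list; everything else (field names such as allowedValues, the exact shapes) is an assumption, not GetProfile's actual schema.

```typescript
// Illustrative shapes for the three profile components and a custom trait
// definition. These are assumptions modeled on the feature list above,
// not the project's actual PostgreSQL/JSON schema.

interface Trait {
  name: string;            // e.g., "communication_style"
  value: string | number;  // e.g., "technical"
  confidence: number;      // 0-1 extraction confidence
}

interface Memory {
  content: string;         // e.g., "Uses Kubernetes at work"
  importance: number;      // 0.1-1.0; higher values are injected first
}

interface UserProfile {
  userId: string;
  summary: string;         // natural-language synopsis of the user
  traits: Trait[];
  memories: Memory[];
}

// Hypothetical custom trait definition, as it might appear in the JSON config.
const preferredFrameworkTrait = {
  name: "preferred_framework",
  valueType: "enum",                              // enum | text | number
  allowedValues: ["react", "vue", "svelte"],      // assumed field name
  confidenceThreshold: 0.7,                       // ignore low-certainty extractions
  injectionTemplate: "User prefers {{value}} for frontend work",
  priority: 10,                                   // weighting during prompt injection
};
```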

Problems Solved

  1. Pain Point: Eliminates inefficient context handling in AI agents (e.g., bloated prompts, irrelevant memory snippets) that degrade response quality and increase token costs.
  2. Target Audience:
    • AI developers building personalized chatbots/coding assistants.
    • Product teams requiring user-specific context in SaaS applications.
    • Privacy-focused enterprises needing GDPR-compliant memory storage.
  3. Use Cases:
    • Coding assistants recalling a developer’s preferred frameworks (e.g., "Prefers async/await over callbacks").
    • Customer support bots referencing past ticket resolutions.
    • Healthcare AI tracking patient interaction histories securely.

Unique Advantages

  1. Differentiation: Unlike text-blob memory systems (e.g., vector databases), GetProfile uses AI-driven extraction to distill interactions into structured, query-optimized profiles, reducing prompt noise by 40–60%.
  2. Key Innovation: Confidence-scored trait extraction ensures that only high-certainty attributes influence responses, while importance-ranked memories prevent context-window pollution (see the sketch after this list).
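
A minimal sketch of how these two filters could combine when assembling injected context, assuming a configurable confidence threshold and a cap on the number of memories; the function name and defaults are hypothetical rather than GetProfile's actual implementation.

```typescript
// Illustrative only: gate traits by confidence and rank memories by
// importance before they reach the prompt.

interface Trait { name: string; value: string; confidence: number; }
interface Memory { content: string; importance: number; }

function selectContext(
  traits: Trait[],
  memories: Memory[],
  confidenceThreshold = 0.7, // assumed default
  maxMemories = 5            // assumed cap to protect the context window
): string {
  const trustedTraits = traits
    .filter((t) => t.confidence >= confidenceThreshold)
    .map((t) => `${t.name}: ${t.value}`);

  const topMemories = memories
    .slice()
    .sort((a, b) => b.importance - a.importance)
    .slice(0, maxMemories)
    .map((m) => m.content);

  return [...trustedTraits, ...topMemories].join("\n");
}

// Only the high-confidence trait passes; memories come back in importance order.
console.log(selectContext(
  [
    { name: "communication_style", value: "technical", confidence: 0.9 },
    { name: "favorite_color", value: "blue", confidence: 0.3 },
  ],
  [
    { content: "Asked about Hono middleware once", importance: 0.2 },
    { content: "Uses Kubernetes at work", importance: 0.7 },
  ],
));
```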

Frequently Asked Questions (FAQ)

  1. How does GetProfile integrate with existing AI agents?
    It acts as a drop-in proxy for OpenAI API calls: point your client at the GetProfile endpoint and add the X-GetProfile-Id and X-Upstream-Key headers; no other changes to your application code are required (see the sketch after the FAQ).
  2. Can I customize what user data GetProfile stores?
    Yes, define custom traits (e.g., preferred_framework, subscription_tier) via JSON schema, controlling extraction rules and injection formats.
  3. Is GetProfile suitable for high-traffic production environments?
    Yes. Its Docker/PostgreSQL stack scales horizontally, and because profile updates happen asynchronously after the response, request enrichment adds under 100 ms of latency on the request path.
  4. How does memory importance scoring work?
    Memories are assigned 0.1–1.0 importance scores during extraction, with higher values prioritized for prompt injection. Scores adjust dynamically based on recency and usage patterns.
  5. What makes GetProfile more private than cloud-based alternatives?
    As self-hosted OSS, all data remains in your infrastructure—PostgreSQL databases never leave your VPC, and the Apache 2.0 license allows full code audits.
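
To illustrate the drop-in integration from FAQ 1, the sketch below points the official OpenAI Node SDK at a self-hosted GetProfile instance. The base URL and placeholder API key are assumptions about a local deployment; the header names come from the FAQ above, and the idea that X-Upstream-Key carries your real provider key is an assumption noted in the comments.

```typescript
import OpenAI from "openai";

// Point the standard OpenAI client at the GetProfile proxy instead of
// api.openai.com. The base URL assumes a local deployment; adjust it to
// wherever your GetProfile container is reachable.
const client = new OpenAI({
  apiKey: "proxy-placeholder",          // assumption: the proxy itself may not check this
  baseURL: "http://localhost:3000/v1",  // assumed self-hosted GetProfile endpoint
  defaultHeaders: {
    "X-GetProfile-Id": "user-1234",                     // which profile to enrich and update
    "X-Upstream-Key": process.env.OPENAI_API_KEY ?? "", // assumption: real provider key, forwarded upstream
  },
});

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Remind me which test framework I prefer." }],
});

console.log(completion.choices[0].message.content);
```

Because the proxy speaks the OpenAI wire format, the same pattern should carry over to any OpenAI-compatible client library.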
