Product Introduction
- Custom Dashboards in OpenLIT provide a drag-and-drop Dashboard Builder designed for full-control observability of Large Language Models (LLMs) and Generative AI applications. The feature integrates natively with OpenTelemetry and supports self-hosting, enabling users to create vendor-neutral monitoring views for tracking metrics like cost, accuracy, and performance. Dashboards are fully customizable and can be imported or exported as JSON for seamless collaboration and reproducibility.
- The core value lies in its ability to unify fragmented observability data into tailored visualizations, empowering teams to monitor multi-provider LLM workflows without vendor lock-in. By combining OpenTelemetry-native instrumentation with flexible dashboard design, it simplifies debugging, cost optimization, and performance analysis for AI-driven applications.
Main Features
- Drag-and-Drop Interface: Users construct dashboards without coding by dragging pre-built widgets for metrics like token usage, latency percentiles, and error rates onto an interactive canvas. Widgets automatically connect to OpenTelemetry-collected data from the supported SDKs (Python/TypeScript).
- Vendor-Neutral Cost Tracking: Aggregates spending across LLM providers (e.g., OpenAI, Anthropic) into unified visualizations, displaying cost-per-request comparisons and budget alerts. Supports custom currency conversion rates for hybrid cloud deployments.
- JSON-Based Portability: Dashboards are stored as version-controlled JSON files, enabling one-click exports for team sharing or imports to replicate monitoring setups across environments. JSON schemas include metadata for widget positioning and data source mappings.
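The export/import round-trip described above can be sketched in a few lines. Note that the field names (`title`, `widgets`, `position`) are illustrative assumptions for this sketch, not OpenLIT's documented export schema:

```python
import json

# Illustrative dashboard layout -- the field names here are hypothetical,
# not OpenLIT's actual JSON export schema.
dashboard = {
    "title": "LLM Cost Overview",
    "widgets": [
        {
            "type": "timeseries",
            "metric": "gen_ai.usage.total_tokens",
            "position": {"x": 0, "y": 0, "w": 6, "h": 3},
        },
        {
            "type": "stat",
            "metric": "gen_ai.usage.cost",
            "position": {"x": 6, "y": 0, "w": 3, "h": 3},
        },
    ],
}

def export_dashboard(dash: dict) -> str:
    """Serialize a dashboard to a shareable, diff-friendly JSON string."""
    return json.dumps(dash, indent=2, sort_keys=True)

def import_dashboard(payload: str) -> dict:
    """Parse a JSON export, checking the minimal expected shape."""
    dash = json.loads(payload)
    if "widgets" not in dash:
        raise ValueError("not a dashboard export: missing 'widgets'")
    return dash

exported = export_dashboard(dashboard)
restored = import_dashboard(exported)
assert restored == dashboard  # round-trip preserves the full layout
```

Because the export is deterministic (sorted keys, stable indentation), it diffs cleanly under version control, which is what makes team sharing and environment replication practical.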
Problems Solved
- Fragmented Observability: Addresses the challenge of correlating metrics across disjointed tools by centralizing traces, spans, and cost data from multiple LLM providers into a single pane.
- Target Users: AI engineers, MLOps teams, and developers building GenAI applications who require granular visibility into model performance and infrastructure costs.
- Use Cases: Comparing response accuracy between fine-tuned and base LLM models, auditing API spending per project team, or debugging latency spikes in RAG pipelines using trace-to-dashboard mappings.
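The spend-auditing use case boils down to grouping cost-annotated spans by a team attribute and summing. This sketch uses hypothetical span records and attribute names (`team`, `cost_usd`); the exact attributes OpenLIT emits may differ:

```python
from collections import defaultdict

# Hypothetical span records with cost attributes -- attribute names are
# assumptions for this sketch, not exact OpenTelemetry conventions.
spans = [
    {"team": "search", "provider": "openai", "cost_usd": 0.012},
    {"team": "search", "provider": "anthropic", "cost_usd": 0.020},
    {"team": "support", "provider": "openai", "cost_usd": 0.005},
]

def spend_by_team(records):
    """Aggregate per-request cost into a per-team total."""
    totals = defaultdict(float)
    for span in records:
        totals[span["team"]] += span["cost_usd"]
    return dict(totals)

totals = spend_by_team(spans)
```

A dashboard widget doing this per project team turns raw traces into a direct chargeback view across providers.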
Unique Advantages
- OpenTelemetry-Native Design: Unlike bolt-on solutions, OpenLIT auto-instruments LLM calls via OpenTelemetry without requiring manual span creation, ensuring compatibility with existing Prometheus/Grafana or Datadog pipelines.
- Self-Hosted Deployment: Provides air-gapped installation via Docker Compose, avoiding cloud service dependencies while retaining integration with SaaS observability platforms like Grafana Cloud.
- Unified Secret Management: Combines dashboarding with Vault integration, allowing environment variables (e.g., API keys) to be securely injected into visualizations without exposing credentials in JSON exports.
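The secret-injection idea above can be illustrated with placeholder substitution: the shared JSON export keeps an opaque `${VAR}` token, and the actual credential is resolved from the environment only at render time. The placeholder syntax is an assumption for this sketch, not OpenLIT's documented format:

```python
import os
import re

# The export stores "${LLM_API_KEY}", never the key itself; the value is
# pulled from the environment when the dashboard is rendered.
PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def resolve_secrets(value: str) -> str:
    """Replace ${NAME} placeholders with environment values at render time."""
    def substitute(match):
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"secret {name!r} not set in environment")
        return os.environ[name]
    return PLACEHOLDER.sub(substitute, value)

os.environ["LLM_API_KEY"] = "sk-test"  # injected by Vault/env in practice
datasource_url = "https://api.example.com/v1?key=${LLM_API_KEY}"
resolved = resolve_secrets(datasource_url)
```

The key property is that `datasource_url`, the form that lands in a JSON export, never contains the credential; only the in-memory `resolved` value does.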
Frequently Asked Questions (FAQ)
- How does OpenLIT integrate with existing OpenTelemetry setups? OpenLIT acts as an OpenTelemetry collector, ingesting spans from any OTLP-compatible source and enriching them with LLM-specific attributes like model names and token counts.
- Can I use Custom Dashboards without self-hosting? Yes, OpenLIT offers a managed cloud version with encrypted data storage, though self-hosting is recommended for environments with strict compliance requirements.
- What LLM providers are supported for cost tracking? The tool natively tracks costs for OpenAI, Anthropic, Cohere, and open-source models running on Hugging Face or custom endpoints, with extensible plugins for new providers.
- Is GPU performance monitoring included? Yes, dashboards can display GPU utilization metrics when integrated with NVIDIA DCGM or PyTorch Profiler traces via OpenTelemetry.
- How are access controls handled for shared dashboards? Role-based permissions are enforced through OpenLIT’s JWT integration, with JSON exports optionally encrypted using AES-256 for secure distribution.
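A role check against a signed JWT, as in the last answer, reduces to verifying the token's HMAC signature and inspecting a role claim. This is a minimal stdlib sketch of HS256 verification; the claim name (`role`), the role set, and the shared secret are assumptions for illustration, and production code should use a vetted JWT library:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(claims: dict, secret: bytes) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def can_edit_dashboard(token: str, secret: bytes) -> bool:
    """Verify the signature, then check a hypothetical 'role' claim."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or signed with a different secret
    padded = payload + "=" * (-len(payload) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(padded))
    return claims.get("role") in {"admin", "editor"}

secret = b"dev-only-secret"
token = sign_token({"sub": "alice", "role": "editor"}, secret)
print(can_edit_dashboard(token, secret))  # True
```

Because the role lives inside the signed payload, a viewer cannot promote themselves to editor without invalidating the signature.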
