Product Introduction
- Definition: Tellme is a browser-based, computer-vision-powered conversational AI platform for cultural and visitor attractions. Technically, it spans the categories of Generative AI for Enterprise, On-Device AI, and Augmented Reality (AR) interpretation.
- Core Value Proposition: Tellme exists to transform static physical exhibits into dynamic, interactive learning experiences by providing governed AI interpretation directly through a visitor's smartphone browser, eliminating the need for app downloads, physical audio guides, or manual lookups. Its core value is delivering personalised visitor engagement while maintaining strict institutional content control.
Main Features
- Visual Object Recognition: The platform uses on-device computer vision to identify exhibits in real time through the smartphone camera. Visual input is processed locally and matched against a model pre-trained on the venue's collection, enabling instant, code-free recognition that does not depend on a constant internet connection for the initial identification step.
- Governed Conversational AI & Knowledge Retrieval (RAG): After recognition, the system uses a Retrieval-Augmented Generation (RAG) architecture. It grounds all AI-generated responses strictly within the institution's approved knowledge base, which can include catalogues, signage text, and curated narratives. This ensures answers are accurate, on-brand, and "defensible for curators."
- Browser-First, No-App Experience: The entire interactive guide functions within a mobile web browser. Visitors access it via a QR code or direct link, bypassing app store downloads. This technical approach significantly reduces friction for visitors and simplifies IT deployment and maintenance for the venue.
- Institutional Governance Dashboard: Venue operators have a centralized content management system to upload, review, and approve all source material. The dashboard allows control over AI parameters like tone of voice, reading level, safety guardrails, and accessibility features (e.g., enabling audio narration or translation) before any content goes live to the public.
- Analytics & Visitor Insight Tools: The platform provides anonymized, aggregate data on visitor interactions. Operators gain operational insight into which exhibits hold the most attention, where visitor journeys commonly stop, and what questions are asked, enabling data-driven decisions for programming, gallery planning, and funding reports.
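The grounded-generation idea behind the RAG feature can be sketched in a few lines. Everything below — the knowledge base, the passage IDs, and the word-overlap retrieval — is a hypothetical illustration, not Tellme's implementation; the point is the governance pattern: answers draw only on curator-approved passages, and the system declines rather than improvises when nothing relevant is retrieved.

```python
# Illustrative sketch of governed retrieval-augmented answering.
# All data and names here are invented for demonstration.

from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str   # e.g. a catalogue entry or signage reference
    text: str

APPROVED_KB = [
    Passage("CAT-041", "This Samian ware bowl was used for serving food in Roman households."),
    Passage("SIGN-12", "Samian ware was mass-produced in Gaul and exported across the empire."),
]

def retrieve(question: str, kb: list[Passage], k: int = 2) -> list[Passage]:
    """Rank approved passages by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.text.lower().split())), p) for p in kb]
    scored = [(s, p) for s, p in scored if s > 0]
    scored.sort(key=lambda sp: sp[0], reverse=True)
    return [p for _, p in scored[:k]]

def answer(question: str, kb: list[Passage]) -> str:
    hits = retrieve(question, kb)
    if not hits:
        # Governance guardrail: no approved source, no answer.
        return "I can only answer from this venue's approved materials."
    # In a real system the retrieved passages would be handed to an LLM with
    # instructions to answer strictly from them; here we simply cite them.
    return " ".join(f"{p.text} [{p.source_id}]" for p in hits)
```

A production retriever would use semantic embeddings rather than word overlap, but the refusal branch is the essential governance step: it is what makes every output traceable to an approved source.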
Problems Solved
- Pain Point: Static Interpretation & Limited Engagement. Traditional plaques, audio guides, and fixed tours offer a one-way, linear experience that fails to cater to diverse visitor curiosities, learning paces, and accessibility needs.
- Target Audience: Museum Curators & Interpretation Teams, Heritage Site Managers, Digital Experience Directors, and Board Trustees at museums, art galleries, science centres, zoos, aquariums, and historic houses.
- Use Cases:
- A visitor at a Roman history museum can point their phone at a piece of pottery, ask, "What was this used for in daily life?", and receive an answer sourced from the museum's approved archaeological notes.
- A science centre can offer "Facts Mode" for younger audiences, providing scannable, bite-sized information about complex exhibits like engine models.
- A national museum can meet Welsh-language accessibility requirements by providing full AI narration and text in Welsh directly through the browser-based platform, without producing separate physical guides.
Unique Advantages
- Differentiation: Unlike general AI chatbots (e.g., ChatGPT) or unmanaged consumer AI apps, Tellme is a governed enterprise platform. It prioritizes source-grounded accuracy over generative creativity, tying every output to verified institutional content. Compared to traditional audio guide hardware or bespoke mobile apps, it offers a lower-cost, more scalable, and instantly updatable solution with deeper analytical capabilities.
- Key Innovation: The integration of on-device visual recognition with a strictly governed RAG system in a browser-based environment. This combination uniquely addresses the major hurdles of visitor friction (no app install), institutional risk (unverified AI content), and operational insight (meaningful analytics) in one unified platform.
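The on-device recognition half of this combination rests on a simple matching primitive that can be illustrated independently of any particular model: a camera frame is reduced to a feature vector and compared against precomputed embeddings of the venue's collection, entirely on the device. The embeddings, exhibit IDs, and threshold below are invented for illustration; a real deployment would use a learned image encoder.

```python
# Minimal sketch of embedding-based exhibit matching with a confidence
# threshold. Vectors and identifiers are hypothetical examples.

import math

EXHIBIT_EMBEDDINGS = {
    "roman_pottery_bowl": [0.9, 0.1, 0.3],
    "steam_engine_model": [0.1, 0.8, 0.5],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(frame_embedding, threshold=0.9):
    """Return the best-matching exhibit id, or None below the threshold."""
    best_id, best_score = None, 0.0
    for exhibit_id, emb in EXHIBIT_EMBEDDINGS.items():
        score = cosine(frame_embedding, emb)
        if score > best_score:
            best_id, best_score = exhibit_id, score
    return best_id if best_score >= threshold else None
```

The threshold is what keeps the experience trustworthy: an ambiguous frame returns no match rather than a wrong exhibit, which matters in crowded galleries.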
Frequently Asked Questions (FAQ)
- How does Tellme's AI ensure accuracy and avoid making up information about exhibits? Tellme uses a Retrieval-Augmented Generation (RAG) system. It first retrieves relevant information only from the museum's pre-approved, uploaded knowledge base (catalogues, signage text). The generative AI then formulates answers strictly based on this retrieved data, with line-level source attribution, preventing hallucination and ensuring curator-approved accuracy.
- Do visitors need to download an app to use Tellme at a museum? No, visitors do not need to download any app. Tellme is a 100% browser-based experience. Visitors simply scan a QR code or follow a link on their smartphone to open the interactive guide directly in their mobile web browser (like Chrome or Safari), ensuring instant, low-friction access.
- What kind of data and analytics does Tellme provide to the museum? Tellme provides anonymized, aggregate analytics to protect visitor privacy. This includes data on which exhibits are most engaged with, common drop-off points in visitor journeys, popular questions asked, and overall usage patterns. This data is designed to inform curatorial decisions, gallery planning, and provide evidence for funding reports.
- Can Tellme be used for accessibility and multilingual support? Yes, the platform is built to scale accessibility features. Institutions can configure the AI to deliver audio narration and translate content into multiple languages (like Welsh-first support) directly within the browser experience. This provides dynamic, cost-effective accessibility without the need for separate, static physical guides per language.
- How does Tellme's visual recognition work in low-light conditions or with crowded exhibits? The computer vision model is trained specifically on the venue's own collection images, optimizing it for the actual exhibit environment. While performance can vary with extreme conditions, the technology is designed for real-world use and is already deployed in live, busy public venues with strong reported uptake.
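The anonymized-aggregation approach described in the analytics answer above can be sketched simply: raw interaction events are reduced to per-exhibit totals, and any session or visitor identifier is discarded before aggregation, so only venue-level counts remain. The event fields and report shape below are hypothetical illustrations.

```python
# Sketch of privacy-preserving aggregation: per-exhibit totals only,
# no visitor identifiers retained. Field names are invented examples.

from collections import Counter

def aggregate(events):
    """Reduce raw interaction events to anonymous per-exhibit totals."""
    exhibit_views = Counter()
    questions = Counter()
    for event in events:
        # Only exhibit-level facts are kept; session ids are ignored.
        exhibit_views[event["exhibit_id"]] += 1
        if event.get("question"):
            questions[event["question"]] += 1
    return {"views_per_exhibit": dict(exhibit_views),
            "top_questions": questions.most_common(3)}

events = [
    {"session": "a1", "exhibit_id": "roman_pottery_bowl", "question": "What was this used for?"},
    {"session": "b2", "exhibit_id": "roman_pottery_bowl", "question": "What was this used for?"},
    {"session": "c3", "exhibit_id": "steam_engine_model", "question": None},
]
report = aggregate(events)
```

The resulting report answers the operational questions named above (which exhibits hold attention, which questions recur) without retaining anything that identifies an individual visitor.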
