
Reachy Mini

A new open-source robot for your desk

2025-07-10

Product Introduction

  1. Reachy Mini is an open-source, desktop-sized robot kit developed jointly by Pollen Robotics and Hugging Face for AI experimentation and creative coding. The robot features 6 degrees of freedom in head movement, animated antennas, and multimodal sensors including a wide-angle camera and multiple microphones. It ships as an assemble-it-yourself kit in two versions: a wired Lite version ($299) and a wireless version ($449) with an onboard Raspberry Pi 5 and battery.
  2. The core value lies in democratizing physical AI development through affordable hardware, seamless integration with Hugging Face's AI ecosystem, and community-driven knowledge sharing. It enables real-world testing of vision, speech, and interaction models while maintaining full open-source transparency across the hardware designs, the Python SDK, and the simulation environments.

Main Features

  1. Full programmability through a Python SDK with native integration for Hugging Face Transformers, Diffusers, and Safetensors, enabling direct deployment of state-of-the-art AI models for real-time audio-visual interaction (a minimal code sketch follows this list). Future updates will add JavaScript and Scratch support for educational applications.
  2. Modular sensing array including 4 microphones (wireless version), a 5 W speaker, a wide-angle RGB camera, and a 6-axis accelerometer, designed for multimodal human-robot-interaction experiments. Sensor data streams are accessible through standardized APIs compatible with PyTorch and TensorFlow.
  3. An offline simulation SDK allows pre-deployment testing of robot behaviors in a digital-twin environment, reducing hardware dependency during development. The simulation replicates the physical unit's motor specifications (0.1° servo precision) and sensor characteristics for behavior validation.
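
To make the programming model concrete, here is a minimal sketch of what a control loop might look like. The Transformers `pipeline` call is real Hugging Face API; the `reachy_mini` package, `ReachyMini` class, and its `camera`/`head` methods are illustrative assumptions, not the documented SDK.

```python
# Minimal sketch of a Reachy Mini control loop.
# NOTE: the `reachy_mini` package, `ReachyMini` class, and the
# `camera.read()` / `head.goto()` calls are hypothetical placeholders;
# only the Hugging Face `pipeline` API is real.
from PIL import Image
from transformers import pipeline

from reachy_mini import ReachyMini  # hypothetical SDK import

robot = ReachyMini()  # hypothetical: connect to the robot or its simulator

# Real Transformers API: a pre-trained image classifier from the Hub.
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

frame = robot.camera.read()  # hypothetical: RGB frame as a numpy array
results = classifier(Image.fromarray(frame))
print(results[0]["label"], results[0]["score"])

# Hypothetical motion call: nod to acknowledge what was seen.
robot.head.goto(pitch=15, yaw=0, roll=0, duration=0.5)
```

Because the simulation SDK mirrors the hardware API, a script like this should run against the digital twin before ever touching the physical unit.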

Problems Solved

  1. Addresses the accessibility gap in physical AI development by offering enterprise-grade sensor capabilities at consumer pricing (starting at $299), significantly lower than industrial robotics platforms, which average $5,000 or more.
  2. Serves multiple user segments: AI developers prototyping human-robot interaction models, educators teaching robotics/AI concepts, hobbyists building custom behaviors, and researchers testing embodied AI systems in controlled environments.
  3. Enables practical implementation of voice-controlled assistants, emotion recognition systems, educational storytelling robots, and experimental human-robot collaboration setups through its combination of a compact form factor (11 inches tall), precise 6-DoF head movement, and an AI-ready sensor payload (see the voice-command sketch after this list).
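
As a sketch of the voice-assistant use case, the loop below pairs a real Wav2Vec2 speech-recognition pipeline with hypothetical robot calls; `microphone.record()` and `head.goto()` are assumed names, not the actual SDK.

```python
# Hedged sketch of a voice-command loop.
# The ASR pipeline is real Transformers API; all `robot.*` calls are
# hypothetical placeholders for whatever the SDK actually exposes.
import numpy as np
from transformers import pipeline

from reachy_mini import ReachyMini  # hypothetical SDK import

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
robot = ReachyMini()  # hypothetical

while True:
    # Hypothetical: record 3 s of mono float32 audio at 16 kHz.
    audio = robot.microphone.record(seconds=3)
    text = asr({"raw": np.asarray(audio), "sampling_rate": 16_000})["text"].lower()
    if "look left" in text:
        robot.head.goto(yaw=45, duration=0.5)   # hypothetical motion call
    elif "look right" in text:
        robot.head.goto(yaw=-45, duration=0.5)
```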

Unique Advantages

  1. Differentiates itself from comparable educational robots such as TurtleBot and Misty II through direct Hugging Face Hub integration, offering one-click deployment of 15+ pre-trained behaviors at launch, including object recognition, speech response, and gesture sequences.
  2. Implements a unique dual-antenna LED system for non-verbal communication, programmable through RGB values and animation patterns to convey operational states or emotional responses without speech output.
  3. Combines Raspberry Pi 5 compute power (wireless version) with hardware-accelerated AI inference through the Hugging Face Optimum framework, achieving 12 ms latency for vision pipelines and 200 ms end-to-end response time for voice interactions (see the sketch after this list).
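
The latency figures above will vary with the model and workload; as one plausible way to use Optimum's ONNX Runtime backend on the Pi's CPU, the snippet below exports a Hub vision model to ONNX. The Optimum and Transformers calls are real library API, but the performance implications here are not vendor-verified.

```python
# Accelerated inference via Hugging Face Optimum's ONNX Runtime backend.
# These imports and calls are real Optimum/Transformers API; actual
# latency on a Raspberry Pi 5 depends on the model and is not verified here.
from optimum.onnxruntime import ORTModelForImageClassification
from transformers import AutoImageProcessor, pipeline

model_id = "google/vit-base-patch16-224"
model = ORTModelForImageClassification.from_pretrained(model_id, export=True)  # export to ONNX
processor = AutoImageProcessor.from_pretrained(model_id)

# Drop-in replacement for a plain PyTorch pipeline, now ONNX-accelerated.
classify = pipeline("image-classification", model=model, image_processor=processor)
print(classify("photo_of_desk.jpg")[0])  # any local image path
```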

Frequently Asked Questions (FAQ)

  1. What pre-built behaviors are included? The launch package contains 15+ Hub-hosted behaviors, including face tracking (using MediaPipe), voice command recognition (with Wav2Vec2), object detection (a YOLOv8 implementation), and emotional response patterns built on text-to-speech models. A face-tracking sketch appears after this FAQ.
  2. When will Windows support be available? The SDK currently supports macOS and Linux; Windows compatibility is scheduled for Q4 2025 via a WSL2-based solution now in public beta.
  3. How does the simulation SDK work? Developers can test motor controls and sensor outputs in a Unity-based virtual environment that mirrors physical unit specifications, with API compatibility ensuring seamless code migration to hardware.
  4. What community resources exist? Users share custom behaviors through Hugging Face Spaces, with 120+ community models already uploaded for gesture control, multilingual support, and educational games.
  5. What's the shipping timeline? The Lite version ships in late summer 2025 via DHL Express, while the wireless version ships in phased batches from fall 2025 through 2026, depending on production capacity.
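
To illustrate the face-tracking behavior mentioned in the first answer, here is a hedged sketch using the real MediaPipe face-detection API; the `reachy_mini` import and all `robot.*` calls are hypothetical placeholders.

```python
# Face-tracking sketch: MediaPipe detection (real API) driving a
# hypothetical head-motion call to keep the face centered.
import cv2
import mediapipe as mp

from reachy_mini import ReachyMini  # hypothetical SDK import

robot = ReachyMini()  # hypothetical
detector = mp.solutions.face_detection.FaceDetection(
    model_selection=0, min_detection_confidence=0.5
)

frame = robot.camera.read()  # hypothetical: BGR frame as a numpy array
results = detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

if results.detections:
    box = results.detections[0].location_data.relative_bounding_box
    face_center_x = box.xmin + box.width / 2  # normalized [0, 1] coordinates
    # Hypothetical: yaw proportionally toward the face (±30° max here).
    robot.head.goto(yaw=(0.5 - face_center_x) * 60, duration=0.2)
```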
