
Claude Code /ultrareview

Cloud code review using a fleet of parallel agents

2026-04-23

Product Introduction

  1. Definition: Claude Code /ultrareview is a high-fidelity, multi-agent automated code review system designed for deep-layer bug detection. It ships as a research-preview feature of the Claude Code CLI, operating in a remote cloud sandbox to perform parallel analysis and independent verification of software defects.

  2. Core Value Proposition: The primary objective of /ultrareview is to provide developers with "pre-merge confidence" by identifying high-signal, verifiable bugs that traditional static analysis or single-pass AI reviews might overlook. By leveraging a fleet of reviewer agents, it eliminates the noise of subjective style suggestions and focuses exclusively on reproducible functional errors, security vulnerabilities, and logic flaws, thereby reducing technical debt and preventing regressions in mission-critical codebases.

Main Features

  1. Multi-Agent Parallel Fleet Execution: Unlike standard sequential analysis, /ultrareview initiates a fleet of independent reviewer agents in a remote cloud infrastructure. These agents explore the diff of a branch or pull request simultaneously, covering a broader surface area of the code logic. This parallelization allows for exhaustive exploration of edge cases and complex state interactions that a single-pass reviewer would likely miss.

  2. Independent Bug Verification and Reproduction: The core technical differentiator of /ultrareview is its verification engine. When an agent identifies a potential bug, the system does not report it immediately. Instead, it attempts to independently reproduce the issue within the remote sandbox. Only findings that are successfully verified and confirmed as real bugs are presented to the user, ensuring an exceptionally high signal-to-noise ratio compared to standard AI feedback.

  3. Remote Sandbox Offloading: All computational heavy lifting, including code cloning, environment setup, and multi-agent processing, occurs in an Anthropic-managed remote sandbox. This architecture ensures that the developer’s local machine resources (CPU, RAM) remain unburdened, allowing the user to continue coding or running local tests while the 5-to-10-minute deep review process completes in the background.

  4. Seamless GitHub Pull Request Integration: The tool supports direct ingestion of GitHub PRs through the command /ultrareview <PR-number>. In this mode, the remote sandbox clones the PR directly from the repository host, facilitating a streamlined workflow for teams utilizing CI/CD pipelines and centralized version control.
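Based solely on the commands this article names (/ultrareview and /tasks), a typical session might look like the sketch below; the pull request number shown is a hypothetical placeholder, not a value from this article:

```
# Inside an interactive Claude Code session:

# Review the diff of the current branch in the remote sandbox
/ultrareview

# Or ingest a GitHub pull request by number (1234 is hypothetical)
/ultrareview 1234

# The review runs in the background for roughly 5-10 minutes;
# check its progress at any time with
/tasks
```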

Problems Solved

  1. Pain Point: AI Hallucinations and False Positives in Code Review. Traditional AI-driven reviews often suggest stylistic changes or hallucinate bugs that do not exist, leading to "alert fatigue." /ultrareview solves this by requiring independent verification for every reported finding, ensuring that every notification represents a genuine issue.

  2. Target Audience: The product is specifically engineered for Senior Software Engineers, Tech Leads, and QA Automators who manage substantial code changes. It is also highly relevant for Open Source Maintainers who need to vet external contributions, and for DevOps teams focused on shifting their security and reliability testing left.

  3. Use Cases:

  • Large-Scale Refactoring: Verifying that structural changes haven't introduced subtle logic errors across a distributed codebase.
  • Critical Security Patches: Performing a deep-dive analysis on sensitive code paths before merging a fix.
  • Pre-Merge Sanity Checks for Substantial PRs: Using the multi-agent fleet to catch "needle-in-a-haystack" bugs in complex pull requests involving thousands of lines of code.

Unique Advantages

  1. Differentiation: Compared to the local /review command, /ultrareview offers significantly higher depth and reliability. While /review is optimized for speed (seconds to minutes) and local context, /ultrareview is optimized for discovery and verification (5-10 minutes), utilizing a cloud-native architecture that /review cannot replicate. It moves beyond "static suggestion" into the realm of "dynamic verification."

  2. Key Innovation: The integration of a "reproduction-first" reporting model. By treating every suspected bug as a hypothesis that must be proven in a sandbox before the developer is notified, /ultrareview shifts the paradigm of AI coding assistants from "generative advisors" to "automated quality assurance agents."
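The "reproduction-first" model can be pictured as a filter over candidate findings: a suspected bug is a hypothesis, and only hypotheses confirmed by a sandbox run reach the developer. The sketch below is a loose illustrative model of that idea; all names and data shapes are hypothetical and do not reflect Anthropic's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    reproduced: bool  # True only if a sandbox run confirmed the bug

def verified_findings(candidates):
    """Keep only findings whose reproduction attempt succeeded,
    mirroring the 'verify before report' model described above."""
    return [f for f in candidates if f.reproduced]

# Example: two suspected bugs, only one confirmed in the sandbox.
suspects = [
    Finding("off-by-one in pagination", reproduced=True),
    Finding("possible null deref", reproduced=False),
]
print([f.description for f in verified_findings(suspects)])
# → ['off-by-one in pagination']
```

The unverified suspicion is silently dropped rather than surfaced, which is the mechanism the article credits for the high signal-to-noise ratio.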

Frequently Asked Questions (FAQ)

  1. How much does a Claude Code /ultrareview run cost? Outside of the initial three free runs provided to Pro and Max subscribers (available through May 5, 2026), each /ultrareview run is billed as "extra usage." The typical cost ranges between $5 and $20 per review, depending on the complexity and size of the code changes. Users must have "extra usage" enabled in their billing settings to initiate a paid review.

  2. How long does a deep code review with /ultrareview take? A typical /ultrareview session takes between 5 to 10 minutes to complete. Because it runs as a background task in the remote cloud sandbox, developers can continue using the Claude Code CLI for other tasks, or even close their terminal entirely, without interrupting the review process. Results can be tracked using the /tasks command.

  3. Does /ultrareview support all cloud environments like Bedrock or Vertex AI? No. Currently, /ultrareview is a research preview feature available only through the Claude Code on the web infrastructure. It is not available for users accessing Claude via Amazon Bedrock, Google Cloud Vertex AI, or Microsoft Foundry. Additionally, it is disabled for organizations with Zero Data Retention (ZDR) policies due to the requirement for remote sandbox processing.

  4. What is the difference between /review and /ultrareview? The /review command is a fast, single-pass review that runs locally and is best for quick feedback during active iteration. In contrast, /ultrareview is a deep, multi-agent review that runs in the cloud, verifies bugs independently, and is intended for high-stakes, pre-merge validation of significant code changes.
