Claude Code Introduces /ultrareview: New Fleet of Bug-Hunters

Explore the new /ultrareview feature in Claude Code, a cloud-based multi-agent system designed to identify and verify deep-seated bugs before code merges.

by HowAIWorks Team
Tags: claude, claude-code, anthropic, bug-hunting, ultrareview, ai-agents, code-analysis, software-development, devops, agentic-workflows

Introduction

In the rapidly evolving world of agentic software development, the gap between "AI-assisted coding" and "AI-automated engineering" is narrowing. Claude Code, Anthropic's flagship CLI for agentic coding, has just taken a massive leap forward with the introduction of the /ultrareview command.

Available as a research preview, /ultrareview moves beyond simple syntax checking and pattern matching. It initiates a cloud-based "fleet" of specialized bug-hunting agents that work in parallel to stress-test your code. Unlike standard linters or even basic AI reviews, this feature is designed to find deep logical flaws, security vulnerabilities, and performance bottlenecks that typically require hours of human peer review.

What is /ultrareview?

While the standard /review command in Claude Code performs a quick, local analysis of your changes, /ultrareview is an entirely different beast. When you run this command, Claude Code bundles your current branch (including uncommitted and staged changes) and offloads the entire analysis to a remote cloud sandbox.

This offloading is critical for two reasons:

  1. Zero Local Impact: Your local machine remains free to continue development while the review runs in the background.
  2. Elastic Scaling: Anthropic can spin up multiple high-reasoning model instances (leveraging "xhigh" effort settings) to work on your codebase simultaneously.

The Fleet of Bug-Hunters

The core innovation of /ultrareview is its multi-agent architecture. Instead of a single model reading your code from top to bottom, the system deploys a fleet of agents, each with a specific "personality" or objective:

  • The Logic Specialist: Focuses on control flow, edge cases, and architectural consistency.
  • The Security Auditor: Specifically looks for common vulnerabilities like SQL injection, improper authentication, or insecure data handling.
  • The Performance Optimizer: Identifies inefficient algorithms, redundant database queries, and memory leaks.
  • The Verification Lead: Coordinates the other agents and ensures that every reported issue is reproducible.
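To make the Security Auditor's job concrete, here is a minimal sketch of the classic SQL injection pattern such an agent looks for. The function names and schema are invented for illustration; the point is the contrast between string interpolation and a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so a crafted username like "x' OR '1'='1" matches every row.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- leaks every row
print(len(find_user_safe(conn, payload)))    # 0 -- no user has that name
```

A human reviewer can miss this in a large diff; an agent tasked solely with injection patterns is far less likely to.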

Verification-First Philosophy

One of the most frustrating aspects of AI-driven code review is the high rate of "hallucinated" bugs or false positives. Anthropic has addressed this by implementing a mandatory reproduction phase.

When a bug-hunting agent thinks it has found an issue, it doesn't just report it. Instead, it must create a reproduction case within the cloud sandbox to prove the bug exists. Only verified findings are included in the final report that is delivered back to your CLI or Desktop interface. This drastically increases the signal-to-noise ratio, making /ultrareview a high-trust tool for critical changes.
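To illustrate what a reproduction case might look like, here is a hedged sketch. The `paginate` helper and its off-by-one bug are invented for this example; the pattern is what matters: a small, self-contained script that fails (or prints wrong output) if and only if the reported bug is real:

```python
# Hypothetical finding: a pagination helper drops trailing items because the
# loop bound uses len(items) - 1 instead of len(items).
def paginate(items, page_size):
    pages = []
    for start in range(0, len(items) - 1, page_size):  # bug: off-by-one bound
        pages.append(items[start:start + page_size])
    return pages

# Reproduction: every input item must appear exactly once across the pages.
items = list(range(5))
flattened = [x for page in paginate(items, 2) for x in page]
print(flattened)  # [0, 1, 2, 3] -- item 4 is silently dropped
```

Only findings that survive this kind of executable check make it into the report, which is what keeps the signal-to-noise ratio high.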

When to Use /ultrareview

Because /ultrareview typically takes 10 to 20 minutes to complete and consumes significant compute resources, it isn't meant for every minor commit. Anthropic recommends using it for "mission-critical" changes, such as:

  • Authentication Logic: Changes to login flows, JWT handling, or permission systems.
  • Data Migrations: Complex database schema changes where data integrity is at risk.
  • Architectural Refactors: Moving large blocks of logic between services or modules.
  • Final PR Reviews: A last line of defense before merging high-impact features into the main branch.
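The data-migration case is worth a concrete example of why a deep review pays off there. The scenario below is invented: a migration converts a dollars-as-float column to integer cents, and truncating with `int()` silently corrupts values that binary floating point cannot represent exactly:

```python
# Hypothetical migration step: convert float dollars to integer cents.
def migrate_truncate(dollars):
    return int(dollars * 100)    # buggy: 19.99 * 100 == 1998.999... -> 1998

def migrate_round(dollars):
    return round(dollars * 100)  # correct: rounds to the nearest cent

prices = [19.99, 0.29, 5.00]
print([migrate_truncate(p) for p in prices])  # [1998, 28, 500]
print([migrate_round(p) for p in prices])     # [1999, 29, 500]
```

A bug like this passes casual review and unit tests on round numbers, but corrupts real data at scale, exactly the kind of defect a verification-backed audit is built to catch.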

Availability and Pricing

The /ultrareview feature is currently in a research preview and requires Claude Code v2.1.86 or later.

  • Eligibility: You must be signed in with a Pro or Max subscriber account via the /login command.
  • Initial Offer: To encourage testing, Anthropic is offering 3 free reviews to all Pro and Max users until May 5, 2026.
  • Pricing: After the free allotment, reviews are billed as "extra usage." Depending on the size of the codebase and the complexity of the review, costs typically range between $5 and $20 per run.

Conclusion

The release of /ultrareview signals a shift in how we think about the development lifecycle. We are moving toward a future where "Code Review" is no longer just a human-to-human interaction, but a collaborative process involving a fleet of highly specialized AI agents.

By automating the deep, tedious work of bug-hunting and verification, Claude Code lets developers focus on higher-level design and creativity, confident that their most critical changes have been scrutinized by a cloud-scale audit team.

Ready to level up your development workflow? Check out our guide to agentic AI tools or explore the latest Claude models to see the power of Anthropic's reasoning engines in action.

Frequently Asked Questions

What is /ultrareview?

/ultrareview is a high-effort, multi-agent code analysis feature that runs in a cloud sandbox to find and verify bugs before code is merged.

How is /ultrareview different from /review?

While /review is a local, single-pass scan, /ultrareview uses a fleet of parallel agents in the cloud to reproduce and verify findings, significantly reducing false positives.

How much does /ultrareview cost?

Currently, Pro and Max users receive 3 free reviews until May 5, 2026. Subsequent usage is billed as "extra usage" based on the review's complexity.

Continue Your AI Journey

Explore our lessons and glossary to deepen your understanding.