- Code Review is a multi-agent Claude Code tool for catching issues in AI-generated code
- Token-based pricing typically results in a $15 to $25 fee, Anthropic says
- Issues were reported in 84% of large pull requests, with an average of 7.5 findings each
In response to ongoing studies questioning the accuracy of AI coding tools, and in particular the security and privacy of their output, Anthropic has released a new tool for reviewing code in GitHub pull requests.
Code Review for Claude Code uses multiple agents to maximize accuracy, looking for “logical errors, security vulnerabilities, broken edge cases, and subtle regressions,” according to the supporting documentation.
The tool then highlights any findings as inline comments in the pull request, alongside a summary comment, to help developers identify issues more clearly.
Code Review is Claude’s answer to insecure AI-generated code
Anthropic also noted that the agents verify findings and classify issues by severity to reduce false positives, while cross-referencing the entire codebase for context, not just the diff.
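Anthropic hasn’t published implementation details, but the verify-then-classify approach it describes can be illustrated with a minimal sketch. Everything here (the `Finding` type, the agreement threshold, the toy severity rules) is a hypothetical illustration, not Anthropic’s actual design: several agents review the same diff independently, only findings confirmed by more than one agent survive, and the survivors are bucketed by severity.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical sketch only: Finding, classify_severity, and merge_findings
# are illustrative names, not part of Claude Code's actual implementation.

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    issue: str

def classify_severity(issue: str) -> str:
    # Toy severity rules, purely for illustration.
    if "security" in issue:
        return "critical"
    if "regression" in issue:
        return "major"
    return "minor"

def merge_findings(agent_reports, min_agreement=2):
    """Keep only findings flagged by at least `min_agreement` agents,
    mimicking a verification pass that filters false positives."""
    counts = Counter(f for report in agent_reports for f in set(report))
    return {
        f: classify_severity(f.issue)
        for f, n in counts.items()
        if n >= min_agreement
    }

# Three agents independently review the same pull request.
reports = [
    [Finding("auth.py", 42, "security: token not validated")],
    [Finding("auth.py", 42, "security: token not validated"),
     Finding("ui.py", 7, "style nit")],
    [Finding("auth.py", 42, "security: token not validated")],
]

verified = merge_findings(reports)
# The single-agent "style nit" is dropped; the multiply-confirmed
# security finding survives and is classified as critical.
```

The agreement threshold is one simple way to trade recall for precision; a real system could equally weight agents by specialty or re-verify borderline findings with a dedicated pass.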
Pricing for the new tool is token-based, but Anthropic said that, on average, a typical pull request will cost between $15 and $25, depending on size and complexity. A review usually takes about 20 minutes, but again, this is just a guide.
The timing of this new feature’s release isn’t insignificant either: Anthropic says code output per engineer has grown 200% in the last year, with a marked increase in vibe coding, creating a clear bottleneck in the review process.
Anthropic also claims to use Code Review for Claude Code on almost all pull requests internally, and reports strong results with the tool. Early data shows that 84% of large pull requests (over 1,000 lines) contain findings, with an average of 7.5 issues, while small pull requests (fewer than 50 lines) still average 0.5 issues, with 31% of them receiving feedback.
Emphasizing the tool’s accuracy, the company added that fewer than 1% of detected issues are rejected by human developers.
Code Review is currently available as a research preview on the Claude Team and Enterprise plans. Anthropic didn’t share details of a broader rollout, but we expect it to come eventually.