
Anthropic has introduced Code Review, an AI-powered feature designed to help developers handle the rapidly growing volume of software being generated by artificial intelligence tools. The feature has been launched as part of the company’s Claude Code platform.
The tool is designed to detect bugs, security vulnerabilities, and inconsistencies in code before it is merged into a company’s software codebase. As AI-assisted programming tools become more widely used, developers are producing significantly larger amounts of code, increasing the need for efficient review systems to maintain quality and reliability.
Anthropic said the surge in AI-driven programming—often referred to as “vibe coding”—is changing how engineers build software. Developers can now generate large portions of code using natural-language prompts, which speeds up development but can also introduce errors, poorly understood logic, or potential security issues.
The new AI reviewer automatically analyzes code changes submitted through pull requests, which developers typically use to propose updates before integrating them into a project’s main codebase. The system evaluates these changes and flags potential problems so developers can review and address them before deployment.
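To illustrate the kind of check such an automated reviewer performs, here is a minimal, hypothetical sketch. It is not Anthropic's implementation; the rules, names, and diff below are invented for illustration. It scans the added lines of a unified diff for two common red flags, a hardcoded credential and a bare `except` clause, and reports them for human attention:

```python
import re

# Toy rules loosely modeled on checks an automated reviewer might run.
# The patterns and messages here are illustrative, not Anthropic's actual rules.
RULES = [
    (re.compile(r"(?i)(api_key|password|secret)\s*=\s*['\"]"),
     "possible hardcoded credential"),
    (re.compile(r"except\s*:\s*$"),
     "bare except clause swallows all errors"),
]

def review_diff(diff_text: str) -> list[str]:
    """Flag suspicious added lines (those starting with '+') in a unified diff."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        # Only inspect newly added lines; skip '+++' file headers.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        code = line[1:]
        for pattern, message in RULES:
            if pattern.search(code):
                findings.append(f"line {lineno}: {message}")
    return findings

# A small invented diff to exercise both rules.
diff = """\
+++ b/app.py
+API_KEY = "sk-live-1234"
+try:
+    run()
+except:
+    pass
"""
for finding in review_diff(diff):
    print(finding)
```

A production reviewer works on the full pull-request context rather than regex rules, but the shape is the same: inspect the proposed change, emit findings, and leave the merge decision to a human.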
According to Cat Wu, Anthropic’s head of product, the feature was developed in response to enterprise customers who were struggling to review the rising number of pull requests generated by AI coding tools. As Claude Code generates more software automatically, organizations are facing bottlenecks in reviewing and approving those changes.
The Code Review tool is intended to streamline this process by acting as an automated assistant that inspects the generated code and highlights areas that require human attention. This allows engineering teams to focus on critical decisions while the system handles routine checks.
The feature is currently being released as a research preview and is available to users of Claude for Teams and Claude for Enterprise. Anthropic plans to expand access gradually based on feedback from early adopters.
The launch reflects a broader shift in software development workflows, where engineers are increasingly supervising and validating code produced by AI systems rather than writing every line themselves.