The bottleneck moved. Most teams didn't.
For years, the constraint in software delivery was writing code. Not anymore. AI coding tools now generate 41% of all code, and developers using them ship PRs at a measurably higher rate — roughly 20% more per author year-over-year.
But here's what the productivity dashboards aren't showing: the review layer hasn't kept up.
The Faros AI Engineering Report 2026, based on two years of telemetry from 22,000 developers, found that code churn — lines deleted relative to lines added — increased 861% under high AI adoption. Pull requests merged without any review, human or automated, are up 31.3%. Not because teams decided to skip review. Because reviewers can't keep pace with the volume.
This is what Faros calls the "Acceleration Whiplash." AI flooded a system built for human-paced development with output it was never designed to absorb.
## The numbers behind the strain
The data points converge from multiple sources:
- Review times increased 91% while PR volume climbed 20%
- Incidents per pull request jumped 23.5%
- AI-generated PRs wait 4.6x longer for review than human-authored ones
- Code acceptance rates look healthy at 80-90%, but real-world retention drops to 10-30% once post-merge rework is counted
Developers feel faster. The metrics that leadership tracks — PRs merged, tasks completed — look great. But the quality signal is decaying underneath.
## The review problem is structural, not motivational
Code review depends on human availability. Reviews happen between meetings, feature work, and production issues. As PR volume rises, reviewer capacity stays flat. This mismatch turns the review queue into the primary throughput constraint.
Stale PRs breed merge conflicts. Context decays. Feedback quality drops. The longer a PR sits, the more expensive it becomes to merge — and the more likely it ships without meaningful review.
GitHub's recent launch of native stacked PRs acknowledges part of this problem: large PRs are hard to review. But PR size is only one dimension. The bigger issue is PR volume outpacing review bandwidth.
## What actually helps
There's no single fix, but the teams navigating this well share a few habits:
**Visibility comes first.** You can't fix review bottlenecks you can't see. Teams need a clear view of what's waiting for review, how long it's been waiting, and which PRs carry the most risk. Tools like Code Board exist specifically for this — aggregating PRs across repos into a single board where stale and high-risk PRs surface automatically.
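Code Board's internals aren't public, but the triage logic behind "surface stale and high-risk PRs first" is simple to sketch. The following is a minimal illustration, not any tool's actual implementation; the fields and thresholds are assumptions to be tuned per team:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative thresholds -- not from any vendor's defaults.
STALE_AFTER = timedelta(days=3)
LARGE_PR_LINES = 400

@dataclass
class PullRequest:
    repo: str
    number: int
    opened_at: datetime
    lines_changed: int
    reviews: int  # completed reviews, human or automated

def triage(prs, now=None):
    """Order open PRs so stale, large, unreviewed ones surface first."""
    now = now or datetime.now(timezone.utc)

    def risk(pr):
        age = now - pr.opened_at
        score = 0
        if age > STALE_AFTER:
            score += 2  # stale PRs accumulate merge conflicts
        if pr.lines_changed > LARGE_PR_LINES:
            score += 1  # large diffs are harder to review well
        if pr.reviews == 0:
            score += 2  # unreviewed changes carry the most risk
        return (score, age)

    return sorted(prs, key=risk, reverse=True)
```

Ranking by a composite score rather than age alone means a week-old, 600-line, unreviewed PR outranks a fresh one even if both are technically "waiting".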
**Automated first-pass review saves human attention for what matters.** AI can flag style violations, missing tests, and common security patterns. Humans should focus on architecture decisions, edge cases, and whether the change actually solves the right problem.
**Measure outcomes, not just output.** PR merge count without rework tracking is a vanity metric in 2026. Teams need to track code churn, revert rates, and incidents-per-PR alongside velocity.
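These outcome metrics reduce to simple ratios over merged-PR records. A minimal sketch, using the report's definition of churn (lines deleted relative to lines added); the record keys are assumptions, not any vendor's schema:

```python
def outcome_metrics(prs: list[dict]) -> dict:
    """Aggregate churn, revert rate, and incidents-per-PR from merged PRs.

    Each record is a dict with illustrative keys: lines_added,
    lines_deleted, reverted (bool), incidents (int).
    """
    added = sum(p["lines_added"] for p in prs)
    deleted = sum(p["lines_deleted"] for p in prs)
    return {
        # Churn: lines deleted relative to lines added.
        "churn": deleted / added if added else 0.0,
        "revert_rate": sum(p["reverted"] for p in prs) / len(prs),
        "incidents_per_pr": sum(p["incidents"] for p in prs) / len(prs),
    }
```

Tracking these alongside merge count is what separates "we shipped more" from "we shipped more that stuck".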
The goal isn't to slow down code generation. It's to make sure the review layer evolves at the same pace. Speed without oversight isn't velocity — it's just faster accumulation of debt.