DEV Community

Tal Vardi

How I Use AI to Cut My Code Review Prep Time in Half (Step-by-Step)

Code review is one of those tasks that looks passive but actually demands a lot of mental context-switching. You're jumping between files, reconstructing intent, and trying to spot problems the author couldn't see. I started using AI as a first-pass layer before I open a PR — and it's changed how much headspace I have left for the review that actually matters.

Here's the exact workflow I use. Everything is copy-paste ready.


Step 1: Dump the diff into context

Before doing anything else, I grab the raw diff from the branch:

```shell
git diff main...feature/my-branch > /tmp/review_diff.txt
```

Then I open my AI tool of choice (I use Claude or GPT-4 depending on context length) and paste the diff in. Don't ask it anything yet — just load the context first. If the diff is huge, trim it to the files that matter most.
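For the trimming step, git's pathspec argument (everything after `--`) does the work for you. Here's a self-contained sketch using a throwaway repo so it runs end-to-end; in a real review you'd run only the final `git diff` inside your own repository, and the branch and file names here are placeholders:

```shell
set -e
# Throwaway repo so this sketch runs anywhere
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email you@example.com
git config user.name "you"
mkdir -p src docs
echo "core logic v1" > src/app.txt
echo "notes v1" > docs/notes.txt
git add . && git commit -qm "base"

git switch -qc feature/my-branch
echo "core logic v2" > src/app.txt   # the change that matters
echo "notes v2" > docs/notes.txt     # noise you can skip reviewing
git commit -qam "feature work"

# A pathspec after `--` limits the diff to the files that matter
git diff main...feature/my-branch -- src/ > /tmp/review_diff.txt
grep -c '^diff --git' /tmp/review_diff.txt   # → 1 (only src/app.txt)
```

The three-dot form (`main...feature/my-branch`) diffs against the merge base, so you see only what the branch itself changed, not unrelated drift on main.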


Step 2: Ask for a plain-English summary of intent

Most review friction comes from not knowing why a change exists. Start here:

Copy-paste prompt:

```
Here is a code diff. In 3-5 bullet points, summarize:
1. What this change is doing at a high level
2. Which components or modules are affected
3. Any assumptions the author appears to be making

Diff:
[paste diff here]
```

This takes about 10 seconds to run and gives you the mental model you'd normally spend 5 minutes building manually. I treat this output like a co-author's explanation before I read their PR description.
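Since this prompt never changes, I keep it in a file and concatenate the diff onto it instead of retyping. A minimal sketch (the `prompts/` layout and file names are my own convention, and the stand-in diff just makes the sketch runnable on its own):

```shell
# Reusable prompt template; path and naming are a personal convention
mkdir -p prompts
cat > prompts/summarize.txt <<'EOF'
Here is a code diff. In 3-5 bullet points, summarize:
1. What this change is doing at a high level
2. Which components or modules are affected
3. Any assumptions the author appears to be making

Diff:
EOF

# Stand-in diff so the sketch runs on its own; in practice this file
# comes from the `git diff` command in Step 1.
printf 'diff --git a/app.py b/app.py\n+    return value\n' > /tmp/review_diff.txt

# One paste-ready file: prompt first, diff appended
cat prompts/summarize.txt /tmp/review_diff.txt > /tmp/summary_prompt.txt
head -1 /tmp/summary_prompt.txt   # → Here is a code diff. In 3-5 bullet points, summarize:
```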


Step 3: Surface edge cases and missing tests

This is where AI genuinely earns its keep. After reading the summary, I ask:

Copy-paste prompt:

```
Based on this diff, identify:
1. Input edge cases that are not handled or tested
2. Any error paths that appear to be swallowed or ignored
3. Potential race conditions or state mutation issues
4. Missing test coverage (be specific about which functions or branches)

Focus on things a human reviewer might miss on a first pass.

Diff:
[paste diff here]
```

The key phrase is "things a human reviewer might miss on a first pass." Without it, you get surface-level feedback. With it, you tend to get the stuff that slips through.


Step 4: Run a consistency check against your team's patterns

If your codebase has conventions — naming, error handling style, logging patterns — you can paste a representative snippet alongside the diff and ask:

```
Compare the style and patterns in [EXISTING CODE SNIPPET] with [NEW DIFF].
List any inconsistencies in naming conventions, error handling, or logging approach.
Do not flag stylistic preferences — only deviations from patterns already established in the existing code.
```

This avoids the AI going rogue and flagging perfectly valid code that just doesn't match its training preferences.
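To grab the representative snippet without hand-copying, you can pull a file straight off main with `git show` and splice both pieces into the prompt. A sketch with stand-in content (the `git show` path and the example function names are hypothetical):

```shell
# In a real repo you could grab a reference file from main, e.g.:
#   git show main:src/api/handlers.py > /tmp/existing_snippet.txt
# Stand-in files so the sketch runs anywhere:
printf 'def get_user(user_id):\n    log.info("fetch user")\n' > /tmp/existing_snippet.txt
printf '+def getOrder(oid):\n+    print("fetch order")\n' > /tmp/review_diff.txt

# Splice prompt, existing code, and diff into one paste-ready file
{
  echo "Compare the style and patterns in the EXISTING CODE below with the NEW DIFF."
  echo "List any inconsistencies in naming conventions, error handling, or logging approach."
  echo "Do not flag stylistic preferences - only deviations from established patterns."
  echo
  echo "EXISTING CODE:"
  cat /tmp/existing_snippet.txt
  echo
  echo "NEW DIFF:"
  cat /tmp/review_diff.txt
} > /tmp/consistency_prompt.txt
```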


Step 5: Generate your review comment drafts

Once you've identified real issues, AI is useful for drafting the actual comments — especially for sensitive feedback:

```
I want to leave a code review comment about the following issue: [describe issue].
Write a comment that is direct and specific, explains the risk, and suggests a fix or asks a clarifying question.
Tone: constructive, peer-to-peer. Max 3 sentences.
```

I almost never paste these verbatim, but they get me 80% of the way there and stop me from writing comments that are either too vague or accidentally harsh.


What this workflow actually looks like in practice

End-to-end, this takes me about 10 minutes before I open the PR in GitHub. I come in with:

  • A mental model of the change
  • A shortlist of specific things to probe
  • Draft comments for the tricky feedback

The review itself is faster and higher quality. I'm not wasting cycles on orientation — I'm spending them on judgment.
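Once the prompt templates live on disk, the whole prep step collapses into one small function. A runnable sketch with stand-in inputs (the function, file names, and shortened prompt text are mine, not a published tool; in practice the diff comes from Step 1 and the templates are the full Step 2 and Step 3 prompts):

```shell
# Stand-in templates and diff so the sketch runs anywhere
mkdir -p prompts
echo "Summarize this diff in 3-5 bullets. Diff:" > prompts/summarize.txt
echo "List unhandled edge cases and missing tests. Diff:" > prompts/edge_cases.txt
printf 'diff --git a/app.py b/app.py\n+change\n' > /tmp/review_diff.txt

# Build one paste-ready file per prompt template
review_prep() {
  for p in summarize edge_cases; do
    cat "prompts/$p.txt" /tmp/review_diff.txt > "/tmp/${p}_prompt.txt"
  done
}
review_prep
wc -l /tmp/summarize_prompt.txt /tmp/edge_cases_prompt.txt
```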


A few caveats

  • AI will hallucinate issues that don't exist. Treat the output as a checklist to verify, not a verdict.
  • Never paste proprietary code into public AI endpoints. Use local models or your org's approved tooling.
  • This workflow gets better the more you tune the prompts for your stack.

If you want more prompts like these organized by workflow — standup prep, architecture review, debugging sessions — I put together a prompt playbook that covers the ones I reach for most: check it out here.
