Code Quality Tools Deep Dive · 5 of 6

Code Review — The Last Sanity Check Before Merge

A code review is a human reading your change before it lands. It catches bugs the tests didn't think of, keeps designs consistent, spreads knowledge across the team, and gives newcomers the feedback loop they need to get good. Done well, it's the highest-leverage hour anyone spends on code. Done badly, it's where PRs go to die.


Basic Concepts

What Code Review Is For

  • Catch bugs the tests missed. A second set of eyes spots assumptions the author held implicitly.
  • Maintain consistency. Patterns, naming, architecture stay aligned across the codebase as the team grows.
  • Spread knowledge. The reviewer learns what's changing; the team's bus factor goes up.
  • Mentor. Newer engineers learn the codebase's standards faster from review than from documentation.
  • Audit. In regulated environments, review is the auditable record of who approved what.

What it isn't for: arguing about formatting (the formatter decides), catching style mistakes (the linter does), or ensuring tests exist (CI does). When tools take over the mechanical checks, review becomes about meaning.

Tools

The Common Platforms

  • GitHub Pull Requests (branch-based, batch review). The default. Reviews the entire branch as a single unit.
  • GitLab Merge Requests (same idea). Similar feature set; integrated CI/CD.
  • Gerrit (per-commit review). Each commit is reviewed separately in an amend-and-rebase workflow. Common at Google, in the Android and Chromium projects, and in OpenStack.
  • Phabricator (legacy) / Phorge (per-diff review). Stacked diffs, tight pre-commit linting. Phorge is the community fork.
  • Graphite, Reviewable.io, CodeStream (stacked / overlay tools). Add stacked-PR support and a richer review UI on top of GitHub/GitLab.
  • Bitbucket, Azure Repos (same idea, different UI). Common where the rest of the toolchain is Atlassian or Azure.
Authoring

How to Write a Reviewable PR

Keep PRs Small

The single biggest determinant of review quality. Studies (and every senior engineer's experience) show review effectiveness drops sharply past ~400 lines. Aim for <200 lines of meaningful diff. Bigger changes get rubber-stamped or sit for days.

How: stack PRs (refactor → feature flag → behavior → cleanup), feature-flag work in progress, separate refactors from feature changes.
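The stacking idea can be sketched with plain git (branch names here are hypothetical; tools like Graphite automate the bookkeeping):

```shell
# Throwaway demo of a stacked-branch layout: each branch targets the one
# below it, so each PR shows only its own small diff.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "chore: init"
git checkout -q -b step1-refactor main                # PR 1: pure refactor
git checkout -q -b step2-feature-flag step1-refactor  # PR 2: add flag, default off
git checkout -q -b step3-behavior step2-feature-flag  # PR 3: the actual change
git branch --list 'step*'                             # three small, reviewable units
```

When PR 1 merges, PR 2 retargets main and rebases; each PR's reviewable diff stays under the ~200-line budget.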

Write a Good Description

Three sections, in order: What changed, why, and how to verify. Link to the issue, the design doc, or the bug. Screenshots for UI. The reviewer should know in 60 seconds whether to dive in or pass.
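A minimal template covering those three sections (the layout and details are a hypothetical suggestion, not a platform requirement):

```markdown
## What
Add proration to subscription upgrades.

## Why
Fixes #1234: users upgrading mid-cycle were charged the full month.

## How to verify
1. Check out the branch and run the test suite.
2. Upgrade a test account mid-cycle; the invoice shows a prorated line item.
```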

Self-Review First

Read your own diff before requesting review. Half the embarrassing comments — debug print statements, commented-out code, leftover TODOs — get caught here.

Make CI Green Before Asking

A red CI is a "not ready for review" signal. Reviewers waste time on changes that the tools already say are broken.

Reviewing

How to Review Effectively

Read the Description First

Understand the goal before reading the code. Code that looks weird in isolation often makes sense once you know what it's solving — and code that looks fine sometimes solves the wrong problem.

Ask, Don't Demand

"Why this approach instead of X?" beats "Use X." The author has context the reviewer doesn't. Asking surfaces it; demanding either steamrolls a good reason or invites a fight.

Distinguish Blocking from Suggestion

Use prefixes: nit: (cosmetic, optional), question: (curiosity, not blocking), blocking: (must fix before merge). The author knows what to act on without guessing.
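Concretely (hypothetical comments):

```
nit: `retries` could be a named constant; fine either way, not blocking.
question: does this path run when the cache is empty, or only on refresh?
blocking: this divides by `count` before the zero check; it crashes on empty input.
```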

Don't Be the Style Police

If you find yourself commenting on indentation, quote style, or import order, the linter and formatter aren't doing their jobs. Fix them; reclaim review time for things humans need to think about.

Review Promptly

Reviews older than ~24 hours grow stale; the author has moved on, context is lost, the rebase pile-up makes a small change feel like a saga. Treat review as a same-day commitment when reasonable.

Approve Without Perfection

If the change is good overall and the remaining comments are minor, approve and leave them as non-blocking suggestions. "Approve with optional changes" is healthier than holding the PR for cosmetic improvements.

Process

What to Wire Up

CODEOWNERS

A file in the repo mapping paths to required reviewers. Changes to billing/ require a billing-team approval; changes to infra/ require infra. Auto-routes review requests; ensures critical paths get qualified eyes.
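A sketch of the GitHub flavor (team names are hypothetical; the file typically lives at `.github/CODEOWNERS`, and the last matching pattern wins):

```
# Fallback owner for anything not matched below
*            @org/maintainers

# Critical paths require the owning team's approval
/billing/    @org/billing-team
/infra/      @org/infra-team

# File-type owners work too
*.sql        @org/dba-team
```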

Branch Protection & Required Checks

Block merges to main without passing CI, at least one approval, and no unresolved "changes requested" reviews. Most accidents happen in branches that bypass the rules.
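One way to keep these rules in version control is the community Probot "Settings" app's `.github/settings.yml`; this is a sketch under that app's schema, and the team and check names are hypothetical:

```yaml
branches:
  - name: main
    protection:
      required_pull_request_reviews:
        required_approving_review_count: 1
        dismiss_stale_reviews: true
      required_status_checks:
        strict: true            # branch must be up to date with main
        contexts: ["ci/test"]   # hypothetical required check name
      enforce_admins: true      # no bypassing, even for admins
```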

Conventional Commits / Changelog Generators

If commit messages follow a convention (feat:, fix:, chore:), tools can generate changelogs and bump versions automatically. Conventional Commits, Changesets, release-please.
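The convention is mechanical enough to check in a hook or CI step; a minimal sketch (the type list is a common subset, not the full spec):

```shell
# Returns success iff the first line looks like "type(scope)!: subject".
check_commit_header() {
  printf '%s\n' "$1" | head -n1 |
    grep -Eq '^(feat|fix|chore|docs|refactor|perf|test|build|ci)(\([a-z0-9-]+\))?!?: .+'
}

check_commit_header "feat(billing): add proration" && echo "accepted"
check_commit_header "update stuff" || echo "rejected: missing type prefix"
```

A real commit-msg hook receives the message file's path as its argument; it would read that file's first line and run a check like this one against it.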

AI Review Assistance

By 2026, AI review tools (CodeRabbit, GitHub Copilot Workspace, Cursor's review mode, Sourcegraph Cody) annotate PRs with summaries, surface likely bugs, and explain unfamiliar diffs. They speed up reviewers but don't replace them — final judgment still belongs to humans accountable for the code.

Common Failure Modes

Where Review Goes Wrong

  • Rubber-stamping. "LGTM" without reading. Solve with smaller PRs, clearer descriptions, and team norms about taking review seriously.
  • Bottleneck on one senior. One person reviews everything; they burn out. Distribute via CODEOWNERS and pairing; rotate review ownership.
  • Reviewer-as-gatekeeper. Aggressive nitpicking, demanding "the way I would have done it." Reviews should improve the change, not impose taste.
  • PRs that sit for days. The author rebases, context is lost, the team's velocity quietly tanks. Make review a same-day expectation.
  • "Trivial" changes that aren't. A 5-line config change that takes prod down because nobody scrutinized it. Even small PRs deserve real attention.