Code Quality Tools Deep Dive · 3 of 6

Static Analysis — Reasoning About Code Without Running It

Linters spot patterns line by line; static analysis reasons about flow: what data moves from where to where, what calls what, what might be null on this path. The tools find SQL injection, command injection, race conditions, null dereferences, leaked file handles, and entire categories of bugs that no test will catch unless someone first thought to write that test. When aimed at security, this is SAST (static application security testing); otherwise it's just "deeper static analysis."
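
To make that concrete, here is a minimal Python sketch of the kind of source-to-sink path a flow-based tool reports; the request and db objects are hypothetical stand-ins for a web framework handler and a database driver.

    def search_users(request, db):
        # Source: untrusted input from the HTTP request.
        username = request.args.get("username")
        # Sink: the tainted value reaches the SQL text via string concatenation.
        # A flow-based analyzer reports this source-to-sink path as SQL injection.
        return db.execute("SELECT * FROM users WHERE name = '" + username + "'")

    def search_users_safe(request, db):
        # The usual remediation: a parameterized query, so the tainted value
        # never becomes part of the SQL text.
        return db.execute("SELECT * FROM users WHERE name = %s",
                          (request.args.get("username"),))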

Quick Facts

What Sets Static Analysis Apart

Basic Concepts

  • Interprocedural analysis. Tracks data across functions and files: "user input from this controller flows into this DB query without sanitization" (see the sketch after this list).
  • Two flavors: security (SAST — Snyk, Semgrep, CodeQL) and quality / maintainability (SonarQube, CodeClimate, Codacy).
  • Pattern-based vs flow-based. Pattern matchers (Semgrep) look for code shapes: fast and easy to customize. Flow analyzers (CodeQL, SonarQube's deeper rules) track data through the program: slower, deeper.
  • Almost always cloud-hosted now. Analysis runs in CI or in a SaaS dashboard; the output is a list of findings, ranked by severity.
  • Reachability is the new frontier. Modern tools tell you which findings are actually reachable from external input — most reduce a 500-finding wall to 30 that matter today.
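
The first bullet, as code: split the same taint across two files and a line-by-line linter sees nothing wrong in either one, while an interprocedural analyzer connects them. A minimal sketch with hypothetical names.

    # controller.py (hypothetical): the source of untrusted data.
    def show_profile(request, db):
        user_id = request.args.get("id")    # attacker-controlled value
        return load_profile(db, user_id)    # taint crosses the function boundary

    # queries.py (hypothetical): the sink, in a different file.
    def load_profile(db, user_id):
        # Harmless in isolation; only whole-program data-flow analysis knows that
        # user_id came from request input and reaches SQL without sanitization.
        return db.execute("SELECT * FROM profiles WHERE id = " + user_id)
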
Tools

The Common Players

Semgrep

Pattern matching with code-aware syntax. Rule patterns look like the code they target: "find every db.query with a string concatenation." Open source, fast, runs in seconds in CI. Free Community Edition for OSS; paid tiers for orgs. Excellent for custom rules unique to your codebase.
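
To illustrate, here is Python code that a rule like the one quoted above would and would not flag; the db.query helper is the hypothetical name from that example.

    def fetch_order(db, order_id):
        # Flagged: matches a pattern along the lines of db.query("..." + $X),
        # i.e. string concatenation inside the query call.
        return db.query("SELECT * FROM orders WHERE id = " + order_id)

    def fetch_order_safe(db, order_id):
        # Not flagged: a parameterized call has a different shape.
        return db.query("SELECT * FROM orders WHERE id = %s", (order_id,))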

CodeQL

GitHub's deep-flow analyzer. Queries are written in QL, a declarative query language over a code database. Catches subtle data-flow bugs Semgrep can't. Free for open source via GitHub Code Scanning; commercial otherwise. Heavyweight to write custom rules — but the bundled rule set is excellent.

SonarQube / SonarCloud

The veteran. Wide language coverage, dashboards, "quality gates" that can block PR merges. Strongest for code-quality metrics (complexity, duplication, coverage) plus a security ruleset. Self-hosted (SonarQube) or SaaS (SonarCloud).

Snyk Code

SaaS-first SAST with strong reachability analysis. Pairs with Snyk's dependency scanning (SCA), container scanning, and IaC scanning to give one platform across the supply chain. Common in security-led organizations.

Veracode, Checkmarx, Coverity, Fortify

The enterprise SAST tier. Extensive language coverage, compliance reporting, deep flow analysis. Heavier to operate; common in regulated industries.

Language-Specific Deep Analyzers
  • Java/Kotlin: SpotBugs, ErrorProne, IntelliJ inspections.
  • .NET: Roslyn analyzers, NDepend.
  • C/C++: Coverity, Cppcheck, clang-tidy.
  • Python: Bandit (security-focused), mypy + Ruff (see the sketch after this list).
  • Go: staticcheck, go vet, Semgrep with Go rules.
  • Rust: the compiler does most of it; clippy adds the rest.
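
As a small illustration for the Python entry, two bugs these analyzers catch without running anything; the code and names are invented for the example.

    import subprocess
    from typing import Optional

    USERS: dict[int, dict] = {}

    def find_user(user_id: int) -> Optional[dict]:
        return USERS.get(user_id)       # None when the user does not exist

    def greet(user_id: int) -> str:
        user = find_user(user_id)
        # mypy flags this: "user" may be None, so indexing it can crash at runtime.
        return "Hello " + user["name"]

    def archive(path: str) -> None:
        # Bandit flags shell=True with interpolated input as a command injection risk.
        subprocess.run("tar czf backup.tgz " + path, shell=True)
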
Discipline

How to Make Findings Useful

Run on Every PR — But Only Show New Findings

The first scan on a 5-year-old codebase produces 5,000 findings. Don't try to fix them all at once. Set a baseline and have CI block new findings only. That's a quality ratchet: you stop adding new bugs, and the backlog gets paid down opportunistically.
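
A minimal sketch of the ratchet, assuming the scanner exports findings as JSON with a stable fingerprint per finding (most tools provide one); the file names and the fingerprint field are assumptions.

    import json
    import sys

    # Fingerprints of every finding that existed when the scanner was adopted.
    # Committed to the repo; shrinks as the old findings get fixed.
    with open("baseline.json") as f:
        baseline = {item["fingerprint"] for item in json.load(f)}

    with open("scan-results.json") as f:
        current = {item["fingerprint"] for item in json.load(f)}

    new_findings = current - baseline
    if new_findings:
        print(f"{len(new_findings)} new finding(s) introduced by this change")
        sys.exit(1)  # fail the PR check; pre-existing findings don't block the merge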

Tune for Signal

A scanner that emits 200 false positives per PR will get ignored. Suppress rules that don't apply to your stack; tune severity thresholds; combine with reachability filtering. The goal is a small set of findings developers actually act on.
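
Suppressions are part of the tuning: when a finding is a genuine false positive, silence it at the line with a justification rather than disabling the rule globally. A sketch using Semgrep's and Bandit's inline suppression comments; the rule ID and the db object are placeholders.

    import subprocess

    def load_fixtures() -> None:
        # Test tooling only; never runs with user input in production.
        subprocess.run("make fixtures", shell=True)  # nosec

    def audit_rows(db, table_prefix: str, day: str):
        # table_prefix comes from trusted config, not user input; reviewed by security.
        # nosemgrep: my-sql-injection-rule
        return db.query("SELECT * FROM " + table_prefix + "_audit WHERE day = %s", (day,))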

Custom Rules for Your Code's Sharp Edges

"Never use InternalUserService.findById without first checking org membership." A custom Semgrep or CodeQL rule encodes architectural constraints permanently. Far more durable than wiki documentation.

Treat It as Part of Code Review, Not a Replacement

Static analysis catches mechanical issues. It can't tell you the design is wrong, the caching strategy is overcomplicated, or the abstraction is misnamed. Pair it with human review; don't expect tools to substitute.

Wire to the Issue Tracker

Critical findings should auto-create tickets with severity, file, line, and the rule's reasoning. Findings that live only in the scanner UI get ignored.
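
A minimal sketch of that wiring, assuming the scanner can emit SARIF (the interchange format most SAST tools support) and that create_ticket wraps whatever tracker API you use; the file name and the helper are assumptions.

    import json

    def create_ticket(title: str, body: str) -> None:
        # Placeholder: call your issue tracker's API here (Jira, GitHub Issues, ...).
        print("TICKET:", title)

    with open("scan.sarif") as f:
        sarif = json.load(f)

    for run in sarif["runs"]:
        for result in run["results"]:
            if result.get("level") != "error":  # keep only the highest-severity findings
                continue
            loc = result["locations"][0]["physicalLocation"]
            where = f'{loc["artifactLocation"]["uri"]}:{loc["region"]["startLine"]}'
            create_ticket(
                title=f'[SAST] {result["ruleId"]} at {where}',
                body=result["message"]["text"],  # the finding's message and reasoning
            )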

Common Mistakes

Where Static Analysis Fails to Help

  • "Buy SAST, ignore findings." The most expensive shelfware in security. Without ownership and triage, the dashboard is theater.
  • Adopting too many tools at once. Three SAST tools producing overlapping findings creates 3× the triage with not much extra signal. Pick one; layer specific tools where they add unique value.
  • Treating "compliant" as "secure." Passing the SAST gate only means you don't have the bugs the tool knows how to spot. Real attackers aren't limited to the bugs static analysis catches.
  • Slow scans. A 30-minute scan on every PR gets skipped, not run. Tune the scope (full codebase nightly, PR diffs in CI) and use incremental analysis where supported.