Linters spot patterns line by line; static analysis reasons about flow — what data flows from where to where, what calls what, what might be null on this path. The tools find SQLi, command injection, race conditions, null dereferences, leaked file handles, and entire categories of bugs that no test will catch unless someone first thought to write that test. They're SAST when used for security; "deeper static analysis" otherwise.
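A minimal sketch of the path-dependent bug class described above — the function and data are invented for illustration. A linter sees nothing wrong with any single line; a flow-sensitive analyzer notices that `user` can be `None` on the path where the lookup misses:

```python
def get_display_name(user_id, users):
    """Look up a user and return a display name."""
    user = users.get(user_id)  # may return None if user_id is unknown
    if user_id == 0:
        return "guest"
    # Flow-sensitive analysis flags this path: `user` is None whenever
    # user_id is non-zero and absent from `users` — a latent TypeError
    # no test catches unless someone thought to write that exact case.
    return user["name"].title()
```

No single line is suspicious in isolation; the bug only exists as a relationship between the lookup and the dereference across a branch.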
Semgrep. Pattern matching with code-aware syntax. You write rules in the language being analyzed: "find every db.query with a string concatenation." Open source and fast — it runs in seconds in CI. Free Community Edition for OSS; paid tiers for orgs. Excellent for custom rules unique to your codebase.
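A runnable sketch of the pattern such a rule targets, and the parameterized fix it steers you toward. The function names are invented and `sqlite3` stands in for a real database driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")

def find_user_unsafe(name):
    # The shape the rule matches: attacker-controlled `name` is
    # concatenated straight into the SQL text.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'").fetchone()

def find_user_safe(name):
    # The parameterized form the rule's fix suggestion points at:
    # the driver binds `name` as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchone()
```

The classic payload `x' OR '1'='1` makes the unsafe version match every row while the safe version matches none — exactly the difference a concatenation rule exists to catch.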
CodeQL. GitHub's deep-flow analyzer. Queries are written in QL, a declarative query language over a database built from your code. It catches subtle data-flow bugs Semgrep can't. Free for open source via GitHub Code Scanning; commercial otherwise. Custom rules are heavyweight to write — but the bundled rule set is excellent.
The veteran. Wide language coverage, dashboards, "quality gates" that can block PR merges. Strongest for code-quality metrics (complexity, duplication, coverage) plus a security ruleset. Self-hosted (SonarQube) or SaaS (SonarCloud).
SaaS-first SAST with strong reachability analysis. Pairs with Snyk's dependency scanning (SCA), container scanning, and IaC scanning to give one platform across the supply chain. Common in security-led organizations.
The enterprise SAST tier. Extensive language coverage, compliance reporting, deep flow analysis. Heavier to operate; common in regulated industries.
The first scan on a 5-year-old codebase produces 5,000 findings. Don't try to fix all of them. Set a baseline; have CI block new findings only. Quality ratchet — you stop adding new bugs, and the backlog gets paid down opportunistically.
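The baseline ratchet can be sketched in a few lines. This is a toy — the key shape `(rule, file, fingerprint)` is an assumption; real scanners emit their own stable fingerprints — but it shows why line numbers are deliberately left out of the key:

```python
def new_findings(current, baseline):
    """Return only findings absent from the recorded baseline.

    Findings are keyed by (rule, file, fingerprint); line numbers are
    excluded on purpose so that unrelated edits shifting code around
    don't resurrect old, baselined findings as "new" ones.
    """
    baseline_keys = {(f["rule"], f["file"], f["fingerprint"]) for f in baseline}
    return [f for f in current
            if (f["rule"], f["file"], f["fingerprint"]) not in baseline_keys]
```

CI fails only when `new_findings` is non-empty: the 5,000-item backlog stays parked, but nothing new joins it.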
A scanner that emits 200 false positives per PR will get ignored. Suppress rules that don't apply to your stack; tune severity thresholds; combine with reachability filtering. The goal is a small set of findings developers actually act on.
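Tuning down to an actionable set usually amounts to two filters: a suppression list for rules that don't apply to your stack, and a severity floor. A minimal sketch, with an invented five-level severity scale:

```python
SEVERITY_RANK = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def actionable(findings, min_severity="high", suppressed_rules=frozenset()):
    """Keep only the findings developers should see on a PR."""
    floor = SEVERITY_RANK[min_severity]
    return [f for f in findings
            if f["rule"] not in suppressed_rules          # tuned out for this stack
            and SEVERITY_RANK[f["severity"]] >= floor]    # below threshold = backlog
```

Reachability filtering (is the vulnerable code actually invoked?) is a third filter layered on top, but that signal has to come from the scanner itself.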
"Never use InternalUserService.findById without first checking org membership." A custom Semgrep or CodeQL rule encodes architectural constraints permanently. Far more durable than wiki documentation.
Static analysis catches mechanical issues. It can't tell you the design is wrong, the caching strategy is overcomplicated, or the abstraction is misnamed. Pair it with human review; don't expect the tools to substitute for it.
Critical findings should auto-create tickets with severity, file, line, and the rule's reasoning. Findings that live only in the scanner UI get ignored.