How the server moves data between memory and durable storage — through ORMs, query builders, transactions, and migrations. Where most performance and correctness problems eventually surface.
- **Raw drivers:** `pg`, `mysql2`, JDBC. Fastest, most explicit, no magic. Always pair with parameterized queries.
- **Query builders:** Knex, jOOQ, sqlx, Kysely. Programmatic SQL — composable, type-safe, no surprises.
- **Lightweight ORMs:** Drizzle, Prisma, SQLAlchemy Core. Schema-aware but stay close to SQL.
- **Full ORMs:** Hibernate/JPA, Entity Framework, Django ORM, ActiveRecord. Rich object graphs, lazy loading, lifecycle hooks.
Heavier ORMs trade clarity for productivity. They shine in CRUD-heavy apps but obscure performance characteristics on hot paths — know when to drop down a level.
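At the raw-driver level the whole contract is "SQL string in, rows out", and the one non-negotiable habit is parameterization. A minimal sketch, using Python's stdlib `sqlite3` as a stand-in for `pg`/`mysql2`/JDBC (the `?` placeholder syntax varies by driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("ada@example.com",))

# Parameterized: the driver sends the value separately from the SQL text,
# so attacker-controlled input can never change the query's structure.
row = conn.execute(
    "SELECT id, email FROM users WHERE email = ?",
    ("ada@example.com",),
).fetchone()
print(row)  # (1, 'ada@example.com')
```

The same discipline holds at every layer above: query builders and ORMs parameterize for you, which is one of their quiet benefits.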
| Level | Allows | When |
|---|---|---|
| Read Committed | Non-repeatable reads, phantoms. | Default in Postgres & Oracle. Fine for most CRUD. |
| Repeatable Read | Phantoms (in some DBs). | Reports needing a stable snapshot mid-transaction. |
| Serializable | Nothing — behaves as if transactions ran one at a time. | Money, inventory, anywhere correctness > throughput. Expect retries on conflict. |
- **Optimistic:** include a version column; on UPDATE, check the version hasn't changed; retry on conflict. Great for low contention.
- **Pessimistic:** `SELECT … FOR UPDATE` — others wait. The right choice when contention is high or retries are expensive.
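Optimistic locking fits in a few lines; here `sqlite3` is just a convenient in-memory database, and the `accounts` schema is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO accounts VALUES (1, 100, 0)")

def withdraw(conn, account_id, amount):
    balance, version = conn.execute(
        "SELECT balance, version FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()
    # The WHERE clause matches only if nobody bumped the version since we read.
    cur = conn.execute(
        "UPDATE accounts SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (balance - amount, account_id, version),
    )
    conn.commit()
    return cur.rowcount == 1  # False -> lost the race; caller retries

print(withdraw(conn, 1, 30))  # True — balance is now 70, version 1
```

A stale writer sees `rowcount == 0` instead of silently overwriting — the lost-update anomaly becomes an explicit retry.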
Group all changes a request makes, flush once, commit once. Don't sprinkle individual saves through your service — it makes rollback ineffective and produces N round-trips where one would do.
If a transaction must also publish an event ("order placed → email user"), write the event to an outbox table in the same transaction. A separate worker reads the outbox and publishes — guaranteeing the event ships iff the data was committed. Avoids the dual-write problem.
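A minimal outbox sketch, again with in-memory `sqlite3`; the table names, topic string, and payload shape are assumptions:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, total INTEGER);
CREATE TABLE outbox (
    id INTEGER PRIMARY KEY, topic TEXT, payload TEXT, published_at TEXT
);
""")

def place_order(conn, total):
    with conn:  # one transaction: both rows commit, or neither does
        cur = conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("order.placed", json.dumps({"order_id": cur.lastrowid, "total": total})),
        )
        return cur.lastrowid

order_id = place_order(conn, 4200)
# A separate worker polls: SELECT * FROM outbox WHERE published_at IS NULL,
# publishes each event, then marks it published.
```

Because the worker publishes after commit, the guarantee is at-least-once delivery — consumers should be idempotent.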
You load 50 orders, then iterate to load each one's customer — 1 + 50 round-trips instead of 1. Fix with eager loading (JOIN FETCH, include, select_related) or DataLoader-style batching. The single biggest performance bug in ORM-heavy code.
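The batching fix can be shown directly without an ORM; the schema here is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO orders VALUES (10, 1), (11, 1), (12, 2);
""")

orders = conn.execute("SELECT id, customer_id FROM orders").fetchall()

# N+1 would issue one customer query per order: 1 + len(orders) round-trips.
# Batched: collect the distinct ids and fetch every customer in ONE query.
ids = sorted({cid for _, cid in orders})
placeholders = ",".join("?" * len(ids))
customers = dict(conn.execute(
    f"SELECT id, name FROM customers WHERE id IN ({placeholders})", ids
))
for order_id, cid in orders:
    print(order_id, customers[cid])
```

This is exactly what DataLoader-style libraries do under the hood: accumulate keys, then dedupe and batch into a single `IN` (or join) query.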
App pool size should be smaller than the DB's `max_connections` divided by the number of app instances. Postgres usually wants fewer connections than developers expect — pgBouncer or RDS Proxy in front lets thousands of app connections share a small pool of real ones.
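The sizing rule above is simple arithmetic; a tiny helper makes it concrete (the `reserved_for_admin` headroom is an assumption on top of the rule, kept free for superuser and maintenance connections):

```python
def pool_size(max_connections, app_instances, reserved_for_admin=5):
    """Per-instance pool cap: share what the DB allows, minus admin headroom."""
    return (max_connections - reserved_for_admin) // app_instances

# Postgres max_connections=100, 4 app instances:
print(pool_size(100, 4))  # 23 connections per instance
```

Note the division is per *instance*, so autoscaling multiplies the total — another reason to put a shared pooler like pgBouncer in front.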
- You can't tune a query you can't see — run `EXPLAIN ANALYZE` until you can. A sequential scan on a million-row table is the warning sign.
- Indexes don't apply when the query wraps the column in a function (`WHERE lower(email) = …`) — use a functional index or store the value normalized.
- Migrations are code. Check them in. Run them automatically. Make them backwards-compatible with the previous app version so deploys can roll back.
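SQLite happens to support both plan inspection (`EXPLAIN QUERY PLAN`, a rougher cousin of Postgres's `EXPLAIN ANALYZE`) and expression indexes, so the functional-index point can be demonstrated end to end; the schema and index name are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

def plan(conn):
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT id FROM users WHERE lower(email) = ?",
        ("ada@example.com",),
    ).fetchall()
    return rows[0][-1]  # the human-readable plan detail

before = plan(conn)   # full scan: the function call defeats any plain index
conn.execute("CREATE INDEX idx_users_email_lower ON users (lower(email))")
after = plan(conn)    # now a search using the expression index
print(before)
print(after)
```

The same experiment in Postgres uses `CREATE INDEX ... ON users (lower(email))` and `EXPLAIN ANALYZE`; the planner behavior is the same idea.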
- On large live tables, avoid long locks: `CREATE INDEX CONCURRENTLY`; add columns with no default, then backfill.
- Soft deletes mean every query carries `WHERE deleted_at IS NULL` forever. Use them deliberately, not by default.
- Store timestamps as `timestamptz`; never as a naive timestamp.
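The add-then-backfill tactic can be sketched with `sqlite3` standing in for the real database; the table, batch size, and backfill value are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total INTEGER)")
conn.execute("INSERT INTO orders (total) VALUES (100), (200)")

# Step 1: add the column with NO default — a cheap metadata change,
# not a full-table rewrite.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")

# Step 2: backfill in small batches, committing between them, so no
# single statement holds a long lock on the whole table.
while True:
    cur = conn.execute(
        "UPDATE orders SET currency = 'USD' "
        "WHERE id IN (SELECT id FROM orders WHERE currency IS NULL LIMIT 1000)"
    )
    conn.commit()
    if cur.rowcount == 0:
        break
```

Only after the backfill finishes (and the previous app version is gone) do you add the `NOT NULL` constraint and default — the expand/contract pattern.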