A container is a Linux process with its own filesystem, namespaces, and resource limits. The image is a frozen snapshot of everything that process needs. Together they solved the hardest problem in operations: making "it works on my machine" actually true everywhere else.
Container ≠ VM. A VM ships its own kernel; a container shares the host's. That's why containers start in seconds instead of minutes.
A Dockerfile is a recipe. Each instruction creates a new layer; layers cache aggressively. The order of instructions is the difference between a 30-second rebuild and a 12-minute one.
```dockerfile
# 1. Build stage — has compilers, build deps, source
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./   # cached if package.json hasn't changed
RUN npm ci
COPY . .
RUN npm run build       # produces /app/dist

# 2. Runtime stage — minimal, no build tools
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node               # don't run as root
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]
```
Multi-stage = small final image. COPY package*.json first means dependency layer is cached until those files change.
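Caching only helps if the build context itself stays stable. A `.dockerignore` keeps junk and secrets out of the context; the entries below are typical, not exhaustive — adjust for your repo:

```
# .dockerignore — typical entries, adjust per repo
.git
node_modules
dist
.env
*.log
```

Without this, touching any file in `.git` or `node_modules` invalidates the `COPY . .` layer and everything after it.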
- Use a small base: alpine, distroless, or debian:slim beats a full distro by hundreds of MB.
- Pin by digest, e.g. `node:20@sha256:…` — tags can move under you.
- Drop root with `USER`. Many CVEs need root inside the container to matter.
- Combine `RUN`s where it makes sense; clean up apt caches in the same `RUN` that installs them.
- Keep `.git`, `node_modules`, and local secrets out of the build context.
- Avoid `RUN apt-get update && apt-get install foo` without a version.

For a single developer machine, docker compose (or podman compose) is enough — declarative file, multi-service, one command up.
```yaml
# docker-compose.yml
services:
  app:
    build: .
    ports: ["3000:3000"]
    env_file: .env
    depends_on: [db, redis]
  db:
    image: postgres:16-alpine
    environment: { POSTGRES_PASSWORD: dev }
    volumes: ["pgdata:/var/lib/postgresql/data"]
  redis:
    image: redis:7-alpine
  worker:
    build: .
    command: ["node", "dist/worker.js"]
    depends_on: [db, redis]
volumes:
  pgdata:
```
Compose is great for dev and tiny prod. Once you need scheduling, healing, or multi-host networking, it's time for an orchestrator.
| Option | Best for | Cost of entry |
|---|---|---|
| One VM + systemd + docker run | Small projects, hobby workloads. | Trivial. |
| Compose on a single host | Side projects, internal tools. | Low. |
| Nomad / Docker Swarm | Mid-sized ops without K8s overhead. | Modest. |
| PaaS — Fly.io, Render, Railway, Heroku, App Runner | Most teams under 10 services. | Low; vendor coupling. |
| Kubernetes (self-hosted) | Many teams, many services, real complexity. | High. Don't pick it for one app. |
| Managed K8s — EKS, GKE, AKS | K8s benefits without running the control plane. | Still high; the cluster's still yours. |
| Serverless containers — Cloud Run, ECS Fargate, Container Apps | Stateless services, scale-to-zero. | Lowest among hosted options. |
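The first row of the table can be as small as a single systemd unit that supervises one `docker run`; service and image names here are illustrative:

```ini
# /etc/systemd/system/myapp.service (illustrative names throughout)
[Unit]
Description=myapp container
After=docker.service
Requires=docker.service

[Service]
Restart=always
# Remove any stale container, then run attached so systemd supervises it
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp -p 3000:3000 myapp:latest
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target
```

`Restart=always` gives you crude self-healing on one host; the moment you need it across hosts, you're in orchestrator territory.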
Don't reach for Kubernetes by default. The right answer for most teams is "PaaS or serverless containers, until you can articulate why not."
- Ignoring SIGTERM; the orchestrator gives you ~30s before SIGKILL.
- Putting `COPY . .` before `npm ci` reinstalls everything on every change. Order matters.
- Skipping `USER`. Container escape and lateral movement become trivial.
- Your dev machine builds arm64, prod runs amd64. Use buildx with explicit platforms.

A multi-stage build for the shortener app, with a separate worker process started from the same image. One artifact, two roles, picked by command.
```dockerfile
# Build
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime — distroless for minimal attack surface
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY package.json .
USER nonroot
EXPOSE 3000
CMD ["dist/server.js"]   # default = web server
# `docker run … dist/worker.js` for the queue consumer
```
Distroless: no shell, no package manager, no tools an attacker could use. The image is small, signed, scanned, and shared between the web and worker deployments — same digest, different command.
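In Compose terms, the one-artifact-two-roles pattern looks like this sketch; the registry, image name, and tag are hypothetical, and in production you'd pin both services to the same digest:

```yaml
services:
  web:
    image: registry.example.com/shortener:1.0   # hypothetical tag
    # default CMD from the Dockerfile runs dist/server.js
  worker:
    image: registry.example.com/shortener:1.0   # same image, different command
    command: ["dist/worker.js"]
```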