DevOps & CI/CD Deep Dive · 8 of 18

Kubernetes — The Container Operating System

Born at Google out of a decade of lessons from its internal Borg system, open-sourced in 2014, blessed by the CNCF. Kubernetes turned a fleet of machines into one big computer — you declare desired state, and controllers reconcile reality to match. It's the standard. It's also probably overkill for your three-service app.

Anatomy

The Building Blocks

Basic Concepts

  • Pod — the unit of scheduling: one or more containers sharing network and storage.
  • Deployment — declares "I want N replicas of this pod template," handles rolling updates.
  • Service — stable virtual IP/DNS in front of a set of pods.
  • Ingress / Gateway API — HTTP routing into the cluster (host + path → service).
  • ConfigMap / Secret — non-code config and credentials, mounted as files or env vars.
  • Namespace — soft tenancy boundary; RBAC and quotas attach here.
  • StatefulSet, Job, CronJob, DaemonSet — workloads with different identity/scheduling semantics.
  • CRDs & operators — extend the API; encode app-specific automation as controllers.
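The first three concepts above fit in one short manifest. A minimal sketch (names, labels, and the image tag are placeholders): a Deployment that keeps two replicas of a pod template running, and a Service that gives them one stable address.

```yaml
# Deployment: "I want 2 replicas of this pod template."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web               # must match the template labels below
  template:                  # the Pod template
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27  # assumed image/tag
          ports:
            - containerPort: 80
---
# Service: stable virtual IP/DNS in front of every pod labeled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

Apply it with `kubectl apply -f web.yaml`; other pods in the same namespace can then reach the replicas at `http://web`, regardless of which nodes they land on.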
Control Plane

How a Spec Becomes a Running Pod

  1. kubectl apply sends the YAML to the API server, which validates it and persists it in etcd.
  2. The Deployment controller notices the new object and writes the desired pods.
  3. The scheduler assigns each pending pod to a node.
  4. That node's kubelet pulls the image and starts the containers.
  5. The CNI plugin wires up pod networking; kube-proxy programs the Service's virtual IP.

Every component is a control loop watching the API. There's no central orchestrator; each controller owns one resource type and reconciles it.
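The desired/observed split is visible on any object: you write `spec`, the controller fills in `status`. A sketch of what `kubectl get deployment web -o yaml` might return for the Deployment above (field values illustrative):

```yaml
spec:                      # desired state — written by you
  replicas: 2
  # ...rest of the spec you applied
status:                    # observed state — written by the Deployment controller
  replicas: 2
  readyReplicas: 2
  availableReplicas: 2
  conditions:
    - type: Available
      status: "True"
```

Reconciliation is simply each controller's loop driving `status` toward `spec`, one resource kind at a time.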

Why It Won
  • Declarative + idempotent. Apply the same YAML twice; second time is a no-op.
  • Self-healing. Pod dies → Deployment makes another. Node dies → workloads reschedule.
  • Portable. Same manifests run on EKS, AKS, GKE, on-prem k3s, your laptop in kind.
  • Extensible. Custom resources + operators turn the cluster into a substrate for almost any platform.
  • Massive ecosystem — Helm, Kustomize, ArgoCD, Istio, cert-manager, Prometheus, you name it.
Tradeoffs

The Honest Costs

  • Steep learning curve. 30+ resource kinds before you've shipped a "hello world."
  • YAML sprawl. One service can be 6+ files; reach for Helm or Kustomize early.
  • Networking rabbit hole — CNI, Services, Ingress, NetworkPolicy, Service Mesh — debugging spans many layers.
  • Cluster ops are real work. Upgrades, etcd backups, node lifecycle, addon CVEs. Use a managed offering unless you have a platform team.
  • Cost. Idle control planes, over-provisioned requests, and 100 sidecars eat a lot of money.
  • Often overkill. For a few small services, ECS / Cloud Run / Fly / Render is faster to operate.
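One common way to tame the YAML sprawl is Kustomize, which ships inside kubectl. A minimal sketch (file paths and the `web` name are hypothetical): a shared base plus a production overlay that states only what differs.

```yaml
# base/kustomization.yaml — the shared manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml — prod-only deltas
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
    target:
      kind: Deployment
      name: web
```

`kubectl apply -k overlays/prod` renders the merged result and applies it, so environments diverge in small patch files instead of six copied manifests.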