Pub/Sub flips the addressing model. Publishers emit events to a topic without knowing who's listening; subscribers receive copies of every event for topics they care about. It's how a single OrderPlaced event reaches inventory, billing, fraud, analytics, and the customer-email service from one publish — without those teams knowing each other exists.
The order service publishes OrderPlaced. It doesn't know that emails get sent, analytics gets updated, the warehouse picks the package, and the loyalty system credits points. It just emits the event. Add a new consumer (fraud detection, recommendation training) without touching the producer or coordinating with anyone.
One event, ten subscribers, ten copies — each delivered, acked, and tracked independently. A slow subscriber doesn't slow down a fast one; they just lag at different rates.
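The per-subscription copy-and-ack model can be sketched in a few lines. This is a toy in-memory broker, not any real system's API — the `Broker` class, topic names, and payloads are all illustrative:

```python
from collections import defaultdict, deque

class Broker:
    """Toy pub/sub sketch: one publish fans out one copy per subscription."""
    def __init__(self):
        # topic -> {subscription name: its own pending-delivery queue}
        self.subscriptions = defaultdict(dict)

    def subscribe(self, topic, name):
        self.subscriptions[topic][name] = deque()

    def publish(self, topic, event):
        # Every subscription gets its own copy, tracked independently.
        for pending in self.subscriptions[topic].values():
            pending.append(event)

    def pull(self, topic, name):
        # Popping models delivery + ack for this subscription only.
        pending = self.subscriptions[topic][name]
        return pending.popleft() if pending else None

broker = Broker()
broker.subscribe("order.placed", "billing")
broker.subscribe("order.placed", "inventory")
broker.publish("order.placed", {"order_id": "o-123"})

print(broker.pull("order.placed", "billing"))  # {'order_id': 'o-123'}
# Billing has acked; inventory's copy is still pending — it lags independently.
print(len(broker.subscriptions["order.placed"]["inventory"]))  # 1
```

The key property: `publish` never waits on any subscriber, and each subscription's backlog is its own.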
Pub/Sub is how a 200-engineer org integrates services without weekly cross-team coordination. Producer teams ship contracts (event schemas); consumer teams subscribe at their own pace. The dependency graph becomes "everyone reads from these topics" rather than "every service calls every other service."
Coarse topics (orders) force every subscriber to filter; fine topics (order.placed, order.shipped, order.refunded) let subscribers attach only to what they care about. Most teams settle on noun-dot-verb naming, past tense for events: order.placed, not place-order.
Most managed pub/sub systems support attribute filters or routing rules. AWS SNS lets a subscription specify a JSON filter policy on message attributes. Google Pub/Sub supports filter expressions. RabbitMQ topic exchanges route by routing-key wildcards (order.*). Use these to keep topics simple and let subscribers pick the slice they want.
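RabbitMQ's wildcard semantics are worth internalizing: `*` matches exactly one dot-separated word, `#` matches zero or more. Here's a small standalone matcher that follows those rules — a sketch of the semantics, not RabbitMQ's actual implementation:

```python
def routing_key_matches(pattern: str, key: str) -> bool:
    """RabbitMQ-style topic match: '*' = exactly one word, '#' = zero or more words."""
    def match(p, k):
        if not p:
            return not k  # pattern exhausted: match only if key is too
        head, rest = p[0], p[1:]
        if head == "#":
            # '#' can absorb zero or more words, then the rest must match.
            return any(match(rest, k[i:]) for i in range(len(k) + 1))
        if not k:
            return False
        if head == "*" or head == k[0]:
            return match(rest, k[1:])
        return False
    return match(pattern.split("."), key.split("."))

print(routing_key_matches("order.*", "order.placed"))     # True
print(routing_key_matches("order.*", "order.placed.eu"))  # False — '*' is one word
print(routing_key_matches("order.#", "order.placed.eu"))  # True
```

Note the `order.*` / `order.#` difference: subscribers binding with `*` won't silently pick up new, deeper routing keys later, which is usually what you want for an explicit contract.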
Most pub/sub systems guarantee at-least-once. Build idempotent subscribers — store the event ID, skip if already processed. The same event can and will arrive twice; pretending it can't is how you double-charge a card.
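The idempotency pattern is small enough to show whole. This sketch uses an in-memory `set` for the seen-event store; in production that would be something durable and atomic (a unique-index DB insert, Redis `SETNX`), and the handler names and fields here are illustrative:

```python
processed = set()  # stand-in for a durable dedup store (unique DB index, Redis SETNX)

def handle_payment(event):
    """Idempotent handler: at-least-once delivery means duplicates WILL arrive."""
    if event["event_id"] in processed:
        return "skipped"  # duplicate delivery — card already charged, do nothing
    processed.add(event["event_id"])
    # ... charge the card exactly once ...
    return "charged"

evt = {"event_id": "evt-42", "amount": 1999}
print(handle_payment(evt))  # charged
print(handle_payment(evt))  # skipped — redelivery is now harmless
```

The check-and-record must be atomic in a real system; otherwise two concurrent deliveries of the same event can both pass the check.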
Each subscription tracks its own delivery state. A slow subscriber accumulates a backlog without affecting the others. Watch backlog metrics — sustained growth means the subscriber can't keep up, time to scale or fix.
The classic pattern is SNS + SQS fan-out: a publisher writes to one SNS topic; multiple SQS queues subscribe; each queue is read by its own pool of competing consumers. Pub/sub does the fan-out; queues do the per-subscriber load balancing. Same pattern in Google Pub/Sub (one topic → many subscriptions, each backed by competing pull subscribers).
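The shape of that pattern — fan-out across queues, competing consumers within each queue — can be modeled without touching AWS at all. Queue names and worker IDs below are made up; the round-robin assignment is a stand-in for however SQS actually distributes messages to pollers:

```python
import itertools
from collections import defaultdict, deque

# One "topic" fans out to per-team queues (the SNS -> SQS step).
queues = {"billing": deque(), "email": deque()}

def publish(event):
    for q in queues.values():
        q.append(event)  # fan-out: every queue gets its own copy

def drain(queue_name, workers):
    """Competing consumers: each event in a queue goes to exactly ONE worker."""
    assignments = defaultdict(list)
    pool = itertools.cycle(workers)  # round-robin stand-in for real polling
    q = queues[queue_name]
    while q:
        assignments[next(pool)].append(q.popleft())
    return dict(assignments)

for i in range(4):
    publish({"order_id": f"o-{i}"})

# Billing's two workers split its 4 copies; email's queue still holds its own 4.
print(drain("billing", ["w1", "w2"]))
print(len(queues["email"]))  # 4
```

The division of labor is the point: the topic layer copies, the queue layer load-balances, and neither layer needs to know about the other's consumers.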
Pub/sub typically delivers each event once per subscription and forgets. A streaming log (Kafka, Kinesis) persists every event for a retention window and lets consumers replay from any offset. Need replay or "new consumer reads from the beginning"? Use streaming. Need plain fan-out delivery without history? Use pub/sub.
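The offset-and-replay distinction is concrete in code. This is a sketch of a streaming log's consumer model under simplified assumptions (single partition, no retention expiry) — not Kafka's API:

```python
class Log:
    """Streaming-log sketch: events persist; each consumer keeps a rewindable offset."""
    def __init__(self):
        self.events = []   # retained for the whole window, NOT deleted on delivery
        self.offsets = {}  # consumer -> next index to read

    def append(self, event):
        self.events.append(event)

    def read(self, consumer, from_beginning=False):
        start = 0 if from_beginning else self.offsets.get(consumer, 0)
        batch = self.events[start:]
        self.offsets[consumer] = len(self.events)
        return batch

log = Log()
for i in range(3):
    log.append(f"evt-{i}")

print(log.read("analytics"))   # ['evt-0', 'evt-1', 'evt-2']
log.append("evt-3")
print(log.read("analytics"))   # ['evt-3'] — resumes at its own offset
# A brand-new consumer can replay all of history — plain pub/sub can't offer this:
print(log.read("ml-training", from_beginning=True))
```

In deliver-once pub/sub, `events` would be emptied as each subscription acks; the log's retention is exactly what makes "new consumer reads from the beginning" possible.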
| System | Style | Notes |
|---|---|---|
| AWS SNS | Managed pub/sub with multiple delivery protocols | Pairs with SQS for fan-out + work-distribution; also delivers to Lambda, HTTP, email, SMS. |
| Google Pub/Sub | Managed, global, durable | Push or pull; per-message ack; configurable message retention (7 days by default). |
| Azure Service Bus Topics | Managed pub/sub with subscription rules | Sessions, transactions, scheduled delivery. |
| RabbitMQ topic exchanges | Self-hosted AMQP | Routing-key wildcards make selective subscriptions easy. |
| NATS | Lightweight pub/sub (with optional JetStream persistence) | Edge, IoT, and microservices needing low overhead. |
| Redis Pub/Sub | In-memory, fire-and-forget | Ephemeral — no delivery guarantees if a subscriber is offline. Fine for transient signals; not for business events. |
| Kafka | Streaming log used as pub/sub | Many teams run Kafka as their pub/sub layer to keep replay as an option. |
Pick pub/sub when one event genuinely needs to reach multiple independent consumers and you don't need replay or long-term retention. It's the right substrate for cross-team integration inside a platform — producers don't know who consumes, consumers add themselves without coordinating.
Pick a queue when one producer hands work to one consumer pool. Pick streaming when replay, ordered history, and long retention matter. The three patterns aren't competitors — most mature platforms use all three for the use cases each fits best.