Write a function. The cloud runs it on demand, scales it from zero to thousands of invocations per second, and charges only for actual execution time. The most "cloud-native" of all the compute models.
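The programming model is just a handler function the platform invokes with the trigger payload. A minimal AWS Lambda-style sketch in Python (the `handler(event, context)` signature is the Lambda convention; the event shape here is illustrative):

```python
import json

def handler(event, context):
    """Lambda-style entry point: the platform passes the trigger
    payload as `event` and runtime metadata as `context`."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Deployed, this runs only when something invokes it — an HTTP request, a queue message, a cron tick — and you pay per invocation plus duration.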
| Platform | Notes |
|---|---|
| AWS Lambda | The original; broadest trigger ecosystem. |
| Azure Functions | Multiple plans (Consumption, Premium, Flex). |
| Google Cloud Functions / Cloud Run Functions | Cloud Run is the modern recommended path. |
| Cloudflare Workers | V8 isolates, sub-millisecond cold starts, runs on a global edge network. |
| Vercel Functions | Edge & serverless functions tied to Next.js deploys. |
| Netlify Functions | Same idea, Netlify-native. |
| Deno Deploy | Deno-native edge runtime. |
| Fastly Compute | WASM at the edge. |
| Modal / Beam / Banana | Serverless GPUs for AI workloads. |
Cold starts are the biggest pain. Mitigations: provisioned concurrency (Lambda), minimum instances (Cloud Run), smaller deployment packages and lighter runtimes, or isolate-based edge platforms that skip container spin-up entirely.

Typical limits (largely AWS Lambda defaults; other platforms vary):
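One mitigation is purely structural: do expensive setup (SDK clients, connection pools, config) at module scope so it runs once per warm container, not once per invocation. A sketch, with `load_config` as a stand-in for real client construction:

```python
import time

def load_config():
    # stand-in for slow setup: SDK clients, connection pools, model files
    return {"loaded_at": time.time()}

# Module scope executes once per cold start; warm invocations reuse it.
CONFIG = load_config()

def handler(event, context):
    # per-invocation work only touches the cached setup
    return {"warm_for": time.time() - CONFIG["loaded_at"]}
```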
| Limit | Typical value |
|---|---|
| Max execution time | 15 min (Lambda); 60 min (Cloud Run); 30s (edge) |
| Max memory | 10 GB (Lambda) |
| Max payload | 6 MB sync, 256 KB async |
| Concurrency / region | 1000 default (raise on request) |
| Local disk (ephemeral) | 512 MB – 10 GB (/tmp) |
| Code package size | 50 MB zipped, 250 MB unzipped |
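The 6 MB synchronous payload cap means large responses must be offloaded — the usual pattern is to write the result to object storage and return a pointer (e.g. a presigned URL). A sketch of the size check; `offload_to_storage` is a hypothetical stand-in for that upload:

```python
SYNC_PAYLOAD_LIMIT = 6 * 1024 * 1024  # Lambda's 6 MB sync response cap

def offload_to_storage(body: bytes) -> str:
    # hypothetical: upload to S3 and return a presigned URL
    return "https://example-bucket.s3.amazonaws.com/result?sig=..."

def respond(body: bytes) -> dict:
    if len(body) <= SYNC_PAYLOAD_LIMIT:
        return {"statusCode": 200, "body": body.decode()}
    # too big to return inline: hand back a pointer instead
    return {"statusCode": 303, "location": offload_to_storage(body)}
```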
Classic use cases:

- **Event-driven glue** — sporadic webhooks, jobs that run a few times a day.
- **Media pipelines** — S3 trigger → resize → save → notify.
- **Edge logic** — auth, A/B tests, redirects at the CDN.
- **Integrations** — SaaS-to-SaaS workflows, Slack bots, ChatOps.
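The S3 trigger → resize → save → notify flow starts by pulling bucket and key out of the S3 event notification. This sketch parses the real S3 event shape but stubs the resize and notify steps (which would use something like Pillow and SNS in practice):

```python
from urllib.parse import unquote_plus

def handler(event, context):
    """S3-triggered entry point: one event may carry several records."""
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes object keys (spaces arrive as '+')
        key = unquote_plus(record["s3"]["object"]["key"])
        # real pipeline: download, resize, upload, then notify
        processed.append(f"s3://{bucket}/resized/{key}")
    return {"processed": processed}
```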