Big-O describes how the work an algorithm does grows as the input grows. Not how fast it is on your laptop today, but what happens when the input is 10× or 1,000× larger. Get this right and you'll spot the algorithms that fall over at scale before they ship.
| Class | Name | Example | n=1M feels like |
|---|---|---|---|
| O(1) | Constant | Hash map lookup, array index. | Instant. |
| O(log n) | Logarithmic | Binary search, balanced tree lookup. | ~20 steps. |
| O(n) | Linear | Single pass over an array, sum, max. | Fast — milliseconds. |
| O(n log n) | Linearithmic | Mergesort, quicksort (avg), heapsort. | Comfortable — seconds at worst. |
| O(n²) | Quadratic | Nested loops, naive duplicate check. | ~10¹² ops — minutes to hours. |
| O(n³) | Cubic | Triple loops, naive matrix multiply. | Don't. |
| O(2ⁿ) | Exponential | Subset enumeration, naive recursion. | Heat death. |
| O(n!) | Factorial | Permutation enumeration, brute-force TSP. | Already failed at n=20. |
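The "naive duplicate check" row above is worth seeing concretely. A minimal Python sketch (function names are illustrative) of the quadratic version next to its linear fix:

```python
def has_duplicate_quadratic(xs):
    # O(n^2): compares every pair of elements
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicate_linear(xs):
    # O(n): one pass, with an O(1)-average hash-set membership test
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both return the same answers; only the growth rate differs. At n=1M the first does ~5×10¹¹ comparisons, the second ~10⁶.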
The jump from O(n log n) to O(n²) is where most real-world performance disasters live. Loops that look fine in code review become unworkable when input grows from 100 to 100,000.
One loop over n inputs is O(n). A nested loop where both levels iterate over n is O(n²). A loop that halves n on each iteration is O(log n). Multiply nested loops, add sequential ones — then drop everything but the dominant term.
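Those counting rules, made executable (the functions are toy examples that just count their own iterations):

```python
def pair_count(xs):
    # two nested loops over n items: n * n iterations -> O(n^2)
    count = 0
    for _ in xs:
        for _ in xs:
            count += 1
    return count

def halving_steps(n):
    # n shrinks by half each pass -> O(log n) iterations
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps
```

`pair_count` on 10 items does 100 iterations; `halving_steps(2**20)` takes exactly 20 passes — the "~20 steps at n=1M" row in the table above.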
A recursive call doing constant work and recursing twice on n/2 is T(n) = 2T(n/2) + O(1) → O(n). Mergesort splits into two halves and does O(n) work to merge: T(n) = 2T(n/2) + O(n) → O(n log n). The Master Theorem solves the common cases; for trickier ones, draw the recursion tree.
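The mergesort recurrence maps directly onto code — two recursive calls on halves, plus a linear merge. A minimal sketch:

```python
def merge_sort(xs):
    # T(n) = 2T(n/2) + O(n)  ->  O(n log n)
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # merge: O(n) work at each of the O(log n) levels of recursion
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Reading the recurrence off the code: two calls on n/2 (the `2T(n/2)`) and one merge loop over n elements (the `O(n)`).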
A dynamic array's append is O(n) when it resizes — but resizes happen rarely enough that the average cost is O(1). Same for hash maps. "Amortized O(1)" means total cost over many operations divided by their count, not each individual call.
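A toy dynamic array (a sketch, not how any real runtime implements it) makes the amortized argument countable — track how many element copies resizing forces:

```python
class DynArray:
    """Toy dynamic array: doubling capacity makes append amortized O(1)."""
    def __init__(self):
        self.capacity, self.size, self.copies = 1, 0, 0
        self.data = [None]

    def append(self, x):
        if self.size == self.capacity:
            # O(n) resize: copy everything into a buffer twice as large
            self.capacity *= 2
            new = [None] * self.capacity
            for i in range(self.size):
                new[i] = self.data[i]
                self.copies += 1
            self.data = new
        self.data[self.size] = x
        self.size += 1

arr = DynArray()
for i in range(1000):
    arr.append(i)
```

After 1,000 appends the total copies are 1 + 2 + 4 + … + 512 = 1,023 — under 2n. Total cost divided by operation count stays constant, which is exactly what "amortized O(1)" claims.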
Quicksort is O(n log n) average, O(n²) worst. Hash map lookup is O(1) average, O(n) worst (everything hashes to one bucket). Pick the bound that matches your input — adversarial inputs need worst-case guarantees.
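The quicksort gap is easy to demonstrate. A sketch (not a production quicksort — first-element pivot, extra allocations) that counts comparisons on random vs. already-sorted input:

```python
import random

def qsort_count(xs):
    """Quicksort with a first-element pivot; returns (sorted list, comparisons)."""
    comps = 0
    def qs(a):
        nonlocal comps
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        comps += len(rest)
        return (qs([x for x in rest if x < pivot])
                + [pivot]
                + qs([x for x in rest if x >= pivot]))
    return qs(xs), comps

random.seed(0)
shuffled = list(range(200))
random.shuffle(shuffled)
_, avg_comps = qsort_count(shuffled)             # ~ n log n comparisons
_, worst_comps = qsort_count(list(range(200)))   # sorted input: ~ n^2 / 2
```

On sorted input every partition is maximally lopsided, so the count is exactly 199 + 198 + … + 1 = 19,900 — the quadratic worst case an adversary can trigger on purpose.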
Big-O isn't just time. Space complexity describes how much extra memory an algorithm needs as input grows. An in-place sort is O(1) extra space; mergesort is O(n) extra space. For large inputs that don't fit in RAM, space dominates the conversation — a slower algorithm that streams beats a faster one that allocates.
Recursion has a hidden space cost: each recursive call adds a stack frame. A "simple" recursive tree walk is O(h) space, where h is the tree's height — fine for a balanced tree where h ≈ log n, but a stack overflow when the structure degenerates into a chain, like a million-node linked list.
An O(n log n) algorithm with a 1000× constant beats an O(n²) algorithm only past some crossover. For small n, simple O(n²) (insertion sort, linear search) often wins. Production sort implementations use insertion sort below ~16 elements for this reason.
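This is why hybrid sorts exist. A sketch of the pattern (the cutoff of 16 is illustrative — real libraries tune it empirically): mergesort that hands small runs to insertion sort.

```python
CUTOFF = 16  # illustrative threshold, not a library constant

def insertion_sort(xs, lo, hi):
    # O(n^2) worst case, but tiny constants on small, cache-resident runs
    for i in range(lo + 1, hi):
        x, j = xs[i], i
        while j > lo and xs[j - 1] > x:
            xs[j] = xs[j - 1]
            j -= 1
        xs[j] = x

def hybrid_sort(xs, lo=0, hi=None):
    if hi is None:
        hi = len(xs)
    if hi - lo <= CUTOFF:
        insertion_sort(xs, lo, hi)  # below the crossover: simple wins
        return
    mid = (lo + hi) // 2
    hybrid_sort(xs, lo, mid)
    hybrid_sort(xs, mid, hi)
    # merge the two sorted halves back into xs[lo:hi]
    merged, i, j = [], lo, mid
    while i < mid and j < hi:
        if xs[i] <= xs[j]:
            merged.append(xs[i]); i += 1
        else:
            merged.append(xs[j]); j += 1
    merged += xs[i:mid] + xs[j:hi]
    xs[lo:hi] = merged

data = [9, 3, 7, 1, 8, 2, 6, 0, 5, 4] * 5  # 50 elements, above the cutoff
hybrid_sort(data)
```

The asymptotics stay O(n log n); the cutoff only trades away the constant-factor overhead of recursing all the way down to single elements.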
Two algorithms with the same Big-O can differ 10–100× in real time depending on cache behavior. A linear scan of an array beats a "smarter" linked-list traversal because contiguous memory is prefetched. Big-O ignores this entirely.
An O(n²) algorithm running entirely in CPU is often faster than an O(n) algorithm that makes n network calls. When work crosses a process or disk boundary, Big-O on the in-memory part is the wrong model.
"O(n)" — but what's n? Number of users? Number of orders per user? If you have multiple inputs, write them: O(u·o), not O(n). This is where production performance bugs hide.
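A hypothetical sketch of the distinction (the data shape and names are invented for illustration): a report over u users, each with their own orders.

```python
def build_report(users):
    # Outer loop: u iterations. Inner loop: o_i iterations per user.
    # Total work is O(u * o_avg) = O(total orders) -- calling it "O(n)"
    # hides whether n is users, orders, or their product.
    lines = []
    for user in users:
        for order in user["orders"]:
            lines.append(f"{user['name']}: {order}")
    return lines

users = [
    {"name": "ada", "orders": [101, 102]},
    {"name": "bob", "orders": [103]},
]
report = build_report(users)
```

"Number of users doubled" and "orders per user doubled" both double this function's cost — but a nested per-user check that's O(u·o²) would react very differently to each, which is exactly the distinction O(n) erases.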
| Structure | Access | Search | Insert | Delete |
|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) |
| Dynamic array | O(1) | O(n) | O(1) amortized (append) | O(n) |
| Linked list | O(n) | O(n) | O(1) given node | O(1) given node |
| Hash map | — | O(1) avg / O(n) worst | O(1) avg | O(1) avg |
| Balanced BST | — | O(log n) | O(log n) | O(log n) |
| Heap (priority queue) | O(1) peek | — | O(log n) | O(log n) |
| Stack / Queue | O(1) top/front | O(n) | O(1) | O(1) |