Most programs are a thin layer of business logic on top of a handful of data structures. Choosing the wrong one isn't a "performance problem" — it's a design problem that shows up later as one. This page is the short version of when to use each.
Contiguous block of same-type elements. O(1) index access, O(n) insert/delete in the middle. Dynamic arrays (Java ArrayList, Python list, Go slice, C++ vector, JS array) grow geometrically on resize, giving amortized O(1) append.
The default. Cache-friendly, simple, and almost always faster than a linked list at any reasonable size. Reach for something else only when you have a specific reason.
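You can watch the amortization in CPython, where sys.getsizeof reports the over-allocated buffer; the exact sizes are an implementation detail that varies by version, so treat this as a sketch:

```python
import sys

# Watch CPython's list over-allocate: the reported byte size jumps in
# steps, not on every append, which is what makes append amortized O(1).
xs = []
last_size = sys.getsizeof(xs)
for i in range(32):
    xs.append(i)
    size = sys.getsizeof(xs)
    if size != last_size:
        print(f"len={len(xs):>2}  allocated bytes={size}")
        last_size = size
```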
Nodes pointing to next (and optionally previous). O(1) insert/delete given a node, O(n) to find one. Pays a pointer chase per element — terrible cache behavior.
Use when: you splice nodes in/out frequently and already hold the node, or you need a stable iterator under modification. Most "linked list" use-cases in app code should be a deque or array instead.
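If you do already hold the node, the O(1) splice looks like this; a minimal doubly linked sketch, not a production list:

```python
# Minimal doubly linked node; unlink() is the O(1) splice you get
# when you already hold a reference to the node.
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

    def unlink(self):
        # Rewire the neighbors around this node; no traversal needed.
        if self.prev:
            self.prev.next = self.next
        if self.next:
            self.next.prev = self.prev
        self.prev = self.next = None

a, b, c = Node(1), Node(2), Node(3)
a.next, b.prev = b, a
b.next, c.prev = c, b
b.unlink()           # O(1): a <-> c
print(a.next.value)  # 3
```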
Push and pop from the same end. Function calls, undo histories, expression parsing, depth-first traversal. Implement with a dynamic array — O(1) at the top, no allocator pressure.
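The classic list-as-stack example is bracket matching; a small sketch:

```python
# A Python list is already a fine stack: append/pop work at the end.
def balanced(text: str) -> bool:
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack

print(balanced("a[b{c}](d)"))  # True
print(balanced("(]"))          # False
```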
Add at one end, remove from the other. Job pipelines, BFS, producer/consumer buffers. Deque (double-ended queue) lets you do either end — used for sliding windows and work-stealing. Ring buffers are the fast bounded variant.
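A sliding-window maximum is a good illustration of why you want both ends; a sketch using Python's collections.deque:

```python
from collections import deque

# Sliding-window maximum: the deque holds indices of candidates,
# largest at the front; both ends get pushed and popped, hence a deque.
def window_max(xs, k):
    dq, out = deque(), []
    for i, x in enumerate(xs):
        while dq and xs[dq[-1]] <= x:   # drop smaller candidates from the back
            dq.pop()
        dq.append(i)
        if dq[0] == i - k:              # front fell out of the window
            dq.popleft()
        if i >= k - 1:
            out.append(xs[dq[0]])
    return out

print(window_max([1, 3, -1, -3, 5, 3, 6, 7], 3))  # [3, 3, 5, 5, 6, 7]
```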
Average O(1) lookup, insert, and delete by key. Worst case O(n) if everything collides; modern implementations defang adversarial inputs by randomizing the hash seed (Python, Go, Rust) or, in Java 8+, by converting overloaded buckets to balanced trees.
No ordering guarantee. Iteration order is implementation-defined or insertion-ordered depending on the language. If you need ordered iteration, use a tree map.
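The bread-and-butter use is counting and grouping; a small sketch:

```python
# Counting word frequencies: one dict, O(1) average per update.
def word_counts(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_counts("to be or not to be"))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```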
A hash map without values — just keys. The right tool for "have I seen this?" and "is x in the set?" Don't fake it with a list and contains(); that's O(n) per check.
Override hashCode/__hash__ whenever you override equality. Mismatched contracts are a top-tier bug.
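In Python the same contract is __eq__/__hash__; a frozen dataclass derives both consistently. A minimal sketch:

```python
from dataclasses import dataclass

# If two objects compare equal they must hash equal, or sets and dict
# keys silently misbehave. A frozen dataclass generates both together.
@dataclass(frozen=True)
class Point:
    x: int
    y: int

seen = {Point(1, 2), Point(1, 2), Point(3, 4)}
print(len(seen))            # 2: duplicates collapse
print(Point(1, 2) in seen)  # True: O(1) average membership test
```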
Red-black trees, AVL trees, B-trees. O(log n) insert/lookup/delete, and iteration in sorted order. Use when you need both key lookup and ordered traversal: range queries, "next-largest," sorted iteration. Java TreeMap, C++ std::map, .NET SortedDictionary. Databases use B-trees for indexes; same idea, optimized for disk pages.
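Python has no built-in tree map (TreeMap and std::map are the tree-backed options above); as a stand-in sketch, a sorted list plus bisect answers the same ordered queries:

```python
import bisect

# Ordered queries over a sorted list: O(log n) search, O(n) insert.
# A real tree map gives O(log n) for both; this is just the query shape.
keys = [3, 7, 12, 19, 25]

def next_largest(k):
    i = bisect.bisect_right(keys, k)
    return keys[i] if i < len(keys) else None

def range_keys(lo, hi):  # all keys in [lo, hi]
    return keys[bisect.bisect_left(keys, lo):bisect.bisect_right(keys, hi)]

print(next_largest(12))   # 19
print(range_keys(5, 20))  # [7, 12, 19]
```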
A tree where the parent is always ≤ (or ≥) its children. O(log n) push, O(log n) pop-min, O(1) peek. The right tool for: top-K, scheduling, Dijkstra's algorithm, event simulation.
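Top-K with a bounded min-heap, using Python's heapq; a sketch:

```python
import heapq

# Top-K with a min-heap of size k: push everything, evict the minimum
# whenever the heap grows past k, so the k largest survive. O(n log k).
def top_k(xs, k):
    heap = []
    for x in xs:
        heapq.heappush(heap, x)
        if len(heap) > k:
            heapq.heappop(heap)
    return sorted(heap, reverse=True)

print(top_k([5, 1, 9, 3, 14, 7, 2], 3))  # [14, 9, 7]
```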
Tree keyed by character, one path per word. Autocomplete, IP routing tables, dictionary lookup. Lookup is O(k) where k is key length — independent of how many keys are stored.
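A bare-bones autocomplete trie; a sketch, not tuned for memory:

```python
# One dict of children per node, plus an end-of-word marker.
class Trie:
    def __init__(self):
        self.children = {}
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def complete(self, prefix):
        node = self
        for ch in prefix:                 # O(k) walk down the prefix
            if ch not in node.children:
                return []
            node = node.children[ch]
        out = []
        def walk(n, path):
            if n.is_word:
                out.append(prefix + path)
            for ch, child in n.children.items():
                walk(child, path + ch)
        walk(node, "")
        return out

t = Trie()
for w in ["car", "card", "care", "dog"]:
    t.insert(w)
print(t.complete("car"))  # ['car', 'card', 'care']
```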
Specialized trees for range queries (sum/min/max over [i, j]) with point updates in O(log n). Show up in competitive programming and time-series rollups.
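A Fenwick (binary indexed) tree is the compact variant for prefix sums with point updates; a sketch:

```python
# Fenwick tree: point update and prefix sum both in O(log n).
class Fenwick:
    def __init__(self, n):
        self.tree = [0] * (n + 1)     # 1-indexed internally

    def add(self, i, delta):          # point update at 0-based index i
        i += 1
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & -i

    def prefix_sum(self, i):          # sum of [0, i]
        i += 1
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & -i
        return total

    def range_sum(self, lo, hi):      # sum of [lo, hi]
        return self.prefix_sum(hi) - (self.prefix_sum(lo - 1) if lo else 0)

f = Fenwick(8)
for i, x in enumerate([2, 1, 4, 3, 7, 1, 0, 5]):
    f.add(i, x)
print(f.range_sum(2, 4))  # 4 + 3 + 7 = 14
```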
Permission inheritance, organizational hierarchies, foreign-key relationships, dependency resolution, social connections, configuration overrides — all graphs. Recognizing the shape is half the work; the algorithm follows.
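For example, dependency resolution falls straight out of a topological sort over an adjacency list; a sketch with hypothetical package names:

```python
from collections import deque

# Packages are nodes, "depends on" edges. Kahn's algorithm yields a
# valid build order, or detects a cycle if none exists.
deps = {                       # hypothetical packages and their deps
    "app":  ["http", "log"],
    "http": ["sock"],
    "log":  [],
    "sock": [],
}

def build_order(deps):
    dependents = {p: [] for p in deps}
    indegree = {p: len(reqs) for p, reqs in deps.items()}
    for pkg, reqs in deps.items():
        for r in reqs:
            dependents[r].append(pkg)
    ready = deque(p for p, d in indegree.items() if d == 0)
    order = []
    while ready:
        pkg = ready.popleft()
        order.append(pkg)
        for dep in dependents[pkg]:
            indegree[dep] -= 1
            if indegree[dep] == 0:
                ready.append(dep)
    if len(order) != len(deps):
        raise ValueError("dependency cycle")
    return order

print(build_order(deps))  # ['log', 'sock', 'http', 'app']
```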