Dynamic programming (DP) is a powerful technique for solving problems that exhibit overlapping sub‑problems and optimal substructure. Two complementary approaches dominate DP implementations: memoization (top‑down) and tabulation (bottom‑up). This tutorial explores both strategies in depth, compares their trade‑offs, and provides practical code examples in Python and JavaScript.
1. Core Concepts
Before diving into the implementations, let’s clarify the fundamental ideas that underpin memoization and tabulation.
- Overlapping Sub‑problems: The same sub‑problem is solved multiple times during recursion.
- Optimal Substructure: An optimal solution can be constructed from optimal solutions of its sub‑problems.
- State Space: The set of all unique sub‑problem inputs that must be evaluated.
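To make the overlap concrete, the sketch below wraps a naive Fibonacci with a small call-counting decorator (`counted` and `calls` are illustrative names, not library features) and shows how often the same sub‑problem is recomputed:

```python
from functools import wraps

calls = {}

def counted(fn):
    """Tally how many times fn is invoked with each argument."""
    @wraps(fn)
    def wrapper(n):
        calls[n] = calls.get(n, 0) + 1
        return fn(n)
    return wrapper

@counted
def fib(n):
    # Naive recursion: no caching, so sub-problems repeat.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(10)
print(calls[2])  # -> 34: fib(2) is solved 34 separate times
```

With memoization or tabulation, each of these states would be computed exactly once.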
1.1 Memoization (Top‑Down DP)
Memoization augments a naive recursive solution by caching the result of each sub‑problem the first time it is computed. Subsequent calls with the same arguments retrieve the result from the cache, eliminating redundant work.
- Implementation style: Recursive.
- Cache location: Usually a hash map (dictionary) keyed by function arguments.
- When to use: When the state space is sparse or when the recursive formulation is natural.
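As a minimal sketch of this pattern, the hand‑rolled `memoize` decorator below (an illustrative helper, not a library function) caches results in a plain dictionary keyed by the argument tuple:

```python
def memoize(fn):
    """Cache fn's results in a dict keyed by the call arguments."""
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)   # compute once, then reuse
        return cache[args]
    return wrapper

@memoize
def grid_paths(r, c):
    """Number of monotone lattice paths through an r x c grid."""
    if r == 0 or c == 0:
        return 1
    return grid_paths(r - 1, c) + grid_paths(r, c - 1)

print(grid_paths(10, 10))  # -> 184756
```

Only the states actually reached by the recursion ever enter the cache, which is exactly why memoization suits sparse state spaces.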
1.2 Tabulation (Bottom‑Up DP)
Tabulation builds a table (often an array) iteratively, starting from the smallest sub‑problems and working up to the target problem. Each entry in the table represents the solution for a particular state.
- Implementation style: Iterative.
- Table location: Usually an array or matrix.
- When to use: When the state space is dense, when you need the full table for reconstruction, or when you want guaranteed O(1) access.
2. Comparative Analysis
| Aspect | Memoization (Top‑Down) | Tabulation (Bottom‑Up) |
|---|---|---|
| Code style | Recursive, easier to express mathematically | Iterative, often more verbose |
| Memory usage | Cache (O(visited states)) + call stack (O(depth)) | Table only (O(number of states)) |
| Execution order | On‑demand (only visited states) | Full table (all states) |
| Ease of debugging | Straightforward due to recursion | Can be harder; must track indices |
| Typical use‑case | Sparse state space, complex recurrence | Dense state space, need solution reconstruction |
3. Practical Implementations
3.1 Fibonacci Numbers – Memoization (Python)
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Return the nth Fibonacci number using memoization."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Example usage
print([fib(i) for i in range(10)])  # -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```
The lru_cache decorator automatically stores previously computed results, turning the exponential naïve recursion into linear time.
3.2 Fibonacci Numbers – Tabulation (Python)
```python
def fib_tab(n: int) -> int:
    """Iterative bottom‑up Fibonacci using a table (list)."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print([fib_tab(i) for i in range(10)])  # -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```
3.3 Longest Common Subsequence (LCS) – Memoization (JavaScript)
```javascript
function lcsMemo(str1, str2) {
  const memo = new Map();
  function helper(i, j) {
    const key = i + ',' + j;
    if (memo.has(key)) return memo.get(key);
    if (i === 0 || j === 0) return '';
    let res;
    if (str1[i - 1] === str2[j - 1]) {
      res = helper(i - 1, j - 1) + str1[i - 1];
    } else {
      const skipI = helper(i - 1, j);
      const skipJ = helper(i, j - 1);
      res = skipI.length > skipJ.length ? skipI : skipJ;
    }
    memo.set(key, res);
    return res;
  }
  return helper(str1.length, str2.length);
}

console.log(lcsMemo('ABCDGH', 'AEDFHR')); // -> "ADH"
```
3.4 Longest Common Subsequence – Tabulation (JavaScript)
```javascript
function lcsTab(str1, str2) {
  const m = str1.length, n = str2.length;
  const dp = Array.from({ length: m + 1 }, () => Array(n + 1).fill(''));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (str1[i - 1] === str2[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] + str1[i - 1];
      } else {
        dp[i][j] = dp[i - 1][j].length > dp[i][j - 1].length ? dp[i - 1][j] : dp[i][j - 1];
      }
    }
  }
  return dp[m][n];
}

console.log(lcsTab('ABCDGH', 'AEDFHR')); // -> "ADH"
```
4. When to Prefer One Over the Other
- Use memoization when:
  - The recursive formulation is natural and concise.
  - Only a fraction of the state space is reachable (sparse).
  - You need early termination (e.g., pruning in search problems).
- Use tabulation when:
  - The state space is dense and you can pre‑compute all entries efficiently.
  - You require the entire table for reconstruction (e.g., printing the optimal path).
  - The memory overhead of the recursion stack must be avoided.
In practice, start with a memoized recursive solution for clarity. If profiling reveals performance bottlenecks, refactor to a tabulated version.
5. Advanced Topics
5.1 Space Optimization
Many tabulation algorithms only need the previous row (or column) of the DP table. By reusing a rolling array, you can reduce space from O(n·m) to O(min(n,m)).
```python
def fib_optimized(n: int) -> int:
    """Fibonacci in O(1) extra space: keep only the last two values."""
    if n < 2:
        return n
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b
```
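The same rolling idea applies to 2‑D tables. The sketch below computes only the length of an LCS with two rows, keeping the shorter string on the column dimension so space is O(min(n, m)); `lcs_len` is an illustrative name:

```python
def lcs_len(a: str, b: str) -> int:
    """LCS length using two rolling rows instead of a full (m+1) x (n+1) table."""
    if len(a) < len(b):
        a, b = b, a  # iterate rows over the longer string; columns stay short
    prev = [0] * (len(b) + 1)
    for ch in a:
        curr = [0] * (len(b) + 1)
        for j, cb in enumerate(b, start=1):
            curr[j] = prev[j - 1] + 1 if ch == cb else max(prev[j], curr[j - 1])
        prev = curr  # the finished row becomes "previous" for the next one
    return prev[-1]

print(lcs_len('ABCDGH', 'AEDFHR'))  # -> 3 (the LCS "ADH")
```

Note the trade‑off: with only two rows kept, the subsequence itself can no longer be reconstructed, which is why the full‑table version in section 3.4 returns the actual string.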
5.2 Iterative Deepening with Memoization
For problems where the depth of recursion is unknown (e.g., game tree search), combine iterative deepening with memoization to keep the cache across depth levels.
5.3 Parallel DP
Tabulation lends itself to parallelization because independent cells can be computed simultaneously when their dependencies are satisfied. Memoization, being recursive, is harder to parallelize without sophisticated task‑graph frameworks.
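One common dependency‑respecting order is the anti‑diagonal "wavefront": every cell (i, j) with i + j = d depends only on diagonals d−1 and d−2, so cells within one diagonal are mutually independent. The sketch below fills the LCS length table in that order sequentially; in a real parallel implementation each inner loop could be distributed across workers (the function name is illustrative):

```python
def lcs_len_wavefront(a: str, b: str) -> int:
    """Fill the LCS table along anti-diagonals (constant i + j)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for d in range(2, m + n + 1):              # diagonal index d = i + j
        # All (i, j) on this diagonal are independent of each other.
        for i in range(max(1, d - n), min(m, d - 1) + 1):
            j = d - i
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_len_wavefront('ABCDGH', 'AEDFHR'))  # -> 3
```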
6. Summary
Memoization and tabulation solve the same class of problems but differ in execution order: memoization computes states lazily, on demand, through recursion, while tabulation computes every state eagerly in a fixed iterative order. Prefer memoization when the reachable state space is sparse or the recurrence is easiest to express recursively; prefer tabulation when all states must be computed anyway, when the full table is needed for reconstruction, or when recursion depth is a concern. Rolling arrays can further shrink a tabulated solution's memory footprint.
7. Frequently Asked Questions
Q: Can I mix memoization and tabulation in the same algorithm?
A: Yes. A hybrid approach is common: use memoization for a complex sub‑problem while tabulating another part that has a dense state space. The key is to keep the interfaces clear and avoid duplicate caching.
Q: Is memoization always slower than tabulation because of dictionary look‑ups?
A: Not necessarily. For sparse problems, memoization may compute far fewer states, offsetting the overhead of hash‑map access. Profiling is the best way to determine the actual cost.
Q: How do I clear a memoization cache in Python?
A: If you used functools.lru_cache, call func.cache_clear(). For custom dictionaries, assign a new empty dict or use dict.clear().
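For example, using only the documented lru_cache methods:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def square(n: int) -> int:
    return n * n

square(3)
square(4)
print(square.cache_info().currsize)  # -> 2 (two cached entries)
square.cache_clear()
print(square.cache_info().currsize)  # -> 0 (cache emptied)
```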
8. Quick Quiz
Q. Which of the following statements about memoization is FALSE?
- It stores results of function calls in a cache.
- It guarantees O(1) space complexity.
- It can be implemented with decorators in Python.
- It avoids recomputation of overlapping sub‑problems.
Answer: It guarantees O(1) space complexity.
Memoization stores one entry per unique sub‑problem, so space grows with the number of distinct states, not constant.