**Dynamic Programming and Memoization**

Example: Fibonacci numbers

Direct recursive algorithm:

```
Fibonacci-R(n)
  if n < 2
    then return n
  return Fibonacci-R(n-1) + Fibonacci-R(n-2)
```

Complexity: Θ(φ^{n}), where φ = (1+√5)/2 is the golden ratio.
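The direct recursion can be sketched in Python (the function name `fib_r` is ours, not from the original pseudocode):

```python
def fib_r(n):
    """Direct recursion: recomputes the same subproblems many times."""
    if n < 2:
        return n
    return fib_r(n - 1) + fib_r(n - 2)
```

Because each call spawns two more, the call tree grows exponentially in n.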

Repeated work: to calculate F(4), the algorithm computes F(3) and F(2) separately, even though F(2) has already been computed while computing F(3).

*Dynamic programming*: to turn a "top-down" recursion into a "bottom-up" construction.

```
Fibonacci-DP(n)
  A[0] <- 0    // assume 0 is a valid index
  A[1] <- 1
  for j <- 2 to n
    A[j] <- A[j-1] + A[j-2]
  return A[n]
```

Complexity: Θ(n).
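The bottom-up construction translates directly into Python (a minimal sketch; the name `fib_dp` is ours):

```python
def fib_dp(n):
    """Bottom-up: fill a table from the base cases upward."""
    a = [0] * (n + 1)
    if n >= 1:
        a[1] = 1
    for j in range(2, n + 1):
        a[j] = a[j - 1] + a[j - 2]
    return a[n]
```

Each table entry is computed exactly once, giving the Θ(n) cost.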

*Memoization*: a variation of dynamic programming that keeps the efficiency, while maintaining a top-down strategy.

```
Fibonacci-M(n)
  A[0] <- 0    // assume 0 is a valid index
  A[1] <- 1
  for j <- 2 to n
    A[j] <- -1
  return Fibonacci-M2(A, n)

Fibonacci-M2(A, n)
  if A[n] < 0
    then A[n] <- Fibonacci-M2(A, n-1) + Fibonacci-M2(A, n-2)
  return A[n]
```

Complexity: also Θ(n).
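The same memoization idea in Python (a sketch; -1 marks an entry not yet computed, as in the pseudocode):

```python
def fib_m(n):
    """Top-down with memoization: -1 marks 'not computed yet'."""
    a = [0, 1] + [-1] * (n - 1)  # base cases filled, rest unknown
    def solve(k):
        if a[k] < 0:
            a[k] = solve(k - 1) + solve(k - 2)
        return a[k]
    return solve(n)
```

The recursion is still top-down, but each entry is filled at most once.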

Dynamic programming is used in *optimization*: the optimal solution corresponds to a particular way of dividing the problem into sub-problems. When different ways of dividing are compared, it is quite common that some sub-problems are involved multiple times. The complexity of such a solution comes from the need to keep intermediate results, as well as to remember the structure of the optimal solution, i.e., how the problem is divided into sub-problems, step by step.

Example: matrix-chain multiplication

To calculate the product of a matrix-chain *A_{1}A_{2}...A_{n}*, n-1 matrix multiplications are needed, though different orders of carrying them out have different costs.

For matrix-chain *A_{1}A_{2}A_{3}*, if the three have sizes 10-by-2, 2-by-20, and 20-by-5, respectively, then the cost of (A_{1}A_{2})A_{3} is 10·2·20 + 10·20·5 = 1400 scalar multiplications, while the cost of A_{1}(A_{2}A_{3}) is only 2·20·5 + 10·2·5 = 300.

For matrix-chain *A_{i}...A_{j}*, where each *A_{k}* is a P_{k-1}-by-P_{k} matrix, let m[i, j] be the minimum cost of computing the product:

m[i, j] = 0, if i = j
m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + P_{i-1}P_{k}P_{j} }, if i < j

Use dynamic programming to solve this problem: calculate m[i, j] bottom-up, for sub-chains of increasing length.

Example: Given the following matrix dimensions:

A_{1} is 30-by-35

A_{2} is 35-by-15

A_{3} is 15-by-5

A_{4} is 5-by-10

A_{5} is 10-by-20

A_{6} is 20-by-25

then the minimum cost is 15125 scalar multiplications, which means that *A_{1}A_{2}A_{3}A_{4}A_{5}A_{6}* should be calculated as

((A_{1}(A_{2}A_{3}))((A_{4}A_{5})A_{6}))

Time cost: Θ(n^{3}), space cost: Θ(n^{2}).

Please note that this algorithm determines the best order to carry out the matrix multiplications, without doing the multiplications themselves.
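The recurrence can be sketched in Python as follows (names `matrix_chain_order`, `m`, `s` are ours; `p` holds the dimensions P_{0}..P_{n}, so A_{k} is p[k-1]-by-p[k]):

```python
def matrix_chain_order(p):
    """Minimum scalar multiplications to compute A_1...A_n,
    where A_k is p[k-1]-by-p[k]. Returns (cost table, split table)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: min cost
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: best split k
    for length in range(2, n + 1):              # sub-chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k
    return m, s
```

The split table `s` records how the optimal solution is structured, which is what allows the best parenthesization to be reconstructed without performing any matrix multiplication.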

(1) Longest Common Subsequence

For sequences X = x_{1}...x_{m} and Y = y_{1}...y_{n}, let c[i, j] be the length of a longest common subsequence (LCS) of the prefixes x_{1}...x_{i} and y_{1}...y_{j}; it can be constructed recursively:

c[i, j] = 0, if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1, if i, j > 0 and x_{i} = y_{j}
c[i, j] = max(c[i, j-1], c[i-1, j]), if i, j > 0 and x_{i} ≠ y_{j}

Time cost: Θ(mn). Space cost: Θ(mn).
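The LCS recurrence filled in bottom-up, as a Python sketch (the name `lcs_length` is ours):

```python
def lcs_length(x, y):
    """c[i][j] = length of an LCS of the prefixes x[:i] and y[:j]."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row/column 0 stay 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    return c[m][n]
```

As with matrix-chain multiplication, keeping the whole table would also allow an actual LCS to be read off by tracing back from c[m][n].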

(2) Optimal Binary Search Tree

If the keys in a BST have different probabilities of being searched, then a balanced tree may not give the lowest average cost (though it gives the lowest worst-case cost). Using dynamic programming, a BST with optimal average search time can be built, in a way similar to matrix-chain multiplication.
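A simplified Python sketch of the idea, under the assumption that only the keys' access probabilities `p` are given (no probabilities for unsuccessful searches), with expected cost counting one comparison per node on the search path:

```python
def optimal_bst_cost(p):
    """e[i][j]: minimum expected search cost over keys i..j (1-indexed),
    via e[i][j] = min over root r of e[i][r-1] + e[r+1][j] + w[i][j],
    where w[i][j] is the total probability of keys i..j."""
    n = len(p)
    prob = [0.0] + list(p)                       # shift to 1-indexed
    e = [[0.0] * (n + 2) for _ in range(n + 2)]  # e[i][i-1] = 0 (empty)
    w = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        w[i][i] = prob[i]
        e[i][i] = prob[i]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + prob[j]
            e[i][j] = w[i][j] + min(e[i][r - 1] + e[r + 1][j]
                                    for r in range(i, j + 1))
    return e[1][n]
```

As in matrix-chain multiplication, every sub-range of keys is a sub-problem, and the choice being optimized is which key becomes the root of the subtree.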

Time cost: Θ(n^{3}). Space cost: Θ(n^{2}).