**Dynamic Programming**

For example, Fibonacci numbers are defined by the following recurrence:

```
F(0) = 0
F(1) = 1
F(i) = F(i-1) + F(i-2)
```

It is easy to directly write a recursive algorithm according to this definition:

```
Fibonacci-R(i)
    if i == 0
        return 0
    if i == 1
        return 1
    return Fibonacci-R(i-1) + Fibonacci-R(i-2)
```

This algorithm contains repeated calculations, so it is not efficient. For example, to get F(4), it calculates F(3) and F(2) separately, though in calculating F(3), F(2) is already calculated. The complexity of this algorithm is O(2^{n}).

It is easy to see that the algorithm can be rewritten in a "bottom-up" manner with the intermediate results saved in an array. This gives us a "dynamic programming" version of the algorithm:

```
Fibonacci-DP(i)
    A[0] = 0    // assume 0 is a valid index
    A[1] = 1
    for j = 2 to i
        A[j] = A[j-1] + A[j-2]
    return A[i]
```

From this simple example, we see the basic elements of *dynamic programming*, that is, though the solution is still defined in a "top-down" manner (from solution to subsolution), it is built in a "bottom-up" manner (from subsolution to solution). Since the subsolutions to subproblems are kept in a data structure for possible later use, each of them is only calculated once, so the complexity is O(n).
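The bottom-up idea above can be sketched in Python (the function name `fib_dp` is mine, not from the text):

```python
def fib_dp(i):
    """Bottom-up Fibonacci: fill a table from small to large indices."""
    if i == 0:
        return 0
    a = [0] * (i + 1)  # a[j] will hold F(j)
    a[1] = 1
    for j in range(2, i + 1):
        a[j] = a[j - 1] + a[j - 2]  # each entry computed once: O(n) total
    return a[i]
```

Each table entry is filled exactly once, which is where the O(n) bound comes from.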

*Memoization* is a variation of dynamic programming that keeps the efficiency while maintaining a top-down structure.

```
Fibonacci-M(i)
    A[0] = 0    // assume 0 is a valid index
    A[1] = 1
    for j = 2 to i
        A[j] = -1    // -1 marks "not yet computed"
    return Fibonacci-M2(A, i)

Fibonacci-M2(A, i)
    if A[i] < 0
        A[i] = Fibonacci-M2(A, i-1) + Fibonacci-M2(A, i-2)
    return A[i]
```

The complexity of this algorithm is also O(n).
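A Python sketch of the same top-down scheme, using the same "-1 means not yet computed" convention (the names `fib_memo` and `solve` are mine):

```python
def fib_memo(i):
    """Top-down Fibonacci with a memo table; -1 marks 'not yet computed'."""
    a = [-1] * (max(i, 1) + 1)
    a[0] = 0
    if i >= 1:
        a[1] = 1

    def solve(n):
        if a[n] < 0:  # computed only on the first request
            a[n] = solve(n - 1) + solve(n - 2)
        return a[n]

    return solve(i)
```

In practice, Python's standard library offers `functools.lru_cache`, which memoizes a recursive function automatically instead of managing the table by hand.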

Dynamic programming is often used in optimization, where the optimal solution corresponds to a special way to divide the problem into subproblems. When different ways of division are compared, it is quite common that some subproblems are involved multiple times; therefore dynamic programming provides a more efficient algorithm than (direct) recursion.

The space cost of dynamic programming comes from the need to keep intermediate results, as well as to remember the structure of the optimal solution, i.e., where the solution comes from in each step.

The product of a p-by-q matrix and a q-by-r matrix is a p-by-r matrix, which contains p*r elements, each calculated from q scalar multiplications and q − 1 additions. If we count the number of scalar multiplications and use it to represent running time, then for such an operation, it is p*q*r.
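The p*q*r count can be checked directly by instrumenting the naive matrix product (a sketch; `matmul_count` is my own name):

```python
def matmul_count(A, B):
    """Naive matrix product that also counts scalar multiplications."""
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q  # inner dimensions must match
    C = [[0] * r for _ in range(p)]
    count = 0
    for i in range(p):          # p rows of the result
        for j in range(r):      # r columns of the result
            for k in range(q):  # q multiplications per element
                C[i][j] += A[i][k] * B[k][j]
                count += 1
    return C, count
```

For a 2-by-3 matrix times a 3-by-4 matrix, the counter comes out to 2*3*4 = 24.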

To calculate the product of a matrix-chain *A*_{1}*A*_{2}...*A*_{n}, n − 1 matrix multiplications are needed. Though in each step any pair of adjacent matrices can be multiplied, different orders of multiplication have different costs.
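To see how large the difference can be, consider a hypothetical chain of three matrices (the dimensions below are my own illustration, not from the text):

```python
def cost(p, q, r):
    """Scalar multiplications for a p-by-q times q-by-r matrix product."""
    return p * q * r

# Hypothetical chain: A1 is 10-by-100, A2 is 100-by-5, A3 is 5-by-50.
# (A1 A2) A3: first a 10-by-5 intermediate, then the final 10-by-50 product.
left_first = cost(10, 100, 5) + cost(10, 5, 50)     # 5000 + 2500 = 7500
# A1 (A2 A3): first a 100-by-50 intermediate, then the final product.
right_first = cost(100, 5, 50) + cost(10, 100, 50)  # 25000 + 50000 = 75000
```

Here one order is ten times cheaper than the other, even though both compute the same final matrix.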

In general, for matrix-chain *A*_{i}...*A*_{j}, where each matrix *A*_{k} is P_{k-1}-by-P_{k}, let m[i, j] be the minimum number of scalar multiplications needed to compute the product. It satisfies the recurrence:

```
m[i, j] = 0                                                     if i = j
m[i, j] = min over i <= k < j of
          { m[i, k] + m[k+1, j] + P_{i-1} P_{k} P_{j} }         if i < j
```

Since certain subproblems are shared among different ways of splitting the chain, each m[i, j] only needs to be calculated once.

Example: Given the following matrix dimensions:

A_{1} is 30-by-35

A_{2} is 35-by-15

A_{3} is 15-by-5

A_{4} is 5-by-10

A_{5} is 10-by-20

A_{6} is 20-by-25

P: [30, 35, 15, 5, 10, 20, 25]

then the output of the program is a minimum cost of 15125 scalar multiplications,

which means that the optimal multiplication order is ((A_{1}(A_{2}A_{3}))((A_{4}A_{5})A_{6})).

The matrix-chain multiplication problem can be solved by either a bottom-up dynamic programming algorithm or a top-down, memoized dynamic-programming algorithm. The running time of both algorithms is Θ(n^{3}), and both need Θ(n^{2}) space. Please note that these algorithms determine the best order to carry out the matrix multiplications, without doing the multiplications themselves.
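The bottom-up algorithm can be sketched in Python as follows, keeping both the cost table m and a split table s that records where each optimal solution comes from (the function names are mine; indices are 1-based to match the notation above):

```python
def matrix_chain_order(P):
    """Bottom-up matrix-chain DP.

    P[i-1]-by-P[i] is the dimension of matrix A_i.  Returns the cost
    table m and the split table s used to rebuild the optimal order.
    """
    n = len(P) - 1  # number of matrices in the chain
    m = [[0] * (n + 1) for _ in range(n + 1)]  # m[i][j]: min cost for A_i..A_j
    s = [[0] * (n + 1) for _ in range(n + 1)]  # s[i][j]: best split position k
    for length in range(2, n + 1):             # solve shorter chains first
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float('inf')
            for k in range(i, j):              # try every split point
                q = m[i][k] + m[k + 1][j] + P[i - 1] * P[k] * P[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

def parenthesize(s, i, j):
    """Rebuild the optimal multiplication order from the split table."""
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + parenthesize(s, i, k) + parenthesize(s, k + 1, j) + ")"
```

Running it on the P array of the example above, m[1][6] is 15125 and the reconstructed order is ((A1(A2A3))((A4A5)A6)).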

Algorithm:

Time cost: Θ(m*n). Space cost: Θ(m*n).

Example: