###### Introduction to Dynamic Programming

Computing the nth Fibonacci number by direct recursion (fib(n) = fib(n-1) + fib(n-2)) takes
exponential time: at least 2^{n/2} steps. We can do better by noting that this computation is
highly redundant. For example, fib(3) is recomputed many times. To see this, look at the
recursion tree for, say, fib(8):

(This was generated using the "show_call_graph" utility from /Prog1, applied to call graphs generated by running the fib() routine given to you there.)
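To make the redundancy concrete, here is a small sketch (instrumentation is mine, not part of the /Prog1 routine) that counts how often each subproblem is recomputed when the naive recursion evaluates fib(8):

```python
from collections import Counter

calls = Counter()  # how many times each fib(k) gets (re)computed

def fib(n):
    calls[n] += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

result = fib(8)
print(result)               # 21
print(calls[3])             # fib(3) alone is recomputed 8 times
print(sum(calls.values()))  # 67 recursive calls in total just for fib(8)
```

The call counts themselves grow like the Fibonacci numbers, which is where the exponential blowup comes from.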

To improve this, we can use dynamic programming. This can be implemented in one of two ways:

- caching results of computations for later re-use (memoization), or
- filling out a table bottom-up.
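Both approaches can be sketched for Fibonacci (function names here are illustrative, not from the course code):

```python
from functools import lru_cache

# 1. Caching: same recursion as before, but each fib(k) is computed only once;
#    repeated calls are answered from the cache.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# 2. Bottom-up table: fill in fib(0), fib(1), ..., fib(n) in order, so each
#    entry's two dependencies are already available when we need them.
def fib_table(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Either way, each of the n+1 subproblems is solved exactly once, so the running time drops from exponential to linear.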

For details, see S04_CS141:FibonacciByDP .

We also considered counting paths: S04_CS141:CountingPathsByDP, and did a class exercise on counting the number of subsets of {1,2,...,n} that have size k: S04_CS141:NChooseKByDP .
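One way the n-choose-k exercise might go, assuming the standard recurrence C(n,k) = C(n-1,k-1) + C(n-1,k) (subsets of size k either contain element n or they don't); the one-row table trick and the function name are my own sketch:

```python
def n_choose_k(n, k):
    # C(n, k) = C(n-1, k-1) + C(n-1, k): a size-k subset of {1,...,n} either
    # contains n (choose k-1 from the rest) or does not (choose k from the rest).
    if k < 0 or k > n:
        return 0
    # table[j] holds C(i, j) for the current row i of Pascal's triangle.
    table = [1] + [0] * k
    for i in range(1, n + 1):
        # Update right-to-left so row i-1's values are still intact when read.
        for j in range(min(i, k), 0, -1):
            table[j] += table[j - 1]
    return table[k]
```

Here there are O(nk) subproblems, each taking O(1) work.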

For more examples, see S04_CS141:DynamicProgramming .

Generally, the difficult part of dynamic programming is figuring out what class of subproblems to look at. Here, the thinking is very similar to what you need when applying any divide-and-conquer approach (e.g., as in mergesort). You identify smaller or simpler problems such that, if you had those solved, you could solve the given problem easily. Then you recursively ask what even smaller subproblems those would require, and so on.

If you get stuck figuring out how to "divide and conquer", consider small concrete examples of the problem, try brute force enumeration (by hand), and look for patterns.

Finally, the running time of your algorithm will be something like

- (the number of distinct subproblems)×(the work per subproblem).