In this article, we will look at an exciting approach, dynamic programming, and how to solve a dynamic programming problem. But before proceeding, let's first understand what exactly dynamic programming is.
Dynamic programming is a technique for solving bigger and harder problems by dividing them into simpler sub-problems, keeping in mind that the optimal solution to the bigger problem is built from the optimal solutions to its smaller sub-problems. The properties of a problem that suggest it can be solved with dynamic programming are:
Optimal substructure: An optimal solution to a problem contains optimal solutions to its sub-problems. For example, a shortest path from u to v that passes through w contains a shortest path from u to w and a shortest path from w to v.
Overlapping subproblems: A recursive solution contains a small number of distinct sub-problems, each of which is solved many times.
Now, the question is: can dynamic programming solve every problem? Come, let's figure it out.
Can dynamic programming solve all problems?
Like the greedy and divide-and-conquer techniques, dynamic programming cannot solve every problem; it has its own pros and cons, which we will see later on. But it can solve some problems where greedy and divide and conquer fail.
The difference between dynamic programming and a straightforward recursive solution lies in the memoization of recursive calls. If the subproblems do not overlap, that is, the recursive calls are all made on distinct inputs, then memoization does not help.
Dynamic programming approaches:
Basically, there are two approaches for solving dynamic programming problems:
Top-down dynamic programming.
Bottom-up dynamic programming.
Bottom-up dynamic programming:
In this method, we start from the smallest possible sub-problem and work upwards through all possible values, slowly increasing them. While computing the values, we keep storing them in a tabular structure (memory). As larger values are evaluated, the pre-computed values for smaller ones can be reused.
Top-down dynamic programming:
In this method, the problem is broken down into sub-problems; each of these sub-problems is solved, and the solutions are remembered in case they are needed again. We save each computed value as the final action of the recursive function, and as the first action, we check whether a pre-computed value already exists.
Understanding dynamic programming:
Before moving forward, let us understand how dynamic programming works through examples.
Fibonacci series:
In the Fibonacci series, each number is the sum of the previous two numbers. It is defined as follows:
F(0) = 0, F(1) = 1, and F(n) = F(n - 1) + F(n - 2) for n > 1.
The recursive implementation of the Fibonacci series is:
#include <bits/stdc++.h>
using namespace std;

/* Function which returns the Fibonacci of number n */
int fun(int n) {
    // Base cases
    if (n == 0) {
        return 0;
    } else if (n == 1) {
        return 1;
    } else {
        return fun(n - 1) + fun(n - 2);
    }
}

// Driver function
int main() {
    int n = 12;
    // Calling function
    cout << "Fibonacci of " << n << ": " << fun(n);
    return 0;
}
Output:
Fibonacci of 12: 144
Solving the above recurrence, T(N) = T(N - 1) + T(N - 2) + O(1), gives an exponential running time of O(2^N).
How does memoization help here?
Calling fib(5) produces a recursion tree in which the same functions are called repeatedly. The number of times each subproblem fib(n) is solved is shown below:
Solved fib(n) | Number of times
n = 5 | 1
n = 4 | 1
n = 3 | 2
n = 2 | 3
n = 1 | 5
n = 0 | 3
In the above example, fib(2) is calculated three times (overlapping subproblems). For large values of N, many subproblems are recalculated again and again, which leads to exponential time. Instead of calling the same function repeatedly and recalculating the values, we can store the previously calculated values and use them when needed.
Memoization begins with the recursive function and uses a table that maps the function's parameter values to the results computed by the function. If the function is needed more than once for the same parameters, we look up its result in the table.
Improving:
Till now we have seen how dynamic programming helps in reducing the complexity of the problem from exponential to polynomial. As mentioned earlier, there are two approaches for doing this. The bottom-up approach starts with small values and keeps calculating larger values from them.
Bottom-up approach:
/* Bottom-up: fill the table from the smallest values upwards */
int fib(int n) {
    // Base cases
    if (n <= 1) return n;
    vector<int> table(n + 1);
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++) {
        table[i] = table[i - 1] + table[i - 2];
    }
    return table[n];
}
Top-down approach:
In this approach, we preserve the recursive calls and reuse the already computed values.
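A minimal sketch of such a memoized implementation, consistent with the recursive version above (the memo table and its -1 sentinel are illustrative choices), is given below:

#include <bits/stdc++.h>
using namespace std;

vector<int> memo; // memo[i] holds fib(i); -1 marks "not yet computed"

int fib(int n) {
    // Base cases
    if (n <= 1) return n;
    // First action: return the pre-computed value if it exists
    if (memo[n] != -1) return memo[n];
    // Final action: store the result in the table before returning it
    return memo[n] = fib(n - 1) + fib(n - 2);
}

int main() {
    int n = 12;
    memo.assign(n + 1, -1); // one slot per value from 0 to n
    cout << "Fibonacci of " << n << ": " << fib(n);
    return 0;
}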
Time Complexity: O(N). For any number 'N', each subproblem from 0 to N is computed only once; any repeated call is answered directly from the table. Hence, the recursive function does real work at most N times, and the overall time complexity is O(N).
Space Complexity: O(N), for table
Both of the approaches discussed for the Fibonacci series clearly reduce the complexity of the problem to O(N). The reason is that we store the precomputed values in a table and use them when required, instead of calling the function again.
Can the complexities be improved a little?
This is an interesting question to think about. Come, let's figure it out.
From the Fibonacci series, we can clearly observe that the current value is the sum of the previous two values. This means we don't need to store all the pre-computed values; we just need to store the last two. Using them, we can calculate the current value. Below is the implementation of this logic:
int fibonacci(int n) {
    int a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        int sum = a + b; // next Fibonacci number
        a = b;
        b = sum;
    }
    // After n iterations, a holds the nth Fibonacci number
    return a;
}
Time complexity: O(N)
As we iterate once over each number up to 'N', the overall time complexity is O(N).
Space complexity: O(1)
As we use only constant extra space in this approach, the space complexity is O(1).
What is the difference between the bottom-up and top-down approaches?
In bottom-up programming, the programmer has to select the values to calculate and decide the order of calculation. All subproblems that might be needed are solved in advance and then used to build up solutions to larger problems. In top-down programming, the recursive structure of the original code is preserved, but unnecessary recalculation is avoided: the problem is broken into subproblems, these subproblems are solved, and the solutions are remembered.
What are some examples where dynamic programming can be applied?
Following are the examples where dynamic programming can be applied:
Longest common subsequence
0-1 Knapsack
Coin change problem
All-pairs shortest path problem
Reliability design problem
Word break problem
Matrix chain multiplication
Partition problem
Rod cutting
Longest increasing subsequence
Write down the steps to use dynamic programming in any problem.
Following are the steps, with a short worked example after the list:
Identify all the data variables.
Figure out the recurrence relation.
Note down the base cases.
Decide whether to use recursion or iteration.
Add memoization.
Find out the time and space complexity.
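As a quick illustration of these steps, consider the coin change problem from the list above. The data variables are the target amount and the set of coin denominations; the recurrence is minCoins(a) = 1 + min(minCoins(a - c)) over all coins c <= a; the base case is minCoins(0) = 0; and here we choose iteration with a table rather than recursion with memoization. The sketch below makes these choices concrete (the function name minCoins and the sample coins {1, 3, 5} are illustrative):

#include <bits/stdc++.h>
using namespace std;

// Minimum number of coins needed to make 'amount' (-1 if impossible).
int minCoins(const vector<int>& coins, int amount) {
    const int INF = INT_MAX - 1; // "no solution yet" marker
    // dp[a] = fewest coins summing to a, filled bottom-up
    vector<int> dp(amount + 1, INF);
    dp[0] = 0; // base case: zero coins make amount 0
    for (int a = 1; a <= amount; a++) {
        for (int c : coins) {
            if (c <= a && dp[a - c] != INF) {
                dp[a] = min(dp[a], dp[a - c] + 1);
            }
        }
    }
    return dp[amount] == INF ? -1 : dp[amount];
}

int main() {
    vector<int> coins = {1, 3, 5};
    cout << minCoins(coins, 11); // prints 3 (5 + 5 + 1)
    return 0;
}

As with the Fibonacci table, each sub-amount is computed once, giving O(amount * number of coins) time and O(amount) space.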
Key Takeaways:
In this article, we have discussed dynamic programming, how to approach and solve a dynamic programming problem using the Fibonacci series as an example, and its time and space complexity.
But this isn’t the end, right?
Keeping the theoretical knowledge at our fingertips helps us get about half the work done. To gain a complete understanding, practice is a must. A variety of coding questions from interviews are available. If you think this blog helped you, then share it with your friends!