Table of contents
1. Introduction
2. GATE Questions on Dynamic Programming
3. Frequently Asked Questions
4. Conclusion
Last Updated: Mar 27, 2024

GATE Questions on Dynamic Programming - Part 3

Author: Muskan Gupta

Introduction

Dynamic Programming is a technique for optimizing recursive solutions. We store the answers to overlapping subproblems and reuse those results whenever the same subproblem appears again, which saves the time of recomputing them. It is a powerful technique that can reduce an exponential time complexity to a polynomial one.
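As a quick illustration of the idea, here is a minimal sketch in Python that contrasts a naive recursion with a memoized one, using the Fibonacci numbers as a hypothetical example:

from functools import lru_cache

# Naive recursion: fib(n) recomputes the same subproblems
# over and over, giving exponential time.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoized version: each subproblem is solved once and its
# result is reused, giving linear time.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(40))  # 102334155, computed almost instantly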

You can check out our top dynamic programming problems.

This blog covers previous years' GATE exam questions on the topic of dynamic programming.

GATE Questions on Dynamic Programming

1. Which of the following algorithm design techniques is used for finding all pairs of shortest distances in a graph? (1998 GATE question on DP)
(A) Divide and conquer
(B) Greedy
(C) Backtracking
(D) Dynamic programming

Answer: (D)
Explanation: The Floyd-Warshall algorithm solves the All Pairs Shortest Path problem. It uses dynamic programming to compute the shortest distance between every pair of vertices in a given edge-weighted directed graph.
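For reference, here is a minimal sketch of the Floyd-Warshall recurrence in Python; the 4-vertex adjacency matrix is a hypothetical example, with float('inf') marking missing edges:

INF = float('inf')

def floyd_warshall(dist):
    # dist is an n x n matrix; dist[i][j] is the weight of edge i->j,
    # INF if there is no edge, and 0 on the diagonal.
    n = len(dist)
    # Allow vertex k as an intermediate point on the path from i to j.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd_warshall(graph))  # all-pairs shortest distances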
 

2. If we can break a problem into subproblems that are reused many times, the problem is said to possess which of the following properties?
a) Overlapping subproblems
b) Greedy
c) Memoization
d) Optimal substructure

Answer: (A)
Explanation: A problem in which the value of a subproblem is used several times is said to possess the overlapping subproblems property.
 

3. When we apply dynamic programming to a problem, it takes far less time than methods that don't take advantage of the overlapping subproblems property.

a) True

b) False
 

Answer: (A)

Explanation: In dynamic programming, the value of each subproblem is calculated only once, whereas methods that do not exploit the overlapping subproblems property may calculate the value of the same subproblem several times. Dynamic programming therefore avoids this recomputation and takes far less time.
 

4. Assume that multiplying a matrix M1 of dimension p×q with another matrix M2 of dimension q×r requires pqr scalar multiplications. Computing the product of n matrices M1M2M3…Mn can be done by parenthesizing it in different ways. Define MiMi+1 as an explicitly computed pair for a given parenthesization if they are directly multiplied. For example, in the matrix multiplication chain M1M2M3M4M5M6 using parenthesization (M1(M2M3))(M4(M5M6)), M2M3 and M5M6 are the only explicitly computed pairs.
Consider a matrix multiplication chain K1K2K3K4K5, where matrices K1, K2, K3, K4, and K5 are of dimensions 2×25, 25×3, 3×16, 16×1, and 1×1000, respectively. In the parenthesization of K1K2K3K4K5 that requires the minimum number of scalar multiplications, the explicitly computed pairs are:

(A) K1K2 and K3K4 only
(B) K2K3 only
(C) K3K4 only
(D) K1K2 and K4K5 only

Answer: (C)
Explanation: Matrix K5 has dimension 1×1000, so multiplying by it early would make every subsequent product very expensive; it should be multiplied last. The optimal parenthesization is ((K1(K2(K3K4)))K5), with a total of 48 + 75 + 50 + 2000 = 2173 scalar multiplications. In this parenthesization, K3K4 is the only explicitly computed pair.
Option (C) is correct.
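As a cross-check, here is a minimal sketch of the standard matrix chain order DP in Python; matrix_chain_order is a hypothetical helper name, and the dimension list [2, 25, 3, 16, 1, 1000] comes from the question above:

import sys

def matrix_chain_order(p):
    # Matrix i has dimension p[i-1] x p[i]; cost[i][j] is the minimum
    # number of scalar multiplications needed to compute Mi..Mj.
    n = len(p) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = sys.maxsize
            for k in range(i, j):             # split point
                q = cost[i][k] + cost[k + 1][j] + p[i - 1] * p[k] * p[j]
                cost[i][j] = min(cost[i][j], q)
    return cost[1][n]

# Dimensions from the question: K1..K5 are 2x25, 25x3, 3x16, 16x1, 1x1000.
print(matrix_chain_order([2, 25, 3, 16, 1, 1000]))  # 2173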
 

5. Given below are some algorithms and some algorithm design paradigms.

Column A
A. Dijkstra's Shortest Path
B. Floyd-Warshall algorithm to compute the all-pairs shortest path
C. Binary search on a sorted array
D. Backtracking search on a graph

Column B
1. Divide and Conquer
2. Dynamic Programming
3. Greedy design
4. Depth-first search
5. Breadth-first search

Match the algorithms in column A to the corresponding design paradigm in column B.
Codes:
    A B C D
(a) 1 3 1 5
(b) 3 3 1 5
(c) 3 2 1 4
(d) 3 2 1 5
(A) a
(B) b
(C) c
(D) d

Answer: (C)
Explanation:
Dijkstra's Shortest Path is a greedy algorithm (3).
The Floyd-Warshall algorithm uses dynamic programming (2).
Binary search on a sorted array is divide and conquer (1).
Backtracking search on a graph is a depth-first search (4).
This gives the matching A-3, B-2, C-1, D-4, which is code (c).

6. A binary search tree is obtained by inserting the following integers in the given order. What are the numbers of nodes in the left and right subtrees of the root, respectively? (1996 GATE question)
50 , 15, 62, 5, 20, 58, 91, 3, 8, 37, 60 and 24
(A) (7, 4)
(B) (4, 7)
(C) (3, 8)
(D) (8, 3)

Answer: (A)
Explanation: The root is 50, the first element inserted. Since the tree is not rebalanced, every key smaller than 50 goes into the left subtree and every larger key goes into the right subtree. The keys smaller than 50 are 15, 5, 20, 3, 8, 37, and 24 (seven keys), and the keys larger than 50 are 62, 58, 91, and 60 (four keys), so the answer is (7, 4).
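For a concrete check, here is a small Python sketch (the helper names insert and count are just for illustration) that inserts the keys in the given order and counts the nodes in each subtree of the root:

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Standard (unbalanced) BST insertion.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def count(node):
    return 0 if node is None else 1 + count(node.left) + count(node.right)

root = None
for k in [50, 15, 62, 5, 20, 58, 91, 3, 8, 37, 60, 24]:
    root = insert(root, k)

print(count(root.left), count(root.right))  # 7 4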

7. A subsequence of a given sequence is defined as the given sequence with some elements (possibly all or none) left out. We are given two sequences, X[a] and Y[b], of lengths a and b, respectively, with indexes of X and Y starting from 0.
We define the length of the longest common subsequence (LCS) of X[a] and Y[b] as f(a,b), where an incomplete recursive definition for the function f(i,j) to compute the length of the LCS of X[a] and Y[b] is given below:

f(i,j) = 0,       if i = 0 or j = 0
       = expr1,   if i, j > 0 and X[i−1] = Y[j−1]
       = expr2,   if i, j > 0 and X[i−1] ≠ Y[j−1]

The value of f(i,j) can be obtained by dynamic programming based on the correct recursive definition of f(i,j) of the form given above, using an array K[A,B], where A = a+1 and B = b+1, such that K[i,j] = f(i,j).
Which of the following statements is TRUE regarding the dynamic programming solution for the recursive definition of f(i,j)?
(A) All elements of K should be initialized to 0 for the values of f(i,j) to be correctly computed.
(B) The values of f(i,j) may be computed in either row-major order or column-major order of K[A,B].
(C) The values of f(i,j) cannot be computed in either row-major order or column-major order of K[A,B].
(D) K[p,q] needs to be computed before K[r,s] if either p < r or q < s.

Check out Longest Common Substring

Answer: (B)
Explanation: Each entry K[i,j] depends only on K[i−1,j−1], K[i−1,j], and K[i,j−1], all of which are already computed whether the table is filled in row-major or column-major order of K[A,B], so either order works and (B) is true. Only the first row and the first column of K need to be initialized to 0, so (A) is not required, and (C) and (D) do not hold.
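As an illustration, here is a minimal bottom-up sketch of this LCS table in Python, filling K in row-major order (the example strings are hypothetical):

def lcs_length(X, Y):
    a, b = len(X), len(Y)
    # K has (a + 1) x (b + 1) entries; row 0 and column 0 stay 0,
    # matching f(i,j) = 0 when i = 0 or j = 0.
    K = [[0] * (b + 1) for _ in range(a + 1)]
    for i in range(1, a + 1):          # row-major fill
        for j in range(1, b + 1):
            if X[i - 1] == Y[j - 1]:
                K[i][j] = 1 + K[i - 1][j - 1]            # expr1
            else:
                K[i][j] = max(K[i - 1][j], K[i][j - 1])  # expr2
    return K[a][b]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4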

 

8. Which of the problems listed below cannot be solved using dynamic programming?

a) Fractional knapsack problem

b) Matrix chain multiplication problem

c) Edit distance problem

d) 0/1 knapsack problem
 

Answer: (A)

Explanation: The fractional knapsack problem is solved using a greedy algorithm that picks items in decreasing order of their value-to-weight ratio; it does not require dynamic programming. The other three problems have overlapping subproblems and optimal substructure and are standard dynamic programming problems.
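For contrast with the DP problems above, here is a minimal greedy sketch of the fractional knapsack in Python; the item values, weights, and capacity are hypothetical:

def fractional_knapsack(values, weights, capacity):
    # Greedy choice: take items in decreasing value/weight ratio,
    # splitting the last item if it does not fit entirely.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)
        total += value * (take / weight)
        capacity -= take
    return total

# Hypothetical items: values (60, 100, 120), weights (10, 20, 30), capacity 50.
print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # 240.0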

 

9. You can use a greedy algorithm to solve all dynamic programming problems.

(A) True

(B) False
 

Answer: (B)

Explanation: A greedy algorithm makes a locally optimal choice at each step, but combining these locally optimal choices may NOT result in a globally optimal solution. Hence, a greedy algorithm CANNOT be used to solve every dynamic programming problem.
 

10. Which of the problems written below should be solved using dynamic programming?

A) Quicksort

B) Longest common subsequence

C) Binary search

D) Mergesort
 

Answer: (B)

Explanation: The longest common subsequence problem has both overlapping subproblems and optimal substructure. Hence, it should be solved using dynamic programming.
 

Refer to GATE Questions on Dynamic Programming: Part 1 and GATE Questions on Dynamic Programming: Part 2 for more questions. Also, check out the GATE Syllabus for CSE.

Must Read: Julia Programming Language

Frequently Asked Questions

  1. What is the dynamic programming approach? 
    Dynamic programming is an approach for optimizing recursive solutions when recursive calls are repeatedly made on the same inputs. The idea is quite simple: store the result of each subproblem so that it does not have to be calculated repeatedly.
     
  2. Write the different types of dynamic programming approaches.
    There are generally two types of approaches:
    Top-Down Approach (memoization): we start from the bigger problem, break it into smaller subproblems recursively, and cache each subproblem's result so it is computed only once.
    Bottom-Up Approach (tabulation): we solve the smallest subproblems first and combine their results to build up the solution to the bigger problem. A small sketch contrasting the two approaches is given after this FAQ list.
     
  3. What are overlapping subproblems?
    A problem has overlapping subproblems if it can be divided into smaller subproblems whose solutions are reused multiple times.
     
  4. What is an optimal substructure?
    A problem has an optimal substructure if the optimal solution can be obtained from optimal solutions of its subproblems.
     
  5. What is the difference between DP and recursion?
    Recursion occurs when a function calls itself to solve a problem, whereas dynamic programming is a technique for solving large problems by breaking them down into smaller subproblems and reusing their results. Dynamic Programming (DP) solves a problem optimally by exploiting the fact that the optimal solution to the whole problem depends on the optimal solutions to its subproblems; plain recursion does not necessarily store or reuse those subproblem results.
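As mentioned in question 2 above, here is a small Python sketch contrasting the top-down and bottom-up approaches on a hypothetical minimum coin change instance (the coin values, amount, and helper names are illustrative assumptions):

import functools

def min_coins_top_down(coins, amount):
    # Top-down (memoization): recurse from the full amount,
    # caching the minimum number of coins needed for each sub-amount.
    @functools.lru_cache(maxsize=None)
    def solve(rem):
        if rem == 0:
            return 0
        best = float('inf')
        for c in coins:
            if c <= rem:
                best = min(best, 1 + solve(rem - c))
        return best
    return solve(amount)

def min_coins_bottom_up(coins, amount):
    # Bottom-up (tabulation): fill dp[0..amount] from the smallest
    # sub-amounts upward.
    dp = [0] + [float('inf')] * amount
    for rem in range(1, amount + 1):
        for c in coins:
            if c <= rem:
                dp[rem] = min(dp[rem], 1 + dp[rem - c])
    return dp[amount]

print(min_coins_top_down((1, 2, 5), 11), min_coins_bottom_up([1, 2, 5], 11))  # 3 3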

Conclusion

In this article, we have gone through important previous years' GATE exam questions on Dynamic Programming. 

Recommended Readings:

Check out this problem - Minimum Coin Change Problem 

We hope that this blog has helped you enhance your knowledge of dynamic programming. If you would like to learn more, check out our articles on GATE questions on greedy algorithms. Do upvote our blog to help other ninjas grow. Happy Coding!
