Data structures & algorithms (Beginner to Intermediate)


Introduction

Dynamic Programming is a class of algorithms used wherever there is recursion with repeating subproblems. It stores the results of subproblems that have already been computed so that we do not have to redo the work. Simply put, Dynamic Programming tells us to solve each unique subproblem only once. In this blog, we will discuss a standard DP problem, namely Matrix Chain Multiplication, and try to solve it in the most optimal way in terms of time and space complexity. Let us look at the problem statement and various algorithms to solve it.

What is the Matrix Chain Multiplication method?

Matrix Chain Multiplication is an algorithmic technique used to determine the most efficient way to multiply a chain of matrices together.

The essence of the problem lies in finding the order in which the matrices should be multiplied to minimize the total number of scalar multiplications, which directly translates to reducing computational complexity and optimizing performance.

The approach involves dynamic programming, where the problem is broken down into smaller subproblems, and the optimal solution is built up iteratively from the solutions to these subproblems.


Problem Statement

Given 'N' matrices of varying dimensions, which are multiplication compatible, find an order to multiply them such that the number of unit operations is minimized. In short, we need to find the most efficient way to multiply the matrices.

Example

Given three matrices 'A', 'B', and 'C' having dimensions 10 x 20, 20 x 15, and 15 x 30, respectively. We need to multiply A, B, and C. There are two ways in which we can multiply them.

1. A * (B * C): The dimensions are A: 10 x 20, B: 20 x 15, C: 15 x 30. The cost of multiplying B(20 x 15) and C(15 x 30) is 20 * 15 * 30 = 9000, producing BC(20 x 30). Multiplying A(10 x 20) by BC(20 x 30) then costs 10 * 20 * 30 = 6000. Therefore, the total multiplication cost = 9000 + 6000 = 15000.

2. (A * B) * C: Let us first multiply matrices 'A' and 'B', at a cost of 10 * 20 * 15 = 3000, producing AB(10 x 15). Then we multiply AB and C, at a cost of 10 * 15 * 30 = 4500. The total multiplication cost = 3000 + 4500 = 7500.

Thus in case (1) there are 15000 unit multiplications, whereas in case (2) there are only 7500, half as many. Simply by changing the order of multiplication, we have cut the time taken in half. In conclusion, the order of multiplication has a significant impact on the calculation cost. As the number of matrices and their dimensions grow, choosing the right order saves a lot of time.
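The arithmetic for the two orders above can be checked with a few lines of Python (the variable names here are purely illustrative):

```python
# Dimensions: A is 10x20, B is 20x15, C is 15x30.

# Order 1: A * (B * C)
cost_bc = 20 * 15 * 30        # B*C, producing a 20x30 result
cost_a_bc = 10 * 20 * 30      # A * (BC)
order1 = cost_bc + cost_a_bc  # 9000 + 6000 = 15000

# Order 2: (A * B) * C
cost_ab = 10 * 20 * 15        # A*B, producing a 10x15 result
cost_ab_c = 10 * 15 * 30      # (AB) * C
order2 = cost_ab + cost_ab_c  # 3000 + 4500 = 7500

print(order1, order2)  # 15000 7500
```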

Let us consider the example of four matrices A, B, C, and D.

There are five ways to fully parenthesize four matrices: (A(B(CD))), (A((BC)D)), ((AB)(CD)), ((A(BC))D), and (((AB)C)D). We need to choose the most optimal amongst them. At the outermost level, we can place the split at any one of the N - 1 positions between matrices. After placing the parenthesis, our problem divides into two smaller subproblems. As a result, we can use a recursive approach to solve these smaller subproblems.

Naive Recursive Approach

The Naive Recursive Approach for Matrix Chain Multiplication involves solving the problem recursively by considering all possible partition points in the chain of matrices.

At each step of the recursion, the algorithm selects a partition point and computes the cost of multiplying the matrices before and after that point separately. It then recursively applies the same process to each partition until reaching the base case.

The base case of the recursion occurs when there is only one matrix left in the chain, for which the cost of multiplication is zero.

However, this approach has exponential time complexity since it explores all possible partition points, leading to redundant computations. As a result, it becomes impractical for large chains of matrices.

Let us look at the recursive implementation of the problem above.

1. Matrix Chain Multiplication using Recursion:


C++

#include <iostream>
#include <vector>
#include <climits>
#include <algorithm>
using namespace std;

int matrixChainMultiplication(vector<int> &arr, int left, int right)
{
    // Base condition.
    if (left == right)
    {
        return 0;
    }

    int minCost = INT_MAX;
    int tempCost;

    for (int k = left; k < right; k++)
    {
        // Dividing the problem into subproblems.
        // Calculate cost from 'left' to 'k' and from 'k + 1' to 'right',
        // then add the cost of the final multiplication.
        tempCost = matrixChainMultiplication(arr, left, k) +
                   matrixChainMultiplication(arr, k + 1, right) +
                   arr[left - 1] * arr[k] * arr[right];

        // Minimizing the cost.
        minCost = min(minCost, tempCost);
    }

    return minCost;
}

int main()
{
    // Taking user input.
    int n;
    cin >> n;
    vector<int> v(n);
    for (int i = 0; i < n; i++)
    {
        cin >> v[i];
    }

    // Calling 'matrixChainMultiplication()' to find the optimal cost of multiplication.
    cout << matrixChainMultiplication(v, 1, n - 1);
    return 0;
}

Java

import java.util.*;

public class MatrixChainMultiplication {

    static int matrixChainMultiplication(int[] arr, int left, int right) {
        // Base condition.
        if (left == right) {
            return 0;
        }

        int minCost = Integer.MAX_VALUE;
        int tempCost;

        for (int k = left; k < right; k++) {
            // Dividing the problem into subproblems.
            // Calculate cost from 'left' to 'k' and from 'k + 1' to 'right',
            // then add the cost of the final multiplication.
            tempCost = matrixChainMultiplication(arr, left, k)
                     + matrixChainMultiplication(arr, k + 1, right)
                     + arr[left - 1] * arr[k] * arr[right];

            // Minimizing the cost.
            minCost = Math.min(minCost, tempCost);
        }

        return minCost;
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);

        // Taking user input.
        int n = scanner.nextInt();
        int[] arr = new int[n];
        for (int i = 0; i < n; i++) {
            arr[i] = scanner.nextInt();
        }

        // Calling 'matrixChainMultiplication()' to find the optimal cost of multiplication.
        System.out.println(matrixChainMultiplication(arr, 1, n - 1));
        scanner.close();
    }
}

Python

def matrixChainMultiplication(arr, left, right):
    # Base condition.
    if left == right:
        return 0

    minCost = float('inf')

    for k in range(left, right):
        # Dividing the problem into subproblems.
        # Calculate cost from 'left' to 'k' and from 'k + 1' to 'right',
        # then add the cost of the final multiplication.
        tempCost = matrixChainMultiplication(arr, left, k) + \
                   matrixChainMultiplication(arr, k + 1, right) + \
                   arr[left - 1] * arr[k] * arr[right]

        # Minimizing the cost.
        minCost = min(tempCost, minCost)

    return minCost


# Taking user input.
n = int(input())
arr = list(map(int, input().split()))

# Calling 'matrixChainMultiplication()' function to find the optimal cost of multiplication.
print(matrixChainMultiplication(arr, 1, n - 1))

Input

4
10 20 15 20

Output

6000

Time Complexity

O(2^{N}), 'N' is the number of matrices.

The dynamic programming approach, by contrast, takes O(N^3) time, which is far better than the O(2^N) time of this naive approach.

Here we are trying to put parentheses at every possible position. There are N matrices, so there are N - 1 positions where a parenthesis can be placed. Each position has two choices: either a parenthesis is placed there or not. Therefore there are 2^{N - 1} total combinations, and the time complexity is O(2^{N}).

Space Complexity

O(N), 'N' is the number of matrices.

The recursion stack can grow up to size 'N'. Each function record on the stack takes constant space. Therefore the space complexity is O(N * 1) = O(N).

Let us check if we can apply dynamic programming to solve the above problem.

Optimal Substructure: First, we can place the outermost parenthesis at all possible places. If the length of the chain of matrices is 'N', then the number of places where we can put it is 'N - 1'. For example, if we have four matrices, i.e., N = 4, we can split the chain at three places: (A) * (BCD), (AB) * (CD), and (ABC) * (D). Thus we divide the problem into similar problems of smaller size. Therefore it has the optimal substructure property, which suggests using DP.

Overlapping Subproblems: In the four-matrix example above, the subproblem (B * C) repeats twice, (C * D) repeats twice, and (A * B) repeats twice. If even such a small problem has so many repeating subproblems, Dynamic Programming will be of great help here.

Now let us look at the top-down dynamic programming approach, also called the memoized approach. Here we store the results of smaller subproblems in a table. When we need them again, we can retrieve them in O(1) time. Thus we dynamically decide whether to use the value from the table or call the function.

2. Dynamic Programming Solution for Matrix Chain Multiplication using Memoization:

Suppose we have a sequence of matrices that we want to multiply optimally. Each matrix has specific dimensions, and our goal is to find the best order of multiplication that minimizes the number of scalar multiplications needed. The memoization approach keeps track of previously solved subproblems so we don't have to redo calculations.

Here are the steps:

Problem Setup: Start by defining the matrices and their dimensions. Each matrix A[i] has dimensions p[i-1] x p[i].

Base Case: Consider the smallest possible subproblems where you have only one matrix to multiply. The number of scalar multiplications required for these cases is 0 because you're just dealing with a single matrix.

Recursive Breakdown: Work your way from smaller to larger problems. For each subproblem, calculate the optimal solution by considering the different ways to split the sequence of matrices and evaluating the cost of multiplying each pair of subgroups.

Memoization: Record each subproblem's solution in the memoization table. If you encounter the same subproblem again later, you can look up its solution instead of recalculating it.

Final Solution: Once you've worked through all possible subproblems and recorded their solutions in the memoization table, the optimal solution for the entire problem (multiplying all matrices) can be found in the top-right cell of the table.

The memoization approach ensures that you don't waste time recalculating the same subproblems multiple times, dramatically improving the efficiency of finding the optimal matrix multiplication order.
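The steps above can be sketched in Python; `mcm_memo` is a hypothetical helper name, and `functools.lru_cache` stands in for the memoization table. The dims sequence follows the same convention as the earlier code: matrix i has dimensions dims[i - 1] x dims[i].

```python
from functools import lru_cache

def mcm_memo(dims):
    # dims is a tuple where matrix i (1-indexed) is dims[i-1] x dims[i].
    @lru_cache(maxsize=None)
    def solve(left, right):
        # Base case: a single matrix needs no multiplication.
        if left == right:
            return 0
        best = float('inf')
        for k in range(left, right):
            # Split at k, reuse cached answers for both halves,
            # and add the cost of the final multiplication.
            cost = (solve(left, k) + solve(k + 1, right)
                    + dims[left - 1] * dims[k] * dims[right])
            best = min(best, cost)
        return best

    return solve(1, len(dims) - 1)

print(mcm_memo((10, 20, 15, 20)))  # 6000
```

The recursion is identical to the naive version; only the cache changes, cutting the running time from exponential to O(N^3) because each (left, right) pair is solved once.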

3. Dynamic Programming Solution for Matrix Chain Multiplication using Tabulation (Iterative Approach):

Suppose we have a series of matrices that need to be multiplied, and we want to find the most efficient way to perform these multiplications. The tabulation or iterative approach involves systematically solving smaller subproblems and using their solutions to build up to the final solution.

Here are the steps:

Problem Setup: Define the matrices and their dimensions.

Base Case: Start with the smallest subproblems, the cases where you deal with a single matrix. Record the cost of multiplying each matrix (which is 0).

Build Up: Move on to solving slightly larger subproblems. Use the solutions of the already solved smaller subproblems to calculate the optimal solutions for these bigger subproblems.

Tabulation Table: Set up a table where each cell corresponds to a specific subproblem. You fill in this table step by step, building solutions from the bottom up.

Iterative Process: Work your way through the table, systematically solving each subproblem based on previously solved ones. Moving forward this way, you eventually reach the full problem.

Final Solution: Once you have filled the entire table, the answer for the whole chain sits in the cell covering all the matrices.

The tabulation approach avoids redundancy by ensuring that each subproblem is solved only once and then reused to build solutions for larger problems. This systematic method lets you find the optimal way to multiply the matrices.
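The tabulation steps can be sketched in Python as follows; `mcm_tab` and `dp` are illustrative names, with dp[i][j] holding the minimum cost of multiplying matrices i through j, filled in order of increasing chain length:

```python
def mcm_tab(dims):
    n = len(dims) - 1  # number of matrices; matrix i is dims[i-1] x dims[i]
    # dp[i][j] = minimum cost to multiply matrices i..j (1-indexed).
    # Chains of length 1 cost 0, which the initialization already covers.
    dp = [[0] * (n + 1) for _ in range(n + 1)]

    for length in range(2, n + 1):          # chain length, smallest first
        for i in range(1, n - length + 2):  # chain start
            j = i + length - 1              # chain end
            # Try every split point k and keep the cheapest.
            dp[i][j] = min(
                dp[i][k] + dp[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )

    return dp[1][n]

print(mcm_tab([10, 20, 15, 20]))  # 6000
```

Because every (i, j) pair is filled exactly once and each fill scans at most N split points, the running time is O(N^3) with O(N^2) space for the table.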

Need of Dynamic Programming

The need for Dynamic Programming arises from situations where the problem can be broken down into smaller subproblems, and the solution to each subproblem can be used to solve larger instances of the problem efficiently. Here's a detailed explanation:

Optimal Substructure: Many problems exhibit the property of optimal substructure, meaning that an optimal solution to the problem can be constructed from optimal solutions to its subproblems. In other words, the problem can be decomposed into smaller subproblems, and the optimal solution can be obtained by combining the solutions to these subproblems.

Overlapping Subproblems: In some problems, the same subproblem can recur multiple times during the computation. Repeatedly solving these subproblems from scratch leads to redundant computations, resulting in inefficient algorithms. Dynamic Programming addresses this issue by storing the solutions to subproblems in a table or memoization array, allowing for constant-time retrieval of previously computed solutions.

Memoization and Tabulation: Dynamic Programming employs two main techniques: memoization and tabulation. Memoization involves storing the results of recursive calls in a data structure (such as an array or hashmap) to avoid redundant computations. Tabulation, on the other hand, involves building the solution iteratively, usually using bottom-up or top-down approaches.

Efficiency Improvement: By avoiding redundant computations and reusing the solutions to overlapping subproblems, Dynamic Programming significantly improves the efficiency of algorithms. It can lead to exponential improvements in runtime complexity, making previously intractable problems solvable within reasonable time and resource constraints.

Applications: Dynamic Programming finds applications in various fields, including optimization, algorithm design, computational biology, economics, and artificial intelligence. It is used to solve problems such as the knapsack problem, longest common subsequence, shortest path algorithms, matrix chain multiplication, and many others.

Frequently asked questions

What is the time complexity of the matrix chain multiplication?

The dynamic programming solution to matrix chain multiplication runs in O(n^3) time, where n is the number of matrices in the chain.

What is the best time complexity for matrix multiplication?

Strassen's algorithm multiplies two n x n matrices in O(n^2.807) time, and more advanced algorithms, such as Coppersmith-Winograd and its successors, bring this down to roughly O(n^2.37).

Can matrix chain multiplication be solved using greedy?

No, a greedy algorithm can't solve matrix chain multiplication efficiently. Greedy strategies may not lead to the optimal solution, since the best choice at each step doesn't necessarily result in the overall optimal multiplication order.

What is the purpose of matrix chain multiplication?

The purpose of matrix chain multiplication is to find the most efficient order to multiply a sequence of matrices. It minimizes the number of scalar multiplications required. This optimization is crucial in various fields like computer graphics and scientific simulations.

Conclusion

In this blog, we learned how to solve the matrix chain multiplication problem. We covered the naive recursive algorithm as well as the memoized and bottom-up dynamic programming approaches. To learn more, visit Coding Ninjas Studio today and prepare for your coding interviews flawlessly.