Code360 powered by Coding Ninjas X Naukri.com
Table of contents
1. Introduction
2. Problem Statement
    2.1. Example
3. Matrix Chain Multiplication using Recursion
    3.1. C++
    3.2. Input
    3.3. Output
    3.4. Time Complexity
    3.5. Space Complexity
4. Dynamic Programming Solution for Matrix Chain Multiplication using Memoization
5. Dynamic Programming Solution for Matrix Chain Multiplication using Tabulation (Iterative Approach)
6. Frequently asked questions
    6.1. Can we solve matrix chain multiplication using dynamic programming?
    6.2. Can matrix chain multiplication be solved using greedy?
    6.3. Can matrix multiplication be distributed?
    6.4. What is the purpose of matrix chain multiplication?
7. Conclusion
Last Updated: Mar 27, 2024

Matrix Chain Multiplication Algorithm

Author Husen Kagdi

Introduction

Dynamic Programming is a class of algorithms that comes into the picture wherever there is recursion with repeating subproblems. It stores the results of subproblems that have already been computed, so we don't have to repeat the work. Simply put, Dynamic Programming tells us to solve each unique subproblem only once. In this blog, we will discuss a standard DP problem, namely Matrix Chain Multiplication. We will try to solve it in the most optimal way in terms of time and space complexity. Let us look at the problem statement and various algorithms to solve the problem.

Problem Statement

Given ‘N’ matrices of varying dimensions, which are multiplication compatible, find an order to multiply them such that the number of unit (scalar) multiplications is minimized. In short, we need to find the most efficient way to multiply the matrices.

Example

Given three matrices ‘A’, ‘B’, and ‘C’ having dimensions 10 x 20, 20 x 15, and 15 x 30, respectively, we need to compute the product A * B * C. There are two ways in which we can multiply them.

1. A * (B * C): The dimensions are A: 10 x 20, B: 20 x 15, C: 15 x 30. First, multiply B(20 x 15) and C(15 x 30); the cost is 20 * 15 * 30 = 9000, and the result is BC(20 x 30). Now, multiplying A(10 x 20) and BC(20 x 30) costs 10 * 20 * 30 = 6000. Therefore, the total multiplication cost = 9000 + 6000 = 15000.
 

2. (A * B) * C: Now, let us first multiply matrices ‘A’ and ‘B’, which costs 10 * 20 * 15 = 3000 and produces AB(10 x 15). Then, we multiply AB(10 x 15) and C(15 x 30), which costs 10 * 15 * 30 = 4500. The total multiplication cost = 3000 + 4500 = 7500.

 

Thus in case (1), there are 15000 unit multiplications, whereas in case (2), there are only 7500, half as many as in case (1). We have just reduced the time taken by half. In conclusion, the order of multiplication has a significant impact on the calculation cost. As the number of matrices and their dimensions grow, choosing the right order saves a lot of time.

 Let us consider the example of four matrices. Have a look at the picture shown below.
 

Example of Matrix Chain Multiplication Algorithm


So, as we can see in the above image, there are five ways to multiply the four matrices, and we need to choose the most optimal among them. We place the outermost parenthesis at one of the N - 1 possible positions. After placing it, the problem splits into two smaller subproblems of the same kind, so we can solve them with a recursive approach.

 

Let us look at the recursive implementation of the problem above.


1. Matrix Chain Multiplication using Recursion:

C++

#include <iostream>
#include <vector>
#include <climits>
using namespace std;

int matrixChainMultiplication(vector<int> &arr, int left, int right)
{
    // Base condition: a single matrix needs no multiplication.
    if (left == right) {
        return 0;
    }

    int minCost = INT_MAX;
    int tempCost;

    for (int k = left; k < right; k++)
    {
        // Dividing the problem into subproblems:
        // cost from 'left' to 'k', cost from 'k + 1' to 'right',
        // plus the cost of multiplying the two resulting matrices.
        tempCost = matrixChainMultiplication(arr, left, k)
                 + matrixChainMultiplication(arr, k + 1, right)
                 + arr[left - 1] * arr[k] * arr[right];

        // Minimizing the cost.
        minCost = min(tempCost, minCost);
    }
    return minCost;
}

int main()
{
    // Taking user input.
    int n;
    cin >> n;
    vector<int> v(n);
    for (int i = 0; i < n; i++)
    {
        cin >> v[i];
    }

    // Calling 'matrixChainMultiplication()' to find the optimal cost of multiplication.
    cout << matrixChainMultiplication(v, 1, n - 1);
    return 0;
}

Input

4

10 20 15 20

Output

6000

Time Complexity

O(2^N), where 'N' is the number of matrices.

Here we are trying to put parentheses at every possible position. There are N matrices, so a parenthesis can be placed at N - 1 positions, and each position has two choices: either a parenthesis is placed there or not. Therefore there are 2^(N - 1) total combinations, and the time complexity is O(2^N).

The dynamic programming approaches discussed below bring this down to O(N^3), which is much better than this naive O(2^N) bound.

Space Complexity

O(N), where ‘N’ is the number of matrices.

The recursion stack can grow up to size ‘N’, and each function record on the stack takes constant space. Therefore the space complexity is O(N * 1) = O(N).

Let us check if we can apply dynamic programming to solve the above problem.

  1. Optimal Substructure: First, we can place a parenthesis at all possible places. If the length of the chain of matrices is ‘N’, then the number of positions where we can put the outermost parenthesis is ‘N - 1’. For example, if we have four matrices, i.e., N = 4, we can split at three places: (A) * (BCD), (AB) * (CD), and (ABC) * (D). Thus we divide the problem into similar problems of smaller size, so it has the optimal substructure property, which gives us an idea to use DP.
     
  2. Overlapping Subproblems: In the above figure, we can see that the subproblems (B * C), (C * D), and (A * B) each repeat twice. If even such a small problem has so many repeating subproblems, Dynamic Programming will be of great help here.


Now let us look at the top-down dynamic programming approach, also called the memoized approach. Here we store the results of smaller subproblems in a table, so when we need them again, we can get them in O(1) time. Thus we dynamically decide whether to use the value from the table or make the recursive call.

2. Dynamic Programming Solution for Matrix Chain Multiplication using Memoization:

Suppose we have a sequence of matrices that we want to multiply optimally. Each matrix has specific dimensions, and our goal is to find the best order of multiplication that minimizes the number of scalar multiplications needed. The memoization approach keeps track of previously solved subproblems so we don't have to redo calculations.

Here are the steps:

  1. Problem Setup: Start by defining the matrices and their dimensions. Each matrix A[i] has dimensions p[i-1] x p[i].
     
  2. Base Case: Consider the smallest possible subproblems where you have only one matrix to multiply. The number of scalar multiplications required for these cases is 0 because you're just dealing with a single matrix.
     
  3. Recursive Breakdown: Work your way from smaller to larger problems. For each subproblem, you calculate the optimal solution by considering different ways to split the sequence of matrices and evaluating the costs of multiplying the subgroups.
     
  4. Memoization: Record the solution of each subproblem in the memoization table. If you encounter the same subproblem again later, you can look up its solution instead of recalculating it.
     
  5. Final Solution: Once you've worked through all possible subproblems and recorded their solutions in the memoization table, the optimal solution for the entire problem (multiplying all matrices) can be found in the top-right cell of the table.
     

The memoization approach ensures that you don't waste time recalculating the same subproblems multiple times, dramatically improving the efficiency of finding the optimal matrix multiplication order.

3. Dynamic Programming Solution for Matrix Chain Multiplication using Tabulation (Iterative Approach):

Suppose we have a series of matrices that need to be multiplied, and we want to find the most efficient way to perform these multiplications. The tabulation or iterative approach involves systematically solving smaller subproblems and using their solutions to build up to the final solution.

Here are the steps:

  1. Problem Setup: Define the matrices and their dimensions.
     
  2. Base Case: Start with the smallest subproblems, the cases where you deal with a single matrix. Record the cost of multiplying each matrix (which is 0).
     
  3. Build Up: Move on to solving slightly larger subproblems. Use the solutions of the already solved smaller subproblems to calculate the optimal solutions for these bigger subproblems.
     
  4. Tabulation Table: Set up a table where each cell corresponds to a specific subproblem. You fill in this table step by step, building solutions from the bottom up.
     
  5. Iterative Process: Work your way through the table, systematically solving each subproblem based on previously solved ones. Moving forward this way, you eventually reach the full problem.
     
  6. Final Solution: Once you have filled the entire table, the answer for the whole chain sits in the cell representing the full problem.
     

The tabulation approach avoids redundancy by ensuring that each subproblem is solved only once and then used to build solutions for larger problems. This systematic method allows you to find the optimal way to multiply the matrices.

Frequently asked questions

Can we solve matrix chain multiplication using dynamic programming?

Yes, we can solve matrix chain multiplication by using dynamic programming. It involves breaking down the problem into subproblems and optimizing the order of matrix multiplication to minimize computation.

Can matrix chain multiplication be solved using greedy?

No, a greedy approach can't solve matrix chain multiplication optimally. Greedy strategies may not lead to the optimal solution, since the best choice at each step doesn't necessarily result in the overall optimal multiplication order.

Can matrix multiplication be distributed? 

Yes, matrix multiplication can be distributed among multiple processors or machines to speed up computation, especially for large matrices. This can significantly improve performance.

What is the purpose of matrix chain multiplication? 

The purpose of matrix chain multiplication is to find the most efficient order to multiply a sequence of matrices. It minimizes the number of scalar multiplications required. This optimization is crucial in various fields like computer graphics and scientific simulations.
 

Conclusion

In this blog, we learned how to solve the matrix chain multiplication problem. We covered the recursive algorithm, the memoized (top-down) approach, and the bottom-up dynamic programming approach. To learn more, visit Coding Ninjas Studio today and prepare for your coding interviews flawlessly.

Thank You and Happy Coding!
