Table of contents
1. Introduction
2. Basic linear solving
3. Least squares solving
4. Checking if a matrix is singular
5. Computing eigenvalues and eigenvectors
6. Separating the computation from the construction
7. Rank-revealing decompositions
8. Frequently Asked Questions
8.1. How does a computer find eigenvalues?
8.2. Why are eigenvalues important in machine learning?
8.3. How are eigenvectors used?
9. Conclusion
Last Updated: Mar 27, 2024

How to solve linear systems


Introduction

A system of linear equations in mathematics is a set of two or more linear equations in the same variables, such as x, y, z. Linear equations are equations of the first order, i.e., the greatest power of any variable is 1. For example:

x + 2y = 5
3x + 4y = 11

In matrix form, the same system can be written as Ax = b:

[1 2] [x]   [ 5]
[3 4] [y] = [11]

Eigen is a C++ template library that lets us solve linear algebra problems involving matrices, vectors, numerical solvers, and related algorithms. To use it, we add #include <Eigen/Dense>, as in the Eigen documentation. In this article, we will see how Eigen can solve basic linear systems, estimate error margins, and compute eigenvalues and eigenvectors.

Basic linear solving

To solve a system of equations, we write it as a single matrix equation Ax = b, where A is a matrix and b a vector. To find the solution x, we can pick among several decompositions, depending on the properties of the matrix A and on whether we favor accuracy or speed. In the example below, the colPivHouseholderQr() method returns an object of class ColPivHouseholderQR, which offers a good compromise between speed and accuracy.

Example:

#include <iostream>
#include <Eigen/Dense>
int main()
{
    Eigen::Matrix3f matrix;
    Eigen::Vector3f vector;
    matrix << 1,4,7, 2,5,8, 3,6,10;
    vector << 3, 3, 3;
    std::cout << "The matrix is :\n" << matrix << std::endl;
    std::cout << "The vector is :\n" << vector << std::endl;
    Eigen::Vector3f answer = matrix.colPivHouseholderQr().solve(vector);
    std::cout << "The solution is:\n" << answer << std::endl;
}

Output:

The matrix is :
1 4 7
2 5 8
3 6 10
The vector is :
3
3
3
The solution is:
-1
 1
 0

Least squares solving

We generally use an SVD decomposition to solve under- or over-determined linear systems in the least squares sense. Eigen provides two implementations:

  1. the BDCSVD class, which scales well to large problems, and
  2. the JacobiSVD class, for smaller problems.

For both classes, the solve() method solves the linear system in the least-squares sense. There is also an alternative to the SVD, the CompleteOrthogonalDecomposition class, which is usually faster.

Example:

#include <iostream>
#include <Eigen/Dense>
int main()
{
    Eigen::MatrixXf mat1 = Eigen::MatrixXf::Random(3, 2);
    Eigen::VectorXf vec = Eigen::VectorXf::Random(3);
    std::cout << "The matrix mat1 is :\n" << mat1  << std::endl;
    std::cout << "The right-hand side vec is:\n" << vec << std::endl;
    Eigen::VectorXf soln = mat1.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(vec);
    std::cout << "After calculating, the least squares solution is:\n" << soln << std::endl;
}

Output:

The matrix mat1 is :
-0.878     -2
 0.678   0.35
  0.34  0.897
The right-hand side vec is:
 0.776
 0.665
  0.78
After calculating, the least squares solution is:
 1.181
 -0.67

Checking if a matrix is singular

Only you know what error margin you want to allow for a solution to be considered valid. Eigen lets you compute this margin yourself, as a relative error, and check it against any threshold you choose.

Example:

#include <iostream>
#include <Eigen/Dense>
using Eigen::MatrixXd;
int main()
{
  MatrixXd mat1 = MatrixXd::Random(100, 100);
  MatrixXd mat2 = MatrixXd::Random(100, 50);
  MatrixXd solution = mat1.fullPivLu().solve(mat2);
  double rel_error = (mat1 * solution - mat2).norm() / mat2.norm(); // the norm() is the L2 norm
  std::cout << "The relative error is:\n" << rel_error << std::endl;
}

Output:

The relative error is:
2.31495e-14

Computing eigenvalues and eigenvectors

You need an eigendecomposition here. If your matrix is self-adjoint, as often happens in these problems, use SelfAdjointEigenSolver to compute it. For general matrices, use EigenSolver, or ComplexEigenSolver for complex-valued ones.

The computation of eigenvalues and eigenvectors can fail to converge, although such cases are rare. The call to info() checks for this possibility.

Example:

#include <iostream>
#include <Eigen/Dense>
int main()
{
    Eigen::Matrix2f matrix1;
    matrix1 << 1, 2, 2, 3;
    std::cout << "Matrix matrix1:\n" << matrix1 << std::endl;
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix2f> eigensolver(matrix1);
    if (eigensolver.info() != Eigen::Success) abort();
    // the eigenvalues of matrix1
    Eigen::Vector2f soln1 = eigensolver.eigenvalues();
    std::cout << "The eigenvalues of matrix1 are:\n"
              << soln1 << std::endl;
    // the eigenvectors of matrix1
    Eigen::Matrix2f soln2 = eigensolver.eigenvectors();
    std::cout << "Here is the matrix whose columns are eigenvectors of matrix1\n"
                 "corresponding to these eigenvalues:\n"
              << soln2 << std::endl;
}

Output:

Matrix matrix1:
1 2
2 3
The eigenvalues of matrix1 are:
-0.236
  4.24
Here is the matrix whose columns are eigenvectors of matrix1
corresponding to these eigenvalues:
-0.851 -0.526
 0.526 -0.851

Separating the computation from the construction

In the examples above, the decomposition was computed at the same time the decomposition object was constructed. There are, however, cases where we might want to separate the two, for example to reuse an existing decomposition object. What makes this possible is that:

  • every decomposition has a default constructor, and
  • every decomposition has a compute(matrix) method that does the actual computation, and that can be called again on an already-computed decomposition, reinitializing it.

Example:

#include <iostream>
#include <Eigen/Dense>
int main()
{
    Eigen::Matrix2f mat1, mat2;
    Eigen::LLT<Eigen::Matrix2f> llt;
    mat1 << 2, -1, -1, 3;
    mat2 << 1, 2, 3, 1;
    std::cout << "Matrix mat1:\n" << mat1 << std::endl;
    std::cout << "The right hand side mat2 is:\n" << mat2 << std::endl;
    std::cout << "Solving with LLT decomposition.." << std::endl;
    llt.compute(mat1);
    std::cout << "The solution is:\n" << llt.solve(mat2) << std::endl;
    mat1(1,1)++;
    std::cout << "Now, the matrix mat1 is:\n" << mat1 << std::endl;
    std::cout << "Solving again with LLT decomposition.." << std::endl;
    llt.compute(mat1);
    std::cout << "The solution is:\n" << llt.solve(mat2) << std::endl;
}

Output:

Matrix mat1:
 2 -1
-1  3
The right hand side mat2 is:
1 2
3 1
Solving with LLT decomposition..
The solution is:
1.2 1.4
1.4 0.8
Now, the matrix mat1 is:
 2 -1
-1  4
Solving again with LLT decomposition..
The solution is:
    1  1.29
    1 0.571

Rank-revealing decompositions

Certain decompositions can calculate the rank of a matrix. These are also the decompositions that behave best in the case of a non-full-rank matrix. 

Rank-revealing decompositions offer at least a rank() method. They can also offer convenience methods such as isInvertible(), and methods to compute the image (column space) and kernel (null space) of the matrix.

Example:

#include <iostream>
#include <Eigen/Dense>
int main()
{
    Eigen::Matrix4f mat1;
    mat1 <<  3,  4, 0,  7,
             1, -5, 2, -2,
            -1,  4, 0,  3,
             1, -1, 2,  2;
    std::cout << "The matrix mat1:\n" << mat1 << std::endl;
    Eigen::FullPivLU<Eigen::Matrix4f> lu_decomp(mat1);
    int rank = lu_decomp.rank();
    std::cout << "The rank of mat1 is " << rank << std::endl;
    std::cout << "The matrix whose columns make a basis of the null-space of mat1 is:\n"
              << lu_decomp.kernel() << std::endl;
    std::cout << "The matrix whose columns make a basis of the column-space of mat1 is:\n"
              << lu_decomp.image(mat1) << std::endl;
}

Output:

The matrix mat1:
 3  4  0  7
 1 -5  2 -2
-1  4  0  3
 1 -1  2  2
The rank of mat1 is 3
The matrix whose columns make a basis of the null-space of mat1 is:
-1
-1
-1
 1
The matrix whose columns make a basis of the column-space of mat1 is:
 3  4  0
 1 -5  2
-1  4  0
 1 -1  2

Frequently Asked Questions

How does a computer find eigenvalues?

The standard algorithm for solving eigenvalues is called the QR algorithm. This involves the QR-factorization of the matrix in question (as a quick reminder, the QR-factorization encodes the Gram–Schmidt process for orthonormalizing a basis).

Why are eigenvalues important in machine learning?

Eigenvalues are used to find the dominant features of large data sets in order to perform dimensionality reduction, allowing computational resources to be spent where they matter most.

How are eigenvectors used?

Eigenvectors are used to make linear transformations understandable. Think of a linear transformation as stretching or compressing space: the eigenvectors are the special directions that the transformation only stretches or compresses, without changing their direction.

Conclusion

In this article, we learnt the basic methods for solving linear problems with the help of the C++ library Eigen. We also learnt to calculate ranks, error margins, eigenvalues, and eigenvectors.

Please refer to our guided pathways on Code studio to learn more about DSA, Competitive Programming, JavaScript, System Design, etc. Enroll in our courses, and use the accessible sample exams and questions as a guide. For placement preparations, look at the interview experiences and interview package.

Please upvote 🏆 our blogs if you find them helpful and informative!

Happy coding🤗

Thank you!

