Table of contents
1. Introduction
2. Sparse Matrix
   2.1. The SparseMatrix class
        2.1.1. Matrix and vector properties
        2.1.2. Iterating over the nonzero coefficients
   2.2. Filling a sparse matrix
   2.3. Supported operators and functions
        2.3.1. Basic operations
        2.3.2. Matrix Products
        2.3.3. Block operations
        2.3.4. Triangular and self adjoint views
3. Frequently Asked Questions
   3.1. What is a sparse matrix in data structure?
   3.2. What is the application of a sparse matrix?
   3.3. What are the properties of a sparse matrix?
4. Conclusion
Last Updated: Mar 27, 2024

Introduction to Sparse Matrix


Introduction

Before we delve further into the topic, let's make sure we understand the foundational concepts. If you are unfamiliar with matrices, here is a brief introduction.


A matrix is a two-dimensional array with m rows and n columns; such a matrix is called an m × n matrix. It is a rectangular arrangement of numbers organised into horizontal rows and vertical columns.


In this blog, we will discuss the sparse matrix: the SparseMatrix class, filling a sparse matrix, and the supported operators and functions.

Without further ado, let's get started.

Sparse Matrix

A sparse matrix is a matrix in which most of the entries are zero. In other words, a sparse matrix has very few nonzero elements.


Advantages of a sparse matrix:

Storage: Since a sparse matrix has far fewer nonzero elements than zeros, it can be stored in much less memory by keeping only the nonzero elements.

Computing time: When traversing a sparse matrix, we only need to visit the nonzero entries rather than the entire matrix. A data structure designed to traverse only the nonzero components therefore reduces computation time.

The SparseMatrix class

The main representation of a sparse matrix in Eigen's sparse module is the SparseMatrix class, which offers high performance and low memory usage. It implements a more flexible variant of the widely used Compressed Column (or Row) Storage scheme.

💁 It consists of four compact arrays:

  • Values: stores the values of the nonzero coefficients.

  • InnerIndices: stores the row (or column) indices of the nonzero coefficients.

  • OuterStarts: stores, for each column (resp. row), the index of the first nonzero of that column (resp. row) in the previous two arrays.

  • InnerNNZs: stores the number of nonzeros of each column (resp. row). The word "inner" refers to an inner vector, which is a column for a column-major matrix and a row for a row-major matrix. The word "outer" refers to the other direction.
     

Here is an example that will help to further explain this storage system.

[Figure: an example sparse matrix]

Column major representation:

[Figure: its column-major storage arrays (Values, InnerIndices, OuterStarts, InnerNNZs)]

The elements of a given inner vector are always sorted by increasing inner index. The '_' entries represent free space available to quickly insert new elements.

The case where no free space is available is called the compressed mode. It corresponds to the widely used Compressed Column (or Row) Storage schemes (CCS or CRS). Any SparseMatrix can be turned into this form by calling the SparseMatrix::makeCompressed() function. In this case, the InnerNNZs array is redundant with OuterStarts because we have the equivalence InnerNNZs[j] == OuterStarts[j+1] - OuterStarts[j]. Therefore, in practice, a call to SparseMatrix::makeCompressed() frees this buffer.

The results of Eigen's operations always produce compressed sparse matrices. On the other hand, inserting a new element into a SparseMatrix converts it to the uncompressed mode.

Here is the previous matrix in compressed mode:

[Figure: the compressed column-major storage arrays (Values, InnerIndices, OuterStarts)]
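
To make the storage scheme concrete, here is a small self-contained sketch. The matrix values are purely illustrative, and it assumes Eigen 3 with the <Eigen/Sparse> header available. It builds a tiny matrix, compresses it, and prints the Values, InnerIndices, and OuterStarts arrays through the valuePtr(), innerIndexPtr(), and outerIndexPtr() accessors:

#include <iostream>
#include <Eigen/Sparse>
using namespace Eigen;

int main()
{
  // Illustrative 4x4 column-major matrix:
  //  0  3  0  0
  // 22  0  0  0
  //  0  5  0  1
  //  0  0 14  0
  SparseMatrix<double> A(4,4);
  A.insert(1,0) = 22;
  A.insert(0,1) = 3;
  A.insert(2,1) = 5;
  A.insert(3,2) = 14;
  A.insert(2,3) = 1;
  A.makeCompressed();   // switch to compressed (CCS) mode

  // Expected: Values = 22 3 5 14 1, InnerIndices = 1 0 2 3 2, OuterStarts = 0 1 3 4 5
  for (int k = 0; k < A.nonZeros(); ++k)
    std::cout << A.valuePtr()[k] << " (row " << A.innerIndexPtr()[k] << ")\n";
  for (int j = 0; j <= A.outerSize(); ++j)
    std::cout << A.outerIndexPtr()[j] << " ";
  std::cout << "\n";
}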

A SparseVector is a special case of a sparse matrix in which only the Values and InnerIndices arrays are stored. There is no notion of compressed and uncompressed modes for a SparseVector.

Matrix and vector properties

The SparseMatrix and SparseVector classes take three template arguments: the scalar type (e.g., double), the storage order (ColMajor or RowMajor; the default is ColMajor), and the inner index type (the default is int).

As with dense Matrix objects, the constructors take the sizes of the object. Here are a few examples:

SparseMatrix<std::complex<float> > mat1(1000,2000);   // a 1000x2000 column-major compressed sparse matrix of complex<float>
SparseMatrix<double,RowMajor> mat2(1000,2000);        // a 1000x2000 row-major compressed sparse matrix of double
SparseVector<std::complex<float> > vec1(1000);        // a column sparse vector of complex<float> of size 1000
SparseVector<double,RowMajor> vec2(1000);             // a row sparse vector of double of size 1000

Iterating over the nonzero coefficients

Random access to the elements of a sparse object is available through the coeffRef(i,j) function. However, this function involves a rather expensive binary search. In most cases, one only wants to iterate over the nonzero elements. This is achieved by a standard loop over the outer dimension followed by an InnerIterator that visits the nonzero elements of the current inner vector. Note that the nonzero elements are visited in the storage order.

Here is an example:

SparseMatrix<double> matrix(rows,cols);
for (int k=0; k<matrix.outerSize(); ++k)
  for (SparseMatrix<double>::InnerIterator it(matrix,k); it; ++it)
  {
    it.value();  // value of the current nonzero coefficient
    it.row();    // row index
    it.col();    // column index (equal to k for a column-major matrix)
    it.index();  // inner index
  }

For writable expressions, the valueRef() function gives a reference to the stored value so it can be modified.
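
For instance, here is a short sketch (assuming a SparseMatrix<double> named matrix that has already been filled, as in the previous snippet) that uses the iterator's valueRef() to scale every nonzero coefficient in place:

for (int k = 0; k < matrix.outerSize(); ++k)
  for (SparseMatrix<double>::InnerIterator it(matrix,k); it; ++it)
    it.valueRef() *= 2.0;   // modify the stored nonzero value in place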

Filling a sparse matrix

💁 Because of the SparseMatrix's unique storage scheme, special care has to be taken when adding new nonzero entries. For instance, a single purely random insertion into a SparseMatrix costs O(nnz), where nnz is the current number of nonzero coefficients.

The simplest way to create a sparse matrix while guaranteeing good performance is therefore to first build a list of so-called triplets and then convert it into a SparseMatrix.

Here's an example of how it's usually used:

typedef Eigen::Triplet<double> T;
std::vector<T> tripletList;
tripletList.reserve(estimation_of_entries);   // rough estimate of the number of nonzeros
for(...)
{
  // ...
  tripletList.push_back(T(i,j,v_ij));         // one triplet per nonzero entry: (row, column, value)
}
SparseMatrixType matrix(rows,cols);
matrix.setFromTriplets(tripletList.begin(), tripletList.end());
// matrix is ready to be used.
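
As a concrete illustration (the sizes and values below are made up for demonstration), the following self-contained sketch assembles a small 3×3 matrix from triplets; note that duplicated (i,j) entries passed to setFromTriplets() are summed together:

#include <vector>
#include <Eigen/Sparse>
using namespace Eigen;

int main()
{
  typedef Triplet<double> T;
  std::vector<T> tripletList;
  tripletList.reserve(4);
  tripletList.push_back(T(0,0, 4.0));
  tripletList.push_back(T(1,2,-1.0));
  tripletList.push_back(T(2,1, 3.0));
  tripletList.push_back(T(2,1, 2.0));   // duplicate (2,1): summed to 5.0 by setFromTriplets()

  SparseMatrix<double> matrix(3,3);
  matrix.setFromTriplets(tripletList.begin(), tripletList.end());
  // matrix now holds three nonzeros: (0,0)=4, (1,2)=-1, (2,1)=5
}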

In some cases, however, higher performance and lower memory usage can be obtained by inserting the nonzeros directly into the destination matrix. A typical scenario of this approach is as follows:

SparseMatrix<double> matrix(rows,cols);          // default storage order is column-major
matrix.reserve(VectorXi::Constant(cols,6));      // reserve room for up to 6 nonzeros per column
for each m,n such that v_mn != 0                 // pseudocode: loop over the nonzero entries
  matrix.insert(m,n) = v_mn;                     // insert a new element
matrix.makeCompressed();                         // optional: squeeze out the remaining empty space
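
As a concrete, self-contained sketch of this pattern (the size and values are illustrative), here is a simple tridiagonal matrix built by direct insertion:

#include <Eigen/Sparse>
using namespace Eigen;

int main()
{
  const int n = 5;
  SparseMatrix<double> matrix(n,n);
  matrix.reserve(VectorXi::Constant(n,3));      // at most 3 nonzeros per column
  for (int j = 0; j < n; ++j)
  {
    matrix.insert(j,j) = 2.0;                   // diagonal entry
    if (j > 0)   matrix.insert(j-1,j) = -1.0;   // super-diagonal entry
    if (j < n-1) matrix.insert(j+1,j) = -1.0;   // sub-diagonal entry
  }
  matrix.makeCompressed();                      // release the unused reserved space
}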

Supported operators and functions

💁 Because of their special storage format, sparse matrices cannot offer the same level of flexibility as dense matrices. Eigen's sparse module exposes only a subset of the dense matrix API, namely the part that can be implemented efficiently. In the following, sm denotes a sparse matrix, sv a sparse vector, dm a dense matrix, and dv a dense vector.

Basic operations

Sparse expressions support most of the unary and binary coefficient-wise operations:

sm1.real()    sm1.imag()    -sm1                    0.5*sm1
sm1+sm2      sm1-sm2      sm1.cwiseProduct(sm2)

There is one important restriction, however: the storage orders must match. For instance, in the following example, sm1, sm2, and sm3 must all be row-major or all column-major:

sm4 = sm1 + sm2 + sm3;

Binary coefficient-wise operators can also mix sparse and dense expressions:

sm2 = sm1.cwiseProduct(dm1);
dm2 = sm1 + dm1;
dm2 = dm1 - sm1; 

Performance-wise, it is better to perform the addition or subtraction of a sparse and a dense matrix in two steps. For instance, instead of writing dm2 = sm1 + dm1, write:

dm2 = dm1;
dm2 += sm1;

This version fully exploits the higher performance of dense storage (no indirection, SIMD, etc.) while paying the cost of the slower sparse evaluation only on the few nonzero elements of the sparse matrix.

Sparse expressions also support transposition and adjoint:

sm1 = sm2.transpose();
sm1 = sm2.adjoint();

Matrix Products

📜 Eigen supports the following types of sparse matrix products:


Sparse-dense.

dv2 = sm1 * dv1;              // sparse matrix times dense vector
dm2 = dm1 * sm1.adjoint();    // dense matrix times the adjoint of a sparse matrix
dm2 = 2. * sm1 * dm1;         // scaled sparse-dense product

Symmetric sparse-dense.

dm2 = sm1.selfadjointView<>() * dm1;          // if all coefficients of sm1 are stored
dm2 = sm1.selfadjointView<Upper>() * dm1;     // if only the upper part of sm1 is stored
dm2 = sm1.selfadjointView<Lower>() * dm1;     // if only the lower part of sm1 is stored


Sparse-sparse.

sm3 = sm1 * sm2;
sm3 = 4 * sm1.adjoint() * sm2;


Permutations.

PermutationMatrix<Dynamic,Dynamic> P = ...;
sm2 = P * sm1;
sm2 = sm1 * P.inverse();
sm2 = sm1.transpose() * P;


Block operations

📜 For read access, sparse matrices expose the same API as dense matrices to access sub-matrices such as blocks, columns, and rows (see Block operations for a detailed introduction). However, for performance reasons, writing to a sub-sparse-matrix is much more limited: currently, only contiguous sets of columns (or rows) of a column-major (or row-major) SparseMatrix are writable. Moreover, this information has to be known at compile time, which rules out methods such as block(...) and corner*(...). The operations that can write to a sparse matrix are listed below, followed by a small usage sketch:

SparseMatrix<double,ColMajor> sm1;
sm1.col(j) = ...;                  // writable: contiguous columns of a column-major matrix
sm1.leftCols(ncols) = ...;
sm1.middleCols(j,ncols) = ...;
sm1.rightCols(ncols) = ...;

SparseMatrix<double,RowMajor> sm2;
sm2.row(i) = ...;                  // writable: contiguous rows of a row-major matrix
sm2.topRows(nrows) = ...;
sm2.middleRows(i,nrows) = ...;
sm2.bottomRows(nrows) = ...;
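
As a small usage sketch (the names and sizes are illustrative, and it assumes one assigns a sparse expression to one of the writable blocks listed above), contiguous columns of one column-major sparse matrix can be copied into another:

SparseMatrix<double,ColMajor> sm1(4,4), sm2(4,4);
sm2.insert(0,2) = 7.0;
sm2.insert(3,3) = -2.0;
sm2.makeCompressed();
sm1.rightCols(2) = sm2.rightCols(2);   // overwrite the last two columns of sm1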


Triangular and self adjoint views

📜 Just as with dense matrices, the triangularView() method can be used to address a triangular part of the matrix and to perform triangular solves with a dense right-hand side:

dm2 = sm1.triangularView<Lower>(dm1);
dv2 = sm1.transpose().triangularView<Upper>(dv1);
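
If you prefer the solve spelled out explicitly, here is a short sketch (assuming sm1 is a square SparseMatrix<double> whose lower triangular part forms an invertible triangular system, and dv1 is a VectorXd of matching size) using the triangular view's solve() member:

VectorXd x = sm1.triangularView<Lower>().solve(dv1);   // solves (lower triangular part of sm1) * x = dv1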

 

The selfadjointView() method allows for a number of operations:

Optimized sparse-dense matrix products:

dm2 = sm1.selfadjointView<>() * dm1;          // if all coefficients of sm1 are stored
dm2 = sm1.selfadjointView<Upper>() * dm1;     // if only the upper part of sm1 is stored
dm2 = sm1.selfadjointView<Lower>() * dm1;     // if only the lower part of sm1 is stored

Copy of triangular parts:

sm2 = sm1.selfadjointView<Upper>();                            // makes a full selfadjoint matrix from the upper triangular part of sm1
sm2.selfadjointView<Lower>() = sm1.selfadjointView<Upper>();   // copies the upper triangular part of sm1 into the lower triangular part of sm2

Use of symmetric permutation:

PermutationMatrix<Dynamic,Dynamic> P = ...;
sm2 = A.selfadjointView<Upper>().twistedBy(P);                              // compute P A P' from the upper triangular part of A, and store the full result
sm2.selfadjointView<Lower>() = A.selfadjointView<Lower>().twistedBy(P);     // compute P A P' from the lower triangular part of A, and store only the lower part

Frequently Asked Questions

What is a sparse matrix in data structure?

A sparse matrix is a matrix in which most of the entries are zero. In other words, a sparse matrix has very few nonzero elements.

What is the application of a sparse matrix?

Sparse matrices are useful for large-scale problems that dense matrices cannot handle. One such application is solving partial differential equations (PDEs) with the finite element method, a common numerical technique for PDEs.

What are the properties of a sparse matrix?

A matrix in which a large proportion of the values are zero is said to be sparse. By contrast, a matrix in which a large proportion of the values are nonzero is called dense.

Conclusion

Congratulations on finishing the blog! We have discussed the sparse matrix: the SparseMatrix class, matrix and vector properties, iterating over the nonzero coefficients, filling a sparse matrix, and the supported operators and functions.

We hope this blog has helped you enhance your knowledge of sparse matrices. If you'd like to learn more, check out the following links:

🔥 Matrix

🔥 Types of Matrix

🔥 01 Matrix

Please refer to our guided pathways on Code studio to learn more about DSA, Competitive Programming, JavaScript, System Design, and more. Enroll in our courses, and use the available sample exams and questions as a guide. For placement preparations, look at the interview experiences and interview package.

Please vote 🏆 our blogs if you find them helpful and informative!

Happy coding🤗
