Solving mathematical equations by hand can be challenging, so the Eigen library was created for such mathematical operations, making calculations simpler and more accessible. To learn more, see Eigen values and Eigen Vectors.

In this blog, we will discuss BLAS/LAPACK and Intel MKL from Eigen in detail. Let's get started!

What are BLAS and LAPACK?

BLAS stands for Basic Linear Algebra Subprograms. It is a library of vector-vector, matrix-vector, and matrix-matrix operations.

LAPACK (Linear Algebra Package) is a library of dense and banded matrix linear algebra routines for tasks like eigenvalue and singular value decomposition and solving linear systems. LAPACK95 defines Fortran 95 interfaces for all of the routines.

Using BLAS or Lapack from Eigen

Since version 3.3, Eigen can use any F77-compatible BLAS or LAPACK library as a backend for dense matrix products and dense matrix decompositions. You may employ libraries such as Intel® MKL, OpenBLAS, and Netlib LAPACK.

To use an external BLAS or LAPACK library, you must link your application to the relevant libraries and their dependencies. For LAPACK, you must also link to the standard Lapacke library, which serves as a thin layer between Eigen's C++ code and the LAPACK F77 interface. Then, you must define one or more of the macros listed below to make use of them.

When doing so, some of Eigen's algorithms are silently substituted with calls to BLAS or LAPACK routines. Only objects that are dynamic or large enough and use one of the four standard scalar types (float, double, complex<float>, or complex<double>) are eligible for these substitutions.

Macros in Eigen

The following macros are used to substitute several of Eigen's algorithms.

1. EIGEN_USE_BLAS: Enables the use of external BLAS level 2 and level 3 routines.

2. EIGEN_USE_LAPACKE: Enables the use of external LAPACK routines via the Lapacke C interface to LAPACK.

3. EIGEN_USE_LAPACKE_STRICT: Same as EIGEN_USE_LAPACKE, but algorithms of lower numerical robustness are disabled.

4. EIGEN_USE_MKL_VML: Enables the use of Intel VML (vector operations).

5. EIGEN_USE_MKL_ALL: Defines EIGEN_USE_BLAS, EIGEN_USE_LAPACKE, and EIGEN_USE_MKL_VML.
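As a sketch of how these macros are typically used: the defines must appear before any Eigen header, and the link line shown in the comment is purely illustrative (it assumes an OpenBLAS and Lapacke installation, which may differ on your system).

```cpp
// Enable external BLAS and Lapacke-based LAPACK substitutions.
// These defines must come before any Eigen header is included.
#define EIGEN_USE_BLAS
#define EIGEN_USE_LAPACKE
#include <Eigen/Dense>

// Illustrative link line (e.g., against OpenBLAS and Lapacke):
//   g++ main.cpp -lopenblas -llapacke
```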

Operations using BLAS/LAPACK

The operations performed using BLAS/LAPACK are:

1. Matrix-matrix Operations: Enabled by the EIGEN_USE_BLAS macro. The BLAS routines for these operations are ?gemm, ?symm/?hemm, ?trmm, and dsyrk/ssyrk.

In the example below, ‘m1’ and ‘m2’ are two matrices on which we perform matrix-matrix operations.

m1 * m2.transpose();                              // ?gemm
m1.selfadjointView<Lower>() * m2;                 // ?symm/?hemm
m1 * m2.triangularView<Upper>();                  // ?trmm
m1.selfadjointView<Lower>().rankUpdate(m2, 1.0);  // dsyrk/ssyrk

2. Matrix-vector Operations: Also enabled by the EIGEN_USE_BLAS macro. The BLAS routines for these operations are ?gemv, ?symv/?hemv, and ?trmv.

In the example below, m1 is a matrix and b is a vector, and we perform matrix-vector operations.

3. LU Decomposition: Enabled by the EIGEN_USE_LAPACKE and EIGEN_USE_LAPACKE_STRICT macros. The LAPACK routine for this operation is ?getrf.

In the example below, the matrix m1 is decomposed into lower and upper triangular factors while solving a linear system.

v1 = m1.lu().solve(v2);

4. Cholesky Decomposition: Also enabled by the EIGEN_USE_LAPACKE and EIGEN_USE_LAPACKE_STRICT macros. The LAPACK routine for this operation is ?potrf.

In the example below, we perform a Cholesky decomposition, factoring matrix ‘m2’ into a lower triangular matrix and its transpose.

v1 = m2.selfadjointView<Upper>().llt().solve(v2);

5. QR Decomposition: Also enabled by the EIGEN_USE_LAPACKE and EIGEN_USE_LAPACKE_STRICT macros. The LAPACK routines for these operations are ?geqrf and ?geqp3.

In the example below, we perform a QR decomposition, factoring matrix ‘m1’ into an orthogonal matrix and an upper triangular matrix.

m1.householderQr();
m1.colPivHouseholderQr();

6. Singular Value Decomposition: Enabled by the EIGEN_USE_LAPACKE macro. The LAPACK routine for this operation is ?gesvd.

In the example below, we perform a singular value decomposition, factoring matrix ‘m1’ into an orthonormal matrix, a diagonal matrix of singular values, and the transpose of another orthonormal matrix.

7. Eigenvalue Decomposition: Also enabled by the EIGEN_USE_LAPACKE and EIGEN_USE_LAPACKE_STRICT macros. The LAPACK routines for these operations are ?gees, ?syev/?heev, and ?potrf.

In the example below, we perform an eigenvalue decomposition, factoring matrix m1 into a matrix of eigenvectors, a diagonal matrix of eigenvalues, and the inverse of the eigenvector matrix.

8. Schur Decomposition: Also enabled by the EIGEN_USE_LAPACKE and EIGEN_USE_LAPACKE_STRICT macros. The LAPACK routine for this operation is ?gees.

In the example below, we perform a Schur decomposition, factoring matrix m1 into a unitary matrix, an upper (quasi-)triangular matrix, and the conjugate transpose of the unitary matrix.

What is Intel MKL

Intel MKL (Math Kernel Library) is a math library that provides highly optimized implementations of BLAS and LAPACK (among other routines) and tunes code for current and upcoming Intel processor generations. It works with the linking and threading models, operating systems, compilers, and languages of your choice.

Intel MKL supports the Intel64 and IA32 architectures on Linux, macOS, and Windows.

Note: Intel MKL is proprietary software, so users must purchase it or register for a (free) community license before using it in their products. Additionally, the license of the user's product must permit linking to proprietary software, which excludes any unmodified GPL-licensed software.

Using Intel MKL from Eigen

Since version 3.1, Eigen users with an installed copy of Intel MKL 10.3 (or later) can take advantage of built-in Intel® Math Kernel Library (MKL) optimizations.

Steps to use Intel MKL through Eigen:

1. Define the EIGEN_USE_MKL_ALL macro before including any Eigen header.

2. Link MKL libraries into your program.

3. On a 64-bit system, you must use the LP64 interface (not the ILP64 one).
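The steps above can be sketched as follows; the link line in the comment is illustrative only, since the exact MKL library names and paths depend on your installation:

```cpp
// Step 1: define the macro before any Eigen header.
#define EIGEN_USE_MKL_ALL
#include <Eigen/Dense>

// Steps 2 and 3 happen at build time. A typical link line using the
// LP64 interface (paths and library names are illustrative) might be:
//   g++ -I/path/to/eigen main.cpp -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
```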

While using Intel MKL through Eigen, some of Eigen's algorithms are silently substituted with calls to Intel MKL routines. Only objects that are dynamic or large enough and use one of the four standard scalar types (float, double, complex<float>, or complex<double>) are eligible for these substitutions. To substitute specific parts of Eigen's algorithms, see Macros in Eigen.

The EIGEN_USE_BLAS and EIGEN_USE_LAPACKE macros can be combined with EIGEN_USE_MKL to explicitly tell Eigen that Intel MKL is the BLAS/LAPACK implementation in use. The main effect is to enable MKL's direct call feature, which can improve the performance of some MKL BLAS routines. Direct calls can be disabled by defining EIGEN_MKL_NO_DIRECT_CALL.

Vector Operations using Intel VML

The table below lists all of the functions that EIGEN_USE_MKL_VML supports.

MKL Routine      Code Example

v?Sin            v2 = v1.array().sin();
v?Asin           v2 = v1.array().asin();
v?Cos            v2 = v1.array().cos();
v?Acos           v2 = v1.array().acos();
v?Tan            v2 = v1.array().tan();
v?Exp            v2 = v1.array().exp();
v?Ln             v2 = v1.array().log();
v?Sqrt           v2 = v1.array().sqrt();
v?Sqr            v2 = v1.array().square();

Frequently Asked Questions

Is LAPACK parallel?

LAPACK is a collection of Fortran subroutines implementing a wide range of linear algebra algorithms. It was designed to be adaptable to various parallel processing environments, typically by relying on a parallel BLAS implementation underneath.

Does LAPACK depend on BLAS?

To provide effective and portable computational building blocks for its routines, LAPACK depends on an underlying BLAS implementation.

What are some common tasks that the Eigen library can carry out?

Eigen can carry out mathematical operations like vector addition and subtraction, dot products, and cross products, and it supports the definition of matrices, vectors, and (through its Geometry module) quaternions.

Is Eigen faster than Lapack?

For many dense operations, Eigen can match or outperform a reference LAPACK build because its templated C++ code is aggressively optimized by the compiler (e.g., with -O3). However, the comparison depends on the specific LAPACK implementation, the operation, and the compiler flags used.

What is the difference between BLAS and LAPACK?

BLAS is a library of vector-vector, matrix-vector, and matrix-matrix operations. In contrast, LAPACK is a library of dense and banded matrix linear algebra routines for tasks like eigenvalue and singular value decomposition and solving linear systems.

Conclusion

Congratulations on finishing the blog! We have discussed BLAS/LAPACK and Intel MKL from Eigen. We further looked at macros and the matrix and vector operations they enable in Eigen.

We hope this blog has helped you understand BLAS/LAPACK and Intel MKL from Eigen. Do not stop learning! We recommend you read some of our other articles related to Eigen.

If you have just started your learning process and are looking for questions asked by tech giants like Amazon, Microsoft, and Uber, then for placement preparation you should look at our problems, interview experiences, and interview bundles.