**Introduction**

Both **Principal Component Analysis (PCA)** and **Autoencoders** can be used for dimensionality reduction, so it is natural to ask what relationship exists between the two techniques.

In Principal Component Analysis, the data is mapped by a linear transformation from the original n-dimensional space to a lower-dimensional space, for example two or three dimensions. The dimensionality of the output depends on the number of principal components we choose to keep.
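As a minimal sketch of this linear mapping (not from the original article), PCA can be computed with an SVD of the centred data; the helper name `pca_project` and the random data are illustrative assumptions:

```python
import numpy as np

def pca_project(X, k):
    """Project n-dimensional data X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                  # centre the data
    # SVD of the centred data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # linear map: n dims -> k dims

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                # 100 samples, 5 features
Z = pca_project(X, 2)                        # reduced to 2 dimensions
print(Z.shape)                               # (100, 2)
```

Note that the whole reduction is a single matrix multiplication, which is exactly what makes PCA a linear method.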

**Comparisons**

Autoencoders, being neural networks, can implement nonlinear functions, so the mapping they learn can be nonlinear. In this sense, Autoencoders are a generalisation of principal components: whatever principal component analysis can do, an Autoencoder can also do. But the Autoencoder can do something more, because its mapping can be nonlinear rather than only linear, whereas in the case of principal components we have only a linear mapping.
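To make the contrast concrete, here is a hedged sketch of a nonlinear autoencoder (not code from the article). scikit-learn has no dedicated autoencoder class, but an `MLPRegressor` trained to reproduce its own input through a narrow hidden layer behaves as one; the curve-shaped data and the layer sizes are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Data lying near a 1-D nonlinear curve embedded in 3-D space
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([t, t**2, np.sin(3 * t)]) + rng.normal(scale=0.01, size=(200, 3))

# Input == target turns the MLP into an autoencoder; the 2-unit
# bottleneck in the middle is the reduced nonlinear representation.
ae = MLPRegressor(hidden_layer_sizes=(16, 2, 16), activation="tanh",
                  max_iter=5000, random_state=0)
ae.fit(X, X)
X_hat = ae.predict(X)        # reconstruction from the bottleneck
print(X_hat.shape)           # (200, 3)
```

Because the `tanh` activations are nonlinear, the network can bend its reduced representation along the curve in the data, which a purely linear PCA projection cannot do.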

*Source: researchgate.net*

Given the set of data shown as red dots in this figure, we want to transform it using principal component analysis into two principal components, PC1 and PC2, that is, onto the plane defined by PC1 and PC2. Because this is a linear transformation, PCA can only approximate the data by a straight line in the PC1-PC2 space.

In contrast, Autoencoders are capable of implementing nonlinear functions, so they can extract nonlinear structure present in the data. For the same data, an Autoencoder can learn a representation that is not linear but nonlinear, as shown by the blue curve: all the red data points can now be represented by points on this blue curve, whereas principal components can only describe the data set by points on the red line.

This is another reconstruction example, where the dimension is reduced from 784 to 30. The top row shows the original images, and the middle row shows the reconstruction by the Autoencoder, whose activation function is ReLU (rectified linear unit).

The bottom row shows the reconstruction using principal component analysis. Even though the reduced dimension is the same, the reconstruction produced by the Autoencoder is noticeably better than the one from principal components. That illustrates the extra power of Autoencoders over principal components.
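The PCA half of the 784 → 30 → 784 pipeline described above can be sketched as follows (a minimal illustration, not the article's experiment; random data stands in for the flattened 28×28 images):

```python
import numpy as np

def pca_reconstruct(X, k):
    """Reduce X to k dimensions with PCA, then map back to the original space."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]                       # top-k principal directions
    Z = Xc @ W.T                     # encode: project 784 dims -> k dims
    return Z @ W + mu                # decode: linear reconstruction

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 784))       # stand-in for flattened 28x28 images
X30 = pca_reconstruct(X, 30)         # 784 -> 30 -> 784, as in the example above
err = np.mean((X - X30) ** 2)        # reconstruction error left by 30 components
print(X30.shape)                     # (64, 784)
```

An autoencoder with the same 30-dimensional bottleneck replaces the two matrix multiplications with nonlinear encoder and decoder networks, which is why it can achieve a lower reconstruction error at the same reduced dimension.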