Table of contents
1. Introduction
2. Comparisons
3. Frequently Asked Questions
4. Key Takeaways
Last Updated: Mar 27, 2024

Autoencoders vs Principal Component Analysis

Author: Rajkeshav

Introduction

Both Principal Component Analysis (PCA) and Autoencoders can be used for dimensionality reduction; see the detailed articles on Autoencoder and PCA for more information. It is therefore natural to ask what relationship holds between PCA and Autoencoders.

Principal component analysis applies a linear transformation that maps data from an n-dimensional space to a lower-dimensional space, for example two or three dimensions. The mapping is always linear; its output dimension depends on the number of principal components we choose to keep.
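As a minimal illustrative sketch (assuming scikit-learn and NumPy are available; the toy data here is made up), this is how PCA projects data onto its first two principal components, together with the equivalent explicit linear map:

```python
# Minimal PCA sketch using scikit-learn (assumed available).
import numpy as np
from sklearn.decomposition import PCA

# Toy data: 200 samples in 5 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

# Keep the first two principal components (a linear mapping).
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)  # shape (200, 2)

# PCA is linear: the projection is just centering plus a matrix product.
X_manual = (X - X.mean(axis=0)) @ pca.components_.T
print(np.allclose(X_reduced, X_manual))  # True
```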

Comparisons

Autoencoders, being neural networks, can implement nonlinear functions, so the mapping they learn can be nonlinear. In this sense, Autoencoders are a generalisation of principal component analysis: whatever PCA can do, an Autoencoder can do as well. But an Autoencoder can do more, because it can learn a nonlinear mapping rather than only a linear one, whereas PCA is restricted to linear mappings.
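To make this contrast concrete, here is a minimal NumPy sketch (with hypothetical, untrained weights, purely for illustration) of the two kinds of encoders: PCA applies a single linear map, while an autoencoder's encoder interleaves linear maps with nonlinear activations:

```python
import numpy as np

def pca_encode(X, components, mean):
    # PCA: a purely linear map (centering followed by a matrix product).
    return (X - mean) @ components.T

def autoencoder_encode(X, W1, b1, W2, b2):
    # Autoencoder encoder: linear maps interleaved with a nonlinearity,
    # so the overall mapping from input to code is nonlinear.
    h = np.maximum(0.0, X @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2                # bottleneck code

# Hypothetical shapes: 5-dimensional inputs reduced to a 2-dimensional code.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
W1, b1 = rng.normal(size=(5, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
print(autoencoder_encode(X, W1, b1, W2, b2).shape)  # (4, 2)
```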

 

[Figure: red data points approximated by a straight red line under PCA versus a nonlinear blue curve learned by an Autoencoder. Source: researchgate.net]

 

Given the set of data shown as red dots in the figure, suppose we want to reduce its dimensionality with principal component analysis using two principal components, PC1 and PC2, transforming the data onto the plane defined by PC1 and PC2. Because this is a linear transformation, PCA can only approximate the data by points on a straight line in the PC1-PC2 space. Autoencoders, in contrast, can implement nonlinear functions, so they can extract nonlinear structure present in the data. For the same data, an Autoencoder may learn a representation that follows the nonlinear blue curve, so all the red data points can be represented by points on that curve, whereas principal components describe the data set by points on the red line.

 

 

This is another reconstruction example, where the dimensionality is reduced from 784 to 30. The top row shows the original images, and the middle row shows the reconstruction by an Autoencoder whose activation function is ReLU (rectified linear unit).

The bottom row shows the reconstruction using principal component analysis. The reconstruction produced by the Autoencoder is much better than the one from principal components, even though the reduced dimension is the same. This illustrates the power of Autoencoders over principal components.
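As a rough sketch of this comparison (the exact architecture behind the figure is not specified, so the layer sizes and training settings below are assumptions; TensorFlow/Keras and scikit-learn are assumed available), one could reduce MNIST digits from 784 to 30 dimensions with both methods and compare reconstruction errors:

```python
# Hedged sketch: compare 784 -> 30 reconstruction by PCA and an autoencoder.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

(X_train, _), (X_test, _) = tf.keras.datasets.mnist.load_data()
X_train = X_train.reshape(-1, 784).astype("float32") / 255.0
X_test = X_test.reshape(-1, 784).astype("float32") / 255.0

# PCA reconstruction with 30 components (linear).
pca = PCA(n_components=30).fit(X_train)
X_pca = pca.inverse_transform(pca.transform(X_test))

# Autoencoder with a 30-unit bottleneck and ReLU activations (nonlinear).
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(30, activation="relu"),  # bottleneck
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=10, batch_size=256, verbose=0)
X_ae = autoencoder.predict(X_test, verbose=0)

# Lower mean squared error means better reconstruction.
print("PCA MSE:        ", np.mean((X_test - X_pca) ** 2))
print("Autoencoder MSE:", np.mean((X_test - X_ae) ** 2))
```

With enough training, the nonlinear Autoencoder typically achieves a lower reconstruction error than 30-component PCA at the same code size.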

Frequently Asked Questions 

  1. What is an Autoencoder?
    An Autoencoder is an unsupervised learning technique that forces a feedforward or deep neural network to learn a compressed representation of its input (representation learning). Autoencoders are handy tools for dimensionality reduction.
     
  2. What is Principal component analysis?
    PCA is a dimensionality reduction technique: it reduces the number of dimensions by applying a linear transformation to the data.
     
  3. What is the Bottleneck layer in Autoencoder?
    The bottleneck layer of an Autoencoder squeezes the input information through a layer with restricted capacity; the decoder side then tries to reconstruct the original input from that compressed code.
     
  4. How is the Bottleneck layer different from the input and output layer?
    The bottleneck layer has far fewer nodes than the input and output layers.
     
  5. What is Linear transformation?
    A linear transformation takes a vector as input and maps it to a new output vector; it can always be written as multiplication by a matrix, as in the sketch below.
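A minimal sketch of a linear transformation as a matrix product (the matrix and vectors here are made up for illustration):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])  # the transformation matrix
x = np.array([1.0, 1.0])    # input vector
y = A @ x                   # output vector: [2., 3.]

# Linearity: A(a*x + b*z) == a*A(x) + b*A(z)
z = np.array([4.0, -1.0])
print(np.allclose(A @ (2 * x + 3 * z), 2 * (A @ x) + 3 * (A @ z)))  # True
```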

Key Takeaways

We looked at two very important dimensionality reduction techniques, PCA and Autoencoders, and saw that Autoencoders generalise PCA by allowing nonlinear mappings. I hope you found it interesting. Visit here for more information.

Recommended reading: 

Difference Between Compiler and Interpreter and Assembler
