Table of contents
1. Introduction
2. Use of Self-Organizing Maps
3. Architecture Of Self-Organizing Maps
4. Working of Self-Organizing Maps
5. Algorithm
6. Implementation
7. FAQs
8. Key Takeaways
Last Updated: Mar 27, 2024

Self Organizing Maps

Author Mayank Goyal

Introduction

With the increasing pace of developments in innovative and artificially intelligent systems, the use of Neural Networks has become highly apparent. Neural Networks use processing inspired by the human brain to develop algorithms that can model and understand complex patterns. There are several types of neural networks, and each has its unique use. The Self-Organizing Map, also known as Kohonen's Map, is one such variant of the neural network.

A Self-Organizing Map (SOM) is a type of Artificial Neural Network inspired by biological models of neural systems from the 1970s. Self-Organizing Maps are trained with unsupervised learning, which makes them slightly different from other artificial neural networks: a SOM doesn't learn by backpropagation with SGD, but instead uses competitive learning to adjust the weights in its neurons. Self-Organizing Maps are used for clustering and mapping (or dimensionality reduction), projecting multidimensional data onto a lower-dimensional space so that complex problems are reduced to a form that is straightforward to interpret. They also help us discover correlations in the data.


Use of Self-Organizing Maps

One of the significant advantages of SOMs is dimension reduction. Instead of dealing with hundreds of rows and columns, the data is processed into a simplified feature map; that is why we call it a self-organizing map (SOM). The simplified map provides a two-dimensional representation of the same data set, which is easier to read. Using Principal Component Analysis (PCA) on high-dimensional data may cause information loss when the dimensionality is reduced to two. If the data has many dimensions and every dimension present is valuable, Self-Organizing Maps can be handier than PCA for dimensionality reduction.

Architecture Of Self-Organizing Maps

SOMs have two layers: the first is the input layer, and the second is the output layer, or feature map. Unlike other ANN types, a SOM doesn't have an activation function in its neurons; the inputs are connected directly to the output layer through the weights, without any preprocessing. Each neuron in a Self-Organizing Map is assigned a weight vector with the same dimensionality as the input space.
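
To make this concrete, here is a minimal sketch (not part of the original article) of how such a feature map could be represented in plain Python: an assumed 5x5 output grid in which every neuron holds a weight vector with the same dimensionality as the 4-dimensional input samples used later in this article.

import random

# Assumed sizes for illustration: a 5x5 output grid and 4-dimensional inputs.
grid_rows, grid_cols, input_dim = 5, 5, 4

# weights[r][c] is the weight vector of the neuron at grid position (r, c);
# it lives in the same space as the input samples.
weights = [[[random.random() for _ in range(input_dim)]
            for _ in range(grid_cols)]
           for _ in range(grid_rows)]

print(len(weights), len(weights[0]), len(weights[0][0]))  # prints: 5 5 4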

Working of Self-Organizing Maps

The Self-Organizing Map's mapping steps start with initializing the weight vectors. A random vector is then selected as the sample, and the weight vectors are searched to find the one that best represents the chosen sample. Each weight vector has neighboring weights that are close to it on the grid. The winning weight is then rewarded by being pulled closer to the chosen sample vector, and its neighbors are pulled along with it. This helps the map grow and form different shapes; in a 2D feature space, the neurons usually form a square or hexagonal grid. This whole process is repeated many times, typically more than 1,000 iterations.

As mentioned before, a SOM doesn't use backpropagation with Stochastic Gradient Descent (SGD) to update its weights. This unsupervised artificial neural network uses competitive learning instead.

Competitive learning is based on three processes:

  • Competition
  • Cooperation
  • Adaptation

In the competition step, we compute the distance between each neuron in the output layer and the input data; the neuron with the minimum distance is the winner of the competition. The Euclidean metric is commonly used to calculate this distance. In the cooperation step, the winner's neighbors are chosen with a neighborhood kernel function. This function depends on time (incremented with each new input sample) and on the distance between the winner and the other neurons (how far each neuron is from the winner on the grid). In the adaptation step, the winner neuron and its chosen neighbors are updated, but not all by the same factor: the closer a neuron is to the winner, the larger its update, and the adjustment toward the input data shrinks as time goes on. A minimal sketch of these three steps is shown below.
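
The following sketch shows one possible way to express these three steps in Python for a small one-dimensional output grid. The Gaussian neighborhood kernel, the exponential decay of alpha and sigma, and the constants alpha0, sigma0 and tau are illustrative assumptions and are not part of the implementation given later in this article.

import math

def train_step(weights, sample, t, alpha0=0.5, sigma0=1.0, tau=1000.0):
    # Competition: the neuron whose weight vector is closest to the
    # sample (squared Euclidean distance) wins.
    dists = [sum((s - w) ** 2 for s, w in zip(sample, wv)) for wv in weights]
    winner = dists.index(min(dists))

    # Both the learning rate and the neighborhood radius shrink with time t.
    alpha = alpha0 * math.exp(-t / tau)
    sigma = sigma0 * math.exp(-t / tau)

    for j, wv in enumerate(weights):
        # Cooperation: a Gaussian kernel of the grid distance to the winner
        # decides how strongly each neighbor is pulled along.
        h = math.exp(-((j - winner) ** 2) / (2 * sigma ** 2))
        # Adaptation: move the weight vector toward the sample.
        weights[j] = [w + alpha * h * (s - w) for w, s in zip(wv, sample)]

    return weights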

Algorithm

  • Initialize the weights to small random values.
  • Examine every node to find the one whose weights are most similar to the input vector; this node is the winning vector.
  • The neighborhood of the winning vector is then calculated. The number of neighbors tends to decrease over time.
  • Repeat steps two and three for N iterations.


Implementation

As seen above, we only need two functions: one to declare the winner and a second to update the weights accordingly. First, we take the input samples and initialize the weights.

We calculate the Euclidean distance to declare the winner:

import math

class SOM:

    # Return the index (0 or 1) of the output neuron whose weight vector
    # is closest to the input sample (squared Euclidean distance).
    def winner(self, weights, data):
        d0 = 0
        d1 = 0

        for i in range(len(data)):
            d0 = d0 + math.pow((data[i] - weights[0][i]), 2)
            d1 = d1 + math.pow((data[i] - weights[1][i]), 2)

        return 0 if d0 < d1 else 1

Function to update the winning vector

    # Move the winning neuron's weight vector closer to the input sample
    # by a fraction alpha (the learning rate).
    def update(self, weights, data, J, alpha):
        for i in range(len(data)):
            weights[J][i] = weights[J][i] + alpha * (data[i] - weights[J][i])

        return weights

 

def main():

    # Training samples
    data = [[1, 1, 0, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]

    m, n = len(data), len(data[0])

    # Weight initialization (two output neurons, n-dimensional weight vectors)
    weights = [[0.2, 0.6, 0.7, 0.9], [0.1, 0.4, 0.7, 0.3]]

    ob = SOM()

    epochs = 3
    alpha = 0.5

    for i in range(epochs):
        for j in range(m):

            # Training sample
            sample = data[j]

            # Compute winner vector
            win = ob.winner(weights, sample)

            # Update winning vector
            weights = ob.update(weights, sample, win, alpha)

    print("Trained weights:", weights)


if __name__ == "__main__":
    main()

That's what the pseudo-code of a SOM looks like. We can also use the minisom library to implement the same, as sketched below.
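
For reference, a rough equivalent using minisom might look like the sketch below. The 2x2 grid size, sigma and the number of iterations are arbitrary choices for this toy data, and the library must be installed separately (pip install minisom).

from minisom import MiniSom

# Same toy data as above, mapped onto a 2x2 grid of neurons.
data = [[1, 1, 0, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]

som = MiniSom(2, 2, input_len=4, sigma=0.5, learning_rate=0.5)
som.random_weights_init(data)
som.train_random(data, num_iteration=100)

# winner() returns the grid coordinates of the best matching unit for a sample.
for sample in data:
    print(sample, "->", som.winner(sample))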

FAQs

  1. What does a self-organizing map do?
    A self-organizing map (SOM) is an unsupervised neural network that reduces the dimensionality of its input data to represent its distribution as a map. Therefore, a SOM forms a map where similar samples are mapped close together.
     
  2. What are the benefits of SOM?
    The main advantage of using a Self Organizing Map is that the data is easily interpreted and understood. The reduction of dimensionality and grid clustering makes it easy to observe similarities in the data.
     
  3. How does SOM learn?
    SOM is trained using unsupervised learning. It is slightly different from other artificial neural networks: a SOM doesn't learn by backpropagation with SGD; it uses competitive learning to adjust the weights in its neurons.
     
  4. Are self-organizing maps useful?
    A Self-Organizing Map (SOM) provides a data visualization technique that helps us understand high-dimensional data by reducing the dimensions of the data to a simplified feature map. A SOM also supports clustering by grouping similar data together.

Key Takeaways

Let us briefly summarize the article.

Firstly, we saw the meaning of self-organizing maps and their basic architecture. Moving on, we studied their working and uses. Later, we saw how SOMs help reduce the dimensionality of high-dimensional data. Lastly, we saw the algorithm followed by self-organizing maps and their implementation.

I hope you all like this article.

Happy Learning Ninjas!
