Architecture of Self-Organizing Maps
SOMs have two layers: the first is the input layer, and the second is the output layer, also called the feature map. Unlike other ANN types, a SOM has no activation function in its neurons; the inputs are passed directly to the output layer without any preprocessing. Each neuron in a Self-Organizing Map is assigned a weight vector with the same dimensionality as the input space.
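For intuition, here is a minimal sketch of this layout, assuming a hypothetical 10x10 output grid and 3-dimensional inputs (the grid size, NumPy usage, and variable names are illustrative, not part of this article's implementation):

import numpy as np

# Hypothetical example: a 10x10 output grid for 3-dimensional inputs.
grid_rows, grid_cols, input_dim = 10, 10, 3

# Each of the 100 neurons gets a weight vector with the same
# dimensionality as the input space.
weights = np.random.rand(grid_rows, grid_cols, input_dim)
print(weights.shape)  # (10, 10, 3)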
Working of Self-Organizing Maps
Training a Self-Organizing Map starts with initializing the weight vectors. A sample vector is then selected at random, and the map is searched to find the weight vector that best represents that sample. Each weight vector has neighboring weights that are close to it on the grid. The winning weight is rewarded by being pulled closer to the sample vector, and its neighbors move toward the sample as well. This is how the map grows and forms different shapes; in a 2D feature space, it usually forms a square or hexagonal grid. This whole process is repeated many times, often more than 1,000 iterations.
As mentioned before, SOM doesn't use backpropagation with Stochastic Gradient Descent (SGD) to update its weights. This unsupervised artificial neural network uses competitive learning instead.
Competitive learning is based on three processes:
- Competition
- Cooperation
- Adaptation
In competition, we compute the distance between each neuron in the output layer and the current input; the neuron with the minimum distance is the competition's winner. The Euclidean metric is commonly used to calculate this distance. In cooperation, the winner's neighbors are selected using a neighborhood kernel function, which depends on time (incremented with each new input) and on the grid distance between the winner and each other neuron (how far a neuron is from the winner). In adaptation, the winner and its selected neighbors are updated, but not all by the same factor: the farther a neuron is from the winner, the smaller its update, and each update moves the neuron's weight vector closer to the input data.
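A Gaussian kernel centered on the winner is one common choice of neighborhood function. The sketch below illustrates the idea; the grid coordinates, decay schedule, and function names are assumptions for illustration, not part of the article's implementation:

import numpy as np

# Gaussian neighborhood: the winner's influence decays with grid distance.
def gaussian_neighborhood(winner_pos, neuron_pos, sigma):
    d2 = np.sum((np.array(winner_pos) - np.array(neuron_pos)) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2))

# The radius sigma shrinks over time, so fewer neighbors are updated
# as training progresses.
def sigma_at(t, sigma_initial=2.0, time_constant=100.0):
    return sigma_initial * np.exp(-t / time_constant)

print(gaussian_neighborhood((0, 0), (1, 1), sigma_at(0)))    # strong pull early
print(gaussian_neighborhood((0, 0), (1, 1), sigma_at(500)))  # near zero later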
Algorithm
- Initialize the weights to small random values.
- For each input vector, examine every node to find the weight vector most similar to the input. This node's weight is the winning vector.
- Calculate the neighborhood of the winning vector and update the winner and its neighbors (the update rule is sketched after this list). The number of neighbors tends to decrease over time.
- Repeat steps two and three for N iterations.
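The step-three update typically follows the rule new_weight = old_weight + alpha * h * (input - old_weight), where alpha is the learning rate and h is the neighborhood value for that neuron. A minimal sketch for a single neuron (the names are illustrative):

# x is the input sample, w the neuron's weight vector, alpha the
# learning rate, and h the neighborhood value computed for this neuron.
def som_step(w, x, alpha, h):
    return [wi + alpha * h * (xi - wi) for wi, xi in zip(w, x)]

print(som_step([0.2, 0.6], [1.0, 0.0], alpha=0.5, h=1.0))  # [0.6, 0.3]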
Implementation
As seen above, we only need two functions: one to declare the winner and a second to update the values accordingly. First, we take the input samples and initialize the weights.
Both functions are methods of a SOM class. We calculate the Euclidean distance to declare the winner:
import math

class SOM:
    # Return the index of the output neuron whose weight vector is
    # closest (in Euclidean distance) to the input sample.
    def winner(self, weights, data):
        d0 = 0
        d1 = 0
        for i in range(len(data)):
            d0 = d0 + math.pow((data[i] - weights[0][i]), 2)
            d1 = d1 + math.pow((data[i] - weights[1][i]), 2)
        # The winner is the neuron with the *smaller* distance.
        if d0 < d1:
            return 0
        else:
            return 1
The function to update the winning vector (a method of the same SOM class):
    # Move each component of the winning weight vector a fraction
    # alpha closer to the training sample.
    def update(self, weights, data, J, alpha):
        for i in range(len(weights[J])):
            weights[J][i] = weights[J][i] + alpha * (data[i] - weights[J][i])
        return weights
def main():
    # Training samples: four 4-dimensional vectors forming two rough clusters.
    data = [[1, 1, 0, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]
    m = len(data)
    # Weight initialization: one weight vector per output neuron.
    weights = [[0.2, 0.6, 0.7, 0.9], [0.1, 0.4, 0.7, 0.3]]
    ob = SOM()
    epochs = 3
    alpha = 0.5
    for i in range(epochs):
        for j in range(m):
            # Training sample
            sample = data[j]
            # Compute winner vector
            win = ob.winner(weights, sample)
            # Update winning vector
            weights = ob.update(weights, sample, win, alpha)
    print("Trained weights:", weights)

if __name__ == "__main__":
    main()
That's what a from-scratch implementation of SOM looks like. We can also use the minisom library to implement the same.
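For example, a minisom sketch of the same two-cluster problem might look like the following (the 1x2 grid mirrors the two output neurons used above; the sigma, learning_rate, and iteration count are typical starting values, not tuned ones):

from minisom import MiniSom
import numpy as np

data = np.array([[1, 1, 0, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]])

# A 1x2 grid of output neurons for 4-dimensional inputs.
som = MiniSom(1, 2, 4, sigma=0.5, learning_rate=0.5)
som.random_weights_init(data)
som.train_random(data, 100)

# Map each sample to the coordinates of its winning neuron.
for sample in data:
    print(sample, "->", som.winner(sample))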
FAQs
- What does a self-organizing map do?
A self-organizing map (SOM) is an unsupervised neural network that reduces the dimensionality of its input data to represent the data's distribution as a map. As a result, similar samples are mapped close together.
- What are the benefits of SOM?
The main advantage of using a Self-Organizing Map is that the data is easily interpreted and understood. The reduction of dimensionality and the grid clustering make it easy to observe similarities in the data.
- How does SOM learn?
SOM is trained using unsupervised learning. It is slightly different from other artificial neural networks: SOM doesn't learn by backpropagation with SGD; it uses competitive learning to adjust the weights of its neurons.
- Are self-organizing maps useful?
A Self-Organizing Map (SOM) provides a data visualization technique that helps us understand high-dimensional data by reducing its dimensions to a simplified feature map. SOM also supports clustering by grouping similar data together.
Key Takeaways
Let us briefly recap the article.
First, we saw the meaning of self-organizing maps and their basic architecture. Moving on, we studied how they work and where they are used. We then saw how SOMs help reduce the dimensionality of high-dimensional data. Lastly, we covered the algorithm followed by self-organizing maps and its implementation.
I hope you all like this article.
Happy Learning Ninjas!