Table of contents

1. Introduction
2. Single-layered perceptron
   2.1. Implementing AND function
   2.2. Plotting the decision boundary
3. Multi-layered perceptron
   3.1. Implementation
   3.2. Plotting the decision boundary
4. FAQs
5. Key Takeaways
Last Updated: Mar 27, 2024

Visualizing Decision Boundary (Perceptron)

Author: Soham Medewar

Introduction

Before going on to the topic, let us revise what a perceptron is. The perceptron is the simplest model of an artificial neural network (ANN). In other words, it is a supervised learning algorithm that performs binary classification (and, via schemes such as one-vs-rest, multi-class classification).

There are two types of perceptrons. They are as follows:

  1. Single-layered perceptron.
  2. Multi-layered perceptron.

Let us see about these two models in detail.

Single-layered perceptron

A single-layered perceptron is a simple artificial neural network that works in a purely feed-forward manner. The model works on a threshold transfer function. It is one of the simplest artificial neural networks, used for binary classification of linearly separable objects. The output of the model is 1 or 0.

The single-layered perceptron has no hidden layers; it includes only two layers, one input and one output. The output is determined by the sum of the products of the weights and the input values. Below is the basic structure of a single-layered perceptron (w1 and w2 are initialized randomly).

[Figure: structure of a single-layered perceptron]

In the above figure, x1 and x2 are the inputs of the perceptron, and y is the result. w1 and w2 are the weights of the edges x1–y and x2–y. Let us define a threshold limit θ. If the value of y exceeds the threshold value, the output will be 1; otherwise, the output will be 0.

The equation is as follows:

y = x1*w1 + x2*w2

If y > θ, the output is 1; if y ≤ θ, the output is 0.

Furthermore, the weights are updated to minimize the error using the perceptron learning rule. For every misclassified example, each weight is nudged in proportion to the error and the corresponding input: wi = wi + η * (t − o) * xi, where η is the learning rate, t is the target output, and o is the actual output.
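To make the rule concrete, here is a minimal NumPy sketch (not part of the library code used below) that trains a single perceptron on the AND data from the next section; the learning rate of 0.1 and the 10 epochs are arbitrary choices:

import numpy as np

# AND inputs and target outputs
inputs = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])
targets = np.array([1, 0, 0, 0])

w = np.zeros(2)  # weights w1, w2
b = 0.0          # bias (acts as a learned threshold)
lr = 0.1         # learning rate

for epoch in range(10):
    for xi, ti in zip(inputs, targets):
        o = 1 if xi @ w + b > 0 else 0  # threshold transfer function
        w += lr * (ti - o) * xi         # perceptron learning rule
        b += lr * (ti - o)

print(w, b)  # with this setup, converges to [0.1 0.2] and -0.2, which separate AND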

Let us understand the single-layered perceptron network by implementing the AND function.

Implementing AND function

Let x1 and x2 be the inputs of the AND function. The truth table for x1, x2, and the output y is shown below.

x1  x2  y
1   1   1
1   0   0
0   1   0
0   0   0

Let us import essential libraries for preparing the model.

import numpy as np

# for plotting the graphs
import matplotlib.pyplot as plt

# for implementing perceptron model
from sklearn.linear_model import Perceptron

Preparing the dataset.

x1 = [1, 1, 0, 0]
x2 = [1, 0, 1, 0]

The Perceptron model does not accept the above format of the data. Let us convert it into the proper format.

x = [[1, 1], [0, 0], [0, 1], [1, 0]]
y = [1, 0, 0, 0]

Checking whether the AND function is linearly separable.

plt.figure(figsize=(3, 3), dpi=80)
plt.xlabel("x1")
plt.ylabel("x2")
plt.scatter(x1, x2, c = y)

The above plot clearly shows that the AND function is linearly separable.

Let us draw a decision boundary to easily distinguish between the two outputs (1 and 0).

Training the data.

clf = Perceptron(max_iter=100).fit(x, y)

After training on the dataset, we will print the model's information.

print("Classes of the model : ",clf.classes_)
print("Intercept of the decision boundary : ",clf.intercept_)
print("Coefficients of the decision boundary : ",clf.coef_)

 

Classes of the model :  [0 1]
Intercept of the decision boundary :  [-2.]
Coefficients of the decision boundary :  [[2. 1.]]
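As a quick sanity check of these numbers: the learned decision rule is 2*x1 + 1*x2 - 2 > 0 for class 1. The input (1, 1) gives 2 + 1 - 2 = 1 > 0, so class 1; (1, 0) gives 0, (0, 1) gives -1, and (0, 0) gives -2, none of which is greater than 0, so those inputs get class 0, exactly matching the AND truth table.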

Plotting the decision boundary
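The boundary itself is the set of points where w[0]*x1 + w[1]*x2 + intercept = 0; solving for x2 gives x2 = -(w[0]/w[1])*x1 - intercept/w[1], which is the line computed below.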

# line segment for the decision boundary
xmin, xmax = -1, 2
w = clf.coef_[0]
a = -w[0] / w[1]                        # slope of the boundary line
xx = np.linspace(xmin, xmax)
yy = a * xx - clf.intercept_[0] / w[1]  # x2 values along the boundary

# plotting the decision boundary
plt.figure(figsize=(4, 4))
ax = plt.axes()
ax.scatter(x1, x2, c = y)
plt.plot(xx, yy, 'k-')
ax.set_xlabel('X1')
ax.set_ylabel('X2')
plt.show()

We can see that the decision boundary separates the four points. If x1 and x2 are both 1, the output is 1; in the rest of the cases, the output is 0. Therefore, the yellow point with output 1 is separated from the purple data points with output 0.
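As a minimal check, we can also feed all four inputs back through the trained model and confirm it reproduces the AND truth table:

# expected output: [1 0 0 0]
print(clf.predict([[1, 1], [1, 0], [0, 1], [0, 0]]))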

Multi-layered perceptron

A multi-layered perceptron has a structure similar to a single-layered perceptron, but it has one or more hidden layers. A multi-layered perceptron works with both forward and backward propagation.

In forward propagation, each neuron of a hidden layer computes the weighted sum of its inputs: for neuron j, the value is ∑i wi,j * xi (plus a bias term), which is then passed through an activation function.

In backward propagation, the weights and biases are updated using the error calculated between the actual value and the model's predicted value.

This process is repeated until the error is minimized.
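As a rough sketch of the forward pass only (backpropagation itself is handled internally by libraries such as scikit-learn, used below), a single hidden layer with a ReLU activation can be written as:

import numpy as np

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0, x @ W1 + b1)  # hidden layer: weighted sums, then ReLU
    return h @ W2 + b2              # output layer: weighted sum of activations

# toy dimensions: 2 inputs -> 3 hidden neurons -> 1 output (random weights)
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
print(forward(np.array([1.0, 2.0]), W1, b1, W2, b2))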

The structure of a multi-layered perceptron is shown below.

[Figure: structure of a multi-layered perceptron]

Let us understand the Multi-layer perceptron and the decision boundary that separates the two classes with an example.

I am using a simple beginner-level classification dataset for the model.

Implementation

Importing libraries

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from matplotlib.colors import ListedColormap

Reading the dataset.

data = pd.read_csv("classification.csv")

Analyzing the dataset.

data.head()

The dataset has three columns: 'age', 'interest', and 'success'.

data.shape

The dataset has 297 training examples.

(297, 3)

Plotting the dataset.

plt.scatter(data['age'], data['interest'], c = data['success'])

Preparing the dataset as per the input format of the MLPClassifier.

X = []
Y = []
for i in range(len(data)):
    tmp = []
    tmp.append(data['age'][i])
    tmp.append(data['interest'][i])
    X.append(tmp)
    Y.append(data['success'][i])

Splitting the dataset into training and testing parts. We have divided the dataset in a 9:1 ratio: 90% for training and 10% for testing. The random_state parameter fixes the shuffling so that the train/test split is reproducible.

x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size = 0.1, random_state=42)

Size of the train and the test dataset.

len(x_train), len(x_test)

 

(267, 30)

Preparing the model. We use 4 hidden layers having 10, 40, 40, and 40 neurons, respectively. The activation function is set to 'relu', the solver (optimizer) to 'adam', and max_iter=200, the maximum number of training iterations (epochs).

classifier = MLPClassifier(hidden_layer_sizes=(10, 40, 40, 40), max_iter=200, activation='relu', solver='adam', random_state=1)

Training the model using the x_train dataset.

classifier.fit(x_train,y_train)

Predicting the x_test data.

y_pred = classifier.predict(x_test)

Let us draw a confusion matrix for the predicted values.

cm = confusion_matrix(y_test, y_pred)
cm

 

array([[ 9,  0],
       [ 0, 21]])

In the above confusion matrix, array[0][0] and array[1][1] are the counts of correctly predicted values, while the off-diagonal entries are wrongly predicted values. Given the small size of the x_test dataset (30 examples), we happen to have zero errors.
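As a quick follow-up, the accuracy can be read off the confusion matrix (diagonal entries over the total) or taken directly from scikit-learn's score method:

print(cm.trace() / cm.sum())             # (9 + 21) / 30 = 1.0
print(classifier.score(x_test, y_test))  # built-in test accuracy, also 1.0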

Plotting the decision boundary

Let us plot the decision boundary for the above model.

x_min = X[0][0]
x_max = X[0][0]
y_min = X[0][1]
y_max = X[0][1]
for i in range(len(X)):
    x_min = min(x_min, X[i][0])
    x_max = max(x_max, X[i][0])
    y_min = min(y_min, X[i][1])
    y_max = max(y_max, X[i][1])
x_min, x_max, y_min, y_max

Creating the mesh grid. The x-axis values run from x_min-1 to x_max+1 and the y-axis values from y_min-1 to y_max+1, in steps of 0.2.

xx, yy = np.meshgrid(np.arange(x_min-1, x_max+1, 0.2), np.arange(y_min-1, y_max+1, 0.2))

Printing the shape of the mesh grid.

xx.shape, yy.shape

 

((501, 228), (501, 228))

Flattening the mesh grid and stacking the coordinates into a 2-D array of (x, y) points.

z = np.array([xx.ravel(), yy.ravel()]).T

Shape after converting the matrix to a 2-d array.

z.shape

 

(114228, 2)

Now we will predict the output for every z[i].

labels = classifier.predict(z)
labels.shape

 

(114228,)

Mapping colors for the scatter plot according to labels.

colors = {0.0: "purple", 1.0: "goldenrod"}

Plotting the decision boundary.

plt.contourf(xx, yy, labels.reshape(xx.shape), alpha = 0.3)
plt.scatter(data['age'], data['interest'], c = data['success'].map(colors))

In the above plot, we can see that a multi-layered perceptron performs non-linear binary classification.

FAQs

  1. What is perceptron in scikit learn?
    The sklearn.linear_model module contains a Perceptron class. We saw that a perceptron is an algorithm for solving binary classification problems. This means that a Perceptron is a binary classifier, which can decide whether or not an input belongs to one class or the other.
     
  2. What is the perceptron learning algorithm?
    The Perceptron algorithm is a two-class (binary) classification machine learning algorithm. It is a type of neural network model, perhaps the simplest type of neural network model.
     
  3. What is the objective of perceptron learning?
    The objective of perceptron learning is to adjust the weights until the model correctly identifies the class of each training example.

Key Takeaways

In this article, we have covered the following topics:

  • Perceptron
  • Single-layered perceptron and its example. (linear binary classification)
  • Multi-layered perceptron and its example. (non-linear binary classification)
  • Plotting decision boundary.


Hello readers, here's a perfect course that will guide you to dive deep into Machine learning.

Happy Coding!
