Last Updated: Mar 27, 2024

# Edge Detection


## Introduction

Computers identify objects much as humans do: they separate boundaries, identify essential features, and put different feature values together to make sense of an image. Just as a dog's tail, shape, nose, and other characteristics help us distinguish a dog from other animals, a computer can recognize an object through traits that describe the object's structure and qualities. Edges are one example of such characteristic features.

In mathematics, an edge is a line connecting two corners or surfaces. Edge detection is based on the premise that areas with significant variations in pixel brightness indicate a boundary. As a result, edge detection amounts to measuring discontinuities and differences in brightness across the picture.

*Image showing the edges detected*

In fields including image processing, computer vision, and machine vision, edge detection is used to segment images from unwanted backgrounds and extract valuable data.

## Techniques for Edge Detection

The process of representing edges in an image reduces the amount of data that must be processed while preserving important information about the forms of objects in the picture.

The technique of edge detection is used in image processing, particularly computer vision, to locate significant variations in a grey-level image, determine objects' physical and geometrical attributes, and transform original images into edge images. It separates the objects in an image from their surrounding backgrounds by identifying substantial discontinuities in intensity level. Local fluctuations in image intensity are referred to as edges, and they are often seen at the boundary between two regions. The significant features of an image can be derived from its edges.

For image analysis, edge detection is a critical feature. Object detection with edge detection is utilized in various applications such as medical image processing, biometrics, etc. because it allows higher-level picture analysis.

Discontinuities in the grey level are divided into three categories: points, lines, and edges.

In the image segmentation literature, there are several edge detection algorithms. The following section covers the most common discontinuity-based edge detection methods.

### Prewitt edge detection

This is a typical edge detector used to detect horizontal and vertical edges in images. All masks used for edge detection are referred to as derivative masks: because a picture is also a signal, changes in the signal can only be computed via differentiation. As a result, these operators are also called derivative masks or derivative operators.

The following properties should be present in all derivative masks:

• The mask should contain values of opposite sign.
• The coefficients of the mask should sum to zero.
• More weight in the mask means stronger edge detection.

The Prewitt operator therefore gives us two masks: one for recognizing horizontal edges and the other for detecting vertical edges.

By applying these masks to a picture, we can identify both horizontal and vertical edges.
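As a sketch of how these masks work, here is a minimal NumPy implementation. The kernel values are the standard Prewitt masks; the toy image and the helper function name are our own, for illustration only:

```python
import numpy as np

# Standard Prewitt masks: opposite signs on either side, coefficients sum to zero
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)   # responds to vertical edges
PREWITT_Y = np.array([[-1, -1, -1],
                      [ 0,  0,  0],
                      [ 1,  1,  1]], dtype=float)  # responds to horizontal edges

def apply_mask(img, mask):
    """Slide a 3x3 mask over the image (valid region only) and sum the products."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * mask)
    return out

# Toy image: dark left half, bright right half -> one vertical edge
img = np.zeros((5, 6))
img[:, 3:] = 255.0

gx = apply_mask(img, PREWITT_X)  # strong response at the vertical boundary
gy = apply_mask(img, PREWITT_Y)  # zero everywhere: there are no horizontal edges here
```

Note how the vertical-edge mask fires only where the brightness jumps between columns, while the horizontal-edge mask stays silent on this image.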

### Sobel edge detection

One of the most extensively used edge detection methods is Sobel edge detection. The Sobel operator recognizes edges characterized by abrupt variations in pixel intensity.

When the first derivative of the intensity function is plotted, the increase in intensity becomes much more apparent.

Edges can be spotted in locations where the gradient is higher than a specific threshold value. Furthermore, a sudden change in pixel intensity shows up as a sharp peak in the first derivative. With this in mind, we can approximate the derivative with a 3×3 kernel: one kernel detects sudden changes in pixel intensity in the X direction, while the other detects changes in the Y direction.

Kernels in X-Y Directions

These kernels, one for each of the two perpendicular orientations, are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid. The kernels can be applied separately to the input picture to produce distinct gradient component measurements in each orientation, which we refer to as Gx and Gy. By combining these, the absolute magnitude of the gradient at each point, as well as its direction, can be established. The magnitude of the gradient is calculated as follows:

|G| = √(Gx² + Gy²)

The angle of orientation of the edge (relative to the pixel grid) giving rise to the spatial gradient is: θ = arctan(Gy / Gx).
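The Sobel computation above can be sketched in NumPy as follows. The kernels are the standard Sobel masks; the toy image and helper function name are ours:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # the same mask rotated for the perpendicular orientation

def apply_mask(img, mask):
    """Slide a 3x3 mask over the image (valid region only) and sum the products."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * mask)
    return out

# Step edge between a dark and a bright half: the gradient points along +x
img = np.zeros((5, 6))
img[:, 3:] = 255.0

Gx = apply_mask(img, SOBEL_X)
Gy = apply_mask(img, SOBEL_Y)
magnitude = np.hypot(Gx, Gy)                # |G| = sqrt(Gx^2 + Gy^2)
direction = np.degrees(np.arctan2(Gy, Gx))  # theta = arctan(Gy / Gx)
```

For this vertical edge, Gy is zero everywhere, so the gradient direction at the edge is 0° (pointing along +x), exactly as the formulas predict.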

### Laplacian edge detection

The Laplacian edge detector, unlike the Sobel edge detector, uses only one kernel: it calculates second-order derivatives in a single pass. A commonly used 3×3 Laplacian kernel has −4 at the centre, 1 at the four direct neighbours, and 0 at the corners.

Laplacian edge detection performs second-order derivatives in a single pass, making it susceptible to noise. Before using this procedure, the image is smoothed with Gaussian smoothing to avoid this sensitivity to noise.


Laplacians are computationally faster (one kernel vs. two kernels) and occasionally give spectacular results!
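A minimal sketch of the smooth-then-Laplacian pipeline in NumPy. The 3×3 Gaussian and Laplacian kernels are standard approximations; the toy image and helper name are ours:

```python
import numpy as np

GAUSSIAN = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=float) / 16.0  # 3x3 Gaussian approximation
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)       # second-derivative kernel

def apply_mask(img, mask):
    """Slide a 3x3 mask over the image (valid region only) and sum the products."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * mask)
    return out

img = np.zeros((7, 8))
img[:, 4:] = 255.0  # step edge between a dark and a bright region

smoothed = apply_mask(img, GAUSSIAN)     # suppress noise first
edges = apply_mask(smoothed, LAPLACIAN)  # single pass of second derivatives
# The edge appears as a zero-crossing: positive on one side, negative on the other
```

The zero-crossing (a sign change in the response) is what marks the edge location for a second-derivative operator, in contrast to the peak produced by first-derivative operators like Sobel.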

### Canny edge detection

Because of its robustness and flexibility, Canny Edge Detection is one of the most widely used edge detection technologies today.

It involves the steps listed below while detecting edges in an image.

1. Remove noise from the input image using a Gaussian filter.
2. Calculate the gradient of the image pixels using the derivative of a Gaussian filter to obtain the magnitude along the x and y dimensions.
3. Suppress the non-maximum edge-contributor pixels by comparing each pixel with its neighbours in the direction perpendicular to the edge.
4. Finally, use the hysteresis thresholding method to keep pixels with large gradient magnitudes and discard the smaller ones.

Noise Removal or Image Smoothing:

In the presence of noise, a pixel may not be comparable to its neighboring pixels, so edges may be detected incorrectly or inappropriately. To avoid this, we employ a Gaussian filter, which is convolved with the image to remove the noise without suppressing the desired edges in the output images.

We convolve a Gaussian filter or kernel G(x, y) with an image I, so that every output pixel becomes a weighted average of its neighbours; this keeps neighbouring pixels comparable and removes the noise:

smoothed image = G(x, y) ∗ I

where G(x, y) denotes the Gaussian kernel, I the input image, and ∗ convolution.
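As an illustrative sketch, Gaussian smoothing can be applied separably, filtering rows first and then columns. The 1-D [1 2 1]/4 kernel is a common binomial approximation of a Gaussian; the helper name is ours:

```python
import numpy as np

g = np.array([1.0, 2.0, 1.0]) / 4.0  # 1-D binomial approximation of a Gaussian

def smooth(img):
    """Separable Gaussian smoothing: filter rows, then columns (edge padding)."""
    p = np.pad(img, 1, mode='edge')
    rows = p[1:-1, :-2] * g[0] + p[1:-1, 1:-1] * g[1] + p[1:-1, 2:] * g[2]
    p2 = np.pad(rows, 1, mode='edge')
    return p2[:-2, 1:-1] * g[0] + p2[1:-1, 1:-1] * g[1] + p2[2:, 1:-1] * g[2]

# An isolated noise spike is spread out and attenuated...
noisy = np.zeros((5, 5))
noisy[2, 2] = 16.0
clean = smooth(noisy)  # the spike's energy is redistributed over a 3x3 patch

# ...while a constant region is left unchanged
flat = smooth(np.full((4, 4), 7.0))
```

Applying the kernel as two 1-D passes gives the same result as the full 2-D Gaussian kernel but with fewer multiplications per pixel, which is why separable filtering is the usual implementation choice.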

Derivative:

Calculate the filter's derivative in the X and Y dimensions, then convolve it with I to get the gradient magnitude along each dimension. The gradient direction can also be computed from the arctangent of the ratio of the two components.

Non-Max Suppression:

Only a few points along an edge are needed to make the edge clearly visible, so we can discard edge locations that contribute little to feature visibility. We employ the non-maximum suppression approach to achieve this: the points on the edge curve where the gradient magnitude is largest are kept, found by searching for a maximum along a slice perpendicular to the curve.


Consider an edge containing three edge points, and assume that point (x, y) has the largest gradient magnitude. Look for edge points in the direction perpendicular to the edge and check whether their gradients are less than that of (x, y). If so, we suppress those non-maximum locations along the curve.
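A minimal sketch of non-maximum suppression. The direction binning is the usual 0°/45°/90°/135° quantization; the function name and toy arrays are ours:

```python
import numpy as np

def non_max_suppression(mag, theta_deg):
    """Keep a pixel only if it is a local maximum along the gradient direction."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    angle = theta_deg % 180  # a direction is the same modulo 180 degrees
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:   # gradient ~horizontal: compare left/right
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:               # ~45 degree diagonal
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:              # gradient ~vertical: compare up/down
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                        # ~135 degree diagonal
                n1, n2 = mag[i + 1, j + 1], mag[i - 1, j - 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out

# Toy slice across a vertical edge: only the ridge maximum survives
mag = np.array([[0., 0., 0.],
                [1., 5., 2.],
                [0., 0., 0.]])
theta = np.zeros_like(mag)  # gradient points along +x everywhere
thinned = non_max_suppression(mag, theta)
```

Each pixel is compared against its two neighbours along the gradient direction, which thins a blurry, several-pixel-wide response down to a one-pixel-wide ridge.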

Hysteresis Thresholding:

Two threshold values, one smaller than the other, are used to compare the gradient magnitudes in the last phase of Canny edge detection.

• If the gradient magnitude is greater than the larger threshold, the pixel is associated with a solid edge and included in the final edge map.
• If the gradient magnitude is less than the smaller threshold, the pixel is suppressed and excluded from the final edge map.
• All remaining pixels, whose gradient magnitudes fall between the two thresholds, are labeled 'weak' edges (i.e., they become candidates for inclusion in the final edge map).
• A 'weak' pixel is included in the final edge map if it is connected to a solid edge.
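The two-threshold logic above can be sketched like this. The threshold values, the toy magnitude array, and the function name are illustrative:

```python
import numpy as np

def hysteresis(mag, low, high):
    """Keep strong pixels, plus weak pixels connected to a strong region."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    out = strong.copy()
    changed = True
    while changed:
        changed = False
        # 8-connected dilation of the current edge map
        p = np.pad(out, 1)
        neigh = np.zeros_like(out)
        for di in (0, 1, 2):
            for dj in (0, 1, 2):
                neigh |= p[di:di + out.shape[0], dj:dj + out.shape[1]]
        grown = out | (weak & neigh)
        if grown.sum() > out.sum():
            out, changed = grown, True
    return out

mag = np.array([[200.,  90.,  90.,  10.],
                [ 10.,  10.,  10.,  10.]])
edges = hysteresis(mag, low=50.0, high=150.0)
# 200 is a solid edge; the two 90s are weak but chained to it; the 10s are dropped
```

Note how the second weak pixel is accepted only because it connects to the first one, which in turn touches a solid edge; the loop keeps growing the edge map until no weak pixel can be added.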
## Frequently Asked Questions

#### What is the best way to detect edges in a binary image?

We can use the Hough Transform to detect shapes such as rectangles in a binary image directly, and then examine the white-pixel areas inside the detected rectangles to determine which ones are defective.

#### How do you calculate the degree of similarity between two images?

The degree of similarity between two images is measured using a distance metric established on the appropriate multi-dimensional feature space. The Minkowski distance, the Manhattan distance, the Euclidean distance, and the Hausdorff distance are standard distance metrics.

#### To blur an image, can you use a linear filter?

Yes. Blurring smooths an image by averaging each pixel with its neighbours, and common blurs such as the box filter and the Gaussian filter are linear filters (convolutions).

#### Finite-difference image processing filters are incredibly vulnerable to noise. Which strategy can you use to keep noise distortions to a minimum?

Smoothing reduces noise by causing pixels to align more closely with their neighbors.

## Conclusion

In this blog, we discussed edge detection and its various methods. Primarily, we learned about Prewitt, Sobel, Laplacian, and Canny edge detection in detail. We hope you had fun learning about these methods and will implement them in your projects too.

Keep learning and developing!
