Introduction
Computers identify objects in an image much as humans do: they separate boundaries, pick out essential features, and combine those feature values to make sense of the scene. Just as a dog's tail, shape, nose, and other characteristics help us distinguish a dog from other animals, a computer can recognize an object by detecting traits that describe the object's structure and qualities. Edges are one example of such characteristic features.
In mathematics, an edge is a line where two surfaces or regions meet; in an image, an edge marks the boundary between regions. Edge detection is based on the premise that areas with significant variations in pixel brightness indicate a boundary. Edge detection is therefore a way of measuring discontinuities and differences in brightness in the image.
Figure: an image and the edges detected in it.
In fields including image processing, computer vision, and machine vision, edge detection is used to segment objects from unwanted backgrounds and extract valuable data.
Techniques for Edge Detection
Representing an image by its edges reduces the amount of data that must be processed while preserving important information about the shapes of the objects in the image.
In image processing, and particularly computer vision, edge detection is used to locate significant variations in a grey-level image, determine objects' physical and geometric attributes, and transform the original image into an edge image. The technique separates the objects in an image from their surrounding background by identifying substantial discontinuities in intensity level. These local variations in intensity, the edges, are typically found at the boundary between two regions, and an image's significant features can be derived from them.
Edge detection is a critical tool for image analysis. Object detection built on edge detection is used in applications such as medical image processing and biometrics because it enables higher-level image analysis.
Discontinuities in the grey level fall into three categories: points, lines, and edges.
The image-segmentation literature contains several edge detection algorithms. The following sections cover the most common discontinuity-based edge detection methods.
Prewitt edge detection
This is a typical edge detector used to detect horizontal and vertical edges in images. The masks used for edge detection are referred to as derivative masks: because an image is also a signal, changes in the signal can only be computed via differentiation, so these masks are also called derivative operators.
All derivative masks should have the following properties:
- The mask should have opposite signs on either side of its centre.
- The elements of the mask should sum to zero.
- More weight in the mask means a stronger edge response.
Therefore, the Prewitt operator gives us two masks, one for recognizing horizontal edges and the other for detecting vertical edges.
Prewitt filter for vertical edge detection:
-1  0  1
-1  0  1
-1  0  1

Prewitt filter for horizontal edge detection:
-1 -1 -1
 0  0  0
 1  1  1
As you can see, by applying these masks to an image we can identify both horizontal and vertical edges.
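The Prewitt masks can be applied with a small cross-correlation routine. The sketch below is a minimal NumPy illustration (the toy image and the helper name `cross_correlate` are ours; libraries such as OpenCV provide the same operation ready-made):

```python
import numpy as np

def cross_correlate(img, kernel):
    """Slide the kernel over the image (valid region only, no padding)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Prewitt masks: opposite signs around the centre, elements sum to zero.
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)  # responds to vertical edges
prewitt_y = prewitt_x.T                          # responds to horizontal edges

# Toy 5x5 image with a vertical step edge: dark left half, bright right half.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

gx = cross_correlate(img, prewitt_x)  # strong response at the vertical edge
gy = cross_correlate(img, prewitt_y)  # zero: there is no horizontal edge
```

Note that `gx` responds only where the intensity changes from left to right, while `gy` stays zero everywhere because the image has no horizontal edge.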
Sobel edge detection
One of the most extensively used edge detection methods is Sobel edge detection. As the diagram below shows, the Sobel operator identifies edges characterized by abrupt variations in pixel intensity.
Figure: pixel-intensity profile across an edge, and the same profile after taking its first derivative.
When the first derivative of the intensity function is plotted, the increase in intensity becomes much more apparent.
Edges can be spotted in locations where the gradient is higher than a specific threshold value, as shown in the graph. Moreover, a rapid change in pixel intensity produces a spike in the derivative. With this in mind, we can approximate the derivative using a 3×3 kernel. One kernel detects sudden changes in pixel intensity in the X direction, and the other detects changes in the Y direction.
These kernels, one for each of the two perpendicular orientations, are designed to respond primarily to edges running vertically and horizontally relative to the pixel grid. The kernels may be applied separately to the input image to produce distinct gradient component measurements in each orientation, which we refer to as Gx and Gy. By combining these results, the absolute magnitude of the gradient at each point, as well as its direction, can be established. The magnitude of the gradient is calculated as:

|G| = √(Gx² + Gy²)

which is often approximated as |G| ≈ |Gx| + |Gy|.
The angle of orientation of the edge (relative to the pixel grid) giving rise to the spatial gradient is:

θ = tan⁻¹(Gy / Gx)
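Putting these pieces together, the NumPy sketch below computes Gx and Gy with the Sobel kernels on a toy step-edge image (the image and helper are illustrative), then derives the magnitude and orientation from the formulas above:

```python
import numpy as np

def cross_correlate(img, kernel):
    """Apply a kernel over the valid region of the image (no padding)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernels for the two perpendicular orientations.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

# Vertical step edge: intensity jumps from 0 to 1 between columns 2 and 3.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

gx = cross_correlate(img, sobel_x)
gy = cross_correlate(img, sobel_y)
magnitude = np.hypot(gx, gy)   # |G| = sqrt(Gx^2 + Gy^2)
theta = np.arctan2(gy, gx)     # edge orientation relative to the grid
```

For this purely vertical edge, `gy` is zero everywhere, so the magnitude comes entirely from `gx` and the orientation angle is 0.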
Laplacian edge detection
The Laplacian edge detector, unlike the Sobel edge detector, uses only one kernel: it calculates second-order derivatives in a single pass. A commonly used kernel is:

 0  1  0
 1 -4  1
 0  1  0
Because Laplacian edge detection takes second-order derivatives in a single pass, it is susceptible to noise. To avoid this sensitivity, the image is smoothed with a Gaussian filter before the operator is applied.
The Laplacian is computationally cheaper (one kernel instead of two) and can occasionally give excellent results.
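A short NumPy sketch of the idea, applying the 3×3 Laplacian kernel above to a toy step-edge image (illustrative; a real image would be Gaussian-smoothed first): the second derivative changes sign across the edge, and that zero-crossing marks the edge location.

```python
import numpy as np

def cross_correlate(img, kernel):
    """Apply a kernel over the valid region of the image (no padding)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Single Laplacian kernel: approximates d²/dx² + d²/dy² in one pass.
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

# Step edge from 0 to 1 between columns 2 and 3.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

response = cross_correlate(img, laplacian)
# The response is positive on the dark side of the edge and negative on the
# bright side; the zero-crossing between them is the edge.
```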
Canny edge detection
Because of its robustness and flexibility, Canny edge detection is one of the most widely used edge detection techniques today.
It involves the following steps for detecting edges in an image.
- Remove noise from the input image using a Gaussian filter.
- Calculate the gradient of the image pixels using the derivative of a Gaussian filter to obtain the magnitude along the x and y dimensions.
- Suppress the non-maximum edge-contributing pixels: for each point on an edge, examine its neighbors along the direction perpendicular to that edge and keep only the local maxima.
- Finally, use hysteresis thresholding to keep pixels whose gradient magnitude exceeds the thresholds and ignore the rest.
Noise Removal or Image Smoothing:
In the presence of noise, a pixel may not be comparable to its neighboring pixels, so edges may be detected incorrectly or inappropriately. To avoid this, we employ a Gaussian filter, which is convolved with the image and removes noise, preventing false edges from appearing in the output image.
In the example below, we convolve a Gaussian filter or kernel G(x, y) with an image I. The smoothing makes any given pixel in the output similar to its neighbors; even a simple averaging mask such as [1 1 1] keeps neighboring pixels comparable and removes noise.

S = G(x, y) * I

where G(x, y) is the Gaussian kernel, * denotes convolution, and I represents the input image.
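The smoothing step can be sketched with a small normalized Gaussian-like kernel (the 3×3 kernel and the toy image are illustrative; real pipelines typically use a larger kernel whose size depends on the chosen sigma):

```python
import numpy as np

def cross_correlate(img, kernel):
    """Apply a kernel over the valid region of the image (no padding)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# 3x3 approximation of a Gaussian kernel, normalized so the weights sum to 1
# (otherwise smoothing would also brighten or darken the image).
gaussian = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=float) / 16.0

# Flat image with a single noisy "salt" pixel in the middle.
img = np.ones((5, 5))
img[2, 2] = 9.0

smoothed = cross_correlate(img, gaussian)
# The spike at the centre is averaged with its neighbours:
# (9 * 4 + 1 * 12) / 16 = 3.0, so the noise is strongly attenuated.
```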
Derivative:
Calculate the filter's derivative in the X and Y dimensions and convolve each with I to obtain the gradient magnitude along the two dimensions. The edge direction can also be computed as the arctangent of the ratio of the two derivative components.
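As a simple stand-in for the derivative-of-Gaussian step, `np.gradient` computes central differences along each axis, from which the magnitude and direction follow (the step-edge image is illustrative):

```python
import numpy as np

# Toy image: vertical step edge from 0 to 1.
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# Central differences along rows (y) and columns (x).
gy, gx = np.gradient(img)

magnitude = np.hypot(gx, gy)   # gradient magnitude per pixel
theta = np.arctan2(gy, gx)     # direction from the arctangent of the ratio
                               # of the two derivative components
```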
Non-Max Suppression:
Only a few points along an edge are usually needed to make the edge clearly visible, so we can discard the edge locations that contribute little to feature visibility. We employ the non-maximum suppression approach to achieve this: it marks the points on the edge curve where the magnitude is greatest, found by taking a slice perpendicular to the curve and looking for the maximum.
Take a look at the edge in the diagram, which contains three edge points. Assume that point (x, y) has the greatest gradient magnitude along the edge. Look for edge points along the direction perpendicular to the edge and check whether their gradients are less than that at (x, y). If the values are less than the gradient at (x, y), we can suppress those non-maximum points along the curve.
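A minimal sketch of the suppression step, simplified by assuming the gradient direction is horizontal everywhere (as it would be along a vertical edge); a full implementation quantizes the direction per pixel before comparing neighbors. The magnitude array is illustrative:

```python
import numpy as np

# Gradient magnitudes around a vertical edge: the true edge runs down
# column 2; columns 1 and 3 are weaker responses that blur the edge.
mag = np.array([[0, 1, 3, 1, 0],
                [0, 2, 5, 2, 0],
                [0, 1, 4, 1, 0]], dtype=float)

# Non-maximum suppression along the (assumed horizontal) gradient direction:
# keep a pixel only if it is at least as large as both of its neighbours
# perpendicular to the edge.
nms = np.zeros_like(mag)
for i in range(mag.shape[0]):
    for j in range(1, mag.shape[1] - 1):
        if mag[i, j] >= mag[i, j - 1] and mag[i, j] >= mag[i, j + 1]:
            nms[i, j] = mag[i, j]
# Only the one-pixel-wide ridge in column 2 survives.
```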
Hysteresis Thresholding:
In the last phase of Canny edge detection, the gradient magnitudes are compared with two threshold values, one smaller than the other.
- If a pixel's gradient magnitude is greater than the higher threshold, it is associated with a strong edge and included in the final edge map.
- If a pixel's gradient magnitude is less than the lower threshold, it is suppressed and excluded from the final edge map.
- All remaining pixels, whose gradient magnitudes fall between these two thresholds, are labeled 'weak' edges (i.e., they become candidates for inclusion in the final edge map).

The 'weak' pixels are included in the final edge map only if they are connected to strong edges.
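The hysteresis step can be sketched as a flood fill that starts from the strong pixels and spreads through any 8-connected weak pixels (the magnitude array and thresholds are illustrative):

```python
import numpy as np

# Gradient magnitudes after non-maximum suppression (illustrative values).
mag = np.array([[0, 40,  0,  0],
                [0, 60,  0, 30],
                [0, 45,  0,  0]], dtype=float)
high, low = 50, 25

strong = mag >= high                 # definite edges
weak = (mag >= low) & ~strong        # candidates: kept only if connected

# Grow the edge map outward from strong pixels through weak neighbours.
edges = strong.copy()
stack = list(zip(*np.nonzero(strong)))
h, w = mag.shape
while stack:
    i, j = stack.pop()
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and weak[ni, nj] and not edges[ni, nj]:
                edges[ni, nj] = True
                stack.append((ni, nj))
# The weak pixels at 40 and 45 touch the strong pixel at 60 and are kept;
# the isolated weak pixel at 30 is discarded.
```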
Also see: Sampling and Quantization