Introduction
Ever wondered how we can relate two images and find out which features they have in common? We can do this using OpenCV feature matching techniques, which correlate two pictures and discover their similarities. Feature matching is helpful in many areas of computer vision.
In this article, we will look at different techniques for feature matching using OpenCV.
What is Feature Matching using OpenCV?
Feature matching using OpenCV involves detecting and matching features between two images. It finds corresponding regions in the two images. It is used in computer vision tasks like object tracking, object detection, etc.
First of all, to install OpenCV in your system, run the following command in your command prompt:
pip install opencv-python
But make sure that Python and pip are already installed on your system.
Methods
Now, let us see three different methods for feature matching using OpenCV in Python.
Brute Force Using ORB Detector
First, let us discuss the method for feature matching using OpenCV with brute force and the ORB detector. ORB stands for Oriented FAST and Rotated BRIEF, where FAST stands for Features from Accelerated Segment Test and BRIEF stands for Binary Robust Independent Elementary Features. This method combines the FAST keypoint detector and the BRIEF descriptor to improve the overall performance of the matching methodology. Here, the descriptors are binary. It requires two libraries: OpenCV and numpy.
For example, we have taken the two pictures below. Let us see the Python code for it.
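The listing below is a minimal sketch of this method. It assumes the two images are saved as one.jpg and two.jpg (the same filenames used in the FLANN example later); adjust the paths for your own pictures.
import cv2
import numpy as np

# Load both images in grayscale
img1 = cv2.imread("one.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("two.jpg", cv2.IMREAD_GRAYSCALE)

# Create the ORB detector
orb = cv2.ORB_create()

# Detect key points and compute binary descriptors for both images
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matcher with Hamming distance, which suits ORB's binary descriptors
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)

# Sort the matches in ascending order of distance (best matches first)
matches = sorted(matches, key=lambda x: x.distance)

# Draw the matches and display the result until a key is pressed
matched_img = cv2.drawMatches(img1, kp1, img2, kp2, matches, None)
cv2.imshow("ORB Feature Matching", matched_img)
cv2.waitKey(0)
cv2.destroyAllWindows()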
For this, we import the required libraries. Then, we load the images in grayscale; converting the images to grayscale reduces redundancy and simplifies the calculations.
Next, we create an ORB detector using the ORB_create() function.
Furthermore, we create the key points and descriptors of the two images using the .detectAndCompute() function. It takes two parameters: the first is the image, and the second is a mask. In our case, a mask is not required, so we pass None.
Next, we create a matcher object using the cv2.BFMatcher() function. It takes two parameters. The first is normType, which specifies the distance metric used to measure the similarity between two descriptors; for ORB's binary descriptors, we use cv2.NORM_HAMMING. The second is crossCheck, which is False by default. If it is set to True, the matcher returns only those pairs in which the two descriptors are each other's best match.
Now, we find the matches between the two sets of descriptors using the .match() function.
Then, we sort the matches in ascending order of their distance. For this, we pass key=lambda x: x.distance to the sorted() function.
Finally, we draw the matches between the two images using the .drawMatches() function. We pass both images, their key points, the sorted list of matches, and None for the output image.
Next, we display the feature matching for the two images using the .imshow() function, passing a name for the new window and the finally drawn matches. We also call the .waitKey() function with 0 as a parameter so that the image stays on screen until a key is pressed instead of vanishing right after it shows the final result. In the end, we close all the windows using the .destroyAllWindows() function.
The final output for this code is as below.
We can see the lines drawn between the matching features of the two images in the final output.
Brute Force Using SIFT Detector
SIFT stands for Scale Invariant Feature Transform. This technique also uses key points and descriptors to find the matches between two images, and it is also a brute-force matching method. SIFT descriptors are floating-point in nature, which makes it slower than the ORB method. It identifies the key points using the DoG (Difference of Gaussians) method.
We use the same images as above. Let us see the Python code below.
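The listing below is a minimal sketch of the SIFT version, again assuming the images are saved as one.jpg and two.jpg. Note that since OpenCV 4.4, SIFT is available directly as cv2.SIFT_create(); older versions need the opencv-contrib package and cv2.xfeatures2d.SIFT_create().
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Load both images in grayscale
img1 = cv2.imread("one.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("two.jpg", cv2.IMREAD_GRAYSCALE)

# Create the SIFT detector
sift = cv2.SIFT_create()

# Detect key points and compute floating-point descriptors for both images
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matcher; the default cv2.NORM_L2 suits SIFT's float descriptors
bf = cv2.BFMatcher()
matches = bf.match(des1, des2)

# Sort the matches in ascending order of distance
matches = sorted(matches, key=lambda x: x.distance)

# Draw the first 50 matches and display the result until a key is pressed
matched_img = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)
cv2.imshow("SIFT Feature Matching", matched_img)
cv2.waitKey(0)
cv2.destroyAllWindows()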
First, we include the required libraries: OpenCV, numpy, and matplotlib.
Next, we load the two images in grayscale, same as before.
Now, we create the SIFT detector using the cv2.SIFT_create() function (cv2.xfeatures2d.SIFT_create() in older versions of OpenCV).
Furthermore, we create the key points and descriptors of the two images using the .detectAndCompute() function. It takes two parameters: the first is the image, and the second is a mask. In our case, a mask is not required, so we pass None.
Next, we create a matcher object using the cv2.BFMatcher() function. Its default normType, cv2.NORM_L2, suits SIFT's floating-point descriptors.
Now, we find the matches between the two sets of descriptors using the .match() function.
Then, we sort the matches in ascending order of their distance. For this, we pass key=lambda x: x.distance to the sorted() function.
Finally, we draw the matches between the two images using the .drawMatches() function. We pass both images, their key points, the sorted list of matches, and None for the output image. In this code, we have passed only the first 50 matches.
Next, we display the feature matching for the two images using the .imshow() function, passing a name for the new window and the finally drawn matches. We also call the .waitKey() function with 0 as a parameter so that the image stays on screen until a key is pressed instead of vanishing right after it shows the final result. In the end, we close all the windows using the .destroyAllWindows() function.
The final output of this code is as below.
Using FLANN Technique
The third method uses the FLANN technique. It is more complex and draws only the clearest matches between the images. FLANN stands for Fast Library for Approximate Nearest Neighbors. It is an optimized library for performing nearest-neighbor searches, which speeds up feature matching using OpenCV.
We use the same two images as above. Let us see the Python code for it.
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Load both images in grayscale
img1 = cv2.imread("one.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("two.jpg", cv2.IMREAD_GRAYSCALE)

# Create the SIFT detector (cv2.xfeatures2d.SIFT_create() in older OpenCV versions)
sift = cv2.SIFT_create()

# Detect key points and compute descriptors for both images
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN parameters: a KD-tree index (algorithm 1) with 2 trees and 20 checks per search
FLANN_INDEX_KDTREE = 1
i_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=2)
s_params = dict(checks=20)
flann = cv2.FlannBasedMatcher(i_params, s_params)

# Find the two nearest neighbors for each descriptor
matches = flann.knnMatch(des1, des2, k=2)

# Keep only the matches that pass the ratio test
matches_mask = [[0, 0] for i in range(len(matches))]
for i, (m, n) in enumerate(matches):
    if m.distance < 0.5 * n.distance:
        matches_mask[i] = [1, 0]

# Draw the good matches in green and unmatched key points in red
draw_params = dict(matchColor=(0, 255, 0), singlePointColor=(255, 0, 0), matchesMask=matches_mask, flags=0)
matched_img = cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, **draw_params)

# Convert BGR to RGB for matplotlib and display the result
plt.imshow(cv2.cvtColor(matched_img, cv2.COLOR_BGR2RGB))
plt.show()
First, we import the required libraries: OpenCV, numpy, and matplotlib.
Next, we load the two images in grayscale, same as before.
Now, we create the SIFT detector using the cv2.SIFT_create() function (cv2.xfeatures2d.SIFT_create() in older versions of OpenCV).
Furthermore, we create the key points and descriptors of the two images using the .detectAndCompute() function. It takes two parameters: the first is the image, and the second is a mask. In our case, a mask is not required, so we pass None.
For constructing the FLANN parameters, we first select the KD-tree index. Then, we pass two sets of parameters: the index parameters and the search parameters. The index parameters are a dictionary specifying the KD-tree algorithm and the number of trees; in our case, we set the number of trees to 2. Finally, the search parameters are a dictionary with checks=20, which controls how many times the trees are traversed during a search; higher values give better precision but take more time.
Next, we create a FLANN-based matcher object, passing the two sets of parameters created before. Then, we calculate the matches by passing the descriptors of the two images to the knnMatch() function with k=2, so that each descriptor in the first image gets its two nearest neighbors in the second.
Next, we create a mask to draw only the good matches. Then, we apply a ratio test to all the matches: a match is kept only if its distance is less than 0.5 times the distance of the second-best match.
Finally, we set the drawing parameters: green for the matching lines, red for the unmatched key points, and the mask that marks the good matches. We then draw the matches using the .drawMatchesKnn() function, passing both images, their key points, the matches, and the drawing parameters.
The final output of the code is as follows.
Frequently Asked Questions
What is OpenCV?
OpenCV stands for Open Source Computer Vision Library. It is an open-source computer vision and machine learning software library. It provides tools and algorithms for image processing, video processing, 3D reconstruction, etc. It is written in C++ but provides interfaces for other programming languages like Python and Java.
What do you mean by feature matching OpenCV?
Feature matching using OpenCV involves detecting and matching features between two images. It finds corresponding regions in the two images. It is used in computer vision tasks like object tracking, object detection, etc.
What is the FLANN technique for feature matching OpenCV?
The FLANN technique is more complex and draws only the clearest matches between the images. FLANN stands for Fast Library for Approximate Nearest Neighbors. It is an optimized library for performing nearest-neighbor searches for feature matching using OpenCV.
What is the ORB technique for feature matching OpenCV?
ORB stands for Oriented FAST and Rotated BRIEF, where FAST stands for Features from Accelerated Segment Test and BRIEF stands for Binary Robust Independent Elementary Features. This method combines the FAST keypoint detector and the BRIEF descriptor to improve the overall performance of the matching methodology.
Conclusion
OpenCV is a computer vision and machine learning software library used for functionalities like image processing, video processing, 3D reconstruction, etc. Feature matching is one of the capabilities it provides: two images are taken, and similarities between their key points are found. We discussed various methods for feature matching using OpenCV.
If you wish to learn more about this topic, we recommend you read the following articles: