Code360 powered by Coding Ninjas X Naukri.com
Table of contents
1. Introduction
2. What is Feature Matching using OpenCV?
3. Methods
3.1. Brute Force Using ORB Detector
3.2. Brute Force Using SIFT Detector
3.3. Using FLANN Technique
4. Frequently Asked Questions
4.1. What is OpenCV?
4.2. What do you mean by feature matching OpenCV?
4.3. What is the FLANN technique for feature matching OpenCV?
4.4. What is the ORB technique for feature matching OpenCV?
5. Conclusion
Last Updated: Mar 27, 2024

Feature Matching OpenCV


Introduction

Ever wondered how we can relate two images and find out which of their features correspond to each other? This can be done using OpenCV's feature matching techniques. These techniques correlate two pictures and discover their similarities, which is useful in many computer vision tasks such as object recognition and image stitching.

Feature matching OpenCV

In this article, we will look at different techniques for Feature Matching OpenCV.

What is Feature Matching using OpenCV?

Feature matching using OpenCV involves detecting features in two images and matching them with each other. It finds corresponding regions between the two images. It is used in computer vision tasks like object tracking, object detection, etc.

First of all, to install OpenCV in your system, run the following command in your command prompt:

pip install opencv-python

 

But make sure that Python and pip are already installed on your system.


Methods

Now, let us see three different methods for feature matching using OpenCV in Python.

Brute Force Using ORB Detector

First, let us discuss the method for feature matching using OpenCV with brute force matching of the ORB detector. ORB stands for Oriented FAST and Rotated BRIEF. Here, FAST stands for Features from Accelerated Segment Test, and BRIEF stands for Binary Robust Independent Elementary Features. This method combines the FAST keypoint detector and the BRIEF descriptor to improve the overall performance of the matching methodology. The descriptors are binary, so they are compared with the Hamming distance. The code requires two libraries: OpenCV and NumPy.

For example, we have taken the two pictures below.

image 1
image 2

Now, let us see the Python code for this below:

import cv2
import numpy as np

# Load both images in grayscale
img1 = cv2.imread("one.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("two.jpg", cv2.IMREAD_GRAYSCALE)

# Create the ORB detector and compute keypoints and descriptors
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match the binary descriptors using Hamming distance
brute_force = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = brute_force.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)

# Draw the matches and display the result
matching_results = cv2.drawMatches(img1, kp1, img2, kp2, matches, None)

cv2.imshow("Feature_Matching", matching_results)
cv2.waitKey(0)
cv2.destroyAllWindows()

 

Let us understand the program now.

  • First, we import the required libraries: OpenCV and NumPy. Then, we load the images and convert them into grayscale. Converting the images into grayscale reduces redundancy and simplifies the calculations.
     
  • Next, we create an ORB detector using the ORB_create() function.
     
  • Furthermore, we create the key points and descriptors of the two images using the .detectAndCompute() function. For this, two parameters are passed. The first one is the image, and the second one is Mask. In our case, the Mask is not required, so we pass None.
     
  • Next, we create a matcher object using the cv2.BFMatcher() function. It takes two parameters. The first one is normType, which specifies the distance measure used to compare two descriptors (Hamming distance for ORB's binary descriptors). The second one is crossCheck, which is false by default. If it is set to true, the matcher returns only consistent pairs, i.e., matches where each descriptor is the best match for the other.
     
  • Now, we check the matches between the two descriptors.
     
  • Then, we sort the matches in ascending order of distance, using key=lambda x: x.distance, so that the best matches come first.
     
  • Finally, we draw the matches between the two images using the .drawMatches() function. We pass both the images, their key points, the sorted matches list, and the bool value for the Mask for this.
     
  • Next, we display the feature matching for the two images using the .imshow() function. We pass the drawn matches as a parameter along with a name for the new window. We also call the .waitKey() function with 0 as a parameter so that the window stays on screen until a key is pressed. Finally, we close all the windows using the .destroyAllWindows() function.
     

The final output for this code is as below.

ORB Technique

We can see lines drawn between the matching features of the two images in the final output.

Brute Force Using SIFT Detector

SIFT stands for Scale-Invariant Feature Transform. This technique also uses keypoints and descriptors to find matches between two images, again with brute-force matching. SIFT descriptors are floating-point vectors, so they are compared with the L2 (Euclidean) distance, which makes matching slower than with ORB's binary descriptors. SIFT identifies keypoints using the DoG (Difference of Gaussians) method.

We use the same two images as above. Let us see the Python code below.

import cv2
import numpy as np

# Load both images in grayscale
img1 = cv2.imread("one.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("two.jpg", cv2.IMREAD_GRAYSCALE)

# Create the SIFT detector (on OpenCV builds older than 4.4.0,
# use cv2.xfeatures2d.SIFT_create() instead)
sift = cv2.SIFT_create()

kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# SIFT descriptors are floating-point, so the default L2 norm is used
brute_force = cv2.BFMatcher()
matches = brute_force.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)

# Draw only the 50 best matches
matching_results = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)

cv2.imshow("Feature_Matching", matching_results)
cv2.waitKey(0)
cv2.destroyAllWindows()

 

Now, let us understand this program.

  • First, we include the required libraries: OpenCV and NumPy.
     
  • Next, we import the two images as grayscale same as before.
     
  • Now, we create the SIFT detector using the cv2.SIFT_create() function (older OpenCV versions exposed it as cv2.xfeatures2d.SIFT_create()).
     
  • Furthermore, we create the key points and descriptors of the two images using the .detectAndCompute() function. For this, two parameters are passed. The first one is the image, and the second one is Mask. In our case, the Mask is not required, so we pass None.
     
  • Next, we create a matcher object using the cv2.BFMatcher() function.
     
  • Now, we check the matches between the two descriptors.
     
  • Then, we sort the matches found in ascending order according to their distance. For this, we use the lambda x:x.distance parameter.
     
  • Finally, we draw the matches between the two images using the .drawMatches() function. We pass both the images, their key points, the sorted matches list, and the bool value for the Mask for this. In this code, we have passed the first 50 matches.
     
  • Next, we display the feature matching for the two images using the .imshow() function. We pass the drawn matches as a parameter along with a name for the new window. We also call the .waitKey() function with 0 as a parameter so that the window stays on screen until a key is pressed. Finally, we close all the windows using the .destroyAllWindows() function.

 

The final output of this code is as below.

SIFT technique

Using FLANN Technique

The third method uses the FLANN technique. FLANN stands for Fast Library for Approximate Nearest Neighbours. It is an optimized library for approximate nearest-neighbour searches, and combined with a ratio test it draws only clear, unambiguous matches between the images.

We use the same two images as above. Let us see the Python code for it.

import cv2
import numpy as np
from matplotlib import pyplot as plt

# Load both images in grayscale
img1 = cv2.imread("one.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("two.jpg", cv2.IMREAD_GRAYSCALE)

# Create the SIFT detector (cv2.xfeatures2d.SIFT_create() on older builds)
sift = cv2.SIFT_create()

kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Index parameters: use a k-d tree index with 2 trees
FLANN_INDEX_KDTREE = 1
i_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=2)
# Search parameters: how many times the trees are recursively traversed
s_params = dict(checks=20)

flann = cv2.FlannBasedMatcher(i_params, s_params)
# For each descriptor in img1, find the 2 nearest neighbours in img2
matches = flann.knnMatch(des1, des2, k=2)

# Keep a match only if it passes the ratio test
matches_mask = [[0, 0] for i in range(len(matches))]
for i, (m, n) in enumerate(matches):
    if m.distance < 0.5 * n.distance:
        matches_mask[i] = [1, 0]

draw_params = dict(matchColor=(0, 255, 0),
                   singlePointColor=(255, 0, 0),
                   matchesMask=matches_mask,
                   flags=0)
matched_img = cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, **draw_params)

# OpenCV draws in BGR order; convert to RGB for matplotlib
plt.imshow(cv2.cvtColor(matched_img, cv2.COLOR_BGR2RGB))
plt.show()

 

Now, let us understand this program.

  • First, we import the required libraries: OpenCV, numpy, and matplotlib.
     
  • Next, we import the two images as grayscale same as before.
     
  • Now, we create the SIFT detector using the cv2.SIFT_create() function (older OpenCV versions exposed it as cv2.xfeatures2d.SIFT_create()).
     
  • Furthermore, we create the key points and descriptors of the two images using the .detectAndCompute() function. For this, two parameters are passed. The first one is the image, and the second one is Mask. In our case, the Mask is not required, so we pass None.
     
  • To construct the FLANN parameters, we pass two dictionaries: the index parameters and the search parameters. The index parameters select the k-d tree algorithm and set the number of trees, in our case 2. The search parameters set checks=20, the number of times the index trees are recursively traversed; higher values give better precision but take more time.
     
  • Next, we create FLANN based matching object passing the two parameters created before. And then, we calculate the matches passing the descriptors of the two images.
     
  • Next, we create a mask to draw only the good matches. For this, we apply a ratio test (Lowe's ratio test) to each pair of nearest neighbours: a match is kept only if its distance is less than half the distance of the second-best match.
     
  • Finally, we draw the matches with the drawing parameters: green lines for the matching keypoints, red for unmatched single points, and the mask so that only matches passing the ratio test are drawn. We pass both images, their keypoints, and the specified drawing parameters.
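
The ratio test in the steps above can be sketched in isolation. The snippet below applies the same rule to hand-made (best, second-best) distance pairs instead of real knnMatch output; the 0.5 factor matches the code above:

```python
# Lowe's ratio test: keep a match only when the best candidate is
# clearly closer than the second-best candidate.
def ratio_test(pairs, ratio=0.5):
    # pairs: list of (best_distance, second_best_distance) tuples
    return [i for i, (m, n) in enumerate(pairs) if m < ratio * n]

pairs = [
    (10.0, 50.0),   # unambiguous: 10 < 0.5 * 50 -> keep
    (40.0, 45.0),   # ambiguous:   40 >= 0.5 * 45 -> discard
    (5.0, 100.0),   # unambiguous -> keep
]
print(ratio_test(pairs))  # → [0, 2]
```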

 

The final output of the code is as follows.

FLANN Technique

Frequently Asked Questions

What is OpenCV?

OpenCV stands for Open Source Computer Vision Library. It is a machine-learning software library. It provides tools and algorithms for image processing, video processing, 3D reconstruction, etc. It is written in C++ but provides interfaces for other programming languages like Python, Java, etc.

What do you mean by feature matching OpenCV?

Feature matching using OpenCV involves detecting features in two images and matching them with each other. It finds corresponding regions between the two images. It is used in computer vision tasks like object tracking, object detection, etc.

What is the FLANN technique for feature matching OpenCV?

FLANN stands for Fast Library for Approximate Nearest Neighbours. It is an optimized library for approximate nearest-neighbour searches used for feature matching in OpenCV; combined with a ratio test, it draws only clear matches between the images.

What is the ORB technique for feature matching OpenCV?

ORB stands for Oriented FAST and Rotated BRIEF. Here, FAST stands for Features from Accelerated Segment Test, and BRIEF stands for Binary Robust Independent Elementary Features. This method combines the FAST keypoint detector and the BRIEF descriptor to improve the overall performance of the matching methodology.

Conclusion

OpenCV is a machine-learning software library used for functionalities like image processing, video processing, 3D reconstruction, etc. Feature matching is one such feature that can be availed with it. Two images are taken in it, and similarities are found between them regarding regional points. We discussed various methods for feature matching OpenCV.


To learn more about DSA, competitive coding, and many more knowledgeable topics, please look into the guided paths on Codestudio. Also, you can enroll in our courses and check out the mock test and problems available. Please check out our interview experiences and interview bundle for placement preparations.

 

Happy Coding!
