Table of contents
1. Introduction
2. Motion Tracking
2.1. Hardware that enables Motion Tracking
2.1.1. Accelerometer
2.1.2. Gyroscope
2.1.3. Phone Camera
2.2. Hardware that enables location-based AR
2.2.1. Magnetometer
2.2.2. GPS
2.3. Hardware that enables a view of the real world with AR
2.4. Tracking in AR
2.4.1. Outside-In Tracking
2.4.2. Inside-Out Tracking
2.4.3. Motion Tracking
2.5. Basic motion detection and tracking with Python and OpenCV
3. FAQs
4. Key takeaways
Last Updated: Mar 27, 2024

Motion Tracking

Author Adya Tiwari

Introduction

Motion tracking helps track the movement of objects and passes the captured data to an application for further processing. Motion tracking involves capturing the motions of objects and matching them against stored motion templates. It has a broad range of applications, for example in the military, entertainment, sports, medical applications, validation of computer vision, and robotics. It is also used in filmmaking and video game development. In many contexts, motion tracking is called motion capture, while in filmmaking and games it is usually called match moving.

Motion Tracking

Before diving into Motion Tracking in ARCore and its implementation, it is essential to learn about the various hardware components of a phone that ARCore uses and their purpose in creating a better augmented experience for the user.

The mobile hardware can be broadly sorted into three categories based on functionality:

  1. Hardware that enables motion tracking
  2. Hardware that enables location-based AR
  3. Hardware that enables a view of real-world with AR

Hardware that enables Motion Tracking

Accelerometer 

Measures acceleration, which is the change in velocity divided by time. In other words, it is the rate of change of velocity. Acceleration forces can be static/continuous, like gravity, or dynamic, like movement or vibrations.
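
As a quick, hypothetical illustration of that definition (just the arithmetic, not ARCore code), acceleration can be estimated as the change in velocity divided by the elapsed time:

# toy example: estimate acceleration as change in velocity over time
v_start = 0.0   # velocity at t = 0 s, in m/s
v_end = 9.8     # velocity at t = 1 s, in m/s (roughly free fall)
dt = 1.0        # elapsed time in seconds

acceleration = (v_end - v_start) / dt
print(acceleration)  # 9.8 m/s^2, approximately Earth's gravity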

Gyroscope

Measures and maintains orientation and angular velocity. Whenever you change the rotation of your phone while using an AR experience, the gyroscope measures that rotation, and ARCore ensures that the digital assets respond correctly.
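
To make "angular velocity" concrete, here is a minimal, hypothetical sketch (not ARCore's actual pipeline) that integrates gyroscope readings over time to estimate how far the phone has rotated:

# toy example: integrate angular velocity samples into an orientation
samples = [10.0, 12.0, 9.0, 11.0]  # hypothetical gyroscope readings in deg/s
dt = 0.1                           # sampling interval in seconds

angle = 0.0
for omega in samples:
    angle += omega * dt  # orientation is the integral of angular velocity
print(angle)  # total rotation in degrees (4.2 here)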

Phone Camera

With mobile AR, your phone camera supplies a live feed of the surrounding physical world, onto which AR content is overlaid. Beyond the camera itself, ARCore-capable phones like the Google Pixel rely on complementary technologies like machine learning, complex image processing, and computer vision to produce high-quality images and spatial maps for mobile AR.

Hardware that enables location-based AR

Magnetometer

Gives smartphones a basic orientation relative to the Earth's magnetic field. Thanks to the magnetometer, your phone always knows which direction is North, allowing it to auto-rotate digital maps depending on your physical orientation. This device is vital to location-based AR applications.
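
As a rough, hypothetical sketch of the idea (axis conventions and calibration vary by platform, and real devices also apply tilt compensation), a compass heading can be derived from the horizontal components of the magnetic field:

import math

# toy example: derive a compass heading from horizontal field readings
mx, my = 20.0, 20.0  # hypothetical magnetometer readings in microtesla

heading = math.degrees(math.atan2(my, mx)) % 360
print(heading)  # 45.0 degrees under this toy convention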

GPS

A global navigation satellite system that provides geolocation and time information to a GPS receiver, such as the one in your smartphone. For ARCore-capable smartphones, this device enables location-based AR applications.

Hardware that enables a view of the real world with AR

Display: the display on your smartphone is essential for crisp imagery and for showing 3D-rendered assets. For example, the Google Pixel XL's display is a 5.5" AMOLED QHD (2560 x 1440) 534ppi display, which means that the phone can show 534 pixels per inch, making for rich, vivid images.
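
You can verify that 534ppi figure yourself: pixel density is the diagonal resolution in pixels divided by the diagonal screen size in inches. A quick sketch:

import math

# compute pixels per inch (ppi) from resolution and diagonal screen size
width_px, height_px = 2560, 1440  # QHD resolution
diagonal_inches = 5.5             # Google Pixel XL display diagonal

diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
print(round(diagonal_px / diagonal_inches))  # 534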

Tracking in AR

AR depends on computer vision to see the world and recognize the objects in it. The first step in the computer vision process is getting the visual information, the environment around the hardware, to the brain inside the device. In immersive technologies, tracking is the process of monitoring, recognizing, segmenting, and analyzing this raw data. For AR, tracking happens in two ways: inside-out and outside-in tracking.

Outside-In Tracking

With outside-in tracking, cameras or sensors aren't housed inside the AR device. Instead, they're mounted elsewhere in the space, typically on walls or stands, so that they have an unobstructed view of the AR device. They then feed information to the AR device directly or through a computer. Outside-in tracking overcomes some of the space and power constraints that AR devices face: the external cameras or sensors can, in theory, be as large as needed, and you don't have to worry about people wearing them on their faces or carrying them in their pockets. However, what you gain in capability, you lose in convenience. If your headset loses its connection to the external sensors for even a moment, it can lose tracking, and the visuals will suffer, breaking immersion.

Inside-Out Tracking

With inside-out tracking, cameras and sensors are built directly into the device's body. Smartphones are the clearest example of this kind of tracking: they have cameras for seeing and processors for thinking, all in one wireless, battery-powered portable device. Microsoft's HoloLens is another device that uses inside-out tracking, on the AR headset side. However, all that hardware takes up space, consumes power, and generates heat. The true potential of standalone AR devices will emerge when they become as ubiquitous and useful as smartphones.

Motion Tracking

Whether on a smartphone or inside a standalone headset, every AR application is designed to display convincing virtual objects, and one of the main things a framework like ARCore does to achieve this is motion tracking. AR platforms need to know when you move. The general technology behind this is called Simultaneous Localization and Mapping (SLAM). This is the process by which technologies like robots and smartphones analyze, understand, and orient themselves in the physical world. SLAM processes require data-gathering hardware like cameras, depth sensors, light sensors, gyroscopes, and accelerometers. ARCore uses these to build an understanding of your environment and uses that information to correctly render augmented experiences by detecting planes and feature points on which to set appropriate anchors. Specifically, ARCore uses a process called Concurrent Odometry and Mapping (COM). That may sound complex, but COM simply tells a smartphone where it is located in space relative to the world around it.
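
ARCore's COM implementation is proprietary, but the core idea of odometry, estimating where you are by accumulating your own motion, can be shown with a toy 2D dead-reckoning sketch (purely illustrative, not ARCore's algorithm):

import math

# toy 2D dead reckoning: accumulate heading and position from motion samples
# each sample is (turn in degrees, distance moved in meters), both hypothetical
samples = [(0.0, 1.0), (90.0, 1.0), (90.0, 1.0)]

x, y, heading = 0.0, 0.0, 0.0  # start at the origin, facing along +x
for turn_deg, dist in samples:
    heading += math.radians(turn_deg)
    x += dist * math.cos(heading)
    y += dist * math.sin(heading)
print(round(x, 2), round(y, 2))  # estimated position after the moves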

Basic motion detection and tracking with Python and OpenCV

Open up an editor, create a new file, name it motion_detector.py, and let's get coding: first, import our necessary packages. These should look pretty familiar, except perhaps the imutils package, which is a set of convenience functions that simplify basic image processing tasks. If you do not already have imutils installed on your system, you can install it via pip: pip install imutils.

# import the necessary packages
from imutils.video import VideoStream
import argparse
import datetime
import imutils
import time
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())
# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
    vs = VideoStream(src=0).start()
    time.sleep(2.0)
# otherwise, we are reading from a video file
else:
    vs = cv2.VideoCapture(args["video"])
# initialize the first frame in the video stream
firstFrame = None
# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied
    # text
    frame = vs.read()
    frame = frame if args.get("video", None) is None else frame[1]
    text = "Unoccupied"
    # if the frame could not be grabbed, then we have reached the end
    # of the video
    if frame is None:
        break
    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    # if the first frame is None, initialize it
    if firstFrame is None:
        firstFrame = gray
        continue
    # compute the absolute difference between the current frame and
    # first frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]
    # dilate the thresholded image to fill in holes, then find contours
    # on thresholded image
    thresh = cv2.dilate(thresh, None, iterations=2)
    cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue
        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"
    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
        (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
    # show the frame and record if the user presses a key
    cv2.imshow("Security Feed", frame)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("Frame Delta", frameDelta)
    key = cv2.waitKey(1) & 0xFF
    # if the `q` key is pressed, break from the loop
    if key == ord("q"):
        break
# cleanup the camera and close any open windows
vs.stop() if args.get("video", None) is None else vs.release()
cv2.destroyAllWindows()
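
To try it out, run the script from a terminal. With no arguments it reads from your webcam; pass --video to analyze a recorded video file instead (the file name below is just a placeholder):

python motion_detector.py
python motion_detector.py --video example_video.mp4 --min-area 500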

I need to make sure our motion detection system is working before James, the beer stealer, visits me again; we'll save that for Part 2 of this series. I have made two video files to test out our motion detection system using Python and OpenCV.

FAQs

1. What are the main types of motion tracking?

There are six essential types of motion tracking: single-point tracking, two-point tracking, corner pin tracking, planar tracking, spline tracking, and 3D camera tracking.

2. How precise is motion tracking?

Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for research and clinical purposes.

3. What is four-point tracking in After Effects?

Four-point tracking is the most common type of tracking used for things like screen replacements. It allows you to identify the corners of a screen and track them so you can insert your new image for a seamless effect.

4. What is the difference between Track Camera and Track Motion?

Motion tracking creates a motion path for an object in the 2D comp space. Camera tracking analyzes the visual parallax in a shot to create a 3D scene and camera in After Effects' 3D world space that matches the footage.

5. Why is the Apply button greyed out in Track Motion?

If you don't have a second layer in your composition, or if you don't define the target layer using 'Edit Target' in the Motion Tracker panel, then the Apply button is greyed out.

Key takeaways

Motion capture is the process of recording the movement of objects or people. Match moving is a cinematic technique that allows the insertion of computer graphics into live-action footage with the correct position, scale, orientation, and motion relative to the objects in the shot. Our motion detection system built with Python and OpenCV, while basic, is capable of taking video streams and analyzing them for motion, achieving reasonably good results given the limitations of the method we used.

Hey Ninjas! Don't stop here; check out Coding Ninjas for more unique courses and guided paths on Python and AWS. Also, try Coding Ninjas Studio for more exciting articles, interview experiences, and excellent Machine Learning and Python problems.

Happy Learning!
