The image alignment and registration pipeline takes two input photos that contain the same scene from slightly different viewing angles. The image above displays both input photos side by side, with the common scene (object) being the painting Las Meninas (1656) by Velázquez, on display at the Museo del Prado in Madrid (Spain).
The first step is computing the projection that establishes the mathematical relationship which maps pixel coordinates from one image to another [1]. The most general planar 2D transformation is the eight-parameter perspective transform, or homography, denoted by a general $3 \times 3$ matrix $\mathbf{H}$. It operates on 2D homogeneous coordinate vectors, $\mathbf{x'} = (x', y', 1)$ and $\mathbf{x} = (x, y, 1)$, as follows:
$$ \mathbf{x'} \sim \mathbf{H}\mathbf{x} $$
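The $\sim$ denotes equality up to scale: to recover actual pixel coordinates, the result is divided by its third homogeneous coordinate, which gives

$$
x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}},
\qquad
y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}}
$$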
Afterwards, we take the homography matrix and use it to warp the perspective of one of the photos over the other, aligning the photos together. With the task clearly defined and the pipeline introduced, the next sections describe how it can be achieved using OpenCV.
Feature Detection
To compute the perspective transform matrix $\mathbf{H}$, we need to link both input images and assess which regions are the same. We could manually pick the corners of the painting in each photo and use them to compute the homography; however, this approach has several problems: the corners of a painting may be occluded in one of the scenes, not all scenes are rectangular paintings so it would not be suitable for those cases, and it would require manual work per scene, which is not ideal if we must process several scenes in an automated manner.
Therefore, a feature detection and matching process is used to link common regions in both photos. The main limitation of this approach is that the scene must contain enough evenly distributed features. The method used here was ORB [2], but other feature extraction methods are also available — the code of the FeatureExtraction class is provided at the end of the post for brevity.
# convert to four channels after loading: the alpha channel
# is needed later for the transparent border of the warp
img0 = cv.cvtColor(cv.imread("lasmeninas0.jpg"),
                   cv.COLOR_BGR2BGRA)
img1 = cv.cvtColor(cv.imread("lasmeninas1.jpg"),
                   cv.COLOR_BGR2BGRA)
features0 = FeatureExtraction(img0)
features1 = FeatureExtraction(img1)
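If ORB yields too few features for a particular scene, another extractor can be swapped in with few changes. The variant below, using AKAZE, is only an illustrative sketch and not part of the original pipeline; extract_akaze is a hypothetical helper. Since AKAZE also produces binary descriptors, the FLANN-LSH matcher configured in aux.py applies unchanged.

import cv2 as cv

# hypothetical AKAZE-based extractor; mirrors what FeatureExtraction
# computes (keypoints and descriptors), without the drawing step
akaze = cv.AKAZE_create()

def extract_akaze(img):
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    # kps: keypoint locations; des: binary descriptors
    kps, des = akaze.detectAndCompute(gray, None)
    return kps, des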
Feature Matching
The aforementioned class computed the keypoints (location of a feature) and descriptors (description of said feature) for both photos, so now we have to pair them up and remove the outliers. First, FLANN (Fast Library for Approximate Nearest Neighbors) computes the pairs of matching features while taking into account the nearest neighbours of each feature. Second, the best features are selected using Lowe's ratio test on the distances, which aims to eliminate false matches from the previous step [3]. The code is provided below, and the full function at the end. Right after the code, the image presents both input photos side by side with the matching pairs of features.
matches = feature_matching(features0, features1)
matched_image = cv.drawMatches(img0, features0.kps,
                               img1, features1.kps,
                               matches, None, flags=2)
Homography Computation
After computing the pairs of matching features of the input photos, it is possible to compute the homography matrix. It takes as input the matching points on each image, and using RANSAC (random sample consensus) we are able to robustly compute the projective matrix. Even though the feature pairs were already filtered in the previous step, they are filtered again so that only the inliers are used to compute the homography. This removes the outliers from the calculation, which leads to a minimization of the error associated with the homography computation.
H, _ = cv.findHomography(
    features0.matched_pts,
    features1.matched_pts,
    cv.RANSAC, 5.0)
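As a side note, the second return value (discarded as _ above) is the inlier mask produced by RANSAC. A small sketch of how it could be used to keep only the RANSAC-consistent matches, e.g. for visualization — inlier_matches is an illustrative name, not part of the original code:

# mask is an N x 1 array with 1 for inliers and 0 for outliers,
# in the same order as the matched points
H, mask = cv.findHomography(
    features0.matched_pts,
    features1.matched_pts,
    cv.RANSAC, 5.0)
inlier_matches = [m for m, keep
                  in zip(matches, mask.ravel()) if keep]
print(f"{len(inlier_matches)} of {len(matches)} matches are inliers")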
This function gives as output the following $3 \times 3$ matrix (for our input):
$$
\mathbf{H} =
\begin{bmatrix}
+7.85225708\text{e-}01 & -1.28373989\text{e-}02 & +4.06705815\text{e+}02 \\
-4.21741196\text{e-}03 & +7.76450089\text{e-}01 & +8.15665534\text{e+}01 \\
-1.20903215\text{e-}06 & -2.34464498\text{e-}05 & +1.00000000\text{e+}00
\end{bmatrix}
$$
Perspective Warping & Overlay
Now that we have computed the transformation matrix that establishes the mathematical relationship which maps pixel coordinates from one image to another, we can perform the image registration process. This process performs a perspective warp of one of the input photos so that it overlaps the other one. The outside of the warped image is filled with transparency, which then allows us to overlay that image over the other one and verify its correct alignment.
h, w, c = img1.shape
warped = cv.warpPerspective(img0, H, (w, h),
                            borderMode=cv.BORDER_CONSTANT,
                            borderValue=(0, 0, 0, 0))

output = np.zeros((h, w, 3), np.uint8)
alpha = warped[:, :, 3] / 255.0
output[:, :, 0] = (1. - alpha) * img1[:, :, 0] + alpha * warped[:, :, 0]
output[:, :, 1] = (1. - alpha) * img1[:, :, 1] + alpha * warped[:, :, 1]
output[:, :, 2] = (1. - alpha) * img1[:, :, 2] + alpha * warped[:, :, 2]
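The three per-channel lines can also be collapsed into a single NumPy broadcast; the sketch below is equivalent under the same inputs (output_blend is an illustrative name):

# alpha is expanded to shape (h, w, 1) so it broadcasts over the
# three colour channels of the BGRA images
output_blend = ((1.0 - alpha[..., None]) * img1[..., :3]
                + alpha[..., None] * warped[..., :3]).astype(np.uint8)

Something like cv.imwrite("output.jpg", output) would then save the result for inspection.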
main.py
import cv2 as cv
import numpy as np
from aux import FeatureExtraction, feature_matching

# convert to four channels after loading: the alpha channel
# is needed later for the transparent border of the warp
img0 = cv.cvtColor(cv.imread("lasmeninas0.jpg"),
                   cv.COLOR_BGR2BGRA)
img1 = cv.cvtColor(cv.imread("lasmeninas1.jpg"),
                   cv.COLOR_BGR2BGRA)

features0 = FeatureExtraction(img0)
features1 = FeatureExtraction(img1)

matches = feature_matching(features0, features1)
# matched_image = cv.drawMatches(img0, features0.kps,
#                                img1, features1.kps,
#                                matches, None, flags=2)

H, _ = cv.findHomography(
    features0.matched_pts,
    features1.matched_pts,
    cv.RANSAC, 5.0)

h, w, c = img1.shape
warped = cv.warpPerspective(img0, H, (w, h),
                            borderMode=cv.BORDER_CONSTANT,
                            borderValue=(0, 0, 0, 0))

output = np.zeros((h, w, 3), np.uint8)
alpha = warped[:, :, 3] / 255.0
output[:, :, 0] = (1. - alpha) * img1[:, :, 0] + alpha * warped[:, :, 0]
output[:, :, 1] = (1. - alpha) * img1[:, :, 1] + alpha * warped[:, :, 1]
output[:, :, 2] = (1. - alpha) * img1[:, :, 2] + alpha * warped[:, :, 2]
aux.py
import cv2 as cv
import numpy as np
import copy

orb = cv.ORB_create(
    nfeatures=10000,
    scaleFactor=1.2,
    scoreType=cv.ORB_HARRIS_SCORE)

class FeatureExtraction:
    def __init__(self, img):
        self.img = copy.copy(img)
        self.gray_img = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
        # keypoints (locations) and descriptors of the features
        self.kps, self.des = orb.detectAndCompute(
            self.gray_img, None)
        self.img_kps = cv.drawKeypoints(
            self.img, self.kps, 0,
            flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
        self.matched_pts = []

LOWES_RATIO = 0.7
MIN_MATCHES = 50
index_params = dict(
    algorithm=6,  # FLANN_INDEX_LSH, for ORB's binary descriptors
    table_number=6,
    key_size=10,
    multi_probe_level=2)
search_params = dict(checks=50)
flann = cv.FlannBasedMatcher(
    index_params,
    search_params)

def feature_matching(features0, features1):
    matches = []  # good matches as per Lowe's ratio test
    if(features0.des is not None and len(features0.des) > 2):
        all_matches = flann.knnMatch(
            features0.des, features1.des, k=2)
        try:
            for m, n in all_matches:
                if m.distance < LOWES_RATIO * n.distance:
                    matches.append(m)
        except ValueError:
            pass
        if(len(matches) > MIN_MATCHES):
            features0.matched_pts = np.float32(
                [features0.kps[m.queryIdx].pt for m in matches]
            ).reshape(-1, 1, 2)
            features1.matched_pts = np.float32(
                [features1.kps[m.trainIdx].pt for m in matches]
            ).reshape(-1, 1, 2)
    return matches
requirements.txt
opencv-python==4.2.0.34 numpy==1.19.2