
Proceedings of the Second APSIPA Annual Summit and Conference, pages 935–938, Biopolis, Singapore, 14-17 December 2010.

Rotation Invariant Feature Descriptor Integrating HAVA and RIFT

Mingyi He, Yuchao Dai, Jing Zhang and Lin Bai
School of Electronics and Information, Northwestern Polytechnical University

Shaanxi Key Laboratory of Information Acquisition and Processing, Xi’an, 710129, China
E-mail: [email protected], [email protected], [email protected], [email protected]

Abstract—Local feature descriptors, which are distinctive and yet invariant to many kinds of geometric and photometric transformations, have received increasing research attention due to their promising performance. Aiming at tackling the difficulties in estimating the local dominant orientation and the high dimensionality of state-of-the-art local feature descriptors, a novel rotation invariant descriptor, HAVA-RIFT (Histogram of Absolute Value Activity-Rotation Invariant Feature Transform), is proposed. Firstly, the Harris-Laplace detector is utilized to obtain candidate multi-scale corners and their corresponding characteristic scales. Secondly, histograms of absolute value activity and the rotation invariant feature transform descriptor are computed in the local region. Finally, a two-step double-threshold matching strategy is applied to determine the matching relationship, and a two-way matching principle is used to eliminate “many-to-one” mismatches. Experiments on real images demonstrate that the HAVA-RIFT descriptor outperforms the existing RIFT descriptor under various conditions such as scaling, rotation, light change, image blurring, affine transformation and JPEG compression.

I. INTRODUCTION

Invariant feature descriptors constructed from local image patches have been widely utilized in object recognition [1], texture classification [2], image retrieval [3] and wide baseline image registration [4]. Feature detection is the prerequisite step for constructing a local feature descriptor; it aims at localizing keypoints or regions with good repeatability under various transformations. Local feature descriptors, which need to be invariant to many kinds of variations, characterize the local region and construct the feature vector. Feature matching aims at determining the corresponding relationship between keypoints or regions in different images based on feature descriptors. A good feature descriptor should not only localize distinctive keypoints or regions with high repeatability but also be invariant to various geometric and photometric transformations, ranging from scaling, rotation, light change and viewpoint change to projective transformation.

Researchers in computer vision have invested considerable effort in local feature detection and feature description with good distinctiveness and repeatability. Various detectors have been proposed, including the Harris corner detector [5], Harris-Laplace [6], Hessian, Hessian-Affine and Harris-Affine [7]. A large number of descriptors have also been proposed, including distribution based descriptors [8], filter based descriptors, time-frequency based descriptors and differential invariance based descriptors. Recently, Calonder et al. [9] proposed the binary robust independent elementary features (BRIEF) descriptor.

During the computation of feature descriptors, the dominant gradient orientation has to be estimated to achieve rotation invariance, and then all the supporting regions are rotated to an identical orientation. However, the computation of the dominant gradient orientation is often contaminated by scaling, deformation and noise. Meanwhile, the computational complexity of estimating the dominant gradient orientation is high, which increases the time spent on feature description. Additionally, high-dimensional feature descriptors increase the time spent on feature matching.

Aiming at tackling the problems mentioned above, a novel feature descriptor named HAVA-RIFT (Histogram of Absolute Value Activity-Rotation Invariant Feature Transform), which is rotation invariant and has low dimensionality, is proposed in this paper. Firstly, the Harris-Laplace multi-scale corner detector is utilized to obtain candidate multi-scale corners and their corresponding scales in scale space. Secondly, histograms of absolute value activity and rotation invariant feature transform descriptors are computed in the local region determined by each detected keypoint and its corresponding scale. Finally, a two-step double-threshold matching strategy is applied to obtain the matching relationship, while a two-way matching principle is used to eliminate “many-to-one” mismatches.

II. HARRIS-LAPLACE FEATURE DETECTOR

The Harris-Laplace feature detector [6] detects interest points using the Harris corner detector [5] and localizes the characteristic scale in the Laplace scale space through extremum detection. The Harris-Laplace feature detector is highly invariant to rotation, scaling and affine deformation.

The second order matrix with adapted scale is defined as [7]:

µ(x, σI, σD) = σD² g(σI) ∗ [ Lx²(x, σD)    LxLy(x, σD)
                             LxLy(x, σD)   Ly²(x, σD) ]   (1)

where σI is the variance of the Gaussian filter, ∗ denotes convolution, σD is the scale parameter used to construct the scale space of the image, and Lx and Ly are the derivatives computed in the x and y directions, respectively.

The corner response is defined as:

F(x, σI, σD) = det(µ(x, σI, σD)) − α·trace²(µ(x, σI, σD))   (2)

where α = 0.04. The local maxima of F(x, σI, σD) after non-maximum suppression determine the locations of interest points.

Keypoints are determined through local maximum detection in scale space.
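The scale-adapted corner response of Eqs. (1)-(2) can be sketched in Python as follows. This is an illustrative sketch rather than the authors' implementation: the function name harris_response and the use of SciPy's Gaussian derivative filters are assumptions, while the α = 0.04 default follows the text.

import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(image, sigma_d, sigma_i, alpha=0.04):
    """Scale-adapted Harris corner response F(x, sigma_I, sigma_D), cf. Eqs. (1)-(2)."""
    img = image.astype(np.float64)
    # Image derivatives at the differentiation scale sigma_D (Gaussian derivative filters);
    # axis 1 is the x direction and axis 0 is the y direction.
    Lx = gaussian_filter(img, sigma_d, order=(0, 1))
    Ly = gaussian_filter(img, sigma_d, order=(1, 0))
    # Entries of the second order matrix mu, smoothed at the integration scale sigma_I
    # and normalized by sigma_D^2.
    Lxx = sigma_d ** 2 * gaussian_filter(Lx * Lx, sigma_i)
    Lyy = sigma_d ** 2 * gaussian_filter(Ly * Ly, sigma_i)
    Lxy = sigma_d ** 2 * gaussian_filter(Lx * Ly, sigma_i)
    # Corner response F = det(mu) - alpha * trace(mu)^2.
    return (Lxx * Lyy - Lxy ** 2) - alpha * (Lxx + Lyy) ** 2

Local maxima of this response in image space, after non-maximum suppression, give the candidate corner locations at each scale.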

Mikolajczyk and Schmid [6] proposed to utilize the Laplace response to detect the characteristic scale and demonstrated that the Laplace response achieves the best performance, in terms of the ratio of correctly detected interest points, compared with the Difference of Gaussian, the Harris measure and the gradient. The scale normalized Laplace response is defined as:

L(x, σD) = | σD² (Lxx(x, σD) + Lyy(x, σD)) |   (3)

The scale normalized Laplace response is computed for each Harris corner detected at each scale. If an interest point attains a local maximum both of the Harris corner response in image space and of the Laplace response in scale space, the point is determined to be a multi-scale Harris corner and the corresponding scale is taken as its characteristic scale. Due to the discrete nature of the scale space representation, interpolation can be used to precisely localize the local maximum in the 3D scale space. The scale normalized Laplace response of a multi-scale Harris corner characterizes the quality of the corner: a corner with a large Laplace response after non-maximum suppression tends to be more distinctive and to have higher repeatability. Thus all the interest points can be sorted according to their Laplace responses.
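The characteristic-scale selection of Eq. (3) can be sketched in the same spirit; SciPy's gaussian_laplace computes the Laplacian of Gaussian, while the helper name characteristic_scale, the discrete scale set sigmas and the simple three-point local-maximum test are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_laplace

def scale_normalized_laplace(image, sigma_d):
    """Scale normalized Laplace response |sigma_D^2 (Lxx + Lyy)| of Eq. (3)."""
    return np.abs(sigma_d ** 2 * gaussian_laplace(image.astype(np.float64), sigma_d))

def characteristic_scale(image, y, x, sigmas):
    """Return the scale at which the Laplace response at (y, x) attains a local
    maximum over the discrete scale set, or None if no such maximum exists."""
    responses = [scale_normalized_laplace(image, s)[y, x] for s in sigmas]
    for i in range(1, len(sigmas) - 1):
        if responses[i] > responses[i - 1] and responses[i] > responses[i + 1]:
            return sigmas[i]
    return None

In practice the Laplace responses over all scales would be precomputed once, and a corner would be kept only if it is simultaneously a Harris maximum in image space and a Laplace maximum in scale space.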

III. HAVA-RIFT FEATURE DESCRIPTOR

A. RIFT feature descriptor

Rotation Invariant Feature Transform (RIFT), which is a generalization of the Scale Invariant Feature Transform (SIFT) [8], is a rotation invariant feature descriptor. The construction of the RIFT feature descriptor consists of the following steps: divide the local circular normalized supporting patch into concentric rings of equal width, and compute a gradient orientation histogram within each ring. To realize rotation invariance, the gradient orientation is measured relative to the direction pointing outward from the center. A configuration of four rings and eight orientation bins is adopted, yielding a feature vector of dimensionality 32.

For each supporting region, denote the central position as (xc, yc). For an arbitrary position (x, y) in the supporting region, the gradients in the horizontal and vertical directions are given as:

dx = (I(x + 1, y) − I(x − 1, y))/2
dy = (I(x, y + 1) − I(x, y − 1))/2   (4)

The gradient magnitude is m(x, y) = √(dx² + dy²) and the gradient orientation is θ(x, y) = tan⁻¹(dy/dx). To achieve rotation invariance, the gradient orientation at each point is measured relative to the direction pointing outward from the central point as:

ϕ(x, y) = θ(x, y) − α(x, y)   (5)

where the relative direction to the center is

α(x, y) = tan⁻¹((y − yc)/(x − xc))   (6)

When the image rotates, ϕ(x, y) does not change and is thus rotation invariant, as illustrated in Fig. 1.

Once the rotation invariant feature ϕ(x, y) is obtained for each point in the local supporting region, the histogram can easily be computed. Each entry of the RIFT feature descriptor is defined as F(d, θ) = m(x, y)g(d, σ), where g(d, σ) is a Gaussian window function, d indexes the concentric ring and θ indexes the orientation bin. The construction of the RIFT feature is illustrated in Fig. 2. Finally, the RIFT feature descriptor is obtained as:

RIFT(xc, yc) = (F(d1, θ1), F(d1, θ2), . . . , F(d4, θ8))   (7)

Fig. 1. Extraction of angle with rotation invariance.

Fig. 2. RIFT Feature
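A compact Python sketch of the RIFT computation of Eqs. (4)-(7) under the four-ring, eight-bin configuration described above is given below. The function name compute_rift, the equal-width ring partition by normalized radius and the choice of Gaussian width are illustrative assumptions, not the authors' exact implementation.

import numpy as np

def compute_rift(patch, n_rings=4, n_bins=8, sigma=None):
    """RIFT descriptor of a square patch covering the circular supporting region:
    per-ring histograms of the gradient orientation measured relative to the
    outward direction from the center (Eqs. (4)-(6))."""
    h, w = patch.shape
    yc, xc = (h - 1) / 2.0, (w - 1) / 2.0
    radius = min(yc, xc)
    sigma = radius / 2.0 if sigma is None else sigma   # assumed Gaussian width
    # Central-difference gradients, Eq. (4) (np.gradient uses (I(x+1)-I(x-1))/2 inside).
    dy, dx = np.gradient(patch.astype(np.float64))
    mag = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.arctan2(dy, dx)
    # Outward direction alpha from the center, Eq. (6), and relative orientation phi, Eq. (5).
    yy, xx = np.mgrid[0:h, 0:w]
    alpha = np.arctan2(yy - yc, xx - xc)
    phi = np.mod(theta - alpha, 2 * np.pi)
    # Ring index from the distance to the center and Gaussian spatial weighting g(d, sigma).
    d = np.sqrt((xx - xc) ** 2 + (yy - yc) ** 2)
    weight = mag * np.exp(-d ** 2 / (2 * sigma ** 2))
    ring = np.minimum((d / radius * n_rings).astype(int), n_rings - 1)
    bins = np.minimum((phi / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    inside = d <= radius
    desc = np.zeros((n_rings, n_bins))
    np.add.at(desc, (ring[inside], bins[inside]), weight[inside])
    return desc.ravel()   # dimensionality n_rings * n_bins = 32, as in Eq. (7)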

B. Histogram of Absolute Value Activity

The RIFT feature descriptor mainly describes local information through gradients. However, using gradients alone in feature description cannot guarantee high distinctiveness, because illumination information, i.e., the gray level values, is not taken into consideration. Illumination information therefore has to be integrated with the RIFT feature to improve the distinctiveness of the description. One way to use illumination information is the intensity domain spin image [2]. However, the dimensionality of the spin image is 100, which causes high computational complexity in feature matching; additionally, the information extracted for the spin image is redundant with respect to RIFT [2]. In this paper, the histogram of activity and the histogram of absolute value activity are utilized to construct a local region descriptor with rotation invariance. HAVA and RIFT rely on complementary kinds of image information: the former uses gray level values, while the latter uses gradients.

The activity at an arbitrary position (x, y) in an image is defined as Af(x, y) = f(x, y) − fω(x, y) [10], where fω(x, y) represents the average value in a local region ω around (x, y). The activity measures the difference between an image pixel and the average value of its local region. The absolute value activity is defined as AVAf(x, y) = |f(x, y) − fω(x, y)|, which is always non-negative. For image pixels in smooth regions, AVAf(x, y) tends to be small. Therefore, the absolute value activity AVAf(x, y) can express the information enclosed in the local region, and its computation involves only first-order operations, so the computational complexity is reduced significantly while performance is maintained. By combining the activity and the absolute value activity, the local information can be characterized more distinctively.

Let pc = (xc, yc) be a corner point detected in image f with corresponding characteristic scale sc. In this paper, r = 10sc is utilized to determine the size of the supporting region. The activity Af(x, y) and the absolute value activity AVAf(x, y) are computed for each point p = (x, y) in the local region R around the corner point pc.
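The activity and absolute value activity can be computed as in the following sketch, assuming, purely for illustration, a uniform win × win averaging window for the local mean fω:

import numpy as np
from scipy.ndimage import uniform_filter

def activity_maps(image, win=5):
    """Activity A_f = f - f_omega and absolute value activity AVA_f = |f - f_omega|,
    where f_omega is the mean over a win x win neighborhood (the local region omega)."""
    f = image.astype(np.float64)
    f_mean = uniform_filter(f, size=win)   # local average f_omega(x, y)
    activity = f - f_mean
    return activity, np.abs(activity)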

To improve the robustness of the HAVA feature, a Gaussian window is applied to the activity and the absolute value activity. The weighted activity and the weighted absolute value activity are obtained as:

C(p) = ω(r, σ)(f(x, y) − fω(x, y))
C′(p) = ω(r, σ)|f(x, y) − fω(x, y)|   (8)

where ω(r, σ) is a Gaussian window with variance σ.

A rotation invariant feature descriptor can be constructed based on the activity C(p) and the absolute value activity C′(p) at pc. To achieve rotation and scale invariance, the local region R is divided into concentric rings R1, R2, ..., Rt of equal width in logarithmic coordinates, which makes the descriptor more sensitive near the center. In this paper, t = 4 is adopted. To construct a local feature with rotation invariance, the average weighted activity and the average weighted absolute value activity are computed in each concentric ring region; these quantities characterize the information in each ring. To describe the local region Ri effectively and distinctively, the local information is represented as histograms, since histograms are not sensitive to non-uniform deformations in the local region.

For the local region Ri, the average weighted activity is defined as:

HRi(pc) = ( Σ_{p∈Ri} C(p) ) / #Ri   (9)

where #Ri denotes the number of points in the i-th ring region Ri. Similarly, the average weighted absolute value activity in the ring region Ri is given as:

VRi(pc) = ( Σ_{p∈Ri} C′(p) ) / #Ri   (10)

Taking all the histograms of the ring regions into consideration and merging them, the Histogram of Absolute Value Activity (HAVA) descriptor is defined as:

HAVA(pc) = (HR1, HR2, HR3, HR4, VR1, VR2, VR3, VR4)   (11)

The HAVA descriptor can be viewed as a local activity descriptor of dimensionality 8. To handle linear illumination change, the HAVA descriptor is normalized to unit norm, which removes contrast changes in the intensity domain. Because the activity and the absolute value activity are differences of gray values rather than absolute intensities, invariance to brightness offsets is also achieved. The construction process of the HAVA descriptor is shown in Fig. 3.

Fig. 3. HAVA Feature Descriptor.
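A sketch of the HAVA construction of Eqs. (8)-(11) is given below: Gaussian-weighted activities are averaged over four concentric rings around the corner, concatenated into an 8-dimensional vector and normalized. The logarithmic ring partition of the normalized radius and the Gaussian width are illustrative assumptions.

import numpy as np

def compute_hava(activity, ava, pc, r, n_rings=4, sigma=None):
    """HAVA descriptor at corner pc = (yc, xc) with supporting radius r (r = 10*s)."""
    yc, xc = pc
    sigma = r / 2.0 if sigma is None else sigma        # assumed width of omega(r, sigma)
    yy, xx = np.mgrid[0:activity.shape[0], 0:activity.shape[1]]
    d = np.sqrt((yy - yc) ** 2 + (xx - xc) ** 2)
    inside = d <= r
    w = np.exp(-d ** 2 / (2 * sigma ** 2))             # Gaussian window omega(r, sigma)
    C = w * activity                                   # weighted activity, Eq. (8)
    C_abs = w * ava                                    # weighted absolute value activity, Eq. (8)
    # Ring index: equal width in the logarithm of the normalized radius (finer near the center).
    ring = np.floor(np.log2(np.maximum(d, 1e-9) / r * 2 ** n_rings)).astype(int)
    ring = np.clip(ring, 0, n_rings - 1)
    H = np.zeros(n_rings)
    V = np.zeros(n_rings)
    for i in range(n_rings):
        mask = inside & (ring == i)
        if mask.any():
            H[i] = C[mask].mean()                      # average weighted activity, Eq. (9)
            V[i] = C_abs[mask].mean()                  # average weighted absolute value activity, Eq. (10)
    hava = np.concatenate([H, V])                      # 8-dimensional descriptor, Eq. (11)
    norm = np.linalg.norm(hava)
    return hava / norm if norm > 0 else hava           # unit norm for linear illumination change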

Above we have presented the principles of the RIFT and HAVA rotation invariant feature descriptors. These two descriptors are rotation and scale invariant and describe different aspects of the local supporting region. By combining them, we can achieve a better description of the local supporting region and improve the distinctiveness.

C. Matching Strategy

We propose a two-step double-threshold matching strategy. Since HAVA is a low-dimensional feature and RIFT is a high-dimensional feature, we first match the HAVA feature descriptors using threshold-based nearest distance. If a pair of feature descriptors does not pass this test, the process stops. Otherwise, the RIFT feature descriptors are matched by applying the same threshold-based nearest distance criterion. The feature descriptor pairs passing this second test are confirmed as correspondences. Because a two-step, two-threshold matching strategy is used, the implementation speed is increased and the matching precision is improved.
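One possible reading of this two-step strategy is sketched below; the thresholds t_hava and t_rift, and the choice to test the RIFT distance only for the HAVA nearest neighbor, are illustrative assumptions rather than the authors' exact procedure.

import numpy as np

def match_two_step(hava1, rift1, hava2, rift2, t_hava=0.3, t_rift=0.4):
    """For each keypoint of image 1, first test the cheap 8-D HAVA distance; only if
    that test passes, test the 32-D RIFT distance. Returns index pairs passing both."""
    matches = []
    for i in range(hava1.shape[0]):
        # Step 1: nearest neighbor in HAVA space must be closer than t_hava.
        d_hava = np.linalg.norm(hava2 - hava1[i], axis=1)
        j = int(np.argmin(d_hava))
        if d_hava[j] > t_hava:
            continue
        # Step 2: the RIFT distance of the same candidate must be below t_rift.
        if np.linalg.norm(rift2[j] - rift1[i]) <= t_rift:
            matches.append((i, j))
    return matches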

D. Algorithm

The procedure of the HAVA-RIFT feature descriptor is as follows:

Algorithm 1 HAVA-RIFT Feature Descriptor
1) Apply the Harris-Laplace feature detector to detect multi-scale corner points in scale space, obtaining their positions and scales s;
2) For each multi-scale corner, compute the local supporting region R with r = 10s to achieve scale invariance;
3) Compute the HAVA and RIFT feature descriptors in the local supporting region R;
4) Match the HAVA feature descriptors across images and determine the matching relationship based on threshold-based nearest distance;
5) Match the RIFT feature descriptors across images and determine the matching relationship based on threshold-based nearest distance.

IV. EXPERIMENTAL RESULTS

To evaluate the performance of the proposed HAVA-RIFT feature descriptor, we have conducted extensive experiments on real images.

A. Image Dataset

Real images from the Oxford Visual Geometry Group1 are used, where each group includes six images and the ground truth homography matrix is given. The image transformations include rotation+scaling, illumination change, image blurring, affine transformation and JPEG compression, as illustrated in Fig. 4.

B. Evaluation Metric

To evaluate the results of feature matching based on the HAVA-RIFT feature descriptor, the following evaluation metric is adopted. Let x and x′ be a pair of correspondences determined by our matching strategy, let H denote the ground truth homography matrix between images I1 and I2, and let H⁻¹ denote the homography matrix between I2 and I1.

1 http://www.robots.ox.ac.uk/~vgg/data/data-aff.html

Fig. 4. Real images used for performance evaluation. (a) Boat, (b) Bark: scaling+rotation; (c) Bike, (d) Tree: image blurring; (e) Wall, (f) Graf: viewpoint change; (g) Leuven: light change; (h) UBC: JPEG compression.

For an arbitrary position x in image I1, the corresponding position in I2 is given by Hx, while the corresponding position determined by HAVA-RIFT matching is x′; the distance error in image I2 is d1 = ‖x′ − Hx‖. Similarly, the distance error in image I1 is d2 = ‖x − H⁻¹x′‖. Thus the total distance error is

d = d1 + d2 = ‖x′ − Hx‖ + ‖x − H⁻¹x′‖

An exact correspondence should minimize the total distance error. In all the experiments, the threshold is set to 4 pixels.
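The total distance error can be computed as in the following sketch (illustrative; keypoints are given in pixel coordinates and H is the 3×3 ground truth homography):

import numpy as np

def transfer_error(x1, x2, H):
    """Total distance error d = ||x2 - H x1|| + ||x1 - H^-1 x2|| for a putative
    correspondence x1 in I1 and x2 in I2 under the ground truth homography H."""
    def project(M, p):
        q = M @ np.array([p[0], p[1], 1.0])    # homogeneous projection
        return q[:2] / q[2]
    d1 = np.linalg.norm(np.asarray(x2, float) - project(H, x1))
    d2 = np.linalg.norm(np.asarray(x1, float) - project(np.linalg.inv(H), x2))
    return d1 + d2

A match is then counted as correct when its total distance error is below the threshold (4 pixels in the experiments).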

To tackle the “many-to-one” mismatch problem, we propose a two-way matching principle. First, we determine the corner point x′ in image I2 corresponding to the corner point x in image I1 based on the matching result of the HAVA-RIFT feature descriptors. If no such x′ is found, the two-way matching procedure terminates. Otherwise, we determine the corner point x″ in image I1 corresponding to the corner point x′ in image I2 based on the HAVA-RIFT feature descriptors. If x″ = x, the two-way matching relationship is confirmed.
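The two-way matching check can be sketched as follows, where nearest_match is a hypothetical helper that returns the index of the best HAVA-RIFT match of a descriptor in the other image, or None if the double-threshold test fails:

def two_way_matches(desc1, desc2, nearest_match):
    """Keep only mutual matches: x in I1 maps to x' in I2 and x' maps back to x."""
    matches = []
    for i in range(len(desc1)):
        j = nearest_match(desc1[i], desc2)     # best match of point i in image 2
        if j is None:
            continue                           # no match found for this point
        k = nearest_match(desc2[j], desc1)     # best match of point j back in image 1
        if k == i:                             # x'' == x: two-way relationship confirmed
            matches.append((i, j))
    return matches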

We utilize the ratio of correct matching to evaluate the matching performance. The ratio of correct matching is defined as:

η = Correct Match / Potential Match   (12)

where Correct Match is the number of correct matches between the two images and Potential Match is the number of potential matches.

C. Experimental Results on Real Images

Fig. 5 shows the experimental results on real images, where the X axis represents the scale or image index and the Y axis represents η. From the figure, we observe that for scaling+rotation, illumination change, image blurring, viewpoint change and JPEG compression, HAVA-RIFT improves the correct matching ratio by around 10% to 30% over RIFT.

V. CONCLUSIONS

Aiming at dealing with the orientation estimation and high dimensionality problems associated with existing descriptors, a novel rotation invariant feature descriptor, HAVA-RIFT, has been proposed and achieves superior performance. For future work, we will investigate local feature detectors for extracting feature descriptors with affine invariance.

ACKNOWLEDGMENT

This work is supported by National Natural Science Foundation of China under key project number 60736007, Natural Science Foundation of Shaanxi Province under 2010JZ011 and Space Research Grant.

Fig. 5. Experimental results on real images: ratio of correct matching versus scale or image index for (a) Boat, (b) Bark (scale+rotation); (c) Bike, (d) Tree (image blurring); (e) Wall, (f) Graf (viewpoint change); (g) Leuven (light change); (h) UBC (JPEG compression). The red line with dots denotes HAVA-RIFT while the blue line with squares denotes RIFT.

REFERENCES

[1] S. Belongie, J. Malik, and J. Puzicha, “Shape matching and object recognition using shape contexts,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 4, pp. 509–522, 2002.

[2] S. Lazebnik, C. Schmid, and J. Ponce, “A sparse texture representation using local affine regions,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 8, pp. 1265–1278, 2005.

[3] J. Li, N. Allinson, D. Tao, and X. Li, “Multitraining support vector machine for image retrieval,” IEEE Trans. Image Process., vol. 15, no. 11, pp. 3597–3601, Nov. 2006.

[4] V. Ferrari, T. Tuytelaars, and L. V. Gool, “Wide-baseline multiple-view correspondences,” in Proc. Computer Vision and Pattern Recognition, 2003.

[5] C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of The Fourth Alvey Vision Conference, 1988, pp. 147–151.

[6] K. Mikolajczyk and C. Schmid, “Indexing based on scale invariant interest points,” in Proc. International Conference on Computer Vision, 2001, pp. 525–531.

[7] ——, “Scale & affine invariant interest point detectors,” Int. J. Comput. Vision, vol. 60, no. 1, pp. 63–86, 2004.

[8] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vision, vol. 60, no. 2, pp. 91–110, 2004.

[9] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: Binary robust independent elementary features,” in Proc. European Conference on Computer Vision. Springer, 2010, pp. 778–792.

[10] Y. Zhang and M. He, “Absolute value activity and regional similarity based on image fusion,” Computer Engineering and Application, no. 18, 2006.
