Match-time covariance for descriptors

Research output: Contribution to conference › Paper › Research › peer-review

Local descriptor methods are widely used in computer vision to compare local regions of images. These descriptors are often extracted relative to an estimated scale and rotation to provide invariance up to similarity transformations. The estimation of rotation and scale in local neighborhoods (also known as steering) is an imperfect process, however, and can introduce errors downstream. In this paper, we propose an alternative to steering that we refer to as match-time covariance (MTC). MTC is a general strategy for descriptor design that simultaneously yields invariant matches between local neighborhoods and the associated aligning transformations. We also provide a general framework for endowing existing descriptors with similarity invariance through MTC. This framework, Similarity-MTC, is simple and dramatically improves matching accuracy. Finally, we propose NCC-S, a highly effective descriptor based on classic normalized cross-correlation and designed for fast execution in the Similarity-MTC framework. The surprising effectiveness of this very simple descriptor suggests that MTC opens fruitful research directions for image matching that were not previously accessible in the steering-based paradigm.
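For context, the classic normalized cross-correlation that NCC-S builds on compares two patches by the inner product of their mean-centred, unit-norm intensities, making the score invariant to affine brightness and contrast changes. The following is a minimal illustrative sketch of plain NCC only, not the authors' NCC-S descriptor or the Similarity-MTC framework; the function name and example data are assumptions for illustration.

    import numpy as np

    def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
        """Classic normalized cross-correlation between two equally sized patches.

        Each patch is mean-centred and scaled to unit norm, so the score lies
        in [-1, 1] and is unaffected by affine changes in brightness/contrast.
        """
        a = patch_a.astype(np.float64).ravel()
        b = patch_b.astype(np.float64).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0.0:  # constant patch: correlation is undefined, return 0
            return 0.0
        return float(np.dot(a, b) / denom)

    # Example: a patch compared with a brightness/contrast-shifted copy scores ~1.0
    rng = np.random.default_rng(0)
    p = rng.random((16, 16))
    print(ncc(p, 2.0 * p + 5.0))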

Original language: English
Publication date: 2013
DOIs
Publication status: Published - 2013
Externally published: Yes
Event: 2013 24th British Machine Vision Conference, BMVC 2013 - Bristol, United Kingdom
Duration: 9 Sep 2013 - 13 Sep 2013

Conference

Conference: 2013 24th British Machine Vision Conference, BMVC 2013
Country: United Kingdom
City: Bristol
Period: 09/09/2013 - 13/09/2013
Sponsors: Dyson, HP, IET Journals - The Institution of Engineering and Technology, Microsoft Research, Qualcomm

ID: 302046402