Learning to detect and match keypoints with deep architectures

Research output: Contribution to conference › Paper › Research › peer-review

Standard

Learning to detect and match keypoints with deep architectures. / Altwaijry, Hani; Veit, Andreas; Belongie, Serge.

2016. pp. 49.1-49.12. Paper presented at 27th British Machine Vision Conference, BMVC 2016, York, United Kingdom.

Harvard

Altwaijry, H, Veit, A & Belongie, S 2016, 'Learning to detect and match keypoints with deep architectures', Paper presented at 27th British Machine Vision Conference, BMVC 2016, York, United Kingdom, 19/09/2016 - 22/09/2016, pp. 49.1-49.12. https://doi.org/10.5244/C.30.49

APA

Altwaijry, H., Veit, A., & Belongie, S. (2016). Learning to detect and match keypoints with deep architectures. 49.1-49.12. Paper presented at 27th British Machine Vision Conference, BMVC 2016, York, United Kingdom. https://doi.org/10.5244/C.30.49

Vancouver

Altwaijry H, Veit A, Belongie S. Learning to detect and match keypoints with deep architectures. 2016. Paper presented at 27th British Machine Vision Conference, BMVC 2016, York, United Kingdom. https://doi.org/10.5244/C.30.49

Author

Altwaijry, Hani ; Veit, Andreas ; Belongie, Serge. / Learning to detect and match keypoints with deep architectures. Paper presented at 27th British Machine Vision Conference, BMVC 2016, York, United Kingdom.

Bibtex

@conference{486569ebbfeb41a8a81376d2a8aee5a2,
title = "Learning to detect and match keypoints with deep architectures",
abstract = "Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.",
author = "Hani Altwaijry and Andreas Veit and Serge Belongie",
note = "Funding Information: We would like to thank Michael Wilber and Tsung-Yi Lin for their valuable input. This work was supported by the KACST Graduate Studies Scholarship. Publisher Copyright: {\textcopyright} 2016. The copyright of this document resides with its authors.; 27th British Machine Vision Conference, BMVC 2016 ; Conference date: 19-09-2016 Through 22-09-2016",
year = "2016",
doi = "10.5244/C.30.49",
language = "English",
pages = "49.1--49.12",
}

RIS

TY - CONF

T1 - Learning to detect and match keypoints with deep architectures

AU - Altwaijry, Hani

AU - Veit, Andreas

AU - Belongie, Serge

N1 - Funding Information: We would like to thank Michael Wilber and Tsung-Yi Lin for their valuable input. This work was supported by the KACST Graduate Studies Scholarship. Publisher Copyright: © 2016. The copyright of this document resides with its authors.

PY - 2016

Y1 - 2016

N2 - Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.

AB - Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.

UR - http://www.scopus.com/inward/record.url?scp=85029570955&partnerID=8YFLogxK

U2 - 10.5244/C.30.49

DO - 10.5244/C.30.49

M3 - Paper

AN - SCOPUS:85029570955

SP - 49.1-49.12

T2 - 27th British Machine Vision Conference, BMVC 2016

Y2 - 19 September 2016 through 22 September 2016

ER -