Similarity metrics for categorization: From monolithic to category specific

Research output: Contribution to journal › Conference article › Research › peer-review

Standard

Similarity metrics for categorization: From monolithic to category specific. / Babenko, Boris; Branson, Steve; Belongie, Serge.

In: Proceedings of the IEEE International Conference on Computer Vision, 2009, p. 293-300.

Research output: Contribution to journal › Conference article › Research › peer-review

Harvard

Babenko, B, Branson, S & Belongie, S 2009, 'Similarity metrics for categorization: From monolithic to category specific', Proceedings of the IEEE International Conference on Computer Vision, pp. 293-300. https://doi.org/10.1109/ICCV.2009.5459264

APA

Babenko, B., Branson, S., & Belongie, S. (2009). Similarity metrics for categorization: From monolithic to category specific. Proceedings of the IEEE International Conference on Computer Vision, 293-300. https://doi.org/10.1109/ICCV.2009.5459264

Vancouver

Babenko B, Branson S, Belongie S. Similarity metrics for categorization: From monolithic to category specific. Proceedings of the IEEE International Conference on Computer Vision. 2009;293-300. https://doi.org/10.1109/ICCV.2009.5459264

Author

Babenko, Boris ; Branson, Steve ; Belongie, Serge. / Similarity metrics for categorization: From monolithic to category specific. In: Proceedings of the IEEE International Conference on Computer Vision. 2009; pp. 293-300.

Bibtex

@inproceedings{33645e27e52f4f5086526758c42b1a44,
title = "Similarity metrics for categorization: From monolithic to category specific",
abstract = "Similarity metrics that are learned from labeled training data can be advantageous in terms of performance and/or efficiency. These learned metrics can then be used in conjunction with a nearest neighbor classifier, or can be plugged in as kernels to an SVM. For the task of categorization two scenarios have thus far been explored. The first is to train a single {"} monolithic{"} similarity metric that is then used for all examples. The other is to train a metric for each category in a 1-vs-all manner. While the former approach seems to be at a disadvantage in terms of performance, the latter is not practical for large numbers of categories. In this paper we explore the space in between these two extremes. We present an algorithm that learns a few similarity metrics, while simultaneously grouping categories together and assigning one of these metrics to each group. We present promising results and show how the learned metrics generalize to novel categories.",
author = "Boris Babenko and Steve Branson and Serge Belongie",
year = "2009",
doi = "10.1109/ICCV.2009.5459264",
language = "English",
pages = "293--300",
journal = "Proceedings of the IEEE International Conference on Computer Vision",
note = "12th International Conference on Computer Vision, ICCV 2009 ; Conference date: 29-09-2009 Through 02-10-2009",

}
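
The abstract describes an approach that sits between a single monolithic metric and one metric per category: a small number of similarity metrics are learned while categories are simultaneously grouped and each group is assigned one metric. The following is a rough, hypothetical Python sketch of that grouping idea only; it is not the paper's algorithm, but substitutes a simple alternating scheme with Mahalanobis-style metrics, and every function name and parameter below is illustrative.

import numpy as np

def fit_metric(X, y, categories, reg=1e-3):
    # Fit a simple Mahalanobis-style metric M as the (regularized) inverse of the
    # within-class scatter computed over the given categories.
    d = X.shape[1]
    scatter = reg * np.eye(d)
    for c in categories:
        Xc = X[y == c]
        if len(Xc) > 1:
            diff = Xc - Xc.mean(axis=0)
            scatter += diff.T @ diff
    return np.linalg.inv(scatter)

def category_cost(X, y, c, M):
    # Average squared Mahalanobis distance of category c's examples to their mean;
    # lower means the metric keeps the category compact.
    Xc = X[y == c]
    diff = Xc - Xc.mean(axis=0)
    return float(np.mean(np.einsum('ij,jk,ik->i', diff, M, diff)))

def learn_grouped_metrics(X, y, K=3, n_iters=10, seed=0):
    # Alternate between (a) refitting one metric per group of categories and
    # (b) reassigning each category to the metric under which it is most compact.
    rng = np.random.default_rng(seed)
    cats = np.unique(y)
    assign = {c: int(rng.integers(K)) for c in cats}
    for _ in range(n_iters):
        metrics = []
        for k in range(K):
            group = [c for c in cats if assign[c] == k]
            metrics.append(fit_metric(X, y, group) if group else np.eye(X.shape[1]))
        new_assign = {c: int(np.argmin([category_cost(X, y, c, M) for M in metrics]))
                      for c in cats}
        if new_assign == assign:
            break
        assign = new_assign
    return metrics, assign

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy data: six 2-D categories so the sketch runs end to end.
    X = np.vstack([rng.normal(loc=(i, i % 3), scale=0.3, size=(20, 2)) for i in range(6)])
    y = np.repeat(np.arange(6), 20)
    metrics, assignment = learn_grouped_metrics(X, y, K=2)
    print("category -> metric group:", assignment)

Here the reassignment step plays the role of grouping categories, and fit_metric stands in for whichever per-group metric learner is actually used; the point is only that a few shared metrics can cover many categories.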

RIS

TY - GEN

T1 - Similarity metrics for categorization: From monolithic to category specific

T2 - 12th International Conference on Computer Vision, ICCV 2009

AU - Babenko, Boris

AU - Branson, Steve

AU - Belongie, Serge

PY - 2009

Y1 - 2009

N2 - Similarity metrics that are learned from labeled training data can be advantageous in terms of performance and/or efficiency. These learned metrics can then be used in conjunction with a nearest neighbor classifier, or can be plugged in as kernels to an SVM. For the task of categorization two scenarios have thus far been explored. The first is to train a single "monolithic" similarity metric that is then used for all examples. The other is to train a metric for each category in a 1-vs-all manner. While the former approach seems to be at a disadvantage in terms of performance, the latter is not practical for large numbers of categories. In this paper we explore the space in between these two extremes. We present an algorithm that learns a few similarity metrics, while simultaneously grouping categories together and assigning one of these metrics to each group. We present promising results and show how the learned metrics generalize to novel categories.

AB - Similarity metrics that are learned from labeled training data can be advantageous in terms of performance and/or efficiency. These learned metrics can then be used in conjunction with a nearest neighbor classifier, or can be plugged in as kernels to an SVM. For the task of categorization two scenarios have thus far been explored. The first is to train a single "monolithic" similarity metric that is then used for all examples. The other is to train a metric for each category in a 1-vs-all manner. While the former approach seems to be at a disadvantage in terms of performance, the latter is not practical for large numbers of categories. In this paper we explore the space in between these two extremes. We present an algorithm that learns a few similarity metrics, while simultaneously grouping categories together and assigning one of these metrics to each group. We present promising results and show how the learned metrics generalize to novel categories.

UR - http://www.scopus.com/inward/record.url?scp=77953185204&partnerID=8YFLogxK

U2 - 10.1109/ICCV.2009.5459264

DO - 10.1109/ICCV.2009.5459264

M3 - Conference article

AN - SCOPUS:77953185204

SP - 293

EP - 300

JO - Proceedings of the IEEE International Conference on Computer Vision

JF - Proceedings of the IEEE International Conference on Computer Vision

Y2 - 29 September 2009 through 2 October 2009

ER -

ID: 302049094