Learning concept embeddings with combined human-machine expertise

Publication: Contribution to journal › Conference article › Research › peer-reviewed

Standard

Learning concept embeddings with combined human-machine expertise. / Wilber, Michael J.; Kwak, Iljung S.; Kriegman, David; Belongie, Serge.

In: Proceedings of the IEEE International Conference on Computer Vision, 17.02.2015, p. 981-989.


Harvard

Wilber, MJ, Kwak, IS, Kriegman, D & Belongie, S 2015, 'Learning concept embeddings with combined human-machine expertise', Proceedings of the IEEE International Conference on Computer Vision, pp. 981-989. https://doi.org/10.1109/ICCV.2015.118

APA

Wilber, M. J., Kwak, I. S., Kriegman, D., & Belongie, S. (2015). Learning concept embeddings with combined human-machine expertise. Proceedings of the IEEE International Conference on Computer Vision, 981-989. https://doi.org/10.1109/ICCV.2015.118

Vancouver

Wilber MJ, Kwak IS, Kriegman D, Belongie S. Learning concept embeddings with combined human-machine expertise. Proceedings of the IEEE International Conference on Computer Vision. 2015 Feb 17;981-989. https://doi.org/10.1109/ICCV.2015.118

Author

Wilber, Michael J. ; Kwak, Iljung S. ; Kriegman, David ; Belongie, Serge. / Learning concept embeddings with combined human-machine expertise. In: Proceedings of the IEEE International Conference on Computer Vision. 2015 ; pp. 981-989.

Bibtex

@inproceedings{3d437d470eee4c97af914eb93eae2206,
title = "Learning concept embeddings with combined human-machine expertise",
abstract = "This paper presents our work on {"}SNaCK,{"} a low-dimensional concept embedding algorithm that combines human expertise with automatic machine similarity kernels. Both parts are complementary: human insight can capture relationships that are not apparent from the object's visual similarity and the machine can help relieve the human from having to exhaustively specify many constraints. We show that our SNaCK embeddings are useful in several tasks: distinguishing prime and nonprime numbers on MNIST, discovering labeling mistakes in the Caltech UCSD Birds (CUB) dataset with the help of deep-learned features, creating training datasets for bird classifiers, capturing subjective human taste on a new dataset of 10,000 foods, and qualitatively exploring an unstructured set of pictographic characters. Comparisons with the state-of-the-art in these tasks show that SNaCK produces better concept embeddings that require less human supervision than the leading methods.",
author = "Wilber, {Michael J.} and Kwak, {Iljung S.} and David Kriegman and Serge Belongie",
note = "Publisher Copyright: {\textcopyright} 2015 IEEE.; 15th IEEE International Conference on Computer Vision, ICCV 2015 ; Conference date: 11-12-2015 Through 18-12-2015",
year = "2015",
month = feb,
day = "17",
doi = "10.1109/ICCV.2015.118",
language = "English",
pages = "981--989",
booktitle = "Proceedings of the IEEE International Conference on Computer Vision",
issn = "1550-5499",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
}

RIS

TY - GEN

T1 - Learning concept embeddings with combined human-machine expertise

AU - Wilber, Michael J.

AU - Kwak, Iljung S.

AU - Kriegman, David

AU - Belongie, Serge

N1 - Publisher Copyright: © 2015 IEEE.

PY - 2015/2/17

Y1 - 2015/2/17

N2 - This paper presents our work on "SNaCK," a low-dimensional concept embedding algorithm that combines human expertise with automatic machine similarity kernels. Both parts are complementary: human insight can capture relationships that are not apparent from the object's visual similarity and the machine can help relieve the human from having to exhaustively specify many constraints. We show that our SNaCK embeddings are useful in several tasks: distinguishing prime and nonprime numbers on MNIST, discovering labeling mistakes in the Caltech UCSD Birds (CUB) dataset with the help of deep-learned features, creating training datasets for bird classifiers, capturing subjective human taste on a new dataset of 10,000 foods, and qualitatively exploring an unstructured set of pictographic characters. Comparisons with the state-of-the-art in these tasks show that SNaCK produces better concept embeddings that require less human supervision than the leading methods.

AB - This paper presents our work on "SNaCK," a low-dimensional concept embedding algorithm that combines human expertise with automatic machine similarity kernels. Both parts are complementary: human insight can capture relationships that are not apparent from the object's visual similarity and the machine can help relieve the human from having to exhaustively specify many constraints. We show that our SNaCK embeddings are useful in several tasks: distinguishing prime and nonprime numbers on MNIST, discovering labeling mistakes in the Caltech UCSD Birds (CUB) dataset with the help of deep-learned features, creating training datasets for bird classifiers, capturing subjective human taste on a new dataset of 10,000 foods, and qualitatively exploring an unstructured set of pictographic characters. Comparisons with the state-of-the-art in these tasks show that SNaCK produces better concept embeddings that require less human supervision than the leading methods.

UR - http://www.scopus.com/inward/record.url?scp=84973868618&partnerID=8YFLogxK

U2 - 10.1109/ICCV.2015.118

DO - 10.1109/ICCV.2015.118

M3 - Conference article

AN - SCOPUS:84973868618

SP - 981

EP - 989

JO - Proceedings of the IEEE International Conference on Computer Vision

JF - Proceedings of the IEEE International Conference on Computer Vision

SN - 1550-5499

T2 - 15th IEEE International Conference on Computer Vision, ICCV 2015

Y2 - 11 December 2015 through 18 December 2015

ER -

ID: 301828966