Learning concept embeddings with combined human-machine expertise

Research output: Contribution to journal › Conference article › Research › peer-review

This paper presents our work on "SNaCK," a low-dimensional concept embedding algorithm that combines human expertise with automatic machine similarity kernels. The two parts are complementary: human insight can capture relationships that are not apparent from the objects' visual similarity, and the machine can relieve the human of having to exhaustively specify many constraints. We show that our SNaCK embeddings are useful in several tasks: distinguishing prime and nonprime numbers on MNIST, discovering labeling mistakes in the Caltech-UCSD Birds (CUB) dataset with the help of deep-learned features, creating training datasets for bird classifiers, capturing subjective human taste on a new dataset of 10,000 foods, and qualitatively exploring an unstructured set of pictographic characters. Comparisons with the state of the art in these tasks show that SNaCK produces better concept embeddings that require less human supervision than the leading methods.
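The combination the abstract describes can be illustrated with a toy objective: a "machine" term that pulls each point toward its neighbors under a similarity kernel, plus a "human" term that enforces triplet judgments of the form "i is closer to j than to k." The sketch below is a simplified illustration under those assumptions, not the published SNaCK algorithm (which combines t-SNE-style and t-STE-style objectives); the function name and all parameters are invented for this example.

```python
import numpy as np

def snack_sketch(K, triplets, dim=2, lr=0.05, margin=1.0, lam=0.5,
                 iters=300, seed=0):
    """Toy embedding mixing a machine kernel with human triplet constraints.

    K        : (n, n) symmetric similarity kernel (larger = more similar)
    triplets : iterable of (i, j, k) meaning "i is closer to j than to k"
    lam      : weight of the human triplet term relative to the kernel term
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    Y = rng.normal(scale=0.1, size=(n, dim))
    # Row-normalize the kernel into target affinities (t-SNE-like, simplified).
    P = K / K.sum(axis=1, keepdims=True)
    for _ in range(iters):
        grad = np.zeros_like(Y)
        # Machine term: pull each point toward its kernel neighbors.
        for i in range(n):
            grad[i] += ((Y[i] - Y) * P[i][:, None]).sum(axis=0)
        # Human term: hinge loss on each triplet's squared distances,
        # loss = max(0, margin + d(i, j)^2 - d(i, k)^2).
        for i, j, k in triplets:
            d_ij = np.sum((Y[i] - Y[j]) ** 2)
            d_ik = np.sum((Y[i] - Y[k]) ** 2)
            if margin + d_ij - d_ik > 0:  # constraint currently violated
                grad[i] += lam * 2 * (Y[k] - Y[j])
                grad[j] += lam * 2 * (Y[j] - Y[i])
                grad[k] += lam * 2 * (Y[i] - Y[k])
        Y -= lr * grad
    return Y
```

The hinge term activates only while a triplet is violated, so the human constraints reshape the layout locally while the kernel term preserves the machine's global similarity structure, echoing the division of labor described in the abstract.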

Original language: English
Journal: Proceedings of the IEEE International Conference on Computer Vision
Pages (from-to): 981-989
Number of pages: 9
ISSN: 1550-5499
DOIs
Publication status: Published - 17 Feb 2015
Externally published: Yes
Event: 15th IEEE International Conference on Computer Vision, ICCV 2015 - Santiago, Chile
Duration: 11 Dec 2015 – 18 Dec 2015

Conference

Conference: 15th IEEE International Conference on Computer Vision, ICCV 2015
Country: Chile
City: Santiago
Period: 11/12/2015 – 18/12/2015

Bibliographical note

Publisher Copyright:
© 2015 IEEE.
