Learning concept embeddings with combined human-machine expertise
Research output: Contribution to journal › Conference article › Research › peer-review
Standard
Learning concept embeddings with combined human-machine expertise. / Wilber, Michael J.; Kwak, Iljung S.; Kriegman, David; Belongie, Serge.
In: Proceedings of the IEEE International Conference on Computer Vision, 17.02.2015, p. 981-989.
RIS
TY - GEN
T1 - Learning concept embeddings with combined human-machine expertise
AU - Wilber, Michael J.
AU - Kwak, Iljung S.
AU - Kriegman, David
AU - Belongie, Serge
N1 - Publisher Copyright: © 2015 IEEE.
PY - 2015/2/17
Y1 - 2015/2/17
N2 - This paper presents our work on "SNaCK," a low-dimensional concept embedding algorithm that combines human expertise with automatic machine similarity kernels. Both parts are complementary: human insight can capture relationships that are not apparent from the object's visual similarity, and the machine can help relieve the human from having to exhaustively specify many constraints. We show that our SNaCK embeddings are useful in several tasks: distinguishing prime and nonprime numbers on MNIST, discovering labeling mistakes in the Caltech UCSD Birds (CUB) dataset with the help of deep-learned features, creating training datasets for bird classifiers, capturing subjective human taste on a new dataset of 10,000 foods, and qualitatively exploring an unstructured set of pictographic characters. Comparisons with the state-of-the-art in these tasks show that SNaCK produces better concept embeddings that require less human supervision than the leading methods.
AB - This paper presents our work on "SNaCK," a low-dimensional concept embedding algorithm that combines human expertise with automatic machine similarity kernels. Both parts are complementary: human insight can capture relationships that are not apparent from the object's visual similarity, and the machine can help relieve the human from having to exhaustively specify many constraints. We show that our SNaCK embeddings are useful in several tasks: distinguishing prime and nonprime numbers on MNIST, discovering labeling mistakes in the Caltech UCSD Birds (CUB) dataset with the help of deep-learned features, creating training datasets for bird classifiers, capturing subjective human taste on a new dataset of 10,000 foods, and qualitatively exploring an unstructured set of pictographic characters. Comparisons with the state-of-the-art in these tasks show that SNaCK produces better concept embeddings that require less human supervision than the leading methods.
UR - http://www.scopus.com/inward/record.url?scp=84973868618&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2015.118
DO - 10.1109/ICCV.2015.118
M3 - Conference article
AN - SCOPUS:84973868618
SP - 981
EP - 989
JO - Proceedings of the IEEE International Conference on Computer Vision
JF - Proceedings of the IEEE International Conference on Computer Vision
SN - 1550-5499
T2 - 15th IEEE International Conference on Computer Vision, ICCV 2015
Y2 - 11 December 2015 through 18 December 2015
ER -