Tropel: Crowdsourcing Detectors with Minimal Training
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Tropel: Crowdsourcing Detectors with Minimal Training. / Belongie, Serge; Patterson, Genevieve; Perona, Pietro; Hays, James.
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. Vol. 3 1. ed. 2015. p. 150-159.
RIS
TY - GEN
T1 - Tropel: Crowdsourcing Detectors with Minimal Training
AU - Belongie, Serge
AU - Patterson, Genevieve
AU - Perona, Pietro
AU - Hays, James
PY - 2015/9/23
Y1 - 2015/9/23
N2 - This paper introduces the Tropel system, which enables non-technical users to create arbitrary visual detectors without first annotating a training set. Our primary contribution is a crowd active learning pipeline that is seeded with only a single positive example and an unlabeled set of training images. We examine the crowd's ability to train visual detectors when the workers themselves receive severely limited training. This paper presents a series of experiments that reveal the relationship between worker training, worker consensus, and the average precision of detectors trained by crowd-in-the-loop active learning. To verify the efficacy of our system, we train detectors for bird species that work nearly as well as those trained on the exhaustively labeled CUB 200 dataset, at significantly lower cost and with little effort from the end user. To further illustrate the usefulness of our pipeline, we demonstrate qualitative results on unlabeled datasets containing fashion images and street-level photographs of Paris.
AB - This paper introduces the Tropel system, which enables non-technical users to create arbitrary visual detectors without first annotating a training set. Our primary contribution is a crowd active learning pipeline that is seeded with only a single positive example and an unlabeled set of training images. We examine the crowd's ability to train visual detectors when the workers themselves receive severely limited training. This paper presents a series of experiments that reveal the relationship between worker training, worker consensus, and the average precision of detectors trained by crowd-in-the-loop active learning. To verify the efficacy of our system, we train detectors for bird species that work nearly as well as those trained on the exhaustively labeled CUB 200 dataset, at significantly lower cost and with little effort from the end user. To further illustrate the usefulness of our pipeline, we demonstrate qualitative results on unlabeled datasets containing fashion images and street-level photographs of Paris.
M3 - Article in proceedings
VL - 3
SP - 150
EP - 159
BT - Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
ER -