Strong supervision from weak annotation: Interactive training of deformable part models

Research output: Contribution to journal › Conference article › Research › peer-review

Standard

Strong supervision from weak annotation : Interactive training of deformable part models. / Branson, Steve; Perona, Pietro; Belongie, S.

In: Proceedings of the IEEE International Conference on Computer Vision, 2011, p. 1832-1839.

Research output: Contribution to journal › Conference article › Research › peer-review

Harvard

Branson, S, Perona, P & Belongie, S 2011, 'Strong supervision from weak annotation: Interactive training of deformable part models', Proceedings of the IEEE International Conference on Computer Vision, pp. 1832-1839. https://doi.org/10.1109/ICCV.2011.6126450

APA

Branson, S., Perona, P., & Belongie, S. (2011). Strong supervision from weak annotation: Interactive training of deformable part models. Proceedings of the IEEE International Conference on Computer Vision, 1832-1839. https://doi.org/10.1109/ICCV.2011.6126450

Vancouver

Branson S, Perona P, Belongie S. Strong supervision from weak annotation: Interactive training of deformable part models. Proceedings of the IEEE International Conference on Computer Vision. 2011;1832-1839. https://doi.org/10.1109/ICCV.2011.6126450

Author

Branson, Steve ; Perona, Pietro ; Belongie, S. / Strong supervision from weak annotation : Interactive training of deformable part models. In: Proceedings of the IEEE International Conference on Computer Vision. 2011 ; pp. 1832-1839.

Bibtex

@inproceedings{ff6a0f5c70df412da8e0e82cb2c05c81,
title = "Strong supervision from weak annotation: Interactive training of deformable part models",
abstract = "We propose a framework for large scale learning and annotation of structured models. The system interleaves interactive labeling (where the current model is used to semi-automate the labeling of a new example) and online learning (where a newly labeled example is used to update the current model parameters). This framework is scalable to large datasets and complex image models and is shown to have excellent theoretical and practical properties in terms of train time, optimality guarantees, and bounds on the amount of annotation effort per image. We apply this framework to part-based detection, and introduce a novel algorithm for interactive labeling of deformable part models. The labeling tool updates and displays in real-time the maximum likelihood location of all parts as the user clicks and drags the location of one or more parts. We demonstrate that the system can be used to efficiently and robustly train part and pose detectors on the CUB Birds-200-a challenging dataset of birds in unconstrained pose and environment.",
author = "Steve Branson and Pietro Perona and S. Belongie",
year = "2011",
doi = "10.1109/ICCV.2011.6126450",
language = "English",
pages = "1832--1839",
journal = "Proceedings of the IEEE International Conference on Computer Vision",
note = "2011 IEEE International Conference on Computer Vision, ICCV 2011 ; Conference date: 06-11-2011 Through 13-11-2011",

}

RIS

TY - GEN

T1 - Strong supervision from weak annotation

T2 - 2011 IEEE International Conference on Computer Vision, ICCV 2011

AU - Branson, Steve

AU - Perona, Pietro

AU - Belongie, S.

PY - 2011

Y1 - 2011

N2 - We propose a framework for large scale learning and annotation of structured models. The system interleaves interactive labeling (where the current model is used to semi-automate the labeling of a new example) and online learning (where a newly labeled example is used to update the current model parameters). This framework is scalable to large datasets and complex image models and is shown to have excellent theoretical and practical properties in terms of train time, optimality guarantees, and bounds on the amount of annotation effort per image. We apply this framework to part-based detection, and introduce a novel algorithm for interactive labeling of deformable part models. The labeling tool updates and displays in real-time the maximum likelihood location of all parts as the user clicks and drags the location of one or more parts. We demonstrate that the system can be used to efficiently and robustly train part and pose detectors on the CUB Birds-200, a challenging dataset of birds in unconstrained pose and environment.

AB - We propose a framework for large scale learning and annotation of structured models. The system interleaves interactive labeling (where the current model is used to semi-automate the labeling of a new example) and online learning (where a newly labeled example is used to update the current model parameters). This framework is scalable to large datasets and complex image models and is shown to have excellent theoretical and practical properties in terms of train time, optimality guarantees, and bounds on the amount of annotation effort per image. We apply this framework to part-based detection, and introduce a novel algorithm for interactive labeling of deformable part models. The labeling tool updates and displays in real-time the maximum likelihood location of all parts as the user clicks and drags the location of one or more parts. We demonstrate that the system can be used to efficiently and robustly train part and pose detectors on the CUB Birds-200, a challenging dataset of birds in unconstrained pose and environment.

UR - http://www.scopus.com/inward/record.url?scp=84856684024&partnerID=8YFLogxK

U2 - 10.1109/ICCV.2011.6126450

DO - 10.1109/ICCV.2011.6126450

M3 - Conference article

AN - SCOPUS:84856684024

SP - 1832

EP - 1839

JO - Proceedings of the IEEE International Conference on Computer Vision

JF - Proceedings of the IEEE International Conference on Computer Vision

Y2 - 6 November 2011 through 13 November 2011

ER -
