Fast feature pyramids for object detection

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Fast feature pyramids for object detection. / Dollar, Piotr; Appel, Ron; Belongie, Serge; Perona, Pietro.

In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 36, No. 8, 6714453, 08.2014, p. 1532-1545.

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Harvard

Dollar, P, Appel, R, Belongie, S & Perona, P 2014, 'Fast feature pyramids for object detection', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, 6714453, pp. 1532-1545. https://doi.org/10.1109/TPAMI.2014.2300479

APA

Dollar, P., Appel, R., Belongie, S., & Perona, P. (2014). Fast feature pyramids for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(8), 1532-1545. [6714453]. https://doi.org/10.1109/TPAMI.2014.2300479

Vancouver

Dollar P, Appel R, Belongie S, Perona P. Fast feature pyramids for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2014 Aug;36(8):1532-1545. 6714453. https://doi.org/10.1109/TPAMI.2014.2300479

Author

Dollar, Piotr ; Appel, Ron ; Belongie, Serge ; Perona, Pietro. / Fast feature pyramids for object detection. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. 2014 ; Vol. 36, No. 8. pp. 1532-1545.

Bibtex

@article{6d63b7e365484483b3dfbdd80c0c45e4,
title = "Fast feature pyramids for object detection",
abstract = "Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).",
keywords = "image pyramids, natural image statistics, object detection, pedestrian detection, real-time systems, Visual features",
author = "Piotr Dollar and Ron Appel and Serge Belongie and Pietro Perona",
year = "2014",
month = aug,
doi = "10.1109/TPAMI.2014.2300479",
language = "English",
volume = "36",
pages = "1532--1545",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "Institute of Electrical and Electronics Engineers",
number = "8",
}

RIS

TY - JOUR

T1 - Fast feature pyramids for object detection

AU - Dollar, Piotr

AU - Appel, Ron

AU - Belongie, Serge

AU - Perona, Pietro

PY - 2014/8

Y1 - 2014/8

N2 - Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).

AB - Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).

KW - image pyramids

KW - natural image statistics

KW - object detection

KW - pedestrian detection

KW - real-time systems

KW - Visual features

UR - http://www.scopus.com/inward/record.url?scp=84903622275&partnerID=8YFLogxK

U2 - 10.1109/TPAMI.2014.2300479

DO - 10.1109/TPAMI.2014.2300479

M3 - Journal article

AN - SCOPUS:84903622275

VL - 36

SP - 1532

EP - 1545

JO - IEEE Transactions on Pattern Analysis and Machine Intelligence

JF - IEEE Transactions on Pattern Analysis and Machine Intelligence

SN - 0162-8828

IS - 8

M1 - 6714453

ER -
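The abstract above describes computing features explicitly only at octave-spaced scales and approximating the remaining scales of a finely sampled pyramid by extrapolation. The following is a minimal illustrative sketch of that idea, not the authors' released code: it assumes a single gradient-magnitude channel, a hypothetical helper name fast_feature_pyramid, and an illustrative power-law exponent lambda_ (the paper estimates such feature-dependent exponents empirically).

import numpy as np
from scipy.ndimage import zoom

def gradient_magnitude(img):
    # Simple example feature channel: per-pixel gradient magnitude of a 2-D grayscale image.
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def fast_feature_pyramid(img, n_per_octave=8, n_octaves=4, lambda_=0.11):
    """Approximate a finely sampled feature pyramid (illustrative sketch).

    Features are computed explicitly only at octave scales (1, 1/2, 1/4, ...);
    intermediate scales are obtained by resampling the nearest octave channel
    and rescaling its magnitude by (s / s_octave) ** -lambda_.
    """
    # Explicit feature computation at octave-spaced scales only.
    octave_scales = [2.0 ** -o for o in range(n_octaves)]
    octave_channels = {s: gradient_magnitude(zoom(img, s, order=1))
                       for s in octave_scales}

    pyramid = {}
    for k in range(n_octaves * n_per_octave):
        s = 2.0 ** (-k / n_per_octave)
        # Nearest explicitly computed octave scale (in log-scale distance).
        s0 = min(octave_scales, key=lambda so: abs(np.log2(s / so)))
        base = octave_channels[s0]
        # Resample spatially, then extrapolate the feature magnitude.
        pyramid[s] = zoom(base, s / s0, order=1) * (s / s0) ** -lambda_
    return pyramid

As the abstract notes, this power-law extrapolation is valid for images with broad spectra (most natural images) and breaks down for narrow band-pass images such as periodic textures.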
