Fast feature pyramids for object detection

Research output: Contribution to journal › Journal article › Research › peer-review

Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate as, and considerably faster than, the state of the art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely sampled pyramid. Extrapolation is inexpensive compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels, and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).
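
The scale-extrapolation idea summarized in the abstract can be illustrated in a few lines of code. The following is a minimal Python sketch, not the authors' implementation: feature channels are computed explicitly only at octave-spaced scales and then extrapolated to intermediate scales by resampling plus a power-law rescaling. The gradient-magnitude channel, the exponent LAMBDA, and the pyramid parameters are illustrative assumptions; the paper estimates one exponent per channel type from natural-image statistics.

import numpy as np
from scipy.ndimage import zoom

LAMBDA = 0.11            # hypothetical power-law exponent; the paper fits one per channel type
SCALES_PER_OCTAVE = 8    # density of the finely sampled pyramid

def channel(img):
    # Toy feature channel: gradient magnitude of a grayscale image.
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def fast_feature_pyramid(img, n_octaves=3):
    # Compute the channel explicitly only at octave-spaced scales (1, 1/2, 1/4, ...),
    # then approximate intermediate scales by resampling and a power-law rescaling.
    pyramid = {}
    for o in range(n_octaves):
        base_scale = 0.5 ** o
        base = channel(zoom(img, base_scale, order=1))     # explicit (expensive) computation
        for k in range(SCALES_PER_OCTAVE):
            ratio = 2.0 ** (-k / SCALES_PER_OCTAVE)        # target scale relative to the octave
            # Approximation: feature at scale s ~ resample(feature at s0, s/s0) * (s/s0) ** -LAMBDA
            pyramid[base_scale * ratio] = zoom(base, ratio, order=1) * ratio ** -LAMBDA
    return pyramid

img = np.random.rand(256, 256)                             # stand-in for a natural image
pyr = fast_feature_pyramid(img)
print(sorted(pyr.keys(), reverse=True)[:5])

As the abstract notes, a power-law extrapolation of this kind is justified for natural images with broad spectra and breaks down for narrow band-pass images such as periodic textures.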

Original language: English
Article number: 6714453
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 36
Issue number: 8
Pages (from-to): 1532-1545
Number of pages: 14
ISSN: 0162-8828
DOIs
Publication status: Published - Aug 2014
Externally published: Yes

    Research areas

  • image pyramids, natural image statistics, object detection, pedestrian detection, real-time systems, visual features
