Rethinking Nearest Neighbors for Visual Classification

Research output: Working paper › Preprint › Research

Standard

Rethinking Nearest Neighbors for Visual Classification. / Jia, Menglin; Chen, Bor-Chun; Cardie, Claire; Wu, Zuxuan; Belongie, Serge; Lim, Ser-Nam.

arXiv.org, 2022.


Harvard

Jia, M, Chen, B-C, Cardie, C, Wu, Z, Belongie, S & Lim, S-N 2022 'Rethinking Nearest Neighbors for Visual Classification' arXiv.org. <https://arxiv.org/pdf/2112.08459.pdf>

APA

Jia, M., Chen, B-C., Cardie, C., Wu, Z., Belongie, S., & Lim, S-N. (2022). Rethinking Nearest Neighbors for Visual Classification. arXiv.org. https://arxiv.org/pdf/2112.08459.pdf

Vancouver

Jia M, Chen B-C, Cardie C, Wu Z, Belongie S, Lim S-N. Rethinking Nearest Neighbors for Visual Classification. arXiv.org. 2022.

Author

Jia, Menglin ; Chen, Bor-Chun ; Cardie, Claire ; Wu, Zuxuan ; Belongie, Serge ; Lim, Ser-Nam. / Rethinking Nearest Neighbors for Visual Classification. arXiv.org, 2022.

Bibtex

@techreport{95117dabd1c345e4a1dda7ae1300f258,
title = "Rethinking Nearest Neighbors for Visual Classification",
abstract = "Neural network classifiers have become the de-facto choice for current {"}pre-train then fine-tune{"} paradigms of visual classification. In this paper, we investigate k-Nearest-Neighbor (k-NN) classifiers, a classical model-free learning method from the pre-deep learning era, as an augmentation to modern neural network based approaches. As a lazy learning method, k-NN simply aggregates the distance between the test image and top-k neighbors in a training set. We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps: (1) Leverage k-NN predicted probabilities as indications for easy vs. hard examples during training. (2) Linearly interpolate the k-NN predicted distribution with that of the augmented classifier. Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration with additional insights: (1) k-NN achieves competitive results, sometimes even outperforming a standard linear classifier. (2) Incorporating k-NN is especially beneficial for tasks where parametric classifiers perform poorly and/or in low-data regimes. We hope these discoveries will encourage people to rethink the role of pre-deep learning, classical methods in computer vision. Our code is available at: https://github.com/KMnP/nn-revisit.",
author = "Menglin Jia and Bor-Chun Chen and Claire Cardie and Zuxuan Wu and Serge Belongie and Ser-Nam Lim",
year = "2022",
language = "English",
publisher = "arXiv.org",
type = "WorkingPaper",
institution = "arXiv.org",

}

RIS

TY - UNPB

T1 - Rethinking Nearest Neighbors for Visual Classification

AU - Jia, Menglin

AU - Chen, Bor-Chun

AU - Cardie, Claire

AU - Wu, Zuxuan

AU - Belongie, Serge

AU - Lim, Ser-Nam

PY - 2022

Y1 - 2022

N2 - Neural network classifiers have become the de-facto choice for current "pre-train then fine-tune" paradigms of visual classification. In this paper, we investigate k-Nearest-Neighbor (k-NN) classifiers, a classical model-free learning method from the pre-deep learning era, as an augmentation to modern neural network based approaches. As a lazy learning method, k-NN simply aggregates the distance between the test image and top-k neighbors in a training set. We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps: (1) Leverage k-NN predicted probabilities as indications for easy vs. hard examples during training. (2) Linearly interpolate the k-NN predicted distribution with that of the augmented classifier. Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration with additional insights: (1) k-NN achieves competitive results, sometimes even outperforming a standard linear classifier. (2) Incorporating k-NN is especially beneficial for tasks where parametric classifiers perform poorly and/or in low-data regimes. We hope these discoveries will encourage people to rethink the role of pre-deep learning, classical methods in computer vision. Our code is available at: https://github.com/KMnP/nn-revisit.

AB - Neural network classifiers have become the de-facto choice for current "pre-train then fine-tune" paradigms of visual classification. In this paper, we investigate k-Nearest-Neighbor (k-NN) classifiers, a classical model-free learning method from the pre-deep learning era, as an augmentation to modern neural network based approaches. As a lazy learning method, k-NN simply aggregates the distance between the test image and top-k neighbors in a training set. We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps: (1) Leverage k-NN predicted probabilities as indications for easy vs. hard examples during training. (2) Linearly interpolate the k-NN predicted distribution with that of the augmented classifier. Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration with additional insights: (1) k-NN achieves competitive results, sometimes even outperforming a standard linear classifier. (2) Incorporating k-NN is especially beneficial for tasks where parametric classifiers perform poorly and/or in low-data regimes. We hope these discoveries will encourage people to rethink the role of pre-deep learning, classical methods in computer vision. Our code is available at: https://github.com/KMnP/nn-revisit.

UR - https://arxiv.org/abs/2112.08459

M3 - Preprint

BT - Rethinking Nearest Neighbors for Visual Classification

PB - arXiv.org

ER -
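
For context on the method summarized in the abstract above, the following is a minimal numpy sketch, based only on the abstract's wording and not on the authors' released code (https://github.com/KMnP/nn-revisit), of the two k-NN uses it describes: a similarity-weighted k-NN class distribution computed from pre-trained features, and the linear interpolation of that distribution with the parametric classifier's output. The names knn_probs, interpolate, temperature, and lam are illustrative assumptions.

import numpy as np

def knn_probs(query_feats, train_feats, train_labels, num_classes, k=10, temperature=0.07):
    # k-NN class distribution from cosine similarity of L2-normalized features.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    sims = q @ t.T                                    # (n_query, n_train)
    probs = np.zeros((len(q), num_classes))
    for i in range(len(q)):
        idx = np.argsort(-sims[i])[:k]                # indices of the top-k neighbors
        weights = np.exp(sims[i, idx] / temperature)  # similarity-weighted votes
        np.add.at(probs[i], train_labels[idx], weights)
        probs[i] /= probs[i].sum()                    # normalize to a distribution
    return probs

def interpolate(knn_p, clf_p, lam=0.5):
    # Step (2) of the abstract: linearly blend the k-NN distribution with the
    # classifier's predicted distribution; lam is a hypothetical mixing weight.
    return lam * knn_p + (1.0 - lam) * clf_p

Step (1), using k-NN predicted probabilities to flag easy versus hard training examples, would amount to reading off knn_probs at each example's ground-truth class and treating low values as hard; the exact weighting scheme is not specified in the abstract.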
