When Does Contrastive Visual Representation Learning Work?

Publication: Working paper › Preprint › Research

Standard

When Does Contrastive Visual Representation Learning Work? / Cole, Elijah; Yang, Xuan; Wilber, Kimberly; Mac Aodha, Oisin; Belongie, Serge.

arXiv.org, 2022.


Harvard

Cole, E, Yang, X, Wilber, K, Mac Aodha, O & Belongie, S 2022 'When Does Contrastive Visual Representation Learning Work?' arXiv.org. <https://arxiv.org/pdf/2105.05837.pdf>

APA

Cole, E., Yang, X., Wilber, K., Mac Aodha, O., & Belongie, S. (2022). When Does Contrastive Visual Representation Learning Work? arXiv.org. https://arxiv.org/pdf/2105.05837.pdf

Vancouver

Cole E, Yang X, Wilber K, Mac Aodha O, Belongie S. When Does Contrastive Visual Representation Learning Work? arXiv.org. 2022.

Author

Cole, Elijah ; Yang, Xuan ; Wilber, Kimberly ; Mac Aodha, Oisin ; Belongie, Serge. / When Does Contrastive Visual Representation Learning Work? arXiv.org, 2022.

Bibtex

@techreport{ff493ccc15ee4934b24a7db316d2d98a,
title = "When Does Contrastive Visual Representation Learning Work?",
abstract = "Recent self-supervised representation learning techniques have largely closed the gap between supervised and unsupervised learning on ImageNet classification. While the particulars of pretraining on ImageNet are now relatively well understood, the field still lacks widely accepted best practices for replicating this success on other datasets. As a first step in this direction, we study contrastive self-supervised learning on four diverse large-scale datasets. By looking through the lenses of data quantity, data domain, data quality, and task granularity, we provide new insights into the necessary conditions for successful self-supervised learning. Our key findings include observations such as: (i) the benefit of additional pretraining data beyond 500k images is modest, (ii) adding pretraining images from another domain does not lead to more general representations, (iii) corrupted pretraining images have a disparate impact on supervised and self-supervised pretraining, and (iv) contrastive learning lags far behind supervised learning on fine-grained visual classification tasks.",
author = "Elijah Cole and Xuan Yang and Kimberly Wilber and {Mac Aodha}, Oisin and Serge Belongie",
year = "2022",
language = "English",
publisher = "arXiv.org",
type = "WorkingPaper",
institution = "arXiv.org",
}

RIS

TY - UNPB

T1 - When Does Contrastive Visual Representation Learning Work?

AU - Cole, Elijah

AU - Yang, Xuan

AU - Wilber, Kimberly

AU - Mac Aodha, Oisin

AU - Belongie, Serge

PY - 2022

Y1 - 2022

N2 - Recent self-supervised representation learning techniques have largely closed the gap between supervised and unsupervised learning on ImageNet classification. While the particulars of pretraining on ImageNet are now relatively well understood, the field still lacks widely accepted best practices for replicating this success on other datasets. As a first step in this direction, we study contrastive self-supervised learning on four diverse large-scale datasets. By looking through the lenses of data quantity, data domain, data quality, and task granularity, we provide new insights into the necessary conditions for successful self-supervised learning. Our key findings include observations such as: (i) the benefit of additional pretraining data beyond 500k images is modest, (ii) adding pretraining images from another domain does not lead to more general representations, (iii) corrupted pretraining images have a disparate impact on supervised and self-supervised pretraining, and (iv) contrastive learning lags far behind supervised learning on fine-grained visual classification tasks.

AB - Recent self-supervised representation learning techniques have largely closed the gap between supervised and unsupervised learning on ImageNet classification. While the particulars of pretraining on ImageNet are now relatively well understood, the field still lacks widely accepted best practices for replicating this success on other datasets. As a first step in this direction, we study contrastive self-supervised learning on four diverse large-scale datasets. By looking through the lenses of data quantity, data domain, data quality, and task granularity, we provide new insights into the necessary conditions for successful self-supervised learning. Our key findings include observations such as: (i) the benefit of additional pretraining data beyond 500k images is modest, (ii) adding pretraining images from another domain does not lead to more general representations, (iii) corrupted pretraining images have a disparate impact on supervised and self-supervised pretraining, and (iv) contrastive learning lags far behind supervised learning on fine-grained visual classification tasks.

UR - https://arxiv.org/abs/2105.05837

M3 - Preprint

BT - When Does Contrastive Visual Representation Learning Work?

PB - arXiv.org

ER -
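
Note on the method studied: the abstract above concerns contrastive self-supervised pretraining. As a rough, illustrative sketch only (not the authors' implementation), the objective in question is typically a SimCLR-style NT-Xent loss over two augmented views of each image; the function name nt_xent_loss, the batch size, and the temperature below are assumptions made for illustration.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: [N, D] embeddings of two augmented views of the same N images.
    # Positive pairs are (z1[i], z2[i]); all other embeddings in the batch
    # serve as negatives.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # [2N, D]
    sim = z @ z.t() / temperature               # temperature-scaled cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))           # exclude self-similarity
    # For row i in [0, N) the positive sits at column i + N, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage sketch with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z_a, z_b).item())

In practice such a loss is applied to embeddings of two augmentations produced by a shared encoder during pretraining, and the frozen features are then typically evaluated with a linear classifier, which is the kind of setup the paper analyzes across data quantity, domain, quality, and task granularity.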
