Spatiotemporal Contrastive Video Representation Learning

Publication: Contribution to journal › Conference article › Research › peer-reviewed

Standard

Spatiotemporal Contrastive Video Representation Learning. / Qian, Rui; Meng, Tianjian; Gong, Boqing; Yang, Ming Hsuan; Wang, Huisheng; Belongie, Serge; Cui, Yin.

In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2021, pp. 6960-6970.


Harvard

Qian, R, Meng, T, Gong, B, Yang, MH, Wang, H, Belongie, S & Cui, Y 2021, 'Spatiotemporal Contrastive Video Representation Learning', Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 6960-6970. https://doi.org/10.1109/CVPR46437.2021.00689

APA

Qian, R., Meng, T., Gong, B., Yang, M. H., Wang, H., Belongie, S., & Cui, Y. (2021). Spatiotemporal Contrastive Video Representation Learning. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 6960-6970. https://doi.org/10.1109/CVPR46437.2021.00689

Vancouver

Qian R, Meng T, Gong B, Yang MH, Wang H, Belongie S et al. Spatiotemporal Contrastive Video Representation Learning. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2021;6960-6970. https://doi.org/10.1109/CVPR46437.2021.00689

Author

Qian, Rui ; Meng, Tianjian ; Gong, Boqing ; Yang, Ming Hsuan ; Wang, Huisheng ; Belongie, Serge ; Cui, Yin. / Spatiotemporal Contrastive Video Representation Learning. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2021 ; pp. 6960-6970.

Bibtex

@inproceedings{35e2b372303a4cfd81bd3ce877096c0a,
title = "Spatiotemporal Contrastive Video Representation Learning",
abstract = "We present a self-supervised Contrastive Video Representation Learning (CVRL) method to learn spatiotemporal visual representations from unlabeled videos. Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away. We study what makes for good data augmentations for video self-supervised learning and find that both spatial and temporal information are crucial. We carefully design data augmentations involving spatial and temporal cues. Concretely, we propose a temporally consistent spatial augmentation method to impose strong spatial augmentations on each frame of the video while maintaining the temporal consistency across frames. We also propose a sampling-based temporal augmentation method to avoid overly enforcing invariance on clips that are distant in time. On Kinetics-600, a linear classifier trained on the representations learned by CVRL achieves 70.4% top-1 accuracy with a 3D-ResNet-50 (R3D-50) backbone, outperforming ImageNet supervised pre-training by 15.7% and SimCLR unsupervised pre-training by 18.8% using the same inflated R3D-50. The performance of CVRL can be further improved to 72.9% with a larger R3D-152 (2× filters) backbone, significantly closing the gap between unsupervised and supervised video representation learning. Our code and models will be available at https://github.com/tensorflow/models/tree/master/official/.",
author = "Rui Qian and Tianjian Meng and Boqing Gong and Yang, {Ming Hsuan} and Huisheng Wang and Serge Belongie and Yin Cui",
note = "Funding Information: We would like to thank Yeqing Li and the TensorFlow TPU team for their infrastructure support; Tsung-Yi Lin, Ting Chen and Yonglong Tian for their valuable feedback. Publisher Copyright: {\textcopyright} 2021 IEEE; 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 ; Conference date: 19-06-2021 Through 25-06-2021",
year = "2021",
doi = "10.1109/CVPR46437.2021.00689",
language = "English",
pages = "6960--6970",
journal = "I E E E Conference on Computer Vision and Pattern Recognition. Proceedings",
issn = "1063-6919",
publisher = "Institute of Electrical and Electronics Engineers",
}

RIS

TY - GEN

T1 - Spatiotemporal Contrastive Video Representation Learning

AU - Qian, Rui

AU - Meng, Tianjian

AU - Gong, Boqing

AU - Yang, Ming Hsuan

AU - Wang, Huisheng

AU - Belongie, Serge

AU - Cui, Yin

N1 - Funding Information: We would like to thank Yeqing Li and the TensorFlow TPU team for their infrastructure support; Tsung-Yi Lin, Ting Chen and Yonglong Tian for their valuable feedback. Publisher Copyright: © 2021 IEEE

PY - 2021

Y1 - 2021

N2 - We present a self-supervised Contrastive Video Representation Learning (CVRL) method to learn spatiotemporal visual representations from unlabeled videos. Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away. We study what makes for good data augmentations for video self-supervised learning and find that both spatial and temporal information are crucial. We carefully design data augmentations involving spatial and temporal cues. Concretely, we propose a temporally consistent spatial augmentation method to impose strong spatial augmentations on each frame of the video while maintaining the temporal consistency across frames. We also propose a sampling-based temporal augmentation method to avoid overly enforcing invariance on clips that are distant in time. On Kinetics-600, a linear classifier trained on the representations learned by CVRL achieves 70.4% top-1 accuracy with a 3D-ResNet-50 (R3D-50) backbone, outperforming ImageNet supervised pre-training by 15.7% and SimCLR unsupervised pre-training by 18.8% using the same inflated R3D-50. The performance of CVRL can be further improved to 72.9% with a larger R3D-152 (2× filters) backbone, significantly closing the gap between unsupervised and supervised video representation learning. Our code and models will be available at https://github.com/tensorflow/models/tree/master/official/.

AB - We present a self-supervised Contrastive Video Representation Learning (CVRL) method to learn spatiotemporal visual representations from unlabeled videos. Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away. We study what makes for good data augmentations for video self-supervised learning and find that both spatial and temporal information are crucial. We carefully design data augmentations involving spatial and temporal cues. Concretely, we propose a temporally consistent spatial augmentation method to impose strong spatial augmentations on each frame of the video while maintaining the temporal consistency across frames. We also propose a sampling-based temporal augmentation method to avoid overly enforcing invariance on clips that are distant in time. On Kinetics-600, a linear classifier trained on the representations learned by CVRL achieves 70.4% top-1 accuracy with a 3D-ResNet-50 (R3D-50) backbone, outperforming ImageNet supervised pre-training by 15.7% and SimCLR unsupervised pre-training by 18.8% using the same inflated R3D-50. The performance of CVRL can be further improved to 72.9% with a larger R3D-152 (2× filters) backbone, significantly closing the gap between unsupervised and supervised video representation learning. Our code and models will be available at https://github.com/tensorflow/models/tree/master/official/.

UR - http://www.scopus.com/inward/record.url?scp=85119132477&partnerID=8YFLogxK

U2 - 10.1109/CVPR46437.2021.00689

DO - 10.1109/CVPR46437.2021.00689

M3 - Conference article

AN - SCOPUS:85119132477

SP - 6960

EP - 6970

JO - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings

JF - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings

SN - 1063-6919

T2 - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021

Y2 - 19 June 2021 through 25 June 2021

ER -

ID: 301817502
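
For readers skimming the abstract above, two of its core ingredients can be illustrated with a minimal sketch: a temporally consistent spatial augmentation (one crop window applied to every frame of a clip) and an InfoNCE-style contrastive loss over pairs of clips from the same video. This is not the authors' TensorFlow implementation; the function names (temporally_consistent_crop, info_nce) and the plain random crop are hypothetical stand-ins, and the paper's additional spatial augmentations (e.g. color jittering) and its sampling-based temporal augmentation are omitted.

# Minimal illustrative sketch (NOT the authors' implementation) of the ideas above.
import numpy as np


def logsumexp(x: np.ndarray, axis: int) -> np.ndarray:
    # Numerically stable log-sum-exp along the given axis.
    m = np.max(x, axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True)), axis=axis)


def temporally_consistent_crop(clip: np.ndarray, crop_size: int,
                               rng: np.random.Generator) -> np.ndarray:
    # clip: (T, H, W, C). Sample ONE crop window and apply it to every frame,
    # so the spatial augmentation is strong per clip but consistent across time.
    _, h, w, _ = clip.shape
    y = rng.integers(0, h - crop_size + 1)
    x = rng.integers(0, w - crop_size + 1)
    return clip[:, y:y + crop_size, x:x + crop_size, :]


def info_nce(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.1) -> float:
    # z1, z2: (N, D) L2-normalized embeddings of two augmented clips per video.
    # (z1[i], z2[i]) is a positive pair; every other clip in the batch is a negative.
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)            # (2N, D)
    sim = z @ z.T / temperature                     # pairwise similarities, (2N, 2N)
    np.fill_diagonal(sim, -np.inf)                  # never contrast a clip with itself
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])  # index of each clip's positive
    log_prob = sim[np.arange(2 * n), pos] - logsumexp(sim, axis=1)
    return float(-np.mean(log_prob))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip = rng.random((16, 128, 128, 3))            # one toy 16-frame clip
    view1 = temporally_consistent_crop(clip, 112, rng)
    view2 = temporally_consistent_crop(clip, 112, rng)
    # Pretend an encoder mapped a batch of 8 videos (two views each) to unit vectors.
    z1 = rng.standard_normal((8, 128))
    z2 = rng.standard_normal((8, 128))
    z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
    z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
    print("contrastive loss on random embeddings:", info_nce(z1, z2))

The sampling-based temporal augmentation the abstract refers to concerns how the two clips are drawn in time: start times are sampled so that temporally close clips are more likely to form a positive pair, which avoids forcing invariance between clips that are far apart in the video; the sketch above simply reuses the same toy clip for both views.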