Exploring Temporal Granularity in Self-Supervised Video Representation Learning

Research output: Working paper › Preprint › Research

Standard

Exploring Temporal Granularity in Self-Supervised Video Representation Learning. / Li, Yeqing; Yuan, Liangzhe; Gong, Boqing; Liu, Ting; Brown, Matthew; Belongie, Serge; Yang, Ming-Hsuan; Adam, Hartwig; Cui, Yin.

arXiv.org, 2022.

Research output: Working paper › Preprint › Research

Harvard

Li, Y, Yuan, L, Gong, B, Liu, T, Brown, M, Belongie, S, Yang, M-H, Adam, H & Cui, Y 2022 'Exploring Temporal Granularity in Self-Supervised Video Representation Learning' arXiv.org. <https://arxiv.org/pdf/2112.04480.pdf>

APA

Li, Y., Yuan, L., Gong, B., Liu, T., Brown, M., Belongie, S., Yang, M.-H., Adam, H., & Cui, Y. (2022). Exploring Temporal Granularity in Self-Supervised Video Representation Learning. arXiv.org. https://arxiv.org/pdf/2112.04480.pdf

Vancouver

Li Y, Yuan L, Gong B, Liu T, Brown M, Belongie S et al. Exploring Temporal Granularity in Self-Supervised Video Representation Learning. arXiv.org. 2022.

Author

Li, Yeqing ; Yuan, Liangzhe ; Gong, Boqing ; Liu, Ting ; Brown, Matthew ; Belongie, Serge ; Yang, Ming-Hsuan ; Adam, Hartwig ; Cui, Yin. / Exploring Temporal Granularity in Self-Supervised Video Representation Learning. arXiv.org, 2022.

BibTeX

@techreport{5905d7c91f7542f4a0ea6b074d8ef993,
title = "Exploring Temporal Granularity in Self-Supervised Video Representation Learning",
abstract = "This work presents a self-supervised learning framework named TeG to explore Temporal Granularity in learning video representations. In TeG, we sample a long clip from a video and a short clip that lies inside the long clip. We then extract their dense temporal embeddings. The training objective consists of two parts: a fine-grained temporal learning objective to maximize the similarity between corresponding temporal embeddings in the short clip and the long clip, and a persistent temporal learning objective to pull together global embeddings of the two clips. Our study reveals the impact of temporal granularity with three major findings. 1) Different video tasks may require features of different temporal granularities. 2) Intriguingly, some tasks that are widely considered to require temporal awareness can actually be well addressed by temporally persistent features. 3) The flexibility of TeG gives rise to state-of-the-art results on 8 video benchmarks, outperforming supervised pre-training in most cases.",
author = "Yeqing Li and Liangzhe Yuan and Boqing Gong and Ting Liu and Matthew Brown and Serge Belongie and Yang, {Ming Hsuan} and Hartwig Adam and Yin Cui",
year = "2022",
language = "English",
publisher = "arXiv.org",
type = "WorkingPaper",
institution = "arXiv.org",

}

RIS

TY - UNPB

T1 - Exploring Temporal Granularity in Self-Supervised Video Representation Learning

AU - Li, Yeqing

AU - Yuan, Liangzhe

AU - Gong, Boqing

AU - Liu, Ting

AU - Brown, Matthew

AU - Belongie, Serge

AU - Yang, Ming-Hsuan

AU - Adam, Hartwig

AU - Cui, Yin

PY - 2022

Y1 - 2022

N2 - This work presents a self-supervised learning framework named TeG to explore Temporal Granularity in learning video representations. In TeG, we sample a long clip from a video and a short clip that lies inside the long clip. We then extract their dense temporal embeddings. The training objective consists of two parts: a fine-grained temporal learning objective to maximize the similarity between corresponding temporal embeddings in the short clip and the long clip, and a persistent temporal learning objective to pull together global embeddings of the two clips. Our study reveals the impact of temporal granularity with three major findings. 1) Different video tasks may require features of different temporal granularities. 2) Intriguingly, some tasks that are widely considered to require temporal awareness can actually be well addressed by temporally persistent features. 3) The flexibility of TeG gives rise to state-of-the-art results on 8 video benchmarks, outperforming supervised pre-training in most cases.

AB - This work presents a self-supervised learning framework named TeG to explore Temporal Granularity in learning video representations. In TeG, we sample a long clip from a video and a short clip that lies inside the long clip. We then extract their dense temporal embeddings. The training objective consists of two parts: a fine-grained temporal learning objective to maximize the similarity between corresponding temporal embeddings in the short clip and the long clip, and a persistent temporal learning objective to pull together global embeddings of the two clips. Our study reveals the impact of temporal granularity with three major findings. 1) Different video tasks may require features of different temporal granularities. 2) Intriguingly, some tasks that are widely considered to require temporal awareness can actually be well addressed by temporally persistent features. 3) The flexibility of TeG gives rise to state-of-the-art results on 8 video benchmarks, outperforming supervised pre-training in most cases.

UR - https://arxiv.org/abs/2112.04480

M3 - Preprint

BT - Exploring Temporal Granularity in Self-Supervised Video Representation Learning

PB - arXiv.org

ER -

ID: 303687615
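
Note: The abstract above describes TeG's two-part training objective (a fine-grained temporal term over corresponding dense embeddings, plus a persistent term over global embeddings). Below is a minimal PyTorch-style sketch of that structure, for illustration only; it is not the authors' implementation. The paper's actual objectives are contrastive losses over embeddings from full video encoders, whereas this sketch substitutes a simple cosine-alignment term for each part. All names here (teg_loss, offset, the loss weights) are hypothetical.

import torch
import torch.nn.functional as F

def teg_loss(long_emb, short_emb, offset, w_fine=1.0, w_persist=1.0):
    """Simplified TeG-style two-part objective.

    long_emb:  (T_long, D) dense temporal embeddings of the long clip.
    short_emb: (T_short, D) dense temporal embeddings of the short clip,
               which was sampled from inside the long clip starting at
               timestep `offset`.
    """
    # Fine-grained temporal term: align each short-clip embedding with
    # its temporally corresponding embedding in the long clip.
    corresponding = long_emb[offset : offset + short_emb.shape[0]]
    fine = 1.0 - F.cosine_similarity(short_emb, corresponding, dim=-1).mean()

    # Persistent temporal term: pull together the global (temporally
    # pooled) embeddings of the two clips.
    g_long, g_short = long_emb.mean(dim=0), short_emb.mean(dim=0)
    persist = 1.0 - F.cosine_similarity(g_long, g_short, dim=0)

    return w_fine * fine + w_persist * persist

# Toy usage with random embeddings standing in for encoder outputs:
long_emb = torch.randn(16, 128)   # 16 timesteps, 128-dim features
short_emb = torch.randn(4, 128)   # 4-timestep clip starting at offset 6
loss = teg_loss(long_emb, short_emb, offset=6)

Weighting the two terms separately mirrors the abstract's finding that different downstream tasks may benefit from features of different temporal granularities: emphasizing w_fine favors temporally fine-grained features, while emphasizing w_persist favors temporally persistent ones.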