Controllable Video Generation with Sparse Trajectories

Publication: Contribution to journal › Conference article › Research › peer-reviewed

Standard

Controllable Video Generation with Sparse Trajectories. / Hao, Zekun; Huang, Xun; Belongie, Serge.

In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 14.12.2018, pp. 7854-7863.

Publication: Contribution to journal › Conference article › Research › peer-reviewed

Harvard

Hao, Z, Huang, X & Belongie, S 2018, 'Controllable Video Generation with Sparse Trajectories', Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 7854-7863. https://doi.org/10.1109/CVPR.2018.00819

APA

Hao, Z., Huang, X., & Belongie, S. (2018). Controllable Video Generation with Sparse Trajectories. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 7854-7863. https://doi.org/10.1109/CVPR.2018.00819

Vancouver

Hao Z, Huang X, Belongie S. Controllable Video Generation with Sparse Trajectories. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2018 Dec 14;7854-7863. https://doi.org/10.1109/CVPR.2018.00819

Author

Hao, Zekun ; Huang, Xun ; Belongie, Serge. / Controllable Video Generation with Sparse Trajectories. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2018 ; pp. 7854-7863.

Bibtex

@inproceedings{a5a6c2daef914ec496c5033a54087779,
title = "Controllable Video Generation with Sparse Trajectories",
abstract = "Video generation and manipulation are important yet challenging tasks in computer vision. Existing methods usually lack ways to explicitly control the synthesized motion. In this work, we present a conditional video generation model that allows detailed control over the motion of the generated video. Given the first frame and sparse motion trajectories specified by users, our model can synthesize a video with the corresponding appearance and motion. We propose to combine the advantages of copying pixels from the given frame and hallucinating the lightness difference from scratch, which helps generate sharp videos while keeping the model robust to occlusion and lightness change. We also propose a training paradigm that calculates trajectories from video clips, eliminating the need for annotated training data. Experiments on several standard benchmarks demonstrate that our approach can generate realistic videos comparable to state-of-the-art video generation and video prediction methods, while the motion of the generated videos corresponds well with user input.",
author = "Zekun Hao and Xun Huang and Serge Belongie",
note = "Publisher Copyright: {\textcopyright} 2018 IEEE.; 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018 ; Conference date: 18-06-2018 Through 22-06-2018",
year = "2018",
month = dec,
day = "14",
doi = "10.1109/CVPR.2018.00819",
language = "English",
pages = "7854--7863",
journal = "IEEE Conference on Computer Vision and Pattern Recognition. Proceedings",
issn = "1063-6919",
publisher = "Institute of Electrical and Electronics Engineers",

}
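
Sketch: compositing copied and hallucinated pixels

The abstract above describes synthesizing each frame by combining pixels copied from the first frame with a hallucinated lightness difference. The following is a minimal PyTorch sketch of one plausible reading of that idea, not the authors' implementation; the function name, tensor shapes, and the soft occlusion-mask blend are all illustrative assumptions.

import torch
import torch.nn.functional as F

def synthesize_frame(first_frame, flow, hallucinated, occlusion_mask):
    """Compose one output frame.

    first_frame:    (B, 3, H, W) input image to copy pixels from.
    flow:           (B, 2, H, W) backward flow in pixels (x, y), e.g. a
                    dense field decoded from the user's sparse trajectories.
    hallucinated:   (B, 3, H, W) content generated from scratch (standing in
                    for the paper's lightness-difference branch).
    occlusion_mask: (B, 1, H, W) soft mask in [0, 1]; 1 = trust the copy.
    """
    b, _, h, w = first_frame.shape
    # Base sampling grid in grid_sample's normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h), torch.linspace(-1.0, 1.0, w),
        indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
    # Convert pixel-space flow to normalized offsets and displace the grid.
    offsets = torch.stack(
        (flow[:, 0] * 2.0 / max(w - 1, 1),
         flow[:, 1] * 2.0 / max(h - 1, 1)), dim=-1)
    # "Copy" pixels from the first frame via bilinear warping.
    warped = F.grid_sample(first_frame, base + offsets, align_corners=True)
    # Fall back to generated content where copying fails (occlusion,
    # lighting change) -- the combination the abstract credits with
    # producing sharp yet robust videos.
    return occlusion_mask * warped + (1.0 - occlusion_mask) * hallucinated

Copying keeps high-frequency detail from the real input frame, while the generated branch only has to fill in what warping cannot explain; that division of labor is the design point the abstract emphasizes.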

RIS

TY - GEN

T1 - Controllable Video Generation with Sparse Trajectories

AU - Hao, Zekun

AU - Huang, Xun

AU - Belongie, Serge

N1 - Publisher Copyright: © 2018 IEEE.

PY - 2018/12/14

Y1 - 2018/12/14

N2 - Video generation and manipulation are important yet challenging tasks in computer vision. Existing methods usually lack ways to explicitly control the synthesized motion. In this work, we present a conditional video generation model that allows detailed control over the motion of the generated video. Given the first frame and sparse motion trajectories specified by users, our model can synthesize a video with the corresponding appearance and motion. We propose to combine the advantages of copying pixels from the given frame and hallucinating the lightness difference from scratch, which helps generate sharp videos while keeping the model robust to occlusion and lightness change. We also propose a training paradigm that calculates trajectories from video clips, eliminating the need for annotated training data. Experiments on several standard benchmarks demonstrate that our approach can generate realistic videos comparable to state-of-the-art video generation and video prediction methods, while the motion of the generated videos corresponds well with user input.

AB - Video generation and manipulation are important yet challenging tasks in computer vision. Existing methods usually lack ways to explicitly control the synthesized motion. In this work, we present a conditional video generation model that allows detailed control over the motion of the generated video. Given the first frame and sparse motion trajectories specified by users, our model can synthesize a video with the corresponding appearance and motion. We propose to combine the advantages of copying pixels from the given frame and hallucinating the lightness difference from scratch, which helps generate sharp videos while keeping the model robust to occlusion and lightness change. We also propose a training paradigm that calculates trajectories from video clips, eliminating the need for annotated training data. Experiments on several standard benchmarks demonstrate that our approach can generate realistic videos comparable to state-of-the-art video generation and video prediction methods, while the motion of the generated videos corresponds well with user input.

UR - http://www.scopus.com/inward/record.url?scp=85062865985&partnerID=8YFLogxK

U2 - 10.1109/CVPR.2018.00819

DO - 10.1109/CVPR.2018.00819

M3 - Conference article

AN - SCOPUS:85062865985

SP - 7854

EP - 7863

JO - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings

JF - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings

SN - 1063-6919

T2 - 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018

Y2 - 18 June 2018 through 22 June 2018

ER -
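
Sketch: computing training trajectories from raw clips

The abstract's training paradigm calculates sparse trajectories directly from video clips, so no human annotation is needed. Below is a minimal OpenCV sketch of that general idea (Shi-Tomasi keypoints tracked with pyramidal Lucas-Kanade); it is an assumption about one way to realize such a paradigm, not the paper's actual pipeline, and the function name and parameters are illustrative.

import cv2
import numpy as np

def trajectories_from_clip(frames, max_points=8):
    """frames: list of H x W x 3 uint8 BGR images (one training clip).
    Returns an (N, T, 2) array of N point tracks over the T frames,
    usable together with frames[0] as a self-supervised training pair."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    # Pick a handful of trackable corners in the first frame.
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=max_points,
                                  qualityLevel=0.01, minDistance=10)
    tracks = [pts.reshape(-1, 2)]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade propagates each point to the next frame.
        # (For brevity, points flagged as lost in `status` are not filtered.)
        pts, status, _err = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        tracks.append(pts.reshape(-1, 2))
        prev = gray
    # Stack per-frame positions (T, N, 2) and reorder to (N, T, 2).
    return np.stack(tracks, axis=0).transpose(1, 0, 2)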

ID: 301825079