Neural Puppet: Generative Layered Cartoon Characters
Research output: Contribution to journal › Conference article › Research › peer-review
Standard
Neural Puppet: Generative Layered Cartoon Characters. / Poursaeed, Omid; Kim, Vladimir G.; Shechtman, Eli; Saito, Jun; Belongie, Serge.
In: Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020, 03.2020, p. 3335-3345.
RIS
TY - GEN
T1 - Neural Puppet: Generative Layered Cartoon Characters
T2 - 2020 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2020
AU - Poursaeed, Omid
AU - Kim, Vladimir G.
AU - Shechtman, Eli
AU - Saito, Jun
AU - Belongie, Serge
N1 - Publisher Copyright: © 2020 IEEE.
PY - 2020/3
Y1 - 2020/3
AB - We propose a learning-based method for generating new animations of a cartoon character given a few example images. Our method is designed to learn from a traditionally animated sequence, where each frame is drawn by an artist, and thus the input images lack any common structure, correspondences, or labels. We express pose changes as a deformation of a layered 2.5D template mesh, and devise a novel architecture that learns to predict mesh deformations matching the template to a target image. This enables us to extract a common low-dimensional structure from a diverse set of character poses. We combine recent advances in differentiable rendering and mesh-aware models to successfully align a common template even if only a few character images are available during training. In addition to coarse poses, character appearance also varies due to shading, out-of-plane motions, and artistic effects. We capture these subtle changes by applying an image translation network to refine the mesh rendering, providing an end-to-end model to generate new animations of a character with high visual quality. We demonstrate that our generative model can be used to synthesize in-between frames and to create data-driven deformations. Our template fitting procedure outperforms state-of-the-art generic techniques for detecting image correspondences.
UR - http://www.scopus.com/inward/record.url?scp=85085497976&partnerID=8YFLogxK
U2 - 10.1109/WACV45572.2020.9093346
DO - 10.1109/WACV45572.2020.9093346
M3 - Conference article
AN - SCOPUS:85085497976
SP - 3335
EP - 3345
JO - Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020
JF - Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020
Y2 - 1 March 2020 through 5 March 2020
ER -