Pointflow: 3D point cloud generation with continuous normalizing flows

Research output: Contribution to journal › Conference article › Research › peer-review

As 3D point clouds become the representation of choice for multiple vision and graphics applications, the ability to synthesize or reconstruct high-resolution, high-fidelity point clouds becomes crucial. Despite the recent success of deep learning models in discriminative tasks of point clouds, generating point clouds remains challenging. This paper proposes a principled probabilistic framework to generate 3D point clouds by modeling them as a distribution of distributions. Specifically, we learn a two-level hierarchy of distributions where the first level is the distribution of shapes and the second level is the distribution of points given a shape. This formulation allows us to both sample shapes and sample an arbitrary number of points from a shape. Our generative model, named PointFlow, learns each level of the distribution with a continuous normalizing flow. The invertibility of normalizing flows enables the computation of the likelihood during training and allows us to train our model in the variational inference framework. Empirically, we demonstrate that PointFlow achieves state-of-the-art performance in point cloud generation. We additionally show that our model can faithfully reconstruct point clouds and learn useful representations in an unsupervised manner. The code is available at https://github.com/stevenygd/PointFlow.
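For illustration only, the sketch below mirrors the two-level sampling process the abstract describes: one continuous normalizing flow maps Gaussian noise to a shape latent, and a second, conditional flow maps per-point Gaussian noise to 3D points given that latent. This is a minimal, self-contained approximation in PyTorch, not the released implementation; the fixed-step Euler integrator, the ConditionalCNF class, and all layer sizes are assumptions made here for brevity (the actual code at the repository above uses adaptive ODE solvers and trains the model with a variational objective that also requires the flow's log-determinant).

import torch
import torch.nn as nn

class ConditionalCNF(nn.Module):
    # Minimal continuous normalizing flow: integrates a learned ODE
    # dy/dt = f(y, t, condition) from t=0 to t=1 with fixed-step Euler.
    # (Simplification: likelihood bookkeeping and adaptive solvers omitted.)
    def __init__(self, dim, cond_dim, hidden=128, steps=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + cond_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )
        self.steps = steps

    def forward(self, y, cond):
        dt = 1.0 / self.steps
        for i in range(self.steps):
            t = torch.full_like(y[..., :1], i * dt)
            y = y + dt * self.net(torch.cat([y, cond, t], dim=-1))
        return y

# Two-level sampling: first a shape latent, then points given that shape.
shape_flow = ConditionalCNF(dim=128, cond_dim=0)      # shape-level prior flow
point_flow = ConditionalCNF(dim=3, cond_dim=128)      # point-level conditional flow

w = torch.randn(1, 128)                        # Gaussian noise at the shape level
z = shape_flow(w, cond=w[:, :0])               # shape latent (empty condition)
noise = torch.randn(2048, 3)                   # one Gaussian sample per output point
points = point_flow(noise, z.expand(2048, -1)) # a 2048-point cloud for this shape
print(points.shape)                            # torch.Size([2048, 3])

Because each output point is produced from its own noise sample at the second level, drawing more or fewer noise vectors yields a cloud with an arbitrary number of points for the same shape latent, which is the property the abstract highlights.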

Original language: English
Journal: Proceedings of the IEEE International Conference on Computer Vision
Pages (from-to): 4540-4549
Number of pages: 10
ISSN: 1550-5499
DOIs
Publication status: Published - Oct 2019
Externally published: Yes
Event: 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019 - Seoul, Korea, Republic of
Duration: 27 Oct 2019 - 2 Nov 2019

Conference

Conference: 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019
Country: Korea, Republic of
City: Seoul
Period: 27/10/2019 - 02/11/2019
Sponsor: Computer Vision Foundation, IEEE

Bibliographical note

Funding Information:
This work was supported in part by a research gift from Magic Leap. Xun Huang was supported by NVIDIA Graduate Fellowship.

Publisher Copyright:
© 2019 IEEE.
