Learning single-view 3D reconstruction with limited pose supervision

Research output: Contribution to journal › Conference article › Research › peer-review

It is expensive to label images with 3D structure or precise camera pose. Yet, this is precisely the kind of annotation required to train single-view 3D reconstruction models. In contrast, unlabeled images or images with just category labels are easy to acquire, but few current models can use this weak supervision. We present a unified framework that combines both types of supervision: a small set of camera pose annotations is used to enforce pose-invariance and viewpoint consistency, while unlabeled images combined with an adversarial loss are used to enforce the realism of renderings of generated models. We use this unified framework to measure the impact of each form of supervision in three paradigms: semi-supervised, multi-task, and transfer learning. We show that with a combination of these ideas, we can train single-view reconstruction models that improve by up to 7 points in average precision (AP) when using only 1% pose-annotated training data.
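A minimal sketch of the kind of combined objective the abstract describes: a viewpoint-consistency loss on the small pose-annotated subset, plus an adversarial realism loss on renders of shapes predicted from unlabeled images. This is a hypothetical illustration, not the authors' implementation; all names (encoder, decoder, renderer, discriminator, lambda_adv) and the choice of a silhouette-based consistency loss are assumptions.

```python
# Hypothetical sketch of a semi-supervised 3D reconstruction objective:
# pose supervision on a few images + adversarial realism on unlabeled ones.
# All module and argument names are illustrative, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def combined_loss(encoder: nn.Module, decoder: nn.Module,
                  renderer, discriminator: nn.Module,
                  posed_images, posed_silhouettes, posed_cameras,
                  unlabeled_images, random_cameras,
                  lambda_adv: float = 0.1):
    """Mix a supervised viewpoint-consistency term with a GAN term.

    renderer(voxels, cameras) is assumed to be a differentiable projection
    of a predicted occupancy grid to a 2D silhouette in [0, 1].
    """
    # Supervised branch: with a known camera, the predicted shape rendered
    # from that camera should match the observed silhouette.
    voxels = decoder(encoder(posed_images))
    rendered = renderer(voxels, posed_cameras)
    consistency = F.binary_cross_entropy(rendered, posed_silhouettes)

    # Unsupervised branch: shapes predicted from unlabeled images, rendered
    # from random cameras, should look realistic to a discriminator
    # (non-saturating generator-side GAN loss).
    u_voxels = decoder(encoder(unlabeled_images))
    u_rendered = renderer(u_voxels, random_cameras)
    logits = discriminator(u_rendered)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    return consistency + lambda_adv * adv
```

Because the pose-annotated and unlabeled batches enter through separate terms, the same objective covers the semi-supervised, multi-task, and transfer settings by changing which datasets feed each branch.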

Original language: English
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages (from-to): 90-105
Number of pages: 16
ISSN: 0302-9743
DOIs
Publication status: Published - 2018
Externally published: Yes
Event: 15th European Conference on Computer Vision, ECCV 2018 - Munich, Germany
Duration: 8 Sep 2018 – 14 Sep 2018

Conference

Conference: 15th European Conference on Computer Vision, ECCV 2018
Country: Germany
City: Munich
Period: 08/09/2018 – 14/09/2018

Bibliographical note

Publisher Copyright:
© Springer Nature Switzerland AG 2018.

Research areas

  • Few-shot learning
  • GANs
  • Single-image 3D-reconstruction
