Volumetric Disentanglement for 3D Scene Manipulation

Publication: Working paper › Preprint

Standard

Volumetric Disentanglement for 3D Scene Manipulation. / Benaim, Sagie; Warburg, Frederik; Christensen, Peter Ebert; Belongie, Serge.

arXiv.org, 2022.


Harvard

Benaim, S, Warburg, F, Christensen, PE & Belongie, S 2022 'Volumetric Disentanglement for 3D Scene Manipulation' arXiv.org. <https://arxiv.org/abs/2206.02776>

APA

Benaim, S., Warburg, F., Christensen, P. E., & Belongie, S. (2022). Volumetric Disentanglement for 3D Scene Manipulation. arXiv.org. https://arxiv.org/abs/2206.02776

Vancouver

Benaim S, Warburg F, Christensen PE, Belongie S. Volumetric Disentanglement for 3D Scene Manipulation. arXiv.org. 2022.

Author

Benaim, Sagie ; Warburg, Frederik ; Christensen, Peter Ebert ; Belongie, Serge. / Volumetric Disentanglement for 3D Scene Manipulation. arXiv.org, 2022.

Bibtex

@techreport{283a47be08ee4196aa1b8baf2ef6f038,
title = "Volumetric Disentanglement for 3D Scene Manipulation",
abstract = "Recently, advances in differentiable volumetric rendering enabled significant breakthroughs in the photo-realistic and fine-detailed reconstruction of complex 3D scenes, which is key for many virtual reality applications. However, in the context of augmented reality, one may also wish to effect semantic manipulations or augmentations of objects within a scene. To this end, we propose a volumetric framework for (i) disentangling, or separating, the volumetric representation of a given foreground object from the background, and (ii) semantically manipulating the foreground object, as well as the background. Our framework takes as input a set of 2D masks specifying the desired foreground object for training views, together with the associated 2D views and poses, and produces a foreground-background disentanglement that respects the surrounding illumination, reflections, and partial occlusions, which can be applied to both training and novel views. Our method enables the separate control of pixel color and depth as well as 3D similarity transformations of both the foreground and background objects. We subsequently demonstrate the applicability of our framework on a number of downstream manipulation tasks including object camouflage, non-negative 3D object inpainting, 3D object translation, 3D object inpainting, and 3D text-based object manipulation. Full results are given on our project webpage at this https URL",
author = "Sagie Benaim and Frederik Warburg and Christensen, {Peter Ebert} and Serge Belongie",
year = "2022",
language = "English",
publisher = "arXiv.org",
type = "WorkingPaper",
institution = "arXiv.org",

}

RIS

TY - UNPB

T1 - Volumetric Disentanglement for 3D Scene Manipulation

AU - Benaim, Sagie

AU - Warburg, Frederik

AU - Christensen, Peter Ebert

AU - Belongie, Serge

PY - 2022

Y1 - 2022

N2 - Recently, advances in differentiable volumetric rendering enabled significant breakthroughs in the photo-realistic and fine-detailed reconstruction of complex 3D scenes, which is key for many virtual reality applications. However, in the context of augmented reality, one may also wish to effect semantic manipulations or augmentations of objects within a scene. To this end, we propose a volumetric framework for (i) disentangling, or separating, the volumetric representation of a given foreground object from the background, and (ii) semantically manipulating the foreground object, as well as the background. Our framework takes as input a set of 2D masks specifying the desired foreground object for training views, together with the associated 2D views and poses, and produces a foreground-background disentanglement that respects the surrounding illumination, reflections, and partial occlusions, which can be applied to both training and novel views. Our method enables the separate control of pixel color and depth as well as 3D similarity transformations of both the foreground and background objects. We subsequently demonstrate the applicability of our framework on a number of downstream manipulation tasks including object camouflage, non-negative 3D object inpainting, 3D object translation, 3D object inpainting, and 3D text-based object manipulation. Full results are given on our project webpage at this https URL

AB - Recently, advances in differentiable volumetric rendering enabled significant breakthroughs in the photo-realistic and fine-detailed reconstruction of complex 3D scenes, which is key for many virtual reality applications. However, in the context of augmented reality, one may also wish to effect semantic manipulations or augmentations of objects within a scene. To this end, we propose a volumetric framework for (i) disentangling, or separating, the volumetric representation of a given foreground object from the background, and (ii) semantically manipulating the foreground object, as well as the background. Our framework takes as input a set of 2D masks specifying the desired foreground object for training views, together with the associated 2D views and poses, and produces a foreground-background disentanglement that respects the surrounding illumination, reflections, and partial occlusions, which can be applied to both training and novel views. Our method enables the separate control of pixel color and depth as well as 3D similarity transformations of both the foreground and background objects. We subsequently demonstrate the applicability of our framework on a number of downstream manipulation tasks including object camouflage, non-negative 3D object inpainting, 3D object translation, 3D object inpainting, and 3D text-based object manipulation. Full results are given on our project webpage at this https URL

M3 - Preprint

BT - Volumetric Disentanglement for 3D Scene Manipulation

PB - arXiv.org

ER -
