GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds
Research output: Contribution to journal › Conference article › Research › peer-review
Standard
GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds. / Belongie, Serge; Hao, Zekun; Mallya, Arun; Liu, Ming-Yu.
In: IEEE Xplore Digital Library, Vol. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 28.02.2022, p. 14052-14062.
RIS
TY - GEN
T1 - GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds
AU - Belongie, Serge
AU - Hao, Zekun
AU - Mallya, Arun
AU - Liu, Ming-Yu
PY - 2022/2/28
Y1 - 2022/2/28
N2 - We present GANcraft, an unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds such as those created in Minecraft. Our method takes a semantic block world as input, where each block is assigned a semantic label such as dirt, grass, or water. We represent the world as a continuous volumetric function and train our model to render view-consistent photorealistic images for a user-controlled camera. In the absence of paired ground truth real images for the block world, we devise a training technique based on pseudo-ground truth and adversarial training. This stands in contrast to prior work on neural rendering for view synthesis, which requires ground truth images to estimate scene geometry and view-dependent appearance. In addition to camera trajectory, GANcraft allows user control over both scene semantics and output style. Experimental results with comparison to strong baselines show the effectiveness of GANcraft on this novel task of photorealistic 3D block world synthesis.
AB - We present GANcraft, an unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds such as those created in Minecraft. Our method takes a semantic block world as input, where each block is assigned a semantic label such as dirt, grass, or water. We represent the world as a continuous volumetric function and train our model to render view-consistent photorealistic images for a user-controlled camera. In the absence of paired ground truth real images for the block world, we devise a training technique based on pseudo-ground truth and adversarial training. This stands in contrast to prior work on neural rendering for view synthesis, which requires ground truth images to estimate scene geometry and view-dependent appearance. In addition to camera trajectory, GANcraft allows user control over both scene semantics and output style. Experimental results with comparison to strong baselines show the effectiveness of GANcraft on this novel task of photorealistic 3D block world synthesis.
UR - https://openaccess.thecvf.com/content/ICCV2021/html/Hao_GANcraft_Unsupervised_3D_Neural_Rendering_of_Minecraft_Worlds_ICCV_2021_paper.html
U2 - 10.1109/ICCV48922.2021.01381
DO - 10.1109/ICCV48922.2021.01381
M3 - Conference article
VL - 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
SP - 14052
EP - 14062
JO - IEEE Xplore Digital Library
JF - IEEE Xplore Digital Library
ER -