GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds

Research output: Contribution to journal › Conference article › Research › peer-reviewed

Standard

GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds. / Belongie, Serge; Hao, Zekun; Mallya, Arun; Liu, Ming Yu.

In: IEEE Xplore Digital Library, Vol. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 28.02.2022, pp. 14052-14062.

Research output: Contribution to journal › Conference article › Research › peer-reviewed

Harvard

Belongie, S, Hao, Z, Mallya, A & Liu, MY 2022, 'GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds', IEEE Xplore Digital Library, vol. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 14052-14062. https://doi.org/10.1109/ICCV48922.2021.01381

APA

Belongie, S., Hao, Z., Mallya, A., & Liu, M. Y. (2022). GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds. IEEE Xplore Digital Library, 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 14052-14062. https://doi.org/10.1109/ICCV48922.2021.01381

Vancouver

Belongie S, Hao Z, Mallya A, Liu MY. GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds. IEEE Xplore Digital Library. 2022 Feb 28;2021 IEEE/CVF International Conference on Computer Vision (ICCV):14052-14062. https://doi.org/10.1109/ICCV48922.2021.01381

Author

Belongie, Serge ; Hao, Zekun ; Mallya, Arun ; Liu, Ming Yu. / GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds. In: IEEE Xplore Digital Library. 2022 ; Vol. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). pp. 14052-14062.

Bibtex

@inproceedings{c03e001a4a21423faed1534e52abc7cc,
title = "GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds",
abstract = "We present GANcraft, an unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds such as those created in Minecraft. Our method takes a semantic block world as input, where each block is assigned a semantic label such as dirt, grass, or water. We represent the world as a continuous volumetric function and train our model to render view-consistent photorealistic images for a user-controlled camera. In the absence of paired ground truth real images for the block world, we devise a training technique based on pseudo-ground truth and adversarial training. This stands in contrast to prior work on neural rendering for view synthesis, which requires ground truth images to estimate scene geometry and view-dependent appearance. In addition to camera trajectory, GANcraft allows user control over both scene semantics and output style. Experimental results with comparison to strong baselines show the effectiveness of GANcraft on this novel task of photorealistic 3D block world synthesis.",
author = "Serge Belongie and Zekun Hao and Arun Mallya and Liu, {Ming Yu}",
year = "2022",
month = feb,
day = "28",
doi = "10.1109/ICCV48922.2021.01381",
language = "English",
volume = "2021 IEEE/CVF International Conference on Computer Vision (ICCV)",
pages = "14052--14062",
journal = "IEEE Xplore Digital Library",

}
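
The abstract describes representing the block world as a continuous volumetric function whose samples are conditioned on per-block semantic labels. As a rough illustration of that formulation (not the authors' code), the sketch below alpha-composites colours along a camera ray through a tiny voxel grid of semantic labels; all names, grid sizes, densities, and per-label colours are illustrative assumptions, and the learned neural field and style conditioning of GANcraft are replaced by constants.

```python
# Minimal sketch (not the authors' implementation): alpha-composited volume
# rendering of a semantic block world. Per-label colours and densities stand in
# for the learned, style-conditioned neural field described in the abstract.
import numpy as np

# A tiny "block world": integer semantic labels on a 3D voxel grid.
# 0 = air, 1 = dirt, 2 = grass, 3 = water (labels chosen for illustration).
world = np.zeros((16, 16, 16), dtype=np.int64)
world[:, :4, :] = 1          # dirt below
world[:, 4, :] = 2           # a layer of grass
world[10:, 4, 10:] = 3       # a small pond

# Per-label appearance: constants here, a learned function in the paper.
label_rgb = np.array([[0.0, 0.0, 0.0],
                      [0.4, 0.26, 0.13],
                      [0.2, 0.6, 0.2],
                      [0.1, 0.3, 0.8]])
label_sigma = np.array([0.0, 8.0, 8.0, 3.0])   # volume density per label

def render_ray(origin, direction, n_samples=128, t_max=24.0):
    """March one camera ray through the voxel grid and alpha-composite a colour."""
    direction = direction / np.linalg.norm(direction)
    ts = np.linspace(0.0, t_max, n_samples)
    delta = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        p = origin + t * direction
        idx = np.floor(p).astype(int)
        if np.any(idx < 0) or np.any(idx >= np.array(world.shape)):
            continue
        label = world[tuple(idx)]
        sigma = label_sigma[label]
        alpha = 1.0 - np.exp(-sigma * delta)          # opacity of this segment
        color += transmittance * alpha * label_rgb[label]
        transmittance *= 1.0 - alpha                  # light surviving past it
        if transmittance < 1e-3:                      # early ray termination
            break
    return color

# Render a single ray looking down into the scene from above.
rgb = render_ray(origin=np.array([8.0, 12.0, 8.0]),
                 direction=np.array([0.0, -1.0, 0.1]))
print("composited RGB:", rgb)
```

In the paper this renderer is trained without paired ground truth by combining pseudo-ground-truth images with adversarial losses; the sketch only shows the rendering integral, not the training procedure.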

RIS

TY - GEN

T1 - GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds

AU - Belongie, Serge

AU - Hao, Zekun

AU - Mallya, Arun

AU - Liu, Ming Yu

PY - 2022/2/28

Y1 - 2022/2/28

N2 - We present GANcraft, an unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds such as those created in Minecraft. Our method takes a semantic block world as input, where each block is assigned a semantic label such as dirt, grass, or water. We represent the world as a continuous volumetric function and train our model to render view-consistent photorealistic images for a user-controlled camera. In the absence of paired ground truth real images for the block world, we devise a training technique based on pseudo-ground truth and adversarial training. This stands in contrast to prior work on neural rendering for view synthesis, which requires ground truth images to estimate scene geometry and view-dependent appearance. In addition to camera trajectory, GANcraft allows user control over both scene semantics and output style. Experimental results with comparison to strong baselines show the effectiveness of GANcraft on this novel task of photorealistic 3D block world synthesis.

AB - We present GANcraft, an unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds such as those created in Minecraft. Our method takes a semantic block world as input, where each block is assigned a semantic label such as dirt, grass, or water. We represent the world as a continuous volumetric function and train our model to render view-consistent photorealistic images for a user-controlled camera. In the absence of paired ground truth real images for the block world, we devise a training technique based on pseudo-ground truth and adversarial training. This stands in contrast to prior work on neural rendering for view synthesis, which requires ground truth images to estimate scene geometry and view-dependent appearance. In addition to camera trajectory, GANcraft allows user control over both scene semantics and output style. Experimental results with comparison to strong baselines show the effectiveness of GANcraft on this novel task of photorealistic 3D block world synthesis.

UR - https://openaccess.thecvf.com/content/ICCV2021/html/Hao_GANcraft_Unsupervised_3D_Neural_Rendering_of_Minecraft_Worlds_ICCV_2021_paper.html

U2 - 10.1109/ICCV48922.2021.01381

DO - 10.1109/ICCV48922.2021.01381

M3 - Conference article

VL - 2021 IEEE/CVF International Conference on Computer Vision (ICCV)

SP - 14052

EP - 14062

JO - IEEE Xplore Digital Library

JF - IEEE Xplore Digital Library

ER -

ID: 303806434