Stacked generative adversarial networks

Publication: Contribution to journal › Conference article › Research › peer-reviewed

Standard

Stacked generative adversarial networks. / Huang, Xun; Li, Yixuan; Poursaeed, Omid; Hopcroft, John; Belongie, Serge.

In: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 06.11.2017, pp. 1866-1875.


Harvard

Huang, X, Li, Y, Poursaeed, O, Hopcroft, J & Belongie, S 2017, 'Stacked generative adversarial networks', Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 1866-1875. https://doi.org/10.1109/CVPR.2017.202

APA

Huang, X., Li, Y., Poursaeed, O., Hopcroft, J., & Belongie, S. (2017). Stacked generative adversarial networks. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 1866-1875. https://doi.org/10.1109/CVPR.2017.202

Vancouver

Huang X, Li Y, Poursaeed O, Hopcroft J, Belongie S. Stacked generative adversarial networks. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. 2017 Nov 6;1866-1875. https://doi.org/10.1109/CVPR.2017.202

Author

Huang, Xun ; Li, Yixuan ; Poursaeed, Omid ; Hopcroft, John ; Belongie, Serge. / Stacked generative adversarial networks. In: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. 2017 ; pp. 1866-1875.

Bibtex

@inproceedings{73627f896707494eb2ccc21d9a2925d0,
title = "Stacked generative adversarial networks",
abstract = "In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking.",
author = "Xun Huang and Yixuan Li and Omid Poursaeed and John Hopcroft and Serge Belongie",
note = "Publisher Copyright: {\textcopyright} 2017 IEEE.; 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 ; Conference date: 21-07-2017 Through 26-07-2017",
year = "2017",
month = nov,
day = "6",
doi = "10.1109/CVPR.2017.202",
language = "English",
pages = "1866--1875",
booktitle = "Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017",
}
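The abstract describes SGAN's top-down generative process: a stack of generators, each mapping a higher-level representation plus fresh noise to the representation one level below, so that uncertainty is resolved level by level rather than by a single noise vector. A minimal structural sketch of that sampling path follows; all dimensions, weights, and names here are illustrative stand-ins, not the authors' architecture or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for a two-stack SGAN sketch (not the paper's values).
DIM_H2, DIM_H1, DIM_X = 16, 32, 64   # top feature, mid feature, flattened image
DIM_Z = 8                            # independent noise injected at each stack

def linear(in_dim, out_dim):
    """Random weight matrix standing in for a trained generator layer."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1

# Each generator G_i maps (representation from the level above, noise) to the
# representation one level below; a representation discriminator at each level
# (omitted here) would push these outputs toward the encoder's feature manifold.
W_g1 = linear(DIM_H2 + DIM_Z, DIM_H1)   # G1: (h2, z1) -> h1
W_g0 = linear(DIM_H1 + DIM_Z, DIM_X)    # G0: (h1, z0) -> x

def generator(h_above, z, W):
    # Condition on the higher-level code by concatenating it with new noise.
    return np.tanh(np.concatenate([h_above, z], axis=-1) @ W)

# Top-down sampling: each stack adds its own noise, gradually resolving
# the variation that the level above left unspecified.
h2 = rng.standard_normal(DIM_H2)            # top-level code
h1 = generator(h2, rng.standard_normal(DIM_Z), W_g1)
x  = generator(h1, rng.standard_normal(DIM_Z), W_g0)
```

In the paper's scheme, each stack would additionally carry an adversarial loss, a conditional loss tying its output back to the level above, and an entropy loss encouraging it to actually use its noise input; this sketch only shows the conditioning and noise flow.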

RIS

TY - GEN

T1 - Stacked generative adversarial networks

AU - Huang, Xun

AU - Li, Yixuan

AU - Poursaeed, Omid

AU - Hopcroft, John

AU - Belongie, Serge

N1 - Publisher Copyright: © 2017 IEEE.

PY - 2017/11/6

Y1 - 2017/11/6

N2 - In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking.

AB - In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking.

UR - http://www.scopus.com/inward/record.url?scp=85041903901&partnerID=8YFLogxK

U2 - 10.1109/CVPR.2017.202

DO - 10.1109/CVPR.2017.202

M3 - Conference article

AN - SCOPUS:85041903901

SP - 1866

EP - 1875

JO - Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017

JF - Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017

T2 - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017

Y2 - 21 July 2017 through 26 July 2017

ER -
