Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts

Research output: Working paper › Preprint › Research

Standard

Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts. / Kutuzova, Svetlana; Krause, Oswin; McCloskey, Douglas; Nielsen, Mads; Igel, Christian.

arXiv.org, 2022.

Research output: Working paper › Preprint › Research

Harvard

Kutuzova, S, Krause, O, McCloskey, D, Nielsen, M & Igel, C 2022 'Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts' arXiv.org.

APA

Kutuzova, S., Krause, O., McCloskey, D., Nielsen, M., & Igel, C. (2022). Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts. arXiv.org.

Vancouver

Kutuzova S, Krause O, McCloskey D, Nielsen M, Igel C. Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts. arXiv.org. 2022 Jan.

Author

Kutuzova, Svetlana ; Krause, Oswin ; McCloskey, Douglas ; Nielsen, Mads ; Igel, Christian. / Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts. arXiv.org, 2022.

Bibtex

@techreport{5427e78b9be8459e9f743fc2c82a29ed,
title = "Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts",
abstract = "Multimodal generative models should be able to learn a meaningful latent representation that enables a coherent joint generation of all modalities (e.g., images and text). Many applications also require the ability to accurately sample modalities conditioned on observations of a subset of the modalities. Often not all modalities may be observed for all training data points, so semi-supervised learning should be possible. In this study, we propose a novel product-of-experts (PoE) based variational autoencoder that has these desired properties. We benchmark it against a mixture-of-experts (MoE) approach and an approach that combines the modalities with an additional encoder network. An empirical evaluation shows that the PoE-based models can outperform the contrasted models. Our experiments support the intuition that PoE models are better suited for a conjunctive combination of modalities.",
author = "Svetlana Kutuzova and Oswin Krause and Douglas McCloskey and Mads Nielsen and Christian Igel",
year = "2022",
month = jan,
language = "English",
publisher = "arXiv.org",
type = "WorkingPaper",
institution = "arXiv.org",
}
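
For context on the technique defended in the abstract: in a PoE variational autoencoder, each observed modality contributes a unimodal approximate posterior, and the joint posterior is their re-normalized product. Below is a minimal sketch of the standard product-of-Gaussians identity, assuming diagonal-Gaussian experts and a standard-normal prior; the paper's proposed variant may differ in its details.

% Generic PoE fusion of Gaussian experts: a sketch of the standard
% identity, not necessarily the exact model proposed in the paper.
% Expert i encodes modality x_i as q_i(z | x_i) = N(z; mu_i, diag(sigma_i^2)),
% and the prior p(z) = N(z; 0, I) acts as one additional expert.
\[
  q(z \mid x_1, \dots, x_M) \;\propto\; p(z) \prod_{i=1}^{M} q_i(z \mid x_i)
\]
% The product of Gaussians is again Gaussian, with precision-weighted
% fusion (T_i = sigma_i^{-2} is the precision of expert i):
\[
  \Sigma^{-1} = I + \sum_{i=1}^{M} T_i, \qquad
  \mu = \Sigma \Big( \sum_{i=1}^{M} T_i \, \mu_i \Big)
\]
% Unobserved modalities simply drop out of the product, which is what
% makes PoE fusion compatible with semi-supervised training and with
% conditioning on an arbitrary observed subset of modalities.

Intuitively, the product sharpens the posterior where experts agree, an AND-like behavior that matches the "conjunctive combination of modalities" the abstract refers to (a mixture-of-experts, by contrast, behaves disjunctively).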

RIS

TY - UNPB

T1 - Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts

AU - Kutuzova, Svetlana

AU - Krause, Oswin

AU - McCloskey, Douglas

AU - Nielsen, Mads

AU - Igel, Christian

PY - 2022/1

Y1 - 2022/1

N2 - Multimodal generative models should be able to learn a meaningful latent representation that enables a coherent joint generation of all modalities (e.g., images and text). Many applications also require the ability to accurately sample modalities conditioned on observations of a subset of the modalities. Often not all modalities may be observed for all training data points, so semi-supervised learning should be possible. In this study, we propose a novel product-of-experts (PoE) based variational autoencoder that has these desired properties. We benchmark it against a mixture-of-experts (MoE) approach and an approach that combines the modalities with an additional encoder network. An empirical evaluation shows that the PoE-based models can outperform the contrasted models. Our experiments support the intuition that PoE models are better suited for a conjunctive combination of modalities.

AB - Multimodal generative models should be able to learn a meaningful latent representation that enables a coherent joint generation of all modalities (e.g., images and text). Many applications also require the ability to accurately sample modalities conditioned on observations of a subset of the modalities. Often not all modalities may be observed for all training data points, so semi-supervised learning should be possible. In this study, we propose a novel product-of-experts (PoE) based variational autoencoder that has these desired properties. We benchmark it against a mixture-of-experts (MoE) approach and an approach that combines the modalities with an additional encoder network. An empirical evaluation shows that the PoE-based models can outperform the contrasted models. Our experiments support the intuition that PoE models are better suited for a conjunctive combination of modalities.

M3 - Preprint

BT - Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts

PB - arXiv.org

ER -
