The Deep Generative Decoder: MAP estimation of representations improves modelling of single-cell RNA data

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

The Deep Generative Decoder: MAP estimation of representations improves modelling of single-cell RNA data. / Schuster, Viktoria; Krogh, Anders.

In: Bioinformatics, Vol. 39, No. 9, btad497, 2023.

Harvard

Schuster, V & Krogh, A 2023, 'The Deep Generative Decoder: MAP estimation of representations improves modelling of single-cell RNA data', Bioinformatics, vol. 39, no. 9, btad497. https://doi.org/10.1093/bioinformatics/btad497

APA

Schuster, V., & Krogh, A. (2023). The Deep Generative Decoder: MAP estimation of representations improves modelling of single-cell RNA data. Bioinformatics, 39(9), [btad497]. https://doi.org/10.1093/bioinformatics/btad497

Vancouver

Schuster V, Krogh A. The Deep Generative Decoder: MAP estimation of representations improves modelling of single-cell RNA data. Bioinformatics. 2023;39(9):btad497. https://doi.org/10.1093/bioinformatics/btad497

Author

Schuster, Viktoria; Krogh, Anders. / The Deep Generative Decoder: MAP estimation of representations improves modelling of single-cell RNA data. In: Bioinformatics. 2023; Vol. 39, No. 9.

Bibtex

@article{06c41323eeab4dc3a23845d2b8cbb396,
title = "The Deep Generative Decoder: MAP estimation of representations improves modelling of single-cell RNA data",
abstract = "Motivation: Learning low-dimensional representations of single-cell transcriptomics has become instrumental to its downstream analysis. The state of the art is currently represented by neural network models, such as variational autoencoders, which use a variational approximation of the likelihood for inference. Results: We here present the Deep Generative Decoder (DGD), a simple generative model that computes model parameters and representations directly via maximum a posteriori estimation. The DGD handles complex parameterized latent distributions naturally unlike variational autoencoders, which typically use a fixed Gaussian distribution, because of the complexity of adding other types. We first show its general functionality on a commonly used benchmark set, Fashion-MNIST. Secondly, we apply the model to multiple single-cell datasets. Here, the DGD learns low-dimensional, meaningful, and well-structured latent representations with sub-clustering beyond the provided labels. The advantages of this approach are its simplicity and its capability to provide representations of much smaller dimensionality than a comparable variational autoencoder.",
author = "Viktoria Schuster and Anders Krogh",
note = "Publisher Copyright: {\textcopyright} 2023 The Author(s). Published by Oxford University Press.",
year = "2023",
doi = "10.1093/bioinformatics/btad497",
language = "English",
volume = "39",
journal = "Bioinformatics (Online)",
issn = "1367-4811",
publisher = "Oxford University Press",
number = "9",
pages = "btad497",
}

RIS

TY - JOUR

T1 - The Deep Generative Decoder

T2 - MAP estimation of representations improves modelling of single-cell RNA data

AU - Schuster, Viktoria

AU - Krogh, Anders

N1 - Publisher Copyright: © 2023 The Author(s). Published by Oxford University Press.

PY - 2023

Y1 - 2023

N2 - Motivation: Learning low-dimensional representations of single-cell transcriptomics has become instrumental to its downstream analysis. The state of the art is currently represented by neural network models, such as variational autoencoders, which use a variational approximation of the likelihood for inference. Results: We here present the Deep Generative Decoder (DGD), a simple generative model that computes model parameters and representations directly via maximum a posteriori estimation. The DGD handles complex parameterized latent distributions naturally unlike variational autoencoders, which typically use a fixed Gaussian distribution, because of the complexity of adding other types. We first show its general functionality on a commonly used benchmark set, Fashion-MNIST. Secondly, we apply the model to multiple single-cell datasets. Here, the DGD learns low-dimensional, meaningful, and well-structured latent representations with sub-clustering beyond the provided labels. The advantages of this approach are its simplicity and its capability to provide representations of much smaller dimensionality than a comparable variational autoencoder.

AB - Motivation: Learning low-dimensional representations of single-cell transcriptomics has become instrumental to its downstream analysis. The state of the art is currently represented by neural network models, such as variational autoencoders, which use a variational approximation of the likelihood for inference. Results: We here present the Deep Generative Decoder (DGD), a simple generative model that computes model parameters and representations directly via maximum a posteriori estimation. The DGD handles complex parameterized latent distributions naturally unlike variational autoencoders, which typically use a fixed Gaussian distribution, because of the complexity of adding other types. We first show its general functionality on a commonly used benchmark set, Fashion-MNIST. Secondly, we apply the model to multiple single-cell datasets. Here, the DGD learns low-dimensional, meaningful, and well-structured latent representations with sub-clustering beyond the provided labels. The advantages of this approach are its simplicity and its capability to provide representations of much smaller dimensionality than a comparable variational autoencoder.

UR - http://www.scopus.com/inward/record.url?scp=85172672094&partnerID=8YFLogxK

U2 - 10.1093/bioinformatics/btad497

DO - 10.1093/bioinformatics/btad497

M3 - Journal article

C2 - 37572301

AN - SCOPUS:85172672094

VL - 39

JO - Bioinformatics (Online)

JF - Bioinformatics (Online)

SN - 1367-4811

IS - 9

M1 - btad497

ER -

ID: 369554272