The Deep Generative Decoder: MAP estimation of representations improves modelling of single-cell RNA data

Publication: Contribution to journal › Journal article › Research › Peer-reviewed

Documents

  • Full text

    Publisher's published version, 5.23 MB, PDF document

Motivation: Learning low-dimensional representations of single-cell transcriptomics has become instrumental to its downstream analysis. The current state of the art consists of neural-network models, such as variational autoencoders, which use a variational approximation of the likelihood for inference. Results: We present the Deep Generative Decoder (DGD), a simple generative model that computes model parameters and representations directly via maximum a posteriori estimation. Unlike variational autoencoders, which typically rely on a fixed Gaussian latent distribution because of the complexity of adding other types, the DGD naturally handles complex parameterized latent distributions. We first demonstrate its general functionality on a commonly used benchmark set, Fashion-MNIST. We then apply the model to multiple single-cell datasets, where the DGD learns low-dimensional, meaningful, and well-structured latent representations with sub-clustering beyond the provided labels. The advantages of this approach are its simplicity and its ability to provide representations of much smaller dimensionality than those of a comparable variational autoencoder.
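The core idea, MAP estimation of representations, can be illustrated with a minimal sketch (not the authors' implementation): each data point is assigned its own learnable latent vector, and these vectors are optimized jointly with the decoder weights by maximizing the log-likelihood plus a log-prior term. The network sizes, the standard-normal prior, and the Gaussian (MSE) likelihood below are illustrative stand-ins; the paper uses a parameterized latent distribution and count-based likelihoods for single-cell RNA data.

```python
# Minimal sketch of MAP estimation of representations (illustrative only):
# every sample owns a free latent vector, optimized jointly with the decoder.
import torch
import torch.nn as nn

n_samples, latent_dim, data_dim = 1000, 2, 784   # hypothetical sizes

decoder = nn.Sequential(                         # small decoder network
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)

# One learnable representation per data point (the "representation layer").
Z = nn.Parameter(torch.randn(n_samples, latent_dim) * 0.01)

opt = torch.optim.Adam(list(decoder.parameters()) + [Z], lr=1e-3)

def map_step(x, idx):
    """One gradient step on the negative log-posterior for the samples in idx."""
    z = Z[idx]
    recon = decoder(z)
    # Gaussian (MSE) likelihood stand-in; the paper uses count likelihoods for scRNA data.
    nll = nn.functional.mse_loss(recon, x, reduction="mean")
    # Standard-normal prior stand-in; the paper uses a parameterized latent distribution.
    neg_log_prior = 0.5 * (z ** 2).mean()
    loss = nll + neg_log_prior
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In decoder-only models of this kind, a representation for an unseen sample is typically obtained by running the same optimization over its latent vector while keeping the decoder weights fixed.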

Original language: English
Article number: btad497
Journal: Bioinformatics
Volume: 39
Issue number: 9
Number of pages: 14
ISSN: 1367-4803
DOI
Status: Published - 2023

Bibliographical note

Funding Information:
This work was supported by grants from the Novo Nordisk Foundation [NNF20OC0062606, NNF20OC0059939, NNF20OC0063268 to A.K.].

Publisher Copyright:
© 2023 The Author(s). Published by Oxford University Press.


ID: 369554272