Interpreting Latent Spaces of Generative Models for Medical Images using Unsupervised Methods

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Interpreting Latent Spaces of Generative Models for Medical Images using Unsupervised Methods. / Schön, Julian; Selvan, Raghavendra; Petersen, Jens.

Deep Generative Models: Second MICCAI Workshop, DGM4MICCAI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings. Springer, 2022. p. 24-33 (Lecture Notes in Computer Science, Vol. 13609).

Harvard

Schön, J, Selvan, R & Petersen, J 2022, Interpreting Latent Spaces of Generative Models for Medical Images using Unsupervised Methods. in Deep Generative Models: Second MICCAI Workshop, DGM4MICCAI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings. Springer, Lecture Notes in Computer Science, vol. 13609, pp. 24-33, Second MICCAI Workshop, DGM4MICCAI 2022, Singapore, 22/09/2022. https://doi.org/10.1007/978-3-031-18576-2_3

APA

Schön, J., Selvan, R., & Petersen, J. (2022). Interpreting Latent Spaces of Generative Models for Medical Images using Unsupervised Methods. In Deep Generative Models: Second MICCAI Workshop, DGM4MICCAI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings (pp. 24-33). Springer. Lecture Notes in Computer Science Vol. 13609 https://doi.org/10.1007/978-3-031-18576-2_3

Vancouver

Schön J, Selvan R, Petersen J. Interpreting Latent Spaces of Generative Models for Medical Images using Unsupervised Methods. In Deep Generative Models: Second MICCAI Workshop, DGM4MICCAI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings. Springer. 2022. p. 24-33. (Lecture Notes in Computer Science, Vol. 13609). https://doi.org/10.1007/978-3-031-18576-2_3

Author

Schön, Julian ; Selvan, Raghavendra ; Petersen, Jens. / Interpreting Latent Spaces of Generative Models for Medical Images using Unsupervised Methods. Deep Generative Models: Second MICCAI Workshop, DGM4MICCAI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 22, 2022, Proceedings. Springer, 2022. pp. 24-33 (Lecture Notes in Computer Science, Vol. 13609).

Bibtex

@inproceedings{a1ef2245744f43d782badbd58f5ccc4e,
title = "Interpreting Latent Spaces of Generative Models for Medical Images using Unsupervised Methods",
abstract = "Generative models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) play an increasingly important role in medical image analysis. The latent spaces of these models often show semantically meaningful directions corresponding to human-interpretable image transformations. However, until now, their exploration for medical images has been limited due to the requirement of supervised data. Several methods for unsupervised discovery of interpretable directions in GAN latent spaces have shown interesting results on natural images. This work explores the potential of applying these techniques on medical images by training a GAN and a VAE on thoracic CT scans and using an unsupervised method to discover interpretable directions in the resulting latent space. We find several directions corresponding to non-trivial image transformations, such as rotation or breast size. Furthermore, the directions show that the generative models capture 3D structure despite being presented only with 2D data. The results show that unsupervised methods to discover interpretable directions in GANs generalize to VAEs and can be applied to medical images. This opens a wide array of future work using these methods in medical image analysis.",
keywords = "eess.IV, cs.CV, cs.LG",
author = "Julian Sch{\"o}n and Raghavendra Selvan and Jens Petersen",
year = "2022",
doi = "10.1007/978-3-031-18576-2_3",
language = "English",
isbn = "978-3-031-18575-5",
series = "Lecture Notes in Computer Science",
publisher = "Springer",
pages = "24--33",
booktitle = "Deep Generative Models",
address = "Switzerland",
note = "null ; Conference date: 22-09-2022",

}

RIS

TY - GEN

T1 - Interpreting Latent Spaces of Generative Models for Medical Images using Unsupervised Methods

AU - Schön, Julian

AU - Selvan, Raghavendra

AU - Petersen, Jens

PY - 2022

Y1 - 2022

N2 - Generative models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) play an increasingly important role in medical image analysis. The latent spaces of these models often show semantically meaningful directions corresponding to human-interpretable image transformations. However, until now, their exploration for medical images has been limited due to the requirement of supervised data. Several methods for unsupervised discovery of interpretable directions in GAN latent spaces have shown interesting results on natural images. This work explores the potential of applying these techniques on medical images by training a GAN and a VAE on thoracic CT scans and using an unsupervised method to discover interpretable directions in the resulting latent space. We find several directions corresponding to non-trivial image transformations, such as rotation or breast size. Furthermore, the directions show that the generative models capture 3D structure despite being presented only with 2D data. The results show that unsupervised methods to discover interpretable directions in GANs generalize to VAEs and can be applied to medical images. This opens a wide array of future work using these methods in medical image analysis.

AB - Generative models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) play an increasingly important role in medical image analysis. The latent spaces of these models often show semantically meaningful directions corresponding to human-interpretable image transformations. However, until now, their exploration for medical images has been limited due to the requirement of supervised data. Several methods for unsupervised discovery of interpretable directions in GAN latent spaces have shown interesting results on natural images. This work explores the potential of applying these techniques on medical images by training a GAN and a VAE on thoracic CT scans and using an unsupervised method to discover interpretable directions in the resulting latent space. We find several directions corresponding to non-trivial image transformations, such as rotation or breast size. Furthermore, the directions show that the generative models capture 3D structure despite being presented only with 2D data. The results show that unsupervised methods to discover interpretable directions in GANs generalize to VAEs and can be applied to medical images. This opens a wide array of future work using these methods in medical image analysis.

KW - eess.IV

KW - cs.CV

KW - cs.LG

U2 - 10.1007/978-3-031-18576-2_3

DO - 10.1007/978-3-031-18576-2_3

M3 - Article in proceedings

SN - 978-3-031-18575-5

T3 - Lecture Notes in Computer Science

SP - 24

EP - 33

BT - Deep Generative Models

PB - Springer

Y2 - 22 September 2022

ER -

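The abstract above describes inspecting the latent space of a trained GAN or VAE by moving a latent code along a discovered direction and decoding the result. As a rough illustration only, and not the authors' implementation, the following Python/PyTorch sketch shows how such a latent traversal is typically visualized. The generator G, the latent size latent_dim, the helper traverse, and the randomly chosen direction are all hypothetical placeholders: in practice the direction would come from an unsupervised discovery method and G from a generative model trained on thoracic CT slices.

import torch

# Hypothetical placeholders: in practice `G` is a pretrained GAN generator or
# VAE decoder mapping latent codes to 2D CT slices, and `direction` comes from
# an unsupervised discovery method rather than being drawn at random.
latent_dim = 256
direction = torch.randn(latent_dim)
direction = direction / direction.norm()

def traverse(G, z0, direction, scales=(-3.0, -1.5, 0.0, 1.5, 3.0)):
    """Decode one latent code shifted along `direction` by several scales."""
    with torch.no_grad():
        batch = torch.stack([z0 + s * direction for s in scales])
        return G(batch)  # e.g. shape (len(scales), 1, H, W)

# Dummy generator so the sketch runs end to end; replace with a trained model.
dummy_G = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 64 * 64),
    torch.nn.Unflatten(1, (1, 64, 64)),
)
z0 = torch.randn(latent_dim)
images = traverse(dummy_G, z0, direction)
print(images.shape)  # torch.Size([5, 1, 64, 64])

Viewing the decoded slices side by side across the scale values is how one judges whether a direction corresponds to a human-interpretable transformation, such as the rotation reported in the paper.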