PhD defence by Pengfei Diao


Title

Automated Mammographic Risk Scoring and Domain Adaptation for X-ray Images

Abstract

Breast cancer is the most common cancer in women and the leading cause of cancer death in women worldwide. Many European countries have introduced national mammography screening programmes in order to detect and treat breast cancer at an early stage and hence reduce breast cancer mortality. However, periodic breast screening not only increases the burden on public spending but may also increase women's cancer risk through exposure to unnecessary radiation. Introducing automated breast cancer risk assessment, which supports personalized breast screening plans, could potentially help reduce public spending and also encourage women to participate in breast screening.

In recent years, automated disease diagnosis and prognosis based on medical images has been shifting rapidly from traditionally handcrafted features to deep learning methods that learn features directly from the image data. Convolutional neural networks (CNNs) have been successfully applied to a wide range of medical image classification tasks and achieve state-of-the-art performance in the majority of these applications. Training CNNs, however, requires vast amounts of computational power as well as abundant labeled image data, which makes their application prohibitive in settings where both computational resources and medical image annotators are limited. Furthermore, despite generalizing well to unseen data from the same source they were trained on, CNNs still suffer from domain shift: they underperform on new data acquired from different sources.

The work presented in this thesis is two-fold. First, we developed a deep learning method for automated breast cancer risk scoring based on mammograms, designed for settings with limited computational resources and labeled data. Our proposed method uses auto-encoders to train convolutional neural networks in a layer-wise fashion. Our models were trained for two different tasks, namely breast dense tissue segmentation and mammographic texture risk scoring. We compared our automated breast tissue segmentation with manual Cumulus-like segmentation from a trained radiologist, and our texture risk model with two state-of-the-art handcrafted feature-based scoring methods. Our results showed that the proposed method was able to learn meaningful features directly from the data for both breast density segmentation and texture scoring. Compared to the radiologist's manual scores and other existing automated scores, our method achieved competitive performance.
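The layer-wise training idea described above can be sketched as follows. This is a minimal illustrative example, not the thesis implementation: it uses dense layers and a plain-numpy gradient loop for brevity, whereas the thesis applies the idea to convolutional layers. All function names are hypothetical.

```python
import numpy as np

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """Train a single-layer autoencoder with tied weights and return
    the encoder parameters (W, b).  Each layer learns to reconstruct
    its own input, so no labels are needed at this stage."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b = np.zeros(n_hidden)
    c = np.zeros(n_in)
    for _ in range(epochs):
        H = np.tanh(X @ W + b)        # encode
        R = H @ W.T + c               # decode (tied weights)
        err = R - X                   # reconstruction error
        dH = (err @ W) * (1 - H**2)   # backprop through tanh
        W -= lr * (X.T @ dH + err.T @ H) / len(X)
        b -= lr * dH.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b

def layerwise_pretrain(X, layer_sizes):
    """Greedy layer-wise pretraining: train one autoencoder per layer,
    feeding each layer's encoded output to the next."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(H, n_hidden)
        params.append((W, b))
        H = np.tanh(H @ W + b)
    return params
```

After pretraining, the stacked encoder weights would typically initialize a network that is fine-tuned on the supervised task (segmentation or risk scoring), which is what makes the approach attractive when labeled data are scarce.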

Second, we analyzed generative adversarial network (GAN) methods for solving single-source unsupervised domain adaptation problems, under the assumption that images from the target domain are unlabeled and only available at test time. We evaluated the cross-source generalization performance of CNNs on a lung disease classification task based on chest X-ray images. We proposed two novel histogram-based GANs that transform images from the target domain to the source domain; the trained generator is used as a preprocessor on the input images. Comparing the proposed method to existing standard methods, we found that current pixel-level local transformations are not reliable enough for such medical image classification tasks, whereas intensity-level global transformation methods are more promising and dependable.
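To make the distinction concrete, a classic non-GAN baseline for an intensity-level global transformation is histogram matching: every pixel with a given input intensity is mapped to the same output intensity, so the image structure is preserved while the overall intensity distribution is aligned with the source domain. A minimal numpy sketch (illustrative only; the thesis uses learned, GAN-based transformations):

```python
import numpy as np

def match_histogram(target_img, source_ref):
    """Map target-domain pixel intensities onto the intensity
    distribution of a source-domain reference image via CDF matching.
    The mapping is global and monotonic: one output value per input
    value, so anatomical structure is untouched."""
    t_vals, t_idx, t_counts = np.unique(
        target_img.ravel(), return_inverse=True, return_counts=True)
    s_vals, s_counts = np.unique(source_ref.ravel(), return_counts=True)
    t_cdf = np.cumsum(t_counts) / target_img.size  # target quantiles
    s_cdf = np.cumsum(s_counts) / source_ref.size  # source quantiles
    # For each target quantile, look up the matching source intensity.
    mapped = np.interp(t_cdf, s_cdf, s_vals)
    return mapped[t_idx].reshape(target_img.shape)
```

A pixel-level local transformation, by contrast, may change each pixel depending on its neighbourhood, which risks introducing or removing fine structures that the downstream classifier relies on.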

Supervisors

Principal Supervisor Christian Igel
Co-supervisor Mads Nielsen

Assessment Committee

Professor Erik Dam, DIKU
Professor Susanne Winter, Hochschule Ruhr West
Professor Wiro Niessen, Erasmus University Rotterdam

Moderator of defence: Martin Lillholm, DIKU

The defence will take place physically, but it can also be followed online via Zoom.

For a digital copy of the thesis, please visit https://di.ku.dk/english/research/phd/.