Training recurrent neural networks robust to incomplete data: application to Alzheimer’s disease progression modeling

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Training recurrent neural networks robust to incomplete data: application to Alzheimer’s disease progression modeling. / Mehdipour Ghazi, Mostafa; Nielsen, Mads; Pai, Akshay Sadananda Uppinakudru; Cardoso, M. Jorge; Modat, Marc; Ourselin, Sebastien; Sørensen, Lauge.

In: Medical Image Analysis, Vol. 53, 2019, pp. 39-46.


Harvard

Mehdipour Ghazi, M, Nielsen, M, Pai, ASU, Cardoso, MJ, Modat, M, Ourselin, S & Sørensen, L 2019, 'Training recurrent neural networks robust to incomplete data: application to Alzheimer’s disease progression modeling', Medical Image Analysis, vol. 53, pp. 39-46. https://doi.org/10.1016/j.media.2019.01.004

APA

Mehdipour Ghazi, M., Nielsen, M., Pai, A. S. U., Cardoso, M. J., Modat, M., Ourselin, S., & Sørensen, L. (2019). Training recurrent neural networks robust to incomplete data: application to Alzheimer’s disease progression modeling. Medical Image Analysis, 53, 39-46. https://doi.org/10.1016/j.media.2019.01.004

Vancouver

Mehdipour Ghazi M, Nielsen M, Pai ASU, Cardoso MJ, Modat M, Ourselin S, et al. Training recurrent neural networks robust to incomplete data: application to Alzheimer’s disease progression modeling. Medical Image Analysis. 2019;53:39-46. https://doi.org/10.1016/j.media.2019.01.004

Author

Mehdipour Ghazi, Mostafa ; Nielsen, Mads ; Pai, Akshay Sadananda Uppinakudru ; Cardoso, M. Jorge ; Modat, Marc ; Ourselin, Sebastien ; Sørensen, Lauge. / Training recurrent neural networks robust to incomplete data: application to Alzheimer’s disease progression modeling. In: Medical Image Analysis. 2019; Vol. 53, pp. 39-46.

Bibtex

@article{21bb456afce04dbf978f415277c08920,
title = "Training recurrent neural networks robust to incomplete data: application to Alzheimer{\textquoteright}s disease progression modeling",
abstract = "Disease progression modeling (DPM) using longitudinal data is a challenging machine learning task. Existing DPM algorithms neglect temporal dependencies among measurements, make parametric assumptions about biomarker trajectories, do not model multiple biomarkers jointly, and need an alignment of subjects{\textquoteright} trajectories. In this paper, recurrent neural networks (RNNs) are utilized to address these issues. However, in many cases, longitudinal cohorts contain incomplete data, which hinders the application of standard RNNs and requires a pre-processing step such as imputation of the missing values. Instead, we propose a generalized training rule for the most widely used RNN architecture, long short-term memory (LSTM) networks, that can handle both missing predictor and target values. The proposed LSTM algorithm is applied to model the progression of Alzheimer's disease (AD) using six volumetric magnetic resonance imaging (MRI) biomarkers, i.e., volumes of ventricles, hippocampus, whole brain, fusiform, middle temporal gyrus, and entorhinal cortex, and it is compared to standard LSTM networks with data imputation and a parametric, regression-based DPM method. The results show that the proposed algorithm achieves a significantly lower mean absolute error (MAE) than the alternatives with p < 0.05 using Wilcoxon signed rank test in predicting values of almost all of the MRI biomarkers. Moreover, a linear discriminant analysis (LDA) classifier applied to the predicted biomarker values produces a significantly larger area under the receiver operating characteristic curve (AUC) of 0.90 vs. at most 0.84 with p < 0.001 using McNemar's test for clinical diagnosis of AD. Inspection of MAE curves as a function of the amount of missing data reveals that the proposed LSTM algorithm achieves the best performance up until more than 74% missing values. Finally, it is illustrated how the method can successfully be applied to data with varying time intervals. This paper shows that built-in handling of missing values in training an LSTM network benefits the application of RNNs in neurodegenerative disease progression modeling in longitudinal cohorts.",
author = "{Mehdipour Ghazi}, Mostafa and Mads Nielsen and Pai, {Akshay Sadananda Uppinakudru} and Cardoso, {M. Jorge} and Marc Modat and Sebastien Ourselin and Lauge S{\o}rensen",
year = "2019",
doi = "10.1016/j.media.2019.01.004",
language = "English",
volume = "53",
pages = "39--46",
journal = "Medical Image Analysis",
issn = "1361-8415",
publisher = "Elsevier",
}

RIS

TY - JOUR

T1 - Training recurrent neural networks robust to incomplete data

T2 - application to Alzheimer’s disease progression modeling

AU - Mehdipour Ghazi, Mostafa

AU - Nielsen, Mads

AU - Pai, Akshay Sadananda Uppinakudru

AU - Cardoso, M. Jorge

AU - Modat, Marc

AU - Ourselin, Sebastien

AU - Sørensen, Lauge

PY - 2019

Y1 - 2019

N2 - Disease progression modeling (DPM) using longitudinal data is a challenging machine learning task. Existing DPM algorithms neglect temporal dependencies among measurements, make parametric assumptions about biomarker trajectories, do not model multiple biomarkers jointly, and need an alignment of subjects’ trajectories. In this paper, recurrent neural networks (RNNs) are utilized to address these issues. However, in many cases, longitudinal cohorts contain incomplete data, which hinders the application of standard RNNs and requires a pre-processing step such as imputation of the missing values. Instead, we propose a generalized training rule for the most widely used RNN architecture, long short-term memory (LSTM) networks, that can handle both missing predictor and target values. The proposed LSTM algorithm is applied to model the progression of Alzheimer's disease (AD) using six volumetric magnetic resonance imaging (MRI) biomarkers, i.e., volumes of ventricles, hippocampus, whole brain, fusiform, middle temporal gyrus, and entorhinal cortex, and it is compared to standard LSTM networks with data imputation and a parametric, regression-based DPM method. The results show that the proposed algorithm achieves a significantly lower mean absolute error (MAE) than the alternatives with p < 0.05 using Wilcoxon signed rank test in predicting values of almost all of the MRI biomarkers. Moreover, a linear discriminant analysis (LDA) classifier applied to the predicted biomarker values produces a significantly larger area under the receiver operating characteristic curve (AUC) of 0.90 vs. at most 0.84 with p < 0.001 using McNemar's test for clinical diagnosis of AD. Inspection of MAE curves as a function of the amount of missing data reveals that the proposed LSTM algorithm achieves the best performance up until more than 74% missing values. Finally, it is illustrated how the method can successfully be applied to data with varying time intervals. This paper shows that built-in handling of missing values in training an LSTM network benefits the application of RNNs in neurodegenerative disease progression modeling in longitudinal cohorts.

AB - Disease progression modeling (DPM) using longitudinal data is a challenging machine learning task. Existing DPM algorithms neglect temporal dependencies among measurements, make parametric assumptions about biomarker trajectories, do not model multiple biomarkers jointly, and need an alignment of subjects’ trajectories. In this paper, recurrent neural networks (RNNs) are utilized to address these issues. However, in many cases, longitudinal cohorts contain incomplete data, which hinders the application of standard RNNs and requires a pre-processing step such as imputation of the missing values. Instead, we propose a generalized training rule for the most widely used RNN architecture, long short-term memory (LSTM) networks, that can handle both missing predictor and target values. The proposed LSTM algorithm is applied to model the progression of Alzheimer's disease (AD) using six volumetric magnetic resonance imaging (MRI) biomarkers, i.e., volumes of ventricles, hippocampus, whole brain, fusiform, middle temporal gyrus, and entorhinal cortex, and it is compared to standard LSTM networks with data imputation and a parametric, regression-based DPM method. The results show that the proposed algorithm achieves a significantly lower mean absolute error (MAE) than the alternatives with p < 0.05 using Wilcoxon signed rank test in predicting values of almost all of the MRI biomarkers. Moreover, a linear discriminant analysis (LDA) classifier applied to the predicted biomarker values produces a significantly larger area under the receiver operating characteristic curve (AUC) of 0.90 vs. at most 0.84 with p < 0.001 using McNemar's test for clinical diagnosis of AD. Inspection of MAE curves as a function of the amount of missing data reveals that the proposed LSTM algorithm achieves the best performance up until more than 74% missing values. Finally, it is illustrated how the method can successfully be applied to data with varying time intervals. This paper shows that built-in handling of missing values in training an LSTM network benefits the application of RNNs in neurodegenerative disease progression modeling in longitudinal cohorts.

U2 - 10.1016/j.media.2019.01.004

DO - 10.1016/j.media.2019.01.004

M3 - Journal article

C2 - 30682584

VL - 53

SP - 39

EP - 46

JO - Medical Image Analysis

JF - Medical Image Analysis

SN - 1361-8415

ER -
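
The abstract above centers on one technical idea: training an LSTM so that missing predictor and target values are handled inside the training procedure rather than imputed beforehand. The following is a minimal PyTorch sketch of that idea, not the authors' generalized training rule: missing inputs are crudely zero-filled and missing targets are masked out of the loss so they contribute no gradient. BiomarkerLSTM and masked_mae_loss are hypothetical names, and the six-biomarker, next-visit prediction setup merely mirrors the experiment described in the abstract.

import torch
import torch.nn as nn

class BiomarkerLSTM(nn.Module):
    """Illustrative LSTM predicting the next visit's six MRI volume biomarkers."""
    def __init__(self, n_biomarkers=6, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(n_biomarkers, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_biomarkers)

    def forward(self, x):
        # x: (subjects, visits, biomarkers); NaN marks a missing measurement.
        x = torch.nan_to_num(x, nan=0.0)  # simple zero-fill stands in for the paper's input handling
        h, _ = self.lstm(x)
        return self.head(h)

def masked_mae_loss(pred, target):
    # Mean absolute error over observed targets only; missing (NaN) targets
    # are excluded from the average and therefore produce no gradient.
    mask = ~torch.isnan(target)
    return (pred - torch.nan_to_num(target, nan=0.0))[mask].abs().mean()

# Toy usage: 4 subjects, 5 visits, 6 biomarkers, roughly 30% of values missing.
x = torch.randn(4, 5, 6)
x[torch.rand_like(x) < 0.3] = float('nan')
inputs, targets = x[:, :-1], x[:, 1:]      # predict the following visit
model = BiomarkerLSTM()
loss = masked_mae_loss(model(inputs), targets)
loss.backward()

In the paper, the missing-data handling is built into the LSTM training rule itself; the masked loss above only reproduces the target side of that behavior and is meant as an orientation aid, not a reimplementation.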
