Reducing Annotation Need in Self-explanatory Models for Lung Nodule Diagnosis
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Reducing Annotation Need in Self-explanatory Models for Lung Nodule Diagnosis. / Lu, Jiahao; Yin, Chong; Krause, Oswin; Erleben, Kenny; Nielsen, Michael Bachmann; Darkner, Sune.
Interpretability of Machine Intelligence in Medical Image Computing. ed. / M Reyes; PH Abreu; J Cardoso. Springer, 2022. p. 33-43 (Lecture Notes in Computer Science, Vol. 13611).
RIS
TY - GEN
T1 - Reducing Annotation Need in Self-explanatory Models for Lung Nodule Diagnosis
AU - Lu, Jiahao
AU - Yin, Chong
AU - Krause, Oswin
AU - Erleben, Kenny
AU - Nielsen, Michael Bachmann
AU - Darkner, Sune
PY - 2022
Y1 - 2022
N2 - Feature-based self-explanatory methods explain their classification in terms of human-understandable features. In the medical imaging community, this semantic matching of clinical knowledge adds significantly to the trustworthiness of the AI. However, the cost of additional annotation of features remains a pressing issue. We address this problem by proposing cRedAnno, a data-/annotation-efficient self-explanatory approach for lung nodule diagnosis. cRedAnno considerably reduces the annotation need by introducing self-supervised contrastive learning to alleviate the burden of learning most parameters from annotation, replacing end-to-end training with two-stage training. When training with hundreds of nodule samples and only 1% of their annotations, cRedAnno achieves competitive accuracy in predicting malignancy, meanwhile significantly surpassing most previous works in predicting nodule attributes. Visualisation of the learned space further indicates that the correlation between the clustering of malignancy and nodule attributes coincides with clinical knowledge. Our complete code is open-source available: https://github.com/diku-dk/credanno.
AB - Feature-based self-explanatory methods explain their classification in terms of human-understandable features. In the medical imaging community, this semantic matching of clinical knowledge adds significantly to the trustworthiness of the AI. However, the cost of additional annotation of features remains a pressing issue. We address this problem by proposing cRedAnno, a data-/annotation-efficient self-explanatory approach for lung nodule diagnosis. cRedAnno considerably reduces the annotation need by introducing self-supervised contrastive learning to alleviate the burden of learning most parameters from annotation, replacing end-to-end training with two-stage training. When training with hundreds of nodule samples and only 1% of their annotations, cRedAnno achieves competitive accuracy in predicting malignancy, meanwhile significantly surpassing most previous works in predicting nodule attributes. Visualisation of the learned space further indicates that the correlation between the clustering of malignancy and nodule attributes coincides with clinical knowledge. Our complete code is open-source available: https://github.com/diku-dk/credanno.
KW - Explainable AI
KW - Lung nodule diagnosis
KW - Self-explanatory model
KW - Intrinsic explanation
KW - Self-supervised learning
U2 - 10.1007/978-3-031-17976-1_4
DO - 10.1007/978-3-031-17976-1_4
M3 - Article in proceedings
SN - 978-3-031-17975-4
T3 - Lecture Notes in Computer Science
SP - 33
EP - 43
BT - Interpretability of Machine Intelligence in Medical Image Computing
A2 - Reyes, M
A2 - Abreu, PH
A2 - Cardoso, J
PB - Springer
T2 - 5th International Workshop on Interpretability of Machine Intelligence in Medical Image Computing (IMIMIC)
Y2 - 22 September 2022
ER -
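The abstract above describes a two-stage scheme: a representation is first learned without labels (self-supervised contrastive learning), and a classifier is then fitted using only a small labelled fraction (roughly 1% of annotations). As an illustration only, here is a minimal NumPy sketch of that idea on toy data; it is not the authors' cRedAnno implementation, and the frozen random projection merely stands in for a contrastively pretrained encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for nodule features: two classes with shifted means.
n = 400
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, 16)),
               rng.normal(1.5, 1.0, (n // 2, 16))])
y = np.repeat([0, 1], n // 2)

# Stage 1 (sketch): a frozen random projection plays the role of an
# encoder pretrained with self-supervised contrastive learning
# (the actual pretraining is not reproduced here).
W = rng.normal(size=(16, 8))
Z = X @ W
Z /= np.linalg.norm(Z, axis=1, keepdims=True)  # unit-norm embeddings

# Stage 2: fit a classifier using only ~1% of the labels (4 of 400),
# here a simple nearest-centroid rule on the frozen embeddings.
idx = np.concatenate([rng.choice(np.where(y == c)[0], size=2, replace=False)
                      for c in (0, 1)])
centroids = np.stack([Z[idx][y[idx] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
```

The point of the sketch is the division of labour: almost all parameters (the encoder) are fixed before any labels are used, so the labelled 1% only has to position a very small classifier, which is why the annotation need drops.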