Benefits of auxiliary information in deep learning-based teeth segmentation
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
This paper evaluates deep learning methods for segmentation of dental arches in panoramic radiographs. Our main aim is to test whether introducing auxiliary learning goals can improve image segmentation. We implement three multi-output networks that, alongside the dental arches, detect (1) patient characteristics (e.g., missing teeth, no dental artifacts), (2) the buccal area, and (3) individual teeth. These design choices may restrict the region of interest and improve the internal representation of teeth shapes. The models are based on the modified U-net [1] architecture and optimized with Dice loss. Two data sets, of 1500 and 116 samples, collected at different institutions [2, 3], were used for training and testing the methods. Additionally, we evaluated the networks on various patient conditions, namely: 32 teeth, < 32 teeth, dental artifacts, and no dental artifacts. The standard U-net architecture reaches the highest Dice scores of 0.932 on the larger data set [2] and 0.946 on the group of patients with no missing teeth. The model that outputs probability masks for individual teeth reaches the best Dice score of 0.903 on the smaller data set [3]. We observe certain benefits in augmenting teeth segmentation with other information sources, which indicates the potential of this research direction and justifies further investigation.
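The networks described above are optimized with Dice loss, which measures the overlap between a predicted probability mask and the ground-truth segmentation. A minimal sketch of the soft Dice loss is shown below; the function name and the smoothing constant `eps` are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P * T| / (|P| + |T|).

    pred   -- predicted probability mask (values in [0, 1])
    target -- binary ground-truth mask
    eps    -- small smoothing term to avoid division by zero
    """
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

# Perfect overlap gives a loss near 0; disjoint masks give a loss near 1.
mask = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
print(dice_loss(mask, mask))          # close to 0.0
print(dice_loss(mask, 1.0 - mask))    # close to 1.0
```

A Dice score of 0.932, as reported for the larger data set, corresponds to a loss of roughly 0.068 under this formulation.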
|Title of host publication||Medical Imaging 2022: Image Processing|
|Editors||Olivier Colliot, Ivana Isgum, Bennett A. Landman, Murray H. Loew|
|Publication status||Published - 2022|
|Event||Medical Imaging 2022: Image Processing - Virtual, Online|
Duration: 21 Mar 2022 → 27 Mar 2022
|Conference||Medical Imaging 2022: Image Processing|
|Period||21/03/2022 → 27/03/2022|
|Sponsor||Philips Healthcare, The Society of Photo-Optical Instrumentation Engineers (SPIE)|
|Series||Progress in Biomedical Optics and Imaging - Proceedings of SPIE|
© 2022 SPIE.