Augmentation based unsupervised domain adaptation

Research output: Working paper › Preprint


  • Fulltext

    Final published version, 2.99 MB, PDF document

  • Mauricio Orbes-Arteaga
  • Thomas Varsavsky
  • Lauge Sørensen
  • Mads Nielsen
  • Akshay Sadananda Uppinakudru Pai
  • Sebastien Ourselin
  • Marc Modat
  • M. Jorge Cardoso
The adoption of deep learning in medical image analysis has led to the development of state-of-the-art strategies in several applications, such as disease classification, abnormality detection, and segmentation. However, even the most advanced methods require a large and diverse amount of data to generalize. Because data acquisition and annotation are expensive in realistic clinical scenarios, deep learning models trained on small and unrepresentative data tend to underperform when deployed on data that differs from the training data (e.g., data from different scanners). In this work, we propose a domain adaptation methodology to alleviate this problem in segmentation models. Our approach takes advantage of the properties of adversarial domain adaptation and consistency training to achieve more robust adaptation. Using two datasets with white matter hyperintensities (WMH) annotations, we demonstrate that the proposed method improves model generalization even in corner cases where individual strategies tend to fail.
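The consistency-training idea mentioned in the abstract can be illustrated with a minimal NumPy sketch: a segmentation network is penalised when its class probabilities for an input disagree with those for an augmented view of the same input. All names and values below are hypothetical illustrations, not the authors' implementation, and the adversarial domain-discrimination term is omitted for brevity.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_orig, logits_aug):
    """Mean squared difference between per-pixel class probabilities
    predicted for an image and for an augmented view of it."""
    p = softmax(logits_orig)
    q = softmax(logits_aug)
    return float(np.mean((p - q) ** 2))

# Toy per-pixel logits for a 2x2 image with 3 classes (hypothetical stand-in
# for a segmentation network's output on original vs. augmented input).
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 2, 3))

loss_same = consistency_loss(logits, logits)  # identical predictions -> 0.0
perturbed = logits + rng.normal(scale=0.5, size=logits.shape)
loss_diff = consistency_loss(logits, perturbed)  # disagreement -> positive loss
```

In a full training loop this term would be added to the supervised segmentation loss on source-domain data, encouraging predictions on unlabelled target-domain images to be stable under augmentation.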
Original languageEnglish
Number of pages12
Publication statusPublished - 2022


