Augmentation based unsupervised domain adaptation

Publication: Working paper › Preprint › Research

Documents

  • Full text

    Publisher's published version, 2.99 MB, PDF document

  • Mauricio Orbes-Arteaga
  • Thomas Varsavsky
  • Lauge Sørensen
  • Mads Nielsen
  • Akshay Sadananda Uppinakudru Pai
  • Sebastien Ourselin
  • Marc Modat
  • M. Jorge Cardoso
The adoption of deep learning in medical image analysis has led to the development of state-of-the-art strategies in several applications, such as disease classification, abnormality detection, and segmentation. However, even the most advanced methods require large and diverse amounts of data to generalize. Because data acquisition and annotation are expensive in realistic clinical scenarios, deep learning models trained on small and unrepresentative datasets tend to underperform when deployed on data that differs from the training data (e.g. data from different scanners). In this work, we propose a domain adaptation methodology to alleviate this problem in segmentation models. Our approach takes advantage of the properties of adversarial domain adaptation and consistency training to achieve more robust adaptation. Using two datasets with white matter hyperintensity (WMH) annotations, we demonstrate that the proposed method improves model generalization even in corner cases where individual strategies tend to fail.
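To make the combination of the two ingredients concrete, the sketch below shows how an adversarial domain loss and an augmentation-consistency loss can be added to a supervised segmentation loss. This is a minimal PyTorch-style illustration under assumed names and shapes (SegNet, the discriminator, the Gaussian-noise augmentation, and the loss weights are all illustrative), not the authors' architecture or training recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer commonly used for adversarial domain adaptation."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lamb * grad_output, None

class SegNet(nn.Module):
    """Toy segmentation network returning logits and bottleneck features."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        feat = self.enc(x)
        return self.head(feat), feat

# Domain discriminator on pooled features: source (0) vs. target (1).
disc = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

def training_step(model, x_src, y_src, x_tgt, lamb=1.0, beta=1.0):
    # 1) Supervised segmentation loss on annotated source-domain images.
    logits_src, feat_src = model(x_src)
    loss_seg = F.cross_entropy(logits_src, y_src)

    # 2) Adversarial domain loss: the discriminator separates domains while the
    #    reversed gradient pushes the encoder toward domain-invariant features.
    logits_tgt, feat_tgt = model(x_tgt)
    feats = torch.cat([feat_src, feat_tgt], dim=0)
    dom_labels = torch.cat([torch.zeros(x_src.size(0)),
                            torch.ones(x_tgt.size(0))]).long().to(feats.device)
    dom_logits = disc(GradReverse.apply(feats, lamb))
    loss_adv = F.cross_entropy(dom_logits, dom_labels)

    # 3) Consistency loss: predictions on an augmented view of unlabeled target
    #    images should agree with predictions on the original view.
    x_tgt_aug = x_tgt + 0.05 * torch.randn_like(x_tgt)  # stand-in augmentation
    logits_tgt_aug, _ = model(x_tgt_aug)
    loss_cons = F.mse_loss(logits_tgt_aug.softmax(1),
                           logits_tgt.softmax(1).detach())

    return loss_seg + loss_adv + beta * loss_cons
```

The Gaussian noise above is only a stand-in for whatever image augmentations drive the consistency term; in practice the segmentation network, discriminator placement, and loss weighting would follow the paper's own setup.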
Original language: English
Publisher: arXiv.org
Number of pages: 12
Status: Published - 2022


ID: 339908099