A clinically motivated self-supervised approach for content-based image retrieval of CT liver images

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Documents

  • Full text

    Publisher's published version, 174 KB, application/octet-stream

  • Kristoffer Knutsen Wickstrøm
  • Eirik Agnalt Østmo
  • Keyur Radiya
  • Karl Øyvind Mikalsen
  • Michael Christian Kampffmeyer
  • Robert Jenssen

Deep learning-based approaches for content-based image retrieval (CBIR) of computed tomography (CT) liver images are an active field of research, but they suffer from critical limitations. First, they rely heavily on labeled data, which can be challenging and costly to acquire. Second, they lack transparency and explainability, which limits the trustworthiness of deep CBIR systems. We address these limitations by (1) proposing a self-supervised learning framework that incorporates domain knowledge into the training procedure, and (2) providing the first representation learning explainability analysis in the context of CBIR of CT liver images. Results demonstrate improved performance compared to the standard self-supervised approach across several metrics, as well as improved generalization across datasets. Our explainability analysis reveals new insights into the feature extraction process. Lastly, we perform a case study with cross-examination CBIR that demonstrates the usability of our proposed framework. We believe that our proposed framework could play a vital role in creating trustworthy deep CBIR systems that can successfully take advantage of unlabeled data.
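To make the retrieval step of a CBIR pipeline concrete, the following is a minimal sketch (not the authors' implementation): images are mapped by a learned encoder to embedding vectors, and retrieval ranks a gallery by cosine similarity to the query embedding. The embedding dimension and all vector values below are hypothetical toy data standing in for encoder outputs.

```python
import numpy as np

def retrieve(query_emb, gallery_embs, k=3):
    """Return indices of the k gallery embeddings most similar to the query."""
    # L2-normalize so that the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q
    # Sort by descending similarity and keep the top-k indices.
    return np.argsort(-sims)[:k], sims

# Toy gallery of four 3-dimensional "embeddings" (hypothetical values).
gallery = np.array([[1.0, 0.0, 0.0],
                    [0.9, 0.1, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])

idx, sims = retrieve(query, gallery, k=2)
print(idx)  # indices of the two most similar gallery items
```

In a self-supervised setting, the encoder producing these embeddings would be trained without labels (e.g., with a contrastive objective), so the quality of retrieval depends entirely on how well the learned representation captures clinically relevant similarity.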

Original language: English
Article number: 102239
Journal: Computerized Medical Imaging and Graphics
Volume: 107
Number of pages: 12
ISSN: 0895-6111
DOI
Status: Published - 2023

Bibliographic note

Funding Information:
This work was supported by The Research Council of Norway (RCN), through its Centre for Research-based Innovation funding scheme [grant number 309439] and Consortium Partners; RCN FRIPRO [grant number 315029]; RCN IKTPLUSS [grant number 303514]; and the UiT Thematic Initiative.

Publisher Copyright:
© 2023 The Author(s)

ID: 347977905