The Medical Segmentation Decathlon

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Documents

  • Full text

    Publisher's published version, 10 MB, PDF document

  • Michela Antonelli
  • Annika Reinke
  • Spyridon Bakas
  • Keyvan Farahani
  • Annette Kopp-Schneider
  • Bennett A. Landman
  • Geert Litjens
  • Bjoern Menze
  • Olaf Ronneberger
  • Ronald M. Summers
  • Bram van Ginneken
  • Michel Bilello
  • Patrick Bilic
  • Patrick F. Christ
  • Richard K.G. Do
  • Marc J. Gollub
  • Stephan H. Heckers
  • Henkjan Huisman
  • William R. Jarnagin
  • Maureen K. McHugo
  • Sandy Napel
  • Jennifer S. Golia Pernicka
  • Kawal Rhode
  • Catalina Tobon-Gomez
  • Eugene Vorontsov
  • James A. Meakin
  • Sebastien Ourselin
  • Manuel Wiesenfarth
  • Pablo Arbeláez
  • Byeonguk Bae
  • Sihong Chen
  • Laura Daza
  • Jianjiang Feng
  • Baochun He
  • Fabian Isensee
  • Yuanfeng Ji
  • Fucang Jia
  • Ildoo Kim
  • Klaus Maier-Hein
  • Dorit Merhof
  • Akshay Pai
  • Beomhee Park
  • Ramin Rezaiifar
  • Oliver Rippel
  • Ignacio Sarasua
  • Wei Shen
  • Jaemin Son
  • Christian Wachinger
  • Liansheng Wang
  • Yan Wang
  • Yingda Xia
  • Daguang Xu
  • Zhanwei Xu
  • Yefeng Zheng
  • Amber L. Simpson
  • Lena Maier-Hein
  • M. Jorge Cardoso

International challenges have become the de facto standard for the comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD)—a biomedical image analysis challenge in which algorithms compete across a multitude of both tasks and modalities—to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems over the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate for algorithmic generalizability; (3) the training of accurate AI segmentation models is now accessible to scientists who are not versed in AI model training.
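Comparative assessment of segmentation algorithms, as in the MSD, typically rests on overlap metrics such as the Dice similarity coefficient. The sketch below is a minimal, hypothetical illustration of that metric (the function name and implementation are illustrative, not the challenge's actual evaluation code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation masks.

    Returns a value in [0, 1]: 1 for perfect overlap, ~0 for disjoint masks.
    The small epsilon guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

For example, comparing a predicted mask against itself yields a score of 1.0, while comparing two disjoint masks yields a score near 0.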

Original language: English
Article number: 4128
Journal: Nature Communications
Volume: 13
Issue number: 1
Number of pages: 1
ISSN: 2041-1723
DOI
Status: Published - 2022

Bibliographic note

Funding Information:
This work was supported by the UK Research and Innovation London Medical Imaging & Artificial Intelligence Center for Value-Based Healthcare. Investigators received support from the Wellcome/EPSRC Center for Medical Engineering (WT203148) and the Wellcome Flagship Program (WT213038). The research was also supported by the Bavarian State Ministry of Science and the Arts, coordinated by the Bavarian Research Institute for Digital Transformation, and by the Helmholtz Imaging Platform (HIP), a platform of the Helmholtz Incubator on Information and Data Science. Team CerebriuDIKU gratefully acknowledges support from the Independent Research Fund Denmark through the project U-Sleep (project number 9131-00099B). R.M.S. is supported by the Intramural Research Program of the National Institutes of Health Clinical Center. G.L. reported research grants from the Dutch Cancer Society, the Netherlands Organization for Scientific Research (NWO), and HealthHolland during the conduct of the study, as well as grants from Philips Digital Pathology Solutions and consultancy fees from Novartis and Vital Imaging, outside the submitted work. Research reported in this publication was partly supported by the National Institutes of Health (NIH) under award numbers NCI:U01CA242871, NCI:U24CA189523, and NINDS:R01NS042645. The content of this publication is solely the responsibility of the authors and does not represent the official views of the NIH. Henkjan Huisman is receiving grant support from Siemens Healthineers. James Meakin received grant funding from AWS. The method presented by BCVUniandes was made in collaboration with Silvana Castillo, from Universidad de los Andes. We would like to thank Minu D. Tizabi for proof-reading the paper.

Publisher Copyright:
© 2022, The Author(s).
