Weighting training images by maximizing distribution similarity for supervised segmentation across scanners

Publication: Contribution to journal › Journal article › peer-reviewed

Standard

Weighting training images by maximizing distribution similarity for supervised segmentation across scanners. / van Opbroek, Annegreet; Vernooij, Meike W; Ikram, M.Arfan; de Bruijne, Marleen.

In: Medical Image Analysis, Vol. 24, No. 1, 2015, pp. 245-254.


Harvard

van Opbroek, A, Vernooij, MW, Ikram, MA & de Bruijne, M 2015, 'Weighting training images by maximizing distribution similarity for supervised segmentation across scanners', Medical Image Analysis, vol. 24, no. 1, pp. 245-254. https://doi.org/10.1016/j.media.2015.06.010

APA

van Opbroek, A., Vernooij, M. W., Ikram, M. A., & de Bruijne, M. (2015). Weighting training images by maximizing distribution similarity for supervised segmentation across scanners. Medical Image Analysis, 24(1), 245-254. https://doi.org/10.1016/j.media.2015.06.010

Vancouver

van Opbroek A, Vernooij MW, Ikram MA, de Bruijne M. Weighting training images by maximizing distribution similarity for supervised segmentation across scanners. Medical Image Analysis. 2015;24(1):245-254. https://doi.org/10.1016/j.media.2015.06.010

Author

van Opbroek, Annegreet ; Vernooij, Meike W ; Ikram, M.Arfan ; de Bruijne, Marleen. / Weighting training images by maximizing distribution similarity for supervised segmentation across scanners. In: Medical Image Analysis. 2015 ; Vol. 24, No. 1. pp. 245-254.

Bibtex

@article{5901f8a1f1104c46b15111bab335dece,
title = "Weighting training images by maximizing distribution similarity for supervised segmentation across scanners",
abstract = "Many automatic segmentation methods are based on supervised machine learning. Such methods have proven to perform well, on the condition that they are trained on a sufficiently large manually labeled training set that is representative of the images to segment. However, due to differences between scanners, scanning parameters, and patients, such a training set may be difficult to obtain. We present a transfer-learning approach to segmentation by multi-feature voxelwise classification. The presented method can be trained using a heterogeneous set of training images that may be obtained with different scanners than the target image. In our approach each training image is given a weight based on the distribution of its voxels in the feature space. These image weights are chosen so as to minimize the difference between the weighted probability density function (PDF) of the voxels of the training images and the PDF of the voxels of the target image. The voxels and weights of the training images are then used to train a weighted classifier. We tested our method on three segmentation tasks: brain-tissue segmentation, skull stripping, and white-matter-lesion segmentation. For all three applications, the proposed weighted classifier significantly outperformed an unweighted classifier on all training images, reducing classification errors by up to 42%. For brain-tissue segmentation and skull stripping, our method even significantly outperformed the traditional approach of training on representative training images from the same study as the target image.",
author = "{van Opbroek}, Annegreet and Vernooij, {Meike W} and M.Arfan Ikram and {de Bruijne}, Marleen",
note = "Copyright {\textcopyright} 2015 Elsevier B.V. All rights reserved.",
year = "2015",
doi = "10.1016/j.media.2015.06.010",
language = "English",
volume = "24",
pages = "245--254",
journal = "Medical Image Analysis",
issn = "1361-8415",
publisher = "Elsevier",
number = "1",
}
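The weighting scheme described in the abstract can be sketched in a few lines of code. The snippet below is a hypothetical simplification, not the paper's implementation: it matches 1-D feature histograms with a non-negative least-squares fit, whereas the article matches kernel density estimates in a multi-feature voxel space. All function and variable names are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls


def image_weights(train_feats, target_feats, bins=16):
    """Weight each training image so that the weighted mixture of its
    voxel feature histograms approximates the target image's histogram.

    train_feats : list of 1-D arrays, one array of voxel features per
                  training image (hypothetical simplification: a single
                  scalar feature per voxel).
    target_feats: 1-D array of voxel features of the target image.
    Returns one non-negative weight per training image, summing to 1.
    """
    # Shared bin edges spanning all images.
    lo = min(f.min() for f in train_feats + [target_feats])
    hi = max(f.max() for f in train_feats + [target_feats])
    edges = np.linspace(lo, hi, bins + 1)

    # Column i = empirical PDF (normalized histogram) of training image i.
    A = np.stack(
        [np.histogram(f, edges, density=True)[0] for f in train_feats],
        axis=1,
    )
    b = np.histogram(target_feats, edges, density=True)[0]

    # Non-negative mixture coefficients minimizing ||A w - b||_2,
    # standing in for the paper's PDF-difference minimization.
    w, _ = nnls(A, b)
    return w / w.sum()
```

The resulting per-image weights can then be attached to that image's voxels and passed to any classifier that accepts per-sample weights (for example, the `sample_weight` argument of `fit` in scikit-learn classifiers), giving similar-looking training images more influence on the decision boundary.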

RIS

TY - JOUR

T1 - Weighting training images by maximizing distribution similarity for supervised segmentation across scanners

AU - van Opbroek, Annegreet

AU - Vernooij, Meike W

AU - Ikram, M.Arfan

AU - de Bruijne, Marleen

N1 - Copyright © 2015 Elsevier B.V. All rights reserved.

PY - 2015

Y1 - 2015

N2 - Many automatic segmentation methods are based on supervised machine learning. Such methods have proven to perform well, on the condition that they are trained on a sufficiently large manually labeled training set that is representative of the images to segment. However, due to differences between scanners, scanning parameters, and patients, such a training set may be difficult to obtain. We present a transfer-learning approach to segmentation by multi-feature voxelwise classification. The presented method can be trained using a heterogeneous set of training images that may be obtained with different scanners than the target image. In our approach each training image is given a weight based on the distribution of its voxels in the feature space. These image weights are chosen so as to minimize the difference between the weighted probability density function (PDF) of the voxels of the training images and the PDF of the voxels of the target image. The voxels and weights of the training images are then used to train a weighted classifier. We tested our method on three segmentation tasks: brain-tissue segmentation, skull stripping, and white-matter-lesion segmentation. For all three applications, the proposed weighted classifier significantly outperformed an unweighted classifier on all training images, reducing classification errors by up to 42%. For brain-tissue segmentation and skull stripping, our method even significantly outperformed the traditional approach of training on representative training images from the same study as the target image.

AB - Many automatic segmentation methods are based on supervised machine learning. Such methods have proven to perform well, on the condition that they are trained on a sufficiently large manually labeled training set that is representative of the images to segment. However, due to differences between scanners, scanning parameters, and patients, such a training set may be difficult to obtain. We present a transfer-learning approach to segmentation by multi-feature voxelwise classification. The presented method can be trained using a heterogeneous set of training images that may be obtained with different scanners than the target image. In our approach each training image is given a weight based on the distribution of its voxels in the feature space. These image weights are chosen so as to minimize the difference between the weighted probability density function (PDF) of the voxels of the training images and the PDF of the voxels of the target image. The voxels and weights of the training images are then used to train a weighted classifier. We tested our method on three segmentation tasks: brain-tissue segmentation, skull stripping, and white-matter-lesion segmentation. For all three applications, the proposed weighted classifier significantly outperformed an unweighted classifier on all training images, reducing classification errors by up to 42%. For brain-tissue segmentation and skull stripping, our method even significantly outperformed the traditional approach of training on representative training images from the same study as the target image.

U2 - 10.1016/j.media.2015.06.010

DO - 10.1016/j.media.2015.06.010

M3 - Journal article

C2 - 26210914

VL - 24

SP - 245

EP - 254

JO - Medical Image Analysis

JF - Medical Image Analysis

SN - 1361-8415

IS - 1

ER -

ID: 147992199