Multiple-instance learning as a classifier combining problem

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Multiple-instance learning as a classifier combining problem. / Li, Yan; Tax, David M. J.; Duin, Robert P. W.; Loog, Marco.

In: Pattern Recognition, Vol. 46, No. 3, 2013, p. 865-874.

Harvard

Li, Y, Tax, DMJ, Duin, RPW & Loog, M 2013, 'Multiple-instance learning as a classifier combining problem', Pattern Recognition, vol. 46, no. 3, pp. 865-874. https://doi.org/10.1016/j.patcog.2012.08.018

APA

Li, Y., Tax, D. M. J., Duin, R. P. W., & Loog, M. (2013). Multiple-instance learning as a classifier combining problem. Pattern Recognition, 46(3), 865-874. https://doi.org/10.1016/j.patcog.2012.08.018

Vancouver

Li Y, Tax DMJ, Duin RPW, Loog M. Multiple-instance learning as a classifier combining problem. Pattern Recognition. 2013;46(3):865-874. https://doi.org/10.1016/j.patcog.2012.08.018

Author

Li, Yan ; Tax, David M. J. ; Duin, Robert P. W. ; Loog, Marco. / Multiple-instance learning as a classifier combining problem. In: Pattern Recognition. 2013 ; Vol. 46, No. 3. pp. 865-874.

Bibtex

@article{522227aa5fa5415694f5193b4b95eeeb,
title = "Multiple-instance learning as a classifier combining problem",
abstract = "In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification posteriors. Given the instance labels, the label of a bag can be obtained as a classifier combining problem. An optimal decision rule is derived that determines the threshold on the fraction of instances in a bag that is assigned to the concept class. We provide estimators for the two parameters in the model. The method is tested on a toy data set and various benchmark data sets, and shown to provide results comparable to state-of-the-art MIL methods. (C) 2012 Elsevier Ltd. All rights reserved.",
keywords = "Multiple instance learning, Classifier combining",
author = "Yan Li and Tax, {David M. J.} and Duin, {Robert P. W.} and Marco Loog",
year = "2013",
doi = "10.1016/j.patcog.2012.08.018",
language = "English",
volume = "46",
pages = "865--874",
journal = "Pattern Recognition",
issn = "0031-3203",
publisher = "Elsevier",
number = "3",

}

RIS

TY - JOUR

T1 - Multiple-instance learning as a classifier combining problem

AU - Li, Yan

AU - Tax, David M. J.

AU - Duin, Robert P. W.

AU - Loog, Marco

PY - 2013

Y1 - 2013

N2 - In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification posteriors. Given the instance labels, the label of a bag can be obtained as a classifier combining problem. An optimal decision rule is derived that determines the threshold on the fraction of instances in a bag that is assigned to the concept class. We provide estimators for the two parameters in the model. The method is tested on a toy data set and various benchmark data sets, and shown to provide results comparable to state-of-the-art MIL methods. (C) 2012 Elsevier Ltd. All rights reserved.

AB - In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification posteriors. Given the instance labels, the label of a bag can be obtained as a classifier combining problem. An optimal decision rule is derived that determines the threshold on the fraction of instances in a bag that is assigned to the concept class. We provide estimators for the two parameters in the model. The method is tested on a toy data set and various benchmark data sets, and shown to provide results comparable to state-of-the-art MIL methods. (C) 2012 Elsevier Ltd. All rights reserved.

KW - Multiple instance learning

KW - Classifier combining

U2 - 10.1016/j.patcog.2012.08.018

DO - 10.1016/j.patcog.2012.08.018

M3 - Journal article

VL - 46

SP - 865

EP - 874

JO - Pattern Recognition

JF - Pattern Recognition

SN - 0031-3203

IS - 3

ER -

ID: 118769055
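
The abstract above describes a bag-level combining rule: classify every instance with a standard supervised classifier, then label a bag as concept when the fraction of its instances assigned to the concept class exceeds a threshold. The sketch below only illustrates that combining rule; the logistic-regression base classifier, the naive propagation of bag labels to training instances, and the fixed threshold of 0.1 are assumptions made for this illustration, not the paper's posterior re-weighting, parameter estimators, or optimal decision rule.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_instance_classifier(bags, bag_labels):
    # Train a standard supervised classifier on instances; each instance
    # simply inherits the label of its bag (an assumption of this sketch,
    # not the re-weighting scheme described in the abstract).
    X = np.vstack(bags)
    y = np.concatenate([np.full(len(b), lab) for b, lab in zip(bags, bag_labels)])
    return LogisticRegression(max_iter=1000).fit(X, y)

def classify_bag(clf, bag, threshold=0.1):
    # Combining step: the bag is labelled positive when the fraction of its
    # instances predicted as concept exceeds the threshold. The paper derives
    # an optimal threshold; 0.1 here is only a placeholder.
    frac_concept = np.mean(clf.predict(bag) == 1)
    return int(frac_concept > threshold)

# Toy data: positive bags contain a few "concept" instances shifted away
# from the background (non-concept) distribution.
rng = np.random.default_rng(0)

def make_bag(positive, n=20):
    instances = rng.normal(0.0, 1.0, size=(n, 2))
    if positive:
        instances[: n // 4] += 3.0
    return instances

bags = [make_bag(i % 2 == 1) for i in range(20)]
labels = [i % 2 for i in range(20)]
clf = fit_instance_classifier(bags, labels)
print([classify_bag(clf, b) for b in bags[:6]])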