Resilient approximation of kernel classifiers

Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed

Standard

Resilient approximation of kernel classifiers. / Suttorp, Thorsten; Igel, Christian.

Artificial Neural Networks – ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I. ed. / Joaquim Marques de Sá; Luís A. Alexandre; Włodzisław Duch; Danilo Mandic. Vol. Part I Springer, 2007. pp. 139-148 (Lecture notes in computer science, Vol. 4668).


Harvard

Suttorp, T & Igel, C 2007, Resilient approximation of kernel classifiers. in JM de Sá, LA Alexandre, W Duch & D Mandic (eds), Artificial Neural Networks – ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I. vol. Part I, Springer, Lecture notes in computer science, vol. 4668, pp. 139-148, 17th International Conference on Artificial Neural Networks, ICANN 2007, Porto, Portugal, 09/09/2007. https://doi.org/10.1007/978-3-540-74690-4_15

APA

Suttorp, T., & Igel, C. (2007). Resilient approximation of kernel classifiers. In J. M. de Sá, L. A. Alexandre, W. Duch, & D. Mandic (Eds.), Artificial Neural Networks – ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I (Vol. Part I, pp. 139-148). Springer. Lecture notes in computer science Vol. 4668 https://doi.org/10.1007/978-3-540-74690-4_15

Vancouver

Suttorp T, Igel C. Resilient approximation of kernel classifiers. In de Sá JM, Alexandre LA, Duch W, Mandic D, editors, Artificial Neural Networks – ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I. Vol. Part I. Springer. 2007. p. 139-148. (Lecture notes in computer science, Vol. 4668). https://doi.org/10.1007/978-3-540-74690-4_15

Author

Suttorp, Thorsten ; Igel, Christian. / Resilient approximation of kernel classifiers. Artificial Neural Networks – ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I. ed. / Joaquim Marques de Sá ; Luís A. Alexandre ; Włodzisław Duch ; Danilo Mandic. Vol. Part I Springer, 2007. pp. 139-148 (Lecture notes in computer science, Vol. 4668).

Bibtex

@inproceedings{b01ca9e054f94f7bbda9ebccac4aafd6,
title = "Resilient approximation of kernel classifiers",
abstract = "Trained support vector machines (SVMs) have a slow run-time classification speed if the classification problem is noisy and the sample data set is large. Approximating the SVM by a more sparse function has been proposed to solve this problem. In this study, different variants of approximation algorithms are empirically compared. It is shown that gradient descent using the improved Rprop algorithm increases the robustness of the method compared to fixed-point iteration. Three different heuristics for selecting the support vectors to be used in the construction of the sparse approximation are proposed. It turns out that none is superior to random selection. The effect of a finishing gradient descent on all parameters of the sparse approximation is studied.",
author = "Thorsten Suttorp and Christian Igel",
year = "2007",
doi = "10.1007/978-3-540-74690-4_15",
language = "English",
isbn = "978-3-540-74689-8",
volume = "Part I",
series = "Lecture notes in computer science",
publisher = "Springer",
pages = "139--148",
editor = "{de S{\'a}}, {Joaquim Marques} and Alexandre, {Lu{\'i}s A.} and W{\l}odzis{\l}aw Duch and Danilo Mandic",
booktitle = "Artificial Neural Networks – ICANN 2007",
address = "Switzerland",
note = "17th International Conference on Artificial Neural Networks, ICANN 2007 ; Conference date: 09-09-2007 Through 13-09-2007",

}
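The abstract above credits the improved Rprop algorithm with making the sparse-approximation gradient descent more robust than fixed-point iteration. As an illustrative sketch only (not the authors' implementation; function name, constants, and the toy objective are assumptions), an iRprop⁻-style update adapts a per-parameter step size from gradient sign changes and ignores the gradient magnitude:

```python
import numpy as np

def rprop_minimize(grad, x0, steps=100, eta_plus=1.2, eta_minus=0.5,
                   delta0=0.1, delta_min=1e-6, delta_max=50.0):
    """iRprop- sketch: per-parameter step sizes adapted from gradient signs.

    If the partial derivative keeps its sign, the step size grows by eta_plus;
    if the sign flips (a minimum was overstepped), the step size shrinks by
    eta_minus and that component's update is skipped for one iteration.
    """
    x = np.asarray(x0, dtype=float).copy()
    delta = np.full_like(x, delta0)     # per-parameter step sizes
    g_prev = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        sign_change = g * g_prev
        delta = np.where(sign_change > 0,
                         np.minimum(delta * eta_plus, delta_max), delta)
        delta = np.where(sign_change < 0,
                         np.maximum(delta * eta_minus, delta_min), delta)
        g = np.where(sign_change < 0, 0.0, g)  # iRprop-: skip after sign flip
        x -= np.sign(g) * delta                # only the sign drives the step
        g_prev = g
    return x

# Toy usage: minimize f(x) = ||x - 3||^2, whose gradient is 2*(x - 3).
x_star = rprop_minimize(lambda x: 2 * (x - 3.0), np.array([10.0, -5.0]))
```

Because only gradient signs are used, the method is insensitive to gradient scaling, which is one plausible reason for the robustness advantage the abstract reports over fixed-point iteration.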

RIS

TY - GEN

T1 - Resilient approximation of kernel classifiers

AU - Suttorp, Thorsten

AU - Igel, Christian

PY - 2007

Y1 - 2007

N2 - Trained support vector machines (SVMs) have a slow run-time classification speed if the classification problem is noisy and the sample data set is large. Approximating the SVM by a more sparse function has been proposed to solve this problem. In this study, different variants of approximation algorithms are empirically compared. It is shown that gradient descent using the improved Rprop algorithm increases the robustness of the method compared to fixed-point iteration. Three different heuristics for selecting the support vectors to be used in the construction of the sparse approximation are proposed. It turns out that none is superior to random selection. The effect of a finishing gradient descent on all parameters of the sparse approximation is studied.

AB - Trained support vector machines (SVMs) have a slow run-time classification speed if the classification problem is noisy and the sample data set is large. Approximating the SVM by a more sparse function has been proposed to solve this problem. In this study, different variants of approximation algorithms are empirically compared. It is shown that gradient descent using the improved Rprop algorithm increases the robustness of the method compared to fixed-point iteration. Three different heuristics for selecting the support vectors to be used in the construction of the sparse approximation are proposed. It turns out that none is superior to random selection. The effect of a finishing gradient descent on all parameters of the sparse approximation is studied.

U2 - 10.1007/978-3-540-74690-4_15

DO - 10.1007/978-3-540-74690-4_15

M3 - Article in proceedings

AN - SCOPUS:38149089662

SN - 978-3-540-74689-8

VL - Part I

T3 - Lecture notes in computer science

SP - 139

EP - 148

BT - Artificial Neural Networks – ICANN 2007

A2 - de Sá, Joaquim Marques

A2 - Alexandre, Luís A.

A2 - Duch, Włodzisław

A2 - Mandic, Danilo

PB - Springer

T2 - 17th International Conference on Artificial Neural Networks, ICANN 2007

Y2 - 9 September 2007 through 13 September 2007

ER -

ID: 168563567