Remember to Correct the Bias When Using Deep Learning for Regression!

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Remember to Correct the Bias When Using Deep Learning for Regression! / Igel, Christian; Oehmcke, Stefan.

In: KI - Künstliche Intelligenz, Vol. 37, No. 1, 2023, pp. 33-40.


Harvard

Igel, C & Oehmcke, S 2023, 'Remember to Correct the Bias When Using Deep Learning for Regression!', KI - Künstliche Intelligenz, vol. 37, no. 1, pp. 33-40. https://doi.org/10.1007/s13218-023-00801-0

APA

Igel, C., & Oehmcke, S. (2023). Remember to Correct the Bias When Using Deep Learning for Regression! KI - Künstliche Intelligenz, 37(1), 33-40. https://doi.org/10.1007/s13218-023-00801-0

Vancouver

Igel C, Oehmcke S. Remember to Correct the Bias When Using Deep Learning for Regression! KI - Künstliche Intelligenz. 2023;37(1):33-40. https://doi.org/10.1007/s13218-023-00801-0

Author

Igel, Christian ; Oehmcke, Stefan. / Remember to Correct the Bias When Using Deep Learning for Regression!. In: KI - Künstliche Intelligenz. 2023 ; Vol. 37, No. 1. pp. 33-40.

BibTeX

@article{353b8f86754f4db1bd26e2272276e83e,
title = "Remember to Correct the Bias When Using Deep Learning for Regression!",
abstract = "When training deep learning models for least-squares regression, we cannot expect that the training error residuals of the final model, selected after a fixed training time or based on performance on a hold-out data set, sum to zero. This can introduce a systematic error that accumulates if we are interested in the total aggregated performance over many data points (e.g., the sum of the residuals on previously unseen data). We suggest adjusting the bias of the machine learning model after training as a default post-processing step, which efficiently solves the problem. The severeness of the error accumulation and the effectiveness of the bias correction are demonstrated in exemplary experiments.",
keywords = "Bias correction, Deep learning, Regression",
author = "Christian Igel and Stefan Oehmcke",
note = "Funding Information: The authors acknowledge support by the Villum Foundation through the project Deep Learning and Remote Sensing for Unlocking Global Ecosystem Resource Dynamics (DeReEco, project number 34306) and the Pioneer Centre for AI, DNRF grant number P1. Publisher Copyright: {\textcopyright} 2023, The Author(s).",
year = "2023",
doi = "10.1007/s13218-023-00801-0",
language = "English",
volume = "37",
pages = "33--40",
journal = "KI - K{\"u}nstliche Intelligenz",
issn = "0933-1875",
publisher = "Springer",
number = "1",

}

RIS

TY - JOUR

T1 - Remember to Correct the Bias When Using Deep Learning for Regression!

AU - Igel, Christian

AU - Oehmcke, Stefan

N1 - Funding Information: The authors acknowledge support by the Villum Foundation through the project Deep Learning and Remote Sensing for Unlocking Global Ecosystem Resource Dynamics (DeReEco, project number 34306) and the Pioneer Centre for AI, DNRF grant number P1. Publisher Copyright: © 2023, The Author(s).

PY - 2023

Y1 - 2023

N2 - When training deep learning models for least-squares regression, we cannot expect that the training error residuals of the final model, selected after a fixed training time or based on performance on a hold-out data set, sum to zero. This can introduce a systematic error that accumulates if we are interested in the total aggregated performance over many data points (e.g., the sum of the residuals on previously unseen data). We suggest adjusting the bias of the machine learning model after training as a default post-processing step, which efficiently solves the problem. The severeness of the error accumulation and the effectiveness of the bias correction are demonstrated in exemplary experiments.

AB - When training deep learning models for least-squares regression, we cannot expect that the training error residuals of the final model, selected after a fixed training time or based on performance on a hold-out data set, sum to zero. This can introduce a systematic error that accumulates if we are interested in the total aggregated performance over many data points (e.g., the sum of the residuals on previously unseen data). We suggest adjusting the bias of the machine learning model after training as a default post-processing step, which efficiently solves the problem. The severeness of the error accumulation and the effectiveness of the bias correction are demonstrated in exemplary experiments.

KW - Bias correction

KW - Deep learning

KW - Regression

U2 - 10.1007/s13218-023-00801-0

DO - 10.1007/s13218-023-00801-0

M3 - Journal article

AN - SCOPUS:85152908339

VL - 37

SP - 33

EP - 40

JO - KI - Künstliche Intelligenz

JF - KI - Künstliche Intelligenz

SN - 0933-1875

IS - 1

ER -

ID: 347311460
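
For readers who want to try the post-processing step described in the abstract, a minimal sketch of such a bias correction is given below. It is not taken from the paper: it assumes a generic `predict` callable and NumPy arrays, and simply shifts all predictions by the mean residual on a reference data set (e.g., the training set) so that the residuals on that set sum to zero.

import numpy as np

def bias_corrected_predictor(predict, X_ref, y_ref):
    """Return a predictor whose residuals sum to zero on (X_ref, y_ref).

    predict : callable mapping an input array to point predictions
    X_ref, y_ref : data used to estimate the correction, e.g. the training set
    """
    # Mean residual of the trained model on the reference data.
    delta = float(np.mean(y_ref - predict(X_ref)))
    # Adding this constant to every prediction removes the systematic offset,
    # so errors aggregated over many data points no longer accumulate.
    return lambda X: predict(X) + delta

# Example usage with a hypothetical trained model `net`:
# corrected = bias_corrected_predictor(net.predict, X_train, y_train)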