Multi-task learning for historical text normalization: Size matters

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Multi-task learning for historical text normalization : Size matters. / Bollmann, Marc Marcel; Søgaard, Anders; Bingel, Joachim.

Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP. Association for Computational Linguistics, 2018. p. 19–24.

Harvard

Bollmann, MM, Søgaard, A & Bingel, J 2018, Multi-task learning for historical text normalization: Size matters. in Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP. Association for Computational Linguistics, pp. 19–24, Workshop on Deep Learning Approaches for Low-Resource NLP, Melbourne, Australia, 19/07/2018.

APA

Bollmann, M. M., Søgaard, A., & Bingel, J. (2018). Multi-task learning for historical text normalization: Size matters. In Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP (pp. 19–24). Association for Computational Linguistics.

Vancouver

Bollmann MM, Søgaard A, Bingel J. Multi-task learning for historical text normalization: Size matters. In Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP. Association for Computational Linguistics. 2018. p. 19–24.

Author

Bollmann, Marc Marcel ; Søgaard, Anders ; Bingel, Joachim. / Multi-task learning for historical text normalization : Size matters. Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP. Association for Computational Linguistics, 2018. pp. 19–24

Bibtex

@inproceedings{e0c40b1c1f9d44279d28902cceaad388,
title = "Multi-task learning for historical text normalization: Size matters",
abstract = "Historical text normalization suffers from small datasets that exhibit high variance, and previous work has shown that multi-task learning can be used to leverage data from related problems in order to obtain more robust models. Previous work has been limited to datasets from a specific language and a specific historical period, and it is not clear whether results generalize. It therefore remains an open problem when historical text normalization benefits from multi-task learning. We explore the benefits of multi-task learning across 10 different datasets, representing different languages and periods. Our main finding—contrary to what has been observed for other NLP tasks—is that multi-task learning mainly works when target task data is very scarce.",
author = "Bollmann, {Marc Marcel} and Anders S{\o}gaard and Joachim Bingel",
year = "2018",
language = "English",
pages = "19–24",
booktitle = "Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP",
publisher = "Association for Computational Linguistics",
note = "Workshop on Deep Learning Approaches for Low-Resource NLP ; Conference date: 19-07-2018 Through 19-07-2018",

}
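
The core idea in the abstract, sharing model parameters across normalization datasets so that a scarce target task can borrow statistical strength from related ones, can be illustrated with a short sketch. Below is a hypothetical hard-parameter-sharing model in PyTorch: one shared character encoder with a separate output head per dataset. It treats normalization as per-character prediction for brevity; the paper's actual encoder-decoder architecture and hyperparameters differ, and all names and sizes here are placeholders.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Character embedding + bidirectional GRU, shared by every task."""
    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, char_ids):
        # char_ids: (batch, seq_len) -> contextual states (batch, seq_len, 2*hidden)
        out, _ = self.rnn(self.embed(char_ids))
        return out

class MultiTaskNormalizer(nn.Module):
    """Hard parameter sharing: one encoder, one linear head per dataset."""
    def __init__(self, vocab_size, num_tasks, hidden=128):
        super().__init__()
        self.encoder = SharedEncoder(vocab_size, hidden=hidden)
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hidden, vocab_size) for _ in range(num_tasks)]
        )

    def forward(self, char_ids, task_id):
        # Every task's gradients update the shared encoder;
        # only the task's own head is used for prediction.
        return self.heads[task_id](self.encoder(char_ids))

# Toy training loop: alternate batches between a scarce target task (0)
# and an auxiliary task (1); the random tensors stand in for real data.
vocab_size = 40
model = MultiTaskNormalizer(vocab_size, num_tasks=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for task_id in (0, 1, 0, 1):
    x = torch.randint(0, vocab_size, (8, 12))   # historical spellings
    y = torch.randint(0, vocab_size, (8, 12))   # normalized spellings
    logits = model(x, task_id)
    loss = loss_fn(logits.reshape(-1, vocab_size), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

In this setup the auxiliary task's gradients regularize the shared encoder, which is consistent with the paper's finding that the benefit shows up mainly when target-task data is very scarce.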

RIS

TY - GEN

T1 - Multi-task learning for historical text normalization: Size matters

AU - Bollmann, Marc Marcel

AU - Søgaard, Anders

AU - Bingel, Joachim

PY - 2018

Y1 - 2018

N2 - Historical text normalization suffers from small datasets that exhibit high variance, and previous work has shown that multi-task learning can be used to leverage data from related problems in order to obtain more robust models. Previous work has been limited to datasets from a specific language and a specific historical period, and it is not clear whether results generalize. It therefore remains an open problem when historical text normalization benefits from multi-task learning. We explore the benefits of multi-task learning across 10 different datasets, representing different languages and periods. Our main finding—contrary to what has been observed for other NLP tasks—is that multi-task learning mainly works when target task data is very scarce.

AB - Historical text normalization suffers from small datasets that exhibit high variance, and previous work has shown that multi-task learning can be used to leverage data from related problems in order to obtain more robust models. Previous work has been limited to datasets from a specific language and a specific historical period, and it is not clear whether results generalize. It therefore remains an open problem when historical text normalization benefits from multi-task learning. We explore the benefits of multi-task learning across 10 different datasets, representing different languages and periods. Our main finding—contrary to what has been observed for other NLP tasks—is that multi-task learning mainly works when target task data is very scarce.

M3 - Article in proceedings

SP - 19

EP - 24

BT - Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP

PB - Association for Computational Linguistics

Y2 - 19 July 2018 through 19 July 2018

ER -

ID: 214754949