Few-Shot and Zero-Shot Learning for Historical Text Normalization

Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed

Standard

Few-Shot and Zero-Shot Learning for Historical Text Normalization. / Bollmann, Marcel; Korchagina, Natalia; Søgaard, Anders.

Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019). Association for Computational Linguistics, 2019. pp. 104-114.

Harvard

Bollmann, M, Korchagina, N & Søgaard, A 2019, Few-Shot and Zero-Shot Learning for Historical Text Normalization. in Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019). Association for Computational Linguistics, pp. 104-114, 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo), Hong Kong, China, 03/11/2019. https://doi.org/10.18653/v1/D19-6112

APA

Bollmann, M., Korchagina, N., & Søgaard, A. (2019). Few-Shot and Zero-Shot Learning for Historical Text Normalization. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019) (pp. 104-114). Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-6112

Vancouver

Bollmann M, Korchagina N, Søgaard A. Few-Shot and Zero-Shot Learning for Historical Text Normalization. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019). Association for Computational Linguistics. 2019. p. 104-114 https://doi.org/10.18653/v1/D19-6112

Author

Bollmann, Marcel ; Korchagina, Natalia ; Søgaard, Anders. / Few-Shot and Zero-Shot Learning for Historical Text Normalization. Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019). Association for Computational Linguistics, 2019. pp. 104-114

BibTeX

@inproceedings{d2685bb85be94d48870b6e6d439b81fb,
title = "Few-Shot and Zero-Shot Learning for Historical Text Normalization",
abstract = "Historical text normalization often relies on small training datasets. Recent work has shown that multi-task learning can lead to significant improvements by exploiting synergies with related datasets, but there has been no systematic study of different multi-task learning architectures. This paper evaluates 63 multi-task learning configurations for sequence-to-sequence-based historical text normalization across ten datasets from eight languages, using autoencoding, grapheme-to-phoneme mapping, and lemmatization as auxiliary tasks. We observe consistent, significant improvements across languages when training data for the target task is limited, but minimal or no improvements when training data is abundant. We also show that zero-shot learning outperforms the simple, but relatively strong, identity baseline.",
author = "Marcel Bollmann and Natalia Korchagina and Anders S{\o}gaard",
year = "2019",
doi = "10.18653/v1/D19-6112",
language = "English",
pages = "104--114",
booktitle = "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)",
publisher = "Association for Computational Linguistics",
note = "null ; Conference date: 03-11-2019 Through 03-11-2019",

}

RIS

TY - GEN

T1 - Few-Shot and Zero-Shot Learning for Historical Text Normalization

AU - Bollmann, Marcel

AU - Korchagina, Natalia

AU - Søgaard, Anders

PY - 2019

Y1 - 2019

N2 - Historical text normalization often relies on small training datasets. Recent work has shown that multi-task learning can lead to significant improvements by exploiting synergies with related datasets, but there has been no systematic study of different multi-task learning architectures. This paper evaluates 63 multi-task learning configurations for sequence-to-sequence-based historical text normalization across ten datasets from eight languages, using autoencoding, grapheme-to-phoneme mapping, and lemmatization as auxiliary tasks. We observe consistent, significant improvements across languages when training data for the target task is limited, but minimal or no improvements when training data is abundant. We also show that zero-shot learning outperforms the simple, but relatively strong, identity baseline.

AB - Historical text normalization often relies on small training datasets. Recent work has shown that multi-task learning can lead to significant improvements by exploiting synergies with related datasets, but there has been no systematic study of different multi-task learning architectures. This paper evaluates 63 multi-task learning configurations for sequence-to-sequence-based historical text normalization across ten datasets from eight languages, using autoencoding, grapheme-to-phoneme mapping, and lemmatization as auxiliary tasks. We observe consistent, significant improvements across languages when training data for the target task is limited, but minimal or no improvements when training data is abundant. We also show that zero-shot learning outperforms the simple, but relatively strong, identity baseline.

U2 - 10.18653/v1/D19-6112

DO - 10.18653/v1/D19-6112

M3 - Article in proceedings

SP - 104

EP - 114

BT - Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)

PB - Association for Computational Linguistics

Y2 - 3 November 2019 through 3 November 2019

ER -

ID: 239617207