A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models. / Snæbjarnarson, Vésteinn; Símonarson, Haukur Barri; Ragnarsson, Pétur Orri; Ingólfsdóttir, Svanhvít Lilja; Jónsson, Haukur Páll; Þorsteinsson, Vilhjálmur; Einarsson, Hafsteinn.

2022 Language Resources and Evaluation Conference, LREC 2022. ed. / Nicoletta Calzolari; Frederic Bechet; Philippe Blache; Khalid Choukri; Christopher Cieri; Thierry Declerck; Sara Goggi; Hitoshi Isahara; Bente Maegaard; Joseph Mariani; Helene Mazo; Jan Odijk; Stelios Piperidis. European Language Resources Association (ELRA), 2022. p. 4356-4366.


Harvard

Snæbjarnarson, V, Símonarson, HB, Ragnarsson, PO, Ingólfsdóttir, SL, Jónsson, HP, Þorsteinsson, V & Einarsson, H 2022, A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models. in N Calzolari, F Bechet, P Blache, K Choukri, C Cieri, T Declerck, S Goggi, H Isahara, B Maegaard, J Mariani, H Mazo, J Odijk & S Piperidis (eds), 2022 Language Resources and Evaluation Conference, LREC 2022. European Language Resources Association (ELRA), pp. 4356-4366, 13th International Conference on Language Resources and Evaluation, LREC 2022, Marseille, France, 20/06/2022.

APA

Snæbjarnarson, V., Símonarson, H. B., Ragnarsson, P. O., Ingólfsdóttir, S. L., Jónsson, H. P., Þorsteinsson, V., & Einarsson, H. (2022). A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models. In N. Calzolari, F. Bechet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, J. Odijk, & S. Piperidis (Eds.), 2022 Language Resources and Evaluation Conference, LREC 2022 (pp. 4356-4366). European Language Resources Association (ELRA).

Vancouver

Snæbjarnarson V, Símonarson HB, Ragnarsson PO, Ingólfsdóttir SL, Jónsson HP, Þorsteinsson V et al. A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models. In Calzolari N, Bechet F, Blache P, Choukri K, Cieri C, Declerck T, Goggi S, Isahara H, Maegaard B, Mariani J, Mazo H, Odijk J, Piperidis S, editors, 2022 Language Resources and Evaluation Conference, LREC 2022. European Language Resources Association (ELRA). 2022. p. 4356-4366

Author

Snæbjarnarson, Vésteinn ; Símonarson, Haukur Barri ; Ragnarsson, Pétur Orri ; Ingólfsdóttir, Svanhvít Lilja ; Jónsson, Haukur Páll ; Þorsteinsson, Vilhjálmur ; Einarsson, Hafsteinn. / A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models. 2022 Language Resources and Evaluation Conference, LREC 2022. editor / Nicoletta Calzolari ; Frederic Bechet ; Philippe Blache ; Khalid Choukri ; Christopher Cieri ; Thierry Declerck ; Sara Goggi ; Hitoshi Isahara ; Bente Maegaard ; Joseph Mariani ; Helene Mazo ; Jan Odijk ; Stelios Piperidis. European Language Resources Association (ELRA), 2022. pp. 4356-4366

Bibtex

@inproceedings{d30ee276b06e44b8b60d86037e7219ce,
title = "A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models",
abstract = "We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level-domain.is. Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.",
keywords = "co-reference resolution, corpus, IceBERT, Icelandic, language model, named entity recognition, natural language understanding, parsing, part of speech",
author = "V{\'e}steinn Sn{\ae}bjarnarson and S{\'i}monarson, {Haukur Barri} and Ragnarsson, {P{\'e}tur Orri} and Ing{\'o}lfsd{\'o}ttir, {Svanhv{\'i}t Lilja} and J{\'o}nsson, {Haukur P{\'a}ll} and Vilhj{\'a}lmur Porsteinsson and Hafsteinn Einarsson",
note = "Funding Information: We thank Prof. Dr.-Ing. Morris Riedel and his team for providing access to the DEEP super-computer at Forschungszentrum J{\"u}lich. We also thank the Icelandic Language Technology Program (Nikul{\'a}sd{\'o}ttir et al., 2020). It has enabled the authors to focus on work in Icelandic NLP. Finally, we thank the anonymous reviewers for their helpful feedback. Publisher Copyright: {\textcopyright} European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0.; 13th International Conference on Language Resources and Evaluation Conference, LREC 2022 ; Conference date: 20-06-2022 Through 25-06-2022",
year = "2022",
language = "English",
pages = "4356--4366",
editor = "Nicoletta Calzolari and Frederic Bechet and Philippe Blache and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Helene Mazo and Jan Odijk and Stelios Piperidis",
booktitle = "2022 Language Resources and Evaluation Conference, LREC 2022",
publisher = "European Language Resources Association (ELRA)",

}

RIS

TY - GEN

T1 - A Warm Start and a Clean Crawled Corpus - A Recipe for Good Language Models

AU - Snæbjarnarson, Vésteinn

AU - Símonarson, Haukur Barri

AU - Ragnarsson, Pétur Orri

AU - Ingólfsdóttir, Svanhvít Lilja

AU - Jónsson, Haukur Páll

AU - Þorsteinsson, Vilhjálmur

AU - Einarsson, Hafsteinn

N1 - Funding Information: We thank Prof. Dr.-Ing. Morris Riedel and his team for providing access to the DEEP super-computer at Forschungszentrum Jülich. We also thank the Icelandic Language Technology Program (Nikulásdóttir et al., 2020). It has enabled the authors to focus on work in Icelandic NLP. Finally, we thank the anonymous reviewers for their helpful feedback. Publisher Copyright: © European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0.

PY - 2022

Y1 - 2022

N2 - We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level domain .is. Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.

AB - We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level domain .is. Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we manually translate and adapt the WinoGrande commonsense reasoning dataset. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low to medium resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.

KW - co-reference resolution

KW - corpus

KW - IceBERT

KW - Icelandic

KW - language model

KW - named entity recognition

KW - natural language understanding

KW - parsing

KW - part of speech

UR - http://www.scopus.com/inward/record.url?scp=85137484516&partnerID=8YFLogxK

M3 - Article in proceedings

AN - SCOPUS:85137484516

SP - 4356

EP - 4366

BT - 2022 Language Resources and Evaluation Conference, LREC 2022

A2 - Calzolari, Nicoletta

A2 - Bechet, Frederic

A2 - Blache, Philippe

A2 - Choukri, Khalid

A2 - Cieri, Christopher

A2 - Declerck, Thierry

A2 - Goggi, Sara

A2 - Isahara, Hitoshi

A2 - Maegaard, Bente

A2 - Mariani, Joseph

A2 - Mazo, Helene

A2 - Odijk, Jan

A2 - Piperidis, Stelios

PB - European Language Resources Association (ELRA)

T2 - 13th International Conference on Language Resources and Evaluation, LREC 2022

Y2 - 20 June 2022 through 25 June 2022

ER -
