A Large-Scale Comparison of Historical Text Normalization Systems

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

  • Marcel Bollmann

Abstract

There is no consensus on the state-of-the-art approach to historical text normalization. Many techniques have been proposed, including rule-based methods, distance metrics, character-based statistical machine translation, and neural encoder–decoder models, but studies have used different datasets, different evaluation methods, and have come to different conclusions. This paper presents the largest study of historical text normalization done so far. We critically survey the existing literature and report experiments on eight languages, comparing systems spanning all categories of proposed normalization techniques, analysing the effect of training data quantity, and using different evaluation methods. The datasets and scripts are made publicly available.
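To give a flavour of one of the technique families named in the abstract, the sketch below shows the simplest form of a distance-metric normalizer: each historical spelling is mapped to the modern lexicon entry with the smallest Levenshtein distance. This is an illustrative assumption, not the paper's own code; the `normalize` function, the toy lexicon, and the example word forms are hypothetical.

```python
# Illustrative sketch of a distance-based normalizer (not the paper's system).
# The lexicon and word forms are hypothetical examples.

def levenshtein(a: str, b: str) -> int:
    """Standard edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def normalize(historical: str, lexicon: list[str]) -> str:
    """Map a historical spelling to the closest modern lexicon entry."""
    return min(lexicon, key=lambda w: levenshtein(historical, w))

if __name__ == "__main__":
    modern_lexicon = ["when", "where", "very", "virtue"]  # hypothetical lexicon
    print(normalize("whenne", modern_lexicon))  # -> "when"
    print(normalize("vertue", modern_lexicon))  # -> "virtue"
```

Real distance-based systems refine this idea with weighted or learned edit costs and much larger lexicons; the rule-based, statistical machine translation, and neural encoder–decoder approaches compared in the paper replace the distance lookup entirely.
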
Original language: English
Title of host publication: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Publisher: Association for Computational Linguistics
Publication date: 2019
Pages: 3885-3898
Publication status: Published - 2019
Event: 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - NAACL-HLT 2019 - Minneapolis, United States
Duration: 3 Jun 2019 - 7 Jun 2019

Conference

Conference: 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - NAACL-HLT 2019
Country: United States
City: Minneapolis
Period: 03/06/2019 - 07/06/2019

