mDAPT: Multilingual Domain Adaptive Pretraining in a Single Model

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Domain adaptive pretraining, i.e. the continued unsupervised pretraining of a language model on domain-specific text, improves the modelling of text for downstream tasks within the domain. Numerous real-world applications are based on domain-specific text, e.g. working with financial or biomedical documents, and these applications often need to support multiple languages. However, large-scale domain-specific multilingual pretraining data for such scenarios can be difficult to obtain, due to regulations, legislation, or simply a lack of language- and domain-specific text. One solution is to train a single multilingual model, taking advantage of the data available in as many languages as possible. In this work, we explore the benefits of domain adaptive pretraining with a focus on adapting to multiple languages within a specific domain. We propose different techniques to compose pretraining corpora that enable a language model to become both domain-specific and multilingual. Evaluation on nine domain-specific datasets (for biomedical named entity recognition and financial sentence classification) covering seven different languages shows that a single multilingual domain-specific model can outperform the general multilingual model and perform close to its monolingual counterpart. This finding holds across two different pretraining methods: adapter-based pretraining and full model pretraining.
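The record includes no code, but continued (full-model) domain adaptive pretraining as described in the abstract is typically implemented as further masked language modeling on a domain corpus. Below is a minimal sketch using the Hugging Face transformers and datasets libraries; the checkpoint choice (xlm-roberta-base) and the corpus file domain_corpus.txt are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of multilingual domain adaptive pretraining via continued
# masked language modeling (MLM). Assumptions: the starting checkpoint
# "xlm-roberta-base" and the file "domain_corpus.txt" (one text per line,
# mixing domain-specific text from several languages) are hypothetical.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Start from a general-purpose multilingual checkpoint.
model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Load the multilingual domain corpus as plain text.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard MLM objective: randomly mask 15% of input tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mdapt-checkpoint",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

The adapter-based pretraining method the abstract also mentions would instead freeze the base model's weights and train only small adapter modules inserted into each layer, so the domain-specific parameters stored per domain are a small fraction of the full model.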
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2021
Publisher: Association for Computational Linguistics
Publication date: 2021
Pages: 3404-3418
DOIs
Publication status: Published - 2021
Event: Findings of the Association for Computational Linguistics: EMNLP 2021 - Punta Cana, Dominican Republic
Duration: 1 Nov 2021 – 1 Nov 2021

Conference

Conference: Findings of the Association for Computational Linguistics: EMNLP 2021
Country: Dominican Republic
City: Punta Cana
Period: 01/11/2021 – 01/11/2021

ID: 299036345