Factual Consistency of Multilingual Pretrained Language Models

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 2.85 MB, PDF document

Pretrained language models can be queried for factual knowledge, with potential applications in knowledge base acquisition and tasks that require inference. However, for that to be possible, we need to know how reliable this knowledge is, and recent work has shown that monolingual English language models lack consistency when predicting factual knowledge; that is, they fill in the blank differently for paraphrases describing the same fact. In this paper, we extend the analysis of consistency to a multilingual setting. We introduce a resource, mParaRel, and investigate (i) whether multilingual language models such as mBERT and XLM-R are more consistent than their monolingual counterparts; and (ii) whether such models are equally consistent across languages. We find that mBERT is as inconsistent as English BERT on English paraphrases, but that both mBERT and XLM-R exhibit a high degree of inconsistency in English and even more so for all the other 45 languages.
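To illustrate the fill-in-the-blank consistency probe the abstract describes, here is a minimal sketch using the HuggingFace transformers library and mBERT, one of the models examined in the paper. The two paraphrases below are illustrative assumptions, not prompts taken from the mParaRel resource.

```python
# Minimal sketch of a fill-in-the-blank consistency probe, assuming the
# HuggingFace `transformers` library. The paraphrases are illustrative
# examples, not prompts from the mParaRel resource itself.
from transformers import pipeline

# mBERT, one of the multilingual models the paper evaluates.
fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

# Two paraphrases describing the same fact; a consistent model should
# rank the same object ("Dublin") first for both.
paraphrases = [
    "The capital of Ireland is [MASK].",
    "Ireland's capital city is [MASK].",
]

predictions = [fill(p, top_k=1)[0]["token_str"] for p in paraphrases]
print(predictions)

# The model is consistent on this fact only if both top predictions agree.
print("consistent:", len(set(predictions)) == 1)
```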

Original language: English
Title: ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022
Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2022
Pages: 3046-3052
ISBN (electronic): 9781955917254
Status: Published - 2022
Event: 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 - Dublin, Ireland
Duration: 22 May 2022 - 27 May 2022

Conference

Conference: 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022
Country: Ireland
City: Dublin
Period: 22/05/2022 - 27/05/2022
Sponsors: Amazon Science, Bloomberg Engineering, et al., Google Research, Liveperson, Meta

Bibliographical note

Publisher Copyright:
© 2022 Association for Computational Linguistics.

ID: 341485866