Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed


The success of multilingual pre-trained models is underpinned by their ability to learn representations shared by multiple languages even in the absence of any explicit supervision. However, it remains unclear how these models learn to generalise across languages. In this work, we conjecture that multilingual pre-trained models can derive language-universal abstractions about grammar. In particular, we investigate whether morphosyntactic information is encoded in the same subset of neurons in different languages. We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe. Our findings show that the cross-lingual overlap between neurons is significant, but its extent may vary across categories and depends on language proximity and pre-training data size.
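As a rough illustration of the kind of analysis the abstract describes, the sketch below shows one simple way to quantify cross-lingual neuron overlap: given per-neuron relevance scores produced by some probe for the same morphosyntactic category in two languages, select the top-k neurons for each language and compute the Jaccard overlap of the two sets. This is not the paper's actual probing method (the authors use a dedicated neuron-level probe); the function names, the top-k selection, the Jaccard measure, and the toy scores are all illustrative assumptions.

```python
import numpy as np


def top_k_neurons(scores: np.ndarray, k: int) -> set:
    """Indices of the k neurons with the highest probe relevance scores."""
    return set(np.argsort(scores)[-k:])


def cross_lingual_overlap(scores_a: np.ndarray, scores_b: np.ndarray, k: int = 50) -> float:
    """Jaccard overlap of the top-k neuron sets selected for two languages."""
    a = top_k_neurons(scores_a, k)
    b = top_k_neurons(scores_b, k)
    return len(a & b) / len(a | b)


# Toy example: hypothetical per-neuron scores for one morphosyntactic category
# (e.g. Number) in two languages; the second is made partly correlated with the
# first to mimic shared encoding. Hidden width 768 is only an assumption
# (e.g. a BERT-base-sized multilingual encoder).
rng = np.random.default_rng(0)
hidden_size = 768
scores_lang1 = rng.random(hidden_size)
scores_lang2 = 0.7 * scores_lang1 + 0.3 * rng.random(hidden_size)

print(f"Top-50 neuron overlap (Jaccard): {cross_lingual_overlap(scores_lang1, scores_lang2):.2f}")
```

In practice one would compare the observed overlap against a baseline of randomly drawn neuron subsets of the same size to judge whether it exceeds chance, which is the sense in which the abstract calls the cross-lingual overlap significant.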

Original language: English
Title: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2022
Pages: 1589-1598
ISBN (electronic): 9781955917711
DOI:
Status: Published - 2022
Event: 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022 - Seattle, USA
Duration: 10 Jul 2022 - 15 Jul 2022

Conference

Conference: 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022
Country: USA
City: Seattle
Period: 10/07/2022 - 15/07/2022
Sponsors: Amazon, Bloomberg, et al., Google Research, LIVE PERSON, Meta

Bibliographic note

Funding Information:
This work is mostly funded by Independent Research Fund Denmark under grant agreement number 9130-00092B, as well as by a project grant from the Swedish Research Council under grant agreement number 2019-04129. Lucas Torroba Hennigen acknowledges funding from the Michael Athans Fellowship fund. Ryan Cotterell acknowledges support from the Swiss National Science Foundation (SNSF) as part of the “The Forgotten Role of Inductive Bias in Interpretability” project.

Publisher Copyright:
© 2022 Association for Computational Linguistics.

ID: 341039204