Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Same Neurons, Different Languages : Probing Morphosyntax in Multilingual Pre-trained Models. / Stańczak, Karolina; Ponti, Edoardo; Hennigen, Lucas Torroba; Cotterell, Ryan; Augenstein, Isabelle.

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics (ACL), 2022. p. 1589-1598.

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Harvard

Stańczak, K, Ponti, E, Hennigen, LT, Cotterell, R & Augenstein, I 2022, Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models. in Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics (ACL), pp. 1589-1598, 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, United States, 10/07/2022. https://doi.org/10.18653/v1/2022.naacl-main.114

APA

Stańczak, K., Ponti, E., Hennigen, L. T., Cotterell, R., & Augenstein, I. (2022). Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 1589-1598). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-main.114

Vancouver

Stańczak K, Ponti E, Hennigen LT, Cotterell R, Augenstein I. Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics (ACL). 2022. p. 1589-1598 https://doi.org/10.18653/v1/2022.naacl-main.114

Author

Stańczak, Karolina ; Ponti, Edoardo ; Hennigen, Lucas Torroba ; Cotterell, Ryan ; Augenstein, Isabelle. / Same Neurons, Different Languages : Probing Morphosyntax in Multilingual Pre-trained Models. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics (ACL), 2022. pp. 1589-1598

BibTeX

@inproceedings{d6db1b5459904774b4f23795766642df,
title = "Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models",
abstract = "The success of multilingual pre-trained models is underpinned by their ability to learn representations shared by multiple languages even in absence of any explicit supervision. However, it remains unclear how these models learn to generalise across languages. In this work, we conjecture that multilingual pretrained models can derive language-universal abstractions about grammar. In particular, we investigate whether morphosyntactic information is encoded in the same subset of neurons in different languages. We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe. Our findings show that the cross-lingual overlap between neurons is significant, but its extent may vary across categories and depends on language proximity and pre-training data size.",
author = "Karolina Sta{\'n}czak and Edoardo Ponti and Hennigen, {Lucas Torroba} and Ryan Cotterell and Isabelle Augenstein",
note = "Publisher Copyright: {\textcopyright} 2022 Association for Computational Linguistics.; 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022 ; Conference date: 10-07-2022 Through 15-07-2022",
year = "2022",
doi = "10.18653/v1/2022.naacl-main.114",
language = "English",
pages = "1589--1598",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",

}

RIS

TY - GEN

T1 - Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models

T2 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022

AU - Stańczak, Karolina

AU - Ponti, Edoardo

AU - Hennigen, Lucas Torroba

AU - Cotterell, Ryan

AU - Augenstein, Isabelle

N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.

PY - 2022

Y1 - 2022

N2 - The success of multilingual pre-trained models is underpinned by their ability to learn representations shared by multiple languages even in absence of any explicit supervision. However, it remains unclear how these models learn to generalise across languages. In this work, we conjecture that multilingual pretrained models can derive language-universal abstractions about grammar. In particular, we investigate whether morphosyntactic information is encoded in the same subset of neurons in different languages. We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe. Our findings show that the cross-lingual overlap between neurons is significant, but its extent may vary across categories and depends on language proximity and pre-training data size.

AB - The success of multilingual pre-trained models is underpinned by their ability to learn representations shared by multiple languages even in absence of any explicit supervision. However, it remains unclear how these models learn to generalise across languages. In this work, we conjecture that multilingual pretrained models can derive language-universal abstractions about grammar. In particular, we investigate whether morphosyntactic information is encoded in the same subset of neurons in different languages. We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe. Our findings show that the cross-lingual overlap between neurons is significant, but its extent may vary across categories and depends on language proximity and pre-training data size.

UR - http://www.scopus.com/inward/record.url?scp=85138357220&partnerID=8YFLogxK

U2 - 10.18653/v1/2022.naacl-main.114

DO - 10.18653/v1/2022.naacl-main.114

M3 - Article in proceedings

AN - SCOPUS:85138357220

SP - 1589

EP - 1598

BT - Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

PB - Association for Computational Linguistics (ACL)

Y2 - 10 July 2022 through 15 July 2022

ER -
