Inducing Language-Agnostic Multilingual Representations

Research output: Contribution to journal › Journal article › Research

Standard

Inducing Language-Agnostic Multilingual Representations. / Zhao, Wei; Eger, Steffen; Bjerva, Johannes; Augenstein, Isabelle.

In: arXiv.org, Vol. CoRR 2020, 2020.


Harvard

Zhao, W, Eger, S, Bjerva, J & Augenstein, I 2020, 'Inducing Language-Agnostic Multilingual Representations', arXiv.org, vol. CoRR 2020. <https://arxiv.org/abs/2008.09112>

APA

Zhao, W., Eger, S., Bjerva, J., & Augenstein, I. (2020). Inducing Language-Agnostic Multilingual Representations. arXiv.org, CoRR 2020. https://arxiv.org/abs/2008.09112

Vancouver

Zhao W, Eger S, Bjerva J, Augenstein I. Inducing Language-Agnostic Multilingual Representations. arXiv.org. 2020;CoRR 2020.

Author

Zhao, Wei ; Eger, Steffen ; Bjerva, Johannes ; Augenstein, Isabelle. / Inducing Language-Agnostic Multilingual Representations. In: arXiv.org. 2020 ; Vol. CoRR 2020.

Bibtex

@article{3a3aa9d9e2174828af83a8a4f3e35bd5,
title = "Inducing Language-Agnostic Multilingual Representations",
abstract = "Multilingual representations have the potential to make cross-lingual systems available to the vast majority of languages in the world. However, they currently require large pretraining corpora, or assume access to typologically similar languages. In this work, we address these obstacles by removing language identity signals from multilingual embeddings. We examine three approaches for this: 1) re-aligning the vector spaces of target languages (all together) to a pivot source language; 2) removing language-specific means and variances, which yields better discriminativeness of embeddings as a by-product; and 3) normalizing input texts by removing morphological contractions and sentence reordering, thus yielding language-agnostic representations. We evaluate on the tasks of XNLI and reference-free MT evaluation of varying difficulty across 19 selected languages. Our experiments demonstrate the language-agnostic behavior of our multilingual representations, which manifest the potential of zero-shot cross-lingual transfer to distant and low-resource languages, and decrease the performance gap by 8.9 points (M-BERT) and 18.2 points (XLM-R) on average across all tasks and languages. We make our code and models available.",
author = "Wei Zhao and Steffen Eger and Johannes Bjerva and Isabelle Augenstein",
year = "2020",
language = "English",
volume = "CoRR 2020",
journal = "arXiv.org",

}
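The abstract's approach (2) — removing language-specific means and variances — can be illustrated with a minimal sketch. This is not the authors' released code; the toy data, dimensions, and function name below are illustrative assumptions. The idea is that each language's embedding cloud is standardized with its own statistics, so location and scale no longer encode language identity:

```python
import numpy as np

# Hypothetical embeddings: a few sentence vectors per language.
# Each language is given a distinct mean/scale to mimic a
# language-identity signal in the embedding space.
rng = np.random.default_rng(0)
embeddings = {
    "en": rng.normal(loc=0.5, scale=1.0, size=(8, 5)),
    "de": rng.normal(loc=-0.3, scale=2.0, size=(8, 5)),
}

def remove_language_stats(emb: np.ndarray) -> np.ndarray:
    """Standardize one language's embeddings with its own
    per-dimension mean and standard deviation, removing
    language-specific location and scale signals."""
    mean = emb.mean(axis=0, keepdims=True)
    std = emb.std(axis=0, keepdims=True) + 1e-8  # guard against zero std
    return (emb - mean) / std

normalized = {lang: remove_language_stats(e) for lang, e in embeddings.items()}
```

After this step, every language's embeddings have per-dimension mean approximately 0 and standard deviation approximately 1, so a classifier can no longer separate languages by those statistics alone.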

RIS

TY - JOUR

T1 - Inducing Language-Agnostic Multilingual Representations

AU - Zhao, Wei

AU - Eger, Steffen

AU - Bjerva, Johannes

AU - Augenstein, Isabelle

PY - 2020

Y1 - 2020

N2 - Multilingual representations have the potential to make cross-lingual systems available to the vast majority of languages in the world. However, they currently require large pretraining corpora, or assume access to typologically similar languages. In this work, we address these obstacles by removing language identity signals from multilingual embeddings. We examine three approaches for this: 1) re-aligning the vector spaces of target languages (all together) to a pivot source language; 2) removing language-specific means and variances, which yields better discriminativeness of embeddings as a by-product; and 3) normalizing input texts by removing morphological contractions and sentence reordering, thus yielding language-agnostic representations. We evaluate on the tasks of XNLI and reference-free MT evaluation of varying difficulty across 19 selected languages. Our experiments demonstrate the language-agnostic behavior of our multilingual representations, which manifest the potential of zero-shot cross-lingual transfer to distant and low-resource languages, and decrease the performance gap by 8.9 points (M-BERT) and 18.2 points (XLM-R) on average across all tasks and languages. We make our code and models available.

AB - Multilingual representations have the potential to make cross-lingual systems available to the vast majority of languages in the world. However, they currently require large pretraining corpora, or assume access to typologically similar languages. In this work, we address these obstacles by removing language identity signals from multilingual embeddings. We examine three approaches for this: 1) re-aligning the vector spaces of target languages (all together) to a pivot source language; 2) removing language-specific means and variances, which yields better discriminativeness of embeddings as a by-product; and 3) normalizing input texts by removing morphological contractions and sentence reordering, thus yielding language-agnostic representations. We evaluate on the tasks of XNLI and reference-free MT evaluation of varying difficulty across 19 selected languages. Our experiments demonstrate the language-agnostic behavior of our multilingual representations, which manifest the potential of zero-shot cross-lingual transfer to distant and low-resource languages, and decrease the performance gap by 8.9 points (M-BERT) and 18.2 points (XLM-R) on average across all tasks and languages. We make our code and models available.

M3 - Journal article

VL - CoRR 2020

JO - arXiv.org

JF - arXiv.org

ER -

ID: 254998886