Are All Good Word Vector Spaces Isomorphic?
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Are All Good Word Vector Spaces Isomorphic? / Vulic, Ivan; Ruder, Sebastian; Søgaard, Anders.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2020. p. 3178–3192.
Bibtex
@inproceedings{vulic2020isomorphic,
  title     = {Are All Good Word Vector Spaces Isomorphic?},
  author    = {Vulic, Ivan and Ruder, Sebastian and S{\o}gaard, Anders},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  publisher = {Association for Computational Linguistics},
  year      = {2020},
  pages     = {3178--3192},
  doi       = {10.18653/v1/2020.emnlp-main.257},
}
RIS
TY - GEN
T1 - Are All Good Word Vector Spaces Isomorphic?
AU - Vulic, Ivan
AU - Ruder, Sebastian
AU - Søgaard, Anders
PY - 2020
Y1 - 2020
N2 - Existing algorithms for aligning cross-lingual word vector spaces assume that vector spaces are approximately isomorphic. As a result, they perform poorly or fail completely on non-isomorphic spaces. Such non-isomorphism has been hypothesised to result from typological differences between languages. In this work, we ask whether non-isomorphism is also crucially a sign of degenerate word vector spaces. We present a series of experiments across diverse languages which show that variance in performance across language pairs is not only due to typological differences, but can mostly be attributed to the size of the monolingual resources available, and to the properties and duration of monolingual training (e.g. “under-training”).
AB - Existing algorithms for aligning cross-lingual word vector spaces assume that vector spaces are approximately isomorphic. As a result, they perform poorly or fail completely on non-isomorphic spaces. Such non-isomorphism has been hypothesised to result from typological differences between languages. In this work, we ask whether non-isomorphism is also crucially a sign of degenerate word vector spaces. We present a series of experiments across diverse languages which show that variance in performance across language pairs is not only due to typological differences, but can mostly be attributed to the size of the monolingual resources available, and to the properties and duration of monolingual training (e.g. “under-training”).
U2 - 10.18653/v1/2020.emnlp-main.257
DO - 10.18653/v1/2020.emnlp-main.257
M3 - Article in proceedings
SP - 3178
EP - 3192
BT - Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
PB - Association for Computational Linguistics
T2 - The 2020 Conference on Empirical Methods in Natural Language Processing
Y2 - 16 November 2020 through 20 November 2020
ER -
ID: 258388356