Parameter sharing between dependency parsers for related languages

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed


Previous work has suggested that parameter sharing between transition-based neural dependency parsers for related languages can lead to better performance, but there is no consensus on which parameters to share. We present an evaluation of 27 different parameter sharing strategies across 10 languages, representing five pairs of related languages, each pair from a different language family. We find that sharing transition classifier parameters always helps, whereas the usefulness of sharing word and/or character LSTM parameters varies. Based on this result, we propose an architecture where the transition classifier is shared, and the sharing of word and character parameters is controlled by a parameter that can be tuned on validation data. This model is linguistically motivated and obtains significant improvements over a monolingually trained baseline. We also find that sharing transition classifier parameters helps when training a parser on unrelated language pairs, but that, for unrelated languages, sharing too many parameters does not help.
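To make the proposed sharing scheme concrete, here is a minimal sketch, assuming a PyTorch BiLSTM-based transition parser; it is not the authors' implementation. The class name PairParser, the share_encoder switch, and all dimensions are hypothetical; the character LSTM and the transition system itself are omitted, so only the division between shared and language-specific parameters is shown.

# Minimal sketch (illustrative, not the authors' code): the transition classifier
# is always shared across the language pair, while the word encoder is shared or
# language-specific depending on a switch tuned on validation data.
import torch
import torch.nn as nn

class PairParser(nn.Module):
    def __init__(self, vocab_sizes, share_encoder=True,
                 emb_dim=100, hidden_dim=200, n_transitions=4):
        super().__init__()
        # Language-specific word embeddings (the vocabularies differ).
        self.embeddings = nn.ModuleDict({
            lang: nn.Embedding(n, emb_dim) for lang, n in vocab_sizes.items()
        })
        if share_encoder:
            # One BiLSTM reused for both languages (shared parameters).
            shared = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
            self.encoders = nn.ModuleDict({lang: shared for lang in vocab_sizes})
        else:
            # Separate BiLSTMs, one per language.
            self.encoders = nn.ModuleDict({
                lang: nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
                for lang in vocab_sizes
            })
        # Transition classifier: always shared between the two languages.
        self.classifier = nn.Sequential(
            nn.Linear(4 * hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, n_transitions),
        )

    def encode(self, lang, token_ids):
        emb = self.embeddings[lang](token_ids)   # (batch, seq, emb_dim)
        states, _ = self.encoders[lang](emb)     # (batch, seq, 2 * hidden_dim)
        return states

    def score_transitions(self, stack_top, buffer_front):
        # The transition system itself (oracle, stack/buffer bookkeeping) is omitted;
        # the classifier scores transitions from two concatenated BiLSTM states.
        return self.classifier(torch.cat([stack_top, buffer_front], dim=-1))

# Example: a hypothetical related pair with vocabularies of 12k and 15k word types.
parser = PairParser({"lang_a": 12000, "lang_b": 15000}, share_encoder=True)
states = parser.encode("lang_a", torch.randint(0, 12000, (1, 8)))
scores = parser.score_transitions(states[:, 0], states[:, -1])  # (1, n_transitions)

Because the classifier is always shared, both languages must use the same transition inventory; only the embeddings and encoders can be made language-specific.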
Original language: English
Title: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics
Publication date: 2018
Pages: 4992-4997
Status: Published - 2018
Event: 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium
Duration: 31 Oct 2018 - 4 Nov 2018

Conference

Conference: 2018 Conference on Empirical Methods in Natural Language Processing
Country: Belgium
City: Brussels
Period: 31/10/2018 - 04/11/2018

ID: 214507219