Are Multilingual Sentiment Models Equally Right for the Right Reasons?
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Documents
- Fulltext
Final published version, 238 KB, PDF document
Multilingual NLP models provide potential solutions to the digital language divide, i.e., cross-language performance disparities. Early analyses of such models have indicated good performance across training languages and good generalization to unseen, related languages. This work examines whether, between related languages, multilingual models are equally right for the right reasons, i.e., if interpretability methods reveal that the models put emphasis on the same words as humans. To this end, we provide a new trilingual, parallel corpus of rationale annotations for English, Danish, and Italian sentiment analysis models and use it to benchmark models and interpretability methods. We propose rank-biased overlap as a better metric for comparing input token attributions to human rationale annotations. Our results show: (i) models generally perform well on the languages they are trained on, and align best with human rationales in these languages; (ii) performance is higher on English, even when not a source language, but this performance is not accompanied by higher alignment with human rationales, which suggests that language models favor English, but do not facilitate successful transfer of rationales.
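The metric proposed in the abstract, rank-biased overlap (RBO; Webber et al., 2010), compares two rankings (here, tokens ranked by attribution score vs. by human rationale) with geometrically decaying weight on lower ranks. A minimal sketch of the truncated form is below; this is an illustration of the general RBO definition, not the paper's exact implementation, and the function and parameter names are illustrative:

```python
def rbo(ranking1, ranking2, p=0.9):
    """Truncated rank-biased overlap between two ranked lists.

    At each depth d, the agreement is the size of the intersection of the
    top-d prefixes divided by d; agreements are combined with geometric
    weights p**(d-1), so higher ranks dominate. p in (0, 1) controls
    how top-weighted the comparison is.
    """
    k = min(len(ranking1), len(ranking2))
    seen1, seen2 = set(), set()
    score = 0.0
    for d in range(1, k + 1):
        seen1.add(ranking1[d - 1])
        seen2.add(ranking2[d - 1])
        overlap = len(seen1 & seen2)   # size of top-d prefix intersection
        score += (p ** (d - 1)) * overlap / d
    return (1 - p) * score
```

Note that this truncated form gives 1 - p**k (not 1) for two identical lists of length k; the extrapolated variant in the original RBO paper closes that gap by assuming the observed agreement continues beyond the evaluated depth.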
Original language | English |
---|---|
Title of host publication | Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP |
Publisher | Association for Computational Linguistics (ACL) |
Publication date | 2022 |
Pages | 131–141 |
Publication status | Published - 2022 |
Event | Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, Abu Dhabi, 8 Dec 2022 |
Workshop
Workshop | Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP |
---|---|
City | Abu Dhabi |
Period | 08/12/2022 → 08/12/2022 |
Links
- https://aclanthology.org/2022.blackboxnlp-1.11/ (final published version)