Multilingual Multimodal Learning with Machine Translated Text

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Documents

  • Full text

    Publisher's published version, 1.87 MB, PDF document

Most vision-and-language pretraining research focuses on English tasks. However, the creation of multilingual multimodal evaluation datasets (e.g. Multi30K, xGQA, XVNLI, and MaRVL) poses a new challenge in finding high-quality training data that is both multilingual and multimodal. In this paper, we investigate whether machine translating English multimodal data can be an effective proxy for the lack of readily available multilingual data. We call this framework TD-MML: Translated Data for Multilingual Multimodal Learning, and it can be applied to any multimodal dataset and model. We apply it to both pretraining and fine-tuning data with a state-of-the-art model. To prevent models from learning from low-quality translated text, we propose two metrics for automatically removing such translations from the resulting datasets. In experiments on five tasks across 20 languages in the IGLUE benchmark, we show that translated data can provide a useful signal for multilingual multimodal learning, both at pretraining and fine-tuning time.
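The filtering step mentioned in the abstract can be made concrete with a short sketch. The two metrics actually proposed in the paper are defined in the full text; the heuristics below (a translation-to-source length ratio and a source-copy rate, with illustrative thresholds) are hypothetical stand-ins, included only to show how low-quality machine translations could be dropped from a dataset before pretraining or fine-tuning.

    # Illustrative Python sketch; the heuristics and thresholds are assumptions,
    # not the two metrics proposed in the paper.

    def length_ratio(source: str, translation: str) -> float:
        """Translated-to-source token ratio; extreme values often signal
        truncated or hallucinated translations."""
        return len(translation.split()) / max(len(source.split()), 1)

    def copy_rate(source: str, translation: str) -> float:
        """Fraction of source tokens copied verbatim into the translation;
        a high value suggests the text was left largely untranslated."""
        src = source.lower().split()
        tgt = set(translation.lower().split())
        return sum(tok in tgt for tok in src) / max(len(src), 1)

    def keep_translation(source: str, translation: str,
                         ratio_bounds=(0.5, 2.0), max_copy=0.6) -> bool:
        """Keep a translated caption only if both heuristics look plausible."""
        ratio = length_ratio(source, translation)
        return (ratio_bounds[0] <= ratio <= ratio_bounds[1]
                and copy_rate(source, translation) <= max_copy)

    # Example: drop a caption that was left untranslated by the MT system.
    pairs = [("a dog runs on the beach", "ein Hund rennt am Strand"),
             ("a dog runs on the beach", "a dog runs on the beach")]
    filtered = [(src, tgt) for src, tgt in pairs if keep_translation(src, tgt)]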
Original language: English
Title: Findings of the Association for Computational Linguistics: EMNLP 2022
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2022
Pages: 4178–4193
Status: Published - 2022
Event: The 2022 Conference on Empirical Methods in Natural Language Processing - Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 – 11 Dec 2022
Conference number: 17
https://2022.emnlp.org/

Conference

Conference: The 2022 Conference on Empirical Methods in Natural Language Processing
Number: 17
Location: Abu Dhabi
City: Abu Dhabi
Period: 07/12/2022 – 11/12/2022
Internet address: https://2022.emnlp.org/
