IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages
Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed
Standard
IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages. / Bugliarello, Emanuele; Liu, Fangyu; Pfeiffer, Jonas; Reddy, Siva; Elliott, Desmond; Ponti, Edoardo Maria; Vulić, Ivan.
Proceedings of the 39th International Conference on Machine Learning. PMLR, 2022. pp. 2370-2392 (Proceedings of Machine Learning Research, Vol. 162).
Harvard
Bugliarello, E, Liu, F, Pfeiffer, J, Reddy, S, Elliott, D, Ponti, EM & Vulić, I 2022, IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages. in Proceedings of the 39th International Conference on Machine Learning. PMLR, Proceedings of Machine Learning Research, vol. 162, pp. 2370-2392, 39th International Conference on Machine Learning (ICML 2022), Baltimore, MD, USA, 17/07/2022. <https://proceedings.mlr.press/v162/bugliarello22a.html>
Bibtex
@inproceedings{pmlr-v162-bugliarello22a,
  title = {IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages},
  author = {Bugliarello, Emanuele and Liu, Fangyu and Pfeiffer, Jonas and Reddy, Siva and Elliott, Desmond and Ponti, Edoardo Maria and Vuli{\'c}, Ivan},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  series = {Proceedings of Machine Learning Research},
  volume = {162},
  pages = {2370--2392},
  publisher = {PMLR},
  year = {2022},
  url = {https://proceedings.mlr.press/v162/bugliarello22a.html},
}
RIS
TY - GEN
T1 - IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages
T2 - 39th International Conference on Machine Learning (ICML 2022)
AU - Bugliarello, Emanuele
AU - Liu, Fangyu
AU - Pfeiffer, Jonas
AU - Reddy, Siva
AU - Elliott, Desmond
AU - Ponti, Edoardo Maria
AU - Vulić, Ivan
PY - 2022
Y1 - 2022
N2 - Reliable evaluation benchmarks designed for replicability and comprehensiveness have driven progress in machine learning. Due to the lack of a multilingual benchmark, however, vision-and-language research has mostly focused on English language tasks. To fill this gap, we introduce the Image-Grounded Language Understanding Evaluation benchmark. IGLUE brings together (by both aggregating pre-existing datasets and creating new ones) visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages. Our benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting, but also in newly defined few-shot learning setups. Based on the evaluation of the available state-of-the-art models, we find that translate-test transfer is superior to zero-shot transfer and that few-shot learning is hard to harness for many tasks. Moreover, downstream performance is partially explained by the amount of available unlabelled textual data for pretraining, and only weakly by the typological distance of target-source languages. We hope to encourage future research efforts in this area by releasing the benchmark to the community.
AB - Reliable evaluation benchmarks designed for replicability and comprehensiveness have driven progress in machine learning. Due to the lack of a multilingual benchmark, however, vision-and-language research has mostly focused on English language tasks. To fill this gap, we introduce the Image-Grounded Language Understanding Evaluation benchmark. IGLUE brings together (by both aggregating pre-existing datasets and creating new ones) visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages. Our benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting, but also in newly defined few-shot learning setups. Based on the evaluation of the available state-of-the-art models, we find that translate-test transfer is superior to zero-shot transfer and that few-shot learning is hard to harness for many tasks. Moreover, downstream performance is partially explained by the amount of available unlabelled textual data for pretraining, and only weakly by the typological distance of target-source languages. We hope to encourage future research efforts in this area by releasing the benchmark to the community.
M3 - Article in proceedings
T3 - Proceedings of Machine Learning Research
SP - 2370
EP - 2392
BT - Proceedings of the 39th International Conference on Machine Learning
PB - PMLR
Y2 - 17 July 2022 through 23 July 2022
ER -