Probing Cross-Modal Representations in Multi-Step Relational Reasoning

Publication: Contribution to book/anthology/report › Conference article in proceedings › Research › peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 674 KB, PDF document

We investigate the representations learned by vision and language models in tasks that require relational reasoning. Focusing on the problem of assessing the relative size of objects in abstract visual contexts, we analyse both one-step and two-step reasoning. For the latter, we construct a new dataset of three-image scenes and define a task that requires reasoning at the level of the individual images and across images in a scene. We probe the learned model representations using diagnostic classifiers. Our experiments show that pretrained multimodal transformer-based architectures can perform higher-level relational reasoning, and are able to learn representations for novel tasks and data that are very different from what was seen in pretraining.
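As a rough illustration of the probing methodology described in the abstract, the sketch below trains a simple linear diagnostic classifier on frozen model representations; if the probe decodes a relational label well above chance, that information is linearly present in the representations. All names, dimensions, and the use of logistic regression are illustrative assumptions, not the authors' exact setup, and the random arrays stand in for real extracted features and labels.

    # Minimal diagnostic-probe sketch (assumed setup, not the paper's exact configuration).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Hypothetical inputs: one fixed-size vector per scene, extracted from a frozen
    # pretrained vision-and-language transformer, plus a binary relation label
    # (e.g. "object A is larger than object B"). Random data stands in here.
    rng = np.random.default_rng(0)
    train_reprs = rng.normal(size=(1000, 768))    # frozen model representations
    train_labels = rng.integers(0, 2, size=1000)  # relational reasoning labels
    test_reprs = rng.normal(size=(200, 768))
    test_labels = rng.integers(0, 2, size=200)

    # Linear probe: fit on training representations, evaluate decoding accuracy on held-out data.
    probe = LogisticRegression(max_iter=1000)
    probe.fit(train_reprs, train_labels)
    preds = probe.predict(test_reprs)
    print("probe accuracy:", accuracy_score(test_labels, preds))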
Original language: English
Title: Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
Publisher: Association for Computational Linguistics
Publication date: 2021
Pages: 152-162
DOI
Status: Published - 2021
Event: 6th Workshop on Representation Learning for NLP (RepL4NLP-2021) - Online, Online
Duration: 1 Aug 2021 → 1 Aug 2021

Conference

Conference: 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
Location: Online
City: Online
Period: 01/08/2021 → 01/08/2021

ID: 299038005