Visual Definition Modeling: Challenging Vision & Language Models to Define Words and Objects
Research output: Contribution to journal › Conference article › Research › peer-review
Standard
Visual Definition Modeling: Challenging Vision & Language Models to Define Words and Objects. / Scarlini, Bianca; Pasini, Tommaso; Navigli, Roberto.
In: AAAI Conference on Artificial Intelligence, Vol. 36, No. 10, 2022, p. 11267-11275.
RIS
TY - GEN
T1 - Visual Definition Modeling: Challenging Vision & Language Models to Define Words and Objects
T2 - 36th AAAI Conference on Artificial Intelligence / 34th Conference on Innovative Applications of Artificial Intelligence / 12th Symposium on Educational Advances in Artificial Intelligence
AU - Scarlini, Bianca
AU - Pasini, Tommaso
AU - Navigli, Roberto
PY - 2022
Y1 - 2022
N2 - Architectures that model language and vision together have received much attention in recent years. Nonetheless, most tasks in this field focus on end-to-end applications without providing insights on whether it is the underlying semantics of visual objects or words that is captured. In this paper we draw on the established Definition Modeling paradigm and enhance it by grounding, for the first time, textual definitions to visual representations. We name this new task Visual Definition Modeling and put forward DEMETER and DIONYSUS, two benchmarks where, given an image as context, models have to generate a textual definition for a target being either i) a word that describes the image, or ii) an object patch therein. To measure the difficulty of our tasks we finetuned six different baselines and analyzed their performances, which show that a text-only encoder-decoder model is more effective than models pretrained for handling inputs of both modalities concurrently. This demonstrates the complexity of our benchmarks and encourages more research on text generation conditioned on multimodal inputs. The datasets for both benchmarks are available at https://github.com/SapienzaNLP/visual-definition-modeling as well as the code to reproduce our models.
AB - Architectures that model language and vision together have received much attention in recent years. Nonetheless, most tasks in this field focus on end-to-end applications without providing insights on whether it is the underlying semantics of visual objects or words that is captured. In this paper we draw on the established Definition Modeling paradigm and enhance it by grounding, for the first time, textual definitions to visual representations. We name this new task Visual Definition Modeling and put forward DEMETER and DIONYSUS, two benchmarks where, given an image as context, models have to generate a textual definition for a target being either i) a word that describes the image, or ii) an object patch therein. To measure the difficulty of our tasks we finetuned six different baselines and analyzed their performances, which show that a text-only encoder-decoder model is more effective than models pretrained for handling inputs of both modalities concurrently. This demonstrates the complexity of our benchmarks and encourages more research on text generation conditioned on multimodal inputs. The datasets for both benchmarks are available at https://github.com/SapienzaNLP/visual-definition-modeling as well as the code to reproduce our models.
U2 - 10.1609/aaai.v36i10.21377
DO - 10.1609/aaai.v36i10.21377
M3 - Conference article
VL - 36
SP - 11267
EP - 11275
JO - AAAI Conference on Artificial Intelligence
JF - AAAI Conference on Artificial Intelligence
SN - 2159-5399
IS - 10
Y2 - 22 February 2022 through 1 March 2022
ER -
ID: 337601998
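To make the task described in the abstract concrete, below is a minimal sketch of how a single instance might be framed for a text-only encoder-decoder baseline. The instance fields, the prompt format, and the choice of facebook/bart-base are illustrative assumptions, not the authors' actual data schema or models; the real datasets and training code are in the linked GitHub repository.

```python
# Minimal sketch of the Visual Definition Modeling task framing (assumptions only,
# not the authors' setup): given an image and a target word describing it, the
# model must generate a textual definition of the target.
from transformers import BartTokenizer, BartForConditionalGeneration

# Hypothetical benchmark instance. A text-only baseline ignores the image itself
# and sees only the textual side of the input.
instance = {
    "image_path": "images/dog_running.jpg",            # visual context (unused here)
    "target": "dog",                                    # word to be defined
    "caption": "a dog running across a grassy field",  # textual context for the image
    "definition": "a domesticated carnivorous mammal kept as a pet or for work",
}

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Serialize the target word plus its textual context into a single source string.
source = f"define: {instance['target']} context: {instance['caption']}"
inputs = tokenizer(source, return_tensors="pt")

# Generate a candidate definition. An untuned bart-base will produce poor output;
# in practice the model would first be fine-tuned on the benchmark's training split
# against the gold definitions.
output_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```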