Compositional Generalization in Image Captioning

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Image captioning models are usually evaluated on their ability to describe a held-out set of images, not on their ability to generalize to unseen concepts. We study the problem of compositional generalization, which measures how well a model composes unseen combinations of concepts when describing images. State-of-the-art image captioning models show poor generalization performance on this task. We propose a multi-task model that addresses this poor performance by combining caption generation with image--sentence ranking, and uses a decoding mechanism that re-ranks the captions according to their similarity to the image. This model generalizes substantially better to unseen combinations of concepts than state-of-the-art captioning models.
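The decoding mechanism described above can be sketched as a simple re-ranking step: a beam search produces candidate captions with log-probabilities, and an image--sentence ranking model scores each caption's similarity to the image. The interpolation weight, the function names, and the toy scores below are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of similarity-based re-ranking at decoding time.
# `similarity` stands in for an image--sentence ranking model's score;
# `alpha` is an assumed interpolation weight, not a value from the paper.

def rerank_captions(candidates, similarity, alpha=0.5):
    """Re-rank candidate captions by interpolating the caption model's
    log-probability with an image--sentence similarity score.

    candidates: list of (caption, log_prob) pairs from beam search
    similarity: maps a caption string to a similarity score for the image
    alpha: weight on the similarity term (hypothetical)
    """
    scored = [
        (caption, (1 - alpha) * log_prob + alpha * similarity(caption))
        for caption, log_prob in candidates
    ]
    # Highest combined score first.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy usage with made-up scores: the more specific caption has a lower
# log-probability but a higher image similarity, so it wins after re-ranking.
beam = [("a dog on a couch", -1.0), ("a black dog on a couch", -1.2)]
sim = {"a dog on a couch": 0.4, "a black dog on a couch": 0.9}.get
best_caption, best_score = rerank_captions(beam, sim)[0]
```

In this toy example the combined score of the second caption (0.5 · (−1.2) + 0.5 · 0.9 = −0.15) exceeds that of the first (−0.3), so re-ranking promotes the caption that better matches the image.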
Original language: English
Title of host publication: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Number of pages: 12
Publisher: Association for Computational Linguistics
Publication date: 1 Nov 2019
Publication status: Published - 1 Nov 2019
Event: 23rd Conference on Computational Natural Language Learning - Hong Kong, China
Duration: 3 Nov 2019 - 4 Nov 2019


Conference: 23rd Conference on Computational Natural Language Learning
City: Hong Kong

