Measuring the Diversity of Automatic Image Descriptions

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Automatic image description systems typically produce generic sentences that only make use of a small subset of the vocabulary available to them. In this paper, we consider the production of generic descriptions as a lack of diversity in the output, which we quantify using established metrics and two new metrics that frame image description as a word recall task. This framing allows us to evaluate system performance on the head of the vocabulary, as well as on the long tail, where system performance degrades. We use these metrics to examine the diversity of the sentences generated by nine state-of-the-art systems on the MS COCO data set. We find that the systems trained with maximum likelihood objectives produce less diverse output than those trained with additional adversarial objectives. However, the adversarially-trained models only produce more types from the head of the vocabulary and not the tail. Besides vocabulary-based methods, we also look at the compositional capacity of the systems, specifically their ability to create compound nouns and prepositional phrases of different lengths. We conclude that there is still much room for improvement, and offer a toolkit to measure progress towards the goal of generating more diverse image descriptions.
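The abstract describes vocabulary-based diversity measures and a word-recall framing split over the head and tail of the vocabulary. The sketch below is a minimal illustration of that idea, not the authors' released toolkit: it assumes whitespace tokenization, and the function names `vocab_stats` and `word_recall` as well as the `head_size` cutoff are illustrative assumptions.

```python
# Minimal sketch of two vocabulary-based diversity statistics: output vocabulary
# size (types, type-token ratio) and word recall against the reference vocabulary,
# reported separately for the head and the tail. Names here are illustrative.
from collections import Counter

def vocab_stats(descriptions):
    """Count tokens and distinct word types over a list of generated sentences."""
    tokens = [w.lower() for sent in descriptions for w in sent.split()]
    types = set(tokens)
    ttr = len(types) / len(tokens) if tokens else 0.0
    return {"tokens": len(tokens), "types": len(types), "type_token_ratio": ttr}

def word_recall(generated, references, head_size=1000):
    """Fraction of reference word types that appear at least once in the generated
    output, split into the head (most frequent reference words) and the tail."""
    ref_counts = Counter(w.lower() for sent in references for w in sent.split())
    gen_types = {w.lower() for sent in generated for w in sent.split()}
    ranked = [w for w, _ in ref_counts.most_common()]
    head, tail = set(ranked[:head_size]), set(ranked[head_size:])
    recall = lambda vocab: len(vocab & gen_types) / len(vocab) if vocab else 0.0
    return {"head_recall": recall(head), "tail_recall": recall(tail)}

if __name__ == "__main__":
    gen = ["a man riding a skateboard", "a man riding a horse"]
    ref = ["a skateboarder performs an ollie", "a cowboy rides a galloping horse"]
    print(vocab_stats(gen))                     # generic output -> few types, low TTR
    print(word_recall(gen, ref, head_size=5))   # low tail recall signals generic output
```

On real data the head/tail split would be computed over the full training vocabulary (e.g. the MS COCO references), so that a system producing only frequent words scores well on head recall but poorly on tail recall.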
Original language: English
Title: Proceedings of the 27th International Conference on Computational Linguistics
Number of pages: 12
Publisher: Association for Computational Linguistics
Publication date: 2018
Pages: 1730-1741
ISBN (Print): 978-1-948087-50-6
Status: Published - 2018
Event: 27th International Conference on Computational Linguistics - Santa Fe, USA
Duration: 20 Aug 2018 - 26 Aug 2018

Conference

Conference: 27th International Conference on Computational Linguistics
Country: USA
City: Santa Fe
Period: 20/08/2018 - 26/08/2018

