Measuring the Diversity of Automatic Image Descriptions

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Standard

Measuring the Diversity of Automatic Image Descriptions. / van Miltenburg, Emiel; Elliott, Desmond; Vossen, Piek.

Proceedings of the 27th International Conference on Computational Linguistics. Association for Computational Linguistics, 2018. pp. 1730-1741.


Harvard

van Miltenburg, E, Elliott, D & Vossen, P 2018, Measuring the Diversity of Automatic Image Descriptions. in Proceedings of the 27th International Conference on Computational Linguistics. Association for Computational Linguistics, pp. 1730-1741, 27th International Conference on Computational Linguistics, Santa Fe, USA, 20/08/2018. <https://www.aclweb.org/anthology/C18-1.pdf>

APA

van Miltenburg, E., Elliott, D., & Vossen, P. (2018). Measuring the Diversity of Automatic Image Descriptions. In Proceedings of the 27th International Conference on Computational Linguistics (pp. 1730-1741). Association for Computational Linguistics. https://www.aclweb.org/anthology/C18-1.pdf

Vancouver

van Miltenburg E, Elliott D, Vossen P. Measuring the Diversity of Automatic Image Descriptions. In: Proceedings of the 27th International Conference on Computational Linguistics. Association for Computational Linguistics. 2018. p. 1730-1741

Author

van Miltenburg, Emiel ; Elliott, Desmond ; Vossen, Piek. / Measuring the Diversity of Automatic Image Descriptions. Proceedings of the 27th International Conference on Computational Linguistics. Association for Computational Linguistics, 2018. pp. 1730-1741

Bibtex

@inproceedings{e48ffa013ad64815ad2bdd95cb8cd4f9,
title = "Measuring the Diversity of Automatic Image Descriptions",
abstract = "Automatic image description systems typically produce generic sentences that only make use of a small subset of the vocabulary available to them. In this paper, we consider the production of generic descriptions as a lack of diversity in the output, which we quantify using established metrics and two new metrics that frame image description as a word recall task. This framing allows us to evaluate system performance on the head of the vocabulary, as well as on the long tail, where system performance degrades. We use these metrics to examine the diversity of the sentences generated by nine state-of-the-art systems on the MS COCO data set. We find that the systems trained with maximum likelihood objectives produce less diverse output than those trained with additional adversarial objectives. However, the adversarially-trained models only produce more types from the head of the vocabulary and not the tail. Besides vocabulary-based methods, we also look at the compositional capacity of the systems, specifically their ability to create compound nouns and prepositional phrases of different lengths. We conclude that there is still much room for improvement, and offer a toolkit to measure progress towards the goal of generating more diverse image descriptions.",
author = "{van Miltenburg}, Emiel and Desmond Elliott and Piek Vossen",
year = "2018",
language = "English",
isbn = "978-1-948087-50-6",
pages = "1730--1741",
booktitle = "Proceedings of the 27th International Conference on Computational Linguistics",
publisher = "Association for Computational Linguistics",
note = "Conference date: 20-08-2018 through 26-08-2018",

}
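The abstract frames diversity measurement as vocabulary coverage plus word recall on the head and tail of the vocabulary. The paper's own toolkit defines the actual metrics; the snippet below is only a minimal, hypothetical sketch of that idea: a type-token ratio for generated captions, and recall against the most frequent ("head") versus remaining ("tail") training-vocabulary words. The function name, the `head_size` cutoff, and the whitespace tokenization are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def diversity_stats(system_captions, train_vocab_counts, head_size=1000):
    """Hypothetical sketch of vocabulary-based diversity metrics.

    system_captions: list of generated caption strings.
    train_vocab_counts: dict mapping training words to frequencies.
    head_size: how many top-frequency words count as the vocabulary "head".
    """
    # Collect all tokens produced by the system (naive whitespace split).
    tokens = [w for cap in system_captions for w in cap.lower().split()]
    types = set(tokens)
    # Type-token ratio: a classic (length-sensitive) diversity measure.
    ttr = len(types) / len(tokens) if tokens else 0.0

    # Rank training words by frequency; split into head and tail.
    ranked = [w for w, _ in Counter(train_vocab_counts).most_common()]
    head, tail = set(ranked[:head_size]), set(ranked[head_size:])

    # Word recall: fraction of head/tail vocabulary the system ever produced.
    head_recall = len(types & head) / len(head) if head else 0.0
    tail_recall = len(types & tail) / len(tail) if tail else 0.0
    return ttr, head_recall, tail_recall
```

On this view, the paper's finding that adversarially trained models gain diversity only on the head would show up as a higher head recall with an unchanged tail recall.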

RIS

TY - GEN

T1 - Measuring the Diversity of Automatic Image Descriptions

AU - van Miltenburg, Emiel

AU - Elliott, Desmond

AU - Vossen, Piek

PY - 2018

Y1 - 2018

N2 - Automatic image description systems typically produce generic sentences that only make use of a small subset of the vocabulary available to them. In this paper, we consider the production of generic descriptions as a lack of diversity in the output, which we quantify using established metrics and two new metrics that frame image description as a word recall task. This framing allows us to evaluate system performance on the head of the vocabulary, as well as on the long tail, where system performance degrades. We use these metrics to examine the diversity of the sentences generated by nine state-of-the-art systems on the MS COCO data set. We find that the systems trained with maximum likelihood objectives produce less diverse output than those trained with additional adversarial objectives. However, the adversarially-trained models only produce more types from the head of the vocabulary and not the tail. Besides vocabulary-based methods, we also look at the compositional capacity of the systems, specifically their ability to create compound nouns and prepositional phrases of different lengths. We conclude that there is still much room for improvement, and offer a toolkit to measure progress towards the goal of generating more diverse image descriptions.

AB - Automatic image description systems typically produce generic sentences that only make use of a small subset of the vocabulary available to them. In this paper, we consider the production of generic descriptions as a lack of diversity in the output, which we quantify using established metrics and two new metrics that frame image description as a word recall task. This framing allows us to evaluate system performance on the head of the vocabulary, as well as on the long tail, where system performance degrades. We use these metrics to examine the diversity of the sentences generated by nine state-of-the-art systems on the MS COCO data set. We find that the systems trained with maximum likelihood objectives produce less diverse output than those trained with additional adversarial objectives. However, the adversarially-trained models only produce more types from the head of the vocabulary and not the tail. Besides vocabulary-based methods, we also look at the compositional capacity of the systems, specifically their ability to create compound nouns and prepositional phrases of different lengths. We conclude that there is still much room for improvement, and offer a toolkit to measure progress towards the goal of generating more diverse image descriptions.

M3 - Article in proceedings

SN - 978-1-948087-50-6

SP - 1730

EP - 1741

BT - Proceedings of the 27th International Conference on Computational Linguistics

PB - Association for Computational Linguistics

Y2 - 20 August 2018 through 26 August 2018

ER -
