Adversarial Evaluation of Multimodal Machine Translation

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Adversarial Evaluation of Multimodal Machine Translation. / Elliott, Desmond.

Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018. p. 2974-2978.


Harvard

Elliott, D 2018, Adversarial Evaluation of Multimodal Machine Translation. in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pp. 2974-2978, 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31/10/2018. <https://www.aclweb.org/anthology/D18-1.pdf>

APA

Elliott, D. (2018). Adversarial Evaluation of Multimodal Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 2974-2978). Association for Computational Linguistics. https://www.aclweb.org/anthology/D18-1.pdf

Vancouver

Elliott D. Adversarial Evaluation of Multimodal Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. 2018. p. 2974-2978

Author

Elliott, Desmond. / Adversarial Evaluation of Multimodal Machine Translation. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018. pp. 2974-2978

Bibtex

@inproceedings{7bc42407e70c46f48890c59f7ac30a30,
title = "Adversarial Evaluation of Multimodal Machine Translation",
abstract = "The promise of combining vision and language in multimodal machine translation is that systems will produce better translations by leveraging the image data. However, inconsistent results have led to uncertainty about whether the images actually improve translation quality. We present an adversarial evaluation method to directly examine the utility of the image data in this task. Our evaluation measures whether multimodal translation systems perform better given either the congruent image or a random incongruent image, in addition to the correct source language sentence. We find that two out of three publicly available systems are sensitive to this perturbation of the data, and recommend that all systems pass this evaluation in the future.",
author = "Desmond Elliott",
year = "2018",
language = "English",
isbn = "978-1-948087-84-1",
pages = "2974--2978",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
publisher = "Association for Computational Linguistics",
note = "Conference date: 31-10-2018 Through 04-11-2018",

}

RIS

TY - GEN

T1 - Adversarial Evaluation of Multimodal Machine Translation

AU - Elliott, Desmond

PY - 2018

Y1 - 2018

N2 - The promise of combining vision and language in multimodal machine translation is that systems will produce better translations by leveraging the image data. However, inconsistent results have led to uncertainty about whether the images actually improve translation quality. We present an adversarial evaluation method to directly examine the utility of the image data in this task. Our evaluation measures whether multimodal translation systems perform better given either the congruent image or a random incongruent image, in addition to the correct source language sentence. We find that two out of three publicly available systems are sensitive to this perturbation of the data, and recommend that all systems pass this evaluation in the future.

AB - The promise of combining vision and language in multimodal machine translation is that systems will produce better translations by leveraging the image data. However, inconsistent results have led to uncertainty about whether the images actually improve translation quality. We present an adversarial evaluation method to directly examine the utility of the image data in this task. Our evaluation measures whether multimodal translation systems perform better given either the congruent image or a random incongruent image, in addition to the correct source language sentence. We find that two out of three publicly available systems are sensitive to this perturbation of the data, and recommend that all systems pass this evaluation in the future.

M3 - Article in proceedings

SN - 978-1-948087-84-1

SP - 2974

EP - 2978

BT - Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

PB - Association for Computational Linguistics

Y2 - 31 October 2018 through 4 November 2018

ER -
