Lessons learned in multilingual grounded language learning

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Lessons learned in multilingual grounded language learning. / Kádár, Ákos; Elliott, Desmond; Côté, Marc-Alexandre; Chrupala, Grzegorz; Alishahi, Afra.

Proceedings of the 22nd Conference on Computational Natural Language Learning. Association for Computational Linguistics, 2018. p. 402-412.

Harvard

Kádár, Á, Elliott, D, Côté, M-A, Chrupala, G & Alishahi, A 2018, Lessons learned in multilingual grounded language learning. in Proceedings of the 22nd Conference on Computational Natural Language Learning. Association for Computational Linguistics, pp. 402-412, 22nd Conference on Computational Natural Language Learning (CoNLL 2018), Brussels, Belgium, 31/10/2018.

APA

Kádár, Á., Elliott, D., Côté, M-A., Chrupala, G., & Alishahi, A. (2018). Lessons learned in multilingual grounded language learning. In Proceedings of the 22nd Conference on Computational Natural Language Learning (pp. 402-412). Association for Computational Linguistics.

Vancouver

Kádár Á, Elliott D, Côté M-A, Chrupala G, Alishahi A. Lessons learned in multilingual grounded language learning. In Proceedings of the 22nd Conference on Computational Natural Language Learning. Association for Computational Linguistics. 2018. p. 402-412.

Author

Kádár, Ákos ; Elliott, Desmond ; Côté, Marc-Alexandre ; Chrupala, Grzegorz ; Alishahi, Afra. / Lessons learned in multilingual grounded language learning. Proceedings of the 22nd Conference on Computational Natural Language Learning. Association for Computational Linguistics, 2018. pp. 402-412

Bibtex

@inproceedings{937709f484784c409d1d5643104d03de,
title = "Lessons learned in multilingual grounded language learning",
abstract = "Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language. Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. We demonstrate that a multilingual model can be trained equally well on either translations or comparable sentence pairs, and that annotating the same set of images in multiple languages enables further improvements via an additional caption-caption ranking objective.",
author = "{\'A}kos K{\'a}d{\'a}r and Desmond Elliott and Marc-Alexandre C{\^o}t{\'e} and Grzegorz Chrupala and Afra Alishahi",
year = "2018",
language = "English",
isbn = "978-1-948087-72-8",
pages = "402--412",
booktitle = "Proceedings of the 22nd Conference on Computational Natural Language Learning",
publisher = "Association for Computational Linguistics",
note = "22nd Conference on Computational Natural Language Learning (CoNLL 2018) ; Conference date: 31-10-2018 Through 01-11-2018",
}

RIS

TY - GEN

T1 - Lessons learned in multilingual grounded language learning

AU - Kádár, Ákos

AU - Elliott, Desmond

AU - Côté, Marc-Alexandre

AU - Chrupala, Grzegorz

AU - Alishahi, Afra

PY - 2018

Y1 - 2018

N2 - Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language. Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. We demonstrate that a multilingual model can be trained equally well on either translations or comparable sentence pairs, and that annotating the same set of images in multiple languages enables further improvements via an additional caption-caption ranking objective.

AB - Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language. Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. We demonstrate that a multilingual model can be trained equally well on either translations or comparable sentence pairs, and that annotating the same set of images in multiple languages enables further improvements via an additional caption-caption ranking objective.

M3 - Article in proceedings

SN - 978-1-948087-72-8

SP - 402

EP - 412

BT - Proceedings of the 22nd Conference on Computational Natural Language Learning

PB - Association for Computational Linguistics

T2 - 22nd Conference on Computational Natural Language Learning (CoNLL 2018)

Y2 - 31 October 2018 through 1 November 2018

ER -
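
Note: the abstract refers to an additional caption-caption ranking objective on top of the usual image-caption ranking loss. As a rough illustration only, here is a minimal Python sketch of the standard bidirectional max-margin contrastive ranking loss that such visual-semantic embedding models typically use; the function name, margin value, and multilingual combination shown are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch of the kind of max-margin ranking objective the
# abstract describes (image-caption plus caption-caption ranking).
# Function names, the margin value, and the combination below are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def ranking_loss(a, b, margin=0.2):
    """Bidirectional max-margin loss over paired embeddings a[i] <-> b[i]
    (e.g. an image and its caption, or two captions of the same image
    in different languages)."""
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    scores = a @ b.t()                      # cosine similarity matrix
    pos = scores.diag().unsqueeze(1)        # scores of the matched pairs
    cost_a = (margin + scores - pos).clamp(min=0)       # rank b's given a[i]
    cost_b = (margin + scores - pos.t()).clamp(min=0)   # rank a's given b[j]
    mask = 1.0 - torch.eye(a.size(0), device=a.device)  # ignore matched pairs
    return ((cost_a + cost_b) * mask).sum()

# A multilingual model could then sum one image-caption term per language
# with a caption-caption term across languages, e.g.:
#   loss = ranking_loss(img, cap_en) + ranking_loss(img, cap_de) \
#        + ranking_loss(cap_en, cap_de)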
