Lessons learned in multilingual grounded language learning

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

  • Ákos Kádár
  • Desmond Elliott
  • Marc-Alexandre Côté
  • Grzegorz Chrupala
  • Afra Alishahi
Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language. Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. We demonstrate that a multilingual model can be trained equally well on either translations or comparable sentence pairs, and that annotating the same set of images in multiple languages enables further improvements via an additional caption-caption ranking objective.
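
The abstract refers to visual-semantic embedding models trained with ranking objectives. As a rough illustration, the sketch below implements a standard max-margin (sum-of-hinges) contrastive ranking loss in PyTorch, applied to image-caption pairs in two languages plus a caption-caption term of the kind mentioned above. The loss form, the margin value, the encoder stand-ins, and all tensor names are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def ranking_loss(a, b, margin=0.2):
    """Max-margin ranking loss over a batch of L2-normalised embeddings,
    where (a[i], b[i]) are matching pairs and every other in-batch
    combination serves as a negative example."""
    scores = a @ b.t()               # cosine similarities (a is rows, b is columns)
    pos = scores.diag().view(-1, 1)  # similarities of the matching pairs
    # Hinge cost for ranking a negative above the matching pair, in both directions.
    cost_b = (margin + scores - pos).clamp(min=0)      # negatives along b
    cost_a = (margin + scores - pos.t()).clamp(min=0)  # negatives along a
    eye = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_b.masked_fill(eye, 0).sum() + cost_a.masked_fill(eye, 0).sum()

# Stand-ins for encoder outputs; in practice these would come from an image
# encoder and one caption encoder per language, projecting into a shared space.
batch, dim = 32, 1024
img    = F.normalize(torch.randn(batch, dim), dim=1)
cap_en = F.normalize(torch.randn(batch, dim), dim=1)
cap_de = F.normalize(torch.randn(batch, dim), dim=1)

# Multilingual image-caption ranking, plus the caption-caption term that
# becomes available when the same images are captioned in several languages.
loss = (ranking_loss(img, cap_en)
        + ranking_loss(img, cap_de)
        + ranking_loss(cap_en, cap_de))
```
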
Original language: English
Title of host publication: Proceedings of the 22nd Conference on Computational Natural Language Learning
Number of pages: 11
Publisher: Association for Computational Linguistics
Publication date: 2018
Pages: 402-412
ISBN (Print): 978-1-948087-72-8
Publication status: Published - 2018
Event: 22nd Conference on Computational Natural Language Learning (CoNLL 2018) - Brussels, Belgium
Duration: 31 Oct 2018 - 1 Nov 2018
