Grounded Sequence to Sequence Transduction

Research output: Contribution to journal › Journal article › Research › peer-review

  • Lucia Specia
  • Loïc Barrault
  • Ozan Caglayan
  • Amanda Duarte
  • Desmond Elliott
  • Spandana Gella
  • Nils Holzenberger
  • Chiraag Lala
  • Sun Jae Lee
  • Jindřich Libovický
  • Pranava Madhyastha
  • Florian Metze
  • Karl Mulligan
  • Alissa Ostapenko
  • Shruti Palaskar
  • Ramon Sanabria
  • Josiah Wang
  • Raman Arora

Speech recognition and machine translation have made major progress over the past decades, providing practical systems that map one language sequence to another. Although multiple modalities such as sound and video are becoming increasingly available, state-of-the-art systems are inherently unimodal: they take a single modality, either speech or text, as input. Evidence from human learning suggests that additional modalities can provide disambiguating signals crucial for many language tasks. In this article, we describe the How2 dataset, a large, open-domain collection of videos with transcriptions and their translations. We then show how this single dataset can be used to develop systems for a variety of language tasks and present a number of models meant as starting points. Across tasks, we find that building multimodal architectures that perform better than their unimodal counterparts remains a challenge. This leaves plenty of room for the exploration of more advanced solutions that fully exploit the multimodal nature of the How2 dataset, and for multimodal learning with other datasets more generally.
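
The abstract contrasts unimodal and multimodal sequence-to-sequence systems without fixing an architecture. As a purely illustrative sketch, the PyTorch snippet below shows one simple way a video modality could be fused into a text sequence-to-sequence model; the class name, layer sizes, and additive late-fusion strategy are assumptions for demonstration and are not the models evaluated in the article.

    # Minimal multimodal seq2seq sketch (PyTorch). All names, sizes, and the
    # additive late-fusion choice are illustrative assumptions, not the
    # article's models.
    import torch
    import torch.nn as nn

    class MultimodalSeq2Seq(nn.Module):
        def __init__(self, vocab_size=1000, emb_dim=128, hid_dim=256, video_dim=2048):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
            # Project pooled video features into the decoder's hidden space.
            self.video_proj = nn.Linear(video_dim, hid_dim)
            self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, src_tokens, video_feats, tgt_tokens):
            # src_tokens: (B, S) token ids; video_feats: (B, T, video_dim)
            _, h = self.encoder(self.embed(src_tokens))           # h: (1, B, hid)
            v = self.video_proj(video_feats.mean(dim=1))          # pooled video: (B, hid)
            h = h + v.unsqueeze(0)                                # additive fusion
            dec_out, _ = self.decoder(self.embed(tgt_tokens), h)  # (B, T_tgt, hid)
            return self.out(dec_out)                              # logits over vocab

    model = MultimodalSeq2Seq()
    logits = model(torch.randint(0, 1000, (2, 7)),   # source token ids
                   torch.randn(2, 16, 2048),         # 16 video feature frames
                   torch.randint(0, 1000, (2, 5)))   # target ids (teacher forcing)
    print(logits.shape)  # torch.Size([2, 5, 1000])

Dropping the video branch (setting v to zero) recovers a plain unimodal seq2seq baseline, which mirrors the unimodal-versus-multimodal comparison the abstract describes.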

Original language: English
Article number: 9103248
Journal: IEEE Journal on Selected Topics in Signal Processing
Volume: 14
Issue number: 3
Pages (from-to): 577-591
ISSN: 1932-4553
DOIs
Publication status: Published - 2020

Research areas

  • Grounding, machine translation, multimodal machine learning, representation learning, speech recognition, summarization

ID: 250484073