Textual Supervision for Visually Grounded Spoken Language Understanding

Publication: Contribution to book/anthology/report › Conference article in proceedings › Research › peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 502 KB, PDF document

Visually-grounded models of spoken language understanding extract semantic information directly from speech, without relying on transcriptions. This is useful for low-resource languages, where transcriptions can be expensive or impossible to obtain. Recent work showed that these models can be improved if transcriptions are available at training time. However, it is not clear how an end-to-end approach compares to a traditional pipeline-based approach when transcriptions are available. Comparing different strategies, we find that the pipeline approach works better when enough text is available. With low-resource languages in mind, we also show that translations can be used effectively in place of transcriptions, but that more data is needed to obtain similar results.
Original language: English
Title: Findings of the Association for Computational Linguistics: EMNLP 2020
Publisher: Association for Computational Linguistics
Publication date: 2020
Pages: 2698–2709
DOI
Status: Published - 2020
Event: Findings of the Association for Computational Linguistics: EMNLP 2020
Duration: 16 Nov 2020 – 20 Nov 2020

Conference

Conference: Findings of the Association for Computational Linguistics
Period: 16/11/2020 – 20/11/2020

    Research areas

  • cs.CL, cs.LG, cs.SD, eess.AS

ID: 305183788