Do end-to-end speech recognition models care about context?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

The two most common paradigms for end-to-end speech recognition are connectionist temporal classification (CTC) and attention-based encoder-decoder (AED) models. It has been argued that the latter is better suited for learning an implicit language model. We test this hypothesis by measuring temporal context sensitivity and by evaluating how the models perform when the amount of contextual information in the audio input is constrained. We find that the AED model is indeed more context sensitive, but that the gap can be closed by adding self-attention to the CTC model. Furthermore, the two models perform similarly when contextual information is constrained. Finally, in contrast to previous research, our results show that the CTC model is highly competitive on WSJ and LibriSpeech without the help of an external language model.
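To make the gap-closing idea concrete, the sketch below shows one way to place a self-attention layer on top of a recurrent CTC encoder, so that every frame can attend to the full utterance rather than only its recurrent context. This is a minimal PyTorch sketch under assumed settings; the layer sizes, feature dimensions, vocabulary, and the CTCEncoder class itself are illustrative, not the authors' actual architecture or configuration.

import torch
import torch.nn as nn

class CTCEncoder(nn.Module):
    """Hypothetical CTC acoustic model with an optional self-attention layer."""

    def __init__(self, n_mels=80, hidden=256, vocab=32, use_self_attention=False):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=3,
                            batch_first=True, bidirectional=True)
        self.use_self_attention = use_self_attention
        if use_self_attention:
            # Self-attention widens the temporal context: each frame can
            # attend to the whole utterance, not just nearby recurrent state.
            self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                              batch_first=True)
        self.out = nn.Linear(2 * hidden, vocab)  # vocab includes the CTC blank

    def forward(self, feats):
        # feats: (batch, time, n_mels) log-mel filterbank features
        x, _ = self.lstm(feats)
        if self.use_self_attention:
            attn_out, _ = self.attn(x, x, x)
            x = x + attn_out  # residual connection
        return self.out(x).log_softmax(dim=-1)  # (batch, time, vocab)

# Training step with the standard CTC loss (blank index 0 by convention here):
model = CTCEncoder(use_self_attention=True)
feats = torch.randn(4, 200, 80)           # 4 utterances, 200 frames each
targets = torch.randint(1, 32, (4, 30))   # label indices; 0 reserved for blank
input_lens = torch.full((4,), 200)
target_lens = torch.full((4,), 30)
log_probs = model(feats).transpose(0, 1)  # nn.CTCLoss expects (time, batch, vocab)
loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lens, target_lens)

Toggling use_self_attention off yields a purely recurrent CTC baseline, which is one way to probe how much of the model's context sensitivity comes from the attention layer.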

Original language: English
Title of host publication: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2020-October
Publisher: International Speech Communication Association (ISCA)
Publication date: 2020
Pages: 4352-4356
DOIs
Publication status: Published - 2020
Event: 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020 - Shanghai, China
Duration: 25 Oct 2020 - 29 Oct 2020

Conference

Conference: 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020
Country: China
City: Shanghai
Period: 25/10/2020 - 29/10/2020
Sponsors: Alibaba Group, Amazon Alexa, Apple, et al., Intel, Magic Data

Research areas

• Attention-based encoder-decoder, Automatic speech recognition, Connectionist temporal classification, End-to-end speech recognition

