What makes the difference? An empirical comparison of fusion strategies for multimodal language analysis

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

What makes the difference? An empirical comparison of fusion strategies for multimodal language analysis. / Gkoumas, Dimitris; Li, Qiuchi; Lioma, Christina; Yu, Yijun; Song, Dawei.

In: Information Fusion, Vol. 66, 2021, p. 184-197.


Harvard

Gkoumas, D, Li, Q, Lioma, C, Yu, Y & Song, D 2021, 'What makes the difference? An empirical comparison of fusion strategies for multimodal language analysis', Information Fusion, vol. 66, pp. 184-197. https://doi.org/10.1016/j.inffus.2020.09.005

APA

Gkoumas, D., Li, Q., Lioma, C., Yu, Y., & Song, D. (2021). What makes the difference? An empirical comparison of fusion strategies for multimodal language analysis. Information Fusion, 66, 184-197. https://doi.org/10.1016/j.inffus.2020.09.005

Vancouver

Gkoumas D, Li Q, Lioma C, Yu Y, Song D. What makes the difference? An empirical comparison of fusion strategies for multimodal language analysis. Information Fusion. 2021;66:184-197. https://doi.org/10.1016/j.inffus.2020.09.005

Author

Gkoumas, Dimitris ; Li, Qiuchi ; Lioma, Christina ; Yu, Yijun ; Song, Dawei. / What makes the difference? An empirical comparison of fusion strategies for multimodal language analysis. In: Information Fusion. 2021 ; Vol. 66. pp. 184-197.

BibTeX

@article{b007cc9320334a6da01cfcfaf723d99a,
title = "What makes the difference? An empirical comparison of fusion strategies for multimodal language analysis",
abstract = "Multimodal video sentiment analysis is a rapidly growing area. It combines verbal (i.e., linguistic) and non-verbal modalities (i.e., visual, acoustic) to predict the sentiment of utterances. A recent trend has been geared towards different modality fusion models utilizing various attention, memory and recurrent components. However, a systematic investigation of how these components contribute to solving the problem, and of their limitations, has been lacking. This paper aims to fill this gap with the following key contributions. We present the first large-scale and comprehensive empirical comparison of eleven state-of-the-art (SOTA) modality fusion approaches on two video sentiment analysis tasks, using three SOTA benchmark corpora. An in-depth analysis of the results shows that, first, attention mechanisms are the most effective for modelling crossmodal interactions, yet they are computationally expensive. Second, additional levels of crossmodal interaction decrease performance. Third, positive sentiment utterances are the most challenging cases for all approaches. Finally, integrating context and utilizing the linguistic modality as a pivot for non-verbal modalities improve performance. We expect these findings to provide helpful insights and guidance for the development of more effective modality fusion models.",
keywords = "Emotion recognition, Multimodal human language understanding, Reproducibility in multimodal machine learning, Video sentiment analysis",
author = "Dimitris Gkoumas and Qiuchi Li and Christina Lioma and Yijun Yu and Dawei Song",
note = "Publisher Copyright: {\textcopyright} 2020 Elsevier B.V.",
year = "2021",
doi = "10.1016/j.inffus.2020.09.005",
language = "English",
volume = "66",
pages = "184--197",
journal = "Information Fusion",
issn = "1566-2535",
publisher = "Elsevier",

}

RIS

TY - JOUR

T1 - What makes the difference?

T2 - An empirical comparison of fusion strategies for multimodal language analysis

AU - Gkoumas, Dimitris

AU - Li, Qiuchi

AU - Lioma, Christina

AU - Yu, Yijun

AU - Song, Dawei

N1 - Publisher Copyright: © 2020 Elsevier B.V.

PY - 2021

Y1 - 2021

N2 - Multimodal video sentiment analysis is a rapidly growing area. It combines verbal (i.e., linguistic) and non-verbal modalities (i.e., visual, acoustic) to predict the sentiment of utterances. A recent trend has been geared towards different modality fusion models utilizing various attention, memory and recurrent components. However, a systematic investigation of how these components contribute to solving the problem, and of their limitations, has been lacking. This paper aims to fill this gap with the following key contributions. We present the first large-scale and comprehensive empirical comparison of eleven state-of-the-art (SOTA) modality fusion approaches on two video sentiment analysis tasks, using three SOTA benchmark corpora. An in-depth analysis of the results shows that, first, attention mechanisms are the most effective for modelling crossmodal interactions, yet they are computationally expensive. Second, additional levels of crossmodal interaction decrease performance. Third, positive sentiment utterances are the most challenging cases for all approaches. Finally, integrating context and utilizing the linguistic modality as a pivot for non-verbal modalities improve performance. We expect these findings to provide helpful insights and guidance for the development of more effective modality fusion models.

AB - Multimodal video sentiment analysis is a rapidly growing area. It combines verbal (i.e., linguistic) and non-verbal modalities (i.e., visual, acoustic) to predict the sentiment of utterances. A recent trend has been geared towards different modality fusion models utilizing various attention, memory and recurrent components. However, a systematic investigation of how these components contribute to solving the problem, and of their limitations, has been lacking. This paper aims to fill this gap with the following key contributions. We present the first large-scale and comprehensive empirical comparison of eleven state-of-the-art (SOTA) modality fusion approaches on two video sentiment analysis tasks, using three SOTA benchmark corpora. An in-depth analysis of the results shows that, first, attention mechanisms are the most effective for modelling crossmodal interactions, yet they are computationally expensive. Second, additional levels of crossmodal interaction decrease performance. Third, positive sentiment utterances are the most challenging cases for all approaches. Finally, integrating context and utilizing the linguistic modality as a pivot for non-verbal modalities improve performance. We expect these findings to provide helpful insights and guidance for the development of more effective modality fusion models.

KW - Emotion recognition

KW - Multimodal human language understanding

KW - Reproducibility in multimodal machine learning

KW - Video sentiment analysis

UR - http://www.scopus.com/inward/record.url?scp=85091217348&partnerID=8YFLogxK

U2 - 10.1016/j.inffus.2020.09.005

DO - 10.1016/j.inffus.2020.09.005

M3 - Journal article

AN - SCOPUS:85091217348

VL - 66

SP - 184

EP - 197

JO - Information Fusion

JF - Information Fusion

SN - 1566-2535

ER -

ID: 306691667
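
Note: the abstract reports that attention mechanisms, with the linguistic modality acting as a pivot for the non-verbal modalities, are the most effective fusion strategy. The sketch below is a minimal, illustrative crossmodal attention fusion module in that spirit only; it is not the authors' code, and the class name, feature dimensions and projection sizes are assumptions chosen purely for demonstration.

```python
# Illustrative sketch (not the paper's implementation): crossmodal attention fusion
# where the linguistic stream serves as the query ("pivot") attending over the
# acoustic and visual streams. All names and dimensions below are assumptions.
import torch
import torch.nn as nn

class CrossmodalAttentionFusion(nn.Module):
    def __init__(self, text_dim=300, audio_dim=74, visual_dim=35, d_model=128, n_heads=4):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.text_proj = nn.Linear(text_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.visual_proj = nn.Linear(visual_dim, d_model)
        # Text queries attend over the non-verbal modalities (text as pivot).
        self.text_to_audio = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.text_to_visual = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Utterance-level sentiment regression head.
        self.head = nn.Linear(3 * d_model, 1)

    def forward(self, text, audio, visual):
        # Each input: (batch, seq_len, modality_dim), aligned per utterance.
        t = self.text_proj(text)
        a = self.audio_proj(audio)
        v = self.visual_proj(visual)
        t2a, _ = self.text_to_audio(t, a, a)    # text attends to the acoustic stream
        t2v, _ = self.text_to_visual(t, v, v)   # text attends to the visual stream
        fused = torch.cat([t, t2a, t2v], dim=-1).mean(dim=1)  # pool over time
        return self.head(fused)                 # predicted sentiment score

# Example usage with random features (shapes are assumptions for demonstration):
model = CrossmodalAttentionFusion()
text = torch.randn(8, 20, 300)    # e.g. word embeddings
audio = torch.randn(8, 20, 74)    # e.g. acoustic descriptors
visual = torch.randn(8, 20, 35)   # e.g. facial features
print(model(text, audio, visual).shape)  # torch.Size([8, 1])
```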