Quantum-inspired multimodal fusion for video sentiment analysis

Research output: Contribution to journal › Journal article › Research › peer-review

Documents

  • Fulltext: Submitted manuscript, 843 KB, PDF document

We tackle the crucial challenge of fusing features from different modalities for multimodal sentiment analysis. Existing approaches, mostly based on neural networks, largely model multimodal interactions in an implicit, hard-to-interpret manner. We address this limitation by drawing inspiration from quantum theory, which offers principled methods for modeling complicated interactions and correlations. In our quantum-inspired framework, word interactions within a single modality and interactions across modalities are formulated with superposition and entanglement, respectively, at different stages. A complex-valued neural network implementation of the framework achieves results comparable to state-of-the-art systems on two benchmark video sentiment analysis datasets. At the same time, the model produces unimodal and bimodal sentiment directly, allowing the entangled decision to be interpreted.
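The two quantum notions the abstract names can be illustrated concretely. The following is a minimal NumPy sketch, not the authors' implementation: it assumes a two-dimensional sentiment basis, made-up complex amplitudes, and the standard tensor-product and Born-rule constructions; the function names (`superpose`, `entangle`, `measure`) are hypothetical.

```python
import numpy as np

def superpose(amplitudes):
    """Normalize complex amplitudes into a unit-norm state: a 'superposition'
    over the sentiment basis {|positive>, |negative>}."""
    a = np.asarray(amplitudes, dtype=complex)
    return a / np.linalg.norm(a)

def entangle(state_a, state_b):
    """Kronecker (tensor) product of two unimodal states, giving a joint
    bimodal state; non-separable joint states model cross-modal interaction."""
    return np.kron(state_a, state_b)

def measure(state):
    """Born rule: probability of each basis outcome is the squared magnitude
    of its amplitude."""
    p = np.abs(state) ** 2
    return p / p.sum()

# Made-up amplitudes for a textual state and an acoustic state.
text = superpose([0.8 + 0.1j, 0.3 - 0.2j])
audio = superpose([0.5, 0.5 + 0.5j])

joint = entangle(text, audio)   # 4-d state over {pos,pos}, {pos,neg}, {neg,pos}, {neg,neg}
probs = measure(joint)          # joint (bimodal) outcome probabilities, summing to 1
print(probs)
```

Measuring `text` or `audio` alone yields a unimodal sentiment distribution, while measuring `joint` yields the bimodal one — mirroring how the paper reads off unimodal and bimodal sentiment from the same model.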

Original language: English
Journal: Information Fusion
Volume: 65
Pages (from-to): 58-71
ISSN: 1566-2535
DOIs
Publication status: Published - 2021

Bibliographical note

Publisher Copyright:
© 2020 Elsevier B.V.

Research areas

  • Machine learning, Multimodal sentiment analysis, Quantum theory

ID: 306691917