Joint emotion label space modeling for affect lexica

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Joint emotion label space modeling for affect lexica. / De Bruyne, Luna; Atanasova, Pepa; Augenstein, Isabelle.

In: Computer Speech and Language, Vol. 71, 101257, 01.2022.

Harvard

De Bruyne, L, Atanasova, P & Augenstein, I 2022, 'Joint emotion label space modeling for affect lexica', Computer Speech and Language, vol. 71, 101257. https://doi.org/10.1016/j.csl.2021.101257

APA

De Bruyne, L., Atanasova, P., & Augenstein, I. (2022). Joint emotion label space modeling for affect lexica. Computer Speech and Language, 71, Article 101257. https://doi.org/10.1016/j.csl.2021.101257

Vancouver

De Bruyne L, Atanasova P, Augenstein I. Joint emotion label space modeling for affect lexica. Computer Speech and Language. 2022 Jan;71:101257. https://doi.org/10.1016/j.csl.2021.101257

Author

De Bruyne, Luna ; Atanasova, Pepa ; Augenstein, Isabelle. / Joint emotion label space modeling for affect lexica. In: Computer Speech and Language. 2022; Vol. 71.

Bibtex

@article{4ba166fab2e8472cbe2c176aa8f70e39,
title = "Joint emotion label space modeling for affect lexica",
abstract = "Emotion lexica are commonly used resources to combat data poverty in automatic emotion detection. However, vocabulary coverage issues, differences in construction method and discrepancies in emotion framework and representation result in a heterogeneous landscape of emotion detection resources, calling for a unified approach to utilizing them. To combat this, we present an extended emotion lexicon of 30,273 unique entries, which is a result of merging eight existing emotion lexica by means of a multi-view variational autoencoder (VAE). We showed that a VAE is a valid approach for combining lexica with different label spaces into a joint emotion label space with a chosen number of dimensions, and that these dimensions are still interpretable. We tested the utility of the unified VAE lexicon by employing the lexicon values as features in an emotion detection model. We found that the VAE lexicon outperformed individual lexica, but contrary to our expectations, it did not outperform a naive concatenation of lexica, although it did contribute to the naive concatenation when added as an extra lexicon. Furthermore, using lexicon information as additional features on top of state-of-the-art language models usually resulted in a better performance than when no lexicon information was used.",
keywords = "Emotion detection, Emotion lexica, NLP, VAE",
author = "{De Bruyne}, Luna and Pepa Atanasova and Isabelle Augenstein",
year = "2022",
month = jan,
doi = "10.1016/j.csl.2021.101257",
language = "English",
volume = "71",
journal = "Computer Speech and Language",
issn = "0885-2308",
publisher = "Academic Press",
}

RIS

TY  - JOUR
T1  - Joint emotion label space modeling for affect lexica
AU  - De Bruyne, Luna
AU  - Atanasova, Pepa
AU  - Augenstein, Isabelle
PY  - 2022/1
Y1  - 2022/1
N2  - Emotion lexica are commonly used resources to combat data poverty in automatic emotion detection. However, vocabulary coverage issues, differences in construction method and discrepancies in emotion framework and representation result in a heterogeneous landscape of emotion detection resources, calling for a unified approach to utilizing them. To combat this, we present an extended emotion lexicon of 30,273 unique entries, which is a result of merging eight existing emotion lexica by means of a multi-view variational autoencoder (VAE). We showed that a VAE is a valid approach for combining lexica with different label spaces into a joint emotion label space with a chosen number of dimensions, and that these dimensions are still interpretable. We tested the utility of the unified VAE lexicon by employing the lexicon values as features in an emotion detection model. We found that the VAE lexicon outperformed individual lexica, but contrary to our expectations, it did not outperform a naive concatenation of lexica, although it did contribute to the naive concatenation when added as an extra lexicon. Furthermore, using lexicon information as additional features on top of state-of-the-art language models usually resulted in a better performance than when no lexicon information was used.
AB  - Emotion lexica are commonly used resources to combat data poverty in automatic emotion detection. However, vocabulary coverage issues, differences in construction method and discrepancies in emotion framework and representation result in a heterogeneous landscape of emotion detection resources, calling for a unified approach to utilizing them. To combat this, we present an extended emotion lexicon of 30,273 unique entries, which is a result of merging eight existing emotion lexica by means of a multi-view variational autoencoder (VAE). We showed that a VAE is a valid approach for combining lexica with different label spaces into a joint emotion label space with a chosen number of dimensions, and that these dimensions are still interpretable. We tested the utility of the unified VAE lexicon by employing the lexicon values as features in an emotion detection model. We found that the VAE lexicon outperformed individual lexica, but contrary to our expectations, it did not outperform a naive concatenation of lexica, although it did contribute to the naive concatenation when added as an extra lexicon. Furthermore, using lexicon information as additional features on top of state-of-the-art language models usually resulted in a better performance than when no lexicon information was used.
KW  - Emotion detection
KW  - Emotion lexica
KW  - NLP
KW  - VAE
U2  - 10.1016/j.csl.2021.101257
DO  - 10.1016/j.csl.2021.101257
M3  - Journal article
AN  - SCOPUS:85111060499
VL  - 71
JO  - Computer Speech and Language
JF  - Computer Speech and Language
SN  - 0885-2308
M1  - 101257
ER  -

ID: 300694897