Combining Sentiment Lexica with a Multi-View Variational Autoencoder

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


When assigning quantitative labels to a dataset, different methodologies may rely on different scales. In particular, when assigning polarities to words in a sentiment lexicon, annotators may use binary, categorical, or continuous labels. Naturally, it is of interest to unify these labels from disparate scales, both to achieve maximal coverage over words and to create a single, more robust sentiment lexicon, while retaining scale coherence. We introduce a generative model of sentiment lexica to combine disparate scales into a common latent representation. We realize this model with a novel multi-view variational autoencoder (VAE), called SentiVAE. We evaluate our approach via a downstream text classification task involving nine English-language sentiment analysis datasets; our representation outperforms six individual sentiment lexica, as well as a straightforward combination thereof.
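To make the scale-unification problem concrete, here is a naive baseline sketch (not the paper's SentiVAE model, and all names here are hypothetical): each lexicon's labels are deterministically rescaled onto a shared [-1, 1] polarity axis and averaged per word, whereas the paper instead learns a shared latent representation generatively.

```python
# Hypothetical illustration of the scale-unification problem.
# NOT the paper's model: SentiVAE learns a latent representation;
# this baseline just rescales and averages.

def to_common_scale(label, scale):
    """Map a lexicon label onto a shared [-1, 1] polarity scale."""
    if scale == "binary":        # e.g. labels in {-1, +1}
        return float(label)
    if scale == "categorical":   # e.g. negative / neutral / positive
        return {"negative": -1.0, "neutral": 0.0, "positive": 1.0}[label]
    if scale == "continuous":    # e.g. a score in [0, 1]
        return 2.0 * label - 1.0
    raise ValueError(f"unknown scale: {scale}")

def combine(entries):
    """Average the rescaled polarities a word receives across lexica."""
    scores = [to_common_scale(label, scale) for label, scale in entries]
    return sum(scores) / len(scores)

# A word labelled in three lexica that use different scales:
polarity = combine([(1, "binary"),
                    ("positive", "categorical"),
                    (0.9, "continuous")])
print(polarity)  # ≈ 0.93
```

Such a baseline discards annotator uncertainty and scale-specific structure, which is the kind of information a latent-variable model like SentiVAE can retain.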
Original language: English
Title of host publication: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Publisher: Association for Computational Linguistics
Publication date: 2019
Pages: 635-640
Publication status: Published - 2019
Event: 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - NAACL-HLT 2019 - Minneapolis, United States
Duration: 3 Jun 2019 - 7 Jun 2019

