Domain-Specific Word Embeddings with Structure Prediction

Publication: Contribution to journal › Journal article › Research › Peer-reviewed

Documents

  • Full text

    Publisher's published version, 1.62 MB, PDF document

Complementary to finding good general word embeddings, an important question for representation learning is to find dynamic word embeddings, for example, across time or domain. Current methods do not offer a way to use or predict information on the structure between sub-corpora, time, or domains, and dynamic embeddings can only be compared after post-alignment. We propose novel word embedding methods that provide general word representations for the whole corpus, domain-specific representations for each sub-corpus, sub-corpus structure, and embedding alignment simultaneously. We present an empirical evaluation on New York Times articles and two English Wikipedia datasets with articles on science and philosophy. Our method, called Word2Vec with Structure Prediction (W2VPred), outperforms baselines on general analogy tests, domain-specific analogy tests, and multiple specific word embedding evaluations, as well as in structure prediction performance when no structure is given a priori. As a use case in the field of Digital Humanities, we demonstrate how to raise novel research questions for high literature from the German Text Archive.
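The abstract only summarizes the approach at a high level. The snippet below is a minimal, hypothetical sketch of the general idea it describes (a shared general vector per word plus a per-domain offset, with a graph penalty tying related sub-corpora together), not the authors' W2VPred implementation; the toy vocabulary, variable names, adjacency matrix, and penalty form are illustrative assumptions only.

```python
# Hypothetical sketch: joint general + domain-specific skip-gram-style embeddings
# with a structure penalty linking related sub-corpora. Not the W2VPred code.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a shared vocabulary and two sub-corpora ("domains").
vocab = ["atom", "energy", "ethics", "reason", "theory"]
V, n_domains, dim = len(vocab), 2, 8
word2id = {w: i for i, w in enumerate(vocab)}

# Parameters: one general vector per word, plus a per-domain offset.
general = 0.1 * rng.standard_normal((V, dim))
offset = 0.01 * rng.standard_normal((n_domains, V, dim))

# Assumed adjacency between sub-corpora (the "structure"); domains 0 and 1 are linked.
structure = np.array([[0.0, 1.0],
                      [1.0, 0.0]])

def domain_vec(word_id, d):
    """Domain-specific embedding = general vector + domain offset."""
    return general[word_id] + offset[d, word_id]

def structure_penalty(lam=0.1):
    """Pull offsets of linked domains toward each other (graph smoothness)."""
    loss = 0.0
    for a in range(n_domains):
        for b in range(n_domains):
            if structure[a, b] > 0:
                diff = offset[a] - offset[b]
                loss += lam * structure[a, b] * np.sum(diff ** 2)
    return loss

# One illustrative (center, context, domain) triple and its sigmoid skip-gram score;
# a real trainer would loop over the corpus with negative sampling and gradients.
c, ctx, d = word2id["energy"], word2id["atom"], 0
score = 1.0 / (1.0 + np.exp(-domain_vec(c, d) @ domain_vec(ctx, d)))
print(f"p(context|center) ~ {score:.3f}, structure penalty = {structure_penalty():.4f}")
```

Because every domain vector decomposes into the shared general vector plus an offset, the domain-specific spaces stay aligned with each other by construction, which is one plausible way to read the "embedding alignment" claim in the abstract.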

Original language: English
Journal: Transactions of the Association for Computational Linguistics
Volume: 11
Pages (from-to): 320-335
ISSN: 2307-387X
DOI
Status: Published - 2023

Bibliographic note

Funding Information:
We thank Gilles Blanchard for valuable comments on the manuscript. We further thank Felix Herron for his support in the data collection process. DL and SN are supported by the German Ministry for Education and Research (BMBF) as BIFOLD - Berlin Institute for the Foundations of Learning and Data under grants 01IS18025A and 01IS18037A. SB was partially funded by the Platform Intelligence in News project, which is supported by Innovation Fund Denmark via the Grand Solutions program and by the European Union under the Grant Agreement no. 10106555, FairER. Views and opinions expressed are those of the author(s) only and do not necessarily reflect those of the European Union or European Research Executive Agency (REA). Neither the European Union nor REA can be held responsible for them.

Publisher Copyright:
© 2023 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.
