Domain-Specific Word Embeddings with Structure Prediction

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Domain-Specific Word Embeddings with Structure Prediction. / Lassner, David; Brandl, Stephanie; Baillot, Anne; Nakajima, Shinichi.

In: Transactions of the Association for Computational Linguistics, Vol. 11, 2023, pp. 320-335.


Harvard

Lassner, D, Brandl, S, Baillot, A & Nakajima, S 2023, 'Domain-Specific Word Embeddings with Structure Prediction', Transactions of the Association for Computational Linguistics, vol. 11, pp. 320-335. https://doi.org/10.1162/tacl_a_00538

APA

Lassner, D., Brandl, S., Baillot, A., & Nakajima, S. (2023). Domain-Specific Word Embeddings with Structure Prediction. Transactions of the Association for Computational Linguistics, 11, 320-335. https://doi.org/10.1162/tacl_a_00538

Vancouver

Lassner D, Brandl S, Baillot A, Nakajima S. Domain-Specific Word Embeddings with Structure Prediction. Transactions of the Association for Computational Linguistics. 2023;11:320-335. https://doi.org/10.1162/tacl_a_00538

Author

Lassner, David ; Brandl, Stephanie ; Baillot, Anne ; Nakajima, Shinichi. / Domain-Specific Word Embeddings with Structure Prediction. In: Transactions of the Association for Computational Linguistics. 2023 ; Vol. 11. pp. 320-335.

Bibtex

@article{08da1df814a242fcb6b3e7f71e53d8dc,
title = "Domain-Specific Word Embeddings with Structure Prediction",
abstract = "Complementary to finding good general word embeddings, an important question for representation learning is to find dynamic word embeddings, for example, across time or domain. Current methods do not offer a way to use or predict information on structure between sub-corpora, time, or domain, and dynamic embeddings can only be compared after post-alignment. We propose novel word embedding methods that provide general word representations for the whole corpus, domain-specific representations for each sub-corpus, sub-corpus structure, and embedding alignment simultaneously. We present an empirical evaluation on New York Times articles and two English Wikipedia datasets with articles on science and philosophy. Our method, called Word2Vec with Structure Prediction (W2VPred), provides better performance than baselines in terms of the general analogy tests, domain-specific analogy tests, and multiple specific word embedding evaluations as well as structure prediction performance when no structure is given a priori. As a use case in the field of Digital Humanities, we demonstrate how to raise novel research questions for high literature from the German Text Archive.",
author = "David Lassner and Stephanie Brandl and Anne Baillot and Shinichi Nakajima",
note = "Publisher Copyright: {\textcopyright} 2023 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.",
year = "2023",
doi = "10.1162/tacl_a_00538",
language = "English",
volume = "11",
pages = "320--335",
journal = "Transactions of the Association for Computational Linguistics",
issn = "2307-387X",
publisher = "MIT Press",
}

RIS

TY - JOUR

T1 - Domain-Specific Word Embeddings with Structure Prediction

AU - Lassner, David

AU - Brandl, Stephanie

AU - Baillot, Anne

AU - Nakajima, Shinichi

N1 - Publisher Copyright: © 2023 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

PY - 2023

Y1 - 2023

N2 - Complementary to finding good general word embeddings, an important question for representation learning is to find dynamic word embeddings, for example, across time or domain. Current methods do not offer a way to use or predict information on structure between sub-corpora, time, or domain, and dynamic embeddings can only be compared after post-alignment. We propose novel word embedding methods that provide general word representations for the whole corpus, domain-specific representations for each sub-corpus, sub-corpus structure, and embedding alignment simultaneously. We present an empirical evaluation on New York Times articles and two English Wikipedia datasets with articles on science and philosophy. Our method, called Word2Vec with Structure Prediction (W2VPred), provides better performance than baselines in terms of the general analogy tests, domain-specific analogy tests, and multiple specific word embedding evaluations as well as structure prediction performance when no structure is given a priori. As a use case in the field of Digital Humanities, we demonstrate how to raise novel research questions for high literature from the German Text Archive.

AB - Complementary to finding good general word embeddings, an important question for representation learning is to find dynamic word embeddings, for example, across time or domain. Current methods do not offer a way to use or predict information on structure between sub-corpora, time, or domain, and dynamic embeddings can only be compared after post-alignment. We propose novel word embedding methods that provide general word representations for the whole corpus, domain-specific representations for each sub-corpus, sub-corpus structure, and embedding alignment simultaneously. We present an empirical evaluation on New York Times articles and two English Wikipedia datasets with articles on science and philosophy. Our method, called Word2Vec with Structure Prediction (W2VPred), provides better performance than baselines in terms of the general analogy tests, domain-specific analogy tests, and multiple specific word embedding evaluations as well as structure prediction performance when no structure is given a priori. As a use case in the field of Digital Humanities, we demonstrate how to raise novel research questions for high literature from the German Text Archive.

U2 - 10.1162/tacl_a_00538

DO - 10.1162/tacl_a_00538

M3 - Journal article

AN - SCOPUS:85153523524

VL - 11

SP - 320

EP - 335

JO - Transactions of the Association for Computational Linguistics

JF - Transactions of the Association for Computational Linguistics

SN - 2307-387X

ER -
