Syntactic Interchangeability in Word Embedding Models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Nearest neighbors in word embedding models are commonly observed to be semantically similar, but the relations between them can vary greatly. We investigate the extent to which word embedding models preserve syntactic interchangeability, as reflected by distances between word vectors, and the effect of hyper-parameters, in particular the context window size. We use part of speech (POS) as a proxy for syntactic interchangeability since, generally speaking, words with the same POS are syntactically valid in the same contexts. We also investigate the relationship between interchangeability and similarity as judged by commonly used word similarity benchmarks, and correlate the result with the performance of word embedding models on these benchmarks. Our results will inform future research and applications in the selection of word embedding models, suggesting a principle for choosing the context window size parameter according to the use case.
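The abstract turns on how the context window hyper-parameter shapes which words end up as nearest neighbors. As a minimal illustration of the underlying distributional idea (a sketch using raw co-occurrence counts and cosine similarity, not the paper's actual embedding models; the toy corpus and all function names are invented here), syntactically interchangeable words such as two nouns filling the same slots acquire very similar context vectors:

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window):
    """Map each word to a Counter of context words seen within `window` positions."""
    vecs = {}
    for sent in sentences:
        for i, word in enumerate(sent):
            ctx = vecs.setdefault(word, Counter())
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    ctx[sent[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[k] * b[k] for k in a.keys() & b.keys())
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Toy corpus: "cat" and "dog" are syntactically interchangeable (same POS,
# same slots), while "ate" is a verb appearing in different positions.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat ate the fish".split(),
    "a dog ate the bone".split(),
]
for window in (1, 3):
    vecs = cooccurrence_vectors(corpus, window)
    print(window,
          round(cosine(vecs["cat"], vecs["dog"]), 3),
          round(cosine(vecs["cat"], vecs["ate"]), 3))
```

With a narrow window, "cat" and "dog" share essentially identical immediate contexts and score far higher than the non-interchangeable pair; prediction-based models such as word2vec exhibit an analogous sensitivity to the window size.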
Original language: English
Title of host publication: Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP
Publisher: Association for Computational Linguistics
Publication date: 2019
Pages: 70-76
Publication status: Published - 2019
Externally published: Yes
Event: 3rd Workshop on Evaluating Vector Space Representations for NLP - Minneapolis, United States
Duration: 1 Jun 2019 - 1 Jun 2019

Conference

Conference: 3rd Workshop on Evaluating Vector Space Representations for NLP
Country: United States
City: Minneapolis
Period: 01/06/2019 - 01/06/2019
