A Discriminative Latent-Variable Model for Bilingual Lexicon Induction

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

A Discriminative Latent-Variable Model for Bilingual Lexicon Induction. / Ruder, Sebastian ; Cotterell, Ryan ; Kementchedjhieva, Yova Radoslavova ; Søgaard, Anders.

Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018. p. 458–468.


Harvard

Ruder, S, Cotterell, R, Kementchedjhieva, YR & Søgaard, A 2018, A Discriminative Latent-Variable Model for Bilingual Lexicon Induction. in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pp. 458–468, 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31/10/2018.

APA

Ruder, S., Cotterell, R., Kementchedjhieva, Y. R., & Søgaard, A. (2018). A Discriminative Latent-Variable Model for Bilingual Lexicon Induction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 458–468). Association for Computational Linguistics.

Vancouver

Ruder S, Cotterell R, Kementchedjhieva YR, Søgaard A. A Discriminative Latent-Variable Model for Bilingual Lexicon Induction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. 2018. p. 458–468.

Author

Ruder, Sebastian ; Cotterell, Ryan ; Kementchedjhieva, Yova Radoslavova ; Søgaard, Anders. / A Discriminative Latent-Variable Model for Bilingual Lexicon Induction. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018. pp. 458–468.

Bibtex

@inproceedings{85a6379607ad4c6f8f0ac621555a72a4,
title = "A Discriminative Latent-Variable Model for Bilingual Lexicon Induction",
abstract = "We introduce a novel discriminative latent-variable model for the task of bilingual lexicon induction. Our model combines the bipartite matching dictionary prior of Haghighi et al. (2008) with a state-of-the-art embedding-based approach. To train the model, we derive an efficient Viterbi EM algorithm. We provide empirical improvements on six language pairs under two metrics and show that the prior theoretically and empirically helps to mitigate the hubness problem. We also demonstrate how previous work may be viewed as a similarly fashioned latent-variable model, albeit with a different prior.",
author = "Ruder, Sebastian and Cotterell, Ryan and Kementchedjhieva, {Yova Radoslavova} and S{\o}gaard, Anders",
year = "2018",
language = "English",
pages = "458--468",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
publisher = "Association for Computational Linguistics",
note = "Conference date: 31-10-2018 through 04-11-2018",
}
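
The abstract in the entry above names the paper's core algorithmic ingredients: a bipartite matching dictionary prior over the induced lexicon and a Viterbi (hard) EM training procedure. As a rough illustrative sketch of that idea, not the authors' released implementation, the Python snippet below alternates a hard E-step that finds a one-to-one matching between source and target word embeddings under the current cross-lingual map with an M-step that refits the map by orthogonal Procrustes. The function and variable names, and the use of SciPy's assignment solver for the matching step, are assumptions for illustration.

import numpy as np
from scipy.optimize import linear_sum_assignment

def viterbi_em_step(X, Y, W):
    # Hypothetical hard-EM iteration, for illustration only.
    # X: (n, d) source embeddings; Y: (m, d) target embeddings;
    # W: (d, d) current cross-lingual map.
    # E-step: score all source-target pairs by cosine similarity under
    # the current map, then take the highest-scoring one-to-one matching
    # (the bipartite matching prior) rather than independent
    # nearest-neighbour retrieval.
    XW = X @ W
    XW = XW / np.linalg.norm(XW, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sim = XW @ Yn.T
    rows, cols = linear_sum_assignment(-sim)  # negate: solver minimises cost
    # M-step: refit an orthogonal map on the matched pairs
    # (orthogonal Procrustes via SVD).
    U, _, Vt = np.linalg.svd(X[rows].T @ Y[cols])
    return U @ Vt, list(zip(rows.tolist(), cols.tolist()))

Because the matching lets each target word pair with at most one source word, a "hub" target vector cannot be retrieved for many different sources at once, which gives one intuition for the abstract's claim that the prior mitigates the hubness problem.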

RIS

TY - GEN

T1 - A Discriminative Latent-Variable Model for Bilingual Lexicon Induction

AU - Ruder, Sebastian

AU - Cotterell, Ryan

AU - Kementchedjhieva, Yova Radoslavova

AU - Søgaard, Anders

PY - 2018

Y1 - 2018

N2 - We introduce a novel discriminative latent-variable model for the task of bilingual lexicon induction. Our model combines the bipartite matching dictionary prior of Haghighi et al. (2008) with a state-of-the-art embedding-based approach. To train the model, we derive an efficient Viterbi EM algorithm. We provide empirical improvements on six language pairs under two metrics and show that the prior theoretically and empirically helps to mitigate the hubness problem. We also demonstrate how previous work may be viewed as a similarly fashioned latent-variable model, albeit with a different prior.

AB - We introduce a novel discriminative latent-variable model for the task of bilingual lexicon induction. Our model combines the bipartite matching dictionary prior of Haghighi et al. (2008) with a state-of-the-art embedding-based approach. To train the model, we derive an efficient Viterbi EM algorithm. We provide empirical improvements on six language pairs under two metrics and show that the prior theoretically and empirically helps to mitigate the hubness problem. We also demonstrate how previous work may be viewed as a similarly fashioned latent-variable model, albeit with a different prior.

M3 - Article in proceedings

SP - 458

EP - 468

BT - Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

PB - Association for Computational Linguistics

Y2 - 31 October 2018 through 4 November 2018

ER -
