Standard
Multi-head Self-attention with Role-Guided Masks. / Wang, Dongsheng; Hansen, Casper; Lima, Lucas Chaves; Hansen, Christian; Maistro, Maria; Simonsen, Jakob Grue; Lioma, Christina.
Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Proceedings, Part II. ed. / Djoerd Hiemstra; Marie-Francine Moens; Josiane Mothe; Raffaele Perego; Martin Potthast; Fabrizio Sebastiani. Springer, 2021. p. 432-439 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 12657 LNCS).
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Harvard
Wang, D, Hansen, C, Lima, LC, Hansen, C, Maistro, M, Simonsen, JG & Lioma, C 2021, Multi-head Self-attention with Role-Guided Masks. in D Hiemstra, M-F Moens, J Mothe, R Perego, M Potthast & F Sebastiani (eds), Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Proceedings, Part II. Springer, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 12657 LNCS, pp. 432-439, 43rd European Conference on Information Retrieval, ECIR 2021, Virtual, Online, 28/03/2021.
https://doi.org/10.1007/978-3-030-72240-1_45
APA
Wang, D., Hansen, C., Lima, L. C., Hansen, C., Maistro, M., Simonsen, J. G., & Lioma, C. (2021). Multi-head Self-attention with Role-Guided Masks. In D. Hiemstra, M-F. Moens, J. Mothe, R. Perego, M. Potthast, & F. Sebastiani (Eds.), Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Proceedings, Part II (pp. 432-439). Springer. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 12657 LNCS
https://doi.org/10.1007/978-3-030-72240-1_45
Vancouver
Wang D, Hansen C, Lima LC, Hansen C, Maistro M, Simonsen JG et al. Multi-head Self-attention with Role-Guided Masks. In Hiemstra D, Moens M-F, Mothe J, Perego R, Potthast M, Sebastiani F, editors, Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Proceedings, Part II. Springer. 2021. p. 432-439. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 12657 LNCS).
https://doi.org/10.1007/978-3-030-72240-1_45
Author
Wang, Dongsheng ; Hansen, Casper ; Lima, Lucas Chaves ; Hansen, Christian ; Maistro, Maria ; Simonsen, Jakob Grue ; Lioma, Christina. / Multi-head Self-attention with Role-Guided Masks. Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Proceedings, Part II. editor / Djoerd Hiemstra ; Marie-Francine Moens ; Josiane Mothe ; Raffaele Perego ; Martin Potthast ; Fabrizio Sebastiani. Springer, 2021. pp. 432-439 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 12657 LNCS).
Bibtex
@inproceedings{bec27deb89a4452a8eb7faab00db4028,
title = "Multi-head Self-attention with Role-Guided Masks",
abstract = "The state of the art in learning meaningful semantic representations of words is the Transformer model and its attention mechanisms. Simply put, the attention mechanisms learn to attend to specific parts of the input dispensing recurrence and convolutions. While some of the learned attention heads have been found to play linguistically interpretable roles, they can be redundant or prone to errors. We propose a method to guide the attention heads towards roles identified in prior work as important. We do this by defining role-specific masks to constrain the heads to attend to specific parts of the input, such that different heads are designed to play different roles. Experiments on text classification and machine translation using 7 different datasets show that our method outperforms competitive attention-based, CNN, and RNN baselines.",
keywords = "Self-attention, Text classification, Transformer",
author = "Dongsheng Wang and Casper Hansen and Lima, {Lucas Chaves} and Christian Hansen and Maria Maistro and Simonsen, {Jakob Grue} and Christina Lioma",
note = "Publisher Copyright: {\textcopyright} 2021, Springer Nature Switzerland AG.; 43rd European Conference on Information Retrieval, ECIR 2021 ; Conference date: 28-03-2021 Through 01-04-2021",
year = "2021",
doi = "10.1007/978-3-030-72240-1_45",
language = "English",
isbn = "9783030722395",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer",
pages = "432--439",
editor = "Djoerd Hiemstra and Marie-Francine Moens and Josiane Mothe and Raffaele Perego and Martin Potthast and Fabrizio Sebastiani",
booktitle = "Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Proceedings, Part II",
address = "Switzerland",
}
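The abstract above describes constraining each attention head with a role-specific mask so that different heads attend to different parts of the input. The following is a minimal NumPy sketch of that general idea, not the authors' implementation: the particular roles ("previous", "next", "local", "global"), window size, head dimension, and random projections are illustrative assumptions.

```python
# Minimal sketch of multi-head self-attention with role-guided masks.
# Each head is given a binary mask encoding a role (previous token, next
# token, local window, or unconstrained). Roles and sizes are assumptions
# for illustration only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def role_mask(role, n):
    """Return an (n, n) binary mask: 1 where attention is allowed for this role."""
    idx = np.arange(n)
    if role == "previous":   # attend to the preceding token (plus self, to avoid empty rows)
        return np.clip(np.eye(n, k=-1) + np.eye(n), 0, 1)
    if role == "next":       # attend to the following token (plus self)
        return np.clip(np.eye(n, k=1) + np.eye(n), 0, 1)
    if role == "local":      # attend within a +/-2 window around each position
        return (np.abs(idx[:, None] - idx[None, :]) <= 2).astype(float)
    return np.ones((n, n))   # "global": unconstrained head

def role_guided_attention(x, roles, d_head=16, seed=0):
    """x: (n, d_model) token representations; roles: one role name per head."""
    n, d_model = x.shape
    rng = np.random.default_rng(seed)
    heads = []
    for role in roles:
        # Per-head projections, randomly initialised for the sketch.
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
                      for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(d_head)
        mask = role_mask(role, n)
        scores = np.where(mask > 0, scores, -1e9)  # block masked positions
        heads.append(softmax(scores) @ v)
    return np.concatenate(heads, axis=-1)          # (n, n_heads * d_head)

# Usage: 5 tokens, 4 heads playing different roles.
out = role_guided_attention(np.random.randn(5, 32),
                            roles=["previous", "next", "local", "global"])
print(out.shape)  # (5, 64)
```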
RIS
TY - GEN
T1 - Multi-head Self-attention with Role-Guided Masks
AU - Wang, Dongsheng
AU - Hansen, Casper
AU - Lima, Lucas Chaves
AU - Hansen, Christian
AU - Maistro, Maria
AU - Simonsen, Jakob Grue
AU - Lioma, Christina
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - The state of the art in learning meaningful semantic representations of words is the Transformer model and its attention mechanisms. Simply put, the attention mechanisms learn to attend to specific parts of the input dispensing recurrence and convolutions. While some of the learned attention heads have been found to play linguistically interpretable roles, they can be redundant or prone to errors. We propose a method to guide the attention heads towards roles identified in prior work as important. We do this by defining role-specific masks to constrain the heads to attend to specific parts of the input, such that different heads are designed to play different roles. Experiments on text classification and machine translation using 7 different datasets show that our method outperforms competitive attention-based, CNN, and RNN baselines.
AB - The state of the art in learning meaningful semantic representations of words is the Transformer model and its attention mechanisms. Simply put, the attention mechanisms learn to attend to specific parts of the input dispensing recurrence and convolutions. While some of the learned attention heads have been found to play linguistically interpretable roles, they can be redundant or prone to errors. We propose a method to guide the attention heads towards roles identified in prior work as important. We do this by defining role-specific masks to constrain the heads to attend to specific parts of the input, such that different heads are designed to play different roles. Experiments on text classification and machine translation using 7 different datasets show that our method outperforms competitive attention-based, CNN, and RNN baselines.
KW - Self-attention
KW - Text classification
KW - Transformer
U2 - 10.1007/978-3-030-72240-1_45
DO - 10.1007/978-3-030-72240-1_45
M3 - Article in proceedings
AN - SCOPUS:85107354449
SN - 9783030722395
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 432
EP - 439
BT - Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Proceedings, Part II
A2 - Hiemstra, Djoerd
A2 - Moens, Marie-Francine
A2 - Mothe, Josiane
A2 - Perego, Raffaele
A2 - Potthast, Martin
A2 - Sebastiani, Fabrizio
PB - Springer
T2 - 43rd European Conference on Information Retrieval, ECIR 2021
Y2 - 28 March 2021 through 1 April 2021
ER -