Multi-head Self-attention with Role-Guided Masks

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Documents

  • Fulltext

    Submitted manuscript, 218 KB, PDF document

The state of the art in learning meaningful semantic representations of words is the Transformer model and its attention mechanisms. Simply put, the attention mechanisms learn to attend to specific parts of the input, dispensing with recurrence and convolutions. While some of the learned attention heads have been found to play linguistically interpretable roles, they can be redundant or prone to errors. We propose a method to guide the attention heads towards roles identified in prior work as important. We do this by defining role-specific masks that constrain the heads to attend to specific parts of the input, such that different heads are designed to play different roles. Experiments on text classification and machine translation using 7 different datasets show that our method outperforms competitive attention-based, CNN, and RNN baselines.
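As a rough illustration of the idea (a minimal sketch, not the authors' implementation), the snippet below builds a small NumPy multi-head self-attention layer in which each head is constrained by a fixed binary mask encoding a role such as "attend to the previous token" or "attend to a local window". The role names, window size, and weight shapes are illustrative assumptions rather than the exact roles used in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def role_mask(role, seq_len):
    """Binary (seq_len x seq_len) mask for a hypothetical role:
    1 = position may be attended to, 0 = blocked."""
    idx = np.arange(seq_len)
    if role == "previous":      # each token attends to the token before it
        mask = np.eye(seq_len, k=-1)
    elif role == "next":        # each token attends to the token after it
        mask = np.eye(seq_len, k=1)
    elif role == "local":       # a small window around each token (assumed width 2)
        mask = (np.abs(idx[:, None] - idx[None, :]) <= 2).astype(float)
    else:                       # "global": unrestricted attention
        mask = np.ones((seq_len, seq_len))
    # ensure every row has at least one allowed position (the token itself)
    return np.maximum(mask, np.eye(seq_len))

def role_guided_attention(x, W_q, W_k, W_v, roles):
    """Multi-head self-attention where head h is restricted by roles[h]."""
    seq_len, d_model = x.shape
    d_head = d_model // len(roles)
    outputs = []
    for h, role in enumerate(roles):
        q, k, v = x @ W_q[h], x @ W_k[h], x @ W_v[h]   # (seq_len, d_head) each
        scores = q @ k.T / np.sqrt(d_head)             # (seq_len, seq_len)
        mask = role_mask(role, seq_len)
        scores = np.where(mask > 0, scores, -1e9)      # block disallowed positions
        outputs.append(softmax(scores) @ v)
    return np.concatenate(outputs, axis=-1)            # (seq_len, d_model)

# Toy example: 4 heads, each assigned a different role.
rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 6, 16, 4
d_head = d_model // n_heads
x = rng.normal(size=(seq_len, d_model))
W_q = rng.normal(size=(n_heads, d_model, d_head)) * 0.1
W_k = rng.normal(size=(n_heads, d_model, d_head)) * 0.1
W_v = rng.normal(size=(n_heads, d_model, d_head)) * 0.1
out = role_guided_attention(x, W_q, W_k, W_v,
                            roles=["previous", "next", "local", "global"])
print(out.shape)  # (6, 16)
```

Applying the mask before the softmax forces each head's attention distribution onto its designated positions, which is the core mechanism the abstract describes; in the paper this replaces the usual unconstrained heads inside a full Transformer trained end to end.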

Original language: English
Title of host publication: Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Proceedings, Part II
Editors: Djoerd Hiemstra, Marie-Francine Moens, Josiane Mothe, Raffaele Perego, Martin Potthast, Fabrizio Sebastiani
Publisher: Springer
Publication date: 2021
Pages: 432-439
ISBN (Print): 9783030722395
DOIs
Publication status: Published - 2021
Event: 43rd European Conference on Information Retrieval, ECIR 2021 - Virtual, Online
Duration: 28 Mar 2021 – 1 Apr 2021

Conference

Conference: 43rd European Conference on Information Retrieval, ECIR 2021
City: Virtual, Online
Period: 28/03/2021 – 01/04/2021
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12657 LNCS
ISSN: 0302-9743

Bibliographical note

Publisher Copyright:
© 2021, Springer Nature Switzerland AG.

Research areas

  • Self-attention, Text classification, Transformer


ID: 283133892