Is Sparse Attention more Interpretable?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Is Sparse Attention more Interpretable? / Meister, Clara; Lazov, Stefan; Augenstein, Isabelle; Cotterell, Ryan.

Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, 2021. p. 122-129.

Harvard

Meister, C, Lazov, S, Augenstein, I & Cotterell, R 2021, Is Sparse Attention more Interpretable? in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, pp. 122-129, 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, 01/08/2021. https://doi.org/10.18653/v1/2021.acl-short.17

APA

Meister, C., Lazov, S., Augenstein, I., & Cotterell, R. (2021). Is Sparse Attention more Interpretable? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers) (pp. 122-129). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.acl-short.17

Vancouver

Meister C, Lazov S, Augenstein I, Cotterell R. Is Sparse Attention more Interpretable? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics. 2021. p. 122-129. https://doi.org/10.18653/v1/2021.acl-short.17

Author

Meister, Clara ; Lazov, Stefan ; Augenstein, Isabelle ; Cotterell, Ryan. / Is Sparse Attention more Interpretable? Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, 2021. pp. 122-129

Bibtex

@inproceedings{508675d3b6e249bf8a2926be5197b4a7,
title = "Is Sparse Attention more Interpretable?",
abstract = "Sparse attention has been claimed to increase model interpretability under the assumption that it highlights influential inputs. Yet the attention distribution is typically over representations internal to the model rather than the inputs themselves, suggesting this assumption may not have merit. We build on the recent work exploring the interpretability of attention; we design a set of experiments to help us understand how sparsity affects our ability to use attention as an explainability tool. On three text classification tasks, we verify that only a weak relationship between inputs and co-indexed intermediate representations exists—under sparse attention and otherwise. Further, we do not find any plausible mappings from sparse attention distributions to a sparse set of influential inputs through other avenues. Rather, we observe in this setting that inducing sparsity may make it less plausible that attention can be used as a tool for understanding model behavior.",
author = "Clara Meister and Stefan Lazov and Isabelle Augenstein and Ryan Cotterell",
year = "2021",
doi = "10.18653/v1/2021.acl-short.17",
language = "English",
pages = "122--129",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)",
publisher = "Association for Computational Linguistics",
note = "59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing ; Conference date: 01-08-2021 Through 06-08-2021",
}
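
Note: the abstract above contrasts sparse attention with standard (dense) softmax attention. For reference, one common way to induce such sparsity is sparsemax (Martins & Astudillo, 2016), which replaces softmax with a Euclidean projection onto the probability simplex, so that many attention weights become exactly zero. The Python sketch below is illustrative only and is not code from the paper; the function name and example scores are our own.

import numpy as np

def sparsemax(z):
    # Sort scores in descending order and compute running sums.
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    # Support: all k with 1 + k * z_(k) > sum of the top-k scores.
    support = 1 + k * z_sorted > cumsum
    k_z = k[support][-1]
    # Threshold tau shifts the kept scores so they sum to one.
    tau = (cumsum[k_z - 1] - 1) / k_z
    return np.maximum(z - tau, 0.0)

scores = np.array([2.0, 1.0, 0.1])
print(sparsemax(scores))                      # [1. 0. 0.] -> exact zeros
print(np.exp(scores) / np.exp(scores).sum())  # softmax: every weight > 0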

RIS

TY  - GEN
T1  - Is Sparse Attention more Interpretable?
AU  - Meister, Clara
AU  - Lazov, Stefan
AU  - Augenstein, Isabelle
AU  - Cotterell, Ryan
PY  - 2021
Y1  - 2021
N2  - Sparse attention has been claimed to increase model interpretability under the assumption that it highlights influential inputs. Yet the attention distribution is typically over representations internal to the model rather than the inputs themselves, suggesting this assumption may not have merit. We build on the recent work exploring the interpretability of attention; we design a set of experiments to help us understand how sparsity affects our ability to use attention as an explainability tool. On three text classification tasks, we verify that only a weak relationship between inputs and co-indexed intermediate representations exists—under sparse attention and otherwise. Further, we do not find any plausible mappings from sparse attention distributions to a sparse set of influential inputs through other avenues. Rather, we observe in this setting that inducing sparsity may make it less plausible that attention can be used as a tool for understanding model behavior.
AB  - Sparse attention has been claimed to increase model interpretability under the assumption that it highlights influential inputs. Yet the attention distribution is typically over representations internal to the model rather than the inputs themselves, suggesting this assumption may not have merit. We build on the recent work exploring the interpretability of attention; we design a set of experiments to help us understand how sparsity affects our ability to use attention as an explainability tool. On three text classification tasks, we verify that only a weak relationship between inputs and co-indexed intermediate representations exists—under sparse attention and otherwise. Further, we do not find any plausible mappings from sparse attention distributions to a sparse set of influential inputs through other avenues. Rather, we observe in this setting that inducing sparsity may make it less plausible that attention can be used as a tool for understanding model behavior.
U2  - 10.18653/v1/2021.acl-short.17
DO  - 10.18653/v1/2021.acl-short.17
M3  - Article in proceedings
SP  - 122
EP  - 129
BT  - Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
PB  - Association for Computational Linguistics
T2  - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
Y2  - 1 August 2021 through 6 August 2021
ER  -
