A Diagnostic Study of Explainability Techniques for Text Classification
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
A Diagnostic Study of Explainability Techniques for Text Classification. / Atanasova, Pepa; Simonsen, Jakob Grue; Lioma, Christina; Augenstein, Isabelle.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2020. p. 3256-3274.
Bibtex
@inproceedings{atanasova2020diagnostic,
  title     = {A Diagnostic Study of Explainability Techniques for Text Classification},
  author    = {Atanasova, Pepa and Simonsen, Jakob Grue and Lioma, Christina and Augenstein, Isabelle},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  publisher = {Association for Computational Linguistics},
  year      = {2020},
  pages     = {3256--3274},
  doi       = {10.18653/v1/2020.emnlp-main.263}
}
RIS
TY - GEN
T1 - A Diagnostic Study of Explainability Techniques for Text Classification
AU - Atanasova, Pepa
AU - Simonsen, Jakob Grue
AU - Lioma, Christina
AU - Augenstein, Isabelle
PY - 2020
Y1 - 2020
N2 - Recent developments in machine learning have introduced models that approach human performance at the cost of increased architectural complexity. Efforts to make the rationales behind the models’ predictions transparent have inspired an abundance of new explainability techniques. Provided with an already trained model, they compute saliency scores for the words of an input instance. However, there exists no definitive guide on (i) how to choose such a technique given a particular application task and model architecture, and (ii) the benefits and drawbacks of using each such technique. In this paper, we develop a comprehensive list of diagnostic properties for evaluating existing explainability techniques. We then employ the proposed list to compare a set of diverse explainability techniques on downstream text classification tasks and neural network architectures. We also compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model’s performance and the agreement of its rationales with human ones. Overall, we find that the gradient-based explanations perform best across tasks and model architectures, and we present further insights into the properties of the reviewed explainability techniques.
AB - Recent developments in machine learning have introduced models that approach human performance at the cost of increased architectural complexity. Efforts to make the rationales behind the models’ predictions transparent have inspired an abundance of new explainability techniques. Provided with an already trained model, they compute saliency scores for the words of an input instance. However, there exists no definitive guide on (i) how to choose such a technique given a particular application task and model architecture, and (ii) the benefits and drawbacks of using each such technique. In this paper, we develop a comprehensive list of diagnostic properties for evaluating existing explainability techniques. We then employ the proposed list to compare a set of diverse explainability techniques on downstream text classification tasks and neural network architectures. We also compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model’s performance and the agreement of its rationales with human ones. Overall, we find that the gradient-based explanations perform best across tasks and model architectures, and we present further insights into the properties of the reviewed explainability techniques.
U2 - 10.18653/v1/2020.emnlp-main.263
DO - 10.18653/v1/2020.emnlp-main.263
M3 - Article in proceedings
SP - 3256
EP - 3274
BT - Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
PB - Association for Computational Linguistics
T2 - The 2020 Conference on Empirical Methods in Natural Language Processing
Y2 - 16 November 2020 through 20 November 2020
ER -
ID: 254783374
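
Note: the abstract reports that gradient-based explanations perform best across tasks and architectures. As illustration only, below is a minimal sketch of one such technique, gradient x input saliency, applied to a toy text classifier. This is not the paper's implementation; every name here (TinyClassifier, the dimensions, the token ids) is an assumption made for the example.

# Minimal sketch of gradient-x-input saliency for a toy text classifier.
# Everything here (TinyClassifier, dimensions, token ids) is illustrative,
# not the implementation evaluated in the paper.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=16, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, embedded):
        # Mean-pool token embeddings, then classify.
        return self.fc(embedded.mean(dim=1))

model = TinyClassifier()
token_ids = torch.tensor([[3, 17, 42, 8]])   # one four-token "sentence"

embedded = model.embed(token_ids)             # (1, seq_len, embed_dim)
embedded.retain_grad()                        # keep grads on this non-leaf tensor
logits = model(embedded)
pred = logits.argmax(dim=-1).item()

# Backpropagate from the predicted class score to the input embeddings.
logits[0, pred].backward()

# Gradient x input, summed over the embedding dimension: one score per token.
saliency = (embedded.grad * embedded).sum(dim=-1).squeeze(0)
print(saliency)  # larger magnitude = token matters more to this prediction

Summing gradient x embedding over the hidden dimension is one common way to collapse per-dimension attributions into a single per-word saliency score; taking the L2 norm of the gradient is another. The paper evaluates such scores with diagnostic properties and against human annotations of salient input regions.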