Faithfulness Tests for Natural Language Explanations
Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed
Publisher's published version, 296 KB, PDF document
Explanations of neural models aim to reveal a model’s decision-making process for its predictions. However, recent work shows that current explanation methods, such as saliency maps or counterfactuals, can be misleading, as they are prone to present reasons that are unfaithful to the model’s inner workings. This work explores the challenging question of evaluating the faithfulness of natural language explanations (NLEs). To this end, we present two tests. First, we propose a counterfactual input editor for inserting reasons that lead to counterfactual predictions but are not reflected by the NLEs. Second, we reconstruct inputs from the reasons stated in the generated NLEs and check how often they lead to the same predictions. Our tests can evaluate emerging NLE models, providing a fundamental tool in the development of faithful NLEs.
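The counterfactual test described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the `ToyModel` sentiment classifier and its `predict`/`explain` interface are assumptions made for the example. The idea is to insert a word that flips the model's prediction and then flag the explanation as unfaithful when it does not mention the inserted word.

```python
# Hedged sketch of the counterfactual faithfulness test for NLEs.
# ToyModel and its predict()/explain() interface are illustrative
# assumptions, not the paper's actual editor or models.

class ToyModel:
    """Toy sentiment classifier with template explanations."""
    NEG = {"terrible", "awful", "boring"}

    def predict(self, text):
        return "negative" if self.NEG & set(text.lower().split()) else "positive"

    def explain(self, text):
        hits = self.NEG & set(text.lower().split())
        # Deliberately unfaithful: the NLE never mentions "boring".
        mentioned = hits - {"boring"}
        if mentioned:
            return "Negative because the review says " + ", ".join(sorted(mentioned)) + "."
        return "The review sounds positive overall."


def counterfactual_test(model, text, insert_words):
    """Insert candidate words; an NLE is flagged as unfaithful when the
    prediction flips but the explanation does not mention the inserted word."""
    base = model.predict(text)
    unfaithful = []
    for w in insert_words:
        edited = w + " " + text  # simplistic insertion at the front
        flipped = model.predict(edited) != base
        if flipped and w.lower() not in model.explain(edited).lower():
            unfaithful.append(w)
    return unfaithful


model = ToyModel()
# "terrible" flips the prediction and is mentioned in the NLE (faithful);
# "boring" flips it but is absent from the NLE (unfaithful).
print(counterfactual_test(model, "a fine movie", ["terrible", "boring"]))  # → ['boring']
```

The second test in the paper works in the opposite direction: rebuild an input from the reasons stated in the NLE and check whether the model's prediction on the reconstructed input matches the original one.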
|Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
|Association for Computational Linguistics (ACL)
|Published - 2023
|61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 - Toronto, Canada
Duration: 9 Jul 2023 → 14 Jul 2023
The research documented in this paper has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199. Isabelle Augenstein’s research is further partially funded by a DFF Sapere Aude research leader grant under grant agreement No 0171-00034B, as well as by the Pioneer Centre for AI, DNRF grant number P1. Thomas Lukasiewicz was supported by the Alan Turing Institute under the UK EPSRC grant EP/N510129/1, the AXA Research Fund, and the EU TAILOR grant 952215. Oana-Maria Camburu was supported by a UK Leverhulme Early Career Fellowship. Christina Lioma’s research is partially funded by the Villum and Velux Foundations Algorithms, Data and Democracy (ADD) grant.
© 2023 Association for Computational Linguistics.