On the Interaction of Belief Bias and Explanations

Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed

Standard

On the Interaction of Belief Bias and Explanations. / González, Ana Valeria; Rogers, Anna; Søgaard, Anders.

Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Online: Association for Computational Linguistics, 2021. pp. 2930-2942.


Harvard

González, AV, Rogers, A & Søgaard, A 2021, On the Interaction of Belief Bias and Explanations. in Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Association for Computational Linguistics, Online, pp. 2930-2942, Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP 2021, Virtual, Online, 01/08/2021. https://doi.org/10.18653/v1/2021.findings-acl.259

APA

González, A. V., Rogers, A., & Søgaard, A. (2021). On the Interaction of Belief Bias and Explanations. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 2930-2942). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-acl.259

Vancouver

González AV, Rogers A, Søgaard A. On the Interaction of Belief Bias and Explanations. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Online: Association for Computational Linguistics. 2021. pp. 2930-2942. https://doi.org/10.18653/v1/2021.findings-acl.259

Author

González, Ana Valeria ; Rogers, Anna ; Søgaard, Anders. / On the Interaction of Belief Bias and Explanations. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Online: Association for Computational Linguistics, 2021. pp. 2930-2942

Bibtex

@inproceedings{88e121bdd6254a92b4f0db150f0e8d02,
title = "On the Interaction of Belief Bias and Explanations",
abstract = "A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them. While automatic metrics allow for quick benchmarking, it isn{\textquoteright}t clear how such metrics reflect human interaction with explanations. Human evaluation is of paramount importance, but previous protocols fail to account for belief biases affecting human performance, which may lead to misleading conclusions. We provide an overview of belief bias, its role in human evaluation, and ideas for NLP practitioners on how to account for it. For two experimental paradigms, we present a case study of gradient-based explainability, introducing simple ways to account for humans{\textquoteright} prior beliefs: models of varying quality and adversarial examples. We show that conclusions about the highest performing methods change when introducing such controls, pointing to the importance of accounting for belief bias in evaluation.",
author = "Gonz{\'a}lez, {Ana Valeria} and Anna Rogers and Anders S{\o}gaard",
year = "2021",
month = aug,
day = "1",
doi = "10.18653/v1/2021.findings-acl.259",
language = "English",
pages = "2930--2942",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
publisher = "Association for Computational Linguistics",
note = "Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP 2021 ; Conference date: 01-08-2021 Through 06-08-2021",

}

RIS

TY - GEN

T1 - On the Interaction of Belief Bias and Explanations

AU - González, Ana Valeria

AU - Rogers, Anna

AU - Søgaard, Anders

PY - 2021/8/1

Y1 - 2021/8/1

N2 - A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them. While automatic metrics allow for quick benchmarking, it isn’t clear how such metrics reflect human interaction with explanations. Human evaluation is of paramount importance, but previous protocols fail to account for belief biases affecting human performance, which may lead to misleading conclusions. We provide an overview of belief bias, its role in human evaluation, and ideas for NLP practitioners on how to account for it. For two experimental paradigms, we present a case study of gradient-based explainability, introducing simple ways to account for humans’ prior beliefs: models of varying quality and adversarial examples. We show that conclusions about the highest performing methods change when introducing such controls, pointing to the importance of accounting for belief bias in evaluation.

AB - A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them. While automatic metrics allow for quick benchmarking, it isn’t clear how such metrics reflect human interaction with explanations. Human evaluation is of paramount importance, but previous protocols fail to account for belief biases affecting human performance, which may lead to misleading conclusions. We provide an overview of belief bias, its role in human evaluation, and ideas for NLP practitioners on how to account for it. For two experimental paradigms, we present a case study of gradient-based explainability, introducing simple ways to account for humans’ prior beliefs: models of varying quality and adversarial examples. We show that conclusions about the highest performing methods change when introducing such controls, pointing to the importance of accounting for belief bias in evaluation.

U2 - 10.18653/v1/2021.findings-acl.259

DO - 10.18653/v1/2021.findings-acl.259

M3 - Article in proceedings

SP - 2930

EP - 2942

BT - Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

PB - Association for Computational Linguistics

CY - Online

T2 - Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL-IJCNLP 2021

Y2 - 1 August 2021 through 6 August 2021

ER -
