Rather a Nurse than a Physician - Contrastive Explanations under Investigation

Publication: Contribution to book/anthology/report · Conference article in proceedings · Research · peer-reviewed

Standard

Rather a Nurse than a Physician - Contrastive Explanations under Investigation. / Eberle, Oliver; Chalkidis, Ilias; Cabello, Laura; Brandl, Stephanie.

Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), 2023. pp. 6907-6920.


Harvard

Eberle, O, Chalkidis, I, Cabello, L & Brandl, S 2023, Rather a Nurse than a Physician - Contrastive Explanations under Investigation. in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), pp. 6907-6920, 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 06/12/2023. https://doi.org/10.18653/v1/2023.emnlp-main.427

APA

Eberle, O., Chalkidis, I., Cabello, L., & Brandl, S. (2023). Rather a Nurse than a Physician - Contrastive Explanations under Investigation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 6907-6920). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.427

Vancouver

Eberle O, Chalkidis I, Cabello L, Brandl S. Rather a Nurse than a Physician - Contrastive Explanations under Investigation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL). 2023. p. 6907-6920. https://doi.org/10.18653/v1/2023.emnlp-main.427

Author

Eberle, Oliver ; Chalkidis, Ilias ; Cabello, Laura ; Brandl, Stephanie. / Rather a Nurse than a Physician - Contrastive Explanations under Investigation. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), 2023. pp. 6907-6920

Bibtex

@inproceedings{1bf7cbbd58fd4c9786d6bbef7c18ce3b,
title = "Rather a Nurse than a Physician - Contrastive Explanations under Investigation",
abstract = "Contrastive explanations, where one decision is explained *in contrast to another*, are supposed to be closer to how humans explain a decision than non-contrastive explanations, where the decision is not necessarily referenced to an alternative. This claim has never been empirically validated. We analyze four English text-classification datasets (SST2, DynaSent, BIOS and DBpedia-Animals). We fine-tune and extract explanations from three different models (RoBERTa, GPT-2, and T5), each in three different sizes, and apply three post-hoc explainability methods (LRP, GradientxInput, GradNorm). We furthermore collect and release human rationale annotations for a subset of 100 samples from the BIOS dataset for contrastive and non-contrastive settings. A cross-comparison between model-based rationales and human annotations, both in contrastive and non-contrastive settings, yields a high agreement between the two settings for models as well as for humans. Moreover, model-based explanations computed in both settings align equally well with human rationales. Thus, we empirically find that humans do not necessarily explain in a contrastive manner.",
author = "Oliver Eberle and Ilias Chalkidis and Laura Cabello and Stephanie Brandl",
year = "2023",
doi = "10.18653/v1/2023.emnlp-main.427",
language = "English",
pages = "6907--6920",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",
note = "2023 Conference on Empirical Methods in Natural Language Processing ; Conference date: 06-12-2023 Through 10-12-2023",

}

RIS

TY - GEN

T1 - Rather a Nurse than a Physician - Contrastive Explanations under Investigation

AU - Eberle, Oliver

AU - Chalkidis, Ilias

AU - Cabello, Laura

AU - Brandl, Stephanie

PY - 2023

Y1 - 2023

N2 - Contrastive explanations, where one decision is explained *in contrast to another*, are supposed to be closer to how humans explain a decision than non-contrastive explanations, where the decision is not necessarily referenced to an alternative. This claim has never been empirically validated. We analyze four English text-classification datasets (SST2, DynaSent, BIOS and DBpedia-Animals). We fine-tune and extract explanations from three different models (RoBERTa, GPT-2, and T5), each in three different sizes, and apply three post-hoc explainability methods (LRP, GradientxInput, GradNorm). We furthermore collect and release human rationale annotations for a subset of 100 samples from the BIOS dataset for contrastive and non-contrastive settings. A cross-comparison between model-based rationales and human annotations, both in contrastive and non-contrastive settings, yields a high agreement between the two settings for models as well as for humans. Moreover, model-based explanations computed in both settings align equally well with human rationales. Thus, we empirically find that humans do not necessarily explain in a contrastive manner.

AB - Contrastive explanations, where one decision is explained *in contrast to another*, are supposed to be closer to how humans explain a decision than non-contrastive explanations, where the decision is not necessarily referenced to an alternative. This claim has never been empirically validated. We analyze four English text-classification datasets (SST2, DynaSent, BIOS and DBpedia-Animals). We fine-tune and extract explanations from three different models (RoBERTa, GPT-2, and T5), each in three different sizes, and apply three post-hoc explainability methods (LRP, GradientxInput, GradNorm). We furthermore collect and release human rationale annotations for a subset of 100 samples from the BIOS dataset for contrastive and non-contrastive settings. A cross-comparison between model-based rationales and human annotations, both in contrastive and non-contrastive settings, yields a high agreement between the two settings for models as well as for humans. Moreover, model-based explanations computed in both settings align equally well with human rationales. Thus, we empirically find that humans do not necessarily explain in a contrastive manner.

U2 - 10.18653/v1/2023.emnlp-main.427

DO - 10.18653/v1/2023.emnlp-main.427

M3 - Article in proceedings

SP - 6907

EP - 6920

BT - Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

PB - Association for Computational Linguistics (ACL)

T2 - 2023 Conference on Empirical Methods in Natural Language Processing

Y2 - 6 December 2023 through 10 December 2023

ER -

ID: 383927536
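
Note on the methods named in the abstract: the paper contrasts non-contrastive explanations (attributions for a single target class) with contrastive ones (attributions for a target class relative to a foil class), and Gradient x Input is one of the post-hoc attribution methods it applies. As a rough, hypothetical sketch only, and not the authors' released code, a minimal Gradient x Input implementation with an optional foil class for the contrastive variant could look like the following (the checkpoint name and the label indices are assumptions):

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed, publicly available SST-2 classifier; not one of the paper's fine-tuned models.
MODEL_NAME = "textattack/roberta-base-SST-2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

def grad_x_input(text, target, foil=None):
    """Gradient x Input token attributions for class `target`.
    If `foil` is given, the contrastive score logit[target] - logit[foil]
    is explained instead (why `target` rather than `foil`)."""
    enc = tokenizer(text, return_tensors="pt")
    # Take gradients with respect to the input embeddings.
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits[0]
    score = logits[target] if foil is None else logits[target] - logits[foil]
    (grads,) = torch.autograd.grad(score, embeds)
    # Elementwise product, summed over the embedding dimension: one score per token.
    scores = (grads * embeds).sum(-1).squeeze(0)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return list(zip(tokens, scores.tolist()))

# Non-contrastive: explain the positive-class logit alone.
print(grad_x_input("a gripping, well-acted thriller", target=1))
# Contrastive: explain positive rather than negative (assuming label 1 = positive, 0 = negative).
print(grad_x_input("a gripping, well-acted thriller", target=1, foil=0))

The only difference between the two calls is the explained score: the target logit alone versus the target-minus-foil logit difference, which is the contrast the paper compares against human rationales.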