Rather a Nurse than a Physician - Contrastive Explanations under Investigation

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 1.84 MB, PDF document

Contrastive explanations, where one decision is explained *in contrast to another*, are supposed to be closer to how humans explain a decision than non-contrastive explanations, where the decision is not necessarily referenced to an alternative. This claim has never been empirically validated. We analyze four English text-classification datasets (SST2, DynaSent, BIOS and DBpedia-Animals). We fine-tune and extract explanations from three different models (RoBERTa, GPT-2, and T5), each in three different sizes, and apply three post-hoc explainability methods (LRP, GradientxInput, GradNorm). We furthermore collect and release human rationale annotations for a subset of 100 samples from the BIOS dataset for contrastive and non-contrastive settings. A cross-comparison between model-based rationales and human annotations, both in contrastive and non-contrastive settings, yields a high agreement between the two settings for models as well as for humans. Moreover, model-based explanations computed in both settings align equally well with human rationales. Thus, we empirically find that humans do not necessarily explain in a contrastive manner.
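Of the post-hoc explainability methods named in the abstract, GradientxInput and GradNorm are the simplest to illustrate. Below is a minimal sketch of the two attribution rules, assuming a hypothetical toy linear scorer with invented weights and features; in the paper's setting the gradient would instead come from backpropagation through a fine-tuned transformer.

```python
import numpy as np

# Toy linear "model": score(x) = w . x, so the gradient of the score
# with respect to each input feature is simply w. For a real transformer
# the gradient would be obtained via autodiff (e.g. torch.autograd).
w = np.array([0.5, -2.0, 1.5])   # hypothetical learned weights
x = np.array([1.0, 0.2, -1.0])   # hypothetical per-token features

grad = w                         # d(score)/dx for this linear model
grad_x_input = grad * x          # GradientxInput: elementwise gradient * input
grad_norm = np.abs(grad)         # GradNorm: magnitude of the gradient per feature

print(grad_x_input)
print(grad_norm)
```

For transformer inputs, both quantities are typically computed per embedding dimension and then aggregated (e.g. summed or L2-normed) per token to yield a word-level rationale.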
Original language: English
Title: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2023
Pages: 6907-6920
ISBN (electronic): 979-8-89176-060-8
DOI
Status: Published - 2023
Event: 2023 Conference on Empirical Methods in Natural Language Processing - Singapore
Duration: 6 Dec 2023 - 10 Dec 2023

Conference

Conference: 2023 Conference on Empirical Methods in Natural Language Processing
City: Singapore
Period: 06/12/2023 - 10/12/2023

ID: 383927536