Diagnostics-Guided Explanation Generation
Research output: Contribution to journal › Conference article › Research
Standard
Diagnostics-Guided Explanation Generation. / Atanasova, Pepa Kostadinova; Simonsen, Jakob Grue; Lioma, Christina; Augenstein, Isabelle.
In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, No. 10, 2022, p. 10445-10453.
Bibtex
@article{atanasova_diagnostics_2022,
  title   = {Diagnostics-Guided Explanation Generation},
  author  = {Atanasova, Pepa Kostadinova and Simonsen, Jakob Grue and Lioma, Christina and Augenstein, Isabelle},
  journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
  volume  = {36},
  number  = {10},
  pages   = {10445--10453},
  year    = {2022},
  doi     = {10.1609/aaai.v36i10.21287}
}
RIS
TY - GEN
T1 - Diagnostics-Guided Explanation Generation
AU - Atanasova, Pepa Kostadinova
AU - Simonsen, Jakob Grue
AU - Lioma, Christina
AU - Augenstein, Isabelle
PY - 2022
Y1 - 2022
N2 - Explanations shed light on a machine learning model's rationales and can aid in identifying deficiencies in its reasoning process. Explanation generation models are typically trained in a supervised way given human explanations. When such annotations are not available, explanations are often selected as those portions of the input that maximise a downstream task's performance, which corresponds to optimising an explanation's Faithfulness to a given model. Faithfulness is one of several so-called diagnostic properties, which prior work has identified as useful for gauging the quality of an explanation without requiring annotations. Other diagnostic properties are Data Consistency, which measures how similar explanations are for similar input instances, and Confidence Indication, which shows whether the explanation reflects the confidence of the model. In this work, we show how to directly optimise for these diagnostic properties when training a model to generate sentence-level explanations, which markedly improves explanation quality, agreement with human rationales, and downstream task performance on three complex reasoning tasks.
U2 - 10.1609/aaai.v36i10.21287
DO - 10.1609/aaai.v36i10.21287
M3 - Conference article
VL - 36
SP - 10445
EP - 10453
JO - Proceedings of the AAAI Conference on Artificial Intelligence
JF - Proceedings of the AAAI Conference on Artificial Intelligence
SN - 2159-5399
IS - 10
T2 - 36th AAAI Conference on Artificial Intelligence (AAAI-22)
Y2 - 28 February 2022 through 1 March 2022
ER -
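The abstract describes training an explanation generator by jointly optimising the downstream task loss with losses for the three diagnostic properties. The following minimal PyTorch sketch is illustrative only, not the paper's method: it assumes sentence-level explanation scores in [0, 1] and uses simple surrogate losses (a KL term for Faithfulness, MSE between explanations of an input and a perturbed copy for Data Consistency, and a regression of the top explanation score onto prediction confidence for Confidence Indication). All function names, arguments, and weightings are hypothetical; the paper's actual formulations differ.

import torch
import torch.nn.functional as F

def diagnostics_guided_loss(task_logits, labels,
                            expl_scores,            # (batch, n_sentences), assumed in [0, 1]
                            masked_task_logits,     # logits when only top-scored sentences are kept
                            expl_scores_perturbed,  # scores for a slightly perturbed input
                            lambdas=(1.0, 0.5, 0.5, 0.5)):
    """Illustrative surrogate: task loss plus stand-ins for the three
    diagnostic properties named in the abstract. Not the paper's losses."""
    lam_t, lam_f, lam_c, lam_ci = lambdas

    # Downstream task objective.
    task_loss = F.cross_entropy(task_logits, labels)

    # Faithfulness (surrogate): the prediction from the explanation-selected
    # sentences alone should match the prediction from the full input.
    faith_loss = F.kl_div(F.log_softmax(masked_task_logits, dim=-1),
                          F.softmax(task_logits, dim=-1),
                          reduction="batchmean")

    # Data Consistency (surrogate): similar inputs get similar explanations.
    consist_loss = F.mse_loss(expl_scores, expl_scores_perturbed)

    # Confidence Indication (surrogate): the top explanation score should
    # track the model's prediction confidence.
    confidence = F.softmax(task_logits, dim=-1).max(dim=-1).values
    conf_loss = F.mse_loss(expl_scores.max(dim=-1).values, confidence.detach())

    return (lam_t * task_loss + lam_f * faith_loss
            + lam_c * consist_loss + lam_ci * conf_loss)

# Toy usage with random tensors (batch of 4, 3 classes, 5 sentences).
full_logits = torch.randn(4, 3)
masked_logits = torch.randn(4, 3)
labels = torch.randint(0, 3, (4,))
scores = torch.rand(4, 5)
scores_pert = (scores + 0.05 * torch.randn(4, 5)).clamp(0, 1)
loss = diagnostics_guided_loss(full_logits, labels, scores, masked_logits, scores_pert)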