Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
Research output: Contribution to journal › Journal article › Research › peer-review
Standard
Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing. / Jolly, Shailza; Atanasova, Pepa; Augenstein, Isabelle.
In: Information (Switzerland), Vol. 13, No. 10, 500, 2022, p. 1-18.
RIS
TY - JOUR
T1 - Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
AU - Jolly, Shailza
AU - Atanasova, Pepa
AU - Augenstein, Isabelle
N1 - Publisher Copyright: © 2022 by the authors.
PY - 2022
Y1 - 2022
N2 - Fact-checking systems have become important tools to verify fake and misleading news. These systems become more trustworthy when human-readable explanations accompany the veracity labels. However, manual collection of these explanations is expensive and time-consuming. Recent work has used extractive summarization to select a sufficient subset of the most important facts from the ruling comments (RCs) of a professional journalist to obtain fact-checking explanations. However, these explanations lack fluency and sentence coherence. In this work, we present an iterative edit-based algorithm that uses only phrase-level edits to perform unsupervised post-editing of disconnected RCs. To regulate our editing algorithm, we use a scoring function with components including fluency and semantic preservation. In addition, we show the applicability of our approach in a completely unsupervised setting. We experiment with two benchmark datasets, namely LIAR-PLUS and PubHealth. We show that our model generates explanations that are fluent, readable, non-redundant, and cover important information for the fact check.
KW - explainable AI
KW - fact-checking
KW - natural language generation
UR - http://www.scopus.com/inward/record.url?scp=85140652386&partnerID=8YFLogxK
U2 - 10.3390/info13100500
DO - 10.3390/info13100500
M3 - Journal article
AN - SCOPUS:85140652386
VL - 13
SP - 1
EP - 18
JO - Information (Switzerland)
JF - Information (Switzerland)
SN - 2078-2489
IS - 10
M1 - 500
ER -