Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing

Research output: Contribution to journal › Journal article › peer-review


Fact-checking systems have become important tools for identifying fake and misleading news. These systems become more trustworthy when human-readable explanations accompany their veracity labels. However, manual collection of such explanations is expensive and time-consuming. Recent work has used extractive summarization to select a sufficient subset of the most important facts from the ruling comments (RCs) of a professional journalist to obtain fact-checking explanations. However, these explanations lack fluency and sentence coherence. In this work, we present an iterative edit-based algorithm that uses only phrase-level edits to perform unsupervised post-editing of disconnected RCs. To guide the editing algorithm, we use a scoring function with components including fluency and semantic preservation. In addition, we show the applicability of our approach in a completely unsupervised setting. We experiment with two benchmark datasets, LIAR-PLUS and PubHealth, and show that our model generates explanations that are fluent, readable, non-redundant, and cover the information important for the fact check.

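The abstract describes the method only at a high level: an iterative loop of phrase-level edits, kept or discarded according to a scoring function that balances fluency and semantic preservation. The sketch below illustrates one way such a score-guided editing loop could be structured. The scorers, edit operations, weights, and function names here are simplified stand-ins chosen for illustration, not the components used in the paper.

```python
import random
from typing import List

# Hypothetical scorers standing in for the paper's fluency component
# (e.g., a language model) and semantic-preservation component
# (e.g., embedding similarity). These are crude proxies for illustration only.
def fluency_score(phrases: List[str]) -> float:
    # Stand-in: shorter, less redundant outputs score higher.
    return 1.0 / (1.0 + sum(len(p.split()) for p in phrases))

def semantic_score(phrases: List[str], source: List[str]) -> float:
    # Stand-in: token overlap with the source ruling comments.
    src = set(" ".join(source).split())
    out = set(" ".join(phrases).split())
    return len(src & out) / max(len(src), 1)

def total_score(phrases, source, w_fluency=0.5, w_semantic=0.5):
    # Weighted combination of the scoring components (weights are assumed).
    return (w_fluency * fluency_score(phrases)
            + w_semantic * semantic_score(phrases, source))

def propose_edit(phrases):
    """Apply one random phrase-level edit: delete, reorder, or re-insert a phrase."""
    candidate = list(phrases)
    op = random.choice(["delete", "reorder", "insert"])
    if op == "delete" and len(candidate) > 1:
        candidate.pop(random.randrange(len(candidate)))
    elif op == "reorder" and len(candidate) > 1:
        i, j = random.sample(range(len(candidate)), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]
    elif op == "insert":
        candidate.insert(random.randrange(len(candidate) + 1),
                         random.choice(phrases))
    return candidate

def iterative_post_edit(source_phrases, n_steps=200, seed=0):
    """Hill-climbing search over phrase-level edits guided by the scoring function."""
    random.seed(seed)
    current = list(source_phrases)
    best = total_score(current, source_phrases)
    for _ in range(n_steps):
        candidate = propose_edit(current)
        score = total_score(candidate, source_phrases)
        if score > best:  # keep only score-improving edits
            current, best = candidate, score
    return current

if __name__ == "__main__":
    rcs = ["the claim cites a 2019 report",
           "the report does not mention the quoted figure",
           "the report does not mention the quoted figure",
           "experts rate the statistic as unsupported"]
    print(iterative_post_edit(rcs))
```

In this toy run the duplicated ruling comment is removed because deleting it improves the combined score; the paper's actual edit set, fluency model, and weighting are specified in the article itself.
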
Original language: English
Article number: 500
Journal: Information (Switzerland)
Volume: 13
Issue number: 10
Pages (from-to): 1-18
ISSN: 2078-2489
DOIs
Publication status: Published - 2022

Bibliographical note

Publisher Copyright:
© 2022 by the authors.

Research areas

• explainable AI, fact-checking, natural language generation
