Generating Fact Checking Explanations

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Generating Fact Checking Explanations. / Atanasova, Pepa; Simonsen, Jakob Grue; Lioma, Christina; Augenstein, Isabelle.

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020. p. 7352-7364.


Harvard

Atanasova, P, Simonsen, JG, Lioma, C & Augenstein, I 2020, Generating Fact Checking Explanations. in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pp. 7352-7364, 58th Annual Meeting of the Association for Computational Linguistics, Online, 05/07/2020. https://doi.org/10.18653/v1/2020.acl-main.656

APA

Atanasova, P., Simonsen, J. G., Lioma, C., & Augenstein, I. (2020). Generating Fact Checking Explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 7352-7364). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.656

Vancouver

Atanasova P, Simonsen JG, Lioma C, Augenstein I. Generating Fact Checking Explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. 2020. p. 7352-7364. https://doi.org/10.18653/v1/2020.acl-main.656

Author

Atanasova, Pepa ; Simonsen, Jakob Grue ; Lioma, Christina ; Augenstein, Isabelle. / Generating Fact Checking Explanations. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020. pp. 7352-7364

Bibtex

@inproceedings{de955ada20ac426c8c10771979ce333e,
title = "Generating Fact Checking Explanations",
abstract = "Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process – generating justifications for verdicts on claims. This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction. Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage and overall quality of the generated explanations are also improved in the multi-task model.",
author = "Pepa Atanasova and Simonsen, {Jakob Grue} and Christina Lioma and Isabelle Augenstein",
year = "2020",
doi = "10.18653/v1/2020.acl-main.656",
language = "English",
pages = "7352--7364",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
publisher = "Association for Computational Linguistics",
note = "58th Annual Meeting of the Association for Computational Linguistics ; Conference date: 05-07-2020 Through 10-07-2020",
}

RIS

TY - GEN

T1 - Generating Fact Checking Explanations

AU - Atanasova, Pepa

AU - Simonsen, Jakob Grue

AU - Lioma, Christina

AU - Augenstein, Isabelle

PY - 2020

Y1 - 2020

N2 - Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process – generating justifications for verdicts on claims. This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction. Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage and overall quality of the generated explanations are also improved in the multi-task model.

AB - Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process – generating justifications for verdicts on claims. This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction. Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage and overall quality of the generated explanations are also improved in the multi-task model.

U2 - 10.18653/v1/2020.acl-main.656

DO - 10.18653/v1/2020.acl-main.656

M3 - Article in proceedings

SP - 7352

EP - 7364

BT - Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

PB - Association for Computational Linguistics

T2 - 58th Annual Meeting of the Association for Computational Linguistics

Y2 - 5 July 2020 through 10 July 2020

ER -

ID: 254776315