Overview of the CLEF-2019 CheckThat! Lab: Automatic Identification and Verification of Claims. Task 1: Check-Worthiness

Research output: Contribution to journal › Conference article › peer-review

We present an overview of the 2nd edition of the CheckThat! Lab, part of CLEF 2019, with a focus on Task 1: Check-Worthiness in political debates. The task asks participants to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal is to produce a ranked list of its sentences based on their worthiness for fact-checking. This year, we extended the 2018 dataset with 16 more debates and speeches. A total of 47 teams registered to participate in the lab, and eleven of them actually submitted runs for Task 1 (compared to seven last year). The evaluation results show that the most successful approaches to Task 1 used various neural networks and logistic regression. The best system achieved a mean average precision of 0.166 (0.250 on the speeches, and 0.054 on the debates). This leaves ample room for improvement, and thus we release all datasets and scoring scripts, which should enable further research in check-worthiness estimation.
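The evaluation metric reported above, mean average precision (MAP), can be sketched as follows. This is a minimal illustration of MAP as conventionally computed for ranking tasks, not the lab's official scoring script, which may differ in details such as ranking truncation depth; the function names are our own.

```python
def average_precision(ranked_labels):
    """AP for one debate/speech.

    ranked_labels: gold labels (1 = check-worthy, 0 = not) listed in the
    order the system ranked the sentences, most check-worthy first.
    """
    hits = 0
    precisions = []
    for rank, label in enumerate(ranked_labels, start=1):
        if label:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant hit
    return sum(precisions) / len(precisions) if precisions else 0.0


def mean_average_precision(rankings):
    """MAP: mean of per-document APs over all debates/speeches."""
    return sum(average_precision(r) for r in rankings) / len(rankings)
```

For example, a system that ranks the two check-worthy sentences of a three-sentence debate first and third scores AP = (1/1 + 2/3) / 2 ≈ 0.83 on that debate; MAP averages such scores over the whole test set.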

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2380
ISSN: 1613-0073
Publication status: Published - 2019
Event: 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019 - Lugano, Switzerland
Duration: 9 Sep 2019 - 12 Sep 2019


Research areas

  • Check-worthiness estimation, Computational journalism, Fact-checking, Veracity
