Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 1: Check-worthiness

Research output: Contribution to journal › Conference article › peer-review

Standard

Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 1: Check-worthiness. / Atanasova, Pepa; Nakov, Preslav; Karadzhov, Georgi; Mohtarami, Mitra; Da San Martino, Giovanni.

In: CEUR Workshop Proceedings, Vol. 2380, 2019.


Harvard

Atanasova, P, Nakov, P, Karadzhov, G, Mohtarami, M & Da San Martino, G 2019, 'Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 1: Check-worthiness', CEUR Workshop Proceedings, vol. 2380.

APA

Atanasova, P., Nakov, P., Karadzhov, G., Mohtarami, M., & Da San Martino, G. (2019). Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 1: Check-worthiness. CEUR Workshop Proceedings, 2380.

Vancouver

Atanasova P, Nakov P, Karadzhov G, Mohtarami M, Da San Martino G. Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 1: Check-worthiness. CEUR Workshop Proceedings. 2019;2380.

Author

Atanasova, Pepa ; Nakov, Preslav ; Karadzhov, Georgi ; Mohtarami, Mitra ; Da San Martino, Giovanni. / Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 1: Check-worthiness. In: CEUR Workshop Proceedings. 2019 ; Vol. 2380.

Bibtex

@article{067f683678034f44b13c61b9d789200f,
title = "Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 1: Check-worthiness",
abstract = "We present an overview of the 2nd edition of the CheckThat! Lab, part of CLEF 2019, with focus on Task 1: Check-Worthiness in political debates. The task asks to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal is to produce a ranked list of its sentences based on their worthiness for fact-checking. This year, we extended the 2018 dataset with 16 more debates and speeches. A total of 47 teams registered to participate in the lab, and eleven of them actually submitted runs for Task 1 (compared to seven last year). The evaluation results show that the most successful approaches to Task 1 used various neural networks and logistic regression. The best system achieved mean average precision of 0.166 (0.250 on the speeches, and 0.054 on the debates). This leaves large room for improvement, and thus we release all datasets and scoring scripts, which should enable further research in check-worthiness estimation.",
keywords = "Check-worthiness estimation, Computational journalism, Fact-checking, Veracity",
author = "Pepa Atanasova and Preslav Nakov and Georgi Karadzhov and Mitra Mohtarami and {Da San Martino}, Giovanni",
year = "2019",
language = "English",
volume = "2380",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",
publisher = "ceur workshop proceedings",
note = "20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019 ; Conference date: 09-09-2019 Through 12-09-2019",

}
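
For context on the reported numbers, the abstract evaluates systems by mean average precision (MAP) over per-document sentence rankings (e.g., the best system's 0.166). The Python sketch below is a minimal illustration of MAP in this setting, not the lab's released scoring script; the input conventions and all function and variable names are hypothetical.

# Minimal sketch of mean average precision (MAP) for check-worthiness
# ranking. Illustration only; names and input format are hypothetical.

def average_precision(ranked_labels):
    """Average precision for one debate/speech.

    ranked_labels: 0/1 gold labels (1 = check-worthy), ordered by the
    system's predicted ranking, most check-worthy first.
    """
    hits = 0
    precision_sum = 0.0
    for rank, label in enumerate(ranked_labels, start=1):
        if label == 1:
            hits += 1
            precision_sum += hits / rank  # precision at this cut-off
    return precision_sum / hits if hits else 0.0

def mean_average_precision(per_document_rankings):
    """MAP: average precision averaged over all debates/speeches."""
    aps = [average_precision(labels) for labels in per_document_rankings]
    return sum(aps) / len(aps) if aps else 0.0

# Toy usage: two documents, gold labels listed in system-ranked order.
if __name__ == "__main__":
    print(mean_average_precision([[1, 0, 1, 0], [0, 1, 0, 0]]))  # ~0.667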

RIS

TY - GEN

T1 - Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 1: Check-worthiness

T2 - 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019

AU - Atanasova, Pepa

AU - Nakov, Preslav

AU - Karadzhov, Georgi

AU - Mohtarami, Mitra

AU - Da San Martino, Giovanni

PY - 2019

Y1 - 2019

N2 - We present an overview of the 2nd edition of the CheckThat! Lab, part of CLEF 2019, with focus on Task 1: Check-Worthiness in political debates. The task asks to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal is to produce a ranked list of its sentences based on their worthiness for fact-checking. This year, we extended the 2018 dataset with 16 more debates and speeches. A total of 47 teams registered to participate in the lab, and eleven of them actually submitted runs for Task 1 (compared to seven last year). The evaluation results show that the most successful approaches to Task 1 used various neural networks and logistic regression. The best system achieved mean average precision of 0.166 (0.250 on the speeches, and 0.054 on the debates). This leaves large room for improvement, and thus we release all datasets and scoring scripts, which should enable further research in check-worthiness estimation.

AB - We present an overview of the 2nd edition of the CheckThat! Lab, part of CLEF 2019, with focus on Task 1: Check-Worthiness in political debates. The task asks to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal is to produce a ranked list of its sentences based on their worthiness for fact-checking. This year, we extended the 2018 dataset with 16 more debates and speeches. A total of 47 teams registered to participate in the lab, and eleven of them actually submitted runs for Task 1 (compared to seven last year). The evaluation results show that the most successful approaches to Task 1 used various neural networks and logistic regression. The best system achieved mean average precision of 0.166 (0.250 on the speeches, and 0.054 on the debates). This leaves large room for improvement, and thus we release all datasets and scoring scripts, which should enable further research in check-worthiness estimation.

KW - Check-worthiness estimation

KW - Computational journalism

KW - Fact-checking

KW - Veracity

UR - http://www.scopus.com/inward/record.url?scp=85070508754&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85070508754

VL - 2380

JO - CEUR Workshop Proceedings

JF - CEUR Workshop Proceedings

SN - 1613-0073

Y2 - 9 September 2019 through 12 September 2019

ER -
