It takes nine to smell a rat: Neural multi-task learning for check-worthiness prediction

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

It takes nine to smell a rat: Neural multi-task learning for check-worthiness prediction. / Vasileva, Slavena; Atanasova, Pepa; Màrquez, Lluís; Barrón-Cedeño, Alberto; Nakov, Preslav.

International Conference on Recent Advances in Natural Language Processing in a Deep Learning World, RANLP 2019 - Proceedings. ed. / Galia Angelova; Ruslan Mitkov; Ivelina Nikolova; Irina Temnikova. Incoma Ltd, 2019. p. 1229-1239 (International Conference Recent Advances in Natural Language Processing, RANLP, Vol. 2019-September).


Harvard

Vasileva, S, Atanasova, P, Màrquez, L, Barrón-Cedeño, A & Nakov, P 2019, It takes nine to smell a rat: Neural multi-task learning for check-worthiness prediction. in G Angelova, R Mitkov, I Nikolova & I Temnikova (eds), International Conference on Recent Advances in Natural Language Processing in a Deep Learning World, RANLP 2019 - Proceedings. Incoma Ltd, International Conference Recent Advances in Natural Language Processing, RANLP, vol. 2019-September, pp. 1229-1239, 12th International Conference on Recent Advances in Natural Language Processing, RANLP 2019, Varna, Bulgaria, 02/09/2019. https://doi.org/10.26615/978-954-452-056-4_141

APA

Vasileva, S., Atanasova, P., Màrquez, L., Barrón-Cedeño, A., & Nakov, P. (2019). It takes nine to smell a rat: Neural multi-task learning for check-worthiness prediction. In G. Angelova, R. Mitkov, I. Nikolova, & I. Temnikova (Eds.), International Conference on Recent Advances in Natural Language Processing in a Deep Learning World, RANLP 2019 - Proceedings (pp. 1229-1239). Incoma Ltd. International Conference Recent Advances in Natural Language Processing, RANLP Vol. 2019-September https://doi.org/10.26615/978-954-452-056-4_141

Vancouver

Vasileva S, Atanasova P, Màrquez L, Barrón-Cedeño A, Nakov P. It takes nine to smell a rat: Neural multi-task learning for check-worthiness prediction. In Angelova G, Mitkov R, Nikolova I, Temnikova I, editors, International Conference on Recent Advances in Natural Language Processing in a Deep Learning World, RANLP 2019 - Proceedings. Incoma Ltd. 2019. p. 1229-1239. (International Conference Recent Advances in Natural Language Processing, RANLP, Vol. 2019-September). https://doi.org/10.26615/978-954-452-056-4_141

Author

Vasileva, Slavena ; Atanasova, Pepa ; Màrquez, Lluís ; Barrón-Cedeño, Alberto ; Nakov, Preslav. / It takes nine to smell a rat: Neural multi-task learning for check-worthiness prediction. International Conference on Recent Advances in Natural Language Processing in a Deep Learning World, RANLP 2019 - Proceedings. editor / Galia Angelova ; Ruslan Mitkov ; Ivelina Nikolova ; Irina Temnikova. Incoma Ltd, 2019. pp. 1229-1239 (International Conference Recent Advances in Natural Language Processing, RANLP, Vol. 2019-September).

Bibtex

@inproceedings{2f1e9965d77f4b98b2797fd9d02f656a,
title = "It takes nine to smell a rat: Neural multi-task learning for check-worthiness prediction",
abstract = "We propose a multi-task deep-learning approach for estimating the check-worthiness of claims in political debates. Given a political debate, such as the 2016 US Presidential and Vice-Presidential ones, the task is to predict which statements in the debate should be prioritized for fact-checking. While different fact-checking organizations would naturally make different choices when analyzing the same debate, we show that it pays to learn from multiple sources simultaneously (PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago Tribune, The Guardian, and Washington Post) in a multi-task learning setup, even when a particular source is chosen as a target to imitate. Our evaluation shows state-of-the-art results on a standard dataset for the task of check-worthiness prediction.",
author = "Slavena Vasileva and Pepa Atanasova and Llu{\'i}s M{\`a}rquez and Alberto Barr{\'o}n-Cede{\~n}o and Preslav Nakov",
year = "2019",
doi = "10.26615/978-954-452-056-4_141",
language = "English",
series = "International Conference Recent Advances in Natural Language Processing, RANLP",
publisher = "Incoma Ltd",
pages = "1229--1239",
editor = "Galia Angelova and Ruslan Mitkov and Ivelina Nikolova and Irina Temnikova and Irina Temnikova",
booktitle = "International Conference on Recent Advances in Natural Language Processing in a Deep Learning World, RANLP 2019 - Proceedings",
note = "12th International Conference on Recent Advances in Natural Language Processing, RANLP 2019 ; Conference date: 02-09-2019 Through 04-09-2019",

}
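The abstract above outlines the core technique: a single model trained jointly on check-worthiness labels from nine fact-checking sources, sharing one sentence encoder while keeping a separate prediction head per source. The following is an illustrative sketch of that multi-task setup, not the authors' code; the encoder type, layer sizes, vocabulary size, and source-key handling are all assumptions made for the example.

# Minimal multi-task sketch (assumed architecture, not the paper's implementation):
# a shared BiLSTM sentence encoder with one binary check-worthiness head per source.
import torch
import torch.nn as nn

SOURCES = ["PolitiFact", "FactCheck", "ABC", "CNN", "NPR",
           "NYT", "Chicago Tribune", "The Guardian", "Washington Post"]

class MultiTaskCheckWorthiness(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Shared encoder over the debate sentence.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # One head per annotation source (spaces replaced for valid ModuleDict keys).
        self.heads = nn.ModuleDict({
            src.replace(" ", "_"): nn.Linear(2 * hidden_dim, 2) for src in SOURCES
        })

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)             # (batch, seq, emb_dim)
        _, (h_n, _) = self.encoder(embedded)             # h_n: (2, batch, hidden_dim)
        sent_repr = torch.cat([h_n[0], h_n[1]], dim=-1)  # (batch, 2 * hidden_dim)
        # Each head predicts whether its source would select the sentence for fact-checking.
        return {name: head(sent_repr) for name, head in self.heads.items()}

# Toy usage: random token ids for a batch of 4 sentences of length 20.
model = MultiTaskCheckWorthiness()
logits = model(torch.randint(1, 10000, (4, 20)))
print({name: out.shape for name, out in logits.items()})

In such a setup, training would typically sum the per-source cross-entropy losses (skipping heads for which a sentence has no label), so the shared encoder benefits from all nine sources even when one source is chosen as the target to imitate, as the abstract describes.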

RIS

TY - GEN

T1 - It takes nine to smell a rat: Neural multi-task learning for check-worthiness prediction

T2 - 12th International Conference on Recent Advances in Natural Language Processing, RANLP 2019

AU - Vasileva, Slavena

AU - Atanasova, Pepa

AU - Màrquez, Lluís

AU - Barrón-Cedeño, Alberto

AU - Nakov, Preslav

PY - 2019

Y1 - 2019

N2 - We propose a multi-task deep-learning approach for estimating the check-worthiness of claims in political debates. Given a political debate, such as the 2016 US Presidential and Vice-Presidential ones, the task is to predict which statements in the debate should be prioritized for fact-checking. While different fact-checking organizations would naturally make different choices when analyzing the same debate, we show that it pays to learn from multiple sources simultaneously (PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago Tribune, The Guardian, and Washington Post) in a multi-task learning setup, even when a particular source is chosen as a target to imitate. Our evaluation shows state-of-the-art results on a standard dataset for the task of check-worthiness prediction.

AB - We propose a multi-task deep-learning approach for estimating the check-worthiness of claims in political debates. Given a political debate, such as the 2016 US Presidential and Vice-Presidential ones, the task is to predict which statements in the debate should be prioritized for fact-checking. While different fact-checking organizations would naturally make different choices when analyzing the same debate, we show that it pays to learn from multiple sources simultaneously (PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago Tribune, The Guardian, and Washington Post) in a multi-task learning setup, even when a particular source is chosen as a target to imitate. Our evaluation shows state-of-the-art results on a standard dataset for the task of check-worthiness prediction.

UR - http://www.scopus.com/inward/record.url?scp=85070498498&partnerID=8YFLogxK

U2 - 10.26615/978-954-452-056-4_141

DO - 10.26615/978-954-452-056-4_141

M3 - Article in proceedings

AN - SCOPUS:85070498498

T3 - International Conference Recent Advances in Natural Language Processing, RANLP

SP - 1229

EP - 1239

BT - International Conference on Recent Advances in Natural Language Processing in a Deep Learning World, RANLP 2019 - Proceedings

A2 - Angelova, Galia

A2 - Mitkov, Ruslan

A2 - Nikolova, Ivelina

A2 - Temnikova, Irina

PB - Incoma Ltd

Y2 - 2 September 2019 through 4 September 2019

ER -
