Neural check-worthiness ranking with weak supervision: Finding sentences for fact-checking

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Neural check-worthiness ranking with weak supervision : Finding sentences for fact-checking. / Hansen, Casper; Hansen, Christian; Alstrup, Stephen; Simonsen, Jakob Grue; Lioma, Christina.

The Web Conference 2019 - Companion of the World Wide Web Conference, WWW 2019. Association for Computing Machinery, 2019. p. 994-1000.

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Harvard

Hansen, C, Hansen, C, Alstrup, S, Simonsen, JG & Lioma, C 2019, Neural check-worthiness ranking with weak supervision: Finding sentences for fact-checking. in The Web Conference 2019 - Companion of the World Wide Web Conference, WWW 2019. Association for Computing Machinery, pp. 994-1000, 2019 World Wide Web Conference, WWW 2019, San Francisco, United States, 13/05/2019. https://doi.org/10.1145/3308560.3316736

APA

Hansen, C., Hansen, C., Alstrup, S., Simonsen, J. G., & Lioma, C. (2019). Neural check-worthiness ranking with weak supervision: Finding sentences for fact-checking. In The Web Conference 2019 - Companion of the World Wide Web Conference, WWW 2019 (pp. 994-1000). Association for Computing Machinery. https://doi.org/10.1145/3308560.3316736

Vancouver

Hansen C, Hansen C, Alstrup S, Simonsen JG, Lioma C. Neural check-worthiness ranking with weak supervision: Finding sentences for fact-checking. In The Web Conference 2019 - Companion of the World Wide Web Conference, WWW 2019. Association for Computing Machinery. 2019. p. 994-1000 https://doi.org/10.1145/3308560.3316736

Author

Hansen, Casper ; Hansen, Christian ; Alstrup, Stephen ; Simonsen, Jakob Grue ; Lioma, Christina. / Neural check-worthiness ranking with weak supervision : Finding sentences for fact-checking. The Web Conference 2019 - Companion of the World Wide Web Conference, WWW 2019. Association for Computing Machinery, 2019. pp. 994-1000

BibTeX

@inproceedings{7368d612cc6746b4ad58253196776cb2,
title = "Neural check-worthiness ranking with weak supervision: Finding sentences for fact-checking",
abstract = "Automatic fact-checking systems detect misinformation, such as fake news, by (i) selecting check-worthy sentences for fact-checking, (ii) gathering related information to the sentences, and (iii) inferring the factuality of the sentences. Most prior research on (i) uses hand-crafted features to select check-worthy sentences, and does not explicitly account for the recent finding that the top weighted terms in both check-worthy and non-check-worthy sentences are actually overlapping [15]. Motivated by this, we present a neural check-worthiness sentence ranking model that represents each word in a sentence by both its embedding (aiming to capture its semantics) and its syntactic dependencies (aiming to capture its role in modifying the semantics of other terms in the sentence). Our model is an end-to-end trainable neural network for check-worthiness ranking, which is trained on large amounts of unlabelled data through weak supervision. Thorough experimental evaluation against state of the art baselines, with and without weak supervision, shows our model to be superior at all times (+13% in MAP and +28% at various Precision cut-offs from the best baseline with statistical significance). Empirical analysis of the use of weak supervision, word embedding pretraining on domain-specific data, and the use of syntactic dependencies of our model reveals that check-worthy sentences contain notably more identical syntactic dependencies than non-check-worthy sentences.",
keywords = "Check worthiness, Deep learning, Fact checking, Weak supervision",
author = "Casper Hansen and Christian Hansen and Stephen Alstrup and Simonsen, {Jakob Grue} and Christina Lioma",
year = "2019",
doi = "10.1145/3308560.3316736",
language = "English",
pages = "994--1000",
booktitle = "The Web Conference 2019 - Companion of the World Wide Web Conference, WWW 2019",
publisher = "Association for Computing Machinery",
note = "2019 World Wide Web Conference, WWW 2019 ; Conference date: 13-05-2019 Through 17-05-2019",

}
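The abstract above outlines the approach at a high level: each word is represented by its word embedding together with an encoding of its syntactic dependencies, a neural network scores sentences for check-worthiness, and training uses weak supervision on large amounts of unlabelled data. The snippet below is a minimal, hypothetical sketch of that idea, not the published implementation; the recurrent encoder, the embedding sizes, and regressing onto noisy scores from a pre-existing labeller are all assumptions made only for illustration.

```python
# Sketch (not the authors' code) of a check-worthiness ranker in which each
# word is a word embedding concatenated with a syntactic-dependency embedding,
# a recurrent encoder (one plausible choice) summarizes the sentence, and a
# linear head outputs a ranking score. Weak supervision is mimicked by fitting
# noisy scores assigned by some existing (hypothetical) labelling tool.

import torch
import torch.nn as nn

class CheckWorthinessRanker(nn.Module):
    def __init__(self, vocab_size=10000, dep_size=50, word_dim=100,
                 dep_dim=25, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)  # word semantics
        self.dep_emb = nn.Embedding(dep_size, dep_dim)       # syntactic role
        self.encoder = nn.LSTM(word_dim + dep_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)                    # scalar score

    def forward(self, word_ids, dep_ids):
        # word_ids, dep_ids: (batch, seq_len) integer tensors
        x = torch.cat([self.word_emb(word_ids), self.dep_emb(dep_ids)], dim=-1)
        _, (h_n, _) = self.encoder(x)                        # final hidden state
        return self.score(h_n[-1]).squeeze(-1)               # (batch,)

# One weakly supervised training step on dummy data: the targets stand in for
# noisy check-worthiness scores over unlabelled sentences, not manual labels.
model = CheckWorthinessRanker()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

word_ids = torch.randint(0, 10000, (32, 20))  # dummy tokenized sentences
dep_ids = torch.randint(0, 50, (32, 20))      # dummy dependency-relation ids
weak_scores = torch.rand(32)                  # dummy weak labels in [0, 1]

pred = model(word_ids, dep_ids)
loss = loss_fn(pred, weak_scores)
loss.backward()
optimizer.step()
```

At inference time the scores would simply be sorted to rank a document's sentences by check-worthiness; the abstract's reported gains over baselines come from the published model and evaluation, not from this sketch.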

RIS

TY - GEN

T1 - Neural check-worthiness ranking with weak supervision

T2 - 2019 World Wide Web Conference, WWW 2019

AU - Hansen, Casper

AU - Hansen, Christian

AU - Alstrup, Stephen

AU - Simonsen, Jakob Grue

AU - Lioma, Christina

PY - 2019

Y1 - 2019

N2 - Automatic fact-checking systems detect misinformation, such as fake news, by (i) selecting check-worthy sentences for fact-checking, (ii) gathering related information to the sentences, and (iii) inferring the factuality of the sentences. Most prior research on (i) uses hand-crafted features to select check-worthy sentences, and does not explicitly account for the recent finding that the top weighted terms in both check-worthy and non-check-worthy sentences are actually overlapping [15]. Motivated by this, we present a neural check-worthiness sentence ranking model that represents each word in a sentence by both its embedding (aiming to capture its semantics) and its syntactic dependencies (aiming to capture its role in modifying the semantics of other terms in the sentence). Our model is an end-to-end trainable neural network for check-worthiness ranking, which is trained on large amounts of unlabelled data through weak supervision. Thorough experimental evaluation against state of the art baselines, with and without weak supervision, shows our model to be superior at all times (+13% in MAP and +28% at various Precision cut-offs from the best baseline with statistical significance). Empirical analysis of the use of weak supervision, word embedding pretraining on domain-specific data, and the use of syntactic dependencies of our model reveals that check-worthy sentences contain notably more identical syntactic dependencies than non-check-worthy sentences.

AB - Automatic fact-checking systems detect misinformation, such as fake news, by (i) selecting check-worthy sentences for fact-checking, (ii) gathering related information to the sentences, and (iii) inferring the factuality of the sentences. Most prior research on (i) uses hand-crafted features to select check-worthy sentences, and does not explicitly account for the recent finding that the top weighted terms in both check-worthy and non-check-worthy sentences are actually overlapping [15]. Motivated by this, we present a neural check-worthiness sentence ranking model that represents each word in a sentence by both its embedding (aiming to capture its semantics) and its syntactic dependencies (aiming to capture its role in modifying the semantics of other terms in the sentence). Our model is an end-to-end trainable neural network for check-worthiness ranking, which is trained on large amounts of unlabelled data through weak supervision. Thorough experimental evaluation against state of the art baselines, with and without weak supervision, shows our model to be superior at all times (+13% in MAP and +28% at various Precision cut-offs from the best baseline with statistical significance). Empirical analysis of the use of weak supervision, word embedding pretraining on domain-specific data, and the use of syntactic dependencies of our model reveals that check-worthy sentences contain notably more identical syntactic dependencies than non-check-worthy sentences.

KW - Check worthiness

KW - Deep learning

KW - Fact checking

KW - Weak supervision

UR - http://www.scopus.com/inward/record.url?scp=85066888862&partnerID=8YFLogxK

U2 - 10.1145/3308560.3316736

DO - 10.1145/3308560.3316736

M3 - Article in proceedings

AN - SCOPUS:85066888862

SP - 994

EP - 1000

BT - The Web Conference 2019 - Companion of the World Wide Web Conference, WWW 2019

PB - Association for Computing Machinery

Y2 - 13 May 2019 through 17 May 2019

ER -

ID: 223251762