Habilitation Abstract: Towards Explainable Fact Checking
Research output: Contribution to journal › Journal article › Research › peer-review
Standard
Habilitation Abstract: Towards Explainable Fact Checking. / Augenstein, Isabelle.
In: KI - Künstliche Intelligenz, Vol. 36, 2022, p. 255–258.
RIS
TY - JOUR
T1 - Habilitation Abstract
T2 - Towards Explainable Fact Checking
AU - Augenstein, Isabelle
PY - 2022
Y1 - 2022
N2 - With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article is a summary of a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen, which was successfully defended in December 2021 (Augenstein in Towards Explainable Fact Checking. Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticals: (1) the fact-checking subtask they address; (2) methods which only require small amounts of manually labelled data; (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision-making of black-box fact-checking models.
AB - With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article is a summary of a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen, which was successfully defended in December 2021 (Augenstein in Towards Explainable Fact Checking. Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticals: (1) the fact-checking subtask they address; (2) methods which only require small amounts of manually labelled data; (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision-making of black-box fact-checking models.
KW - Automatic fact checking
KW - Explainable AI
KW - Natural language understanding
KW - Low-resource learning
KW - Multi-task learning
U2 - 10.1007/s13218-022-00774-6
DO - 10.1007/s13218-022-00774-6
M3 - Journal article
VL - 36
SP - 255
EP - 258
JO - KI - Künstliche Intelligenz
JF - KI - Künstliche Intelligenz
SN - 0933-1875
ER -