Habilitation Abstract: Towards Explainable Fact Checking

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Habilitation Abstract: Towards Explainable Fact Checking. / Augenstein, Isabelle.

In: KI - Künstliche Intelligenz, Vol. 36, 2022, p. 255–258.


Harvard

Augenstein, I 2022, 'Habilitation Abstract: Towards Explainable Fact Checking', KI - Künstliche Intelligenz, vol. 36, pp. 255–258. https://doi.org/10.1007/s13218-022-00774-6

APA

Augenstein, I. (2022). Habilitation Abstract: Towards Explainable Fact Checking. KI - Künstliche Intelligenz, 36, 255–258. https://doi.org/10.1007/s13218-022-00774-6

Vancouver

Augenstein I. Habilitation Abstract: Towards Explainable Fact Checking. KI - Künstliche Intelligenz. 2022;36:255–258. https://doi.org/10.1007/s13218-022-00774-6

Author

Augenstein, Isabelle. / Habilitation Abstract: Towards Explainable Fact Checking. In: KI - Künstliche Intelligenz. 2022 ; Vol. 36. pp. 255–258.

Bibtex

@article{078a7287a57a47699c9b08c611b789b7,
title = "Habilitation Abstract: Towards Explainable Fact Checking",
abstract = "With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article is a summary of a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen, which was sucessfully defended in December 2021 (Augenstein in Towards Explainable Fact Checking. Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticles: (1) the fact-checking subtask they address; (2) methods which only require small amounts of manually labelled data; (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision-making of black-box fact checking models.",
keywords = "Automatic fact checking, Explainable AI, Natural language understanding, Low-resource learning, Multi-task learning",
author = "Isabelle Augenstein",
year = "2022",
doi = "10.1007/s13218-022-00774-6",
language = "English",
volume = "36",
pages = "255–258",
journal = "KI - K{\"u}nstliche Intelligenz",
issn = "0933-1875",
publisher = "Springer",

}

RIS

TY - JOUR

T1 - Habilitation Abstract

T2 - Towards Explainable Fact Checking

AU - Augenstein, Isabelle

PY - 2022

Y1 - 2022

N2 - With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article is a summary of a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen, which was successfully defended in December 2021 (Augenstein in Towards Explainable Fact Checking. Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticals: (1) the fact-checking subtask they address; (2) methods which only require small amounts of manually labelled data; (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision-making of black-box fact checking models.

AB - With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article is a summary of a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen, which was successfully defended in December 2021 (Augenstein in Towards Explainable Fact Checking. Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticals: (1) the fact-checking subtask they address; (2) methods which only require small amounts of manually labelled data; (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision-making of black-box fact checking models.

KW - Automatic fact checking

KW - Explainable AI

KW - Natural language understanding

KW - Low-resource learning

KW - Multi-task learning

U2 - 10.1007/s13218-022-00774-6

DO - 10.1007/s13218-022-00774-6

M3 - Journal article

VL - 36

SP - 255

EP - 258

JO - KI - Künstliche Intelligenz

JF - KI - Künstliche Intelligenz

SN - 0933-1875

ER -
