Habilitation Abstract: Towards Explainable Fact Checking

Research output: Contribution to journal › Journal article › Research › peer-review


With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article is a summary of a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen, which was successfully defended in December 2021 (Augenstein in Towards Explainable Fact Checking. Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticals: (1) the fact-checking subtask they address; (2) methods which only require small amounts of manually labelled data; (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision-making of black-box fact-checking models.
Original language: English
Journal: KI - Künstliche Intelligenz
Volume: 36
Pages (from-to): 255–258
ISSN: 0933-1875
DOIs
Publication status: Published - 2022

    Research areas

  • Automatic fact checking, Explainable AI, Natural language understanding, Low-resource learning, Multi-task learning
