Locke’s Holiday: Belief Bias in Machine Reading

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Documents

  • Full text

    Final published version, 222 KB, PDF document

I highlight a simple failure mode of state-of-the-art machine reading systems: they fail when contexts do not align with commonly shared beliefs. For example, machine reading systems fail to answer What did Elizabeth want? correctly in the context of ‘My kingdom for a cough drop, cried Queen Elizabeth.’ Biased by co-occurrence statistics in the training data of pretrained language models, systems predict my kingdom rather than a cough drop. I argue that such biases are analogous to human belief biases and present a carefully designed challenge dataset for English machine reading, called Auto-Locke, to quantify such effects. Evaluations of machine reading systems on Auto-Locke show the pervasiveness of belief bias in machine reading.
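
The kind of probe described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical example using the Hugging Face transformers question-answering pipeline; the model checkpoint is an arbitrary off-the-shelf choice rather than the systems evaluated in the paper, and the single hand-written example only mimics the style of an Auto-Locke item, it is not taken from the dataset.

```python
# Minimal sketch of a belief-bias probe, assuming an off-the-shelf
# extractive QA model from Hugging Face Transformers (not the paper's setup).
from transformers import pipeline

# Any extractive QA checkpoint will do; this one is an illustrative choice.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "My kingdom for a cough drop, cried Queen Elizabeth."
question = "What did Elizabeth want?"

prediction = qa(question=question, context=context)

# A belief-biased system may echo the familiar quote and answer "my kingdom"
# instead of the contextually correct "a cough drop".
print(prediction["answer"], prediction["score"])
```

In an Auto-Locke-style evaluation, items of this form, where the context contradicts a widely shared belief or a memorized co-occurrence pattern, are scored against the context-supported answer rather than the belief-consistent one.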
Original language: English
Title of host publication: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics
Publication date: 2021
Pages: 8240–8245
Publication status: Published - 2021
Event: 2021 Conference on Empirical Methods in Natural Language Processing
Duration: 7 Nov 2021 – 11 Nov 2021

Conference

Conference: 2021 Conference on Empirical Methods in Natural Language Processing
Period: 07/11/2021 – 11/11/2021

ID: 299822827