Unsupervised Evaluation for Question Answering with Transformers

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


It is challenging to automatically evaluate the answer of a QA model at inference time. Although many models provide confidence scores, and simple heuristics can go a long way towards indicating answer correctness, such measures are heavily dataset-dependent and are unlikely to generalise. In this work, we begin by investigating the hidden representations of questions, answers, and contexts in transformer-based QA architectures. We observe a consistent pattern in the answer representations, which we show can be used to automatically evaluate whether or not a predicted answer span is correct. Our method does not require any labelled data and outperforms strong heuristic baselines across 2 datasets and 7 domains. We are able to predict whether or not a model’s answer is correct with 91.37% accuracy on SQuAD, and 80.7% accuracy on SubjQA. We expect that this method will have broad applications, e.g., in semi-automatic development of QA datasets.
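As a rough illustration of the raw material the paper analyses, the sketch below shows one plausible way to extract the hidden representations of a predicted answer span from an extractive transformer QA model. It assumes the Hugging Face transformers library and an off-the-shelf checkpoint (deepset/roberta-base-squad2, chosen here only as an example); the paper's actual unsupervised evaluation criterion is defined in the publication itself and is not reproduced here.

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Example checkpoint only; any extractive QA model should work the same way.
model_name = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name, output_hidden_states=True)
model.eval()

question = "Where was the 2020 EMNLP conference held?"
context = "The 2020 Conference on Empirical Methods in Natural Language Processing was held online."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Predicted answer span from the start/end logits.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start:end + 1])

# Final-layer hidden states for the predicted answer tokens; representations of
# this kind are what the paper inspects to judge, without labels, whether the
# predicted span is likely to be correct.
last_hidden = outputs.hidden_states[-1][0]      # (seq_len, hidden_dim)
answer_repr = last_hidden[start:end + 1]        # answer-span representations

print(answer, answer_repr.shape)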
Original language: English
Title of host publication: Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Publisher: Association for Computational Linguistics
Publication date: 2020
Pages: 83-90
DOIs
Publication status: Published - 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - online
Duration: 16 Nov 2020 - 20 Nov 2020
http://2020.emnlp.org

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Location: online
Period: 16/11/2020 - 20/11/2020
Internet address: http://2020.emnlp.org


ID: 254996871