Machine Reading, Fast and Slow: When Do Models “Understand” Language?

Publication: Contribution to book/anthology/report › Conference contribution in proceedings · Research · Peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 461 KB, PDF document

Two of the most fundamental issues in Natural Language Understanding (NLU) at present are: (a) how it can be established whether deep learning-based models score highly on NLU benchmarks for the “right” reasons; and (b) what those reasons would even be. We investigate the behavior of reading comprehension models with respect to two linguistic “skills”: coreference resolution and comparison. We propose a definition of the reasoning steps expected from a system that would be “reading slowly”, and compare that with the behavior of five models of the BERT family of various sizes, observed through saliency scores and counterfactual explanations. We find that for comparison (but not coreference) the systems based on larger encoders are more likely to rely on the “right” information, but even they struggle with generalization, suggesting that they still learn specific lexical patterns rather than the general principles of comparison.
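
The abstract mentions inspecting models through saliency scores. As a minimal sketch only (not the paper's actual code), gradient-norm saliency for a BERT-family extractive QA model could be computed as below with PyTorch and Hugging Face transformers; the checkpoint name, question, and context are illustrative placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Illustrative checkpoint; in practice use a BERT-family model
# fine-tuned for extractive question answering.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
model.eval()

question = "Which tower is taller?"
context = "The north tower is 90 metres tall; the south tower is 75 metres tall."
inputs = tokenizer(question, context, return_tensors="pt")

# Look up the word embeddings ourselves so we can differentiate with
# respect to them; detaching makes them a leaf tensor with its own .grad.
embeds = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeds.requires_grad_(True)

outputs = model(
    inputs_embeds=embeds,
    attention_mask=inputs["attention_mask"],
    token_type_ids=inputs.get("token_type_ids"),
)

# Score the model's own predicted answer span and backpropagate that score.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
score = outputs.start_logits[0, start] + outputs.end_logits[0, end]
score.backward()

# The L2 norm of the gradient at each token embedding is a simple
# gradient-based saliency score per input token.
saliency = embeds.grad[0].norm(dim=-1)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, s in zip(tokens, saliency.tolist()):
    print(f"{token:>12}  {s:.4f}")
```

Counterfactual explanations, the abstract's other probe, would instead perturb the input (e.g. swapping the two compared values in the context) and check whether the predicted answer changes accordingly.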
Original language: English
Title: Proceedings of the 29th International Conference on Computational Linguistics
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2022
Pages: 78–93
Status: Published - 2022
Event: The 29th International Conference on Computational Linguistics - Gyeongju, Republic of Korea
Duration: 12 Oct 2022 – 17 Oct 2022

Conference

Conference: The 29th International Conference on Computational Linguistics
Location: Gyeongju, Republic of Korea
Period: 12/10/2022 – 17/10/2022

ID: 341057090