Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models?
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models? / Ray Choudhury, Sagnik; Bhutani, Nikita; Augenstein, Isabelle.
Proceedings of the 29th International Conference on Computational Linguistics. Association for Computational Linguistics (ACL), 2022. p. 1620–1635.
Bibtex
@inproceedings{raychoudhury2022edgeprobing,
  title     = {Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models?},
  author    = {Ray Choudhury, Sagnik and Bhutani, Nikita and Augenstein, Isabelle},
  booktitle = {Proceedings of the 29th International Conference on Computational Linguistics},
  publisher = {Association for Computational Linguistics (ACL)},
  year      = {2022},
  pages     = {1620--1635},
}
RIS
TY - GEN
T1 - Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models?
AU - Ray Choudhury, Sagnik
AU - Bhutani, Nikita
AU - Augenstein, Isabelle
N1 - Conference code: 29
PY - 2022
Y1 - 2022
N2 - There have been many efforts to understand what grammatical knowledge (e.g., the ability to identify the part of speech of a token) is encoded in large pre-trained language models (LMs). This is done through ‘Edge Probing’ (EP) tests: supervised classification tasks to predict the grammatical properties of a span (e.g., whether it has a particular part of speech) using only the token representations from the LM encoder. However, most NLP applications fine-tune these LM encoders for specific tasks. Here, we ask: if an LM is fine-tuned, does the encoding of linguistic information in it change, as measured by EP tests? Specifically, we focus on the task of Question Answering (QA) and conduct experiments on multiple datasets. We find that EP test results do not change significantly when the fine-tuned model performs well or in adversarial situations where the model is forced to learn wrong correlations. From a similar finding, some recent papers conclude that fine-tuning does not change linguistic knowledge in encoders, but they do not provide an explanation. We find that EP models are susceptible to exploiting spurious correlations in the EP datasets. When this dataset bias is corrected, we do see an improvement in the EP test results, as expected.
AB - There have been many efforts to understand what grammatical knowledge (e.g., the ability to identify the part of speech of a token) is encoded in large pre-trained language models (LMs). This is done through ‘Edge Probing’ (EP) tests: supervised classification tasks to predict the grammatical properties of a span (e.g., whether it has a particular part of speech) using only the token representations from the LM encoder. However, most NLP applications fine-tune these LM encoders for specific tasks. Here, we ask: if an LM is fine-tuned, does the encoding of linguistic information in it change, as measured by EP tests? Specifically, we focus on the task of Question Answering (QA) and conduct experiments on multiple datasets. We find that EP test results do not change significantly when the fine-tuned model performs well or in adversarial situations where the model is forced to learn wrong correlations. From a similar finding, some recent papers conclude that fine-tuning does not change linguistic knowledge in encoders, but they do not provide an explanation. We find that EP models are susceptible to exploiting spurious correlations in the EP datasets. When this dataset bias is corrected, we do see an improvement in the EP test results, as expected.
M3 - Article in proceedings
SP - 1620
EP - 1635
BT - Proceedings of the 29th International Conference on Computational Linguistics
PB - Association for Computational Linguistics (ACL)
T2 - The 29th International Conference on Computational Linguistics
Y2 - 12 October 2022 through 17 October 2022
ER -
ID: 341056680
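The abstract describes the edge probing (EP) setup: a lightweight supervised classifier is trained to predict a span's grammatical property (e.g., its part-of-speech tag) using only the frozen token representations from a pre-trained encoder. Below is a minimal sketch of such a probe in Python, assuming a Hugging Face-style encoder; the model name, the mean-pooling over span tokens, the toy label set, and the single training example are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any Hugging Face encoder would do

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()
for p in encoder.parameters():  # freeze the encoder; only the probe is trained
    p.requires_grad = False

POS_TAGS = ["NOUN", "VERB", "ADJ"]  # toy label set for illustration


class SpanProbe(nn.Module):
    """Predict a span's label from mean-pooled frozen token representations."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden_states: torch.Tensor, span: tuple) -> torch.Tensor:
        start, end = span  # token indices, end exclusive
        pooled = hidden_states[0, start:end].mean(dim=0)
        return self.classifier(pooled)


probe = SpanProbe(encoder.config.hidden_size, len(POS_TAGS))
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training step: probe whether the span "runs" carries the VERB tag.
enc = tokenizer("The dog runs fast", return_tensors="pt")
with torch.no_grad():  # representations come from the frozen encoder
    hidden = encoder(**enc).last_hidden_state  # (1, seq_len, hidden_size)

span = (3, 4)  # "runs": [CLS]=0, the=1, dog=2, runs=3, fast=4, [SEP]=5
label = torch.tensor([POS_TAGS.index("VERB")])

optimizer.zero_grad()
logits = probe(hidden, span).unsqueeze(0)  # (1, num_labels)
loss_fn(logits, label).backward()
optimizer.step()

To mirror the question the paper asks, the same probe would be trained twice, once on representations from the pre-trained encoder and once on representations from a QA-fine-tuned copy, and the two test scores compared.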