Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models? / Ray Choudhury, Sagnik; Bhutani, Nikita; Augenstein, Isabelle.

Proceedings of the 29th International Conference on Computational Linguistics. Association for Computational Linguistics (ACL), 2022. p. 1620–1635.

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Harvard

Ray Choudhury, S, Bhutani, N & Augenstein, I 2022, Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models? in Proceedings of the 29th International Conference on Computational Linguistics. Association for Computational Linguistics (ACL), pp. 1620–1635, THE 29TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS, GYEONGJU, Korea, Republic of, 12/10/2022. <https://aclanthology.org/2022.coling-1.139/>

APA

Ray Choudhury, S., Bhutani, N., & Augenstein, I. (2022). Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models? In Proceedings of the 29th International Conference on Computational Linguistics (pp. 1620–1635). Association for Computational Linguistics (ACL). https://aclanthology.org/2022.coling-1.139/

Vancouver

Ray Choudhury S, Bhutani N, Augenstein I. Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models? In Proceedings of the 29th International Conference on Computational Linguistics. Association for Computational Linguistics (ACL). 2022. p. 1620–1635

Author

Ray Choudhury, Sagnik ; Bhutani, Nikita ; Augenstein, Isabelle. / Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models? Proceedings of the 29th International Conference on Computational Linguistics. Association for Computational Linguistics (ACL), 2022. pp. 1620–1635

Bibtex

@inproceedings{a02dedd005d447fd87d080bb821c6e6e,
title = "Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models?",
abstract = "There have been many efforts to try to understand what grammatical knowledge (e.g., ability to understand the part of speech of a token) is encoded in large pre-trained language models (LM). This is done through {\textquoteleft}Edge Probing{\textquoteright} (EP) tests: supervised classification tasks to predict the grammatical properties of a span (whether it has a particular part of speech) using only the token representations coming from the LM encoder. However, most NLP applications fine-tune these LM encoders for specific tasks. Here, we ask: if an LM is fine-tuned, does the encoding of linguistic information in it change, as measured by EP tests? Specifically, we focus on the task of Question Answering (QA) and conduct experiments on multiple datasets. We find that EP test results do not change significantly when the fine-tuned model performs well or in adversarial situations where the model is forced to learn wrong correlations. From a similar finding, some recent papers conclude that fine-tuning does not change linguistic knowledge in encoders but they do not provide an explanation. We find that EP models are susceptible to exploiting spurious correlations in the EP datasets. When this dataset bias is corrected, we do see an improvement in the EP test results as expected.",
author = "{Ray Choudhury}, Sagnik and Nikita Bhutani and Isabelle Augenstein",
year = "2022",
language = "English",
pages = "1620–1635",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",
note = "THE 29TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS, COLING 2022 ; Conference date: 12-10-2022 Through 17-10-2022",
url = "https://coling2022.org/coling",
}

RIS

TY - GEN

T1 - Can Edge Probing Tests Reveal Linguistic Knowledge in QA Models?

AU - Ray Choudhury, Sagnik

AU - Bhutani, Nikita

AU - Augenstein, Isabelle

N1 - Conference code: 29

PY - 2022

Y1 - 2022

N2 - There have been many efforts to try to understand what grammatical knowledge (e.g., ability to understand the part of speech of a token) is encoded in large pre-trained language models (LM). This is done through ‘Edge Probing’ (EP) tests: supervised classification tasks to predict the grammatical properties of a span (whether it has a particular part of speech) using only the token representations coming from the LM encoder. However, most NLP applications fine-tune these LM encoders for specific tasks. Here, we ask: if an LM is fine-tuned, does the encoding of linguistic information in it change, as measured by EP tests? Specifically, we focus on the task of Question Answering (QA) and conduct experiments on multiple datasets. We find that EP test results do not change significantly when the fine-tuned model performs well or in adversarial situations where the model is forced to learn wrong correlations. From a similar finding, some recent papers conclude that fine-tuning does not change linguistic knowledge in encoders but they do not provide an explanation. We find that EP models are susceptible to exploiting spurious correlations in the EP datasets. When this dataset bias is corrected, we do see an improvement in the EP test results as expected.

AB - There have been many efforts to try to understand what grammatical knowledge (e.g., ability to understand the part of speech of a token) is encoded in large pre-trained language models (LM). This is done through ‘Edge Probing’ (EP) tests: supervised classification tasks to predict the grammatical properties of a span (whether it has a particular part of speech) using only the token representations coming from the LM encoder. However, most NLP applications fine-tune these LM encoders for specific tasks. Here, we ask: if an LM is fine-tuned, does the encoding of linguistic information in it change, as measured by EP tests? Specifically, we focus on the task of Question Answering (QA) and conduct experiments on multiple datasets. We find that EP test results do not change significantly when the fine-tuned model performs well or in adversarial situations where the model is forced to learn wrong correlations. From a similar finding, some recent papers conclude that fine-tuning does not change linguistic knowledge in encoders but they do not provide an explanation. We find that EP models are susceptible to exploiting spurious correlations in the EP datasets. When this dataset bias is corrected, we do see an improvement in the EP test results as expected.

M3 - Article in proceedings

SP - 1620

EP - 1635

BT - Proceedings of the 29th International Conference on Computational Linguistics

PB - Association for Computational Linguistics (ACL)

T2 - THE 29TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS

Y2 - 12 October 2022 through 17 October 2022

ER -

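The abstract describes edge probing (EP) as a supervised classification task: predict a span's grammatical property (e.g., its part of speech) using only the token representations coming from the LM encoder. The sketch below illustrates that setup. It is not the authors' implementation; the encoder name (bert-base-uncased), the mean-pooling of the span, the 17-way label set, and the example sentence are all illustrative assumptions, and EP work typically uses a learned self-attentive pooling rather than the mean.

import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer


class SpanProbe(nn.Module):
    """Classifies a token span from encoder hidden states alone (the EP setup)."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, num_labels),
        )

    def forward(self, token_states: torch.Tensor, span: tuple) -> torch.Tensor:
        start, end = span  # token indices, end exclusive
        # Mean-pool the span (an illustrative choice; EP work typically uses attention pooling).
        pooled = token_states[:, start:end, :].mean(dim=1)
        return self.classifier(pooled)  # logits over span labels


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # the encoder is probed, not trained

probe = SpanProbe(hidden_size=encoder.config.hidden_size, num_labels=17)  # e.g., 17 POS tags

inputs = tokenizer("The quick brown fox jumps", return_tensors="pt")
with torch.no_grad():  # no gradients reach the encoder
    token_states = encoder(**inputs).last_hidden_state  # [batch, seq_len, hidden]

logits = probe(token_states, span=(2, 3))  # span covering the token "quick"
# In training, only the probe's parameters are updated against gold span labels.

In the experiments the abstract summarises, the pre-trained encoder would be swapped for a QA-fine-tuned checkpoint and the probe retrained, so that any change in probe accuracy can be attributed to changes in the encoder's representations.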