Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Semantic Sensitivities and Inconsistent Predictions : Measuring the Fragility of NLI Models. / Arakelyan, Erik; Liu, Zhaoqi; Augenstein, Isabelle.

EACL 2024 - 18th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference. ed. / Yvette Graham; Matthew Purver. Association for Computational Linguistics (ACL), 2024. p. 432-444.

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Harvard

Arakelyan, E, Liu, Z & Augenstein, I 2024, Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models. in Y Graham & M Purver (eds), EACL 2024 - 18th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference. Association for Computational Linguistics (ACL), pp. 432-444, 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024, St. Julian's, Malta, 17/03/2024. <https://aclanthology.org/2024.eacl-long.27/>

APA

Arakelyan, E., Liu, Z., & Augenstein, I. (2024). Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models. In Y. Graham & M. Purver (Eds.), EACL 2024 - 18th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 432-444). Association for Computational Linguistics (ACL). https://aclanthology.org/2024.eacl-long.27/

Vancouver

Arakelyan E, Liu Z, Augenstein I. Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models. In Graham Y, Purver M, editors, EACL 2024 - 18th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference. Association for Computational Linguistics (ACL). 2024. p. 432-444.

Author

Arakelyan, Erik ; Liu, Zhaoqi ; Augenstein, Isabelle. / Semantic Sensitivities and Inconsistent Predictions : Measuring the Fragility of NLI Models. EACL 2024 - 18th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference. editor / Yvette Graham ; Matthew Purver. Association for Computational Linguistics (ACL), 2024. pp. 432-444

Bibtex

@inproceedings{8048dca0600841d6b06c73b54d0b111b,
title = "Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models",
abstract = "Recent studies of the emergent capabilities of transformer-based Natural Language Understanding (NLU) models have indicated that they have an understanding of lexical and compositional semantics. We provide evidence that suggests these claims should be taken with a grain of salt: we find that state-of-the-art Natural Language Inference (NLI) models are sensitive to minor, semantics-preserving surface-form variations, which lead to sizable inconsistent model decisions during inference. Notably, this behaviour differs from valid and in-depth comprehension of compositional semantics; it emerges neither when evaluating model accuracy on standard benchmarks nor when probing for syntactic, monotonic, and logically robust reasoning. We propose a novel framework to measure the extent of semantic sensitivity. To this end, we evaluate NLI models on adversarially generated examples containing minor semantics-preserving surface-form input noise. This is achieved using conditional text generation, with the explicit condition that the NLI model predicts the relationship between the original and adversarial inputs as a symmetric equivalence entailment. We systematically study the effects of this phenomenon across NLI models in in- and out-of-domain settings. Our experiments show that semantic sensitivity causes average performance degradations of 12.92% and 23.71% in in- and out-of-domain settings, respectively. We further perform ablation studies, analysing this phenomenon across models, datasets, and variations in inference, and show that semantic sensitivity can lead to major inconsistency within model predictions.",
author = "Erik Arakelyan and Zhaoqi Liu and Isabelle Augenstein",
note = "Publisher Copyright: {\textcopyright} 2024 Association for Computational Linguistics.; 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024 ; Conference date: 17-03-2024 Through 22-03-2024",
year = "2024",
language = "English",
pages = "432--444",
editor = "Yvette Graham and Matthew Purver",
booktitle = "EACL 2024 - 18th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",

}

RIS

TY - GEN

T1 - Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models

T2 - 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024

AU - Arakelyan, Erik

AU - Liu, Zhaoqi

AU - Augenstein, Isabelle

N1 - Publisher Copyright: © 2024 Association for Computational Linguistics.

PY - 2024

Y1 - 2024

N2 - Recent studies of the emergent capabilities of transformer-based Natural Language Understanding (NLU) models have indicated that they have an understanding of lexical and compositional semantics. We provide evidence that suggests these claims should be taken with a grain of salt: we find that state-of-the-art Natural Language Inference (NLI) models are sensitive to minor, semantics-preserving surface-form variations, which lead to sizable inconsistent model decisions during inference. Notably, this behaviour differs from valid and in-depth comprehension of compositional semantics; it emerges neither when evaluating model accuracy on standard benchmarks nor when probing for syntactic, monotonic, and logically robust reasoning. We propose a novel framework to measure the extent of semantic sensitivity. To this end, we evaluate NLI models on adversarially generated examples containing minor semantics-preserving surface-form input noise. This is achieved using conditional text generation, with the explicit condition that the NLI model predicts the relationship between the original and adversarial inputs as a symmetric equivalence entailment. We systematically study the effects of this phenomenon across NLI models in in- and out-of-domain settings. Our experiments show that semantic sensitivity causes average performance degradations of 12.92% and 23.71% in in- and out-of-domain settings, respectively. We further perform ablation studies, analysing this phenomenon across models, datasets, and variations in inference, and show that semantic sensitivity can lead to major inconsistency within model predictions.

AB - Recent studies of the emergent capabilities of transformer-based Natural Language Understanding (NLU) models have indicated that they have an understanding of lexical and compositional semantics. We provide evidence that suggests these claims should be taken with a grain of salt: we find that state-of-the-art Natural Language Inference (NLI) models are sensitive to minor, semantics-preserving surface-form variations, which lead to sizable inconsistent model decisions during inference. Notably, this behaviour differs from valid and in-depth comprehension of compositional semantics; it emerges neither when evaluating model accuracy on standard benchmarks nor when probing for syntactic, monotonic, and logically robust reasoning. We propose a novel framework to measure the extent of semantic sensitivity. To this end, we evaluate NLI models on adversarially generated examples containing minor semantics-preserving surface-form input noise. This is achieved using conditional text generation, with the explicit condition that the NLI model predicts the relationship between the original and adversarial inputs as a symmetric equivalence entailment. We systematically study the effects of this phenomenon across NLI models in in- and out-of-domain settings. Our experiments show that semantic sensitivity causes average performance degradations of 12.92% and 23.71% in in- and out-of-domain settings, respectively. We further perform ablation studies, analysing this phenomenon across models, datasets, and variations in inference, and show that semantic sensitivity can lead to major inconsistency within model predictions.

M3 - Article in proceedings

AN - SCOPUS:85189930486

SP - 432

EP - 444

BT - EACL 2024 - 18th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference

A2 - Graham, Yvette

A2 - Purver, Matthew

PB - Association for Computational Linguistics (ACL)

Y2 - 17 March 2024 through 22 March 2024

ER -