How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs? / Sen, Indira; Samory, Mattia; Flöck, Fabian; Wagner, Claudia; Augenstein, Isabelle.

Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2021. p. 325-344.


Harvard

Sen, I, Samory, M, Flöck, F, Wagner, C & Augenstein, I 2021, How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs? in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pp. 325-344, 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic, 07/11/2021. https://doi.org/10.18653/v1/2021.emnlp-main.28

APA

Sen, I., Samory, M., Flöck, F., Wagner, C., & Augenstein, I. (2021). How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 325-344). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.emnlp-main.28

Vancouver

Sen I, Samory M, Flöck F, Wagner C, Augenstein I. How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. 2021. p. 325-344. https://doi.org/10.18653/v1/2021.emnlp-main.28

Author

Sen, Indira ; Samory, Mattia ; Flöck, Fabian ; Wagner, Claudia ; Augenstein, Isabelle. / How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs? Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2021. pp. 325-344

Bibtex

@inproceedings{a035c7f598394bccaec62bf722c3e56b,
title = "How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?",
abstract = "As NLP models are increasingly deployed in socially situated settings such as online abusive content detection, it is crucial to ensure that these models are robust. One way of improving model robustness is to generate counterfactually augmented data (CAD) for training models that can better learn to distinguish between core features and data artifacts. While models trained on this type of data have shown promising out-of-domain generalizability, it is still unclear what the sources of such improvements are. We investigate the benefits of CAD for social NLP models by focusing on three social computing constructs — sentiment, sexism, and hate speech. Assessing the performance of models trained with and without CAD across different types of datasets, we find that while models trained on CAD show lower in-domain performance, they generalize better out-of-domain. We unpack this apparent discrepancy using machine explanations and find that CAD reduces model reliance on spurious features. Leveraging a novel typology of CAD to analyze their relationship with model performance, we find that CAD which acts on the construct directly or a diverse set of CAD leads to higher performance.",
author = "Indira Sen and Mattia Samory and Fabian Fl{\"o}ck and Claudia Wagner and Isabelle Augenstein",
year = "2021",
doi = "10.18653/v1/2021.emnlp-main.28",
language = "English",
pages = "325--344",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
publisher = "Association for Computational Linguistics",
note = "2021 Conference on Empirical Methods in Natural Language Processing ; Conference date: 07-11-2021 Through 11-11-2021",

}

RIS

TY - GEN

T1 - How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?

AU - Sen, Indira

AU - Samory, Mattia

AU - Flöck, Fabian

AU - Wagner, Claudia

AU - Augenstein, Isabelle

PY - 2021

Y1 - 2021

N2 - As NLP models are increasingly deployed in socially situated settings such as online abusive content detection, it is crucial to ensure that these models are robust. One way of improving model robustness is to generate counterfactually augmented data (CAD) for training models that can better learn to distinguish between core features and data artifacts. While models trained on this type of data have shown promising out-of-domain generalizability, it is still unclear what the sources of such improvements are. We investigate the benefits of CAD for social NLP models by focusing on three social computing constructs — sentiment, sexism, and hate speech. Assessing the performance of models trained with and without CAD across different types of datasets, we find that while models trained on CAD show lower in-domain performance, they generalize better out-of-domain. We unpack this apparent discrepancy using machine explanations and find that CAD reduces model reliance on spurious features. Leveraging a novel typology of CAD to analyze their relationship with model performance, we find that CAD which acts on the construct directly or a diverse set of CAD leads to higher performance.

AB - As NLP models are increasingly deployed in socially situated settings such as online abusive content detection, it is crucial to ensure that these models are robust. One way of improving model robustness is to generate counterfactually augmented data (CAD) for training models that can better learn to distinguish between core features and data artifacts. While models trained on this type of data have shown promising out-of-domain generalizability, it is still unclear what the sources of such improvements are. We investigate the benefits of CAD for social NLP models by focusing on three social computing constructs — sentiment, sexism, and hate speech. Assessing the performance of models trained with and without CAD across different types of datasets, we find that while models trained on CAD show lower in-domain performance, they generalize better out-of-domain. We unpack this apparent discrepancy using machine explanations and find that CAD reduces model reliance on spurious features. Leveraging a novel typology of CAD to analyze their relationship with model performance, we find that CAD which acts on the construct directly or a diverse set of CAD leads to higher performance.

U2 - 10.18653/v1/2021.emnlp-main.28

DO - 10.18653/v1/2021.emnlp-main.28

M3 - Article in proceedings

SP - 325

EP - 344

BT - Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

PB - Association for Computational Linguistics

T2 - 2021 Conference on Empirical Methods in Natural Language Processing

Y2 - 7 November 2021 through 11 November 2021

ER -
