Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models. / Cabello, Laura; Bugliarello, Emanuele; Brandl, Stephanie; Elliott, Desmond.

Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), 2023. p. 8465-8483.

Harvard

Cabello, L, Bugliarello, E, Brandl, S & Elliott, D 2023, Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models. in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), pp. 8465-8483, 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 06/12/2023. https://doi.org/10.18653/v1/2023.emnlp-main.525

APA

Cabello, L., Bugliarello, E., Brandl, S., & Elliott, D. (2023). Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 8465-8483). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.525

Vancouver

Cabello L, Bugliarello E, Brandl S, Elliott D. Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL). 2023. p. 8465-8483. https://doi.org/10.18653/v1/2023.emnlp-main.525

Author

Cabello, Laura ; Bugliarello, Emanuele ; Brandl, Stephanie ; Elliott, Desmond. / Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), 2023. pp. 8465-8483

Bibtex

@inproceedings{b9f91d2ae5084f4fab36108de65d59aa,
title = "Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models",
abstract = "Pretrained machine learning models are known to perpetuate and even amplify existing biases in data, which can result in unfair outcomes that ultimately impact user experience. Therefore, it is crucial to understand the mechanisms behind those prejudicial biases to ensure that model performance does not result in discriminatory behaviour toward certain groups or populations. In this work, we define gender bias as our case study. We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models. We investigate the connection, if any, between the two learning stages, and evaluate how bias amplification reflects on model performance. Overall, we find that bias amplification in pretraining and after fine-tuning are independent. We then examine the effect of continued pretraining on gender-neutral data, finding that this reduces group disparities, i.e., promotes fairness, on VQAv2 and retrieval tasks without significantly compromising task performance.",
keywords = "cs.CV, cs.CL, cs.LG",
author = "Laura Cabello and Emanuele Bugliarello and Stephanie Brandl and Desmond Elliott",
year = "2023",
doi = "10.18653/v1/2023.emnlp-main.525",
language = "English",
pages = "8465--8483",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",
note = "2023 Conference on Empirical Methods in Natural Language Processing ; Conference date: 06-12-2023 Through 10-12-2023",
}

RIS

TY - GEN

T1 - Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models

AU - Cabello, Laura

AU - Bugliarello, Emanuele

AU - Brandl, Stephanie

AU - Elliott, Desmond

PY - 2023

Y1 - 2023

N2 - Pretrained machine learning models are known to perpetuate and even amplify existing biases in data, which can result in unfair outcomes that ultimately impact user experience. Therefore, it is crucial to understand the mechanisms behind those prejudicial biases to ensure that model performance does not result in discriminatory behaviour toward certain groups or populations. In this work, we define gender bias as our case study. We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models. We investigate the connection, if any, between the two learning stages, and evaluate how bias amplification reflects on model performance. Overall, we find that bias amplification in pretraining and after fine-tuning are independent. We then examine the effect of continued pretraining on gender-neutral data, finding that this reduces group disparities, i.e., promotes fairness, on VQAv2 and retrieval tasks without significantly compromising task performance.

AB - Pretrained machine learning models are known to perpetuate and even amplify existing biases in data, which can result in unfair outcomes that ultimately impact user experience. Therefore, it is crucial to understand the mechanisms behind those prejudicial biases to ensure that model performance does not result in discriminatory behaviour toward certain groups or populations. In this work, we define gender bias as our case study. We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models. We investigate the connection, if any, between the two learning stages, and evaluate how bias amplification reflects on model performance. Overall, we find that bias amplification in pretraining and after fine-tuning are independent. We then examine the effect of continued pretraining on gender-neutral data, finding that this reduces group disparities, i.e., promotes fairness, on VQAv2 and retrieval tasks without significantly compromising task performance.

KW - cs.CV

KW - cs.CL

KW - cs.LG

U2 - 10.18653/v1/2023.emnlp-main.525

DO - 10.18653/v1/2023.emnlp-main.525

M3 - Article in proceedings

SP - 8465

EP - 8483

BT - Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

PB - Association for Computational Linguistics (ACL)

T2 - 2023 Conference on Empirical Methods in Natural Language Processing

Y2 - 6 December 2023 through 10 December 2023

ER -

ID: 382997067