On the Independence of Association Bias and Empirical Fairness in Language Models

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › Peer-reviewed

Documents

  • Full text

    Submitted manuscript, 862 KB, PDF document

The societal impact of pre-trained language models has prompted researchers to probe them for strong associations between protected attributes and value-loaded terms, from slurs to prestigious job titles. Such work is said to probe models for bias or fairness, or such probes 'into representational biases' are said to be 'motivated by fairness', suggesting an intimate connection between bias and fairness. We provide conceptual clarity by distinguishing between association biases [11] and empirical fairness [56] and show that the two can be independent. Our main contribution, however, is showing why this should not come as a surprise. To this end, we first provide a thought experiment, showing how association bias and empirical fairness can be completely orthogonal. Next, we provide empirical evidence that there is no correlation between bias metrics and fairness metrics across the most widely used language models. Finally, we survey the sociological and psychological literature and show how this literature provides ample support for expecting these metrics to be uncorrelated.
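To make the kind of comparison the abstract describes concrete, below is a minimal, self-contained sketch, not the authors' code: a WEAT-style association score in the spirit of [11], an equal-opportunity-style true-positive-rate gap in the spirit of [56], and a rank correlation between the two across models. All word sets, embeddings, predictions, and the per-model loop are random stand-ins, chosen only to show the shape of the analysis.

```python
# Hypothetical sketch: does an association-bias score correlate with an
# empirical-fairness gap across models? All data below is synthetic.
import numpy as np
from scipy.stats import spearmanr

def weat_style_bias(target_a, target_b, attr_x, attr_y):
    """WEAT-style association score: difference in mean cosine similarity
    of two target-word embedding sets to two attribute-word sets."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    def s(w):
        return (np.mean([cos(w, x) for x in attr_x])
                - np.mean([cos(w, y) for y in attr_y]))
    return np.mean([s(a) for a in target_a]) - np.mean([s(b) for b in target_b])

def tpr_gap(y_true, y_pred, group):
    """Empirical fairness as a true-positive-rate gap between two groups
    (equal-opportunity style). Assumes each group has positive examples."""
    tpr = lambda g: np.mean(y_pred[(y_true == 1) & (group == g)])
    return abs(tpr(0) - tpr(1))

rng = np.random.default_rng(0)
bias_scores, fairness_gaps = [], []
for _ in range(12):  # stand-in: one iteration per pre-trained model
    emb = lambda n: rng.normal(size=(n, 50))  # fake word embeddings
    bias_scores.append(weat_style_bias(emb(8), emb(8), emb(8), emb(8)))
    y_true = rng.integers(0, 2, 500)   # fake downstream labels
    y_pred = rng.integers(0, 2, 500)   # fake model predictions
    group = rng.integers(0, 2, 500)    # fake protected-group membership
    fairness_gaps.append(tpr_gap(y_true, y_pred, group))

rho, p = spearmanr(bias_scores, fairness_gaps)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```

With random stand-ins the printed correlation is of course meaningless; the point is only the structure of the test. Under the paper's finding, one would expect rho near zero for real models.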

Original language: English
Title: Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
Publisher: Association for Computing Machinery, Inc.
Publication date: 2023
Pages: 370-378
ISBN (electronic): 9781450372527
DOI
Status: Published - 2023
Event: 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023 - Chicago, USA
Duration: 12 Jun 2023 - 15 Jun 2023

Conference

Conference: 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
Country: USA
City: Chicago
Period: 12/06/2023 - 15/06/2023

Bibliographical note

Publisher Copyright:
© 2023 ACM.

ID: 381563506