Pioneer Centre Science Talk by Professor Nathan Kallus

Title

What's the Harm? Bounding Disparities in Treatment

Abstract

If, in an A/B test, half of users click (or buy, or watch, or renew, etc.) whether exposed to the standard experience "A" or a new one "B," it could be because the change affects no one, because the change positively affects half the user population (moving them from no-click to click) while negatively affecting the other half, or anything in between. And the fundamental problem of causal inference -- that we never observe counterfactuals -- prevents us from knowing exactly where we stand. Specifically, what can we say about the average effect on the 10%-worst-affected subpopulation (or 20%, or 30%, etc.)? While demonstrably unknowable, this impact is clearly of material importance to the decision of whether to implement the change. If we cannot hope to measure it, what, then, are the best bounds we can get, and how do we estimate them?
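For concreteness, one standard way to write the target quantity (notation introduced here for illustration, not necessarily the talk's own) is as the lower-tail average, i.e. the conditional value at risk (CVaR), of the individual treatment effect Y(1) - Y(0):

\[
\mathrm{CVaR}_\alpha\bigl(Y(1)-Y(0)\bigr) \;=\; \mathbb{E}\bigl[\,Y(1)-Y(0)\,\big|\,Y(1)-Y(0)\le q_\alpha\,\bigr],
\]

where q_\alpha is the \alpha-quantile of Y(1)-Y(0), \alpha = 0.1 corresponds to the 10%-worst-affected subpopulation, and this simple conditional-expectation form assumes a continuous effect distribution.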

I show that a tight upper bound is the 10%-tail average of the conditional average treatment effect (CATE) given pre-treatment covariates. I also provide tight lower bounds when residual heterogeneity is bounded. A negative upper bound would, for example, provide irrefutable evidence of negative impact on a sizable group, despite the challenges to measurement, thus better supporting efforts to address disparities and improve fairness. Inference on these bounds, however, is made difficult by their dependence on the unknown CATE function, and simply plugging in an estimate of CATE can incur significant bias. Instead, I develop a robust inference algorithm whose bound estimates are consistent essentially regardless of how, and how fast, the CATE is learned, provided the CATE estimate is itself consistent, and which still gives valid, if conservative, bounds even when the CATE is learned inconsistently.
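In symbols, and only as a sketch in the illustrative notation above rather than the talk's exact statement, the upper bound says that averaging out residual heterogeneity within covariate values can only make the tail less extreme:

\[
\mathrm{CVaR}_\alpha\bigl(Y(1)-Y(0)\bigr)\;\le\;\mathrm{CVaR}_\alpha\bigl(\tau(X)\bigr),
\qquad \tau(X)=\mathbb{E}\bigl[Y(1)-Y(0)\mid X\bigr],
\]

which follows from Jensen's inequality applied to the Rockafellar-Uryasev representation of the lower-tail average,

\[
\mathrm{CVaR}_\alpha(Z)\;=\;\max_{\beta\in\mathbb{R}}\Bigl\{\beta-\tfrac{1}{\alpha}\,\mathbb{E}\bigl[(\beta-Z)_+\bigr]\Bigr\}.
\]

To make the plug-in pitfall concrete, the short Python sketch below is the naive estimator the abstract warns about, not the robust inference algorithm presented in the talk: sort fitted CATE values and average the worst alpha fraction, so any error in the fitted CATE feeds straight into the estimate.

```python
import numpy as np

def naive_plugin_tail_average(cate_hat, alpha=0.1):
    """Naive plug-in estimate of the alpha-tail average of fitted CATE values.

    cate_hat: array of CATE estimates tau_hat(X_i) on a held-out sample.
    This is the simple plug-in the abstract cautions can incur significant
    bias; it is NOT the robust inference procedure from the talk.
    """
    cate_hat = np.sort(np.asarray(cate_hat, dtype=float))
    k = max(1, int(np.ceil(alpha * len(cate_hat))))  # size of the worst alpha fraction
    return cate_hat[:k].mean()

# Toy usage with simulated CATE estimates (illustrative numbers only):
rng = np.random.default_rng(0)
tau_hat = rng.normal(loc=0.02, scale=0.10, size=10_000)
print(naive_plugin_tail_average(tau_hat, alpha=0.1))
```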

In a case study of a hypothetical change to French unemployment services, the new bounds and inference algorithm show that a small average social benefit coincides with harm to a sizable subpopulation. Time permitting, I will also turn to the question of assessing disparate impacts of personalized interventions, such as the targeted allocation of homelessness prevention interventions, healthcare case management, and unemployment benefits. Specifically, are these limited resources allocated to the individuals who would actually benefit from them at equal rates across protected groups? Again, the fundamental problem of causal inference prevents direct measurement. I show that measurement is possible under the additional assumption of monotone treatment response and derive bounds when that assumption holds only approximately.
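One way to formalize the allocation question (again purely illustrative notation, with T the allocation, A a protected attribute, and binary potential outcomes Y(0), Y(1)) is as a disparity in treatment rates among those who would actually benefit:

\[
\Delta \;=\; \Pr\bigl(T=1 \,\big|\, Y(1)>Y(0),\, A=a\bigr)\;-\;\Pr\bigl(T=1 \,\big|\, Y(1)>Y(0),\, A=b\bigr).
\]

The conditioning event Y(1) > Y(0) is never observed for any single individual, which is why such a quantity is not identifiable in general; monotone treatment response (Y(1) ≥ Y(0) for everyone) rules out the ambiguous strata and is the kind of additional assumption under which the disparity becomes measurable.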

Bio

Nathan Kallus is an Assistant Professor in the School of Operations Research and Information Engineering and Cornell Tech at Cornell University. Nathan's research interests include optimization, especially under uncertainty; causal inference; sequential decision making; and algorithmic fairness. He holds a PhD in Operations Research from MIT as well as a BA in Mathematics and a BS in Computer Science from UC Berkeley. Before coming to Cornell, Nathan was a Visiting Scholar at USC's Department of Data Sciences and Operations and a Postdoctoral Associate at MIT's Operations Research and Statistics group.

Register

As there is a limited number of seats, please send an email to aicentre@ku.dk no later than Thursday 4 August at 17.00 if you wish to attend the event in person. It is not necessary to register by email if you wish to participate via Zoom.