Exploring Deviations from the Norm: A Participatory Study of Outliers in Asylum Decision-Making [Oral Presentation]

Publication: Conference contribution › Conference abstract for conference › Research

Refugees around the world are increasingly subject to data-driven decision-making when applying for asylum. Researchers in countries such as the US, Canada and Australia are experimenting with machine learning algorithms to predict asylum outcomes, mitigate judges' bias and harmonise decision outcomes [Cameron et al.(2021)], [Chen and Eagel(2017)], [Dunn et al.(2017)]. However, there is a growing literature warning of the potential harms of automated decision-making [Završnik(2021)], [Brown et al.(2019)]. Few of these studies have scrutinised concrete algorithmic techniques and the social values that are encoded in them, e.g. [Bechmann(2019)] and [Rieder(2017)]. Moreover, critical reflection on algorithmic bias often comes from an academic environment and does not take into account the perspectives of practitioners of automated decision-making.
Using a data set of over 17,000 Danish asylum decision summaries, we propose a participatory approach to scrutinising an algorithmic technique, the social values that it encodes and the lived experiences of the data subjects. We apply and study algorithms used for outlier detection in the context of Danish asylum decision-making. Our study comprises two parts, of which we will present preliminary results:
1. We implement three variations of commonly used unsupervised outlier detection algorithms to answer the questions: Who are the outliers in the data set of asylum decision summaries? What effect do the analyst's choices at different stages of implementing the algorithm have on the result?
2. Applying a participatory approach, we take the results of our quantitative analysis to the stakeholders of the Danish asylum decision-making process and ask: Who are the outliers for these stakeholders? What makes them outliers? How does the Danish asylum decision-making process account for its outliers?
Outliers are a central concept in data analysis and are often defined as observations that deviate significantly from the majority of data points. Outlier detection algorithms create a model of the normal patterns in a data set and calculate an outlier score for a given data point on the basis of its deviation from these patterns. It is often left to the discretion of the data analyst either to regard outliers as the result of measurement or data-entry errors, and thus exclude them from the data set as noise, or to consider them legitimate observations.
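As a minimal illustration of this scoring principle (a toy sketch, not the study's actual pipeline), one can model the "normal" pattern of a single numeric feature by its mean and standard deviation, and score each observation by its absolute z-score, i.e. its distance from the mean in units of standard deviation. The feature name and data below are entirely hypothetical.

```python
from statistics import mean, stdev

def outlier_scores(values):
    """Score each value by its absolute z-score: the distance from the
    sample mean, measured in units of sample standard deviation."""
    mu, sigma = mean(values), stdev(values)
    return [abs(v - mu) / sigma for v in values]

# Hypothetical feature, e.g. length of a decision summary in pages.
pages = [3, 4, 3, 5, 4, 3, 4, 30]
scores = outlier_scores(pages)

# The analyst must still choose a threshold and decide whether a flagged
# case is noise (e.g. a data-entry error) or a legitimate, unusually
# long decision -- the discretion the text above describes.
flagged = [p for p, s in zip(pages, scores) if s > 2]  # -> [30]
```

Even in this toy setting, the cut-off (here 2 standard deviations) is a free choice of the analyst, not a property of the data.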
Using the domain of asylum decision-making, we show 1) how outliers result from a balance of human judgment and calculation in the process of implementing outlier detection algorithms; and 2) how to use this algorithmic technique to engage stakeholders of the decision-making process in a discussion about cases in asylum decision-making that do not conform to the norm.
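The first point can be sketched with a toy example (data and parameters hypothetical, not the study's pipeline): under a simple kNN-distance outlier score, which observations stand out depends on the analyst's choice of the neighbourhood size k.

```python
def knn_outlier_score(points, k):
    """Score each 1-D point by its mean distance to its k nearest
    neighbours; large scores mean the point sits far from any cluster."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# Hypothetical 1-D feature: a large cluster, a small cluster, one lone point.
data = [1.0, 1.1, 1.2, 5.0, 5.1, 9.0]

scores_k1 = knn_outlier_score(data, 1)  # only the lone point 9.0 stands out
scores_k3 = knn_outlier_score(data, 3)  # the small cluster {5.0, 5.1} also
                                        # drifts towards "outlier" territory
```

With k=1 each point in the small cluster looks perfectly normal (its one neighbour is close); with k=3 the same points receive markedly higher scores. The "outliers" are thus partly an artefact of the analyst's parameter choices, which is what makes those choices worth scrutinising with stakeholders.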
We show how outlier detection algorithms are built on a philosophy of a majority, serving and reinforcing majority traits and characteristics, while minorities or outliers are rendered invisible. We identify and center the lived experiences of outliers in the Danish asylum domain together with the stakeholders of the decision-making process. Following the principle of mutual learning, we engage in collective sensemaking of our data [Holten Møller et al.(2021)] and also foster awareness in the public sector about the possibilities and limitations of using data-driven technologies in decision-making.
Original language: English
Publication date: 2023
Status: Published - 2023
Event: Data Justice: Collective Experiences in the Datafied Society - Cardiff University, Cardiff, United Kingdom
Duration: 19 Jun 2023 - 20 Jun 2023
https://datajusticelab.org/data-justice-2023/

Conference

Conference: Data Justice
Location: Cardiff University
Country: United Kingdom
City: Cardiff
Period: 19/06/2023 - 20/06/2023

ID: 357073145