'Thinking problematically' as a resource for AI design in politicised contexts
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
'Thinking problematically' as a resource for AI design in politicised contexts. / Petersen, Anette C.M.; Cohn, Marisa Leavitt; Hildebrandt, Thomas T.; Møller, Naja Holten.
CHItaly 2021 - Frontiers of HCI: Proceedings of the 14th Biannual Conference of the Italian SIGCHI Chapter. Association for Computing Machinery, 2021. p. 1-8 (ACM International Conference Proceeding Series).
Bibtex
@inproceedings{petersen2021thinking,
  title     = {'Thinking problematically' as a resource for {AI} design in politicised contexts},
  author    = {Petersen, Anette C.M. and Cohn, Marisa Leavitt and Hildebrandt, Thomas T. and M{\o}ller, Naja Holten},
  booktitle = {CHItaly 2021 - Frontiers of HCI: Proceedings of the 14th Biannual Conference of the Italian SIGCHI Chapter},
  series    = {ACM International Conference Proceeding Series},
  publisher = {Association for Computing Machinery},
  year      = {2021},
  pages     = {1--8},
  doi       = {10.1145/3464385.3464738},
  note      = {Publisher Copyright: {\copyright} 2021 ACM.},
}
RIS
TY - GEN
T1 - 'Thinking problematically' as a resource for AI design in politicised contexts
AU - Petersen, Anette C.M.
AU - Cohn, Marisa Leavitt
AU - Hildebrandt, Thomas T.
AU - Møller, Naja Holten
N1 - Publisher Copyright: © 2021 ACM.
PY - 2021
Y1 - 2021
N2 - When designing artificial intelligence (AI) in politicised contexts, such as the public sector, optimistic promises of what AI can achieve often shape decisions around which problems AI should address. Different epistemological views carry different understandings of what is considered the problem at hand, and, as we show in this paper, ethnographic perspectives often fail to match the politicised promises of AI. This paper reflects on personal experiences from an interdisciplinary research project that aimed to take a responsible approach to research and design AI for public services in Denmark. Seeking alternatives to the inflexible algorithms [3, 38] often used to automate or augment specific decision-making tasks in these contexts [1, 2, 35], our research project took a flexible approach to research and design and included ethnographic workplace studies to explore whether AI could both leverage the increasing powers of computing and retain the discretion of the user [23]. Following Mesman [33], we present three empirical moments that were particularly challenging for us as ethnographic researchers and influenced our project in important ways regarding the problems for AI to solve. Problematising them [6] enabled us to surface how 'readiness', emerging from the politicised context of AI in Denmark, had confounded our efforts at interdisciplinary collaboration. Problematisation, then, allowed us to come to a new understanding of the problem at hand and open up a space to collaboratively re-imagine the problems for AI to solve. This paper is in the spirit of serving as a bridge between our initial and revised understanding, pointing to the ongoing discussion in HCI about 'bridging the gap' between ethnography and design. Our contribution is a discussion of how researchers and designers might engage with problematisation at the frontiers of HCI to develop an open-ended approach to collaborative AI design in politicised contexts.
KW - AI design
KW - Ethnography
KW - Interdisciplinary research
KW - Politics
KW - Problematisation
KW - Public digitisation
KW - Public services
UR - http://www.scopus.com/inward/record.url?scp=85112707591&partnerID=8YFLogxK
U2 - 10.1145/3464385.3464738
DO - 10.1145/3464385.3464738
M3 - Article in proceedings
AN - SCOPUS:85112707591
T3 - ACM International Conference Proceeding Series
SP - 1
EP - 8
BT - CHItaly 2021 - Frontiers of HCI
PB - Association for Computing Machinery
T2 - 14th Biannual Conference of the Italian SIGCHI Chapter: Frontiers of HCI, CHItaly 2021
Y2 - 11 July 2021 through 13 July 2021
ER -
ID: 281989568