Nodes of certainty and spaces for doubt in AI ethics for engineers
Publication: Contribution to journal › Journal article › peer-reviewed
Discussions about AI development frequently raise the question of ethics because it is difficult to predict how technological decisions might play out once AI systems are implemented and used in the world. Engineers of AI systems are increasingly expected to go beyond the traditions of requirement specifications, taking into account broader societal contexts and their complexities. In this paper we present findings from a hackathon event conducted with working engineers, exploring the gaps between existing guidelines and recommendations for addressing ethical issues with respect to AI technologies and the realities experienced by the engineers in practice. We found that when faced with the uncertainties of how to recognize and navigate ethical issues and challenges, engineers sought to identify the responsibilities that need to be in place to sustain trust and to hold the relevant parties to account for their misdeeds. We re-envision familiar engineering practices as nodes of certainty to accommodate the needs of responsible and ethical AI. Despite the desire for mechanisms that sustain certainty in how to build AI technology responsibly by providing frameworks for action and accountability, there remains a need to preserve just enough spaces and opportunities to cultivate reasonable doubt. The space and capacity to doubt accepted certainties is, in fact, the very process of ethics, necessary for holding to account our standards, guidelines, and checklists as technology and society co-evolve.
Journal: Information Communication and Society
Status: Published - 2023
© 2022 Informa UK Limited, trading as Taylor & Francis Group.