Shortcomings of Interpretability Taxonomies for Deep Neural Networks

Publication: Contribution to journal › Conference article › Research › Peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 220 KB, PDF document

Taxonomies are vehicles for thinking about what is possible, for identifying unconsidered options, and for establishing formal relations between entities. We identify several shortcomings in 10 existing taxonomies of interpretability methods for explainable artificial intelligence (XAI), focusing on methods for deep neural networks. The shortcomings include redundancies, incompleteness, and inconsistencies. We design a new taxonomy based on two orthogonal dimensions and show how it can be used to derive results about entire classes of interpretability methods for deep neural networks.

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3318
ISSN: 1613-0073
Status: Published - 2022
Event: 2022 International Conference on Information and Knowledge Management Workshops, CIKM-WS 2022 - Atlanta, United States
Duration: 17 Oct 2022 - 21 Oct 2022

Conference

Conference: 2022 International Conference on Information and Knowledge Management Workshops, CIKM-WS 2022
Country: United States
City: Atlanta
Period: 17/10/2022 - 21/10/2022

Bibliographical note

Publisher Copyright:
© 2022 Copyright for this paper by its authors.

ID: 336294291