Shortcomings of Interpretability Taxonomies for Deep Neural Networks
Research output: Contribution to journal › Conference article › Research › peer-review
Taxonomies are vehicles for thinking about what is possible, for identifying unconsidered options, and for establishing formal relations between entities. We identify several shortcomings in 10 existing taxonomies of interpretability methods for explainable artificial intelligence (XAI), focusing on methods for deep neural networks. The shortcomings include redundancies, incompleteness, and inconsistencies. We design a new taxonomy based on two orthogonal dimensions and show how it can be used to derive results about entire classes of interpretability methods for deep neural networks.
| Original language | English |
| --- | --- |
| Journal | CEUR Workshop Proceedings |
| Volume | 3318 |
| ISSN | 1613-0073 |
| Publication status | Published - 2022 |
| Event | 2022 International Conference on Information and Knowledge Management Workshops, CIKM-WS 2022 - Atlanta, United States. Duration: 17 Oct 2022 → 21 Oct 2022 |
Conference
| Conference | 2022 International Conference on Information and Knowledge Management Workshops, CIKM-WS 2022 |
| --- | --- |
| Country | United States |
| City | Atlanta |
| Period | 17/10/2022 → 21/10/2022 |
Bibliographical note
Publisher Copyright:
© 2022 Copyright for this paper by its authors.
Research areas
- interpretability
- taxonomy