Shortcomings of Interpretability Taxonomies for Deep Neural Networks

Research output: Contribution to journal › Conference article › Research › peer-review

Taxonomies are vehicles for thinking about what is possible, for identifying unconsidered options, and for establishing formal relations between entities. We identify several shortcomings in 10 existing taxonomies of interpretability methods for explainable artificial intelligence (XAI), focusing on methods for deep neural networks. The shortcomings include redundancies, incompleteness, and inconsistencies. We design a new taxonomy based on two orthogonal dimensions and show how it can be used to derive results about entire classes of interpretability methods for deep neural networks.
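The abstract does not reproduce the paper's two dimensions, so the following is only a minimal sketch of the general idea of an orthogonal two-dimensional taxonomy, using two dimensions common in the XAI literature (explanation scope and model access) as illustrative stand-ins, not the paper's actual axes:

    # A minimal sketch of classifying interpretability methods along two
    # orthogonal dimensions. The concrete dimensions below (scope and model
    # access) are illustrative assumptions; the paper defines its own pair.
    from dataclasses import dataclass
    from enum import Enum


    class Scope(Enum):
        LOCAL = "local"    # explains individual predictions
        GLOBAL = "global"  # explains overall model behaviour


    class Access(Enum):
        BLACK_BOX = "black-box"  # uses only inputs and outputs
        WHITE_BOX = "white-box"  # uses internals such as gradients


    @dataclass(frozen=True)
    class Method:
        name: str
        scope: Scope
        access: Access


    methods = [
        Method("LIME", Scope.LOCAL, Access.BLACK_BOX),
        Method("Integrated Gradients", Scope.LOCAL, Access.WHITE_BOX),
        Method("Partial Dependence Plots", Scope.GLOBAL, Access.BLACK_BOX),
    ]

    # Because the dimensions are orthogonal, every method occupies exactly
    # one cell of the 2x2 grid, so a result established for a cell applies
    # to every method in that class rather than to one method at a time.
    for m in methods:
        print(f"{m.name}: ({m.scope.value}, {m.access.value})")

Orthogonality is what enables the class-level reasoning the abstract claims: a statement derived for one cell of the grid holds for all methods that fall in it.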

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3318
ISSN: 1613-0073
Publication status: Published - 2022
Event: 2022 International Conference on Information and Knowledge Management Workshops, CIKM-WS 2022 - Atlanta, United States
Duration: 17 Oct 2022 - 21 Oct 2022

Conference

Conference: 2022 International Conference on Information and Knowledge Management Workshops, CIKM-WS 2022
Country: United States
City: Atlanta
Period: 17/10/2022 - 21/10/2022

Bibliographical note

Publisher Copyright:
© 2022 Copyright for this paper by its authors.

Research areas

  • interpretability, taxonomy