On the Opacity of Deep Neural Networks

Research output: Contribution to journal › Journal article › Research › peer-review


Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to what extent the two kinds of opacity can be mitigated by explainability methods.

Original language: English
Journal: Canadian Journal of Philosophy
Volume: 53
Issue number: 3
Pages (from-to): 224–239
ISSN: 0045-5091
DOIs
Publication status: Published - 2023

Bibliographical note

Publisher Copyright:
© The Author(s), 2024. Published by Cambridge University Press on behalf of The Canadian Journal of Philosophy Inc.

Research areas

  • deep neural networks, explainability, mitigation, model size, opacity
