Towards better understanding of gradient-based attribution methods for Deep Neural Networks

Open access
Date
2018
Type
- Conference Paper
ETH Bibliography
yes
Abstract
Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gained increasing attention over the last few years.
While several methods have been proposed to explain network predictions, only a few attempts to analyze them from a theoretical perspective have been made in the past. In this work, we analyze various state-of-the-art attribution methods and prove unexplored connections between them. We also show how some methods can be reformulated and more conveniently implemented. Finally, we perform an empirical evaluation with six attribution methods on a variety of tasks and architectures and discuss their strengths and limitations.
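
As a rough illustration of what a gradient-based attribution method computes, the sketch below implements the simple gradient * input heuristic in PyTorch. The toy network, input shape, and target class are placeholder assumptions; this is not the paper's own code and does not cover the reformulations or the full set of methods evaluated in the work.

```python
# Minimal sketch of gradient * input attribution (illustrative only).
# The model and shapes below are placeholders, not the paper's setup.
import torch
import torch.nn as nn

def gradient_x_input(model: nn.Module, x: torch.Tensor, target: int) -> torch.Tensor:
    """Return an attribution map with the same shape as the input `x`.

    Computes d(score of `target` class)/d(input) and multiplies it
    element-wise with the input.
    """
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target]   # scalar output for the target class
    score.backward()              # populates x.grad with the gradient
    return (x.grad * x).detach()  # element-wise gradient * input

# Example usage with a toy classifier (shapes are illustrative only).
if __name__ == "__main__":
    net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    image = torch.randn(1, 1, 28, 28)
    attribution = gradient_x_input(net, image, target=3)
    print(attribution.shape)  # torch.Size([1, 1, 28, 28])
```

The returned tensor has the same shape as the input, so it can be visualized directly as a saliency map over the input features.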
Permanent link
https://doi.org/10.3929/ethz-b-000249929
Publication status
published
Book title
6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings
Publisher
OpenReview.net
Subject
neural network; deep learning; interpretation; saliency; visualisation
Organisational unit
03420 - Gross, Markus / Gross, Markus
02533 - Institut für Neuroinformatik / Institute of Neuroinformatics