Increasing Confidence in Adversarial Robustness Evaluations
Author / Producer
Zimmermann, Roland S.; Brendel, Wieland; Tramèr, Florian; Carlini, Nicholas
Date
2022
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
Hundreds of defenses have been proposed to make deep neural networks robust against minimal (adversarial) input perturbations. However, only a handful of these defenses have held up to their claims, because correctly evaluating robustness is extremely challenging: weak attacks often fail to find adversarial examples even when they exist, thereby making a vulnerable network look robust. In this paper, we propose a test to identify weak attacks, and thus weak defense evaluations. Our test slightly modifies a neural network to guarantee the existence of an adversarial example for every sample; consequently, any correct attack must succeed in breaking this modified network. For eleven out of thirteen previously published defenses, the original evaluation of the defense fails our test, while stronger attacks that break these defenses pass it. We hope that attack unit tests, such as ours, will become a major component of future robustness evaluations and increase confidence in an empirical field that is currently riddled with skepticism.
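The protocol sketched in the abstract can be illustrated in a few lines of Python. The code below is a hypothetical, deliberately simplified rendering, not the authors' actual construction: it wraps a classifier so that a known region of the threat-model ball around a clean input is guaranteed to be misclassified, then checks whether the attack under evaluation finds the planted adversarial example. The names PlantedVulnerabilityModel, attack_unit_test, and random_search_attack, the assumed attack signature attack(predict, x0, eps), and the L-inf radius EPS are all illustrative assumptions; input-domain constraints (e.g., pixel clipping) are ignored for brevity.

# Hedged sketch of an "attack unit test": modify a model so an
# adversarial example is guaranteed to exist, then verify the attack
# under evaluation actually finds one. All names are hypothetical.
import numpy as np

EPS = 0.1  # assumed L-inf threat-model radius

class PlantedVulnerabilityModel:
    """Wraps a binary classifier so that, for a given clean input x0,
    a sizeable region inside the EPS-ball around x0 is guaranteed to
    be misclassified. Any correct attack must therefore succeed."""

    def __init__(self, base_predict, x0):
        self.base_predict = base_predict
        self.x0 = np.asarray(x0, dtype=float)
        self.flipped_label = 1 - int(base_predict(self.x0))  # binary labels assumed

    def predict(self, x):
        # Flip the decision whenever the mean perturbation exceeds
        # EPS / 2; the point x0 + EPS lies in this region, so an
        # adversarial example provably exists within the EPS-ball.
        if np.mean(np.asarray(x, dtype=float) - self.x0) > EPS / 2:
            return self.flipped_label
        return int(self.base_predict(x))

def attack_unit_test(attack, base_predict, x0):
    """Run `attack` on the modified model; return True (test passed)
    iff it finds a valid adversarial example, which must exist."""
    model = PlantedVulnerabilityModel(base_predict, x0)
    x_adv = attack(model.predict, np.asarray(x0, dtype=float), EPS)
    within_ball = np.max(np.abs(x_adv - np.asarray(x0))) <= EPS + 1e-9
    fooled = model.predict(x_adv) != model.predict(np.asarray(x0))
    return within_ball and fooled

# Example attack under test: a trivial random search (also hypothetical).
def random_search_attack(predict, x0, eps, trials=1000):
    label = predict(x0)
    for _ in range(trials):
        cand = x0 + np.random.uniform(-eps, eps, size=x0.shape)
        if predict(cand) != label:
            return cand
    return x0  # attack failed; returns the clean input

An attack that fails to break even this intentionally vulnerable model (for instance, because its gradient computation is broken) is too weak to support a robustness claim, which mirrors the paper's finding that the original evaluations of eleven of thirteen defenses fail the test. Note that the authors' actual modification is constructed so the planted vulnerability is also reliably findable by a well-functioning attack, which this toy wrapper does not guarantee.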
Publication status
published
Book title
Advances in Neural Information Processing Systems 35
Pages / Article No.
13174 - 13189
Publisher
Curran
Event
36th Annual Conference on Neural Information Processing Systems (NeurIPS 2022)
Subject
Machine Learning; Adversarial Examples
Organisational unit
09764 - Tramèr, Florian