Open access
Date
2022
Type
- Conference Paper
ETH Bibliography
yes
Altmetrics
Abstract
Randomized Smoothing (RS) is a promising method for obtaining robustness certificates by evaluating a base model under noise. In this work, we: (i) theoretically motivate why ensembles are a particularly suitable choice as base models for RS, and (ii) empirically confirm this choice, obtaining state-of-the-art results in multiple settings. The key insight of our work is that the reduced variance of ensembles over the perturbations introduced in RS leads to significantly more consistent classifications for a given input. This, in turn, leads to substantially increased certifiable radii for samples close to the decision boundary. Additionally, we introduce key optimizations which enable an up to 55-fold decrease in sample complexity of RS, thus drastically reducing its computational overhead. Experimentally, we show that ensembles of only 3 to 10 classifiers consistently improve on their strongest constituting model with respect to their average certified radius (ACR) by 5% to 21% on both CIFAR10 and ImageNet, achieving a new state-of-the-art ACR of 0.86 and 1.11, respectively.
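The procedure the abstract describes can be illustrated with a minimal Monte-Carlo sketch of randomized-smoothing certification using an ensemble as the base model. This is an illustrative point-estimate version (not the paper's rigorous confidence-interval certification); all function names, the toy models, and the parameter values below are assumptions for the example, not the authors' implementation.

```python
# Illustrative sketch of randomized smoothing with an ensemble base model.
# The ensemble averages member probabilities, reducing variance under the
# Gaussian perturbations and yielding more consistent top-class votes.
import numpy as np
from statistics import NormalDist


def ensemble_predict(models, x):
    """Average the members' probability outputs and return the top class."""
    probs = np.mean([m(x) for m in models], axis=0)
    return int(np.argmax(probs))


def certify(models, x, sigma=0.25, n=1000, seed=0):
    """Monte-Carlo estimate of the smoothed prediction and a point-estimate
    certified radius R = sigma * Phi^{-1}(p_top).  A rigorous certificate
    would replace p_top by a lower confidence bound."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = ensemble_predict(models, noisy)
        counts[c] = counts.get(c, 0) + 1
    top = max(counts, key=counts.get)
    p_top = min(counts[top] / n, 1.0 - 1e-6)  # keep inv_cdf in (0, 1)
    if p_top <= 0.5:
        return top, 0.0  # majority too weak: no certificate (abstain)
    radius = sigma * NormalDist().inv_cdf(p_top)
    return top, radius
```

The variance-reduction argument shows up directly here: averaging member probabilities before the argmax makes the per-sample vote less noisy, so `p_top` (and hence the certified radius) grows for inputs near the decision boundary.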
Permanent link
https://doi.org/10.3929/ethz-b-000592011
Publication status
published
External links
Book title
The Tenth International Conference on Learning Representations (ICLR 2022)
Publisher
OpenReview
Event
Subject
Adversarial Robustness; Certified Robustness; Randomized Smoothing
Organisational unit
03948 - Vechev, Martin / Vechev, Martin
Related publications and datasets
Is new version of: https://doi.org/10.48550/arXiv.2106.06946
Notes
Spotlight Paper