Expressivity of Certified Neural Networks


Date

2024

Publication Type

Doctoral Thesis

ETH Bibliography

yes

Abstract

The vulnerability of neural networks to adversarial examples is a major reliability concern that impedes their deployment in safety-critical applications. To address this, verification methods have emerged that analyze local robustness, often relying on convex relaxations to over-approximate the network's behavior. To obtain networks amenable to certification, specialized training methods have been designed that aim to produce networks that are both accurate and certifiably robust. However, these training methods remain ineffective in all but the simplest settings, and despite significant effort, training certified neural networks remains an open challenge. This raises questions about the expressivity of certified neural networks, in particular their ability to approximate given functions. In this thesis, we investigate the expressivity of ReLU neural networks under convex relaxations. Our results show that while certified networks exist, their expressivity is limited compared to that of standard networks. Specifically, we show that ReLU networks certified with interval bound propagation (IBP) can approximately encode continuous functions on compact domains and are hence universal approximators. This universality result directly implies the universality of ReLU networks certified by more precise convex relaxations. Further, we show that single-hidden-layer ReLU networks are not universal approximators when analyzed with IBP, exposing a fundamental limitation. In addition, we show that even the most precise single-neuron convex relaxation cannot exactly encode simple multivariate, monotone, convex piecewise-linear functions. This impossibility result is surprising, as ReLU networks naturally express the class of piecewise-linear functions. As before, it generalizes directly to all less precise convex relaxations. These results, however, do not separate different single-neuron convex relaxations by expressivity. To address this, we present a series of results demonstrating that more precise relaxations enable certified networks to encode a broader class of functions, or offer increased flexibility when encoding the same function.
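The interval bound propagation (IBP) analysis discussed in the abstract can be illustrated with a minimal sketch: an input box is pushed through each affine layer via its center and radius, and through each ReLU by monotonicity. This is not code from the thesis; the network shape and weights below are hypothetical, purely for illustration.

```python
# Minimal sketch of interval bound propagation (IBP) through a ReLU network.
import numpy as np

def affine_bounds(l, u, W, b):
    """Propagate the box [l, u] through x -> W x + b."""
    mu, r = (l + u) / 2, (u - l) / 2   # box center and radius
    mu2 = W @ mu + b                   # the center maps linearly
    r2 = np.abs(W) @ r                 # the radius grows by |W|
    return mu2 - r2, mu2 + r2

def relu_bounds(l, u):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(l, 0), np.maximum(u, 0)

def ibp_forward(layers, l, u):
    """Push interval bounds through alternating affine / ReLU layers."""
    for i, (W, b) in enumerate(layers):
        l, u = affine_bounds(l, u, W, b)
        if i < len(layers) - 1:        # no ReLU after the final layer
            l, u = relu_bounds(l, u)
    return l, u
```

The resulting output box is a sound over-approximation of the network's image of the input box: to certify local robustness of a classifier, one checks that the lower bound of the correct logit exceeds the upper bounds of all other logits. The over-approximation is generally loose, which is why more precise convex relaxations exist and why their relative expressivity is the subject of this thesis.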

Publication status

published

Contributors

Examiner: Vechev, Martin T.
Examiner: Albarghouthi, Aws
Examiner: Wong, Eric

Publisher

ETH Zurich

Subject

Adversarial Robustness; Certified Robustness; Trustworthy AI; Universal Approximation

Organisational unit

03948 - Vechev, Martin
