- Working Paper
Rights / license: In Copyright - Non-Commercial Use Permitted
We present a training system that can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100. Our approach is based on differentiable abstract interpretation and introduces two novel concepts: (i) abstract layers for fine-tuning the precision and scalability of the abstraction, and (ii) a flexible domain-specific language (DSL) for describing training objectives that combine abstract and concrete losses with arbitrary specifications. Our training method is implemented in the DiffAI system.
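To make the idea of differentiable abstract interpretation concrete, the sketch below propagates an interval (box) abstraction of an L-infinity perturbation ball through a tiny affine-ReLU network. This is a minimal illustration under assumptions of ours: the network weights, the two-layer architecture, and the helper names (`affine_box`, `relu_box`, `bounds`) are invented for the demo and are not the DiffAI implementation or its API.

```python
import numpy as np

def affine_box(lo, hi, W, b):
    """Propagate a box [lo, hi] through x -> W @ x + b exactly:
    the center maps through the affine map, the radius through |W|."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def relu_box(lo, hi):
    """ReLU is monotone, so it maps a box to a box with no extra loss."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Tiny 2-layer network on a 2-D input (weights are made up for the demo).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

def bounds(x, eps):
    """Sound output bounds over all inputs within an L-inf ball of radius eps."""
    lo, hi = x - eps, x + eps
    lo, hi = affine_box(lo, hi, W1, b1)
    lo, hi = relu_box(lo, hi)
    return affine_box(lo, hi, W2, b2)

x = np.array([1.0, 0.5])
lo, hi = bounds(x, eps=0.1)
# Soundness: the concrete output for every perturbed input lies in [lo, hi].
# A worst-case ("abstract") loss on these bounds is differentiable in the
# weights, which is what makes training against the abstraction possible.
```

Because every operation above (`@`, `abs`, `maximum`) is differentiable almost everywhere, a loss computed on `lo` and `hi` can be backpropagated through to the weights; an abstract training objective in the paper's sense would mix such a bound-based loss with an ordinary concrete loss.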
Journal / series: arXiv
Organisational unit: 03948 - Vechev, Martin