Bachelor Thesis
Rights / license: In Copyright - Non-Commercial Use Permitted
Abstract: Although deep neural networks have proven successful across a wide variety of machine learning tasks, recent work has demonstrated that they are also vulnerable to so-called adversarial examples: inputs that are almost indistinguishable from natural data yet misclassified by the network. For image classifiers, such adversarial examples have traditionally been constructed by perturbing the original images, but algorithms have more recently been proposed that instead apply small deformations to the images in order to fool the networks. In parallel, defense methods have been proposed that promise to increase the robustness of neural networks against such adversarial attacks. In this work, we compare two state-of-the-art deformation attacks on MNIST and ImageNet data. Furthermore, we extend current defense methods to the setting of adversarial deformations and demonstrate that these defenses can be combined with existing methods to train networks that are robust against both adversarial deformations and perturbations.
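To illustrate the distinction the abstract draws, the following is a minimal PyTorch sketch, not the thesis's actual algorithms: an additive perturbation attack (FGSM-style), a generic deformation attack that performs gradient ascent over a small flow field used to resample the image, and one adversarial training step on the attacked inputs. All function names and hyperparameters (eps, steps, lr), and the simple L-infinity clipping of the flow field, are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, eps=0.1):
    """Additive attack: x_adv = clip(x + eps * sign(grad_x loss))."""
    x = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def deformation_attack(model, x, y, eps=0.01, steps=10, lr=0.5):
    """Deformation attack: instead of adding noise, optimize a small flow
    field tau and resample the image at the displaced grid locations."""
    n, _, h, w = x.shape
    # Identity sampling grid with coordinates in [-1, 1], shape (N, H, W, 2).
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h), torch.linspace(-1.0, 1.0, w), indexing="ij"
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, h, w, 2).to(x.device)
    tau = torch.zeros(n, h, w, 2, device=x.device, requires_grad=True)
    for _ in range(steps):
        x_def = F.grid_sample(x, base + tau, align_corners=True)
        loss = F.cross_entropy(model(x_def), y)
        grad, = torch.autograd.grad(loss, tau)
        with torch.no_grad():
            tau += lr * grad       # ascend the classification loss
            tau.clamp_(-eps, eps)  # keep the deformation small (illustrative projection)
    return F.grid_sample(x, (base + tau).detach(), align_corners=True)

def adversarial_training_step(model, opt, x, y, attack=deformation_attack):
    """One adversarial training step: fit the model on attacked inputs."""
    x_adv = attack(model, x, y)
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```

The key design difference is where the attack parameters live: a perturbation attack optimizes over pixel values directly, whereas a deformation attack optimizes over the sampling grid, so the output only contains intensities already present in the original image. Training against a mix of both attack types is one way the defenses described above could be combined.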
Contributors: Alaifari, Rima (Examiner)
Organisational unit: 09603 - Alaifari, Rima / Alaifari, Rima