
Open access
Date: 2024-12-14
Type: Conference Paper
ETH Bibliography: yes
Abstract
Anomaly detection focuses on identifying samples that deviate from the norm. When working with high-dimensional data such as images, a crucial requirement for detecting anomalous patterns is learning lower-dimensional representations that capture concepts of normality. Recent advances in self-supervised learning have shown great promise in this regard. However, many successful self-supervised anomaly detection methods assume prior knowledge about anomalies to create synthetic outliers during training. Yet, in real-world applications, we often do not know what to expect from unseen data, and we can solely leverage knowledge about normal data. In this work, we propose Con, which learns representations through context augmentations that model invariances of normal data while letting us observe samples from two distinct perspectives. At test time, representations of anomalies that do not adhere to these invariances deviate from the representation structure learned during training, allowing us to detect anomalies without relying on prior knowledge about them.
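The test-time idea described in the abstract (anomalies deviate from the representation structure learned on normal data) can be illustrated with a minimal sketch. Everything below is an illustrative stand-in, not the paper's method: the "encoder" is a fixed random projection rather than a network trained with context augmentations, and the anomaly score is a simple k-nearest-neighbour distance to a bank of normal-data representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned encoder: a fixed random linear
# projection followed by L2 normalisation. In the paper, this would be
# a network trained on normal data with context augmentations.
W = rng.normal(size=(16, 4))

def encode(x):
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy data: "normal" samples cluster in one region, anomalies elsewhere.
normal_train = rng.normal(loc=1.0, scale=0.1, size=(200, 16))
normal_test = rng.normal(loc=1.0, scale=0.1, size=(10, 16))
anomalies = rng.normal(loc=-1.0, scale=0.1, size=(10, 16))

# Representation "structure" of normal data: a memory bank of embeddings.
bank = encode(normal_train)

def score(x, k=5):
    # Anomaly score: mean distance to the k nearest normal representations.
    # Samples that do not fit the learned structure land far from the bank.
    z = encode(x)
    d = np.linalg.norm(bank[None, :, :] - z[:, None, :], axis=-1)
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

# Anomalies receive markedly higher scores than held-out normal samples.
assert score(anomalies).mean() > score(normal_test).mean()
```

The key point this toy example shares with the abstract is that no anomalous or synthetic-outlier data is used to build the scoring function; only normal data defines the reference structure.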
Permanent link: https://doi.org/10.3929/ethz-b-000712987
Publication status: published
Book title: NeurIPS 2024 Workshop: Self-Supervised Learning - Theory and Practice
Publisher: OpenReview
Organisational unit: 09670 - Vogt, Julia / Vogt, Julia