The OOD Blind Spot of Unsupervised Anomaly Detection


METADATA ONLY

Date

2021

Publication Type

Conference Paper

ETH Bibliography

yes

Citations


Data

Rights / License

Abstract

Deep unsupervised generative models are regarded as a promising alternative to their supervised counterparts for MRI-based lesion detection. They offer a principled approach to detecting unseen types of anomalies without relying on large amounts of expensive ground-truth annotations. To this end, deep generative models are trained exclusively on data from healthy patients and detect lesions as Out-of-Distribution (OOD) data at test time (i.e., data of low likelihood under the model). While this is a promising way of bypassing the need for costly annotations, this work demonstrates that it also renders this widely used unsupervised anomaly detection approach particularly vulnerable to non-lesion-based OOD data (e.g. data from different sensors). Since models are likely to be exposed to such OOD data in production, it is crucial to employ safety mechanisms that filter out such samples and run inference only on inputs for which the model can provide reliable results. We first show extensively that conventional unsupervised anomaly detection mechanisms fail when presented with true OOD data. Secondly, we apply prior knowledge to disentangle lesion-based OOD samples from their non-lesion-based counterparts.
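
The likelihood-based detection scheme summarized in the abstract (train a generative model on healthy data only, flag low-likelihood inputs as anomalous) can be illustrated with a minimal sketch. The variational autoencoder, ELBO-based score, and threshold below are illustrative assumptions, not the authors' implementation.

# Minimal sketch of likelihood-based unsupervised anomaly detection.
# The architecture, score, and threshold are assumptions for illustration,
# not the method described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Small fully connected VAE, trained only on healthy image patches."""

    def __init__(self, in_dim=4096, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, in_dim)
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def neg_elbo(x, recon, mu, logvar):
    """Per-sample negative ELBO: reconstruction error plus KL divergence."""
    rec = F.mse_loss(recon, x, reduction="none").sum(dim=1)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    return rec + kl

@torch.no_grad()
def anomaly_score(model, x):
    """Higher score = lower likelihood under the healthy-data model = more anomalous."""
    recon, mu, logvar = model(x)
    return neg_elbo(x, recon, mu, logvar)

if __name__ == "__main__":
    model = VAE()
    test = torch.rand(2, 4096)          # stand-in for test patches
    # (training loop over healthy patches omitted)
    scores = anomaly_score(model, test)
    threshold = 2500.0                  # would be calibrated on held-out healthy data
    print(scores > threshold)           # True -> flagged as OOD / lesion candidate

The paper's point is that such a score also fires on non-lesion OOD inputs (e.g. data from a different scanner), so a deployment would need an additional filter that distinguishes lesion-based from non-lesion-based OOD before trusting the output.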

Publication status

published

Book title

Proceedings of the Fourth Conference on Medical Imaging with Deep Learning

Volume

143

Pages / Article No.

286 - 300

Publisher

PMLR

Event

4th Conference on Medical Imaging with Deep Learning (MIDL 2021)

Edition / version

Methods

Software

Geographic location

Date collected

Date created

Subject

Organisational unit

09579 - Konukoglu, Ender

Notes

Funding

Related publications and datasets