DAVA: Disentangling Adversarial Variational Autoencoder



Date

2023

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

The use of well-disentangled representations offers many advantages for downstream tasks, e.g. increased sample efficiency or better interpretability. However, the quality of disentangled representations is often highly dependent on the choice of dataset-specific hyperparameters, in particular the regularization strength. To address this issue, we introduce DAVA, a novel training procedure for variational auto-encoders. DAVA completely alleviates the problem of hyperparameter selection. We compare DAVA to models with optimal hyperparameters. Without any hyperparameter tuning, DAVA is competitive on a diverse range of commonly used datasets. Underlying DAVA, we discover a necessary condition for unsupervised disentanglement, which we call PIPE. We demonstrate the ability of PIPE to positively predict the performance of downstream models in abstract reasoning. We also thoroughly investigate correlations with existing supervised and unsupervised metrics. The code is available at github.com/besterma/dava.
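
For background, the regularization strength mentioned in the abstract typically refers to the weight β on the KL term in a β-VAE-style objective; this is the standard formulation assumed here, not a detail taken from this record:

\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)

A larger β pushes the latent code toward the factorized prior and thus toward disentanglement, at the cost of reconstruction quality, which is why its optimal value is dataset-specific.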

Publication status

published

Book title

The Eleventh International Conference on Learning Representations (ICLR 2023)

Publisher

OpenReview

Event

11th International Conference on Learning Representations (ICLR 2023)

Organisational unit

03604 - Wattenhofer, Roger / Wattenhofer, Roger

Notes

Poster presentation on May 3, 2023.
