Unity by Diversity: Improved Representation Learning in Multimodal VAEs


Date

2024-12

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

Variational Autoencoders for multimodal data hold promise for many tasks in data analysis, such as representation learning, conditional generation, and imputation. Current architectures share the encoder output, the decoder input, or both across modalities to learn a shared representation, imposing hard constraints on the model. In this work, we show that a better latent representation can be obtained by replacing these hard constraints with a soft constraint. We propose a new mixture-of-experts prior that softly guides each modality's latent representation towards a shared aggregate posterior. This approach yields a superior latent representation and allows each encoding to better preserve information from its original, uncompressed features. In extensive experiments on multiple benchmark datasets and two challenging real-world datasets, we show improved learned latent representations and imputation of missing data modalities compared to existing methods.
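The abstract describes replacing the hard sharing constraint with a soft one: each modality keeps its own encoder, and a data-dependent mixture-of-experts prior (the uniform mixture of all unimodal posteriors) pulls every modality's posterior toward the aggregate via a KL term. The following is a minimal sketch of such a regularizer, assuming Gaussian unimodal posteriors and a PyTorch setting; the function name, tensor layout, and Monte Carlo estimator are illustrative choices, not taken from the paper.

    import math
    import torch
    import torch.distributions as D

    def soft_alignment_kl(mus, logvars, n_samples=8):
        # Monte Carlo estimate of KL(q(z_m | x_m) || h(z | X)) for each modality m,
        # where h(z | X) is the uniform mixture of all M unimodal posteriors.
        # mus, logvars: [M, B, D] (modalities, batch, latent dim). Returns [M, B].
        M = mus.shape[0]
        q = D.Normal(mus, (0.5 * logvars).exp())       # per-modality Gaussian posteriors
        z = q.rsample((n_samples,))                    # [S, M, B, D], reparameterized
        log_q = q.log_prob(z).sum(-1)                  # [S, M, B]
        # Score every sample under every modality's posterior, then mix uniformly:
        # dim 1 is the sample's source modality, dim 2 the mixture component.
        log_q_all = q.log_prob(z.unsqueeze(2)).sum(-1)             # [S, M, M, B]
        log_h = torch.logsumexp(log_q_all, dim=2) - math.log(M)    # [S, M, B]
        return (log_q - log_h).mean(0)                 # average over the S samples

In training, a term like this would replace the usual KL against a fixed N(0, I) prior, so each modality retains its own encoder and latent code while being softly guided toward the other modalities' posteriors rather than forced into a single shared representation.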

Publication status

published

Book title

Advances in Neural Information Processing Systems 37

Pages / Article No.

74262–74297

Publisher

Curran Associates, Inc.

Event

38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024)

Subject

multimodal VAE; representation learning; data-dependent prior; VampPrior

Organisational unit

09670 - Vogt, Julia

Notes

Poster presentation on December 13, 2024
