Open access
Date
2020
Type
- Conference Paper
ETH Bibliography
yes
Altmetrics
Abstract
We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors.
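The loss described in the abstract compares topological signatures of a batch in input and latent space. A simplified sketch of the 0-dimensional case, where persistence pairings correspond to the edges of a minimum spanning tree over the pairwise distance graph, might look as follows; the function name and the restriction to 0-dimensional features are illustrative assumptions, not the paper's full construction:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def topological_loss_0d(x, z):
    """Hypothetical 0-dim topological loss sketch.

    x: batch in input space, shape (n, d_in)
    z: corresponding latent codes, shape (n, d_latent)
    """
    dx = cdist(x, x)  # pairwise distances in input space
    dz = cdist(z, z)  # pairwise distances in latent space
    # 0-dim persistence pairings are given by the edges of the
    # minimum spanning tree of the complete distance graph.
    mst_x = minimum_spanning_tree(dx).tocoo()
    mst_z = minimum_spanning_tree(dz).tocoo()
    pairs_x = list(zip(mst_x.row, mst_x.col))
    pairs_z = list(zip(mst_z.row, mst_z.col))
    # Penalize disagreement on the distances selected in either space,
    # symmetrically in both directions.
    loss_x = sum((dx[i, j] - dz[i, j]) ** 2 for i, j in pairs_x)
    loss_z = sum((dz[i, j] - dx[i, j]) ** 2 for i, j in pairs_z)
    return 0.5 * (loss_x + loss_z)
```

Note that this NumPy version only illustrates the quantity being minimized; the paper constructs the loss in a differentiable manner so that it can be backpropagated through the encoder during training.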
Permanent link
https://doi.org/10.3929/ethz-b-000462046
Publication status
published
External links
Book title
Proceedings of the 37th International Conference on Machine Learning
Journal / series
Proceedings of Machine Learning Research
Volume
Pages / Article No.
Publisher
PMLR
Event
Organisational unit
09486 - Borgwardt, Karsten M. (former)
Funding
155913 - Significant Pattern Mining (SNF)
Notes
Due to the Coronavirus (COVID-19) pandemic, the conference was conducted virtually.