Date
2021
Type
- Conference Paper
ETH Bibliography
yes
Abstract
Deep learning based image compression has recently witnessed exciting progress and in some cases even managed to surpass transform coding based approaches. However, state-of-the-art solutions for deep image compression typically employ autoencoders which map the input to a lower dimensional latent space and thus irreversibly discard information already before quantization. In contrast, traditional approaches in image compression employ an invertible transformation before performing the quantization step. In this work, we propose a deep image compression method that is similarly able to go from low bit-rates to near lossless quality, by leveraging normalizing flows to learn a bijective mapping from the image space to a latent representation. We demonstrate further advantages unique to our solution, such as the ability to maintain constant quality results through reencoding, even when performed multiple times. To the best of our knowledge, this is the first work leveraging normalizing flows for lossy image compression.
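The abstract's key property is that a normalizing flow is bijective, so encoding discards no information before quantization. A minimal sketch of this idea, using a generic RealNVP-style affine coupling layer with fixed linear conditioners (this is an illustrative toy, not the architecture proposed in the paper):

```python
import numpy as np

def coupling_forward(x, scale, shift):
    """Split x in half; transform the second half conditioned on the first."""
    d = len(x) // 2
    x1, x2 = x[:d], x[d:]
    s = np.tanh(scale @ x1)  # conditioner "networks" are fixed linear maps here
    t = shift @ x1
    z2 = x2 * np.exp(s) + t
    return np.concatenate([x1, z2])

def coupling_inverse(z, scale, shift):
    """Exact analytic inverse: recover x from z with no information loss."""
    d = len(z) // 2
    z1, z2 = z[:d], z[d:]
    s = np.tanh(scale @ z1)  # z1 == x1, so the conditioner output matches
    t = shift @ z1
    x2 = (z2 - t) * np.exp(-s)
    return np.concatenate([z1, x2])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
scale = rng.standard_normal((4, 4))
shift = rng.standard_normal((4, 4))

z = coupling_forward(x, scale, shift)
x_rec = coupling_inverse(z, scale, shift)
print(np.allclose(x, x_rec))  # inverse reconstructs x up to float error
```

Because the inverse is exact, repeatedly mapping an image to the latent space and back is idempotent up to quantization, which is the mechanism behind the constant-quality reencoding advantage the abstract claims. An autoencoder, by contrast, has no such analytic inverse for its encoder.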
Publication status
published
External links
Book title
Neural Compression: From Information Theory to Applications - Workshop @ ICLR 2021
Publisher
OpenReview
Event
Organisational unit
03420 - Gross, Markus / Gross, Markus
Notes
Spotlight presentation held on May 7, 2021.