Bridging the Gap between Diffusion Models and Universal Quantization for Image Compression


Date

2024-10-09

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

By leveraging the similarity between quantization error and additive noise, diffusion-based image compression codecs can be built by using a diffusion model to “denoise” the artifacts introduced by quantization. However, we identify three gaps in this approach that cause the quantized data to fall out of distribution for the diffusion model: a gap in noise level, a gap in noise type, and a gap caused by discretization. To address these issues, we propose a novel quantization-based forward diffusion process that is theoretically founded and bridges all three aforementioned gaps. This is achieved through universal quantization with a carefully tailored quantization schedule, as well as a diffusion model trained for uniform noise. Compared to previous work, our proposed architecture produces consistently realistic and detailed results, even at extremely low bitrates, while maintaining strong faithfulness to the original images.
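As a concrete illustration of the universal (dithered) quantization idea mentioned in the abstract, the sketch below is not code from the paper; the function name and step size are assumptions for the example. It demonstrates the standard property the method relies on: when encoder and decoder share the same dither, the reconstruction error is exactly uniform noise, independent of the signal, which is what allows a diffusion model trained on uniform noise to treat decoding as denoising.

```python
import numpy as np

def universal_quantize(x, step, rng):
    """Subtractive dithered quantization (illustrative sketch, not the paper's code)."""
    # Dither shared between encoder and decoder, u ~ Uniform(-step/2, step/2).
    u = rng.uniform(-step / 2.0, step / 2.0, size=x.shape)
    symbols = np.round((x + u) / step)   # integer symbols that would be entropy-coded
    y = symbols * step - u               # decoder subtracts the same dither
    return symbols, y

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
_, y = universal_quantize(x, step=0.5, rng=rng)
err = y - x
print(err.min(), err.max())  # error stays within (-0.25, 0.25) and is uniform, independent of x
```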

Publication status

published

Publisher

OpenReview

Event

Machine Learning and Compression Workshop @ NeurIPS 2024

Organisational unit

03420 - Gross, Markus / Gross, Markus
