Date
2024-01-15
Type
Conference Paper
ETH Bibliography
yes
Abstract
We introduce a self-supervised training strategy for burst super-resolution that uses only noisy low-resolution bursts during training. Our approach eliminates the need to carefully tune synthetic data simulation pipelines, which often do not match real-world image statistics. Compared to weakly-paired training strategies, which require noisy smartphone burst photos of static scenes paired with a clean reference obtained from a tripod-mounted DSLR camera, our approach is more scalable and avoids the color mismatch between the smartphone and DSLR. To achieve this, we propose a new self-supervised objective that uses a forward imaging model to recover a high-resolution image from aliased high frequencies in the burst. Our approach does not require any manual tuning of the forward model's parameters; we learn them from data. Furthermore, we show our training strategy is robust to dynamic scene motion in the burst, which enables training burst super-resolution models on in-the-wild data. Extensive experiments on real and synthetic data show that, despite using only noisy bursts during training, models trained with our self-supervised strategy match, and sometimes surpass, the quality of fully-supervised baselines trained with synthetic data or weakly-paired ground-truth. Finally, we demonstrate the generality of our training strategy by applying it to four different burst super-resolution architectures.
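The core idea described in the abstract — re-degrading a high-resolution estimate through a forward imaging model and comparing it against the observed noisy burst frames — can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: the function names (`forward_model`, `self_supervised_loss`), the fixed box-average downsampling, and the toy data are assumptions, since the paper learns the forward model's parameters from data and additionally handles inter-frame motion.

```python
import numpy as np

def forward_model(hr, scale=2):
    """Hypothetical forward imaging model: box-average downsampling.

    Stand-in for the learned operator in the paper, which infers its
    parameters from data rather than fixing them by hand.
    """
    h, w = hr.shape
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def self_supervised_loss(hr_estimate, noisy_burst, scale=2):
    """Mean squared error between the re-degraded HR estimate and
    each observed noisy low-resolution burst frame."""
    return float(np.mean([
        np.mean((forward_model(hr_estimate, scale) - frame) ** 2)
        for frame in noisy_burst
    ]))

# Toy example: a burst of 4 noisy LR observations of the same scene.
rng = np.random.default_rng(0)
hr_true = rng.random((8, 8))
burst = [forward_model(hr_true) + 0.01 * rng.standard_normal((4, 4))
         for _ in range(4)]
loss = self_supervised_loss(hr_true, burst)
```

In training, `hr_estimate` would be the network's prediction and the loss gradient would flow back through both the network and the learned forward model; here the loss is simply small for the true image and large for a wrong one.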
Publication status
published
Book title
2023 IEEE/CVF International Conference on Computer Vision (ICCV)
Publisher
IEEE
Organisational unit
02652 - Institut für Bildverarbeitung / Computer Vision Laboratory
Notes
Conference lecture held on October 5, 2023.