Equivariant Multi-Modality Image Fusion
METADATA ONLY
Date
2024
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
Multi-modality image fusion is a technique that combines information from different sensors or modalities, enabling the fused image to retain complementary features from each modality, such as functional highlights and texture details. However, effective training of such fusion models is challenging due to the scarcity of ground truth fusion data. To tackle this issue, we propose the Equivariant Multi-Modality imAge fusion (EMMA) paradigm for end-to-end self-supervised learning. Our approach is rooted in the prior knowledge that natural imaging responses are equivariant to certain transformations. Consequently, we introduce a novel training paradigm that encompasses a fusion module, a pseudo-sensing module, and an equivariant fusion module. These components enable the network training to follow the principles of the natural sensing-imaging process while satisfying the equivariant imaging prior. Extensive experiments confirm that EMMA yields high-quality fusion results for infrared-visible and medical images, concurrently facilitating downstream multi-modal segmentation and detection tasks. The code is available at https://github.com/Zhaozixiang1228/MMIF-EMMA.
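The equivariant imaging prior described in the abstract amounts to requiring that fusion commute with certain transformations: fusing transformed inputs should give the same result as transforming the fused output. The toy sketch below illustrates this property with a pixel-wise-max "fusion" operator and a horizontal flip; both operators are illustrative stand-ins chosen here, not the paper's learned modules.

```python
# Toy illustration of the equivariance prior: F(T(a), T(b)) == T(F(a, b)).
# fuse() and flip() are hypothetical stand-ins for EMMA's learned fusion
# module and the transformation family used in training.

def fuse(a, b):
    # Toy fusion: pixel-wise maximum of two single-channel "images"
    # (lists of rows of numbers).
    return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def flip(img):
    # Transformation T: horizontal flip of each row.
    return [row[::-1] for row in img]

def equivariance_gap(a, b):
    # Summed absolute difference between fusing flipped inputs and
    # flipping the fused output; a self-supervised loss would penalize
    # this gap during training.
    lhs = fuse(flip(a), flip(b))
    rhs = flip(fuse(a, b))
    return sum(abs(x - y) for rl, rr in zip(lhs, rhs)
               for x, y in zip(rl, rr))

ir = [[0, 5], [9, 2]]    # toy "infrared" image
vis = [[3, 1], [4, 8]]   # toy "visible" image
print(equivariance_gap(ir, vis))  # pixel-wise max commutes with flips -> 0
```

Because pixel-wise max commutes with spatial flips, the gap here is exactly zero; a learned fusion network would only approximately satisfy this, which is what the equivariant training objective enforces.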
Publication status
published
Book title
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Pages / Article No.
25912–25921
Publisher
IEEE
Event
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024)
Subject
Image fusion; Low-level vision
Related publications and datasets
Is supplemented by: https://github.com/Zhaozixiang1228/MMIF-EMMA