CoCoNet: Coupled contrastive learning network with multi-level feature ensemble for multi-modality image fusion

J Liu, R Lin, G Wu, R Liu, Z Luo, X Fan - International Journal of Computer …, 2024 - Springer
Infrared and visible image fusion aims to provide an informative image by combining
complementary information from different sensors. Existing learning-based fusion …

Fusion-Mamba for cross-modality object detection

W Dong, H Zhu, S Lin, X Luo, Y Shen, X Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Fusing complementary information across modalities effectively
improves object detection performance, making it more useful and robust for a wider range …

Cross-Modality Interaction Network for Pan-sharpening

Y Wang, X He, Y Dong, Y Lin… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Pan-sharpening seeks to generate a high-resolution multispectral (HRMS) image by
merging the high-resolution panchromatic (PAN) image and its low-resolution multispectral …

MMAE: A universal image fusion method via mask attention mechanism

X Wang, L Fang, J Zhao, Z Pan, H Li, Y Li - Pattern Recognition, 2025 - Elsevier
As important carriers of data, images contain a huge amount of information. The purpose
of image fusion is to integrate the information from source images into a single image. Since …

SDFuse: Semantic-injected dual-flow learning for infrared and visible image fusion

E Wang, J Li, J Lei, J Liu, S Zhou, B Wang… - Expert Systems with …, 2024 - Elsevier
Infrared and visible image fusion (IVIF) strives to render fused results that preserve the
strengths of the source images (e.g., texture details and thermal highlights) while boosting …

TSJNet: A multi-modality target and semantic awareness joint-driven image fusion network

Y Jie, Y Xu, X Li, H Tan - arXiv preprint arXiv:2402.01212, 2024 - arxiv.org
Multi-modality image fusion involves integrating complementary information from different
modalities into a single image. Current methods primarily focus on enhancing image fusion …

PromptFusion: Harmonized semantic prompt learning for infrared and visible image fusion

J Liu, X Li, Z Wang, Z Jiang, W Zhong… - IEEE/CAA Journal of …, 2024 - ieeexplore.ieee.org
The goal of infrared and visible image fusion (IVIF) is to integrate the unique advantages of
both modalities to achieve a more comprehensive understanding of a scene. However …

MMA-UNet: A Multi-Modal Asymmetric UNet Architecture for Infrared and Visible Image Fusion

J Huang, X Li, T Tan, X Li, T Ye - arXiv preprint arXiv:2404.17747, 2024 - arxiv.org
Multi-modal image fusion (MMIF) maps useful information from various modalities into the
same representation space, thereby producing an informative fused image. However, the …

Unpaired high-quality image-guided infrared and visible image fusion via generative adversarial network

H Li, Z Guan, X Wang, Q Shao - Computer Aided Geometric Design, 2024 - Elsevier
Current infrared and visible image fusion (IVIF) methods lack ground truth and require prior
knowledge to guide the feature fusion process. However, in the fusion process, these …

Polarized Prior Guided Fusion Network for Infrared Polarization Images

K Li, M Qi, S Zhuang, Y Liu - IEEE Transactions on Geoscience …, 2024 - ieeexplore.ieee.org
Typical infrared polarization image fusion aims to integrate background details from the
infrared intensity image and salient targets from the degree of linear polarization (DoLP). Many fusion …