Multiscale Progressive Fusion of Infrared and Visible Images

Seonghyun Park, Chul Lee

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

Infrared and visible image fusion aims to generate more informative images of a given scene by combining multimodal images with complementary information. Although recent learning-based approaches have achieved strong fusion performance, developing an effective fusion algorithm that can preserve complementary information while preventing bias toward either of the source images remains a significant challenge. In this work, we propose a multiscale progressive fusion (MPFusion) algorithm that extracts and progressively fuses multiscale features of infrared and visible images. The proposed algorithm consists of two networks, IRNet and FusionNet, which extract the intrinsic features of infrared and visible images, respectively. We transfer the multiscale information of the infrared image from IRNet to FusionNet to generate an informative fusion result. To this end, we develop the multi-dilated residual block (MDRB) and the progressive fusion block (PFB), which progressively combine the multiscale features from IRNet with those from FusionNet to fuse complementary features effectively and adaptively. Furthermore, we exploit edge-guided attention maps to preserve complementary edge information in the source images during fusion. Experimental results on several datasets demonstrate that the proposed algorithm outperforms state-of-the-art infrared and visible image fusion algorithms in both quantitative and qualitative comparisons.
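The abstract describes the MDRB only at a high level. As an illustration of the general idea of extracting multiscale features with dilated convolutions inside a residual block, the following is a minimal PyTorch sketch; the class name, channel count, and dilation rates are assumptions for illustration and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiDilatedResidualBlock(nn.Module):
    """Illustrative sketch of an MDRB-style block: parallel 3x3
    convolutions with increasing dilation rates capture multiscale
    context, and a residual connection preserves the input features.
    Hyperparameters here are assumptions, not the paper's values."""

    def __init__(self, channels: int = 64, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 convolution per dilation rate; setting padding equal
        # to the dilation keeps the spatial resolution unchanged.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        )
        # A 1x1 convolution fuses the concatenated multiscale features
        # back down to the original channel count.
        self.fuse = nn.Conv2d(channels * len(dilations), channels,
                              kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multiscale = torch.cat([self.act(b(x)) for b in self.branches],
                               dim=1)
        return x + self.fuse(multiscale)  # residual connection


if __name__ == "__main__":
    block = MultiDilatedResidualBlock(channels=64)
    features = torch.randn(1, 64, 128, 128)  # dummy feature map
    assert block(features).shape == features.shape
```

A PFB would analogously merge the IRNet and FusionNet feature maps scale by scale, but the abstract does not specify its internal structure, so no sketch is attempted here.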

Original language: English
Pages (from-to): 126117-126132
Number of pages: 16
Journal: IEEE Access
Volume: 10
DOIs:
State: Published - 2022

Keywords

  • edge-guided attention map
  • image fusion
  • infrared image
  • multiscale network
  • visible image

