Infrared and Visible Image Fusion Method Based on Convolutional Auto-Encoder and Residual Block

Abstract

To make full use of the information extracted by the middle layers and to prevent excessive information loss in infrared and visible image fusion, a new image fusion method based on a convolutional auto-encoder and residual blocks is proposed. The network consists of three parts: an encoder, a fusion layer, and a decoder. First, residual blocks are introduced into the encoder; the infrared and visible images are fed into the encoder separately, and convolution layers together with a residual block produce a feature map for each image. Then, the two feature maps are fused with an improved L1-norm-based similarity fusion strategy and integrated into a single feature map containing the salient features of the source images. Finally, the loss function is redesigned and the decoder is used to reconstruct the fused image. Experimental results show that, compared with other fusion methods, the proposed method effectively extracts and preserves the deep information of the source images, and the fusion results show clear advantages in both subjective and objective evaluations.
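The abstract names the building blocks of the network (an encoder made of convolution layers and residual blocks, a fusion layer, and a decoder) without giving their configuration. The PyTorch sketch below is one minimal interpretation of that structure, assuming single-channel inputs, 64 feature channels, and a single residual block; it is illustrative only and not the authors' implementation.

```python
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection (He et al., 2016)."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The skip connection carries shallow features forward, which is what
        # helps keep middle-layer information from being lost.
        return self.relu(x + self.body(x))


class Encoder(nn.Module):
    """Convolution layer followed by a residual block; outputs a feature map."""

    def __init__(self, in_channels: int = 1, features: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.res = ResidualBlock(features)

    def forward(self, x):
        return self.res(self.head(x))


class Decoder(nn.Module):
    """Reconstructs the fused image from the fused feature map."""

    def __init__(self, features: int = 64, out_channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(features, features // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features // 2, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, f):
        return self.body(f)
```

A natural reading of the abstract is that both source images pass through the same encoder before fusion, although weight sharing is an assumption here rather than something the abstract states.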

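The fusion layer is described only as an "improved L1-norm-based similarity" strategy. A commonly used baseline form of L1-norm fusion (per-pixel L1-norm of each feature map across channels as an activity measure, local averaging, then soft weighting) is sketched below; the specific improvement made in the paper is not stated in the abstract, so this should be read as an assumed starting point rather than the paper's exact rule.

```python
import torch
import torch.nn.functional as F


def l1_norm_fusion(feat_ir: torch.Tensor, feat_vis: torch.Tensor,
                   window: int = 3) -> torch.Tensor:
    """Fuse two feature maps of shape (N, C, H, W) into a single feature map."""
    # Activity level: per-pixel L1-norm over the channel dimension.
    act_ir = feat_ir.abs().sum(dim=1, keepdim=True)
    act_vis = feat_vis.abs().sum(dim=1, keepdim=True)

    # Local averaging makes the activity maps less sensitive to noise.
    act_ir = F.avg_pool2d(act_ir, window, stride=1, padding=window // 2)
    act_vis = F.avg_pool2d(act_vis, window, stride=1, padding=window // 2)

    # Soft weights that sum to one at every pixel.
    total = act_ir + act_vis + 1e-8
    w_ir = act_ir / total
    w_vis = act_vis / total

    return w_ir * feat_ir + w_vis * feat_vis
```

With the encoder and decoder sketched above, a fused result would be produced as `decoder(l1_norm_fusion(encoder(ir), encoder(vis)))`. The redesigned reconstruction loss mentioned in the abstract is likewise not specified here; pixel-wise and structural-similarity terms are typical choices for such decoders, but that is an assumption, not the paper's definition.
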
Supplementary information

DOI:10.3788/AOS201939.1015001

Section: Machine Vision

Funding: National Natural Science Foundation of China; Guangxi Science and Technology Program; Guangxi Key Laboratory of Image and Graphic Intelligent Processing Project; Innovation Project of Guangxi Graduate Education

Received: 2019-04-23

Revised: 2019-05-31

Published online: 2019-10-01

Author affiliations

Jiang Zetao: Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
He Yuting: Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China

Corresponding author: He Yuting (839191881@qq.com)

Cite this paper

Jiang Zetao, He Yuting. Infrared and Visible Image Fusion Method Based on Convolutional Auto-Encoder and Residual Block[J]. Acta Optica Sinica, 2019, 39(10): 1015001
