Optics and Precision Engineering, 2015, 23(5): 1474. Published online: 2015-06-11

Multimodality robust local feature descriptors

ZHAO Chun-Yang 1,2,3,*, ZHAO Huai-Ci 1,2

Author affiliations
1 Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, Liaoning 110016, China
2 Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang, Liaoning 110016, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
Abstract
Intensity-based local feature matching methods are sensitive to image contrast variations, so their performance declines sharply when applied to multimodal image registration. To address this problem, a multimodality robust local feature descriptor and a corresponding matching method are proposed. First, a multimodality robust corner and line-segment extraction method is proposed based on phase congruency and local orientation information, both of which are insensitive to contrast variations; it extracts more common corners and line segments between multimodal images with significant contrast differences. Then, a feature region consisting of 48 uniformly distributed circular sub-regions is selected around each corner, and a 96-dimensional feature vector is constructed from the distances between the corner and the line segments in each sub-region and from the lengths of those segments. Finally, features are matched using the normalized correlation function as the similarity measure, and false matches are removed with a location-constrained RANdom SAmple Consensus (RANSAC) algorithm. Experiments show that the proposed method achieves a matching precision of 80% and a repeatability of 13% on multimodal images, respectively 2-4 times and 4-7 times those of intensity-based methods such as Symmetric Scale-Invariant Feature Transform (S-SIFT) and Multimodal Speeded-Up Robust Features (MM-SURF), significantly outperforming comparable methods.
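The matching step described in the abstract, normalized correlation between 96-dimensional descriptor vectors followed by mutual-best-match selection, can be sketched as below. This is a minimal illustration with NumPy, not the paper's implementation; the function names, the mutual-consistency check, and the 0.8 threshold are my assumptions, and the location-constrained RANSAC purification stage is omitted.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation between two descriptor vectors.

    Mean-centering and norm division make the score invariant to
    affine intensity changes (b -> s*b + t), which is why the paper
    uses it as the similarity measure across modalities.
    """
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def match_descriptors(desc1, desc2, threshold=0.8):
    """Match two sets of 96-D descriptors by normalized correlation.

    desc1: (N, 96) array, desc2: (M, 96) array.
    Returns (i, j) pairs that are mutual best matches with
    score >= threshold (threshold value is an assumption).
    """
    scores = np.array([[ncc(d1, d2) for d2 in desc2] for d1 in desc1])
    matches = []
    for i in range(scores.shape[0]):
        j = int(np.argmax(scores[i]))
        # keep only mutual best matches above the score threshold
        if scores[i, j] >= threshold and int(np.argmax(scores[:, j])) == i:
            matches.append((i, j))
    return matches
```

In practice the surviving pairs would then be passed to a RANSAC stage (e.g. homography estimation with a location constraint, as in the paper) to reject remaining outliers.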

ZHAO Chun-Yang, ZHAO Huai-Ci. Multimodality robust local feature descriptors[J]. Optics and Precision Engineering, 2015, 23(5): 1474.


