Optics and Precision Engineering, 2017, 25(8): 2221. Published online: 2017-10-16

Robot vision system for keyframe global map establishment and robot localization based on graphic content matching
Author affiliations
1 State Key Laboratory of Transducer Technology, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China
2 University of Chinese Academy of Sciences, Beijing 100190, China
Abstract
To solve the kidnapping problem and the interference of similar objects in indoor robot localization, a vision system with an image content matching function was designed, enabling a robot to extract a keyframe sequence, build an indoor global map, and localize itself autonomously. Since the main disturbance affecting image content matching is the image distortion caused by the robot's viewing angle and displacement, an image content matching method was designed based on distortion modeling and feature analysis of indoor objects. The method has two core parts: extraction of the overlapping region of the two images, and reconstruction of that overlap by sub-block decomposition and matching. It brings the distortions of the two frames to be matched into agreement before comparing their content, so that their similarity can be computed accurately. By exploiting the distinct scenery and layout of each room, the method eliminates the influence of similar objects, and from the video recorded while the robot learns its environment it extracts a sequence of keyframes that are widely spaced yet connected by overlapping views, from which a global navigation map of the entire building interior is built. During operation, the content of the real-time camera image is matched against the keyframe sequence of the map, and the keyframe most similar to the current image is retrieved to localize the robot. Tests in an experimental area consisting of 3 rooms and 2 corridors show that the robot effectively eliminates the interference of similar objects and, even when kidnapping occurs, can still localize itself accurately by matching against the global map, with a matching accuracy of ≥93% and a localization error (RMSE) of <0.5 m.
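To make the pipeline described in the abstract concrete, the sketch below illustrates the idea with generic OpenCV building blocks: an ORB-match bounding box stands in for the overlap-region extraction, normalized cross-correlation over sub-blocks stands in for the sub-block decomposition matching, and the resulting similarity score drives both keyframe selection during the learning pass and keyframe retrieval at run time. This is only a rough illustration under stated assumptions, not the paper's distortion-modeling algorithm; all function names, the 4×4 block layout, and the thresholds are hypothetical choices.

# Illustrative sketch only (not the authors' algorithm); grayscale uint8 frames assumed.
import cv2
import numpy as np


def overlap_regions(img_a, img_b, max_matches=80):
    """Roughly approximate the overlapping region of two frames from ORB matches."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:max_matches]
    if len(matches) < 8:
        return None

    def bbox(points, img):
        # Bounding box of the matched keypoints, clipped to the image border.
        x0, y0 = np.maximum(points.min(axis=0).astype(int), 0)
        x1, y1 = np.minimum(points.max(axis=0).astype(int),
                            [img.shape[1] - 1, img.shape[0] - 1])
        return img[y0:y1, x0:x1]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    return bbox(pts_a, img_a), bbox(pts_b, img_b)


def block_similarity(roi_a, roi_b, blocks=4, size=256):
    """Resize both overlap regions to a common size, split them into sub-blocks and
    average the per-block normalized cross-correlation (a stand-in for the paper's
    sub-block decomposition matching)."""
    roi_a = cv2.resize(roi_a, (size, size))
    roi_b = cv2.resize(roi_b, (size, size))
    step = size // blocks
    scores = []
    for y in range(0, size, step):
        for x in range(0, size, step):
            a = roi_a[y:y + step, x:x + step]
            b = roi_b[y:y + step, x:x + step]
            ncc = cv2.matchTemplate(a, b, cv2.TM_CCOEFF_NORMED)
            scores.append(np.nan_to_num(ncc)[0][0])
    return float(np.mean(scores))


def frame_similarity(img_a, img_b):
    """Similarity of two frames, computed only over their (approximate) overlap."""
    rois = overlap_regions(img_a, img_b)
    if rois is None or rois[0].size == 0 or rois[1].size == 0:
        return 0.0
    return block_similarity(*rois)


def extract_keyframes(video_frames, new_kf_threshold=0.6):
    """Learning pass: start a new keyframe once the live frame has moved far enough
    that its similarity to the previous keyframe drops below the threshold (chosen so
    that consecutive keyframes still share an overlap)."""
    keyframes = [video_frames[0]]
    for frame in video_frames[1:]:
        if frame_similarity(keyframes[-1], frame) < new_kf_threshold:
            keyframes.append(frame)
    return keyframes


def localize(live_frame, keyframe_map, min_score=0.5):
    """Return the (pose, score) of the most similar map keyframe, or None when nothing
    clears the threshold (e.g., immediately after a kidnapping event)."""
    best = max(((pose, frame_similarity(live_frame, kf)) for pose, kf in keyframe_map),
               key=lambda t: t[1], default=(None, 0.0))
    return best if best[1] >= min_score else None

In this sketch the global map is simply the list of (pose, keyframe) pairs collected during the learning pass, so recovering from a kidnapping event amounts to running localize over the entire map rather than over a local window around the last known position.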

CAO Tian-yang, CAI Hao-yuan, FANG Dong-ming, LIU Chang. Robot vision system for keyframe global map establishment and robot localization based on graphic content matching[J]. Optics and Precision Engineering, 2017, 25(8): 2221.


