Acta Optica Sinica, 2019, 39(4): 0415005. Published online: 2019-04-26

Simulation on Vehicle Platform Based on Decoupled Optical Flow Motion Field Model
Wu Meng 1,2,3,*, Hao Jinming 1, Fu Hao 4, Gao Yang 2,3
Author Affiliations
1 Institute of Geospatial Information, Information Engineering University, Zhengzhou, Henan 450052, China
2 State Key Laboratory of Geo-Information Engineering, Xi'an, Shaanxi 710054, China
3 Xi'an Research Institute of Surveying and Mapping, Xi'an, Shaanxi 710054, China
4 College of Intelligence Science and Technology, National University of Defense Technology, Changsha, Hunan 410073, China
Abstract
To address the problem of decoupling and analyzing optical flow vectors under different motion states, which arises when an autonomous vehicle (AV) platform estimates its pose from optical flow, the optical flow motion field model (OFMFM) is derived, and the decoupled optical flow motion field model (DOFMFM) is analyzed for the case in which each of the six degrees of freedom of the vehicle pose changes independently. Based on the DOFMFM, a simulation algorithm is designed for the vehicle platform, and completely decoupled simulation results are presented; the DOFMFM is then used to quantitatively verify the correctness of these results. Two typical scenes from the KITTI dataset, one translational and one rotational, are used in real flow-decoupling experiments, and the consistency among the model analysis, the simulation process, the real data, and the comparison results is verified. The results show that the proposed decoupled model analysis, simulation algorithm, simulated and real results, and comparative analysis can be applied to error analysis and algorithm validation in flow-based decoupled pose estimation on vehicle platforms, and also provide a reference for understanding optical flow motion imaging and for research on optical flow applications on AV platforms.
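As context for the decoupling described above, the motion field model goes back to the classical formulation of Longuet-Higgins and Prazdny [11]. A standard form of those equations for a pinhole camera with focal length f is sketched below; the notation is ours, and the paper's OFMFM may differ in sign convention or parameterization.

```latex
% Instantaneous flow (u, v) at image point (x, y) of a scene point at
% depth Z, for a camera with translational velocity t = (t_x, t_y, t_z)
% and angular velocity (omega_x, omega_y, omega_z).
\begin{align}
u &= \underbrace{\frac{x\,t_z - f\,t_x}{Z}}_{\text{translational}}
   + \underbrace{\frac{xy}{f}\,\omega_x - \Bigl(f + \frac{x^2}{f}\Bigr)\omega_y + y\,\omega_z}_{\text{rotational}}, \\
v &= \frac{y\,t_z - f\,t_y}{Z}
   + \Bigl(f + \frac{y^2}{f}\Bigr)\omega_x - \frac{xy}{f}\,\omega_y - x\,\omega_z .
\end{align}
```

The translational part scales with 1/Z and therefore depends on scene depth, while the rotational part depends only on image position. Setting five of the six velocity components to zero isolates one flow pattern per degree of freedom, which is what "completely decoupled" refers to.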
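A minimal Python sketch of how such a per-DOF flow-field simulation might be set up, following the equations above. The function name decoupled_flow, the image grid, the constant depth, and all parameter values are illustrative assumptions, not the paper's actual simulation algorithm.

```python
import numpy as np

def decoupled_flow(x, y, Z, f, t, omega):
    """Motion-field flow (u, v) at image points (x, y) with depth Z,
    split into a depth-dependent translational part and a
    depth-independent rotational part."""
    tx, ty, tz = t
    wx, wy, wz = omega
    u_trans = (x * tz - f * tx) / Z
    v_trans = (y * tz - f * ty) / Z
    u_rot = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    v_rot = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return u_trans + u_rot, v_trans + v_rot

# One decoupled case per degree of freedom: zero out the other five
# components, e.g. pure forward translation t = (0, 0, 1) here.
xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 9), np.linspace(-0.5, 0.5, 9))
Z = np.full_like(xs, 10.0)  # assumed constant scene depth
u, v = decoupled_flow(xs, ys, Z, f=1.0,
                      t=(0.0, 0.0, 1.0), omega=(0.0, 0.0, 0.0))
# The resulting field radiates outward from the focus of expansion at
# the image center, the signature pattern of pure forward motion.
```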
References

[1] Xiong C Z, Che M Q, Wang R L, et al. Robust real-time visual tracking via dual model adaptive switching[J]. Acta Optica Sinica, 2018, 38(10): 1015002.

[2] Lin H C, Lü Q, Wei H, et al. Quadrotor autonomous flight and three-dimensional dense reconstruction based on VI-SLAM[J]. Acta Optica Sinica, 2018, 38(7): 0715004.

[3] Xiao J S, Tian H, Zou W T, et al. Stereo matching based on convolutional neural network[J]. Acta Optica Sinica, 2018, 38(8): 0815017.

[4] Gibson J J. The perception of the visual world[M]. Cambridge: The Riverside Press, 1950.

[5] Loianno G, Scaramuzza D, Kumar V. Special issue on high-speed vision-based autonomous navigation of UAVs[J]. Journal of Field Robotics, 2018, 35(1): 3-4.

[6] Meneses M C, Matos L N, Prado B O. Low-cost autonomous navigation system based on optical-flow classification[EB/OL]. (2018-03-11)[2018-10-15]. https://arxiv.org/abs/1803.03966.

[7] Wan F H. Research on multi-sensor-based UAV localization and obstacle avoidance technique[D]. Hangzhou: Zhejiang University of Technology, 2017.

[8] Dai B X. Research on obstacle avoidance method of MAV in indoor environment based on optical flow[D]. Chengdu: University of Electronic Science and Technology of China, 2015.

[9] Forster C, Zhang Z C, Gassner M, et al. SVO: Semidirect visual odometry for monocular and multicamera systems[J]. IEEE Transactions on Robotics, 2017, 33(2): 249-265.

[10] Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3): 611-625.

[11] Longuet-Higgins H C, Prazdny K. The interpretation of a moving retinal image[J]. Proceedings of the Royal Society of London B, 1980, 208(1173): 385-397.

[12] Matthies L, Szeliski R, Kanade T. Kalman filter-based algorithms for estimating depth from image sequences[M]. Heidelberg: Springer, 1993: 87-130.

[13] de Luca A, Oriolo G, Robuffo Giordano P. Feature depth observation for image-based visual servoing: theory and experiments[J]. The International Journal of Robotics Research, 2008, 27(10): 1093-1116.

[14] Sabatini S, Corno M, Fiorenti S, et al. Vision-based pole-like obstacle detection and localization for urban mobile robots[C]//Proceedings of the 29th IEEE Intelligent Vehicles Symposium, June 26-30, 2018, Changshu, China. New York: IEEE, 2018: 1209-1214.

[15] Buczko M, Willert V. Flow-decoupled normalized reprojection error for visual odometry[C]//Proceedings of IEEE 19th International Conference on Intelligent Transportation Systems, November 1-4, 2016, Rio de Janeiro, Brazil. New York: IEEE, 2016: 1161-1167.

[16] Jaegle A, Phillips S, Daniilidis K. Fast, robust, continuous monocular egomotion computation[C]//Proceedings of IEEE International Conference on Robotics and Automation, May 16-21, 2016, Stockholm, Sweden. New York: IEEE, 2016: 773-780.

[17] Wu Z L, Li J, Guan Z Y, et al. Optical flow-based autonomous landing control for fixed-wing small UAV[J]. Systems Engineering and Electronics, 2016, 38(12): 2827-2834.

[18] Guo L, Liu M Y, Wang Y, et al. A motion estimation method for an optical flow detection device of an aircraft with two cameras: CN104880187A[P/OL]. (2015-09-02)[2018-10-15]. https://patentimages.storage.googleapis.com/79/9a/cf/6a6371ddd46e75/CN104880187A.pdf.

[19] Geiger A, Lenz P, Urtasun R. The KITTI vision benchmark suite[EB/OL]. (2018-09-15)[2018-10-07]. http://www.cvlibs.net/datasets/kitti/eval_odometry.php.

[20] Buczko M, Willert V. Monocular outlier detection for visual odometry[C]//Proceedings of IEEE Intelligent Vehicles Symposium, June 11-14, 2017, Los Angeles, CA, USA. New York: IEEE, 2017: 1124-1131.

[21] Zhang Y J. A course of computer vision[M]. Beijing: Posts & Telecom Press, 2011.

[22] Malik J. Dynamic perspective[EB/OL]. (2015-05-02)[2018-10-07]. http://www-inst.eecs.berkeley.edu/~cs280/sp15/lectures/4.pdf.

[23] Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, June 16-21, 2012, Providence, RI, USA. New York: IEEE, 2012: 3354-3361.

Wu Meng, Hao Jinming, Fu Hao, Gao Yang. Simulation on Vehicle Platform Based on Decoupled Optical Flow Motion Field Model[J]. Acta Optica Sinica, 2019, 39(4): 0415005.
