Optics and Precision Engineering, 2020, 28(1): 234. Online publication: 2020-03-25
Three-dimensional reconstruction of large-scale scene based on depth camera
Keywords: computer vision; depth camera; accumulated error; three-dimensional reconstruction
Abstract
In large-scale three-dimensional reconstruction, accumulated errors in pose estimation cause camera drift and low-quality reconstructed models; we propose a method to reduce these accumulated errors. First, the camera is tracked by minimizing the geometric and photometric errors of the input RGB-D image against a model fused from the latest K pairs of depth and color images. Then, if the distance between the camera position and the center of the current subvolume exceeds a given threshold, the subvolume is shifted by an integer multiple of the voxel size, and camera tracking and local scene reconstruction continue in the newly created subvolume. Finally, corresponding surface points between subvolumes are found with an iterative-step search, and the global camera trajectory is optimized under constraints on the Euclidean distance and photometric error between correspondences. Experiments on public datasets show that camera pose estimation accuracy improves by 14.1% over mainstream methods, and that global trajectory optimization accuracy improves by 8%. On self-collected data, the proposed system likewise reduces accumulated errors in pose estimation and reconstructs high-quality scene models.
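The subvolume-shifting rule above can be illustrated with a minimal sketch. This is not the authors' implementation; the voxel size, distance threshold, and function name are assumptions chosen for illustration. The key point it demonstrates is that the shift is snapped to an integer multiple of the voxel size, so the shifted subvolume stays aligned with the global voxel grid.

```python
import numpy as np

VOXEL_SIZE = 0.01      # assumed voxel edge length in metres (not stated in the abstract)
SHIFT_THRESHOLD = 0.5  # assumed camera-to-center distance threshold in metres

def subvolume_shift(camera_pos, subvolume_center,
                    voxel_size=VOXEL_SIZE, threshold=SHIFT_THRESHOLD):
    """Return the translation to apply to the subvolume, snapped to an
    integer multiple of the voxel size, or None if no shift is needed."""
    offset = np.asarray(camera_pos, dtype=float) - np.asarray(subvolume_center, dtype=float)
    if np.linalg.norm(offset) <= threshold:
        return None  # camera is still close enough to the current subvolume center
    # Snap each component to the voxel grid so voxels in the shifted
    # subvolume coincide with voxels of the global grid.
    return np.round(offset / voxel_size) * voxel_size
```

For example, a camera 0.553 m from the center along x triggers a shift of 0.55 m (55 voxels), while a camera 0.1 m away triggers no shift.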
LIU Dong-sheng, CHEN Jian-lin, FEI Dian, ZHANG Zhi-jiang. Three-dimensional reconstruction of large-scale scene based on depth camera[J]. Optics and Precision Engineering, 2020, 28(1): 234.