Chinese Journal of Lasers, 2017, 44(11): 1104003; online publication: 2017-11-17
Method of Large-Scale Measurement Based on Multi-Vision Line Structured Light Sensor
Keywords: measurement; large-scale measurement; line structured light sensor; image stitching; perspective transformation model; multi-vision
Abstract
To solve the problems of complex calibration and low accuracy in existing large-scale line structured light measurement, a large-scale, high-precision measurement method based on a multi-vision line structured light sensor is proposed. The sensor comprises one laser and several equidistant cameras arrayed side by side (one master, multiple slaves), with an overlapped field of view (OFOV) of about 1/3 between neighboring cameras; the laser beam covers the full field-of-view width of all cameras. Sensor calibration requires only a simple chessboard target: target images are acquired and the conventional parameters are calibrated by Zhang's method. In addition, exploiting the invariance of each camera's perspective, the transformation matrix for stitching adjacent camera images can be pre-calibrated by image registration using the known feature corners of the calibration images within the OFOV. Successive multiplication of these transformation matrices yields a perspective transformation model (PTM) that maps any slave-camera image onto the imaging plane of the master camera. With the proposed method, the cameras synchronously capture local light-stripe images of a large-scale object; the PTM then quickly stitches all local stripes into a complete stripe image, and the 3D coordinates of the stripe locations are finally obtained by extracting and converting the stripe coordinates. Experimental results show that the proposed method is more convenient and accurate than existing methods: the average texture depth of the reconstructed model differs from that of the real model by only 8.3%.
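The core of the PTM is the chaining step: if a pre-calibrated homography maps camera 1 into the master (camera 0) plane, and another maps camera 2 into camera 1, their product maps camera 2 pixels directly onto the master plane. The sketch below illustrates this with NumPy; the matrix values are purely illustrative placeholders, not the paper's calibration results, and the function names are this sketch's own.

```python
import numpy as np

def apply_homography(H, pts):
    """Map an Nx2 array of pixel coordinates through a 3x3 homography."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Euclidean coords

# Hypothetical pre-calibrated stitching homographies (in practice these would
# come from registering the known chessboard corners inside each OFOV):
# H_01 maps camera-1 pixels into the master (camera-0) plane,
# H_12 maps camera-2 pixels into the camera-1 plane.
H_01 = np.array([[1.0, 0.02, 640.0],
                 [0.0, 1.00,   3.0],
                 [0.0, 0.00,   1.0]])
H_12 = np.array([[1.0, -0.01, 638.0],
                 [0.0,  1.00,  -2.0],
                 [0.0,  0.00,   1.0]])

# Chaining: a camera-2 pixel reaches the master plane via H_02 = H_01 @ H_12.
H_02 = H_01 @ H_12

# Example: transfer a light-stripe center detected in camera 2 to the master plane.
stripe_pts_cam2 = np.array([[100.0, 240.0]])
pts_master = apply_homography(H_02, stripe_pts_cam2)
```

With more cameras the pattern continues: camera k reaches the master plane through the product H_01 @ H_12 @ … @ H_(k-1)k, which is the "successive multiplication" the abstract describes. For warping whole images rather than points, a function such as OpenCV's `warpPerspective` would typically be used with the same chained matrix.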
Citation: Li Taotao, Yang Feng, Xu Xianlei. Method of Large-Scale Measurement Based on Multi-Vision Line Structured Light Sensor[J]. Chinese Journal of Lasers, 2017, 44(11): 1104003.