Chinese Journal of Lasers, 2017, 44(11): 1104003. Published online: 2017-11-17

Method of Large-Scale Measurement Based on Multi-Vision Line Structured Light Sensor
Author Affiliations
1 State Key Laboratory of Coal Resources and Safe Mining, China University of Mining and Technology (Beijing), Beijing 100083, China
2 School of Mechanical and Electronic Engineering, Pingxiang University, Pingxiang 337055, Jiangxi, China
Abstract
To solve the problems of complex calibration and limited accuracy in existing large-scale line structured light measurement, a large-scale, high-precision measurement method based on a multi-vision line structured light sensor is proposed. The sensor consists of one laser and several equally spaced cameras arranged side by side (one master and multiple slaves), with an overlapped field of view (OFOV) of about 1/3 between neighboring cameras, and the laser beam covers the full field-of-view width of all cameras. Sensor calibration requires only a simple chessboard target: target images are acquired and the conventional parameters are calibrated by Zhang's method. In addition, since the camera perspectives are fixed, the transformation matrix for stitching the images of adjacent cameras can be pre-calibrated by image registration using the known feature corner points of the calibration images within the OFOV. Successive multiplication of these transformation matrices yields a perspective transformation model (PTM) that maps the image of any slave camera onto the imaging plane of the master camera. With the proposed method, local light-stripe images of a large-scale object are captured synchronously by all cameras, the PTM quickly stitches them into a complete light-stripe image, and the three-dimensional coordinates of the light-stripe positions are finally obtained by extracting and converting the stripe coordinates. Experimental results show that the proposed method is more convenient and more accurate than existing methods, and the average texture depth of the reconstructed model differs from that of the real model by only 8.3%.
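A minimal sketch of the stitching step described above, assuming an OpenCV-based pipeline (this is not the authors' implementation): pairwise homographies between neighboring cameras are pre-calibrated from chessboard corners seen in the shared OFOV, chained by matrix multiplication into a PTM that maps each slave image onto the master camera's image plane, and then used to warp and merge the synchronously captured local light-stripe images. The function names, the 9×6 chessboard layout, and the maximum-intensity merge are illustrative assumptions.

```python
# Illustrative sketch only; hypothetical names, not the authors' code.
import cv2
import numpy as np

PATTERN_SIZE = (9, 6)  # assumed inner-corner layout of the chessboard target


def pairwise_homography(img_i, img_j):
    """Homography mapping camera j's image onto camera i's image plane,
    estimated from chessboard corners that both cameras see in the OFOV
    (assumes the target lies entirely inside the overlapped region)."""
    ok_i, corners_i = cv2.findChessboardCorners(img_i, PATTERN_SIZE)
    ok_j, corners_j = cv2.findChessboardCorners(img_j, PATTERN_SIZE)
    if not (ok_i and ok_j):
        raise RuntimeError("chessboard target not detected in both views")
    H, _ = cv2.findHomography(corners_j, corners_i, cv2.RANSAC)
    return H


def chain_to_master(pairwise_hs):
    """Chain the pairwise matrices H_{i,i+1} into per-camera PTMs:
    P_0 = I for the master, P_k = H_{0,1} @ H_{1,2} @ ... @ H_{k-1,k}."""
    ptms = [np.eye(3)]
    for H in pairwise_hs:
        ptms.append(ptms[-1] @ H)
    return ptms


def stitch_stripe_images(stripe_images, ptms, canvas_size):
    """Warp each camera's local light-stripe image onto the master plane and
    merge them; a per-pixel maximum keeps the brightest stripe response."""
    width, height = canvas_size
    canvas = np.zeros((height, width), dtype=np.uint8)
    for img, P in zip(stripe_images, ptms):
        warped = cv2.warpPerspective(img, P, (width, height))
        canvas = np.maximum(canvas, warped)
    return canvas
```

The stitched stripe image would then feed the usual line-structured-light steps (stripe-center extraction and triangulation against the calibrated light plane) to recover the three-dimensional coordinates, as the abstract describes.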

Li Taotao, Yang Feng, Xu Xianlei. Method of Large-Scale Measurement Based on Multi-Vision Line Structured Light Sensor[J]. Chinese Journal of Lasers, 2017, 44(11): 1104003.
