

Point Cloud Completion Network Based on Multibranch Structure



Abstract

The point cloud is an important three-dimensional representation with a wide range of applications in computer vision and robotics. Owing to occlusion and uneven sampling in real application scenarios, the point cloud of a target object collected by a sensor is often incomplete. To extract point cloud features and complete the target point cloud, a point cloud completion network based on a multibranch structure is proposed in this paper. The encoder extracts local and global features from the input, and the multibranch structure in the decoder converts the extracted features into point clouds to recover the complete shape of the target object. Experiments are conducted on the ShapeNet and KITTI datasets with different missing proportions and geometric shapes. The results show that the proposed method can effectively complete the missing parts of the target point cloud and produce complete, intuitive, and realistic point cloud models.
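The abstract describes the architecture only at a high level: an encoder that extracts local and global features from the partial input, and a decoder whose multiple branches convert those features into the completed point cloud. The sketch below is a minimal, illustrative PyTorch rendering of that idea, assuming a PointNet-style shared-MLP encoder (per-point local features combined with a max-pooled global feature) and a two-branch decoder, with one fully connected branch producing a coarse output and one folding-style branch producing a dense output, in the spirit of PCN [18] and FoldingNet [20]. The layer sizes, branch design, and folding grid are illustrative assumptions, not the authors' exact network.

import torch
import torch.nn as nn

class MultiBranchCompletionNet(nn.Module):
    """Illustrative encoder + two-branch decoder for point cloud completion.
    All sizes and the branch design are assumptions, not the paper's exact network."""

    def __init__(self, latent_dim=1024, num_coarse=512, grid_size=4):
        super().__init__()
        # Encoder: shared point-wise MLPs; max pooling yields a global feature (PointNet-style).
        self.mlp1 = nn.Sequential(nn.Conv1d(3, 128, 1), nn.ReLU(), nn.Conv1d(128, 256, 1))
        self.mlp2 = nn.Sequential(nn.Conv1d(512, 512, 1), nn.ReLU(), nn.Conv1d(512, latent_dim, 1))
        # Decoder branch 1: fully connected layers regress a coarse point set.
        self.fc_branch = nn.Sequential(nn.Linear(latent_dim, 1024), nn.ReLU(),
                                       nn.Linear(1024, num_coarse * 3))
        # Decoder branch 2: folding-style MLP deforms a small 2D grid around each coarse point.
        self.num_coarse, self.grid_size = num_coarse, grid_size
        self.fold_branch = nn.Sequential(nn.Conv1d(latent_dim + 3 + 2, 512, 1), nn.ReLU(),
                                         nn.Conv1d(512, 512, 1), nn.ReLU(),
                                         nn.Conv1d(512, 3, 1))

    def forward(self, partial):                    # partial: (B, N, 3) incomplete cloud
        x = partial.transpose(1, 2)                 # (B, 3, N)
        local_feat = self.mlp1(x)                   # (B, 256, N) per-point (local) features
        global_feat = local_feat.max(dim=2, keepdim=True).values         # (B, 256, 1)
        # Concatenate local and global features, then lift to the latent shape code.
        feat = torch.cat([local_feat, global_feat.expand(-1, -1, x.size(2))], dim=1)
        code = self.mlp2(feat).max(dim=2).values    # (B, latent_dim)

        # Branch 1: coarse completion.
        coarse = self.fc_branch(code).view(-1, self.num_coarse, 3)

        # Branch 2: dense completion by folding a small 2D grid onto each coarse point.
        B, g = coarse.size(0), self.grid_size
        lin = torch.linspace(-0.05, 0.05, g, device=coarse.device)
        grid = torch.stack(torch.meshgrid(lin, lin, indexing="ij"), dim=-1).reshape(-1, 2)
        grid = grid.unsqueeze(0).repeat(B, self.num_coarse, 1)            # (B, M, 2), M = num_coarse*g*g
        center = coarse.repeat_interleave(g * g, dim=1)                    # (B, M, 3)
        code_exp = code.unsqueeze(1).expand(-1, center.size(1), -1)        # (B, M, latent_dim)
        fold_in = torch.cat([code_exp, center, grid], dim=2).transpose(1, 2)
        dense = center + self.fold_branch(fold_in).transpose(1, 2)          # offsets around each center
        return coarse, dense

# Example usage: 2048 partial input points -> 512 coarse + 8192 dense completed points.
net = MultiBranchCompletionNet()
coarse, dense = net(torch.rand(2, 2048, 3))

In a design like this, the two branches are typically supervised jointly, for example with a Chamfer-distance loss on both the coarse and the dense output against the ground-truth complete cloud, as in PCN [18].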

Supplementary Information

CLC number: TP242

DOI: 10.3788/LOP57.241019

Section: Image Processing

Funding: National Natural Science Foundation of China; Key Research and Development Program of Sichuan Province; Major Science and Technology Project of Sichuan Province

Received: 2020-05-06

Revised: 2020-06-24

Published online: 2020-12-01

Author Affiliations

Luo Kaiqian: College of Computer Science, Sichuan University, Chengdu 610065, Sichuan, China; Key Laboratory of Visual Synthesis Graphics and Image Technology, Sichuan University, Chengdu 610065, Sichuan, China
Zhu Jiangping: College of Computer Science, Sichuan University, Chengdu 610065, Sichuan, China; Key Laboratory of Visual Synthesis Graphics and Image Technology, Sichuan University, Chengdu 610065, Sichuan, China
Zhou Pei: College of Computer Science, Sichuan University, Chengdu 610065, Sichuan, China; Key Laboratory of Visual Synthesis Graphics and Image Technology, Sichuan University, Chengdu 610065, Sichuan, China
Duan Zhijuan: College of Computer Science, Sichuan University, Chengdu 610065, Sichuan, China; Key Laboratory of Visual Synthesis Graphics and Image Technology, Sichuan University, Chengdu 610065, Sichuan, China
Jing Hailong: Key Laboratory of Visual Synthesis Graphics and Image Technology, Sichuan University, Chengdu 610065, Sichuan, China

Corresponding author: Zhu Jiangping (zjp16@scu.edu.cn)


【1】Wu Z R, Song S R, Khosla A, et al. 3D ShapeNets: a deep representation for volumetric shapes . [C]∥2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 7-12, 2015, Boston, MA, USA. New York: IEEE. 2015, 1912-1920.

【2】Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? the KITTI vision benchmark suite . [C]∥2012 IEEE Conference on Computer Vision and Pattern Recognition, June 16-21, 2012, Providence, RI, USA. New York: IEEE. 2012, 3354-3361.

【3】Berger M, Tagliasacchi A, Seversky L, et al. State of the art in surface reconstruction from point clouds . [C]∥Eurographics 2014 - State of the Art Reports, April 7-11, 2014, Strasbourg, France. Girona: ViRVIG. 2014, 161-185.

【4】Davis J, Marschner S R, Garr M, et al. Filling holes in complex surfaces using volumetric diffusion . [C]∥Proceedings of First International Symposium on 3D Data Processing Visualization and Transmission, June 19-21, 2002, Padova, Italy. New York: IEEE. 2002, 428-441.

【5】Mitra N J, Guibas L J, Pauly M, et al. Partial and approximate symmetry detection for 3D geometry [J]. ACM Transactions on Graphics. 2006, 25(3): 560-568.

【6】Mitra N J, Pauly M, Wand M, et al. Symmetry in 3D geometry: extraction and applications [J]. Computer Graphics Forum. 2013, 32(6): 1-23.

【7】Han F, Zhu S C. Bottom-up/top-down image parsing with attribute grammar [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2009, 31(1): 59-73.

【8】Li Y Y, Dai A, Guibas L, et al. Database-assisted object retrieval for real-time 3D reconstruction [J]. Computer Graphics Forum. 2015, 34(2): 435-446.

【9】Felzenszwalb P F, Girshick R B, McAllester D, et al. Object detection with discriminatively trained part-based models [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2010, 32(9): 1627-1645.

【10】Gupta S, Arbeláez P, Girshick R, et al. Aligning 3D models to RGB-D images of cluttered scenes . [C]∥2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 7-12, 2015, Boston, MA, USA. New York: IEEE. 2015, 4731-4740.

【11】Liu J, Bai D. 3D point cloud registration algorithm based on feature matching [J]. Acta Optica Sinica. 2018, 38(12): 1215005.

【12】Zhang K, Qiao S Q, Zhou W Z. Point cloud segmentation based on three-dimensional shape matching [J]. Laser & Optoelectronics Progress. 2018, 55(12): 121011.

【13】Liu M, Shu Q, Yang Y X, et al. Three-dimensional point cloud registration based on independent component analysis [J]. Laser & Optoelectronics Progress. 2019, 56(1): 011203.

【14】Tang Z R, Liu M Z, Jiang Y, et al. Point cloud registration algorithm based on canonical correlation analysis [J]. Chinese Journal of Lasers. 2019, 46(4): 0404006.

【15】Wang X H, Wu L S, Chen H W, et al. Feature line extraction from a point cloud based on region clustering segmentation [J]. Acta Optica Sinica. 2018, 38(11): 1110001.

【16】Dai A, Qi C R, Nießner M. Shape completion using 3D-encoder-predictor CNNs and shape synthesis . [C]∥2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 21-26, 2017, Honolulu, HI, USA. New York: IEEE. 2017, 6545-6554.

【17】Song S R, Yu F, Zeng A, et al. Semantic scene completion from a single depth image . [C]∥2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 21-26, 2017, Honolulu, HI, USA. New York: IEEE. 2017, 190-198.

【18】Yuan W T, Khot T, Held D, et al. PCN: point completion network . [C]∥2018 International Conference on 3D Vision (3DV), September 5-8, 2018, Verona, Italy. New York: IEEE. 2018, 728-737.

【19】Charles R Q, Hao S, Mo K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation . [C]∥2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 21-26, 2017, Honolulu, HI, USA. New York: IEEE. 2017, 77-85.

【20】Yang Y Q, Feng C, Shen Y, et al. FoldingNet: point cloud auto-encoder via deep grid deformation[EB/OL]. [2020-04-30]. https://arxiv.org/abs/1712.07262.

【21】Dai A, Chang A X, Savva M, et al. ScanNet: richly-annotated 3D reconstructions of indoor scenes . [C]∥2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 21-26, 2017, Honolulu, HI, USA. New York: IEEE. 2017, 2432-2443.

【22】Armeni I, Sax S, Zamir A R, et al. Joint 2D-3D-semantic data for indoor scene understanding[EB/OL]. [2020-04-23]. https://arxiv.org/abs/1702.01105.

【23】Kingma D P, Ba J. Adam: a method for stochastic optimization[EB/OL]. [2020-04-20]. https://arxiv.org/abs/1412.6980.

【24】Fan H Q, Su H, Guibas L. A point set generation network for 3D object reconstruction from a single image[EB/OL]. [2020-04-25]. https://arxiv.org/abs/1612.00603.

Cite This Paper

Luo Kaiqian, Zhu Jiangping, Zhou Pei, Duan Zhijuan, Jing Hailong. Point Cloud Completion Network Based on Multibranch Structure[J]. Laser & Optoelectronics Progress, 2020, 57(24): 241019

