

Key Frame Extraction of Hysteroscopy Videos Based on Image Quality and Attention



Abstract

To address the low precision caused by the equal-length segmentation of the attention curve in traditional methods, a scheme is proposed in which key video segments are extracted from the image-quality curve, and the frame with the highest attention value in each segment is selected as the key frame. On a local database, the method achieves a precision of 52.94% and an F-measure of 62.77%, which are 5.23% and 2.65% higher, respectively, than those of Muhammad's method.
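The idea in the abstract can be sketched in a few lines: split the video into key segments wherever the per-frame image-quality curve stays above a threshold, then keep the frame with the highest attention value from each segment. This is a minimal illustration only; the quality and attention scores, the threshold, and the run-based segmentation rule below are assumptions for demonstration, not the paper's exact procedure.

```python
import numpy as np

def extract_key_frames(quality, attention, threshold):
    """Illustrative sketch: treat each contiguous run of frames whose
    image-quality score exceeds `threshold` as one key segment, and
    return the index of the highest-attention frame in each segment."""
    quality = np.asarray(quality, dtype=float)
    attention = np.asarray(attention, dtype=float)
    good = quality > threshold            # frames of acceptable quality
    key_frames = []
    start = None
    # Append a False sentinel so the final run is closed inside the loop.
    for i, ok in enumerate(np.append(good, False)):
        if ok and start is None:
            start = i                     # a key segment begins
        elif not ok and start is not None:
            seg = slice(start, i)         # one contiguous key segment
            key_frames.append(start + int(np.argmax(attention[seg])))
            start = None
    return key_frames

# Two quality runs above 0.5 (frames 1-3 and 5-6); the most
# attended frame of each run is returned.
print(extract_key_frames([0.1, 0.8, 0.9, 0.7, 0.2, 0.6, 0.9, 0.1],
                         [5, 1, 3, 2, 9, 4, 7, 0], 0.5))  # → [2, 6]
```

As a side note, with the reported precision P = 52.94% and F-measure F = 62.77%, the relation F = 2PR/(P + R) implies a recall of about 77.1% on the same data.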

Additional Information

CLC number: TP399

DOI:10.3788/lop56.061004

Section: Image Processing

Funding: National Natural Science Foundation of China (61303028)

Received: 2018-09-03

Revised: 2018-10-05

Published online: 2018-10-17

Author Affiliations

Miao Qiangqiang: School of Information Engineering, Wuhan University of Technology, Wuhan 430070, Hubei, China

Corresponding author: Miao Qiangqiang (amiao@whut.edu.cn)

【1】Gavio W, Scharcanski J, Frahm J M, et al. Hysteroscopy video summarization and browsing by estimating the physician′s attention on video segments[J]. Medical Image Analysis, 2012, 16(1): 160-176.

【2】de Avila S E F, Lopes A P B, da Luz A, Jr, et al. VSUMM: A mechanism designed to produce static video summaries and a novel evaluation method[J]. Pattern Recognition Letters, 2011, 32(1): 56-68.

【3】dos Santos Belo L, Caetano C A, Jr, do Patrocínio Z K G, Jr, et al. Summarizing video sequence using a graph-based hierarchical approach[J]. Neurocomputing, 2016, 173: 1001-1016.

【4】Chen J, Zou Y X, Wang Y. Wireless capsule endoscopy video summarization: A learning approach based on Siamese neural network and support vector machine[C]∥International Conference on Pattern Recognition, December 4-8, 2016, Cancún Center, Cancún, México. New York: IEEE, 2016: 1303-1308.

【5】Meng J J, Wang H X, Yuan J S, et al. From keyframes to key objects: video summarization by representative object proposal selection[C]∥IEEE Conference on Computer Vision and Pattern Recognition, 2016: 1039-1048.

【6】Li J T, Yao T, Ling Q, et al. Detecting shot boundary with sparse coding for video summarization[J]. Neurocomputing, 2017, 266: 66-78.

【7】Ma M Y, Mei S, Hou J, et al. Nonlinear kernel sparse dictionary selection for video summarization[C]∥IEEE International Conference on Multimedia and Expo, July 10-14, 2017, Hong Kong, China. New York: IEEE, 2017: 637-642.

【8】Ioannidis A, Chasanis V, Likas A. Weighted multi-view key-frame extraction[J]. Pattern Recognition Letters, 2016, 72: 52-61.

【9】Chen L, Wang Y H. Automatic key frame extraction in continuous videos from construction monitoring by using color, texture, and gradient features[J]. Automation in Construction, 2017, 81: 355-368.

【10】Hamza R, Muhammad K, Lü Z, et al. Secure video summarization framework for personalized wireless capsule endoscopy[J]. Pervasive and Mobile Computing, 2017, 41: 436-450.

【11】Ejaz N, Mehmood I, Baik S W. MRT letter: Visual attention driven framework for hysteroscopy video abstraction[J]. Microscopy Research and Technique, 2013, 76(6): 559-563.

【12】Muhammad K, Ahmad J, Sajjad M, et al. Visual saliency models for summarization of diagnostic hysteroscopy videos in healthcare systems[J]. SpringerPlus, 2016, 5(1): 1495.

【13】Muhammad K, Sajjad M, Lee M Y, et al. Efficient visual attention driven framework for key frames extraction from hysteroscopy videos[J]. Biomedical Signal Processing and Control, 2017, 33: 161-168.

【14】Lucas B D, Kanade T. An iterative image registration technique with an application to stereo vision[C]∥International Joint Conference on Artificial Intelligence, August 24-28, 1981, Vancouver, British Columbia. [S. l. : s. n.], 1981: 674-679.

【15】Bay H, Tuytelaars T, van Gool L. SURF: Speeded up robust features[C]∥European Conference on Computer Vision. Berlin, Heidelberg: Springer, 2006: 404-417.

【16】Wang M, Li Z Y, Wang C, et al. Key frame extraction algorithm of sign language based on compressed sensing and SURF features[J]. Laser & Optoelectronics Progress, 2018, 55(5): 051013.

【17】Han T Q, Zhao Y D, Liu S L, et al. Spatially constrained SURF feature point matching for UAV images[J]. Journal of Image and Graphics, 2013, 18(6): 669-676.

【18】Ojala T, Pietikäinen M, Harwood D. A comparative study of texture measures with classification based on featured distributions[J]. Pattern Recognition, 1996, 29(1): 51-59.

【19】Yang H X, Chen Y, Zhang F, et al. Face recognition based on improved gradient local binary pattern[J]. Laser & Optoelectronics Progress, 2018, 55(6): 061004.

【20】Surakarin W, Chongstitvatana P. Classification of clothing with weighted SURF and local binary patterns[C]∥International Computer Science and Engineering Conference, Nov. 23-26, 2015, Chiang Mai, Thailand. New York: IEEE, 2015: 1-4.

【21】Luo T J, Liu B H. Fast SURF key-points image registration algorithm by fusion features[J]. Journal of Image and Graphics, 2015, 20(1): 95-103.

【22】Geusebroek J M, van den Boomgaard R, Smeulders A W M, et al. Color invariance[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(12): 1338-1350.

【23】Heikkil M, Pietikinen M, Schmid C. Description of interest regions with center-symmetric local binary patterns[M]. Computer Vision, Graphics and Image Processing. Berlin, Heidelberg: Springer, 2006: 58-69.

【24】Fischler M A, Bolles R C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography[J].Communications of the ACM, 1981, 24(6): 381-395.

【25】Jin J J, Lu W L, Guo X T, et al. Position registration method of simultaneous phase-shifting interferograms based on SURF and RANSAC algorithms[J]. Acta Optica Sinica, 2017, 37(10): 1012002.

【26】Kristan M, Perš J, Perše M, et al. A Bayes-spectral-entropy-based measure of camera focus using a discrete cosine transform[J]. Pattern Recognition Letters, 2006, 27(13): 1431-1439.

【27】Wang Z M. Review of no-reference image quality assessment[J]. Acta Automatica Sinica, 2015, 41(6): 1062-1079.

【28】Devijver P A. On a new class of bounds on Bayes risk in multihypothesis pattern recognition[J]. IEEE Transactions on Computers, 1974, C-23(1): 70-80.

Cite This Paper

Miao Qiangqiang. Key Frame Extraction of Hysteroscopy Videos Based on Image Quality and Attention[J]. Laser & Optoelectronics Progress, 2019, 56(6): 061004

