Opto-Electronic Engineering, 2016, 43(12): 175. Published online: 2016-12-30

Temporal Consistency Enhancement on Depth Sequences Based on the Motion Intensity of Scene
Author affiliation
Faculty of Information Science and Engineering, Ningbo University, Ningbo 315211, Zhejiang, China
Abstract
In most depth videos, the depth values of static regions are temporally inconsistent, which reduces coding efficiency and degrades the quality of rendered views. To address this problem, this paper proposes a temporal consistency enhancement algorithm for depth video based on the motion intensity of the scene. First, the block histogram difference (BH) is applied to quantify the relative motion intensity between successive depth frames, and the video segment with the lowest relative motion is adaptively selected, according to the BH values, as the source for depth-value refinement. Next, motion detection is used to segment the moving regions of the corresponding color video, and the accurate temporal consistency information of the color video is then exploited to correct erroneously varying depth values in the static regions of the depth video. Finally, a temporal weighted filtering function with low computational complexity is applied to further optimize the corrected depth video. Experimental results show that, compared with the originally estimated depth video, the proposed algorithm saves 17.48% to 31.75% of the encoding bit rate while improving the subjective quality of the virtual views rendered from the depth maps.
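The BH measure used in the first step can be sketched as follows. This is an illustrative Python sketch only, assuming 8-bit grayscale frames, a fixed block size, and a mean per-block L1 histogram distance; the paper's exact BH formulation, block size, and bin count are not specified here and are assumptions:

```python
import numpy as np

def block_histogram_difference(frame_a, frame_b, block_size=16, bins=16):
    """Quantify relative motion between two 8-bit grayscale frames as the
    mean L1 distance between per-block intensity histograms.

    Illustrative sketch: the paper's exact BH formula may differ.
    """
    h, w = frame_a.shape
    total = 0.0
    count = 0
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            a = frame_a[y:y + block_size, x:x + block_size]
            b = frame_b[y:y + block_size, x:x + block_size]
            # Per-block intensity histograms over the 8-bit range.
            ha, _ = np.histogram(a, bins=bins, range=(0, 256))
            hb, _ = np.histogram(b, bins=bins, range=(0, 256))
            total += np.abs(ha - hb).sum()
            count += 1
    return total / count  # larger value => stronger relative motion
```

Identical frames yield a BH value of zero, so the segment with the smallest inter-frame BH values would be selected as the refinement source.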

FU Xianzu, WANG Xiaodong, LOU Daping, QIN Chuang, ZHANG Lianjun. Temporal Consistency Enhancement on Depth Sequences Based on the Motion Intensity of Scene[J]. Opto-Electronic Engineering, 2016, 43(12): 175.
