Optics and Precision Engineering, 2016, 24(1): 195. Published online: 2016-03-22
Estimation of sub-pixel image jitter based on a linear model of image gray level
Keywords: remote sensing image; cloud scene; image jitter estimation; sub-pixel; linear model of image gray level
Abstract

A new method based on a linear model of image gray level was proposed to estimate the sub-pixel jitter of cloud-scene image sequences acquired by staring remote sensing imaging systems. Firstly, each image pixel and its neighborhood gray levels were described by a linear model with three parameters, yielding a linear modeling method for image gray level. Then, by taking the jitter of each sequence frame relative to the reference frame as the optimization variable and the similarity between the reference frame and the sequence frames as the optimization objective, a least-squares optimization method for solving the sub-pixel jitter was established, and an analytic solution formula was derived. Finally, the method was verified by simulation on image sequences containing cloud scenes. The experimental results indicate that the jitter estimation error of the proposed method is less than 0.1 pixel. Compared with a conventional feature-point-based registration algorithm, the proposed method achieves higher jitter estimation accuracy, and it can be applied to geometric calibration of remote sensing images, target positioning, and multi-frame target association detection in time-series images.
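The abstract describes the method only at a high level; the exact three-parameter model and the derived analytic formula are given in the full paper. As a minimal stand-in, the sketch below shows the standard gradient-based (Lucas-Kanade-style) formulation consistent with that description: the current frame is linearized around the reference frame via a first-order Taylor expansion of the gray level, and the shift is obtained in closed form by least squares over all pixels. The function name `estimate_jitter` and the two-unknown shift model are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def estimate_jitter(ref, cur):
    """Closed-form least-squares estimate of the sub-pixel shift (dx, dy)
    of frame `cur` relative to reference frame `ref`.

    First-order linear model of the gray level at each pixel:
        cur(x, y) ≈ ref(x - dx, y - dy) ≈ ref(x, y) - gx*dx - gy*dy
    """
    ref = ref.astype(float)
    cur = cur.astype(float)
    # Spatial gradients of the reference frame (axis 0 = y, axis 1 = x).
    gy, gx = np.gradient(ref)
    # Stack one linear equation per pixel:  [gx, gy] @ [dx, dy] ≈ ref - cur
    A = np.column_stack([gx.ravel(), gy.ravel()])
    d = (ref - cur).ravel()
    # Least squares over all pixels; solving the 2x2 normal equations
    # gives the analytic (closed-form) solution.
    (dx, dy), *_ = np.linalg.lstsq(A, d, rcond=None)
    return dx, dy
```

Because the model is a first-order expansion, the estimate is accurate only for small (sub-pixel) shifts of smooth scenes such as clouds, which matches the application in the abstract.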
ZHI Xi-yang, HOU Qing-yu, WANG Shao-you. Estimation of image sub-pixel jitter based on linear model of image gray level[J]. Optics and Precision Engineering, 2016, 24(1): 195.