A unified approach for the simultaneous and cooperative estimation of defocus blur and spatial shifts
This paper presents an algorithm for the cooperative and simultaneous estimation of two depth cues: defocus blur and spatial shifts (stereo disparities, two-dimensional (2D) motion, and/or zooming disparities). These cues are estimated from two images of the same scene acquired by a camera moving in time and/or space, whose intrinsic parameters are known. The algorithm is based on a generalized moment expansion. We show that the more blurred image can be expressed as a function of the partial derivatives of the two images, the blur difference, and the horizontal and vertical shifts. These depth cues can therefore be computed by solving a system of equations. The behavior of the algorithm is studied for constant and linear images, step edges, lines, and junctions. The rules governing the choice of its parameters are then discussed. The proposed algorithm is tested on synthetic and real images. The results obtained are accurate and dense. They confirm that defocus blurs and spatial shifts (stereo disparities, 2D motion, and/or zooming disparities) can be computed simultaneously without using the epipolar geometry. They thus implicitly show that the unified approach allows: (1) blur estimation even if the spatial locations of corresponding pixels do not match perfectly; (2) spatial shift estimation even if some of the intrinsic parameters of the camera have been modified during the capture.
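The relation described above can be illustrated with a minimal, hypothetical sketch. It is not the paper's generalized moment expansion; it assumes a simplified first-order model in which the more blurred image is written as the sharper image plus shift terms weighted by its partial derivatives and a blur-difference term weighted by its Laplacian. Stacking this equation over all pixels yields an overdetermined linear system that is solved in the least-squares sense. All names (`u`, `v`, `b`) and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Hypothetical first-order model (an assumption, not the paper's method):
#   I2 ≈ I1 + u * dI1/dx + v * dI1/dy + b * Laplacian(I1)
# where (u, v) are the horizontal/vertical shifts and b encodes the
# blur difference between the two images.

rng = np.random.default_rng(0)
I1 = rng.standard_normal((32, 32))  # stand-in for the sharper image

# Partial derivatives (central differences) and Laplacian of I1.
Iy, Ix = np.gradient(I1)  # np.gradient returns axis-0 (y) then axis-1 (x)
lap = np.gradient(np.gradient(I1, axis=1), axis=1) + \
      np.gradient(np.gradient(I1, axis=0), axis=0)

# Synthesize I2 exactly from the linear model with known parameters,
# so the sketch is self-verifying.
u_true, v_true, b_true = 0.3, -0.2, 0.5
I2 = I1 + u_true * Ix + v_true * Iy + b_true * lap

# Stack one equation per pixel: A @ [u, v, b] = I2 - I1,
# then recover the unknowns by least squares.
A = np.stack([Ix.ravel(), Iy.ravel(), lap.ravel()], axis=1)
rhs = (I2 - I1).ravel()
u, v, b = np.linalg.lstsq(A, rhs, rcond=None)[0]
print(u, v, b)  # recovers (0.3, -0.2, 0.5) up to numerical precision
```

In practice the system would be built over local windows (to obtain dense, per-pixel estimates) and from two genuinely shifted and blurred images, where the linear model holds only approximately.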