Using Color to Separate Reflection Components
In computer vision, whose goal is to identify objects and their positions by examining images, one of the key steps is computing the surface normal of the visible surface at each point (“pixel”) in the image. Many sources of information are studied, such as outlines of surfaces, intensity gradients, object motion, and color. This article presents a method for analyzing a standard color image to determine the amount of interface (“specular”) and body (“diffuse”) reflection at each pixel. The interface reflection represents the highlights from the original image, and the body reflection represents the original image with highlights removed. Such intrinsic images are of interest because the geometric properties of each type of reflection are simpler than the geometric properties of intensity in a black-and-white image. The method is based upon a physical model of reflection which states that two distinct types of reflection, interface and body reflection, occur, and that each type can be decomposed into a relative spectral distribution and a geometric scale factor. This model is far more general than the models typically used in computer vision and computer graphics, and includes most such models as special cases. In addition, the model does not assume a point light source or a uniform distribution of illumination over the scene. The properties of tristimulus integration are used to derive a new model of the color distribution of pixel values, and this model is exploited in an algorithm that derives the desired quantities. Suggestions are provided for extending the model to deal with diffuse illumination and for analyzing the two components of reflection.
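The decomposition the abstract describes can be made concrete with a minimal numerical sketch. Under the model, each pixel color is a sum of an interface term and a body term, each the product of a fixed spectral (color) vector and a per-pixel geometric scale factor. The sketch below assumes the two color vectors are already known, so that separating the components reduces to a per-pixel least-squares solve; in the article itself these vectors must be estimated from the image, and the function name and sample colors here are illustrative only.

```python
import numpy as np

def separate_components(pixels, c_s, c_b):
    """Split an (N, 3) array of RGB pixels into interface and body parts.

    Models each pixel C as C = m_s * c_s + m_b * c_b, where c_s and c_b
    are the (assumed known) interface and body color vectors and
    m_s, m_b are per-pixel geometric scale factors.
    """
    A = np.stack([c_s, c_b], axis=1)            # 3x2 basis of the color plane
    scales, *_ = np.linalg.lstsq(A, pixels.T, rcond=None)
    m_s, m_b = scales                           # per-pixel scale factors
    interface = np.outer(m_s, c_s)              # highlight image
    body = np.outer(m_b, c_b)                   # highlight-free image
    return interface, body

# Illustrative example: a matte red surface with a whitish highlight.
c_s = np.array([1.0, 1.0, 1.0])                 # interface (illuminant) color
c_b = np.array([0.9, 0.2, 0.1])                 # body (surface) color
pixels = np.array([0.3 * c_b,                   # shaded matte point
                   0.8 * c_b + 0.5 * c_s])      # point inside a highlight
interface, body = separate_components(pixels, c_s, c_b)
```

Because every pixel in this example lies exactly in the plane spanned by the two color vectors, the recovered components sum back to the input image; with real sensor noise the least-squares solve returns the closest decomposition in that plane.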