Global Surface Reconstruction by Purposive Control of Observer Motion
K. N. Kutulakos and C. R. Dyer, Computer Sciences Department Technical Report 1141, University of Wisconsin - Madison, April 1993.
What real-time, qualitative viewpoint-control behaviors are important for performing global visual exploration tasks such as searching for specific surface markings, building a global model of an arbitrary object, or recognizing an object? In this paper we consider the task of purposefully controlling the motion of an active, monocular observer in order to recover a global description of a smooth, arbitrarily shaped object.
We formulate global surface reconstruction as the qualitative task of controlling the motion of the observer so that the visible rim slides over the maximal, connected, reconstructible surface regions intersecting the visible rim at the initial viewpoint. We show that these regions are bounded by a subset of the visual event curves defined on the surface.
By studying the epipolar parameterization, we develop four basic behaviors that allow reconstruction of a surface patch around any point in a reconstructible surface region. These behaviors control viewpoint to achieve and maintain a well-defined geometric relationship with the object's surface, rely only on information extracted directly from images (e.g., tangents to the occluding contour), and are simple enough to be executed in real time. We then show how global surface reconstruction can be provably achieved by (1) appropriately integrating these behaviors to iteratively "grow" the reconstructed regions, and (2) obeying four simple rules.
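As background for the role the epipolar parameterization plays above, the following is a standard sketch (in the style of Cipolla and Blake's formulation, not a derivation taken from this report) of how depth along the visible rim follows from the motion of the occluding contour; the symbols c, p, n, and λ are assumptions of this sketch:

```latex
% Epipolar parameterization: as the observer moves, the visible rim
% sweeps out the surface
\[
  r(s,t) \;=\; c(t) + \lambda(s,t)\, p(s,t),
\]
% where c(t) is the camera center, p(s,t) the unit viewing ray to the
% contour point, and \lambda(s,t) the unknown depth.  Along the rim the
% ray grazes the surface, so p \cdot n = 0 for the surface normal n.
% Differentiating in t and using the fact that r_t lies in the tangent
% plane (r_t \cdot n = 0):
\[
  0 \;=\; r_t \cdot n \;=\; c_t \cdot n + \lambda\, p_t \cdot n
  \quad\Longrightarrow\quad
  \lambda \;=\; -\,\frac{c_t \cdot n}{p_t \cdot n}.
\]
% Depth is thus determined by the known observer motion c_t and the
% measured image motion of the contour (through p_t), consistent with
% the claim that tangents to the occluding contour, extracted directly
% from images, suffice as input to the behaviors.
```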