
Exploring Three-Dimensional Objects by Controlling the Point of Observation
K. N. Kutulakos, Ph.D. Dissertation, Computer Sciences Department Technical Report 1251, University of Wisconsin - Madison, October 1994.

Abstract

In this thesis we study how controlled movements of a camera can be used to infer properties of a curved object's three-dimensional shape. The unknown geometry of an environment's objects, the effects of self-occlusion, the depth ambiguities caused by the projection process, and the presence of noise in image measurements are a few of the complications that make object-dependent movements of the camera advantageous in certain shape recovery tasks. Such movements can simplify local shape computations such as curvature estimation, allow the use of weaker camera calibration assumptions, and enable the extraction of global shape information for objects with complex surface geometry. The utility of object-dependent camera movements is studied in the context of three tasks, each involving the extraction of progressively richer information about an object's unknown shape: (1) detecting the occluding contour, (2) estimating surface curvature at points projecting to the contour, and (3) building a three-dimensional model of an object's entire surface. Our main result is the development of three distinct active vision strategies that solve these tasks by controlling the motion of a camera.

Occluding contour detection and surface curvature estimation are achieved by exploiting the concept of a special viewpoint: for any image there exist special camera positions from which the object's view makes these tasks trivial. We show that these positions can be reached deterministically, and that they enable shape recovery even when few or no markings or discontinuities exist on the object's surface and when differential camera motion measurements cannot be obtained accurately.
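
To make the flow of such a strategy concrete, the following is a minimal, purely illustrative Python sketch of an image-driven loop that moves a camera until a special-viewpoint condition is satisfied. It is not the dissertation's algorithm: the error signal special_viewpoint_error and the one-parameter camera position are hypothetical stand-ins for quantities that, in practice, would be measured from the occluding contour in the current image.

    import math

    def special_viewpoint_error(camera_angle, target_angle=math.pi / 3):
        # Hypothetical image-based error signal: zero exactly at a special
        # viewpoint. In practice it would be measured from the current image
        # (e.g., from the tracked occluding contour), not from known geometry.
        return math.sin(camera_angle - target_angle)

    def drive_to_special_viewpoint(camera_angle, gain=0.5, tol=1e-4, max_steps=200):
        # Differential motion loop: each step uses only the current image
        # measurement to choose a small camera displacement.
        for _ in range(max_steps):
            err = special_viewpoint_error(camera_angle)
            if abs(err) < tol:
                return camera_angle        # special viewpoint (approximately) reached
            camera_angle -= gain * err     # small, image-driven correction
        raise RuntimeError("did not reach a special viewpoint")

    print("final camera angle:", drive_to_special_viewpoint(camera_angle=0.0))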

A basic issue in building global three-dimensional object models is how to control the camera's motion so that previously unreconstructed regions of the object become reconstructed. A fundamental difficulty is that the set of reconstructed points can change unpredictably (e.g., due to self-occlusions) when ad hoc motion strategies are used. We show how global model building can be achieved for generic objects of arbitrary shape by controlling the camera's motion on automatically selected surface tangent and normal planes, so that the boundary of the already reconstructed region is guaranteed to "slide" over the object's entire surface.
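
The Python fragment below is a schematic illustration, under the simplifying assumption that the surface is abstracted as a graph of patches, of how a reconstructed region can be grown until its boundary has slid over the entire surface. It is not the camera-control strategy itself, in which the growth step is realized by moving the camera in automatically selected tangent and normal planes; build_model, surface_graph, and the ring example are hypothetical.

    from collections import deque

    def build_model(surface_graph, seed_patch):
        # surface_graph: dict mapping each surface patch to its neighbors.
        reconstructed = {seed_patch}
        boundary = deque([seed_patch])      # frontier of the reconstructed region
        while boundary:
            patch = boundary.popleft()
            # Stand-in for: move the camera near this boundary patch and
            # reconstruct its not-yet-reconstructed neighbors from new images.
            for neighbor in surface_graph[patch]:
                if neighbor not in reconstructed:
                    reconstructed.add(neighbor)
                    boundary.append(neighbor)   # the boundary "slides" outward
        return reconstructed

    # Toy "surface": a ring of six patches, each adjacent to its two neighbors.
    ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print(sorted(build_model(ring, seed_patch=0)))   # -> [0, 1, 2, 3, 4, 5]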

Our work emphasizes the need for (1) controlling camera motion through efficient processing of the image stream, and (2) designing provably correct strategies, i.e., strategies whose success can be characterized precisely in terms of the geometry of the viewed object. For each task, efficiency is achieved by assuming a dense sequence of images, extracting from each image only the information necessary to move the camera differentially, and using 2D rather than 3D information to control camera motion. Provable correctness is achieved by controlling camera motion based on the occluding contour's dynamic shape and by maintaining specific task-dependent geometric constraints that relate the camera's motion to the differential geometry of the object.
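
As a rough illustration of this per-frame structure, the sketch below walks through a dense image sequence, extracting from each frame only a 2D contour measurement and converting it directly into a differential motion command. The three callables (extract_contour_point, motion_from_2d, apply_motion) are hypothetical placeholders, not the dissertation's routines.

    def control_loop(frames, extract_contour_point, motion_from_2d, apply_motion):
        # Dense sequence assumed: consecutive frames differ by a small camera
        # displacement, so one cheap 2D measurement per frame is enough to
        # maintain the task-dependent geometric constraint.
        for frame in frames:
            p = extract_contour_point(frame)   # 2D measurement only
            dm = motion_from_2d(p)             # differential motion from 2D data
            apply_motion(dm)                   # command the camera; no 3D used

    # Minimal dry run with trivial stand-ins for the three callables.
    control_loop(range(5),
                 extract_contour_point=lambda f: (float(f), 0.0),
                 motion_from_2d=lambda p: (-0.1 * p[0], 0.0),
                 apply_motion=lambda dm: print("move camera by", dm))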