Appearance Models of Three-Dimensional Shape for Machine Vision and Graphics
W. B. Seales, Ph.D. Dissertation, Computer Sciences Department Technical Report 1042, University of Wisconsin - Madison, August 1991.
A fundamental problem common to both computer graphics and model-based computer vision is how to efficiently model the appearance of a shape. Appearance is obtained procedurally by applying a projective transformation to a three-dimensional, object-centered shape representation. This thesis presents a viewer-centered representation based on the visual event: a viewpoint at which a specific change in the structure of the projected model occurs. We present and analyze the basis of this viewer-centered representation and the algorithms for its construction. Variations of this visual-event-based representation are applied to two specific problems: hidden-line/surface display, and solving for model pose given an image contour.
The problem of efficiently displaying a polyhedral scene over a path of viewpoints is cast as a problem of computing visual events along that path. A visual event is a viewpoint that causes a change in the structure of the image structure graph, the model's projected line drawing. The information stored with a visual event is sufficient to update a representation of the image structure graph. Thus the visible lines of a scene can be displayed as the viewpoint changes by first precomputing and storing visual events, and then using those events at display time to interactively update the image structure graph. Display rates comparable to wire-frame display are achieved for large polyhedral models.
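The precompute-then-replay scheme above can be sketched as an event-driven update loop. This is only an illustration of the idea, not the dissertation's implementation: the event tuples, the path parameter `t`, and the edge labels are all hypothetical stand-ins for the image-structure-graph bookkeeping the thesis actually maintains.

```python
class EventDrivenDisplay:
    """Replay precomputed visual events to keep a visible-edge set current
    as the viewpoint moves forward along a parameterized path."""

    def __init__(self, initial_visible_edges, events):
        # events: (t, edges_to_add, edges_to_remove) triples, precomputed
        # offline; t parameterizes position along the viewpoint path.
        self.events = sorted(events)
        self.visible = set(initial_visible_edges)
        self.cursor = 0  # index of the next unapplied event

    def move_to(self, t):
        # Apply every visual event between the current position and t,
        # updating the visible-edge set instead of recomputing it.
        while self.cursor < len(self.events) and self.events[self.cursor][0] <= t:
            _, added, removed = self.events[self.cursor]
            self.visible |= set(added)
            self.visible -= set(removed)
            self.cursor += 1
        return self.visible
```

Because each step touches only the edges named in the intervening events, the per-frame cost is proportional to the number of events crossed, not to scene size, which is what makes wire-frame-like display rates plausible.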
The rim appearance representation is a new, viewer-centered, exact representation of the occluding contour of polyhedra. We present an algorithm based on the geometry of polyhedral self-occlusion and on visual events for computing a representation of the exact appearance of occluding contour edges. The rim appearance representation, organized as a multi-level model of the occluding contour, is used to constrain the viewpoints of a three-dimensional model that can produce a set of detected occluding-contour features. Implementation results demonstrate that precomputed occluding-contour information efficiently and tightly constrains the pose of a model while consistently accounting for detected occluding-contour features.