Interpolating View and Scene Motion by Dynamic View Morphing
R. A. Manning and C. R. Dyer, Proc. Image Understanding Workshop, 1998, pp. 323-330.
We present a novel technique for interpolating between two views of a dynamic scene. Our approach extends the concept of view morphing introduced in [Seitz and Dyer, 1996] and retains the relative advantages of that method. The interpolation portrays one possible physically valid version of what transpired in the scene during the time between the views. The scene is assumed to consist of a small number of objects. Each object can undergo any motion during the time between views as long as the total movement is equivalent to a single, rigid translation. The dynamic view morphing technique can work with widely spaced reference views, sparse point correspondences, and uncalibrated cameras. When the camera-to-camera transformation can be determined, the virtual objects can be portrayed moving along straight-line, constant-velocity trajectories. Methods are developed for determining the camera-to-camera transformation from information available in the reference views. It is shown that each moving object in a scene has a corresponding fundamental matrix and that the camera-to-camera transformation can be determined from two distinct fundamental matrices. Dynamic view morphing is developed for both pinhole and orthographic cameras, and the use of three or more reference views is discussed. Static view morphing is made more versatile with respect to occlusion, and mosaicing is combined with dynamic view morphing for the case when both reference views share the same optical center. The resulting combination of techniques can be used to fill in missing gaps in movies, perform "view hand-offs" between cameras at different locations, create movies from still images, perform movie stabilization and compression, track objects during periods of obstruction, and related tasks.
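To make the fundamental-matrix machinery mentioned in the abstract concrete, the sketch below estimates a fundamental matrix from sparse point correspondences between two uncalibrated pinhole views, then verifies the epipolar constraint x2' F x1 = 0. This is a minimal illustration using the standard linear eight-point algorithm on synthetic noise-free data, not the paper's specific method for recovering per-object fundamental matrices of moving objects; all camera poses and point coordinates here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic static scene: 20 homogeneous 3D points in general position.
X = np.vstack([rng.uniform(-1, 1, (3, 20)), np.ones((1, 20))])

def project(R, t, X):
    """Project homogeneous 3D points through the pinhole camera [R | t]
    (normalized image coordinates, i.e. identity intrinsics)."""
    x = np.hstack([R, t.reshape(3, 1)]) @ X
    return x / x[2]  # normalize so the third row is 1

# Camera 1 looks down the z-axis; camera 2 is translated and slightly rotated.
R1, t1 = np.eye(3), np.array([0.0, 0.0, 5.0])
c, s = np.cos(0.1), np.sin(0.1)
R2 = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t2 = np.array([0.5, 0.0, 5.0])
x1, x2 = project(R1, t1, X), project(R2, t2, X)

def fundamental_8pt(x1, x2):
    """Linear eight-point estimate of F from homogeneous correspondences.

    Each correspondence gives one linear equation x2^T F x1 = 0 in the
    nine entries of F; the least-squares solution is the right singular
    vector of the stacked system with the smallest singular value.
    """
    A = np.column_stack([
        x2[0] * x1[0], x2[0] * x1[1], x2[0],
        x2[1] * x1[0], x2[1] * x1[1], x2[1],
        x1[0], x1[1], np.ones(x1.shape[1]),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # A true fundamental matrix has rank 2; enforce it via SVD truncation.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt

F = fundamental_8pt(x1, x2)
# Epipolar residual x2^T F x1 should vanish for every correspondence.
residual = np.abs(np.sum(x2 * (F @ x1), axis=0)).max()
print(residual)
```

With noise-free synthetic correspondences the residual is at machine precision; with real, widely spaced reference views one would add coordinate normalization (Hartley) and robust estimation. In the dynamic setting described above, each independently translating object would contribute its own such F, and the paper shows how two distinct fundamental matrices determine the camera-to-camera transformation.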