The images generated by real-time 3D graphics systems exhibit enormous frame-to-frame coherence, which the conventional graphics pipeline does not exploit. I take advantage of this coherence by decoupling image rendering from image display. My system renders every Nth frame in the conventional manner, and generates the in-between frames with an image warper. The image warper modifies a rendered image so that it is approximately correct for a new viewpoint and view direction.
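The decoupled loop can be sketched as follows. This is an illustrative skeleton only, not the system's actual code: `render`, `warp`, and the choice N = 4 are hypothetical stand-ins for the conventional renderer, the image warper, and the render-to-display frame ratio.

```python
# Sketch of the decoupled rendering/display loop (hypothetical helpers).
# Every Nth frame is rendered conventionally; in-between frames are
# produced by warping the most recent rendered frame to the new view.

N = 4  # illustrative: render every 4th frame conventionally

def render(frame, view):
    """Stand-in for the conventional renderer."""
    return ("rendered", frame, view)

def warp(reference, view):
    """Stand-in for the image warper: reprojects the reference
    image so it is approximately correct for `view`."""
    return ("warped", reference[1], view)

def display_loop(views):
    shown = []
    reference = None
    for frame, view in enumerate(views):
        if frame % N == 0:
            reference = render(frame, view)      # expensive, occasional
            shown.append(reference)
        else:
            shown.append(warp(reference, view))  # cheap, every frame
    return shown

frames = display_loop([f"v{i}" for i in range(8)])
```

Here frames 0 and 4 come from the renderer, while frames 1-3 and 5-7 are warped from the nearest preceding rendered frame.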
My image warper uses McMillan's 3D image warp. Unlike perspective image warps, the 3D image warp can correct for changes in viewpoint, even for objects at different depths. As a result, my system does not require the application programmer to segment the scene into different depth layers, as is required by systems that use a perspective image warp.
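In McMillan's formulation, the per-pixel warp maps a reference-image pixel with generalized disparity d to destination coordinates via x2 ~ H·x1 + d·e, where H and e are precomputed from the reference and destination camera parameters. The sketch below assumes H and e are already given; it is an illustration of the equation's shape, not the dissertation's implementation.

```python
# Per-pixel 3D image warp in McMillan's homogeneous form.
# H: 3x3 matrix combining the two cameras' projections;
# e: epipole direction scaled by the viewpoint translation;
# disparity: per-pixel generalized disparity (larger = nearer).

def matvec(M, v):
    """3x3 matrix times 3-vector, plain Python."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def warp_pixel(H, e, u1, v1, disparity):
    """Map reference pixel (u1, v1) to destination image coordinates."""
    x = matvec(H, [u1, v1, 1.0])
    x2 = [x[i] + disparity * e[i] for i in range(3)]
    return (x2[0] / x2[2], x2[1] / x2[2])  # homogeneous divide
```

Because the disparity term depends on each pixel's depth, nearby pixels shift more than distant ones; this is what lets the 3D warp correct for viewpoint changes where a single perspective warp cannot.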
I attack three major challenges associated with using 3D warping for rendering acceleration: visibility, reconstruction, and performance. I describe how to choose pairs of rendered images so that most of the needed portions of the scene are visible in these images. I describe a framework for the 3D warp reconstruction problem, and develop reconstruction algorithms that produce good quality images. Finally, I describe properties of the 3D warp that could be used to build efficient 3D image warpers in hardware.
My technique can also compensate for rendering-system latency and network latency. I have built a real-time system that demonstrates this capability by displaying rendered images at a remote location, with low latency.
Dissertation (PDF, 4.8 MB)