We address the problem of rendering a scene in one location and displaying it in another, remote location, where the remote user controls the viewpoint. Network latency makes it impossible to accept a new viewpoint, render the complete scene, and return the generated image to the user fast enough to provide immersion. Our render server constructs images and depth maps ("depth images") and sends them over the network to a remote client. The client warps and composites several depth images to form the image displayed to the user. Using multiple images greatly reduces the visual artifacts produced by image warping. The client runs entirely in software and displays images at 6 frames per second.
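The warp-and-composite step described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a simple pinhole camera with known intrinsics `K` and 4x4 camera-to-world poses (all names are illustrative), forward-warps each pixel of one depth image into the new viewpoint, and uses a z-buffer so that warping several reference depth images into the same output buffer composites them with the nearest surface winning at each pixel.

```python
import numpy as np

def warp_depth_image(color, depth, K, src_pose, dst_pose,
                     out_color=None, out_depth=None):
    """Forward-warp one depth image from src_pose to dst_pose.

    color: (H, W, 3) image, depth: (H, W) depth along the camera z-axis,
    K: 3x3 pinhole intrinsics, poses: 4x4 camera-to-world matrices.
    Passing the same out_color/out_depth buffers for several calls
    composites multiple depth images; holes keep depth = inf.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Back-project every pixel to a 3D point in the source camera frame.
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    pts_src = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts_src_h = np.vstack([pts_src, np.ones((1, pts_src.shape[1]))])
    # Source camera -> world -> destination camera.
    pts_dst = (np.linalg.inv(dst_pose) @ src_pose @ pts_src_h)[:3]
    z = pts_dst[2]
    proj = K @ pts_dst
    u = np.round(proj[0] / z).astype(int)
    v = np.round(proj[1] / z).astype(int)
    if out_color is None:
        out_color = np.zeros_like(color)
    if out_depth is None:
        out_depth = np.full((h, w), np.inf)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    cols = color.reshape(-1, 3)
    # Z-buffer composite: the nearest surface wins at each target pixel.
    for i in np.flatnonzero(valid):
        if z[i] < out_depth[v[i], u[i]]:
            out_depth[v[i], u[i]] = z[i]
            out_color[v[i], u[i]] = cols[i]
    return out_color, out_depth
```

Warping a single depth image in this way leaves holes wherever surfaces visible from the new viewpoint were occluded in the reference view; reusing the same output buffers across calls for several reference images is one simple way to fill those holes, which is the motivation for the multiple-image compositing described above.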