Recent advances in virtual-interface and computer technologies, including helmet-mounted displays (HMDs), three-dimensional auditory displays, haptic displays, head- and eye-position tracking devices, and computer-generated imaging techniques, have permitted the development of multi-sensory, interactive virtual environments. Despite the dramatic ability of these environments to represent the perceptual world, they are limited by the problem of time delay: the delay between the input to a system and its corresponding output. In the case of HMDs, for example, a delay elapses between the sampling of head position by a tracking device attached to the user's helmet and the appearance of the correspondingly updated image in the HMD. Such delays cause the image to lag behind the user's head movement, so that the image is displayed in an incorrect position.
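As an illustrative sketch (not drawn from the text above), the angular misregistration produced by such a lag can be approximated by multiplying the head's angular velocity by the total end-to-end delay. The function name, the constant-velocity assumption, and the example numbers below are all hypothetical, chosen only to make the relationship concrete:

```python
def registration_error_deg(head_velocity_deg_per_s: float,
                           delay_ms: float) -> float:
    """Approximate angular lag of the displayed image.

    Assumes the head rotates at a constant angular velocity and that
    the total system delay (tracker sampling + rendering + display
    update) is the only source of error; both are simplifying,
    illustrative assumptions, not a model from the original text.
    """
    return head_velocity_deg_per_s * (delay_ms / 1000.0)

# Example: a 100 deg/s head turn combined with 50 ms of end-to-end
# delay misplaces the image by roughly 5 degrees.
error = registration_error_deg(100.0, 50.0)
print(f"approximate image lag: {error:.1f} deg")
```

This first-order estimate is why even modest delays matter: head rotations of 100 deg/s or more are common, so tens of milliseconds of delay translate into several degrees of visual error.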