Non-Uniform Crosstalk Reduction for Dynamic Scenes

Abstract

Stereo displays suffer from crosstalk, an effect that reduces or even inhibits the viewer's ability to correctly perceive depth. Previous work on software crosstalk reduction focused on preprocessing static scenes viewed from a fixed viewpoint. However, in virtual environments scenes are dynamic and are viewed from varying viewpoints in real time on large display areas.

In this paper, three methods are introduced for reducing crosstalk in virtual environments. First, a non-uniform crosstalk model is described that can be used to accurately reduce crosstalk on large display areas. Second, a novel temporal algorithm addresses the problems that arise when reducing crosstalk in dynamic scenes, eliminating the high-frequency jitter caused by the erroneous assumption of static scenes. Finally, a perception-based metric is developed that allows us to quantify crosstalk. We provide a detailed description of the methods, discuss their tradeoffs, and compare their performance with existing crosstalk reduction methods.
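To illustrate the general idea of subtractive crosstalk compensation with a spatially varying (non-uniform) leakage term, a minimal sketch follows. It assumes a simple additive leakage model in which each eye perceives its intended image plus a per-pixel fraction (leak_map) of the other eye's image; the function name, the NumPy formulation, and the linear model itself are illustrative assumptions, not the exact algorithm of the paper.

    # Sketch: subtractive crosstalk compensation with a per-pixel leakage map.
    # Assumed model: perceived_left ~= displayed_left + leak_map * displayed_right
    # (and symmetrically for the right eye), so we pre-subtract the predicted leak.
    import numpy as np

    def compensate(left, right, leak_map):
        """Pre-compensate a stereo pair for crosstalk.

        left, right : float arrays in [0, 1], the intended stereo images.
        leak_map    : float array in [0, 1], per-pixel leakage fraction,
                      non-uniform across the display area.
        Returns the pair to send to the display, clipped to the displayable
        range; residual crosstalk remains wherever clipping occurs.
        """
        comp_left = np.clip(left - leak_map * right, 0.0, 1.0)
        comp_right = np.clip(right - leak_map * left, 0.0, 1.0)
        return comp_left, comp_right

In a dynamic scene this compensation would be recomputed every frame; the paper's temporal algorithm addresses the jitter that such frame-by-frame correction can introduce, which the static model above does not handle.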

Publication

  • Smit, F. A., van Liere, R., Froehlich, B.
    Non-Uniform Crosstalk Reduction for Dynamic Scenes
    Proceedings of the IEEE Virtual Reality 2007 Conference, pp. 139–146, 2007.
    [preprint][talk][video]