Research Project: Multi-Frame Rate Rendering and Display

We introduce a new concept for improved interaction with complex scenes: multi-frame rate rendering and display. Multi-frame rate rendering produces a multi-frame rate display by optically or digitally compositing the results of asynchronously running image generators. Interactive parts of a scene are rendered at the highest possible frame rates, while the rest of the scene is rendered at regular frame rates. The composition of image components generated with different update rates may cause certain visual artifacts, which can be partially overcome with our rendering techniques. The results of a user study confirm that multi-frame rate rendering can significantly improve interaction performance, while slight visual artifacts are either not even recognized or gladly tolerated by users. Overall, digital composition shows the most promising results, since it introduces the fewest artifacts, although it requires the transfer of frame buffer content between different image generators.


Project Leaders:
  • Dipl.-Inf. Jan P. Springer
  • Prof. Dr. Bernd Fröhlich
Project Participants:
  • Stephan Beck
  • Felix Weiszig


The interactive, high-quality visualization of large models remains a challenging problem, even though the capabilities of graphics processing units (GPUs) have improved dramatically over the past years. Unfortunately, expectations of visual quality have increased even more, which generally affects the interactivity of applications. Visual quality and interaction quality seem to lie at opposite ends of a continuum. Visual quality depends mainly on scene complexity (e.g., the number of primitives), the rendering method, the illumination and shading model, and the display resolution. While all of these factors might also improve the interaction quality, they often lead to low frame rates when excellent visual quality is desired. Interaction quality depends heavily on immediately incorporating user actions into the simulation and image-generation process, which demands high frame rates.


Our multi-frame rate approach uses multiple image generators to render the interactive parts of a scene (e.g., menus, cursors, interaction-related visualizations, and scene objects currently manipulated by the user) at the highest possible frame rates, while the rest of the scene is rendered at regular frame rates. The results of the individual image generators are optically or digitally combined into a multi-frame rate image. The optical combination can be achieved by using multiple projectors displaying completely overlapped images on the same screen. Digital image composition requires either dedicated hardware, such as the Lightning2 system or HP's Sepia/Sepia2 systems, or the exchange of color and depth information between the different image generators. Our approach for digitally compositing asynchronously generated images can be seen as an unconventional case of the Sort-Last technique, which commonly focuses on balancing workload between multiple image generators to improve the overall frame rate. Instead, we purposely generate a highly unbalanced load for the image generators to considerably improve the interactivity and responsiveness of an application.
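The core of digital composition is a per-pixel depth comparison between the color/depth buffers of the fast and the slow image generator. The following is a minimal, hypothetical CPU-side sketch of that comparison using NumPy; the actual system performs this on the GPU after transferring frame buffer content, but the compositing rule is the same: wherever the fast generator's fragment is nearer to the viewer, its color replaces the slow generator's color.

```python
# Hypothetical sketch of depth-based digital composition of two
# asynchronously rendered frames (names and shapes are illustrative).
import numpy as np

def depth_composite(fast_rgb, fast_depth, slow_rgb, slow_depth):
    """Combine two rendered frames per pixel by comparing depth values."""
    # Boolean mask: True where the fast generator's fragment is nearer.
    nearer = fast_depth < slow_depth
    # Broadcast the mask over the color channels to pick the winner.
    out_rgb = np.where(nearer[..., None], fast_rgb, slow_rgb)
    out_depth = np.where(nearer, fast_depth, slow_depth)
    return out_rgb, out_depth

# Tiny 1x2-pixel example: pixel 0 is nearer in the fast frame (red),
# pixel 1 is nearer in the slow frame (blue).
fast_rgb = np.array([[[255, 0, 0], [255, 0, 0]]], dtype=np.uint8)
fast_depth = np.array([[0.2, 0.9]])
slow_rgb = np.array([[[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)
slow_depth = np.array([[0.5, 0.4]])

rgb, depth = depth_composite(fast_rgb, fast_depth, slow_rgb, slow_depth)
# rgb[0, 0] is red (fast frame wins), rgb[0, 1] is blue (slow frame wins)
```

Because the two buffers are produced asynchronously, the slow generator's depth values may lag behind the current state of the manipulated object, which is one source of the visual artifacts discussed below.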


Our approach is motivated by a number of observations made with different application prototypes in the automotive as well as in the oil and gas industry, where often highly complex scenes are explored and manipulated on large projection-based displays:



  • Scenes are mostly static and only small parts of the scene are manipulated, like an oil well or an engine part.


  • System control is quite often used, but menus, sliders, etc. are difficult to manipulate at low frame rates.


  • Head tracking is rarely used. Even when it is used, head-tracked users move around very little in most cases. Head tracking seems to work quite well at low frame rates, while selection, object manipulation, and system control become increasingly difficult.


  • Navigation often involves only the coarse adjustment of viewpoint positions, which can be achieved at relatively low frame rates.


Based on these observations we realized that, in the described scenarios, object selection, object manipulation, and system control require higher frame rates than navigation and head tracking to work reasonably well. Current systems either ignore this insight completely, sacrificing interactivity, or address it by rendering the whole scene at an appropriate frame rate at reduced visual quality, e.g., by sending fewer polygons into the graphics pipeline.


We introduce the concept of multi-frame rate rendering to improve interaction with complex scenes while maintaining visual quality. Multi-frame rate rendering introduces certain visual artifacts in some situations, caused by the different update rates of scene parts. We discuss how different approaches deal with this problem, when it occurs, and how it can be ameliorated. Our user study confirms that users benefit from the improved interaction quality, while certain visual artifacts are either not even recognized or gladly tolerated. Overall, digital composition shows the most promising results, since it introduces the fewest artifacts at the expense of transferring frame buffer content between different image generators. With the recent ubiquitous availability of single-system multi-GPU configurations, our approach becomes easy and efficient to implement. For some application types it may be favorable over the commonly used Sort-First-based load balancing approaches for multi-GPU environments.

Optical Superposition

Whole scene using two projectors
Contribution of each projector

Digital Composition

object selection (left: as perceived by user; right: slow client contribution in grey scale)
deselected object (left: as perceived by user; right: slow client contribution in grey scale)


  • J. P. Springer, S. Beck, F. Weiszig, D. Reiners, and B. Fröhlich:
    Multi-Frame Rate Rendering and Display.
    In Proceedings of the IEEE Virtual Reality 2007 Conference, pages 195–202, 2007.
  • Talk at VR2007, Virtual Reality 2007 Conference