Three collocated telepresence participants (including the camera viewpoint) meet two remote participants in a shared virtual scene. Since the viewing perspectives of the collocated participants are similar, our output-sensitive avatar reconstruction approach allows for optimized remote avatar reconstruction, reduced network bandwidth utilization, and reduced round-trip times for both parties in group-to-group telepresence scenarios.
In this paper, we propose a system design and implementation for the output-sensitive reconstruction, transmission, and rendering of 3D video avatars in distributed virtual environments. In our immersive telepresence system, users are captured by multiple RGBD sensors connected to a server that performs geometry reconstruction based on viewing feedback from remote telepresence parties. This feedback-and-reconstruction loop enables visibility-aware level-of-detail reconstruction of both the geometry and texture data of video avatars, and accounts for individual users as well as groups of collocated users. Our evaluation reveals that our approach significantly reduces reconstruction times, network bandwidth requirements, round-trip times, and rendering costs in many situations.
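To illustrate the feedback-driven reconstruction idea, the following sketch shows one plausible form of visibility-aware level-of-detail selection: remote parties periodically report their viewing positions, and the capture server chooses a reconstruction resolution per avatar based on the nearest viewer. All names, distance thresholds, and resolution levels are illustrative assumptions, not the paper's actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class ViewFeedback:
    """Hypothetical feedback message from a remote viewer (assumed format)."""
    position: tuple  # viewer position in the shared virtual scene (x, y, z)

def select_lod(avatar_pos, feedbacks, levels=(0.01, 0.02, 0.04)):
    """Pick a voxel size in meters for reconstructing one avatar:
    finer geometry for nearby viewers, coarser for distant ones.
    `levels` and the distance cutoffs below are assumed values."""
    if not feedbacks:
        return levels[-1]  # no viewer feedback yet: reconstruct coarsely
    d = min(math.dist(avatar_pos, f.position) for f in feedbacks)
    if d < 2.0:
        return levels[0]   # close-up viewer: full resolution
    if d < 6.0:
        return levels[1]   # mid-range viewer: intermediate resolution
    return levels[2]       # distant viewers only: coarse resolution
```

Because collocated users share similar viewing perspectives, a server could aggregate their feedback into a single representative view per group, so one reconstruction pass serves the whole group.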