Project Output-Sensitive Avatars for Everyone

Prof. Dr. Bernd Fröhlich
M.Sc. Adrian Kreskowski
M.Sc. Gareth Rendle
M.Sc. Anton Lammert

Degree | Study Programme                    | Examination Regulations | ECTS
M.Sc.  | Computer Science for Digital Media | PV18 and lower          | 15
M.Sc.  | Computer Science for Digital Media |                         |
M.Sc.  | Computer Science and Media         | all                     | 15
M.Sc.  | Human-Computer Interaction         | PV17 and lower          | 15
M.Sc.  | Human-Computer Interaction         | PV19                    | 12/18


Volumetric avatars are real-time 3D reconstructions that allow users of Virtual Reality (VR) systems to be accurately represented in a virtual environment. By fusing colour and depth contributions from RGBD cameras, one can produce volumetric avatar streams that enable rich communication between users in the virtual environment, conveying fluid full-body motions, gestures, and facial expressions.

While photo-realism is a desirable characteristic of volumetric avatars in many use cases, there are some contexts in which detailed reconstructions may be unsuitable or unnecessary. For example, virtual environments are often simplified versions of real rooms, where photo-realistic avatars may appear incongruent, making the VR experience less plausible. Moreover, high-resolution textures increase bandwidth requirements when transmitting avatar streams. To address this, parts of the avatar could be textured with a reduced, cartoon-like colour palette.  
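To illustrate the palette-reduction idea, one simple approach is per-channel posterisation, which snaps each colour channel to a small number of discrete levels. The sketch below is a hypothetical CPU-side illustration (in practice this would more likely run in a shader), and the level count is an assumed parameter, not a value from the project:

```cpp
#include <array>
#include <cstdint>

// Posterise an 8-bit RGB colour: quantise each channel to `levels`
// evenly spaced values, producing a reduced, cartoon-like palette.
std::array<std::uint8_t, 3> posterise(std::array<std::uint8_t, 3> rgb,
                                      int levels = 4) {
    const float step = 255.0f / static_cast<float>(levels - 1);
    for (auto& c : rgb) {
        const int bucket = static_cast<int>(c / step + 0.5f);  // round to nearest level
        c = static_cast<std::uint8_t>(bucket * step);
    }
    return rgb;
}
```

Fewer levels yield a flatter, more stylised look while also making the texture data more compressible for transmission.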

In these cases, a family of techniques known as Non-Photorealistic Rendering (NPR) could prove useful. NPR techniques do not attempt to simulate the real world in as much detail as possible; instead, they are designed to represent objects in an abstract manner. Depending on the context, their aim may be to improve understanding and recognition of surface features, to reveal hidden parts of an object, or to create visually consistent artistic styles. As part of the NPR4VR project in the 2022 summer semester, we investigated how different NPR effects can be applied to 3D geometry in stereoscopic viewing contexts.
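One classic NPR effect is silhouette outlining based on depth discontinuities: wherever the depth buffer jumps sharply between neighbouring pixels, an object boundary is likely. The function below is a hypothetical sketch of this idea, not the project's actual method; the edge threshold is an assumed parameter:

```cpp
#include <cmath>
#include <vector>

// Mark pixels whose depth differs from the right or bottom neighbour
// by more than `threshold` as silhouette-edge pixels.
std::vector<bool> depth_edges(const std::vector<float>& depth,
                              int width, int height, float threshold) {
    std::vector<bool> edge(depth.size(), false);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const float d = depth[y * width + x];
            if (x + 1 < width &&
                std::fabs(d - depth[y * width + x + 1]) > threshold)
                edge[y * width + x] = true;
            if (y + 1 < height &&
                std::fabs(d - depth[(y + 1) * width + x]) > threshold)
                edge[y * width + x] = true;
        }
    }
    return edge;
}
```

The resulting edge mask could then be drawn as dark outlines over the shaded image, one building block of a toon-style rendering.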


As well as a willingness to work in a team and enthusiasm for learning about and developing rendering techniques on cutting-edge hardware, you should have the following competencies:

  • Solid programming skills in C++ are required
  • Experience in the field of real-time computer graphics or GPGPU programming or shader programming is helpful, but not required

If you are in doubt as to whether you fulfil the requirements, or if you have any further questions regarding the project, we are happy to talk with you before or after the project fair on the 9th of October, or afterwards via email to adrian.kreskowski[at]


The final assessment will be based on each team member's project contributions, including:

  • Active participation in the project during and in between weekly meetings
  • Design, implementation, and evaluation of spatially and temporally consistent non-photorealistic rendering effects usable in a cross-platform context within Unity
  • Intermediate talks
  • Intermediate and final project presentations
  • Documentation in the form of a short paper

If you are excited about real-time rendering and telepresence, we would be happy to see you at the project fair!