Virtual reality is today often associated with users wearing head-mounted displays (HMDs). When observing such fully immersed HMD users, one often wonders what they perceive. What do they explore? Which virtual world are they navigating through? The best way to really understand their experience is to meet them in the same virtual environment.
We are convinced that virtual reality is a social medium that enables people to meet and collaborate in shared virtual worlds. In collocated settings, HMD users are often represented as stylized or computer-generated (CG) avatars in order to see each other. Being represented as a CG avatar might be sufficient in some applications. However, such representations cannot convey body language or facial expressions well. Instead of representing users as CG avatars, modern telepresence systems use real-time 3D capturing and reconstruction technology and represent the users as realistic 3D video avatars. In fact, meeting a user's 3D video avatar inside a telepresence system almost feels like meeting the person in reality. Consequently, HMD users should be represented using similar technology in collaborative applications.
In this project, we will allow collocated HMD users to see themselves and each other inside shared virtual environments as 3D video avatars using lightweight 3D capturing technology. To achieve this, we will equip HMDs with color and depth (RGBD) sensors. The RGBD sensors thus become a user's additional eyes, allowing other users standing next to them to be captured and integrated into their view.
We will start with a literature review and presentations. Next, we will learn about the virtual reality technology available in our labs. Eventually, we will get to the implementation. We will mount RGBD sensors for 3D capturing onto the HMDs and calibrate them using our open-source calibration frameworks. These lightweight 3D capturing systems will allow us to create spatially consistent 3D reconstructions of the physical environment in real time. In particular, we will design and implement a novel 3D reconstruction processing pipeline that creates 3D video avatars of the captured collocated persons using our 3D capturing setups. We will then connect our pipeline to our existing geometry streaming module, which sends the 3D video avatars to our virtual reality software Avango/Guacamole. In Avango/Guacamole, the 3D video avatars will then be automatically integrated into the virtual scene and displayed in stereoscopic 3D on the HMD.
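A core building block of such a reconstruction pipeline is back-projecting each RGBD frame into a colored 3D point cloud before meshing or streaming. The sketch below illustrates this step under a standard pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) and the depth encoding (millimeters, 0 marking invalid samples) are hypothetical example assumptions, not the actual values of our sensors, which would come from the calibration step.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// One colored 3D point of a reconstructed video avatar.
struct Point { float x, y, z; std::uint8_t r, g, b; };

// Back-project a depth image into camera-space 3D points using the
// pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
// Intrinsics (fx, fy, cx, cy) would come from sensor calibration.
std::vector<Point> depthToPoints(const std::vector<std::uint16_t>& depth_mm,
                                 const std::vector<std::array<std::uint8_t, 3>>& color,
                                 int width, int height,
                                 float fx, float fy, float cx, float cy) {
    std::vector<Point> points;
    points.reserve(depth_mm.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            const std::uint16_t d = depth_mm[v * width + u];
            if (d == 0) continue;           // 0 marks an invalid depth sample
            const float z = d * 0.001f;     // millimeters -> meters
            const auto& c = color[v * width + u];
            points.push_back({(u - cx) * z / fx,
                              (v - cy) * z / fy,
                              z, c[0], c[1], c[2]});
        }
    }
    return points;
}
```

In a real pipeline, per-pixel clouds from several calibrated sensors would additionally be transformed into a common world coordinate system and filtered before being handed to the geometry streaming module.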
You will learn about and work on the following topics:
- Collaborative virtual reality and telepresence technology
- Setup and calibration of 3D capturing systems using open-source frameworks
- Design and implementation of a real-time 3D reconstruction pipeline that creates 3D video avatars from RGBD sensor streams
- Design and implementation of a streaming server which interfaces our existing geometry streaming module
Requirements:
- Proficient programming skills in C++ (e.g., successful completion of the course on programming languages)
- Solid understanding of computer graphics (e.g., successful completion of the course on computer graphics)
Assessment:
Active participation in the project and in software development, development of a functional prototype, two presentations, a final presentation, and written documentation