Computer Vision

In recent years, the rapid development of flexible, lightweight, and inexpensive sensors has led to an ever-growing amount of image data. The Computer Vision (CV) group focuses on the automated analysis of such image-based information. A major challenge is the precise and efficient interpretation of this flood of data for a variety of demanding applications.

We develop, for example, innovative computer algorithms to derive spatial information about objects from digital images. This allows us to determine the camera motion in dynamic video sequences and to reconstruct object surfaces from stereoscopic images. We also apply methods of model-based image analysis for object recognition and real-world modeling.
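The core geometric idea behind surface reconstruction from stereoscopic images can be sketched with the standard depth-from-disparity relation for a rectified stereo pair. This is a minimal illustration, not the group's actual pipeline; all parameter values below are hypothetical.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair.

    disparity_px    : horizontal offset of a matched point between
                      the left and right image (pixels)
    focal_length_px : camera focal length (pixels)
    baseline_m      : distance between the two camera centers (meters)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative values: f = 800 px, baseline = 0.12 m, disparity = 16 px
z = depth_from_disparity(16.0, 800.0, 0.12)  # -> 6.0 meters
```

Points with larger disparity are closer to the cameras; real systems compute a dense disparity map by matching corresponding points across the whole image pair.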

The new CV lab on the first floor of the Digital Bauhaus Lab (DBL) is equipped with six workstations that have fast access to a high-performance computing server and a research platform for autonomous flight systems. In general, the facilities are intended for students working on automatic image analysis and object recognition, photogrammetric computer vision, and parallel and distributed computing. For many applications, our experience in geographic information systems (GIS), building stock surveying, and geodesy proves valuable.

A current research topic is the autonomous navigation of Unmanned Aerial Systems (UAS). These sensor platforms have proven useful for various vision-based tasks, such as the deformation monitoring of large-scale structures (Hallermann et al., 2014). They can also be used for the autonomous exploration of unknown indoor and outdoor environments, e.g. the analysis of critical infrastructure after natural or technical disasters, dangerous search and rescue tasks, and cave exploration in archaeology.

However, stable dynamics and safe navigation require several sensing, computation, control, and planning tasks to run on board in real time. To prevent potential collisions and reduce drift artifacts, additional sensors (e.g. laser scanners and stereo cameras) should be integrated. The fast UAS dynamics require accelerating the image analysis algorithms, optimization, and sensor fusion techniques using modern concepts of parallel programming.

Selected Projects

Figure 1: The AscTec Pelican research platform
Figure 2: First outdoor test flight

SLAM for UAS (Dr. Jens Kersten, Prof. Volker Rodehorst)

The project covers challenging topics in the field of Simultaneous Localization And Mapping (SLAM) using Unmanned Aerial Systems (UAS). The Computer Vision (CV) group operates the quadrocopter Pelican from AscTec (see Figure 1).

The integrated guidance system uses gyroscopes, accelerometers, a pressure sensor, a compass, and a GPS receiver for navigation. Since satellite-based GPS signals are not always available (e.g. in indoor applications), the missing 3D position information must be compensated for. Here, the Pelican research platform supports high payloads and offers easy integration of custom sensors (e.g. a laser scanner and a stereo camera) as well as substantial onboard computing power (Intel Core i7 quad-core with 4 GB RAM).

In order to realize autonomous exploration and mapping tasks together with reliable obstacle avoidance, we implement advanced computer vision modules (e.g. based on Kalman or particle filters).
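To illustrate the filtering idea, the sketch below shows one predict/update cycle of a generic linear Kalman filter applied to a simple 1D constant-velocity model. This is a hedged, minimal example under assumed noise parameters, not the project's actual state estimator, which would fuse multiple sensors in a richer state space.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : current state estimate and its covariance
    z    : new measurement vector
    F, Q : state transition model and process noise covariance
    H, R : measurement model and measurement noise covariance
    """
    # Predict: propagate state and covariance through the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement
    y = z - H @ x                   # innovation (measurement residual)
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative 1D constant-velocity model (state = [position, velocity])
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-4 * np.eye(2)                # assumed process noise
H = np.array([[1.0, 0.0]])          # we observe position only
R = np.array([[0.05]])              # assumed measurement noise
x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.22, 0.29, 0.41]:   # hypothetical noisy position readings
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
```

After a few updates, the filter tracks the noisy position readings and also infers a positive velocity, even though velocity is never measured directly; this indirect state inference is what makes such filters useful for fusing heterogeneous UAS sensors.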


N. Hallermann, G. Morgenthal and V. Rodehorst: Vision-based deformation monitoring of large scale structures using Unmanned Aerial Systems, 37th IABSE Symposium, Engineering for Progress, Nature and People, Madrid, Spain, 2014, pp. 1-8.