The presented thesis addresses a new technology for active range estimation, so-called time-of-flight (TOF) cameras. Based on the time-of-flight principle, these cameras acquire distance information for many points in parallel and thus, in contrast to other approaches, capture an entire scene in real time. Consequently, TOF cameras are well suited for many real-time systems in the areas of automation and interaction, where they are used for tasks such as object or gesture recognition. Due to their novelty, however, the accuracy of time-of-flight sensors has barely been studied to date.
Experiments in the context of the presented thesis revealed error sources whose characteristics result in distance deviations of several centimeters. These error sources therefore have a significant impact on the accuracy of the acquired distance information and, consequently, on the results of vision systems. In addition, current TOF cameras provide only a low resolution compared to other range sensing approaches. Although this circumstance is not an error source in the strict sense, it may negatively affect the accuracy of automation algorithms and therefore motivates appropriate pre-processing of the acquired data.
As a work of basic research, the presented thesis investigates the accuracy of current camera models as well as the basic processing steps necessary to enhance range images for subsequent processing.
In the context of camera accuracy, the thesis primarily focuses on the systematic error characteristics and discusses the design of phenomenological calibration models covering demodulation-related as well as intensity-related deviations. Furthermore, it addresses the compensation of TOF-specific motion artifacts and describes a compensation approach based on optical motion estimation as well as a theoretical axial motion model.
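A phenomenological calibration model of the kind described above does not explain the systematic deviation physically; it fits the observed deviation against reference measurements and subtracts the fitted model from raw distances. The following is a minimal sketch using a low-order polynomial; the reference data is hypothetical, and real calibration models often use B-splines or lookup tables instead:

```python
import numpy as np

# Hypothetical reference measurements: raw camera distance vs. ground truth [m]
measured = np.array([0.52, 1.04, 1.49, 2.06, 2.55, 3.01, 3.58, 4.03])
true     = np.array([0.50, 1.00, 1.50, 2.00, 2.50, 3.00, 3.50, 4.00])

# Fit a polynomial to the systematic deviation as a function of raw distance
coeffs = np.polyfit(measured, measured - true, deg=3)

def correct(d):
    """Subtract the modeled systematic deviation from a raw distance."""
    return d - np.polyval(coeffs, d)
```

The same scheme extends to intensity-related deviations by fitting the deviation over both distance and measured amplitude.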
In the context of data processing, the presented thesis deals with the reduction of noise effects as well as the algorithmic refinement of distance information. Regarding distance refinement, two approaches are discussed: explicit surface approximation using Moving Least Squares surfaces and edge-preserving data upscaling in image space. Furthermore, the thesis covers the fusion of range images with supplementary information provided by additional imaging sensors, in order to supply multi-modal data for sophisticated vision systems.
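The edge-preserving idea behind such refinement can be sketched with a bilateral-style filter on a depth image: spatial weights favor nearby pixels, while range weights suppress averaging across depth discontinuities, so noise in flat regions is reduced without blurring object edges. This is a generic sketch, not the specific algorithm of the thesis; all parameter values are assumptions:

```python
import numpy as np

def bilateral_depth(depth, radius=2, sigma_s=1.5, sigma_r=0.05):
    """Edge-preserving smoothing of a depth image (bilateral-filter sketch).

    radius:  half-width of the filter window [pixels]
    sigma_s: spatial Gaussian scale [pixels]
    sigma_r: range Gaussian scale [m]; small values preserve depth edges
    """
    h, w = depth.shape
    out = np.empty_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(depth, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights vanish across large depth jumps, keeping edges sharp
            rng = np.exp(-(win - depth[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * win) / np.sum(wgt)
    return out
```

The same joint-weighting idea underlies upscaling schemes that guide the range weights by a high-resolution intensity image from a second sensor, which connects naturally to the sensor fusion discussed above.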