Projects

About Light Fields

Light-field imaging is a research field with applicability in a variety of imaging areas, including 3D cinema, entertainment, robotics, and any task requiring range estimation. In contrast to binocular or multi-view stereo approaches, capturing light fields means densely observing a target scene through a window of viewing directions. A principal benefit of light-field imaging for range computation is that it eliminates the error-prone and computationally expensive process of establishing correspondences. The nearly continuous space of observation makes it possible to compute highly accurate, dense depth maps without any matching.
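In a densely sampled light field, every scene point traces a straight line across an epipolar plane image (EPI), and the slope of that line is proportional to the point's disparity, so estimating the local slope replaces correspondence matching. Below is a minimal sketch of slope estimation via the structure tensor, in the spirit of EPI-based analysis; the smoothing scales and the sign convention for the view axis are assumptions, not the exact method of any of the papers on this page.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def epi_disparity(epi, inner_scale=0.8, outer_scale=1.6):
        """Per-pixel disparity from one grayscale EPI of shape (views, width)."""
        smoothed = gaussian_filter(epi.astype(np.float64), inner_scale)
        ds = sobel(smoothed, axis=0)   # derivative along the view axis s
        dx = sobel(smoothed, axis=1)   # derivative along the spatial axis x
        # Structure tensor entries, averaged over a local neighborhood.
        Jxx = gaussian_filter(dx * dx, outer_scale)
        Jss = gaussian_filter(ds * ds, outer_scale)
        Jxs = gaussian_filter(dx * ds, outer_scale)
        # Dominant gradient orientation; iso-intensity lines run orthogonal
        # to it, so their slope (= disparity) is -tan(theta). The sign
        # depends on how the views are ordered.
        theta = 0.5 * np.arctan2(2.0 * Jxs, Jxx - Jss)
        disparity = -np.tan(theta)
        # Coherence in [0, 1]: how reliable the local orientation estimate is.
        coherence = np.sqrt((Jxx - Jss) ** 2 + 4.0 * Jxs ** 2) / (Jxx + Jss + 1e-12)
        return disparity, coherence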

Source: Light-field camera design for high-accuracy depth estimation, M. Diebold, O. Blum, M. Gutsche, S. Wanner, C. Garbe, H. Baker, B. Jähne,
Videometrics, Range Imaging, and Applications XIII, SPIE (2015)

New Light Fields Benchmark

In computer vision communities such as stereo, optical flow, or visual tracking, commonly accepted and widely used benchmarks have enabled objective comparison and boosted scientific progress. In the emerging light field community, a comparable benchmark and evaluation methodology is still missing. The performance of newly proposed methods is often demonstrated qualitatively on a handful of images, making quantitative comparison and targeted progress very difficult. To overcome these difficulties, we propose a novel light field benchmark. We provide 24 carefully designed synthetic, densely sampled 4D light fields with highly accurate disparity ground truth.
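With per-pixel disparity ground truth, submitted disparity maps can be scored quantitatively. The sketch below shows two metrics commonly reported on such benchmarks, mean squared error and the fraction of badly estimated pixels; the x100 scaling and the 0.07 threshold follow common conventions and should be checked against the official evaluation toolkit.

    import numpy as np

    def mse_x100(disp, gt, mask=None):
        # Mean squared disparity error, scaled by 100.
        err = (disp - gt) ** 2
        if mask is not None:
            err = err[mask]
        return 100.0 * float(np.mean(err))

    def badpix(disp, gt, threshold=0.07, mask=None):
        # Fraction of pixels whose absolute disparity error exceeds the threshold.
        bad = np.abs(disp - gt) > threshold
        if mask is not None:
            bad = bad[mask]
        return float(np.mean(bad))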

Source: A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields, Katrin Honauer, Ole Johannsen, Daniel Kondermann, Bastian Goldluecke,
Asian Conference on Computer Vision (2016)

Old Light Fields Benchmark

We present a benchmark database to compare and evaluate existing and upcoming algorithms tailored to light-field processing. The data is characterized by a dense sampling of the light fields, which best fits current plenoptic cameras and is a property not found in current multi-view stereo benchmarks. It allows the disparity space to be treated as continuous, and it enables algorithms based on epipolar plane image (EPI) analysis without having to refocus first. All datasets provide ground-truth depth for at least the center view, while some have additional segmentation data available.
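The light fields of this benchmark are distributed as HDF5 files. A minimal sketch of loading one and slicing out an EPI follows; the file and dataset names ("buddha.h5", "LF", "GT_DEPTH") and the array layout are illustrative assumptions that should be checked against the actual release.

    import h5py
    import numpy as np

    with h5py.File("buddha.h5", "r") as f:        # hypothetical file name
        lf = np.asarray(f["LF"])                  # assumed shape (s, t, y, x, 3)
        gt = np.asarray(f["GT_DEPTH"])            # assumed: depth of the center view

    s_c, t_c = lf.shape[0] // 2, lf.shape[1] // 2 # odd grid -> well-defined center view
    center_view = lf[s_c, t_c]

    # A horizontal EPI: fix the vertical view index and one image row.
    row = lf.shape[2] // 2
    epi = lf[s_c, :, row]                         # shape (num_views, width, 3)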

Source: Datasets and Benchmarks for Densely Sampled 4D Light Fields, Sven Wanner, Stephan Meister and Bastian Goldluecke,
Vision, Modeling, and Visualization (2013)

HCI Benchmark

Recent advances in autonomous driving require increasingly realistic reference data, even for difficult situations such as low light and bad weather. We present a new stereo and optical flow dataset to complement existing benchmarks. It was specifically designed to be representative of urban autonomous driving, including realistic, systematically varied radiometric and geometric challenges that were previously unavailable.
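For reference, optical flow results on benchmarks like this are typically scored with the average endpoint error against the ground truth. The sketch below shows that standard metric only; it is not this benchmark's exact evaluation protocol, which additionally models ground-truth uncertainties.

    import numpy as np

    def avg_endpoint_error(flow, gt_flow, valid=None):
        """Average endpoint error; flow arrays have shape (H, W, 2) holding (u, v)."""
        epe = np.linalg.norm(flow - gt_flow, axis=2)
        if valid is not None:
            epe = epe[valid]     # restrict to pixels with reliable ground truth
        return float(np.mean(epe))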

Source: The HCI Benchmark Suite: Stereo and Flow Ground Truth with Uncertainties for Urban Autonomous Driving, Kondermann, D., Nair, R., Honauer, K., Krispin, K., Andrulis, J., Brock, A., Güssefeld, B., Rahimimoghaddam, M., Hofmann, S., Brenner, C. & Jähne, B.,
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2016)

Rotational Light Fields

We present a novel method to reconstruct depth information from data acquired with a circular camera motion, termed circular light fields. With this approach it is possible to determine the full 360° view of target objects. The proposed method finds the trajectories of 3D points in the EPIs by means of a modified Hough transform. For this purpose, binary EPI-edge images are used, which not only yield reliable depth information but also overcome the limitation of constant intensity along trajectories. Experimental results on synthetic and real datasets demonstrate the quality of the proposed algorithm.
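Under circular camera motion the trajectories in an EPI are curves rather than straight lines, which is why a modified Hough transform is needed. Purely to illustrate the pipeline of edge detection followed by a Hough vote, the sketch below runs the classic straight-line Hough transform on a binary EPI-edge image, as would apply to a translational light field; scikit-image is assumed.

    import numpy as np
    from skimage.feature import canny
    from skimage.transform import hough_line, hough_line_peaks

    def epi_line_trajectories(epi_gray, sigma=1.5):
        """Detect straight trajectories in a grayscale EPI of shape (views, width)."""
        edges = canny(epi_gray, sigma=sigma)          # binary EPI-edge image
        tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
        hspace, angles, dists = hough_line(edges, theta=tested_angles)
        # Each peak is one (angle, dist) line; its slope encodes the
        # disparity of one scene point.
        _, peak_angles, peak_dists = hough_line_peaks(hspace, angles, dists)
        return list(zip(peak_angles, peak_dists))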

Light Field Calibration

Camera calibration for light-field data poses stricter constraints on the accuracy of the calibration method, because of the increased estimation precision that light-field data makes possible. To improve localization accuracy in calibration scenarios, we have developed a robust, fractal, self-identifying calibration target to be used in place of the classic checkerboard target.
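The fractal target itself is a custom design. As an analogous, widely available self-identifying pattern, the sketch below calibrates a camera from ChArUco detections, where every corner is individually identifiable so partially visible targets still contribute observations. It uses the legacy cv2.aruco API of opencv-contrib-python (pre-4.7); the board geometry and image names are hypothetical.

    import cv2

    aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_100)
    board = cv2.aruco.CharucoBoard_create(7, 5, 0.04, 0.03, aruco_dict)

    all_corners, all_ids, image_size = [], [], None
    for path in ["view_00.png", "view_01.png"]:       # hypothetical image names
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        image_size = gray.shape[::-1]
        corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
        if ids is None:
            continue
        # Refine marker detections into uniquely identified chessboard corners.
        n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(
            corners, ids, gray, board)
        if n > 3:
            all_corners.append(ch_corners)
            all_ids.append(ch_ids)

    rms, K, dist, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
        all_corners, all_ids, board, image_size, None, None)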

Acquisition Equipment

The acquisition of light field data is very similar for synthetic and real-world scenes. The camera is moved on an equidistant grid parallel to its own sensor plane, and an image is taken at each grid position. Although not strictly necessary, an odd number of grid positions is used in each movement direction, since there then exists a well-defined center view, which simplifies processing. Alternatively, acquisition with a camera array is possible; in this case, calibration becomes a crucial part of the process.
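A minimal sketch of generating such a scan pattern follows; the grid size and spacing are illustrative assumptions, not our gantry's actual parameters.

    import numpy as np

    def gantry_positions(n=9, spacing_mm=5.0):
        """Equidistant n x n grid of camera offsets, parallel to the sensor plane.

        n is odd so that index (n // 2, n // 2) is a well-defined center view.
        Returns an (n, n, 2) array of (x, y) offsets in millimetres,
        centered on the middle position.
        """
        assert n % 2 == 1, "odd grid -> well-defined center view"
        offsets = (np.arange(n) - n // 2) * spacing_mm
        xx, yy = np.meshgrid(offsets, offsets)
        return np.stack([xx, yy], axis=-1)

    positions = gantry_positions(n=9, spacing_mm=5.0)
    center = positions[9 // 2, 9 // 2]       # (0.0, 0.0): the center view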