Proj_AboutLF

In considering the light field, we first need to think about light in a ray representation. Light fills space in all directions with rays of varying intensity. These rays spread without interfering with each other, traveling independently through space. Light hitting an object surface at position (X_w, Y_w, Z_w) is scattered and reflected as a pencil of rays emanating from the surface. The direction of each reflected ray in the pencil can be described by the angles (θ, φ). This model of light traveling through space is described by the plenoptic function; in its most general form it also depends on the wavelength λ and the time τ, i.e., P(θ, φ, λ, τ, X_w, Y_w, Z_w).

Since modern cameras capture only a discrete number of wavelength bands, such as red, green, and blue, or a single integrated monochromatic band, the wavelength dimension may be ignored. Furthermore, by considering only static scenes, the temporal variation can be ignored and the time component treated as constant. In computer graphics, images are parametrized in the (x, y) image space. The plenoptic function thus reduces to a four-dimensional representation, the lumigraph.

The lumigraph parametrizes the light field with respect to camera position (s, t) and pixel location (x, y) as visualized in the next figure. The light-field representation becomes

(θ, φ, X_w, Y_w, Z_w) → (s, t, x, y)
L(s, t, x, y) := P(θ, φ, X_w, Y_w, Z_w).
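
To make this parametrization concrete, the following Python sketch (an illustration, not part of the original text; the array layout and helper names are assumptions) stores a rectangular grid of views as a 4D numpy array whose axes match (s, t, x, y):

import numpy as np

# Hypothetical layout: lf[s, t, x, y] mirrors the lumigraph L(s, t, x, y).
# We assume views[s][t] is the 2D view captured at camera-grid position
# (s, t), already indexed as view[x, y].
def build_light_field(views):
    return np.stack([np.stack(row, axis=0) for row in views], axis=0)

# Example with a synthetic 5 x 5 grid of 64 x 48 views:
views = [[np.zeros((64, 48)) for _t in range(5)] for _s in range(5)]
lf = build_light_field(views)   # shape (S, T, X, Y) = (5, 5, 64, 48)
radiance = lf[2, 3, 10, 20]     # one sample of L(s=2, t=3, x=10, y=20)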

In this parametrization, all cameras are considered to be located on a common plane with parallel viewing directions. This implies that all epipoles are located at infinity, which facilitates the extraction of epipolar-plane images (EPIs) from the captured data.

[Figure: two-plane parametrization of the light field, with camera plane coordinates (s, t) and image plane coordinates (x, y)]

From a 4D light field, two different kinds of epipolar-plane images can be extracted: the first relates to the horizontal camera direction, the second to the vertical camera direction. EPIs related to the horizontal camera direction are described by the equation

S_t*,y* : Σ_t*,y* → R
(x, s) ↦ S_t*,y*(x, s) := L(s, t*, x, y*),

where t and y are fixed to the values t* and y* to obtain the EPI. For EPIs related to the vertical camera direction, in contrast, s and x are fixed to the values s* and x*. These EPIs are described by the equation

S_s*,x* : Σ_s*,x* → R
(y, t) ↦ S_s*,x*(y, t) := L(s*, t, x*, y).
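
Assuming the hypothetical lf[s, t, x, y] layout sketched above, both EPI types are plain array slices; the function names are illustrative:

def horizontal_epi(lf, t_star, y_star):
    # S_t*,y*(x, s): fix t and y, keep all s and x.
    # Result has axis 0 = s and axis 1 = x, i.e. shape (S, X).
    return lf[:, t_star, :, y_star]

def vertical_epi(lf, s_star, x_star):
    # S_s*,x*(y, t): fix s and x, keep all t and y.
    # Result has axis 0 = t and axis 1 = y, i.e. shape (T, Y).
    return lf[s_star, :, x_star, :]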

Each orientation in an EPI is related to the distance of the imaged surface point. Since an EPI is a slice through an image set, which can also be interpreted as a tracking shot, each slope shows the displacement of a surface feature as the point of view changes. Close objects therefore have a steeper slope than distant objects. Several methods are established to determine the orientations in an EPI.
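
To illustrate why slope encodes distance, assume a pinhole model with focal length f (in pixels) and baseline b between adjacent views on the camera plane; a point at depth Z is then displaced by Δx = f·b/Z from one view to the next, so the EPI slope Δx/Δs is inversely proportional to depth. The symbols and the helper below are assumptions for a standard two-plane setup, not taken from the text:

def depth_from_epi_slope(slope, focal_px, baseline):
    # Rearranges Δx/Δs = f*b/Z for Z; `slope` is the measured EPI slope
    # in pixels of displacement per view step (assumed nonzero).
    return focal_px * baseline / slope

# A steep slope maps to a small depth: a feature moving 4 px per view
# with f = 800 px and b = 0.01 m lies at Z = 800 * 0.01 / 4 = 2 m.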

One method is the structure tensor, which allows fast EPI processing and leads to highly accurate results. A disadvantage of this method is that the orientation estimation is only local, which leads to a lack of precision in large light fields. Other approaches compute a global solution and either apply a simple line fit or, to speed up processing, analyze only feature information, e.g., extracted at zero crossings.
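
The following is a minimal sketch of local orientation estimation with the structure tensor on a single horizontal EPI (axis 0 = s, axis 1 = x), assuming numpy and scipy; the smoothing scales and the sign convention for the slope are assumptions, not taken from the original:

import numpy as np
from scipy.ndimage import gaussian_filter

def epi_slopes(epi, inner_sigma=0.8, outer_sigma=2.0):
    # Classic structure-tensor recipe: differentiate at an inner scale,
    # average the tensor components at an outer scale, then read the
    # line orientation off the 2x2 tensor in closed form.
    epi = np.asarray(epi, dtype=np.float64)
    Is = gaussian_filter(epi, inner_sigma, order=(1, 0))  # d/ds
    Ix = gaussian_filter(epi, inner_sigma, order=(0, 1))  # d/dx
    Jxx = gaussian_filter(Ix * Ix, outer_sigma)
    Jxs = gaussian_filter(Ix * Is, outer_sigma)
    Jss = gaussian_filter(Is * Is, outer_sigma)
    # Angle of the dominant gradient direction; EPI lines run
    # perpendicular to it, giving slope d = Δx/Δs = -tan(phi).
    phi = 0.5 * np.arctan2(2.0 * Jxs, Jxx - Jss)
    slopes = -np.tan(phi)
    # Coherence in [0, 1]: how line-like the neighborhood is, useful
    # to discard unreliable estimates in textureless regions.
    coherence = np.sqrt((Jxx - Jss) ** 2 + 4.0 * Jxs ** 2) / (Jxx + Jss + 1e-12)
    return slopes, coherence

Applied slice by slice over all horizontal (and analogously vertical) EPIs, this yields dense but purely local slope estimates, which is exactly the locality drawback mentioned above.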