Open Access

4-D light field reconstruction by irradiance decomposition

IPSJ Transactions on Computer Vision and Applications 2017, 9:13

DOI: 10.1186/s41074-016-0014-z

Received: 12 July 2016

Accepted: 29 December 2016

Published: 8 April 2017


Common light sources, such as an ordinary flashlight with lenses and/or reflectors, produce a complex 4-D light field that can be represented by neither the conventional isotropic distribution model nor the point light source model. This paper describes a new approach to estimating the 4-D light field using an illuminated diffuser. Unlike conventional works that capture a 4-D light field directly, our method decomposes the observed intensities on the diffuser into the intensities of 4-D light rays based on an inverse rendering technique with prior knowledge. We formulate the 4-D light field reconstruction problem as a non-smooth convex optimization problem so that the global minimum can be found mathematically.


Light field reconstruction · Illumination · Inverse lighting · Convex optimization

1 Introduction

Estimation of the lighting environment is important for many applications of photometric methods in computer vision, e.g., photorealistic image synthesis, photometric stereo, and BRDF estimation. Various models have been proposed for representing the radiant intensity distribution of light sources; they can be categorized into four groups, as shown in Fig. 1.
Fig. 1

Categorization of light field models. a Isotropic point light source. b Anisotropic point light source. c Set of point light sources. d 4-D light field

As the simplest model of the radiant intensity distribution of a light source, an isotropic point light source has conventionally been used (Fig. 1 a). This type of model has only one intensity parameter. Due to its simplicity, this model has become a standard in the field of photometric computer vision [1, 2]. This simplest model has been extended in two directions, for representing the directivity and the spatial distribution of light sources. For the directivity, an angular radiance distribution is considered (Fig. 1 b) by assigning different intensity parameters to different directions. This model can handle an anisotropic point light source and is essential for modeling a light with strong directivity such as an LED [3]. The other extension is the spatial distribution, which represents the volume of the lighting environment (Fig. 1 c). By simply arranging multiple isotropic point light sources in a space, the model can handle the spatial distribution of lights. Although these extensions increase the accuracy of lighting environment modeling, they cannot model an actual complex light field, e.g., one generated by LEDs or bulbs with reflectors and/or lenses. The differences between the illuminating effects of actual lights and those modeled by (a), (b), and (c) become larger when the lights are placed near the objects, which prevents photorealistic rendering and highly accurate inverse rendering in this situation. The 4-D light field (Fig. 1 d), which represents the light field by the 2-D directivity × 2-D spatial distribution of light sources, is essential for modeling actual lighting environments.

In this study, we focus on reconstructing the 4-D light field from images of an illuminated object. In order to estimate the 4-D light field, most conventional works directly capture a huge number of images covering all directions from all 3-D positions and, as a result, suffer from the cost of measuring all rays. Instead of direct capturing, our method estimates the unknown parameters of the 4-D light field so that the images rendered with the reconstructed lighting become as similar as possible to the captured original images, based on an inverse lighting technique [4]. To achieve both accurate and robust estimation, we have developed a new inverse lighting method based on a convex optimization technique. Our method introduces the range of possible radiant intensities derived from a physical constraint, which enables the reconstruction of the 4-D light field from a few images.

The remainder of this paper is organized as follows. Section 2 discusses related work and highlights our contributions. Section 3 presents the basic idea of 4-D light field reconstruction. Section 4 describes an efficient solution for the non-smooth convex optimization problem. Section 5 shows experimental results in real scenes. Finally, Section 6 concludes the present study.

2 Related work and contributions

2.1 Direct method for 4-D light field acquisition

Direct methods capture the 4-D light field by back-tracing the rays from the camera. The measured 4-D light field is represented in the form of a 4-D light ray space based on a set of 2-D images of a scene captured from different viewpoints.

As straightforward methods for measuring the 4-D light field, some methods capture 2-D images for all directions from all 3-D positions using a camera mounted on a robot arm [5–8]. These methods are generally expensive in both measurement cost and time. In order to reduce these costs, Unger et al. [9, 10] used an array of mirrored spheres and a moving mirrored sphere that travels across a plane. Goesele et al. [11] and Nakamura et al. [12] used various kinds of optical filters that spatially limit the light rays incident on the camera. Cossairt et al. [13] used a lens array and created augmented scenes by relighting synthetic objects using the captured light field. Although these methods are related to our problem, the direct methods still require a comparatively large number of images.

2.2 Indirect method for light field reconstruction

Indirect methods, also known as inverse lighting, reconstruct the lighting parameters of a scene by minimizing the difference between observed intensities and intensities computed by CG rendering techniques with known scene geometry, surface properties, and lighting environment.

Conventionally, many studies have utilized specular or diffuse reflection components to estimate the lighting environment [14, 15]. These approaches solve a linear system. Ramamoorthi and Hanrahan [16] showed why the inverse lighting problem for global illumination is ill-posed or numerically ill-conditioned, based on a theoretical analysis in the frequency domain using spherical harmonics. On the other hand, Park et al. [3] estimated the 2-D light field emitted from a point light source rigidly attached to a camera, using an illuminated plane.

Shadows are areas that direct light from a light source cannot reach due to occlusion by other objects, and thus they can provide useful information for estimating the lighting environment. Using cast shadow information, Sato et al. [17] proposed a method to recover the positions of a set of point light sources. Okabe et al. [18] used a Haar wavelet basis to approximate the lighting effect by a small number of basis functions. Takai et al. [4] proposed a skeleton cube as a reference object that creates self-cast shadows from a point light source at an arbitrary position.

2.3 Our contributions

As described above, the direct methods suffer from the cost of data capturing, and, as far as we know, there is no indirect method that can estimate the 4-D light field. In this paper, we propose an indirect method that can estimate the 4-D light field from a few images. To achieve this, the problem of estimating the 4-D light field is formulated as an energy minimization problem with several constraints, and the problem is solved by convex optimization. It should be noted that this paper is an extension of our conference paper [19]. For stable and accurate estimation, we have newly introduced a physical constraint together with 1-norm regularization into the energy function, and we have also conducted completely new experiments with the new solution.

3 Basic idea for 4-D light field reconstruction

In this section, we first formulate the problem of basic 4-D light field reconstruction as a linear system.

Figure 2 illustrates the conceptual setup of our approach. Light rays emitted from light sources pass through the light field plane \(\mathcal {L}\), on which the 4-D light field is defined, and then hit the diffuser \(\mathcal {B}\). The camera observes the integral of the light ray intensities on the illuminated diffuser. In this situation, our method reconstructs the 4-D light field by decomposing the observed intensities into light ray intensities using multiple images in which the diffuser is illuminated from various directions, while the camera and diffuser positions are fixed.
Fig. 2

Light rays that hit a diffuser

For reconstructing 4-D light field, we make the following assumptions:
  • The relative positions and postures of the camera, the diffuser, and the light sources are known.

  • The radiance distribution of light sources is static.

  • The sensor response is linear.

  • The light is not attenuated by scattering or absorption.

  • The diffuser’s property (transmission model) is given.

In the following, we first review the relationship between the 4-D light field and the observed intensities, and then discuss how to model the inverse problem of estimating the 4-D light field from the observed intensities.

3.1 Relationship between light field and observed intensities

In this work, we model a 4-D light field as the intensities of rays defined by the parameters of position (u,v) and direction (ϕ,θ) on the light field plane \(\mathcal {L}\), as illustrated in Fig. 3. The centers of the hemispheres in this figure show sampled positions (u,v) on the light field plane \(\mathcal {L}\), and the arrows inside the hemispheres indicate the directions (ϕ,θ) of rays. The observed intensity \(o_i\) in the local region \(x_i\) on the diffuser \(\mathcal {B}\) is proportional to the integral of the intensities of all rays hitting \(x_i\), as follows 1:
$$o_{i} = \alpha \int a_{i}(\boldsymbol{j})\, s(\boldsymbol{j})\, d\boldsymbol{j}, \qquad \boldsymbol{j} = \left(u, v, \phi, \theta \right)^{\text{T}}, \tag{1}$$
Fig. 3

Illustration of light field parametrization on \({\mathcal {L}}\)

$$a_{i}(\boldsymbol{j}) = \begin{cases} \rho, & \text{if ray } \boldsymbol{j} \text{ hits } x_{i}, \\ 0, & \text{otherwise}. \end{cases} \tag{2}$$
α is a constant that determines the relative scale between an observed intensity and a light ray intensity, \(s(\boldsymbol{j})\) is the intensity of ray \(\boldsymbol{j}\), and ρ is the attenuation ratio on \(\mathcal {B}\) 2. Suppose that we observe N intensities \(\boldsymbol {o} = \{ {o}_{1}, \cdots, {o}_{N} \}\in \mathbb {R}^{N}\) at different regions \(\{x_{1}, \cdots, x_{N}\}\) on the diffuser \({\mathcal {B}}\), and that the rays are discretized by (u,v,ϕ,θ)T into M bundled rays whose intensities are redefined as \({\boldsymbol {s} = \{ {s}_{1}, \cdots, {s}_{M} \}\in \mathbb {R}^{M}}\),
$$s_{j} = \int_{(u, v) \in l_{j}} \int_{(\phi, \theta) \in d_{j}} s(\boldsymbol{j})\, d\theta\, d\phi\, du\, dv, \tag{3}$$
where \(l_j\) is a local region on the light field plane \({\mathcal {L}}\) and \(d_j\) is a set of directions on the hemisphere that pass through the local region \(l_j\). The relationship between the observed intensities o and the radiant intensities s can then be expressed as a linear system:
$$\boldsymbol{o} = \boldsymbol{A} \boldsymbol{s}, \tag{4}$$

where \({\boldsymbol {A}} \in \mathbb {R}^{N \times M}\) is a matrix of known parameters that represents the rendering process. Equation (4) can in principle be solved by linear least squares if we have sufficient observations.
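As a minimal illustration of the linear system of Eq. (4), the following Python sketch builds a toy rendering matrix A (all names, dimensions, and values are illustrative, not taken from the paper) and recovers the ray intensities by linear least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: N observed diffuser regions, M discretized ray bundles.
N, M = 200, 50

# Toy rendering matrix A: entry (i, j) is the attenuation rho applied to
# ray bundle j if it hits region x_i, and 0 otherwise (cf. a_i(j) above).
rho = 0.8
hits = rng.random((N, M)) < 0.3          # which bundles hit which regions
A = rho * hits.astype(float)

s_true = rng.random(M)                   # ground-truth ray intensities
o = A @ s_true                           # observed intensities, Eq. (4)

# With sufficient observations (N >= M, A of full column rank),
# least squares recovers s.
s_est, *_ = np.linalg.lstsq(A, o, rcond=None)
print(np.allclose(s_est, s_true, atol=1e-6))
```

As the next section discusses, in practice the system is far larger and noisier than this toy case, which is what motivates the reduced parameterization and the constraints below.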

3.2 Size reduction using spherical harmonics function

In practice, the size of A in Eq. (4) is too large for the system to be solved, due to the large number of unknown parameters.

In order to reduce the number of unknown parameters, we approximate s by real spherical harmonics, which are often employed in photometric reconstruction. More specifically, the radiant intensity \(s_i\) in Eq. (3) of the rays passing through the local region \(l_i\) on the light field plane \({\mathcal {L}}\) in the directions \(d_i\) is approximated by a weighted sum of the real spherical harmonics basis functions:
$$s_{i} = \sum_{f=0}^{F} \sum_{g=-f}^{+f} c_{i,f,g}\, y_{f,g}(\phi, \theta), \tag{5}$$

where \(y_{f,g}\) are the real spherical harmonics basis functions, and \(c_{i,f,g}\) are the H=(F+1)2 unknown parameters. The representable distribution of radiant intensities depends on H.

In the case where we discretize the light field plane \({\mathcal {L}}\) into L regions, we have \(\hat {H} = H \times L\) unknown parameters \({\boldsymbol {c} = \{ c_{1}, \cdots, c_{\hat {H}} \}\in \mathbb {R}^{\hat {H}}}\) for modeling the 4-D light field. The relationship between s and c can be expressed as:
$$\boldsymbol{s} = \boldsymbol{Y} \boldsymbol{c}, \tag{6}$$
where \({\boldsymbol {Y}} \in \mathbb {R}^{M \times \hat{H}}\) is a matrix whose columns form an orthonormal basis. By substituting Eq. (6) into Eq. (4), the following equations are derived:
$$\begin{aligned} \boldsymbol{o} &= \boldsymbol{A} \boldsymbol{Y} \boldsymbol{c} = \boldsymbol{B} \boldsymbol{c}, \\ b_{i,j} &:= \frac{y_{i,j}(\phi, \theta)\, R(\omega)}{D((u, v), x_{i})^{2}}, \end{aligned} \tag{7}$$

where \(b_{i,j}\) is an element of the matrix \({\boldsymbol {B}} \in \mathbb {R}^{N \times \hat{H}}\), R(ω) is the transmission distribution function determined by the diffuser's property, ω is determined by the angle between the normal of the diffuser and the ray, and D(·) represents the distance between (u,v) and \(x_i\) on the diffuser \({\mathcal {B}}\).
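Since the paper leaves the concrete basis construction implicit, the following Python sketch shows one common definition of the real spherical harmonics \(y_{f,g}\) (the sign and angle conventions are assumptions, not taken from the paper):

```python
import numpy as np
from math import factorial
from scipy.special import lpmv  # associated Legendre functions

def real_sph_harm(f, g, phi, theta):
    """Real spherical harmonic y_{f,g} of degree f and order g
    (phi: azimuth, theta: polar angle -- one common convention)."""
    m = abs(g)
    # Orthonormalization constant over the sphere.
    norm = np.sqrt((2 * f + 1) / (4 * np.pi)
                   * factorial(f - m) / factorial(f + m))
    P = lpmv(m, f, np.cos(theta))  # includes the Condon-Shortley phase
    if g > 0:
        return np.sqrt(2.0) * norm * P * np.cos(m * phi)
    if g < 0:
        return np.sqrt(2.0) * norm * P * np.sin(m * phi)
    return norm * P

# Stack the H = (F + 1)**2 basis values for a set of sampled directions;
# one such block per light-field region l_i forms the matrix Y of Eq. (6).
F = 3
phis = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
thetas = np.linspace(0.05, 0.5 * np.pi, 64)  # hemisphere directions
Y_block = np.column_stack([real_sph_harm(f, g, phis, thetas)
                           for f in range(F + 1)
                           for g in range(-f, f + 1)])
print(Y_block.shape)  # (64, 16)
```

In the experiments F goes up to 89, so the basis blocks are far larger than this toy F=3 case, which is the source of the computational cost reported in Section 5.2.5.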

4 Efficient solution under insufficient observations

The solution of the linear system given in Eq. (7) is sensitive to observation noise and to errors in the pose estimation of the diffuser, and it often yields negative intensities due to the lack of valid observations, as shown in the experiments below. Possible approaches to overcome this problem are obtaining more observations or introducing constraints on the parameters.

In this work, in order to achieve stable estimation from a limited number of inputs, we introduce a physical constraint and 1-norm regularization into the light field reconstruction, formulated as a convex optimization problem. In the following, we first give the formulation of the problem, and then describe the details of each constraint.

4.1 Formulation of light field reconstruction problem

We formulate the problem of light field reconstruction as follows:
$$\arg \min_{\boldsymbol{c}} \left\{ ||{\boldsymbol{o}} - {\boldsymbol{B}} {\boldsymbol{c}}||^{2}_{2} + \lambda ||{\boldsymbol{c}}||_{1} + \iota_{{\mathcal{V}}}({\boldsymbol{Y}} {\boldsymbol{c}}) \right\}, \tag{8}$$

where the first term is the squared error between the observed and rendered intensities, the second term is the 1-norm of the spherical harmonics coefficients with weight parameter λ, and the third term is the physical constraint that limits the numerical range of the light ray intensities.

Since each term in Eq. (8) is convex, the whole objective function is also convex. Hence, any minimizer of this function is a global minimum.

The convex optimization problem of Eq. (8) can be solved by the alternating direction method of multipliers (ADMM) [20], which minimizes the function efficiently in an iterative manner. By solving this problem, we obtain the 4-D light field Y c.
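The paper does not spell out the ADMM splitting; the Python sketch below shows one plausible choice (toy dimensions and values, not the paper's), splitting Eq. (8) so that one auxiliary variable carries the 1-norm and another carries the indicator of \(\mathcal{V}\), and assuming the orthonormality of Y stated above:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, H = 60, 40, 20                              # toy sizes (illustrative only)

Y, _ = np.linalg.qr(rng.standard_normal((M, H)))  # columns orthonormal: Y^T Y = I
A = 0.1 * np.abs(rng.standard_normal((N, M)))     # toy nonnegative rendering matrix
B = A @ Y
c_ref = np.zeros(H); c_ref[:4] = rng.random(4)    # sparse reference coefficients
o = B @ c_ref                                     # synthetic observations

lam, rho = 0.01, 1.0
s_max = np.full(M, 10.0)                          # box V = [0, s_max] (toy bound)

def soft(x, t):                                   # prox of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# ADMM with two splits: z1 = c carries the 1-norm, z2 = Y c carries iota_V.
c = np.zeros(H)
z1 = np.zeros(H); u1 = np.zeros(H)
z2 = np.zeros(M); u2 = np.zeros(M)
Q = 2.0 * B.T @ B + 2.0 * rho * np.eye(H)         # constant c-update system (Y^T Y = I)
for _ in range(800):
    rhs = 2.0 * B.T @ o + rho * (z1 - u1) + rho * Y.T @ (z2 - u2)
    c = np.linalg.solve(Q, rhs)
    z1 = soft(c + u1, lam / rho)                  # 1-norm prox step
    z2 = np.clip(Y @ c + u2, 0.0, s_max)          # projection onto V
    u1 += c - z1
    u2 += Y @ c - z2

s = Y @ c                                         # reconstructed light field Y c
print("primal residuals:", np.linalg.norm(c - z1), np.linalg.norm(s - z2))
```

At convergence the primal residuals vanish, so the returned s lies in \(\mathcal{V}\) and c is sparse; the real problem differs only in scale, not in structure.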

4.2 1-norm regularization

In general, regularization is introduced to prevent overfitting when the number of observations is not sufficiently larger than the number of unknown parameters. In this research, we employ the 1-norm of the unknown parameters c as a regularization term to prevent this problem. This term drives most elements of c to zero, thereby selecting the important bases in the matrix B. The weight parameter λ, which is determined empirically in the experiments, adjusts the number of selected bases.
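The sparsifying effect of the 1-norm can be seen through its proximal operator (soft-thresholding), which an iterative solver applies at each step; a small self-contained illustration with toy values:

```python
import numpy as np

def prox_l1(c, t):
    # Proximal operator of t * ||c||_1: shrinks every coefficient toward
    # zero by t and zeroes out those with magnitude below t.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Toy coefficient vector: small entries are eliminated, large ones are
# kept (slightly shrunk) -- this is how L1 selects the important bases.
c = np.array([0.9, -0.004, 0.02, -0.6, 0.007])
print(prox_l1(c, 0.01))
```

Raising the threshold t (i.e., raising λ) zeroes more coefficients, which is the "number of selected bases" trade-off described above.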

4.3 Physical constraint based on non-negativity of light ray intensities

The physical limitations of radiant intensities allow us to place constraints on the intensities \(s_j\) of the rays. Physically, all light ray intensities must be non-negative, which gives:
$$s_{j} \geq 0 \quad \forall j. \tag{9}$$
On the other hand, as illustrated in Figs. 2 and 4, each ray affects multiple regions \(x_i\) at different positions on the diffuser \({\mathcal {B}}\), and each region is likewise affected by multiple radiant intensities \(s_j\). If ray j were the only ray, the observed intensity \(o_i\) illuminated by \(s_j\) would be \(A_{j,i} s_j\), where \(A_{j,i}\) is the corresponding element of A in Eq. (4). The intensity \(o_i\) is therefore represented as:
$$o_{i} = A_{0,i} s_{0} + A_{1,i} s_{1} + \cdots + A_{j,i} s_{j}. \tag{10}$$
Fig. 4

Illustration of physical constraint. Gray region indicates unilluminated region

Since every term is non-negative, this leads to the following constraint:
$$s_{j} \leq \frac{o_{i}}{A_{j,i}}. \tag{11}$$
From Eqs. (9) and (11), we can derive the following constraint:
$$0 \leq s_{j} \leq \min_{i \in \mathcal{X}_{j}} \left(\frac{o_{i}}{A_{j,i}}\right), \tag{12}$$

where \(\mathcal {X}_{j}\) is the set of local regions \(x_i\) on the diffuser \({\mathcal {B}}\) that are hit by ray j.
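The bound of Eq. (12) can be computed directly from the observations and A; a small numpy sketch with a toy matrix (note the paper indexes the element as A_{j,i}, while the code below stores A with rows as regions and columns as rays):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 30, 10
A = np.abs(rng.standard_normal((N, M)))   # toy nonnegative rendering matrix
s_true = rng.random(M)
o = A @ s_true

# Upper bound of Eq. (12): for each ray j, s_j <= min over the regions x_i
# it hits of o_i / A_{j,i}; entries with A[i, j] = 0 impose no bound.
with np.errstate(divide="ignore"):
    ratios = np.where(A > 0, o[:, None] / A, np.inf)
upper = ratios.min(axis=0)

# The true intensities always satisfy the derived bound.
print(np.all((0 <= s_true) & (s_true <= upper)))
```

Because every term in Eq. (10) is non-negative, no single ray can contribute more than the observed total, so the bound holds for any physically valid s.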

The above constraint can be written as \({s} \in {\mathcal {V}}\), where \({\mathcal {V}}\) is a closed convex set. From Eq. (6), owing to the orthonormality of the matrix Y, this formulation can be written as:
$$\boldsymbol{Y} \boldsymbol{c} \in \mathcal{V}. \tag{13}$$
This constraint can be rewritten as a convex function using the indicator function \(\iota _{\mathcal {V}} : \mathbb {R}^{M} \to [0, \infty ]\), defined by
$$\iota_{{\mathcal{V}}}({\boldsymbol{Y}} {\boldsymbol{c}}) = \begin{cases} 0, & \text{if } {\boldsymbol{Y}} {\boldsymbol{c}} \in {\mathcal{V}}, \\ \infty, & \text{otherwise}. \end{cases} \tag{14}$$

5 Experiments

In this section, we verify the effectiveness of the proposed method using a real data set. We first compare our method with several other approaches under different conditions. In these experiments, since a ground-truth light-field map is not available, we quantitatively verify the correctness of our algorithm by computing photometric errors, i.e., the differences between captured images and the corresponding images relit using the reconstructed light field in the real scene. In the following, we abbreviate least squares as LS, 1-norm regularization as L1, and the physical constraint as PC.

5.1 Setup

Figure 5 a shows an overview of the experimental setup in a dark room. An illumination source attached to a motorized translation stage illuminates the diffuser from variable distances. We used a polystyrene board as the diffuser and assumed that the board has a Lambertian transmission property. A high dynamic range camera (ViewPLUS Xviii) with a resolution of 642×514 pixels is located on the opposite side of the diffuser from the light source and captures images as shown in Fig. 5 b. The upper 16 bits of depth are used as the intensity of each pixel. To reduce the effect of acquisition noise, in all experiments we used the average of 256 images captured with a fixed setup. To remove the perspective distortion in the captured images, we rectified them into orthogonal images in which each pixel corresponds to 1×1 mm, as shown in Fig. 5 c. We used these orthogonalized images as input images. The illumination distance d is defined as 0 mm when the illumination source touches the diffuser. In this experiment, the light field plane \(\mathcal {L}\) is defined in the coordinate system of the light source at the position of the diffuser plane at d=0 mm, so as to efficiently represent all the 4-D rays emitted from the light source. With the above setup, we estimated light field maps using designated points extracted from the saturated areas of a close-up image. Unless otherwise stated, we used four input images for estimating the light field, set the parameter λ=0.01 in Eq. (8), and set the dimension F of the real spherical harmonics to 34.
Fig. 5

Experimental setup. a Setup for measurement and flashlight with lens and reflector. b Captured image at distance 100 mm. c Rectified image at distance 100 mm

It should be noted that some reconstructed light fields have negative radiant intensities. In this experiment, we permit negative intensities when generating the relit images.

5.2 Quantitative evaluation

In this section, we employ the comparatively simple light source shown in Fig. 5 a, in which three LEDs are arranged horizontally. To keep the discussion simple, we first fixed three light source positions for the 4-D light field on \(\mathcal {L}\) by computing the three center points of the saturated regions in the image shown in Fig. 5 c, taken at distance d=0 mm. We estimated the parameters of three anisotropic light sources at the given three points as the 4-D light field. We captured ten images of the lit diffuser, with the light source at a different position in each image. Zoomed versions of these images are shown in the top row of Fig. 6. Among these ten images, the four taken with the light source at distances d = 60, 90, 120, and 150 mm are used as input images, and the rest are used for evaluation.
Fig. 6

Captured images and relit images by reconstructed light fields from compared methods with F=34. (I) 4-D light field with L1 + PC, (II) 4-D light field with L1, (III) 4-D light field with PC, (IV) 4-D light field by LS without L1 and PC, (V) 2-D light field with an anisotropic point light source, (VI) 2-D light field with a set of isotropic point light sources. Dimension F of spherical harmonics function for (I)–(V) is set to 34. Surrounding red boxes indicate that these distances are included in a set of input images for light field estimation. Images are clipped for zooming-up

With this configuration, the relit images rendered from the 4-D/2-D light fields estimated by the following six methods are compared to show the effectiveness of the proposed method:
  (I) 4-D light field reconstruction with L1 + PC,

  (II) 4-D light field reconstruction with L1,

  (III) 4-D light field reconstruction with PC,

  (IV) 4-D light field reconstruction by LS without L1 and PC,

  (V) 2-D light field reconstruction assuming an anisotropic point light source, and

  (VI) 2-D light field reconstruction assuming a set of isotropic point light sources.


Here, (I)–(III) are variations of the proposed method and (IV) is the baseline method3. For method (V), the middle of the three given light positions is used as the position of the point light source, since (V) does not have the spatial dimension. For method (VI), the box-shaped region of the diffuser shown in Fig. 5 c, which contains 91×64 points, is used as the spatial distribution of lights on \(\mathcal {L}\).

Figure 7 shows the light field maps estimated by methods (I) to (VI). As we can see, each method gives a different light field. In the following, we discuss the differences between the above methods using further results.
Fig. 7

Reconstructed light field maps for the light sources in Fig. 5 a. Gray color indicates unobserved region

5.2.1 Comparison of 4-D and 2-D light field reconstruction methods

Figure 6 shows relit images for various light positions rendered with the estimated light fields shown in Fig. 7 for all the compared methods. Figure 8 shows the estimation errors, defined as the normalized sum of absolute intensity differences between a ground-truth image and a relit image. As we can see in Fig. 6, the 4-D light field based methods (I)–(IV) could reproduce the three separated lights in the relit images for the near range d=[30,45], whereas the 2-D based methods could not separate them. From Fig. 8, we can see that the errors of both 2-D based methods are larger than those of the 4-D based methods, except for the range d=[60,90] of method (V). The larger errors are considered to be due to over-optimization of method (V) for near-range images. From the results of methods (V) and (VI), it is obvious that the 2-D light field based methods are not capable of reproducing good relit images for this light source. We can conclude that 4-D light field estimation is necessary even for a comparatively simple light source.
Fig. 8

Photometric errors between relit images and captured images. Gray dashed lines indicate positions of input images
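For reference, the error plotted in Fig. 8 can be computed as a normalized sum of absolute differences; since the exact normalization is not stated in the paper, the divisor below (the total captured intensity) is an assumption:

```python
import numpy as np

def photometric_error(captured, relit):
    """Normalized sum of absolute intensity differences (a plausible
    reading of the error in Fig. 8; dividing by the total captured
    intensity is an assumption, not the paper's stated definition)."""
    captured = np.asarray(captured, dtype=float)
    relit = np.asarray(relit, dtype=float)
    return np.abs(captured - relit).sum() / captured.sum()

# Tiny 2x2 toy images.
gt = np.array([[10.0, 20.0], [30.0, 40.0]])
pred = np.array([[11.0, 18.0], [30.0, 44.0]])
print(photometric_error(gt, pred))  # 7 / 100 = 0.07
```

Normalizing makes errors comparable across light positions, since the captured images at different distances differ in overall brightness.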

5.2.2 Effect of constraints

As shown in Figs. 6 and 8, there are only very small quantitative and subjective differences among the compared 4-D based methods for the middle to far range d=[60,165]. However, for the near range d=[30,45], considerable differences appear. First, we can see unnatural ripples around the lights for methods (I) to (IV). The ripples for (IV) are stronger than those of the others.

As shown in Fig. 7 (IV), LS produces large negative intensities in the estimated light field, which causes overfitting. By comparing the pairs {(I), (II)} and {(III), (IV)} in this figure, we can confirm that negative intensities in the estimated light field are successfully suppressed by PC. Although we cannot see any subjective differences among the relit images for (I), (II), and (III), quantitatively the errors become smaller when L1 is employed, and method (I), which uses both PC and L1, gives the best scores for the near range.

5.2.3 Effect of dimension

Here, we confirm the effect of the dimension of the spherical harmonics on the relit images. Figure 9 shows relit images for variable F for the proposed method (I). Because there are only very small differences in the far-range images when F is 34 or larger, Fig. 10 shows near- to middle-range images for higher F = 69, 79, and 89.
Fig. 9

Relit images by proposed method (I) with variable F. Surrounding red boxes indicate that these distances are included in a set of input images for light field estimation. Images are clipped for zooming-up

Fig. 10

Relit images by proposed method (I) with F=69,79, and 89

Figure 11 shows the estimation errors. As shown in these figures, the errors rapidly decrease as the dimension F is raised, slightly increase and then decrease around F=24, and eventually level out. This behavior is considered to be caused by the difference in the definitions of the photometric errors: Fig. 11 shows absolute photometric errors, whereas the energy function of the proposed method, Eq. (8), minimizes the squared 2-norm of the photometric errors. In Figs. 9 and 10, the relit images for the near range change continuously, and the separation of the three lights becomes clearer for higher F.
Fig. 11

Photometric errors between relit images and captured images for variable F

As shown in Figs. 9 and 10, we can confirm that the ripple artifacts, which appear especially in the near range of d=[30,45], become weaker when more angular resolution is available, i.e., when F is higher. When the model does not have enough angular resolution, it cannot represent both the detailed shapes of the lights and the background region behind the lights. In this situation, owing to the characteristics of the spherical harmonics, repetitive patterns easily appear in the image. On the other hand, the reason why the ripples for (IV) are stronger than the others is considered to be the overfitting of the unconstrained least squares solution; our method successfully reduces this error by the constrained convex optimization. The ripples around the lights mentioned in the previous section almost disappear when F=89. From these results, we can say that a high-dimensional spherical harmonics parameterization is needed for accurate reconstruction of the 4-D light field.

5.2.4 Effect of spatial resolution and arrangement of virtual light sources

Here, we confirm the effect of the spatial resolution and the arrangement of virtual light sources on the relit images. Figure 12 shows the positions of the virtual light sources (a) to (e), and Fig. 13 shows the corresponding relit images modeled by the proposed method (I). If some virtual light sources exist near the actual light source positions (all cases except (d) in Fig. 12), they reproduce results similar to (a) for the range d = [60, 165].
Fig. 12

Variations of virtual light source positions. A red point indicates the position of a virtual light source

Fig. 13

Relit images by reconstructed light fields from variable patterns of light source positions with F = 34. Each pattern of light source positions (a)–(e) corresponds to the positions of virtual light sources in Fig. 12. Surrounding red boxes indicate that these distances are included in the set of input images for light field estimation. Images are clipped for zooming-up

However, in the range d = [30, 45], different images are generated, except for images (a) and (e). For (a), we arranged virtual light sources at the center positions of the highlights, and for (e) the number of point light sources was increased, as shown in Fig. 12. From this comparison, we can say that as long as we can put virtual light sources in front of the true light positions (i.e., the centers of the highlights), good results are obtained with a minimum number of virtual light sources. Comparing (e) with the F = 69 result in Fig. 10, which has the same number of parameters as (e), we can see that the latter result is better. This means that the angular resolution is more important than the spatial resolution, as long as we can put virtual lights at appropriate positions. When we cannot arrange them at the centers of the highlights (case (d)), the relit image takes on shapes different from the ideal ones. On the other hand, when we place additional virtual light sources at positions away from the highlights, as in (b) and (c), undesired highlights appear in the near-range images. This is considered to be an effect of overfitting at these unnecessary positions. It should be noted that, except for the near-range images, good results are obtained even for (b) and (c), and the intensities of the undesired highlights in the near-range images are darker than those of the true highlights. This is because the 1-norm suppresses the coefficient values by selecting the important spherical harmonics bases.

5.2.5 Computational cost

To reconstruct the 4-D light field with method (I), it takes 31 h for F=34 in this experiment using a PC (Intel® Core™ i7-3970 3.50 GHz × 12, 32 GB memory, C++ implementation). Most of the computation time is spent solving the convex optimization problem in Eq. (8). In this experiment, 5.3 GB of memory was required by our implementation. When we set F=89, the 4-D light field reconstruction takes more than 1 week. In order to reduce the cost, we should find more efficient bases for representing the light field and a more efficient way of solving the problem.

5.3 Results for more complex lights

We conducted further tests of the proposed method using the more complex light sources shown in Fig. 14. In this experiment, we automatically set 35 light source positions for the 4-D light field estimation by computing the centers of the saturated regions in the image at d=0 mm. Figures 15 and 16 show the captured images and the relit images with F=34 for the light sources in Fig. 14 a, b, respectively. In these figures, the relit images by the proposed method with L1 and PC (I) and by the LS-based method (IV) are compared.
Fig. 14

Light sources used for further tests. a Triangularly-aligned shell type LED. b Flash light with lens and reflector

Fig. 15

Captured images of the triangularly aligned shell-type LED and relit images from the light fields reconstructed by two methods: (I) 4-D light field with L1 + PC and (IV) 4-D light field by LS. The dimension F of the spherical harmonic function for (I) and (IV) is 34, and 35 light source positions are used to reconstruct the 4-D light field. Surrounding red boxes indicate that these distances are included in the set of input images for light field estimation

Fig. 16

Captured images of the flash light and relit images from the light fields reconstructed by two methods: (I) 4-D light field with L1 + PC and (IV) 4-D light field by LS. The dimension F of the spherical harmonic function for (I) and (IV) is 34, and 35 light source positions are used to reconstruct the 4-D light field. Surrounding red boxes indicate that these distances are included in the set of input images for light field estimation

The light source in Fig. 14 a has three triangularly arranged LEDs. The projected shape of each light looks like a torus, and the three shapes partially overlap, as shown in Fig. 15. In the relit images, both methods (I) and (IV) reproduce good results at the positions where input images were captured (red-boxed images). However, the LS method (IV) gives completely different shapes for the near range d=[30,45] due to overfitting. In contrast, the proposed method (I) reproduces much better relit images even for the near range.

The light source in Fig. 14 b is a flash light with a lens and reflectors. For this light source, although method (I) recovers higher-frequency components in the relit images compared with method (IV), the relit results do not reach a satisfactory level, as shown in Fig. 16. We consider that the poor results are due to the lack of parameters to model the 4-D light field of this complex light, and that more input images are also necessary to obtain stable results. At this moment, more computational resources are needed to estimate this kind of complex 4-D light field, in which the virtual light source positions caused by the reflectors inside the light are spatially distributed.

6 Conclusion

In this paper, we have presented a novel 4-D light field reconstruction technique that utilizes a physical constraint and a regularization. We formulated the light field reconstruction problem as a convex optimization problem designed to decompose the observed intensities on the measurement plane into light ray intensities. Unlike conventional works, the proposed method can estimate the 4-D light field from a few images without special optics such as a mirror array, a lens array, or filters. As shown in the experiments, we confirmed the effectiveness of both the physical constraint and the L1-norm regularization. A remaining weakness of the current implementation is the difficulty of increasing the number of parameters due to the high computational cost, which prevents the method from handling more complex lighting environments. In future work, we intend to find more efficient bases for representing the light field in order to reduce the computational cost.

Although we assumed that the board has a Lambertian transmission property and ignored scattering effects, which did not noticeably affect the results in our experiments, a calibration method for the diffuser board should be considered for more precise reconstruction. In addition, we should evaluate the sensitivity of the proposed method using images with artificially added noise.

7 Endnotes

1 In this equation, the intensity o_i is represented in a continuous system. The effects of attenuation and the incident angle are taken into account by the integral over ray j.

2 In this paper, we regard ρ as constant by assuming that the transmission property of the diffuser is Lambertian.

3 For LS, we iteratively minimize the function starting from the zero vector using the conjugate gradient method, since B in Eq. (7) has small singular values due to the insufficient observations in this experiment. In this case, linear solvers cannot give a unique or stable LS solution.
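The behavior this note relies on can be sketched with a generic conjugate gradient iteration on the normal equations of a rank-deficient toy system (the matrix B below is an illustrative stand-in, not the one from Eq. (7)). Started from the zero vector, the iterates stay in the row space of B, so the method converges to the minimum-norm least-squares solution even though no unique solution exists:

```python
import numpy as np

rng = np.random.default_rng(2)

# Rank-deficient toy system: more unknowns than observations,
# so B^T B is singular and plain linear solvers fail.
m, n = 30, 60
B = rng.standard_normal((m, n))
o = B @ rng.standard_normal(n)

# Conjugate gradient on the normal equations B^T B x = B^T o,
# started from the zero vector.
x = np.zeros(n)
r = B.T @ o - B.T @ (B @ x)   # residual of the normal equations
p = r.copy()
for _ in range(200):
    Ap = B.T @ (B @ p)
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

# x agrees with the pseudo-inverse (minimum-norm) solution.
print("distance to pinv solution:", np.linalg.norm(x - np.linalg.pinv(B) @ o))
```

Stopping the iteration early acts as an additional regularizer, which is why initializing from zero matters for stability.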



Acknowledgements

This research was supported in part by JSPS KAKENHI Grant No. 23240024 and Grant-in-Aid for Exploratory Research Grant No. 25540086.

Authors’ contributions

TA designed the study, performed the experiments, and drafted the manuscript. TS participated in the design of the study and helped to draft the manuscript. YM conceived of the study and participated in its design. NY gave technical support and conceptual advice. All authors discussed the results and implications and commented on the manuscript at all stages. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

Nara Institute of Science and Technology


  1. Debevec P, Malik J (1997) Recovering high dynamic range radiance maps from photographs. In: Proc. ACM SIGGRAPH, 369–378. ACM, Los Angeles
  2. Hara K, Nishino K, Ikeuchi K (2003) Determining reflectance and light position from a single image without distant illumination assumption. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, 560–567. IEEE, Nice
  3. Park J, Sinha SN, Matsushita Y, Kweon I (2014) Calibrating a non-isotropic near point light source using a plane. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2267–2274. IEEE, Ohio
  4. Takai T, Nimura S, Maki A, Matsuyama T (2007) Self shadows and cast shadows in estimating illumination distribution. In: Proc. European Conf. on Visual Media Production, 1–10. IEEE, England
  5. Ashdown I (1993) Near-field photometry: a new approach. J Illum Eng Soc 22(1): 163–180
  6. Siegel MW, Stock RD (1996) A general near-zone light source model and its application to computer automated reflector design. SPIE Optical Eng 35(9): 2661–2679
  7. Rykowski RF, Wooley C (1997) Source modeling for illumination design. In: Lens Design, Illumination, and Optomechanical Modeling, Proc. SPIE, 204–208
  8. Jenkins DR, Monch H (2000) Source imaging goniometer method of light source characterization for accurate projection system design. In: Proc. SID Conf., 862–865. Blackwell Publishing Ltd
  9. Unger J, Wenger A, Hawkins T, Gardner A, Debevec P (2003) Capturing and rendering with incident light fields. In: Proc. Eurographics Symposium on Rendering, 141–149. ACM, Leuven
  10. Unger J, Gustavson S, Larsson P, Ynnerman A (2008) Free form incident light fields. Comput Graphics Forum 27(4): 1293–1301
  11. Goesele M, Granier X, Heidrich W, Seidel H-P (2003) Accurate light source acquisition and rendering. In: Proc. ACM SIGGRAPH, 621–630. ACM, California
  12. Nakamura M, Oya S, Okabe T, Lensch H (2008) Acquiring 4D light fields of self-luminous extended light sources using programmable filter. ACM Trans Graphics 27(3): 57:1–57:6
  13. Cossairt O, Nayar S, Ramamoorthi R (2008) Light field transfer: global illumination between real and synthetic objects. ACM Trans Graphics 27(3): 57:1–57:6
  14. Wang Y, Samaras D (2016) Estimation of multiple illuminants from a single image of arbitrary known geometry. IEICE Trans Inf Syst E99-D(9): 2360–2367
  15. Frolova D, Simakov D, Basri R (2004) Accuracy of spherical harmonic approximations for images of Lambertian objects under far and near lighting. In: Proc. European Conf. on Computer Vision, 574–587. Springer, Berlin
  16. Ramamoorthi R, Hanrahan P (2001) A signal-processing framework for inverse rendering. In: Proc. ACM SIGGRAPH, 117–128. ACM
  17. Sato I, Sato Y, Ikeuchi K (2003) Illumination from shadows. IEEE Trans Pattern Anal Mach Intell 25(3): 290–300
  18. Okabe T, Sato I, Sato Y (2004) Spherical harmonics vs. Haar wavelets: basis for recovering illumination from cast shadows. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, 50–57. IEEE, Washington
  19. Aoto T, Taketomi T, Sato T, Mukaigawa Y, Yokoya N (2013) Linear estimation of 4-D illumination light field from diffuse reflections. In: Proc. IAPR Asian Conference on Pattern Recognition, 496–500. IAPR, Naha
  20. Gabay D, Mercier B (1976) A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput Math Appl 2(1): 17–40


© The Author(s) 2017