
  • Research Paper
  • Open Access

Decomposition of reflection and scattering by multiple-weighted measurements

IPSJ Transactions on Computer Vision and Applications 2018, 10:13

https://doi.org/10.1186/s41074-018-0049-4

  • Received: 16 June 2018
  • Accepted: 26 September 2018
  • Published:

Abstract

An observed image is composed of multiple components based on optical phenomena, such as light reflection and scattering. Decomposing the observed image into individual components is an important process for various computer vision tasks. Although many decomposition methods exist, no general approach to combining them has been established. This paper proposes a general approach that combines different decomposition methods in a linear algebraic manner, called multiple-weighted measurements.

Experimental results show that the proposed approach decomposes observed images into four optical components based on diffuse and specular reflection and single and multiple scattering. The decomposed components are applied to material segmentation as an application.

Keywords

  • Decomposition
  • Reflection
  • Scattering
  • Light transport

1 Introduction

An observed image is composed of multiple components based on optical phenomena, such as light reflection and scattering. However, most scene analysis methods in computer vision assume only simple optical phenomena. For example, both shape-from-shading [1] and photometric stereo [2], which recover the shape of an object, assume that the observed image is due to diffuse reflection alone. Measurement of reflectance [3] often assumes only reflection, not scattering. Thus, decomposition methods are important for various computer vision tasks because unexpected optical components in the observed image can disturb scene analysis.

Various optical components have been targeted for decomposition so that only the expected component is extracted, since the expected component differs among scene analysis methods. For example, a polarization-based method [4] is expected to remove the specular reflection component, and an active method using a projector-camera system [5] is designed to separate direct and indirect illumination components. However, the polarization-based method also removes the single scattering component, and the separated direct illumination component still contains multiple components such as diffuse and specular reflection. Combining these different methods could enable decomposition into more detailed components, but no general approach to combining them exists.

In this paper, we propose a general approach that combines different decomposition methods in a linear algebraic manner, called multiple-weighted measurements. From a novel perspective, a decomposition method can be regarded as a weighted measurement, which attenuates some of the components with weights derived from the method. A weighted measurement is formulated in linear algebra, which makes it possible to combine different kinds of decomposition methods.

Experimental results show that the proposed approach decomposes observed images into four optical components based on diffuse and specular reflection and single and multiple scattering. The decomposed components are applied to material segmentation as an application.

2 Related work

Researchers in computer vision and computer graphics have studied how to separate, remove, or extract optical components in observed images. The terms they use differ but essentially mean the same thing as decomposition. The target components differ depending on the application.

2.1 Reflection component

The first interest is in light reflection. Shafer [6] proposed the dichromatic reflectance model, in which the color of specular reflection depends on the color of the light source while the color of diffuse reflection depends on the color of the object. Since then, much work has employed the dichromatic reflectance model to separate the diffuse and specular reflection components [7–15].

Another effective technique to separate the reflection components is based on polarization. Wolff and Boult [4] utilized linear polarization to remove the specular reflection component of the observed image. Many researchers have also used linear or circular polarization [16–19]. Methods based on color and those based on polarization can be combined because the two cues are complementary [20–24].

Moreover, other cues have been used to separate the diffuse and specular reflection components. Ikeuchi and Sato [25] used both range and brightness images. Nishino et al. [26] assumed a known object geometry to separate view-independent (diffuse) and view-dependent (specular) components. Mukaigawa et al. [27] analyzed the reflection components based on photometric linearization. Mallick et al. [28] formulated a decomposition model based on locally spatial and spatio-temporal interactions. Tao et al. [29] used line consistency based on the relationship between light field data and the dichromatic model.

Interreflections, the phenomenon of multiple reflections within a scene, are often targeted as another reflection component. Seitz et al. [30] proposed a theory of inverse light transport to separate interreflections into per-bounce components. Bai et al. [31] developed a duality theory of forward and inverse light transport and then separated interreflections.

2.2 Scattering component

Light scattering is often regarded as a component to be removed because it disturbs scene analysis methods. Gilbert and Pernicka [32] removed the single scattering component in water by using circular polarization. Many polarization-based methods have been proposed to remove the scattering component and obtain a clear appearance in a hazy atmosphere [33] and in muddy water [34]. Ghosh et al. [35] separated the scattering components of different layers in a layered object, such as human skin, by using a polarization-based method. Kim et al. [36] fused the polarization technique with a light field camera to decompose specular reflection, single scattering, and scattering components at different layers.

Narasimhan and Nayar [37, 38] analytically modeled light scattering in the atmosphere and then proposed a method to remove the scattering components of fog and haze. Wu and Tang [39] decomposed the diffuse reflection, specular reflection, and subsurface scattering components based on the model proposed by Lin and Lee [40].

Nayar et al. [5] proposed an effective method, called high frequency illumination, to quickly separate direct and global illumination components. Gupta et al. [41] combined high frequency illumination with the polarization technique to remove scattering components. Mukaigawa et al. [42] extended high frequency illumination to separate the single and multiple-scattering components. Fuchs et al. [43] employed confocal imaging for descattering. Kim et al. [44] removed the scattering components by analyzing light field data.

2.3 Applications

Various motivations exist for decomposition methods. Light scattering causes unclear images in the atmosphere and in water. Many methods have been proposed to remove the effect of scattering and obtain a clear image [32, 33, 37, 38, 45].

The removed scattering component can be used for reconstructing a depth map [46]. The effect of scattering depends on the distance, so the distance can be estimated once the scattering component is extracted. The scattering component plays an important role in such a method, even though that component is often regarded as an obstacle.

Another motivation is to push the envelope of scene analysis. Traditional scene analysis methods cannot work properly in the real world, as mentioned in Section 1. Since traditional photometric stereo assumes diffuse reflection, it does not work well for glossy surfaces. One solution is to separate out the specular reflection component based on the dichromatic reflectance model as preprocessing [8, 9, 47]. Inoshita et al. [48] used the nature of single scattering to obtain the shape of a translucent object, using high frequency illumination to separate the single and multiple-scattering components [42].

A motivation in computer graphics is to improve the appearance of rendered images. Ghosh et al. [35] modeled layered facial reflectance consisting of specular reflection, single scattering, and shallow and deep subsurface scattering components to achieve high-quality rendering. Decomposition also plays an important role in understanding optical phenomena. Separation of the single and multiple-scattering components enabled analysis of light propagation in a medium [42]. Wu et al. [49] analyzed global light transport using time-of-flight imaging through decomposition of the direct illumination, subsurface scattering, and interreflection components.

Decomposition has the potential to improve the performance of these applications because their performance depends on the quality of the decomposition. That is why the proposed approach can play an important role in various academic fields.

3 Multiple-weighted measurements

An intensity observed at a pixel by a conventional camera is a mixture of signals derived from various optical phenomena, such as light reflection and scattering. Assuming m components in the mixture, the observed image \(\boldsymbol {s} \in \mathbb {R}^{\mathrm {P}}\) is written as
$$\begin{array}{*{20}l} \boldsymbol{s} = \sum\limits_{i=1}^{m}{\boldsymbol{c}_{i}}, \end{array} $$
(1)
where P is the number of pixels in an image and \(\boldsymbol {c}_{i} \in \mathbb {R}^{\mathrm {P}} (1 \le i \le m)\) is a component image. The purpose of this paper is to obtain each component image ci from multiple observations. The component images can be defined in various ways, e.g., diffuse and specular reflection, single and multiple scattering, or direct and global illumination components. If a method existed that could individually measure each of the components, no decomposition method would be required. However, no such individual measurement exists, which is why there are many decomposition methods. Even so, existing decomposition methods still do not provide individual measurements. For example, a decomposition method using polarization is expected to separate the specular reflection component from the others, but the specular reflection component separated by polarization still includes the single scattering component. Thus, we regard a decomposition method as the extraction of a part of the mixture, which we call a weighted measurement. The decomposition method attenuates some components with a weight vector \(\boldsymbol {w} \in \mathbb {R}^{m}\)
$$\begin{array}{*{20}l} \boldsymbol{w} = \left[ w_{1}~w_{2}~\cdots~w_{m} \right]^{\top}. \end{array} $$
(2)
The observed image s can be expressed by using the weight vector w as follows:
$$\begin{array}{*{20}l} \boldsymbol{s} = \sum\limits_{i=1}^{m}{w_{i} \boldsymbol{c}_{i}}. \end{array} $$
(3)
The weighted measurement is formulated in matrix form as follows:
$$\begin{array}{*{20}l} \boldsymbol{s} = \boldsymbol{C} \boldsymbol{w}, \end{array} $$
(4)

where \(\boldsymbol {C} = \left [\boldsymbol {c}_{1}~\boldsymbol {c}_{2}~\cdots~\boldsymbol {c}_{m} \right ] \in \mathbb {R}^{\mathrm {P} \times m}\) is the component matrix.

Given n (≥ m) different weighted measurements, the observed image s^j (1 ≤ j ≤ n) by the j-th measurement with a weight vector w^j is formulated as below:
$$\begin{array}{*{20}l} \boldsymbol{s}^{j} = \boldsymbol{C} \boldsymbol{w}^{j}. \end{array} $$
(5)
All the measurements can be expressed in matrix form as below:
$$\begin{array}{*{20}l} \boldsymbol{S} = \boldsymbol{C} \boldsymbol{W}, \end{array} $$
(6)

where \(\boldsymbol {S} = \left [ \boldsymbol {s}^{1}~\boldsymbol {s}^{2}~\cdots~\boldsymbol {s}^{n} \right ] \in \mathbb {R}^{\mathrm {P} \times n}\) is the observation matrix and \(\boldsymbol {W} = \left [ \boldsymbol {w}^{1}~\boldsymbol {w}^{2}~\cdots~\boldsymbol {w}^{n} \right ] \in \mathbb {R}^{m \times n}\) is the weight matrix. We call this formulation multiple-weighted measurements.

The decomposition this paper aims at is to obtain the component matrix C. When the weight matrix is square (n = m) and has full rank, rank(W) = m, the component matrix C can be computed by
$$\begin{array}{*{20}l} \boldsymbol{C} = \boldsymbol{S} \boldsymbol{W}^{-1}. \end{array} $$
(7)
When the weight matrix is a horizontally long rectangle (n > m) and rank(W) = m, the component matrix \(\hat {\boldsymbol {C}}\) is estimated in a least squares manner as follows:
$$\begin{array}{*{20}l} \hat{\boldsymbol{C}} = \boldsymbol{S} \boldsymbol{W}^{+} = \boldsymbol{S} \boldsymbol{W}^{\top} \left(\boldsymbol{W} \boldsymbol{W}^{\top}\right)^{-1}, \end{array} $$
(8)

where W+ is the pseudo inverse matrix of W. Finally, the decomposition is performed in a linear algebraic manner given a set of weighted measurements.

Additionally, the rank of the weight matrix reveals the feasibility of the decomposition before any measurement is performed. The decomposition is feasible only if the rank is full, rank(W) = m. Otherwise, additional measurement methods are required so that the rank becomes full. By the nature of least squares, a larger number of measurements yields a more stable solution, even when the rank is already full.
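As a sketch of how Eqs. (6)–(8) and the rank condition play out in code, the following NumPy snippet uses randomly generated component and weight matrices as stand-ins for real measurements (all sizes here are illustrative, not from the paper):

```python
import numpy as np

# Illustrative sizes: P pixels, m = 4 components, n = 6 weighted measurements.
P, m, n = 1000, 4, 6

rng = np.random.default_rng(0)
C_true = rng.random((P, m))   # component matrix C (P x m), unknown in practice
W = rng.random((m, n))        # weight matrix W (m x n), one column per measurement

# Feasibility check: decomposition requires rank(W) = m.
assert np.linalg.matrix_rank(W) == m, "rank-deficient W: add other measurements"

S = C_true @ W                # observation matrix S = C W, Eq. (6)

# Least-squares recovery, Eq. (8): C_hat = S W^+ (np.linalg.pinv gives W^+).
C_hat = S @ np.linalg.pinv(W)

print(np.allclose(C_hat, C_true))  # → True in this noise-free simulation
```

With noisy observations, the recovery is only approximate, which is where the least-squares interpretation of Eq. (8) matters.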

4 Decomposition of reflection and scattering components

In the previous section, we explained the theory of multiple-weighted measurements. A key to the proposed approach is designing the weight matrix W so that the decomposition becomes feasible. However, we cannot design the weight matrix arbitrarily because each weight vector is derived from a measurement method. This section describes how to build the weight matrix in our implementation.

4.1 Light reflection and scattering components

An observed intensity at a point is a mixture of various optical components. Figure 1a illustrates light reflection and scattering phenomena at the point. Light reflection is often classified into two components: diffuse and specular reflection. Diffuse reflection arises from the microfacet structure of the object surface, whereas specular reflection arises at the interface between the air and the object surface (Fig. 1b).
Fig. 1

Light reflection and scattering. a Reflection and scattering phenomena at a point. b Reflection is classified into two components; diffuse and specular reflection. c Scattering is also classified into two components; single and multiple scattering. d Interreflections and multiple-scattering phenomena are similarly based on multi-bounce collisions

Light scattering is also classified into two components: single and multiple scattering, according to research in computer vision [42, 50] and physics [51, 52]. Single scattering is caused by a one-bounce collision with a particle, or particle aggregation, inside an object, and is often seen in optically thin media (Fig. 1c). A well-known property of single scattering is that its intensity decays exponentially along the light path. On the other hand, multiple scattering is a phenomenon of multi-bounce collisions, often seen in optically thick media (Fig. 1c).

In this paper, we aim at decomposing observed images into the above four optical components: diffuse and specular reflection, and single and multiple scattering. Interreflections are not explicitly modeled in this implementation. Since interreflections and multiple-scattering phenomena are similarly based on multi-bounce collisions, with surfaces and with inside particles, respectively (Fig. 1d), both components are included in the multiple-scattering component.

4.2 Definition of measurement weights

Now, we define weight vectors w^j for the four components described above. We consider four distinct weight elements corresponding to the four components, i.e., diffuse reflection wDR, specular reflection wSR, single scattering wSS, and multiple scattering wMS. By definition, each weight lies in the range 0 ≤ wi ≤ 1. Therefore, a weight vector \(\boldsymbol w \in \mathbb {R}^{4}\) is written as
$$ \boldsymbol w = \left[w_{\text{DR}}~w_{\text{SR}}~w_{\text{SS}}~w_{\text{MS}}\right]^{\top}. $$
(9)

In the following, we describe separation methods and their corresponding weight vectors. Note that the weight vectors are theoretically determined from the methodology, instruments, and experimental setup.

4.2.1 Normal observation

An image taken under an ordinary illumination condition, e.g., uniform white illumination, contains all four components. We treat this observation as one that contains all the components equally, without any reduction. Therefore, the weight vector is defined as
$$ \boldsymbol w^{\text{NML}} = [1~1~1~1]^{\top}. $$
(10)
Figure 2a shows an image taken under white illumination projected by a projector. In the scene, there are a marble stone, two billiard balls, and three coins.
Fig. 2

Observed images by several separation methods. a Under an ordinary illumination, b circular polarization, c, d direct and global components by high frequency illumination, e, f direct and global components by sweeping high frequency illumination, g, h direct and global components by high frequency illumination with circular polarization

4.2.2 Circular polarization

Techniques based on circular polarization can separate specular reflection [4, 19] and single scattering [32, 34] from the other components. The nature of circular polarization is that right-handed (or left-handed) circularly polarized light cannot pass through a left-handed (or right-handed) circular polarizer. Since a one-bounce collision reverses the handedness of polarized light, specular reflection and single scattering, which are derived from a one-bounce collision with a surface and with an inside particle, respectively, reverse the handedness of the polarized incident light. On the other hand, multi-bounce collisions, as in diffuse reflection and multiple scattering, turn polarized light into unpolarized light. Therefore, placing same-handed circular polarizers in front of both the light source and the camera removes specular reflection and single scattering.

In practice, however, a polarizer does not transmit and block light perfectly; it is characterized by the single transmittance ts and the crossed transmittance tc. The single transmittance ts is the ratio of the power of light passed through the polarizer to that of the incident unpolarized light. The crossed transmittance tc is the ratio of the power of light passed through a one-handed polarizer to that of the incident opposite-handed polarized light. Thus, the weight vector is defined as
$$ \boldsymbol w^{\text{CP}} = \left[t_{s}^{2}~t_{s} t_{c}~t_{s} t_{c}~t_{s}^{2}\right]^{\top}. $$
(11)

Figure 2b shows an image of the same scene observed using the circular polarization technique. We simply put a circular polarizer in front of the projector and a same-handed circular polarizer in front of the camera. As we can see, the coins can hardly be seen and the highlights on the balls are removed.

4.2.3 High frequency illumination

High frequency illumination, proposed in [5], can separate the direct and global illumination components of a scene from images observed under spatially high-frequency illumination patterns, such as a checkerboard pattern. The direct illumination component includes light directly reflected by surfaces in the scene, and the global component includes the rest, such as indirectly reflected light, scattered light, and transmitted light. For details, we refer the reader to [5]. In this instance, the direct and global illumination components correspond to the reflection and scattering components, respectively. Thus, the weight vectors for the direct and global illumination components are defined as
$$ \left\{ \begin{array}{l} {\boldsymbol w}^{\text{HFI}}_{\mathrm{D}} = [1~1~0~0]^{\top}, \\ {\boldsymbol w}^{\text{HFI}}_{\mathrm{G}} = [0~0~1~1]^{\top}. \end{array} \right. $$
(12)

The separated direct and global illumination components of the scene are shown in Fig. 2c, d, respectively. Since the marble stone is a translucent object, the intensity in the marble stone region is mostly included in the global component. The billiard balls are also translucent to some extent, so the texture on the ball, e.g., the number 3, is blurred in the global component while it is clear in the direct one. We can also see specular interreflections on the white ball, i.e., light reflected by the coins and then reflected by the ball again.

4.2.4 Sweeping high frequency illumination

Mukaigawa et al. [42] proposed sweeping high frequency illumination, which can separate the single and multiple-scattering components of a scene by projecting spatially high-frequency stripe patterns, inspired by high frequency illumination [5]. The separated direct component includes not only light reflection but also single scattering, while the global component includes the others, such as multiple scattering and interreflections. Therefore, the weight vectors for the direct and global components are defined as
$$ \left\{ \begin{array}{l} {\boldsymbol w}^{\text{SHFI}}_{\mathrm{D}} = [1~1~1~0]^{\top}, \\ {\boldsymbol w}^{\text{SHFI}}_{\mathrm{G}} = [0~0~0~1]^{\top}. \end{array} \right. $$
(13)

The separated direct and global components of the scene are shown in Fig. 2e, f, respectively. Compared with the direct component of high frequency illumination (c), the marble stone region in the direct component (e) is brighter because the single scattering component is included.

4.2.5 New combination: high frequency illumination with circular polarization

A combination of several separation methods lets us define another weight vector. For example, we combine the high frequency illumination technique with the circular polarization technique. This is easily implemented with the projector-camera system used for high frequency illumination and a pair of same-handed circular polarizers. The combination can separate direct and global components, similar to high frequency illumination, but the specular reflection and single scattering components are removed from both. In this instance, each element of the new weight vector is the product of the corresponding elements of the original weight vectors, Eqs. (11) and (12). Thus, the weight vectors are defined as
$$ \left\{ \begin{array}{l} {\boldsymbol w}^{\text{HFICP}}_{\mathrm{D}} = {\boldsymbol w}^{\text{HFI}}_{\mathrm{D}} \circ {\boldsymbol w}^{\text{CP}} = \left[t_{s}^{2}~t_{s} t_{c}~0~0\right]^{\top}, \\ {\boldsymbol w}^{\text{HFICP}}_{\mathrm{G}} = {\boldsymbol w}^{\text{HFI}}_{\mathrm{G}} \circ {\boldsymbol w}^{\text{CP}} = \left[0~0~t_{s} t_{c}~t_{s}^{2}\right]^{\top}, \end{array} \right. $$
(14)

where ∘ is the Hadamard product operator. Note that the new weight vectors, Eq. (14), are linearly independent of those of the circular polarization, Eq. (11), and the high frequency illumination, Eq. (12). In this way, we can obtain a new weighted measurement simply by combining several separation methods.
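The Hadamard products of Eq. (14), and the claimed linear independence from Eqs. (11) and (12), can be checked numerically. A minimal sketch with illustrative transmittance values (not the product-specific ones used in the experiments):

```python
import numpy as np

# Illustrative polarizer transmittances (any values with ts != tc work).
ts, tc = 0.4, 0.001

w_cp    = np.array([ts**2, ts*tc, ts*tc, ts**2])  # circular polarization, Eq. (11)
w_hfi_d = np.array([1.0, 1.0, 0.0, 0.0])          # HFI direct, Eq. (12)
w_hfi_g = np.array([0.0, 0.0, 1.0, 1.0])          # HFI global, Eq. (12)

# Eq. (14): elementwise (Hadamard) products define the combined measurements.
w_hficp_d = w_hfi_d * w_cp   # [ts^2, ts*tc, 0, 0]
w_hficp_g = w_hfi_g * w_cp   # [0, 0, ts*tc, ts^2]

# The combined vector is linearly independent of the three originals
# as long as ts != tc, so the four vectors together span R^4.
M = np.stack([w_cp, w_hfi_d, w_hfi_g, w_hficp_d])
print(np.linalg.matrix_rank(M))  # → 4
```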

Figure 2g, h shows the separated direct and global components, respectively. As we can see, the specular reflection component is removed from the direct component (g). Specular interreflections do exist in the global component of high frequency illumination (d), e.g., on the coins. In the global component (h), however, they are completely removed thanks to the effect of circular polarization.

4.3 Weight matrix

We employ the five weighted measurements described above to implement the multiple-weighted measurements for decomposition into four components: diffuse and specular reflection, and single and multiple scattering. In this instance, the weight matrix \({\boldsymbol W} \in \mathbb {R}^{4 \times 8}\) consists of the eight weight vectors, as
$$ {}\begin{aligned} \boldsymbol{W} &= \left[ s^{\text{NML}}\boldsymbol{w}^{\text{NML}}~s^{\text{CP}}\boldsymbol{w}^{\text{CP}}~s^{\text{HFI}}\boldsymbol{w}^{\text{HFI}}_{\mathrm{D}}~s^{\text{HFI}}\boldsymbol{w}^{\text{HFI}}_{\mathrm{G}}~s^{\text{SHFI}}\boldsymbol{w}^{\text{SHFI}}_{\mathrm{D}} \right. \\ & \qquad \qquad \qquad \left. s^{\text{SHFI}}\boldsymbol{w}^{\text{SHFI}}_{\mathrm{G}}~s^{\text{HFICP}}\boldsymbol{w}^{\text{HFICP}}_{\mathrm{D}}~s^{\text{HFICP}}\boldsymbol{w}^{\text{HFICP}}_{\mathrm{G}} \right], \end{aligned} $$
(15)

where s^j is the global scale of each weighted measurement. The scales are determined by the experimental setup. In practice, the scales are normalized to one because the experimental setup is not changed while performing all of the weighted measurements. The eight weight vectors do not have to be linearly independent of each other as long as the weight matrix W has full rank. For example, \(\left [ {\boldsymbol w}^{\text {NML}}~{\boldsymbol w}^{\text {HFI}}_{\mathrm {D}}~{\boldsymbol w}^{\text {HFI}}_{\mathrm {G}} \right ]\) consists of linearly dependent columns because \({\boldsymbol w}^{\text {NML}} = {\boldsymbol w}^{\text {HFI}}_{\mathrm {D}} + {\boldsymbol w}^{\text {HFI}}_{\mathrm {G}}\). Nevertheless, all of them can be combined in the weight matrix W for a stable computation. A weight matrix can be designed before any measurement or computation; that is, rank analysis of the designed weight matrix tells us in advance whether the decomposition is feasible. In this instance, the weight matrix W has full rank because ts ≠ tc for a general polarizer. Therefore, the decomposition is a well-posed problem.

In fact, the rank of the weight matrix W can be systematically analyzed in this case. Let us consider the product \(\boldsymbol W \boldsymbol W^{\top } \in \mathbb {R}^{4 \times 4}\). Its determinant has a closed-form expression as
$$ \text{det}\left(\boldsymbol W \boldsymbol W^{\top}\right) = t_{s}^{2} \left(t_{s} - t_{c}\right)^{2} \left(15 t_{s}^{4} + 24 t_{s}^{2} t_{c} \left(t_{c} - t_{s}\right) + 16\right)\!. $$
(16)
Therefore, under the condition ts ≠ tc > 0, the determinant is positive and the weight matrix W is full-rank. In practice, the conditioning of the weight matrix W matters more for the stability of the pseudo inverse W+. One way to evaluate the conditioning is to assess the ratio between the largest and smallest singular values, σmax and σmin, of the weight matrix W, which can be numerically computed as
$$ \kappa(\boldsymbol W) = \frac{\sigma_{\text{max}}}{\sigma_{\text{min}}}. $$
(17)

κ(W) is often called the condition number of W.
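The rank and conditioning of the concrete weight matrix of Eq. (15) can be computed directly. A sketch using the transmittances reported for the polarizers in Section 5, with the global scales normalized to one:

```python
import numpy as np

ts, tc = 0.399, 0.0005  # single and crossed transmittances from Section 5

# The eight weight vectors of Section 4.2, one per column of W (Eq. 15).
W = np.array([
    [1, 1, 1, 1],                  # normal observation, Eq. (10)
    [ts**2, ts*tc, ts*tc, ts**2],  # circular polarization, Eq. (11)
    [1, 1, 0, 0],                  # HFI direct, Eq. (12)
    [0, 0, 1, 1],                  # HFI global, Eq. (12)
    [1, 1, 1, 0],                  # sweeping HFI direct, Eq. (13)
    [0, 0, 0, 1],                  # sweeping HFI global, Eq. (13)
    [ts**2, ts*tc, 0, 0],          # HFI + CP direct, Eq. (14)
    [0, 0, ts*tc, ts**2],          # HFI + CP global, Eq. (14)
]).T                               # W is 4 x 8

print(np.linalg.matrix_rank(W))    # → 4: the decomposition is feasible

# Condition number kappa(W) of Eq. (17): ratio of extreme singular values.
sv = np.linalg.svd(W, compute_uv=False)
print(sv[0] / sv[-1])
```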

5 Experiments

First, we verify the decomposition result of the proposed approach using a simple scene containing several typical materials (Section 5.1). Second, we analyze the repeatability of the decomposition and the effect of each weighted measurement (Section 5.2). Finally, we perform the decomposition in more complex scenes and discuss the results (Section 5.3).

We begin by describing the experimental setup. In all of the experiments in this paper, we use a 3M MPro160 projector as the light source and a Point Grey Research Chameleon color camera as the recording device, as shown in Fig. 3a. To employ the circular polarization technique, we use two circular polarizers, Kenko SQ Circular-PL, with ts = 0.399 and tc = 0.0005 as the product-specific values. In measurements with the polarization approach, we put them in front of the projector and the camera, as shown in Fig. 3b. For high frequency illumination, we project several checkerboard patterns whose block size is a 3 × 3-pixel square, as shown in Fig. 3c. Figure 3d illustrates a dotted line pattern for sweeping high frequency illumination, which also consists of vertically, or horizontally, repeated 3 × 3-pixel squares.
Fig. 3

Experimental setup. a Locations of the camera and the projector. b Circular polarizers placed in front of both of the camera and the projector. c A part of a checkerboard pattern used for high frequency illumination. d A part of a dotted line pattern used for sweeping high frequency illumination

In the experiments in this paper, we employ all of the weighted measurements described in Section 4.2 to obtain the observation matrix S. Since the weight matrix W is defined as in Eq. (15), we can compute the component matrix \(\hat {\boldsymbol C}\) by Eq. (8); that is, we obtain the decomposition into diffuse and specular reflection, and single and multiple-scattering components. Note that all of the weighted measurements are taken under the same experimental setup; thus, we assume all of the global scales in Eq. (15) are normalized to one.
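An end-to-end sanity check of this pipeline can be simulated: render the eight weighted observations of four synthetic component images with the weight matrix of Eq. (15), then recover the components with Eq. (8). The sketch below uses random images in place of real measurements:

```python
import numpy as np

ts, tc = 0.399, 0.0005  # the polarizer transmittances used in the experiments

# Weight matrix W (4 x 8) of Eq. (15), scales normalized to one; rows are the
# diffuse, specular, single-scattering, and multiple-scattering weights, and
# columns are NML, CP, HFI-D, HFI-G, SHFI-D, SHFI-G, HFICP-D, HFICP-G.
W = np.array([
    [1, ts**2, 1, 0, 1, 0, ts**2, 0],      # diffuse reflection
    [1, ts*tc, 1, 0, 1, 0, ts*tc, 0],      # specular reflection
    [1, ts*tc, 0, 1, 1, 0, 0, ts*tc],      # single scattering
    [1, ts**2, 0, 1, 0, 1, 0, ts**2],      # multiple scattering
])

rng = np.random.default_rng(0)
C = rng.random((64 * 64, 4))   # four synthetic component images, flattened

S = C @ W                      # the eight simulated observations, Eq. (6)
C_hat = S @ np.linalg.pinv(W)  # recovery by Eq. (8)

print(np.allclose(C_hat, C))   # → True in this noise-free simulation
```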

5.1 Verification

To verify the decomposition by the proposed approach, we use a simple and well-designed scene, as shown in Fig. 4a. The target scene consists of four typical materials: a ceramic board, a duralumin plate, a block of milky epoxy resin, and a cylinder of polyoxymethylene (POM) resin. On a matte ceramic surface, light tends to be diffused evenly in all directions because of its microstructure. Duralumin, a type of aluminum alloy, strongly reflects light at its surface, so specularity becomes dominant. Both resins are translucent media, but they have different translucencies, as shown in Fig. 4b. The block of milky epoxy resin is an optically thin medium; thus, we can observe a light ray in the medium, which is a feature of single scattering and depends on the incident light angle. On the other hand, in an optically thick medium, such as the cylinder of POM resin, the observed light does not depend on the incident light angle but spreads evenly because of multiple scattering.
Fig. 4

Verification. a The target scene consists of four materials: a ceramic board, a duralumin plate, a block of milky epoxy resin, and a cylinder of polyoxymethylene (POM). b The resins have different scattering properties. The observations are decomposed into four components: c diffuse and d specular reflection, and e single and f multiple-scattering components. g The proportion of the averaged intensities in each material region

We observed the scene with the five weighted measurements and then decomposed the observations into the four optical components by computing Eq. (8). The decomposed result is shown in Fig. 4c–f: (c) diffuse reflection, (d) specular reflection, (e) single scattering, and (f) multiple-scattering components. To analyze the result, we computed the average intensity of each material region in each optical component image and summarized the proportions of the averages in Fig. 4g. As expected, the dominant optical component varied across the materials: diffuse reflection was dominant in the ceramic board (78.7%), specular reflection in the duralumin plate (68.1%), single scattering in the block of milky epoxy resin (41.0%), and multiple scattering in the cylinder of POM (50.6%). Consequently, the verification shows that the proposed approach achieves a meaningful decomposition of observations into the four optical components, diffuse and specular reflection and single and multiple scattering, although it is difficult to analyze its performance quantitatively. Note that this decomposition cannot be achieved by applying any single existing separation method.

5.2 Analysis of decomposition results

We analyze the decomposition from two different perspectives. First, we show the repeatability of the decomposition. We take each weighted measurement five times under the same experimental setup and then compare the decomposition results. Each decomposition result is evaluated by the peak signal-to-noise ratio (PSNR) against the others. The comparison resulted in 42.1 dB PSNR on average with a standard deviation of 1.36 dB. The averages (and standard deviations) of the PSNRs for the diffuse reflection, specular reflection, single scattering, and multiple-scattering components are 42.1 (1.58), 42.0 (1.42), 42.2 (1.15), and 42.0 (1.24) dB, respectively. Consequently, the repeatability of the decomposition by the proposed approach is quite high.
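The PSNR used in this evaluation is a standard measure. A minimal sketch of how such a comparison could be computed, with synthetic images in place of the actual decomposition results:

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal size."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

# Hypothetical repeatability check: two decompositions of the same scene,
# differing only by small measurement noise.
rng = np.random.default_rng(1)
run1 = rng.random((64, 64))
run2 = run1 + rng.normal(scale=0.005, size=run1.shape)

print(f"{psnr(run1, run2):.1f} dB")  # a high PSNR indicates good repeatability
```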

Second, we evaluate the effect of each weighted measurement by comparing the decomposition result obtained with all of them against the result obtained without one of them. As Table 1 shows, omitting any one of the weighted measurements leads to a large change in the decomposition result. This is because the total number of weight vectors is small, so W+ changes significantly. For example, when the circular-polarization measurement is omitted, the PSNRs for the diffuse reflection, specular reflection, and single-scattering components become the lowest; that is, circular polarization is important for the decomposition. On the other hand, the PSNRs when the high-frequency illumination is omitted are relatively high, owing to the redundancy of the multiple-weighted measurements.
Table 1 Evaluation of the effect of each of the weighted measurements

| Removed measurement | Diffuse reflection | Specular reflection | Single scattering | Multiple scattering |
|---|---|---|---|---|
| Normal observation | 25.5 | 26.7 | 24.1 | 26.8 |
| Circular polarization | 25.2 | 26.4 | 24.1 | 26.6 |
| High frequency illumination | 26.3 | 27.6 | 25.2 | 26.1 |
| Sweeping high frequency illumination | 25.3 | 26.4 | 24.6 | 24.4 |
| High frequency illumination with circular polarization | 26.8 | 26.9 | 24.1 | 26.5 |

We performed the decomposition without one of the weighted measurements and compared the result against the decomposition with all of them, in PSNR [dB]
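The leave-one-out evaluation behind Table 1 amounts to dropping one row of the weight matrix, re-decomposing with the reduced pseudoinverse, and scoring each component against the full decomposition in PSNR. A sketch under the same illustrative (uncalibrated) 5×4 weight matrix and random stand-in observations:

```python
import numpy as np

def psnr(a, b, peak=1.0):
    mse = np.mean((a - b) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Illustrative 5x4 weight matrix (measurements x components); the calibrated
# weights of the actual setup would replace these values.
W = np.array([
    [1.0, 1.0, 1.0, 1.0],
    [1.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 1.0, 0.0],
    [0.5, 1.0, 1.0, 0.5],
    [1.0, 0.0, 1.0, 0.0],
])

rng = np.random.default_rng(2)
obs = rng.uniform(0, 1, size=(5, 256))      # stand-in for real measurements
full = np.linalg.pinv(W) @ obs              # decomposition with all rows

scores = {}
for i in range(W.shape[0]):
    keep = [j for j in range(W.shape[0]) if j != i]
    reduced = np.linalg.pinv(W[keep]) @ obs[keep]  # without measurement i
    scores[i] = [psnr(full[c], reduced[c]) for c in range(4)]
```

Low scores for a removed measurement indicate that the measurement contributes strongly to the decomposition, as observed for circular polarization in Table 1.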

5.3 Decomposition in complex scenes

We apply the decomposition to more realistic and complex scenes containing various everyday objects, as shown in Fig. 5. Scene (a) consists of plastic cards, coins, wax candles, and a plastic cup of soap water; scene (b) consists of a mechanical pencil, a leather pen case, an eraser, an aluminum ruler, and a sticky note; and scene (c) consists of coins, phenolic billiard balls, and a marble stone. Figure 5d shows the decomposed results: diffuse and specular reflection, and single- and multiple-scattering components, respectively, from left to right.
Fig. 5

Decomposition results in complex scenes. a–c The target scenes. d The observations in each of the scenes are decomposed into diffuse and specular reflection, and single and multiple-scattering components

In the diffuse reflection, single-scattering, and multiple-scattering components, intensity is observed to some extent on all the materials except metals, such as the coins and the ruler. This is because of subsurface scattering, as mentioned in [53, 54]; almost all real-world materials except metals are translucent to some extent. In the specular reflection component, intensity is observed not only on the metal materials but also on other materials, because specular reflection arises on any smooth surface, such as that of the billiard balls. Scattering media, such as the wax candles, the eraser, and the marble stone, show strong intensities in the single- and multiple-scattering components. Optically thin media, such as the soap water, the eraser, and the marble stone, show relatively stronger intensities than the other materials in the single-scattering component. Moreover, the intensity in the single-scattering component appears to depend on the shape of an object; e.g., the edges of the wax candles have stronger intensities than other parts. Note that we do not distinguish interreflections from multiple scattering in this paper, as mentioned in Section 4.1, so interreflections in the scenes are included in the multiple-scattering component.

6 Application: raw material segmentation

The decomposition enables a detailed scene analysis. In this paper, a raw material means an unpainted object consisting of a single material. The goal of raw material segmentation, similar to [55], where discriminative illuminations are used for classifying materials, is to classify the materials in an image based on their opacity and translucency. The proportion of optical components carries significant information about a material's properties, as we have seen in Fig. 4g. To show the potential of the decomposition, we perform it on a scene containing 19 objects made of 13 different materials, as shown in Fig. 6a, and then apply a segmentation based on the decomposition result.
Fig. 6

Raw material segmentation. a The target scene has 19 objects with 13 different raw materials. b–e Components decomposed by the proposed method: diffuse and specular reflection, and single and multiple scattering, respectively. f Segmentation results by k-means clustering with different k values from k=2 to 13

We show the decomposition of the observations into the four optical components in Fig. 6b–e: diffuse and specular reflection, and single- and multiple-scattering components, respectively. From the decomposition result, we form a normalized 4D feature vector, consisting of the four components, pixel by pixel. We then simply apply conventional k-means clustering as the segmentation to assess the effectiveness of the decomposition. The segmentation results are shown in Fig. 6f for a varying parameter k (2 ≤ k ≤ 13). In the visualization, regions of the same color belong to the same segment.
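The per-pixel clustering step can be sketched as follows: stack the four component images into a normalized 4D feature vector per pixel and cluster with plain k-means. A small self-contained NumPy implementation is used here in place of a library call, with deterministic farthest-point initialization; the synthetic two-material scene is illustrative only.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means on rows of X with farthest-point initialization."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])   # farthest point so far
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1),
                           axis=1)
        for j in range(k):                      # update non-empty clusters
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def segment(components, k):
    """components: (4, H, W) decomposed images -> (H, W) label map."""
    c, h, w = components.shape
    feats = components.reshape(c, -1).T.astype(float)
    # Normalize each pixel's feature so only the *proportion* of the optical
    # components matters, not the overall brightness.
    feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8
    return kmeans(feats, k).reshape(h, w)

# Toy usage: two synthetic "materials" with different component proportions.
h = w = 8
diffuse = np.zeros((h, w)); diffuse[:, : w // 2] = 0.9
single = np.zeros((h, w)); single[:, w // 2 :] = 0.9
comps = np.stack([diffuse + 0.05, 0.05 * np.ones((h, w)),
                  single + 0.05, 0.05 * np.ones((h, w))])
labels = segment(comps, k=2)   # left and right halves get distinct labels
```

Normalizing the feature vectors is the key design choice: it makes the clustering depend on the component proportions in Fig. 4g rather than on illumination intensity.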

When k=2, the segmentation result clearly distinguishes opaque from translucent materials; the blue and green regions correspond to opaque and translucent materials, respectively. When k=3, material 5 (duralumin) is segmented as an isolated region because of its unique material property: specular reflection is strong on material 5, since duralumin is a type of aluminum alloy. When k=4, material 8 (milky epoxy resin) is segmented as a sky-blue region because of its strong single-scattering component. When k=5, materials 13 and 19 (polypropylene resins, PP) are segmented as a separate region; a PP resin is a translucent medium that is optically thinner than the other translucent media except the milky epoxy resin. In the segmentation result with k=6, material 4 (wood) and material 14 (cowhide) are segmented separately because both are more opaque than the other materials segmented as the blue regions. When k=7, materials 7 (ceramic) and 17 (paper) are segmented as a new isolated region because they show stronger scattering components compared with the other opaque materials. When k=8, material 7 (ceramic) is separated into its own region. When k=9, materials 3 and 15 (acrylic) and 10 (polyvinyl chloride resin) are mainly separated. However, materials 16 and 18 (polyethylene resins, PE) and 11 and 12 (candles) are partially split even though each consists of a single material; this is because of their colors and the angle of illumination. When k=10, material 2 (rubber) is separated from material 1 (paper). When k=11 and 12, some regions on the same materials are separated because of the angle of illumination. The result at k=13 has only 12 segments, the same as at k=12. Consequently, the translucent materials are classified into six types and the opaque materials into six as well. This application shows that it is reasonable to classify various opaque and translucent materials based on the decomposition by the proposed approach.

Additionally, we compare the segmentation result with a conventional baseline. Assuming that only the RGB channels are available for segmentation, we performed k-means clustering with k=7 in the RGB space, which resulted in Fig. 7a. Evidently, a color-based segmentation approach has difficulty separating segments based on material properties. In contrast, our approach yields a segmentation based on material properties, as shown in Fig. 7b.
Fig. 7

Comparison of segmentation results. a Segmented result in the RGB space. b Segmented result based on the decomposition

7 Conclusion

In this paper, we proposed a general approach called multiple-weighted measurements, which enables any kinds of separation methods, such as color-based, polarization-based, and active projection-based ones, to be combined uniformly so as to finely decompose observations. As an implementation, we defined the weight vectors of five different weighted measurements and combined them in the proposed approach to decompose observations into four optical components: diffuse and specular reflection, and single- and multiple-scattering components. The experimental verification showed that the decomposition was reasonable, because the proportions of the decomposed components matched the expectations based on the physical properties of each material region in the image. In the experiments, we also performed the decomposition in various complex scenes, and we showed the possibility of applying it to raw material segmentation. The decomposition enables a novel segmentation based on the opacity and translucency of materials, unlike conventional segmentation based on color.

There are a few limitations in the proposed approach. First, shadows are not explicitly handled in the linear formulation (Eq. (1)); this may yield an unmodeled error in shadow regions when computing the decomposition. Second, unmodeled components are erroneously included in some of the four components. Other optical phenomena exist, such as refraction and fluorescence, although only the four components have been introduced in this paper; for example, light refracted by the plastic cup of soap water in target scene (a) in Fig. 5 can be seen in the diffuse component. Third, since the approach is based on a combination of multiple separation methods, the scene has to be static, and the total processing time is the sum of the times the individual separation methods take. The first limitation is a challenging problem, but solving it is worthwhile in order to expand the applicability of the decomposition. To resolve the second limitation, a method that can separate the other components must be added to the proposed approach. The third limitation cannot essentially be resolved, but the total processing time can be reduced if the target components are confined. As shown in Table 1, the implemented combination has redundancy for the decomposition; that is, a subset of the measurements may suffice for a given target component. If the number of combined measurements is reduced, the total processing time is reduced accordingly.

Declarations

Acknowledgements

This work is partly supported by JSPS KAKENHI grants JP17J05602, JP17K19979, and JP15H05918.

Funding

This work is partly supported by JSPS KAKENHI grants JP17J05602, JP17K19979, and JP15H05918.

Availability of data and materials

The data in this manuscript will not officially be shared because of the originality of its file format.

Authors’ contributions

TT designed and executed the experiments and wrote the manuscript. YMu is a supervisor and edited the manuscript. YMa is a co-supervisor and edited the manuscript. YY is a supervisor and advised on executing the experiments. All authors reviewed and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Nara Institute of Science and Technology (NAIST), Ikoma, Nara, Japan
(2)
Graduate School of Information Science and Technology, Osaka University, Suita, Osaka, Japan
(3)
The Institute of Scientific and Industrial Research, Osaka University, Ibaraki, Osaka, Japan

References

  1. Horn BKP (1975) Obtaining shape from shading information In: The Psychology of Computer Vision, 115–155.. McGraw-Hill, New York.Google Scholar
  2. Woodham RJ (1980) Photometric method for determining surface orientation from multiple images. Opt Eng 19(1):139–144.View ArticleGoogle Scholar
  3. Ben-Ezra M, Wang J, Wilburn B, Li X, Ma L (2008) An LED-only BRDF measurement device In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–8.. IEEE, Piscataway.Google Scholar
  4. Wolff LB, Boult TE (1991) Constraining object features using a polarization reflectance model. IEEE Trans Pattern Anal Mach Intell (TPAMI) 13(7):635–657.View ArticleGoogle Scholar
  5. Nayar SK, Krishnan G, Grossberg MD, Raskar R (2006) Fast separation of direct and global components of a scene using high frequency illumination In: Proc. of ACM SIGGRAPH, 935–944.. ACM, New York.Google Scholar
  6. Shafer SA (1985) Using color to separate reflection components. Color Res Appl 10(4):210–218.View ArticleGoogle Scholar
  7. Klinker G, Shafer S, Kanade T (1988) The measurement of highlights in color images. Int J Comp Vision (IJCV) 2(1):7–32.View ArticleGoogle Scholar
  8. Sato Y, Ikeuchi K (1994) Temporal-color space analysis of reflection. J Opt Soc Am A 11(7):2990–3002.View ArticleGoogle Scholar
  9. Sato Y, Wheeler M, Ikeuchi K (1997) Object shape and reflectance modeling from observation In: Proc. of ACM SIGGRAPH, 379–387.. ACM, New York.Google Scholar
  10. Tan RT, Ikeuchi K (2003) Separating reflection components of textured surfaces using a single image In: Proc. of IEEE International Conference on Computer Vision (ICCV), 870–877.. IEEE, Piscataway.View ArticleGoogle Scholar
  11. Kim H, Jin H, Hadap S, Kweon I (2013) Specular reflection separation using dark channel prior In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1460–1467.. IEEE, Piscataway.Google Scholar
  12. Nguyen T, Vo QN, Yang HJ, Kim SH, Lee GS (2014) Separation of specular and diffuse components using tensor voting in color images. Appl Opt 53(33):7924–7936.View ArticleGoogle Scholar
  13. Yang Q, Tang J, Ahuja N (2015) Efficient and robust specular highlight removal. IEEE Trans Pattern Anal Mach Intell (TPAMI) 37(6):1304–1311.View ArticleGoogle Scholar
  14. Akashi Y, Okatani T (2016) Separation of reflection components by sparse non-negative matrix factorization. Comp Vision Image Underst 146:77–85.View ArticleGoogle Scholar
  15. Ren W, Tian J, Tang Y (2017) Specular reflection separation with color-lines constraint. IEEE Trans Image Process 26(5):2327–2337.MathSciNetView ArticleGoogle Scholar
  16. Müller V (1996) Elimination of specular surface-reflectance using polarized and unpolarized light In: Proc. of European Conference on Computer Vision (ECCV), 625–635.. Springer, Berlin.Google Scholar
  17. Debevec P, Hawkins T, Tchou C, Duiker HP, Sarokin W, Sagar M (2000) Acquiring the reflectance field of a human face In: Proc. of ACM SIGGRAPH, 145–156.. ACM, New York.Google Scholar
  18. Ma WC, Hawkins T, Peers P, Chabert CF, Weiss M, Debevec P (2007) Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination In: Proc. of Eurographics Symposium on Rendering, 183–194.. Wiley, Hoboken.Google Scholar
  19. Ghosh A, Chen T, Peers P, Wilson CA, Debevec P (2010) Circularly polarized spherical illumination reflectometry. ACM Trans Graph (ToG) 29(6):162–172.View ArticleGoogle Scholar
  20. Nayar SK, Fang XS, Boult T (1997) Separation of reflection components using color and polarization. Int J Comput Vis (IJCV) 21(3):163–186.View ArticleGoogle Scholar
  21. Lin S, Lee SW (1997) Detection of specularity using stereo in color and polarization space. Comp Vision Image Underst 65(2):336–346.View ArticleGoogle Scholar
  22. Kim DW, Lin S, Hong KS, Shum HY (2002) Variational specular separation using color and polarization In: Proc. of IAPR Workshop on Machine Vision Applications, 176–179. https://dblp.org/db/conf/mva/mva2002.html.
  23. Umeyama S, Godin G (2004) Separation of diffuse and specular components of surface reflection by use of polarization and statistical analysis of images. IEEE Trans Pattern Anal Mach Intell (TPAMI) 26(5):639–647.View ArticleGoogle Scholar
  24. Wang F, Ainouz S, Petitjean C, Bensrhair A (2017) Specularity removal: A global energy minimization approach based on polarization imaging. Comput Vis Image Underst 158:31–39.View ArticleGoogle Scholar
  25. Ikeuchi K, Sato K (1991) Determining reflectance properties of an object using range and brightness images. IEEE Trans Pattern Anal Mach Intell (TPAMI) 13(11):1139–1153.View ArticleGoogle Scholar
  26. Nishino K, Zhang Z, Ikeuchi K (2001) Determining reflectance parameters and illumination distribution from a sparse set of images for view-dependent image synthesis In: Proc. of IEEE International Conference on Computer Vision (ICCV), 599–606.. IEEE, Piscataway.Google Scholar
  27. Mukaigawa Y, Ishii Y, Shakunaga T (2007) Analysis of photometric factors based on photometric linearization. J Opt Soc Am A 24(10):3326–3334.View ArticleGoogle Scholar
  28. Mallick S, Zickler T, Belhumeur P, Kriegman D (2006) Specularity removal in images and videos: A PDE approach In: Proc. of European Conference on Computer Vision (ECCV), 550–563.. Springer, Berlin.Google Scholar
  29. Tao MW, Su JC, Wang TC, Malik J, Ramamoorthi R (2016) Depth estimation and specular removal for glossy surfaces using point and line consistency with light-field cameras. IEEE Trans Pattern Anal Mach Intell (TPAMI) 38(6):1155–1169.View ArticleGoogle Scholar
  30. Seitz SM, Matsushita Y, Kutulakos KN (2005) A theory of inverse light transport In: Proc. of IEEE International Conference on Computer Vision (ICCV), 1440–1447.. IEEE, Piscataway.Google Scholar
  31. Bai J, Chandraker M, Ng TT, Ramamoorthi R (2010) A dual theory of inverse and forward light transport In: Proc. of European Conference on Computer Vision (ECCV), 294–307.. Springer, Berlin.Google Scholar
  32. Gilbert GD, Pernicka JC (1967) Improvement of underwater visibility by reduction of backscatter with a circular polarization technique. Appl Opt 6(4):741–746.View ArticleGoogle Scholar
  33. Schechner YY, Narasimhan SG, Nayar SK (2003) Polarization-based vision through haze. Appl Opt 42(3):511–525.View ArticleGoogle Scholar
  34. Treibitz T, Schechner YY (2009) Active polarization descattering. IEEE Trans Pattern Anal Mach Intell (TPAMI) 31(3):385–399.View ArticleGoogle Scholar
  35. Ghosh A, Hawkins T, Peers P, Frederiksen S, Debevec P (2008) Practical modeling and acquisition of layered facial reflectance. ACM Trans Graph (ToG) 27:139. https://dl.acm.org/citation.cfm?id=1409092.View ArticleGoogle Scholar
  36. Kim J, Izadi S, Ghosh A (2016) Single-shot layered reflectance separation using a polarized light field camera In: Proc. of Eurographics Symposium on Rendering.. Wiley, Hoboken.Google Scholar
  37. Narasimhan SG, Nayar SK (2002) Vision and the atmosphere. Int J Comput Vis (IJCV) 48(3):233–254.View ArticleGoogle Scholar
  38. Narasimhan SG, Nayar SK (2003) Contrast restoration of weather degraded images. IEEE Trans Pattern Anal Mach Intell (TPAMI) 25(6):713–724.View ArticleGoogle Scholar
  39. Wu TP, Tang CK (2004) Separating specular, diffuse, and subsurface scattering reflectances from photometric images In: Proc. of European Conference on Computer Vision (ECCV), 419–433.. Springer, Berlin.Google Scholar
  40. Lin S, Lee SW (2000) An appearance representation for multiple reflection components In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 105–110.. IEEE, Piscataway.Google Scholar
  41. Gupta M, Narasimhan SG, Schechner YY (2008) On controlling light transport in poor visibility environments In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–8.. IEEE, Piscataway.Google Scholar
  42. Mukaigawa Y, Raskar R, Yagi Y (2011) Analysis of scattering light transport in translucent media. IPSJ Trans Comput Vis Appl 3:122–133.View ArticleGoogle Scholar
  43. Fuchs C, Heinz M, Levoy M, Seidel HP, Lensch HPA (2008) Combining confocal imaging and descattering. Comput Graph Forum 27(4):1245–1253.View ArticleGoogle Scholar
  44. Kim J, Lanman D, Mukaigawa Y, Raskar R (2010) Descattering transmission via angular filtering In: Proc. of European Conference on Computer Vision (ECCV), 86–99.. Springer, Berlin.Google Scholar
  45. Ju M, Zhang D, Wang X (2017) Single image dehazing via an improved atmospheric scattering model. Vis Comput Int J Comput Graph 33(12):1613–1625.Google Scholar
  46. Drews PLJ, Nascimento ER, Botelho SSC, Campos MFM (2016) Underwater depth estimation and image restoration based on single images. IEEE Comput Graph Appl 36(2):24–35.View ArticleGoogle Scholar
  47. Schlüns K, Wittig O (1993) Photometric stereo for non-lambertian surfaces using color information In: Proc. of International Conference on Image Analysis and Processing.. Springer, Berlin.Google Scholar
  48. Inoshita C, Mukaigawa Y, Matsushita Y, Yagi Y (2012) Shape from single scattering for translucent objects In: Proc. of European Conference on Computer Vision (ECCV), 371–384.. Springer, Berlin.Google Scholar
  49. Wu D, O’Toole M, Velten A, Agrawal A, Raskar R (2012) Decomposing global light transport using time of flight imaging In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 366–373.. IEEE, Piscataway.Google Scholar
  50. Chen T, Lensch HP, Fuchs C, Seidel HP (2007) Polarization and phase-shifting for 3D scanning of translucent objects In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–8.. IEEE, Piscataway.Google Scholar
  51. van de Hulst HC (1957) Light scattering by small particles. Courier Dover Publications, New York.Google Scholar
  52. Meyer WV, Cannell DS, Smart AE, Taylor TW, Tin P (1997) Multiple-scattering suppression by cross correlation. Appl Opt 36(30):7551–7558.View ArticleGoogle Scholar
  53. Wang R, Cheslack-Postava E, Wang R, Luebke D, Chen Q, Hua W, Peng Q, Bao H (2008) Real-time editing and relighting of homogeneous translucent materials. Vis Comput 24(7):565–575.View ArticleGoogle Scholar
  54. Kurachi N (2011) The magic of computer graphics. CRC Press, Florida.View ArticleGoogle Scholar
  55. Liu C, Gu J (2012) Discriminative illumination: per-pixel classification of raw materials based on optimal projections of spectral BRDF In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 797–804.. IEEE, Piscataway.Google Scholar

Copyright

© The Author(s) 2018
