- Research Paper
- Open Access

# Decomposition of reflection and scattering by multiple-weighted measurements

Tsuyoshi Takatani^{1}, Yasuhiro Mukaigawa^{1}, Yasuyuki Matsushita^{2}, and Yasushi Yagi^{3}

IPSJ Transactions on Computer Vision and Applications **10**:13

https://doi.org/10.1186/s41074-018-0049-4

© The Author(s) 2018

**Received:** 16 June 2018 · **Accepted:** 26 September 2018 · **Published:** 23 October 2018

## Abstract

An observed image is composed of multiple components arising from optical phenomena such as light reflection and scattering. Decomposing the observed image into these individual components is an important step in many computer vision tasks. Although many decomposition methods exist, there is no general approach to combining them. This paper proposes a general approach, called multiple-weighted measurements, that combines different decomposition methods in a linear algebraic manner.

Experimental results show that the proposed approach decomposes observed images into four optical components based on diffuse and specular reflection and single and multiple scattering. The decomposed components are applied to material segmentation as an application.

## Keywords

- Decomposition
- Reflection
- Scattering
- Light transport

## 1 Introduction

An observed image is composed of multiple components arising from optical phenomena such as light reflection and scattering. However, most scene analysis methods in computer vision assume only simple optical phenomena. For example, both shape-from-shading [1] and photometric stereo [2], which recover the shape of an object, assume that the observed image is due solely to diffuse reflection. Measurement of reflectance [3] often assumes reflection alone, without scattering. Decomposition methods are therefore important for various computer vision tasks, because unexpected optical components in the observed image can disturb these scene analysis methods.

Various optical components have been targeted for decomposition, since the component of interest differs across scene analysis methods. For example, a polarization-based method [4] is expected to remove the specular reflection component, while an active method using a projector-camera system [5] is designed to separate direct and indirect illumination components. However, the polarization-based method also removes the single scattering component, and the direct illumination component still contains multiple components such as diffuse and specular reflection. Combining these different methods could enable decomposition into more detailed components, but no general approach to combining them exists.

In this paper, we propose a general approach, called multiple-weighted measurements, that combines different decomposition methods in a linear algebraic manner. The key insight is that a decomposition method can be regarded as a weighted measurement, which attenuates some of the components with weights determined by the method. A weighted measurement is formulated in linear algebra, which makes it possible to combine different kinds of decomposition methods.

Experimental results show that the proposed approach decomposes observed images into four optical components based on diffuse and specular reflection and single and multiple scattering. The decomposed components are applied to material segmentation as an application.

## 2 Related work

We begin by reviewing prior work. Researchers in computer vision and computer graphics have studied how to separate, remove, or extract optical components from observed images. The terminology differs, but these tasks are essentially the same as decomposition; the target components differ depending on the application.

### 2.1 Reflection component

The first line of interest concerns light reflection. Shafer [6] proposed the dichromatic reflectance model, in which the color of specular reflection depends on the color of the light source, while the color of diffuse reflection depends on the color of the object. A large body of subsequent work employed the dichromatic reflectance model to separate the diffuse and specular reflection components [7–15].

Another effective technique for separating the reflection components is based on polarization. Wolff and Boult [4] utilized linear polarization to remove the specular reflection component from the observed image, and many researchers have since used linear or circular polarization [16–19]. The color-based and polarization-based methods are complementary and can therefore be combined [20–24].

Other cues have also been used to separate the diffuse and specular reflection components. Ikeuchi and Sato [25] used both range and brightness images. Nishino et al. [26] assumed a known object geometry to separate view-independent (diffuse) and view-dependent (specular) components. Mukaigawa et al. [27] analyzed the reflection components based on photometric linearization. Mallick et al. [28] formulated a decomposition model based on local spatial and spatio-temporal interactions. Tao et al. [29] used line consistency based on the relationship between light field data and the dichromatic model.

Interreflections, the phenomenon of light reflecting multiple times within a scene, are often targeted as another reflection component. Seitz et al. [30] proposed a theory of inverse light transport to separate interreflections into per-bounce components. Bai et al. [31] developed a duality theory of forward and inverse light transport and used it to separate interreflections.

### 2.2 Scattering component

Light scattering is often regarded as a component to be removed because it disturbs scene analysis methods. Gilbert and Pernicka [32] removed the single scattering component in water by using circular polarization. Many polarization-based methods have been proposed for removing scattering components to recover a clear appearance in hazy atmosphere [33] and muddy water [34]. Ghosh et al. [35] separated the scattering components of different layers in a layered object, such as human skin, using a polarization-based method. Kim et al. [36] fused the polarization technique with a light field camera to decompose specular reflection, single scattering, and scattering components at different layers.

Narasimhan and Nayar [37, 38] analytically modeled light scattering in the atmosphere and proposed a method to remove the scattering components of fog and haze. Wu and Tang [39] decomposed the diffuse reflection, specular reflection, and subsurface scattering components based on the model proposed by Lin and Lee [40].

Nayar et al. [5] proposed an effective method, called high frequency illumination, to quickly separate direct and global illumination components. Gupta et al. [41] combined high frequency illumination with the polarization technique to remove scattering components. Mukaigawa et al. [42] extended high frequency illumination to separate the single and multiple-scattering components. Fuchs et al. [43] employed confocal imaging for descattering. Kim et al. [44] removed the scattering components by analyzing light field data.

### 2.3 Applications

Decomposition methods are driven by various motivations. Light scattering produces unclear images in the atmosphere and in water, and many methods have been proposed to remove the effect of scattering and obtain a clear image [32, 33, 37, 38, 45].

The removed scattering component can itself be used to reconstruct a depth map [46]. Because the effect of scattering depends on distance, the distance can be estimated once the scattering component is extracted. The scattering component thus plays an important role in such methods, even though it is often regarded as an obstacle.

Another motivation is to push the envelope of scene analysis. Traditional scene analysis methods cannot work properly in the real world, as mentioned in Section 1. Since traditional photometric stereo assumes diffuse reflection, it does not work well on glossy surfaces. One solution is to separate out the specular reflection component based on the dichromatic reflectance model as a preprocessing step [8, 9, 47]. Inoshita et al. [48] exploited the nature of single scattering to obtain the shape of a translucent object, using high frequency illumination to separate the single and multiple-scattering components [42].

A motivation in computer graphics is to improve the appearance of rendered images. Ghosh et al. [35] modeled layered facial reflectance, consisting of specular reflection, single scattering, and shallow and deep subsurface scattering components, to achieve high-quality rendering. Decomposition also plays an important role in understanding optical phenomena. Separating single and multiple-scattering components has enabled analysis of light propagation in a medium [42]. Wu et al. [49] analyzed global light transport using time-of-flight imaging through decomposition into direct illumination, subsurface scattering, and interreflection components.

Better decomposition has the potential to improve the performance of all these applications, because their performance depends on the quality of the decomposition. This is why the proposed approach can play an important role in various academic fields.

## 3 Multiple-weighted measurements

Suppose an observed image is a mixture of optical components. With *m* components in the mixture, the observed image \(\boldsymbol {s} \in \mathbb {R}^{\mathrm {P}}\) is written as

$$\boldsymbol{s} = \sum_{i=1}^{m} \boldsymbol{c}_{i}, \tag{1}$$

where \(\boldsymbol{c}_{i} \in \mathbb{R}^{\mathrm{P}}\) denotes the *i*-th component image. The goal of decomposition is to recover each component image \(\boldsymbol{c}_{i}\) from multiple observations. The component images can be defined in various manners, e.g., diffuse and specular reflection, single and multiple scattering, or direct and global illumination components. If a method existed that could measure each component individually, no decomposition would be required. However, no such individual measurement exists, which is why so many decomposition methods have been proposed. Even so, existing decomposition methods still do not provide individual measurements. For example, a decomposition method using polarization is expected to separate the specular reflection component from the others, but the specular reflection component separated by polarization still includes the single scattering component. Thus, we regard a decomposition method as the extraction of a part of the mixture, which we call a weighted measurement. A decomposition method weakens some components with a weight vector \(\boldsymbol {w} \in \mathbb {R}^{m}\):

$$\boldsymbol{s} = \sum_{i=1}^{m} w_{i} \boldsymbol{c}_{i} = \boldsymbol{C} \boldsymbol{w},$$

where \(\boldsymbol {C} = \left [\boldsymbol {c}_{1}\ \boldsymbol {c}_{2}\ \cdots\ \boldsymbol {c}_{m} \right ] \in \mathbb {R}^{\mathrm {P} \times m}\) is the component matrix.

With *n* (≥ *m*) different weighted measurements, the observed image \(\boldsymbol{s}^{j}\) (1 ≤ *j* ≤ *n*) produced by the *j*-th measurement with weight vector \(\boldsymbol{w}^{j}\) satisfies \(\boldsymbol{s}^{j} = \boldsymbol{C}\boldsymbol{w}^{j}\). Stacking all *n* measurements gives

$$\boldsymbol{S} = \boldsymbol{C}\boldsymbol{W},$$

where \(\boldsymbol {S} = \left [ \boldsymbol {s}^{1}\ \boldsymbol {s}^{2}\ \cdots\ \boldsymbol {s}^{n} \right ] \in \mathbb {R}^{\mathrm {P} \times n}\) is the observation matrix and \(\boldsymbol {W} = \left [ \boldsymbol {w}^{1}\ \boldsymbol {w}^{2}\ \cdots\ \boldsymbol {w}^{n} \right ] \in \mathbb {R}^{m \times n}\) is the weight matrix. We call this formulation multiple-weighted measurements.

If *n* = *m* and the weight matrix has full rank, rank(W) = *m*, then the component matrix C can be computed by

$$\boldsymbol{C} = \boldsymbol{S}\boldsymbol{W}^{-1}.$$

If *n* > *m* and rank(W) = *m*, then the component matrix \(\hat {\boldsymbol {C}}\) is estimated in a least-squares manner as

$$\hat{\boldsymbol{C}} = \boldsymbol{S}\boldsymbol{W}^{+}, \tag{8}$$

where W^{+} is the pseudo-inverse matrix of W. The decomposition is thus performed in a linear algebraic manner given a set of weighted measurements.

Additionally, the rank of the weight matrix reveals the feasibility of the decomposition before any measurements are performed. The decomposition is feasible only if the matrix has full rank, rank(W) = *m*; otherwise, additional measurement methods are required until the rank becomes full. Moreover, by the nature of least squares, the larger the number of combined measurements, the more stable the estimated solution, even when the rank is already full.
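The linear formulation above can be sketched numerically. The following is a minimal NumPy sketch; the number of pixels, the component values, and the example weight vectors are all illustrative assumptions, not the paper's actual measurements.

```python
import numpy as np

# Illustrative sizes: P pixels, m components, n weighted measurements.
P, m, n = 6, 4, 5

rng = np.random.default_rng(0)
C_true = rng.random((P, m))            # unknown component matrix C

# Each column of W is the weight vector of one measurement method
# (hypothetical 0/1 weights for illustration only).
W = np.array([
    [1.0, 1.0, 1.0, 1.0],              # e.g., a normal observation
    [1.0, 0.0, 0.0, 1.0],              # e.g., a polarization-like measurement
    [1.0, 1.0, 0.0, 0.0],              # e.g., a direct component
    [0.0, 0.0, 1.0, 1.0],              # e.g., a global component
    [1.0, 1.0, 1.0, 0.0],              # e.g., direct plus single scattering
]).T                                    # shape (m, n)

S = C_true @ W                          # observation matrix, S = C W

# Decomposition is feasible only if rank(W) = m.
assert np.linalg.matrix_rank(W) == m

C_hat = S @ np.linalg.pinv(W)           # least-squares estimate, C_hat = S W+
print(np.allclose(C_hat, C_true))       # True in this noise-free sketch
```

The rank assertion mirrors the feasibility check described above: with fewer independent weight vectors than components, the pseudo-inverse would not recover the individual components.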

## 4 Decomposition of reflection and scattering components

In the previous section, we explained the theory of multiple-weighted measurements. The key to the proposed approach is designing the weight matrix W so that the decomposition becomes feasible. However, we cannot design the weight matrix arbitrarily, because each weight vector is determined by a measurement method. This section describes how to build the weight matrix in our implementation.

### 4.1 Light reflection and scattering components

Light scattering is also classified into two components, single and multiple scattering, according to research in computer vision [42, 50] and physics [51, 52]. Single scattering is caused by a one-bounce collision with a particle, or particle aggregation, inside an object, and is often seen in optically thin media (Fig. 1c). A well-known property of single scattering is that its intensity decays exponentially along the light path. Multiple scattering, on the other hand, is a phenomenon of multi-bounce collisions, often seen in optically thick media (Fig. 1c).

In this paper, we aim to decompose observed images into the above four optical components: diffuse and specular reflection, and single and multiple scattering. Interreflections are not explicitly modeled in this implementation. Since interreflections and multiple scattering are similarly based on multi-bounce collisions, with surfaces and inside particles, respectively (Fig. 1d), interreflections are included in the multiple-scattering component.

### 4.2 Definition of measurement weights

We assign a weight vector \(\boldsymbol{w}^{j}\) to the four components as previously described. We consider four distinct weight elements corresponding to the four components, i.e., diffuse reflection *w*_{DR}, specular reflection *w*_{SR}, single scattering *w*_{SS}, and multiple scattering *w*_{MS}. By definition, each weight lies in the range 0≤*w*_{i}≤1. Therefore, a weight vector \(\boldsymbol w \in \mathbb {R}^{4}\) is written as

$$\boldsymbol{w} = \left[ w_{\mathrm{DR}}\ \ w_{\mathrm{SR}}\ \ w_{\mathrm{SS}}\ \ w_{\mathrm{MS}} \right]^{\top}.$$

In the following, we describe separation methods and their corresponding weight vectors. Note that the weight vectors are theoretically determined from the methodology, instruments, and experimental setup.

#### 4.2.1 Normal observation

A normal observation attenuates none of the components; every weight element is one, i.e., \({\boldsymbol w}^{\text {NML}} = \left [1\ 1\ 1\ 1\right ]^{\top}\). Figure 2a shows a normally observed image of the scene.

#### 4.2.2 Circular polarization

Techniques based on circular polarization can separate specular reflection [4, 19] and single scattering [32, 34] from the other components. The nature of circular polarization is that right-handed (or left-handed) circularly polarized light cannot transmit through a left-handed (or right-handed) circular polarizer. Since a one-bounce collision reverses the handedness of polarized light, specular reflection and single scattering, which are derived from a one-bounce collision with a surface and an inside particle, respectively, reverse the handedness of the polarized incident light. On the other hand, multi-bounce collisions, as in diffuse reflection and multiple scattering, turn polarized light into unpolarized light. Therefore, putting a same-handed circular polarizer in front of both a light source and a camera can remove specular reflection and single scattering.

A circular polarizer is characterized by two quantities: the single transmittance *t*_{s} and the crossed transmittance *t*_{c}. The single transmittance *t*_{s} is the ratio of the power of light passed through the polarizer to that of the incident unpolarized light. The crossed transmittance *t*_{c} is the ratio of the power of light passed through a one-handed polarizer to that of the incident opposite-handed polarized light. With same-handed polarizers on the light source and the camera, the one-bounce components (specular reflection and single scattering) are attenuated by *t*_{s}*t*_{c}, while the depolarized multi-bounce components (diffuse reflection and multiple scattering) are attenuated by *t*_{s}*t*_{s}. Thus, the weight vector is defined as

$$\boldsymbol{w}^{\text{CPL}} = \left[ t_{s}t_{s}\ \ t_{s}t_{c}\ \ t_{s}t_{c}\ \ t_{s}t_{s} \right]^{\top}.$$

Figure 2b shows an image of the same scene observed using the circular polarization technique. We simply put a circular polarizer in front of the projector and a same-handed circular polarizer in front of the camera. As can be seen, the coins are almost invisible and the highlights on the balls are removed.

#### 4.2.3 High frequency illumination

High frequency illumination [5] separates the direct and global illumination components by projecting high-frequency patterns. In terms of our four components, the direct component contains diffuse and specular reflection, while the global component contains single and multiple scattering, so the weight vectors are \({\boldsymbol w}^{\text {HFI}}_{\mathrm {D}} = \left [1\ 1\ 0\ 0\right ]^{\top}\) and \({\boldsymbol w}^{\text {HFI}}_{\mathrm {G}} = \left [0\ 0\ 1\ 1\right ]^{\top}\).

Separated direct and global illumination components of the scene are shown in Fig. 2c, d, respectively. Since the marble stone is a translucent object, the intensity in the marble stone region falls mostly into the global component. The billiard balls are also translucent to some extent, so the texture on a ball, e.g., the number 3, is blurred in the global component while it is sharp in the direct one. We can also see specular interreflections on the white ball: light reflected by the coins is reflected again on the ball.

#### 4.2.4 Sweeping high frequency illumination

Sweeping high frequency illumination [42] sweeps a high-frequency line pattern across the scene; unlike standard high frequency illumination, the single scattering component remains in the direct component. The weight vectors are therefore \({\boldsymbol w}^{\text {SHFI}}_{\mathrm {D}} = \left [1\ 1\ 1\ 0\right ]^{\top}\) and \({\boldsymbol w}^{\text {SHFI}}_{\mathrm {G}} = \left [0\ 0\ 0\ 1\right ]^{\top}\).

Separated direct and global components of the scene are shown in Fig. 2e, f, respectively. Compared with the direct component of high frequency illumination (c), the marble stone region in the direct component (e) is brighter because the single scattering component is included.

#### 4.2.5 New combination: high frequency illumination with circular polarization

A new weighted measurement is obtained by combining high frequency illumination with circular polarization. The weight vectors of the combined measurement are the element-wise products of the individual ones:

$$\boldsymbol{w}^{\text{HFI+CPL}}_{\mathrm{D}} = \boldsymbol{w}^{\text{HFI}}_{\mathrm{D}} \circ \boldsymbol{w}^{\text{CPL}}, \qquad \boldsymbol{w}^{\text{HFI+CPL}}_{\mathrm{G}} = \boldsymbol{w}^{\text{HFI}}_{\mathrm{G}} \circ \boldsymbol{w}^{\text{CPL}},$$

where ∘ is the Hadamard product operator. Note that the new weight vectors are linearly independent of those of the circular polarization and the high frequency illumination. In this way, a new weighted measurement can be obtained by simply combining several separation methods.

Figure 2g, h shows the separated direct and global components, respectively. As can be seen, the specular reflection component is removed in the direct component (g). Specular interreflections do exist in the global component of high frequency illumination (d), e.g., on the coins; however, in the global component (h), they are completely removed thanks to the effect of circular polarization.

### 4.3 Weight matrix

All of the weight vectors described above are stacked into the weight matrix

$$\boldsymbol{W} = \left[ s^{1}{\boldsymbol w}^{\text{NML}}\ \ s^{2}{\boldsymbol w}^{\text{CPL}}\ \ s^{3}{\boldsymbol w}^{\text{HFI}}_{\mathrm{D}}\ \ s^{4}{\boldsymbol w}^{\text{HFI}}_{\mathrm{G}}\ \ s^{5}{\boldsymbol w}^{\text{SHFI}}_{\mathrm{D}}\ \ s^{6}{\boldsymbol w}^{\text{SHFI}}_{\mathrm{G}}\ \ s^{7}{\boldsymbol w}^{\text{HFI+CPL}}_{\mathrm{D}}\ \ s^{8}{\boldsymbol w}^{\text{HFI+CPL}}_{\mathrm{G}} \right], \tag{15}$$

where *s*^{j} is the global scale of the *j*-th weighted measurement. The scales are determined by the experimental setup; in practice, they are normalized to one because the setup does not change while all of the weighted measurements are performed. The eight weight vectors do not all have to be linearly independent of each other as long as the weight matrix W has full rank. For example, \(\left [ {\boldsymbol w}^{\text {NML}}~{\boldsymbol w}^{\text {HFI}}_{\mathrm {D}}~{\boldsymbol w}^{\text {HFI}}_{\mathrm {G}} \right ]\) consists of linearly dependent columns because \({\boldsymbol w}^{\text {NML}} = {\boldsymbol w}^{\text {HFI}}_{\mathrm {D}} + {\boldsymbol w}^{\text {HFI}}_{\mathrm {G}}\); nevertheless, all of them can be combined in the weight matrix W for a more stable computation. A weight matrix can be designed ahead of measuring and computing; that is, rank analysis of the designed weight matrix tells us in advance whether the decomposition is feasible. In this instance, the weight matrix W has full rank because *t*_{s}≫*t*_{c} for a typical polarizer, so the decomposition is a well-posed problem.
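Building the weight matrix from the five measurement methods can be sketched as follows. The transmittances are the product-specific values quoted in Section 5; the exact weight vectors are reconstructions based on the qualitative descriptions in Section 4.2 (which components each method attenuates), so treat them as assumptions rather than the paper's published equations.

```python
import numpy as np

# Component order throughout: [DR, SR, SS, MS].
# t_s, t_c: single and crossed transmittances of the circular polarizer.
t_s, t_c = 0.399, 0.0005

w_nml = np.array([1, 1, 1, 1], dtype=float)          # normal observation
# same-handed polarizer pair attenuates one-bounce components by t_s*t_c
w_cpl = np.array([t_s * t_s, t_s * t_c, t_s * t_c, t_s * t_s])
w_hfi_d = np.array([1, 1, 0, 0], dtype=float)        # HFI direct
w_hfi_g = np.array([0, 0, 1, 1], dtype=float)        # HFI global
w_shfi_d = np.array([1, 1, 1, 0], dtype=float)       # sweeping HFI direct
w_shfi_g = np.array([0, 0, 0, 1], dtype=float)       # sweeping HFI global
# new combination: Hadamard product of HFI and polarization weights
w_hficpl_d = w_hfi_d * w_cpl
w_hficpl_g = w_hfi_g * w_cpl

# Stack the eight weight vectors as columns (global scales normalized to 1).
W = np.column_stack([w_nml, w_cpl, w_hfi_d, w_hfi_g,
                     w_shfi_d, w_shfi_g, w_hficpl_d, w_hficpl_g])

print(W.shape, np.linalg.matrix_rank(W))   # (4, 8) 4
```

The rank comes out as 4 precisely because the polarization weights break the symmetry between diffuse and specular reflection (t_s ≫ t_c), matching the well-posedness argument above.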

Since *t*_{s}≫*t*_{c}>0, the determinant is positive and the weight matrix W is full-rank. In practice, the conditioning of the weight matrix W is more important for the stability of the pseudo-inverse W^{+}. One way to evaluate the conditioning is the ratio of the largest to the smallest singular value, *σ*_{max} and *σ*_{min}, of the weight matrix W, which can be numerically computed as

$$\kappa(\boldsymbol{W}) = \frac{\sigma_{\max}}{\sigma_{\min}},$$

where *κ*(W) is often called the condition number of W.
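The condition number above is straightforward to compute from the singular values. A minimal sketch (the example matrix is illustrative, not a weight matrix from the paper):

```python
import numpy as np

def condition_number(W: np.ndarray) -> float:
    """kappa(W) = sigma_max / sigma_min from the SVD of W."""
    sigma = np.linalg.svd(W, compute_uv=False)  # singular values, descending
    return sigma[0] / sigma[-1]

# Example: a small full-rank matrix with singular values 2, 1, 1.
W = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(round(condition_number(W), 3))   # 2.0
```

A large condition number warns that the pseudo-inverse will amplify measurement noise even when the rank is technically full.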

## 5 Experiments

First, we verify the result of the decomposition by the proposed approach, using a simple scene containing several typical materials (Section 5.1). Second, we analyze the repeatability of the decomposition and the effect of each of the weighted measurements (Section 5.2). Finally, we perform the decomposition in various complex scenes and discuss the results (Section 5.3).

We use circular polarizers whose single transmittance is *t*_{s}=0.399 and crossed transmittance is *t*_{c}=0.0005, the product-specific values. In the measurements with the polarization approach, we put them in front of the projector and the camera, as shown in Fig. 3b. For the high frequency illumination, we project several checkerboard patterns whose block size is a 3×3 pixel square, as shown in Fig. 3c. Figure 3d illustrates a dotted line pattern for the sweeping high frequency illumination, which also consists of only vertically, or horizontally, repeated 3×3 pixel squares.

In the experiments in this paper, we employ all of the weighted measurements described in Section 4.2 to obtain the observation matrix S. Since the weight matrix W is defined as in Eq. (15), we can compute the component matrix \(\hat {\boldsymbol C}\) by Eq. (8); that is, we obtain the decomposition into diffuse and specular reflection, and single and multiple-scattering components. Note that all of the weighted measurements are performed under the same experimental setup, so we assume all of the global scales in Eq. (15) are normalized to one.

### 5.1 Verification

We observed the scene with the five weighted measurements and then decomposed them into the four optical components by computing Eq. (8). The decomposed result is shown in Fig. 4c–f: the (c) diffuse reflection, (d) specular reflection, (e) single scattering, and (f) multiple-scattering components. To analyze the result, we computed the average intensity of each material region in each optical component image and summarized the proportions of the averages in Fig. 4g. Consistent with our expectation, the dominant optical component varied across the materials: diffuse reflection was dominant on the ceramic board (78.7%), specular reflection on the duralumin plate (68.1%), single scattering in the block of milky epoxy resin (41.0%), and multiple scattering in the cylinder of POM (50.6%). Consequently, the verification shows that the proposed approach yields a meaningful decomposition of observations into the four optical components, diffuse and specular reflection and single and multiple scattering, although a fully quantitative evaluation of its accuracy is difficult. Note that this decomposition cannot be achieved by applying any single existing separation method.

### 5.2 Analysis of decomposition results

We analyze the decomposition from two perspectives. First, we examine the repeatability of the decomposition: we repeat each weighted measurement five times under the same experimental setup and compare the resulting decompositions. Each decomposition result is evaluated by the peak signal-to-noise ratio (PSNR) against the others. The comparison yielded 42.1 dB PSNR on average with a standard deviation of 1.36 dB. The averages (and standard deviations) of the PSNRs for the diffuse reflection, specular reflection, single scattering, and multiple-scattering components are 42.1 (1.58), 42.0 (1.42), 42.2 (1.15), and 42.0 (1.24) dB, respectively. This shows that the repeatability of the decomposition by the proposed approach is quite high.
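The repeatability comparison relies on PSNR between pairs of decomposition results. A minimal sketch of the metric; the peak value of 1.0 and the synthetic test images are illustrative assumptions, not the paper's data:

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Two slightly different "repetitions" of the same component image.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + rng.normal(0.0, 0.01, img.shape), 0.0, 1.0)
print(psnr(img, noisy) > 35.0)   # small perturbation -> high PSNR
```

Two nearly identical decompositions yield a high PSNR, which is why values above 40 dB indicate strong repeatability.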

Second, we evaluate the contribution of each weighted measurement by removing it from the set and performing the decomposition with the remaining four; Table 1 lists the resulting PSNRs against the full decomposition. When a measurement is removed, the pseudo-inverse W^{+} changes significantly. For example, when the circular polarization is removed, the PSNRs for the diffuse reflection, specular reflection, and single scattering components become the lowest; that is, the circular polarization is important for the decomposition. On the other hand, the PSNRs when removing the high frequency illumination are relatively high, owing to the redundancy of the multiple-weighted measurements.

**Table 1** Evaluation of the effect of each of the weighted measurements (PSNR in dB)

| Removed measurement | Diffuse reflection | Specular reflection | Single scattering | Multiple scattering |
| --- | --- | --- | --- | --- |
| Normal observation | 25.5 | 26.7 | 24.1 | 26.8 |
| Circular polarization | 25.2 | 26.4 | 24.1 | 26.6 |
| High frequency illumination | 26.3 | 27.6 | 25.2 | 26.1 |
| Sweeping high frequency illumination | 25.3 | 26.4 | 24.6 | 24.4 |
| High frequency illumination with circular polarization | 26.8 | 26.9 | 24.1 | 26.5 |

### 5.3 Decomposition in complex scenes

In the diffuse reflection, single scattering, and multiple-scattering components, some intensity is observed on all the materials except the metals, such as the coins and the ruler. This is because of subsurface scattering, as mentioned in [53, 54]; almost all real-world materials except metals are translucent to some extent. In the specular reflection component, intensity is observed not only on the metal materials but also on others, because specular reflection arises on any smooth surface, such as the surface of the billiard balls. The scattering media, such as the wax candles, the eraser, and the marble stone, show strong intensities in the single and multiple-scattering components. Optically thin media, such as the soap water, the eraser, and the marble stone, show relatively stronger intensities than the other materials in the single scattering component. Moreover, the intensity in the single-scattering component appears to depend on the shape of an object; e.g., the edges of the wax candles have stronger intensities than other parts. Note that we do not distinguish interreflections from multiple scattering in this paper, as mentioned in Section 4.1, so interreflections in the scenes are included in the multiple-scattering component.

## 6 Application: raw material segmentation

We show the decomposition of the observations into the four optical components in Fig. 6b–e: the diffuse reflection, specular reflection, single scattering, and multiple-scattering components, respectively. From the decomposition result, we form a normalized 4D feature vector from the four components at each pixel. We then simply apply a conventional *k*-means clustering method as segmentation to assess the effectiveness of the decomposition. The segmentation results are shown in Fig. 6f for a varying parameter *k* (2≤*k*≤13). In the visualization, regions of the same color belong to the same segment.

When *k*=2, the segmentation result clearly distinguishes opaque from translucent materials; the blue and green regions correspond to opaque and translucent materials, respectively. When *k*=3, material 5 (duralumin) is segmented as an isolated region because of its distinctive material property: specular reflection is strong on material 5, since duralumin is a type of aluminum alloy. When *k*=4, material 8 (milky epoxy resin) is segmented as the sky-blue region because of its strong single scattering component. When *k*=5, materials 13 and 19 (polypropylene resins, PP) are segmented as a separate region; a PP resin is a translucent medium that is optically thinner than the other translucent media except for the milky epoxy resin. In the segmentation result with *k*=6, material 4 (wood) and material 14 (cowhide) are separately segmented because both are more opaque than the other materials segmented as the blue regions. When *k*=7, materials 7 (ceramic) and 17 (paper) are segmented as a new isolated region because those materials show stronger scattering components compared with the other opaque materials. When *k*=8, material 7 (ceramic) is separated into its own region. When *k*=9, materials 3 and 15 (acrylic) and 10 (polyvinyl chloride resin) are mostly separated. However, materials 16 and 18 (polyethylene resins, PE) and 11 and 12 (candles) are partially split even though each consists of a single material, owing to their colors and the angle of illumination. When *k*=10, material 2 (rubber) is separated from material 1 (paper). When *k*=11 or 12, some regions on the same materials are split because of the angle of illumination. The result at *k*=13 has only 12 segments, the same as at *k*=12. Consequently, the translucent materials are classified into six types and the opaque materials into six as well.
This application shows that it is reasonable to classify various opaque and translucent materials based on the decomposition by the proposed approach.
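The per-pixel clustering step can be sketched as follows. The 4D feature values and the two synthetic "materials" are illustrative assumptions, and the k-means here is a minimal deterministic re-implementation, not the paper's clustering code.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means; deterministic init from evenly spaced samples."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each feature vector to its nearest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # move each center to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic per-pixel features [DR, SR, SS, MS]:
# a diffuse-dominant (opaque) and a scattering-dominant (translucent) material.
rng = np.random.default_rng(1)
opaque = np.tile([0.8, 0.1, 0.05, 0.05], (100, 1)) + rng.normal(0, 0.02, (100, 4))
translucent = np.tile([0.1, 0.05, 0.4, 0.45], (100, 1)) + rng.normal(0, 0.02, (100, 4))
X = np.vstack([opaque, translucent])
X = X / X.sum(axis=1, keepdims=True)     # normalize the feature per pixel

labels = kmeans(X, k=2)
# the two synthetic materials fall into two distinct clusters
print(len(set(labels[:100])) == 1 and len(set(labels[100:])) == 1)
```

Because opacity and translucency push intensity into different components, even this simple feature space separates the two material types, which mirrors the *k*=2 result above.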

For comparison, we also performed *k*-means clustering with *k*=7 directly in the RGB space, which resulted in Fig. 7a. Clearly, it is difficult to obtain segments that follow material properties with a color-based segmentation approach. In contrast, the segmentation result of our approach follows material properties, as shown in Fig. 7b.

## 7 Conclusion

In this paper, we proposed a general approach called multiple-weighted measurements, which enables any kinds of separation methods, such as color-based, polarization-based, and active projection-based ones, to be combined uniformly so as to finely decompose observations. As an implementation, we defined the weight vectors of five different weighted measurements and combined them in the proposed approach to decompose observations into four optical components: diffuse and specular reflection, and single and multiple scattering. The experimental verification showed that the decomposition is reasonable, in that the proportions of the decomposed components matched the expectation based on the physical properties of each material region in the image. We also performed the decomposition in various complex scenes and showed the potential of the approach for raw material segmentation. The decomposition enables a novel segmentation based on the opacity and translucency of materials, unlike conventional color-based segmentation.

The proposed approach has a few limitations. First, shadows are not explicitly handled in the linear formulation (Eq. (1)); this may yield unmodeled errors in shadow regions when computing the decomposition. Second, unmodeled components are erroneously included in some of the four components. Other optical phenomena exist, such as refraction and fluorescence, although only the four components have been introduced in this paper; for example, light refracted by the plastic cup of soap water in target scene (a) in Fig. 5 appears in the diffuse component. Third, since the approach is based on a combination of multiple separation methods, the scene must be static, and the total processing time is the sum of the times the individual separation methods take. The first limitation is a challenging problem, but solving it is worthwhile to expand the applicability of the decomposition. To resolve the second limitation, a method that can separate the other components must be added to the proposed approach. The third limitation cannot essentially be resolved, but the total processing time can be reduced if the target components are confined. As shown in Table 1, the implemented combination has redundancy for the decomposition, so there should exist a smaller combination of measurements sufficient for a given target component; if the number of combined measurements is reduced, the total processing time is also reduced.

## Declarations

### Acknowledgements

This work is partly supported by JSPS KAKENHI grants JP17J05602, JP17K19979, and JP15H05918.

### Funding

This work is partly supported by JSPS KAKENHI grants JP17J05602, JP17K19979, and JP15H05918.

### Availability of data and materials

The data in this manuscript will not officially be shared because of its original file format.

### Authors’ contributions

TT designed and executed the experiments, and wrote the manuscript. YMu is a supervisor and edited the manuscript. YMa is a co-supervisor and edited the manuscript. YY is a supervisor and advised to execute the experiments. All authors reviewed and approved the final manuscript.

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## Authors’ Affiliations

## References

- Horn BKP (1975) Obtaining shape from shading information. In: The Psychology of Computer Vision, 115–155. McGraw-Hill, New York.
- Woodham RJ (1980) Photometric method for determining surface orientation from multiple images. Opt Eng 19(1):139–144.
- Ben-Ezra M, Wang J, Wilburn B, Li X, Ma L (2008) An LED-only BRDF measurement device. In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–8. IEEE, Piscataway.
- Wolff LB, Boult TE (1991) Constraining object features using a polarization reflectance model. IEEE Trans Pattern Anal Mach Intell (TPAMI) 13(7):635–657.
- Nayar SK, Krishnan G, Grossberg MD, Raskar R (2006) Fast separation of direct and global components of a scene using high frequency illumination. In: Proc. of ACM SIGGRAPH, 935–944. ACM, New York.
- Shafer SA (1985) Using color to separate reflection components. Color Res Appl 10(4):210–218.
- Klinker G, Shafer S, Kanade T (1988) The measurement of highlights in color images. Int J Comput Vision (IJCV) 2(1):7–32.
- Sato Y, Ikeuchi K (1994) Temporal-color space analysis of reflection. J Opt Soc Am A 11(7):2990–3002.
- Sato Y, Wheeler M, Ikeuchi K (1997) Object shape and reflectance modeling from observation. In: Proc. of ACM SIGGRAPH, 379–387. ACM, New York.
- Tan RT, Ikeuchi K (2003) Separating reflection components of textured surfaces using a single image. In: Proc. of IEEE International Conference on Computer Vision (ICCV), 870–877. IEEE, Piscataway.
- Kim H, Jin H, Hadap S, Kweon I (2013) Specular reflection separation using dark channel prior. In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1460–1467. IEEE, Piscataway.
- Nguyen T, Vo QN, Yang HJ, Kim SH, Lee GS (2014) Separation of specular and diffuse components using tensor voting in color images. Appl Opt 53(33):7924–7936.
- Yang Q, Tang J, Ahuja N (2015) Efficient and robust specular highlight removal. IEEE Trans Pattern Anal Mach Intell (TPAMI) 37(6):1304–1311.
- Akashi Y, Okatani T (2016) Separation of reflection components by sparse non-negative matrix factorization. Comput Vision Image Underst 146:77–85.
- Ren W, Tian J, Tang Y (2017) Specular reflection separation with color-lines constraint. IEEE Trans Image Process 26(5):2327–2337.MathSciNetView ArticleGoogle Scholar
- Müller V (1996) Elimination of specular surface-reflectance using polarized and unpolarized light In: Proc. of European Conference on Computer Vision (ECCV), 625–635.. Springer, Berlin.Google Scholar
- Debevec P, Hawkins T, Tchou C, Duiker HP, Sarokin W, Sagar M (2000) Acquiring the reflectance field of a human face In: Proc. of ACM SIGGRAPH, 145–156.. ACM, New York.Google Scholar
- Ma WC, Hawkins T, Peers P, Chabert CF, Weiss M, Debevec P (2007) Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination In: Proc. of Eurographics Symposium on Rendering, 183–194.. Wiley, Hoboken.Google Scholar
- Ghosh A, Chen T, Peers P, Wilson CA, Debevec P (2010) Circularly polarized spherical illumination reflectometry. ACM Trans Graph (ToG) 29(6):162–172.View ArticleGoogle Scholar
- Nayar SK, Fang XS, Boult T (1997) Separation of reflection components using color and polarization. Int J Comput Vis (IJCV) 21(3):163–186.View ArticleGoogle Scholar
- Lin S, Lee SW (1997) Detection of specularity using stereo in color and polarization space. Comp Vision Image Underst 65(2):336–346.View ArticleGoogle Scholar
- Kim DW, Lin S, Hong KS, Shum HY (2002) Variational specular separation using color and polarization In: Proc. of IAPR Workshop on Machine Vision Applications, 176–179. https://dblp.org/db/conf/mva/mva2002.html.
- Umeyama S, Godin G (2004) Separation of diffuse and specular components of surface reflection by use of polarization and statistical analysis of images. IEEE Trans Pattern Anal Mach Intell (TPAMI) 26(5):639–647.View ArticleGoogle Scholar
- Wang F, Ainouz S, Petitjean C, Bensrhair A (2017) Specularity removal: A global energy minimization approach based on polarization imaging. Comput Vis Image Underst 158:31–39.View ArticleGoogle Scholar
- Ikeuchi K, Sato K (1991) Determining reflectance properties of an object using range and brightness images. IEEE Trans Pattern Anal Mach Intell (TPAMI) 13(11):1139–1153.View ArticleGoogle Scholar
- Nishino K, Zhang Z, Ikeuchi K (2001) Determining reflectance parameters and illumination distribution from a sparse set of images for view-dependent image synthesis In: Proc. of IEEE International Conference on Computer Vision (ICCV), 599–606.. IEEE, Piscataway.Google Scholar
- Mukaigawa Y, Ishii Y, Shakunaga T (2007) Analysis of photometric factors based on photometric linearization. J Opt Soc Am A 24(10):3326–3334.View ArticleGoogle Scholar
- Mallick S, Zickler T, Belhumeur P, Kriegman D (2006) Specularity removal in images and videos: A pde approach In: Proc. of European Conference on Computer Vision (ECCV), 550–563.. Springer, Berlin.Google Scholar
- Tao MW, Su JC, Wang TC, Malik J, Ramamoorthi R (2016) Depth estimation and specular removal for glossy surfaces using point and line consistency with light-field cameras. IEEE Trans Pattern Anal Mach Intell (TPAMI) 38(6):1155–1169.View ArticleGoogle Scholar
- Seitz SM, Matsushita Y, Kutulakos KN (2005) A theory of inverse light transport In: Proc. of IEEE International Conference on Computer Vision (ICCV), 1440–1447.. IEEE, Piscataway.Google Scholar
- Bai J, Chandraker M, Ng TT, Ramamoorthi R (2010) A dual theory of inverse and forward light transport In: Proc. of European Conference on Computer Vision (ECCV), 294–307.. Springer, Berlin.Google Scholar
- Gilbert GD, Pernicka JC (1967) Improvement of underwater visibility by reduction of backscatter with a circular polarization technique. Appl Opt 6(4):741–746.View ArticleGoogle Scholar
- Schechner YY, Narasimhan SG, Nayar SK (2003) Polarization-based vision through haze. Appl Opt 42(3):511–525.View ArticleGoogle Scholar
- Treibitz T, Schechner YY (2009) Active polarization descattering. IEEE Trans Pattern Anal Mach Intell (TPAMI) 31(3):385–399.View ArticleGoogle Scholar
- Ghosh A, Hawkins T, Peers P, Frederiksen S, Debevec P (2008) Practical modeling and acquisition of layered facial reflectance. ACM Trans Graph (ToG) 27:139. https://dl.acm.org/citation.cfm?id=1409092.View ArticleGoogle Scholar
- Kim J, Izadi S, Ghosh A (2016) Single-shot layered reflectance separation using a polarized light field camera In: Proc. of Eurographics Symposium on Rendering.. Wiley, Hoboken.Google Scholar
- Narasimhan SG, Nayar SK (2002) Vision and the atmosphere. Int J Comput Vis (IJCV) 48(3):233–254.View ArticleGoogle Scholar
- Narasimhan SG, Nayar SK (2003) Contrast restoration of weather degraded images. IEEE Trans Pattern Anal Mach Intell (TPAMI) 25(6):713–724.View ArticleGoogle Scholar
- Wu TP, Tang CK (2004) Separating specular, diffuse, and subsurface scattering reflectances from photometric images In: Proc. of European Conference on Computer Vision (ECCV), 419–433.. Springer, Berlin.Google Scholar
- Lin S, Lee SW (2000) An appearance representation for multiple reflection components In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 105–110.. IEEE, Piscataway.Google Scholar
- Gupta M, Narasimhan SG, Schechner YY (2008) On controlling light transport in poor visibility environments In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–8.. IEEE, Piscataway.Google Scholar
- Mukaigawa Y, Raskar R, Yagi Y (2011) Analysis of scattering light transport in translucent media. IPSJ Trans Comput Vis Appl 3:122–133.View ArticleGoogle Scholar
- Fuchs C, Heinz M, Levoy M, Seidel HP, Lensch HPA (2008) Combining confocal imaging and descattering. Comput Graph Forum 27(4):1245–1253.View ArticleGoogle Scholar
- Kim J, Lanman D, Mukaigawa Y, Raskar R (2010) Descattering transmission via angular filtering In: Proc. of European Conference on Computer Vision (ECCV), 86–99.. Springer, Berlin.Google Scholar
- Ju M, Zhang D, Wang X (2017) Single image dehazing via an improved atmospheric scattering model. Vis Comput Int J Comput Graph 33(12):1613–1625.Google Scholar
- Drews PLJ, Nascimento ER, Botelho SSC, Campos MFM (2016) Underwater depth estimation and image restoration based on single images. IEEE Comput Graph Appl 36(2):24–35.View ArticleGoogle Scholar
- Schlüns K, Wittig O (1993) Photometric stereo for non-lambertian surfaces using color information In: Proc. of International Conference on Image Analysis and Processing.. Springer, Berlin.Google Scholar
- Inoshita C, Mukaigawa Y, Matsushita Y, Yagi Y (2012) Shape from single scattering for translucent objects In: Proc. of European Conference on Computer Vision (ECCV), 371–384.. Springer, Berlin.Google Scholar
- Wu D, O’Toole M, Velten A, Agrawal A, Raskar R (2012) Decomposing global light transport using time of flight imaging In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 366–373.. IEEE, Piscataway.Google Scholar
- Chen T, Lensch HP, Fuchs C, Seidel HP (2007) Polarization and phase-shifting for 3d scanning of translucent objects In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–8.. IEEE, Piscataway.Google Scholar
- Hulst HC, van de Hulst HC (1957) Light scattering: by small particles. Courier Dover Publications, New York.Google Scholar
- Meyer WV, Cannell DS, Smart AE, Taylor TW, Tin P (1997) Multiple-scattering suppression by cross correlation. Appl Opt 36(30):7551–7558.View ArticleGoogle Scholar
- Wang R, Cheslack-Postava E, Wang R, Luebke D, Chen Q, Hua W, Peng Q, Bao H (2008) Real-time editing and relighting of homogeneous translucent materials. Vis Comput 24(7):565–575.View ArticleGoogle Scholar
- Kurachi N (2011) The magic of computer graphics. CRC Press, Florida.View ArticleGoogle Scholar
- Liu C, Gu J (2012) Discriminative illumination: per-pixel classification of raw materials based on optimal projections of spectral brdf In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 797–804.. IEEE, Piscataway.Google Scholar