Thermal non-line-of-sight imaging from specular and diffuse reflections

Abstract

This paper presents a non-line-of-sight technique for estimating the position and temperature of an object occluded from a camera, via its reflection on a wall. Because thermal objects emit far infrared light according to their temperature, both quantities can be estimated from reflections on a wall. A key idea is that the light paths from a hidden object to the camera depend on the position of the hidden object. The position of the object is recovered from the angular distributions of the specular and diffuse reflection components, and the temperature of the heat source is recovered from the estimated position and the intensity of the reflection. The effectiveness of our method is evaluated in real-world experiments, which show that the position and temperature of a hidden object can be recovered from its reflection on the wall using a conventional thermal camera.

Introduction

Measuring objects that are hidden from the view of a camera is an important problem in many research fields, such as robotic vision, autonomous driving, medical imaging, and remote sensing. Non-line-of-sight (NLOS) imaging addresses this problem by reconstructing the position, shape, and reflectance of hidden objects from indirect light paths. Light emitted from a light source is reflected off objects multiple times in both line-of-sight (LOS) and NLOS scenes and is then detected by the camera. Although the indirect light paths, including reflections from the hidden objects, can be a clue for recovering images of the NLOS scene, this inverse problem is a challenging task.

In order to tackle this task, most NLOS imaging techniques require a controllable light source. In contrast, we propose a passive NLOS imaging technique using far infrared (FIR) light. Because all thermal objects radiate FIR light according to their temperature, any object in the real world can be regarded as a light source at FIR wavelengths. This observation makes the inverse problem simpler: we can formulate it as a one-bounce problem, in which a light ray is assumed to be reflected only once, whereas active techniques must solve a three-bounce problem. In our formulation, the light emitted from a hidden object is reflected off a wall in the LOS scene and then observed by the camera. We assume that the reflection on the wall can be separated into diffuse and specular components, whose light paths depend on the position of the hidden object and whose intensities relate to its temperature. Thus, separating the two components enables us to reconstruct the position and the temperature of the hidden object, as shown in Fig. 1. Experimental results show the effectiveness of the proposed method using a conventional thermal camera.

Fig. 1

Experimental results of the position and temperature estimation. Although the target objects are located beyond the occluder, the estimated positions are close to the ground truth. Moreover, the estimated temperatures, shown by color, are consistent with the actual temperatures

Related work

Active NLOS imaging

Most NLOS imaging techniques rely on time-resolved imaging using a controllable light source. An early study used a femtosecond laser to illuminate a scene and a picosecond-resolution streak camera to obtain time-resolved images [1]. Homodyne time-of-flight sensors can also obtain time-resolved images [2] and have thus been applied to NLOS imaging [3]. As single-photon avalanche diode detectors with time-correlated single-photon counting have become more widely available, they are now often used for NLOS imaging [4, 5].

Passive NLOS imaging

A few passive NLOS imaging techniques have been proposed recently. Torralba and Freeman [6] showed that an accidental camera can image an NLOS scene. Bouman et al. [7] showed that obstructions with edges, e.g., walls, can be exploited as a camera that reveals the NLOS scene beyond them. Saunders et al. [8] proposed a passive NLOS imaging method that requires only a single image captured with an ordinary camera to recover a partially hidden scene and the position of an opaque object in the NLOS scene. Beckus et al. [9] also proposed a passive NLOS imaging method based on a multi-modal fusion of the intensity and the spatial coherence function.

All of the existing NLOS imaging techniques assume that light from the visible to near-infrared range illuminates the scene. In contrast, we utilize FIR light to enable passive NLOS imaging that recovers not only the position but also the temperature of the hidden object.

Thermal NLOS imaging

We assume that a target thermal object is located beyond an occluder, outside the view of a thermal camera. The camera observes a flat wall in the LOS scene, on which the FIR light emitted from the hidden object is reflected. We assume that the reflection of FIR light can be separated into diffuse and specular components, as with visible light, and that the geometry of the wall and the camera is given. Figure 2 shows the geometry of the scene. To simplify the problem, we assume that the hidden object is sizeless, i.e., a point source.

Fig. 2

Geometry of the setup and concept

The goal of this work is to estimate the position and temperature of the hidden object from a single thermal image. The key idea is that the peaks in the intensity distribution on the wall depend on the position of the object. An observed intensity on the image plane can be projected onto the wall because the geometry is known. Considering the specular and diffuse components separately, the peak of the specular component lies around the position where the incident angle is equal to the outgoing angle, as shown in Fig. 2. On the other hand, the peak of the diffuse component lies around the position where the distance between the object and the wall is minimal. Therefore, once the two distributions are separated, we can estimate the position of the hidden object via backprojection of the expected light paths.

Separation of diffuse and specular reflections

The observed intensity in the image consists of not only diffuse and specular components but also an ambient component. However, the ambient component can easily be removed by subtracting an observation captured without the hidden object. The separation of diffuse and specular reflections is performed on the 1D distribution of intensities on the wall. Because a camera pixel c and a point on the wall w correspond one-to-one, we can transform the observed intensity into the intensity on the wall via geometric and radiometric corrections. The intensity on the wall can be represented as:

$$ I_{\mathrm{w}}(w_{x}) = D_{\mathrm{w}}(w_{x}) + S_{\mathrm{w}}(w_{x}), $$
(1)

where Dw and Sw are the diffuse and specular components, respectively.

Now, we build an empirical model of the diffuse and specular components. Based on our measurements, we found that both components are well approximated by Gaussian distributions. We define the components as:

$$\begin{array}{*{20}l} D_{\mathrm{w}}(w_{x}) &= a_{d} \exp\left(-\frac{(w_{x}-\mu_{d})^{2}}{2 \sigma_{d}^{2}}\right), \end{array} $$
(2)
$$\begin{array}{*{20}l} S_{\mathrm{w}}(w_{x}) &= a_{s} \exp\left(-\frac{(w_{x}-\mu_{s})^{2}}{2 \sigma_{s}^{2}}\right), \end{array} $$
(3)

where ad, μd, and \(\sigma _{d}^{2}\) are the scale, mean, and variance of the diffuse component, and as, μs, and \(\sigma _{s}^{2}\) are those of the specular component. We impose the constraint σd>σs because the spatial spread of diffuse reflection is always larger than that of specular reflection. In order to separate the two components, we optimize:

$$\begin{array}{*{20}l} &[\mu_{d}, \mu_{s}, \sigma_{d}, \sigma_{s}, a_{d}, a_{s}] = \\ &\underset{\mu_{d}, \mu_{s}, \sigma_{d}, \sigma_{s}, a_{d}, a_{s}}{\text{argmin}} \sum_{w_{x}} \left|I_{\mathrm{w}}(w_{x}) - \hat{I}_{\mathrm{w}}(w_{x}|\mu_{d}, \mu_{s}, \sigma_{d}, \sigma_{s}, a_{d}, a_{s})\right|^{2}, \end{array} $$
(4)

where \(\hat {I}_{\mathrm {w}}(w_{x})\) is the reconstructed intensity. Figure 3 shows an example of separation.

Fig. 3

Separation result. The observed intensity (green) is separated into the diffuse component (blue) and the specular component (orange)
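As a concrete illustration, the least-squares separation of Eq. (4) can be sketched as follows. This is a hypothetical numpy implementation, not the authors' actual optimizer: candidate means and widths are enumerated on a coarse grid under the constraint σd > σs, and for each candidate the two amplitudes are solved by linear least squares.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Unit-amplitude Gaussian lobe, as in Eqs. (2)-(3)."""
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def separate_components(wx, intensity, mu_grid, sigma_grid):
    """Separate a 1D wall-intensity profile into diffuse + specular Gaussians.

    Grid search over candidate means/widths with sigma_d > sigma_s; for each
    candidate pair the amplitudes (a_d, a_s) are the linear least-squares
    solution, and the lowest-residual candidate wins."""
    best = None
    for mu_d in mu_grid:
        for s_d in sigma_grid:
            for mu_s in mu_grid:
                for s_s in sigma_grid:
                    if s_d <= s_s:  # diffuse lobe must be wider than specular
                        continue
                    A = np.column_stack([gaussian(wx, mu_d, s_d),
                                         gaussian(wx, mu_s, s_s)])
                    (a_d, a_s), *_ = np.linalg.lstsq(A, intensity, rcond=None)
                    if a_d < 0 or a_s < 0:  # reject non-physical amplitudes
                        continue
                    resid = np.sum((A @ np.array([a_d, a_s]) - intensity) ** 2)
                    if best is None or resid < best[0]:
                        best = (resid, mu_d, s_d, a_d, mu_s, s_s, a_s)
    return best[1:]

# Synthetic profile: a wide diffuse lobe plus a narrow specular lobe
wx = np.linspace(-100.0, 100.0, 201)
I_w = 2.0 * gaussian(wx, 10.0, 30.0) + 5.0 * gaussian(wx, -20.0, 5.0)
mu_d, s_d, a_d, mu_s, s_s, a_s = separate_components(
    wx, I_w, mu_grid=np.arange(-40.0, 41.0, 5.0),
    sigma_grid=np.array([5.0, 10.0, 20.0, 30.0, 40.0]))
```

In practice all six parameters of Eq. (4) would be optimized jointly; the grid search is simply the most transparent way to honor the σd > σs constraint.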

Position estimation

The position of the hidden object is estimated using the separated diffuse and specular distributions. Because the outgoing angle of specular reflection is equal to the incident angle, the light path from a point on the wall w toward the object is expected to be:

$$\begin{array}{*{20}l} \boldsymbol{l} &= \frac{\boldsymbol{w} - \boldsymbol{c}}{\|\boldsymbol{w} - \boldsymbol{c}\|} - 2\left(\frac{\boldsymbol{w} - \boldsymbol{c}}{\|\boldsymbol{w} - \boldsymbol{c}\|} \cdot \boldsymbol{n} \right)\boldsymbol{n}, \end{array} $$
(5)

where n is the normal direction of the wall. In general, specular reflection on a real-world material has blurry lobes. Thus, we adopt a voting scheme to compute a probability distribution over the position. The light path of specular reflection at each point on the wall is tracked along the direction l in a voting space \(\mathcal {V}\), whose y-axis is the normal direction of the wall n and whose origin in y is on the wall. Although the light paths of diffuse reflection are distributed in all directions, their intensity is expected to depend on the distance between the object and the point on the wall. Thus, the light path of diffuse reflection at each point on the wall is tracked along the normal direction of the wall. We define the voting value v at a position \((x, y) \in \mathcal {V}\) as:

$$ v(x, y) = v_{d}(x) + v_{s}(x, y), $$
(6)

where vd and vs are the voting intensities derived from the diffuse and specular components, respectively. The voting intensity from the specular component at a position (x,y) is based on the corresponding point on the wall, which is determined by \(w_{x}=\frac {c_{x} y + c_{y} x}{c_{y} + y}\). Thus, the voting intensities are defined as:

$$\begin{array}{*{20}l} v_{d}(x) &= D_{\mathrm{w}}(x), \end{array} $$
(7)
$$\begin{array}{*{20}l} v_{s}(x, y) &= S_{\mathrm{w}}\left(\frac{c_{x} y + c_{y} x}{c_{y} + y}\right). \end{array} $$
(8)

Normalizing the voting space yields the probability distribution:

$$ p(x, y) = \frac{v(x, y)}{\sum_{x} \sum_{y} v(x, y)}. $$
(9)

Finally, finding the position \((\hat {x}, \hat {y})\) where \(p(\hat {x}, \hat {y})\) is a maximum gives us the estimated position of the hidden object.
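The voting of Eqs. (6)-(9) can be sketched numerically as follows. This is a hypothetical numpy example: the wall lies along y = 0, and the camera position, object position, and lobe parameters are all made-up values for illustration, not the paper's setup.

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

# Hypothetical geometry (mm): wall along y = 0, camera above the wall
c = np.array([300.0, 400.0])    # camera position (c_x, c_y)
obj = np.array([100.0, 150.0])  # ground-truth hidden object position

# Simulated separated distributions on the wall (Eqs. 2-3): the diffuse lobe
# peaks at the foot point below the object, and the specular lobe peaks where
# the mirror-reflected ray from the camera crosses the wall.
mu_d = obj[0]
mu_s = (c[0] * obj[1] + c[1] * obj[0]) / (c[1] + obj[1])
D_w = lambda wx: 2.0 * gaussian(wx, mu_d, 40.0)
S_w = lambda wx: 5.0 * gaussian(wx, mu_s, 8.0)

# Voting space (Eqs. 6-8): a grid of candidate object positions (x, y).
# The diffuse term votes along the wall normal (independent of y); the
# specular term votes along the mirrored ray via w_x = (c_x*y + c_y*x)/(c_y + y).
xs = np.arange(0.0, 301.0, 1.0)
ys = np.arange(1.0, 301.0, 1.0)
X, Y = np.meshgrid(xs, ys, indexing="ij")
v = D_w(X) + S_w((c[0] * Y + c[1] * X) / (c[1] + Y))

p = v / v.sum()  # Eq. (9): normalize votes into a probability distribution
i, j = np.unravel_index(np.argmax(p), p.shape)
x_hat, y_hat = xs[i], ys[j]  # estimated object position
```

The diffuse term contributes a vertical ridge above its peak and the specular term a ridge along the mirrored ray; the argmax of their normalized sum sits at the intersection, i.e., the object position.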

Temperature estimation

The temperature of the hidden object is estimated using the separated diffuse distribution and the estimated position. The FIR light energy E radiated from a thermal object is directly proportional to the fourth power of the object’s thermodynamic temperature T, according to the Stefan-Boltzmann law:

$$ E = \sigma T^{4}, $$
(10)

where σ is the Stefan-Boltzmann constant. The radiated light eventually reaches the LOS wall but is attenuated along the traveling path; the attenuation depends on the distance according to the inverse-square law. The light reaching the wall is diffusely reflected and then observed by the camera. Thus, the scale ad of the diffuse distribution can be expressed as:

$$ a_{d} = \frac{E k_{d}}{\hat{y}^{2}}, $$
(11)

where kd is the diffuse reflectance of the wall and \(\hat {y}\) is the estimated distance between the object and the wall. Once the reflectance is measured, e.g., by computing it from the scale estimated at a known distance and temperature, the temperature of the hidden object can be estimated as:

$$ T = \left(\frac{a_{d} \hat{y}^{2}}{k_{d} \sigma}\right)^{\frac{1}{4}}. $$
(12)
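Eqs. (11) and (12) amount to a one-point reflectance calibration followed by a Stefan-Boltzmann inversion; a minimal sketch (the reflectance, distance, and temperature values below are hypothetical, not the paper's calibration values):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def calibrate_k_d(a_d, y_known, T_known):
    """Eq. (11) solved for the wall's diffuse reflectance k_d,
    given one observation at a known distance and temperature."""
    return a_d * y_known ** 2 / (SIGMA * T_known ** 4)

def estimate_temperature(a_d, y_hat, k_d):
    """Eq. (12): invert the Stefan-Boltzmann law for the object temperature."""
    return (a_d * y_hat ** 2 / (k_d * SIGMA)) ** 0.25

# Round trip with assumed values: k_d = 0.2, T = 593.15 K (320 C), y = 0.15 m
a_d = SIGMA * 593.15 ** 4 * 0.2 / 0.15 ** 2  # diffuse scale implied by Eq. (11)
k_d = calibrate_k_d(a_d, 0.15, 593.15)       # recovers the assumed 0.2
T_est = estimate_temperature(a_d, 0.15, k_d) # recovers the assumed 593.15 K
```

The round trip makes the dependency explicit: an error in the estimated distance \(\hat {y}\) propagates into the temperature as \(\hat {y}^{1/2}\), which is why the paper reports position error alongside each temperature estimate.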

Experiments

We demonstrate the estimation of the position and temperature of a hidden object. In the experimental setup, shown in Fig. 4a, a thermal camera (Nippon Avionics, InfRec R500) observes a flat melamine wall in the LOS scene, on which printing paper is placed, and a hidden object is located beyond a black board in the NLOS scene. Two objects are used as the hidden object: a soldering iron and a hand warmer. The iron can be kept at 270, 320, or 370 °C, and the hand warmer warms up to around 50 °C when plugged in. An observed image for the hand warmer is shown in Fig. 4b. An aluminum board with grid holes is used to calibrate the camera intrinsic parameters and the geometry between the wall and the camera. The camera and objects are adequately warmed up before starting the experiments, and the ambient temperature is kept constant during the experiments.

Fig. 4

Experimental setup. a A scene with a hand warmer as the hidden object and b an observed thermal image

Position estimation

First, we evaluate the position estimation of the hidden object. We locate the iron, at a temperature of 370 °C, at three different positions A, B, and C in the NLOS scene, as shown in Fig. 1. The observed image is separated into diffuse and specular components for each horizontal line, and then the voting is performed to obtain the probability distribution. The probability distribution for each position is shown in Fig. 5a–c, where the blue circle and red cross denote the ground truth and the maximum-probability positions, respectively. The errors in the estimated positions for A, B, and C are 27.41, 26.07, and 31.94 mm, respectively. Second, the hand warmer, whose temperature is on the order of human body temperature, is located at position A. The computed probability distribution is shown in Fig. 5d, and the error of the estimated position is 38.08 mm. These results show that the proposed method can recover the position of a hidden object even at a low temperature.

Fig. 5

Experimental results of the position estimation. a–c Probability distributions for the soldering iron at positions A, B, and C, respectively. d Probability distribution for the hand warmer at position A. The blue circle and red cross denote the ground truth and maximum-probability positions, respectively

Temperature estimation

We demonstrate the temperature estimation of the hidden object. The temperature of the iron is set to 270, 320, and 370 °C at each of the positions A, B, and C. After the positions are estimated, the temperatures are estimated using Eq. (12). The diffuse reflectance kd is measured in advance by observing the iron at 320 °C at position B. The estimated temperatures are shown in Table 1. Because the accuracy of the temperature estimation depends on the position estimation, the error in the position estimation is shown below each estimated temperature. Although the absolute accuracy is limited, the ordering of the estimates at each position is at least consistent with the actual temperatures. Selected cases, 320 °C at position A, 370 °C at position B, and 270 °C at position C, are shown in Fig. 1 with their positions and temperatures.

Table 1 Experimental results of the temperature estimation

Conclusion

We proposed a thermal NLOS imaging method to estimate the position and temperature of a hidden object located beyond an occluder. An observed thermal image is separated into diffuse and specular components. Voting along all possible light paths for both components yields a probability distribution over the position. Once the distance between the object and the wall is estimated, the temperature of the hidden object can be inversely estimated from the diffuse intensity. Experimental results show that the proposed method can estimate the position and temperature of a soldering iron located at different positions with different temperatures, as well as the position of a hand warmer, whose temperature is on the order of human body temperature.

Limitations

In this paper, we assumed a single object in the NLOS scene whose shape is adequately simple, e.g., cylindrical. Because the FIR light radiated from objects is attenuated in the air, the working distance of our method is limited, currently to less than 200 mm. When the temperature of a hidden object is low, our method is difficult to apply because the signal-to-noise ratio of the observations is correspondingly low. The FIR reflection model is currently built empirically, but it could be derived from thermal physics and optics, which will be our future work.

Availability of data and materials

The data in this manuscript will not be officially shared because of the proprietary nature of its file format.

References

1. Kirmani A, Hutchison T, Davis J, Raskar R (2009) Looking around the corner using transient imaging. In: IEEE International Conference on Computer Vision, pp 159–166. https://doi.org/10.1109/ICCV.2009.5459160
2. Kitano K, Okamoto T, Tanaka K, Aoto T, Kubo H, Funatomi T, Mukaigawa Y (2017) Recovering temporal PSF using ToF camera with delayed light emission. IPSJ Trans Comput Vis Appl 9(1):1–6. https://doi.org/10.1186/s41074-017-0026-3
3. Kadambi A, Zhao H, Shi B, Raskar R (2016) Occluded imaging with time-of-flight sensors. ACM Trans Graph 35(2):1–12. https://doi.org/10.1145/2836164
4. Tsai C, Kutulakos KN, Narasimhan SG, Sankaranarayanan AC (2017) The geometry of first-returning photons for non-line-of-sight imaging. In: IEEE Conference on Computer Vision and Pattern Recognition, pp 2336–2344. https://doi.org/10.1109/CVPR.2017.251
5. O’Toole M, Lindell DB, Wetzstein G (2018) Confocal non-line-of-sight imaging based on the light-cone transform. Nature 555(7696):338–341. https://doi.org/10.1038/nature25489
6. Torralba A, Freeman WT (2014) Accidental pinhole and pinspeck cameras. IJCV 110(2):92–112. https://doi.org/10.1007/s11263-014-0697-5
7. Bouman KL, Ye V, Yedidia AB, Durand F, Wornell GW, Torralba A, Freeman WT (2017) Turning corners into cameras: principles and methods. In: IEEE International Conference on Computer Vision, pp 2289–2297. https://doi.org/10.1109/ICCV.2017.249
8. Saunders C, Murray-Bruce J, Goyal VK (2019) Computational periscopy with an ordinary digital camera. Nature 565(7740):472–475. https://doi.org/10.1038/s41586-018-0868-6
9. Beckus A, Tamasan A, Atia GK (2019) Multi-modal non-line-of-sight passive imaging. IEEE Trans Image Process 28(7):3372–3382. https://doi.org/10.1109/TIP.2019.2896517


Author information

MK, TK, and TT contributed to the concept, conducted experiments, and wrote the manuscript. KT is a co-supervisor and contributed to the concept. TF is a co-supervisor and edited the manuscript. YM is a supervisor and edited the manuscript. All authors reviewed and approved the final manuscript.

Correspondence to Masaki Kaga.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Kaga, M., Kushida, T., Takatani, T. et al. Thermal non-line-of-sight imaging from specular and diffuse reflections. IPSJ T Comput Vis Appl 11, 8 (2019). https://doi.org/10.1186/s41074-019-0060-4


Keywords

  • Non-Line-of-Sight
  • Thermal Imaging
  • Far Infrared