Research Paper | Open Access

# Optical tomography based on shortest-path model for diffuse surface object

Takafumi Iwaguchi^{1} (corresponding author), Takuya Funatomi^{1,2}, Takahito Aoto^{1}, Hiroyuki Kubo^{1}, and Yasuhiro Mukaigawa^{1}

**10**:15

https://doi.org/10.1186/s41074-018-0051-x

© The Author(s) 2018

**Received:** 12 April 2018 | **Accepted:** 25 October 2018 | **Published:** 23 November 2018

## Abstract

We tackle the optical measurement of the internal structure of a diffuse surface object, which we define as an object that has a diffuse surface and a transparent interior, like a grape or a hollow plastic bottle. Our approach is based on optical tomography, which reconstructs the interior from observations of the absorption of light rays from various views under projected light. The difficulty lies in the fact that a light ray that enters the object changes its direction when it interacts with the surface, unlike an X-ray, which travels straight through the object. We introduce a model of the light path in the object called the *shortest-path model* and measure the absorption of light rays through the object under the assumption of this model. Since this measurement acquires insufficient observations for conventional reconstruction algorithms, we also introduce a reconstruction method based on numerical optimization that takes a physical requirement on the absorption into account. A real-world experiment confirms that our method successfully measures the interior.

## Keywords

- Optical measurement
- Computed tomography
- Radon transform
- Optimized reconstruction

## 1 Introduction

The measurement of an object’s interior is important in various applications, such as the detection of foreign objects in food and the inspection of the human body in a medical examination. An optical measurement is a safe inspection technology that does not use X-rays and has no risk posed by a radiation dose. Furthermore, optical measurement provides functional information on optical properties, e.g., blood flow is estimated from spectral absorption.

The difficulty in making an optical measurement results from a light ray readily changing path at a point of interaction with an object. We aim to measure the internal structure of an object that has a diffuse surface and an interior that is assumed transparent, where light is absorbed but not scattered. Fruits like grapes, light bulbs with white glass, and hollow plastic bottles are examples of such objects. For such an object, light diffuses at the surface and rays advance in various directions.

Optical tomography is a technique of optical measurement used to reconstruct the interior of an object. Optical projection tomography (OPT) [1] is a simple technique that is the same as X-ray computed tomography (CT) except that it uses visible or infrared light instead of X-rays. It is assumed that light travels in a straight direction in the object, as for X-rays. OPT provides a clear three-dimensional reconstruction of a small specimen and has contributed to many biological studies; however, it cannot deal with a diffuse surface.

One difficulty is the scattering of light in the internal medium. Techniques have been proposed to cope with scattering, e.g., techniques for single scattering [2] and multiple scattering [3–5]. Scattering in the human body is approximated as an isotropic diffusion in diffuse optical tomography; applications of the approximation are mammography [6] and functional imaging of the brain [7, 8]. A major disadvantage of these techniques is that the radiation and probing require contact with the target; because the number of contact points is physically limited, the resolution is limited as well.

We propose an optical tomography method based on the *shortest-path model* that assumes that a ray scatters only at the surface and travels straight in the interior medium. Light rays in the object are measured without contact by an optical system that consists of a camera, light source, and rotary stage. After paths of light rays are determined considering the geometry, the interior is reconstructed through an inverse Radon transform as for general X-ray CT.

The contribution of this paper is fourfold. First, we propose a simple model of light rays that allows the reconstruction of the internal structure of a diffuse surface object by an inverse Radon transform. Second, we clarify the problem that measurement with a general perspective camera results in insufficient observations for reconstruction. Third, we introduce the limitation of the physically correct value range on the distribution and a smoothness constraint with total variation (TV) semi-norm regularization to reconstruct the full interior from the insufficient observations. Fourth, we investigate the appropriate placement of the light source in the measurement configuration and evaluate the effect of scattering.

This paper extends our previous work [9] as follows. (1) The measurement is made more practical by using the widely available perspective camera model rather than the less common orthogonal projection model. (2) We introduce contour estimation into the framework, which removes the previous restriction of the target shape to a cylinder. (3) A reconstruction method peculiar to the shortest-path model is introduced to deal with the insufficient observations. (4) We discuss the appropriate setup of the measurement and the robustness against scattering.

The remainder of the paper is organized as follows. Section 2 describes the process of acquiring light rays while Section 3 describes the reconstruction method. Section 4 presents the results of a real-world experiment and evaluations made using our method, while we conclude the paper in Section 5.

## 2 Acquisition of light rays

### 2.1 Distribution of the absorbance coefficient and total absorption

The goal of our measurement is to acquire the distribution of the absorption coefficient *σ* of the target’s interior. The absorption coefficient represents how much light is absorbed as light travels a unit distance. We now define the total absorption *A* by following the Lambert-Beer law, as the negative logarithm of *I*_{o} (the intensity of light after light travels through the target) divided by *I*_{i} (the intensity of light before entering the target):

$$A = -\log \frac{I_{o}}{I_{i}}. \tag{1}$$
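As a minimal sketch (not the authors' code; the function name and interface are our own), the total absorption can be computed from a pair of intensity measurements as follows:

```python
import numpy as np

def total_absorption(i_in, i_out):
    """Total absorption A by the Lambert-Beer law: A = -log(I_o / I_i)."""
    return -np.log(np.asarray(i_out, dtype=float) / np.asarray(i_in, dtype=float))
```

For example, a ray attenuated to exp(−2) of its incident intensity has a total absorption of 2.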

#### 2.1.1 Radon transform

The relation between the interior distribution and the total absorption along a path is described by the *Radon transform*. For simplicity, we consider the problem in two dimensions. When a ray propagates through an area *Ω*, the total absorption is an integral of the absorption coefficient along the path. Parameterizing a straight path by its displacement *X* and angle *θ*, the total absorption *A*(*θ*, *X*) is written as

$$A(\theta, X) = \iint_{\Omega} \sigma(x, y)\, \delta(x \cos\theta + y \sin\theta - X)\, \mathrm{d}x\, \mathrm{d}y.$$

The interior *σ* can be reconstructed by the inverse Radon transform from the total absorptions *A*(*θ*, *X*) for all possible *θ* and *X*. Ideally, these rays are acquired by measuring the transmitted rays when parallel rays are cast toward the target from various angles. This method works well when the paths of rays are not disturbed by the target, as in the case of X-rays. However, as illustrated in Fig. 2, each ray entering the object spreads when the target has a diffuse surface. The transmitted rays are no longer parallel, and it is difficult to determine the paths of the measured rays.
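For illustration, the forward Radon transform of one ray over a discrete absorption map can be sketched as below; this is our own simplified sampling scheme (nearest-neighbor lookup on a unit square), not the paper's implementation:

```python
import numpy as np

def radon_ray(sigma, theta, X, n_samples=200):
    """Numerically integrate the absorption map `sigma` along the line
    x*cos(theta) + y*sin(theta) = X over the unit square centered at the origin."""
    t = np.linspace(-0.5, 0.5, n_samples)      # parameter along the path direction
    x = X * np.cos(theta) - t * np.sin(theta)
    y = X * np.sin(theta) + t * np.cos(theta)
    h, w = sigma.shape
    # Map world coordinates [-0.5, 0.5] to pixel indices (nearest neighbor).
    ix = np.clip(((x + 0.5) * w).astype(int), 0, w - 1)
    iy = np.clip(((y + 0.5) * h).astype(int), 0, h - 1)
    ds = t[1] - t[0]
    return float(sigma[iy, ix].sum() * ds)
```

Evaluating this for all (*θ*, *X*) pairs yields a sinogram; the inverse Radon transform maps a sinogram back to *σ*.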

### 2.2 Shortest-path model

### 2.3 Model validity in the real situation

One difficult class of targets is objects with a thick skin. The incident point of the path is determined as the first point where the light from the source hits the surface, whereas the actual incident point of the straight path lies on the inner boundary between the skin and the body; these two points do not match when the skin is thick and the light is spread by diffusion in the skin. Our model is applicable when the skin is thin enough.

Another factor that could affect the reconstruction is the scattering in the medium. When scattering occurs, the path in the medium is no longer straight. The effect of the scattering is evaluated in Section 4.4.

### 2.4 Measurement and light path alignment

This section describes the measurement based on our model, called the *shortest-path measurement*. After rays passing through the object are measured, their paths are determined in a process called *light path alignment*.

#### 2.4.1 Setup of the measurement

Because the light path is modeled as a straight line, a path in the object is uniquely determined if both ends of the path are specified. If the light illuminates a large area, many rays are cast, as illustrated in Fig. 2, and the exact point at which a ray enters is difficult to determine. The incident light should therefore fall on a small area. Meanwhile, rays exiting the object are measured by shooting the surface of the target. The shooting is repeated while the object is rotated to collect rays entering and exiting at various points.

We employ an off-the-shelf perspective camera that provides a wide field of view (FOV). We must consider the FOV because it affects the measurement of a ray. In the case of a perspective projection, a ray from the light source is determined from the relationship between the focal point and the image plane of the camera.

#### 2.4.2 Light path alignment

Paths of a ray in a three-dimensional scene should be computed because they are required for the reconstruction. The three-dimensional coordinates of the points at which a ray enters and exits are determined as follows. The point at which a ray enters is determined by calculating the intersection of the ray from the light source and a contour of the target. Similarly, the point at which a ray exits is determined by calculating the intersection of the ray from the camera and a contour of the target. To uniquely determine these intersections of the ray and the contour of the target, no part of the contour may be occluded from the light source or the camera. Therefore, the shape of the object needs to be convex in our measurement.

To obtain a target contour, we compute a visual hull [10] as the shape of the target in the following steps. The target is first captured from various views under ambient lighting. A silhouette is then extracted by binarization after subtracting the background from the captured image. A visual hull is finally computed by taking the intersection of the perspective projections of the silhouettes in object space. Since our measurement requires the target to be convex, it is reasonable to utilize a visual hull, which recovers convex shapes exactly.
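The silhouette-extraction step of this kind can be sketched as follows (our own minimal version; real captures would need color handling and noise-robust thresholding):

```python
import numpy as np

def extract_silhouette(image, background, threshold=0.1):
    """Binary silhouette via background subtraction and thresholding.
    `image` and `background` are arrays of intensities in [0, 1]."""
    diff = np.abs(image.astype(float) - background.astype(float))
    if diff.ndim == 3:              # reduce a color image to one channel
        diff = diff.max(axis=2)
    return diff > threshold
```

The binary masks from multiple views are then back-projected and intersected to form the visual hull.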

Each aligned path is expressed in polar coordinates (*X*, *θ*) fixed on the target. The origin is at the center of rotation in the measurement setup. In a sinogram, the horizontal and vertical axes respectively correspond to (*X*, *θ*), and the attenuation of the ray is stored, as illustrated in Fig. 1. For each ray, we compute the intersections of the ray and the contour of the object in Cartesian coordinates (*x*, *y*) that share the same origin as the polar coordinates. Denoting the intersection of the ray from the light source and the contour by **p**_{l} and the intersection of the ray from the camera and the contour by **p**_{c}, the angle of the path *θ* is calculated as the angle of the vector **p**_{c}−**p**_{l} with respect to the *x*-axis, and the displacement *X* is calculated as the signed distance from the origin to the line through **p**_{l} and **p**_{c}.

This sinogram is identical to that used in conventional CT, and the same reconstruction technique can therefore be employed.
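Under one common parameterization (a sketch with our own conventions, where *θ* is the direction angle of the path and *X* its signed distance from the origin), the alignment of a single path is:

```python
import numpy as np

def align_path(p_l, p_c):
    """Sinogram coordinates (theta, X) of the path from the entry point
    p_l (light side) to the exit point p_c (camera side)."""
    p_l = np.asarray(p_l, dtype=float)
    p_c = np.asarray(p_c, dtype=float)
    d = p_c - p_l
    theta = np.arctan2(d[1], d[0])      # path angle w.r.t. the x-axis
    # Signed perpendicular distance from the origin (center of rotation)
    # to the line through p_l with direction (cos theta, sin theta).
    X = p_l[0] * np.sin(theta) - p_l[1] * np.cos(theta)
    return theta, X
```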

### 2.5 Observation rate of the light path

When the surface of the object is measured using a single camera, not all rays in the object are measured depending on the object’s shape and the optical setup. We now look at Fig. 5 to understand the unobserved rays. Rays 1 and 2 cast from the light source enter the object at the same point but exit from different points, before being measured by the camera on the opposite side of the object. While ray 1 is observable because it reaches the surface visible from the camera, ray 2 is unobservable because it reaches the surface unobservable from the camera.

Figure 7 shows sinograms generated for several light angles *θ*_{l} in Fig. 5, together with the “fullset” sinogram that contains sufficient rays with which to reconstruct the full interior. There are missing areas in the sinograms owing to the unobserved rays. In the case of *θ*_{l}=30^{∘}, there are missing areas on both sides of the sinogram. Likewise, in the case of *θ*_{l}=60^{∘}, there are missing areas on the sides, but the areas are smaller. In contrast, a missing area appears at the center in the case of *θ*_{l}=120^{∘}.

We next evaluate the observation rate of rays. Here, we measure the observation rate using coverage, the ratio of the number of observed rays to the number of rays in the fullset sinogram. To describe the missing part, we use the distances *s*_{min} and *s*_{max} shown in Fig. 6, which change with *θ*_{l}. Let *θ*_{FOV} denote the FOV.

Using *s*_{max} and *s*_{min} in Fig. 6, the coverage is calculated from the difference *s*_{max}−*s*_{min}. For the perspective projection, the coverage takes its maximum at \(\theta _{l} = \frac {\pi - \theta _{\text {FOV}}}{2}\). Figure 8 plots the coverage when the FOV is 30^{∘} and 60^{∘}. It is found that the coverage for FOV =60^{∘} is lower than that for FOV =30^{∘} for any *θ*_{l}. In addition, we show the coverage in the cases of the orthogonal projection that were considered in a previous paper [9]. In the case of orthogonal projection, full coverage is achieved at *θ*_{l}=90^{∘}; hence, a lack of observations can be avoided using this angle. In contrast, full coverage is never achieved in the case of perspective projection. The problem of insufficient observations is inevitable as long as a single perspective camera is used.

## 3 Reconstruction

When there are insufficient observations, a possible solution is to modify the setup by adding another light source or camera to complete the observation. If all the paths could be observed, the interior would be reconstructed most accurately. One difficulty of this approach is that an additional light source or camera must be precisely aligned because the reconstruction is sensitive to misalignment. Another difficulty is that the number and placement of light sources and cameras depend on the shape of the object; even if an optimal configuration is found for one object, it is not usable for others. Moreover, there is no guarantee that a configuration that makes the observation complete exists.

In this paper, we employ numerical optimization to deal with the problem of incomplete observations. The numerical optimization can also be used with multiple light sources and cameras.

### 3.1 Formulation as an optimization problem

In the case that the observations are insufficient, the correct reconstruction is difficult because there are multiple solutions that agree with the observation mathematically. We introduce two constraints to eliminate solutions that are not physically correct and to achieve convergence to a more realistic distribution. The first constraint is the physical constraint (PC) on the range of the distribution of the absorption coefficient that is derived from the existing observations. This constraint rejects solutions that are physically wrong; however, there are still many possible distributions. The second constraint is regularization based on the total variation (TV) semi-norm that imposes smoothness on the distribution. This constraint allows convergence to a realistic solution by reducing the effect of noise of the observation.

The first term is a data-fidelity term that implies that a reprojection of an estimated distribution by the Radon transform should be close to a sinogram A_{observed}. The second term is the PC on the distribution, and the third term represents TV semi-norm regularization. Because the objective function of Eq. (13) is convex, we employ the alternating direction method of multipliers to solve the problem.
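The role of the constraints can be illustrated with a simplified solver. The sketch below replaces the paper's ADMM with plain projected gradient descent and omits the TV term; `R` and `A` stand for R_{observed} and A_{observed}:

```python
import numpy as np

def reconstruct(R, A, lower, upper, n_iter=500):
    """Minimize ||R @ sigma - A||^2 subject to lower <= sigma <= upper
    (the physical constraint) by projected gradient descent. A simplified
    stand-in for the ADMM solver with TV regularization used in the paper."""
    step = 0.5 / (np.linalg.norm(R, 2) ** 2)   # safe step from the Lipschitz bound
    sigma = np.clip(np.zeros(R.shape[1]), lower, upper)
    for _ in range(n_iter):
        grad = 2.0 * R.T @ (R @ sigma - A)                   # data-term gradient
        sigma = np.clip(sigma - step * grad, lower, upper)   # project onto C
    return sigma
```

Even this crude solver shows how the box projection rejects physically wrong values at every iteration.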

#### 3.1.1 Reprojection error of the Radon transform

Let *i* denote the index of a cell of the discrete distribution after serialization, and let *j* denote the index of a ray. The Radon transform of ray *j* is written as a weighted sum of the cells it passes through, i.e., as a linear system A = R**σ** in matrix form. The reprojection error is the difference between the observed sinogram A and the projection of the estimated **σ** obtained using the matrix R. We consider the reprojection error only for the available observations and measure it using the *L*2 norm. Let R_{observed} denote the Radon transform for the available observations and A_{observed} denote the sinogram of the available observations. Finally, the data-fidelity term is derived as \(\| \mathrm{R}_{\text{observed}} \boldsymbol{\sigma} - \mathrm{A}_{\text{observed}} \|_{2}^{2}\).

#### 3.1.2 Physical conditions of light absorption

Because the medium only absorbs light, the absorption coefficient cannot be negative; the lower bound on each *σ*_{i} is written as *σ*_{i}≥0.

The upper bound of the absorbance coefficient can be determined by considering the relationship between the total absorption and the distribution of the absorbance coefficient.

Consider three light paths with total absorptions *A*_{0}, *A*_{1}, and *A*_{2} that all pass through the cell *σ*_{j}. The absorption at *σ*_{j} must not exceed the total absorption of any of the three light paths, and *σ*_{j} is thus constrained as *σ*_{j}≤ min(*A*_{0}, *A*_{1}, *A*_{2}). The absorption at a certain pixel must therefore not be higher than the minimum of all the projections that travel through the pixel. In the general case, the upper bound is written as

$$\sigma_{j} \leq \min_{i \in \chi_{j}} A_{i},$$

where *χ*_{j} is the set of rays that pass through *σ*_{j}.
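Computed from the observed sinogram, the per-cell upper bound can be sketched as follows (our own data layout: `hits[j]` lists the indices of observed rays passing through cell j):

```python
import numpy as np

def upper_bounds(A_observed, hits):
    """Upper bound of sigma_j: the minimum total absorption over the rays
    in chi_j; cells crossed by no observed ray remain unbounded."""
    return np.array([min(A_observed[i] for i in chi) if chi else np.inf
                     for chi in hits])
```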

Let *C* denote the set of distributions that satisfy the above range of absorption. The PC term is expressed by the indicator function *ι*_{C}(**σ**), which takes the value 0 if **σ** belongs to *C* and +∞ otherwise.

#### 3.1.3 TV minimization

We define the TV semi-norm ∥**σ**∥_{TV} using ∇_{1} and ∇_{2}, the discrete horizontal and vertical differential operators. The minimization of the semi-norm forces the distribution to vary gradually while preserving edges. This is preferable in most cases, and the effect of the term can be adjusted by choosing a small *λ* when it is not suitable.
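As an illustration, an anisotropic variant of the TV semi-norm (the paper may use the isotropic form; this sketch simply sums absolute finite differences) is:

```python
import numpy as np

def tv_seminorm(sigma):
    """Anisotropic TV: sum of absolute horizontal (nabla_1) and
    vertical (nabla_2) finite differences of the distribution."""
    dx = np.abs(np.diff(sigma, axis=1)).sum()
    dy = np.abs(np.diff(sigma, axis=0)).sum()
    return float(dx + dy)
```

A constant image has zero TV, while an edge contributes in proportion to its height, which is why minimizing TV suppresses noise yet preserves edges.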

## 4 Experiment

### 4.1 Appropriate setup of the measurement

We determine the appropriate setup before performing an experiment in a real environment.

We generate sinograms for the light angles *θ*_{l} in Fig. 5 and evaluate the interiors reconstructed by the FBP and our reconstruction method. Figure 10 shows the reconstructed interiors for *θ*_{l}=0^{∘}, 30^{∘}, 60^{∘}, 90^{∘}, and 120^{∘}. In the cases of *θ*_{l}=0^{∘}, 30^{∘}, and 60^{∘}, there are missing areas on both sides of the sinogram. The outer parts are not correctly estimated owing to the large missing areas at *θ*_{l}=0^{∘}, but the central part is estimated correctly. There are similar tendencies in the results for *θ*_{l}=30^{∘} and 60^{∘}, but the errors are smaller because of the better observation.

In contrast, the center of the sinogram is missing in the cases of *θ*_{l}=90^{∘} and 120^{∘}. Our reconstruction method fails to reconstruct the center of the interior in these cases, as does the FBP. This is because of the absence of observations of the center: no rays passing through the central area are observed, whereas more than one ray is observed in the previous cases. To reconstruct the whole interior, the center of the sinogram must not be missing. In terms of quality, our method provides a better reconstruction than the FBP. Whereas the result of the FBP has line artifacts and blurring, a clear shape is reconstructed without artifacts using our method.

Table 1 lists the RMSE and the maximum of the absolute error for each light angle. The RMSE values are small at *θ*_{l}=30^{∘} and 60^{∘} and increase as the number of failure pixels increases. The maximum of the absolute error reflects how the worst pixel is reconstructed. Referring to the absolute error in Fig. 10, the worst pixels lie in the missing area of the sinogram. It is confirmed that the absolute error is bounded by the physical constraint of the reconstruction.

RMSE and maximum of absolute error versus the light angle

| Light angle (deg) | RMSE | Max. of absolute error |
|---|---|---|
| 0 | 3.15×10 | 1.32×10 |
| 30 | 2.57×10 | 0.24×10 |
| 60 | 1.32×10 | 0.60×10 |
| 90 | 2.31×10 | 2.00×10 |
| 120 | 7.55×10 | 2.00×10 |

We now look for an appropriate setup such that the coverage of the observation is high, while the center of the sinogram remains filled. Let us review the coverage of the observation in Fig. 8. The coverage takes its maximum at \(\theta _{l} = \frac {\pi - \theta _{\text {FOV}}}{2}\); see Section 2.5 for details. It is noteworthy that the center of the sinogram is missing in the case that \(\theta _{l} > \frac {\pi - \theta _{\text {FOV}}}{2}\). For these reasons, the appropriate setup is \(\theta _{l} = \frac {\pi - \theta _{\text {FOV}}}{2}\); however, care needs to be taken that *θ*_{l} does not exceed the angle.

### 4.2 Experiment on a real object

In this section, we perform an experiment in a real environment to confirm the validity of the shortest-path measurement by comparing the result with a measurement made under a parallel lighting setting.

The target of the experiment is a bin filled with gelatin, with a blue transparent plastic stick placed at some distance from the center of the bin.

The light angle *θ*_{l}, the angle between the light and camera directions, is fixed at 45^{∘}. We chose the angle such that the center of the sinogram is filled while the observed intensity is high enough for a quick measurement.

To calculate the total absorption, a reference object without the plastic stick is measured in addition to the target; the total absorption is then calculated by Eq. (1). Note that this calculation also cancels out the angular nonuniformity of diffusion. Generally, the intensity distribution through the surface is described by the bidirectional transmission distribution function *f*_{T}(*ω*_{i},*ω*_{o}), where *ω*_{i} is the incidence angle and *ω*_{o} is the outgoing angle of the light. Because the target measurement *s*_{t} and the reference measurement *s*_{r} have *ω*_{i} and *ω*_{o} in common, the bidirectional transmission distribution function *f*_{T} of the surface of the target is cancelled out.

The next step is alignment of the light path. After a contour of the target is estimated considering the visual hull of silhouettes from various views, the light path is aligned with the contour estimated and a sinogram is generated. The interior is reconstructed from the sinogram.

Similarly, we measure the same target under a parallel light setting. The same setup is used except that parallel light is cast directly and *θ*_{l} is set to 0^{∘}. The sinogram is generated directly from the captured images under the assumption that rays travel straight in the target and the measured transmitted rays remain parallel to each other.

We now compare the results of the reconstruction methods. In the result of the FBP, the distribution outside the blue circle is not reconstructed, which corresponds to the missing area in the sinogram. In contrast, our optimization method is able to reconstruct the distribution where the observations are insufficient. It is confirmed that our method has an advantage over the FBP.

### 4.3 Measurement of arbitrary convex shape

In this experiment, *θ*_{l} is set to 30^{∘}. Figure 14 shows the ground truth, the estimated contour, and the reconstructed interior with optimization. A blue line shows the ground-truth contour of the object. The contour of the triangle is estimated almost correctly. Also, from the reconstructed interior, we can see that the circular area at the center is reconstructed without a significant artifact.

### 4.4 Effect of scattering

Our method is based on the assumption of the shortest-path model that only the absorption of light in the object need be considered. However, as we found in the experiment for the real object, scattering in the medium may not be negligible in a practical measurement. It is expected that if the scattering of the medium is strong, our model is no longer a good approximation of paths of rays. In this section, we confirm the effect of scattering in a simulation environment.

We simulate a target with scattering coefficient *σ*_{s} and absorption coefficient *σ*. The absorption coefficient *σ* is set to zero in the outer cylinder and 10.0 in the inner cylinder. The refractive index of the media is set to 1.0.

We vary the scattering coefficient *σ*_{s}. Note that the radius of the cylinder is 1 and *σ*_{s} determines the mean free path of the ray according to 1/*σ*_{s}. Figure 16 shows the top view and the projections on the camera for scattering coefficients *σ*_{s} of (i) 1.0, (ii) 2.0, (iii) 3.0, and (iv) 5.0. It is found that the projection is clear in (i), where most rays scatter once or twice, and that scattering degrades the projection as *σ*_{s} increases to 5.0, where rays scatter more than five times on average. The degradation of the projection directly affects the quality of the raw and aligned sinograms, as shown in Fig. 17. The bottom row shows the reconstruction from the aligned sinogram. We see that the degradation of the sinogram affects the reconstruction: while the highly absorbing part has a clear shape in (i), the shape is more blurred in (ii), (iii), and (iv).

The results show that our measurement is degraded by scattering; however, this can possibly be overcome using descattering techniques [12, 13].

## 5 Conclusion

We investigated the optical measurement of the internal structure of a diffuse surface object. Our framework is built on the shortest-path model, which assumes that a ray diffuses only at the surface and travels straight inside the object. Our measurement is realized with a simple setup comprising a rotary stage, a light source, and an off-the-shelf perspective camera. It was found that, with this setup, the observation of light rays is never sufficient for the conventional reconstruction method. We solved this problem by introducing a reconstruction method based on numerical optimization. With the physical constraint on light absorption and TV semi-norm regularization, the full interior could be reconstructed. Our method was shown to be able to reconstruct the interior of an object in a real-world experiment. Furthermore, we evaluated the reconstruction with respect to the measurement setup. It was found that the reconstruction is not perfect if rays vital to the reconstruction are not observed. We also confirmed that scattering degrades the measurement; however, the measurement is still useful for a weakly scattering medium.

## Declarations

### Acknowledgements

The authors would like to thank all the people who kindly helped us in conducting this study.

### Funding

This work was partly supported by JSPS KAKENHI Grant Numbers 26700013 and 15K16027, and JST CREST Grant Number JPMJCR1764.

### Availability of data and materials

The datasets during the current study are available from the corresponding author on reasonable request.

### Authors’ contributions

TI designed and developed the methodology, performed the experiments, and prepared the manuscript. TF advised on the methodology and paper presentation. TA advised on the reconstruction algorithm, and HK advised on the simulation experiment of CT measurement. YM supervised the research and advised on the paper presentation. All authors read and approved the final manuscript.

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

- Sharpe J (2004) Optical projection tomography. Annu Rev Biomed Eng 6:209–228.
- Florescu L, Schotland JC, Markel VA (2009) Single-scattering optical tomography. Phys Rev E 79(3):036607.
- Yuan B, Tamaki T, Kushida T, Mukaigawa Y, Kubo H, Raytchev B, Kaneda K (2015) Optical tomography with discretized path integral. J Med Imaging 2(3):033501.
- Ishii Y, Arai T, Mukaigawa Y, Tagawa J, Yagi Y (2013) Scattering tomography by Monte Carlo voting. In: IAPR International Conference on Machine Vision Applications, 1–5.
- Akashi R, Nagahara H, Mukaigawa Y, Taniguchi R (2015) Scattering tomography using ellipsoidal mirror. In: 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV), 1–5. https://doi.org/10.1109/FCV.2015.7103702.
- Ntziachristos V, Yodh A, Schnall M, Chance B (2000) Concurrent MRI and diffuse optical tomography of breast after indocyanine green enhancement. Proc Natl Acad Sci 97(6):2767–2772.
- Hebden JC, Gibson A, Yusof RM, Everdell N, Hillman EM, Delpy DT, Arridge SR, Austin T, Meek JH, Wyatt JS (2002) Three-dimensional optical tomography of the premature infant brain. Phys Med Biol 47(23):4155.
- Culver JP, Siegel AM, Stott JJ, Boas DA (2003) Volumetric diffuse optical tomography of brain activity. Opt Lett 28(21):2061–2063.
- Iwaguchi T, Funatomi T, Kubo H, Mukaigawa Y (2016) Light path alignment for computed tomography of scattering material. IPSJ Trans Comput Vis Appl 8(1):2.
- Laurentini A (1994) The visual hull concept for silhouette-based image understanding. IEEE Trans Pattern Anal Mach Intell 16(2):150–162. https://doi.org/10.1109/34.273735.
- Jensen HW, Christensen PH (1998) Efficient simulation of light transport in scenes with participating media using photon maps. In: Proceedings of SIGGRAPH ’98, 311–320. ACM, New York.
- Fuchs C, Heinz M, Levoy M, Seidel HP, Lensch HP (2008) Combining confocal imaging and descattering. In: Proceedings of EGSR ’08, 1245–1253. Eurographics Association, Aire-la-Ville.
- Tanaka K, Mukaigawa Y, Kubo H, Matsushita Y, Yagi Y (2015) Recovering inner slices of translucent objects by multi-frequency illumination. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5464–5472.