Research Paper | Open Access

Real-time rendering of aerial perspective effect based on turbidity estimation

Abstract

In real outdoor scenes, objects distant from the observer suffer from a natural effect called aerial perspective that fades their colors and blends them with the environmental light color. The aerial perspective can be modeled using a physics-based approach; however, handling the changing and unpredictable environmental illumination as well as the weather conditions of real scenes is challenging in terms of visual coherence and computational cost. In such cases, even state-of-the-art models fail to generate realistic synthesized aerial perspective effects. To overcome this limitation, we propose a real-time, turbidity-based, full-spectrum aerial perspective rendering approach. First, we estimate the atmospheric turbidity by matching luminance distributions of a captured sky image to sky models. The obtained turbidity is then employed for aerial perspective rendering using an improved scattering model. We performed a set of experiments to evaluate the scattering model and the aerial perspective model. We also provide a framework for real-time aerial perspective rendering. The results confirm that the proposed approach synthesizes realistic aerial perspective effects with low computational cost, outperforming state-of-the-art aerial perspective rendering methods for real scenes.

1 Introduction

In real open-air scenes, when a target object viewed by an observer is far away, the object's perceived appearance changes, becoming faded and blended with the environmental light color. This natural effect is known as aerial perspective and is caused by light scattering from particles suspended in the atmosphere.

The importance of aerial perspective rendering is reflected in several applications, as illustrated in Fig. 1. It can be employed in image and video composition to generate artistic atmospheric effects over real scenes. It can also be used in computer vision (CV) and computer graphics (CG) for rendering virtual objects with an appearance consistent with the outdoor scene. In particular, fields such as mixed reality (MR), where CG models are merged into a real scene, can exploit aerial perspective rendering to produce more realistic virtual objects.

Fig. 1. Aerial perspective rendering with our method. Top row: An input image and the re-targeted synthesized aerial perspective effect (from left to right). Bottom row: An MR application before and after aerial perspective rendering

In general, we have to render an artificial aerial perspective effect on a target object to emulate the natural atmospheric effect. This goal is especially difficult in real-time applications in outdoor scenes, which are challenging due to varying illumination and atmospheric conditions such as clear, hazy, or cloudy days.

A conventional approach for aerial perspective rendering is to find an outdoor light scattering model with parameters that generate a realistic synthesized appearance consistent with the real scene. Such scattering models can be analyzed from captured skies using sky illumination models [1–7]. Due to its simplicity and accuracy for scattering modeling, a heuristic parameter called turbidity (T) has been used to categorize atmospheric conditions [8–12]. Following that approach, we propose a full-spectrum turbidity-based aerial perspective model that enables us to render realistic aerial perspective effects in real time. Our model heavily relies on estimating turbidity from clear-sky regions of a captured omnidirectional sky image. Thus, scenes where the sky is not visible are beyond the scope of this work.

Method overview: The overview of our aerial perspective rendering approach is illustrated in Fig. 2. The input data are a real omnidirectional sky image captured by a fisheye lens camera and the input scene, which can be captured by the same camera or by a different perspective or panoramic camera. In image and video composition applications, the input scene is represented by its RGB intensity color, its depth map, and the spectral sensitivity of the camera used to capture the input scene. In MR applications, the input scene is composed of the RGB intensity color of the real scene, the color and depth of the virtual object, and the camera's spectral sensitivity. The problem addressed in this paper is to estimate the turbidity from the omnidirectional sky image and then use it to render an aerial perspective effect. While the aerial perspective is rendered over the de-hazed input scene in composition applications, it is rendered only on the virtual object in MR. For this purpose, our method consists of the following stages:

1) Turbidity estimation: The captured omnidirectional sky image is compared with turbidity-based sky models to find the turbidity value that provides the best match (Section 3.4).

2) Aerial perspective rendering: An improved turbidity-based scattering model (Section 5) is used in a full-spectrum aerial perspective rendering equation (Section 4) to generate the final synthesized scene. This stage is performed in real time in a graphics processing unit (GPU) framework (Section 6).

Fig. 2. Overview of the proposed method

Contributions: The main contributions of this work are threefold:

1) An improved turbidity-based scattering model for rendering that fits real atmospheric effects more accurately than previous works [3, 13].

2) A novel full-spectrum, turbidity-based aerial perspective rendering model that synthesizes plausible aerial perspective effects in real scenes and improves over previous works [13–15] in terms of visual coherence.

3) A real-time framework for aerial perspective effect rendering. The implementation delivers more than two orders of magnitude speed-up compared to prior art [13], allowing the real-time performance needed in applications such as MR.

2 Related work

Previous methods for aerial perspective modeling and rendering rely on understanding the scattering phenomena in the atmosphere. McCartney [16] presented an excellent review of earlier work on atmospheric optics. His work contains relevant data about the scattering phenomena under different weather conditions categorized by the heuristic parameter turbidity (T). T models the scattering by molecules of air and larger particles, such as haze, and is employed to classify atmospheric conditions ranging from pure air to fog. Since the atmospheric phenomena in [16] are modeled using real data, they have been used in both the CV and CG fields. However, such models have been used differently depending on whether the aim is CV- or CG-oriented.

2.1 CG-oriented aerial perspective rendering

In this category, the atmospheric optics models are targeted for completely virtual scenes. Preetham et al. [3] presented a full-spectrum turbidity-based analytical sky model for various atmospheric conditions. Based on that model, they developed an approximated scattering model for aerial perspective rendering. Dobashi et al. [17] introduced a fast rendering method to generate various atmospheric scattering effects via graphics hardware. Nielsen [18] presented a real-time rendering system for simulating atmospheric effects. Riley et al. [19] presented a lighting model for rendering several optical phenomena. Schafhitzel et al. [20] rendered planets with atmospheric scattering effects in real time. Bruneton and Neyret [5] rendered both sky and aerial perspective from all viewpoints from the ground to outer space.

The synthesized atmospheric effects generated by the mentioned works are visually plausible in fully CG scenes where the illumination is controlled. However, their direct application to real scenes does not perform similarly, since the scattering models need to be tuned to fit the varying, natural outdoor illumination. Moreover, such models are usually intended as post-processing effects where visual quality is more important than computational cost.

2.2 CV-oriented aerial perspective rendering

This group of methods models the atmospheric phenomenon in real outdoor scenes. Using scattering models, several works were able to restore images captured under different weather conditions. Gao et al. [21] presented an aerial perspective model for haze filtering based on a parameter called maximum visibility. Zhu et al. [15] developed a linear color attenuation prior for image de-hazing based on a parameter called the scattering coefficient. The synthesized results from these works successfully corrected and restored, to some extent, the color of images under hazy conditions. However, these methods are not automatic, and the results depend on manual tuning of either the maximum visibility in [21] or the scattering coefficient in [15], which controls the amount of de-hazing.

Automatic image restoration approaches have also been proposed in the literature. Narasimhan and Nayar [22] proposed a physics-based scattering model to describe the appearance of real scenes under uniform bad weather conditions. Using that scattering model, their method restored the contrast of an image; however, it required a second image of the same scene under a different weather condition. This limitation was overcome by He et al. [14], who proposed an automatic haze-removal approach for single images using the dark channel prior. Results in [14] showed consistent and fast image de-hazing. However, using their method for aerial perspective rendering leads to appearances that are inconsistent with the natural aerial perspective, especially in cases with high haze densities.

To solve the previous drawbacks, Zhao [13] proposed an automatic turbidity-based aerial perspective model. In his approach, turbidity was estimated from captured omnidirectional sky images. The camera's spectral sensitivity was estimated for the conversion from spectral radiance to RGB pixel values. Combining the estimated spectral sensitivity and a simple correction of Preetham's scattering model [3], his method was able to generate an aerial perspective effect over virtual objects for outdoor MR. However, his method makes the synthesized virtual object suffer from an overly strong aerial perspective effect, even for low turbidity values at short distances. Moreover, his approach has a high computational cost.

3 Preliminary

3.1 Aerial perspective modeling

Figure 3 illustrates a general model of aerial perspective. The total light perceived by the observer is a summation of two components: direct transmission and airlight. The direct transmission stands for the light that comes from the target following the optical path and is attenuated until it reaches the observer. The airlight is the environmental light that is scattered in the same direction as the direct transmission and then attenuated on the way to the observer. The aerial perspective under various atmospheric conditions is broadly modeled as [22–24]:

$$\begin{array}{@{}rcl@{}} L(s, \lambda)&{} = {}&L(0, \lambda)e^{-\beta_{\text{sc}}(\lambda) s} \\ &&{+}\: L(\infty, \lambda)\left(1-e^{-\beta_{\text{sc}}(\lambda) s} \right), \end{array} $$
(1)
Fig. 3. General aerial perspective model. The total light perceived by the observer is a summation of the direct transmission and airlight

where L(s,λ) is the total light perceived by the observer, L(0,λ) is the light coming from the target without the aerial perspective effect, and L(∞,λ) is the atmospheric light. s is the distance between the target and the observer, and λ is the light wavelength. β_sc is the total atmospheric scattering coefficient, modeled as

$$\begin{array}{@{}rcl@{}} \beta_{\text{sc}} = \beta_{\mathrm{R}} + \beta_{\mathrm{M}}, \end{array} $$
(2)

where β_R is the Rayleigh scattering coefficient, which models particles much smaller than λ, such as molecules of air, and β_M is the Mie scattering coefficient, which models particles whose size is comparable to λ, such as particles of haze.

The Rayleigh scattering coefficient is given by [25]

$$ \beta_{\mathrm{R}} = \frac{8\pi^{3}({n}^{2}-1)^{2}}{3{N}\lambda^{4}} \left(\frac{6+3{p}_{n}}{6-7{p}_{n}} \right) e^{-\frac{h}{{H}_{\mathrm{R}0}}}, $$
(3)

and the Mie scattering coefficient is expressed by [26]

$$ \beta_{\mathrm{M}} = 0.434c(T)\pi \left(\frac{2\pi}{\lambda} \right)^{{\nu} -2}K(\lambda)e^{-\frac{h}{{H}_{\mathrm{M}0}}}, $$
(4)

where n = 1.0003 is the refractive index of air in the visible spectrum, N = 2.545 × 10²⁵ m⁻³ is the molecular number density of the standard atmosphere, p_n = 0.035 is the depolarization factor for air, h is the altitude at the scattering point, H_R0 = 7994 m is the scale height for Rayleigh scattering, c(T) is the concentration factor that depends on the atmospheric turbidity, ν = 4 is Junge's exponent, K(λ) is the wavelength-dependent fudge factor, and H_M0 = 1200 m is the scale height for Mie scattering.
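To make Eqs. (3) and (4) concrete, the following minimal C++ sketch evaluates both coefficients under the constants listed above. It is only an illustrative reference: the wavelength-dependent fudge factor K(λ) is replaced by an assumed fixed mid-range value, and c(T) follows Preetham's original formulation (our corrections appear in Section 5).

```cpp
// Sketch of Eqs. (3) and (4): Rayleigh and Mie scattering coefficients.
// Wavelength lambda in meters, altitude h in meters; results in 1/m.
// K(lambda) is wavelength dependent (roughly 0.65-0.69 per Preetham et al. [3]);
// a fixed mid value is used here purely for illustration.
#include <cmath>
#include <cstdio>

const double n_air = 1.0003;          // refractive index of air
const double N_mol = 2.545e25;        // molecular number density [m^-3]
const double p_n   = 0.035;           // depolarization factor for air
const double H_R0  = 7994.0;          // Rayleigh scale height [m]
const double H_M0  = 1200.0;          // Mie scale height [m]
const double nu    = 4.0;             // Junge's exponent
const double PI    = 3.14159265358979;

double beta_rayleigh(double lambda, double h) {                    // Eq. (3)
  double num = 8.0 * pow(PI, 3) * pow(n_air * n_air - 1.0, 2);
  double den = 3.0 * N_mol * pow(lambda, 4);
  return (num / den) * ((6.0 + 3.0 * p_n) / (6.0 - 7.0 * p_n)) * exp(-h / H_R0);
}

double beta_mie(double lambda, double h, double T, double K = 0.67) {  // Eq. (4)
  double c = (0.6544 * T - 0.6510) * 1e-16;  // Preetham's concentration factor
  return 0.434 * c * PI * pow(2.0 * PI / lambda, nu - 2.0) * K * exp(-h / H_M0);
}

int main() {
  double lambda = 550e-9, h = 0.0, T = 2.5;   // standard conditions, example turbidity
  printf("beta_R = %.4e 1/m, beta_M = %.4e 1/m\n",
         beta_rayleigh(lambda, h), beta_mie(lambda, h, T));
}
```

At λ = 550 nm and h = 0 m, beta_rayleigh returns approximately 1.35 × 10⁻⁵ m⁻¹, i.e., the 0.0135 km⁻¹ value discussed in Section 5.1.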

3.2 Atmospheric condition via turbidity

Turbidity is defined as the ratio of the optical thickness of the atmosphere composed of molecules of air plus larger particles to the optical thickness of the air molecules alone [16]:

$$ T = \frac{\int_{h_{\mathrm{i}}}^{h_{\mathrm{f}}} \!{\beta_{\mathrm{R}}(h)} \, \mathrm{d}h + \int_{h_{\mathrm{i}}}^{h_{\mathrm{f}}} \!{\beta_{\mathrm{M}}(h)} \, \mathrm{d}h }{\int_{h_{\mathrm{i}}}^{h_{\mathrm{f}}} \! {\beta_{\mathrm{R}}(h)} \, \mathrm{d}h }, $$
(5)

where h_i and h_f are the initial and final altitudes of the optical path, respectively.

Preetham et al. [3] presented an analytical sky model for various atmospheric conditions through turbidity. Their model relates the luminance Y (cd/m²) of the sky in any viewing direction V to the luminance Y_z at a reference point by

$$ Y = \frac{F(\theta,\gamma,T)}{F(0,\theta_{\mathrm{s}},T)} Y_{\mathrm{z}}, $$
(6)

where F is the sky luminance distribution model of Perez et al. [27], θ is the zenith angle of the viewing direction, θ_s is the zenith angle of the sun, and γ is the angle between the sun direction and the viewing direction (see the coordinates in Fig. 4).

Fig. 4. Coordinates in the sky hemisphere where the observer is at the origin
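As a reference for Eq. (6), the sketch below computes the relative sky luminance Y/Y_z with the Perez distribution F. The turbidity-dependent luminance coefficients are those published by Preetham et al. [3] as commonly reproduced; treat the exact constants as an assumption rather than part of this paper.

```cpp
// Sketch of Eq. (6): relative sky luminance Y/Y_z from the Perez model F.
// Angles in radians; coefficients A..E are Preetham's luminance fits (assumed).
#include <cmath>
#include <cstdio>

double perez_F(double theta, double gamma,
               double A, double B, double C, double D, double E) {
  return (1.0 + A * exp(B / cos(theta))) *
         (1.0 + C * exp(D * gamma) + E * cos(gamma) * cos(gamma));
}

// Y / Y_z for viewing direction (theta, gamma), sun zenith theta_s, turbidity T.
double relative_luminance(double theta, double gamma, double theta_s, double T) {
  double A =  0.1787 * T - 1.4630;
  double B = -0.3554 * T + 0.4275;
  double C = -0.0227 * T + 5.3251;
  double D =  0.1206 * T - 2.5771;
  double E = -0.0670 * T + 0.3703;
  return perez_F(theta, gamma, A, B, C, D, E) /
         perez_F(0.0, theta_s, A, B, C, D, E);     // Eq. (6)
}

int main() {
  const double DEG = 3.14159265358979 / 180.0;
  // Example: theta = 60 deg, gamma = 40 deg, sun at theta_s = 30 deg, T = 2.5.
  printf("Y/Y_z = %.3f\n", relative_luminance(60*DEG, 40*DEG, 30*DEG, 2.5));
}
```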

3.3 Rendering equation

In MR applications, we need an equation to convert radiometric quantities, such as spectral radiance, to pixel color values, such as RGB. In general, when an object is illuminated by a light source, the reflected light passes through the camera lens and is recorded by its charge-coupled device (CCD). The recorded image intensity for channel c ∈ {r,g,b} can then be modeled as

$$ I_{\mathrm{c}} = \int_{380~\text{nm}}^{780~\text{nm}} \! {L(\lambda)q_{\mathrm{c}}(\lambda)} \, \mathrm{d}\lambda, $$
(7)

where L(λ) is the reflected spectral radiance at the object surface, the range 380 to 780 nm stands for the visible spectrum of light, and q_c(λ) is the spectral sensitivity of the camera.

The camera's spectral sensitivity is important for color correction since it compensates for the effects of the recording illumination. For this purpose, we benefited from Kawakami et al. [28], who estimated q_c(λ) from captured omnidirectional sky images and turbidity-based sky spectra, and from the public spectral sensitivity data for various cameras [29].
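A minimal sketch of Eq. (7), assuming both the radiance and the sensitivity curves are discretized at a common wavelength step (5 nm here, an arbitrary choice):

```cpp
// Sketch of Eq. (7): convert a spectral radiance L(lambda) into a channel
// intensity using the camera sensitivity q_c(lambda), integrated over
// 380-780 nm with a simple Riemann sum. Both curves are assumed to be
// sampled at the same wavelengths, every 5 nm.
#include <vector>

double image_intensity(const std::vector<double>& L,    // spectral radiance samples
                       const std::vector<double>& q_c,  // sensitivity of channel c
                       double d_lambda_nm = 5.0) {
  double I_c = 0.0;
  for (size_t k = 0; k < L.size() && k < q_c.size(); ++k)
    I_c += L[k] * q_c[k] * d_lambda_nm;                 // Eq. (7), discretized
  return I_c;
}
```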

3.4 Atmospheric turbidity estimation

The atmospheric turbidity can be estimated by matching the luminance distribution of turbidity-based Preetham sky models to an omnidirectional sky image captured by a fisheye lens camera, as in [13]. First, the sun position is estimated in the captured sky image, either by finding the center of the saturated area of the sun or by using the longitude, latitude, date, and time at the observer's position. Then the luminance ratio Y_i/Y_ref (Y from the XYZ color space) is calculated between a sampling point i and a reference point ref, which can be the zenith or any other visible point in the captured sky image. The ratio Y_i(T)/Y_ref(T) is computed at the corresponding points in the Preetham sky models with the same sun position using Eq. (6). The turbidity-based sky model that best matches the captured sky image is the one with the lowest difference between the two ratios. Therefore, the targeted turbidity is the solution to the minimization problem:

$$ \underset{T\in [1,20]}{\text{arg min}} \sum\limits_{i=1}^{N} \left| \frac{Y_{i}(T)}{Y_{\text{ref}}(T)} - \frac{Y_{i}}{Y_{\text{ref}}} \right|, $$
(8)

where N is the number of sample points used in the calculation process. In this paper, we solve for the turbidity using the Levenberg-Marquardt algorithm (LMA), a simpler yet efficient approach compared to the particle swarm optimization used in [13].

Since the Preetham sky model does not provide equations for calculating the brightness of cloudy pixels, the Random Sample Consensus (RANSAC) approach is used to remove cloudy pixels (outliers) from the sampling and estimate turbidity only from clear-sky pixels (inliers).
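For illustration, the following sketch solves Eq. (8) with a coarse brute-force search over T instead of the LMA used in our implementation, and it assumes that cloudy outliers have already been removed by RANSAC. The Perez-based model ratio reuses the assumed Preetham luminance coefficients from the sketch in Section 3.2.

```cpp
// Simplified sketch of Eq. (8): search for the turbidity T in [1,20] that best
// matches measured luminance ratios Y_i/Y_ref of clear-sky samples. A grid
// search is used here only to make the matching criterion explicit.
#include <cmath>
#include <cfloat>
#include <vector>

struct SkySample { double theta, gamma, Y_ratio; };   // measured Y_i / Y_ref

// Perez F with Preetham's turbidity-dependent luminance coefficients (assumed).
static double F(double theta, double gamma, double T) {
  double A =  0.1787*T - 1.4630, B = -0.3554*T + 0.4275, C = -0.0227*T + 5.3251,
         D =  0.1206*T - 2.5771, E = -0.0670*T + 0.3703;
  return (1.0 + A*exp(B/cos(theta))) * (1.0 + C*exp(D*gamma) + E*cos(gamma)*cos(gamma));
}

double estimate_turbidity(const std::vector<SkySample>& samples,
                          double theta_ref, double gamma_ref) {
  double best_T = 1.0, best_err = DBL_MAX;
  for (double T = 1.0; T <= 20.0; T += 0.01) {
    double err = 0.0;
    for (const SkySample& s : samples) {
      double model = F(s.theta, s.gamma, T) / F(theta_ref, gamma_ref, T); // Y_i(T)/Y_ref(T)
      err += fabs(model - s.Y_ratio);                                     // Eq. (8) residual
    }
    if (err < best_err) { best_err = err; best_T = T; }
  }
  return best_T;
}
```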

4 Aerial perspective rendering equation

In order to render an aerial perspective effect in applications that contain only real scenes or both real and virtual objects, the RGB color system is more convenient than a spectral radiance system. The aerial perspective rendering equation for one viewing direction can be obtained by substituting the aerial perspective model of Eq. (1) into Eq. (7). From these equations, the observer perceives the intensity value I_c of a target object's pixel at distance s for channel c ∈ {r,g,b} as

$$\begin{array}{@{}rcl@{}} I_{\mathrm{c}} &=& \int \! {L(0,\lambda)e^{-\beta_{\text{sc}}(\lambda, T, h_{0}) s}q_{\mathrm{c}}(\lambda)} \, \mathrm{d}\lambda \\ &&{+}\: \int \! {L(\infty,\lambda) \left(1-e^{-\beta_{\text{sc}}(\lambda,T,h_{0}) s} \right) q_{\mathrm{c}}(\lambda)} \, \mathrm{d}\lambda, \end{array} $$
(9)

where L(0,λ), L(∞,λ), β_sc(·), and s are the same as in Eq. (1), and h_0 is the altitude at the observer position.

To simplify Eq. (9) into an RGB-based rendering equation, we can assume q_c(λ) to be narrow-band. In this way, we approximate the spectral sensitivity in the direct transmission and airlight terms by Dirac's delta function. Generalizing this approximation to any observer viewing direction V(θ,ϕ), we obtain

$${} I_{\mathrm{c}}(s,V) = I^{0}_{\mathrm{c}}(V) \Gamma_{\mathrm{c}} (T,s) + I^{\infty}_{\mathrm{c}}(T, V) \left(1- \Gamma_{\mathrm{c}}(T,s) \right), $$
(10)

where \(I^{0}_{\mathrm {c}}\) is the intensity value of a pixel at the target object, at distance s, and viewing direction V, without any aerial perspective effect. \(I^{\infty }_{\mathrm {c}}\) is the sky intensity value at an infinite distance in the same viewing direction V, and Γ c is the attenuation factor approximated as

$$ \Gamma_{\mathrm{c}}(T,s) = \frac{\int_{380~\text{nm}}^{780~\text{nm}} \! {e^{-\beta_{\text{sc}}(\lambda, T, h_{0}) s}q_{\mathrm{c}}(\lambda)} \, \mathrm{d}\lambda}{\int_{380~\text{nm}}^{780~\text{nm}} \! {q_{\mathrm{c}}(\lambda)} \, \mathrm{d}\lambda}. $$
(11)
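A minimal sketch of Eq. (11), assuming the total scattering coefficient β_sc(λ) of Eq. (2) and the sensitivity q_c(λ) have been pre-sampled at the same wavelengths; the wavelength step cancels in the ratio.

```cpp
// Sketch of Eq. (11): per-channel attenuation factor Gamma_c, given the total
// scattering coefficient beta_sc(lambda) (in 1/m) and the camera sensitivity
// q_c(lambda), both sampled over 380-780 nm at the same wavelengths.
#include <cmath>
#include <vector>

double attenuation_factor(const std::vector<double>& beta_sc,  // per-wavelength beta_sc
                          const std::vector<double>& q_c,      // per-wavelength q_c
                          double s_m) {                        // path length [m]
  double num = 0.0, den = 0.0;
  for (size_t k = 0; k < beta_sc.size() && k < q_c.size(); ++k) {
    num += std::exp(-beta_sc[k] * s_m) * q_c[k];
    den += q_c[k];
  }
  return den > 0.0 ? num / den : 1.0;   // ratio of Eq. (11); d_lambda cancels
}
```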

In real daylight scenes, we can calculate \(I^{\infty }_{\mathrm {c}}(T, V)\) from another viewing direction V′(θ′,ϕ) of the captured sky, with the same azimuth ϕ but a different zenith angle θ′. First, the visible sky is roughly segmented from the textureless area of the captured image using a watershed algorithm. Then, a horizon region is estimated within the visible sky pixels with the largest zenith angles. Finally, \(I^{\infty }_{\mathrm {c}}(\cdot)\) is computed from the pixels in the horizon region that have the highest intensity value \(I^{\infty }_{\mathrm {c}}(\theta ')\) by

$$ I^{\infty}_{\mathrm{c}}(T, \theta) = I^{\infty}_{\mathrm{c}}(\theta') \varsigma (T, \theta, \theta'), $$
(12)

where ς(·) is an intensity ratio modeled according to Preetham sky models as

$$ \varsigma (T, \theta, \theta') = \frac{1+(0.178T-1.463)e^{({-0.355}T+0.427)/ {\cos \theta}}}{1+(0.178T-1.463)e^{({-0.355}T+0.427)/{\cos \theta'}}}. $$
(13)
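Eqs. (12) and (13) reduce to a small helper, sketched below for one channel; angles are in radians and the intensity is the brightest horizon-region value described above.

```cpp
// Sketch of Eqs. (12)-(13): extrapolate the sky intensity I_inf at zenith angle
// theta from the brightest horizon-region pixel found at zenith angle theta_p.
#include <cmath>

double varsigma(double T, double theta, double theta_p) {          // Eq. (13)
  double A = 0.178 * T - 1.463, B = -0.355 * T + 0.427;
  return (1.0 + A * std::exp(B / std::cos(theta))) /
         (1.0 + A * std::exp(B / std::cos(theta_p)));
}

double sky_intensity(double I_inf_horizon, double T, double theta, double theta_p) {
  return I_inf_horizon * varsigma(T, theta, theta_p);               // Eq. (12)
}
```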

5 Improved scattering model for rendering

Scattering models for real scenes require parameters that guarantee a realistic result when rendering the aerial perspective effect. For this purpose, we propose an improved scattering model based on the real data of [16] relating weather conditions to scattering coefficients, summarized in Table 2 of the Appendix. The data in [16] were measured under standard conditions, that is, using a spectrally weighted average wavelength (λ = 550 nm) for daylight within the visual spectrum at sea level (h = 0 m).

5.1 Rayleigh scattering coefficient correction

From Table 2 in the Appendix, the Rayleigh scattering coefficient under standard conditions is β_R = 0.0141 km⁻¹. However, evaluating Eq. (3) for the same conditions yields β_R = 0.0135 km⁻¹. This slight variation of 0.0006 km⁻¹ in β_R is actually considerable in terms of the attenuation factor. According to the International Visibility Code summarized in [16], the visibility range in pure air is up to 277 km. Over that visibility range, a variation of 0.0006 km⁻¹ in the scattering coefficient scales the attenuation factor by 84.69% (e^(−0.0006×277) ≈ 0.8469). To adjust this disparity, we propose a straightforward multiplicative correction factor K_R given by

$$ {K}_{\mathrm{R}} = 0.0141/0.0135 = 1.0396. $$
(14)

Then our modified Rayleigh scattering coefficient is given by

$$ \hat{\beta}_{\mathrm{R}} = \frac{8\pi^{3}({n}^{2}-1)^{2}}{3{N}\lambda^{4}} \left(\frac{6+3{p}_{n}}{6-7{p}_{n}} \right) e^{-\frac{h_{0}}{{H}_{\text{R0}}}} \times {K}_{\mathrm{R}}, $$
(15)

where n, N, p_n, and H_R0 are the same as in Eq. (3), h_0 is the altitude at the observer, and K_R is given by Eq. (14).

5.2 Mie scattering coefficient correction

One issue in Preetham's scattering model [3] is related to the turbidity itself. From Eq. (5), T = 1 refers to the ideal case where the Mie scattering coefficient is zero. Thus, the concentration factor c = (0.6544T − 0.6510) × 10⁻¹⁶ of Preetham [3] and c = (0.6544T − 0.6510) × 10⁻¹⁸ of Zhao [13] should be zero for T = 1, but neither is. We corrected this issue, ensuring a more reliable fit to the real data in [16], by

$$ \hat{c}(T) = (0.65T-0.65)\times 10^{-16}. $$
(16)

Another issue is the value of the fudge factor K in [3, 13]. The fudge factor affects the Mie part of the attenuation factor exponentially. Thus, adjusting K to the real data in [16] is essential for handling hazy atmospheric conditions accurately. Preetham et al. [3] and Zhao [13] used a wavelength-dependent K ∈ [0.65, 0.69] for wavelengths λ ∈ [380, 780] nm. However, such fudge factor values do not match the data in [16]. Therefore, we corrected K according to Table 2, calculating an average fudge factor by solving Eq. (4) under standard conditions (λ = 550 nm and h = 0 m). The obtained fudge factor was

$$ {K}_{\mathrm{M}} = 0.0092. $$
(17)

Then our modified Mie scattering coefficient can be written as

$$ \hat{\beta}_{M} = 0.434 \hat{c}(T)\pi \left(\frac{2\pi}{\lambda} \right)^{\nu -2}e^{-\frac{h_{0}}{{H}_{\text{M0}}}}\times {K}_{\mathrm{M}}, $$
(18)

where \(\hat {c}\) is given by Eq. (16), ν and H_M0 are the same as in Eq. (4), h_0 is the altitude at the observer, and K_M is given by Eq. (17).
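Putting the corrections together, the sketch below evaluates Eqs. (15)–(18) with the constants of Section 3.1; it mirrors the earlier sketch of Eqs. (3) and (4) with K_R, K_M, and ĉ(T) applied.

```cpp
// Sketch of the corrected scattering coefficients, Eqs. (15)-(18).
// Same constants as Eqs. (3)-(4); K_R = 1.0396 and K_M = 0.0092 are the
// corrections of Section 5, and c_hat(T) = (0.65 T - 0.65) x 10^-16 vanishes
// for pure air (T = 1). Wavelength in meters, altitude in meters, result in 1/m.
#include <cmath>

const double n_air = 1.0003, N_mol = 2.545e25, p_n = 0.035;
const double H_R0 = 7994.0, H_M0 = 1200.0, nu = 4.0;
const double K_R = 1.0396, K_M = 0.0092, PI = 3.14159265358979;

double beta_R_hat(double lambda, double h0) {                      // Eq. (15)
  double num = 8.0 * std::pow(PI, 3) * std::pow(n_air * n_air - 1.0, 2);
  double den = 3.0 * N_mol * std::pow(lambda, 4);
  return (num / den) * ((6.0 + 3.0 * p_n) / (6.0 - 7.0 * p_n))
         * std::exp(-h0 / H_R0) * K_R;
}

double beta_M_hat(double lambda, double h0, double T) {            // Eqs. (16), (18)
  double c_hat = (0.65 * T - 0.65) * 1e-16;
  return 0.434 * c_hat * PI * std::pow(2.0 * PI / lambda, nu - 2.0)
         * std::exp(-h0 / H_M0) * K_M;
}
```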

6 GPU implementation of the aerial perspective rendering

Nowadays, GPU implementation is common in CV and CG. Given the proposed aerial perspective model, we now present how to implement it on a GPU. First, we show the GPU rendering pipeline that includes both a general rendering pipeline and our proposed fragment shader. Then we explain the fragment shader in more detail.

6.1 GPU rendering pipeline

A 3D graphics rendering pipeline takes 3D objects described by their vertices and primitives and generates the color values of the pixels to be shown on a display. Figure 5 illustrates a general GPU rendering pipeline in solid lines and the proposed GLSL (OpenGL Shading Language) fragment shader in dashed lines.

Fig. 5. The GPU rendering pipeline

In MR applications, we denote by VO the virtual object and by BG the background into which the virtual object is merged. In the case of composition applications, the input image is considered as VO, and there is no BG. Without loss of generality, we explain the rendering pipeline only for the MR case.

In general, the raw vertices and primitives of a VO input to a vertex shader are processed and transformed for a rasterizer. The rasterizer scans and converts the transformed primitives into 3D fragments, which are then processed in the default fragment shader and merged to obtain textured and lit 2D fragments. Normally, the resulting 2D fragments are stored in a default frame buffer and then sent to the display. We propose to employ one off-screen frame buffer for storing the 2D fragments of the VO coming from the default fragment shader and another off-screen frame buffer for storing the captured real scene, which we call BG. Since the aerial perspective rendering of Eq. (10) is an RGB-based model, we implement it on a GPU at the fragment level. We insert the fragment shader between the two off-screen buffers and the default frame buffer in order to render an MR frame where the VO has a synthesized aerial perspective effect that blends seamlessly with the natural atmospheric effect visible in BG.

6.2 Proposed GLSL fragment shader

Figure 6 illustrates the parameters required by the GLSL fragment shader in order to render an aerial perspective effect. In MR, BG stands for one frame of the real scene with turbidity T captured by a camera with spectral sensitivity q_c. Since we estimate T off-line, the proposed fragment shader only needs the position and color of a BG pixel and a VO fragment, respectively, for the computation. The position of a point in world coordinates with respect to an observer located at the origin is given by the depth s, azimuth ϕ, and zenith θ. The BG pixel color I_BG is given by its RGB values, and the VO fragment color I_VO is given by the RGB values of the CG model textured and illuminated without the aerial perspective effect.

Fig. 6. Proposed GLSL fragment shader for rendering with aerial perspective effect

Using the abovementioned parameters, the GLSL fragment shader consists of the following steps:

1) Initialization: The program loads the textures I_VO and I_BG, the target's relative depth \(\bar {s}\in [0,1]\) and position (l_x, l_y) in 2D screen coordinates, the turbidity T, and the spectral sensitivity q_c.

2) Positioning: The target's absolute position in world coordinates is estimated by the following equations (a CPU-side sketch of this step appears after this list):

    $$ s = {s}_{\text{near}}/(1-\bar{s}(1-{s}_{\text{near}}/{s}_{\text{far}})), $$
    (19)
    $$ \phi = \arctan(y/x), $$
    (20)
    $$ \theta = \arccos(z / \sqrt{x^{2}+y^{2}+z^{2}}), $$
    (21)

where the depth s is in meters; s_near and s_far are the distances to the near and far planes, respectively; and (x,y,z) is the target's relative position in world coordinates computed from

    $$ \left[ \begin{array}{c} x \\ y \\ z \\ w \end{array} \right] = [M_{4\times 4} \times P_{4\times 4} \times \mathrm{R}_{4\times 4}]^{-1} \left[ \begin{array}{l} l_{\mathrm{x}} \\ l_{\mathrm{y}} \\ \bar{s} \\ 1 \end{array} \right], $$
    (22)

    where M and P are the model view matrix and the projection matrix, respectively, and R is the remap matrix given by

    $$ {R}_{4\times 4} = \left[ \begin{smallmatrix} 2 & 0 & 0 & -1 \\ 0 & 2 & 0 & -1 \\ 0 & 0 & 2 & -1 \\ 0 & 0 & 0 & 1 \end{smallmatrix} \right]. $$
    (23)
3) Aerial perspective rendering: The attenuation factor Γ_c(T,s) is computed using Eq. (11). \(I^{\infty }_{\mathrm {c}}(T, \phi, \theta)\) is computed according to Eqs. (12) and (13). The target with the aerial perspective effect \(\hat {I}_{\text {VO}}\) is calculated using Eq. (10) and then blended with I_BG to produce the final result.
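The following CPU-side C++ sketch mirrors the positioning step of Eqs. (19)–(21). It assumes the world-space coordinates (x, y, z) have already been recovered via Eq. (22), and it uses atan2 as the quadrant-aware form of arctan(y/x).

```cpp
// Sketch of Eqs. (19)-(21): metric depth from the normalized depth s_bar and
// the near/far planes, plus azimuth and zenith angles of the target direction.
#include <cmath>

struct Position { double s, phi, theta; };

Position fragment_position(double s_bar, double s_near, double s_far,
                           double x, double y, double z) {
  Position p;
  p.s     = s_near / (1.0 - s_bar * (1.0 - s_near / s_far));   // Eq. (19)
  p.phi   = std::atan2(y, x);                                   // Eq. (20), quadrant-aware
  p.theta = std::acos(z / std::sqrt(x*x + y*y + z*z));          // Eq. (21)
  return p;
}
```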

7 Experimental results

In this section, we evaluate the turbidity estimation approach and the aerial perspective rendering model. A composition application was used for the qualitative and quantitative evaluation of the method. We also provide an application to mixed reality. All the following experiments were run in C++ on a PC with Windows 7, an Intel Core i7 2.93 GHz CPU, 16 GB of RAM, and an nVIDIA GTX 550 Ti GPU with 4049 MB of memory.

7.1 Turbidity estimation test

We tested our approach for turbidity estimation using static omnidirectional images of both simulated skies and captured skies.

7.1.1 Evaluation with sky models

We estimated turbidity 100 times taking the Preetham sky models as input images. For this purpose, we implemented the Preetham sky models illustrated in Fig. 7. These models are sky images of 500 by 500 pixels with turbidity values ranging from 2.0 to 9.0 and a sun position of θ_s = 58.4° and ϕ_s = −179.4°. The atmospheric turbidity was estimated for each input image using Eq. (8). N = 100 random sampling points were taken for each turbidity estimation. The results are shown in Table 1, where T̄ stands for the mean turbidity and σ_T for the corresponding standard deviation. The speed of the turbidity estimation method was 200 sampling points per second.

Fig. 7. Implemented Preetham sky models. From left to right: T = 2, T = 3, T = 4, T = 5, T = 7, T = 9

Table 1 Estimated turbidity values using the Preetham sky models as an input image

7.1.2 Evaluation with captured sky images

We estimated turbidity for omnidirectional sky images captured by a Canon EOS5D with a fisheye lens at 12 p.m. on different days. The sky images are illustrated in Fig. 8. N = 100 random sampling points were used for each turbidity estimation. Turbidity was estimated 50 times for each sky image.

Fig. 8. Captured sky images by EOS5D with fisheye lens. From left to right: T = 1.90, T = 2.52, T = 2.54, T = 2.94, T = 3.11, T = 4.36

7.2 Aerial perspective model evaluation

7.2.1 Evaluation of the scattering coefficients

With the proposed corrections, under standard conditions (λ = 550 nm and h_0 = 0 m), our \(\hat {\beta }_{M}\) is approximately 70 times smaller than the β_M of [3] and roughly 1.43 times smaller than the corrected Mie scattering coefficient of [13]. We can compare the impact of the Mie scattering coefficient on the aerial perspective effect. To this end, we can employ the approximated attenuation factors of [3], [13], and ours, given by e^(−β_sc s), e^(−0.01 β_sc s), and e^(−0.0137 β_sc s), respectively. The results illustrated in Fig. 9 show that our attenuation is weaker than Preetham's attenuation but stronger than Zhao's attenuation.

Fig. 9. Approximate relation between the attenuation factor of Preetham [3], Zhao [13], and our proposal

We also provide a classification of scattering coefficients through turbidity, as illustrated in Fig. 10. From Eqs. (15) and (18), we have

$$ \hat{\beta}_{\text{M1}}/\hat{\beta}_{\text{M2}} = (T_{1} - 1)/(T_{2} -1), $$
(24)
Fig. 10. Scattering coefficients through turbidity

where \(\hat {\beta }_{\text {M1}}\) and \(\hat {\beta }_{\text {M2}}\) refer to our improved Mie scattering coefficient for turbidities T_1 and T_2, respectively. Considering a turbidity of 1.6 for an exceptionally clear atmospheric condition, we plotted Fig. 10 using Eq. (24).

7.2.2 Airlight evaluation

We performed a qualitative evaluation of the airlight component of real images using our rendering model of Eq. (10). Real scenes of Tokyo were captured using a Canon EOS5D. The experiments aim to show the performance using a single image as input. For this reason, we used Google Earth to manually estimate a rough depth map of the scenes. Nonetheless, depth maps can be estimated either from two images of the same scene under different weather conditions using the proposed aerial perspective model or from single images using approaches such as [14, 15]. For a fair evaluation and comparison with the state-of-the-art approaches [13–15], the input images were manually segmented so that the aerial perspective effect was applied only over the scene, excluding the sky. In addition, the parameters used in the compared approaches were set to be optimal.

Figure 11 illustrates our results as well as the results obtained using the methods of [13–15] for different atmospheric conditions. Theoretically, the airlight component should only contain information from the environmental light modulated by the attenuation factor. While our airlight visibly follows this theoretical behavior, the airlights from the other methods clearly retain color information from the target objects.

Fig. 11. Airlight evaluation with real scenes. From top to bottom rows: scenes with turbidities T = 1.9, T = 2.94, and T = 4.36. a Input images. Airlight results of b He et al. [14], c Zhao [13], d Zhu et al. [15], and e ours. Depth map in top-left image was used only by [13] and our method

Moreover, due to the attenuation factor, the more distant the target object is, the more similar the airlight should be to the environmental illumination color. Indeed, observing the faraway mountains in Fig. 11, we notice that our airlight also follows this theoretical definition, while [13–15] fail to do so.

7.2.3 Evaluation of the aerial perspective effect

In this experiment, we rendered an aerial perspective effect using a given source image I_s and compared the synthesized output \(\hat {I}_{\mathrm {t}}\) with a ground truth target image I_t of the same scene. We use the subscripts s and t to refer to the source and target, respectively. We drop the channel subscript c ∈ {r,g,b} for readability; however, the computation was carried out in all three channels. In general, based on Eq. (10), if we assume constant reflectance properties for the objects in the scene, we can first estimate the normalized radiance ρ(x) at pixel x in the source image. Since ρ(x) does not depend on the atmospheric condition, that is,

$$ \rho(\mathbf{x}) = \frac{I_{\mathrm{s}}^{0}(\mathbf{x})}{I_{\mathrm{s}}^{\infty}(\mathbf{x})} = \frac{I_{\mathrm{t}}^{0}(\mathbf{x})}{I_{\mathrm{t}}^{\infty}(\mathbf{x})}, $$
(25)

the desired aerial perspective can be applied to the normalized radiance. We perform this two-step process directly by

$$\begin{array}{@{}rcl@{}} \hat{I}_{\mathrm{t}}(\mathbf{x})&{} = {}&I_{\mathrm{s}}(\mathbf{x}) \left(\frac{I_{\mathrm{t}}^{\infty}(\mathbf{x}) \Gamma_{\mathrm{t}} (T_{\mathrm{t}},\mathbf{x})}{I_{\mathrm{s}}^{\infty}(\mathbf{x}) \Gamma_{\mathrm{s}} (T_{\mathrm{s}},\mathbf{x})} \right) \\ &&{+}\: I_{\mathrm{t}}^{\infty}(\mathbf{x}) \left(1 - \frac{\Gamma_{\mathrm{t}} (T_{\mathrm{t}},\mathbf{x})}{\Gamma_{\mathrm{s}} (T_{\mathrm{s}},\mathbf{x})} \right). \end{array} $$
(26)
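Per pixel and per channel, the re-targeting of Eq. (26) amounts to the small helper sketched below, given the attenuation factors and sky intensities of the source and target conditions at that pixel.

```cpp
// Sketch of Eq. (26): re-target the aerial perspective of a source image
// (turbidity T_s) to a different atmospheric condition (turbidity T_t).
double retarget_pixel(double I_s,                       // source pixel intensity
                      double I_inf_s, double Gamma_s,   // source sky intensity, attenuation
                      double I_inf_t, double Gamma_t) { // target sky intensity, attenuation
  return I_s * (I_inf_t * Gamma_t) / (I_inf_s * Gamma_s)
       + I_inf_t * (1.0 - Gamma_t / Gamma_s);
}
```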

In the evaluation, we used the input image with T_s = 1.9 from Fig. 11 as the source image, since it provides more detailed color information than the scenes with higher turbidities. We targeted ground truth images with T_t = {2.11, 2.54, 2.94, 4.36}.

Figure 12 displays our qualitative results as well as a comparison with the results of [13–15]. As can be seen, our method generated more visually coherent appearances than the state-of-the-art techniques. The synthesized results of all methods were similar to the ground truth for close objects, such as the largest building in the scenes. However, while our method remained effective along the entire scene, [13–15] suffered from appearance inconsistencies in more distant regions.

Fig. 12. Aerial perspective rendering evaluation on real-world images. From top to bottom rows: scenes with T = 2.11, T = 2.54, T = 2.94, and T = 4.36. a Ground truth target images. Synthesized results of b He et al. [14], c Zhao [13], d Zhu et al. [15], and e ours

We also performed a quantitative evaluation using two metrics: the hue-saturation-value (HSV) histogram correlation and the structural similarity (SSIM) image quality index [30] (see Fig. 13). The histogram correlation was calculated as

$$ {\text{Corr}}_{\mathrm{c}}(H_{1}, H_{2}) = \frac{\sum\limits_{\mathrm{c}} \left(H_{1}(\mathrm{c})-\bar{H}_{1} \right) \left(H_{2}(\mathrm{c})-\bar{H}_{2} \right)}{\sqrt{\sum\limits_{\mathrm{c}} \left(H_{1}(\mathrm{c})-\bar{H}_{1} \right)^{2} \sum\limits_{\mathrm{c}} \left(H_{2}(\mathrm{c})-\bar{H}_{2} \right)^{2}}}, $$
(27)
Fig. 13. Quantitative evaluation of different methods of aerial perspective rendering on real scenes with T = 2.11, T = 2.54, T = 2.94, and T = 4.36

where H is a histogram, \(\bar {H}\) stands for the histogram mean, c ∈ {H-S, V}, and the indexes 1 and 2 correspond to \(\hat {I}_{\mathrm {t}}\) and I_t, respectively. In both the HSV correlation and the SSIM index, a higher value represents a higher similarity between the synthesized aerial perspective and the ground truth.

The quantitative results show that our approach outperformed the aforementioned methods. It is worth noting that while [14] achieved a better SSIM index than ours only in the least hazy scene, our method provided the highest combined HSV histogram correlation for all scenes. In general, at lower turbidities, [14] rendered compelling results closer to ours than [13, 15]. However, contrary to our method, the quality of [13–15] decreased drastically as the haze became denser.

7.3 Application on MR

We applied the aerial perspective effect to a CG model rendered in a real scene. For convenience of the experiment, we employed a fixed view in order to avoid occlusion and tracking issues and to focus on the appearance. However, this is not a limitation, since the fixed view can be replaced using conventional tracking systems. The altitude at the observer position was h_0 = 40 m above sea level. The distance from the CG model to the observer was around 3500 m. The real scenes of the experiments correspond to the scenes captured for the turbidity estimation test shown in Fig. 8. We used the real scenes with estimated atmospheric turbidities of 1.9, 2.11, 2.94, and 4.36.

The rendered results are shown in Fig. 14. We provide the MR results obtained with Zhao's method [13] for comparison. We found that the proposed method synthesized more plausible results in terms of visual coherence between the virtual object and the real scene. In terms of computational cost, our rendering speed (14 fps for a full HD frame) was 225 times faster than that of Zhao's method.

Fig. 14. MR application with aerial perspective effect. From top to bottom rows: scenes with turbidities T = 1.9, T = 2.11, T = 2.94, and T = 4.36. a Input real scenes. b Virtual object without aerial perspective. c Results of [13]. d Our results

8 Conclusions

We have proposed an efficient turbidity-based method for aerial perspective rendering in real scenes. The atmospheric turbidity is estimated by matching the luminance distributions of a sky model and a captured omnidirectional sky image. An improved scattering model was derived using real data that classify scattering coefficient values via turbidity. The enhanced scattering model was then employed in a novel full-spectrum aerial perspective rendering model. Qualitative and quantitative evaluations on real and synthesized data show that the rendering method achieves realistic appearances that blend seamlessly with the natural aerial perspective in real time, outperforming related works in terms of appearance quality and computational cost.

9 Appendix

Table 2 presents the classification of weather conditions based on scattering coefficients. The data were adapted from [16], where the measurements were carried out under standard conditions, that is, using a spectrally weighted average wavelength (λ = 550 nm) for daylight within the visual spectrum at sea level (h = 0 m).

Table 2 Weather conditions via scattering coefficients

References

  1. Nishita T, Sirai T, Tadamura K, Nakamae E (1993) Display of the earth taking into account atmospheric scattering In: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH’93, 175–182.. ACM, New York.


  2. Nishita T, Dobashi Y, Kaneda K, Yamashita H (1996) Display method of the sky color taking into account multiple scattering In: Proceedings of the Fourth Pacific Conference on Computer Graphics and Applications (Pacific Graphics ’96), 66–79.

  3. Preetham AJ, Shirley P, Smits B (1999) A practical analytic model for daylight In: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH ’99, 91–100.. ACM Press/Addison-Wesley Publishing Co., New York.


  4. Haber J, Magnor M, Seidel HP (2005) Physically-based simulation of twilight phenomena. ACM Trans Graphics (TOG) 24(4): 1353–73.


  5. Bruneton E, Neyret F (2008) Precomputed atmospheric scattering. Comput Graphics Forum 27(4): 1079–1086.


  6. Hosek L, Wilkie A (2012) An analytic model for full spectral sky-dome radiance. ACM Trans Graphics (TOG) 31(4): 95.


  7. Hosek L, Wilkie A (2013) Adding a solar-radiance function to the Hosek-Wilkie skylight model. IEEE Comput Graphics Appl 33(3): 44–52.


  8. Kerker M (1969) The scattering of light and other electromagnetic radiation. Academic press, New York.


  9. Kider Jr, Knowlton D, Newlin J, Li YK, Greenberg DP (2014) A framework for the experimental comparison of solar and skydome illumination. ACM Trans Graphics (TOG) 33(6): 180.


  10. Jung J, Lee JY, Kweon IS (2015) One-day outdoor photometric stereo via skylight estimation In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4521–4529, Boston.

  11. Satilmis P, Bashford-Rogers T, Debattista K, Chalmers A (2016) A machine learning driven sky model In: IEEE Computer Graphics and Applications.

  12. Wang X, Gao J, Fan Z, Roberts NW (2016) An analytical model for the celestial distribution of polarized light, accounting for polarization singularities, wavelength and atmospheric turbidity. J Optics 18(6): 065601.


  13. Zhao H (2012) Estimation of atmospheric turbidity from a sky image and its applications. Ph.D. dissertation, Graduate School of Information Science and Technology, The University of Tokyo.

  14. He K, Sun J, Tang X (2011) Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell 33(12): 2341–2353.


  15. Zhu Q, Mai J, Shao L (2015) A fast single image haze removal algorithm using color attenuation prior. IEEE Trans Image Process 24(11): 3522–3533.


  16. McCartney EJ (1976) Optics of the atmosphere: scattering by molecules and particles, vol. 1. John Wiley and Sons, Inc., New York,p. 421.


  17. Dobashi Y, Yamamoto T, Nishita T (2002) Interactive rendering of atmospheric scattering effects using graphics hardware In: Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Conference on Graphics Hardware, ser. HWWS ’02, 99–107.. Eurographics Association, Aire-la-Ville, Switzerland.


  18. Nielsen RS (2003) Real time rendering of atmospheric scattering effects for flight simulators. Master’s thesis, Informatics and Mathematical Modelling, Technical University of Denmark, DTU, Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby.

  19. Riley K, Ebert DS, Kraus M, Tessendorf J, Hansen C (2004) Efficient rendering of atmospheric phenomena In: Proceedings of the Fifteenth Eurographics Conference on Rendering Techniques, ser. EGSR’04, 375–386.. Eurographics Association, Aire-la-Ville, Switzerland.


  20. Schafhitzel T, Falk M, Ertl T (2007) Real-time rendering of planets with atmospheres In: Journal of WSCG 15(1–3), 91–98.

  21. Gao R, Fan X, Zhang J, Luo Z (2012) Haze filtering with aerial perspective In: 19th IEEE International Conference on Image Processing (ICIP), 989–992.

  22. Narasimhan SG, Nayar SK (2003) Contrast restoration of weather degraded images. Pattern Anal Mach Intell IEEE Trans 25(6): 713–724.


  23. Nayar S, Narasimhan S (1999) Vision in bad weather In: International Conference on Computer Vision, 820–827.

  24. Tan R (2008) Visibility in bad weather from a single image In: Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, 1–8.

  25. Strutt JW (1871) Lviii. On the scattering of light by small particles. London, Edinburgh Dublin Philos Mag J Sci 41(275): 447–454.


  26. Mie G (1908) Beitrage zur optik truber medien, speziell kolloidaler metallosungen. Annalen der Physik 330(3): 377–445.


  27. Perez R, Seals R, Michalsky J (1993) All-weather model for sky luminance distribution-preliminary configuration and validation. Solar Energy 50(3): 235–245.


  28. Kawakami R, Zhao H, Tan RT, Ikeuchi K (2013) Camera spectral sensitivity and white balance estimation from sky images. Int J Comput Vis 105(3): 187–204.


  29. Zhao H (2013) Spectral sensitivity database. http://www.cvl.iis.u-tokyo.ac.jp/rei/research/cs/zhao/database.html. Accessed 31 May 2013.

  30. Wang Z, Bovik A, Sheikh H, Simoncelli E (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4): 600–612.



Acknowledgements

This work was, in part, supported by JSPS KAKENHI Grant Number 16H05864.

Authors’ contributions

CM, TO, and KI designed the study, developed the methodology, and performed the analysis. CM collected the data. CM wrote the manuscript, and TO and KI helped to polish it. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Correspondence to Carlos Morales.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Morales, C., Oishi, T. & Ikeuchi, K. Real-time rendering of aerial perspective effect based on turbidity estimation. IPSJ T Comput Vis Appl 9, 1 (2017). https://doi.org/10.1186/s41074-016-0012-1



  • DOI: https://doi.org/10.1186/s41074-016-0012-1
