Real-time rendering of aerial perspective effect based on turbidity estimation

In real outdoor scenes, objects distant from the observer suffer from a natural effect called aerial perspective that fades the colors of the objects and blends them into the environmental light color. The aerial perspective can be modeled using a physics-based approach; however, handling the changing and unpredictable environmental illumination, as well as the weather conditions of real scenes, is challenging in terms of visual coherence and computational cost. In those cases, even state-of-the-art models fail to generate realistic synthesized aerial perspective effects. To overcome this limitation, we propose a real-time, turbidity-based, full-spectrum aerial perspective rendering approach. First, we estimate the atmospheric turbidity by matching luminance distributions of a captured sky image to sky models. The obtained turbidity is then employed for aerial perspective rendering using an improved scattering model. We performed a set of experiments to evaluate the scattering model and the aerial perspective model. We also provide a framework for real-time aerial perspective rendering. The results confirm that the proposed approach synthesizes realistic aerial perspective effects with low computational cost, outperforming state-of-the-art aerial perspective rendering methods for real scenes.


Introduction
In real open-air scenes, when a target object viewed by an observer is far away, the perceived appearance of the object changes: it fades and blends into the environmental light color. This natural effect is known as aerial perspective and is due to the scattering of light by particles suspended in the atmosphere.
The importance of aerial perspective rendering is reflected in several applications, as illustrated in Fig. 1. It can be employed in image and video composition to generate artistic atmospheric effects over real scenes. It can also be used in computer vision (CV) and computer graphics (CG) for rendering virtual objects with an appearance according to the outdoor scene. Namely, fields such as mixed reality (MR), where CG models are merged into a real scene, can exploit aerial perspective rendering to output more realistic virtual objects.
In general, we have to render an artificial aerial perspective effect on a target object to emulate the natural atmospheric effect. This goal is especially difficult in real-time applications in outdoor scenes, which present a challenge due to the variant illumination and atmospheric conditions such as clear, hazy, or cloudy days.
*Correspondence: carlos@cvl.iis.u-tokyo.ac.jp. 1 The University of Tokyo, Tokyo, Japan. Full list of author information is available at the end of the article.
A conventional approach for aerial perspective rendering is to find an outdoor light scattering model with parameters that lead to a realistic synthesized look matching the real scene. Such scattering models can be analyzed from captured skies using sky illumination models [1][2][3][4][5][6][7]. Due to its simplicity and accuracy for scattering modeling, a heuristic parameter called turbidity (T) has been used to categorize atmospheric conditions [8][9][10][11][12]. Following that approach, we propose a full-spectrum turbidity-based aerial perspective model that enables us to render realistic aerial perspective effects in real time. Our model relies heavily on estimating turbidity from clear-sky regions of a captured omnidirectional sky image. Thus, scenes where the sky is not visible are beyond the scope of this work.
Fig. 1 Aerial perspective rendering with our method. Top row: an input image and the re-targeted synthesized aerial perspective effect (from left to right). Bottom row: an MR application before and after aerial perspective rendering

Method overview: The overview of our aerial perspective rendering approach is illustrated in Fig. 2. The input data are a real omnidirectional sky image captured by a fisheye lens camera and the input scene, which can be captured by the same camera or by a different perspective or panoramic camera. In image and video composition applications, the input scene is represented by its RGB intensity color, its depth map, and the spectral sensitivity of the camera used to capture the input scene. In MR applications, the input scene is composed of the RGB intensity color of the real scene, the color and depth of the virtual object, and the camera's spectral sensitivity. The problem addressed in this paper is to estimate the turbidity from the omnidirectional sky image and then use it to render an aerial perspective effect. While the aerial perspective is rendered over the de-hazed input scene in composition applications, it is rendered only on the virtual object in MR. For this purpose, our method consists of the following stages: 1) Turbidity estimation: The captured omnidirectional sky image is compared with turbidity-based sky models to find the turbidity value that provides the best match (Section 3.4). 2) Aerial perspective rendering: An improved turbidity-based scattering model (Section 5) is used in a full-spectrum aerial perspective rendering equation (Section 4) to generate the final synthesized scene. This stage is performed in real time in a graphics processing unit (GPU) framework (Section 6).

Contributions:
The main contributions of this work are threefold: 1) An improved turbidity-based scattering model for rendering that fits real atmospheric effects more accurately than previous works [3, 13]. 2) A novel full-spectrum, turbidity-based aerial perspective rendering model that synthesizes plausible aerial perspective effects in real scenes and improves over previous works [13][14][15] in terms of visual coherence. 3) A real-time framework for aerial perspective effect rendering. The implementation delivers more than two orders of magnitude speed-up compared to prior art [13], allowing the real-time performance needed in applications such as MR.

Related work
Previous methods for aerial perspective modeling and rendering rely on understanding the scattering phenomena in the atmosphere. McCartney [16] presented an excellent review of former works on atmospheric optics. His work contains relevant data about the scattering phenomena under different weather conditions, categorized by the heuristic parameter turbidity (T). T is used to model the scattering by molecules of air and larger particles, such as haze, and is employed for classifying various atmospheric conditions ranging from pure air to fog. Since the atmospheric phenomena in [16] are modeled using real data, they have been used in both the CV and CG fields, albeit differently depending on whether the aim is CV- or CG-oriented.

CG-oriented aerial perspective rendering
In this category, the atmospheric optics models are targeted at completely virtual scenes. Preetham et al. [3] presented a full-spectrum turbidity-based analytical sky model for various atmospheric conditions. Based on that model, they developed an approximated scattering model for aerial perspective rendering. Dobashi et al. [17] introduced a fast rendering method to generate various atmospheric scattering effects via graphics hardware. Nielsen [18] presented a real-time rendering system for simulating atmospheric effects. Riley et al. [19] presented a lighting model for rendering several optical phenomena. Schafhitzel et al. [20] rendered planets with atmospheric scattering effects in real time. Bruneton and Neyret [5] rendered both sky and aerial perspective from all viewpoints from the ground to outer space. The synthesized atmospheric effects generated by the mentioned works are visually plausible in fully CG scenes where the illumination is controlled. However, their direct implementation in real scenes does not perform similarly, since the scattering models need to be tuned to fit the variant, natural outdoor illumination. Moreover, such models are usually targeted as post-processing effects where visual quality is more important than computational cost.

CV-oriented aerial perspective rendering
This group of methods models the atmospheric phenomenon in real outdoor scenes. Using scattering models, several works were able to restore images captured under different weather conditions. Gao et al. [21] presented an aerial perspective model for haze filtering based on a parameter called maximum visibility. Zhu et al. [15] developed a linear color attenuation prior for image de-hazing based on a parameter called the scattering coefficient. The synthesized results from these works successfully corrected and restored, to some extent, the color of images under hazy conditions. However, these methods are not automatic, and the results depend on manual tuning of either the maximum visibility in [21] or the scattering coefficient in [15], which controls the amount of de-hazing.
Automatic image restoration approaches have also been proposed in the literature. Narasimhan and Nayar [22] proposed a physics-based scattering model to describe the appearances of real scenes under uniform bad-weather conditions. Using that scattering model, they restored the contrast of an image; nonetheless, their method required a second image of the same scene under a different weather condition. This limitation was overcome by He et al. [14], who proposed an automatic haze-removal approach for single images using a dark channel prior. The results in [14] showed consistent and fast image de-hazing. However, using their method for aerial perspective rendering leads to appearances that are inconsistent with the natural aerial perspective, especially in cases with high haze densities.
To solve the previous drawbacks, Zhao [13] proposed an automatic turbidity-based aerial perspective model. In his approach, turbidity was estimated from captured omnidirectional sky images. The camera's spectral sensitivity was estimated for conversion from spectral radiance to RGB pixel values. Combining the estimated spectral sensitivity and a simple correction of Preetham's scattering model [3], his method was able to generate an aerial perspective effect over virtual objects for outdoor MR. However, his method makes the appearance of the synthesized virtual object suffer from a strong aerial perspective effect even for low turbidity values at short distances. Moreover, his approach has a high computational cost.

Aerial perspective modeling
The light perceived by the observer is composed of two constituents: the direct transmission and the airlight. The direct transmission is the light that comes from the target following the optical path and is attenuated until it reaches the observer. The airlight is the environmental light that is scattered in the same direction as the direct transmission and then attenuated on the way to the observer. The aerial perspective under various atmospheric conditions is broadly modeled as [22][23][24]:

L(s, λ) = L(0, λ) e^(−β_sc(λ) s) + L(∞, λ) (1 − e^(−β_sc(λ) s)),   (1)

where L(s, λ) is the total light perceived by the observer, L(0, λ) is the light coming from the target without the aerial perspective effect, and L(∞, λ) is the atmospheric light. s is the distance between the target and the observer, and λ is the light wavelength. β_sc is the total atmospheric scattering coefficient, modeled as

β_sc(λ, h) = β_R(λ, h) + β_M(λ, h),   (2)

where β_R is the Rayleigh scattering coefficient, which accounts for particles much smaller than λ, such as molecules of air, and β_M is the Mie scattering coefficient, which models particles whose size is nearly equal to λ, such as particles of haze. The Rayleigh scattering coefficient is given by [25]

β_R(λ, h) = (8π³ (n² − 1)²) / (3 N λ⁴) · (6 + 3 p_n) / (6 − 7 p_n) · e^(−h/H_R0),   (3)

and the Mie scattering coefficient is expressed by [26]

β_M(λ, h) = 0.434 c(T) π (2π/λ)^(ν−2) K(λ) e^(−h/H_M0),   (4)

where n = 1.0003 is the refractive index of air in the visible spectrum, N = 2.545 × 10²⁵ m⁻³ is the molecular number density of the standard atmosphere, p_n = 0.035 is the depolarization factor for air, h is the altitude at the scattering point, H_R0 = 7994 m is the scale height for Rayleigh scattering, c(T) is the concentration factor that depends on the atmospheric turbidity, ν = 4 is Junge's exponent, K(λ) is the wavelength-dependent fudge factor, and H_M0 = 1200 m is the scale height for Mie scattering.
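As a numerical sanity check on Eq. (3), the sketch below evaluates the Rayleigh coefficient with the constants quoted above; it reproduces the well-known λ⁻⁴ behavior (blue light scatters far more than red). The 450 nm comparison value is illustrative, not taken from the paper.

```python
import math

# Constants of Eq. (3), as quoted in the text.
N_MOL = 2.545e25   # molecular number density [m^-3]
N_AIR = 1.0003     # refractive index of air in the visible spectrum
P_N = 0.035        # depolarization factor for air
H_R0 = 7994.0      # Rayleigh scale height [m]

def beta_rayleigh(lam_m, h=0.0):
    """Rayleigh scattering coefficient [1/m] at wavelength lam_m [m] and altitude h [m]."""
    factor = (8 * math.pi**3 * (N_AIR**2 - 1)**2) / (3 * N_MOL * lam_m**4)
    depol = (6 + 3 * P_N) / (6 - 7 * P_N)
    return factor * depol * math.exp(-h / H_R0)

# Standard conditions: 550 nm at sea level, converted to km^-1.
b550 = beta_rayleigh(550e-9) * 1e3
# lambda^-4 dependence: 450 nm scatters (550/450)^4 ~ 2.2 times more.
b450 = beta_rayleigh(450e-9) * 1e3
```

Under standard conditions this evaluates to roughly 0.0136 km⁻¹, the value discussed in the scattering-coefficient correction of Section 5.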

Atmospheric condition via turbidity
Turbidity is defined as the ratio of the optical thickness of the atmosphere composed of molecules of air plus larger particles to the optical thickness of the air molecules alone [16]:

T = ( ∫_{h_i}^{h_f} (β_R(h) + β_M(h)) dh ) / ( ∫_{h_i}^{h_f} β_R(h) dh ),   (5)

where h_i and h_f are the initial and final altitudes of the optical path, respectively. Preetham et al. [3] presented an analytical sky model for various atmospheric conditions through turbidity. Their model relates the luminance Y (cd/m²) of the sky in any viewing direction V to the luminance at a reference point Y_z by

Y = Y_z · F(θ, γ) / F(0, θ_s),   (6)

where F is the sky luminance distribution model of Perez et al. [27], θ is the zenith angle of the viewing direction, θ_s is the zenith angle of the sun, and γ is the angle of the sun direction with respect to the viewing direction (see coordinates in Fig. 4).
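A minimal sketch of the luminance ratio of Eq. (6), using the Perez distribution F and the turbidity-dependent coefficients for the luminance channel published by Preetham et al. [3]; the test angles at the end are illustrative choices, not values from this paper.

```python
import math

def perez_F(theta, gamma, A, B, C, D, E):
    """Perez sky luminance distribution F(theta, gamma)."""
    return (1 + A * math.exp(B / math.cos(theta))) * \
           (1 + C * math.exp(D * gamma) + E * math.cos(gamma) ** 2)

def preetham_Y_coeffs(T):
    """Preetham's linear fits of the Perez coefficients (luminance channel Y)."""
    A = 0.1787 * T - 1.4630
    B = -0.3554 * T + 0.4275
    C = -0.0227 * T + 5.3251
    D = 0.1206 * T - 2.5771
    E = -0.0670 * T + 0.3703
    return A, B, C, D, E

def luminance_ratio(theta, gamma, theta_s, T):
    """Y(theta, gamma) / Y_zenith for turbidity T, as in Eq. (6)."""
    coeffs = preetham_Y_coeffs(T)
    return perez_F(theta, gamma, *coeffs) / perez_F(0.0, theta_s, *coeffs)

# At the zenith the viewing direction coincides with the reference point
# (gamma equals the sun zenith angle), so the ratio is 1 by construction.
r_zenith = luminance_ratio(0.0, math.radians(58.4), math.radians(58.4), 2.5)
# A direction close to the sun is brighter than the zenith (circumsolar glow).
r_near_sun = luminance_ratio(math.radians(30), math.radians(10), math.radians(58.4), 2.5)
```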

Rendering equation
In MR applications, we need an equation to convert radiometric quantities, such as the spectral radiance, to pixel color values, such as RGB. In general, when an object is illuminated by a source of light, the reflected light goes through the camera lens and is recorded by its charge-coupled device (CCD). The recorded image intensity for the channel c ∈ {r, g, b} can then be modeled as

I_c = ∫_380^780 L(λ) q_c(λ) dλ,   (7)

where L(λ) is the reflected spectral radiance at the object surface, the range 380 to 780 nm stands for the visible spectrum of light, and q_c(λ) is the spectral sensitivity of the camera. The camera's spectral sensitivity is important for color correction since it compensates for the effects of the recording illumination. In this matter, we benefited from Kawakami et al. [28], who estimated q_c(λ) from captured omnidirectional sky images and turbidity-based sky spectra, and from the public data of spectral sensitivity for various cameras [29].
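Eq. (7) can be sketched with simple trapezoidal integration over the visible range. The radiance and sensitivity curves below are hypothetical stand-ins (a flat radiance and a Gaussian green-channel sensitivity); a real implementation would use measured curves such as those in [29].

```python
import math

def channel_intensity(L, q_c, lam_start=380.0, lam_end=780.0, steps=400):
    """Trapezoidal integration of L(lam) * q_c(lam) over the visible spectrum [nm]."""
    dlam = (lam_end - lam_start) / steps
    total = 0.0
    for i in range(steps + 1):
        lam = lam_start + i * dlam
        w = 0.5 if i in (0, steps) else 1.0   # trapezoid end-point weights
        total += w * L(lam) * q_c(lam)
    return total * dlam

# Hypothetical green-channel sensitivity centered at 550 nm, flat radiance.
q_g = lambda lam: math.exp(-((lam - 550.0) / 40.0) ** 2)
flat_L = lambda lam: 1.0

I_g = channel_intensity(flat_L, q_g)   # ~ 40 * sqrt(pi) for this Gaussian
```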

Atmospheric turbidity estimation
The atmospheric turbidity can be estimated by matching the luminance distribution of turbidity-based Preetham sky models and an omnidirectional sky image captured by a fisheye lens camera, as in [13]. First, the sun position is estimated in the captured sky image, either by finding the center of the saturated area of the sun or by using the longitude, latitude, date, and time at the observer's position. Then the luminance ratio Y_i/Y_ref (Y from the XYZ color space) is calculated between a sampling point i and a reference point ref, which can be the zenith or any other visible point in the captured sky image. The same ratio is computed at the corresponding points in the Preetham sky models with the same sun position using Eq. (6). The turbidity-based sky model that best matches the captured sky image is the one with the lowest difference between both ratios. Therefore, the targeted turbidity is the solution to the minimization problem

T* = argmin_T Σ_{i=1}^{N} ( Y_i/Y_ref − Y_i(T)/Y_ref(T) )²,   (8)

where N is the number of sample points used in the calculation process, Y_i/Y_ref is measured on the captured image, and Y_i(T)/Y_ref(T) is the corresponding model ratio. In this paper, we solve for the turbidity using the Levenberg-Marquardt algorithm (LMA), a simpler yet efficient approach compared to the particle swarm optimization used in [13].

Fig. 4 Coordinates in the sky hemisphere where the observer is at the origin
Since the Preetham sky model does not provide equations for calculating the brightness of cloudy pixels, the Random Sample Consensus (RANSAC) approach is used to remove cloudy pixels (outliers) from the sampling and estimate turbidity only from clear-sky pixels (inliers).
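The one-parameter fit of Eq. (8) can be sketched with a minimal scalar Levenberg-Marquardt loop. The ratio model below is a hypothetical monotonic stand-in for the Preetham luminance ratios; in the real pipeline each residual compares a captured ratio with the model ratio at the same sky coordinates, and only the RANSAC inliers (clear-sky pixels) are sampled.

```python
import math
import random

def model_ratio(T, x):
    # Hypothetical stand-in: a luminance ratio that decays with turbidity.
    return math.exp(-x * (T - 1.0))

def fit_turbidity(samples, T0=3.0, iters=50):
    """Scalar Levenberg-Marquardt: minimize sum of squared ratio residuals over T."""
    T, mu = T0, 1e-3
    for _ in range(iters):
        r = [model_ratio(T, x) - y for x, y in samples]        # residuals
        eps = 1e-6                                             # numerical Jacobian
        J = [(model_ratio(T + eps, x) - model_ratio(T, x)) / eps for x, _ in samples]
        g = sum(Ji * ri for Ji, ri in zip(J, r))               # gradient
        H = sum(Ji * Ji for Ji in J)                           # Gauss-Newton Hessian
        step = -g / (H + mu)                                   # damped step
        cost = sum(ri * ri for ri in r)
        new_cost = sum((model_ratio(T + step, x) - y) ** 2 for x, y in samples)
        if new_cost < cost:
            T += step          # accept step, relax damping
            mu *= 0.5
        else:
            mu *= 10.0         # reject step, increase damping
    return T

random.seed(0)
T_true = 2.4
xs = [random.uniform(0.1, 1.0) for _ in range(100)]
samples = [(x, model_ratio(T_true, x)) for x in xs]   # noiseless synthetic ratios
T_est = fit_turbidity(samples)
```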

Aerial perspective rendering equation
In order to render an aerial perspective effect in applications that contain only real scenes or both real and virtual objects, the RGB color system is more convenient than a spectral radiance system. Originally, the aerial perspective rendering equation for one viewing direction can be obtained by substituting the aerial perspective model of Eq. (1) into Eq. (7). From these equations, the observer perceives the intensity value I_c of a target object's pixel at distance s for the channel c ∈ {r, g, b} as

I_c(s) = ∫_380^780 q_c(λ) [ L(0, λ) e^(−β_sc(λ, h_0) s) + L(∞, λ) (1 − e^(−β_sc(λ, h_0) s)) ] dλ,   (9)

where L(0, λ), L(∞, λ), β_sc(·), and s are the same as in Eq. (1) and h_0 is the altitude at the observer position.
To simplify Eq. (9) into an RGB-based rendering equation, we can assume q_c(λ) to be narrow band. In this way, we approximate the spectral sensitivity in the direct transmission and airlight terms by Dirac's delta function centered at the channel's peak wavelength λ_c. Generalizing this approximation for any observer's viewing direction V(θ, φ), we obtain

Î_c(T, s, V) = I^0_c f_c(T, s) + I^∞_c(T, V) (1 − f_c(T, s)),   (10)

where I^0_c is the intensity value of a pixel at the target object, at distance s and viewing direction V, without any aerial perspective effect, I^∞_c is the sky intensity value at an infinite distance in the same viewing direction V, and f_c is the attenuation factor, approximated as

f_c(T, s) ≈ e^(−β_sc(λ_c, h_0) s).   (11)

In real daylight scenes, we can calculate I^∞_c(T, V) from another viewing direction V′(θ′, φ) of the captured sky, with the same azimuth φ but a different zenith θ′. First, the visible sky is roughly segmented from the textureless area of the captured image using a watershed algorithm. Then, a horizon region is estimated from the visible sky pixels with the highest zenith angles. Finally, I^∞_c(·) is computed from the pixels in the horizon region that have the highest intensity value I^∞_c(θ′), scaled by an intensity ratio ς(·) modeled according to the Preetham sky models (Eqs. (12) and (13)).
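The per-channel blend of Eq. (10) can be sketched as follows; the channel scattering coefficients here are hypothetical values in km⁻¹, chosen only to show that blue attenuates faster than red.

```python
import math

# Hypothetical per-channel scattering coefficients [1/km] at the channel
# peak wavelengths; real values come from the corrected model of Section 5.
BETA_C = {"r": 0.02, "g": 0.035, "b": 0.065}

def aerial_perspective(I0, I_inf, s_km):
    """Blend the target color I0 toward the horizon sky color I_inf at distance s_km (Eq. 10)."""
    out = {}
    for c in ("r", "g", "b"):
        f = math.exp(-BETA_C[c] * s_km)       # attenuation factor, Eq. (11)
        out[c] = I0[c] * f + I_inf[c] * (1.0 - f)
    return out

target = {"r": 0.8, "g": 0.3, "b": 0.2}   # a reddish facade
sky = {"r": 0.7, "g": 0.8, "b": 0.9}      # bright horizon color
near = aerial_perspective(target, sky, 0.0)     # no effect at zero distance
far = aerial_perspective(target, sky, 200.0)    # converges to the sky color
```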

Improved scattering model for rendering
Scattering models for real scenes require parameters that guarantee a realistic result when rendering the aerial perspective effect. For this purpose, we propose an improved scattering model based on the real data of [16] about weather conditions via scattering coefficients, which is summarized in Table 2 of the Appendix. The data in [16] was measured under standard conditions, that is, using a spectrally weighted average wavelength (λ = 550 nm) for daylight within the visible spectrum at sea level (h = 0 m).

Rayleigh scattering coefficient correction
We can obtain the value of the Rayleigh scattering coefficient, β_R = 0.0141 km⁻¹, under standard conditions from Table 2 in the Appendix. However, using Eq. (3) for such conditions results in β_R = 0.0135 km⁻¹. This slight variation of 0.0006 km⁻¹ in β_R is actually considerable in terms of the attenuation factor. According to the International Visibility Code summed up in [16], the visibility range in pure air is up to 277 km. Over that range, a variation of 0.0006 km⁻¹ in the scattering coefficient scales the attenuation factor to e^(−0.0006 × 277) ≈ 84.69% of its value.
To adjust this disparity, we propose a straightforward multiplicative correction factor K_R given by

K_R = 0.0141 / 0.013563 = 1.0396,   (14)

where the denominator is the unrounded value computed with Eq. (3). Then our modified Rayleigh scattering coefficient is given by

β̂_R(λ, h_0) = K_R · (8π³ (n² − 1)²) / (3 N λ⁴) · (6 + 3 p_n) / (6 − 7 p_n) · e^(−h_0/H_R0),   (15)

where n, N, p_n, and H_R0 are the same as in Eq. (3), h_0 is the altitude at the observer, and K_R is given by Eq. (14).
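The correction factor can be verified numerically: evaluating Eq. (3) under standard conditions (550 nm, sea level) and dividing the measured value of [16] by it reproduces K_R ≈ 1.0396.

```python
import math

# Eq. (3) under standard conditions, with the constants quoted in the text.
n, N, p_n = 1.0003, 2.545e25, 0.035
lam = 550e-9   # [m]
beta_R_calc = (8 * math.pi**3 * (n**2 - 1)**2) / (3 * N * lam**4) \
              * (6 + 3 * p_n) / (6 - 7 * p_n)    # [1/m]
beta_R_calc_km = beta_R_calc * 1e3               # ~0.01356 km^-1

# Ratio of the measured value (0.0141 km^-1, from the data of [16]) to the
# computed value: the Rayleigh correction factor K_R of Eq. (14).
K_R = 0.0141 / beta_R_calc_km                    # ~1.0396
```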

Mie scattering coefficient correction
One issue in Preetham's scattering model [3] is related to the turbidity itself. From Eq. (5), T = 1 refers to the ideal case where the Mie scattering coefficient is zero. Thus, the concentration factor c = (0.6544T − 0.6510) × 10⁻¹⁶ of Preetham [3] and c = (0.6544T − 0.6510) × 10⁻¹⁸ of Zhao [13] should be zero for T = 1. We corrected this issue, ensuring a more reliable fitting to the real data in [16], with a concentration factor ĉ that vanishes at T = 1 (Eq. (16)). Another issue is the value of the fudge factor K in [3, 13]. The fudge factor affects exponentially the part of the attenuation factor corresponding to the Mie scattering. Thus, adjusting K to the real data in [16] is essential to handle hazy atmospheric conditions accurately. Preetham et al. [3] and Zhao [13] used a wavelength-dependent K ∈ [0.65, 0.69] for wavelengths λ ∈ [380, 780] nm. However, such fudge factor values do not match the data in [16]. Therefore, we corrected K by fitting it to the data in Table 2 of the Appendix, obtaining the corrected fudge factor K_M (Eq. (17)). Then our modified Mie scattering coefficient can be written as

β̂_M(λ, h_0) = 0.434 ĉ(T) π (2π/λ)^(ν−2) K_M(λ) e^(−h_0/H_M0),   (18)

where ĉ is given by Eq. (16), ν and H_M0 are the same as in Eq. (4), h_0 is the altitude at the observer, and K_M is given by Eq. (17).

GPU implementation of the aerial perspective rendering
Nowadays, GPU implementation is common in CV and CG. Given the proposed aerial perspective model, we now present how to implement it on a GPU. First, we show the GPU rendering pipeline that includes both a general rendering pipeline and our proposed fragment shader. Then we explain the fragment shader in more detail.

GPU rendering pipeline
A 3D graphics rendering pipeline employs 3D objects described by their vertices and primitives to generate color values of pixels to be shown on a display. In MR applications, we denote by VO the virtual object and by BG the background into which the virtual object is merged. In the case of composition applications, the input image is considered as VO, while there is no BG. Without loss of generality, we explain the rendering pipeline only for the MR case.
In general, raw vertices and primitives of a VO input to a vertex shader are processed and transformed for a rasterizer. The rasterizer scans and converts the transformed primitives into 3D fragments, which are then processed in the default fragment shader and merged to obtain textured and lit 2D fragments. Normally, the resulting 2D fragments are stored in a default frame buffer and then go to the display. We propose to employ one off-screen frame buffer for storing the 2D fragments of the VO coming from the default fragment shader and another off-screen frame buffer for storing the captured real scene, which we call BG. Since the aerial perspective rendering of Eq. (10) is an RGB-based model, we implement it on a GPU at the fragment level. We insert the fragment shader between the two off-screen buffers and the default frame buffer in order to render an MR frame where the VO has a synthesized aerial perspective effect that blends seamlessly with the natural atmospheric effect visible in BG. Figure 6 illustrates the parameters required by the GLSL fragment shader in order to render an aerial perspective effect. In MR, BG stands for one frame of the real scene with turbidity T captured by a camera with spectral sensitivity q_c. Since we estimate T off-line, the proposed fragment shader only needs the position and color of a BG pixel and a VO fragment for the computation. The position of a point in world coordinates with respect to an observer located at the origin is given by the depth s, azimuth φ, and zenith θ. The BG's pixel color I_BG and the VO's fragment color I_VO are given by their RGB values. Using the abovementioned parameters, the GLSL fragment shader consists of the following steps:

1) Initialization: The program loads the textures I_VO and I_BG, the target's relative depth ŝ ∈ [0, 1] and position (l_x, l_y) in 2D screen coordinates, the turbidity T, and the spectral sensitivity q_c.
2) Positioning: The target's absolute depth s (in meters) is recovered from the relative depth ŝ using the near- and far-plane distances s_near and s_far (Eq. (19)). The target's relative position (x, y, z) in world coordinates is computed by unprojecting the screen coordinates (Eq. (20)), where M and P are the model-view matrix and the projection matrix, respectively, and R is the remap matrix of Eq. (21), which maps screen coordinates to normalized device coordinates.
3) Aerial perspective rendering: The attenuation factor f_c(T, s) is computed using Eq. (11). I^∞_c(T, φ, θ) is computed according to Eqs. (12) and (13). The target with aerial perspective effect Î_VO is calculated using Eq. (10) and then blended with I_BG to produce the final result.
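The positioning step follows the standard screen-to-world unprojection of a graphics pipeline. The sketch below illustrates that round trip with a simple perspective matrix; the matrices are stand-ins, not the paper's exact M, P, and R.

```python
import math

def perspective(fovy, aspect, near, far):
    """OpenGL-style perspective projection matrix (row-major, column vectors)."""
    f = 1.0 / math.tan(fovy / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def invert4(M):
    """Gauss-Jordan inverse of a 4x4 matrix with partial pivoting."""
    n = 4
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col:
                fac = A[r][col]
                A[r] = [x - fac * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def unproject(lx, ly, depth, P):
    """Map screen coordinates (lx, ly, depth) in [0,1]^3 back to eye/world space."""
    ndc = [2 * lx - 1, 2 * ly - 1, 2 * depth - 1, 1.0]   # remap step (cf. Eq. 21)
    v = mat_vec(invert4(P), ndc)
    return [v[0] / v[3], v[1] / v[3], v[2] / v[3]]        # perspective divide

# Round trip: project a known point, then unproject it.
P = perspective(math.radians(60), 1.0, 0.1, 1000.0)
world = [0.5, -0.2, -10.0, 1.0]
clip = mat_vec(P, world)
ndc = [clip[i] / clip[3] for i in range(3)]
screen = [(ndc[0] + 1) / 2, (ndc[1] + 1) / 2, (ndc[2] + 1) / 2]
recovered = unproject(screen[0], screen[1], screen[2], P)
```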

Experimental results
In this section, we evaluate the turbidity estimation approach and the aerial perspective rendering model. The composition application was used for qualitative and quantitative evaluation of the method. We also provide an application on mixed reality. All the following experiments were implemented in C++ and run on a PC with a Windows 7 OS, an Intel Core i7 2.93 GHz CPU, 16 GB of RAM, and an NVIDIA GTX 550 Ti GPU with 4049 MB of memory.

Turbidity estimation test
We tested our approach for turbidity estimation using static omnidirectional images of both simulated skies and captured skies.

Evaluation with sky models
We estimated turbidity 100 times, taking the Preetham sky models as input images. For this purpose, we implemented the Preetham sky models, which are illustrated in Fig. 7. These models are sky images of 500 × 500 pixels with turbidity values ranging from 2.0 to 9.0 and a sun position of θ_s = 58.4° and φ_s = −179.4°. The atmospheric turbidity was estimated for each input image using Eq. (8). N = 100 random sampling points were taken for each turbidity estimation. The results are shown in Table 1, where T̄ stands for the mean turbidity and σ_T for the corresponding standard deviation. The turbidity estimation method processed 200 sampling points per second.

Evaluation with captured sky images
We estimated turbidity for omnidirectional sky images captured by a Canon EOS 5D with a fisheye lens at 12 p.m. on different days. The sky images are illustrated in Fig. 8. N = 100 random sampling points were used for each turbidity estimation. Turbidity was estimated 50 times for each sky image.

Aerial perspective model evaluation

Evaluation of the scattering coefficients
From the proposed corrections, under standard conditions (λ = 550 nm and h_0 = 0 m), our β̂_M is approximately 70 times smaller than the β_M of [3] and roughly 1.4 times larger than the corrected Mie scattering coefficient of [13]. We can compare the impact of the Mie scattering coefficient on the aerial perspective effect. To this end, we can employ the approximated values of the attenuation factors of [3], [13], and ours, given by e^(−β_sc s), e^(−0.01 β_sc s), and e^(−0.0137 β_sc s), respectively. The results illustrated in Fig. 9 show that our attenuation is weaker than Preetham's attenuation but stronger than Zhao's attenuation. We also provide a classification of scattering coefficients through turbidity, as illustrated in Fig. 10. From Eqs. (15) and (18), we have

β̂_M1 / β̂_M2 = (T_1 − 1) / (T_2 − 1),   (24)

where β̂_M1 and β̂_M2 refer to our improved Mie scattering coefficient for turbidities T_1 and T_2, respectively. Considering a turbidity of 1.6 for an exceptionally clear atmospheric condition, we plotted Fig. 10 using Eq. (24).
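The ordering of the three approximated attenuation factors quoted above can be checked directly: a larger exponent means stronger attenuation and hence a smaller factor, so at any positive distance the factors should order as Preetham < ours < Zhao. The scattering coefficient and distance below are hypothetical.

```python
import math

beta_sc = 0.5   # hypothetical total scattering coefficient [1/km]
s = 10.0        # hypothetical viewing distance [km]

# Approximated attenuation factors quoted in the text.
att_preetham = math.exp(-beta_sc * s)            # strongest attenuation
att_ours = math.exp(-0.0137 * beta_sc * s)       # intermediate
att_zhao = math.exp(-0.01 * beta_sc * s)         # weakest attenuation
```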

Airlight evaluation
We performed a qualitative evaluation of the airlight constituent of real images using our rendering model of Eq. (10). Real scenes of Tokyo were captured using a Canon EOS 5D. The experiments aim to show the performance using a single image as input; because of this, we used Google Earth to manually estimate a rough depth map of the scenes. Nonetheless, depth maps can be estimated either from two images of the same scene under different weather conditions using the proposed aerial perspective model or from single images using approaches such as [14, 15]. For a fair evaluation and comparison with the state-of-the-art approaches [13][14][15], the input images were manually segmented to apply the aerial perspective effect only over the scene, excluding the sky. In addition, the parameters used in the mentioned approaches were set to be optimal. Figure 11 illustrates our results as well as the results obtained using the methods of [13][14][15] for different atmospheric conditions. Theoretically, the airlight component should only contain information from the environmental light affected by the attenuation factor. While our airlight visibly follows this theoretical behavior, the airlights from the other methods clearly retain color information from the target objects.
Moreover, due to the attenuation factor, the more distant the target object is, the more similar the airlight should be to the environmental illumination color. Indeed, observing the faraway mountains in Fig. 11, we notice that our airlight also follows that theoretical definition, while [13][14][15] fail to do so.

Evaluation of the aerial perspective effect
Fig. 9 Approximate relation between the attenuation factors of Preetham [3], Zhao [13], and our proposal

In this experiment, we rendered an aerial perspective effect using a given source image I_s and compared the synthesized output Î_t with a ground truth target image I_t of the same scene. We use the subscripts s and t to refer to the source and target, respectively. We drop the channel subscript c ∈ {r, g, b} for readability; however, the computation was carried out in the three channels. In general, based on Eq. (10), if we assume constant reflectance properties for objects in the scene, we can first estimate the normalized radiance ρ(x) at pixel x in the source image. Since ρ(x) does not depend on the atmospheric condition, the desired aerial perspective can be applied to the normalized radiance. We compute this two-step process directly by

Î_t(x) = ( I_s(x) − I^∞_s (1 − f_s(x)) ) f_t(x) / f_s(x) + I^∞_t (1 − f_t(x)),

where f_s and f_t are the attenuation factors (Eq. (11)) of the source and target scenes, respectively. In the evaluation, we used the input image with T_s = 1.9 of Fig. 11 as the source image, since it provides more detailed color information than scenes with higher turbidities. We targeted ground truth images with T_t = {2.11, 2.54, 2.94, 4.36}. Figure 12 displays our qualitative results as well as a comparison with results from [13][14][15]. As can be appreciated from the results, our method generated more visually coherent appearances than the state-of-the-art techniques. Synthesized results of all methods were similar to the ground truth for close objects, such as the biggest building in the scenes. However, while our method prevailed along the entire scene, [13][14][15] suffered from appearance inconsistencies in more distant regions.
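The two-step retargeting above can be sketched per channel: invert the source aerial perspective of Eq. (10) to recover the normalized radiance, then re-apply the target attenuation. Scalars stand in for one color channel, and the attenuation and sky values are hypothetical.

```python
def retarget(I_s, I_inf_s, f_s, I_inf_t, f_t):
    """Re-render a pixel from source attenuation f_s to target attenuation f_t."""
    rho = (I_s - I_inf_s * (1.0 - f_s)) / f_s   # normalized radiance (Eq. 10 inverted)
    return rho * f_t + I_inf_t * (1.0 - f_t)    # re-apply target aerial perspective

# Sanity check: identical source and target conditions reproduce the input.
same = retarget(0.42, 0.8, 0.9, 0.8, 0.9)
# A hazier target (smaller attenuation factor) pulls the pixel toward the sky color.
hazier = retarget(0.42, 0.8, 0.9, 0.8, 0.5)
```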

Fig. 10 Scattering coefficients through turbidity
We also performed a quantitative evaluation using two metrics: the hue, saturation, value (HSV) histogram correlation and the structural similarity (SSIM) image quality index [30] (see Fig. 13). The histogram correlation was calculated as

d(H_1, H_2) = Σ_j (H_1(j) − H̄_1)(H_2(j) − H̄_2) / sqrt( Σ_j (H_1(j) − H̄_1)² · Σ_j (H_2(j) − H̄_2)² ),

where H is a histogram, H̄ stands for the histogram mean, the histograms are computed over the channels c ∈ {H-S, V}, and the indexes 1 and 2 correspond to Î_t and I_t, respectively. In both the HSV correlation and the SSIM index, a higher value represents a higher similarity between the synthesized aerial perspective and the ground truth.
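The correlation metric above is a normalized cross-correlation between two histograms, which can be sketched directly; the sample histograms are hypothetical.

```python
import math

def hist_correlation(H1, H2):
    """Normalized cross-correlation between two histograms (1 = identical shape)."""
    n = len(H1)
    m1 = sum(H1) / n
    m2 = sum(H2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(H1, H2))
    den = math.sqrt(sum((a - m1) ** 2 for a in H1) * sum((b - m2) ** 2 for b in H2))
    return num / den

h_ref = [5, 9, 14, 8, 3, 1]     # hypothetical ground-truth histogram
h_same = list(h_ref)            # identical distribution
h_rev = list(reversed(h_ref))   # mismatched (reversed) distribution

c_same = hist_correlation(h_ref, h_same)   # = 1: perfect match
c_rev = hist_correlation(h_ref, h_rev)     # much lower correlation
```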
The quantitative results showed that our approach outperformed the methods mentioned beforehand. It is worth noting that while [14] had a better SSIM index than ours only at the least hazy scene, our method provided the highest combined HSV histogram correlation for all scenes. In general, at lower turbidities, [14] rendered compelling results closer to ours than [13,15]. However, contrary to our method, the quality of [13][14][15] drastically decreased as the haze became denser.

Application on MR
We applied the aerial perspective effect to a CG model rendered in a real scene. For convenience of the experiment, we employed a fixed view in order to avoid occlusion and tracking issues and focus on the appearance issue. However, this feature is not a limitation, since the fixed-view restriction can be handled using conventional tracking systems. The altitude at the observer position was h_0 = 40 m above sea level. The distance from the CG model to the observer was around 3500 m. The real scenes of the experiments correspond to the scenes captured for the turbidity estimation test seen in Fig. 8. We used the real scenes with estimated atmospheric turbidities of 1.9, 2.10, 2.94, and 4.36.
The rendered results are shown in Fig. 14. We provide the MR results using Zhao's method [13] for comparison. We found that the proposed method synthesized more plausible results in terms of visual coherence between the virtual object and the real scene. In terms of computational cost, our rendering speed (14 fps for a full HD frame size) was 225 times faster than Zhao's method.

Conclusions
We have proposed an efficient turbidity-based method for aerial perspective rendering in real scenes. The atmospheric turbidity is effectively estimated by matching the luminance distributions of a sky model and an omnidirectional captured sky image. An improved scattering model was deduced using real data to classify scattering coefficient values via turbidity. The enhanced scattering model was employed to provide a novel full-spectrum aerial perspective rendering model. Qualitative and quantitative evaluations on real and synthesized data show that the rendering method accomplishes realistic appearances, seamless with the natural aerial perspective, in real time, outperforming related works in terms of appearance quality and computational cost.

Appendix
Table 2 corresponds to the classification of weather conditions based on scattering coefficients. The data was adapted from [16], where measurements were carried out under standard conditions, that is, a spectrally weighted average wavelength (λ = 550 nm) for daylight within the visible spectrum at sea level (h = 0 m).