
Variable exposure time imaging for obtaining unblurred HDR images

Abstract

In this paper, we propose a new camera and a new imaging technique for obtaining unblurred high-dynamic-range (HDR) images. In this camera, we achieve pixelwise control of exposure parameters by using a liquid crystal on silicon (LCoS) device. In particular, we can control not only the amount of exposure but also the exposure time pixel by pixel. We call this technique variable exposure time imaging. By using variable exposure time imaging, we can suppress the motion blur of bright regions. In addition, we propose a method for recovering motion blur from variable exposure time images, so that darker regions are also obtained distinctly. Experimental results show that our method can capture a clear HDR image even if the target object moves in an HDR scene.

1 Introduction

The dynamic range of image intensity is a very important property of digital camera devices, since the quality of images depends heavily on it. In general, the dynamic range of digital cameras is much smaller than that of human eyes and natural scenes. Therefore, standard cameras cannot acquire all of the information in a scene, and the obtained images suffer from over-/underexposure as shown in Fig. 1. In order to overcome this problem, various techniques have been proposed for obtaining high-dynamic-range (HDR) images. One of the most popular is HDR image synthesis from multiple images [1–5]. In this technique, multiple images are taken with different exposure parameters, and an HDR image is synthesized from them. It is widely used, since it achieves HDR imaging with ordinary cameras. However, it cannot be applied to dynamic scenes, since multiple images with different exposures cannot be obtained simultaneously by ordinary cameras.

Fig. 1

Over- and underexposures of a digital camera. a Overexposure. b Underexposure

In order to obtain HDR images from a single image, new imaging methods have recently been proposed [6–8]. In these methods, the image exposure is controlled pixel by pixel. For example, the exposure of a pixel is increased when the pixel observes a dark region and decreased when it observes a bright region. By using this approach, we can obtain appropriate images even if the dynamic range of the input scene is very large. To achieve it, Mannami et al. [6] combined a liquid crystal on silicon (LCoS) device and an imaging sensor, in which each pixel of the LCoS corresponds to a pixel of the image sensor, and the exposure parameter of each pixel can be changed by controlling the transparency of the LCoS pixels. Although this method is useful for obtaining HDR images, the motion blur of input images cannot be suppressed completely when a dynamic scene contains fast motion. In this paper, we propose a new HDR imaging method for obtaining unblurred HDR images. In our method, we control not the transparency of each pixel but its exposure time in order to control the exposure conditions of each pixel. We call this method variable exposure time imaging. By using variable exposure time imaging, the motion blur of images can be suppressed efficiently, and we can obtain clear HDR images from a single shot.

2 Variable exposure time imaging

2.1 Image exposure model

We first describe the ordinary image exposure model. Let E(u,v,t) denote the analog input energy at position (u,v) on an imaging device at time t. When the exposure time of the camera is T, the observed intensity I(x,y) at pixel (x,y) can be described as follows:

$$ I(x,y)={\int_{0}^{T}} \int_{y-\frac{1}{2}}^{y+\frac{1}{2}} \int_{x-\frac{1}{2}}^{x+\frac{1}{2}} E(u,v,t) du dv dt $$
(1)

In particular, when input energy is constant during image exposure, Eq. (1) can be rewritten as follows:

$$ I(x,y)=T \int_{y-\frac{1}{2}}^{y+\frac{1}{2}} \int_{x-\frac{1}{2}}^{x+\frac{1}{2}} E(u,v,0) \, du \, dv $$
(2)

When the discrete input energy I′ is defined by \(I'(x,y)=\int_{y-\frac{1}{2}}^{y+\frac{1}{2}} \int_{x-\frac{1}{2}}^{x+\frac{1}{2}} E(u,v,0) \, du \, dv\), Eq. (2) can be rewritten as:

$$ I(x,y)=TI'(x,y) $$
(3)

This equation indicates that the intensities obtained by ordinary camera devices are proportional to the input energy I′(x,y). Although this linear characteristic is effective for capturing ordinary scenes, it is not suitable for high-dynamic-range scenes, because the brightness of such scenes varies over orders of magnitude. In order to capture HDR scenes appropriately, we need non-linear imaging devices.
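To make this limitation concrete, the following sketch (a minimal numpy simulation; the radiance values and 12-bit full well are illustrative assumptions, not measurements) applies the linear model of Eq. (3) to a scene spanning four orders of magnitude. No single exposure time T captures both ends of the range:

```python
import numpy as np

# Hypothetical radiances spanning four orders of magnitude: I'(x,y) in Eq. (3).
radiance = np.logspace(0, 4, 5)

def linear_expose(radiance, T, full_well=4095):
    """Linear exposure model of Eq. (3), with saturation and a 12-bit full well."""
    I = T * radiance                      # intensity proportional to input energy
    return np.minimum(np.round(I), full_well)

print(linear_expose(radiance, T=1.0))    # long exposure: bright pixels saturate at 4095
print(linear_expose(radiance, T=0.001))  # short exposure: dark pixels quantize to zero
```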

2.2 Variable exposure time imaging

For this objective, we propose variable exposure time (VET) imaging. In this technique, the quantity held constant is not the exposure time but the integrated intensity at each pixel. That is, the exposure time changes pixel by pixel so that a constant intensity is obtained from the input scene. The imaging model of this technique can therefore be represented by using a variable exposure time T(x,y) as follows:

$$ T(x,y) = \frac{I^{\theta}}{I'(x,y)} $$
(4)

where I^θ is a constant. The relationship between the brightness of the input scene and the exposure time is shown in Fig. 2a, and the relationship between the brightness of the input scene and the resolution of the exposure time is shown in Fig. 2b. As shown in Fig. 2b, the resolution of VET imaging is not constant; the resolution in dark regions is much higher than that in bright regions. In general, intensity changes are small in dark regions and large in bright regions, so we need to capture small changes of input brightness in dark regions in order to obtain HDR images. Thus, the non-linear characteristics of VET imaging are well suited to HDR imaging.

Fig. 2

a Characteristics of variable exposure time imaging. b Resolution of variable exposure time imaging

However, obtaining a completely constant intensity by changing the exposure time is difficult in practice. Therefore, we relax Eq. (4) and define the exposure time T(x,y) as the first time during exposure at which the following inequality holds:

$$ \int_{0}^{T(x,y)} \int_{y-\frac{1}{2}}^{y+\frac{1}{2}} \int_{x-\frac{1}{2}}^{x+\frac{1}{2}} E(u,v,t)dudvdt \geq I^{\theta} $$
(5)

That is, the exposure of a pixel is stopped as soon as the accumulated intensity exceeds the threshold I^θ. In this relaxed formulation, we record not only the exposure time T(x,y) but also the obtained intensity I(x,y). These values can be represented as the images shown in Fig. 3. In this paper, we call the image representing the exposure times the exposure time image and the image representing the obtained intensities the variable exposure image.
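The following sketch simulates this relaxed rule under two simplifying assumptions of our own: the scene radiance is constant during exposure, and integration proceeds on a discrete subframe clock of step dt:

```python
import numpy as np

def vet_expose(radiance, I_theta=100.0, dt=1.0, T_max=1000.0):
    """Simulate Eq. (5): integrate each pixel until it reaches I_theta.

    radiance : 2-D array of (constant) input energy per unit time, I'(x,y).
    Returns the variable exposure image I(x,y) and the exposure time image T(x,y).
    """
    accum = np.zeros_like(radiance, dtype=float)    # integrated intensity
    T = np.full_like(radiance, T_max, dtype=float)  # exposure time image
    open_px = np.ones_like(radiance, dtype=bool)    # pixels still exposing
    t = 0.0
    while t < T_max and open_px.any():
        accum[open_px] += radiance[open_px] * dt    # subframe integration
        t += dt
        done = open_px & (accum >= I_theta)         # threshold reached (Eq. (5))
        T[done] = t                                 # record per-pixel exposure time
        open_px &= ~done
    return accum, T

scene = np.array([[0.1, 1.0], [10.0, 100.0]])       # hypothetical radiances
I, T = vet_expose(scene)
print(I)  # roughly constant (~I_theta) wherever the threshold was reached
print(T)  # short exposure times for bright pixels, long times for dark ones
```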

Fig. 3

Variable exposure image (left) and exposure time image (right)

3 Structure of the VET camera

In order to achieve VET imaging, we next consider the structure of the VET camera. In this camera, we control the exposure time pixel by pixel. To do so, we measure the accumulated intensity at each pixel in every subframe. For this purpose, we combine two ordinary image sensors and an LCoS device.

In our system, these devices are arranged as shown in Fig. 4. In this structure, input light rays first pass through the virtual image plane. After that, they are split by a beam splitter. One of the split rays enters an image sensor that measures the amount of incoming light, and the other enters a second image sensor through the LCoS to obtain variable exposure images. The exposure period of the measurement image sensor is much shorter than that of the image sensor for the variable exposure image, as shown in Fig. 5. The images taken by the measurement image sensor are called measuring images in this paper, and the transparency of the LCoS is controlled according to the measuring images. When the integrated intensity at a pixel of the measurement image sensor exceeds a threshold, the transparency of the corresponding LCoS pixel is set to zero, and the input light rays toward the corresponding pixel on the image sensor are blocked as shown in Fig. 5. By this control of the LCoS, we can control the exposure time of each pixel of the image sensor.
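The per-subframe control logic can be sketched as follows. This is a hypothetical loop: read_measurement_frame and set_lcos_transparency stand in for device interfaces that the paper does not specify.

```python
import numpy as np

def control_loop(read_measurement_frame, set_lcos_transparency,
                 shape, I_theta, n_subframes):
    """Sketch of the LCoS exposure control described in Section 3.

    At every subframe, the measurement sensor is read; pixels whose
    accumulated intensity exceeds I_theta get their LCoS transparency
    set to zero, ending their exposure on the main image sensor.
    """
    accum = np.zeros(shape)                 # accumulated measured intensity
    T = np.zeros(shape, dtype=int)          # exposure time image, in subframes
    mask = np.ones(shape)                   # 1 = transparent, 0 = blocked
    for k in range(1, n_subframes + 1):
        frame = read_measurement_frame()    # short-exposure measuring image
        accum += frame * mask               # only still-open pixels integrate
        closing = (accum >= I_theta) & (mask > 0)
        T[closing] = k                      # record when each pixel closed
        mask[closing] = 0.0                 # block the corresponding LCoS pixel
        set_lcos_transparency(mask)
    T[T == 0] = n_subframes                 # pixels that never reached I_theta
    return T
```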

Fig. 4

Structure of the variable exposure time camera

Fig. 5

Control of exposure time from the measurement image sensor

4 HDR imaging for static scenes

We next consider HDR image synthesis by using images taken by the VET camera. In the case of static scenes, HDR images can be calculated directly from exposure time images and variable exposure images.

From Eq. (3), the input energy I′(x,y) can be calculated from the exposure time image T(x,y) and the variable exposure image I(x,y) as follows:

$$ I'(x,y) = \frac{I(x,y)}{T(x,y)} $$
(6)

The I′(x,y) obtained from Eq. (6) is the HDR image recovered by variable exposure time imaging.
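In code, this reconstruction is a single element-wise division (a minimal sketch; the eps guard against zero exposure times is our own addition):

```python
import numpy as np

def reconstruct_static_hdr(I, T, eps=1e-8):
    """Eq. (6): recover the HDR image I'(x,y) = I(x,y) / T(x,y)."""
    return I / np.maximum(T, eps)

# Usage: hdr = reconstruct_static_hdr(variable_exposure_img, exposure_time_img)
```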

5 HDR imaging for dynamic scenes

We next consider HDR imaging of dynamic scenes. In fact, no special reconstruction is needed for bright regions, since their exposure time is short and their motion blur is correspondingly small. However, we need to remove the motion blur in dark regions, which have long exposure times. In this case, HDR image reconstruction is not straightforward, since we must reconstruct the HDR image and the motion blur simultaneously. When the input image is taken by an ordinary camera, this simultaneous reconstruction is an ill-posed problem. However, we can solve it with our VET camera, since the VET camera provides additional information.

From the definition of VET imaging, a variable exposure image I(x,y) can be described as follows:

$$ I(x,y) = \int_{0}^{T(x,y)} \int_{y-\frac{1}{2}}^{y+\frac{1}{2}} \int_{x-\frac{1}{2}}^{x+\frac{1}{2}} E(u,v,t)dudvdt $$
(7)

Let us denote the image motion at pixel (x,y) at time t by Δu(t) and Δv(t). Assuming that the brightness of the scene does not change during its motion, Eq. (7) can be rewritten as follows:

$$ I(x,y)=\int_{0}^{T(x,y)} \int_{y-\frac{1}{2}}^{y+\frac{1}{2}} \int_{x-\frac{1}{2}}^{x+\frac{1}{2}} E(u-\Delta u(t), v-\Delta v(t), t) \, du \, dv \, dt $$
(8)

Furthermore, Eq. (8) can be described by using an HDR image I ′(x,y) as follows:

$$ I(x,y)=\sum_{t=0}^{T(x,y)}I'(x-\Delta u(t),y-\Delta v(t)) $$
(9)

From this equation, we can estimate the unblurred HDR image I′(x,y) if the image motion Δu(t) and Δv(t) is known. Conversely, if we know the unblurred HDR image I′(x,y), we can estimate the image motion Δu(t) and Δv(t) from I′(x,y), T(x,y), and I(x,y). Thus, in this research, we estimate the unblurred HDR image I′(x,y) and the image motion Δu(t) and Δv(t) simultaneously from the variable exposure image I(x,y) and the exposure time image T(x,y).

The simultaneous estimation of the unblurred HDR image I ′(x,y) and the image motion Δ u(t) and Δ v(t) can be achieved by minimizing the following cost function:

$$ \begin{array}{rcl} E &=& \sum_{x}\sum_{y} \left(|| I(x,y)-\mathcal{B}(I'(x,y),\Delta u(t), \Delta v(t))||^{2}\right.\\ &&+ w_{1}||R(I'(x,y))||^{2} \\ &&\left.+ w_{2}||R(\Delta u(x,y))||^{2} + w_{3}||R(\Delta v(x,y))||^{2}\right)\\ &&- w_{4}H(I') \end{array} $$
(10)

where w_i denotes the weight of each regularization term. The first term of this equation is based on the blurring process of VET imaging. The blurring operator \(\mathcal{B}\) can be described as follows:

$$ \mathcal{B}(I'(x,y), \Delta u(t), \Delta v(t)) = \sum_{t=0}^{T(x,y)}I'(x-\Delta u(t),y-\Delta v(t)) $$
(11)

Thus, the first term of Eq. (10) requires that the blurred image synthesized from the reconstructed HDR image be identical to the variable exposure image obtained from the VET camera.
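For concreteness, a direct implementation of \(\mathcal{B}\) might look as follows; this is a sketch under our own simplifying assumptions (integer subframe times, spatially uniform motion, and nearest-neighbor sampling with border clamping):

```python
import numpy as np

def blur_operator(I_hdr, T, du, dv):
    """Eq. (11): synthesize a variable exposure image from an HDR image.

    I_hdr : HDR image I'(x,y)
    T     : exposure time image, in integer subframes
    du,dv : per-subframe motion du[t], dv[t]; must have at least T.max()+1 entries
    """
    h, w = I_hdr.shape
    ys, xs = np.mgrid[0:h, 0:w]
    I = np.zeros_like(I_hdr)
    for t in range(int(T.max()) + 1):
        active = T >= t                                   # pixels still exposing at t
        sx = np.clip(np.round(xs - du[t]).astype(int), 0, w - 1)
        sy = np.clip(np.round(ys - dv[t]).astype(int), 0, h - 1)
        I[active] += I_hdr[sy, sx][active]                # accumulate shifted HDR image
    return I
```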

The second to fourth terms of Eq. (10) are smoothness regularizations of the reconstructed HDR image I′ and the motions Δu and Δv, where the function R denotes the Laplacian.

The last term of Eq. (10) is a prior on the image gradient distribution H(I ′), which can be described as follows:

$$ H(I')=\sum_{i=0}^{255}\min\left(\widetilde{h}(i),h_{I'}(i)\right) $$
(12)

where \(h_{I'}(i)\) is the ith bin of the histogram of image gradients of the reconstructed HDR image I′, and \(\widetilde{h}(i)\) is the corresponding bin for general unblurred images. Gradient histograms of unblurred images are generally similar to each other, so this term makes the intersection of the histograms of the reconstructed image and general images large when the reconstructed image is valid. Thus, by minimizing the cost function E, we can estimate the image motions and the unblurred HDR image simultaneously.
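The prior can be evaluated as a histogram intersection, sketched below under the assumption that gradient magnitudes are quantized to 256 bins and that a reference histogram \(\widetilde{h}\) has been precomputed from a set of sharp images:

```python
import numpy as np

def gradient_histogram(img, bins=256):
    """Histogram of image gradient magnitudes, quantized to [0, 255]."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.clip(np.hypot(gx, gy), 0, 255).astype(int)
    return np.bincount(mag.ravel(), minlength=bins)

def histogram_prior(I_hdr, h_ref):
    """Eq. (12): intersection of the reconstructed and reference histograms."""
    h = gradient_histogram(I_hdr)
    return np.minimum(h_ref, h).sum()   # large when I_hdr looks like a sharp image
```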

Note that good initial values of the image motions are needed for stable estimation in the above minimization. Thus, we first estimate the image motions from a set of sequential images by using an existing optical flow estimation method and use them as the initial values for the simultaneous estimation described above. As the sequential images, we use the measuring images obtained from the measurement image sensor.
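In practice, this initialization is available off the shelf; for example, OpenCV implements the Farneback method [9]. The parameter values below are illustrative defaults, not the settings used in the paper:

```python
import cv2

def initial_flow(prev_frame, next_frame):
    """Dense optical flow between two measuring images (Farneback [9]).

    Both inputs are expected to be 8-bit grayscale images.
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, next_frame, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return flow[..., 0], flow[..., 1]   # per-pixel du, dv
```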

6 Experimental results

We next show experimental results from the proposed VET camera. Figure 6a shows the VET camera developed in this research and used in our experiments.

Fig. 6

VET camera and experimental environment. a Prototype of the VET camera. b Environment

This VET camera consists of two synchronized cameras and an LCoS device. The two cameras are Point Grey FLEA3 cameras, and the LCoS device was taken from an LCoS projector. The resolution of the cameras is 1280 × 1024, and that of the LCoS is 1024 × 600. In our experiments, 233 × 187 partial images were used. The exposure periods of the VET imaging and the measurement camera were 999.75 and 7.58 ms, respectively. The pixel correspondence among the two cameras and the LCoS was calibrated by computing homographies between their images. Note that the transparency of the LCoS cannot become exactly 0, even if its control value is 0. In addition, the VET camera has a dark current. These photographic characteristics were therefore measured beforehand and incorporated into the VET imaging model of Eq. (8).

The experimental environment is shown in Fig. 6b. The scene was partially illuminated by a light source in order to create extreme brightness changes. In addition, some objects were placed on a moving stage to generate dynamic scenes and cause image motions. Unblurred HDR images were then reconstructed by using the proposed method.

For estimating the unblurred images and image motions simultaneously, the initial optical flow was estimated by the Farneback method [9], and the cost function of Eq. (10) was minimized by the steepest descent method.

Figure 7 shows a variable exposure image and an exposure time image obtained from the VET camera. From these images, an unblurred HDR image was estimated, as shown in Fig. 8a. To display the HDR images properly, we converted their intensities into 8-bit images by logarithmic tone mapping. For comparison, we also show an HDR image obtained without the optimization in Fig. 8b and an HDR image derived from the existing method [6] in Fig. 8c.
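The paper does not specify the exact mapping, but a minimal logarithmic tone mapping for display might look like this:

```python
import numpy as np

def log_tonemap(hdr):
    """Compress an HDR image into 8 bits with a logarithmic curve."""
    log_img = np.log1p(hdr - hdr.min())              # shift to >= 0, then log
    scaled = 255.0 * log_img / max(log_img.max(), 1e-8)
    return scaled.astype(np.uint8)
```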

Fig. 7

Images obtained from the VET camera. a Variable exposure image. b Exposure time image

Fig. 8

HDR images obtained from the a proposed method, b proposed method without optimization, and c existing method [6]

Note that the existing method was implemented by using the same VET camera shown in Fig. 6a. In this method, the exposure time of each pixel is constant, and the transparency of the LCoS is controlled based on the intensities in previous frames, following [6].

As shown in Fig. 8b, c, the HDR images obtained without the optimization and from the existing method suffer from motion blur, since these methods cannot eliminate the effect of image motion.

In contrast, the HDR image derived from the proposed method does not suffer from motion blur, as shown in Fig. 8a. In addition, neither under- nor overexposure occurred in our results. These results show that the proposed method can obtain unblurred HDR images efficiently, even if the scene contains extreme brightness changes and motion.

Finally, we show a quantitative evaluation using synthetic images. In this experiment, the RMSE from the ground truth was evaluated. The results are shown in Fig. 9, where (a) is the proposed method, (b) is the proposed method without optimization, (c) is the existing method, (d) is a standard long-exposure image, and (e) is a standard short-exposure image. As shown in Fig. 9, the proposed method achieves the lowest RMSE of the five methods, confirming that it is superior to the existing approaches.

Fig. 9

RMSE of HDR images derived from five methods

7 Conclusions

In this paper, we proposed variable exposure time imaging, which can generate HDR images without motion blur efficiently. To realize this imaging method, we developed a variable exposure time camera. The experimental results show that the proposed method can reconstruct clear HDR images even if the input scene contains extreme brightness changes and motion.

References

  1. Madden BC (1993) Extended intensity range imaging. Technical report, GRASP Laboratory, University of Pennsylvania.

  2. Debevec PE, Malik J (1997) Recovering high dynamic range radiance maps from photographs In: Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 369–378.

  3. Mitsunaga T, Nayar SK (1999) Radiometric self calibration In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 374–380.

  4. Mann S, Picard RW (1995) On being ‘undigital’ with digital cameras: extending dynamic range by combining differently exposed pictures In: Proceedings of IS&T’s 48th Annual Conference, 442–448.

  5. Bogoni L (2000) Extending dynamic range of monochrome and color images through fusion In: Proceedings of the IEEE International Conference on Pattern Recognition (ICPR), 7–12.

  6. Mannami H, Sagawa R, Mukaigawa Y, Echigo T, Yagi Y (2007) High dynamic range camera using reflective liquid crystal In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 1–8.

  7. Nayar SK, Branzoi V (2003) Adaptive dynamic range imaging: optical control of pixel exposures over space and time In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 1168–1175.

  8. Nayar SK, Branzoi V, Boult TE (2004) Programmable imaging using a digital micromirror array In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 436–443.

  9. Farneback G (2003) Two-frame motion estimation based on polynomial expansion In: Bigun J, Gustavsson T (eds) Image Analysis. Lecture Notes in Computer Science, vol 2749. Springer, Berlin Heidelberg.



Acknowledgments

The authors would like to thank the Global Symbiotic Information Research Center of Nagoya Institute of Technology for the provision of laboratory facilities.

Author information

Corresponding author

Correspondence to Fumihiko Sakaue.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

SU designed and carried out the experiments and wrote the manuscript mainly. FS contributed to concept and wrote the manuscript partially. JS conceived and conducted the study. All authors reviewed and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Uda, S., Sakaue, F. & Sato, J. Variable exposure time imaging for obtaining unblurred HDR images. IPSJ T Comput Vis Appl 8, 3 (2016). https://doi.org/10.1186/s41074-016-0005-0

