We next consider HDR imaging of dynamic scenes. Bright regions require no special reconstruction process: their exposure times are sufficiently short, so their motion blur is negligible. Dark regions, however, have long exposure times, and their motion blur must be removed. In this case, HDR image reconstruction is not so easy, since we must reconstruct not only the HDR image but also the motion blur simultaneously. When an input image is taken by an ordinary camera, this simultaneous reconstruction is an ill-posed problem. With our VET camera, however, it can be solved, since the camera provides additional information.

From the definition of VET imaging, a variable exposure image *I*(*x*,*y*) can be described as follows:

$$ I(x,y) = \int_{0}^{T(x,y)} \int_{y-\frac{1}{2}}^{y+\frac{1}{2}} \int_{x-\frac{1}{2}}^{x+\frac{1}{2}} E(u,v,t)dudvdt $$

(7)
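For intuition, Eq. (7) can be discretized in time as a per-pixel sum of radiance frames, where each pixel accumulates light only until its own exposure time *T*(*x*,*y*) ends. The following sketch is an illustrative assumption, not the authors' implementation; the spatial integral over the pixel footprint is assumed to be already folded into each radiance frame.

```python
import numpy as np

def vet_image(E, T):
    """Discrete version of Eq. (7): I(x,y) = sum over t < T(x,y) of E(x,y,t).

    E : (num_frames, H, W) radiance frames (spatial integration over the
        pixel footprint is assumed folded into each frame).
    T : (H, W) integer per-pixel exposure times, 0 < T <= num_frames.
    """
    num_frames = E.shape[0]
    t = np.arange(num_frames)[:, None, None]   # (F, 1, 1) time index
    mask = t < T[None, :, :]                   # pixel is exposed while t < T(x,y)
    return (E * mask).sum(axis=0)

# toy example: static scene of radiance 2.0, uniform exposure of 3 frames
E = np.full((5, 4, 4), 2.0)
T = np.full((4, 4), 3)
I = vet_image(E, T)   # each pixel accumulates 3 frames of radiance 2.0
```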

Let us denote the image motion at a pixel (*x*,*y*) at time *t* by *Δu*(*t*) and *Δv*(*t*). Assuming the brightness of the scene does not change during the motion of the scene, Eq. (7) can be rewritten as follows:

$$ I(x,y)=\int_{0}^{T(x,y)} \int_{y-\frac{1}{2}}^{y+\frac{1}{2}} \int_{x-\frac{1}{2}}^{x+\frac{1}{2}} E(u-\Delta u(t), v-\Delta v(t), t)\, du\, dv\, dt $$

(8)

Furthermore, by discretizing time, Eq. (8) can be described by using an HDR image *I*^{′}(*x*,*y*) as follows:

$$ I(x,y)=\sum_{t=0}^{T(x,y)}I'(x-\Delta u(t),y-\Delta v(t)) $$

(9)

From this equation, we can estimate an unblurred HDR image *I*^{′}(*x*,*y*) if the image motion *Δu*(*t*) and *Δv*(*t*) is known. Conversely, if we know the unblurred HDR image *I*^{′}(*x*,*y*), we can estimate the image motion *Δu*(*t*) and *Δv*(*t*) from *I*^{′}(*x*,*y*), *T*(*x*,*y*), and *I*(*x*,*y*). Thus, in this research, we estimate both the unblurred HDR image *I*^{′}(*x*,*y*) and the image motion *Δu*(*t*) and *Δv*(*t*) simultaneously from the variable exposure image *I*(*x*,*y*) and the exposure time image *T*(*x*,*y*).

The simultaneous estimation of the unblurred HDR image *I*^{′}(*x*,*y*) and the image motion *Δu*(*t*) and *Δv*(*t*) can be achieved by minimizing the following cost function:

$$ \begin{array}{rcl} E &=& \sum_{x}\sum_{y} (|| I(x,y)-\mathcal{B}(I'(x,y),\Delta u(t), \Delta v(t))||^{2}\\ &&+ w_{1}||R(I'(x,y))||^{2} \\ &&+ w_{2}||R(\Delta u(x,y))||^{2} + w_{3}||R(\Delta v(x,y))||^{2})\\ &&- w_{4}H({I'}) \end{array} $$

(10)

where *w*_{i} is the weight of each regularization term. The first term of this equation is based on the blurring process in VET imaging. The blurring process \(\mathcal {B}\) can be described as follows:

$$ \mathcal{B}(I'(x,y), \Delta u(t), \Delta v(t)) = \sum_{t=0}^{T(x,y)}I'(x-\Delta u(t),y-\Delta v(t)) $$

(11)

Thus, the first term of Eq. (10) requires that the blurred image synthesized from the reconstructed HDR image be identical to the variable exposure image obtained from the VET camera.
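The blurring operator \(\mathcal{B}\) of Eq. (11) can be sketched as a shift-and-accumulate loop over time steps. The helper below is illustrative, not the authors' implementation; integer per-frame motion and wrap-around shifts are simplifying assumptions (sub-pixel motion would require interpolation).

```python
import numpy as np

def blur_B(I_hdr, du, dv, T):
    """Eq. (11): B(x,y) = sum over t < T(x,y) of I'(x - du[t], y - dv[t]).

    I_hdr  : (H, W) unblurred HDR image I'
    du, dv : integer image motion at each time step (length >= max exposure)
    T      : (H, W) integer exposure-time image
    """
    out = np.zeros_like(I_hdr, dtype=float)
    for t in range(int(T.max())):
        # shift I' by the motion at time t; np.roll(I, du, axis=1) gives
        # shifted[x] = I[x - du], matching I'(x - du(t), y - dv(t))
        shifted = np.roll(np.roll(I_hdr, du[t], axis=1), dv[t], axis=0)
        out += np.where(t < T, shifted, 0.0)   # pixel accumulates only while exposed
    return out

# toy check: zero motion and exposure time 2 simply doubles the image
I_hdr = np.ones((4, 4))
T = np.full((4, 4), 2)
B = blur_B(I_hdr, [0, 0], [0, 0], T)
```

The data term of Eq. (10) then compares `B` with the observed variable exposure image pixel by pixel.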

The second to the fourth terms of Eq. (10) are smoothness regularizations of the motion *Δu*, *Δv* and the reconstructed HDR image *I*^{′}. The function *R* denotes their Laplacian.
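The operator *R* can be realized, for example, as a discrete 5-point Laplacian. The sketch below assumes periodic boundary handling, which the text does not specify; it is an illustrative choice only.

```python
import numpy as np

def laplacian(f):
    """Discrete 5-point Laplacian used as the smoothness operator R
    (periodic boundaries via np.roll, an assumption for simplicity)."""
    return (np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)
            + np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1) - 4.0 * f)

# a constant field is perfectly smooth, so its Laplacian (and hence its
# regularization penalty ||R(f)||^2) vanishes everywhere
R_const = laplacian(np.full((8, 8), 3.0))
penalty = np.sum(R_const ** 2)
```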

The last term of Eq. (10) is a prior on the image gradient distribution *H*(*I*^{′}), which can be described as follows:

$$\begin{array}{@{}rcl@{}} H({I'})=\sum_{i=0}^{255}\min(\widetilde{h}(i),h_{I'}(i)) \end{array} $$

(12)

where \(h_{I'}(i)\) is the *i*th bin of the histogram of image gradients of the reconstructed HDR image *I*^{′}, and \(\widetilde {h}(i)\) is that of general unblurred images. In general, the gradient histograms of unblurred images are similar to each other; thus, if the reconstructed image is valid, the intersection of its gradient histogram with that of general images becomes large. Therefore, by minimizing the cost function *E*, we can estimate the image motions and the unblurred HDR image simultaneously.

Note that the above minimization requires good initial values of the image motions for stable estimation. Thus, we first estimate the image motions from a set of sequential images by using an existing optical flow estimation method and use the result as the initial value of the simultaneous estimation described above. As the sequential images, we use the measuring images obtained from the measurement image sensor.
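As an illustration of this initialization step, a minimal single-window Lucas-Kanade estimator of a global translation between two measurement frames is sketched below. This pure-NumPy stand-in is an assumption for illustration; the text only requires some existing optical flow method.

```python
import numpy as np

def lucas_kanade_translation(frame0, frame1):
    """Estimate one global translation (du, dv) between two frames by
    least squares on the brightness-constancy equation
    Ix*du + Iy*dv + It = 0 (classic single-window Lucas-Kanade)."""
    Ix = np.gradient(frame0, axis=1)   # spatial gradient along x
    Iy = np.gradient(frame0, axis=0)   # spatial gradient along y
    It = frame1 - frame0               # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    sol, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return sol                         # array [du, dv]

# smooth test pattern shifted by one pixel along x
x = np.arange(32, dtype=float)
X, Y = np.meshgrid(x, x)
frame0 = np.exp(-((X - 16.0) ** 2 + (Y - 16.0) ** 2) / 50.0)
frame1 = np.roll(frame0, 1, axis=1)
du, dv = lucas_kanade_translation(frame0, frame1)
```

The recovered motion is only first-order accurate, which is acceptable here because it serves merely as an initial value for the simultaneous estimation.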