Open Access

Nuclear detection in 4D microscope images of a developing embryo using an enhanced probability map of top-ranked intensity-ordered descriptors

  • Xian-Hua Han1, 2,
  • Yukako Tohsato3,
  • Koji Kyoda3,
  • Shuichi Onami3,
  • Ikuko Nishikawa2 and
  • Yen-Wei Chen2
IPSJ Transactions on Computer Vision and Applications 2016, 8:8

DOI: 10.1186/s41074-016-0010-3

Received: 9 April 2016

Accepted: 14 October 2016

Published: 3 November 2016

Abstract

Nuclear detection in embryos is an indispensable process for quantitative analysis of the development of multicellular organisms. Because the pixel-intensity distributions of nuclear and cytoplasmic regions overlap, and intensity varies widely even within the same type of cellular component across embryos, it is difficult to separate nuclear regions from the surrounding cytoplasm in differential interference contrast (DIC) microscope images. This study explores a discriminative representation of the local patch around a fixed pixel, called the top-ranked intensity-ordered descriptor (TRIOD), which is expected to distinguish the smooth texture inside a nucleus from the irregular texture of cytoplasm containing yolk granules. A probabilistic model is then fitted to nuclear TRIOD prototypes, and an enhanced nuclear probability map is constructed from the TRIODs of all pixels in a DIC microscope image. Finally, a distance-regularized level set method, which not only follows the probability change between nearby pixels but also regularizes contour smoothness, refines the initial localization obtained by simply thresholding the enhanced probability map. Experimental results show that the proposed strategy segments nuclear regions much more accurately than conventional strategies.

Keywords

Top-ranked intensity-ordered descriptor (TRIOD) · Enhanced probability map · Nuclear detection · Level set method

1 Introduction

Identification of gene function during animal development is a main task of developmental biology. Genetic perturbation and analysis in animal embryos often clarify the function of a specific gene. Because its genome is completely sequenced (roughly 20,000 protein-coding genes [1]) and its life cycle is short, the nematode Caenorhabditis elegans is widely used as a model organism in biology. By observing a perturbed early C. elegans embryo with a microscope using Nomarski differential interference contrast (DIC) optics [2], biologists manually analyze early embryonic events and evaluate the roles of a specific gene in the developmental process. Detecting the various components of interest in the embryo is the first indispensable step in extracting quantitative measures of early embryonic events such as the increase in cell number, changes in nuclear position and shape, and the timing of cell division.

Since early embryonic events are closely related to the positions, morphological variations, and number changes of nuclei, automated detection of nuclei in DIC microscope images plays an essential role in understanding animal development. This work focuses on detecting nuclei and cytoplasm in three-dimensional (3D) time-lapse images of a C. elegans embryo recorded by DIC microscopy [2]. Two slices of 3D DIC microscope images, obtained from different C. elegans embryos, are shown in Fig. 1 a. The nuclear region is clearly much smoother than the cytoplasmic region with its yolk granules. We take one nuclear and one cytoplasmic region from each image as explored regions (Fig. 1 a), denoted by different color frames, and compare the statistical distributions of (1) different types of explored regions from the same embryo and (2) the same type of explored region from different embryos, as shown in Fig. 1 b. Figure 1 b shows that the intensities of different cellular components can overlap completely (the middle histogram in Fig. 1 b), whereas the same type of explored region from different embryos can have very different intensity distributions (the right histogram in Fig. 1 b). It is therefore impossible to identify the nuclear and cytoplasmic regions from intensity alone, and a discriminative representation based on texture is necessary for distinguishing them.
Fig. 1

The statistical analysis of intensity distributions from cytoplasmic and nuclear regions. a Two DIC microscope images and their explored regions (red, blue, pink, and green frames). b The compared intensity histograms from different explored regions, where the blue, red, green, and pink rectangles correspond to the regions framed in those colors in Fig. 1 a. The blue and red histograms in the left figure are for the regions framed in blue and red in Fig. 1 a, those in the middle figure are for the regions framed in green and pink, and those in the right figure are for the regions framed in green and blue

There are several studies that attempt to explore the dynamic information in 3D time-lapse DIC microscope images. Two computer-assisted systems, SIMI BioCell [3] and 3D-DIASemb [4], have been constructed for tracking the changes of 3D nuclear positions in such images. These two systems identify the positions and sizes of the nuclei by displaying the 3D time-lapse DIC microscope images in a graphical user interface. However, the nuclei are still detected manually, which is a laborious task. In order to analyze dynamic changes in DIC microscope images efficiently, several efforts have aimed to automatically identify the different types of regions in a C. elegans embryo image, and especially to detect the nuclear regions within the surrounding cytoplasm. Yasuda et al. [5] combined several types of edge features for detecting the nuclear and membrane regions. Because of the large variation within the same type of region and the possibly similar intensity distributions between different types of regions, this method produced many false positives and missed detections, which had to be corrected by laborious hand-tuning. Hamahashi et al. [6] proposed to transform each raw image in a 3D time-lapse DIC sequence into the local entropy domain (a local entropy image) to enhance the cytoplasmic region (and suppress the nuclear region) and then track the nuclei over the dynamic image sequence. The method uses a statistic (local entropy) of the local patch centered on the focused pixel instead of the pixel intensity and can adapt to any 3D time-lapse images for enhancing the cytoplasmic region. However, since only one statistic of the local patch is computed from its intensity distribution, the resulting local entropy image has low contrast and a blurred boundary between the nuclear and cytoplasmic regions. On the other hand, Ning et al. [7] explored a multilayer convolutional neural network, taking the intensities of local patches as input, to recognize five categories: cell wall, cytoplasm, nucleus membrane, nucleus, and outside medium. This complex framework needs substantial post-processing, using an energy-based model and a set of elastic models, to give acceptable identification, and thus incurs a high computational cost.

This study proposes a simple but efficient framework for automatically recognizing nuclear regions within the surrounding cytoplasmic region. It is well known that the cytoplasm contains many yolk granules, which can appear at any position and with irregular frequency on a background of similar intensity, in contrast to the smooth nuclear region. It is therefore impossible to distinguish a nuclear pixel from a cytoplasmic pixel using pixel intensity alone, and it is also difficult using the raw local patch as the feature representing its center pixel, due to the irregular appearance of the yolk granules. In this study, we explore a discriminative descriptor of the local patch centered on a pixel, called the top-ranked intensity-ordered descriptor (TRIOD), which retains the intensity variation of the yolk granules in a cytoplasmic local patch without destroying the smooth intensity of a nuclear local patch. Because TRIODs representing nuclear pixels vary little, we collect a set of nuclear TRIODs as prototypes and fit a probabilistic model to them. With the constructed model, we can transform a raw DIC microscope image into a nuclear-enhanced probability map, which achieves very high contrast between the nuclear and cytoplasmic regions. Finally, a distance-regularized level set (DRLS) method [8], which not only follows the probability change between nearby pixels but also regularizes contour smoothness, refines the initial localization obtained by simply thresholding the enhanced probability map. The proposed framework for nuclear detection is shown in Fig. 2, where the top part illustrates the construction procedure of the probability models for the nuclear TRIOD prototypes, and the bottom part gives the computation of a nuclear-enhanced probability map and the DRLS-based refinement employed to obtain the final detection results. We evaluate the effectiveness and performance of our proposed framework for nuclear detection.
Fig. 2

The proposed nuclear detection framework. The top part denotes the construction procedure of the probability models based on TRIODs, and the bottom part gives the procedure of nuclear detection. For constructing the probability models, we first extract some nuclear regions from several randomly selected embryo images and then obtain the TRIODs from all l×l local patches centered on the pixels in the nuclear regions, which are used as the input vectors for constructing the nuclear models. In the nuclear detection procedure, the TRIODs of all pixels in the input embryo image are computed and fed to the constructed probability model to obtain the transformed nuclear-enhancement map, and then the level set method is used for nuclear detection on the transformed map

This paper is organized as follows. Section 2 describes the discriminative TRIOD for representing nuclear and cytoplasmic pixels and introduces the construction of the probability models for computing the nuclear-enhanced probability map. Refinement of the detection via a distance-regularized level set method is investigated in Section 3. Experimental results and conclusions are given in Sections 4 and 5, respectively.

2 Enhanced probability map based on top-ranked intensity-ordered descriptors

This section first introduces a discriminative representation of irregular texture, called the top-ranked intensity-ordered descriptor. Due to the small variance of the (smooth) nuclear region, we propose to model nuclear TRIODs with a mixture of Gaussians and to transform the raw DIC microscope image into a nuclear-enhanced probability map for final segmentation. We then evaluate the transformed probability maps under different parameters.

2.1 Top-ranked intensity-ordered descriptors

As statistically analyzed in the above section, the cytoplasmic region includes many yolk granules bulging out from the background, which can be considered irregular texture variation. A number of local texture representations have been proposed in the computer vision literature. The most popular strategies are histogram-based, such as SIFT (Scale-Invariant Feature Transform) [9], GLOH (Gradient Location-Orientation Histogram), and their recent improved versions [10, 11], which partially deal with certain variations and distortions in the processed images. However, while these descriptors have shown promising performance for representing local patches with regular structure in various computer vision problems, they cannot handle more complex illumination changes. To obtain features robust to illumination change, the local binary pattern (LBP) texture operator and its many extensions [12, 13] have been widely used in the vision literature and perform well under illumination change. Still, all of these local feature representations have difficulty handling irregular structures like the unorganized yolk granules in the cytoplasmic region of DIC microscope images. Therefore, a local entropy method [6] was introduced to deal with the irregular texture in DIC microscope images; it considers only the frequencies of pixel intensities and gives a statistical measure of the information capacity of a local patch. This method works well despite the large intensity variation among DIC microscope images of different organisms. However, only one statistic (the local entropy) is extracted per local patch, so its representation ability is limited for distinguishing the highly variable cytoplasmic region from the smooth nuclear region.

This study proposes a discriminative representation of irregular texture, which is expected to give similar descriptors for cytoplasmic texture even under large variation, and a consistent representation for nuclei that differs from the cytoplasmic one. Let us represent the ith focused pixel in a DIC microscope image by a small l×l patch, which can be re-arranged into a vector \(\mathbf{s}_{i}=[s_{i,0},s_{i,1},\ldots,s_{i,D-1}]\) with \(\mathbf{s}_{i} \in \mathbb{R}^{D}\) (D=l×l−1). Since the yolk granules can appear at any location in the local patch, they give very different vectors even for pixels of the same cytoplasmic type. Thus, we sort the un-ordered vector and take only a subset of the top-ranked values as the descriptor. In order to handle the intensity variation among DIC microscope images of different organisms, we first subtract the mean value of the vector \(\mathbf{s}_{i}\) and take absolute magnitudes:
$$ \bar{s}_{i,j}=|s_{i,j} - \frac{1}{D} \sum\limits_{d=0}^{D-1}s_{i,d}|. $$
(1)
Then, we sort the vector \(\bar {\mathbf {s}}_{i}\) in non-ascending order and obtain the re-ordered vector \(\hat {\mathbf {s}}_{i}\) with \(\hat {s}_{i,0} \geq \hat {s}_{i,1} \geq \cdots \geq \hat {s}_{i,D-1}\). Finally, we take the K largest-magnitude elements of \(\hat {\mathbf {s}}_{i}\) as the texture representation, named the top-ranked intensity-ordered descriptor (TRIOD). For all pixels in the DIC microscope image, K-dimensional TRIODs can be extracted from the l×l local patches around the focused pixels. We visualize the first and second top-three-magnitude elements of all pixels by combining them into color images as shown in Fig. 3 b, c, which shows that the proposed TRIODs obtain high contrast between the nuclear and cytoplasmic regions. The first top-three-magnitude elements denote the first to third elements of the TRIOD, and the second top-three-magnitude elements the fourth to sixth. Since the elements of a TRIOD are already sorted in non-ascending order, neighboring elements differ little, and thus almost all pixels in the combined color images have gray intensity.
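As a concrete illustration, the TRIOD extraction above can be sketched in a few lines. This is a minimal sketch with hypothetical patch values; for simplicity it uses all l×l pixels of the patch rather than the D = l×l−1 elements of the formulation above:

```python
import numpy as np

def triod(patch, K):
    """Top-ranked intensity-ordered descriptor of one square patch:
    subtract the patch mean, take absolute magnitudes (Eq. (1)),
    sort in non-ascending order, and keep the K largest elements."""
    s = patch.astype(float).ravel()
    s_bar = np.abs(s - s.mean())       # Eq. (1)
    s_hat = np.sort(s_bar)[::-1]       # non-ascending order
    return s_hat[:K]

# Toy patches: a smooth "nuclear" patch vs. a granular "cytoplasmic" one
rng = np.random.default_rng(0)
smooth = 100 + rng.normal(0, 1, (9, 9))
granular = 100 + rng.normal(0, 20, (9, 9))
K = (9 * 9 - 1) // 2 + 1
# The leading TRIOD magnitudes are far larger for the granular patch
```

Note that the descriptor is invariant to where the granules sit inside the patch, which is exactly the property motivating the sorting step.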
Fig. 3

The visualization of the proposed TRIODs. a A raw slice of a DIC microscope image. b, c The corresponding color images obtained by combining the first top-three elements (b) and the second top-three elements (c) of the TRIODs

2.2 Nucleus model construction for probability map transformation

As shown in Fig. 3, the variance of the nuclear TRIOD elements is much smaller than that of the cytoplasmic ones, and thus it is much easier to model the nuclear TRIODs. Therefore, this study constructs a probability model of nuclear TRIODs and uses it to transform any TRIOD into a value of a nuclear-enhanced probability map. Given some nuclear regions from the DIC microscope image of any organism, a set of nuclear TRIODs \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}]\), called nuclear TRIOD prototypes, can be extracted. The intuitive way to use the N TRIODs for constructing the nuclear model is to use M Gaussian models directly, as follows:
$$ P(\mathbf{X}) = \sum\limits_{m=1}^{M} w_{m} \mathrm{N} \left(\mathbf{X}|\mu_{m}, \Sigma_{m}\right), $$
(2)
where \(\mathrm{N}(\mathbf{X}|\mu_{m},\Sigma_{m})\) is a Gaussian model. If we set the number of models equal to the number of nuclear TRIOD prototypes (M=N), the mean vector of the mth Gaussian model is simply the mth TRIOD, i.e., \(\mu_{m}=\mathbf{x}_{m}\). \(\Sigma_{m}\) and \(w_{m}\) are the covariance matrix and the weight of the mth Gaussian model, respectively. Since the M Gaussian models are centered on the N=M TRIODs, all model weights can be set to \(\frac {1}{M}\), meaning that all models contribute equally to the final probability. For simplicity, we assume \(\Sigma_{m}\) is a diagonal matrix with determinant \({\sigma ^{2}_{m}}\), which is calculated as the mean of the squared distances between the mean vector of the Gaussian model and its five nearest neighbors, as follows:
$$ {\sigma^{2}_{m}}= \frac{1}{5} \sum_{\mathbf{x}_{i} \in \text{NN}_{5}({\mu}_{m})} | \mathbf{x}_{i}- {\mu}_{m} |^{2}, $$
(3)

where \(\mathbf{x}_{i} \in \text{NN}_{5}(\mu_{m})\) denotes the five nearest neighbors in \(\mathbf{X}\) to the mean \(\mu_{m}\) of the mth Gaussian model.

In the constructed nuclear model, if the number of Gaussian components M (e.g., M=N) is large, the fitting degrees (probabilities) of a test TRIOD to all constructed Gaussian models must be calculated, so the computational cost increases linearly with the number of components. In order to reduce computation time, we simply cluster the N nuclear TRIOD prototypes into M (\(M \ll N\)) groups and use the group centers as the means \([\mu_{1},\mu_{2},\ldots,\mu_{M}]\).
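Under the assumptions above (isotropic Gaussians, equal weights), the model construction can be sketched as follows. The plain k-means loop stands in for whichever clustering algorithm is used, which the text does not specify:

```python
import numpy as np

def build_nuclear_model(X, M, n_iter=20, seed=0):
    """Cluster N nuclear TRIOD prototypes (rows of X) into M centers
    with a plain k-means sketch, then set each variance by Eq. (3):
    the mean squared distance from the center to its 5 nearest prototypes."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), M, replace=False)].astype(float)
    for _ in range(n_iter):                      # Lloyd iterations
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        label = d2.argmin(1)
        for m in range(M):
            if np.any(label == m):
                mu[m] = X[label == m].mean(0)
    sigma2 = np.empty(M)
    for m in range(M):                           # Eq. (3)
        dist2 = ((X - mu[m]) ** 2).sum(1)
        sigma2[m] = np.sort(dist2)[:5].mean()
    return mu, sigma2
```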

For any test TRIOD \(\mathbf{x}_{t}\), we can calculate the probability that it belongs to the mth Gaussian model as
$$ {\upgamma}(m|\mathbf{x}_{t})=\frac{w_{m} N\left(\mathbf{x}_{t}| {\mu}_{m}, {\sigma_{m}^{2}}\right)}{\sum_{m=1}^{M} w_{m} N\left(\mathbf{x}_{t}| {\mu}_{m}, {\sigma_{m}^{2}}\right)}. $$
(4)
Since all M Gaussian models fit nuclear TRIODs, any test nuclear TRIOD is expected to be fitted well by several similar models. The cytoplasmic TRIODs, however, are generally far from the constructed models, so their probability under any model will be low. This study uses the J highest probabilities to give the final map magnitude for a TRIOD \(\mathbf{x}_{t}\) as follows:
$$ \text{PM}(\mathbf{x}_{t})=\frac{1}{J} \sum\limits_{j=1}^{J} \hat{\upgamma}(j|\mathbf{x}_{t}), $$
(5)

where \(\hat {\gamma }(j|\mathbf {x}_{t})\) is the jth largest probability of the M models.
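Given the cluster centers and variances, the transformation of one test TRIOD into its map value follows Eqs. (4) and (5) directly. A minimal sketch; the equal weights \(w_{m}=1/M\) cancel in Eq. (4), and log-domain arithmetic is used for numerical stability:

```python
import numpy as np

def probability_map_value(x_t, mu, sigma2, J=3):
    """Map value PM(x_t): responsibilities of x_t under M isotropic
    Gaussians (Eq. (4)), then the mean of the J largest (Eq. (5))."""
    K = len(x_t)
    dist2 = ((mu - x_t) ** 2).sum(1)
    # log N(x_t | mu_m, sigma_m^2 I), dropping the shared constant
    logp = -0.5 * (dist2 / sigma2 + K * np.log(sigma2))
    logp -= logp.max()                           # avoid underflow
    gamma = np.exp(logp) / np.exp(logp).sum()    # Eq. (4)
    return np.sort(gamma)[::-1][:J].mean()       # Eq. (5)
```

Applying this function to the TRIOD of every pixel yields the nuclear-enhanced probability map.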

2.3 Evaluation of the transformed probability maps under different parameters

As shown in the flowchart of our proposed nuclear detection strategy (Fig. 2), we randomly select slices from the DIC microscope images of three embryos, in which several nuclear regions are manually delineated to prepare the nuclear prototypes. In our experiments, about 2000 K-dimensional nuclear TRIOD prototypes (\(K=\text {int}\left (\frac {l\times l-1}{2}\right)+1\), where "int" denotes truncation to an integer) are extracted from l×l local patches in the given nuclear regions, so the computational cost would be high if all prototypes were used as the mean parameters of the probability models. Therefore, we employ a clustering algorithm to group them into M representative TRIOD prototypes, which correspond to M Gaussian models. Figure 4 shows the nuclear-enhanced probability maps under different numbers of models for TRIODs extracted from 9×9 local patches. The profiles along the horizontal lines in the map images are also shown in Fig. 4 c, which shows higher contrast between the cytoplasmic and nuclear regions when larger numbers of probability models are used. Taking the probability map built from the Gaussian models of all TRIOD prototypes as the reference, we calculate the summed squared difference over all pixels between the probability maps; the summed difference values for four DIC slices from different organisms are given in Fig. 4 d. The summed differences change little beyond 50 probability models and appear to reach local minima for all four samples. The computational times (in seconds) are given in Fig. 4 e. To balance the trade-off between computational cost and contrast, we select 50 probability models to fit the nuclear TRIODs in the following experiments.
Fig. 4

The evaluation of the nuclear-enhanced probability maps with different numbers of Gaussian models. a A raw DIC microscope image. b The nuclear-enhanced probability maps with model numbers 1, 50, 100, and all available prototypes. c The plotted profiles along the red lines. d The summed squared differences. e The computational times

Next, we validate the effect of the local patch size on the probability maps by setting the parameter l of the local patch (l×l) to 3, 5, 7, 9, …, with top-ranked intensity-ordered descriptors of \(K=\text {int}\left (\frac {l\times l-1}{2}\right)+1\) elements extracted for pixel representation. The resulting probability maps with different local patch sizes are shown in Fig. 5, which indicates that small local patch sizes correctly enhance almost all nuclear pixels (true positives) but also wrongly emphasize some cytoplasmic pixels (false positives), while large local patch sizes miss some nuclear pixels, especially in the boundary regions, but yield cleaner enhanced probability maps.
Fig. 5

The evaluation of the nuclear-enhanced probability maps with different local patch sizes

Finally, different dimensions of the TRIOD feature are evaluated. We set the dimension K of the TRIOD feature to 1, \(\text {int}\left (\frac {l\times l}{8}\right)+1\), \(\text {int}\left (\frac {l\times l-1}{4}\right)+1\), \(\text {int}\left (\frac {\left (l\times l-1\right)\times 3}{8}\right)+1\), and \(\text {int}\left (\frac {l\times l}{2}\right)+1\), with l=5, 7, and 9. Figure 6 shows the transformed probability maps with K=1, \(\text {int}\left (\frac {l\times l}{4}\right)+1\), \(\text {int}\left (\frac {l\times l-1}{2}\right)+1\), and \(\text {int}\left (\frac {\left (l\times l-1\right)\times 3}{4}\right)+1\) under l=5, 7, 9. We also define a discrimination score measuring how well a transformed probability map distinguishes the nuclear regions from the surrounding cytoplasmic regions, as follows:
$$ \text{DS}(\mathbf{p}) = \frac{\left(m1_{p}-m2_{p}\right)^{2}} {4\times \left(s1_{p}+s2_{p}\right)}, $$
(6)
Fig. 6

The transformed probability maps with different dimensions K of the TRIODs, which are extracted from the local patch sizes l=5,7,9. The row images from the top to the bottom denote the transformed maps with l=5,7,9, and the column images from the left to the right denote those with the TRIOD dimensions K=1, \(\text {int}\left (\frac {l\times l}{4}\right)+1\), \(\text {int}\left (\frac {l\times l-1}{2}\right)+1\), \(\text {int}\left (\frac {\left (l\times l-1\right)\times 3}{4}\right)+1\), respectively

where p denotes a transformed probability map, \(m1_{p}\) and \(m2_{p}\) are the mean values of the nuclear and cytoplasmic pixels in the transformed map (e.g., the pixels in the regions shown in Fig. 7 a), and \(s1_{p}\) and \(s2_{p}\) denote the variances of the nuclear and cytoplasmic pixels, respectively. The larger DS(p) is, the higher the contrast between the nuclear and the surrounding cytoplasmic regions. Figure 7 b gives the quantitative measurements of the transformed probability maps with different K under l=5 and 9. From Fig. 7 b, it can be seen that the discrimination score with \(K=\text {int}\left (\frac {l\times l}{2}\right)+1\) shows acceptable discriminative ability and cannot be greatly improved even by increasing K. Therefore, we set \(K=\text {int}{\left (\frac {l\times l}{2}\right)}+1\) for the proposed TRIOD features in all the following experiments. Segmentation of the nuclear region with the level set method introduced in the next section is based on the enhanced probability maps with different patch sizes l and the corresponding TRIOD feature dimension K.
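The score of Eq. (6) is straightforward to compute from sampled nuclear and cytoplasmic pixel values; a minimal sketch with synthetic samples standing in for the regions of Fig. 7 a:

```python
import numpy as np

def discrimination_score(nuc, cyt):
    """Eq. (6): squared mean difference divided by four times the
    summed variances of the nuclear and cytoplasmic pixel samples."""
    m1, m2 = np.mean(nuc), np.mean(cyt)
    s1, s2 = np.var(nuc), np.var(cyt)
    return (m1 - m2) ** 2 / (4.0 * (s1 + s2))

# Well-separated samples score higher than overlapping ones
rng = np.random.default_rng(0)
ds_hi = discrimination_score(rng.normal(0.9, 0.05, 500),
                             rng.normal(0.1, 0.05, 500))
ds_lo = discrimination_score(rng.normal(0.55, 0.2, 500),
                             rng.normal(0.45, 0.2, 500))
```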
Fig. 7

The quantitative evaluation of the transformed probability maps with different dimensions K of the TRIODs. a The cytoplasmic and nuclear regions used for quantitative evaluation. b The discrimination scores of the different transformed maps

In addition, we also compare nuclear detection results with the local entropy method [6], which may use different patch sizes for measuring the local entropy of a pixel. In our experiments, the local entropy method is implemented, and the transformed local entropy images with different patch sizes (l=5, 9, 13, 17, 21) are shown in Fig. 8 a. Furthermore, Fig. 8 b shows the discrimination scores of the transformed local entropy images with different patch sizes, calculated by Eq. (6); the transformed image with patch size l=9 has by far the highest discriminative ability, consistent with [6]. Therefore, the following nuclear detection is based on this transformed local entropy image, with the same seed points as the proposed framework, for fair comparison.
Fig. 8

The quantitative evaluation of the local entropy images with different local patch sizes. a The transformed local entropy images with different local patch sizes. b The discrimination scores of the transformed local entropy images with different patch sizes, calculated by Eq. (6)

3 Level set-based detection

The basic idea of the level set method is to represent a contour as the zero level set of a function, called a level set function (LSF), and to formulate the contour’s motion as the evolution of that function. For image segmentation, the level set function can be written as ϕ(x,y,t) over the image coordinate space \((x,y) \in \Omega\) and time, embedding the dynamic contour as its zero level set. Assuming the LSF ϕ takes positive values outside the zero level contour and negative values inside, the inward normal vector of the embedded contour can be expressed as \(\mathbf{N}=-\nabla\phi/|\nabla\phi|\), where \(\nabla\) is the gradient operator. The evolution of the LSF ϕ can be formulated as the following partial differential equation (PDE):
$$ \frac{\partial\phi}{\partial t}=F|\nabla\phi|, $$
(7)
where F is the speed function that controls evolution of the LSF. The conventional level set method generally results in LSF irregularity [14, 15] in the evolution procedure, and thus, Li et al. proposed a general variational level set formulation with a distance regularization term and an external energy term that controls the evolution of the zero level contour toward the desired locations. The designed objective energy function to be minimized is formulated as
$$ E(\phi)=\lambda R_{p}(\phi)+E_{\text{ext}}(\phi), $$
(8)

where λ>0 is a constant controlling the trade-off between the two terms, and \(E_{\text{ext}}(\phi)\) is the external energy on the processed images, defined so that it reaches a minimum when the zero level set of the LSF ϕ has evolved to an object boundary (see [8] for the detailed formulation). \(R_{p}(\phi)\) is the level set regularization term as defined in [8].

This study segments the nuclear regions on the enhanced probability maps using the level set method, which needs an initial contour (initial LSF) for evolution. According to the evaluation of the proposed nuclear-enhancement strategy in the above section, the probability maps with a large local patch size (13×13) achieve a very clean nuclear-enhanced region, except for some missed pixels at the nuclear boundary, so the enhanced nuclear pixels can be recognized from their surroundings via a simple thresholding procedure. The nuclear regions obtained by thresholding are used to produce the initial level set, which is then evolved on the enhanced probability map with a small local patch size (such as 3×3) to give a precise nuclear boundary. In addition, the series of available DIC microscope images at a fixed time point forms a 3D volume, in which the nuclear regions in the middle Z-slice are usually larger than in the others. Thus, the nuclear regions are first segmented in the middle Z-slice using the above strategy, and the segmentation is automatically extended upward and downward to all slices. The procedure for automatically segmenting the nuclear regions from a 3D DIC volume at a fixed time point is as follows:
  1. Implement the initial segmentation (Fig. 9 a) of the middle Z-slice image by thresholding the nuclear-enhanced probability map with 13×13 local patches, and calculate the initial LSF ϕ.

  2. Evolve the LSF ϕ on the nuclear-enhanced probability map with 3×3 local patches to achieve the final refined segmentation result for the middle Z-slice image (Fig. 9 b).

  3. Top-slice segmentation procedure: (a) erode the previous segmentation regions (Fig. 9 c) with a morphological filter to calculate the initial LSF, and (b) refine the segmentation results (Fig. 9 d) on the probability map with 3×3 local patches.

  4. Down-slice segmentation procedure, as in step 3.

Fig. 9

Comparison of the detected nuclear regions with/without DRLS refinement. a The initial segmentation using simple thresholding on the middle Z-slice (initial LSF). b The final localization result via DRLS refinement. c The initial eroded segmentation from the previous slice (b). d The final localization of the next slice of b

With the above procedure, the nuclear regions of a 3D DIC volume can be automatically segmented.
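The four-step procedure can be sketched as follows. The `refine` argument stands in for the DRLS evolution described above (a trivial placeholder works for illustration), `erode` is a simple binary erosion, and the threshold value is an assumption; the wrap-around border handling of `np.roll` is acceptable for a sketch:

```python
import numpy as np

def erode(mask, r=1):
    """Binary erosion with a (2r+1)x(2r+1) square structuring element."""
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def segment_volume(pm_coarse, pm_fine, thr, refine):
    """Steps (1)-(4): threshold the middle Z-slice of the coarse
    (13x13-patch) probability map, refine it on the fine (3x3-patch)
    map, then propagate up and down using the eroded previous result
    as each neighboring slice's initialization."""
    Z = pm_coarse.shape[0]
    mid = Z // 2
    seg = [None] * Z
    seg[mid] = refine(pm_coarse[mid] > thr, pm_fine[mid])  # steps (1)-(2)
    for z in range(mid + 1, Z):                            # step (3): upward
        seg[z] = refine(erode(seg[z - 1]), pm_fine[z])
    for z in range(mid - 1, -1, -1):                       # step (4): downward
        seg[z] = refine(erode(seg[z + 1]), pm_fine[z])
    return np.stack(seg)
```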

4 Experimental results

4.1 Material

Our nuclear detection method was applied to three-dimensional (3D) time-lapse microscope images of C. elegans embryos obtained from the Worm Developmental Dynamics Database (WDDD; http://so.qbic.riken.jp/wddd/cdd/index.html) constructed by Kyoda et al. [16]. WDDD provides 3D time-lapse DIC microscope images for 50 wild-type embryos and 136 RNAi embryos in which one embryonic lethal gene was silenced by RNA interference. All sets of time-lapse images were recorded at 40-s intervals during the first three rounds of cell division of an embryo. At each time point, DIC microscope images were recorded on 66 consecutive focal planes spaced at 0.5-μm intervals, each 600×600 pixels at 0.1 μm × 0.1 μm pixel spacing. We randomly selected three slice images from three wild-type embryos in WDDD for constructing the nuclear models and then evaluated our proposed method on the DIC microscope images of five other randomly selected wild-type embryos.

4.2 Results

We first compare, in Fig. 10, the transformed probability maps of two DIC microscope images from different embryos using our proposed TRIOD, using the raw local patch directly for pixel representation, and using the local image entropy [6]. From Fig. 10, we can see that the probability maps with the TRIOD give a much cleaner enhanced nuclear region than those using the local patch directly. The local entropy image increases the contrast between the cytoplasmic and nuclear regions to some extent; however, the boundaries in the transformed entropy images are quite blurred, and the intensity variance within the same type of cellular component (cytoplasm/nucleus) remains large.
Fig. 10

Comparison of the transformed maps with different methods. a Raw DIC microscope images. b The local entropy images [6]. c The nuclear-enhanced probability maps directly using the local patches for pixel representation (the same modeling process using the local patches instead of TRIODs). d The probability maps using TRIODs

Next, the distance-regularized level set method is employed for segmentation from the middle Z-slice to the top/down slices on the transformed maps (the local entropy image and the probability map with TRIODs). Three images with segmentation results are shown in Fig. 11, where green, red, and blue lines denote the segmented results using the local entropy image, using our proposed probability map, and the manual (ground-truth) segmentation, respectively. For the segmentation results using local entropy, the initial LSF of the middle Z-slice is given manually, and the initial LSF for the next slice is propagated in the same way as introduced in Section 3. Figure 11 shows that the nuclear regions segmented on the local entropy image tend either to leak into the cytoplasmic region or to shrink to very small regions, especially in the top/down slices. Figure 12 a, b shows the 3D visualization of the nuclear regions in the ground truth and by our proposed strategy for one embryo volume, respectively. In order to evaluate the detected nuclear regions quantitatively, we use two metrics, the dice coefficient (DICE) and the area overlap (AO), defined as follows:
$$ \text{DICE} = \frac{2|S_{\text{GT}}\cap S_{\text{Seg}}|} {|S_{\text{GT}}|+ |S_{\text{Seg}}|}; \text{AO} = \frac{|S_{\text{GT}}\cap S_{\text{Seg}}|} {|S_{\text{GT}}\cup S_{\text{Seg}}|}, $$
(9)
Fig. 11

The final results of the nuclear regions detected by different methods for three slice images of an embryo, refined by DRLS on the transformed local entropy images (green curve) and on our proposed probability maps (red curve), together with the manually created ground-truth regions (blue curve). a A top slice image. b A middle slice. c A down slice

Fig. 12

The rendered surface of the detected nuclear regions for an embryo. a The rendered surface of the ground-truth nuclear regions. b The surface of the final nuclear localization results using our proposed strategy

where S GT and S Seg denote the nuclear regions of the ground truth and of our proposed strategy, respectively. The compared performances on DIC microscope images of one wild-type embryo are given in Fig. 13 a, b, which shows that our proposed method achieves 80–90 % DICE coefficients and 70–90 % AO values even for the slices on both sides, while the local entropy-based method fails for those slices despite detection performance similar to our proposed method around the middle Z-slice. Similar detection performance was achieved by our proposed method on DIC microscope images of other wild-type embryos; some examples of the nuclei detected on our proposed probability map and on the local entropy image are shown in Figs. 14 and 15, respectively. Figure 14 gives the transformed probability maps, the local entropy images, and the nuclear detection results for two slice images from one wild-type embryo, where the red and green contours denote the nuclear boundaries detected on our probability maps and on the local entropy images, respectively. Figure 15 gives the nuclear detection results for four slice images from four wild-type embryos and validates that our proposed strategy (red contours) achieves much better detection performance than the local entropy-based method (green contours).
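The two metrics in Eq. (9) are straightforward to compute on binary masks; a minimal sketch (the toy masks are illustrative, not data from the paper):

```python
import numpy as np

def dice_coefficient(gt, seg):
    """DICE = 2|S_GT ∩ S_Seg| / (|S_GT| + |S_Seg|), as in Eq. (9)."""
    inter = np.logical_and(gt, seg).sum()
    return 2.0 * inter / (gt.sum() + seg.sum())

def area_overlap(gt, seg):
    """AO = |S_GT ∩ S_Seg| / |S_GT ∪ S_Seg| (the Jaccard index), Eq. (9)."""
    inter = np.logical_and(gt, seg).sum()
    union = np.logical_or(gt, seg).sum()
    return inter / union

# Toy 2D masks: a ground-truth square and a slightly shifted segmentation.
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True    # 16 px
seg = np.zeros((8, 8), dtype=bool); seg[3:7, 3:7] = True  # 16 px, overlap 9 px
print(dice_coefficient(gt, seg))  # 2*9/(16+16) = 0.5625
print(area_overlap(gt, seg))      # 9/23 ≈ 0.391
```

Note that AO is always the stricter of the two: for the same overlap it never exceeds DICE, which is why the reported AO values (70–90 %) sit below the DICE values (80–90 %).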
Fig. 13

The quantitative evaluation of the nuclear detection. a The compared DICE coefficients. b The compared area overlap (AO)

Fig. 14

The compared results using our proposed probability map- and local entropy-based methods. a The input slice images. b The local entropy images. c The transformed probability maps. d The detected nuclear regions. Red contour denotes the detected nuclear boundary by our method, and green contour denotes the boundary by the local entropy-based method

Fig. 15

The compared results using our proposed probability map- and local entropy-based methods. Each row shows four slice images from a wild-type embryo. Red contour denotes the detected nuclear boundary by our method, and green contour denotes the boundary by the local entropy-based method

5 Conclusions

This study presented a nucleus-enhanced probability process based on top-ranked intensity-ordered descriptors (TRIODs) and employed the distance-regularized level set method to accurately localize the nuclear regions. The proposed TRIOD is designed to represent irregular texture and has a promising discriminative property for distinguishing the smooth nuclear region from the irregular cytoplasmic texture. After nucleus enhancement by the probability model, the distance-regularized level set method is used for automated detection of nuclear regions from 3D DIC microscope images. Experiments showed that our proposed framework achieves very promising performance.

Declarations

Acknowledgements

This work was supported in part by the National Bioscience Database Center (NBDC) of the Japan Science and Technology Agency (JST), by Grants-in-Aid for Scientific Research from the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) under Grant Nos. 15K00253 and 16H01436, and by the New Energy and Industrial Technology Development Organization (NEDO).

Authors’ contributions

XHH carried out the nuclear detection studies, conducted the experiments, and drafted the manuscript. YT and KK participated in the acquisition of the data and the analysis and interpretation of the data and drafted the manuscript. SO, IN, and YWC have revised the draft critically for important intellectual content and given the final approval of the version to be published. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

About the authors

Xian-Hua Han received a B.E. degree from Chongqing University, Chongqing, China, an M.E. degree from Shandong University, Jinan, China, and a D.E. degree in 2005 from the University of the Ryukyus, Okinawa, Japan. From April 2007 to March 2013, she was a post-doctoral fellow and an associate professor at the College of Information Science and Engineering, Ritsumeikan University, Japan. She is now a senior researcher at the National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, Japan. Her current research interests include image processing and analysis, feature extraction, machine learning, computer vision, and pattern recognition. She is a member of the IEEE and IEICE.

Yukako Tohsato received her M.E. from the Kyushu Institute of Technology in 1997 and her Ph.D. degree from Osaka University in 2002. She worked as a research associate and then as an assistant researcher at Osaka University from 2002 to 2004. She was an assistant professor at Ritsumeikan University from 2004 to 2012. She is currently a research scientist at the RIKEN Quantitative Biology Center. Her research interests are in bioinformatics and systems biology.

Koji Kyoda received his Ph.D. from Keio University in 2005. He is now a research scientist at RIKEN Quantitative Biology Center. His research interests include systems biology, bioimage informatics, high-throughput biological data analysis, and biological database integration.

Shuichi Onami received his D.V.M. from The University of Tokyo in 1994 and his Ph.D. from The Graduate School for Advanced Studies in 1998. He was an associate professor at Keio University from 2002 to 2006 and joined RIKEN as a senior scientist at Genomic Sciences Center in 2006. He is now a team leader at RIKEN Quantitative Biology Center. His current research interests include mathematical modeling of animal development and its application to medicine.

Ikuko Nishikawa received the degrees of Bachelor, Master, and Doctor of Science from Kyoto University by the research in physics. She is now a professor at Ritsumeikan University. Her current research interests include bioinformatics, machine learning, and optimization.

Yen-Wei Chen received a B.E. degree in 1985 from Kobe University, Kobe, Japan, and an M.E. degree in 1987 and a D.E. degree in 1990, both from Osaka University, Osaka, Japan. From 1991 to 1994, he was a Research Fellow at the Institute of Laser Technology, Osaka. From October 1994 to March 2004, he was an associate professor and then a professor with the Department of Electrical and Electronics Engineering, University of the Ryukyus, Okinawa, Japan. He is currently a professor with the College of Information Science and Engineering, Ritsumeikan University, Japan. He is also a chair professor with the College of Computer Science and Technology, China. He was an Overseas Assessor of the Chinese Academy of Science and Technology, an Associate Editor of the International Journal of Image and Graphics (IJIG), and an Editorial Board member of the International Journal of Knowledge-Based Intelligent Engineering Systems. His research interests include computer vision, pattern recognition, and image processing. He has published more than 300 research papers in these fields. Dr. Chen is a member of the IEEE and IEICE, Japan.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
The Artificial Intelligence Research Center, Advanced Industrial Science and Technology
(2)
Ritsumeikan University
(3)
Laboratory for Developmental Dynamics, RIKEN Quantitative Biology Center

References

  1. The C. elegans Sequencing Consortium (1998) Genome sequence of the nematode C. elegans: a platform for investigating biology. Science 282(5396): 2012–2018.
  2. Thomas C, DeVries P, Hardin J, White J (1996) Four-dimensional imaging: computer visualization of 3D movements in living specimens. Science 273(5275): 603–607.
  3. Schnabel R, Hutter H, Moerman D, Schnabel H (1997) Assessing normal embryogenesis in Caenorhabditis elegans using a 4D microscope: variability of development and regional specification. Dev Biol 184(2): 234–265.
  4. Heid PJ, Voss E, Soll DR (2002) 3D-DIASemb: a computer-assisted system for reconstructing and motion analyzing in 4D every cell and nucleus in a developing embryo. Dev Biol 245(2): 329–347.
  5. Yasuda T, Bannai H, Onami S, Miyano S, Kitano H (1999) Towards automatic construction of cell-lineage of C. elegans from Nomarski DIC microscope images. Genome Inform 10: 144–154.
  6. Hamahashi S, Onami S, Kitano H (2005) Detection of nuclei in 4D Nomarski DIC microscope images of early Caenorhabditis elegans embryos using local image entropy and object tracking. BMC Bioinformatics 6: 125.
  7. Ning F, Delhomme D, LeCun Y, Piano F, Bottou L, Barbano PE (2005) Toward automatic phenotyping of developing embryos from videos. IEEE Trans Image Process 14(9): 1360–1371.
  8. Li C, Xu C, Gui C, Fox M (2010) Distance regularized level set evolution and its application to image segmentation. IEEE Trans Image Process 19(12): 3243–3254.
  9. Lowe DG (2004) Distinctive image features from scale-invariant keypoints. Int J Comput Vis 60(2): 91–110.
  10. Mikolajczyk K, Schmid C (2005) A performance evaluation of local descriptors. IEEE Trans Pattern Anal Mach Intell 27(10): 1615–1630.
  11. Tola E, Lepetit V, Fua P (2010) DAISY: an efficient dense descriptor applied to wide-baseline stereo. IEEE Trans Pattern Anal Mach Intell 32(5): 815–830.
  12. Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7): 971–987.
  13. Gupta R, Patil H, Mittal A (2010) Robust order-based methods for feature description. In: Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition, 334–341. IEEE, Piscataway.
  14. Sethian J (1999) Level set methods and fast marching methods. Cambridge University Press, Cambridge.
  15. Osher S, Fedkiw R (2002) Level set methods and dynamic implicit surfaces. Springer-Verlag New York, Inc., New York.
  16. Kyoda K, Adachi E, Masuda E, Nagai Y, Suzuki Y, Oguro T, Urai M, Arai R, Furukawa M, Shimada K, Kuramochi J, Nagai E, Onami S (2013) WDDD: Worm Developmental Dynamics Database. Nucleic Acids Res 41(Database issue): D732–D737.

Copyright

© The Author(s) 2016