
A practical person authentication system using second minor finger knuckles for door security

Abstract

This paper proposes a person authentication system using second minor finger knuckles, i.e., metacarpophalangeal (MCP) joints, for door security. The system acquires finger knuckle patterns on the MCP joints when a user takes hold of a door handle and recognizes the person from the MCP joint patterns. The proposed system can be constructed by attaching a camera to a door handle to capture the MCP joints. A region of interest (ROI) image around each MCP joint can be extracted from a single still image, since all the MCP joints face the camera directly. Phase-based correspondence matching is used to calculate matching scores between ROIs so as to account for deformation of the ROIs caused by hand pose changes. Through a set of experiments, we demonstrate that the proposed system achieves efficient MCP recognition performance and show the potential of second minor finger knuckles for biometric recognition.

1 Introduction

A hand has many biometric traits such as fingerprint, palmprint, finger/palm vein, finger knuckle, and hand geometry. Among these, the finger knuckle is a relatively new biometric trait compared with well-established traits such as face, fingerprint, and iris [1]. The outer surface of a finger has three knuckles: a distal interphalangeal (DIP) joint, a proximal interphalangeal (PIP) joint, and a metacarpophalangeal (MCP) joint, as shown in Fig. 1. Kumar et al. [2] categorized the three finger joints into major and minor finger knuckles, where the DIP joint is the first minor finger knuckle, the PIP joint is the major finger knuckle, and the MCP joint is the second minor finger knuckle. Patterns on a finger knuckle are easy to capture with a camera, which allows us to develop flexible and compact biometric authentication systems. A finger knuckle is also expected to be as distinctive as a fingerprint or a palmprint, although statistical analysis on a large dataset is still required to demonstrate the uniqueness of finger knuckle patterns [2]. This paper focuses on the use of finger knuckle patterns to develop a person authentication system for door security.

Fig. 1

A taxonomy of finger knuckle joints: Blue-colored circles indicate distal interphalangeal (DIP) joints, green-colored circles indicate proximal interphalangeal (PIP) joints, and red-colored circles indicate metacarpophalangeal (MCP) joints

Table 1 shows a summary of research on finger knuckle recognition. Most studies [3–17] focused on recognition algorithms for texture patterns of PIP joints and evaluated their performance using a public finger knuckle image database such as the PolyU FKP database [18]. The images in the PolyU FKP database are captured under controlled conditions, since each subject puts his/her finger on fixed blocks to reduce spatial variations and capture clear line features of the finger knuckle. Although such a database is suitable for developing fundamental recognition algorithms using finger knuckle patterns, it may not be practical. Many studies [6, 7, 11–16, 19] employed coding approaches that extract features by applying spatial filters to images and binarizing their responses, where variants of the Gabor filter are usually used as the spatial filter. The effectiveness of such coding approaches has been demonstrated in iris recognition [20] and palmprint recognition [21]. Some studies [8–10] employed local feature descriptors such as SIFT and SURF, which are widely used in computer vision. Another approach [5, 12, 15, 17] employed Band-Limited Phase-Only Correlation (BLPOC), an image matching technique using the phase components of the 2D Discrete Fourier Transforms (2D DFTs) of given images [22]. Among them, some studies [15–17] exhibited efficient person authentication performance using finger knuckle patterns.

Table 1 Summary of research on finger knuckle recognition

There are also a few works on finger knuckle recognition under practical conditions. Kumar et al. [4] proposed a finger knuckle recognition algorithm using multiple patterns acquired from the index, middle, ring, and little fingers. They demonstrated that a matching score combining four PIP joints is effective for person authentication. Cheng et al. [19] proposed a contactless PIP joint recognition system using a camera embedded in a smartphone. As the first attempt to develop a practical person authentication system using PIP joints on smartphones, its recognition performance was not necessarily good. Aoyama et al. [23] proposed a finger knuckle recognition system for a door handle. This system acquires PIP joint patterns when a user takes hold of the door handle and recognizes the person using the acquired patterns, so users do not need to pay attention to the authentication process. The system also combined information from the four knuckles to improve recognition performance. Kusanagi et al. [24] developed an improved version of Aoyama's system using video sequences.

Compared with PIP joints, there are few works on finger knuckle recognition using MCP and DIP joints. Kumar [25] proposed a finger knuckle recognition algorithm using both major and first minor finger knuckle patterns, i.e., PIP and DIP joints; combining the two joint patterns improved recognition performance. Kumar et al. [2] also considered the use of texture patterns around MCP joints to identify persons. Both works provide a fundamental investigation of biometric recognition using minor finger knuckles, since performance was evaluated using images of a hand placed on a flat plane with the fingers and thumb spread apart.

This paper focuses on the use of second minor finger knuckles, i.e., MCP joints, for biometric recognition and develops a practical person authentication system using MCP joints. We consider person authentication using MCP joints at a door handle, inspired by the concept of Aoyama's system [23]. Aoyama's system has to embed a camera into the door, since it captures texture patterns on PIP joints when a user takes hold of the door handle, which increases the cost. Moreover, local images around each PIP joint cannot always be extracted from a single still image, as reported by Kusanagi et al. [24]. In contrast, our proposed system uses MCP joints for person authentication. Texture patterns on MCP joints can be captured by a camera attached to the door handle; in this case, the MCP joints face the camera directly, so a local image around each MCP joint can be extracted from a single still image. Phase-based correspondence matching [26] is used to calculate matching scores between MCP joint patterns, as in the conventional PIP joint recognition systems [23, 24]. Through a set of experiments, we demonstrate that the proposed system achieves efficient MCP recognition performance and show the potential of minor finger knuckles for biometric recognition.

The main contributions of this work are summarized as follows:

  1. This is the first attempt to use finger knuckle patterns on MCP joints for person authentication in a practical situation.

  2. A prototype of a door security system using finger knuckle recognition is developed. The use of MCP joints makes it possible to develop a user-friendly person authentication system for door security.

2 Finger knuckle recognition system for door security

This section describes an overview of the proposed system. We develop the MCP joint recognition system inspired by the concept of finger knuckle recognition systems for door handles [23, 24].

Fingers have three joints, i.e., DIP, PIP, and MCP joints, as shown in Fig. 1. When a user takes hold of a door handle to open a door, the PIP and MCP joints are easy to capture with a camera. The DIP joints, in contrast, face the floor, and the DIP joints of the index and middle fingers may be hidden behind the thumb. Therefore, DIP joints are not suitable for person authentication in door security.

The conventional systems using PIP joints consist of a handle, a camera, and a light source, where the camera has to be located so as to face the PIP joints. When a user takes hold of the door handle, the system captures an image or a video sequence and recognizes the user from the PIP joint patterns. The advantage is that the image acquisition process is not intrusive; the user only has to open the door by taking hold of the handle. The disadvantage is that the shape of the PIP joints may vary in each image acquisition due to hand pose variations, which degrades recognition performance. In addition, the camera and the light source have to be embedded into the door, so the door has to be modified, which is costly.

According to the fundamental investigation by Kumar et al. [2], MCP joints are sufficiently distinctive for person authentication, comparable to PIP joints. MCP joints can be captured by attaching a camera to a door handle and using ambient light. Therefore, building a system for MCP joint recognition requires much less effort than for PIP joint recognition. Moreover, the variation of MCP joints is smaller than that of PIP joints when a user takes hold of a door handle.

To clarify the potential of MCP joint recognition based on the above considerations, we developed a prototype system for MCP joint recognition as shown in Fig. 2. Table 2 shows the specification of the developed system. The camera is located above the door handle, assuming that the camera is attached to the door. There is no light source; that is, ambient light is used to take images, assuming indoor use. Images captured by this system include illumination changes caused only by the ambient light. In practical situations, images may include strong daylight, reflections, etc., resulting in halation and blur, which significantly decrease recognition performance. To obtain images suitable for MCP joint recognition under such conditions, an appropriate camera filter and an optional light source would have to be used.

Fig. 2

Overview of the developed system for MCP joint recognition

Table 2 Specification of the developed system

3 MCP joint recognition

This section describes the proposed MCP joint recognition algorithm, which consists of four steps: (i) image acquisition, (ii) region of interest (ROI) extraction, (iii) ROI matching, and (iv) score fusion. Figure 3 shows the flow diagram of the proposed algorithm. The details of each step are described in the following.

Fig. 3

Flow diagram of the proposed MCP joint recognition system

3.1 Image acquisition

An image of the back of a hand, including the MCP joints of the index, middle, ring, and little fingers, is captured under ambient light using the camera attached to the door handle. Figure 4a shows an input image acquired by the developed system.

Fig. 4

MCP joint detection and ROI extraction. a Input image $f(n_1,n_2)$. b Edge image $f_e(n_1,n_2)$. c Region around fingers $f'(n_1,n_2)$. d Vertical projection $V(n_2)$. e Valley detection between fingers. f MCP joint detection. g Extracted ROI image for each finger

3.2 ROI extraction

This step extracts an ROI image from the captured hand image. The positions of the MCP joints are detected according to the valleys between fingers. The size of captured images is 1280 × 960 pixels as mentioned in Section 2. The captured image is resized to 640 × 480 pixels to reduce memory usage and computation time, assuming that this algorithm is implemented on embedded systems. The input image is denoted by $f(n_1,n_2)$, where $1 \leq n_1 \leq 480$ and $1 \leq n_2 \leq 640$.

First, both ends of the hand are detected from the input image. The edge image $f_e(n_1,n_2)$ shown in Fig. 4b is obtained by applying the Sobel filter to $f(n_1,n_2)$. The position of the camera relative to the door handle is fixed, so the location of the door handle in the image is known in advance. Let $d$ be the vertical coordinate of the center of the handle and $d'$ be the vertical coordinate of the end of the handle toward the door, as shown in Fig. 4b. In the developed system, $d=220$ and $d'=300$. The horizontal coordinates of both ends of the hand are detected by

$$\begin{array}{*{20}l} e_{l} &=& \min\left\{ n_{2} | f_{e}(d,n_{2})>0\right\}, \end{array} $$
(1)
$$\begin{array}{*{20}l} e_{r} &=& \max\left\{ n_{2} | f_{e}(d',n_{2})>0\right\}, \end{array} $$
(2)

where $e_l$ and $e_r$ indicate the horizontal coordinates of the left and right ends of the hand, respectively. $d'$ is used to detect $e_r$, since the right end of the handle may be detected as the edge of the hand if $d$ is used.
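To make this step concrete, the following is a minimal Python sketch of the hand-end detection in Eqs. (1) and (2), assuming a grayscale NumPy image and SciPy's Sobel filter; the function name and the edge threshold are illustrative assumptions, since the paper does not specify them.

```python
import numpy as np
from scipy import ndimage

def detect_hand_ends(f, d=220, d_prime=300, edge_thresh=0.0):
    """Detect the horizontal coordinates of both ends of the hand.

    f: grayscale input image of size 480 x 640 (the resized image).
    d: row through the center of the door handle.
    d_prime: row through the end of the handle toward the door.
    Returns (e_l, e_r), as in Eqs. (1) and (2).
    """
    # Edge image f_e: Sobel gradient magnitude of f.
    gx = ndimage.sobel(f.astype(float), axis=1)
    gy = ndimage.sobel(f.astype(float), axis=0)
    f_e = np.hypot(gx, gy)

    # Eq. (1): leftmost edge pixel on the row through the handle center.
    e_l = np.nonzero(f_e[d, :] > edge_thresh)[0].min()
    # Eq. (2): rightmost edge pixel on row d'; using d' avoids picking up
    # the right end of the handle itself as the hand edge.
    e_r = np.nonzero(f_e[d_prime, :] > edge_thresh)[0].max()
    return e_l, e_r
```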

Next, the horizontal coordinate of each boundary between fingers is obtained. To reduce the effect of background noise, the region $f'(n_1,n_2)$ located around the door handle is extracted from $f(n_1,n_2)$ as follows:

$$\begin{array}{*{20}l} f'(n_{1},n_{2}) = f(n_{1},n_{2}) \rvert_{320 \leq n_{1} \leq 380, e_{l} \leq n_{2} \leq e_{r}}. \end{array} $$
(3)

As mentioned above, $f'(n_1,n_2)$ can be extracted from a fixed position of $f(n_1,n_2)$, since the relation between the camera and the handle is fixed in the developed system. The range $320 \leq n_1 \leq 380$ is empirically determined so as to extract the region between the MCP and PIP joints. Figure 4c shows the extracted region $f'(n_1,n_2)$. The intensity around the boundaries between fingers is lower than elsewhere, and the fingers are oriented vertically. Hence, the boundaries between fingers can be detected by projecting the pixels of $f'(n_1,n_2)$ in the vertical direction. The vertical projection $V(n_2)$ of $f'(n_1,n_2)$ is calculated by

$$\begin{array}{*{20}l} V(n_{2}) = \sum_{n_{1}} f'(n_{1},n_{2}). \end{array} $$
(4)

Figure 4d shows the result of the vertical projection of $f'(n_1,n_2)$. The three local minima of $V(n_2)$ are detected as boundaries between fingers, denoted by $v_m$ ($m=1,2,3$), where the indices $m$ correspond to the boundaries between the index and middle fingers, the middle and ring fingers, and the ring and little fingers, respectively.
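Under the same assumptions, a minimal sketch of Eqs. (3) and (4) and the detection of the three boundary minima follows; the smoothing window and the argrelextrema parameters are illustrative choices not given in the paper.

```python
import numpy as np
from scipy.signal import argrelextrema

def finger_boundaries(f, e_l, e_r):
    """Detect the three boundaries v_m between fingers (Eqs. (3)-(4)).

    f: grayscale 480 x 640 input image.
    e_l, e_r: horizontal hand ends from the previous step.
    Returns the three boundary columns in full-image coordinates.
    """
    # Eq. (3): fixed band between the MCP and PIP joints.
    f_prime = f[320:381, e_l:e_r + 1].astype(float)

    # Eq. (4): vertical projection; the dark boundaries between fingers
    # appear as local minima of V(n_2).
    V = f_prime.sum(axis=0)

    # Light smoothing, then local minima of the projection.
    V_s = np.convolve(V, np.ones(9) / 9, mode='same')
    minima = argrelextrema(V_s, np.less, order=15)[0]

    # Keep the three deepest minima as the finger boundaries v_1..v_3.
    v = minima[np.argsort(V_s[minima])[:3]]
    return np.sort(v) + e_l
```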

Finally, the coordinates of each MCP joint are determined. The edge is tracked from each $v_m$ to the valley between the fingers using the boundary tracking algorithm [27], as shown in Fig. 4e. The coordinate of the end of each valley is denoted by $\mathbf{w}^{m} = (w^{m}_{1},w^{m}_{2})$. The geometric relation among the MCP joints and the valleys can be assumed to be almost the same for everyone, since the structure of the hand is almost the same across people. Therefore, a rule-based approach can be used to detect the coordinates of each MCP joint from the valley locations $\mathbf{w}^{m}$. The center coordinate $\mathbf{u}^{i}$ of each MCP joint is defined by

$$\begin{array}{*{20}l} \mathbf{u}^{1} &=& (u_{1}^{1},u_{2}^{1}) = \left(w^{1}_{1}-75,\frac{e_{l}+w^{1}_{2}}{2}\right), \end{array} $$
(5)
$$\begin{array}{*{20}l} \mathbf{u}^{2} &=& (u_{1}^{2},u_{2}^{2}) = \left(w^{2}_{1}-75,\frac{w^{1}_{2}+w^{2}_{2}}{2}\right), \end{array} $$
(6)
$$\begin{array}{*{20}l} \mathbf{u}^{3} &=& (u_{1}^{3},u_{2}^{3}) = \left(w^{2}_{1}-75,\frac{w^{2}_{2}+w^{3}_{2}}{2}\right), \end{array} $$
(7)
$$\begin{array}{*{20}l} \mathbf{u}^{4} &=& (u_{1}^{4},u_{2}^{4}) = \left(w^{3}_{1}-75,\frac{w^{3}_{2}+e_{r}}{2}\right), \end{array} $$
(8)

where $i=1,2,3,4$ and each index $i$ corresponds to the index, middle, ring, and little fingers, respectively. The region of 150 × 150 pixels centered on $\mathbf{u}^{i}$ is extracted as the ROI image.
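A minimal sketch of Eqs. (5)–(8) and the ROI cropping, assuming a (row, column) coordinate convention and the valley ends $\mathbf{w}^m$ from the boundary tracking step; the function name is hypothetical.

```python
import numpy as np

def extract_rois(f, w, e_l, e_r, half=75):
    """Compute the MCP joint centers (Eqs. (5)-(8)) and crop 150 x 150 ROIs.

    f: grayscale input image.
    w: the three valley-end coordinates [(w1_row, w1_col), (w2_row, w2_col),
       (w3_row, w3_col)] from the boundary tracking step.
    e_l, e_r: horizontal hand ends.
    Returns four ROI images for the index, middle, ring, and little fingers.
    """
    (w1r, w1c), (w2r, w2c), (w3r, w3c) = w
    centers = [
        (w1r - 75, (e_l + w1c) // 2),  # Eq. (5): index finger
        (w2r - 75, (w1c + w2c) // 2),  # Eq. (6): middle finger
        (w2r - 75, (w2c + w3c) // 2),  # Eq. (7): ring finger
        (w3r - 75, (w3c + e_r) // 2),  # Eq. (8): little finger
    ]
    return [f[r - half:r + half, c - half:c + half] for r, c in centers]
```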

3.3 ROI matching

Phase-based correspondence matching [26] is used to calculate matching scores between ROI images; it employs (i) a coarse-to-fine strategy using image pyramids for robust correspondence search and (ii) a local block matching method using BLPOC. Image deformation is observed between ROI images captured at different times due to hand rotation, although ROI images extracted from MCP joints deform less than those from PIP joints. Such deformation can be approximated by small translations within a local area. Intensity variations are also observed in ROI images due to different illumination conditions, and BLPOC is an image matching method robust against illumination changes. Therefore, we employ phase-based correspondence matching, as in the conventional PIP joint recognition systems [23, 24].

Fundamentals of POC and BLPOC are briefly described in the following. Consider two $N_1 \times N_2$ images, $f(n_1,n_2)$ and $g(n_1,n_2)$, where the index ranges are $n_1=-M_1,\ldots,M_1$ ($M_1>0$) and $n_2=-M_2,\ldots,M_2$ ($M_2>0$) for mathematical simplicity, and hence $N_1=2M_1+1$ and $N_2=2M_2+1$. The discussion can easily be generalized to non-negative index ranges with power-of-two image sizes. Let $F(k_1,k_2)$ and $G(k_1,k_2)$ denote the 2D Discrete Fourier Transforms (2D DFTs) of $f(n_1,n_2)$ and $g(n_1,n_2)$, respectively. The normalized cross power spectrum $R_{FG}(k_1,k_2)$ is given by

$$\begin{array}{*{20}l} R_{FG}(k_{1},k_{2}) =& \ \frac{F(k_{1},k_{2})\overline{G(k_{1},k_{2})}} {\left |F(k_{1},k_{2})\overline{G(k_{1},k_{2})} \right|}, \end{array} $$
(9)

where $\overline{G(k_1,k_2)}$ is the complex conjugate of $G(k_1,k_2)$. The POC function $r_{fg}(n_1,n_2)$ is the 2D Inverse DFT (2D IDFT) of $R_{FG}(k_1,k_2)$ and is given by

$$\begin{array}{*{20}l} {}r_{fg}(n_{1},n_{2}) = \frac{1}{N_{1}N_{2}}\sum_{k_{1},k_{2}}R_{FG}(k_{1},k_{2}) W_{N_{1}}^{-k_{1}n_{1}}W_{N_{2}}^{-k_{2}n_{2}}, \end{array} $$
(10)

where $\sum_{k_1,k_2}$ denotes $\sum_{k_1=-M_1}^{M_1}\sum_{k_2=-M_2}^{M_2}$. When two images are similar, their POC function gives a distinct sharp peak; when they are not similar, the peak drops significantly. The height of the peak gives a good similarity measure for image matching, and the location of the peak gives the translational displacement between the images. The idea of BLPOC is to eliminate meaningless high-frequency components in the calculation of the normalized cross power spectrum $R_{FG}$ [22]. Assume that the ranges of the effective frequency band are given by $k_1=-K_1,\ldots,K_1$ and $k_2=-K_2,\ldots,K_2$, where $0 \leq K_1 \leq M_1$ and $0 \leq K_2 \leq M_2$. Thus, the effective size of the frequency spectrum is given by $L_1=2K_1+1$ and $L_2=2K_2+1$. The BLPOC function is given by

$$\begin{array}{*{20}l} r_{fg}^{K_{1}K_{2}}(n_{1},n_{2}) = \frac{1}{L_{1}L_{2}}{\sum_{k_{1},k_{2}}}'R_{FG}(k_{1},k_{2}) W_{L_{1}}^{-k_{1}n_{1}}W_{L_{2}}^{-k_{2}n_{2}}, \end{array} $$
(11)

where $n_1=-K_1,\ldots,K_1$, $n_2=-K_2,\ldots,K_2$, and $\sum'_{k_1,k_2}$ denotes $\sum_{k_1=-K_1}^{K_1}\sum_{k_2=-K_2}^{K_2}$. Note that the maximum value of the correlation peak of the BLPOC function is always normalized to 1 and does not depend on $L_1$ and $L_2$.
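The following NumPy sketch computes the BLPOC function of Eqs. (9)–(11) for two same-size images; the fftshift-based band extraction and the small constant in the normalization are implementation choices rather than part of the paper. For global matching as in [2, 5], the matching score is then simply blpoc(f, g).max().

```python
import numpy as np

def blpoc(f, g, k1_ratio=0.5, k2_ratio=0.5):
    """Band-Limited Phase-Only Correlation of two same-size images.

    Keeps only the low-frequency band (K1/M1 = K2/M2 = 0.5 in this paper)
    of the normalized cross power spectrum, Eqs. (9)-(11). The peak height
    of the returned surface is the similarity score, and the peak location
    gives the translational displacement.
    """
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    R = F * np.conj(G)
    R /= np.abs(R) + 1e-12                       # Eq. (9): keep phase only

    # Center the spectrum and keep the band |k_i| <= K_i.
    R = np.fft.fftshift(R)
    n1, n2 = R.shape
    K1, K2 = int(n1 // 2 * k1_ratio), int(n2 // 2 * k2_ratio)
    c1, c2 = n1 // 2, n2 // 2
    R_band = R[c1 - K1:c1 + K1 + 1, c2 - K2:c2 + K2 + 1]

    # Eq. (11): inverse DFT of the band-limited spectrum; the peak is
    # normalized to 1 for identical images.
    r = np.fft.ifft2(np.fft.ifftshift(R_band))
    return np.real(np.fft.fftshift(r))
```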

Phase-based correspondence matching combines the coarse-to-fine strategy using image pyramids with the local block matching using BLPOC. Let $\mathbf{p}$ be the coordinate vector of a reference point in the ROI image $I(n_1,n_2)$ registered in the database. In this paper, the number of reference points is $10 \times 10$. The problem of correspondence matching is to find the coordinate vector $\mathbf{q}$ in the input ROI image $J(n_1,n_2)$ that corresponds to the reference point $\mathbf{p}$ in the registered ROI image $I(n_1,n_2)$. The procedure of phase-based correspondence matching is briefly described in the following.

  • Step 1: For $l=1,2,\ldots,l_{\text{max}}-1$, create the $l$-th layer images $I^{l}(n_1,n_2)$ and $J^{l}(n_1,n_2)$, i.e., coarser versions of $I^{0}(n_1,n_2)$ and $J^{0}(n_1,n_2)$, recursively as follows:

    $$\begin{array}{@{}rcl@{}} I^{l}(n_{1},n_{2}) &=& \frac{1}{4}\sum_{i_{1}=0}^{1} \sum_{i_{2}=0}^{1} I^{l-1}(2n_{1}+i_{1},2n_{2}+i_{2}),\\ J^{l}(n_{1},n_{2}) &=& \frac{1}{4}\sum_{i_{1}=0}^{1} \sum_{i_{2}=0}^{1} J^{l-1}(2n_{1}+i_{1},2n_{2}+i_{2}). \end{array} $$
  • Step 2: For every layer $l=1,2,\ldots,l_{\text{max}}$, calculate the coordinate $\mathbf{p}^{l}=(p^{l}_{1},p^{l}_{2})$ corresponding to the original reference point $\mathbf{p}^{0}$ recursively as follows:

    $$\begin{array}{@{}rcl@{}} \begin{array}{rcl} \mathbf{p}^{l} &=& \lfloor\frac{1}{2}\mathbf{p}^{l-1}\rfloor = \left(\lfloor\frac{1}{2}p^{l-1}_{1}\rfloor, \lfloor\frac{1}{2}p^{l-1}_{2}\rfloor\right), \end{array} \end{array} $$
    (12)

    where $\lfloor \mathbf{z} \rfloor$ denotes the operation that rounds each element of $\mathbf{z}$ to the nearest integer toward minus infinity.

  • Step 3: We assume that $\mathbf{q}^{l_{\text{max}}}=\mathbf{p}^{l_{\text{max}}}$ in the coarsest layer. Let $l=l_{\text{max}}-1$.

  • Step 4: From the $l$-th layer images $I^{l}(n_1,n_2)$ and $J^{l}(n_1,n_2)$, extract two small images $f^{l}(n_1,n_2)$ and $g^{l}(n_1,n_2)$ with their centers on $\mathbf{p}^{l}$ and $2\mathbf{q}^{l+1}$, respectively. The size of the image blocks is $W \times W$ pixels.

  • Step 5: Estimate the displacement between $f^{l}(n_1,n_2)$ and $g^{l}(n_1,n_2)$ using BLPOC. Let the estimated displacement vector be $\boldsymbol{\delta}^{l}$. The $l$-th layer correspondence $\mathbf{q}^{l}$ is determined as follows:

    $$\begin{array}{@{}rcl@{}} \begin{array}{rcl} \mathbf{q}^{l} &=& 2\mathbf{q}^{l+1}+\boldsymbol{\delta}^{l}. \end{array} \end{array} $$
    (13)
  • Step 6: Decrement the counter as $l \leftarrow l-1$ and repeat Steps 4 to 6 while $l \geq 0$.

  • Step 7: From the original images $I^{0}(n_1,n_2)$ and $J^{0}(n_1,n_2)$, extract two image blocks with their centers on $\mathbf{p}^{0}$ and $\mathbf{q}^{0}$, respectively. Calculate the BLPOC function between the two blocks; its peak value is obtained as a measure of reliability of the local block matching. Finally, we obtain the corresponding point pairs and their reliabilities.

In this paper, we use the parameters $l_{\text{max}}=2$, $W=48$, and $K_1/M_1=K_2/M_2=0.5$ for BLPOC. A compact sketch of the whole procedure is given below.
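The sketch reuses the blpoc() function above; bounds checking at block extraction is omitted, so reference points are assumed to lie far enough from the image borders.

```python
import numpy as np

def build_pyramid(img, l_max):
    """Step 1: 2 x 2 average pyramid; layer 0 is the original image."""
    pyr = [img.astype(float)]
    for _ in range(l_max):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        pyr.append((a[0:h:2, 0:w:2] + a[1:h:2, 0:w:2] +
                    a[0:h:2, 1:w:2] + a[1:h:2, 1:w:2]) / 4.0)
    return pyr

def block(img, center, W=48):
    """Extract a W x W block centered on `center` (no bounds checking)."""
    r, c = center
    return img[r - W // 2:r + W // 2, c - W // 2:c + W // 2]

def correspond(I, J, p0, l_max=2, W=48):
    """Steps 2-7: track reference point p0 of I through the pyramid of J.

    Returns the corresponding point q0 in J and the peak value of the
    final BLPOC, i.e., the reliability of the match.
    """
    pyr_I, pyr_J = build_pyramid(I, l_max), build_pyramid(J, l_max)
    # Step 2: reference point coordinates in each layer (Eq. (12)).
    p = [np.asarray(p0)]
    for _ in range(l_max):
        p.append(p[-1] // 2)
    q = p[l_max]                           # Step 3: coarsest layer
    for l in range(l_max - 1, -1, -1):     # Steps 4-6
        q = 2 * q                          # block center 2 q^{l+1}
        surf = blpoc(block(pyr_I[l], p[l], W), block(pyr_J[l], q, W))
        peak = np.unravel_index(np.argmax(surf), surf.shape)
        delta = np.asarray(peak) - np.asarray(surf.shape) // 2
        q = q + delta                      # Eq. (13)
    # Step 7: reliability from a final BLPOC on the original images.
    surf = blpoc(block(pyr_I[0], p[0], W), block(pyr_J[0], q, W))
    return q, surf.max()
```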

The matching score is calculated from the corresponding point pairs and their reliability. If the reliability, i.e., the peak value of the BLPOC function, is below a threshold, the corresponding point pair is removed as an outlier. We empirically confirmed that a high recognition rate is obtained when the threshold is set between 0.2 and 0.5; the best result in this paper is obtained with a threshold of 0.3. Figure 5 shows an example of correspondence matching. In the case of a genuine pair, the locations of corresponding points on the registered image reflect the deformation between the registered and input images, and the reliability of almost all corresponding point pairs exceeds the threshold. In the case of an impostor pair, on the other hand, the locations of corresponding points on the registered image are random, which means that the translational displacement between the images cannot be estimated correctly, and the reliability of almost all corresponding point pairs is below the threshold. Accordingly, the number of reliable corresponding points is used to evaluate the similarity between ROI images. The matching score $S^{i}$ for each finger is defined by

$$ S^{i} = \frac{\text{\# of corresponding point pairs}}{\text{\# of reference points}}, $$
(14)
Fig. 5

Result of phase-based correspondence matching for middle fingers. a Genuine pair and b impostor pair. The left image is the input and the right is the registered image, where red dots indicate corresponding point pairs and blue dots indicate outliers, i.e., pairs whose reliability is below the threshold

where $i=1,2,3,4$ and each index $i$ corresponds to the index, middle, ring, and little fingers, respectively.

3.4 Score fusion

The matching scores are calculated from the four finger knuckles as described above. To enhance recognition performance, the final matching score $S$ is calculated by combining all the matching scores. There are several approaches to combining matching scores [28]; we use the simple sum rule, considering both performance and computation time. The final matching score $S$ is defined by

$$ S = \sum_{i=1}^{4} S^{i}. $$
(15)
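Putting Eqs. (14) and (15) together, the scoring stage reduces to the following sketch, assuming each finger yields a list of per-reference-point reliabilities from the correspondence step; the 0.3 threshold is the value reported in Section 3.3.

```python
def finger_score(reliabilities, threshold=0.3):
    """Eq. (14): fraction of reference points whose BLPOC peak value
    (reliability) reaches the outlier threshold."""
    kept = [r for r in reliabilities if r >= threshold]
    return len(kept) / len(reliabilities)

def fused_score(per_finger_reliabilities):
    """Eq. (15): simple sum rule over the four fingers."""
    return sum(finger_score(r) for r in per_finger_reliabilities)
```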

4 Experiments and discussion

This section describes experiments conducted to evaluate the performance of MCP joint recognition using the proposed system.

A hand image database was created using the proposed system shown in Fig. 2. Images were collected from 28 subjects in two separate sessions, with an interval of more than 1 week between the first and second sessions. The size of the images is 1280 × 960 pixels as mentioned in Section 2. In each session, five images were captured from each of the left and right hands. To increase the number of combinations, we treat the left and right hands of the same subject as different subjects, and the mirror-reversed images of the left hands are used in the experiments. As a result, the database contains 560 images of 56 subjects with 10 images per subject. The number of genuine pairs is 2520 ($={}_{10}C_{2} \times 56$), and the number of impostor pairs is 154,000 ($={}_{56}C_{2} \times 10 \times 10$).
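The pair counts can be verified directly; a quick check in Python:

```python
from math import comb

subjects, images_per_subject = 56, 10
genuine = comb(images_per_subject, 2) * subjects        # 45 * 56
impostor = comb(subjects, 2) * images_per_subject ** 2  # 1540 * 100
print(genuine, impostor)  # 2520 154000
```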

Figure 6 shows examples of hand images and extracted ROI images. With the developed system, all the ROIs can be extracted correctly. In contrast, with the finger knuckle recognition systems using PIP joints, ROIs cannot always be extracted from the captured hand images, as described in [23, 24]. Therefore, the use of MCP joints enables more stable ROI extraction than that of PIP joints. We conducted two experiments: Experiment 1 uses each finger individually, and Experiment 2 uses multiple fingers. The recognition performance is evaluated by the Receiver Operating Characteristic (ROC) curve and the Equal Error Rate (EER) [1].

Fig. 6

Images in the database: images in each row are captured from the same person, and the left and right columns correspond to the 1st and 2nd sessions, respectively. The four small images below each acquired hand image are the ROI images extracted from each MCP joint, and red points indicate the detected MCP joints
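As a reference for the evaluation criteria above, a minimal sketch of how the EER can be computed from sets of genuine and impostor scores, assuming higher scores indicate a better match; the simple threshold sweep is an implementation choice.

```python
import numpy as np

def eer(genuine_scores, impostor_scores):
    """Equal Error Rate: the operating point where the false reject rate
    on genuine scores equals the false accept rate on impostor scores."""
    g = np.asarray(genuine_scores)
    i = np.asarray(impostor_scores)
    thresholds = np.sort(np.concatenate([g, i]))
    frr = np.array([(g < t).mean() for t in thresholds])
    far = np.array([(i >= t).mean() for t in thresholds])
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2
```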

The performance of the proposed method is compared with conventional finger knuckle matching methods: BLPOC [2, 5], CompCode [29], and LGIC [12]. BLPOC was used for PIP joints in [5] and for MCP joints in [2]; the BLPOC function between two ROI images is calculated by Eq. (11), and its maximum peak value is taken as the matching score. CompCode (competitive code), proposed by Kong et al. [30], is generated by applying a bank of Gabor filters with different orientation parameters; ROI images are coded by the orientation having the maximum response at each pixel, and the matching score is calculated by the Hamming distance. LGIC (local-global information combination) combines BLPOC and CompCode: BLPOC is used to extract global features, while CompCode is used to extract local features. The translational displacement between ROI images is estimated by BLPOC, and the common areas are extracted according to the estimated displacement. A global matching score between the common areas is calculated by BLPOC, while a local matching score is calculated by CompCode, and the final matching score is a weighted sum of the two. Kumar et al. [2] reported that BLPOC exhibited the best performance in finger knuckle recognition of MCP joints in their fundamental investigation, while Zhang et al. [12] demonstrated that LGIC outperformed BLPOC in finger knuckle recognition of PIP joints. Hence, we compare the proposed method with BLPOC, CompCode, and LGIC.
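For illustration, a simplified competitive-coding sketch in the spirit of Kong et al. [30]; the Gabor parameters and the angular distance used here (in place of the bitwise Hamming distance of the original scheme) are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(theta, sigma=5.0, freq=0.1, size=35):
    """Real part of a 2D Gabor filter at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def comp_code(roi, n_orient=6):
    """Code each pixel by the orientation with the strongest
    (most negative) Gabor response, i.e., the dominant line direction."""
    thetas = np.arange(n_orient) * np.pi / n_orient
    responses = [ndimage.convolve(roi.astype(float), gabor_kernel(t))
                 for t in thetas]
    return np.argmin(np.stack(responses), axis=0)

def comp_code_distance(code_a, code_b, n_orient=6):
    """Normalized angular distance between two orientation maps
    (a simplified stand-in for the bitwise Hamming distance)."""
    d = np.abs(code_a - code_b)
    d = np.minimum(d, n_orient - d)  # orientations wrap around
    return d.mean() / (n_orient // 2)
```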

4.1 Experiment 1

Experiment 1 evaluates recognition performance for each finger: the index, middle, ring, and little fingers. Figure 7 shows the ROC curves for each finger, and Table 3 summarizes the EERs for each finger and each matching method. BLPOC exhibits low performance for the index, ring, and little fingers, although it showed good performance on MCP joint pattern recognition in [2]. The global BLPOC-based methods [2, 5] can handle only the translational displacement between images, so recognition performance decreases when there is nonlinear deformation between ROI images due to hand pose changes. CompCode [29] exhibits the worst performance for the index, middle, and ring fingers, since it can handle only small translational displacements between images. LGIC [12] shows better performance than BLPOC and CompCode, since it combines the two. The proposed method using phase-based correspondence matching exhibits the best performance for all the fingers, since phase-based correspondence matching can take nonlinear image deformation into account. The EER of the little finger is the highest for all the methods: the ROI image of the little finger includes larger perspective deformation than those of the other fingers, because the position of the little finger is unstable, so even a small hand pose change causes large deformation. In the case of PIP joints, the EERs of the index and little fingers are higher than those of the other fingers, since image deformation of PIP joints on the index and little fingers is larger than that of MCP joints.

Fig. 7

ROC curves for each finger in Experiment 1. a Index finger. b Middle finger. c Little finger. d Ring finger

Table 3 EERs [%] for each finger knuckle recognition algorithm in Experiment 1

4.2 Experiment 2

Experiment 2 evaluates recognition performance for combinations of 2 to 4 adjacent fingers: (i) index and middle fingers, (ii) middle and ring fingers, (iii) ring and little fingers, (iv) index, middle, and ring fingers, (v) middle, ring, and little fingers, and (vi) all four fingers. Figure 8 shows the ROC curves for each combination, and Table 4 summarizes the EERs for each combination and each matching method. Note that recognition performance when combining the little finger with other fingers was not evaluated in [23]: the extraction rate of ROIs in [23] was 46% for index fingers, 86% for middle fingers, 84.2% for ring fingers, and 27.8% for little fingers, so the number of genuine pairs involving little fingers was too small for evaluation. The fused matching score is calculated by the sum rule as mentioned in Section 3.4. Combining multiple fingers improves recognition performance compared with using a single finger, and combining three or more finger knuckles improves it significantly. In all cases, the recognition performance of the proposed method is the highest. The EER of the proposed method when combining four MCP joints is 2.36% as shown in Table 4. In the case of PIP joints, the EER was 1.54% when combining middle and ring fingers [23] and 2.08% when combining four fingers [24], although not all ROIs could be extracted from the hand images: the number of genuine pairs combining middle and ring fingers in [23] was 1166, i.e., 64.78% of all possible genuine combinations, and the number of genuine pairs combining four fingers in [24] was 1901, i.e., 84.49% of all possible genuine combinations. The advantage of the proposed method over [23] and [24] is that ROIs can be extracted from all the fingers and the matching score can be calculated from the combination of all the fingers. This advantage is important for developing a user-friendly person authentication system, since the conventional methods [23, 24] may need multiple image acquisitions even for an authorized user in order to extract ROIs. Therefore, the use of MCP joints enables more stable and reliable person authentication than that of PIP joints, owing to its ROI extraction and matching performance.

Fig. 8

ROC curves for each combination of fingers. a Index and middle fingers. b Middle and ring fingers. c Ring and little fingers. d Index, middle, and ring fingers. e Middle, ring, and little fingers. f All four fingers

Table 4 EERs [%] for each matching algorithm in Experiment 2

We also conducted another experiment that evaluates recognition performance when the database is separated into the 1st and 2nd sessions. Table 5 summarizes the EERs in this experiment. The EERs are lower than those obtained when images from both sessions are used together. This result indicates that hand pose differs significantly between the 1st and 2nd sessions even for the same person. To improve recognition performance under hand pose variation, we need to introduce geometric correction in preprocessing and employ a matching algorithm robust against large image deformation.

Table 5 EERs [%] for each matching algorithm in 1st session (upper) and 2nd session (lower)

4.3 Computation time

The computation time of the proposed algorithm is evaluated using MATLAB R2013a on an Intel Core i5-4250U (1.3 GHz). The computation times for ROI extraction and ROI matching are 141 ms and 91 ms, respectively.

5 Conclusion

This paper proposed a person authentication system using MCP joints for door security. The proposed system can be constructed by attaching a camera to a door handle, so it can be applied to existing doors with simple installation, in contrast to the conventional door handle systems using PIP joints, which need a camera embedded in the door. ROI images around each MCP joint can be extracted from a single still image, since the MCP joints face the camera directly. ROI images captured at different times include deformation due to hand pose changes; the use of phase-based correspondence matching makes it possible to calculate reliable matching scores under such deformation, unlike the conventional methods. Through a set of experiments, we demonstrated that the proposed system achieves efficient MCP recognition performance. Person authentication using finger knuckles may be difficult to introduce into high-security access applications such as border control, since further investigation is required to demonstrate the uniqueness and distinctiveness of finger knuckle patterns. On the other hand, this paper has shown the potential of minor finger knuckles for biometric recognition, and the proposed system should be acceptable for commercial applications such as building access control due to its convenience. In the future, we will develop a multiple finger knuckle recognition system that employs both major and minor finger knuckles.

A preliminary version of this paper was presented at ACPR 2015 [32].

References

  1. Jain AK, Flynn P, Ross AA (2008) Handbook of biometrics. Springer, US.

  2. Kumar A, Xu Z (2014) Can we use second minor finger knuckle patterns to identify humans? Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit Workshops: 106–112.

  3. Ravikanth C, Kumar A (2007) Biometric authentication using finger-back surface. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit: 1–6.

  4. Kumar A, Ravikanth C (2009) Personal authentication using finger knuckle surface. IEEE Trans Inf Forensic Secur 4(1): 98–110.

  5. Zhang L, Zhang L, Zhang D (2009) Finger-knuckle-print verification based on band-limited phase-only correlation. Lect Notes Comput Sci (CAIP2009) 5702: 141–148.

  6. Kumar A, Zhou Y (2009) Personal identification using finger knuckle orientation features. Electron Lett 45(20): 1023–1025.

  7. Zhang L, Zhang L, Zhang D, Zhu H (2010) Online finger-knuckle-print verification for personal authentication. Pattern Recog 43: 2560–2571.

  8. Morales A, Travieso CM, Ferrer MA, Alonso JB (2011) Improved finger-knuckle-print authentication based on orientation enhancement. Electron Lett 47(6): 380–381.

  9. Le-qing Z (2011) Finger knuckle print recognition based on SURF algorithm. Proc Int'l Conf Fuzzy Syst Knowl Discov: 1879–1883.

  10. Badrinath GS, Nigam A, Gupta P (2011) An efficient finger-knuckle-print based recognition system fusing SIFT and SURF matching scores. Proc Int'l Conf Inf Commun Secur: 374–387.

  11. Xiong M, Yang W, Sun C (2011) Finger-knuckle-print recognition using LGBP. Proc Int'l Conf Adv Neural Netw Part II: 270–277.

  12. Zhang L, Zhang L, Zhang D, Zhu H (2011) Ensemble of local and global information for finger-knuckle-print recognition. Pattern Recognit 44: 1990–1998.

  13. Zhang L, Li H, Shen Y (2011) A novel Riesz transforms based coding scheme for finger-knuckle-print recognition. Proc Int'l Conf Hand-Based Biom: 1–6.

  14. Shariatmadar ZS, Faez K (2011) An efficient method for finger-knuckle-print recognition based on information fusion. Proc Int'l Conf Signal Image Process Appl: 210–215.

  15. Zhang L, Zhang L, Zhang D, Guo Z (2012) Phase congruency induced local features for finger-knuckle-print recognition. Pattern Recog 45: 2522–2531.

  16. Gao G, Zhang L, Yang Y, Zhang L, Zhang D (2013) Reconstruction based finger-knuckle-print verification with score level adaptive binary fusion. IEEE Trans Image Process 22(12): 5050–5062.

  17. Aoyama S, Ito K, Aoki T (2014) A finger-knuckle-print recognition algorithm using phase-based local block matching. Inform Sci 268: 53–64.

  18. PolyU FKP Database. http://www4.comp.polyu.edu.hk/~biometrics/. Accessed 8 Apr 2016.

  19. Cheng KY, Kumar A (2012) Contactless finger knuckle identification using smartphones. Proc Int'l Conf Biom Spec Interest Group: 1–6.

  20. Burge MJ, Bowyer K (2013) Handbook of iris recognition. Springer-Verlag, London.

  21. Kong A, Zhang D, Kamel M (2009) A survey of palmprint recognition. Pattern Recog 42(7): 1408–1418.

  22. Ito K, Nakajima H, Kobayashi K, Aoki T, Higuchi T (2004) A fingerprint matching algorithm using phase-only correlation. IEICE Trans Fundam E87-A(3): 682–691.

  23. Aoyama S, Ito K, Aoki T (2013) A multi-finger knuckle recognition system for door handle. Proc Int'l Conf Biom Theory Appl Syst O-18: 1–7.

  24. Kusanagi D, Aoyama S, Ito K, Aoki T (2014) Multi-finger knuckle recognition from video sequence: extracting accurate multiple finger knuckle regions. Proc Int'l Joint Conf Biom: 1–8.

  25. Kumar A (2012) Can we use minor finger knuckle images to identify humans? Proc Int'l Conf Biom Theory Appl Syst: 55–60.

  26. Ito K, Iitsuka S, Aoki T (2009) A palmprint recognition algorithm using phase-based correspondence matching. Proc Int'l Conf Image Process: 1977–1980.

  27. Gonzalez RC, Woods RE (1992) Digital image processing. Pearson Education, New Jersey.

  28. Ross AA, Nandakumar K, Jain AK (2006) Handbook of multibiometrics. Springer, US.

  29. Zhang L, Zhang L, Zhang D (2009) Finger-knuckle-print: a new biometric identifier. Proc Int'l Conf Image Process: 1981–1984.

  30. Kong AW-K, Zhang D (2004) Competitive coding scheme for palmprint verification. Proc Int'l Conf Pattern Recog 1: 520–523.

  31. Flea 3 1.3 MP Color USB3 Vision, Point Grey Research Inc. https://www.ptgrey.com/flea3-13-mp-color-usb3-vision-e2v-ev76c560-camera. Accessed 8 Apr 2016.

  32. Kusanagi D, Aoyama S, Ito K, Aoki T (2015) A person authentication system using second minor finger knuckles for door handle. Proc Asian Conf Pattern Recog OS9-01: 1–5.


Acknowledgements

This work was supported in part by JSPS KAKENHI Grant Number 15H02721.

Authors’ contributions

DK carried out this study, made a database, performed the experiments, and drafted the manuscript. SA carried out this study, performed the experiments and their analysis, and helped to draft the manuscript. KI conceived of the study, performed the analysis of the experimental results, and drafted the manuscript. TA participated in the design and coordination of this study and helped to draft the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information


Corresponding author

Correspondence to Koichi Ito.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Kusanagi, D., Aoyama, S., Ito, K. et al. A practical person authentication system using second minor finger knuckles for door security. IPSJ T Comput Vis Appl 9, 8 (2017). https://doi.org/10.1186/s41074-017-0016-5
