A novel framework for cross-spectral iris matching
© The Author(s) 2016
Received: 8 April 2016
Accepted: 14 October 2016
Published: 5 November 2016
Previous work on iris recognition has focused on visible light (VL) imaging, near-infrared (NIR) imaging, or their fusion. However, only a limited number of works have investigated cross-spectral matching or compared iris biometric performance under the VL and NIR spectra using unregistered iris images taken from the same subject. To the best of our knowledge, this is the first work to propose a framework for cross-spectral iris matching using unregistered iris images. To this end, three descriptors are proposed, namely Gabor-difference of Gaussian (G-DoG), Gabor-binarized statistical image features (G-BSIF), and Gabor-multi-scale Weberface (G-MSW), to achieve robust cross-spectral iris matching. In addition, we explore the differences in iris recognition performance across the VL and NIR spectra. The experiments are carried out on the UTIRIS database, which contains iris images acquired in both the VL and NIR spectra for the same subjects. Experimental and comparison results demonstrate that the proposed framework achieves state-of-the-art cross-spectral matching performance. The results also indicate that VL and NIR images provide complementary features of the iris pattern, and that their fusion notably improves recognition performance.
Among the various traits used for human identification, the iris pattern has gained an increasing amount of attention for its accuracy, reliability, and noninvasive characteristics. In addition, iris patterns possess a high degree of randomness and uniqueness which is true even between identical twins, and the iris remains constantly stable throughout an adult’s life [1, 2].
The initial pioneering work on iris recognition, which is the basis of many functioning commercial systems, was conducted by Daugman. The performance of iris recognition systems is impressive, as demonstrated by Daugman, who reported false acceptance rates of only 10⁻⁶ in a study of 200 billion cross-comparisons. Additionally, the potential of iris biometrics has been affirmed by tests involving 1.2 trillion comparisons carried out by the National Institute of Standards and Technology (NIST), which confirmed that iris biometrics offers the best balance between accuracy, template size, and speed among biometric traits.
Iris recognition technology is nowadays widely deployed in various large-scale applications such as the border-crossing system in the United Arab Emirates, the Mexican national ID program, and the Unique Identification Authority of India (UIDAI) project. As a case in point, more than one billion residents have been enrolled in the UIDAI project, where about 10¹⁵ all-to-all check operations are carried out daily for identity de-duplication using iris biometrics as the main modality [5, 6].
Nearly all currently deployed iris recognition systems operate predominately in the near-infrared (NIR) spectrum capturing images at 800–900 nm wavelength. This is because there are fewer reflections coming from the cornea and the dark pigmented irides look clearer under the NIR light. In addition, external factors such as shadows and diffuse reflections become less under NIR light [7, 8].
The color of the iris is governed by the relative concentration of two melanin pigments: eumelanin (black/brown) and pheomelanin (red/yellow). Dark-pigmented irides have a high concentration of eumelanin. Because eumelanin strongly absorbs visible light (VL), the stromal features of such irides are revealed only under NIR and remain hidden in VL, so the information related to texture rather than pigmentation is captured. On the other hand, pheomelanin is dominant in light-pigmented irides. Capturing such irides under NIR light eliminates most of the rich pheomelanin information because the chromophore of the human iris is visible only under VL [8, 9]. Consequently, capturing iris images under different lighting conditions reveals different textural information.
Research in VL iris recognition has been gaining more attention in recent years due to the interest in iris recognition at a distance [10, 11]. In addition, competitions such as the Noisy Iris Challenge Evaluation (NICE)  and the Mobile Iris Challenge Evaluation  focus on the processing of VL iris images. This attention to visible wavelength-based iris recognition is boosted by several factors such as (1) visible range cameras can acquire images from long distance and they are cheaper than NIR cameras and (2) surveillance systems work in the visible range by capturing images of the body, face, and iris which could be used later for authentication .
Since both VL and NIR iris recognition systems are now widely deployed, studying the performance difference of iris recognition systems exploiting NIR and VL images is important because it gives insight into the essential features in each wavelength which in turn helps to develop a robust automatic identification system. On the other hand, cross-spectral iris recognition is essential in security applications when matching images from different lighting conditions is desired.
In this paper, we therefore propose a method for cross-spectral iris image matching. To the best of our knowledge, this attempt is among the first in the literature to investigate the problem of VL to NIR iris recognition (and vice versa) using unregistered iris images belonging to the same subject. In addition, we investigate the difference in iris recognition performance between NIR and VL imaging. In particular, we examine iris performance in each channel (red, green, blue, and NIR) and the feasibility of cross-channel authentication (i.e., NIR vs. VL). Furthermore, we enhance iris recognition performance through multi-channel fusion.
The main contributions of this work are:
- A novel framework for cross-spectral iris recognition capable of matching unregistered iris images captured under different lighting conditions
- Filling the gap in multi-spectral iris recognition by exploring the performance difference in iris biometrics under NIR and VL imaging
- Boosting iris recognition performance with multi-channel fusion
The rest of this paper is organized as follows: related works are given in Section 2. The proposed framework for cross-spectral iris matching is explained in Section 3. Section 4 presents the experimental results and the discussion while Section 5 concludes this paper.
2 Related work
Iris recognition technology has witnessed rapid development over the last decade, driven by its wide range of applications. At the outset, Daugman proposed the first working iris recognition system, which was later adopted by several commercial companies such as IBM, Iridian, and Oki. In this work, the integro-differential operator is applied for iris segmentation and 2D Gabor filters are utilized for feature extraction, while Hamming distance scores serve as the comparator. The second algorithm is due to Wildes, who applied the Hough transform for localizing the iris and the Laplacian pyramid to encode the iris pattern. However, this algorithm is computationally demanding.
Another interesting approach was proposed by Sun and Tan  exploiting ordinal measures for iris feature representation. Unlike the traditional approaches that use quantitative values, the ordinal measure focuses on qualitative values to represent features. The multi-lobe differential filters have been applied for iris feature extraction to generate a 128-byte ordinal code for each iris image. Then, the error rates have been calculated based on the measured Hamming distances between two ordinal templates of the same class.
All the previous work assessed iris recognition performance under NIR. The demand for more accurate and robust biometric systems has increased with the expanded deployment of large-scale national identity programs. Hence, researchers have investigated iris recognition performance under different wavelengths or the possibility of fusing NIR and VL iris images to enhance recognition performance. Nevertheless, inspecting the correlation of NIR and VL iris images has been understudied, and the problem of cross-spectral iris recognition is still unsolved.
Boyce et al.  explored iris recognition performance under different wavelengths on a small multi-spectral iris database consisting of 120 images from 24 subjects. According to the authors, higher accuracy was achieved for the red channel compared to the green and blue channels. The study also suggested that cross-channel matching is feasible. However, the iris images were fully registered and captured under ideal conditions. In , the authors employed a feature fusion approach to enhance the recognition performance of iris images captured under both VL and NIR. The wavelet transform and discrete cosine transform were used for feature extraction, while the features were combined with the ordered weighted average method to enhance performance.
In Ngo et al. , a multi-spectral iris recognition system was implemented employing eight wavelength bands ranging from 405 to 1550 nm. The results on a database of 392 iris images showed that the best performance was achieved at a wavelength of 800 nm. Cross-spectral experiments demonstrated that performance degraded as the wavelength difference grew. Ross et al.  explored the performance of iris recognition at wavelengths beyond 900 nm. In their experiments, they investigated the possibility of observing different iris structures under different wavelengths and the potential of multi-spectral fusion for enhancing iris recognition performance. Similarly, Ives et al.  examined the performance of iris recognition over a wide range of wavelengths between 405 and 1070 nm. The study suggests that illumination wavelength has a significant effect on iris recognition performance. Hosseini et al.  proposed a feature extraction method for iris images taken under VL using a shape analysis method. A potential improvement in recognition performance was reported when combining features from NIR and VL iris images taken from the same subject.
Recently, Alonso-Fernandez et al.  conducted comparisons on the iris and periocular modalities and their fusion under NIR and VL imaging. However, the images were not taken from the same subjects as the experiments were carried out on different databases (three databases contained close-up NIR images, and two others contained VL images). Unfortunately, this may not give an accurate indication about the iris performance as the images do not belong to the same subject. In , the authors suggested enhancing iris recognition performance in non-frontal images through multi-spectral fusion of iris pattern and scleral texture. Since the scleral texture is better seen in VL and the iris pattern is observed in NIR, multi-spectral fusion could improve the overall performance.
In terms of cross-spectral iris matching, the authors in  proposed an adaptive method to predict the NIR channel image from VL iris images using neural networks. Similarly, Burge and Monaco [23, 24] proposed a model to predict NIR iris images using features derived from the color and structure of the visible light iris images. Although the aforementioned approaches ([14, 23, 24]) achieved good results, their methods require the iris images to be fully registered. Unfortunately, this is not applicable in reality because it is very difficult to capture registered iris images from the same subject simultaneously.
In our previous work , we explored the differences in iris recognition performance across the VL and NIR spectra. In addition, we investigated the possibility of cross-channel matching between VL and NIR imaging. Cross-spectral matching turned out to be challenging, with an equal error rate (EER) larger than 27 %. Lately, Ramaiah and Kumar  emphasized the need for cross-spectral iris recognition, introduced a database of registered iris images (not yet publicly available), and conducted experiments on iris recognition performance under both NIR and VL. Their cross-spectral matching results showed an EER larger than 34 %, which confirms the challenge of cross-spectral matching. The authors concluded: “it is reasonable to argue that cross-spectral iris matching seriously degrades the iris matching accuracy”.
3 Proposed cross-spectral iris matching framework
Matching iris images captured in VL and NIR is a challenging task because there are considerable differences among images pertaining to different wavelength bands. Although iris images from different spectra look different, the underlying structure is the same because they belong to the same person. Therefore, we exploited various photometric normalization techniques and descriptors to alleviate these differences. In this context, we employed the binarized statistical image features (BSIF) descriptor  and DoG filtering, in addition to a collection of photometric normalization techniques available in the INface Toolbox [28, 29]: adaptive single scale retinex, non-local means, wavelet-based normalization, homomorphic filtering, multi-scale quotient, Tan and Triggs normalization, and multi-scale Weberface (MSW).
Among these illumination normalization techniques and descriptors, DoG, BSIF, and MSW were observed to reduce the cross-spectral variations of the iris. These models are described in the next subsections.
3.1 Difference of Gaussian (DoG)
The DoG filter is formed as the difference of two Gaussian kernels with standard deviations σ0 and σ1, where σ0 < σ1, so that the result is a bandpass filter. The values of σ0 and σ1 are empirically set to 1 and 2, respectively. The DoG filter has low computational complexity and is able to alleviate illumination variation and aliasing. As there are frequency variations between VL and NIR images, the DoG filter is effective because it suppresses these variations and alleviates noise and aliasing, which paves the way for better cross-spectral matching .
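As a concrete illustration, the DoG filtering step can be sketched in a few lines. This is a minimal sketch assuming a grayscale input and the σ values stated above; the paper's exact implementation may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(img, sigma0=1.0, sigma1=2.0):
    """Difference-of-Gaussian band-pass filtering with sigma0 < sigma1,
    matching the empirical settings of 1 and 2 in the text."""
    img = img.astype(np.float64)
    # Subtracting the wider Gaussian removes low-frequency illumination
    # while the narrower one suppresses high-frequency noise/aliasing.
    return gaussian_filter(img, sigma0) - gaussian_filter(img, sigma1)
```

Because the filter is band-pass, a constant (purely low-frequency) image maps to zero, which is exactly the illumination-suppression behaviour described above.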
3.2 Binarized statistical image features (BSIF)
BSIF  has been employed due to its ability to tolerate image degradations such as rotation and blurring . Generally speaking, feature extraction methods usually filter the image with a set of linear filters and then quantize the filter responses. In this context, the BSIF filters are learned by exploiting the statistics of natural images rather than being built manually. This has yielded promising results for classifying texture in different biometric traits [31, 32].
The binarized feature b_i is obtained from the response value s_i by setting b_i = 1 if s_i > 0 and b_i = 0 otherwise. The filters are learned from natural images using independent component analysis by maximizing the statistical independence of the s_i. Two parameters control the BSIF descriptor: the number of filters (the length n of the bit string) and the filter size l. In our approach, we used the default set of filters, which were learned from 5000 patches. Empirical results demonstrated that a filter size of 7×7 with 8 bits gives the best results.
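The binarization rule b_i = 1 if s_i > 0 can be sketched as follows. Note that the true BSIF filters are ICA-learned from natural-image patches and distributed with the original BSIF code; the random zero-mean filters below are purely a stand-in for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

def bsif_encode(img, filters):
    """Binarize a stack of linear filter responses into a per-pixel
    n-bit code: bit i is set where response s_i > 0.
    `filters` has shape (n_bits, l, l)."""
    img = img.astype(np.float64)
    code = np.zeros(img.shape, dtype=np.uint16)
    for i, f in enumerate(filters):
        s = convolve(img, f)                        # filter response s_i
        code |= (s > 0).astype(np.uint16) << i      # b_i = 1 iff s_i > 0
    return code

# Stand-in for the 8-bit, 7x7 learned filter set used in the paper
# (hypothetical random filters, NOT the real ICA-learned ones).
rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 7, 7))
filters -= filters.mean(axis=(1, 2), keepdims=True)  # zero-mean, like ICA filters
```

With 8 filters the per-pixel code is an 8-bit integer, so the resulting map can be histogrammed or binarized directly for matching.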
3.3 Multi-scale Weberfaces (MSW)
Inspired by Weber’s law, which states that the ratio of the increment threshold to the background intensity is constant , the authors in  showed that the ratio between the local intensity of a pixel and its surrounding variations is constant. Hence, in , the face image is represented by its reflectance, and the illumination factor is normalized and removed using the Weberface model. Following this, we applied the Weberface model to the iris images to remove the illumination variations that result from the differences between VL and NIR imaging, thus making the iris images illumination invariant.
Following the works of [28, 29], the Weberface algorithm has been applied at three scales with the following values: σ = [1, 0.75, 0.5], Neighbor = [9, 25, 49], and alpha = [2, 0.2, 0.02]. The steps of the Weberface algorithm are listed in Algorithm 1.
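A minimal sketch of the Weberface computation with the parameters above follows, using the standard form W = arctan(α Σ (I − I_neighbour) / I). The simple averaging of the three scales is an assumption; the INface Toolbox may combine scales differently.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, generic_filter

def weberface(img, sigma, nn, alpha):
    """Single-scale Weberface: W = arctan(alpha * sum_i (I - I_i) / I)
    over an nn-pixel neighbourhood, after Gaussian smoothing with sigma."""
    img = gaussian_filter(img.astype(np.float64), sigma) + 1e-6  # avoid /0
    k = int(np.sqrt(nn))  # 9 -> 3x3, 25 -> 5x5, 49 -> 7x7 window
    def weber(win):
        c = win[len(win) // 2]                       # centre pixel
        return np.arctan(alpha * np.sum((c - win) / c))
    return generic_filter(img, weber, size=k)

def multi_scale_weberface(img, sigmas=(1, 0.75, 0.5),
                          neighbours=(9, 25, 49), alphas=(2, 0.2, 0.02)):
    """Combine the three single-scale Weberfaces by averaging
    (one plausible fusion; the toolbox's exact rule may differ)."""
    return np.mean([weberface(img, s, n, a)
                    for s, n, a in zip(sigmas, neighbours, alphas)], axis=0)
```

Since arctan is bounded, the output lies in (−π/2, π/2) regardless of the input's illumination level, which is what makes the representation illumination invariant.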
3.4 Proposed scheme
Unlike previous works [14, 23, 24], which require fully registered iris images and learn models that generalize poorly, our framework requires no training and operates on unregistered iris images. This combination, along with its decision-level fusion, achieved encouraging results as illustrated in the next section.
4 Results and discussion
In this work, our aim is to ascertain true cross-spectral iris matching using images taken from the same subject under the VL and NIR spectra. In addition, we investigate the iris biometric performance under different imaging conditions and the fusion of VL+NIR images to boost the recognition performance. The recognition performance is measured with the EER and the receiver operating characteristic (ROC) curves.
The experiments are conducted on the UTIRIS database  from the University of Tehran. This database contains two sessions with 1540 images; the first session was captured under VL while the second session was captured under NIR. Each session has 770 images taken from the left and right eye of 79 subjects where each subject has an average of five iris images.
4.2 Pre-processing and feature extraction
Typically, an iris recognition system operates by extracting and comparing the pattern of the iris in the eye image. These operations involve five main steps, namely image acquisition, iris segmentation, normalization, feature extraction, and matching .
The UTIRIS database includes two types of iris images: half are captured in the NIR spectrum while the other half are captured in the VL spectrum. The VL session contains images in the sRGB color space, which are then decomposed into the red, green, and blue channels. To segment the iris in the eye image, the circular Hough transform (CHT) is applied; since the images used in our experiments were captured under a controlled environment, they can be segmented with circular approaches [36, 37].
After feature extraction, the Hamming distance is used to find the similarity between two IrisCodes in order to decide if the vectors belong to the same person or not. Then, the ROC curves and the EER are used to judge the iris recognition performance for the images in each channel as illustrated in the next subsections.
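A minimal sketch of the fractional Hamming distance comparison follows; mask handling conventions vary between implementations, so the masking shown here is one common choice rather than the paper's exact code.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid in both (optional) masks."""
    code_a, code_b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    valid = np.ones(code_a.shape, bool)
    if mask_a is not None:
        valid &= np.asarray(mask_a, bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, bool)
    disagree = (code_a ^ code_b) & valid      # XOR: bits that differ
    return disagree.sum() / max(valid.sum(), 1)
```

A score of 0 means identical codes and 0.5 is the expectation for two statistically independent codes, so genuine comparisons cluster well below 0.5.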
4.3 NIR vs. VL performance
For feature extraction, the normalized iris image is convolved with the 1D log-Gabor filter to extract the features where the output of the filter is phase quantized to four levels to form the binary iris vector .
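The encoding step can be sketched as follows; the wavelength and bandwidth parameters below are illustrative assumptions rather than the tuned values used in the experiments.

```python
import numpy as np

def log_gabor_encode(rows, wavelength=18, sigma_on_f=0.5):
    """Encode each row of a normalised iris image with a 1D log-Gabor
    filter, G(f) = exp(-(log(f/f0))^2 / (2*(log(sigma_on_f))^2)),
    and quantise the response phase to four levels (2 bits per pixel)."""
    n = rows.shape[1]
    freqs = np.fft.fftfreq(n)
    f0 = 1.0 / wavelength                     # centre frequency
    gabor = np.zeros(n)
    pos = freqs > 0                           # zero DC and negative freqs
    gabor[pos] = np.exp(-(np.log(freqs[pos] / f0) ** 2) /
                        (2 * np.log(sigma_on_f) ** 2))
    # Filter every row in the frequency domain; the complex (analytic)
    # response carries the phase information to be quantised.
    resp = np.fft.ifft(np.fft.fft(rows, axis=1) * gabor, axis=1)
    # Phase quadrant -> 2 bits per sample: sign of real and imaginary parts.
    return np.stack([resp.real > 0, resp.imag > 0], axis=-1)
```

Each pixel thus contributes two bits of the binary iris vector, which is what the Hamming-distance comparator operates on.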
EER (%) of different channel comparisons on the UTIRIS database
4.4 Light-eyed vs. dark-eyed
As mentioned before, capturing iris images under NIR light eliminates most of the rich melanin information because the chromophore of the human iris is only visible under VL [8, 9]. Therefore, light-pigmented irides exhibit more information under visible light. Figure 2 shows a green-yellow iris image captured under NIR and VL. It can be seen that the red channel reveals more information than the NIR image. So, intuitively, the recognition performance would be better for such images in the VL rather than the NIR spectrum.
On the contrary, with dark-pigmented irides, stromal features of the iris are only revealed under NIR and they become hidden in VL so the information related to the texture is revealed rather than the pigmentation as shown in Fig. 2. Therefore, the recognition performance for the dark-pigmented irides would give better results if the images were captured under NIR spectrum.
4.5 Cross-spectral experiments
Cross-spectral study is important because it shows the feasibility of performing iris recognition in several security applications such as information forensics, security surveillance, and hazard assessment. Typically a person’s iris images are captured under NIR but most of the security cameras operate in the VL spectrum. Hence, NIR vs. VL matching is desired.
Among the VL channels, the red channel gave the best performance compared to the green and blue channels. This can be attributed to the small wavelength gap between the red channel (780 nm) and the NIR band (850 nm). Therefore, the red vs. NIR comparison is considered the baseline for cross-spectral matching. Table 1 shows the EER of the cross-channel matching experiments.
4.5.1 Cross-spectral matching
Cross-spectral matching turned out to be a challenging task, with an EER >27 %, which is attributable to matching unregistered iris images from different spectral bands. Hence, to achieve efficient cross-spectral matching, adequate transformations before feature extraction are needed.
Experiments were conducted on different descriptors for cross-spectral matching:
- LBP (different combinations)
- Adaptive single scale retinex
- Non-local means normalization
- Multi-scale self quotient
- Tan and Triggs normalization
For all cross-spectral experiments, we adopted the leave-one-out approach to obtain the comparison results . Hence, for each subject with m iris samples, we set one sample as the probe, and the comparison is repeated iteratively by swapping the probe with each of the remaining m − 1 samples. The experiments for each subject are thus repeated m(m − 1)/2 times, and the final performance is measured in terms of EER by taking the minimum of the comparison scores obtained for each subject.
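The pairing described above amounts to enumerating all m(m − 1)/2 genuine comparisons per subject, which can be sketched as:

```python
from itertools import combinations

def genuine_pairs(samples):
    """All-pairs genuine comparisons for one subject: with m samples
    this yields m*(m-1)/2 probe/gallery index pairs, as in the protocol."""
    return list(combinations(range(len(samples)), 2))

def subject_score(scores):
    """Final per-subject score: the minimum (best) comparison score,
    as stated in the text."""
    return min(scores)
```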
4.5.2 Cross-spectral fusion
To further enhance the performance of cross-spectral matching, the fusion of G-DoG, G-BSIF, and G-MSW is considered. Different fusion methods are investigated, namely feature fusion, score fusion, and decision fusion, of which decision fusion is observed to be the most effective.
Experiments were conducted on different fusion strategies for cross-spectral matching:
- Score fusion (min)
- Decision fusion (AND)
A low false accept rate (FAR) is preferred to achieve a secure biometric system. To enhance the performance of our system and reduce the FAR, a fusion at the decision level is performed. Thus, the conjunction “AND” rule is used to combine the decisions from the G-DoG, G-BSIF, and G-MSW. This means that a false accept can only happen when all the previous descriptors produce a false accept .
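Assuming the three descriptors make approximately independent errors, the AND rule combines their error rates as:

```latex
\mathrm{FAR}_{\mathrm{AND}} = \prod_{i=1}^{3} \mathrm{FAR}_{i},
\qquad
\mathrm{FRR}_{\mathrm{AND}} = 1 - \prod_{i=1}^{3} \left(1 - \mathrm{FRR}_{i}\right)
```

The fused false acceptance rate is the product of three small probabilities, while a rejection by any single descriptor rejects the claim.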
It can be seen from the previous equations that the joint probability of false rejection increases while the joint probability of false acceptance decreases when using the AND conjunction rule.
All the previous descriptors (G-DoG, G-BSIF, and G-MSW) are considered local descriptors. It can be argued that fusing local and global features could enhance the performance further. We wish to remark that fusing local and global features would require further stages to combine the resultant global and local scores, as they would be of different ranges/types . Such stages would increase the complexity of the cross-spectral framework. We have carefully designed the proposed framework so that all three descriptors (G-DoG, G-BSIF, and G-MSW) generate homogeneous scores (binary templates). Therefore, a single comparator (the Hamming distance) can be used for score matching.
4.6 Multi-spectral iris recognition
The VL and NIR images in the UTIRIS database are not registered. Therefore, they provide different iris texture information. The cross-channel comparisons demonstrated that the red and NIR channels are the most suitable candidates for fusion, as they gave the lowest EER compared to the other channels (Figs. 4 and 6), so it is natural to fuse them to boost recognition performance. Score-level fusion is adopted in this paper due to its efficiency and low complexity . Hence, we combined the matching scores (Hamming distances) from the red and NIR images using sum rule-based fusion with equal weights to generate a single matching score. The recognition performance is then evaluated again with ROC curves and the EER.
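A minimal sketch of the equal-weight sum rule on two Hamming-distance scores follows; no score normalization step is assumed, since both comparators already output fractional Hamming distances in [0, 1].

```python
import numpy as np

def sum_rule_fusion(score_red, score_nir):
    """Equal-weight sum-rule fusion of red-channel and NIR matching
    scores, as described for the red + NIR combination."""
    return 0.5 * np.asarray(score_red) + 0.5 * np.asarray(score_nir)
```

Because both inputs are distances, the fused value is also a distance in [0, 1] and can be thresholded exactly like a single-channel score.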
4.7 Comparisons with related work
Although the previous works [14, 23, 24] reported good results for cross-spectral iris matching, it must be noted that these works adopted fully registered iris images and learned models that generalize poorly.
4.8 Processing time
All experiments were conducted on a 3.2-GHz Core i5 PC with 8 GB of RAM under the Matlab environment. The proposed framework consists of four main descriptors, namely BSIF, DoG, MSW, and the 1D log-Gabor filter. The processing times of the 1D log-Gabor filter, BSIF, and DoG descriptors are 10, 20, and 70 ms, respectively, while the MSW processing time is 330 ms. Therefore, the total computation time of the proposed method is less than half a second, which implies its suitability for real-time applications.
In this paper, a novel framework for cross-spectral iris matching was proposed. In addition, this work highlights the applications and benefits of using multi-spectral iris information in iris recognition systems. We investigated iris recognition performance under different imaging channels: red, green, blue, and NIR. The experiments were carried out on the UTIRIS database, and the performance of the iris biometric was measured.
We drew the following conclusions from the results. According to Table 2, among a variety of descriptors, the difference of Gaussian (DoG), BSIF, and multi-scale Weberface (MSW) were found to give good cross-spectral performance after integrating them with the 1D log-Gabor filter. Table 4 and Fig. 6 showed a significant improvement in the cross-spectral matching performance using the proposed framework.
In terms of multi-spectral iris performance, Fig. 4 showed that the red channel achieved better performance compared to other channels or the NIR imaging. This can be attributed to the large number of the light-pigmented irides in the UTIRIS database. It was also noticed from Fig. 6 that the performance of the iris recognition varied as a function of the difference in wavelength among the image channels. Fusion of the iris images from the red and NIR channels notably improved the recognition performance. The results implied that both the VL and NIR imaging were important to form a robust iris recognition system as they provided complementary features for the iris pattern.
The first author would like to thank the Ministry of Higher Education and Scientific Research (MoHESR) in Iraq for supporting this work.
MA carried out the design of the iris cross-spectral matching framework, performed the experiments, and drafted the manuscript. SD participated in the design of the framework and helped in drafting the manuscript. WW participated in the comparison experiments and helped in drafting the manuscript. JC guided the work, supervised the experimental design, and helped in drafting the manuscript. All authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Daugman J (1993) High confidence visual recognition of persons by a test of statistical independence. Pattern Anal Mach Intell IEEE Trans 15(11): 1148–1161.View ArticleGoogle Scholar
- Sun Z, Tan T (2009) Ordinal measures for iris recognition. IEEE Trans Pattern Anal Mach Intell 31(12): 2211–2226.View ArticleGoogle Scholar
- Daugman J (2006) Probing the uniqueness and randomness of IrisCodes: results from 200 billion iris pair comparisons. Proc IEEE 94(11): 1927–1935.View ArticleGoogle Scholar
- Grother PJ, Quinn GW, Matey JR, Ngan ML, Salamon WJ, Fiumara GP, Watson CI (2012) IREX III: performance of iris identification algorithms, Report, National Institute of Standards and Technology.Google Scholar
- Jain AK, Nandakumar K, Ross A (2016) 50 Years of biometric research: accomplishments, challenges, and opportunities. Pattern Recognit Lett 79: 80–105.View ArticleGoogle Scholar
- Daugman J (2007) Evolving methods in iris recognition. IEEE International Conference on Biometrics: Theory, Applications, and Systems, (BTAS07), (online). http://www.cse.nd.edu/BTAS_07/John_Daugman_BTAS.pdf. Accessed Sept 2016.
- Daugman J (2004) How iris recognition works. IEEE Trans Circ Syst Video Technol 14(1): 21–30.View ArticleGoogle Scholar
- Hosseini MS, Araabi BN, Soltanian-Zadeh H (2010) Pigment melanin: pattern for iris recognition. IEEE Trans Instrum Meas 59(4): 792–804.View ArticleGoogle Scholar
- Meredith P, Sarna T (2006) The physical and chemical properties of eumelanin. Pigment Cell Res 19(6): 572–594.View ArticleGoogle Scholar
- Dong W, Sun Z, Tan T (2009) A design of iris recognition system at a distance In: Chinese Conference on Pattern Recognition, (CCPR 2009), 1–5.. IEEE, Nanjing. http://ieeexplore.ieee.org/document/5344030/.View ArticleGoogle Scholar
- Proenca H, Filipe S, Santos R, Oliveira J, Alexandre LA (2010) The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance. IEEE Trans Pattern Anal Mach Intell 32(8): 1529–1535.View ArticleGoogle Scholar
- Bowyer KW (2012) The results of the NICE.II iris biometrics competition. Pattern Recognit Lett 33(8): 965–969.View ArticleGoogle Scholar
- De Marsico M, Nappi M, Riccio D, Wechsler H (2015) Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit Lett 57(0): 17–23.View ArticleGoogle Scholar
- Jinyu Z, Nicolo F, Schmid NA (2010) Cross spectral iris matching based on predictive image mapping In: Fourth IEEE International Conference on Biometrics: Theory Applications and Systems (BTAS’10), 1–5.. IEEE, Washington D.C.Google Scholar
- Wildes RP (1997) Iris recognition: an emerging biometric technology. Proc IEEE 85(9): 1348–1363.View ArticleGoogle Scholar
- Boyce C, Ross A, Monaco M, Hornak L, Xin L (2006) Multispectral iris analysis: a preliminary study. In: Computer Vision and Pattern Recognition Workshop, 51–51. IEEE, New York. doi:10.1109/CVPRW.2006.141. http://www.cse.msu.edu/~rossarun/pubs/RossMSIris_CVPRW06.pdf. Accessed Sept 2016.
- Tajbakhsh N, Araabi BN, Soltanianzadeh H (2008) Feature fusion as a practical solution toward noncooperative iris recognition. In: 11th International Conference on Information Fusion, 1–7. IEEE, Cologne.
- Ngo HT, Ives RW, Matey JR, Dormo J, Rhoads M, Choi D (2009) Design and implementation of a multispectral iris capture system. In: Asilomar Conference on Signals, Systems and Computers, 380–384. IEEE, Pacific Grove.
- Ross A, Pasula R, Hornak L (2009) Exploring multispectral iris recognition beyond 900 nm. In: IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems (BTAS’09), 1–8. IEEE, Washington, D.C.
- Ives RW, Ngo HT, Winchell SD, Matey JR (2012) Preliminary evaluation of multispectral iris imagery. In: IET Conference on Image Processing (IPR 2012), 1–5. IET, London.
- Alonso-Fernandez F, Mikaelyan A, Bigun J (2015) Comparison and fusion of multiple iris and periocular matchers using near-infrared and visible images. In: 2015 International Workshop on Biometrics and Forensics (IWBF), 1–6. IEEE, Gjøvik.
- Crihalmeanu SG, Ross AA (2016) Multispectral ocular biometrics. In: Bourlai T (ed) Face Recognition Across the Imaging Spectrum, 355–380. Springer International Publishing, Cham. ISBN 978-3-319-28501-6. doi:10.1007/978-3-319-28501-6_15.
- Burge MJ, Monaco MK (2009) Multispectral iris fusion for enhancement, interoperability, and cross wavelength matching. In: Proceedings of SPIE 7334, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV, 73341D-1–73341D-8. SPIE. doi:10.1117/12.819058. http://spie.org/Publications/Proceedings/Paper/10.1117/12.819058. Accessed Sept 2016.
- Burge M, Monaco M (2013) Multispectral iris fusion and cross-spectrum matching. Springer, London, pp 171–181.
- Abdullah MAM, Chambers JA, Woo WL, Dlay SS (2015) Iris biometric: is the near-infrared spectrum always the best? In: 3rd Asian Conference on Pattern Recognition (ACPR 2015), 816–819. IEEE, Kuala Lumpur. doi:10.1109/ACPR.2015.7486616. http://ieeexplore.ieee.org/document/7486616/. Accessed Sept 2016.
- Ramaiah NP, Kumar A (2016) Advancing cross-spectral iris recognition research using bi-spectral imaging. In: Singh R, Vatsa M, Majumdar A, Kumar A (eds) Machine Intelligence and Signal Processing, 1–10. Springer India, New Delhi. ISBN 978-81-322-2625-3. doi:10.1007/978-81-322-2625-3_1.
- Kannala J, Rahtu E (2012) BSIF: binarized statistical image features. In: 21st International Conference on Pattern Recognition (ICPR), 1363–1366. IEEE, Tsukuba Science City.
- Štruc V, Pavesic N (2009) Gabor-based kernel partial-least-squares discrimination features for face recognition. Informatica 20(1): 115–138.
- Štruc V, Pavesic N (2011) Photometric normalization techniques for illumination invariance. In: Advances in Face Image Analysis: Techniques and Technologies, 279–300. IGI Global.
- Tan X, Triggs B (2010) Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans Image Process 19(6): 1635–1650.
- Arashloo SR, Kittler J (2014) Class-specific kernel fusion of multiple descriptors for face verification using multiscale binarised statistical image features. IEEE Trans Inf Forensic Secur 9(12): 2100–2109.
- Li X, Bu W, Wu X (2015) Palmprint liveness detection by combining binarized statistical image features and image quality assessment. In: Yang J, Yang J, Sun Z, Shan S, Zheng W, Feng J (eds) Biometric Recognition: 10th Chinese Conference, CCBR 2015, Tianjin, China, November 13–15, 2015, Proceedings, 275–283. Springer International Publishing, Cham. ISBN 978-3-319-25417-3. doi:10.1007/978-3-319-25417-3_33.
- Jain AK (1989) Fundamentals of digital image processing. Prentice-Hall, Inc., Upper Saddle River.
- Wang B, Li W, Yang W, Liao Q (2011) Illumination normalization based on Weber’s law with application to face recognition. IEEE Signal Process Lett 18(8): 462–465.
- Masek L, Kovesi P (2003) MATLAB source code for a biometric identification system based on iris patterns.
- Abdullah MAM, Dlay SS, Woo WL, Chambers JA (2016) Robust iris segmentation method based on a new active contour force with a noncircular normalization. IEEE Trans Syst Man Cybern Syst PP(99): 1–14. doi:10.1109/TSMC.2016.2562500. http://ieeexplore.ieee.org/document/7473859/. Accessed Sept 2016.
- Abdullah MAM, Dlay SS, Woo WL (2014) Fast and accurate pupil isolation based on morphology and active contour. In: The 4th International Conference on Signal, Image Processing and Applications, Vol. 4, 418–420. IACSIT, Nottingham.
- Raja KB, Raghavendra R, Vemuri VK, Busch C (2015) Smartphone based visible iris recognition using deep sparse filtering. Pattern Recogn Lett 57: 33–42.
- Maltoni D, Maio D, Jain A, Prabhakar S (2003) Multimodal biometric systems. Springer, New York, pp 233–255.
- Fang Y, Tan T, Wang Y (2002) Fusion of global and local features for face verification. In: 16th International Conference on Pattern Recognition, Vol. 2, 382–385. IEEE, Quebec City.
- He M, Horng S-J, Fan P, Run R-S, Chen R-J, Lai J-L, Khan MK, Sentosa KO (2010) Performance evaluation of score level fusion in multimodal biometric systems. Pattern Recognit 43(5): 1789–1800.
- Wild P, Radu P, Ferryman J (2015) On fusion for multispectral iris recognition. In: 2015 International Conference on Biometrics (ICB), 31–37. IEEE, Phuket.
- Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7): 971–987.