
Information Sciences 276 (2014) 104–122


Fusing the line intensity profile and support vector machine for removing reflections in frontal RGB color eye images

http://dx.doi.org/10.1016/j.ins.2014.02.049
0020-0255/© 2014 Elsevier Inc. All rights reserved.

* Corresponding author. Tel.: +607 5532353; fax: +607 5565044. E-mail address: [email protected] (H. Asmuni).

Anis Farihan Mat Raffei a, Hishammuddin Asmuni a,*, Rohayanti Hassan b, Razib M. Othman c

a Laboratory of Biometrics and Digital Forensics, Faculty of Computing, Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor, Malaysia
b Laboratory of Biodiversity and Bioinformatics, Faculty of Computing, Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor, Malaysia
c Laboratory of Computational Intelligence and Biotechnology, Faculty of Computing, Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor, Malaysia


Article history:
Received 3 October 2012
Received in revised form 11 November 2013
Accepted 9 February 2014
Available online 20 February 2014

Keywords:
Reflection removal
RGB color
Line intensity profile
Support vector machine
Iris recognition
Frontal eye image

Abstract

Iris recognition is a promising method for accurate identification of a person, where the capability of iris segmentation determines its overall performance. The correct iris area has to be determined so that an individual's unique features can be extracted and compared during the feature extraction and template matching processes. However, current methods fall short in correctly identifying and classifying reflections in an eye image. This has often led to errors in iris boundary localization and consequently increases the equal error rate in iris recognition. This study thus proposes a method that combines a line intensity profile and a support vector machine, where the former identifies reflections in eye images and the latter classifies reflections and non-reflections. The combined method was tested using 1000 eye images from the UBIRIS.v2 database. Results showed that the combined method provided almost 99.9% classification accuracy. Generally, it has less than a 10.5% equal error rate and a high decidability index in iris recognition.

© 2014 Elsevier Inc. All rights reserved.

1. Introduction

Reflections appear as the brightest areas and have the maximum intensity pixel values. They appear in an eye image when light entering from windows or from the camera is reflected by the wet surface of the cornea. The wet surface covers the iris and protects it from the environment. Reflections can also occur when capturing an image of the eyes of a person wearing eyeglasses, because the lenses reflect the light. In iris recognition systems, the iris segmentation stage is a vital part of identifying a person. According to Jeong et al. [7] and Labati and Scotti [8], iris segmentation methods can be accurate if the eye image is captured in a closely controlled setting. However, in reality, eye images may contain 'noise' such as reflections, which form the brightest pixels in an eye image [20]. Such noise decreases the performance of the iris segmentation and ultimately the accuracy of the iris boundary detection. To enhance or maintain the performance of iris segmentation methods, the reflections, especially in the iris, must be removed or reduced. Almeida [1] proposed two rules for identifying reflections, based on threshold values and white areas. The first rule categorizes pixels with intensities above 250 units as reflections. The second rule classifies the brightest pixels in small white areas (namely the sclera) as reflections. Tan et al. [20] have proposed an adaptive thresholding technique in which the brightest 5% of the pixels in an eye image are classified as reflections. Then, a bilinear interpolation technique is used


to insert four adjacent pixel colors into the reflection. Sankowski et al. [19] computed their own threshold value, $T_{ref}$, using Eq. (1) to identify reflections before using two morphological operations, dilation and closure, to enhance the reflections.


$T_{ref} = I_{ave} + P \cdot (I_{max} - I_{ave})$    (1)

where $P \in (0,1)$, $I_{ave}$ is the mean intensity of the eye image pixels, and $I_{max}$ is the mean of the brightest 4% of the eye image pixels. The reflection areas are removed by inserting the red, green, and blue (RGB) colors from the pixels surrounding the reflection. All existing methods can remove or reduce reflections if their pixel intensities meet the threshold values. However, for large reflection areas that have varying intensities, these methods may fail to detect and remove weak reflections. Even when adaptive threshold techniques are used, it can still be difficult to set accurate threshold values for the varying intensities. At times, the white regions of an eye, or sclera, can be falsely classified as a reflection, which in turn makes it difficult to establish accurate threshold values as well.
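As a concrete illustration of Eq. (1), the sketch below computes $T_{ref}$ for a grayscale eye image and thresholds it into a reflection mask. This is a minimal reading of the rule, not Sankowski et al.'s implementation; Python/NumPy and the value P = 0.5 are our assumptions (the weighting factor is only constrained to (0, 1) above).

```python
import numpy as np

def sankowski_threshold(gray, p=0.5):
    """Reflection threshold T_ref = I_ave + P * (I_max - I_ave), Eq. (1).

    gray : 2-D uint8 array (grayscale eye image).
    p    : weighting factor P in (0, 1); 0.5 is an illustrative choice.
    """
    i_ave = gray.mean()                      # mean intensity of all pixels
    pixels = np.sort(gray.ravel())[::-1]     # intensities, brightest first
    n = max(1, int(0.04 * pixels.size))      # brightest 4% of the pixels
    i_max = pixels[:n].mean()                # mean of the brightest 4%
    return i_ave + p * (i_max - i_ave)

# Usage: pixels above the threshold are treated as reflection candidates.
# mask = gray > sankowski_threshold(gray)
```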

In this study, a fusion method is proposed to solve these problems in RGB color eye images. This fusion method (LIPSVM) is a combination of the line intensity profile (LIP) and a support vector machine (SVM). LIP was used to provide different threshold values for varying reflection intensities, based on the idea that a reflection pixel's red intensity is higher than its green and blue intensities. SVM was used to distinguish reflections from non-reflections. This paper is composed of several sections. Section 2 describes the data used in this study, the UBIRIS.v2 database. Section 3 explicates the LIPSVM method used to address reflections in eye images; the iris recognition stages are also explained in this section. The experimental results are discussed in Section 4, and the conclusions and recommendations for future work are presented in Section 5.

2. RGB eye images database

There are two types of light wavelengths that can be captured in an eye image: near infrared and visible light. Examples of near infrared databases are the University of Bath (BATH) [14], the Chinese Academy of Sciences (CASIA) [13], the Multimedia University (MMU) [12], version one of the University of Beira Interior (UBIRIS.v1) [15], and West Virginia University (WVU) [2]. Examples of visible light databases include the University of Olomouc (UPOL) [5] and version two of the University of Beira Interior (UBIRIS.v2) [16]. All of these databases are publicly accessible for iris recognition research.

Table 1
Different categories of eye images and reflections that were selected from the UBIRIS.v2 database. Counts are the total eye images per side.

Category  Description of eye image                           Left  Right
a         Frontal eye with small reflections                  122    152
b         Frontal eye with large reflections                  187    202
c         Frontal eye with spectacle and small reflections     20     31
d         Frontal eye with spectacle and large reflections     44     32
e         Frontal eye with off-focus and small reflections     58     38
f         Frontal eye with off-focus and large reflections     69     45


In this study, a visible light database was used. Although the UPOL and UBIRIS.v2 databases both contain 'noise', or reflections, in the eye images, the UBIRIS.v2 database was chosen as the preferred database. The UBIRIS.v2 eye images were captured from different distances, while UPOL used a rigid image acquisition protocol. This also means that UBIRIS.v2 has a larger and more diverse range of pixel intensities and reflections.

The UBIRIS.v2 database created by the University of Beira Interior was downloaded from http://iris.di.ubi.pt/ubiris2.html. This database has 11,102 eye images from 522 irises, with more realistic 'noisy' effects such as reflections as well as blurring and occlusion from hair, eyeglasses, and contact lenses. The eye images were obtained from 261 subjects using a Canon EOS 5D, at distances ranging from four to eight meters. Approximately 60% of the subjects were captured in two sessions, and the remaining 40% were captured in one session. In each session, 15 eye images of each (left and right) eye were captured. All eye images had a resolution of 400 × 300 pixels, a standard RGB (sRGB) color representation, and were in the Tagged Image File Format (TIFF). Approximately 100 subjects were used in this study, and 1000 eye images were randomly selected.

Fig. 1. Block diagram of the proposed iris recognition: Start → RGB iris image → preprocessing (LIPSVM: reflection identification by line intensity profile, reflection classification by support vector machine, filling in reflections by neighboring intensity interpolation) → segmentation (iris localization by circular Hough transform, eyelid occlusion by linear Hough transform, eyelash occlusion by thresholding) → normalization (rubber sheet model) → feature extraction (1-D log Gabor filter) → template matching (Hamming distance) → evaluation and analysis (intensity, decidability, accuracy, equal error rate) → End.

Fig. 2. Frontal left eye image with strong and weak reflections.


The 1000 images comprised 500 left and 500 right eye images. Table 1 shows the different categories of eye images and reflections from the UBIRIS.v2 database. For each subject, five frontal eye images of each of the left and right eyes, with reflections, taken from different distances were chosen.

3. Iris recognition

In this study, the iris recognition was done in several stages (see Fig. 1). The first stage was a preprocessing stage that could be further broken down into three substages. Our proposed method, LIPSVM, used LIP for reflection identification and SVM for reflection classification during this stage. The second stage was a segmentation stage that relied on a circular Hough transform (HT), a linear HT, and thresholding to localize the iris boundaries. The third stage was a normalization stage in which the rubber sheet model developed by Daugman [4] was the basis for adjusting the dimensions of the iris. At the fourth stage, the features of the iris were extracted, and the unique features of each iris were acquired using a one-dimensional (1-D) log Gabor filter. These features were then compared against the unique features of other irises. Template matching was performed at the fifth stage, and similarity values between two different iris features were obtained using a Hamming distance. The last stage entailed an evaluation and analysis of the iris recognition process. Here, eye image intensities, the accuracy of reflection classification, equal error rates, and the decidability index of iris recognition were calculated.

3.1. Preprocessing

The preprocessing stage is important to remove 'noise', such as reflections, that can cause the inner and outer iris boundaries to be incorrectly detected during the segmentation stage. Many methods for removing reflections in eye images have been proposed [1,7,19,20], but most have failed to solve the problem of weak reflections. They tend to falsely classify the sclera and areas of high illumination as reflections. In this study, the proposed LIPSVM method was used to address these issues. The preprocessing stage was composed of three substages: reflection identification, reflection classification, and filling in reflections. Each substage is explained below.

3.1.1. Reflection identification by line intensity profile

The intensity profile of RGB values in eye images is often used to analyze reflections. The brightest and strongest reflection intensities can be easily recognized, but weaker reflections with lower intensities are difficult to recognize (see Fig. 2) because they fall below specific threshold values. To solve this problem, the line intensity profile (LIP) thresholding method suggested by Lee et al. [9] was applied; this method was originally proposed to remove specular reflections in tooth color images. The LIP method recognizes a reflection when the red intensity of a pixel is higher than its blue and green intensities; under such circumstances, the pixel is considered a reflection. The LIP pseudo code for reflection identification comprises two steps: discovering reflections and validating them. Step one (lines 6–13 in Fig. 3) was proposed by Lee et al. [9] and is used to discover the pixels that have a red intensity higher than the green and blue intensities.

Fig. 3. LIP pseudo code for reflection identification (where I_red,rgb(i, j, 1) is the red channel of the eye image; I_green,rgb(i, j, 2) is the green channel; I_blue,rgb(i, j, 3) is the blue channel; and I_rgb(i, j, all) is the entire color channel of the eye image).


This pseudo code inverts the green and blue intensities of every pixel in an eye image by subtracting them from the maximum intensity value of 255. If the inverted intensities are lower than the red intensity, those pixels are considered reflections. In Table 2, the fourth column (the inverse intensity of the green and blue pixels) shows the reflections found in the eye images, where the dark red color (reflections) appears. Based on our understanding, step two (lines 14–21 in Fig. 3) validates whether the discovered reflections fulfill the aforesaid criterion, and it may act as label data for the classification process later.

Table 2
Example of the inverse intensity of green and blue pixels in eye images, based on the eye images in Table 1. For each category (a: frontal eye with small reflections; b: frontal eye with large reflections; c: frontal eye with spectacle and small reflections; d: frontal eye with spectacle and large reflections; e: frontal eye with off-focus and small reflections; f: frontal eye with off-focus and large reflections), the table shows the original eye image with reflections, the same image on the LOI graph, the inverse intensity of the green and blue pixels, and that inverse intensity on the LOI graph (images omitted).


Reflections in the eye images can be analyzed using the line of interest (LOI) graph, which displays or verifies the existence of reflections by plotting the intensities of the eye image rows and columns on the x-axis and y-axis, respectively. The reflections cannot be observed clearly on the LOI graph if the original eye images are used, but they can be observed clearly if the inverse intensities of the green and blue channels are taken into account. The existence of reflections in the eye images can be viewed in the fifth column of Table 2, where the intensity of red is higher than the intensities of green and blue. Using the LIP method alone, identifying reflections was less accurate when some non-reflection areas in the eye images had a high level of illumination due to light from windows or the camera. To overcome this drawback, we improved the reflection identification method by including SVM to correctly classify the reflection areas.
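To make the LIP rule concrete, the following sketch flags reflection candidates under our reading of step one (lines 6–13 of Fig. 3): invert the green and blue channels against the maximum intensity of 255 and mark pixels whose inverted intensities fall below the red intensity. It is an illustrative reconstruction in Python/NumPy, not the authors' code.

```python
import numpy as np

def lip_reflection_candidates(rgb):
    """Step one of LIP reflection identification (our reading of Fig. 3).

    rgb : H x W x 3 uint8 array in R, G, B channel order.
    Returns an H x W boolean mask of reflection candidates.
    """
    r = rgb[..., 0].astype(np.int16)
    g_inv = 255 - rgb[..., 1].astype(np.int16)   # inverted green channel
    b_inv = 255 - rgb[..., 2].astype(np.int16)   # inverted blue channel
    # A pixel is a candidate when both inverted intensities are lower
    # than its red intensity.
    return (g_inv < r) & (b_inv < r)
```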

3.1.2. Reflection classification by SVM

As mentioned in the previous section, a classification process must be performed to distinguish reflection from non-reflection areas because the LIP method alone may wrongly identify reflections. In this study, SVM was used to determine the relationship between the pixels of the reflections and non-reflections. The classification at this stage was performed per pixel. The pixel data of an eye image were given as two different sets of vectors in a one-dimensional space, and the SVM formed a separating hyperplane in that space. The two vector sets represent the sets of pixel intensities from the reflection areas and the non-reflection areas, respectively. The sets of pixel intensities were used as input features in the training data, and the corresponding label for each sample was either 1 or −1. In this study, approximately 20 sets of pixels from the reflection areas and 100 additional pixel sets from the non-reflection areas in an eye image were sampled to obtain the training data. The reflection areas' pixel intensity values were derived using the formula from lines 7 to 13 in Fig. 3. Those from the non-reflection areas were extracted based on pixels that had a higher red intensity than blue and green intensities. For the label data, 1 was used to represent reflections and −1 to indicate non-reflections. To obtain the label data, the pixel intensity values were validated using the formula from lines 15 to 21 in Fig. 3. Several kernel functions were available, including linear, polynomial, and radial basis functions (RBFs). At this classification stage, a non-linear kernel was preferred; RBF was chosen as the main kernel because it works well in most cases [17,18,21,22]. Any remaining pixels that were not used during the training phase were used for the testing phase. The overall process used to train the data is presented in Fig. 4, and the results of the reflection classification can be viewed in Table 6, Section 4, where the weak and strong reflections have been classified accordingly.

3.1.3. Filling in reflections by neighboring intensity interpolation

After classifying the reflections using SVM, the reflections were removed by inserting adjacent colors according to the method proposed by Sankowski et al. [19]. Here, the reflections were first enhanced using morphological dilation and closure operations before the adjacent colors were inserted. The dilation operation enlarged the reflection areas, and the closure operation joined reflections located side-by-side by bridging the discontinuities between them and smoothing their outer edges. Sankowski et al. [19] recommended substituting the intensity of every pixel in the reflection areas with a neighboring intensity interpolated from RGB values located outside the reflection. For every pixel in the reflection, four adjacent neighbors located on the outer surface of the reflection (left, right, upper, and bottom) were determined.

Fig. 4. Framework for training the classification of reflection and non-reflection areas using SVMs.


For every nth neighboring pixel, its weight $w_n$ was set equal to the inverse of its distance $d_n$ to the pixel being filled. Then the filled pixel's RGB value, $c_{fill}$, was computed from the pixel elements of its four neighbors using Eq. (2), where $c_n$ denotes the red, green, and blue color elements of the nth neighboring pixel. This process began with red and was followed by green and blue.

Table 3
Example of iris segmentation with and without reflection removal, based on the eye images in Table 1. For each category (a–f, as in Table 1), the table shows the segmentation result without reflection removal and with reflection removal (images omitted).

Fig. 5. Homogenous rubber sheet modeled by Daugman [4].


$c_{fill} = \dfrac{\sum_{n=1}^{4} w_n c_n}{\sum_{n=1}^{4} w_n}$    (2)

where $w_n = 1/d_n$ for n = 1, 2, ..., 4. After the process of filling in the reflections, the classified reflection pixels hold intensities similar to those of their adjacent neighboring pixels, so that in the next round of reflection identification and classification these pixels are not detected and classified as reflections again. If this were not done, the same pixels could be identified and classified again as reflections, because their intensities would not have been reduced to those of the similar adjacent neighboring pixels. This could also affect the performance of the iris segmentation, because the edges of reflections could be wrongly identified as iris edges.
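A minimal sketch of the filling rule of Eq. (2): for a single reflection pixel, find the first non-reflection pixel in each of the four directions, weight it by the inverse of its distance, and average. Boundary handling and the morphological preprocessing are simplified; this illustrates the interpolation, it is not Sankowski et al.'s implementation.

```python
import numpy as np

def fill_pixel(rgb, i, j, mask):
    """Fill reflection pixel (i, j) per Eq. (2).

    rgb  : H x W x 3 float array of the eye image.
    mask : H x W boolean array, True where a pixel belongs to a reflection.
    """
    h, w = mask.shape
    weights, colors = [], []
    for di, dj in [(0, -1), (0, 1), (-1, 0), (1, 0)]:  # left, right, up, down
        ni, nj = i + di, j + dj
        # Walk outward to the first pixel outside the reflection area.
        while 0 <= ni < h and 0 <= nj < w and mask[ni, nj]:
            ni, nj = ni + di, nj + dj
        if 0 <= ni < h and 0 <= nj < w:
            d = abs(ni - i) + abs(nj - j)   # distance d_n along the ray
            weights.append(1.0 / d)         # weight w_n = 1 / d_n
            colors.append(rgb[ni, nj])
    w_arr = np.array(weights)
    c_arr = np.array(colors)
    return (w_arr[:, None] * c_arr).sum(axis=0) / w_arr.sum()  # Eq. (2)
```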

3.2. Segmentation

The next step was to localize the iris boundaries. Many researchers [3,6] have used the Hough transform (HT) to detect iris boundaries. This study used a circular HT based on a voting system because it is very tolerant of noise. The upper and lower eyelids in the eye images were separated using a linear HT, which was superior to a parabolic HT because it analyzes fewer parameters and thus requires less computational time [11].

Table 4
Example of the normalized iris with and without reflection removal, in the form of a rubber sheet model, based on the eye images in Table 1. For each category (a–f, as in Table 1), the table shows the normalized iris without reflection removal and with reflection removal (images omitted).


Table 5
Example of a feature extraction template obtained from a 1-D log Gabor filter, with and without reflection removal, based on the eye images in Table 1. For each category (a–f, as in Table 1), the table shows the template without reflection removal and with reflection removal (images omitted).

Fig. 6. An example result of template matching using the Hamming distance without reflection removal, based on the eye images in Table 1. The template feature of eye image (a), frontal eye with small reflections, was compared against the other categories; the HD values were 0.4800 for (b) frontal eye with large reflections, 0.4580 for (c) frontal eye with spectacle and small reflections, 0.4384 for (d) frontal eye with spectacle and large reflections, 0.4321 for (e) frontal eye with off-focus and small reflections, and 0.4331 for (f) frontal eye with off-focus and large reflections.


To remove the eyelashes from the eye images, a simple thresholding method was used: if the length of the eyelashes was lower than the threshold value, it was considered an eyelash. Different iris databases require different thresholding values for eyelashes; for the database in this research, the thresholding value for the eyelashes was set to 20 pixels. The results of the removal of the eyelids and eyelashes are shown in Table 3, where they are covered with black rectangles. At this stage, the RGB color eye images with the reflections removed had been converted to grayscale images beforehand. The results of the segmentation stage, shown in Table 3, depict that when reflections are not removed, some of the inner and outer iris boundaries cannot be localized properly.
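For readers unfamiliar with the voting scheme, the following is a bare-bones circular Hough transform of the kind used for iris localization: every edge pixel votes for the centers of all circles of a candidate radius passing through it, and the best-supported (x, y, r) is returned. A brute-force sketch for illustration only; it is far slower than a production implementation.

```python
import numpy as np

def circular_hough(edges, r_min, r_max):
    """Locate the strongest circle in a boolean edge map by Hough voting.

    edges : H x W boolean array from any edge detector.
    Returns (x, y, r) of the circle with the most votes.
    """
    h, w = edges.shape
    ys, xs = np.nonzero(edges)
    theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    best_votes, best_circle = 0, None
    for r in range(r_min, r_max + 1):
        acc = np.zeros((h, w), dtype=np.int32)          # accumulator for centers
        cx = np.rint(xs[:, None] - r * np.cos(theta)).ravel().astype(int)
        cy = np.rint(ys[:, None] - r * np.sin(theta)).ravel().astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[ok], cx[ok]), 1)             # cast the votes
        idx = int(acc.argmax())
        if acc.flat[idx] > best_votes:
            best_votes = int(acc.flat[idx])
            best_circle = (idx % w, idx // w, r)        # (center x, center y, radius)
    return best_circle
```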

3.3. Normalization

The unique iris of each individual varies in size, and even the pupil of the same person can change due to different lighting conditions and distances. This affects pattern matching. In this study, the irises were normalized by adjusting the dimensions of each iris to permit comparisons between iris templates. The irises were adjusted to be equal in size so that the matching features had similar spatial locations. During this process, the homogenous rubber sheet modeled by Daugman [4] was used to remap every point inside the iris to a pair of polar coordinates (r, θ), where r is on the interval [0, 1] and θ is an angle in [0, 2π] (Fig. 5). The center of the pupil was set as the reference point [11], and the radial vectors passed through the iris. To form the polar coordinates, the Masek [11] parameters were used to represent a rectangle that was 20 × 240 pixels in size. Table 4 shows the mask normalization of an iris, in which the white regions mark the noise created by eyelids, eyelashes, and reflections.
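The remapping can be sketched as follows with the 20 × 240 Masek parameters mentioned above. Concentric pupil and iris circles and nearest-neighbor sampling are simplifying assumptions of this illustration; the full rubber sheet model also accounts for displaced pupil centers.

```python
import numpy as np

def rubber_sheet(gray, center, pupil_r, iris_r, radial_res=20, angular_res=240):
    """Remap the iris annulus to a radial_res x angular_res rectangle.

    gray   : 2-D grayscale eye image.
    center : (cx, cy) pupil center used as the reference point.
    """
    cx, cy = center
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0.0, 1.0, radial_res)           # r on the interval [0, 1]
    # Interpolate each sample radius between the pupil and iris boundaries.
    radius = pupil_r + r[:, None] * (iris_r - pupil_r)
    x = np.rint(cx + radius * np.cos(theta)).astype(int)
    y = np.rint(cy + radius * np.sin(theta)).astype(int)
    x = np.clip(x, 0, gray.shape[1] - 1)
    y = np.clip(y, 0, gray.shape[0] - 1)
    return gray[y, x]                               # 20 x 240 normalized strip
```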

3.4. Feature extraction by the 1-D log Gabor filter

The feature extraction process provides accuracy to the iris recognition process when the most distinctive features of the iris can be successfully extracted and then encoded for comparison between templates. Most iris recognition systems employ a band-pass decomposition of the iris image to create an iris template. In this study, a 1-D log Gabor filter was applied to each row of the normalized iris to extract the local feature points of the segmented iris in the Cartesian coordinate system. With the parameters determined during the normalization process, the number of bits used for the iris template was set to 9600. The same number of bits was used for the mask template to mask out the corrupted regions within the iris. Table 5 contains an example of feature templates obtained using this method.
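A sketch of the row-wise 1-D log-Gabor encoding just described, producing 20 × 240 × 2 = 9600 template bits through two-bit phase quantization. The center wavelength and bandwidth values below are illustrative assumptions; the paper does not state the filter parameters.

```python
import numpy as np

def log_gabor_encode(strip, wavelength=18.0, sigma_on_f=0.5):
    """Encode a normalized iris strip (rows x cols) into phase bits.

    Each row is filtered with a 1-D log-Gabor filter in the frequency
    domain, and the complex response is quantized to two bits per sample
    (signs of the real and imaginary parts).
    """
    rows, cols = strip.shape
    f = np.fft.fftfreq(cols)             # per-sample signal frequencies
    f0 = 1.0 / wavelength                # filter center frequency (assumed)
    lg = np.zeros(cols)
    pos = f > 0
    lg[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    # Filter every row at once in the frequency domain.
    resp = np.fft.ifft(np.fft.fft(strip.astype(float), axis=1) * lg, axis=1)
    # rows x cols x 2 boolean template = 20 x 240 x 2 = 9600 bits.
    return np.dstack([resp.real > 0, resp.imag > 0])
```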

3.5. Template matching by Hamming distance

Template matching can be done using various methods, such as normalized correlation-based matching and Euclidean distance matching, to recognize patterns. Here, the Hamming distance (HD) method was implemented to match two iris templates, given its popularity for this purpose [4,10,11].

Fig. 7. An example result of template matching using the Hamming distance with reflection removal, based on the eye images in Table 1. The template feature of eye image (a), frontal eye with small reflections, was compared against the other categories; the HD values were 0.4632 for (b) frontal eye with large reflections, 0.4504 for (c) frontal eye with spectacle and small reflections, 0.4384 for (d) frontal eye with spectacle and large reflections, 0.4321 for (e) frontal eye with off-focus and small reflections, and 0.4180 for (f) frontal eye with off-focus and large reflections.


Table 6
Result of reflection classification and removal based on the eye images in Table 1. For each category (a–f, as in Table 1), the table shows the reflection areas classified by SVM, the eye image after reflection removal, the inverse intensity of the green and blue pixels in the eye image without reflections, and the inverse intensity of the green and blue pixels on the LOI graph (images omitted).


Its calculation is usually achieved in two steps. The first step is a logical operation in which the XOR of the two binary vectors of length k is determined. In the second step, the total number of ones is summed. The similarity score, or HD, is found by dividing the obtained total by k. The Hamming distance is formulated as follows:


Fig. 8. A graph comparison of accuracy for reflection classification.

Table 7
A comparison of the results from reflection classification using different methods, and reflection removal, for category (a), frontal eye with small reflections. Methods compared: Almeida [1], Tan et al. [20], Sankowski et al. [19], LIP, LIPSVM, and LIPPNN (images omitted).


Table 8
A comparison of the results from reflection classification using different methods, and reflection removal, for category (b), frontal eye with large reflections. Methods compared: Almeida [1], Tan et al. [20], Sankowski et al. [19], LIP, LIPSVM, and LIPPNN (images omitted).


$HD = \dfrac{1}{k} \sum_{i=1}^{k} P_i \oplus Q_i$    (3)

where P and Q are two templates from different iris images, and k is the total number of bits in those two templates. Every iris yields a bit pattern different from those produced by other irises, while codes from different areas of the same iris remain consistent with one another. The HD between two patterns is close to 0.5 if the bit patterns are completely different and the differences are random in nature: each bit is equally likely to be 1 or 0, so the bits of two independent patterns agree or disagree with equal probability. On the other hand, the HD is close to 0 if the patterns are extracted from the same iris, because they are highly correlated. Figs. 6 and 7 show examples of template matching using the HD and the eye images from Table 1, where the HD values are more than 0.4 and the templates are therefore recognized as originating from different people.
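The two-step HD computation reads directly as code. The sketch below also accepts the optional mask templates from Section 3.4 so corrupted bits can be excluded from the comparison; folding masks into the HD this way is a common refinement that we assume here rather than quote from the paper.

```python
import numpy as np

def hamming_distance(p, q, mask_p=None, mask_q=None):
    """Eq. (3): the fraction of disagreeing bits between two templates.

    p, q           : boolean bit templates of equal size.
    mask_p, mask_q : optional boolean masks, True where a bit is usable.
    """
    p = np.asarray(p, dtype=bool).ravel()
    q = np.asarray(q, dtype=bool).ravel()
    valid = np.ones_like(p)
    if mask_p is not None:
        valid &= np.asarray(mask_p, dtype=bool).ravel()
    if mask_q is not None:
        valid &= np.asarray(mask_q, dtype=bool).ravel()
    disagree = np.count_nonzero((p ^ q) & valid)   # XOR, then sum the ones
    return disagree / np.count_nonzero(valid)      # divide by usable bit count
```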

4. Evaluation and analysis

This study was conducted in a Matlab (R2009a) environment using modified Masek [11] code with the UBIRIS.v2 database. Reflection identification and classification were evaluated by analyzing the intensity and the degree of accuracy obtained in the classification of the reflections. The analysis of eye image intensity after reflection removal is shown in the LOI graphs in Table 6. The LOI graphs demonstrate that, after removing the reflections in the eye images, the red intensity was lower than or equal to the green and blue intensities.


Table 9
A comparison of the results from reflection classification using different methods, and reflection removal, for category (c), frontal eye with spectacle and small reflections. Methods compared: Almeida [1], Tan et al. [20], Sankowski et al. [19], LIP, LIPSVM, and LIPPNN (images omitted).


In the fourth column of Table 6, the inverse intensity of green and blue indicates that the dark red reflections have been removed. The second and third columns of Table 6 show examples of reflections that have been correctly classified by LIPSVM (red denotes strong reflections, while yellow denotes weak reflections) and removed through neighboring intensity interpolation, which inserted pixel colors into the reflections using pixel colors found outside of the reflections.

Fig. 8 shows the percentage accuracy achieved in reflection classification; the formula used is:

$ACC_{ref} = \dfrac{M}{N} \times 100\%$    (4)

where M is the total number of reflection pixels that have been correctly classified, and N is the total number of pixels in the reflection areas. Fig. 8 also shows that the methods of Almeida [1], Sankowski et al. [19], and Tan et al. [20] provided less than 90% accuracy for the classification of reflections, due to imperfect detection or removal of weak reflections that fell below the threshold values. There were also instances in which the sclera was falsely classified as a reflection. Identification using only the LIP method, which classified reflections more accurately than those methods, detected 89.5% in the left eye images, 90.8% in the right eye images, and 90.2% in the combined eye images. However, it failed to correctly detect reflections in eye images with a high illumination level.
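Eq. (4) as code, assuming the classification results and ground truth are held as boolean masks (the paper does not specify a representation):

```python
import numpy as np

def reflection_accuracy(predicted_mask, true_mask):
    """Eq. (4): ACC_ref = M / N * 100%.

    M : reflection pixels that were correctly classified.
    N : total pixels in the reflection areas.
    """
    n = np.count_nonzero(true_mask)                    # N
    m = np.count_nonzero(predicted_mask & true_mask)   # M
    return 100.0 * m / n
```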


Table 10
A comparison of the results from reflection classification using different methods, and reflection removal, for category (d), frontal eye with spectacle and large reflections. Methods compared: Almeida [1], Tan et al. [20], Sankowski et al. [19], LIP, LIPSVM, and LIPPNN (images omitted).


In addition to SVM, a perceptron neural network (PNN) was also used to classify the reflections. When LIP was combined with PNN (LIPPNN), the accuracy in classifying reflections was more than 90%, but the LIPSVM method performed better, with almost 99.9% accuracy for all eye images. The success of the LIPSVM method was due to the ability of SVM to differentiate between reflections and non-reflections. Tables 7–12 show several examples of the comparison results for reflection classification across the eye image categories. These examples reveal that weak or large reflections could not be detected and removed using the methods proposed by Almeida [1], Sankowski et al. [19], and Tan et al. [20]. The LIP method falsely classified highly illuminated areas as reflections, but the combinations of LIP with SVM and with PNN correctly detected the reflections. The performances of reflection classification using LIPSVM and LIPPNN were almost the same.

The recognition performance on UBIRIS.v2 was evaluated using the equal error rate (EER) and the decidability. The EER is the rate at which the false acceptance rate (FAR) and the false rejection rate (FRR) are equal; the lower the EER value, the higher the accuracy of the iris recognition method. Fig. 9 displays the EER percentages for the different methods and eye image sides. The combined method of LIP and SVM gave the lowest percentages: 9.7% for left eye images, 8.5% for right eye images, and 10.4% for combined eye images. LIP's combination with PNN resulted in 10.2% for both left and right eye images and 11% for combined eye images. The methods of Almeida [1], Sankowski et al. [19], and Tan et al. [20] had more than a 20% equal error rate. This was because poor iris segmentation affected the extraction of the features from the iris.


Table 11
A comparison of the results from reflection classification using different methods, and reflection removal, for category (e), frontal eye with off-focus and small reflections. Methods compared: Almeida [1], Tan et al. [20], Sankowski et al. [19], LIP, LIPSVM, and LIPPNN (images omitted).


Since some non-iris regions were selected, the iris features available for comparison became even fewer. As a result, the dissimilarity values increased during the matching process.

The decidability, D, is a crucial factor in iris recognition because its value determines the separation distance between the intra-class and inter-class distributions. If this value is large, the separation distance is also large. The level of decidability in an iris recognition process can verify and validate the identity of a person. Decidability is calculated as follows:

$D = \dfrac{|\mu_a - \mu_b|}{\sqrt{(\sigma_a^2 + \sigma_b^2)/2}}$    (5)

where $\mu_a$, $\mu_b$ and $\sigma_a$, $\sigma_b$ are the means and standard deviations of the intra-class and inter-class HD distributions. In Fig. 10, the methods of Almeida [1], Sankowski et al. [19], and Tan et al. [20] have an index of less than 2 for all eye images because they localized the iris inaccurately due to the remaining reflections. Therefore, when a smaller number of bits is extracted, the possibility of a false acceptance increases. The proposed methods yielded better results; the combination of LIP with SVM provided the largest separation: an index of 2.226 for the left eye images, 2.177 for the right eye images, and 2.292 for the combined eye images. The combination of LIP with PNN yielded an index of 2.221 for the left eye images, 2.016 for the right eye images, and 2.232 for the combined eye images. The LIP method alone had an index of 2.044 for the left eye images, 1.988 for the right eye images, and 2.184 for the combined eye images.


Table 12
A comparison of the results from reflection classification using different methods, and reflection removal, for category (f), frontal eye with off-focus and large reflections. Methods compared: Almeida [1], Tan et al. [20], Sankowski et al. [19], LIP, LIPSVM, and LIPPNN (images omitted).

Fig. 9. A graph comparison of equal error rates for iris recognition.


Fig. 10. A graph comparison of the decidability index for iris recognition.


The larger the separation, the lower the possibility of a false acceptance; as such, the method's ability to identify an imposter is high.
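Eq. (5) translates directly: given arrays of intra-class and inter-class HD scores, the decidability index is computed as below.

```python
import numpy as np

def decidability(intra_hd, inter_hd):
    """Eq. (5): separation between intra-class and inter-class HD scores."""
    mu_a, mu_b = np.mean(intra_hd), np.mean(inter_hd)
    var_a, var_b = np.var(intra_hd), np.var(inter_hd)
    return abs(mu_a - mu_b) / np.sqrt((var_a + var_b) / 2.0)

# Example: higher D means the genuine and imposter HD distributions overlap less.
# decidability(np.array([0.22, 0.25, 0.30]), np.array([0.45, 0.47, 0.50]))
```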

5. Conclusions

Existing methods of iris recognition are unable to correctly identify and classify reflections and non-reflections in RGB eye images. This limitation causes improper iris boundary segmentation because the edges of reflections are wrongly identified as iris edges. This study has proposed a combined LIPSVM method that uses the line intensity profile to identify reflections (a reflection is recognized when a pixel's red intensity is higher than its green and blue intensities) and the support vector machine to classify reflections and non-reflections. The combined method overcomes the inability of existing methods to remove reflections in RGB eye images. Its performance is proven for reflection classification, where a low equal error rate and a high decidability value have been achieved. Future studies will need to focus on improving the iris segmentation, since an iris's boundaries are neither circular nor elliptical in a non-cooperative environment. Also, iris segmentation is currently a very time-consuming process.

Acknowledgments

The authors highly appreciate the contributions of the University of Beira Interior for providing the UBIRIS.v2 database and of Masek for providing the code. This research has been funded by the GATES Scholars Foundation (GSF) of GATES BIOTECH Solution Sdn. Bhd. (Grant No. LTRGSF/SU/2011-04), the MyMaster Scholarship of the Ministry of Higher Education Malaysia, and Universiti Teknologi Malaysia through a Research University Grant (Project No. PD/2012/12448).

References

[1] P.D. Almeida, A knowledge-based approach to the iris segmentation problem, Image Vis. Comput. 28 (2) (2010) 238–245.
[2] Center for Identification Technology Research, WVU Off-Angle Iris Database, 2004. <http://www.citer.wvu.edu>.
[3] Y. Chen, M. Adjouadi, C. Han, J. Wang, A. Barreto, N. Rishe, J. Andrian, A highly accurate and computationally efficient approach for unconstrained iris segmentation, Image Vis. Comput. 28 (2) (2010) 261–269.
[4] J. Daugman, How iris recognition works, IEEE Trans. Circ. Syst. Video Technol. 14 (1) (2004) 21–30.
[5] M. Dobes, L. Machala, UPOL Iris Image Database, 2004. <http://phoenix.inf.upol.cz/iris/>.
[6] A. Ghanizadeh, A.A. Abarghouei, S. Sinaie, P. Saad, S.M. Shamsuddin, Iris segmentation using an edge detector based on fuzzy sets theory and cellular learning automata, Appl. Opt. 50 (19) (2011) 3191–3200.
[7] D.S. Jeong, J.W. Hwang, B.J. Kang, K.R. Park, C.S. Won, D.-K. Park, J. Kim, A new iris segmentation method for non-ideal iris images, Image Vis. Comput. 28 (2) (2010) 254–260.
[8] R.D. Labati, F. Scotti, Noisy iris segmentation with boundary regularization and reflections removal, Image Vis. Comput. 28 (2) (2010) 270–277.
[9] S.-T. Lee, T.-H. Yoon, K.-S. Kim, K.-D. Kim, W. Park, Removal of specular reflections in tooth color image by perceptron neural nets, in: Proceedings of the 2nd International Conference on Signal Processing Systems (ICSPS), Dalian, China, 2010, pp. V1-285–V1-289.
[10] R.A. Marino, F.H. Alvarez, L.H. Encinas, A crypto-biometric scheme based on iris-templates with fuzzy extractors, Inf. Sci. 195 (2012) 91–102.
[11] L. Masek, Recognition of Human Iris Patterns for Biometric Identification, 2003. <http://www.csse.uwa.edu.au/~pk/studentprojects/libor>.
[12] Multimedia University, MMU Iris Image Database, 2004. <http://pesona.mmu.edu.my/~cteo/>.
[13] P.J. Phillips, K.W. Bowyer, P.J. Flynn, Comments on the CASIA version 1.0 iris dataset, IEEE Trans. Pattern Anal. Mach. Intell. 29 (10) (2007) 1869–1870.
[14] N. Popescu-Bodorin, V.E. Balas, AI challenges in iris recognition. Processing tools for BATH iris image database, in: Proceedings of the 11th WSEAS International Conference on Automation and Information, World Scientific and Engineering Academy and Society (WSEAS), Iasi, Romania, 2010, pp. 116–121.
[15] H. Proenca, L.A. Alexandre, UBIRIS: a noisy iris image database, in: F. Roli, S. Vitulano (Eds.), Image Analysis and Processing – ICIAP 2005, Springer, 2005, pp. 970–977.
[16] H. Proenca, S. Filipe, R. Santos, J. Oliveira, L.A. Alexandre, The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance, IEEE Trans. Pattern Anal. Mach. Intell. 32 (8) (2010) 1529–1535.
[17] K. Roy, P. Bhattacharya, Iris recognition with support vector machines, in: D. Zhang, A. Jain (Eds.), Advances in Biometrics, Springer, 2005, pp. 486–492.
[18] K. Roy, P. Bhattacharya, R.C. Debnath, Multi-class SVM based iris recognition, in: Proceedings of the 10th International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, 2007, pp. 1–6.
[19] W. Sankowski, K. Grabowski, M. Napieralska, M. Zubert, A. Napieralski, Reliable algorithm for iris segmentation in eye image, Image Vis. Comput. 28 (2) (2010) 231–237.
[20] T. Tan, Z. He, Z. Sun, Efficient and robust iris segmentation of noisy iris images for non-cooperative iris recognition, Image Vis. Comput. 28 (2) (2010) 223–230.
[21] M. Vatsa, R. Singh, A. Noore, Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing, IEEE Trans. Syst. Man Cybern. Part B Cybern. 38 (4) (2008) 1021–1035.
[22] F. Wang, J. Han, X. Yao, Iris recognition based on multialgorithmic fusion, WSEAS Trans. Inf. Sci. Appl. 4 (12) (2007) 1415–1421.