
Conflict Detection and Bayesian Conditioning for estimating the Reliability of each LVQ Network in a group engaged at Iris Biometric Identification

Germano Vallesi, Anna Montesanto, Aldo Franco Dragoni Università Politecnica delle Marche (Italy)

[email protected], [email protected], [email protected]

Abstract

The main problem with iris biometric identification systems is the presence of noise in the image of the eye (eyelid, eyelashes, etc.). To remove it, many authors apply appropriate preprocessing to the image, but unfortunately this causes a loss of information. Our work aims at correctly recognizing the subject even in the presence of high rates of noise. The basic idea is to partition the iris image into 8 non-interleaved segments of the same size. Each segment is given to an LVQ network, which generates prototypes with a high resistance to noise. Notwithstanding this, the 8 LVQ nets may still disagree in identifying the subject. In this paper we apply a method developed by the “belief revision” community to identify conflicts and rearrange the degrees of reliability of each expert (the LVQ nets) through a Bayesian algorithm. This estimated ranking of reliability is used to take the final decision.

1. Introduction

Automated personal identification by biometrics has received considerable attention, with increasing emphasis on access control. Among biometric technologies, iris recognition is noted for its high reliability and is currently a subject of great interest in academia and industry [1, 2].

Irises are particularly advantageous for use in biometric recognition systems. The iris has been shown to be especially stable throughout a person’s life: its patterns are fixed from about one year of age and remain constant. The procedure is very quick and non-invasive, requiring only that a photograph be taken. The iris and the retina have been shown to have higher degrees of distinctiveness than hand or finger geometry. The actual pattern within the iris is not determined by genetics and is so random that an individual’s left and right irises are as different from each other as each is from the iris of a different individual. Even identical twins have completely different iris patterns. One iris contains more data than a person’s fingerprint, face, and hand combined. This data-richness means that it is possible to obtain an accurate pattern match even if the eye is partially obscured by eyelashes or eyelids.

Much work has been done on coding the human iris. According to the iris features utilized, these works can be grouped into three main categories: zero-crossing representation [3]; phase-based methods [4, 5]; texture analysis [2, 6]. In general, a typical iris recognition system includes four steps: iris imaging, iris detection, iris image quality assessment, and iris recognition. Our paper addresses the last two steps, working directly on the image of the iris with a pattern recognition technique obtained by combining neural networks (LVQ) [7] and Bayesian conditioning [8, 9, 10].

The iris database used for the training of the neural networks and for the test is the CASIA database [11].

2. Biometry

The French ophthalmologist Alphonse Bertillon was the first to propose the use of the iris pattern as a basis for personal identification [12], but most of the work has been done in the last decade. Daugman [1, 4] used multiscale quadrature wavelets to extract texture phase structure information from the iris to generate a 2048-bit iris code, and compared pairs of iris representations by computing their Hamming distance. He showed that, for identification, a Hamming distance lower than 0.34 with any of the iris templates in the database is sufficient. Wildes et al. [2, 13] represented the iris pattern using a four-level Laplacian pyramid, and the quality of matching was determined by the normalized correlation between the acquired iris image and the stored template. Boles and Boashash [14] used zero-crossings of the 1D wavelet transform at various resolution levels to distinguish the texture of the iris.


Sanchez-Reillo and Sanchez-Avila [15] provided a partial implementation of Daugman’s algorithm. Their further work, which developed the method of Boles and Boashash by using different distance measures (such as the Hamming and Euclidean distances) for matching, was reported in [3].

Table 1. Comparison of methods

Method | Matching process | Kind of feature | Result
Daugman [1] | Hamming distance | Binary | Perfect identification, overcomes the occlusion problem, but time-consuming.
Wildes et al. [2] | Normalized correlation | Image | Time-consuming matching process; can be used only in the identification phase, not recognition.
Boles and Boashash [14] | Two dissimilarity functions: learning and classification | 1D signature | Incomplete recognition rate, high EER, simple 1D feature vector, fast processing.
Sanchez-Reillo and Sanchez-Avila [15] | Euclidean and Hamming distances | 1D signature | Medium classification rate, cannot cope with the occlusion problem, simple 1D features.
Lim et al. [6] | LVQ neural network | 1D binary vector | Poor recognition rate, complicated classifier, big EER, occlusion problem.
Ma et al. [17] | Nearest feature line | 1D real-valued feature vector of length 384 | Time-consuming feature extraction, cannot cope with the occlusion problem.
Ma et al. [16] | Weighted Euclidean distance | 1D real-valued feature vector of length 160 | Big EER, poor recognition rate, cannot cope with the occlusion problem.
Ma et al. [21] | XOR operation | 1D integer-valued feature vector of length 660 | Improves their previous works, good recognition rate, claims 100% correct recognition, but cannot overcome occlusions from the upper eyelid and eyelashes.

Lim et al. [6] used the 2D Haar wavelet and quantized the 4th-level high-frequency information to form an 87-bit binary code as the feature vector, and applied an LVQ neural network for classification. Tisse et al. [5] constructed the analytic image (a combination of the original image and its Hilbert transform) to demodulate the iris texture. Ma et al. [16, 17, 18] adopted a well-known texture analysis method to capture both global and local details in the iris; they studied Gabor filter families for feature extraction in several papers. Bae et al. [19] projected the iris signal onto a bank of basis vectors derived by independent component analysis and quantized the resulting projection coefficients as features. Nam et al. [20] exploited scale-space filtering to extract unique features based on the direction of concavity from an iris image. Ma et al. [21] used sharp variation points in the iris: they constructed a one-dimensional intensity signal and used a particular class of wavelets, with the vector of the position sequence of local sharp variation points as features.

Table 1 [22] shows the kind of features, the matching strategy, and the results of some of these works.

3. Learning Vector Quantization (LVQ)

Learning vector quantization (LVQ) was originally introduced by Linde et al. [23] and Gray [24] as a tool for image data compression, and was later adapted by Kohonen [7] for pattern recognition. It can be seen as a special, supervised case of the SOM: LVQ is a supervised neural network that uses class information to move the weight vectors slightly, so as to improve the quality of the classifier decision regions.

In our system we used LVQ1; it consists of two layers, input and output. The dimension of the input layer is the same as that of the input vector. Each node in the output layer represents one output class. The activation of each output node depends on the Euclidean distance between the input vector and the node’s weight vector.

During the training, the weight vector is adjusted according to the output class and the target class, where the target class is the desired output. Let wc be the weight vector of the winning output node and x the input vector. The weights are adjusted according to the following equations:

w_c(n+1) = w_c(n) + \alpha(n) \, s(n) \, [x(n) - w_c(n)], \quad 0 < \alpha(n) < 1    (1)

s(n) = \begin{cases} +1 & \text{if the classification is correct} \\ -1 & \text{if the classification is wrong} \end{cases}    (2)

where α(n) is the learning rate at the nth epoch. It is desirable that it decreases monotonically with the number of iterations n. The weights of the other nodes remain unchanged. Using this algorithm, the weight vector of the winning node gets closer to the input vector as training progresses.
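As an illustration of Eqs. (1)-(2), here is a minimal NumPy sketch of a single LVQ1 step; the function name and array layout are ours, not part of the original system:

import numpy as np

def lvq1_update(weights, proto_labels, x, x_label, alpha):
    """One LVQ1 step (Eqs. 1-2): move the winning prototype toward the
    input if the classes agree, away from it otherwise."""
    # winning prototype = smallest Euclidean distance to the input
    c = np.argmin(np.linalg.norm(weights - x, axis=1))
    s = 1.0 if proto_labels[c] == x_label else -1.0
    weights[c] += alpha * s * (x - weights[c])
    return weights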

The LVQ algorithms are sensitive to the starting point, i.e. the initial values of prototype vectors, which can affect both the speed of convergence and the final recognition error. Considering the high speed of the k-means algorithm, it is recommended that the k-means algorithm be implemented at the beginning of the LVQ training algorithm, and that the resultant cluster centres be assigned as the initial prototype vectors. This initializing approach could be extremely valuable for databases with a huge number of samples or with a slow rate of convergence.
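A hedged sketch of this initialization follows; the paper does not state whether k-means is run per class, so this example assumes it is (a common choice) and relies on scikit-learn:

import numpy as np
from sklearn.cluster import KMeans

def init_prototypes(X, y, prototypes_per_class):
    """Initialize LVQ prototypes with k-means cluster centres, run per class."""
    weights, labels = [], []
    for cls in np.unique(y):
        km = KMeans(n_clusters=prototypes_per_class, n_init=10).fit(X[y == cls])
        weights.append(km.cluster_centers_)
        labels.extend([cls] * prototypes_per_class)
    return np.vstack(weights), np.array(labels)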

The performance of LVQ algorithms depends on the size of the network, the database, the training algorithm, and the initial point.

4. Distributed Belief Revision


In this context we have a collective activity of a set of interacting agents (the LVQ neural networks), in which each component contributes its local beliefs. The integration of the different opinions is performed not by an external supervisor, but by the entire group through an election mechanism, in which each agent exchanges information with the other group components using a local belief revision mechanism.

Let S = {S1, …, Sn} be the set of sources and T = {<S1, R1>, …, <Sn, Rn>} the reliability set, where Ri (a real number in [0, 1]) is the reliability of Si, interpreted as the a priori probability that Si is reliable. Shafer and Srivastava [25] defined the a priori reliability of a source exactly in this way, as the probability that the source is reliable. These degrees of probability are “translated” by the Theory of Evidence into belief-function values on the given pieces of information. However, after all the sources have given their information, it is possible to estimate their a posteriori degree of reliability from the cross-examination of their outputs. To be congruent with the a priori reliability, the a posteriori reliability must also be a probability value, not a belief-function value. For this reason Dragoni et al. [8, 9, 10] adopted Bayesian conditioning instead of the Theory of Evidence in a decision aid for judicial proceedings to help assess witness deception; it is used to eliminate information of low credibility and to find maximally consistent subsets of evidence. This technique considers, for each Φ ⊆ S, the hypothesis that exactly the sources belonging to Φ are reliable. If the sources are independent, then the probability of this hypothesis is:

R(\Phi) = \prod_{S_i \in \Phi} R_i \cdot \prod_{S_i \notin \Phi} (1 - R_i)    (3)

This combined reliability is calculated for every subset of S, and it holds that:

\sum_{\Phi \in 2^S} R(\Phi) = 1    (4)

Possibly, the sources belonging to a certain Φ cannot all be considered reliable because they gave contradictory information. In this case, the combined reliabilities of the remaining subsets of S are subjected to Bayesian conditioning so that they sum up again to 1, i.e., each of them is divided by 1 − R(Φ). In the case where there are several subsets of S (Φ1, …, Φm) containing sources which cannot all be considered reliable, then R(Φ) = R(Φ1) + … + R(Φm). The revised reliability NRi of a source Si is defined as the sum of the conditioned combined reliabilities of the surviving subsets of S containing Si. An important feature of this way of recalculating the sources’ reliability is that if Si is involved in contradictions then NRi ≤ Ri, otherwise NRi = Ri.
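The whole reliability-revision step can be summarized by the following illustrative sketch of Eqs. (3)-(4) and the conditioning; the contradictory() predicate, which decides whether a subset of sources gave conflicting statements, is a hypothetical interface and not something defined in the paper:

from itertools import combinations

def revise_reliabilities(R, contradictory):
    """Bayesian conditioning of source reliabilities.

    R: dict source -> a priori reliability in [0, 1]
    contradictory: callable taking a frozenset of sources and returning True
                   if they cannot all be reliable (conflicting statements)
    Returns a dict source -> revised reliability NR_i.
    """
    sources = list(R)
    # combined reliability R(Phi) of every subset Phi of S (Eq. 3)
    subsets = {}
    for k in range(len(sources) + 1):
        for phi in combinations(sources, k):
            phi = frozenset(phi)
            r = 1.0
            for s in sources:
                r *= R[s] if s in phi else (1.0 - R[s])
            subsets[phi] = r
    # drop subsets whose sources contradict each other, renormalize the rest
    # (equivalent to dividing by 1 - R(Phi_1) - ... - R(Phi_m))
    surviving = {phi: r for phi, r in subsets.items() if not contradictory(phi)}
    total = sum(surviving.values())
    # NR_i: sum of conditioned reliabilities of surviving subsets containing S_i
    return {s: sum(r for phi, r in surviving.items() if s in phi) / total
            for s in sources}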

5. Results

This work presents an iris recognition system based on neural networks; unlike previous works [6], the inputs of the networks are the iris images themselves and not a transformation of them (2D Haar wavelet transform, 2D Gabor filter, etc.). We want to create a system able to control access to a resource (whatever it may be) that is accessible only by a small group of users, each with a different level of access to the resource itself.

The irises used come from the CASIA database: 12 subjects were taken at random from the 411 in the database to form the core group on which the work was carried out.

Figure 1. Flow chart of the project

These images have a dimension of 512×64 pixels and therefore require large amounts of memory for training. To solve this problem, the iris image was cut into 8 equal parts of 64×64 pixels, which are given as input to 8 LVQ networks; in this way the training can be done even on PCs with few resources at their disposal. Using 8 separate LVQ networks to identify a subject requires an assessment tool for the 8 replies produced by the networks. Whenever the networks are in conflict with each other, this instrument must be able to determine which of the statements produced by the networks is the most reliable. This assessment tool is a Bayesian algorithm, whose goal is to assess the reliability of each network in order to establish a winner in the event of a conflict.

The scheme of the work proposed in this paper is shown in Figure 1. As shown in the figure, it consists of three levels: iris recognition and cut; training of the LVQi neural networks; Bayesian conditioning. A detailed description of these steps is given below.


5.1. Iris Recognition

The first preprocessing step is to determine the iris edges, which include the inner (with the pupil) and outer (with the sclera) edges, Figure 2(c). Both the inner boundary and the outer boundary of a typical iris can be approximately taken as circles, but these two circles are usually not concentric.

The Canny method [26] was applied to these images for edge detection. The edge image was then thresholded. This operation may produce an edge image with broken points, spurious edges, and varying thicknesses. This image was cleaned using morphological operations: random dots were removed, small edge lines were removed, and broken lines were connected via the “close” procedure of binary morphology. Figure 2(b) is the edge image after these procedures. There is a clear circle in the edge image that represents the outer edge of the pupil (the inner boundary of the iris). The edges above and below the circle are the edges of the eyelids and eyelashes. Then the Hough circle transform [27] was applied to find the best circle and estimate the circle parameters [centre (x0, y0) and radius r0] for the pupillary and iris boundaries, Figure 2(c).
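A rough sketch of this step with OpenCV is given below; the Canny thresholds, the morphology kernel, and the Hough parameters are illustrative placeholders, not the values used in the paper:

import cv2

def find_pupil_circle(eye_gray):
    """Locate the pupil/iris boundary: Canny edges, a morphological "close"
    to reconnect broken lines, then a Hough circle transform."""
    edges = cv2.Canny(eye_gray, 50, 150)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=20, minRadius=20, maxRadius=120)
    if circles is None:
        return None
    x0, y0, r0 = circles[0, 0]   # strongest circle: centre (x0, y0) and radius r0
    return (x0, y0), r0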


Figure 2. Edge detection

When the iris region is successfully segmented from an eye image, the next stage is to find a transformation that projects the iris region into a fixed two-dimensional area, so that it can be arranged for the comparison process. The normalization process projects the iris region into a constant-dimension ribbon so that two images of the same iris under different conditions have characteristic features at the same spatial location. Daugman [1, 4] suggested a normal Cartesian-to-polar transform that remaps each pixel in the iris area into a pair of polar coordinates (r, θ), where r and θ lie in the intervals [0, 1] and [0, 2π], respectively. Our system works with a ribbon of 512×64 pixels (512 pixels along θ and 64 pixels along r), Figure 3.

Figure 3. Iris Polar Transformation
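A minimal sketch of this unwrapping onto a 512×64 ribbon follows; for simplicity it assumes the pupil and iris boundaries are concentric circles, whereas a full implementation would interpolate between the two detected circles:

import numpy as np

def unwrap_iris(eye_gray, centre, r_pupil, r_iris, width=512, height=64):
    """Cartesian-to-polar normalization: sample the annulus between the pupil
    and iris boundaries onto a height x width (r x theta) ribbon."""
    x0, y0 = centre
    thetas = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, height)
    ribbon = np.empty((height, width), dtype=eye_gray.dtype)
    for i, r in enumerate(radii):
        xs = np.clip((x0 + r * np.cos(thetas)).astype(int), 0, eye_gray.shape[1] - 1)
        ys = np.clip((y0 + r * np.sin(thetas)).astype(int), 0, eye_gray.shape[0] - 1)
        ribbon[i] = eye_gray[ys, xs]
    return ribbon   # 64 rows (r) by 512 columns (theta)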

This image is now cut into 8 squares (64×64 pixels), and with these new images we train the corresponding 8 LVQ neural networks.
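The cut itself is a simple slicing of the ribbon; a sketch assuming the 64×512 (r × θ) array produced above:

def cut_ribbon(ribbon):
    """Split the 64x512 iris ribbon into eight 64x64 segments, one per LVQ network."""
    return [ribbon[:, i * 64:(i + 1) * 64] for i in range(8)]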

5.2. LVQ Training

LVQ is a supervised neural network that uses class information to move the weight vectors slightly, so as to improve the quality of the classifier decision regions.

The number of nodes in each layer depends on the input data and the classes of the system. The input neurons are as many as the pixels of the input iris image (64×64) of the training pattern. The number of output nodes is 108, in order to assign the 12 subjects in an optimal way.

An input vector X, composed of the pixel values of the image, is picked from the training set. If the class labels of the input vector X and of a weight vector W of a node agree, then the weight vector W is moved in the direction of the input vector X. If, on the other hand, the class labels of the input vector X and the weight vector W disagree, then the weight vector W is moved away from the input vector X (Eqs. 1, 2).

The training was done in series of 10,000 epochs up to a maximum of 100,000 epochs.

The learning rate is calculated with the following equation:

\alpha(t) = \eta \, e^{-\beta t}    (5)

where α(t) decreases monotonically with the number of iterations t (η = 0.25 and β = 0.0000099, values obtained after a series of tests to optimize the networks). This equation (Eq. 5) was chosen from among those most often used in the literature because it ensures a suitable reduction of the learning rate.
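Putting Eq. (5) together with the update rule of Section 3, a skeleton of the training loop might look as follows; the η, β values and epoch counts come from the text, everything else is illustrative:

import numpy as np

ETA, BETA = 0.25, 0.0000099   # eta and beta reported in the text

def train_lvq(weights, proto_labels, X, y, epochs=10000):
    """Train one LVQ network; at epoch t the learning rate is eta * exp(-beta * t)."""
    for t in range(epochs):
        alpha = ETA * np.exp(-BETA * t)          # Eq. (5)
        for x, x_label in zip(X, y):
            # LVQ1 step (Eqs. 1-2): attract or repel the winning prototype
            c = np.argmin(np.linalg.norm(weights - x, axis=1))
            s = 1.0 if proto_labels[c] == x_label else -1.0
            weights[c] += alpha * s * (x - weights[c])
    return weights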

The training set is composed of 12 images (left eye, taken from the CASIA database), one per subject, and the test set is composed of another 60 images from the same subjects (5 for each subject).

Table 2 shows the performance obtained on the training and test sets for each network.

Table 2. Performance of the LVQ networks

              LVQ1    LVQ2    LVQ3    LVQ4    LVQ5    LVQ6    LVQ7    LVQ8
Training   T  100%    100%    100%    100%    100%    100%    100%    100%
           F    0%      0%      0%      0%      0%      0%      0%      0%
Test set   T  61.7%   68.3%   70%     63.3%   38.3%   45%     50%     51.7%
           F  38.3%   31.7%   30%     36.7%   61.7%   55%     50%     48.3%

(T = rate of true identifications, F = rate of false identifications)

5.3. Bayesian Conditioning

In this model of ANN, each node is more or less strongly associated (via Euclidean distance) with a subject of the training set. During the test, each node of each network has a distance associated with the input. As the response of each network we take the 3 nodes closest to the input. The LVQ networks do not always agree in their responses: in some cases one or more of them recognize one subject instead of another (due to the presence of noise). To overcome these situations of disagreement between the networks, Bayesian conditioning [8, 9, 10, 25] is introduced; it is used to find maximally consistent subsets of the statements produced by the LVQ networks, eliminating all information with low credibility and selecting only the statements made by the networks with greater reliability.

Initially all networks have the same reliability; for every conflict, the networks that fall into the minority lose credibility with respect to the others, so that with every new image the reliabilities of the networks are updated.

Once the reliability of each network has been calculated, the identity of the subject is established through two methods: the Inclusion Based algorithm and the Sum of Reliabilities. The Inclusion Based algorithm sorts and selects the elements of the conflict set B generated by the networks (for example, B = {B1; …; Bn} = {Node2 ← (LVQ2, LVQ3, LVQ5, LVQ7); Node7 ← (LVQ1, LVQ3); Node1 ← (LVQ2, LVQ5, LVQ8)}).

The Inclusion Based method always eliminates the least credible among conflicting pieces of knowledge.

Let B' = B'_1 ∪ … ∪ B'_n and B'' = B''_1 ∪ … ∪ B''_n be two consistent subsets of B, where B'_i = B' ∩ B_i and B''_i = B'' ∩ B_i; then B'' ≤ B' iff there exists a stratum i such that B'_i ⊃ B''_i and, for any j < i, B'_j = B''_j.

The Sum of Reliabilities method calculates the sum of the reliabilities of each subset and sorts these results; the subset with the maximum sum is the winner.
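As a small illustration of the Sum of Reliabilities decision, assume the conflict set is represented as a mapping from each candidate node to the networks supporting it; the reliability values below are invented for the example:

def winner_by_reliability_sum(conflict_set, reliability):
    """Sum of Reliabilities: the candidate whose supporting networks have the
    highest total (revised) reliability wins."""
    return max(conflict_set,
               key=lambda cand: sum(reliability[n] for n in conflict_set[cand]))

# Conflict set B from the text; NR values are made-up placeholders.
B = {"Node2": ["LVQ2", "LVQ3", "LVQ5", "LVQ7"],
     "Node7": ["LVQ1", "LVQ3"],
     "Node1": ["LVQ2", "LVQ5", "LVQ8"]}
NR = {"LVQ1": 0.90, "LVQ2": 0.80, "LVQ3": 0.85, "LVQ4": 0.80,
      "LVQ5": 0.50, "LVQ6": 0.55, "LVQ7": 0.60, "LVQ8": 0.60}
print(winner_by_reliability_sum(B, NR))   # -> "Node2" with these placeholder values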

To test our work, we took 100 samples at random from the 48 test-set images of the 12 subjects. For each random sample, all the LVQ networks produce their statements (3 nodes each).

Table 3 shows the results obtained from the application of Bayesian conditioning with the Inclusion Based and Sum of Reliabilities methods on the test set.

Table 3. Accuracy rate

Method                     True identification   False identification
Average of LVQ networks    56.1%                 43.9%
Sum of Reliabilities       62%                   38%
Inclusion Based            70%                   30%

(results on the test set)

6. Conclusion

This paper shows that, even though some of the LVQ networks that constitute our group of experts have low percentages of correct recognition on the test set, the application of a Bayesian conditioning algorithm to this set of neural networks is able to obtain good results in the identification of subjects.

The use of Inclusion Based instead of Sum of Reliabilities has proven to be the better method for choosing the winner in cases of conflict between networks (Table 3).

The reliabilities calculated with the Bayesian algorithm after the test set confirm the trends of the LVQ networks applied directly to the test set (Table 2): the first 4 networks (LVQ1, LVQ2, LVQ3, LVQ4) are more reliable than the remaining 4, which are subject to more noise.

7. References

[1] J.G. Daugman, “High confidence visual recognition of persons by a test of statistical independence”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, 1993, pp. 1148–1161.
[2] R.P. Wildes, J.C. Asmuth, G.L. Green, et al., “A machine-vision system for iris recognition”, Machine Vision and Applications, vol. 9, no. 1, 1996, pp. 1–8.
[3] C. Sanchez-Avila and R. Sanchez-Reillo, “Iris-based biometric recognition using dyadic wavelet transform”, IEEE Aerospace and Electronic Systems Magazine, vol. 17, no. 10, 2002, pp. 3–6.
[4] J.G. Daugman, “Demodulation by complex-valued wavelets for stochastic pattern recognition”, International Journal of Wavelets, Multiresolution, and Information Processing, vol. 1, no. 1, 2003, pp. 1–17.
[5] C. Tisse, L. Martin, L. Torres, and M. Robert, “Person identification technique using human iris recognition”, in Proceedings of the 15th International Conference on Vision Interface (VI ’02), Calgary, Canada, May 2002, pp. 294–299.
[6] S. Lim, K. Lee, O. Byeon, and T. Kim, “Efficient iris recognition through improvement of feature vector and classifier”, ETRI Journal, vol. 23, no. 2, 2001, pp. 61–70.
[7] T. Kohonen, “Learning vector quantization”, in Self-Organising Maps, Springer Series in Information Sciences, Springer-Verlag, Berlin, Heidelberg, New York, 3rd ed., 1995.
[8] A.F. Dragoni, “Belief revision: from theory to practice”, The Knowledge Engineering Review, 2001, pp. 147–179. http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=70969&fulltextType=RV&fileId=S026988899700204X
[9] A.F. Dragoni, P. Giorgini, and E. Nissan, “Belief revision as applied within a descriptive model of jury deliberations”, Information and Communications Technology Law, 2001, pp. 53–65.


[10] A.F. Dragoni and S. Animali, “Maximal Consistency and Theory of Evidence in the Police Inquiry Domain”, Cybernetics and Systems: An International Journal, vol. 34, no. 6-7, Taylor & Francis, 2003, pp. 419–465.
[11] CASIA Iris Image Database, Institute of Automation (IA), Chinese Academy of Sciences, Beijing, China. http://www.cbsr.ia.ac.cn/IrisDatabase.htm
[12] A. Bertillon, “La couleur de l’Iris”, Rev. of Science, vol. 36, no. 3, France, 1885, pp. 65–73.
[13] R.P. Wildes, “Iris recognition: an emerging biometric technology”, Proceedings of the IEEE, vol. 85, no. 9, 1997, pp. 1348–1363.
[14] W. Boles and B. Boashash, “A human identification technique using images of the iris and wavelet transform”, IEEE Transactions on Signal Processing, vol. 46, no. 4, 1998, pp. 1085–1088.
[15] R. Sanchez-Reillo and C. Sanchez-Avila, “Iris recognition with low template size”, in Proceedings of the 3rd International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA ’01), Halmstad, Sweden, June 2001, pp. 324–329.
[16] L. Ma, Y. Wang, and T. Tan, “Personal iris recognition based on multichannel Gabor filtering”, in Proceedings of the 5th Asian Conference on Computer Vision (ACCV ’02), Melbourne, Australia, January 2002.
[17] L. Ma, Y. Wang, and T. Tan, “Iris recognition using circular symmetric filters”, in Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, Quebec City, Quebec, Canada, August 2002, pp. 414–417.
[18] L. Ma, T. Tan, Y. Wang, and D. Zhang, “Personal identification based on iris texture analysis”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, 2003, pp. 1519–1533.

[19] K. Bae, S.I. Noh, and J. Kim, “Iris feature extraction using independent component analysis”, in Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA ’03), Guildford, UK, June 2003, pp. 838–844.
[20] K.W. Nam, K.L. Yoon, J.S. Bark, and W.S. Yang, “A feature extraction method for binary iris code construction”, in Proceedings of the 2nd International Conference on Information Technology for Application (ICITA ’04), Harbin, China, January 2004.
[21] L. Ma, T. Tan, Y. Wang, and D. Zhang, “Efficient iris recognition by characterizing key local variations”, IEEE Transactions on Image Processing, vol. 13, no. 6, 2004, pp. 739–750.
[22] A. Poursaberi and B.N. Araabi, “Iris Recognition for Partially Occluded Images: Methodology and Sensitivity Analysis”, EURASIP Journal on Applied Signal Processing, 2006, pp. 1–12.
[23] Y. Linde, A. Buzo, and R.M. Gray, “An algorithm for vector quantizer design”, IEEE Transactions on Communications, COM-28, 1980, pp. 84–95.
[24] R.M. Gray, “Vector quantization”, IEEE ASSP Magazine, vol. 1, 1984, pp. 4–29.
[25] G. Shafer and S. Srivastava, “The Bayesian and belief-function formalisms: a general perspective for auditing”, in Readings in Uncertain Reasoning, Morgan Kaufmann, 1990.
[26] J. Canny, “A computational approach to edge detection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8, 1986, pp. 679–714.
[27] P.V.C. Hough, “Methods and means to recognize complex patterns”, U.S. Patent 3,069,654, 1962.
