
Improved fingerprint identification with supervised filtering enhancement

Abdullah Bal, Aed M. El-Saba, and Mohammad S. Alam

An important step in the fingerprint identification system is the reliable extraction of distinct features from fingerprint images. Identification performance is directly related to the enhancement of fingerprint images during or after the enrollment phase. Among the various enhancement algorithms, artificial-intelligence-based feature-extraction techniques are attractive owing to their adaptive learning properties. We present a new supervised filtering technique that is based on a dynamic neural-network approach to develop a robust fingerprint enhancement algorithm. For pattern matching, a joint transform correlation (JTC) algorithm has been incorporated that offers high processing speed for real-time applications. Because the fringe-adjusted JTC algorithm has been found to yield a significantly better correlation output compared with alternate JTCs, we used this algorithm for the identification process. Test results are presented to verify the effectiveness of the proposed algorithm. © 2005 Optical Society of America

OCIS codes: 100.2980, 070.5010, 070.6110, 100.4550.

1. Introduction

A fingerprint identification system is based on the matching of minute details of the ridge–valley structures of fingerprints. In the literature, a total of 18 different types of local ridge–valley descriptions have been identified.^1 Among them, ridge endings and ridge bifurcations, which are usually called minutiae, are the two most prominent structures used in a fingerprint identification system. In general, the efficient extraction of minutiae from digital fingerprint images is an extremely difficult task. The performance of a minutiae-extraction algorithm is closely tied to the quality of the digital fingerprint-image input. Ideally, the ridge–valley structures in a fingerprint image are well defined: each ridge is separated by two parallel narrow valleys, and each valley is separated by two narrow ridges. In practice, however, such well-defined ridge–valley structures are not always visible in scanned fingerprint images. In general, a fingerprint image is corrupted by various kinds of noise, such as creases, smudges, and holes. Therefore, algorithms that can alleviate these types of problems by increasing the visibility of minutiae are needed.^1

Recently, artificial-intelligence-based techniques such as Hopfield neural networks and cellular neural networks have been developed for image-enhancement and feature-extraction processes.^{2–4} Essential to these studies are filtering approaches that use adjustable weight coefficients for various input images. Among these techniques, the supervised filtering technique has been found to be attractive owing to its fast processing speed and architectural simplicity. In this technique, the basic filtering architecture is incorporated into a dynamic neuron. After the training step, the adjusted filter coefficients are used to enhance fingerprint images. The enhanced fingerprint images are then introduced in a fringe-adjusted joint transform correlator (JTC)^5 for pattern matching, because the fringe-adjusted JTC has shown remarkable promise for real-time pattern-recognition applications. Unlike matched filtering, the JTC technique avoids complex filter fabrication, relaxes the meticulous alignment of system components, and provides near-real-time parallel Fourier transformation of the reference image and the unknown input scene.

In the following sections, we describe the proposed fingerprint enhancement algorithm and identification system in detail. Section 2 addresses the supervised filtering technique. The fringe-adjusted JTC-based identification system is introduced in Section 3. Section 4 contains simulation results for the fingerprint enhancement and the identification system. Finally, concluding comments are given in Section 5.

The authors are with the Department of Electrical and Computer Engineering, University of South Alabama, Mobile, Alabama 36688-0002. M. S. Alam's e-mail address is [email protected].

Received 16 May 2004; revised manuscript received 14 October 2004; accepted 14 October 2004.

0003-6935/05/050647-08$15.00/0
© 2005 Optical Society of America



2. Supervised Filtering Technique

Neural networks can be classified into recurrent and feed-forward categories. Feed-forward networks do not have feedback elements; the output is calculated directly from the input through feed-forward connections. In recurrent networks, the output depends not only on the current input to the network but also on the current or previous outputs or states of the network. For this reason, recurrent networks are more powerful than feed-forward networks and are extensively used in control, optimization, and signal-processing applications. By utilizing the aforementioned properties, we developed a new supervised filtering technique that incorporates the recurrent neural-network and filtering approaches. Instead of weight matrices and weighted summations, supervised filtering employs a fixed-size filter mask and a convolution operation, as shown in Fig. 1.

The recurrent flow of supervised filtering is described by convolution of the input with the filter mask, followed by addition of the scalar bias value and application of the nonlinear activation function. Mathematically, this can be expressed as

$$a_{m,n}(t+1) = f\!\left( \sum_{i=-s}^{s} \sum_{j=-s}^{s} h_{i,j}\, a_{m+i,\,n+j}(t) + b \right), \qquad (1)$$

where $h_{i,j}$ is the filter mask that is designed for a size of $s \times s$ pixels, $b$ is the scalar bias value, $f$ is the nonlinear activation function, $t$ is the iteration step, and $a_{m,n}(t+1)$ and $a_{m,n}(t)$ are the outputs and inputs of the system, respectively. In this study, a $3 \times 3$ filter mask ($s \times s$) is selected. Initially, the input image is defined as

$$a_{m,n}(0) = p, \qquad (2)$$

where $p$ is the entire input image. The activation function can be formulated as a nonlinear piecewise function, expressed as

$$f(x) = \begin{cases} -1, & x < -1, \\ x, & -1 \le x \le 1, \\ 1, & x > 1. \end{cases} \qquad (3)$$
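For concreteness, the activation of Eq. (3), together with the derivative that appears later in Eqs. (11)–(13), can be written in a few lines of NumPy. This is a minimal sketch, not code from the paper; the function names are ours, and the derivative is taken as 1 inside the linear region and 0 in saturation:

```python
import numpy as np

def activation(x):
    # Piecewise-linear activation of Eq. (3): clamp values to [-1, 1].
    return np.clip(x, -1.0, 1.0)

def activation_deriv(x):
    # Derivative of Eq. (3): 1 in the linear region, 0 where saturated.
    return ((x > -1.0) & (x < 1.0)).astype(float)
```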

In the training of the supervised filtering, the goal is to minimize a cost function $E$. The minimization is accomplished by adjustment of the filter coefficients in an appropriate manner. The cost (error) function can be defined by use of the actual output, $a_{m,n}(t+1)$, and the desired output, $d_{m,n}$, as

$$E = \frac{1}{2} \sum_{m} \sum_{n} \left[ a_{m,n}(t+1) - d_{m,n} \right]^2. \qquad (4)$$

The gradient-descent algorithm^6 uses the derivative of the error function to minimize the cost function. For each iteration, the filter coefficients and the bias value are updated as follows:

$$h(t+1) = h(t) - \eta \frac{\partial E(t)}{\partial h(t)}, \qquad (5)$$

$$b(t+1) = b(t) - \eta \frac{\partial E(t)}{\partial b(t)}, \qquad (6)$$

where $\eta$ represents the learning rate. Because the convolution operation offers multidimensional multiplication, supervised filtering runs as a single-layer neural network. Thus the partial derivatives in Eqs. (5) and (6) can be conveniently computed with the chain rule of calculus, given by

$$\frac{\partial E}{\partial h} = \frac{\partial E}{\partial c(t)} \frac{\partial c(t)}{\partial h}, \qquad (7)$$

$$\frac{\partial E}{\partial b} = \frac{\partial E}{\partial c(t)} \frac{\partial c(t)}{\partial b}. \qquad (8)$$

The second term in Eqs. (7) and (8) can easily be computed, because the net input $c$ is an explicit function of the coefficients and bias value in that iteration:

$$c_{m,n}(t) = \sum_{i=-s}^{s} \sum_{j=-s}^{s} h_{i,j}\, a_{m+i,\,n+j}(t) + b. \qquad (9)$$

Therefore

$$\frac{\partial c(t)}{\partial h} = a(t), \qquad \frac{\partial c(t)}{\partial b} = 1. \qquad (10)$$

The first term in Eqs. (7) and (8) is called the sensitivity of $E$, which is calculated by use of the following equation:

$$\frac{\partial E}{\partial c(t)} = -2 f'(c(t)) \big( d - a(t+1) \big), \qquad (11)$$

where $f'$ represents the derivative of the activation function, $d$ is the desired output, and $a(t+1)$ is the actual output of the network at iteration $(t+1)$.^6

Fig. 1. Supervised filtering architecture.


Finally, the updated coefficients and bias value are rewritten by use of Eqs. (10) and (11) as

$$h(t+1) = h(t) + 2 \eta f'(c(t))\, a_{m,n}(t) \big( d - a(t+1) \big), \qquad (12)$$

$$b(t+1) = b(t) + 2 \eta f'(c(t)) \big( d - a(t+1) \big). \qquad (13)$$

In contrast to a feed-forward neural network, the proposed filtering technique uses consecutive iteration values, called recurrent flow, as shown in Eqs. (12) and (13). When the learning algorithm converges to the minimum error, the last-updated filter and bias coefficients are saved for the test phase of supervised filtering.^{4,6}
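To show how Eqs. (1)–(13) fit together, the following NumPy/SciPy sketch trains a single $3 \times 3$ supervised filter by gradient descent. It is our minimal reading of the paper, not the authors' code: the half-width parameterization, learning rate, iteration count, edge handling (nearest-neighbor padding and periodic shifts via np.roll), and the accumulation of the elementwise update of Eq. (12) over all pixels are assumptions.

```python
import numpy as np
from scipy.ndimage import correlate

def train_supervised_filter(p, d, s=1, lr=0.01, iters=200):
    """Gradient-descent training of the supervised filter, Eqs. (1)-(13).

    p : input fingerprint image, scaled to [-1, 1]
    d : desired (enhanced) output image, same shape
    s : half-width of the mask, so the mask spans indices -s..s
    """
    rng = np.random.default_rng(0)
    h = rng.uniform(-0.1, 0.1, (2 * s + 1, 2 * s + 1))  # filter mask
    b = 0.0                                             # scalar bias
    a = p.copy()                                        # a(0) = p, Eq. (2)
    for _ in range(iters):
        # Net input c(t), Eq. (9): sliding sum of h_{i,j} a_{m+i,n+j} plus bias.
        c = correlate(a, h, mode="nearest") + b
        a_next = np.clip(c, -1.0, 1.0)                  # Eqs. (1) and (3)
        # Sensitivity of Eq. (11): f'(c(t)) (d - a(t+1)).
        sens = np.where(np.abs(c) < 1.0, 1.0, 0.0) * (d - a_next)
        # Eq. (12): each coefficient accumulates the correspondingly
        # shifted input, weighted by the sensitivity.
        for i in range(-s, s + 1):
            for j in range(-s, s + 1):
                shifted = np.roll(a, shift=(-i, -j), axis=(0, 1))
                h[i + s, j + s] += 2.0 * lr * np.sum(sens * shifted)
        b += 2.0 * lr * np.sum(sens)                    # Eq. (13)
        a = a_next                                      # recurrent flow
    return h, b
```

After training, enhancement of a new fingerprint amounts to iterating Eq. (1) with the saved h and b.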

3. Fringe-Adjusted Joint Transform Correlation

Fringe-adjusted JTC has shown significant promise for real-time optical pattern recognition and object-tracking applications.^{7–9} Unlike matched filtering, JTC avoids complex filter fabrication, relaxes the meticulous alignment of system components, and provides near-real-time parallel Fourier transformation of the reference image and the unknown input scene.^{10} The input of the JTC includes the reference and the unknown input images, which can be formulated as

$$f(x, y) = r(x, y + y_0) + \sum_{i=1}^{n} t_i(x - x_i,\; y - y_i) + n(x, y - y_0), \qquad (14)$$

where $r(x, y + y_0)$ represents the reference image and $t(x, y - y_0)$ represents the input scene containing $n$ targets $t_1(x - x_1, y - y_1), t_2(x - x_2, y - y_2), \ldots, t_n(x - x_n, y - y_n)$, corrupted by noise $n(x, y - y_0)$. In the JTC architecture shown in Fig. 2, lens 2 performs a two-dimensional (2D) Fourier transform of the input joint image, and the corresponding joint power spectrum (JPS) captured by the CCD camera is given by

$$\begin{aligned}
|F(u, v)|^2 ={}& |R(u, v)|^2 + \sum_{i=1}^{n} |T_i(u, v)|^2 + |N(u, v)|^2 \\
&+ 2 \sum_{i=1}^{n} |T_i(u, v)|\,|R(u, v)| \cos[\phi_{t_i}(u, v) - \phi_r(u, v) - u x_i - v y_i - 2 v y_0] \\
&+ 2\, |R(u, v)|\,|N(u, v)| \cos[\phi_r(u, v) - \phi_n(u, v) - 2 v y_0] \\
&+ 2 \sum_{i=1}^{n} |T_i(u, v)|\,|N(u, v)| \cos[\phi_{t_i}(u, v) - \phi_n(u, v) - u x_i - v y_i] \\
&+ 2 \sum_{i=1}^{n} \sum_{\substack{k=1 \\ k \neq i}}^{n} |T_i(u, v)|\,|T_k(u, v)| \cos[\phi_{t_i}(u, v) - \phi_{t_k}(u, v) - u x_i + u x_k - v y_i + v y_k],
\end{aligned} \qquad (15)$$

where $|R(u, v)|$, $|T_i(u, v)|$, and $|N(u, v)|$ are the amplitudes and $\phi_r(u, v)$, $\phi_{t_i}(u, v)$, and $\phi_n(u, v)$ are the phases of the Fourier transforms of $r(x, y)$, $t_i(x, y)$, and $n(x, y)$, respectively, and $u$ and $v$ are the frequency-domain variables. In Eq. (15), the first three terms correspond to the zero-order term, the fourth term corresponds to the desired cross correlation between the reference image and the input-scene targets, and the remaining terms correspond to the cross correlations of the reference image with the noise and of the input-scene targets with the noise and with one another. It should be noted that the presence of identical target or nontarget objects in the input scene yields undesired autocorrelation peaks or false alarms. To eliminate such false alarms, we used a Fourier-plane image-subtraction technique^{11,12} in which the input-scene-only power spectrum and the reference-image-only power spectrum are subtracted from the JPS of Eq. (15) before application of the inverse Fourier-transform operation to yield the correlation output. After applying Fourier-plane image subtraction, one may express the modified JPS as follows:

$$\begin{aligned}
P(u, v) ={}& |F(u, v)|^2 - |T(u, v)|^2 - |R(u, v)|^2 \\
={}& 2 \sum_{i=1}^{n} |T_i(u, v)|\,|R(u, v)| \cos[\phi_{t_i}(u, v) - \phi_r(u, v) - u x_i - v y_i - 2 v y_0] \\
&+ 2\, |R(u, v)|\,|N(u, v)| \cos[\phi_r(u, v) - \phi_n(u, v) - 2 v y_0].
\end{aligned} \qquad (16)$$

From Eq. (16), it is evident that the subtraction operation eliminates the false alarms generated by similar input-scene targets as well as the cross-correlation terms between other objects that may be present in the input scene.
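A digital emulation of this Fourier-plane subtraction is straightforward. The sketch below is our construction, not the paper's code: the function name `modified_jps`, the vertical stacking of the reference above the scene (both assumed to have equal widths), and the omission of a separate noise term are assumptions. It forms the joint image of Eq. (14), computes the JPS of Eq. (15) with FFTs, and subtracts the scene-only and reference-only power spectra as in Eq. (16):

```python
import numpy as np

def modified_jps(reference, scene):
    """Joint power spectrum with Fourier-plane subtraction, Eqs. (14)-(16)."""
    blank_r = np.zeros_like(reference)
    blank_s = np.zeros_like(scene)
    # Joint input image: reference in the upper half, scene in the lower half.
    F = np.fft.fft2(np.vstack([reference, scene]))
    R = np.fft.fft2(np.vstack([reference, blank_s]))   # reference alone
    T = np.fft.fft2(np.vstack([blank_r, scene]))       # input scene alone
    # Eq. (16): only the reference-scene cross terms survive the subtraction.
    return np.abs(F) ** 2 - np.abs(T) ** 2 - np.abs(R) ** 2, R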

A classical JTC usually yields large correlation sidelobes, a large correlation peak width, a strong zero-order peak, and a low optical efficiency.^{13,14} To alleviate the limitations associated with classical and binary JTCs, the fringe-adjusted JTC technique was proposed, in which the JPS is multiplied by a fringe-adjusted filter (FAF) before application of the inverse Fourier-transform operation to yield the correlation output.

Fig. 2. Fringe-adjusted JTC architecture.


The FAF is defined as

$$H(u, v) = \frac{C(u, v)}{D(u, v) + |R(u, v)|^2}, \qquad (17)$$

where $C(u, v)$ and $D(u, v)$ are either constants or functions of $u$ and $v$, respectively. When $C(u, v) = 1$ and $|R(u, v)|^2 \gg D(u, v)$, the FAF function $H(u, v)$ can be approximated as

$$H(u, v) \approx |R(u, v)|^{-2}. \qquad (18)$$

The modified JPS of Eq. (16) is then multiplied by the FAF of Eq. (18) to yield the modified fringe-adjusted JPS, given by

$$O(u, v) = H(u, v) \times P(u, v) \approx |R(u, v)|^{-2}\, P(u, v). \qquad (19)$$

Fig. 3. Fingerprint enhancement process by supervised filtering technique: (a) original image and (b) enhanced image.

Fig. 4. Fingerprint verification: (a) input joint image, (b) 2D correlation output, (c) 3D correlation output.

An inverse Fourier transform of Eq. (19) yields the correlation output, which consists of a pair of cross-correlation peaks corresponding to the known reference object and the unknown input-scene object.
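Continuing the sketch above, the FAF of Eqs. (17) and (18) and the inverse transform of Eq. (19) complete the correlation. The small constant `d_const` standing in for $D(u, v)$ is our choice; it keeps the filter bounded where $|R(u, v)|$ is small, which is the usual role of $D(u, v)$:

```python
def fringe_adjusted_correlation(reference, scene, d_const=1e-6):
    """Correlation plane of the fringe-adjusted JTC, Eqs. (17)-(19)."""
    p_mod, R = modified_jps(reference, scene)
    # FAF of Eq. (17) with C(u, v) = 1; approximates |R|^(-2) of Eq. (18).
    faf = 1.0 / (np.abs(R) ** 2 + d_const)
    o = faf * p_mod                                    # Eq. (19)
    # Inverse Fourier transform yields the pair of cross-correlation peaks.
    return np.abs(np.fft.fftshift(np.fft.ifft2(o)))
```

For a matching fingerprint, the returned plane shows two sharp peaks symmetric about the center, as in Figs. 4 and 5.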

4. Fingerprint Enhancement and Identification System

Fingerprint images are obtained by use of the experimental setup described in detail in Ref. 15. The proposed supervised filtering technique is then applied to enhance the fingerprint images. Figure 3(a) shows an original fingerprint image, and the corresponding enhanced image is shown in Fig. 3(b).

For fingerprint identification, we used the fringe-adjusted JTC architecture mentioned above. To evaluate the performance of the identification process, we used the following three widely used correlation performance parameters: the peak-to-correlation energy (PCE), the peak-to-sidelobe ratio (PSR), and the peak-to-clutter ratio (PCR).^{16–18} The PCE is defined as

$$\mathrm{PCE} = \frac{|p(x', y')|^2}{E_p}, \qquad (20)$$

where $|p(x', y')|$ represents the peak magnitude and the correlation-plane energy $E_p$ is defined as

$$E_p = \sum_{x} \sum_{y} |p(x, y)|^2. \qquad (21)$$

For sharp correlation peaks, $E_p$ should be much smaller than $|p(x', y')|^2$, and the PCE should be as large as possible.
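As a check on these definitions, the PCE of Eqs. (20) and (21) reduces to a few lines of NumPy. This is a sketch under the assumption that `corr` is any real- or complex-valued correlation plane, such as the output of the JTC sketch in Section 3:

```python
import numpy as np

def pce(corr):
    # Eq. (20): squared peak magnitude over the correlation-plane
    # energy of Eq. (21).
    mag = np.abs(corr)
    return np.max(mag) ** 2 / np.sum(mag ** 2)
```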

The second performance measure, PSR, is defined as

$$\mathrm{PSR} = \frac{|p(x', y')| - \mu}{\sigma}, \qquad (22)$$

where $|p(x', y')|$ represents the peak magnitude, $\mu$ represents the mean, and $\sigma$ represents the standard deviation of the $20 \times 20$ pixels around the peak in the correlation output plane.
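The PSR of Eq. (22) needs the local statistics around the peak. The sketch below follows the definition as printed (the peak pixel itself is not excluded from the neighborhood statistics); clipping the window at the image border is our simplification:

```python
import numpy as np

def psr(corr, window=20):
    # Eq. (22): peak magnitude against the mean and standard deviation
    # of the 20 x 20 pixel neighborhood around the peak.
    mag = np.abs(corr)
    r, c = np.unravel_index(np.argmax(mag), mag.shape)
    half = window // 2
    region = mag[max(r - half, 0):r + half, max(c - half, 0):c + half]
    return (mag[r, c] - region.mean()) / region.std()
```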

Fig. 5. Fingerprint verification: (a) enhanced input joint image, (b) 2D correlation output, (c) 3D correlation output.

Table 1. Verification Performance Parameters

Verification     Peak     PSR     PCE       PCR
Original image   0.0014   79.42   0.00084   1.90
Enhanced image   0.1532   1255    0.0079    24.08
Ratio            10900%   1580%   940%      1267%


The PCR offers a measure that involves the correlation peaks of the desired target and the background, defined as

$$\mathrm{PCR} = \frac{|p(x', y')|}{|p(x'', y'')|}, \qquad (23)$$

where $|p(x', y')|$ is the peak value that corresponds to the target and $|p(x'', y'')|$ is the second-highest peak value, which corresponds to the clutter.
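Locating the second-highest peak for the PCR of Eq. (23) requires suppressing the main peak first. Masking out a guard window around it, as below, is one common heuristic and is our assumption rather than the paper's prescription:

```python
import numpy as np

def pcr(corr, guard=20):
    # Eq. (23): target peak over the highest remaining (clutter) peak.
    mag = np.abs(corr)
    r, c = np.unravel_index(np.argmax(mag), mag.shape)
    half = guard // 2
    masked = mag.copy()
    masked[max(r - half, 0):r + half, max(c - half, 0):c + half] = 0.0
    return mag[r, c] / masked.max()
```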

To evaluate the effect of the proposed enhancement technique on the verification process, we introduced the preprocessed fingerprint images in the fringe-adjusted JTC. At first, verification (one-to-one comparison) is tested for the unprocessed images. Figure 4(a) shows an input joint image in which the original fingerprint images without preprocessing are used. The corresponding correlation output is depicted in Figs. 4(b) and 4(c) in the form of 2D and three-dimensional (3D) plots, respectively. The performance parameters for this case are correlation peak intensity = 0.0014, PSR = 79.42, PCE = 0.00084, and PCR = 1.9.

The proposed enhancement algorithm was then applied to the input joint image of Fig. 4(a), and the corresponding enhanced fingerprint image is shown in Fig. 5(a). The fringe-adjusted JTC operation is then applied to Fig. 5(a), and the corresponding correlation output is depicted in Figs. 5(b) and 5(c) in the form of 2D and 3D plots, respectively. The performance parameters corresponding to this case are correlation peak intensity = 0.1532, PSR = 1255, PCE = 0.0079, and PCR = 24.08. Comparing Fig. 4(b) with Fig. 5(b), we observe that the correlation peaks are narrower and sharper in the latter, indicating a great improvement in the verification performance. The performance parameters for both cases are tabulated in Table 1 for convenience. From Table 1, it is obvious that the enhancement process yields verification results that are 9–15 times better than those for the case without preprocessing.

We further investigate the effect of the proposed enhancement technique on the identification (one-to-many comparison) process. First, we consider the case without the application of the enhancement algorithm. Five fingerprint images are selected from the database, and one known fingerprint is selected for identification and placed in the lower left-hand side of the template to form the input joint image, as shown in Fig. 6(a).

Fig. 6. Fingerprint identification: (a) input joint image, (b) 2D correlation output, (c) 3D correlation output.


Figure 6(a) is introduced in the fringe-adjusted JTC, and the corresponding correlation output is shown in Figs. 6(b) and 6(c) in the form of 2D and 3D plots, respectively. Next, the enhancement algorithm is applied to Fig. 6(a), and the corresponding preprocessed input joint image is shown in Fig. 7(a). The fringe-adjusted JTC operation is then applied to Fig. 7(a), and the corresponding correlation output is depicted in Figs. 7(b) and 7(c) in the form of 2D and 3D plots, respectively. Table 2 summarizes the results that correspond to Figs. 6 and 7 for the identification process. Comparing the identification results based on the enhanced and original fingerprint images, we observe that the supervised-filtering-based enhancement technique improves the identification performance by as much as 2–7 times.

Fig. 7. Fingerprint identification: (a) enhanced input joint image, (b) 2D correlation output, (c) 3D correlation output.

Table 2. Identification Performance Parameters

Identification   Peak     PSR      PCE       PCR
Original image   0.0015   222.83   0.00063   2.517
Enhanced image   0.0321   739.72   0.0016    19.1251
Ratio            2140%    331%     253%      759%

5. Conclusion

We introduced a simple and fast enhancement algorithm based on the concept of supervised filtering. Enhanced fingerprint images obtained with the proposed technique are used in the fringe-adjusted JTC setup for the verification and identification processes. Significant improvements in the identification performance parameters are obtained when the original fingerprint images are enhanced by use of the proposed technique. Biometric systems that are based on fingerprint identification would greatly benefit from the proposed preprocessing technique.

References

1. L. Hong, A. Jain, S. Pankanti, and R. Bolle, "Fingerprint enhancement," in Proceedings of the 3rd IEEE Workshop on Applications of Computer Vision (WACV '96) (Institute of Electrical and Electronics Engineers, Piscataway, N.J., 1996), pp. 202–207.
2. E. Saatci and V. Tavsanoglu, "Fingerprint image enhancement using CNN Gabor-type filters," in Proceedings of the Seventh IEEE International Workshop on Cellular Neural Networks and Their Applications (CNNA '02) (Institute of Electrical and Electronics Engineers, Piscataway, N.J., 2002), pp. 377–382.
3. O. N. Ucan, A. Bal, and M. Mercimek, "Corner detection using dynamic neural networks," J. Elect. Electron. 2, 537–539 (2002).
4. A. Bal and M. S. Alam, "Feature extraction technique based on Hopfield neural network and joint transform correlation," in Optical Information Systems II, B. Javidi and D. Psaltis, eds., Proc. SPIE 5557, 343–348 (2004).
5. M. S. Alam and M. A. Karim, "Fringe-adjusted joint transform correlation," Appl. Opt. 32, 4344–4350 (1993).
6. T. M. Hagan, H. B. Demuth, and M. Beale, Neural Network Design (PWS-Kent, Boston, Mass., 1995).
7. F. Cheng, F. T. S. Yu, and D. A. Gregory, "Multitarget detection using spatial synthesis joint transform correlator," Appl. Opt. 32, 6521–6526 (1993).
8. Q. Tang and B. Javidi, "Multiple-object detection with a chirp-encoded joint transform correlator," Appl. Opt. 32, 4344–4350 (1993).
9. P. C. Miller, M. Royce, P. Virgo, M. Fiebig, and G. Hamlyn, "Evaluation of an optical correlator automatic target recognition system for acquisition and tracking in densely cluttered natural scenes," Opt. Eng. 38, 1814–1825 (1999).
10. B. Javidi and C. Kuo, "Joint transform image correlation using a binary spatial light modulator at the Fourier plane," Appl. Opt. 27, 663–665 (1988).
11. S. K. Rogers, J. D. Cline, and M. Kabrisky, "New binarization techniques for joint transform correlation," Opt. Eng. 29, 1018–1093 (1990).
12. M. S. Alam, "Deblurring using fringe-adjusted joint transform correlation," Opt. Eng. 37, 556–564 (1998).
13. M. S. Alam and M. A. Karim, "Improved correlation discrimination in a multiobject bipolar joint transform correlator," Opt. Laser Technol. 24, 45–50 (1992).
14. F. T. S. Yu, F. Cheng, T. Nagata, and D. A. Gregory, "Effects of fringe binarization of multi-object joint transform correlation," Appl. Opt. 28, 2988–2990 (1989).
15. A. Bal, M. S. Alam, and A. El-Saba, "Optical fingerprint identification using cellular neural network and joint transform correlation," in Optical Information Systems II, B. Javidi and D. Psaltis, eds., Proc. SPIE 5557, 349–355 (2004).
16. B. V. K. V. Kumar and L. Hassebrook, "Performance measures for correlation filters," Appl. Opt. 29, 2997–3001 (1990).
17. R. Singh and B. V. K. V. Kumar, "Performance of the extended maximum average correlation height (EMACH) filter and the polynomial distance classifier correlation filter (PDCCF) for multiclass SAR detection and classification," in Algorithms for Synthetic Aperture Radar Imagery IX, E. G. Zelnio, ed., Proc. SPIE 4727, 265–276 (2002).
18. A. Mahalanobis, A. R. Sims, and A. V. Nevel, "Signal-to-clutter measure for measuring automatic target recognition performance using complimentary eigenvalue distribution analysis," Opt. Eng. 42, 1144–1151 (2003).
