
Pattern Recognition 32 (1999) 1237–1248

A hierarchical multiscale and multiangle system for human face detection in a complex background

using gravity-center template

Jun Miao^a, Baocai Yin^a,*, Kongqiao Wang^b, Lansun Shen^b, Xuecun Chen^a

^a Department of Computer Science, Beijing Polytechnic University, Beijing 100022, People's Republic of China
^b Signal and Information Processing Lab, Beijing Polytechnic University, Beijing 100022, People's Republic of China

Received 13 May 1998; received in revised form 28 October 1998; accepted 28 October 1998

Abstract

This paper presents a novel, faster search scheme of gravity-center template matching, compared with the traditional search method over an image, for human face detection; it significantly reduces the time consumed in rough detection of human faces in a mosaic image. Besides, the system is able to detect rather slanted faces (−25° to 25°) and faces with large horizontal rotation angles (−45° to 45°) and vertical rotation angles (−30° to 30°), in various sizes and at unknown locations against an unconstrained background. A specially defined recognition rate of 86.7% has been reached for such faces. © 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

Keywords: Face detection; Multiscale; Multiangle; Mosaic image; Mosaic-edge gravity-center template

1. Introduction

Human face recognition, as a specific branch of pattern recognition, is significant in applications such as TV conferencing, virtual reality, intelligent human-computer interfaces and security systems. A procedure for human face recognition generally includes two phases: A, face location/detection, and B, facial feature detection. The former refers to searching for and labelling the human faces that appear in an image, while the latter means locating feature points of face organs or components such as the eyebrows, eyes, nose or mouth, or extracting other facial features to classify human faces or facial expressions. Neither is a simple problem, and both have received much attention. The representative

* Corresponding author. Tel.: 86-10-67391742; Fax: 86-10-67392297; E-mail: yinbc@bjpu.edu.cn

methods include face template matching [1], the mosaic technique [2,3], deformable template matching [4,5], profile detection [6–9], the "eigenface" [10] or "eigenpicture" [11] scheme, the similar discriminant function approach [12], neural network recognition [13,14], self-organizing systems [15,16] and the use of multiple information sources including caption, color, sound or motion [17–20]. Among them, [1,2,13,18–20] mainly address phase A. Refs. [10,16] include both phases A and B. The rest belong to phase B.

There have been a few automatic face detection systems [1,2,13,18–20]. All of them are designed for complex backgrounds except the one in [13]. In [1], two fixed face templates were introduced, which is obviously limited when searching for faces in various configurations. Ref. [2] introduces a mosaic search technique, which searches for faces in a mosaicized image instead of pixel by pixel in the source image. However, it still cannot avoid

0031-3203/99/$20.00 © 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved. PII: S0031-3203(98)00156-3

Fig. 1. System architecture.

Fig. 2. Preprocessing.

Fig. 3. (a) Source image. (b) Rotated image.

searching for faces cell (unit) by cell in the mosaicized image at each scale, so it cannot save much time. These two systems detect only frontal faces in still gray-level images. As for [18–20], although they can detect frontal or slanted faces in complex backgrounds, additional information such as caption, color or motion in color scenes is used. Here we present a hierarchical system using a gravity-center template technique in a "skip" mode over still gray-level mosaicized images, which significantly reduces the time consumed in rough detection of human faces. The system is able to detect rather slanted faces (−25° to 25°) and faces with large horizontal rotation angles (−45° to 45°) and vertical rotation angles (−30° to 30°), in various sizes and at unknown locations against an unconstrained background. In particular, we define and use a special recognition rate (see Section 3) which is much stricter than others' definitions.

2. System architecture

The system consists of four stages: a first stage of preprocessing followed by three stages of gravity-center template matching, gray-level check, and edge-level check (see Fig. 1). Given a source image as input, the system outputs an object image with face locations.

2.1. Preprocessing

This step is composed of four modules: rotating the image, mosaicizing the image, extracting mosaic edges and calculating gravity centers (shown in Fig. 2). A source image enters the first module, and the last module transforms it into a gravity-center picture, the aim being to extract features as completely as possible for the next step of fast rough detection of face candidates, so that no faces are lost.

2.1.1. Rotating image

In general human face detection systems, face orientation is often restricted tightly. For instance, a frontal face may turn left or right in the horizontal plane by no more than 15°. In the image plane, the technique of Radial Symmetry of Gradient Orientation [21] can detect slanted faces about ±25° away from the vertical line. In our system, the source image is rotated 9 times, from −20° to 20° in steps of 5°. Slanted faces (±25°) can thus be detected by simply rotating the source image by these angles, since the system assumes that all faces to be detected are frontal. Fig. 3 shows an example.
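As an illustration, the rotation sweep can be sketched as follows. The paper does not name an implementation; the nearest-neighbour sampling and zero-filled borders here are our assumptions.

```python
import numpy as np

def rotate_nn(image, angle_deg):
    """Nearest-neighbour rotation of a gray-level image about its centre
    (zero-filled borders); a minimal stand-in for any image-rotation routine."""
    h, w = image.shape
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, locate the source pixel
    sx = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    sy = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx, sy = np.rint(sx).astype(int), np.rint(sy).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(image)
    out[valid] = image[sy[valid], sx[valid]]
    return out

def rotated_views(image):
    """The 9 rotated copies used in preprocessing: -20 deg to 20 deg, step 5 deg."""
    return [(a, rotate_nn(image, a)) for a in range(-20, 25, 5)]
```

Each of the 9 rotated copies is then processed independently by the later stages.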


Fig. 5. (a) Horizontal mosaic edges (a square represents an edge unit). (b) Overlaid with mosaic edges.

Fig. 6. (a) Gravity-center picture. (b) Overlaid with gravity centers.

Fig. 4. Mosaic or mosaicized image (mosaic unit size = 4×4).

2.1.2. Mosaicizing image

Locating human faces in a complex background is a complicated problem because of the faces' various sizes and arbitrary positions. Most previous work has to scan the whole source image at the pixel level.

The main components of a human face, such as the two eyebrows, two eyes, nose bottom and mouth, are almost all oriented horizontally, and their vertical scales are approximately equal. So, if we transform the source image into a mosaic image [2,3] (with reference to Fig. 4) at different scales, then when the scale (size) of the mosaic unit (cell) is appropriate, a frontal face's components will simultaneously be contained in 4 rows of the mosaicized image. Ref. [2], adopting the technique of mosaic matching, showed a good effect. However, it still cannot avoid scanning the full mosaic image unit by unit. In order to save the plentiful time unnecessarily consumed on searching where there are no objects, we present a new search scheme for rough detection, whose preliminary steps are extracting horizontal mosaic edges (Fig. 5a) and transforming the mosaic image into a gravity-center picture (Fig. 6a).
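Mosaicizing as described in [2,3] amounts to block-averaging; a minimal sketch:

```python
import numpy as np

def mosaicize(image, unit):
    """Replace each unit x unit block of pixels with its mean gray value,
    cropping any remainder so the image holds a whole number of units."""
    h, w = image.shape
    h, w = h - h % unit, w - w % unit
    blocks = image[:h, :w].reshape(h // unit, unit, w // unit, unit)
    return blocks.mean(axis=(1, 3))

img = np.zeros((240, 320))
print(mosaicize(img, 4).shape)  # (60, 80): one value per 4x4 unit
```

Repeating this for each unit size yields the multiscale mosaic images used later.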

2.1.3. Extracting mosaic edge

In a mosaic image, if human faces exist, there will be some runs of consecutive units whose gray values are smaller than those of the units around them. Some of these runs may correspond to the areas of the eyes,


Fig. 7. (a) Four gravity-center templates (point patterns). (b) Six subareas of the face used in stage 2. (c) Nine subareas of the face used in stages 3 and 4.

eyebrows, nose bottom and mouth (see Fig. 5b). Viewing each mosaic unit as a basic edge element and letting its value equal the mean gray value of all pixels in the unit, we use the Laplace operator to extract horizontal edges in the mosaic image, obtaining the mosaic-edge picture (see Fig. 5a). Here, we take edge threshold = 10 × mosaic-unit-size × mosaic-unit-size.
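A sketch of this step on the mosaic of unit mean gray values. The vertical-only 1-D Laplace form and the way the paper's threshold (10 × unit × unit) is applied to the response are our assumptions.

```python
import numpy as np

def horizontal_mosaic_edges(mosaic, unit):
    """Flag mosaic units darker than their vertical neighbours using a 1-D
    Laplace operator (-1, 2, -1) down each column of unit gray values."""
    response = np.zeros_like(mosaic, dtype=float)
    # dark units between brighter rows give a large positive response
    response[1:-1, :] = mosaic[:-2, :] + mosaic[2:, :] - 2.0 * mosaic[1:-1, :]
    return response > 10 * unit * unit

mosaic = np.full((10, 8), 200.0)
mosaic[5, :] = 0.0                       # a dark band of units (e.g. an eye line)
edges = horizontal_mosaic_edges(mosaic, 4)
print(edges[5].all(), edges[0].any())    # True False
```

Only such dark horizontal runs survive into the mosaic-edge picture of Fig. 5a.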

2.1.4. Calculating gravity center

Among these horizontal mosaic edges, we discard those whose lengths are too long, and calculate the positions of the gravity centers of the remaining edges. Replacing the remaining edge segments with their gravity centers yields the mosaic gravity-center picture (shown in Fig. 6a). From Fig. 6b, it is easy to see that some gravity centers correspond to the face components and reflect the spatial relations of their symmetric structure. This is exactly what we exploit to roughly detect human face candidates in a skip mode at the next stage.
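The collapse of edge runs into gravity centers can be sketched as follows; the length cutoff max_len is a hypothetical parameter, as the paper does not give its value.

```python
def gravity_centers(edge_map, max_len=6):
    """Collapse each horizontal run of edge units into its gravity centre
    (row, mean column), discarding runs longer than max_len units."""
    centers = []
    for y, row in enumerate(edge_map):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                if x - start <= max_len:          # drop overly long edges
                    centers.append((y, (start + x - 1) / 2.0))
            else:
                x += 1
    return centers
```

The resulting sparse point set is the gravity-center picture scanned by stage 2.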

2.2. Gravity-center template matching

In the mosaic gravity-center picture produced by the last stage, the system can search for faces in a skip mode, that is, it scans only the gravity centers where faces may exist rather than the full mosaic image. For each point (gravity center) scanned, we assume it is the left (or right) eyebrow/eye point. Then the machine scans the rectangular area to its lower right (or lower left) to check whether there are another 3–5 points representing the second possible left (or right) eye point, one or two possible right (or left) eyebrow/eye points, one nose point or one mouth point.

According to the basic spatial structure of face components, we defined four classes of point patterns that correspond to a real face. Correspondingly, we designed four gravity-center templates (with reference to Fig. 7a) for matching the points in the scanned areas of the gravity-center picture.

In order to find the rectangular areas in which the gravity centers form one of the above point patterns, we summarized tens of matching rules through extensive experiments. We divide the scanned rectangular area into six subareas which respectively correspond to the left eyebrow/eye area (1), right eyebrow/eye area (2), lower nose area (3), mouth area (4), upper nose area (5) and the cheek area (6) (shown in Fig. 7b). Letting Ni represent the number of gravity centers in area (i), we have the following rules:

(1) 1 ≤ N1, N2 ≤ 2 and N3, N4 ≤ 1 and N5 ≤ 1 and N6 = 0.

(2) N1 + N2 = 4 and N4 = 1, or 2 ≤ N1 + N2 ≤ 3 and N3 = N4 = 1.

Besides, there are quite a few rules in the scanning procedure for determining the borders of the 6 subareas and checking whether the distance between every pair of points satisfies the designed distance rules.
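The two count rules above translate directly into code; we read them as jointly required, which is our interpretation of the text.

```python
def passes_count_rules(n):
    """Check the gravity-centre count rules on the six subareas of Fig. 7b.
    n[i] = number of gravity centres in subarea i+1."""
    n1, n2, n3, n4, n5, n6 = n
    # rule (1): one or two points per eyebrow/eye area, at most one nose and
    # one mouth point, at most one upper-nose point, an empty cheek area
    rule1 = (1 <= n1 <= 2 and 1 <= n2 <= 2 and n3 <= 1 and n4 <= 1
             and n5 <= 1 and n6 == 0)
    # rule (2): either four eyebrow/eye points with a mouth point, or
    # two to three eyebrow/eye points with both a nose and a mouth point
    rule2 = (n1 + n2 == 4 and n4 == 1) or (2 <= n1 + n2 <= 3 and n3 == n4 == 1)
    return rule1 and rule2
```

The distance rules mentioned above would be checked on the same point set before a candidate is emitted.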

This stage quickly outputs candidates, including some false faces which are to be removed during the next stages. Fig. 8 shows its effect.


Fig. 8. (a) Candidates after gravity-center template matching. (b) Overlaid with candidates.

Fig. 9. (a) Remaining candidates after gray check. (b) Overlaid with the remaining candidates.

2.3. Gray-level check

Each of the face candidates passed on from the last stage is divided into nine subareas (shown in Fig. 7c) in the mosaic image according to the positions of the face-component gravity centers: eyebrow/eye areas (1, 2), nose areas (3, 5), cheek areas (6–9) and the mouth area (4). The upper band (subareas 1, 5, 2), the middle band (subareas 6, 3, 7) and the lower band (subareas 8, 4, 9) are each projected in the vertical direction using the gray values of the mosaic units, and the full face area is projected in the horizontal direction. We then check whether the projection results are reasonable. For example, if there is one and only one maximum for the upper band, one and only one minimum for the middle band and for the lower band, and three maxima and two minima for the full face area, the candidate is accepted for the last stage of edge check.

In particular, if the mean gray values of areas 6 and 7 are larger than those of areas 1 and 2, or the mean gray value of area 4 differs greatly from those of areas 1 and 2, the candidate is rejected, as illustrated in Figs. 8 and 9.
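A simplified sketch of the band-projection test of Section 2.3, counting strict interior extrema (plateaus are ignored, which is our simplification):

```python
import numpy as np

def count_extrema(profile):
    """Count strict interior local maxima and minima of a 1-D projection."""
    p = np.asarray(profile, dtype=float)
    interior = p[1:-1]
    maxima = int(np.sum((interior > p[:-2]) & (interior > p[2:])))
    minima = int(np.sum((interior < p[:-2]) & (interior < p[2:])))
    return maxima, minima

def projection_check(upper, middle, lower):
    """Accept a candidate whose vertical band projections behave as expected:
    exactly one maximum in the upper band (dark eyes between bright margins),
    exactly one minimum in the middle and lower bands (bright nose/cheek line,
    bright chin line)."""
    return (count_extrema(upper)[0] == 1
            and count_extrema(middle)[1] == 1
            and count_extrema(lower)[1] == 1)
```

The horizontal projection of the full face area would be checked the same way for three maxima and two minima.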

2.4. Edge-level check

This is the final detecting stage, which extracts the horizontal or vertical edges at the pixel level with two Sobel operators (threshold = 100) in each of the nine subareas (shown in Fig. 7c) of the source image. The distribution of edge-pixel counts is recorded and their proportions are calculated. We have designed several rules with four parameters b0, b1, b2 and b3. Letting Ei represent the number of horizontal edge pixels in subarea (i), we have the following inequalities on the proportions:

E1/E2, E3/E1 > b0,  (1)

E5/(E1+E5), E5/(E5+E3) < b1,  (2)

E6/(E6+E3), E7/(E3+E7) < b2,  (3)

E8/(E8+E4), E9/(E4+E9) < b3,  (4)


Fig. 10. (a) Horizontal edge check. (b) Overlaid with horizontal edges. (c) Located face after edge check. (d) Restored location image.

Table 1
Test results

Test face set          Number of   Number of images     Recognition   Average
                       images      completely located   rate          recognition rate
Single front face      80          72                   90%
Single rotating face   150         130                  86.7%         81.4%
Single glasses face    34          22                   64.7%
Multiface              26          13                   50%

where b0 = 0.6, b1 = 0.45, b2 = 0.4, b3 = 0.3. Besides, in areas 1, 2 and 4, if the number of vertical edge pixels is larger than that of horizontal edge pixels, the candidate is discarded.

As a result of the influence of factors such as noise and shade, the statistics of the edges extracted in the current candidate face area do not always satisfy the above inequalities even when a true face exists. Therefore, we let the right or left border of the candidate face area shift regularly by 1 to 4 times the size of the current mosaic unit, and in each case check the above proportions until the inequalities are satisfied.
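Inequalities (1)–(4) can be evaluated as below. Note that the exact pair of fractions in inequality (1) is reconstructed from a garbled two-column layout, and the eps guard against empty subareas is our addition.

```python
def edge_proportion_check(E, b=(0.6, 0.45, 0.4, 0.3)):
    """Evaluate the edge-proportion inequalities (1)-(4) of Section 2.4.
    E[i] = horizontal-edge pixel count of subarea i+1 (Fig. 7c)."""
    E1, E2, E3, E4, E5, E6, E7, E8, E9 = E
    b0, b1, b2, b3 = b
    eps = 1e-9
    ok1 = E1 / (E2 + eps) > b0 and E3 / (E1 + eps) > b0              # (1)
    ok2 = E5 / (E1 + E5 + eps) < b1 and E5 / (E5 + E3 + eps) < b1    # (2)
    ok3 = E6 / (E6 + E3 + eps) < b2 and E7 / (E3 + E7 + eps) < b2    # (3)
    ok4 = E8 / (E8 + E4 + eps) < b3 and E9 / (E4 + E9 + eps) < b3    # (4)
    return ok1 and ok2 and ok3 and ok4
```

The border-shift loop described above would simply re-run this check for each shifted candidate window.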

3. Experiment

When an image is input to the system, in the preprocessing stage it is rotated 9 times from −20° to 20° in steps of 5°; it is then shifted by half the current mosaic unit size upward and/or leftward, giving 4 cases: no shift, upward only, leftward only, and both. The size of the units making up the mosaic image takes 4 values, from 6×6 down to 3×3 pixels. Thus, for each source image, 9×4×4 = 144 mosaic images are derived, and the same number of detections are executed with different angles, shifts and scales.
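The 144 detection passes are simply the product of the three parameter sweeps; the assumption that the 4 unit sizes are 6, 5, 4 and 3 pixels follows the stated range.

```python
from itertools import product

angles = range(-20, 25, 5)                 # 9 rotation angles
shifts = [(0, 0), (1, 0), (0, 1), (1, 1)]  # half-unit shift: none / up / left / both
units  = [6, 5, 4, 3]                      # mosaic unit sizes, 6x6 down to 3x3
configs = list(product(angles, shifts, units))
print(len(configs))  # 9 * 4 * 4 = 144 mosaic images per source image
```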


Fig. 11. Examples of single front face (320×240).


Fig. 12. Examples of single face rotating in various directions (320×240).


Table 2
Test results with the previous definition of recognition rate

Test face set         Number     Number of faces      Number of faces        Number of faces     Recognition   Average
                      of faces   mistakenly located   failed to be located   correctly located   rate          recognition rate
Single front face     80         4                    5                      75                  93.8%
Single rotated face   150        2                    18                     132                 88%           83.8%
Single glasses face   34         2                    12                     22                  64.7%
Multiface             57         0                    17                     40                  70.2%

Fig. 13. Examples of single face with glasses (320×240).

The face sizes to be detected range from 56×120 to 18×18 in a normal image of 320×240. The average run time for locating all faces in an image is about 65 s on a Pentium 166 MHz PC.

The training face set consists of 100 images containing single faces and multiple faces, taken from a normal image library and from TV. The test set consists of 290 images from the same sources, made up of four categories: (1) the single front face set of 80 images; (2) the single rotated face set of 150 images; (3) the set of single faces with glasses of 34 images; and (4) the multiface set of 26 images. Define

recognition rate = number of images completely located / number of images,

where "completely located" means that no faces are lost and no false faces are located when locating the faces in an image. The test results are shown in Table 1.

If instead we use the recognition rate previously defined in [1,2], i.e.,

recognition rate = number of faces correctly located / number of faces,

then we get the results shown in Table 2. Obviously, our recognition rate is much stricter than that introduced in [1,2]. From the above table, we learn that many more true faces fail to be located (16.2%) than false faces are located mistakenly (2.5%). If we loosen some detection conditions, there will be fewer lost faces, more false faces and more correctly located faces. In other words, the recognition rates defined in [1,2] will then be larger than the results in Table 2; however, compared with the


Fig. 14. Examples of multiface (320×240).


results in Table 1, the recognition rates as defined in this paper will drop to some degree, which has been verified by experiments. The recognition rate defined in this paper considers both false faces and lost faces, so it is a more rigorous definition for face detection.
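The per-face rate of [1,2] can be recomputed directly from the counts in Table 2; a quick check confirming the 83.8% average:

```python
# Per-set counts read off Table 2: (number of faces, number correctly located)
table2 = {
    "single front face":   (80, 75),
    "single rotated face": (150, 132),
    "single glasses face": (34, 22),
    "multiface":           (57, 40),
}
faces = sum(n for n, _ in table2.values())
correct = sum(c for _, c in table2.values())
print(f"{correct}/{faces} = {correct / faces:.1%}")  # 269/321 = 83.8%
```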

Some location examples are shown in Figs. 11}14.

4. Conclusion

Compared with the traditional search mode over source images or mosaic images, this paper presents a novel, faster search scheme which exploits the common horizontal orientation and similar vertical scales of the 6 facial components: two eyebrows, two eyes, one nose and one mouth. We thus greatly reduce the detection time at the rough detection stage. Moreover, our system can detect markedly slanted faces with angles ranging from −25° to 25°. Faces horizontally rotated between −45° and 45° and vertically rotated between −30° and 30° can also be located, and a specially defined recognition rate of 86.7% has been reached. This is because we extract horizontal edge features for checking, which are not sensitive to left-right or up-down rotated faces.

Owing to the limits of the technique in this paper, the system sometimes makes mistakes, as shown in Fig. 11g or h. Besides, the relatively large number of true faces lost is also a problem. Both are things we need to improve.

Although this system is designed for the detection of human faces in still gray images, it also includes some elements of facial feature detection, i.e. face organ location. All or part of the system can be used, or extracted as a module, for further facial feature detection, image-sequence detection, or image coding.

5. Summary

This paper presents an automatic human face detection system using a novel, faster search scheme of gravity-center template matching compared with the traditional way of searching an image, which significantly reduces the time consumed in rough detection of human faces. Besides, the system is able to detect rather slanted faces (−25° to 25°) and faces with large horizontal rotation (−45° to 45°) and vertical rotation (−30° to 30°), at various sizes and locations in an unconstrained background. Experiments show a significant effect.

The system consists of four stages: a first stage of preprocessing followed by three stages of gravity-center template matching, gray-level check and edge-level check. Given a source image as input, the system outputs an object image with face locations.

The first stage of preprocessing consists of four steps: rotating the image, mosaicizing the image, extracting mosaic edges and calculating gravity centers. Rotating the image is a necessary step for the detection of slanted human faces. The second step transforms the source image into a mosaic image, which helps detect faces of various sizes at unknown locations in a complex background. Since scanning the full source image or full mosaic image is usually time-consuming, we present a novel search approach for fast rough detection, which needs the pre-operations of extracting the mosaic edge units and then simplifying them into their gravity centers to facilitate face matching. The fourth step produces a gravity-center picture.

A new technique of gravity-center template matching is introduced at the second stage, which saves much of the time consumed by a traditional search procedure. Four general gravity-center templates are designed for matching the gravity centers of facial components in the gravity-center picture.

The third detecting stage is the gray-level check. The average gray values of certain face areas and their projection values obey general proportion rules, which are used to check the candidates detected by the last stage.

The final stage implements the edge-level check at the pixel level. The horizontal and vertical edges are extracted in the candidate face areas, and their distributions and proportions are recorded to check whether they satisfy certain inequalities. If they fit well, the candidates are accepted as the normal output of the system.

Acknowledgements

The authors acknowledge the assistance of Master candidates Ma Chunling, Wu Sining and Cai Tao, Dr. Kong Dehui and Engineer Cao Jianbing. The authors are also grateful to the referees for many valuable comments.

References

[1] W. Gao, M.B. Liu, A hierarchical approach to human face detection in a complex background, Proc. 1st Int. Conf. on Multimodal Interface '96, 1996, pp. 289–292.

[2] G.Z. Yang, T.S. Huang, Human face detection in a complex background, Pattern Recognition 27 (1) (1994) 43–63.

[3] L.D. Harmon, The recognition of faces, Scientific American 229 (5) (1973) 71–82.

[4] A.L. Yuille, D.S. Cohen, P.W. Hallinan, Feature extraction from faces using deformable templates, Proc. IEEE Comput. Soc. Conf. Computer Vision and Pattern Recognition, 1989, pp. 104–109.

[5] C.L. Huang, C.W. Chen, Human facial feature extraction for face interpretation and recognition, Pattern Recognition 25 (12) (1992) 1435–1442.

[6] G.J. Kaufman, K.J. Breeding, The automatic recognition of human faces from profile silhouettes, IEEE Trans. Systems Man Cybernet. 6 (2) (1976) 113–121.


About the Author: JUN MIAO received his B.E. degree in computer science from Beijing Polytechnic University (BPU) in 1993 and was an instructor in the Department of Computer Science of BPU for three years. He is currently an M.E. candidate in computer science at BPU. His research interests include pattern recognition, computer vision, neural networks and artificial intelligence.

About the Author: BAOCAI YIN received the B.S. degree in applied mathematics from Dalian University of Technology, Dalian, China, in 1985, the Master degree in computational mathematics from Dalian University of Technology in 1988, and the Ph.D. degree in computational mathematics from Dalian University of Technology in 1993. Dr. Yin was a postdoctoral researcher in the Department of Computer Science, Harbin Institute of Technology, from September 1993 to October 1995. He is currently an Associate Professor of Computer Science in the Department of Computer Science, Beijing Polytechnic University, Beijing. His current research interests include computer graphics, virtual reality and image processing.

About the Author: KONGQIAO WANG is currently a Ph.D. candidate in communication and electronic systems at the University of Science and Technology of China. His research interests include signal processing, image analysis and understanding, image coding, pattern recognition and artificial intelligence.

About the Author: LANSUN SHEN is currently a professor at Beijing Polytechnic University, and a Ph.D. supervisor at the University of Science and Technology of China. His research interests include the detection of spectrum signals, key techniques of B-ISDN and real-time signal processing based on VLSI. He has authored over 100 journal and conference papers and 10 books. He is a senior member of the IEEE.

About the Author: XUECUN CHEN is an M.E. candidate in computer science at Beijing Polytechnic University. He received his B.E. degree in mechanical engineering from Beijing Union University in 1993, where he had been an instructor for three years. His research interests include computer graphics, CAGD, multimedia and image processing.

[7] L.D. Harmon, S.C. Kuo, P.F. Ramig et al., Identification of human face profiles by computer, Pattern Recognition 10 (1978) 301–302.

[8] C.J. Wu, J.S. Huang, Human face profile recognition by computer, Pattern Recognition 23 (3/4) (1990) 255–259.

[9] J.C. Campos, A.D. Linney, J.P. Moss, The analysis of facial profiles using scale space techniques, Pattern Recognition 26 (6) (1993) 819–824.

[10] M. Turk, A. Pentland, Face recognition using eigenfaces, Proc. IEEE Comput. Soc. Conf. Computer Vision and Pattern Recognition, 1991, pp. 586–591.

[11] M. Kirby, L. Sirovich, Application of the Karhunen-Loeve procedure for the characterization of human faces, IEEE Trans. Pattern Anal. Machine Intell. 12 (1) (1990) 103–108.

[12] Y.Q. Cheng, K. Liu, J.Y. Yang, A novel feature extraction method for image recognition based on similar discriminant function, Pattern Recognition 26 (1) (1993) 115–125.

[13] P. Juell, R. Marsh, A hierarchical neural network for human face detection, Pattern Recognition 29 (5) (1996) 781–787.

[14] D. Valentin, H. Abdi, A.J. O'Toole et al., Connectionist models of face processing: a survey, Pattern Recognition 27 (9) (1994) 1209–1230.

[15] H. Mannaert, A. Oosterlinck, Self-organizing system for analysis and identification of human faces, Proc. Appl. Digital Process. XIII, SPIE-1349 (1990) 227–232.

[16] B. Takacs, H. Wechsler, Detection of faces and facial landmarks using iconic filter banks, Pattern Recognition 30 (10) (1997) 1623–1636.

[17] R. Brunelli, D. Falavigna, Person identification using multiple cues, IEEE Trans. Pattern Anal. Machine Intell. 17 (10) (1995) 955–966.

[18] V. Govindaraju, D.B. Sher, R.K. Srihari et al., Locating human faces in newspaper photographs, Proc. IEEE Comput. Soc. Conf. Computer Vision and Pattern Recognition, 1989, pp. 549–554.

[19] Y. Dai, Y. Nakano, Face-texture model based on SGLD and its application in face detection in a color scene, Pattern Recognition 29 (6) (1996) 1007–1017.

[20] C.H. Lee, J.S. Kim, K.H. Park, Automatic human face location in a complex background using motion and color information, Pattern Recognition 29 (11) (1996) 1877–1889.

[21] C.C. Lin, W.C. Lin, Extracting features by an inhibitory mechanism based on gradient distributions, Pattern Recognition 29 (12) (1996) 2079–2101.
