Transcript of L002.Face Recognition 1

Page 1: L002.Face Recognition 1

Face Recognition and Its applications

Based on works of: Jinshan Tang; Ariel P from Hebrew University; Mircea Focşa, UMFT; Xiaozhen Niu,Department of Computing Science, University of Alberta; Christine Podilchuk, [email protected], http://www.caip.rutgers.edu/wiselab

PART 1

Page 2: L002.Face Recognition 1

Contents
• Introduction
• Face detection using color information
• Face matching
• Face Segmentation/Detection
• Facial Feature extraction
• Face Recognition
• Video-based Face Recognition
• Comparison
• Conclusion
• Reference

Page 3: L002.Face Recognition 1

Face Segmentation/Detection

During the past ten years, considerable progress has been made in the multi-face recognition area. This includes:
• The example-based learning approach by Sung and Poggio (1994).
• The neural network approach by Rowley et al. (1998).
• The support vector machine (SVM) approach by Osuna et al. (1997).

Page 4: L002.Face Recognition 1

Introduction

Page 5: L002.Face Recognition 1
Page 6: L002.Face Recognition 1

Basic steps for face recognition:

Input face image → Face detection → Face feature extraction → Feature matching (against the face database) → Decision maker → Output result
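As a rough illustration of this pipeline, here is a minimal sketch in Python assuming OpenCV and NumPy are available. The Haar-cascade detector, the flattened-crop "features", and the nearest-neighbour threshold are illustrative stand-ins, not the specific methods covered in these slides.

```python
# Minimal sketch of the pipeline above: detect -> extract features -> match -> decide.
import cv2
import numpy as np

def detect_face(image_bgr):
    """Return the first detected face region (x, y, w, h), or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None

def extract_features(image_bgr, box, size=(64, 64)):
    """Crop, resize and flatten the face as a crude feature vector."""
    x, y, w, h = box
    face = cv2.cvtColor(image_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    face = cv2.resize(face, size).astype(np.float32) / 255.0
    return face.ravel()

def recognize(features, database):
    """Match against a dict {name: feature_vector}; nearest-neighbour decision."""
    name, dist = min(((n, np.linalg.norm(features - f)) for n, f in database.items()),
                     key=lambda t: t[1])
    return name if dist < 10.0 else "unknown"   # threshold is an arbitrary assumption
```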

Page 7: L002.Face Recognition 1

Face detection
• Geometric information based face detection
• Color information based face detection
• Combining them together

(a) Geometric information based face detection

(b) Color information based face detection

Page 8: L002.Face Recognition 1

Color information based face detection

• Face color is different from the background.
• The choice of color space is very important.
• Color spaces:

•R,G,B

•YCbCr

•YUV

•r,g

•……..

Figure 4. Skin color versus background color distribution in a complex background
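A hedged sketch of the kind of color-space thresholding this slide refers to, here in YCbCr: the Cr/Cb bounds below are commonly quoted rule-of-thumb values, not numbers taken from the presentation.

```python
# Skin-colour segmentation in the YCbCr space, as one possible implementation.
import cv2
import numpy as np

def skin_mask(image_bgr):
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV orders Y, Cr, Cb
    lower = np.array([0, 133, 77], dtype=np.uint8)    # Y, Cr, Cb lower bounds (assumed)
    upper = np.array([255, 173, 127], dtype=np.uint8) # Y, Cr, Cb upper bounds (assumed)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Clean up the mask so connected skin regions (face candidates) stand out.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask
```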

Page 9: L002.Face Recognition 1

A face detection algorithm using color and geometric information

Page 10: L002.Face Recognition 1

Ideas: (1) compensate for lighting, (2) separate by transforming to a new (sub)space.

Page 11: L002.Face Recognition 1

Ideas: (1) compensate for lighting, (2) separate by transforming to a new (sub)space.

(3) clustering.
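As a rough illustration of ideas (1) and (2), here is a sketch assuming OpenCV: lighting is compensated by equalizing the luma channel, and the image is then represented in the Cr/Cb chrominance subspace, a common choice for the subsequent skin-colour clustering. These are illustrative choices, not necessarily the transforms used in the original work.

```python
# Idea (1): compensate for lighting by equalising the luma channel.
# Idea (2): keep only the Cr/Cb chrominance plane as the "new subspace"
# in which skin pixels can later be clustered.
import cv2

def lighting_compensated_chroma(image_bgr):
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y_eq = cv2.equalizeHist(y)                     # idea (1): lighting compensation
    compensated = cv2.merge([y_eq, cr, cb])
    chroma = cv2.merge([cr, cb])                   # idea (2): chrominance subspace
    return cv2.cvtColor(compensated, cv2.COLOR_YCrCb2BGR), chroma
```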

Page 12: L002.Face Recognition 1

Color can be used in segmentation and grouping of image subareas.

Feature-based face detection

Page 13: L002.Face Recognition 1

Location and shape parameters of eyes are the most important features to be detected through segmentation and morphological operations (dilation and erosion).
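A small sketch of the dilation/erosion step mentioned above, assuming a binary map in which dark, eye-like blobs have already been thresholded; the kernel sizes and iteration counts are illustrative assumptions.

```python
# Erode to drop small noise, dilate to restore blob size, then read off the
# location and shape parameters of each remaining candidate region.
import cv2
import numpy as np

def clean_eye_candidates(binary_map):
    """binary_map: uint8 image where candidate eye pixels are 255."""
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(binary_map, kernel, iterations=1)
    dilated = cv2.dilate(eroded, kernel, iterations=2)
    # Connected components give position (x, y), size (w, h) and area per blob.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(dilated)
    return stats[1:], centroids[1:]   # skip the background component
```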

Page 14: L002.Face Recognition 1
Page 15: L002.Face Recognition 1

Ideas:

1) Eyes

2) Mouth

3) Boundary (edge detection)

4) Boundary approximated by an ellipse or similar shape (Hough transform); see the sketch below
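A hedged sketch of ideas (3) and (4): find the face boundary with edge detection and approximate it with an ellipse. Here cv2.fitEllipse is used in place of a full Hough transform, which is an implementation choice of this sketch, not of the slide.

```python
# Edge detection followed by an ellipse fit to the largest external contour.
import cv2

def face_boundary_ellipse(gray):
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:            # fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(largest)  # ((cx, cy), (major, minor), angle)
```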

Page 16: L002.Face Recognition 1

The concept of eye glasses

The concept of half-profiles

Page 17: L002.Face Recognition 1

Face Matching
• Feature based face matching
• Template matching

Features versus templates

Page 18: L002.Face Recognition 1

•Feature based face matching

Face image (from face detection) → Normalization → Feature extraction → Feature vector → Classifier → Decision maker → Output results

• You can extract various features.
• You can use various classifiers.
• You can use various decision makers.

Page 19: L002.Face Recognition 1

Normalization

Eye location normalization: rotation normalization, scale normalization.

Cross correlation between the object I and the template T (averaged over objects):

C_N = Σ_{x,y} [I(x, y) - mean(I)] · [T(x, y) - mean(T)]
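A small NumPy sketch of the mean-centred cross-correlation above, computed for one image window against one template. Dividing by the norms so the score lies in [-1, 1] is a common extra normalization added here, not something spelled out on the slide.

```python
# Mean-centred (normalised) cross-correlation between a window and a template.
import numpy as np

def normalized_cross_correlation(window, template):
    w = window.astype(np.float64) - window.mean()
    t = template.astype(np.float64) - template.mean()
    denom = np.sqrt((w * w).sum() * (t * t).sum())
    return float((w * t).sum() / denom) if denom > 0 else 0.0
```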

Page 20: L002.Face Recognition 1

Feature extraction
• Eyebrow thickness and vertical position at the eye center position
• A coarse description of the left eyebrow's arches
• Nose vertical position and width
• Mouth vertical position, width, and height of the upper and lower lips
• Eleven radii describing the chin shape
• Bigonial breadth (face width at nose position)
• Zygomatic breadth (face width halfway between nose tip and eyes)

35-D feature vector

Page 21: L002.Face Recognition 1

Example of some geometrical features

Page 22: L002.Face Recognition 1

Classifier

Bayes classifier (minimum-distance form):

d_j(x) = (x - m_j)^T C_j^{-1} (x - m_j)

Feature vector x → compute d_j(x) for each class mean m_j (j = 1, 2, …, N) → rank the distance values d_j(x) → output the result.

This is just one example of a classifier; others are decision trees, expressions, decomposed structures, and NNs.
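A hedged sketch of the distance-based classifier above: compute d_j(x) for each class, rank the distances, and return the nearest class. The Mahalanobis form with a per-class covariance C_j is my reading of the garbled slide formula.

```python
# Minimum-distance (Mahalanobis-style) classifier over per-class means m_j.
import numpy as np

def mahalanobis_classifier(x, means, covariances):
    """means: list of class mean vectors m_j; covariances: list of C_j."""
    distances = []
    for m_j, c_j in zip(means, covariances):
        diff = x - m_j
        d_j = float(diff @ np.linalg.inv(c_j) @ diff)
        distances.append(d_j)
    order = np.argsort(distances)       # rank the distance values
    return int(order[0]), distances     # index of the closest class
```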

Page 23: L002.Face Recognition 1

ANN Classifier

• One-class-in-one network
• Multi-class-in-one network

In the one-class-in-one network, the feature vector is fed to a separate sub-network for each class (Class 1, Class 2, …), and a MAXNET stage selects among their outputs to produce the classification result.

Fig. 2. One-class-in-one network
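A sketch of the one-class-in-one idea under stated assumptions: one small binary network per class, with the MAXNET stage reduced to a winner-take-all argmax over the per-class scores. scikit-learn's MLPClassifier is an illustrative stand-in for the ANNs in the figure.

```python
# One binary MLP per class, plus a MAXNET-like winner-take-all decision.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_one_class_in_one(features, labels, classes):
    """Train one binary MLP per class; labels are person ids."""
    nets = {}
    for c in classes:
        y = (np.asarray(labels) == c).astype(int)   # this class vs. the rest
        nets[c] = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(features, y)
    return nets

def classify(nets, feature_vector):
    x = np.asarray(feature_vector).reshape(1, -1)
    scores = {c: net.predict_proba(x)[0][list(net.classes_).index(1)]
              for c, net in nets.items()}
    return max(scores, key=scores.get)              # winner-take-all over class scores
```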

Page 24: L002.Face Recognition 1

Template matching

Face image (from face detection) → Normalization → Produce a template → Matching against the templates database → Decision maker → Output results

You have to create the database of templates for all the people you want to recognize.
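A hedged sketch of this pipeline assuming OpenCV: the normalized face is compared against every stored template with normalised cross-correlation, and the decision maker simply keeps the best score above a threshold. The threshold value is an arbitrary assumption.

```python
# Match one normalised face against a database of grayscale templates.
import cv2

def match_against_database(normalized_face_gray, templates, threshold=0.6):
    """templates: dict {person_name: grayscale template image}.
    Assumes each template is no larger than the normalised face image."""
    best_name, best_score = None, -1.0
    for name, templ in templates.items():
        result = cv2.matchTemplate(normalized_face_gray, templ, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else ("unknown", best_score)
```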

Page 25: L002.Face Recognition 1

There are different templates used in various regions of the normalized face.

Various methods can be used to compress information for each template.

Page 26: L002.Face Recognition 1

Example-based learning approach (EBL)

Three parts:
• The image is divided into many possibly overlapping windows, and each window pattern gets classified as either "a face" or "not a face" based on a set of local image measurements.
• For each new pattern to be classified, the system computes a set of difference measurements between the new pattern and the canonical face model.
• A trained classifier identifies the new pattern as "a face" or "not a face".
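A sketch of the window-scanning idea behind this approach: slide a fixed-size window over the image and hand each window pattern to a face / not-face classifier. The classifier itself is left as a callable supplied by the caller; the 19×19 window size echoes the system of Sung and Poggio [2], while the step size is an assumption.

```python
# Scan the image with a fixed-size window and classify each window pattern.
import numpy as np

def scan_windows(gray, classify, win=19, step=4):
    """Yield (x, y) for every window that `classify` labels as a face."""
    h, w = gray.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = gray[y:y + win, x:x + win].astype(np.float32)
            if classify(patch):          # True means "a face"
                yield (x, y)
```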

Page 27: L002.Face Recognition 1

Example of a system using EBL

Page 28: L002.Face Recognition 1

Neural network (NN)

Kanade et al. first proposed an NN-based approach in 1996. Although NNs have received significant attention in many research areas, few applications were successful in face recognition.

Why?

Page 29: L002.Face Recognition 1

Neural network (NN)

It is easy to train a neural network with samples that contain faces, but it is much harder to train one with samples that do not: the number of "non-face" samples is just too large.

Page 30: L002.Face Recognition 1

Neural network (NN)

• Neural network-based filter. A small filter window is used to scan through all portions of the image and to detect whether a face exists in each window.
• Merging overlapping detections and arbitration. By setting a small threshold, many false detections can be eliminated.
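A hedged sketch of the merging step: count how many raw detections fall near each candidate and keep only candidates whose count reaches a small threshold, which removes many isolated false detections. The neighbourhood radius and threshold values are illustrative assumptions.

```python
# Merge overlapping detections by requiring local support, then deduplicate.
import numpy as np

def merge_detections(centers, radius=8, threshold=2):
    centers = np.asarray(centers, dtype=np.float64)
    kept = []
    for c in centers:
        close = np.linalg.norm(centers - c, axis=1) < radius
        if close.sum() >= threshold:     # supported by other nearby detections
            kept.append(tuple(c))
    merged = []
    for c in kept:
        if all(np.linalg.norm(np.subtract(c, m)) >= radius for m in merged):
            merged.append(c)             # keep only one detection per neighbourhood
    return merged
```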

Page 31: L002.Face Recognition 1

An example of using NN

Page 32: L002.Face Recognition 1

Test results of using NN

Page 33: L002.Face Recognition 1

SVM (Support Vector Machine)

The SVM approach to face detection was first proposed in 1997. It can be viewed as a way to train polynomial neural network or radial basis function classifiers, and it can improve the accuracy and reduce the computation.
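A minimal sketch of training an SVM face / non-face classifier with scikit-learn; the second-degree polynomial kernel echoes the slide's remark about polynomial classifiers but is still an illustrative choice, and the data loading is left to the caller.

```python
# Train a face / non-face SVM on flattened image patches.
import numpy as np
from sklearn.svm import SVC

def train_face_svm(face_patches, nonface_patches):
    """face_patches, nonface_patches: arrays of shape (n_samples, n_pixels)."""
    X = np.vstack([face_patches, nonface_patches]).astype(np.float64)
    y = np.array([1] * len(face_patches) + [0] * len(nonface_patches))
    clf = SVC(kernel="poly", degree=2, C=1.0)
    clf.fit(X, y)
    return clf    # clf.predict(windows) labels each window as face (1) or not (0)
```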

Page 34: L002.Face Recognition 1

Comparison with Example Based Learning (EBL)

Test results were reported in 1997, using two test sets (155 faces): SVM achieved a better detection rate and fewer false alarms.

Page 35: L002.Face Recognition 1

Recent approaches

The face segmentation/detection research area still remains active. For example:
• An integrated SVM approach to multi-face detection and recognition was proposed in 2000.
• A technique of background learning was proposed in August 2002.

Still lots of potential!

Page 36: L002.Face Recognition 1

Static face recognition

Numerous face recognition methods/algorithms have been proposed in the last 20 years. Several representative approaches are:
• Eigenface
• LDA/FDA (Linear Discriminant Analysis / Fisher Discriminant Analysis)
• Neural network (NN)
• PCA (Principal Component Analysis)
• Discrete Hidden Markov Models (DHMM)
• Continuous Density HMM (CDHMM)

Page 37: L002.Face Recognition 1

Eigenface

The basic steps are:
• Registration. A face in an input image must first be located and registered in a standard-size frame.
• Eigenrepresentation. Every face in the database can be represented as a vector of weights; principal component analysis (PCA) is used to encode face images and capture face features.
• Identification. This is done by locating the images in the database whose weights are the closest (in Euclidean distance) to the weights of the test image.
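A hedged sketch of these steps with NumPy: PCA (via an SVD of the mean-centred data) encodes each registered face as a weight vector, and identification picks the database face whose weights are closest in Euclidean distance. It assumes faces are already registered and flattened to equal-length vectors; the number of components is an arbitrary choice.

```python
# Eigenface-style representation and nearest-neighbour identification.
import numpy as np

def fit_eigenfaces(faces, n_components=20):
    """faces: (n_samples, n_pixels) array of registered face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of Vt are the principal components ("eigenfaces").
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:n_components]
    weights = centered @ eigenfaces.T          # weight vector for each database face
    return mean, eigenfaces, weights

def identify(test_face, mean, eigenfaces, weights):
    w = (test_face - mean) @ eigenfaces.T
    distances = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(distances))           # index of the closest database face
```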

Page 38: L002.Face Recognition 1

LDA/FDA

The face recognition method using LDA/FDA is called the fisherface method. Eigenface uses linear PCA, which is not optimal for discriminating one face class from others. The fisherface method seeks a linear transformation that maximizes the between-class scatter and minimizes the within-class scatter. Test results demonstrated that LDA/FDA is better than eigenface using linear PCA (1997).
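A sketch of the fisherface idea using scikit-learn: LDA finds the linear transformation that maximizes between-class scatter while minimizing within-class scatter, applied here to PCA-reduced faces (applying PCA first is a common way to keep the within-class scatter matrix non-singular). The component counts are assumptions, not values from the slides.

```python
# Fisherface-style pipeline: PCA for dimensionality reduction, then LDA.
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def fit_fisherfaces(faces, labels, n_pca=50):
    """faces: (n_samples, n_pixels); labels: person id per sample.
    n_pca must not exceed min(n_samples, n_pixels)."""
    model = make_pipeline(PCA(n_components=n_pca),
                          LinearDiscriminantAnalysis())
    model.fit(faces, labels)
    return model    # model.predict(new_faces) gives the recognised identities
```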

Page 39: L002.Face Recognition 1

Test results of LDA

Test results of a subspace LDA-based face recognition method in 1999.

Page 40: L002.Face Recognition 1

Video-based Face Recognition

Three challenges:
• Low quality
• Small images
• Characteristics of face/human objects

Three advantages:
• Allows much more information
• Tracking of the face image
• Provides continuity, which allows reuse of classification information from high-quality images when processing low-quality images from a video sequence

Page 41: L002.Face Recognition 1

Basic steps for video-based face recognition

• Object segmentation/detection.
• Structure from motion. The goal of this step is to estimate the 3D depths of points from the image sequence.
• 3D models for faces. A 3D model is used to match frontal views of the face.
• Non-rigid motion analysis.

Page 42: L002.Face Recognition 1

Recent approaches

Most video-based face recognition systems have three modules, for detection, tracking, and recognition.
• An access control system using a Radial Basis Function (RBF) network was proposed in 1997.
• A generic approach based on posterior estimation using sequential Monte Carlo methods was proposed in 2000.
• A scheme based on streaming face recognition (SFR) was proposed in August 2002.
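As one concrete ingredient for the tracking module mentioned above, here is a sketch using pyramidal Lucas-Kanade optical flow from OpenCV to follow feature points inside a detected face box from one frame to the next. This is a generic tracking technique, not the specific RBF, Monte Carlo, or SFR systems cited on the slide.

```python
# Track facial feature points between two consecutive grayscale video frames.
import cv2
import numpy as np

def track_face_points(prev_gray, next_gray, face_box, max_points=50):
    x, y, w, h = face_box
    mask = np.zeros_like(prev_gray)
    mask[y:y+h, x:x+w] = 255                       # only track points on the face
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_points,
                                     qualityLevel=0.01, minDistance=5, mask=mask)
    if points is None:
        return np.empty((0, 2), np.float32)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, points, None)
    return new_points[status.ravel() == 1].reshape(-1, 2)
```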

Page 43: L002.Face Recognition 1

The Streaming Face Recognition (SFR) scheme

Combines several decision rules, such as Discrete Hidden Markov Models (DHMM) and Continuous Density HMM (CDHMM).

The test result achieved a 99% correct recognition rate in the intelligent room.

Page 44: L002.Face Recognition 1

Comparison

The two most representative and important protocols for face recognition evaluation are:
• The FERET protocol (1994). Consists of 14,126 images of 1,199 individuals. Three evaluation tests were administered, in 1994, 1996, and 1997.
• The XM2VTS protocol (1999). An expansion of the previous M2VTS program (5 shots of each of 37 subjects); it now consists of 295 subjects. The results of M2VTS/XM2VTS can be used in a wide range of applications.

Page 45: L002.Face Recognition 1

1996/1997 FERET Evaluations

Compared ten algorithms.

Page 46: L002.Face Recognition 1

Conclusion

• Face recognition has many potential applications.
• For many years it was not very successful; we need to improve the accuracy of face recognition.
• Face recognition can be combined with other biometric recognition technologies, such as fingerprint recognition, voice recognition, and so on.
• For our applications, accuracy is much more important than speed.

Page 47: L002.Face Recognition 1

Conclusion

• Significant achievements have been made recently. LDA-based methods and NN-based methods are very successful.
• FERET and XM2VTS have had a significant impact on the development of face recognition algorithms.
• Challenges still exist, such as pose changes and illumination changes. The face recognition area will remain active for a long time.

Page 48: L002.Face Recognition 1

Reference

[1] W. Zhao, R. Chellappa, A. Rosenfeld, and P.J. Phillips, "Face Recognition: A Literature Survey," UMD CFAR Technical Report CAR-TR-948, 2000.
[2] K. Sung and T. Poggio, "Example-based Learning for View-based Human Face Detection," A.I. Memo 1521, MIT A.I. Laboratory, 1994.
[3] H.A. Rowley, S. Baluja, and T. Kanade, "Neural Network Based Face Detection," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 20, 1998.
[4] E. Osuna, R. Freund, and F. Girosi, "Training Support Vector Machines: An Application to Face Detection," in IEEE Conference on Computer Vision and Pattern Recognition, pp. 130-136, 1997.
[5] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, Vol. 3, pp. 72-86, 1991.
[6] W. Zhao, "Robust Image Based 3D Face Recognition," PhD thesis, University of Maryland, 1999.
[7] K.S. Huang and M.M. Trivedi, "Streaming Face Recognition using Multicamera Video Arrays," 16th International Conference on Pattern Recognition (ICPR), August 11-15, 2002.
[8] P.J. Phillips, P. Rauss, and S. Der, "FERET (Face Recognition Technology) Recognition Algorithm Development and Test Report," Technical Report ARL-TR-995, U.S. Army Research Laboratory.
[9] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre, "XM2VTSDB: The Extended M2VTS Database," in Proceedings, International Conference on Audio and Video-based Person Authentication, pp. 72-77, 1999.