Face Recognition, 2.1.2013. Presented by: Galit Levin.

Transcript:

  • Slide 1
  • Presented by: Galit Levin.
  • Slide 2
  • Zhihong Pan, Student Member, IEEE, Glenn Healey, Senior Member, IEEE, Manish Prasad, and Bruce Tromberg. December 2003
  • Slide 3
  • The Problem: How to perform accurate face recognition in the presence of changes in facial pose and expression, and over time intervals between images.
  • Slide 4
  • Current Face Recognition: Uses spatial discriminants based on geometric facial features [3][4][5][6]. Performs well on databases acquired under controlled conditions. Exhibits degradation in the presence of changes in face orientation.
  • Slide 5
  • Current Face Recognition: Performs poorly when subjects are imaged at different times. Significant degradation in recognition performance for images of faces that are rotated more than 32 degrees!
  • Slide 6
  • Motivation Accurate face recognition over time in the presence of changes in facial pose and expression. Algorithm that performs better than current face recognition for rotated faces.
  • Slide 7
  • A Little Biology: Epidermal: the outermost layers of cells in the skin. Dermal: the layer between the epidermis and subcutaneous tissues. The epidermal and dermal layers of human skin contain several pigments: melanin, hemoglobin, bilirubin, and β-carotene. Small changes in the distribution of these pigments cause significant changes in the skin's spectral reflectance!
  • Slide 8
  • Penetration Depth: Visible wavelengths are 380-740 nm. Near-infrared wavelengths are 750-2500 nm. In the near-infrared (NIR), skin has a larger penetration depth than at visible wavelengths. Example: optical penetration of 3.57 mm at 850 nm versus 0.48 mm at 550 nm. The larger penetration captures characteristics that are difficult for a person to modify.
  • Slide 9
  • Spectral Change in Human Skin: NIR reflectance of the right cheek of four subjects. Differences in amplitude and shape.
  • Slide 10
  • And Now The Same Object Different camera angles and poses.
  • Slide 11
  • Spectral Measurements: NIR skin and hair reflectance. 2 subjects in a front view; the illumination is the same for both!
  • Slide 12
  • Spectral Measurements: NIR skin and hair reflectance. 2 subjects in a 90 degree side view.
  • Slide 13
  • Conclusions: Significant spectral variability from one subject to another. The spectral characteristics of one subject remain stable over a large change in face orientation. Skin spectra differences are very pronounced. Hair spectra differences are also noticeable and valuable for recognition.
  • Slide 14
  • Experiments: Hyperspectral face images of 200 human subjects. Each subject imaged (NIR) over a range of poses and expressions. Several subjects imaged multiple times over several weeks. Recognition is achieved by combining spectral measurements for different tissue types.
  • Slide 15
  • Experiments: All images were captured with 31 spectral bands separated by 0.01 μm (10 nm) over the NIR (700 nm-1000 nm). 2 light sources provide uniform illumination on the subject.
  • Slide 16
  • Hyperspectral Bands: The 31 bands for one subject in ascending order; all of them are used!
  • Slide 17
  • Spectral Reflectance Images Main Idea: Convert the hyperspectral images to spectral reflectance images.
  • Slide 18
  • Spectral Reflectance Images: Two Spectralon panels were used during calibration. White Spectralon: a panel with 99% reflectance. Black Spectralon: a panel with 2% reflectance. Both panels have nearly constant reflectance over the NIR range.
  • Slide 19
  • Some Calculations: The raw measurement I(x,y,λ) obtained by hyperspectral imaging at coordinate (x,y) and wavelength λ is modeled as I(x,y,λ) = L(λ) S(λ) R(x,y,λ) + O(x,y,λ). L: illumination. S: system spectral response. R: reflectance of the viewed surface. O: offset.
  • Slide 20
  • Some Calculations: Image of the white Spectralon: I_w(x,y,λ) = L(λ) S(λ) R_w(λ) + O(x,y,λ); the same holds for the black Spectralon. R_w is the reflectance function of the white Spectralon. We average 10 images of the white and black Spectralon panels and estimate E(I_w), E(I_B).
  • Slide 21
  • Some Calculations: Now we can estimate L*S, and then estimate O. Finally we can estimate R (of the subject), which does not depend on L as long as L does not change during the experiment (see the calibration sketch below).
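The two-panel calibration described on the last three slides can be written out as a short sketch. It is only an illustration assuming NumPy arrays of shape (H, W, B); the function and variable names are hypothetical, not part of the presentation.

```python
import numpy as np

def estimate_reflectance(raw, white_imgs, black_imgs, r_white=0.99, r_black=0.02):
    """Estimate per-pixel spectral reflectance R under the model I = L*S*R + O."""
    # Average the repeated panel images to estimate E(I_w) and E(I_B).
    e_white = white_imgs.mean(axis=0)   # (H, W, B)
    e_black = black_imgs.mean(axis=0)   # (H, W, B)

    # From E(I_w) = L*S*r_white + O and E(I_B) = L*S*r_black + O,
    # solve for the combined term L*S and the offset O.
    ls = (e_white - e_black) / (r_white - r_black)
    offset = e_white - ls * r_white

    # Invert the model for the subject image; the result does not depend on L
    # as long as the illumination is unchanged during the experiment.
    return (raw - offset) / ls
```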
  • Slide 22
  • Data Distribution: 200 subjects. Diverse composition in terms of gender, age, and ethnicity.
  • Slide 23
  • Images Examples: 7 images for each subject, covering up to 5 tissue types. fg, fa: front view, neutral expression. The remaining images are at rotations of 45 and 90 degrees. 20 of the 200 subjects were imaged at different times, up to 5 weeks apart.
  • Slide 24
  • Images Examples: Front view taken in 4 different visits.
  • Slide 25
  • Image Representation: Each face image is represented by spectral reflectance vectors. These vectors are extracted from small facial regions which are visible. The regions are selected manually.
  • Slide 26
  • Image Representation: 5 regions (front view), 2 regions (rotated view).
  • Slide 27
  • Spectral Reflectance Vector: Each entry of the reflectance vector for a region t at wavelength λ is estimated by averaging over the N pixels in the region: v_t(λ) = (1/N) Σ_(x,y in region t) R(x,y,λ). The entries over all bands form the spectral reflectance vector for each facial region, which is then normalized.
  • Slide 28
  • Spectral Distance: The distance between face image I and face image J for tissue type t is D_t(I,J) = (v_t^I - v_t^J)^T Σ^(-1) (v_t^I - v_t^J), where Σ represents the B x B covariance matrix of the variability for tissue type t over the entire database. In our experiment, we use a single Σ for the entire data (a sketch follows this slide).
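A minimal sketch of the two steps just described: the region-averaged spectral vector and the covariance-weighted (Mahalanobis-style) distance. The function names and the boolean-mask region representation are assumptions for illustration, not the authors' code.

```python
import numpy as np

def region_spectrum(reflectance_cube, mask):
    """Average the calibrated reflectance over the N pixels of a manually
    selected facial region and normalize the resulting B-band vector."""
    v = reflectance_cube[mask].mean(axis=0)   # (B,) mean spectrum over region pixels
    return v / np.linalg.norm(v)              # normalized spectral reflectance vector

def spectral_distance(v_i, v_j, cov):
    """Distance between the spectral vectors of two face images for one tissue
    type, weighted by a single B x B covariance matrix estimated over the
    entire database."""
    d = v_i - v_j
    return float(d @ np.linalg.solve(cov, d))
```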
  • Slide 29
  • Forehead Spectrum: Larger variance at the ends of the spectral range due to sensitivity to noise.
  • Slide 30
  • Concepts: Gallery (C): a group of hyperspectral images of known identity. Example: image fg. Probes: the remaining images of the subject that are used to test the recognition algorithm. Duplicates: the images taken in the second and subsequent sessions.
  • Slide 31
  • Our Experiments: Every image j in the probe set has a corresponding gallery image T_j. Calculate D(i,j) for each i in C and each j in the probe set. A hit occurs if D(T_j, j) is the smallest of all the gallery distances.
  • Slide 32
  • Our Experiments: M_1 is the number of correctly recognized probes. M_n is the number of probes for which D(T_j, j) is among the n smallest of the gallery distances. n is the rank. P is the total number of probes (see the sketch below).
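The rank-n counting can be written compactly. This sketch of the cumulative match computation only illustrates the definitions above; the array layout (one row of gallery distances per probe) is an assumption.

```python
import numpy as np

def rank_n_rate(distances, true_gallery, n):
    """distances[j, i] holds D(i, j) for probe j and gallery image i;
    true_gallery[j] is the index of the matching gallery image T_j.
    Returns M_n / P, the fraction of probes whose correct gallery image is
    among the n smallest distances (n = 1 gives the recognition rate M_1 / P)."""
    order = np.argsort(distances, axis=1)                      # gallery sorted per probe
    ranks = np.argmax(order == true_gallery[:, None], axis=1)  # 0-based rank of T_j
    return float(np.mean(ranks < n))
```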
  • Slide 33
  • Example: M_2.
  • Slide 34
  • Experiments: Skin is the most useful tissue; hair and lips are less useful! 90% of the probes were recognized accurately with 200 images in the DB.
  • Slide 35
  • Reminder: fg is the gallery image. fa, fb are the probe images.
  • Slide 36
  • Recognition Performance: All tissue types, two probes. fa: same expression as the gallery; fb: different expression.
  • Slide 37
  • Recognition using hyperspectral discriminants is not impacted significantly by changes in facial expression, although it is harder to identify!
  • Slide 38
  • Recognition Performance: Individual tissue types, two probes. Performance degrades in the order forehead, left + right cheek, lips.
  • Slide 39
  • All Tissues Recognition: Change in face orientation over all 200 images in the DB. 75% recognition for 45 degree rotation. 80% of probes have a match in the top 10 for 90 degree rotation.
  • Slide 40
  • Face Orientation: Current face recognition systems experience difficulties in recognizing probes that differ from a frontal gallery by more than 32 degrees. Hyperspectral images achieve accurate recognition results for larger rotations!
  • Slide 41
  • Face Orientation Recognition: (a) female, (b) male, (c) Asian, (d) Caucasian, (e) black, (f) 18-20, (g) 21-30, (h) 31-40, (i) over 40.
  • Slide 42
  • Table Analysis: Four tables are analyzed, for all categories: front-view neutral-expression probes, front-view changed-expression probes, 45 degree rotation, and 90 degree rotation. Example: female probes tend to false-match with female images in the gallery; the same holds for male and Asian probes.
  • Slide 43
  • Duplicates: 98 probes from 20 subjects, acquired between 3 days and 5 weeks after the gallery images. 92% have a correct match in the top 10.
  • Slide 44
  • Duplicates: Performance for duplicates is similar whether they are acquired within one week or over a longer interval. Significant reduction in recognition accuracy for images not acquired on the same day as the gallery. Possible explanations: drift in sensor characteristics, or changes in the subject's condition including variation in blood, water, and melanin concentration. Hyperspectral imaging has potential for face recognition over time!
  • Slide 45
  • Conclusion: Purpose: face recognition over time in the presence of changes in facial pose and expression. Implementation: hyperspectral images over the NIR (0.7-1.0 μm), images of 200 subjects, spectral comparison of combinations of tissue types.
  • Slide 46
  • Conclusion: Results: performs significantly better than current face recognition for rotated faces; accurate recognition performance for expression changes and for images acquired over time intervals. Expectations: further improvement by modeling spectral reflectance changes due to face orientation changes. We use only spectral information; improvement can be achieved by incorporating spatial information.
  • Slide 47
  • Stan Z. Li, Senior Member, IEEE, RuFeng Chu, ShengCai Liao, and Lun Zhang. April 2007
  • Slide 48
  • The Problem Lighting conditions drastically change the appearance of a face. Changes between images of a person under different illumination conditions are larger than those between the images of two people under the same illumination.
  • Slide 49
  • The Problem: Lighting is the topmost issue to solve for reliable face-based applications. The system should adapt to the environment and not vice versa.
  • Slide 50
  • Current Face Recognition Systems Most current face recognition systems are based on face images captured in the visible light spectrum. These systems are compromised in accuracy by changes in the environmental illumination. Most of these systems are designed for indoor use.
  • Slide 51
  • Related Work: Most of the related work improved recognition performance but has not led to a face recognition method that is illumination invariant.
  • Slide 52
  • Related Work: One good direction is to use 3D data. Such data captures the geometric shape of the face and, as a result, is less affected by environmental lighting. It can cope with rotated faces. Disadvantages: increased cost, slow speed, and not necessarily better recognition results; recognition performance with a 2D image and with a 3D image may be similar.
  • Slide 53
  • Motivation Achieving illumination invariant face recognition using active near infrared (active NIR) imaging techniques. Build accurate and fast face recognition systems.
  • Slide 54
  • Control Light Direction Two strategies to control light direction: 1. Provide frontal lighting 2. Minimize environment lighting.
  • Slide 55
  • Frontal Lighting: We would like to produce a clear, frontal-lighted face image. We build active NIR imaging hardware. Mounting active lights on the camera provides the best possible straight frontal lighting, better than mounting them anywhere else.
  • Slide 56
  • Frontal Lighting: Choose active lights in the NIR spectrum (780-1100 nm); we use LEDs. The camera-face distance is 50-100 cm, a convenient range for the user. Guideline: frontal lighting should be stronger than the expected environment illumination but still safe for human eyes.
  • Slide 57
  • Minimize Environment Lighting: A filter that cuts off visible light while allowing NIR light to pass. Our filter passes wavelengths of 720, 800, 850, and 880 nm at rates of 0%, 50%, 88%, and 99%, respectively. This filter cuts off visible environment lights (