
REAL TIME EYE DETECTION

BY: MADHUR DHAMNE, ROHIT DHAYGUDE, TUSHAR DHULE

GUIDANCE BY: DR. S. P. NAROTE

DEPT. OF ELECTRONICS & TELECOMMUNICATION ENGG., SINHGAD COLLEGE OF ENGINEERING, PUNE

Objectives

To detect the EYE from a face present in the field of view of the CAMERA

To move the camera to align itself to capture the eye

Introduction

Roadmap

What is Digital Image Processing?

Fundamental steps in DIP: image acquisition, image enhancement, image restoration, color image processing, compression, morphological processing, segmentation, representation and description, recognition

Feature extraction; feature detection and classification

Fundamental Steps in DIP

Image acquisition- the first process, which may involve preprocessing such as scaling.

Image enhancement- brings out obscured detail or highlights certain features of interest in an image. Enhancement techniques draw on a number of mathematical tools, such as the Fourier transform.

Image restoration- also improves the appearance of an image, but is objective in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation.

Color image processing- uses color as a basis for extracting features of interest from an image.

Wavelets- are the foundation for representing images at various degrees of resolution.

Compression- deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.

(continued…)

Morphological processing- deals with tools for extracting image components that are useful in the representation and description of shape.

Segmentation- partitions an image into its constituent parts or objects.

Representation and description- representation transforms raw data into a form suitable for subsequent computer processing. Description, also known as feature selection, deals with extracting attributes that yield quantitative information of interest.

Recognition- assigns a label to an object based on its descriptors.

Feature Extraction- this is an area of image processing which involves using algorithms to detect and isolate various desired portions of a digitized image or video stream.


Image Enhancement

Original

Histogram Example


Histogram Example (cont… )

Poor contrast


Histogram Example (cont… )

Poor contrast


Histogram Example (cont… )

Enhanced contrast
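The jump from poor to enhanced contrast shown above is typically obtained with histogram equalization. A minimal NumPy sketch (the synthetic low-contrast image is an assumed stand-in for the slide's example):

```python
import numpy as np

def equalize_histogram(img):
    """Spread a low-contrast 8-bit grayscale image over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)  # per-level pixel counts
    cdf = hist.cumsum()                             # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                       # first nonzero CDF value
    # Map each gray level through the normalized CDF.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Example: a poor-contrast image squeezed into the range [100, 140]
rng = np.random.default_rng(0)
img = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
print(out.min(), out.max())  # now spans the full 0 .. 255 range
```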


Smoothing and Sharpening Examples

Smoothing Sharpening

Image analysis

Image analysis aims to identify and extract useful information from an image or a video scene, typically with the ultimate goal of forming a decision.

Image analysis is the centerpiece of many applications, such as remote sensing, robotic vision, and medical imaging.

Image analysis generally involves four basic operations: pre-processing, object representation, feature detection, and classification and interpretation.

Image Segmentation

Image segmentation is an important pre-processing tool. It produces a binary representation of the object with features of interest such as shapes and edges.

Common operations include:

Thresholding: segments an object from its background through a simple pixel-amplitude-based decision. More complicated thresholding methods may be used when the background is not homogeneous.

Edge detection: identifies the edges of an object through high-pass filtering. Directional filters and adaptive filters are frequently used to achieve reliable results.
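Amplitude-based thresholding as described can be sketched in a few lines of NumPy; the cutoff value of 128 and the synthetic image are illustrative assumptions:

```python
import numpy as np

def threshold(img, cutoff=128):
    """Binary segmentation: object pixels (>= cutoff) become 1, background 0."""
    return (img >= cutoff).astype(np.uint8)

# Bright square object on a dark background
img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 200
mask = threshold(img)
print(mask.sum())  # 256 object pixels (a 16 x 16 square)
```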


Segmentation Examples

Thresholding Edge detection


Feature Extraction

This is an area of image processing that uses algorithms to detect and isolate various desired portions of a digitized image.

What is a Feature?

A feature is a significant piece of information extracted from an image which provides more detailed understanding of the image.

Examples of feature detection:

Detecting faces in an image filled with people and other objects

Detecting facial features such as eyes, nose, mouth

Detecting edges, so that a feature can be extracted and compared with another


Feature Detection and Classification

Feature detection is to identify the presence of a certain type of feature or object in an image.

Feature detection is usually achieved by studying the statistical variations of certain regions and their backgrounds to locate unusual activity.

Once an interesting feature has been detected, its representation is compared with all possible features known to the processor. A statistical classifier then outputs the feature type with the closest similarity (or maximum likelihood) to the test feature.

Data collection and analysis (the training process) must be performed on the classifier before any classification.

Block Diagram

Components: INPUT, CAMERA, COMPUTER, MICROPROCESSOR CIRCUIT, MECHANICAL ASSEMBLY (controlling camera focus)

Phases

PHASE 1: CAMERA MOVEMENT

PHASE 2: FACE DETECTION

PHASE 3: LOCATE EYE

PHASE 4: CAPTURE EYE

Camera movement

Mechanical assembly: gears & links

Motion control: according to coordinates

Servo motors: load considerations

Controller/processor: speed considerations

Face Detection


What is face detection?

Importance of face detection

Different approaches

One example

What is Face Detection?

Given an image, determine whether it contains any human face and, if it does, where the face is.

Importance of Face Detection

Required in any face recognition system

Most important in any SURVEILLANCE system

Extracting facial features

Different Approaches

Feature-based approach

Face color is a distinctive feature; it can be used to detect a face.

Face Circle fitting Algorithm

The continuous image stream as input

Skin-color pixels extracted

Similar-color pixel blocks identified

Blocks grouped

Final boundary of the face region determined

Face circle fitted (the face is treated as a circular object)

Face coordinates taken as the centre of the circle

Flow-chart

Details of the algorithm

The algorithm uses the HSI color model; the H value of a pixel decides whether it is a skin-color pixel.

Pixels with H falling in the range [HLo, HHi] are selected.

Blocks of pixels are examined: if the skin-color area of a group of 4 blocks is more than 1/3, the group is kept as skin color.

The boundaries of the face region are then optimized.
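The hue-range test might look like the following sketch. The numeric limits H_LO and H_HI are placeholders (the slides give no values), and the hue from Python's colorsys stands in for the HSI hue component:

```python
import colorsys
import numpy as np

# Placeholder hue range for skin; H_LO / H_HI are illustrative, not from the slides.
H_LO, H_HI = 0.0, 0.14   # hue in [0, 1); skin tones cluster near red/orange

def skin_mask(rgb):
    """Return a boolean mask of pixels whose hue falls in [H_LO, H_HI]."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y, x] / 255.0
            hue, _, _ = colorsys.rgb_to_hsv(r, g, b)  # hue component only
            mask[y, x] = H_LO <= hue <= H_HI
    return mask

# A 2x2 test image: skin-like orange pixels vs. a blue background
img = np.array([[[220, 170, 130], [220, 170, 130]],
                [[ 20,  40, 200], [ 20,  40, 200]]], dtype=np.uint8)
print(skin_mask(img))  # skin-like pixels True, blue pixels False
```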

(continued…)

The holes are filled and the face circle is fitted:

Vertical centre fixed at Cvc = Ftop + R

Horizontal centre Chc is decided by the vertical projection values Pv[i]; the maximum value of Pv[i] gives Chc

Radius calculated from the boundary values: R = (Ftop - Fbottom)/2
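Putting the circle-fitting formulas together, a sketch with hypothetical boundary values (the function name, input names, and the projection list are assumptions for illustration):

```python
def fit_face_circle(f_top, f_bottom, pv):
    """Fit the face circle from region boundaries (a sketch of the slides' formulas).

    f_top, f_bottom : top/bottom row indices of the face region
    pv : vertical-projection values Pv[i], one per column of the region
    """
    radius = abs(f_top - f_bottom) / 2             # R: half the face-region height
    cvc = f_top + radius                           # vertical centre: Cvc = Ftop + R
    chc = max(range(len(pv)), key=pv.__getitem__)  # column where Pv[i] is maximal
    return cvc, chc, radius

# Hypothetical face region spanning rows 40..140, projection peaking at column 3
cvc, chc, r = fit_face_circle(40, 140, [5, 9, 14, 22, 13, 6])
print(cvc, chc, r)  # 90.0 3 50.0
```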

Color Segmentation Algorithm

Color Segmentation

The captured image is in the RGB color space, which is strongly affected by lighting conditions. Remedy: use the YCbCr color space, where the Y component carries luminance and the Cb, Cr components carry chrominance.

RGB to YCbCr conversion:
Y = 0.299R + 0.587G + 0.114B
Cb = -0.169R - 0.332G + 0.500B
Cr = 0.500R - 0.419G - 0.081B
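The conversion equations can be applied to a whole image with a single matrix multiply; this NumPy sketch uses exactly the coefficients listed above:

```python
import numpy as np

# Coefficient matrix taken from the RGB -> YCbCr equations above.
M = np.array([[ 0.299,  0.587,  0.114],   # Y
              [-0.169, -0.332,  0.500],   # Cb
              [ 0.500, -0.419, -0.081]])  # Cr

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) RGB image to stacked Y, Cb, Cr planes."""
    return rgb.astype(float) @ M.T

# Pure red pixel: Y = 0.299*255, Cb = -0.169*255, Cr = 0.500*255
ycbcr = rgb_to_ycbcr(np.array([[[255, 0, 0]]], dtype=np.uint8))
print(ycbcr[0, 0])  # approx. [76.245, -43.095, 127.5]
```

In practice Cb and Cr are usually offset by 128 so they fit in an unsigned byte; the sketch keeps the raw equations as given.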

Result of color segmentation

(continued…)

Image Segmentation

Need: to obtain a clean binary image. A three-step process:

Step 1: removal of small black and white regions

Step 2: edge detection using the Roberts cross operator

Step 3: integration of step 1 and step 2

(continued…)

Step 1: Elimination of small black holes in white regions; removal of white regions smaller than the minimum face area.

Small regions eliminated from the image

(continued…)

Step 2: The Roberts cross algorithm

Performs gradient measurement on the binary image; highlights gradients of high magnitude; converts the highlighted regions into black lines by connecting adjacent pixels.

Gradient magnitude: |G| = |P1 - P4| + |P2 - P3|
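The 2x2 gradient formula maps directly onto array slicing; a NumPy sketch with a synthetic step edge as an assumed test input:

```python
import numpy as np

def roberts_cross(img):
    """Approximate gradient magnitude |G| = |P1 - P4| + |P2 - P3|
    over each 2x2 neighbourhood [[P1, P2], [P3, P4]]."""
    p = img.astype(int)
    g = np.abs(p[:-1, :-1] - p[1:, 1:]) + np.abs(p[:-1, 1:] - p[1:, :-1])
    return g  # one row and one column smaller than the input

# A vertical step edge: left half 0, right half 255
img = np.zeros((4, 6), dtype=np.uint8)
img[:, 3:] = 255
g = roberts_cross(img)
print(g)  # nonzero only in the column where the step occurs
```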

(continued…)

Edges detected by the Roberts cross operator

(continued…)

Step 3: Integration of steps 1 and 2. Small black and white regions are removed, leaving a clear view of the skin region.

(continued…)

Image matching: template eigen-image generation, then correlating the test image with the eigen-image template.

Drawbacks: all faces must be vertical and in frontal view; images must be captured under the same lighting conditions.

Eye Detection

Importance of Eye Detection

Challenges

Different Approaches

Eye Detection (continued…)

Importance of eye detection: man-machine interaction technology, monitoring human vigilance, assisting people with disabilities.

Challenges: eye closure, eye occlusion, variability in scale and location, different lighting conditions, face orientation.

Different Approaches

Dark Area Extraction Algorithm

The dark pixels extraction

Intensity comparison

Change the boundaries

Detect the eye area

Mark Eyes

Send co-ordinates to processor
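The step sequence above might be sketched as a dark-pixel threshold followed by per-half centroid extraction. The intensity cutoff of 50 and the synthetic face are illustrative assumptions, and a real implementation would group pixels into connected blocks rather than taking plain centroids:

```python
import numpy as np

def locate_dark_areas(gray, cutoff=50):
    """Extract dark pixels and return one (row, col) centroid per image half,
    a crude stand-in for left-eye / right-eye candidate coordinates."""
    dark = gray < cutoff                       # dark-pixel extraction
    mid = gray.shape[1] // 2
    coords = []
    for half in (dark[:, :mid], dark[:, mid:]):
        ys, xs = np.nonzero(half)
        if ys.size:                            # centroid of the dark blob
            coords.append((ys.mean(), xs.mean()))
    if len(coords) == 2:                       # shift right-half x back to full image
        coords[1] = (coords[1][0], coords[1][1] + mid)
    return coords

# Bright synthetic face with two dark eye blobs
face = np.full((40, 60), 200, dtype=np.uint8)
face[12:16, 14:20] = 10   # left eye
face[12:16, 40:46] = 10   # right eye
print(locate_dark_areas(face))  # approx. [(13.5, 16.5), (13.5, 42.5)]
```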

Flow Chart

Details of Algorithm

Eye positioning: (a) source image, (b) eye pixel extraction, (c) eye pixel grouping, (d) unreasonable pixel removal, (e) horizontal position locating, (f) nose pixel and eyebrow pixel removal, (g) vertical position locating, (h) eye marking.

Eye Positioning

(continued…)

Convert the captured image to the YCbCr color space

Build two separate eye maps: EyeMapC and EyeMapL

The two eye maps are then combined into a single eye map.

Parallel Eye Detection

(continued…)

EyeMapC construction: in the YCbCr color space, eye regions have higher Cb and lower Cr values.

EyeMapC = (1/3) [ (Cb)^2 + (255 - Cr)^2 + (Cb/Cr) ]

EyeMapL construction: dilation and erosion operators are designed to emphasize the bright and dark areas around the eye.

EyeMapL = (Y(x, y) ⊕ g(x, y)) / (Y(x, y) ⊖ g(x, y) + 1), where ⊕ and ⊖ denote grayscale dilation and erosion with the structuring element g(x, y).
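The EyeMapC formula can be sketched directly. The per-term scaling to 0-255 and the +1 guarding division by zero are implementation assumptions; EyeMapL would additionally need grayscale dilation and erosion (e.g. from scipy.ndimage):

```python
import numpy as np

def eye_map_c(cb, cr):
    """EyeMapC = 1/3 [ Cb^2 + (255 - Cr)^2 + Cb/Cr ], with each term
    scaled to 0..255 (an assumed normalization). Inputs are Cb, Cr
    planes in 0..255; eye regions show high Cb and low Cr."""
    cb = cb.astype(float)
    cr = cr.astype(float)
    scale = lambda t: 255.0 * t / t.max()   # normalize each term to 0..255
    return (scale(cb ** 2)
            + scale((255.0 - cr) ** 2)
            + scale(cb / (cr + 1))) / 3.0   # +1 avoids division by zero

# Eye-like pixel (high Cb, low Cr) vs. skin-like pixel (low Cb, high Cr)
cb = np.array([[180.0, 100.0]])
cr = np.array([[ 90.0, 170.0]])
m = eye_map_c(cb, cr)
print(m[0, 0] > m[0, 1])  # True: the eye-like pixel scores higher
```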

(continued…)

Combining eye map C and eye map L

The table shows the experimental results.

Results (Comparison Chart)

METHOD: FACE DETECTION (FACE CIRCLE FITTING)
APPROACH: COLOR BASED (HSI)
DETECTION RATE: 85-92%
ADVANTAGE: BACKGROUND FILTERING IS FAST
DISADVANTAGE: RGB-TO-HSI CONVERSION REQUIRED, NO DIRECT FUNCTION

METHOD: FACE DETECTION (COLOR SEGMENTATION ALGORITHM)
APPROACH: COLOR BASED (YCbCr)
DETECTION RATE: 90% (FALSE DETECTION 5%)
ADVANTAGE: LIGHTING EFFECT IS REDUCED, LESS COMPUTATION TIME

METHOD: GENDER RECOGNITION
DETECTION RATE: 85%
ADVANTAGE: SPEED OF PROCESSING INCREASED DUE TO PARALLEL IMPLEMENTATION
DISADVANTAGE: IMAGE NEEDS TO BE FRONTAL, SHOULD NOT BE OCCLUDED BY GLASSES ETC.

METHOD: EYE DETECTION (BUILDING EYE MAP / DARK AREA EXTRACTION)
APPROACH: FEATURE INVARIANT
DETECTION RATE: 85-92%
ADVANTAGE: COMPUTATIONAL TIME IS LESS
DISADVANTAGE: AFFECTED BY LIGHTING CONDITIONS

Mechanical Assembly

Microprocessor system, motor systems, links and gears, camera & mountings

Assembly (how it will look/function)

Design Considerations

1. Camera weight (around 1 kg)

2. Minimum displacement required: vertical & horizontal; gear ratio calculation

3. Mobility with connecting cables: the cables should be routed to achieve maximum mobility for camera movement

Embedded system

ARM

Motor Drivers

PC interfacing circuit (USB 2.0 / RS-232)

Manual Controls

ARM

ARM7 TDMI

Designed to be small and to reduce power consumption
High code density
Hardware debug technology within the processor
High-speed flash memory (32 to 128 kB)
Large buffer size and high processing power
Various 32-bit timers
Supports USB 2.0 full-speed device, multiple UARTs, SPI, and SSP to I2C-bus
Supports In-System Programming / In-Application Programming (ISP/IAP)

Proposed Work (Planning Sheet)

Activities (Semester I and Semester II):

1. Basics of Image Processing
2. Literature Survey
3. Software Study
4. Development of algorithm
5. Implementation of prototype
6. Optimization analysis


Thank You…!