Jayashan-cb004082-Criminal Face Recognition-Final
Anton Jayashan cb004082
APIIT Sri Lanka Page 1 of 151
Project Approval
Approval of the Project Manager
…………………………….
Mr. Javed Ahsan
(Project Manager)
I certify that this thesis satisfies the requirements as a thesis for the degree of
Bachelor of Science in Engineering (Hons).
……………………………………
Dr. Dhananjaya Kulkarni
(Supervisor)
I certify that I have read this thesis and that in my opinion it is fully adequate in
scope and quality as a thesis for the Bachelor of Science in Engineering (Hons).
……………………………………
Mrs. Chathura Sooriyaarachchi
(Assessor)
Abstract
A criminal record generally contains personal information about a particular person along
with a photograph. To identify a criminal, investigators rely on identifying details supplied by
eyewitnesses. In most cases the quality and resolution of the recorded image segments are
poor, making it hard to identify a face. To overcome this problem, we are developing
software for automated identification. Identification can be performed in many ways, such as
fingerprints, iris patterns and DNA; face identification is one such application. The face is our
primary focus of attention in social intercourse, playing a major role in conveying identity
and emotion. Although the ability to infer intelligence or character from facial appearance is
suspect, the human ability to recognise faces is remarkable.
Acknowledgment
First of all, let me thank the Almighty God and my parents, the most graceful and merciful,
for their blessings, which contributed to the successful completion of my project work.
I am greatly thankful to my supervisor, Dr. Dhananjaya Kulkarni, for his immense guidance,
support and supervision in making this effort a success. Without his invaluable supervision, all
my efforts could have been short-sighted.
I am also grateful to Mrs. Chathura Sooriyaarachchi for being my student assessor and guiding
me with proper instructions and knowledge.
I would like to thank the Asia Pacific Institute of Information Technology for providing the
facilities of computer laboratories, a library and a study environment.
I would like to thank all my friends, relatives, and the tech-room and library assistants for their
support.
I owe a great deal to my family, who gave me the chance to fulfil my career objectives and
supported me throughout my studies. I would like to dedicate this study to them as an
indication of their significance in this study, as well as in my life. It has been a great
experience in my life.
Finally, I am responsible for all the errors that remain in this documentation.
Table of Contents
Abstract ...................................................................................................................................... 2
Acknowledgment ....................................................................................................................... 3
LIST OF FIGURES ................................................................................................................... 9
Chapter 1 .................................................................................................................................. 13
1 Introduction ...................................................................................................................... 13
1.1 Project background .................................................................................................... 13
1.2 Understanding the problem ....................................................................................... 14
1.3 Project objectives ...................................................................................................... 15
1.4 Proposed solution ...................................................................................................... 16
1.5 Project scope ............................................................................................................. 17
1.5.1 Accurate identification of faces ......................................................................... 17
Chapter 2 .................................................................................................................................. 18
2 Domain investigation ....................................................................................................... 18
2.1 Overview of face recognition system ......................................................... 18
2.2 Similar systems ......................................................................................................... 19
2.2.1 FaceIT ................................................................................................................ 19
2.2.2 ZN-Face ............................................................................................................. 20
2.2.3 Google picasa ..................................................................................................... 22
2.2.4 APPLE IPhoto .................................................................................................... 23
2.3 Problem of the face recognition systems................................................................... 24
2.3.1 Pose variance ..................................................................................................... 24
2.3.2 Lighting/illumination variance........................................................................... 24
2.3.3 Different facial occlusions ................................................................................. 25
2.3.4 Makeup challenges............................................................................................. 25
Chapter 3 .................................................................................................................................. 27
3 System research and investigation ................................................................................... 27
3.1 Fundamental of image processing ............................................................................. 27
3.2 Image classification ................................................................................................... 27
3.3 Image segmentation................................................................................................... 28
Segmentation approaches................................................................................................. 28
3.3.1 Threshold based segmentation ........................................................................... 29
3.3.2 Edge based segmentation ................................................................................... 30
3.3.3 Edge detection .................................................................................................... 31
3.4 Face detection approaches ......................................................................................... 36
3.4.1 Controlled environment ..................................................................................... 37
3.4.2 Colour images .................................................................................................... 37
3.4.3 Images in motion................................................................................................ 37
3.4.4 Face detection difficulties .................................................................................. 38
3.5 Face detection methods ............................................................................................. 39
3.5.1 The Viola-Jones Face Detector .......................................................................... 40
3.5.2 Local Binary Patterns (LBP) .............................................................................. 45
3.5.3 AdaBoost Algorithm for face detection ............................................................. 47
3.5.4 Neural Network based face detection ................................................................ 49
3.6 Face Region extraction approaches ........................................................................... 51
3.6.1 Template matching............................................................................................. 51
3.6.2 Haar like features-Viola Jones ........................................................................... 52
3.7 Face recognition Methods ......................................................................................... 55
3.7.1 Holistic matching methods ................................................................................ 56
3.7.2 Eigenfaces .......................................................................................................... 56
3.7.3 Principal component analysis ............................................................................ 58
3.7.4 Fisherfaces ......................................................................................................... 60
3.7.5 Template matching face recognition methods ................................................... 61
CHAPTER 4 ............................................................................................................................ 62
4 REQUIREMENTS SPECIFICATION ............................................................. 62
4.1 System requirements ................................................................................................. 62
4.1.1 Functional requirements..................................................................................... 62
4.1.2 Non-functional requirements ............................................................................. 66
4.2 Resource requirements .............................................................................................. 66
4.2.1 Hardware requirements ...................................................................................... 67
4.2.2 Software requirements ....................................................................................... 68
CHAPTER 5 ............................................................................................................................ 69
5 DEVELOPMENT METHODOLOGY ............................................................................ 69
5.1 Spiral model .............................................................................................................. 70
5.1.1 Diagram of spiral ............................................................................................... 71
5.1.2 Advantages of spiral .......................................................................................... 71
5.1.3 Disadvantages of spiral ...................................................................................... 71
5.1.4 When to use spiral .............................................................................................. 72
5.2 Waterfall Methodology ............................................................................................. 72
5.2.1 Advantages of Waterfall .................................................................................... 73
5.2.2 Disadvantages of Waterfall ................................................................................ 73
5.3 Agile methodology .................................................................................................... 74
5.3.1 Advantages of Agile .......................................................................................... 75
5.3.2 Disadvantages of Agile ...................................................................................... 75
5.4 Hybrid model............................................................................................................. 76
5.4.1 Why we choose Hybrid Model .......................................................................... 76
5.5 Justification of selection method ............................................................................... 78
CHAPTER 6 ............................................................................................................................ 79
6 TECHNICAL RESEARCH AND INVESTIGATION ................................................... 79
6.1 Developing platforms ................................................................................................ 79
6.1.1 Dot.net ................................................................................................................ 79
6.1.2 .Net Languages .................................................................................................. 81
6.2 Sun Java..................................................................................................................... 82
6.3 Java architecture ........................................................................................................ 82
6.4 Compilation of Code ................................................................................................. 83
6.4.1 Characteristics of Java ....................................................................................... 83
6.5 Image processing APIs .............................................................................................. 84
6.5.1 OpenCv .............................................................................................................. 84
6.5.2 EmguCV ............................................................................................................ 84
6.5.3 Aforge.Net ......................................................................................................... 85
6.6 Developing approaches ............................................................................................. 85
6.6.1 Approach one ..................................................................................................... 85
6.6.2 Approach two ..................................................................................................... 86
6.6.3 Approach three ................................................................................................... 86
6.7 Justification of selection of programming environment ........................................... 86
CHAPTER 7 ............................................................................................................................ 88
7 SYSTEM DESIGN .......................................................................................................... 88
7.1 Use case diagrams ..................................................................................................... 89
7.1.1 Use case diagram for admin ............................................................................... 91
7.1.2 Use case diagram for Admin and Criminal........................................................ 92
7.2 Use case descriptions ................................................................................................ 93
7.2.1 Add new record .................................................................................................. 93
7.2.2 View record ........................................................................................................ 94
7.2.3 Train image space .............................................................................................. 95
7.2.4 Find the matched image ..................................................................................... 96
7.3 System Overview ...................................................................................................... 97
7.4 E-R Diagrams (Process flow) .................................................................................... 98
7.4.1 E-R diagram between administrator and the operator ....................................... 98
7.4.2 E-R diagram between operator and the criminal ............................................... 98
7.5 Activity Diagrams ..................................................................................................... 99
7.5.1 Administrator Process ...................................................................................... 100
7.5.2 Operator Process .............................................................................................. 103
7.6 Sequence diagram ................................................................................................... 104
7.6.1 Administrator ................................................................................................... 104
7.6.2 System Operator............................................................................................... 105
7.7 Collaboration diagrams ........................................................................................... 106
7.7.1 Administrator ................................................................................................... 106
7.7.2 Operator ........................................................................................................... 107
7.8 Component Diagram ............................................................................................... 108
7.9 Deployment Diagram .............................................................................................. 109
7.10 Class Diagram...................................................................................................... 110
CHAPTER 8 .......................................................................................................................... 111
8 TESTING AND EVALUATION .................................................................................. 111
8.1 Test scope ................................................................................................................ 111
8.1.1 Unit testing ....................................................................................................... 112
8.1.2 Integration testing ............................................................................................ 112
8.1.3 Overall system testing ...................................................................................... 112
8.2 Testing Strategy: Unit Testing ................................................................................ 113
8.2.1 Test case 01- Video Preview of main window ................................................ 113
8.3 Testing Strategy: Integration Testing ...................................................................... 114
8.3.1 Test Case: 01 Face detection and face extraction ............................................ 115
8.4 Testing Strategy: Overall System Testing ............................................................... 117
8.4.1 Test case: Overall System Testing ................................................................... 117
8.4.2 Conclusion of overall system testing ............................................................... 118
CHAPTER 9 .......................................................................................................................... 119
9 IMPLEMENTATION .................................................................................................... 119
9.1 Implementation face detection ................................................................................ 119
9.2 Implementation face extraction ............................................................................... 121
9.3 Implementing Training Set Manager ...................................................................... 122
CHAPTER 10 ........................................................................................................................ 123
10 CRITICAL EVALUATION .......................................................................................... 123
10.1 Limitations ........................................................................................................... 125
10.2 Suggestions for Future Enhancements ................................................................ 126
CHAPTER 11 ........................................................................................................................ 127
11 Conclusion ..................................................................................................................... 127
12 Bibliography .................................................................................................................. 128
13 Reference ....................................................................................................................... 131
14 APPENDICES .................................................................................................. 134
14.1 Gantt chart ........................................................................................................... 134
14.2 User Manual ........................................................................................................ 142
LIST OF FIGURES
Figure 1: Data representation in ZN-Face. Such graphs can be shifted or .............................. 21
Figure 2: Add name tags in Picasa ........................................................................................... 22
Figure 3 : IPhoto Face Recognition ......................................................................................... 23
Figure 4: database of a person with different pose .................................................................. 24
Figure 5 : Face image in different illumination ....................................................................... 25
Figure 6: Sample occluded face images ................................................................................... 25
Figure 7 : Challenges in face recognition: images of actress ................................................... 26
Figure 8: Image representation in plane as matrix ................................................................... 27
Figure 9: Threshold based segmentation ................................................................................. 29
Figure 10 : Edge detection by using various methods ............................................................. 32
Figure 11: Masks used by Sobel Operator ............................................................................... 33
Figure 12: Masks used by Prewitt Operator ............................................................................ 34
Figure 13: Three commonly used discrete approximately to the Laplacian filter ................... 34
Figure 14: Original image with the result of various edge detection techniques ..................... 35
Figure 15: Author’s work based on Haar-features ................................................................... 40
Figure 16: Author’s work -non face detection ......................................................................... 42
Figure 17: Author’s work Negative results .............................................................................. 42
Figure 18: Integral image ......................................................................................................... 43
Figure 19: illustration of the integral image and Haar-like features ........................................ 43
Figure 20 : Different types of features ..................................................................................... 44
Figure 21: The basic Local Binary Pattern (LBP) operator ..................................................... 45
Figure 22: Basic scheme of AdaBoost –Authors work based on –Meynet (2003) .................. 47
Figure 23: the process of Neural Network Face Detection ...................................................... 49
Figure 24: Rectangle Features ................................................................................................. 52
Figure 26: This is not a face because F is small ...................................................................... 54
Figure 27: Eigenfaces obtained using ORL database .............................................................. 57
Figure 28: A training image as a weights sequence of Eigenfaces .......................................... 59
Figure 29: Diagram of Spiral ................................................................................................... 71
Figure 30: Structure of Waterfall Methodology ...................................................................... 72
Figure 31: Agile methodology life cycle ................................................................................. 74
Figure 32: Hybrid Model Structure.......................................................................................... 76
Figure 33: Dot Net Frameworks .............................................................................................. 79
Figure 34 : Use Case Diagram ................................................................................................. 90
Figure 35: Use Case Diagram for admin ................................................................................. 91
Figure 36 : Use Case for Admin and Criminal ........................................................................ 92
Figure 37: E-R diagram between administrator and the operator ............................................ 98
Figure 38: E-R diagram between operator and the criminal ....................................... 98
Figure 39: Activity Diagram For System Administrator ....................................................... 100
Figure 40: Activity Diagram For Add Records ..................................................................... 102
Figure 41 :Activity Diagram for Operator Process ................................................................ 103
Figure 42: Sequence Diagram for Administrator ................................................................... 104
Figure 43 :Sequence Diagram for operator ............................................................................ 105
Figure 44: Collaborative diagram for Administrator ............................................................. 106
Figure 45 : Collaboration diagram for operator ..................................................................... 107
Figure 46: Component Diagram ............................................................................................ 108
Figure 47: Deployment Diagram ........................................................................................... 109
Figure 48: Class Diagram ...................................................................................................... 110
Figure 49: Code for Image Loading....................................................................................... 119
Figure 50 : Code for cascade object ....................................................................................... 120
Figure 51: Save detected faces in an array ............................................................................. 120
Figure 52: Final Result of face detection ............................................................................... 121
Figure 53: Extracted Faces ..................................................................................................... 121
LIST OF TABLES
Table 1: Edge based segmentation ........................................................................... 30
Table 2: User Interface Module - Admin Module .................................................... 64
Table 3: Admin Requirements ................................................................................. 65
Table 4: Comparison between the hybrid model and spiral model ......................... 77
Table 5: Use Case Description - Add New User ..................................................... 93
Table 6: Use Case Description for View Record ..................................................... 94
Table 7: Use Case Description of Train Image Space ............................................. 95
Table 8: Use Case Description - Find the Matched Image ...................................... 96
Table 9: System Architecture .................................................................................. 97
Table 10: Test case 01: Video Preview of Main Window ..................................... 113
Table 11: Test Case 01: Face detection and face extraction .................................. 115
Table 13: Test case 01: Count Human Faces ......................................................... 116
Table 14: Test case: Overall System Testing ......................................................... 117
LIST OF ABBREVIATIONS
API - Application Programming Interface
GUI - Graphical User Interface
HSV - Hue, Saturation and Value
IDE - Integrated Development Environment
OS - Operating System
OO - Object Oriented
SDK - Software Development Kit
UML - Unified Modelling Language
LDA - Linear Discriminant Analysis
OOP - Object Oriented Programming
PCA - Principal Component Analysis
SDLC - System Development Life Cycle
ATM - Automatic Teller Machine
2D - Two Dimensional
3D - Three Dimensional
WPF - Windows Presentation Foundation
Chapter 1
1 Introduction
1.1 Project background
In the modern world, security is one of the main concerns. Threats to society are rising
significantly with the increasing rate of crime and terrorist activity. Although there are many
ways of identifying a person, biometric identification approaches have attracted great interest
because of the accuracy and uniqueness of a person's biometric traits. Fingerprint
recognition, voice recognition, palm recognition and iris recognition are common biometric
identification approaches. The advantage of the face recognition approach is that people do
not need to look into an iris scanner, place their hands on a fingerprint reader, or speak into a
nearby microphone. Face recognition techniques can also be very useful in footage taken by
surveillance cameras and similar applications.
This project aims to identify criminal faces. The technique to be used is as follows: images of
criminals, along with their details, are stored in advance in a database. A surveillance camera
system at a public place then automatically matches input faces against the criminal database
and raises an alert when the results match. If an input image matches a stored image at 95%
similarity or above, the person is flagged as the suspected criminal.
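The matching rule described above can be sketched as a small example. This is an illustrative sketch only, not the system's actual implementation: faces are assumed to be pre-extracted and flattened into grey-level feature vectors, cosine similarity stands in for whatever similarity measure the recogniser actually computes, and the 0.95 threshold mirrors the 95% rate mentioned above. All names here are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two flattened face feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_criminal(probe, database, threshold=0.95):
    """Return (record_id, score) of the best match at or above the
    threshold, or None if no stored face is similar enough."""
    best_id, best_score = None, -1.0
    for record_id, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_id, best_score = record_id, score
    if best_score >= threshold:
        return best_id, best_score
    return None

# Toy usage: three stored "faces" (random vectors) and a noisy probe
# that is a near-duplicate of one of them.
rng = np.random.default_rng(0)
db = {f"criminal_{i}": rng.random(64) for i in range(3)}
probe = db["criminal_1"] + rng.normal(0, 0.01, 64)
print(match_criminal(probe, db))
```

In a real deployment the stored vectors would come from a trained recogniser (e.g. Eigenfaces, discussed in Chapter 3) rather than raw pixels, but the alert logic has the same shape: best match above a fixed similarity threshold.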
Face recognition technology can be applied to a wide variety of areas, including access
control for PCs, airport surveillance, private surveillance, criminal identification and security
in ATM transactions. In addition, face recognition systems are moving towards the
next-generation smart environment, where computers are designed to interact more like
humans.
1.2 Understanding the problem
The system takes a face image of a person as input and compares it with the existing face
images already saved in the database. Face detection is therefore the main problem here. In
the real world, it is not always possible to capture a full frontal picture of a face in
uncontrolled environments. Even though many face recognition applications are available,
most of them work only under optimal conditions.
Detecting a face in a probe image may be a relatively simple task for a human, but it is not
for a computer. The computer has to decide which pixels in the image are part of the face and
which are not. In a typical passport photo, where the background is clear, this is easy to do;
but when the background becomes cluttered with other objects, the problem becomes
extremely complex.
As mentioned in the proposal, there are many challenges: uncontrolled background and lighting, an unrestricted range of facial expressions, typical variations in hair style, criminals wearing spectacles or wigs, and facial marks, aging, sketches and other adornments that are difficult to identify. These variations have to be handled by any face detection method that hopes to operate in an uncontrolled environment.
Automated face recognition is a sequence of processes, as outlined below:
1. Capture or detect images from a video or from a camera.
2. Detect and extract the face area from the image.
3. Normalization (convert all extracted faces to grey).
4. Match the extracted face features against the images in the database.
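The normalization step (converting extracted faces to grey) is not spelled out in this report; a common choice is the ITU-R BT.601 luma weighting, sketched below in Python on a tiny image held as nested lists. The function name and the sample image are illustrative only.

```python
def to_grey(rgb_image):
    """Convert an RGB image (rows of (r, g, b) tuples) to greyscale
    using the standard ITU-R BT.601 luma weights."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]

# A 1x2 image: one pure-red pixel and one white pixel.
print(to_grey([[(255, 0, 0), (255, 255, 255)]]))  # [[76, 255]]
```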
The above shows that several sub-steps need to be performed within each step. Although there are many approaches to full face recognition, the proposed solution deviates from the regular face recognition approaches.
1.3 Project objectives
The main objective of this project is to identify a person using images taken previously; the identification is done according to previous images of different persons.
Even today, the face recognition applications we use sometimes fail to identify faces because of different facial expressions and typical variations in hair style. When a criminal wears spectacles or a wig, it can become very difficult to identify the face. Facial marks are another issue: aging, sketches, birthmarks and other adornments are a big challenge for face identification.
This face recognition system will identify individuals based on the characteristics of separate face segments, and the objectives of the project are as follows:
- Identify the unique face features of the eye, nose and mouth regions in order to recognize individuals.
- Improve the capability of detecting features in the local segments of the face, which makes it necessary to find efficient algorithms to extract the features of each segment.
Research will be carried out on face recognition techniques and the algorithms available for partial face recognition, an appropriate method will be chosen, and a face recognition system will be implemented based on it. Finally, as there are many approaches to these tasks, the most suitable techniques have to be determined after evaluating the available techniques against various factors.
1.4 Proposed solution
To overcome the drawbacks of the existing system, we are developing a system that will be very useful for any investigation department. The program keeps track of the record number of each slice during the construction of an identifiable human face and calculates the maximum number of slices sharing the same record number.
The proposed system takes one segment of a face at a time (eye region, nose region, mouth or forehead region) and identifies which region has been submitted. Based on the input region it extracts the features that are unique to that region, then extracts the corresponding face region from those saved in the database and matches the features. Based on the resulting record number, the program retrieves the personal record of the suspect.
Proposed system: an image or video is passed to face detection (face tracking, and localization of size and pose); the aligned face goes to face extraction, which produces a feature vector; and face matching compares this vector against a database of training faces to produce the result.
1.5 Project scope
The scope of the project is confined to storing the images in a database; when a person has to be identified, the images stored in the database are compared with the existing details. The project has a narrow scope, fixed after carrying out extensive research on face recognition using an image or video stream, and it covers the following areas.
1.5.1 Accurate identification of faces
This covers the identification of individuals whose images are stored in the database. The technique is to store images of the criminals, along with their details, in the database in advance. A previously taken photo, from a passport or elsewhere, is inserted and segmented into many slices, such as eyes, hair, lips and nose. Even when a criminal wears a wig or spectacles, it is not hard to identify him, because the system matches the other facial features against the inserted image. Depending on the total matching weight, the system predicts whether the person is the criminal.
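The "total matching weight" idea described above can be sketched as a weighted combination of per-region similarity scores. The region names, weights and scores below are illustrative assumptions, not values taken from this project:

```python
def overall_match(region_scores, region_weights):
    """Combine per-region similarity scores (each 0.0-1.0) into a single
    weighted match percentage."""
    total = sum(region_weights.values())
    weighted = sum(region_scores[r] * w for r, w in region_weights.items())
    return 100.0 * weighted / total

# Hypothetical weights: eyes count more than hair, which a wig can hide.
weights = {"eyes": 0.4, "nose": 0.3, "mouth": 0.2, "hair": 0.1}
scores = {"eyes": 0.98, "nose": 0.96, "mouth": 0.97, "hair": 0.10}  # wig worn
print(round(overall_match(scores, weights), 1))  # 88.4
```

Because hair carries little weight here, the wig barely lowers the overall score, which is exactly the behaviour the segment-based approach is after.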
Chapter 2
2 Domain investigation
2.1 Overview of face recognition systems
Computerized face recognition has been an active research area over the last 40 years, and manual face recognition systems captured the interest of researchers long before that. The manual face recognition process is very slow to give a result, which makes it very difficult to find criminal images.
Normally, face recognition applications recognize faces based on various factors and technologies. Although different face recognition systems use different approaches today, most of them perform a few key steps that are common to most face recognition approaches.
Although the final result is always a recognized face, not all face recognition applications follow the same sequence of steps. Some applications may be more reliable than others, depending on the quality of the captured image as well as the algorithms employed. The techniques used for facial recognition are mainly geometric and photometric: geometric techniques rely on the shapes of the face, called nodal points (the distance between the eyes, the width of the nose, the cheekbones, the depth of the eye sockets, etc.).
Despite improvements, there is still no system available that can report one hundred percent accuracy. Despite the qualitative differences, all facial recognition software applies the same process steps:
1. Face detection
2. Face Extraction
3. Face Normalization
4. Face Recognition
2.2 Similar systems
Face recognition has long been a goal of computer vision, but only in recent years has reliable automated face recognition become a realistic target of biometric research. Several face recognition applications are available on the market, and today this technology is most commonly used for security purposes.
2.2.1 FaceIT
(IJCSIS [326]) Identix, a company based in Minnesota, is one of many developers of facial recognition technology, and FaceIT is one of its products. FaceIT is an award-winning face recognition software engine that allows computers to recognize faces rapidly and accurately [13], and it was named the top performer in the Facial Recognition Vendor Test (FRVT) in 2002.
FaceIT combines all the core face recognition technologies, including face detection, tracking, segmentation and recognition.
The FaceIT pipeline: input images and videos are passed to face detection, the image is normalized if needed (scaling and rotating), and face extraction and face recognition follow.
Some of the facial measurements taken by the software are:
Distance between the eyes
Width of the nose
Depth of the eye sockets
The shape of the cheekbones
The length of the jaw line
2.2.2 ZN-Face
This is an automated access control system developed by ZN Vision Technologies that performs biometric identification of persons from their facial images. To achieve this they applied various methods and several extensions; mainly, they used Elastic Graph Matching for recognizing human faces, which has proven to be a powerful algorithm.
Figure 1: Data representation in ZN-Face
As shown in the picture above, a feature vector is calculated at every node within the graph from the specific local information. The facial similarity between the live feature and the graph model is established on the basis of the weighted sum of the node similarities, and the system can handle large databases of 1000 persons or more.
2.2.3 Google Picasa
Google Picasa combines many features, and face detection and identification is one of the most interesting things in Picasa.
In Picasa there is an album called "unnamed people", where you can find the faces that you have not yet named; you add a person's name to each of these face images manually.
After that, whenever you upload an image to Picasa, it automatically detects and recognizes the face and names it with the name you added earlier.
The main advantage of face detection in Picasa is that, once trained, the algorithm works very well. Sometimes Picasa will find a face even if it is far off in the background.
Figure 2: Add name tags in Picasa
2.2.4 Apple iPhoto
(MacWorld, 2012) The Faces feature in iPhoto uses face recognition to help you find and name the photos of your favourite people. When you upload a photo, iPhoto automatically scans the image, detects the faces, crops out only the faces and prompts you to add the name of the person for each face. If names have already been added, then when you import new photos, iPhoto scans them to see if any include people on your corkboard and suggests selected photos for your confirmation.
Sometimes iPhoto detects non-faces too.
Figure 3: iPhoto face recognition
2.3 Problems of face recognition systems
2.3.1 Pose variance
(Ramchandra, 2013) Face recognition systems normally use frontal-view face images of individuals as training data, since frontal-view images carry more specific and critical information about a face than images at other pose angles. The problem appears when the system has to recognize a rotated face using this frontal-view training data; the user then needs to gather multiple views of each individual in the face database. The most difficult case of the pose problem is when the training and probe image pairs are not at matching poses.
2.3.2 Lighting/illumination variance
(Ramchandra, 2013) The primary requirement of a face recognition system is a good-quality face image. Due to changes in lighting, the same face can sometimes appear quite different: illumination can drastically change the appearance of an object. The following image-processing techniques can be used to provide a degree of lighting invariance:
Normalizing
Histogram equalization
Order-statistic filtering
Figure 4: Database of a person in different poses
Heuristic, light-modelling, model-based and statistical approaches are among the approaches to the illumination problem.
2.3.3 Different facial occlusions
Partial face occlusion is one of the most challenging problems in face recognition. A face recognition system can be confronted with occluded faces in real life, mostly due to accessories such as a scarf or sunglasses, hands on the face, or the face being covered by an object the person is carrying. Therefore, a face recognition system has to be robust to occlusion in order to guarantee reliable real-world operation.
Figure 5: Face images under different illumination
Figure 6: Sample occluded face images
2.3.4 Makeup challenges
Another big challenge for face recognition is makeup or plastic surgery, which may reduce the accuracy of the system. Even though the accuracy of face recognition systems has increased rapidly over the past decades, additional factors such as biological change, plastic surgery (a medically induced change) and the problems discussed above compromise the use of these systems in security applications (see Figures 7 and 8).
The impact of cosmetic makeup is a big challenge for automated face recognition systems, but special algorithms are used today to reduce the impact of such biological changes.
Figure 7: Challenges in face recognition: images of actress Jennifer Grey demonstrating variations due to pose, illumination, expression, and alterations through plastic surgery, aging and makeup (Spie, 2015)
Chapter 3
3 System research and investigation
3.1 Fundamentals of image processing
Digital image processing can be considered one of the most interesting areas of computer vision; it is a method of converting an image into digital form and performing operations on it. Image processing systems are based on mathematical concepts and include treating images as two-dimensional signals. Analog and visual techniques are the methods used in image processing.
Today image processing is a rapidly growing technology in the computer industry. The following sections present the fundamentals of image processing.
3.2 Image classification
According to Jahne (2005, p. 32), there are entirely different ways to display an image; the most popular way to represent one is a rectangular grid. Gonzales (2004, p. 1) describes it further: a digital image can be represented as a matrix, i.e. as a 2-D function where x and y give the coordinates in the geometric plane.
Figure 8: Image representation in the plane as a matrix. Source: (Jahne, 2005, p. 32)
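The matrix view described above can be made concrete with a short sketch: a greyscale image is just a grid of intensity values, and the 2-D function reads the pixel at column x, row y. The values here are arbitrary examples.

```python
# A 3x3 digital image as a matrix of grey values.
image = [
    [0,   64, 128],
    [32,  96, 160],
    [255, 16, 8],
]

def f(x, y):
    """The image as a 2-D function f(x, y): intensity at column x, row y."""
    return image[y][x]

print(f(2, 0), f(0, 2))  # 128 255
```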
3.3 Image segmentation
Image segmentation is the division of an image into meaningful structures, and it is a very important step in image processing tasks such as object representation and visualization. Over the past decades, various types of image segmentation method have been proposed, such as:
Threshold-based segmentation
Edge-based segmentation
Region-based segmentation
Clustering techniques
Matching
In this chapter the author discusses a few image segmentation techniques related to the face recognition process.
According to Grady & Schwartz (2006), "Image segmentation has often been defined as the problem of localizing regions of an image relative to content."
According to Biswas (2008), image segmentation approaches can be divided into the following categories:
Pixel-based segmentation
Region-based segmentation
Edge-based segmentation
Segmentation algorithms for images are generally based on the similarity of image intensity values.
3.3.1 Threshold based segmentation
(John Ashburner & Karl J. Friston) Threshold-based segmentation is the most frequently used technique for segmenting an image. Thresholding remaps each grey value v through the operation g defined by:
g(v) = 0 if v < t, and 1 if v >= t
Figure 9: Threshold-based segmentation
(John Ashburner, 2011) An original image is shown at the top left. If it is known that this image contains only uniformly sized squares, then the image on the top right shows the correct segmentation. Each segment is indicated by a unique grey value. The bottom left and right images show examples of over-segmentation and under-segmentation respectively.
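A minimal Python sketch of the thresholding operation g(v) above, applied pixel by pixel (the image values and the threshold are illustrative):

```python
def threshold(image, t):
    """Binary thresholding: g(v) = 0 if v < t, else 1, applied per pixel."""
    return [[0 if v < t else 1 for v in row] for row in image]

image = [[12, 200, 90],
         [180, 40, 255]]
print(threshold(image, 128))  # [[0, 1, 0], [1, 0, 1]]
```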
3.3.2 Edge based segmentation
Image segmentation can also be done with edge detection techniques, which mainly identify the boundaries within an image. Edges are detected to identify the discontinuities in the image: the edges of a region are traced by comparing each pixel value with those of its neighbouring pixels, and the detected edges do not need to be closed.
There are basically two steps in edge-based segmentation:
Edge detection: Gradient, Laplacian and Canny filtering are techniques used to detect edge pixels.
Edge linking: linking adjacent edge pixels into continuous edges.
The algorithms used for edge-based segmentation are less complex, and the edges of an object are an important feature for separating regions in an image. However, region boundaries can often be hard to find due to noise or occlusion.
Table 1: Edge-based segmentation
3.3.3 Edge detection
(Rashmi, 2004) An edge can be defined as a set of connected pixels that forms a boundary between two disjoint regions. Edge detection is a basic tool in image processing, used mainly for feature detection and extraction; it aims to identify the points in a digital image where the brightness changes sharply and to find discontinuities.
The major property of an edge detection technique is its ability to extract the exact edge line with good orientation. Edge detection is a fundamental tool for image segmentation: an edge detection method transforms an original image into an edge image, detecting the outlines of objects and the boundaries between objects and the background. Today edge detection is used for object detection in various applications such as medical image processing and biometrics.
Edge detection operators include gradient-based methods (Sobel, Robert), Laplacian-based methods (Laplacian, LoG) and Canny.
Edge detection finds the boundary or contour of an image at which a significant change occurs in some physical aspect of the image, and it is a well-developed field in its own right within image processing. As an image segmentation technique, it divides the spatial domain on which the image is defined into meaningful parts or regions, and there are different ways to perform it: edge detectors fall into two main categories, gradient-based and Laplacian-based. The figure below shows the edges of an image detected using the gradient methods of Robert, Prewitt and Sobel and the Laplacian-based method of Marr-Hildreth.
Figure 10: Edge detection using various methods
3.3.3.1 Canny’s edge detection
(R. Yogamangalam) The purpose of Canny's edge detection is to significantly reduce the amount of data in an image while preserving the structural properties needed for further image processing.
Canny's edge detection procedure runs in five separate steps (R. Yogamangalam, 2011):
1. Smoothing: to reduce the effect of noise, the surface of the image is smoothed using Gaussian convolution.
2. Finding gradients: edges should be marked where the gradients of the image have large magnitudes.
3. Non-maximum suppression: only local maxima should be marked as edges.
4. Double thresholding: potential edges are determined by thresholding.
5. Edge tracking by hysteresis: final edges are determined by suppressing all edges that are not connected to a very certain (strong) edge.
3.3.3.2 Sobel edge detection
(Radha, 2011) The Sobel edge detection method was introduced by Sobel in 1970. The Sobel operators initially used for edge detection did not give perfectly sharp edges and were highly sensitive to noisy images. The method finds the approximate absolute gradient magnitude at each point of an input greyscale image. The Sobel kernels are most suitable for detecting edges along the horizontal and vertical axes, whereas Robert's kernels detect edges along the diagonal axes of 45° and 135°.
Figure 11: Masks used by Sobel Operator
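A sketch of the Sobel operator applied at a single pixel, using the two 3x3 kernels shown in Figure 11 and the common approximation |G| = |Gx| + |Gy|. The test image is a synthetic vertical edge:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel

def sobel_magnitude(image, x, y):
    """Approximate gradient magnitude |Gx| + |Gy| at pixel (x, y),
    which must have a full 3x3 neighbourhood inside the image."""
    gx = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * image[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return abs(gx) + abs(gy)

# Dark columns on the left, a bright column on the right: a vertical edge.
edge = [[0, 0, 255],
        [0, 0, 255],
        [0, 0, 255]]
print(sobel_magnitude(edge, 1, 1))  # 1020: a strong response across the edge
```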
3.3.3.3 Prewitt Edge Detection
(Muthukrishnan, 2011) Prewitt edge detection was proposed by Prewitt in 1970 to estimate the magnitude and orientation of an edge. This gradient-based edge detector estimates the gradient in a 3x3 neighbourhood for eight directions: all eight convolution masks are calculated, and the mask with the largest module is selected. Prewitt detection is slightly simpler to implement computationally than Sobel detection, but it tends to produce somewhat noisier results.
3.3.3.4 Laplacian of Gaussian
(Maini, 2011) The Laplacian is a 2-D isotropic measure of the second spatial derivative of an image. The Laplacian of an image highlights regions of rapid intensity change and is therefore often used for edge detection. The operator normally takes a single grey-level image as input and produces another grey-level image as output.
Figure 12: Masks used by the Prewitt operator
Figure 13: Three commonly used discrete approximations to the Laplacian filter
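One of the discrete Laplacian approximations mentioned above, the 4-neighbour kernel, can be sketched as follows; the sample patches are illustrative:

```python
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]  # 4-neighbour approximation

def laplacian_at(image, x, y):
    """Second-derivative response at (x, y): near zero in flat regions,
    large in magnitude where the intensity changes rapidly."""
    return sum(LAPLACIAN[j][i] * image[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

flat = [[50, 50, 50], [50, 50, 50], [50, 50, 50]]
spike = [[50, 50, 50], [50, 200, 50], [50, 50, 50]]
print(laplacian_at(flat, 1, 1), laplacian_at(spike, 1, 1))  # 0 -600
```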
3.3.3.5 Justification of edge detection techniques
In this section the author presents the relative performance of the edge detection techniques discussed above, compared against commonly used algorithms such as the Canny, Sobel and Robert's edge detectors:
The Roberts, Sobel and Prewitt results deviate noticeably from the others.
Marr-Hildreth, LoG and Canny edge detection give almost the same results.
The Kirsch and Robinson edge maps are almost the same.
Canny's edge detection is superior to the other techniques.
Even though many edge detection techniques are available in the computer science field, it remains a challenging task to detect the exact edges of an image without noise.
Figure 14: Original image with the result of various edge detection techniques
3.4 Face detection approaches
(Clare Tischer, 2012) Face detection is the computer vision technology that detects, or determines the location and size of, a face in an image, something humans can do effortlessly. In computer vision terms, however, this task is not easy. A face detection system should achieve this regardless of illumination, orientation or camera distance. There are also general problems in face detection, such as the extraction and verification of faces and identifying only faces or facial features against an uncontrolled background.
Detection is based on the following techniques (author's work based on an article by Luis Torres, 2012):
Feature-based approaches: low-level analysis (edges, colour, motion), feature analysis and constellation analysis
Image-based approaches: linear subspace methods and statistical approaches
3.4.1 Controlled environment
This is the simplest and most straightforward case. The photographs are taken under controlled lighting, background, etc., so simple edge detection techniques can be used to detect the faces.
3.4.2 Colour images
Skin colour is typically used to detect faces in colour images, though the result may weaken if the lighting conditions change, since human skin colour can range from nearly white to almost black. (Proyecto, 2010) The major difference lies in the intensity, so chrominance is a good feature. It is not easy to establish a solid representation of human skin colour; however, there are attempts to build robust face detection algorithms based on skin colour.
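As a crude illustration of skin-colour-based detection, the rule below tests raw RGB values; the thresholds are a commonly cited rule of thumb, not part of this project, and real detectors usually move to a chrominance space such as YCbCr precisely because intensity varies so widely:

```python
def looks_like_skin(r, g, b):
    """A very rough RGB skin-colour test: reddish, reasonably bright,
    and with enough spread between the channels. Illustrative only."""
    return (r > 95 and g > 40 and b > 20
            and r > g and r > b
            and (r - min(g, b)) > 15)

print(looks_like_skin(220, 180, 140))  # True  (a typical light skin tone)
print(looks_like_skin(30, 30, 200))    # False (saturated blue)
```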
3.4.3 Images in motion
Detecting faces in real-time video is a challenge, and there is a continuing effort to achieve the best detection results with the best possible performance. (Proyecto, 2010) Another motion-based approach is eye-blink detection, which has many uses aside from face detection.
3.4.4 Face detection difficulties
Building automatic, robust face detection is a very hard process, because a face detection system must detect faces in different image situations and face conditions, even though this seems really easy for the human visual system. In fact, the object "face" is hard to define with mathematical algorithms because of its large variability, which depends on the identity of the person, the lighting conditions, the psychological context of the person, and so on.
The main challenge in detecting faces is to find a classifier that can discriminate faces from all other possible images.
Face as a global attribute: we can extract some common attributes from every face. A face is usually modelled as an ellipse, but there are thin faces as well as rounder faces, and skin colour also differs from person to person, so a system may get confused when detecting faces automatically.
The pose of the face: the position of a person in front of the camera changes the view of the face.
The facial expression: the appearance of the face changes the results; the features of a smiling face can be far from the appearance of a sad face.
Presence of added objects: objects commonly found on a face can affect the result of detection. If a person wears sunglasses, that changes one of the main characteristics of the face, and natural features such as beards and hair styles can also occlude part of the face.
Image conditions: the appearance of a face changes with the lighting, so illumination and intensity have to be considered.
3.5 Face detection methods
Today, face detection and facial feature recognition are active research areas in the computer vision field. (Zhang, 2010) Face detection is the basic, first step of face recognition. A number of methods and algorithms have been proposed for face detection, and they can be grouped into several categories:
Knowledge-based methods use predefined rules, based on human knowledge, to determine whether a region is a face.
Feature-based approaches aim to find the features of the face structure.
Template-matching methods use sample face templates to judge whether an image is a face.
Appearance-based methods learn face models from a set of representative training face images.
In the technical report surveyed here, the recent advances in face detection over the past decade are reviewed: the Viola-Jones face detector is reviewed first; feature-invariant approaches, which usually consider skin colour, texture and facial features, are covered next; and the various techniques are then surveyed according to how features are extracted and which learning algorithms are adopted.
Face detection methods: knowledge-based methods, template-matching methods, feature-invariant approaches and appearance-based methods (author's work based on the technical report of Zhang, 2010)
3.5.1 The Viola-Jones Face Detector
(Zhang, 2010) The face detector proposed by Viola and Jones, based on statistical methods, is the most popular of the face detection approaches. It is a variant of the AdaBoost algorithm that achieves rapid and robust face detection: a face detection method based on AdaBoost using Haar features detects faces successfully with high accuracy, although we still cannot be fully satisfied with the accuracy of this method.
The basic principle of the Viola-Jones algorithm is to scan the given image with a sub-window capable of detecting faces. The standard image processing approach would be to rescale the input image to different sizes and then run a fixed-size detector over these images, which is time-consuming because of the calculations on the differently sized images. Viola-Jones instead uses a scale-invariant detector that requires the same number of calculations whatever the size. This detector is constructed using an integral image, which can be used to compute simple Haar-like rectangular features.
The proposed method for detecting faces is the Viola-Jones Haar classifier cascade, which draws a rectangle around each face as shown below.
Figure 15: Author’s work based on Haar-features
3.5.1.1 Advantages and disadvantages of Viola-Jones Technique
Advantages
The Viola-Jones face detection algorithm is one of the most admired algorithms for real-time face detection.
Another main advantage is its very high detection speed.
Compared with other face detection algorithms, Viola-Jones is an especially successful method because it has a low false-positive rate.
Disadvantages
It takes a long time to train.
It handles only a limited range of head poses.
It can fail to detect dark faces (depending on the photo background).
Furthermore, Viola-Jones uses three types of features: two-rectangle, three-rectangle and four-rectangle features, each computed as the difference between the sums of the pixels within its rectangular regions.
3.5.1.2 Generating negative values
Sometimes, for the training images, the cascade classifier generates negative values or detects non-faces. A negative example is basically just an image not containing faces; Figure 16 below shows how the cascade classifier can detect non-faces.
Negative examples should be generated in the same manner as the detector scans through an image. This means they should be generated at every size and then rescaled to the base resolution of 24x24 pixels. The detector finds all faces within a given image via the Haar classifier cascade and draws a rectangle around them.
(Lyngby, 2008) Haar-like feature classifier cascades [9], [14], [15] are composed of multiple classifiers, or conditions, that are used to distinguish unique objects (i.e., face, eye, etc.) from anything that is not that object. Classifiers are sets of values representing sums of pixel brightness on a region-by-region basis; these specifically structured regions are called Haar-like features. Haar-like feature classifiers are created using the integral image, an intermediate image representation that allows the rectangle features used by the detector to be computed very rapidly.
Figure 16: Author's work, non-face detection
Figure 17: Author's work, negative results
3.5.1.3 The scale invariant detector
The first step of the Viola-Jones face detection algorithm is to turn the input image into an integral image. This is done by making each pixel equal to the sum of all pixels above and to the left of the pixel concerned, as demonstrated in Figure 18.
This allows the sum of all pixels inside any given rectangle to be calculated using only four values: the pixels in the integral image that coincide with the corners of the rectangle in the input image, as demonstrated in Figure 19.
3.5.1.4 The Integral image
Each pixel in the integral image holds the cumulative sum of the pixels above and to the left of the corresponding pixel in the original image.
Figure 18: Integral image
Figure 19: Illustration of the integral image and Haar-like features (regions A, B, C and D of the image, with integral-image corner points 1, 2, 3 and 4)
Sum of rectangle D = 4 + 1 - (2 + 3), because the corner values of the integral image are:
1 = A
2 = A + B
3 = A + C
4 = A + B + C + D
so that
4 + 1 - (2 + 3) = (A + B + C + D) + A - (A + B) - (A + C) = D
It has now been demonstrated how the sum of pixels within a rectangle of arbitrary size can be computed. The Viola-Jones face detector analyzes a given sub-window using features consisting of two or more such rectangles; the different types of features are shown in Figure 20.
Figure 20: Different types of features
3.5.2 Local Binary Patterns (LBP)
(THÈSE N, 2006)The LBP operator is a non-parametric 3*3 kernel which summarizes local
special structure of an image. It was found by Ojala. The Local Binary Pattern technique is
very effective to describe the image texture features.LBP has advantages such as high speed
computation and rotation invariance, face recognition image segmentation etc.
According to the THÈSE N, 2006, [17] even in early the cascade quickly discard most of the
background regions, in last stages of the cascade, a large number of Haar-like features are
necessary to reach the desired detection acceptance rate trade off. [18] It results in a long
training procedure and cascade with several dozen stage which are difficult to design. So
furthermore Haar like features are not robust to local illumination changes.
3.5.2.1 Frontal face detection using LBP
Figure 21: The basic Local Binary Pattern (LBP) operator
(Varsha, 2014) More recently, LBP has been successfully applied to the detection of moving
objects via background subtraction. In LBP, every pixel is assigned a texture value, which can
be naturally combined with target tracking in thermographic and monochromatic video. The
major uniform LBP patterns are used to recognize the key points in the target region and then
form a mask for joint color-texture feature selection.
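The basic 3×3 operator in Figure 21 can be sketched as follows (an illustrative Python sketch; the clockwise neighbour ordering chosen here is one common convention, not necessarily the one used in the cited papers):

```python
def lbp_code(img, y, x):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre
    pixel and read the resulting bits as a byte."""
    centre = img[y][x]
    # neighbour offsets, clockwise starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code
```

Because only the comparison against the centre pixel matters, adding a constant brightness to the whole patch leaves the code unchanged, which is the monotonic-illumination tolerance mentioned below.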
3.5.2.2 Advantages and disadvantages of LBP
Advantages
(According to Sharma, 2014)
- Effective at describing image texture features
- Detects moving objects by background subtraction
- A simple approach
- Computationally simpler and faster than Haar-like features
- The most important properties of the LBP feature are its tolerance of monotonic
illumination changes and its computational simplicity
Disadvantages
(According to Sharma, 2014)
- Not sensitive to small changes in face localization
- Using large local regions increases the number of errors
- Insufficient for non-monotonic illumination changes
- Not accurate
- Uses only binary and grey images
3.5.3 AdaBoost Algorithm for face detection
(Meynet, 2003)AdaBoost algorithm is highly inspired by the Robust Real time object
detection of Viola and Jones. Boosting algorithms is a powerful iterative procedure that
builds efficient classifiers by selecting and combining very simple classifiers. This new
method given by Viola [1] is a combined method of more traditional ones like geometrical
image based detection that is a geometrical in the sense that it uses general features of human
faces.
(Varsha Gupta, 2014) AdaBoost algorithm was the first practical Boosting algorithm and one
of the most widely used and studied with applications in numerous fields. [2] AdaBoost is a
learning algorithm which produces a strong classifier by choosing visual features in a family
of simple classifiers and combining them linearly. Although AdaBoost [12] is more resistant
to over fitting than many machine learning algorithms, it is repeatedly sensitive to noisy data
and outlier’s. [2] AdaBoost is called adaptive because it uses multiple iterations to generate a
single composite strong learner.
In AdaBoost algorithm, it has two main goals.
Selecting a few set of features which represents as well as possible faces.
Train a strong final classifier with a linear combination of these best features.
Figure 22: Basic scheme of AdaBoost – author’s work based on Meynet (2003). AdaBoost
combines a training set and a family of weak classifiers into one strong classifier.
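To make the selection-and-combination idea concrete, here is a minimal boosting sketch on one-dimensional toy data (Python is used as an illustration; the data, stump thresholds and round count are invented for the example, and the thesis implementation itself targets C#/Emgu CV):

```python
import math

def train_adaboost(xs, ys, thresholds, rounds):
    """Toy AdaBoost on 1-D data. Weak classifiers are stumps
    h(x) = +1 if x > t else -1 (and their flipped versions).
    Returns a list of (alpha, threshold, polarity) triples."""
    n = len(xs)
    w = [1.0 / n] * n                  # uniform initial sample weights
    model = []
    for _ in range(rounds):
        # pick the stump with the lowest weighted error
        best = None
        for t in thresholds:
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if (pol if xi > t else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(err, 1e-10)          # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, t, pol))
        # re-weight: misclassified samples gain weight
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            pred = pol if xi > t else -pol
            w[i] *= math.exp(-alpha * yi * pred)
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def predict(model, x):
    """Sign of the weighted vote of the selected weak classifiers."""
    score = sum(alpha * (pol if x > t else -pol) for alpha, t, pol in model)
    return 1 if score >= 0 else -1
```

In the Viola-Jones detector the stumps are replaced by thresholded Haar-like feature values, but the weight-update and linear-combination scheme is the same.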
3.5.3.1 Advantages and disadvantages of AdaBoost
According to Gupta (2014)
Advantages:
- AdaBoost needs only two inputs, a training dataset and a set of features, and requires
no prior knowledge about face structure
- Very simple to implement
- Performs feature selection, resulting in a comparatively simple and fast classifier
- Simple and easy to program
Disadvantages:
- The quality of the final detection depends highly on the consistency of the training set
- Quite slow training
- Weak classifiers that are too complex lead to overfitting
- Very sensitive to noisy data and outliers
3.5.4 Neural Network based face detection
Artificial Neural Network is one of a popular tool in face recognition. In here the technique
what uses in Neural Network is pattern recognition and classification. Kohonen [11] was the
first to demonstrate that a neuron network could be used to recognize aligned and normalized
faces. [9] A rationally attached neural network examines small windows of an image, and
chooses whether each window contains a face. [12] Training a neural network for the face
detection task is challenging because of the difficulty in characterizing prototypical none face
images. [12] Unlike face recognition, in which the classes to be discriminated are different
faces, the two classes to be discriminated in face detection are “images containing faces” and
“images not containing faces”.[12] It is easy to get a representative sample of images which
contain faces, but it is much harder to get a representative sample. Figure 23 shows this
approach.
Figure 23: the process of Neural Network Face Detection
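The window-scanning idea behind this approach can be sketched as follows (an illustrative Python sketch; `classify` stands in for the trained network, which is not reproduced here):

```python
def scan_windows(img, window, step, classify):
    """Slide a fixed-size window over the image and collect the
    top-left corners of windows the classifier labels as faces."""
    h, w = len(img), len(img[0])
    hits = []
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            patch = [row[x:x + window] for row in img[y:y + window]]
            if classify(patch):
                hits.append((y, x))
    return hits
```

A real detector repeats this scan at several image scales and merges overlapping hits; the sketch shows only the single-scale pass.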
3.5.4.1 Advantages and disadvantages of Neural Networks
Advantages
- The algorithm can detect between 78.9% and 90.5% of faces in a set of 130 test
images with a satisfactory number of false face detections
- Less computationally expensive
- Acceptable false-detection rate
Disadvantages
- The detection process is slow because the non-face windows must be trained on
- The results are not highly accurate
- The methodology is complex
3.6 Face region extraction approaches
The face recognition approach was introduced by Sir Francis Galton in 1888 (Kepenekci,
2001), who measured four characteristics of French prisoners. That attempt by Sir Francis
Galton to identify persons in a more scientific way laid the foundation of biometric
recognition, which in turn led to the improvement of face recognition.
The author has identified several approaches to extracting face features during the research.
Among them, the following techniques show promising results.
3.6.1 Template matching
(Adaptive Vision, 2007-2014) Template matching is a high-level machine vision technique
that identifies the parts of an image that match a predefined template. Advanced template
matching algorithms allow finding occurrences of the template regardless of their orientation
and local brightness. Template matching techniques are flexible and relatively
straightforward to use, which makes them one of the most popular methods of object
localization.
(Proyecto, 2010) Most face recognition algorithms are combined with some template
matching technique. Basically, the template matching process uses pixels, samples, models
or textures as patterns. The recognition function computes the difference between these
features and the stored templates. Although matching of 2D images was the early trend,
nowadays 3D templates are common.
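A minimal version of this matching process can be sketched with a sum-of-absolute-differences score (an illustrative Python sketch; real systems typically use normalized correlation and handle scale and rotation as well):

```python
def best_match(img, template):
    """Locate a template by exhaustive search, scoring each position
    with the sum of absolute differences (smaller = better match)."""
    ih, iw = len(img), len(img[0])
    th, tw = len(template), len(template[0])
    best_pos, best_score = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = sum(abs(img[y + dy][x + dx] - template[dy][dx])
                        for dy in range(th) for dx in range(tw))
            if best_score is None or score < best_score:
                best_pos, best_score = (y, x), score
    return best_pos, best_score
```

A perfect match scores zero; in practice the stored template that minimizes this distance (or maximizes correlation) against the probe image is reported as the recognized identity.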
3.6.2 Haar-like features – Viola-Jones
As discussed in previous sections, the Viola-Jones (2001) approach uses features generated
from Haar basis functions. According to Viola and Jones (2001), the reason for using features
is “that features can act to encode ad-hoc domain knowledge that is difficult to learn using a
finite quantity of training data.” This section further discusses how the integral image
mentioned earlier is used for feature extraction.
3.6.2.1 Integral image for feature extraction
Rectangle feature value F = ∑(pixel values in the white area) − ∑(pixel values in the dark area)
Figure 24: Rectangle features
[Source: Viola & Jones, 2001]
1 4 5 2
3 0 1 3
5 5 4 1
0 3 2 5
Find the rectangle feature value of the box enclosed by the dotted line:
Rectangle feature value F = ∑(pixel values in the white area) − ∑(pixel values in the dark area)
= (5 + 4) − (0 + 1)
= 9 − 1
= 8
A simple face detection method using one feature:
Rectangle feature value F = ∑(pixel values in the white area) − ∑(pixel values in the dark area)
Pseudo code for the detection method (if F is large, the window is a face):
If F > threshold then
face
else
non-face
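The pseudo code above can be sketched directly (illustrative Python; the rectangle coordinates and threshold below are invented for the example, and a real detector would use the integral image from section 3.5.1.4 instead of raw sums):

```python
def rect_feature_value(img, white, dark):
    """F = sum(pixels in white rectangles) - sum(pixels in dark rectangles).
    Each rectangle is given as (top, left, bottom, right), inclusive."""
    def area(r):
        top, left, bottom, right = r
        return sum(img[y][x] for y in range(top, bottom + 1)
                   for x in range(left, right + 1))
    return sum(area(r) for r in white) - sum(area(r) for r in dark)

def is_face(img, white, dark, threshold):
    """The one-feature detector: classify as a face when F > threshold."""
    return rect_feature_value(img, white, dark) > threshold
```

On the 4×4 worked example above, the white row (5, 4) minus the dark row (0, 1) gives F = 8, so any threshold below 8 would classify the window as a face.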
The purpose of finding the sum of pixel values:
Figure 25: This is a face, because the eye area is dark and the nose region (the white area) is
bright, so F is large and the window can be taken as a face.
Figure 26: This is not a face, because F is small.
The following differences of areas may be considered face features:
Left eye: (Area A − Area B)
Nose: (Area C + Area E − Area D)
Mouth: (Area F + Area H − Area G)
Author’s work based on Wong (2004)
3.7 Face recognition methods
The main algorithms of face recognition are:
- Eigenfaces
- Fisherfaces
- Laplacianfaces
- Elastic Graph Matching
- Neural Networks
- Vector mechanisms
In order to develop a useful and applicable face recognition system, several factors need to
be taken into account:
1. The overall speed of the system, from detection to recognition, should be acceptable.
2. The accuracy should be high.
3. The system should be easily updated and enlarged to increase the number of subjects.
All face recognition methods can be classified as either 2D or 3D. Distances between
important points are used to recognize known faces, e.g. measuring the distance between the
eyes or other important points, or measuring different angles of facial components.
Techniques:
Appearance-based techniques – Eigenfaces and Fisherfaces
Feature-based techniques – distances between landmark points such as eyes, nose and
mouth; graph matching techniques
Model-based techniques – active appearance/shape models, fitting morphable models
Author’s work based on Cybula (2004)
Face recognition is such a challenging yet interesting problem that it has attracted
researchers with different backgrounds: psychology, pattern recognition, neural networks,
computer vision and computer graphics.
The following methods are used for face recognition:
1. Holistic matching methods (appearance)
2. Feature-based methods (landmarks)
3. Hybrid methods
The main challenges are classifying features and measuring the distance between different
faces.
3.7.1 Holistic matching methods
In the holistic approach, the complete face region is taken into account as input data to the
face-matching system. The best examples of holistic methods are Eigenfaces and Fisherfaces,
the most widely used methods for face recognition, which apply Principal Component
Analysis, Linear Discriminant Analysis and Independent Component Analysis.
In these two approaches, face images are kept as templates in the database and matched
against the image to be identified.
3.7.2 Eigenfaces
The process of facial recognition is to discriminate input image data into several classes. The
input signals are highly noisy, caused by differing lighting conditions, pose, etc. Still, input
images are not completely random, and in spite of their differences there are patterns which
occur in any input signal, such as the eyes, nose and mouth present in any face and the
relative distances between them. In facial recognition these characteristic features are called
Eigenfaces.
These characteristics can be extracted from the original image data by means of a
mathematical tool called Principal Component Analysis (PCA), also known as the
Karhunen-Loeve method. It is now commonly used for dimensionality reduction in computer
vision: it chooses a dimensionality-reducing linear projection that maximizes the scatter of
all projected samples.
At an elementary level, the image of a face can be expressed as a one-dimensional column
vector of concatenated rows of pixels:
X = [x1, x2, x3, ..., xn]
where n is the total number of pixels in the image. To compare two images, the simplest
method is to compare them pixel by pixel, but n may be too large for such a comparison; for
example, n would be 10,000 for a small image of 100×100 pixels, so the comparison would
be time-consuming and inefficient.
Moreover, as most pixels do not represent facial features, not all pixel values are important
for the comparison. For that reason the dimensionality-reduction technique PCA is used in
the Eigenfaces approach to keep the representation of the image compact and efficient for
comparison.
Figure 27: Eigenfaces obtained using ORL database
3.7.3 Principal component analysis
Principal Component Analysis, also known as the Karhunen-Loeve (KL) transform, and as
the Hotelling transform in image processing, is one of the most used and cited statistical
methods for feature extraction. It is a technique for identifying patterns in data and
expressing the data in a way that highlights their similarities and differences.
Consider a set of M = 15 images as shown above (Figure 27); each image is transformed into
a vector of size N and placed into the set
S = {X1, X2, X3, ..., XM}
After obtaining the set, the mean image Ψ is computed:
Ψ = (1/M) Σ Xn, for n = 1 ... M
Then the difference between each input image and the mean image is obtained:
Φi = Xi − Ψ
Eigenfaces works by finding the eigenvectors and eigenvalues of the covariance matrix C of
the training set of M images. The average face μ of the set is defined by:
μ = (1/M) Σ Xi, for i = 1 ... M
and the covariance matrix is calculated as:
C = (1/M) Σ (Xi − μ)(Xi − μ)^T, for i = 1 ... M
A new face is transformed into its Eigenface components: the input image is compared with
the mean image and their difference is multiplied by each eigenvector of the projection
matrix. Each resulting value represents a weight and is saved in a vector:
Y = W_pca (X − μ)
where Y = [y1, y2, ..., yd]^T is the feature vector of the image, containing the weights that
describe the contribution of each Eigenface in representing the input image X. Each training
image can thus be represented by a weighted sequence of Eigenfaces, as shown below.
Figure 28: A training image as a weights sequence of Eigenfaces
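The mean image, covariance matrix and projection used by the Eigenfaces method can be sketched for tiny vectors (illustrative Python; a real system would compute the eigenvectors of C with a linear algebra library rather than supply them by hand, and would work on full image vectors):

```python
def mean_face(faces):
    """Psi = (1/M) * sum(X_n): the average of the training vectors."""
    m, n = len(faces), len(faces[0])
    return [sum(f[i] for f in faces) / m for i in range(n)]

def covariance(faces, mu):
    """C = (1/M) * sum (X_i - mu)(X_i - mu)^T, as an n x n matrix."""
    m, n = len(faces), len(faces[0])
    diffs = [[f[i] - mu[i] for i in range(n)] for f in faces]
    return [[sum(d[i] * d[j] for d in diffs) / m for j in range(n)]
            for i in range(n)]

def project(x, mu, eigenfaces):
    """Weight vector Y: the contribution of each eigenface to image x,
    computed as dot products of (x - mu) with the eigenvectors."""
    diff = [xi - mi for xi, mi in zip(x, mu)]
    return [sum(e[i] * diff[i] for i in range(len(diff)))
            for e in eigenfaces]
```

Recognition then reduces to comparing the short weight vectors Y of the probe and of each stored face, instead of comparing all n pixels directly.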
3.7.4 Fisherfaces
Belhumeur et al. [29] suggested the Fisherfaces method, an example of a class-specific
method, in 1997. It is a more reliable method for reducing the dimensionality of the feature
space. Fisherfaces uses the same template matching approach as the Eigenfaces method
described earlier, projecting images into a feature space; the difference is that Fisherfaces
uses Fisher’s Linear Discriminant (FLD) [37] [47] where Eigenfaces uses Principal
Component Analysis (PCA).
3.7.4.1 Comparison of Eigenfaces and Fisherfaces
Computational complexity: Fisherfaces – slightly more complex; Eigenfaces – simple
Effectiveness across pose: Fisherfaces – good, even with limited data; Eigenfaces – some,
with enough data
Sensitivity to lighting: Fisherfaces – little; Eigenfaces – very sensitive
Both the Fisherfaces and Eigenfaces techniques work very well on a data set that is
uniformly and densely sampled across pose. When only a sparser data set across pose is
available, the Fisherfaces approach performs better than Eigenfaces.
3.7.5 Template matching face recognition methods
(Marqu, 2010) Template matching techniques are used in many face recognition algorithms.
A template matching process uses pixels, samples, models or textures as patterns. [13] The
recognition function computes the differences between these features and the stored
templates, using correlation or distance measures.
CHAPTER 4
4 REQUIREMENTS SPECIFICATION
4.1 System requirements
Since this system is about identifying individuals using partial face recognition, it will have
simple interfaces for interaction with users. The functional requirements section shows the
procedures of the proposed system and the functional outcomes of the system. The
non-functional requirements section then describes the performance, usability and security of
the system.
4.1.1 Functional requirements
By conducting the requirements analysis, the requirements that restate the problem definition
were listed:
Face detection
- The user can detect the faces in an image entered into the system.
Manual face detection
- To increase the accuracy of face detection, a function is added to detect faces
manually when the system does not detect them automatically.
Extract the detected faces
- After detecting the faces in an inserted image, the user can extract only the faces
from the image.
Face recognition
- Using this module the user can input a probe image to search the database. The
system recognizes the input image against the images in the database by comparing
face features one by one, obtaining a matching weight for each face image, and
identifying the face from the total weight.
Module description
A structured design improves the maintainability of a system. A structured system is one that
is developed top-down and modular, that is, broken down into manageable components. In
this project the author modularized the system so that the modules have minimal effect on
each other. The application is designed as five independent modules which take care of
different tasks efficiently:
1. User Interface Module
2. Admin Module
3. Client Module
4. Database Operations Module
5. Identify Module
User Interface Module:
- Every application has a user interface for accessing the entire application, and this
application likewise provides one user interface for accessing it. The user interface is
designed entirely around the end users, provides friendly access, and has an attractive
look and feel. Technically, C# is used to prepare the user interfaces.
Admin Module:
Create – Assign a new user id and password to an employee.
Delete – The administrator can delete the user id and password of an unwanted employee.
Update – The details of criminals are first obtained using the CrimeID.
Table 2: User Interface Module – Admin Module
Client Module:
Database Operations Module:
Add module:
The add module adds the details of the criminals along with the criminal’s photo. While
adding the details of a criminal, the image of the criminal is cropped and the cropped parts
are stored in a separate database.
Delete module:
This module deletes the criminal details along with the photo. The operator first submits the
criminal id and searches for its availability in the database. If the id is available, the operator
may delete the record of that particular criminal.
Login – The user logs in to the home page by entering an id and password.
Adding details – Personal details of the criminal are stored in the database; images are
cropped and saved in the database.
Update process – Enter the criminal id and obtain his details; update the details and images
of the existing criminal.
Delete process – Enter the criminal id; delete the details and image of the unwanted
criminal.
Logout – Log out to the home page.
Table 3: Admin requirements
Update module:
The operator first enters the criminal id and searches for its availability. If the id is available
in the database, the details of that criminal are retrieved, the operator can update them, and
the updated details are stored in the database again for future retrieval.
Identify Module:
The cropped parts of the criminals, along with the criminal id, are viewed by the eyewitness.
The eyewitness selects a particular cropped part of the criminal and it is frozen by the
operator; then the complete face of the criminal is constructed and the details of the criminal
are retrieved.
4.1.2 Non-functional requirements
Accessibility
- Since this is developed as a PC client application, it can be accessed by only a single
user at a time.
Security
- A login module will be implemented for user login.
4.2 Resource requirements
This part is based on the technical research and investigation described in Chapter 6.
The hardware requirements section considers the minimum hardware needed to develop this
application and run the software being developed.
The software requirements section covers the software needed to implement this application.
4.2.1 Hardware requirements
Microsoft Visual Studio 2012 requires at least a 1.73 GHz CPU, 512 MB of RAM, a 64-bit
operating system and 512 MB of VGA memory. Emgu CV requires, as a minimum, Visual
Studio 2012 on a Windows operating system.
The following minimum hardware requirements are suggested. The computer must have the
following minimum capabilities (according to the MSDN corporate website, 2014):
- 32-bit (x86) or 64-bit (x64) processor
- Dual-core, 2.66 GHz or faster processor
- USB 2.0 bus dedicated to the Kinect
- 2 GB of RAM
- Graphics card that supports DirectX 9.0c
Recommended hardware for Kinect
[11] Desktop PC with a 3 GHz (or better) multi-core processor and a graphics card with
2 GB or more of dedicated on-board memory. [11] Kinect Fusion has been tested for
high-end scenarios on an NVIDIA GeForce GTX 680 and an AMD Radeon HD 7850.
Supported architectures
- 32-bit (x86) (Windows 7 and Windows 8 clients)
- 64-bit (x64)
Hardware requirements for Windows 7/Windows 8
- 2.2 GHz or faster processor
- 1 GB RAM (basic installation)
4.2.2 Software requirements
The following minimum software requirements are needed to implement the solution:
- Visual Studio 2010 or Visual Studio 2012 (the free Express editions can be
downloaded as Microsoft Visual Studio 2010 Express or Microsoft Visual Studio
2012 Express)
- .NET Framework 4 (installed with Visual Studio 2010) or .NET Framework 4.5
(installed with Visual Studio 2012)
- Emgu CV 2.4
- Windows XP, Windows 7 or Windows 8
CHAPTER 5
5 DEVELOPMENT METHODOLOGY
This document plays a vital role in the software development life cycle (SDLC) as it
describes the complete requirements of the system. It is meant for use by the developers and
will be the basis for the testing phase. Any changes made to the requirements in the future
will have to go through a formal change approval process.
There are different approaches to developing a particular solution; however, the approach
should be selected based on the characteristics and requirements of the project.
The characteristics of this project can be listed as follows:
- The project is short-term and will be delivered by an individual.
- The functional and non-functional requirements of the artefact can be categorised
into modules.
- Limitations must be managed.
- Functionalities and features may need to be changed or modified.
The next sections of this chapter briefly explain some development approaches and analyse
them to select the best approach for the project.
5.1 Spiral model
(Dr. Barry, 1988) The spiral model was defined by Barry Boehm in his 1988 article “A
Spiral Model of Software Development and Enhancement”. This model was not the first to
discuss iterative development, but it was the first to explain why the iteration matters. [14]
As originally envisioned, the iterations were typically 6 months to 2 years long. [14] Each
phase starts with a design goal and ends with the client reviewing the progress thus far. [14]
Analysis and engineering efforts are applied at each phase of the project, with an eye toward
the end goal of the project.
The steps of the spiral model can be generalized as follows (according to Tutorials Point,
2015):
- The new system requirements are defined in as much detail as possible. [12] This
usually involves interviewing a number of users representing all the external and
internal users and other aspects of the existing system.
- A preliminary design is created for the new system.
- A first prototype of the new system is constructed from the preliminary design. This
is usually a scaled-down system [12] and represents an approximation of the
characteristics of the final product.
5.1.1 Diagram of the spiral model
(According to Tutorials Point, 2015) [13]
Figure 29: Diagram of the spiral model
5.1.2 Advantages of the spiral model
- A high amount of risk analysis, so risk avoidance is enhanced
- Good for large and mission-critical projects
- Strong approval and documentation control
- Additional functionality can be added at a later date
5.1.3 Disadvantages of the spiral model
- Can be a costly model to use
- Does not work well for smaller projects
5.1.4 When to use the spiral model
- For medium- to high-risk projects
- When long-term project commitment is unwise because of potential changes to
economic priorities
- When users are unsure of their needs
5.2 Waterfall methodology
(According to WATERFALL MODEL, 2015) The waterfall model in software engineering
was formally introduced as an idea in a paper published by Winston Royce in 1970. The
waterfall model is a popular version of the systems development life cycle model for
software engineering. [14] Often considered the classic approach to the systems development
life cycle, the waterfall model describes a development method that is linear and sequential.
[15] Waterfall development has distinct goals for each phase of development.
Figure 30: Structure of the waterfall methodology
As said earlier, the waterfall model is structured in multiple phases, especially to help
software construction companies develop an organized system of construction. [16]
5.2.1 Advantages of waterfall
(According to Buzzle, 2014)
- Simple to implement
- The amount of resources required to implement this model is minimal [17]
- After every major stage of software coding, testing is done to check that the code
runs correctly [17]
5.2.2 Disadvantages of waterfall
- You cannot go back a step; if the design phase has gone wrong, things can get very
complicated in the implementation phase [17]
- Often, the client is not very clear about what exactly he wants from the software, and
any changes he mentions in between may cause a lot of confusion [17]
- Small changes or errors that arise in the completed software may cause a lot of
problems [17]
5.3 Agile methodology
(Rouse, 2014) “In software application development, agile software development (ASD) is a
methodology for the creative process that anticipates the need for flexibility and applies a
level of pragmatism into the delivery of the finished product.”
(WEBOPEDIA, 2012) The agile methodology focuses on agility and adaptability in
development. Instead of one time-consuming and rigid development schedule, agile models
involve multiple iterative development schedules that seek to improve the output with every
iteration. [18]
Each iteration goes through all the steps of design, coding and testing; the design is not set
in stone and is kept open to last-minute changes thanks to the iterative implementation. [18]
(According to My-project-management-expert, 2014)
Figure 31: Agile methodology life cycle
[Source: Buzzle, 2014]
5.3.1 Advantages of agile
- Fast-moving, cutting-edge developments can be quickly coded and tested using this
method, as a mistake can be easily rectified [19]
- Due to short sprints and constant feedback, it becomes easier to cope with changes [19]
- It is easier to deliver a quality product on schedule [19]
5.3.2 Disadvantages of agile
- If a task is not well defined, estimates of project cost and time will not be accurate [19]
- If the team members are not committed, the project will either never complete or fail
- It is suited to small, fast-moving projects, as it works well only with small teams [19]
5.4 Hybrid model
Hybrid is the combination of the models. In this type of models, it adopts suitable required
characteristics and procedure that meet the requirements of new project. Following diagram
shows hybrid model of waterfall and incremental software development.
5.4.1 Why we choose Hybrid Model
The three models that usually used for the vast majority of the projects are, waterfall model,
spiral and hybrid model. “Hybrid Model” which combines the features of the two common
development models: waterfall and spiral.
Figure 32: Hybrid Model Structure
We can use Hybrid Model to all small, medium and large systems and flexibility to start with
any phase either planning ,requirements collection and analysis, designing or programming
representation in accordance to system type and complexity.
(According to Mohammed, 2013)
Build
Test
Operatio
n and
maintenance Implement
ation
Physical design
Logical design
Requirements Analysis
Feasibility study
Initiation
(Author’s work Based on lecture slides-PPSP)
A few advantages of using the hybrid model:
- Easy to understand and implement
- Identifies deliverables and milestones
- Works well on mature products and weak teams
- Good for management control (plan, staff, track)
- Works well when quality is more important than cost or schedule
(Based on lecture slides)
Comparison between the hybrid model and the spiral model:
- Scope: the hybrid model works with small, medium and large projects; the spiral
model works with large and critical projects.
- Phases: the hybrid model identifies the point of end for each phase; the spiral model
has frequent and overlapping phases.
- Risk: the hybrid model focuses on risk management in the planning phase; the spiral
model focuses only on risk management.
Table 4: Comparison between the hybrid model and the spiral model
5.5 Justification of the selected method
Considering the project background, goals and facts, the author chose the hybrid
methodology. As the hybrid methodology combines the approaches of the waterfall and
spiral methodologies, it makes it possible to produce rich documentation for the research.
Face recognition is a very wide subject that demands good research to build a robust system,
so using the waterfall element the author can conduct thorough research on the related
subjects within a short period of time.
The waterfall methodology is normally suitable for developing programs that are already
stable and whose design does not need a major makeover. Although agile models are
applicable in every area of software development, they are best suited to web-based
applications; since waterfall is sequential, it suits this project.
The spiral model incorporates the iterative nature of software development and the
advantages of both the waterfall and prototyping models, but its main disadvantage is that it
requires good knowledge of, and expertise in, risk analysis, and in some scenarios it can be a
complicated model to understand.
Moreover, as the allocated development duration is short and busy with assignments and
exams, this project requires a rapid development approach. Since the hybrid methodology
combines aspects of two or more models, the author was able to make the best of the project
with it.
Choosing the right and suitable life-cycle methodology can be difficult in some cases; it
depends on how you think, but for a good approach it is essential to have one. A life-cycle
methodology covers not just development; it must also be appropriate for product support
and maintenance.
CHAPTER 6
6 TECHNICAL RESEARCH AND INVESTIGATION
Face recognition has been an object of much interest in recent years. It has many applications in a variety of fields, such as identification for law enforcement, authentication for banking and security system access, and personal identification, among others. There are different development platforms for face recognition, and selecting a suitable platform is very important. This chapter briefly analyses development platforms and APIs.
6.1 Developing platforms
6.1.1 Dot.net
.NET is a programming framework created by Microsoft that developers can use to create applications more easily (Microsoft, 2014). The .NET Framework is a technology that supports building and running the next generation of applications and XML web services [20]. The .NET platform is enriched with several powerful object-oriented languages, functional programming languages, a virtual machine environment, and code libraries.
The .Net Frameworks Architecture
(Based on Dot.Net Corporate web site )
Figure 33: Dot Net Frameworks
VB.NET, C#.NET and C++ are considered the main .NET languages, while WPF (Windows Presentation Foundation) and Silverlight can be considered cross-platform technologies that help to increase the functionality of programs. [22]

The Microsoft .NET Framework is a software framework which is installed on Microsoft Windows. Software developed using .NET requires the .NET Framework to be installed on the system in order to run. [22]

Once the .NET Framework is installed, it creates the required software environment so that it can provide the appropriate runtime requirements to the software.

The .NET Framework can be installed on Windows NT, 98, Me, 2000, XP, Vista, Windows 7, Windows 8 and Windows Server 2003, 2008 and 2012.

.NET Framework 3.0 and the latest version released for Windows 8 and Windows Server 2012 can be considered the most reliable and stable .NET Framework versions currently available for download.
6.1.2 .Net Languages
VB.NET, C# and Visual C++ are the most popular languages in the .NET Framework and in the software industry. J#, F# and IronPython can also be considered .NET languages.
6.1.2.1 Visual Basic.Net
(Robinson, 2001) Visual Basic .NET is an object-oriented language, which inherits from Visual Basic 6, and VB.NET supports many OOP concepts. VB.NET was created especially for VB developers and has a number of unique features and functions that make it a great choice for building applications. [18]

In VB.NET, the functionality in a base class is available to your other classes as is, or you can extend and modify it as necessary [18]. So VB.NET has some advantages over the other .NET languages.
6.1.2.2 Visual C++.Net
Visual C++ is a computer language that is used to create applications. C++ has its origin in its parent language, C [19]. Both C and C++ are widely used as a foundation for computer programming, and they are available on various types of computers. [19]

Visual C++ is the Microsoft implementation of the C++ language; it fully supports OOP concepts and is backed by a huge number of APIs.
6.1.2.3 Visual C#. Net
C#.NET is a popular programming language similar to VB.NET, but C# has more object-oriented capability than VB.NET. It supports a huge variety of APIs and will be immediately familiar to C and C++ programmers. C# combines the high productivity of Rapid Application Development (RAD) languages with the raw power of C++ [21].

Many programming environments are available; probably the most popular environment for writing C# is Microsoft Visual Studio [20].
6.2 Sun Java
Sun Java is a software platform built on the Java virtual machine and developed by Sun Microsystems. In the modern world, Java is considered a powerful development platform. One of the reasons for Java's popularity is that it is a free platform, because of which most developers adopted Java and developed Java-based platforms.
6.3 Java architecture
Java architecture provides a portable, robust, high-performance environment for development. Java provides portability by compiling byte code for the Java Virtual Machine, which is then interpreted on each platform by the run-time environment. Java is a dynamic system, able to load code when needed from a machine in the same room or across the planet.
6.4 Compilation of Code
When you compile the code, the Java compiler creates machine code (called byte code) for a hypothetical machine called the Java Virtual Machine (JVM). The JVM executes this byte code and was created to overcome the issue of portability: the code is written and compiled once and interpreted on all machines.
6.4.1 Characteristics of Java
(According to Vogella, 2015)

Platform independent: Java programs use the Java virtual machine as an abstraction and do not access the operating system directly [23].

Object-oriented language: except for the primitive data types, all elements in Java are objects [23].

Strongly typed programming language.

Automatic memory management: the program does not have direct access to the memory [23].
6.5 Image processing APIs
APIs are intermediate software components designed to let pieces of software communicate, exchange data and process it. An API consists of a set of libraries which provide the functionality to perform processing and communication between software.

(Roos, 2014) An API is a software-to-software interface, not a user interface. With APIs, applications talk to each other without any user knowledge or intervention [23].

APIs widely used in the image processing field are AForge.NET, OpenCv and EmguCV.
6.5.1 OpenCv
(According to OpenCv, 2014) OpenCv (Open Source Computer Vision Library) is an open source computer vision and machine learning software library [24]. OpenCv was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products [24].

OpenCv has over 2500 optimized algorithms for image processing.
6.5.2 EmguCV
(According to EmguCV, 2014) Emgu CV is a cross-platform .NET wrapper to the OpenCv image processing library [25], allowing OpenCv functions to be called from .NET-compatible languages such as C#, VB, VC++, IronPython etc. The wrapper can be compiled by Visual Studio [25].
One of the main disadvantages of OpenCv is that it is not easy to integrate with the .NET environment; as a solution, the EmguCV wrapper was developed to support integration between OpenCv and .NET.
6.5.3 Aforge.Net
(According to the Aforget.Net, 2014) AForge.NET is an open source C# framework
designed for developers and researchers in the fields of Computer Vision and Artificial
Intelligence - image processing, neural networks, genetic algorithms, fuzzy logic, machine
learning, robotics, etc. [26].
6.6 Developing approaches
During the research, the following development possibilities were identified.
6.6.1 Approach one
Extract the face region using EmguCV.

In this approach, as mentioned above, EmguCV is integrated with Visual Studio and a development language such as C# to exchange data; however, EmguCV does not provide face recognition using the Eigenfaces method.
6.6.2 Approach two
As mentioned under previous topics, implementing Haar features for object detection from scratch is beyond the project scope and duration; therefore this approach should be reconsidered before implementation.
6.6.3 Approach three
This approach uses EmguCV for face detection, since Emgu supports Haar-based face region detection.
6.7 Justification of selection of programming environment
When considering all of the above, the potential programming environments provide similar features and techniques with minor differences.

The .NET platform and the Java platform are competitive platforms offering similar features and performance. For Windows application development, the .NET environment is more compatible with Microsoft technologies, because Microsoft developed both .NET and Windows.

The .NET development environment, Visual Studio, provides a very advanced and complete IDE for implementation, whereas Java development requires a third-party development tool. Debugging in .NET is very effortless, so bugs can be fixed quicker and much more easily. The main difference between Java and the .NET Framework is that the .NET Framework lacks platform independence.
When considering API support, .NET is supported by many more image processing APIs than the Java environment. Considering the above facts, .NET was selected as the development platform and Visual Studio 2008 as the development software.

As mentioned before, C++ shows higher efficiency and has a lot of support from APIs including OpenCv, EmguCV and AForge.NET; therefore C++ gives a lot of support to image processing.

Visual C# and Visual Basic .NET are simpler languages than C++. In addition, both of them have less capability for implementing image processing; however, that does not mean that it is impossible to implement image processing applications using those languages.

Overall, though, C# in Visual Studio is easy to code in and to manage, and C# can be considered one of the most efficient languages. As mentioned above, C# has capabilities of both C++ and Java; because of that, C# can be identified as a flexible and advanced language suitable for developing this project.
CHAPTER 7
7 SYSTEM DESIGN
This chapter provides the logical design of the system using UML diagrams; use case diagrams, a class diagram and activity diagrams have been used to represent the logical design of the system. This design is based on the facts analysed and concluded during the research phase. During analysis, the focus is on what needs to be done, independent of how it is done. During design, decisions are made about how the problem will be solved, first at a high level, then at increasingly detailed levels.

System design is the first design stage, in which the basic approach to solving the problem is selected. During system design, the overall structure and style are decided. The system architecture is the overall organization of the system into components called subsystems. The architecture provides the context in which more detailed decisions are made in later design stages. By making high-level decisions that apply to the entire system, the system designer partitions the problem into subsystems so that further work can be done by several designers working independently on different subsystems.
7.1 Use case diagrams
This diagram was designed to capture the functional requirements and users of the system. There are six use cases which extend from the two main use cases. The following use case diagram shows an overall view of the functionality expected from the system.

The purpose of a use-case analysis is to present a graphical overview of the functionality provided by a system in terms of actors, their goals, and any dependencies between those use cases. The main purpose of a use case diagram is to show which system functions are performed for which actor; the roles of the actors in the system can also be depicted.

The use case diagram shows the position or context of each use case among the other use cases. As an organizing mechanism, a set of consistent, coherent use cases promotes a useful picture of system behavior and a common understanding between the admin, the user, the criminal and the development team.
(Diagram: actors System Admin and New User; use cases Login, Add Record, View Record, Add Training Images, Find the Match, Face Recognition.)
Figure 34 : Use Case Diagram
7.1.1 Use case diagram for admin
(Diagram: actor Admin, covering both current and new users; use cases Add New User and Face Recognition, with Authentication, Face Detection, Normalization and Face Extraction attached via <<include>> relationships and Face Recognition via an <<extend>> relationship.)
Figure 35: Use Case Diagram for admin
7.1.2 Use case diagram for Admin and Criminal
Figure 36 : Use Case for Admin and Criminal
7.2 Use case descriptions
7.2.1 Add new record
Use case: Add new record

Description:
- It has been identified that adding a new record to the database is a critical activity. This use case allows adding new records to the database and processing them for storage.

Actor:
- User

Assumptions:
- The operator has logged in to the system

Main flow:
- Fill the relevant forms and upload the images to the database
- Verify the submitted images
- Detect the face region
- Extract the face region
- Store the captured details in the database
- Display "Successfully Saves Your Data"

Alternative flow:
- The input image does not contain any faces
- Prompt an error message

Alternative flow:
- The input image contains faces but they are occluded so that it is not possible to detect the faces or face regions
- Display an error message

Table 5: Use Case Description -Add New User
7.2.2 View record
Use case: View record

Description:
- View a stored record

Actor:
- User

Assumptions:
- A record is available to show

Main flow:
- Ask to show / browse the records
- Select a record to view
- Show the record

Alternative flow:
- If the record is not available
- Show a "not available" message

Table 6 : Use Case Description For View Record
7.2.3 Train image space
Use case: Train image space

Description:
- It is necessary to create a face space. This feature allows the user to create a face space and store it in the database for future use.

Actor:
- User

Assumptions:
- There are enough images to create the face space

Main flow:
- The user enters the number of eigenvectors to create
- The user selects the image space type
- The system creates the image space
- The image space is stored in the database
- A success message is displayed

Alternative flow:
- If the database is not accessible
- Display a "Database Error" message

Table 7: Use Case Description of Train Image Space
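The "face space" in this use case follows the eigenfaces idea: the training images are mean-centred and the leading eigenvectors of the image set become the basis. A minimal numpy sketch of that training step (illustrative only; the function and parameter names are my own, not the project's EmguCV code):

```python
import numpy as np

def train_face_space(images, num_eigenvectors):
    """Build an eigenface space from flattened grayscale face images.

    images: (n_images, n_pixels) array, one flattened face per row.
    Returns (mean_face, eigenfaces) where eigenfaces has shape
    (num_eigenvectors, n_pixels) and its rows are orthonormal.
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # Rows of vt are the principal directions of the centred data
    # (the "eigenfaces"), ordered by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:num_eigenvectors]

def project(face, mean_face, eigenfaces):
    """Project one flattened face into the eigenface space."""
    return eigenfaces @ (face - mean_face)
```

The projection weights returned by `project` are what a recognition step would store per record; the number of eigenvectors entered by the user in the main flow corresponds to `num_eigenvectors` here.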
7.2.4 Find the matched image
Use case: Recognise face

Description:
- Compare the face regions of stored records to find a match

Actor:
- Operator

Assumptions:
- This will provide accurate results if there are relevant faces / face regions

Main flow:
- Submit the face region
- Display the face region
- Identify the submitted face region
- Verify the identified face region
- Match or compare it with the training images stored in the database
- Display the particular criminal's details after the recognition process is done

Alternative flow:
- If the submitted image is not in the database
- Provide an error message that the detected face is not saved or stored in the database

Alternative flow:
- If the submitted image is not a face region
- Provide an error message

Table 8 :Use Case Description -Find The Matched Image
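The match step in the main flow amounts to a nearest-neighbour search over the stored projections, with the "not in the database" alternative flow triggered by a distance threshold. A small stdlib-Python sketch under that assumption (the names and threshold are illustrative, not taken from the project):

```python
import math

def find_match(probe, stored, threshold):
    """probe: list of eigenspace weights for the submitted face.
    stored: list of weight lists, one per database record.
    Returns the index of the nearest record, or None when even the
    best distance exceeds the threshold (face treated as unknown).
    """
    best_index, best_dist = None, None
    for i, weights in enumerate(stored):
        d = math.dist(probe, weights)  # Euclidean distance
        if best_dist is None or d < best_dist:
            best_index, best_dist = i, d
    if best_dist is not None and best_dist <= threshold:
        return best_index
    return None
```

A `None` result would map to the error message in the first alternative flow; a valid index would map to displaying that criminal's details.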
7.3 System Overview
The following diagram illustrates the relationships between the components of the system. These relationships should be identified and understood properly before starting the implementation of the system.

Table 9 : System Architecture
7.4 E-R Diagrams (Process flow)
7.4.1 E-R diagram between administrator and the operator

Figure 37: E-R diagram between administrator and the operator

7.4.2 E-R diagram between operator and the criminal

Figure 38: E-R diagram between operator and the criminal
7.5 Activity Diagrams
(Tutorials Point, 2014) The activity diagram is another important diagram in UML for describing the dynamic aspects of a system [27].

An activity diagram is basically a flow chart representing the flow from one activity to another; an activity can be described as an operation of the system [27]. In other words, it shows the steps the system takes to achieve its goal. The following diagrams illustrate the functionality of the system.
7.5.1 Administrator Process
Figure 39: Activity Diagram For System Administrator
7.5.1.1 Add records
This is the flow of adding faces to the system. First it checks whether the inserted image is a face image or a non-face image; if it is a face image, it checks whether the face regions are extractable. If the submitted image does not contain any faces, it prompts a message and ends the process. If it is possible to extract a face region from the image, the detected face is extracted and saved in the database along with the criminal's information.
Figure 40: Activity Diagram For Add Records
7.5.2 Operator Process
Figure 41 : Activity Diagram for Operator Process
7.6 Sequence diagram
(Visual-Paradigm, 2015) The sequence diagram models the collaboration of objects based on a time sequence. It shows how the objects interact with each other in a particular scenario of a use case [28].
7.6.1 Administrator
Figure 42: Sequence Diagram for Administrator
7.6.2 System Operator
Figure 43 :Sequence Diagram for operator
7.7 Collaboration diagrams
(Smart-draw, 2015) A collaboration diagram describes interactions among objects in terms of
sequenced messages [29].
7.7.1 Administrator
Figure 44: Collaborative diagram for Administrator
7.7.2 Operator
Figure 45 : Collaboration diagram for operator
7.8 Component Diagram
(Smart Draw, 2015) A component diagram describes the organization of the physical
components in a system [30].
Figure 46: Component Diagram
7.9 Deployment Diagram
Figure 47: Deployment Diagram
7.10 Class Diagram
Figure 48: Class Diagram
CHAPTER 8
8 TESTING AND EVALUATION
System testing is performed after the implementation to ensure that the system functions according to the specification without errors and that the performance of the system is satisfactory. Testing allows measuring and evaluating the system and finding its bugs, so that the quality of the system can be improved.

As the system implementation was carried out in four increments, it was possible to test the developed modules at the end of each increment. Once all the modules were completed and integrated, overall system testing was done.
8.1 Test scope
The testing phase involves testing the developed system using various data. Preparation of the test data plays a vital role in system testing. After preparing the test data, the system under study was tested using those data; errors found while testing were corrected using the following testing steps, and the corrections were noted for future use. Thus, a series of tests was performed on the proposed system before the system was ready for deployment. The system has been tested in the following areas.
8.1.1 Unit testing
Unit testing was carried out at each increment to determine that each module was functioning as required, so that errors could be identified and rectified prior to the integration of the modules into the system.

The following core units were identified to be tested:

Pre-processing module
- Image normalization unit

Face region extraction module
- Face detection
- Face region extraction

Verification module
- Face and face features verification unit
8.1.2 Integration testing
Once a module developed during an increment had been tested, it was integrated with the other modules of the system. The following modules were checked during integration testing:

Face region extraction
Matching module
Verification module
8.1.3 Overall system testing
At the end of all four increments of system implementation, unit testing and integration testing, overall system testing was done to ensure that the final system functioned without errors and according to the specification.
8.2 Testing Strategy: Unit Testing
Each module of the system was unit tested at the end of its increment, with the entrance and exit criteria identified below. All independent paths were exercised to ensure that all statements in the module were executed at least once, and all error-handling paths were tested.

Each unit was thoroughly tested to check whether it might fail in any possible situation. This testing was carried out during the programming stage itself; at the end of this testing phase each module was found to be working satisfactorily with regard to the expected output from the module.

Entrance criteria: the unit to be tested must be coded.

Exit criteria: the unit should be logically correct, and the bugs identified should have been corrected.
8.2.1 Test case 01- Video Preview of main window
Test Type: Unit test
Test Unit: Video preview of main window
Test Plan: Connect a webcam to the PC and run the program
Success Criteria: A connection should be established to the webcam and the input video should be displayed in the main window
Input: Video from a webcam
Output (successful): Video displayed in the main window
Output (unsuccessful): If a connection to the webcam cannot be established, display an error message
Table 10: Test case 01: Video Preview Of Main Window
8.3 Testing Strategy: Integration Testing
Integration testing was carried out at the end of each increment to determine that the integrated modules were functioning according to the specification, without errors, as required.

Data can be lost across an interface; one module can have an adverse effect on another; sub-functions, when combined, may not produce the desired major function; and global data structures can present problems. Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfaces. All modules are combined in this testing step and the entire program is tested as a whole. Each module is integrated and tested separately, and later all modules are tested together to ensure that the system as a whole works well without any errors.

Entrance Criteria: All the units developed in an increment are integrated.

Exit Criteria: The integrated units perform the expected behaviour without errors.
8.3.1 Test Case: 01 Face detection and face extraction
Test type: Integration test
Test unit / functionality: Face detection and face extraction
Description: In this case the system checks whether the input image contains a face or not. If it does, the region of the face is detected.
Test plan:
- Browse for the image on the disk
- Click on the detection button to detect the faces
- After detection, all the faces are extracted from the image
- If it is a group image, all the faces are extracted, and clicking on the next and previous buttons lets the user view all the extracted faces
Input: An image in JPEG or PNG format
Output (successful): Display a verification success message and the number of faces detected
Output (unsuccessful): Display an error message
Table 11: Test Case: 01 Face detection and face extraction
8.4 Face detection and counting unit
This section consists of test cases which help to test human face detection and face counting.

Test case 01 - Test name: Count human faces

Why is this test performed:
This test case ensures that the system counts human faces if any are present.

Input:
An image (a group of faces)

Accepted output:
Successfully counts at least 2 faces out of 4

Actual output:
The system successfully counts the faces
- Given faces: 4
- Successfully counted: 2
- Unsuccessfully counted: 2
Table 12 : Test case 01: Count Human Faces
8.5 Testing Strategy: Overall System Testing
Overall system testing was done after integration testing to ensure that the modules integrated within the system increments functioned as specified, as a final product.
8.5.1 Test case: Overall System Testing
Test Type: Overall system testing
Rationale: This ensures that the overall system functions according to the specification and without errors, within some limitations.
Test plan: Adding a new record
Description: This checks whether the input is a valid image or not. If it is, the system detects the image features and saves them in the database along with the criminal ID.
Test plan:
- Click on the start recognition button in the main window
- Check the relevant output
- Check the database values to verify
Success criteria:
- Display the extracted face region
Output (unsuccessful):
- Display an error message
Number of entered records: 50
Number of successful detections: 41
Table 13: Test case: Overall System Testing
Test Summary
8.5.2 Conclusion of overall system testing
Overall system testing was completed with a majority of successes. Here the face detection module, face detection and face extraction were tested as functional requirements.

Integration test case design methods can also make use of random and partition tests. In addition, scenario-based testing and tests derived from behavioural models can be used to test a class and its collaborators. A test sequence tracks the flow of operations across class collaborations.

OO system validation testing is black-box oriented and can be accomplished by applying the same black-box methods known for conventional software. However, scenario-based testing dominates the validation of OO systems, making the use case a primary driver for validation testing.

We mainly concentrated on a scenario-based testing strategy; some of the test cases for scenario-based testing are given above.
CHAPTER 9
9 IMPLEMENTATION
This chapter focuses on the methods that were used to develop the proposed system. Since the proposed system requires computer vision to perform the face detection aspects of the project, a computer vision library was also used in the implementation to help the development process.
9.1 Implementation face detection
Face detection is one of the main components of this system and is as important as all the other components.

First, an image is loaded from the computer into a picture box, and a filter checks whether the image is in a supported format such as *.jpg, *.bmp or *.gif. The image is then converted to grey-scale as a Bitmap and sent to the cascade object, so that the image can be template matched to see if there is any face in it. This is done using the code shown below.
Figure 49: Code for Image Loading
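The loading step just described (format filter, then grey-scale conversion) can be sketched as follows. This is an illustrative Python stand-in for the C# code shown in the screenshot above, and the luminosity weights 0.299/0.587/0.114 are the common BT.601 values, assumed rather than taken from the project code:

```python
# Extensions accepted by the filter described above (assumed list).
ALLOWED_EXTENSIONS = (".jpg", ".bmp", ".gif")

def is_supported(filename):
    """Mimic the file-type filter applied before loading an image."""
    return filename.lower().endswith(ALLOWED_EXTENSIONS)

def to_grayscale(rgb_rows):
    """Convert rows of (r, g, b) pixels (0-255) to grey-scale values
    using the standard luminosity weights, as a stand-in for the
    Bitmap conversion done before cascade detection."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_rows]
```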
After loading, the image is sent to the cascade object to detect the faces.
Figure 50 : Code for cascade object
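For background on what the cascade object is doing internally: Haar-like features are evaluated in constant time using an integral image. The following pure-Python sketch shows that trick; it is illustrative only, not the EmguCV implementation, and the function names are my own:

```python
def integral_image(img):
    """img: list of rows of pixel values. Entry (y, x) of the result
    holds the sum of all pixels at or above row y and at or left of
    column x (inclusive)."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of the rectangle img[top:top+height][left:left+width]
    using at most four table lookups."""
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

def two_rect_feature(ii, top, left, height, width):
    """A two-rectangle Haar-like feature: left half minus right half
    (width must be even)."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))
```

A cascade classifier thresholds many such feature values at successive stages, rejecting non-face windows early; the constant-time rectangle sums are what make scanning every window position and scale feasible.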
The result is stored in an array, which is checked to see whether any face areas were found. If a face area is found, that area is selected and a copy of it is extracted. The code for storing the results in an array is given below.
Figure 51: Save detected faces in an array
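The array of detected rectangles then drives the extraction step. A pure-Python sketch of cropping each detected region out of the image (the `(x, y, w, h)` rectangle format is the usual cascade-detector convention, assumed here rather than taken from the project code):

```python
def extract_faces(image, detections):
    """image: list of pixel rows; detections: list of (x, y, w, h)
    rectangles as returned by a cascade detector. Returns one cropped
    sub-image per detection, mirroring the copy-and-extract step."""
    faces = []
    for (x, y, w, h) in detections:
        faces.append([row[x:x + w] for row in image[y:y + h]])
    return faces
```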
The output looks like this:
Figure 52: Final Result of face detection
9.2 Implementation face extraction
Once faces have been detected in an input image, each single face is extracted and viewed on the Windows form in a picture box, as below:
Figure 53: Extracted Faces
9.3 Implementing Training Set Manager
A new user interface control was added to manage the set of training images. Here the operator can save criminal images, with the cropped criminal face, in an XML file, so that probe images can later be matched against this set of training images.

Here the user can detect faces by giving an image to the system. Sometimes the system detects faces incorrectly; in such cases the user can select the face manually.
CHAPTER 10
10 CRITICAL EVALUATION
A critical evaluation was performed on the completion of this project, and it was determined that the project has met the functionality expected of it as stated at the initiation of the project. This section also reviews the project objectives, the problems encountered and the solutions adopted in achieving those objectives, and the knowledge and experience gained through the process of this project.
During the primary research, information about face detection, face recognition and facial feature extraction techniques was studied; based on that, technical research was carried out into the development platforms and development strategies. The secondary research was done in the next phase, based on the requirements identification and analysis. In the secondary research, current issues and challenges in creating a successful face recognition system were identified, and a few widely used solutions were discussed and compared.
This project is basically focused on identifying persons using their face region and facial features. The importance of using face regions for recognition was identified for numerous reasons. At the end of the research, all the collected details were analysed, and it was finally concluded to use the Viola-Jones Haar-feature cascade classifier as the face region detection technique.
Identifying human faces from their images has been successfully implemented in certain areas, and it can be applied to surveillance systems and access control systems in various environments. The accuracy of computer-based face recognition systems has been criticized, as the failure rates of such systems are quite high compared to other biometric identification systems.
After the research came the analysis part. The main functionalities of the system were identified and the requirements specification was made. The requirements were divided into two parts: functional and non-functional requirements. The initial system was described using activity diagrams and sequence diagrams.
The next phase was implementation. The system was implemented using Visual Studio C# together with other libraries and APIs that support the Windows operating system. The OpenCV computer vision library is used to implement the face detection; the library is wrapped for C# using the Emgu CV wrapper.
While developing the proposed system there were many obstacles. Even though OpenCV was used as the computer vision library, it is not directly compatible with Visual Studio C#. This was a major setback when implementing the system.
The critical appraisal was performed after the system was implemented. The developed system detects most human faces and extracts the faces from the image with a high accuracy rate. In conclusion of the critical evaluation, it can be stated that the objectives set at the initiation of the project have been achieved satisfactorily, and that knowledge of face recognition techniques and skills in C# programming were greatly improved.
10.1 Limitations
The following limitations have been identified when considering the overall functionality of the system:
Detecting non-faces
- Sometimes the system detects non-faces as well as faces, and the feature extraction step likewise extracts both faces and non-faces.
Make-up and masks
- Make-up and masks are another challenge in face recognition; it is hoped to develop this system to identify human faces even when the person wears a mask, sunglasses or the like.
Limited detection angle
- The face detection method used can still only detect faces up to a certain angle.
10.2 Suggestions for Future Enhancements
The following future enhancements are proposed in order to overcome the limitations of functionality identified above and to improve the application in general.
a) The future enhancements of this project include the following:
The criminal photos may be of any size.
By selecting any one cropped part of the criminal, the full image of the criminal, along with details, can be retrieved from Google Images and the other related images.
A new face constructed from different cropped parts can be saved.
b) Use an algorithm to measure facial expressions and angle
A robust algorithm such as a neural network should be used to improve the system's accuracy under the above conditions.
c) Improve as a security application
As the current system is a generic face recognition application, it can be improved into a security application that can be applied to a wide variety of areas, including access control for PCs, airport surveillance, private surveillance, criminal identification and security in ATM transactions. In addition, face recognition is moving towards the next-generation smart environment, where computers are designed to interact more like humans.
CHAPTER 11
11 Conclusion
In conclusion of this project, it can be stated that the project has effectively achieved the objectives defined at its initiation. The scope of the project was to implement a criminal identification system that can compare witness images with the training images. The scope of the project proved feasible, since the requirements of the project have been successfully met.
For the successful implementation of the system, wide-ranging research had to be done alongside the busy timetable of our institute. Even though I had hoped to implement this system in full, because of the lack of time I was unable to implement it as successfully as intended. However, I have clearly documented the process of the system.
It was not only academic achievement that was gained; the project process also helped in developing many personal skills, such as time and resource management. Through the process of this project, a better understanding was also gained of face recognition techniques and of the many APIs available in the computer vision industry.
12 Bibliography
1) Kumar, M. and Saxena, R. (2015). 'Algorithm and Technique on Various Edge Detection: A Survey'.
2) 'Face Recognition Methods & Applications'. [Online]. Available: http://www.ijetae.com/files/Volume4Issue7/IJETAE_0714_19.pdf [Accessed 23 October 2014].
3) Brooks, A. 'Face Recognition: Eigenfaces and Fisherfaces Performance Across Pose'. [Online]. Available: http://dailyburrito.com/projects/facerecog/facerecreport.html [Accessed 23 October 2014].
4) International Journal of Computer Science and Information Security (IJCSIS), vol. 8, no. 1, 2010. [Online]. Available: http://sites.google.com/site/ijcsis/
5) Viola, P. and Jones, M. (2004). 'Robust Real-Time Face Detection', International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154.
6) Yao, Z. and Li, H. (2006). 'Tracking a Detected Face with Dynamic Programming', Image and Vision Computing, vol. 24, no. 6, pp. 573-580.
7) Padilla, R., Costa Filho, C. F. F. and Costa, M. G. F. (2012). 'Evaluation of Haar Cascade Classifiers Designed for Face Detection', World Academy of Science, Engineering and Technology, vol. 64.
8) Face Detection and Verification Using Local Binary Patterns. Doctoral thesis, 2006.
9) Gupta, V. and Sharma, D. (2014). 'A Study of Various Face Detection Methods', International Journal of Advanced Research in Computer and Communication Engineering, 3rd ed. [Accessed 17 February 2015].
10) Meynet, J. (2003). Fast Face Detection Using AdaBoost.
11) Kohonen, T. (1989). Self-Organization and Associative Memory. Springer-Verlag, Berlin.
12) Rowley, H. A., Baluja, S. and Kanade, T. (1998). Neural Network-Based Face Detection. School of Computer Science, Carnegie Mellon University.
13) Marqués, I. (2010). Face Recognition Algorithms.
14) Boehm, B. and Hansen, W. J. (1988). The Spiral Model as a Tool for Evolutionary Acquisition. University of Southern California.
16) Belhumeur, P. N., Hespanha, J. P. and Kriegman, D. J. (1997). 'Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720.
17) Munassar, N. M. A. and Govardhan, A. (2013). 'Hybrid Model for Software Development Process'. 1st ed.
18) Robinson, E. (2001). A Programmer's Guide to Visual Basic .NET. 1st ed. [Online]. [Accessed 24 February 2015].
19) Functionx.com (2015). 'C++ for MFC - Lesson 01: Introduction to Microsoft Visual C++'. [Online]. [Accessed 24 February 2015].
20) Functionx.com (2015). 'C# 4.0 - Lesson 01: Introduction to Microsoft Visual Studio'. [Online]. [Accessed 24 February 2015].
21) Msdn.microsoft.com (2015). 'Introduction (C#)'. [Online]. [Accessed 24 February 2015].
13 References
1) Msdn.microsoft.com (2015). 'System Requirements'. [Online]. Available: https://msdn.microsoft.com/en-us/library/hh855359.aspx [Accessed 20 February 2015].
2) Tutorialspoint.com (2015). 'SDLC Spiral Model'. [Online]. [Accessed 20 February 2015].
3) Istqbexamcertification.com (2015). 'What is Spiral Model - Advantages, Disadvantages and When to Use It?'. [Online]. [Accessed 20 February 2015].
4) 'Waterfall Model' (2015). [Online]. [Accessed 20 February 2015].
5) McCormick, M. (2012). Waterfall Methodology. 1st ed.
6) Alam, S. (2015). 'Waterfall Model Advantages and Disadvantages'. Buzzle. [Online]. [Accessed 20 February 2015].
7) Webopedia.com (2015). 'What is Agile Software Development? A Definition from Webopedia.com'. [Online]. [Accessed 20 February 2015].
8) My-project-management-expert.com (2015). 'The Advantages and Disadvantages of Agile Scrum Software Development'. [Online]. [Accessed 20 February 2015].
9) Msdn.microsoft.com (2015). 'Overview of the .NET Framework'. [Online]. [Accessed 23 February 2015].
10) Askvg.com (2015). 'What is Microsoft .NET Framework? Download Links for All Versions Inside - AskVG'. [Online]. [Accessed 24 February 2015].
11) Vogella.com (2015). 'Characteristics of Java'. [Online]. [Accessed 24 February 2015].
23) Roos, D. (2015). 'What is an API?'. HowStuffWorks. [Online]. [Accessed 24 February 2015].
24) Opencv.org (2015). 'ABOUT | OpenCV'. [Online]. [Accessed 24 February 2015].
25) Emgu.com (2015). 'Emgu CV: OpenCV in .NET (C#, VB, C++ and more)'. [Online]. [Accessed 24 February 2015].
26) Aforgenet.com (2015). 'AForge.NET: Framework'. [Online]. [Accessed 24 February 2015].
27) Tutorialspoint.com (2015). 'UML - Activity Diagrams'. [Online]. [Accessed 25 February 2015].
28) Visual-paradigm.com (2015). 'Sequence Diagram - UML Diagrams - Unified Modeling Language Tool'. [Online]. [Accessed 25 February 2015].
29) Smartdraw.com (2015). 'UML Collaboration Diagrams - Unified Modeling Language UML - SmartDraw Tutorials'. [Online]. [Accessed 25 February 2015].
30) Smartdraw.com (2015). 'UML Component Diagrams - Unified Modeling Language UML - SmartDraw Tutorials'. [Online]. [Accessed 25 February 2015].
31) Owlnet.rice.edu (2015). 'Image Processing: Edge Detection'. [Online]. [Accessed 26 February 2015].
14 APPENDICES
14.1 Gantt chart
14.2 Testing results
14.2.1 Test case 1: Video Preview and face detection
14.2.2 System training
The system records 10 faces as training images.
The 10 training faces are saved in an XML file.
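The save step can be sketched as follows. The real system persists the training faces from C#/Emgu CV; this is a minimal Python stand-in, and the element names (TrainingSet, Face, a space-separated pixel list) are illustrative assumptions, not the system's actual XML schema:

```python
# Sketch of persisting captured training faces to an XML file and reading
# them back. Faces are stored as flattened grayscale pixel vectors.
import xml.etree.ElementTree as ET

def save_training_set(faces, path):
    """faces: list of (label, flat_pixel_list) pairs for each captured face."""
    root = ET.Element("TrainingSet")
    for label, pixels in faces:
        face = ET.SubElement(root, "Face", label=label)
        face.text = " ".join(str(p) for p in pixels)
    ET.ElementTree(root).write(path)

def load_training_set(path):
    """Return the list of (label, pixel_list) pairs stored in the file."""
    root = ET.parse(path).getroot()
    return [(f.get("label"), [int(p) for p in f.text.split()])
            for f in root.findall("Face")]

# Ten captured faces would be saved like so (tiny 2x2 "faces" for brevity):
faces = [("person1", [10, 20, 30, 40]), ("person2", [5, 5, 5, 5])]
save_training_set(faces, "training_faces.xml")
print(load_training_set("training_faces.xml"))
```

Round-tripping through XML like this keeps the training set human-readable and easy to reload when the recogniser is retrained.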
14.2.3 Image-based face detection
14.2.4 Detecting multiple faces and extracting the faces from an image
14.2.5 Manual face detection
14.3 End Results:
14.3.1 Test case 01: Face Detection
Input image | Detected area | Detected faces | Result
(input image and detected area screenshots omitted)
Test 01 | 1 | success
Test 02 | 1 | success
Test 03 | 0 | fail
Test 04 | 1 | success
Test 05 | 0 | fail
Test 06 | 1 | success
Test 07 | 1 | success
Test 08 | 0 | fail
Test 09 | 1 | success
Test 10 | 1 | success

Times tested: 10
Successful results: 7
Failed results: 3
Percentage of success: 70%
Percentage of failure: 30%
14.3.2 Test case 2: Face Verification
Input image | Threshold value | Face verified | Result
(input image screenshots omitted)
4757.342 | True | success
4260.137 | True | success
4659.404 | True | success
3886.541 | True | success
- | False | fail
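The decision behind the table above can be sketched as a distance-to-threshold check: the recogniser returns a distance between the probe face and its closest training face, and the face is verified when the distance stays under a cutoff. This Python sketch uses plain Euclidean distance, and the cutoff value (5000) is an illustrative assumption, not the system's actual setting:

```python
# Sketch of threshold-based face verification over flattened face vectors.
import math

def euclidean_distance(a, b):
    """Distance between two flattened face vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe, training_face, threshold=5000.0):
    """Return (distance, verified): verified when distance < threshold."""
    d = euclidean_distance(probe, training_face)
    return d, d < threshold

enrolled = [100, 150, 90, 200]
probe = [110, 140, 95, 190]
print(verify(probe, enrolled))  # small distance -> verified
```

Raising the threshold admits more faces (fewer false rejections, more false acceptances); lowering it does the opposite, which is the trade-off the test values above explore.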
14.3.3 Test case 3: Face matching
Test 01:
Input image
Matching results
Time duration: 30 seconds
Number of tests in 30 seconds: 5
Successfully matched with 5 training images
Result: success
Test 02:
Input image
Matching results:
Time duration: 30 seconds
Number of tests in 30 seconds: 5
Successfully matched with 5 training images
Result: success
Test 03:
Input image
Matching results:
Time duration: 30 seconds
Number of tests in 30 seconds: 5
Successfully matched with 4 training images
Result: success
Test 04:
Input image
Matching results:
Time duration: 30 seconds
Number of tests in 30 seconds: 5
Not matched with any training images
Result: success, because the system was not trained with that face
Display message:
Test 05:
Input image
Matching results:
Time duration: 30 seconds
Number of tests in 30 seconds: 5
Successfully matched with 2 training images
Result: success

Test 06:
Input image
Matching results:
Time duration: 30 seconds
Number of tests in 30 seconds: 5
Successfully matched with 2 training images
Result: success

Test 07:
Input image
Matching results:
Time duration: 30 seconds
Number of tests in 30 seconds: 5
Not matched with any training images
Result: success, because the system was not trained with that face
Display message:

Test 08:
Input image
Matching results:
Time duration: 30 seconds
Number of tests in 30 seconds: 5
Successfully matched with 3 training images
Result: success
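The matching step exercised in the tests above can be sketched as a nearest-neighbour search: the probe face is compared against every stored training image, and the best (smallest-distance) match is reported if it falls under a threshold; otherwise no match is reported. This pure-Python sketch stands in for the Emgu CV recogniser used by the actual system; the distance measure, names and threshold are illustrative assumptions:

```python
# Sketch of nearest-neighbour face matching over a stored training set.
import math

def match_face(probe, training, threshold=5000.0):
    """training: list of (name, vector) pairs. Returns best name or None."""
    best_name, best_dist = None, float("inf")
    for name, vec in training:
        d = math.sqrt(sum((p - q) ** 2 for p, q in zip(probe, vec)))
        if d < best_dist:
            best_name, best_dist = name, d
    # Reject the best match if it is still too far away (unknown face).
    return best_name if best_dist < threshold else None

training = [("suspect_a", [10, 10, 10]), ("suspect_b", [200, 200, 200])]
print(match_face([12, 9, 11], training))         # close to suspect_a
print(match_face([9000, 9000, 9000], training))  # no match -> None
```

The "not matched with any training images" outcomes in Tests 04 and 07 correspond to the rejection branch: every stored face lies beyond the threshold, so the system correctly reports no match.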
15 User Manual
The user has to log in to the system by entering the correct username and password. If the user provides the correct username and password, the user is logged in to the system and a confirmation message pops up to inform the user of the successful login. If not, an error message is displayed.
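The login check described above can be sketched as follows. The user store, credentials and hashing scheme here are illustrative assumptions about the system, not its actual implementation; the sketch also shows the usual practice of storing a salted hash rather than the plain-text password:

```python
# Sketch of a login check: compare the supplied credentials against a stored
# salted password hash; True -> show confirmation, False -> show error.
import hashlib
import os

def hash_password(password, salt):
    """Salted SHA-256 digest of the password."""
    return hashlib.sha256(salt + password.encode()).hexdigest()

def make_user(username, password):
    """Create a stored user record with a fresh random salt."""
    salt = os.urandom(16)
    return {"username": username, "salt": salt,
            "hash": hash_password(password, salt)}

def login(user, username, password):
    """Return True (confirmation pop-up) or False (error message)."""
    return (user["username"] == username and
            hash_password(password, user["salt"]) == user["hash"])

operator = make_user("operator", "s3cret")
print(login(operator, "operator", "s3cret"))  # True  -> confirmation pop-up
print(login(operator, "operator", "wrong"))   # False -> error message
```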
The user can then see the Home page, shown below, and the operator can select an option by clicking the navigation buttons.
Here the system captures 10 training images via a webcam. The user can then save these images in an XML file, as shown below.