Android Gait Recognition System
Guilherme Miguel Gonçalves Neves
Thesis to obtain the Master of Science Degree in
Electrical and Computer Engineering
Examination Committee
Chairperson: Prof. Fernando Duarte Nunes
Supervisor: Prof. Paulo Luís Serras Lobato Correia
Members of the Committee: Prof. João Pedro Oliveira
September 2013
Acknowledgements
This thesis marks the completion of a phase of great importance in my life, and I want to express my gratitude to the people who undoubtedly contributed to the success of this work.
First of all, I would like to thank Professor Paulo Lobato Correia for his assistance, ideas and support. Without his guidance and availability this work would not have been possible.
I thank Nuno Moço for his assistance and availability to help me with any problem in the Android programming area.
I want to express my gratitude to everyone who helped me build the database used in this thesis. Without their help this work would not have been successful.
I would also like to thank my family and my friend Mariana Carvalho da Silva for their support, understanding and patience, and for testing the developed application whenever I needed.
Abstract
The increasing popularity of mobile devices such as smartphones has changed the way people access and manage data. With the increased capabilities of mobile phones, several services are now available on the go: banking, e-commerce, remote work, internet access and entertainment. Consequently, more confidential data is now stored on these devices, which increases the risk of information and identity theft through unauthorized access. The need for authentication methods more secure than the traditional password or PIN is now greater than ever. Biometric-based authentication methods are generally more secure and are being increasingly adopted to enhance the security of data and services. They perform individual recognition based on biometric traits, which are safer because they cannot be lost or easily stolen.
In this dissertation a gait biometric recognition system is proposed and implemented on a smartphone running the Android operating system. The developed application is a native mobile application: all stages of the process are executed on the device, from data acquisition to the matching phase, which uses the Dynamic Time Warping (DTW) technique. The sensor used is a 3-axis accelerometer, so three different signals are captured. In this thesis two recognition methods were assessed: one combines the three acceleration components into the vector modulus, while the other processes the three signals independently. The former achieved promising results (EER = 2.50%, FAR = 2.05%, FRR = 5.00%), but the latter turned out to be impracticable, since its FAR and FRR values were too high.
Keywords
Biometrics, gait, Android, data security, Dynamic Time Warping, mobile device.
Resumo
The growing popularity of mobile devices such as smartphones is changing the way people access and manage their data and information. With the increased capabilities of these devices, several services are now available remotely, such as online banking and e-commerce. Consequently, more and more confidential information is stored on the device, which increases the risk of information or identity theft. Biometrics-based systems can provide more secure authentication methods than traditional solutions such as the typical password or PIN. These systems allow individual recognition in a more practical and secure way, since biometric characteristics cannot be forgotten and are difficult to steal. In this dissertation, a biometric system based on the user's way of walking (gait) is proposed and implemented on a smartphone using the Android operating system. The developed application is a mobile application in which all stages of the process are executed on the device, from signal acquisition, through a 3-axis accelerometer, to the matching stage, which uses the Dynamic Time Warping (DTW) technique. Two recognition methods were studied: the first combines the three signals into the acceleration modulus using the vector norm, and the second analyses the signals independently. The first method showed promising results (EER = 2.50%, FAR = 2.05%, FRR = 5.00%), but the second proved impracticable because its FAR and FRR values were too high.
Palavras-chave
Biometrics, gait, Android, Dynamic Time Warping, data security, mobile devices.
Table of Contents
1. Introduction
1.1. The Mobile Phone Market
1.2. Objectives of the Dissertation
1.3. Main Contributions
1.4. Structure of This Dissertation
2. Biometrics in the Mobile Environment
2.1. Biometrics: Overview
2.1.1. Biometric Recognition System
2.2. Biometric Systems in Mobile Devices
2.2.1. Commercial systems
2.2.2. Current Investigation
3. State of the art
3.1. Gait Recognition Technologies
3.1.1. Machine vision
3.1.2. Floor sensors
3.1.3. Wearable sensors
3.2. Gait Recognition Systems
3.2.1. Data acquisition
3.2.2. Pre-processing
3.2.3. Feature extraction
3.2.4. Matching
4. Selection of the Target Platform
4.1. Operating System
4.2. Target Mobile Device
4.3. Software Development Tools
5. Proposed Biometric Verification System
5.1. System Architecture
5.1.1. Introduction
5.2. Implementation Details
5.2.1. Data Acquisition
5.2.2. Pre-processing
5.2.3. Feature Extraction
5.2.4. Matching Stage
5.2.5. Dynamic Time Warping
5.2.6. GPS & Email
6. Experimental Results
6.1. Experimental Setup
6.2. Performance Evaluation
6.3. System's performance
6.3.1. Sample rate
6.3.2. Acceleration modulus (mean method)
6.3.3. Acceleration modulus (DTW method)
6.3.4. Independent acceleration components
6.3.5. Maximum sample rate in different devices
6.3.6. Discussion
6.3.7. Potential Applications
7. Conclusions
8. References
List of Figures
Figure 1 - Smartphone market penetration in the U.S. [2].
Figure 2 - Global smartphone operating system market share (by units shipped), 2005 vs 2012 [4].
Figure 3 - Most common human characteristics used as biometric traits.
Figure 4 - Enrollment stage diagram.
Figure 5 - Recognition stage diagram.
Figure 6 - Modules of a typical biometric recognition system.
Figure 7 - FAR and FRR depending on T [9].
Figure 8 - DET and ROC curves example [10].
Figure 9 - Android 4.0 facial recognition set up [11].
Figure 10 - Siemens SL10 [13].
Figure 11 - Motorola Atrix [14].
Figure 12 - From left to right: photo without flash; photo with flash; ROI [15].
Figure 13 - Feature extraction stage using the OLOF method [18].
Figure 14 - The three axes in which acceleration is measured and phone position when collecting data [19].
Figure 15 - Data obtained from the three axes [19].
Figure 16 - A complete gait cycle showing the eight gait phases [24].
Figure 17 - The bottom row shows silhouette frames; the raw image corresponding to each silhouette is shown on the top row [35].
Figure 18 - Typical gait cycle over the sensor mat: (a) recognized footsteps; (b) time spent at each location (the higher the pixel intensity, the longer the person stayed at that position); (c) the profile of the 4 footsteps on the sensor mat, showing heel and toe strikes [37].
Figure 19 - Histogram similarity method example [45].
Figure 20 - CMS curve comparing DTW, correlation and data distribution statistic (histogram) [41].
Figure 21 - The Samsung Galaxy Ace Plus smartphone [47].
Figure 22 - System architecture.
Figure 23 - An example of a good average cycle.
Figure 24 - The three accelerometer axes [48].
Figure 25 - ax, ay, az components and the corresponding acceleration modulus.
Figure 26 - Pre-processing block diagram.
Figure 27 - First rotational transformation [49].
Figure 28 - Second rotational transformation [49].
Figure 29 - Linear interpolation between the two red points.
Figure 30 - Example of the acceleration modulus captured by the smartphone while walking along a corridor.
Figure 31 - Weight differences between MA and WMA.
Figure 32 - MA and WMA results example.
Figure 33 - Feature extraction block diagram.
Figure 34 - Step detection example.
Figure 35 - Irregular cycle skipping example.
Figure 36 - Mean average cycle computation example.
Figure 37 - Expected M-like shaped average cycle result.
Figure 38 - Matching example, with a legitimate user and an impostor user attempt.
Figure 39 - Participants' gender and age distribution.
Figure 40 - ROC curves comparison.
Figure 41 - FAR and FRR at different operating thresholds.
Figure 42 - Obtained ROC curve.
Figure 43 - FAR and FRR at different operating thresholds.
Figure 44 - Obtained ROC curve.
Figure 45 - FAR and FRR at different operating thresholds for component ax.
Figure 46 - FAR and FRR at different operating thresholds for component ay.
Figure 47 - FAR and FRR at different operating thresholds for component az.
Figure 48 - ROC curves of the different acceleration components.
Figure 49 - S7500 versus HTC Wildfire.
Figure 50 - S7500 versus Sapo A5.
Figure 51 - S7500 versus S5830.
Figure 52 - S7500 versus I8160.
Figure 53 - ROC curve comparison between the acceleration modulus and each of the three components.
Figure 54 - FRR (%) against FAR (%) to obtain EER.
List of Tables
Table 1 - A simple example showing a possible transformation between the two sequences; λ indicates an empty symbol.
Table 2 - Example's cost matrix; the optimal path is indicated in bold.
Table 3 - FAR, FRR and GAR values for different sample rates. The possible operating points for security purposes are marked in green; the 50 samples per second rate has no viable operating point.
Table 4 - FAR, FRR and GAR for different threshold values, using the average method.
Table 5 - FAR, FRR and GAR values around the chosen operating point.
Table 6 - FAR, FRR and GAR values using the DTW method.
Table 7 - FAR, FRR and GAR values around the chosen operating point.
Table 8 - FAR, FRR and GAR for different threshold values for the three coordinates.
Table 9 - Mean and DTW methods comparison.
Table 10 - One user from the used database. Walk1, Walk2 and Walk3 represent three different walks from the same user. The values in the table are the transformation costs obtained through Dynamic Time Warping; the costs below the chosen operating point (transformation cost = 33) are marked in dark red.
List of Acronyms
ADT – Android Development Tools
AM – Acceleration Modulus
CMS – Cumulative Match Score
DET – Detection Error Trade-off
DTW – Dynamic Time Warping
EER – Equal Error Rate
FAR – False Accept Rate
FRR – False Reject Rate
FS – Floor Sensors
FTA – Failure to Acquire
GAR – Genuine Accept Rate
HMM – Hidden Markov Models
HTK – Hidden Markov Model Toolkit
MV – Machine Vision
OLOF - Orthogonal Line Ordinal Features
PIN – Personal Identification Number
ROC – Receiver Operating Characteristic
ROI – Region of Interest
SDK – Software Development Kit
WS – Wearable Sensors
1. Introduction
We are living in a technological world where IT gadgets are changing the daily lives of ordinary citizens faster than ever. Mobile phones, and particularly smartphones, are at the center of this technological revolution, making the mobile phone industry one of the fastest-growing in the world, with no sign of slowing down.
Nowadays, information can be accessed or uploaded almost instantly, from anywhere, using currently available mobile devices such as cell phones, tablets and laptops over wireless connections. As a result, a large amount of personal information is stored on these devices, including addresses, bank account details, credit card numbers, appointments and business details, in addition to phone numbers. This critical information is usually protected by just a PIN or a password, and in some cases it is not protected at all. If the device is stolen and the password broken, the consequences for the owner can be financially and personally severe.
Together with the communication power of mobile devices, the development of user-friendly graphical interfaces increases the simplicity of their use and, consequently, the number of users. Despite being personal, mobile devices need to be protected, which is currently not the case for most of them. This concern for information protection should also consider users with fewer technological skills.
A survey by Breitinger and Nickel [1] reported that only 13% of mobile phones require some kind of user authentication after a stand-by phase. As a result, all data is directly accessible to any person with physical access to the device. The same survey also showed that the reason for this is that the authentication methods offered by mobile phones are too troublesome: users easily get tired of entering a PIN or a password every time the mobile device comes out of stand-by.
This is where biometric authentication methods come in. Biometric systems are becoming increasingly important, since they provide more reliable and efficient means of identity verification. Biometric-based systems consist of technologies used for access control based on an individual's distinctive biological traits (physical, physiological and/or behavioral). These authentication systems use automated processes that include acquiring and processing a sample and matching the result against previously enrolled templates, resulting in a fast (desirably real-time) decision.
1.1. The Mobile Phone Market
A survey by Nielsen [2] shows that in February 2012 almost half (49.7%) of U.S. mobile subscribers owned smartphones, while one year earlier, according to a previous survey from February 2011, only 36 percent did (see Figure 1). This growing tendency continues: more than two-thirds of the new mobile devices bought between January and March 2012 were smartphones rather than feature phones [3], suggesting that the growth of the smartphone market is unstoppable.
Figure 1 - Smartphone market penetration in the U.S. [2].
This development of the smartphone market leads to a fast increase in the number of applications available for these devices and, consequently, in the number of services requiring access to personal data. In spite of their size, smartphones are in fact small computers,
able to carry valuable and personal information that needs to be secured from everyone's access but its legitimate user's. This need will increase the demand for more secure identification systems on mobile devices, including biometric-based systems. The "Global Biometric Forecast to 2012" [4] estimates an annual growth of 21%, between 2012 and 2014, in the use of biometric-based systems.
Figure 2 - Global smartphone operating system market share (by units shipped), 2005 vs 2012 [4].
The objective of this dissertation is to develop a biometric verification system for a smartphone, using only the device's built-in accelerometer. As can be seen in Figure 2, Android, by Google Inc. [5], accounts for more than 50% of the U.S. smartphone OS market share, making it the dominant operating system (OS) in the smartphone market. Considering this, and the fact that Google's operating system is open source, Android was chosen as the OS for developing the biometric verification system in this dissertation.
1.2. Objectives of the Dissertation
This dissertation is based on the hypothesis that the smartphone market will keep growing, requiring simple and efficient authentication technologies based on the devices' built-in sensors and processing capacity, and that biometrics, combined with those characteristics, can play an important role in preventing abusive use of the device.
The main goal of this dissertation is to create a biometric authentication method within a mobile phone environment. Nowadays, many mobile biometric recognition systems require specific acquisition devices to capture the necessary biometric traits. Although such devices improve the quality of the gathered information, increasing recognition efficacy, their use is not practical. The most practical approach is to rely only on the device's built-in sensors. This dissertation proposes a mobile solution, based on the Android operating system, that uses the accelerometer to identify a person by their way of walking (gait). The expected outcome is a software application able to perform biometric verification on a smartphone by capturing the user's gait through accelerometer data.
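Because the three raw axis readings depend on how the phone sits in the pocket, a common first step, and the basis of the first method studied in this thesis, is to collapse them into a single orientation-independent acceleration modulus. A minimal sketch of that computation follows (the function name is illustrative, not taken from the developed application):

```python
import math

def acceleration_modulus(samples):
    """Collapse 3-axis accelerometer samples (ax, ay, az) into one
    magnitude signal: |a| = sqrt(ax^2 + ay^2 + az^2)."""
    return [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]

# A device lying still reads roughly 1 g (~9.81 m/s^2) on the modulus,
# whichever axis happens to point downwards.
print(acceleration_modulus([(3.0, 4.0, 0.0)]))  # → [5.0]
```

Working on the modulus trades directional detail for robustness: the signal no longer changes when the device is rotated inside the pocket.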
1.3. Main Contributions
The main contributions of this dissertation are:
- A native mobile biometric verification system application using gait as the selected biometric trait.
- An Android application with a friendly interface, which can be used successfully on a great variety of devices.
- A referential calibration method (which enables the application to work correctly regardless of the device's position), which had not been reported in the literature before.
1.4. Structure of This Dissertation
After this introduction, the remainder of this dissertation is structured as follows:
Chapter 2 – Biometrics in the Mobile Environment: A review of the main
biometric concepts and definitions and a brief overview of their
implementation in the mobile environment is presented.
Chapter 3 – State of the art: A review of different techniques implemented in
the literature that might be used while implementing a biometric
authentication system is provided.
Chapter 4 – Selection of the target platform: Discusses the reasoning behind
the choice of the mobile platform and the tools used in the dissertation.
Chapter 5 – Proposed Biometric Verification System: Describes the
architecture and implementation of the proposed biometric authentication
system.
Chapter 6 – Experimental results: Presents and discusses the experimental
results obtained with the developed system.
Chapter 7 – Conclusions and Future Work: Provides conclusions about the
work developed and the obtained results and discusses directions for future
work.
2. Biometrics in the Mobile Environment
This chapter explains how biometric systems work, as well as the metrics used to evaluate this type of system. It includes a brief introduction to commercial systems already deployed, and some examples of new ways of using biometrics on this kind of device.
2.1. Biometrics: Overview
Biometric recognition, or biometrics, refers to the automatic identification of a person based on his or her anatomical (e.g. face, fingerprint), behavioral (e.g. gait, signature) or physiological (e.g. DNA) characteristics or traits; Figure 3 shows some of the most common characteristics used by biometric recognition techniques. The main advantages offered by biometric recognition over traditional methods involving ID cards (tokens) or PINs (passwords) are [6]:
- The person to be identified is required to be physically present at the point of identification.
- Identification based on biometric techniques obviates the need to remember a password or to carry a token.
By replacing PINs (or using biometric traits in addition to PINs), biometric techniques
can be the answer for preventing unauthorized access to ATMs, cellular phones,
laptops, and computer networks. Unlike biometric traits, PINs or passwords may be
forgotten, and credentials like passports and driver’s licenses may be forged, stolen, or
lost. As a result, various biometric traits are being used for real-time recognition in
order to enhance security and reduce financial fraud. In some applications more than
one biometric trait is used to attain higher security. Such systems are called
multimodal biometric systems [6], by contrast with unimodal systems, when a single
biometric trait is used.
Biometrics can use any distinctive and measurable human characteristic or trait that can identify an individual. To be used as a biometric identifier, a human characteristic should satisfy the following requirements [7]:
- Universality – each and every person should have the selected characteristic.
- Collectability – the characteristic should be easy to acquire.
- Distinctiveness – each individual's trait must be unique and different from everyone else's.
- Permanence – the trait should be reasonably invariant over time. However, some biometrics have to be re-measured from time to time (e.g., faces for passport applications).
Yet, in a practical biometric authentication system there are other issues that must be considered [7]:
- Performance – the system's accuracy and authentication speed.
- Acceptability – the user's cooperation/willingness in using a particular trait.
- Circumvention – the difficulty of fooling the system through fraudulent methods.
If an individual’s trait verifies all the requirements listed above, it can be used on a
biometric authentication system.
Figure 3 – Most common human characteristics used as biometric traits
Biometric recognition systems can be used with two different objectives: identification or verification. Biometric verification systems answer the question "Am I who I claim to be?" by comparing the user's trait with a single, previously created template (e.g. to access a personal computer); the system responds by confirming or denying the person's claimed identity. Biometric identification systems, on the other hand, answer the question "Who am I?" by searching a list of N users in the template database. Identification typically takes longer to respond because it involves 1:N matching operations, compared to the 1:1 matching of verification [6].
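The 1:1 versus 1:N distinction can be made concrete with a small sketch. The toy distance and data below are our own illustration; a system like the one proposed in this dissertation would use a DTW transformation cost instead:

```python
def distance(a, b):
    # Toy dissimilarity between two feature vectors; a real gait system
    # would compute e.g. a Dynamic Time Warping cost here.
    return sum(abs(x - y) for x, y in zip(a, b))

def verify(probe, template, threshold):
    """1:1 verification: accept iff the probe is close enough to the
    claimed user's single template."""
    return distance(probe, template) <= threshold

def identify(probe, database):
    """1:N identification: return the enrolled identity whose template
    is closest to the probe (cost grows with database size)."""
    return min(database, key=lambda name: distance(probe, database[name]))

db = {"alice": [1.0, 2.0, 3.0], "bob": [9.0, 9.0, 9.0]}
print(verify([1.1, 2.0, 2.9], db["alice"], threshold=0.5))  # → True
print(identify([8.5, 9.2, 9.0], db))                        # → bob
```

The asymmetry in cost is visible directly: `verify` computes one distance regardless of how many users are enrolled, while `identify` must visit every template.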
The matching performance of identification methods can be improved using indexed databases (e.g. hash-generating techniques) [8]. Another technique is the use of soft biometric characteristics, such as eye and skin color, gender, height or weight. Using such data in conjunction with the usual biometric identifiers speeds up matching, as the database can be filtered/sorted beforehand.
2.1.1. Biometric Recognition System
The basic idea behind a biometric recognition system is to recognize an individual by retrieving and analyzing one or more of his or her biometric traits; consequently, before an individual can be identified, he or she must have been previously registered (enrolled) in the system. The first time a user interacts with the system, his or her biometric information is captured and converted into numeric data through appropriate algorithms, and only then stored as a template, to be used for comparisons in future recognition attempts. Figure 4 shows the modules of the enrollment stage.
Figure 4 - Enrollment stage diagram.
After the user’s enrollment, the data previously stored in the database can be used for
recognition purposes (verification or identification). As the information retrieved by
the biometric recognition system is never exactly the same, during the recognition
stage a similarity score must be calculated between the actual measurement and the
enrolled information. A decision is then made, based on that similarity score, granting
or denying access to the user, as illustrated in Figure 5. The different modules of the
recognition system are explained below.
Figure 5 - Recognition stage diagram.
A complete biometric recognition system, including both the enrollment and
recognition stages, is usually divided into six different modules, as represented in
Figure 6:
Data acquisition – This is the first module, and its function is to acquire the
biometric trait, which can be an image or other signal, through the system’s
sensors. If the user is in the enrollment phase, the name of the user is also
acquired by this module.
Pre-processing – Once the input signal is acquired, the original input is
processed in order to overcome the sensor’s weaknesses (e.g., different data
acquisition rates, noise effects).
Feature extraction – The signal’s region of interest is segmented and the
extraction algorithm is applied to obtain the feature vector of the biometric
signal, from which a new template is generated.
Template Database – Database containing all the templates which were stored
during the enrollment stage for matching purposes.
Matching – This module is responsible for comparing new templates with the
stored ones, returning a matching score between the two templates.
Decision – This module is responsible for granting or refusing access, based on
the matching score.
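The flow through these modules can be illustrated with a minimal sketch; every function below is a hypothetical placeholder standing in for the corresponding real module, not the system's actual code.

```python
# Minimal sketch of the six-module flow; all functions are placeholders.
def acquire():            return [0.1, 0.5, 0.4, 0.9, 0.3]          # raw sensor signal
def preprocess(signal):   return [round(x, 1) for x in signal]      # e.g. denoising
def extract(signal):      return [1 if x > 0.35 else 0 for x in signal]  # feature vector

def matching(template, stored):
    # Fraction of equal coefficients between two binary templates.
    return sum(a == b for a, b in zip(template, stored)) / len(stored)

def decision(score, threshold=0.6):
    # Grant access only when the matching score reaches the threshold.
    return score >= threshold

stored_template = [0, 1, 1, 1, 0]          # created during enrollment
probe = extract(preprocess(acquire()))     # recognition attempt
print(decision(matching(probe, stored_template)))  # True
```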
Figure 6 - Modules of a typical biometric recognition system.
Yet, biometric systems are not perfect. There are two types of errors associated with
biometric verification systems: the False Reject Rate (false negative), FRR, and the
False Accept Rate (false positive), FAR. FRR is the probability of wrongfully rejecting a
genuine user, and FAR is the probability of wrongfully accepting an impostor user. FAR
and FRR are calculated using the following equations:
FAR = (number of accepted impostor attempts) / (total number of impostor attempts) (1)
FRR = (number of rejected genuine attempts) / (total number of genuine attempts) (2)
The system’s decision to accept or reject a user is based on a threshold value, t.
The analysis of Figure 7 shows that FAR and FRR vary inversely to each other. This
means that by increasing the threshold value t, the system becomes less tolerant, and
consequently more secure, and the FRR increases. By lowering t, the system becomes
less secure and the FAR increases. To evaluate a biometric system’s performance in
verification mode, researchers often use a Decision Error Trade-off (DET) curve. A DET
curve is a plot of FAR versus FRR and shows the system’s performance under different
values of t. The DET curve’s point at which the FAR equals the FRR is called the Equal
Error Rate (EER). A low EER indicates a system with a balanced sensitivity, but not
necessarily the optimal threshold value, as different systems have different security
requirements.
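The threshold sweep behind a DET curve and the location of the EER can be illustrated on made-up score lists; the genuine and impostor similarity scores below are invented for the example.

```python
# Sweep the threshold t over invented similarity scores and find the
# point where FAR and FRR are closest (an empirical EER estimate).
def far_frr(genuine, impostor, t):
    frr = sum(s < t for s in genuine) / len(genuine)     # genuine users rejected
    far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
    return far, frr

genuine  = [0.9, 0.8, 0.85, 0.6, 0.95]   # invented genuine-attempt scores
impostor = [0.2, 0.4, 0.55, 0.3, 0.65]   # invented impostor-attempt scores

best_t = min((t / 100 for t in range(101)),
             key=lambda t: abs(far_frr(genuine, impostor, t)[0]
                             - far_frr(genuine, impostor, t)[1]))
far, frr = far_frr(genuine, impostor, best_t)
print(best_t, far, frr)
```

Raising t pushes FRR up and FAR down, and lowering it does the opposite, which is exactly the trade-off the DET curve plots.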
Figure 7 – FAR and FRR depending on t [9].
A modified version of the DET curve is a ROC (Receiver Operating Characteristic) curve,
which plots 1-FRR (probability of correct verification) versus FAR. The ROC curve shows
the accuracy of the system. The closer the curve is to the 45-degree diagonal of the
ROC curve, the less accurate the system is. Examples of DET and ROC curves are
shown in Figure 8.
Figure 8 - DET and ROC curves example [10].
In a perfect biometric system, both FAR and FRR would be zero and the biometric
system would always correctly verify the user’s identity based on the biometric trait,
but in practice this is not achievable.
The Failure To Acquire (FTA) rate indicates how often the device fails to capture the
biometric trait. The FTA can be a useful metric, particularly in systems collecting data
in unconstrained environments, which is the case for mobile-based applications.
There is another way of evaluating a biometric identification system (a ‘who am I?’
system): the Cumulative Match Score (CMS). The identification process is typically a
‘closed’ test, which means the sensor takes an observation of an individual that is
known to be in the database. The test subject’s features are compared to the other
features in the system’s database and a similarity score is found for each comparison.
These similarity scores are then ranked in descending order. The percentage of times
that the highest similarity score is the correct match for all individuals is referred to
as the ‘top match score’. If any of the top r similarity scores corresponds to the test
subject, it is considered a correct match in terms of the cumulative match. The
percentage of times one of those r similarity scores is the correct match for all
individuals is referred to as the Cumulative Match Score. The CMS curve plots rank n
versus the percentage of correct identifications, where rank n is the number of top
similarity scores considered. An example of a CMS curve is shown in Figure 20.
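The cumulative match computation can be sketched on toy data; the gallery identities and similarity scores below are invented for illustration.

```python
# Rank-r Cumulative Match Score on invented probe-vs-gallery scores:
# a probe counts as a hit if its true identity is among the top r scores.
def cms(scores_per_probe, true_ids, r):
    """scores_per_probe: list of dicts {gallery_id: similarity}."""
    hits = 0
    for scores, truth in zip(scores_per_probe, true_ids):
        ranked = sorted(scores, key=scores.get, reverse=True)
        if truth in ranked[:r]:
            hits += 1
    return hits / len(true_ids)

scores = [{"a": 0.9, "b": 0.5, "c": 0.4},   # probe of user "a"
          {"a": 0.6, "b": 0.55, "c": 0.3},  # probe of user "b"
          {"a": 0.2, "b": 0.8, "c": 0.7}]   # probe of user "c"
truth = ["a", "b", "c"]
print(cms(scores, truth, r=1))  # rank-1 ('top match') score
print(cms(scores, truth, r=2))  # rank-2 cumulative score
```

Plotting `cms` for r = 1, 2, …, N yields the CMS curve described above.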
2.2. Biometric Systems in Mobile Devices
The use of biometrics in mobile devices is not a new topic (as explained in
Section 2.2.1), but its spread was limited by the usability constraints of previous
device generations. Yet, this tendency is changing since the introduction of
smartphones, which have built-in cameras and a wide range of additional sensors that
can be exploited for biometric recognition purposes.
2.2.1. Commercial systems
Biometric verification systems are bound to replace passwords, and proof of that
is the release of Android OS 4.0, Ice Cream Sandwich. A company named PittPatt
developed a biometric verification system which is integrated in this Android OS
version. This system uses face recognition to unlock the mobile device, and
incorporates a fallback password or pattern in case the facial recognition fails (see
Figure 9). This biometric system requires the mobile device to have a front-facing
camera, which is already true for many devices, since it is highly impractical to use
the rear camera for facial recognition.
Figure 9 - Android 4.0 facial recognition set up [11].
This is a good idea, but recent studies show it is possible to fool the face unlock
function by simply holding a picture in front of the phone [12].
However, the use of biometrics in mobile devices is not a new topic. In 1998, Dr.
Manfred Bromba presented the possibility of replacing passwords with a fingertip scan
[13]. At a Siemens booth at CeBIT 98 the public could admire the first demonstrator,
which captured the fingerprint with a newly developed semiconductor sensor and sent
it to a notebook, which decided whether the finger pattern was identical to a
previously stored reference. After the first fingerprint demonstrator had caused quite
a stir at CeBIT 98, one year later Siemens released the SL10, the first
fingerprint-protected mobile phone on the market (see Figure 10), and some major
companies still hold to the fingerprint protection idea.
Figure 10 - Siemens SL10 [13].
In 2011, Motorola Mobility released the ATRIX (Figure 11), the first
smartphone to operate using fingerprint recognition [14]. Every time the user touches
the power button, the device recognizes the fingerprint and approves or rejects
the operation.
Figure 11 - Motorola ATRIX [14].
But, once again, despite being a good way of protecting the user’s data, fingerprint
reading is not very popular because it requires a dedicated finger scanner, which
creates a problem of convenience and device adaptability.
2.2.2. Current Research
This section describes two biometric systems which only use built-in hardware
(accelerometer and camera) that every smartphone integrates, overcoming the
device compatibility issue.
Palmprint Verification on Mobile Phones
Noam Brown [15] proposed a mobile palmprint verification system using the
device’s camera. The project goals were to provide a mobile solution, allowing a high
degree of freedom in hand placement for casual use, with sufficient verification
accuracy.
The system architecture’s main modules deal with region of interest (ROI) extraction
and feature extraction using an accelerated competitive code, each entailing several
processing steps. In order to segment the hand from the rest of the image, the author
uses the camera’s flash and assumes that the hand always occupies the center of
the image. As the effect of the flash diminishes quickly with distance, using it helps to
differentiate the hand from the background (see Figure 12). By assuming that the palm
is in the center of the image, any region disconnected from the center pixel by Canny
edge detection is considered a background region and is removed.
Figure 12 – From left to right: Photo without flash; Photo with flash; ROI. [15]
The region of interest (ROI) is defined by two valley points, one located between the
little finger and the ring finger and the other between middle and index finger. Given
the two valley points, the ROI can be finally derived based on known geometric
relationships.
To compare the palm’s ROI, the author chose a modified version of the method
presented in [16], which relies on the assumption that palm lines are approximately
straight and thin. The image is firstly filtered using a 2-D Gaussian filter to reduce the
presence of extremely thin wrinkles, and then it is filtered with a series of direction-
specific line detectors. The experiments were performed indoors, using a Motorola
Droid X device with flash.
The dataset consisted of 56 palmprint images from 7 individuals. The participants used
their dominant hands for the image capturing, with six participants using their right
hand and one using his left. The dataset allows for a total of 1431 unique comparisons,
with 196 comparisons of the same hand and 1235 comparisons of different hands.
Using a threshold level of 0.285, the Equal Error Rate (EER) for the proposed method
was found to be 2.04%.
Nuno Moço [17] developed a similar application, which also uses the same concept of
hand recognition. The main modules also consist of region of interest extraction and
feature extraction. In order to obtain the ROI, a number of steps must be
taken, including image segmentation, key point (fingertip and finger valley)
detection and region of interest extraction.
The normalized region of interest resulting from this pre-processing stage is the input
of the feature extraction module, which outputs the biometric templates computed
based on the palmprint features.
The feature extraction technique used in this biometric recognition system is the
Orthogonal Line Ordinal Features (OLOF) [18]. The OLOF technique generates a one-bit
feature code by qualitatively comparing two elongated, line-like image regions, which
are orthogonal in orientation. A palmprint pattern is represented by thousands of
ordinal feature codes. A feature extraction stage example is shown in Figure 13.
Figure 13 - Feature extraction stage using the OLOF method [18].
In order to match the users, the Hamming distance was used. The Hamming distance
between two vectors is the number of ordinal code coefficients in which the
corresponding templates differ. The user verification decision is based on comparing
the computed Hamming distance against a pre-defined threshold.
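The Hamming-distance decision just described can be sketched as follows; the short bit codes below are invented, and only the 0.37 threshold reuses a number from the text.

```python
# Hamming distance between two binary ordinal-code templates, and a
# verification decision based on the normalized distance.
def hamming(code_a, code_b):
    # Number of positions in which the two codes differ.
    return sum(a != b for a, b in zip(code_a, code_b))

def verify(code_a, code_b, threshold=0.37):
    # Accept when the normalized distance is at most the threshold.
    return hamming(code_a, code_b) / len(code_a) <= threshold

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 1, 0, 0, 0, 1, 1]   # differs in 2 of 8 positions
print(hamming(enrolled, probe))        # 2
print(verify(enrolled, probe))         # True (2/8 = 0.25 <= 0.37)
```

Real OLOF templates contain thousands of such bits, but the distance computation is the same.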
The experiment was also performed indoors using an HTC Desire mobile phone. The
application was tested with a database of 50 users, composed of 10 images of each
hand, resulting in a total of 1000 images. Using a threshold level of 0.37, the FAR, FRR
and EER achieved were 0.03%, 9.87% and 6%, respectively.
Accelerometer-Based Biometric Gait Recognition
Claudia Nickel [19] proposed a biometric authentication method using gait as
the biometric trait. The idea is to authenticate the user while he is walking
with the phone, without the need for any extra input. The unobtrusiveness of
gait-based systems is one of their main advantages. Normally, high-quality
accelerometers are used to capture the gait data, but using the lower-quality
sensors present in mobile phones creates an additional challenge.
In order to overcome this and other limitations experienced in similar systems, the
proposed method is based on Hidden Markov Models (HMM) [20].
The mobile phone used in [19] was an Android G1, which contains piezo-resistive
accelerometers for measuring acceleration in three directions (see Figure 14). Each
subject walked about 37 meters down a hall, waited for 2 seconds at the end, turned
around, waited 2 more seconds and then walked back the same distance. Data was
collected from 48 subjects of both genders and different ages. A sample of the
recorded data can be seen in Figure 15.
Figure 14 – The three axis in which acceleration is measured and phone position when collecting data [19].
The gait recognition system consists of four stages (data preparation, training,
recognition and analysis), each consisting of several steps. For building and training
HMMs, the Hidden Markov Model Toolkit (HTK) software was used. For the
recognition phase, a modification of the Viterbi algorithm was used. The system takes
as input a dictionary which describes how each subject can be classified, a recognition
network, the HMM definitions produced during the training phase and, finally, a list of
walk sections for testing purposes. The output produced is a text file containing a list
of the user’s walk sections along with the class into which each falls.
Figure 15 - Data obtained from the three axis [19].
The goal of the analysis stage is to know how often the enrolled user is not recognized
by his phone and how often a foreign subject is recognized as the enrolled user,
resulting in unauthorized access.
In conclusion, the results obtained were very motivating (with FAR and FRR values
around 10%), taking into account the quality of the sensors present in the mobile
phone used, especially when compared to results obtained in previous experiments
using high-quality sensors. The fact that the data collection was performed under
constrained conditions has to be taken into account when examining the obtained
results. In order to further validate the results, the authors intend to apply the
proposed method using the phone in a more realistic scenario.
3. State of the art
In this chapter we will look into gait as a biometric feature.
Dictionaries define gait as “a manner of walking”. However, there is more to
gait than a simple manner of walking: it is possible to use gait as a biometric measure
to recognize known persons and classify unknown subjects.
Nearly a century ago, A.A. Marks, an American prosthetist, offered a precise qualitative
description of normal human locomotion when he illustrated and analyzed the walking
process in eight phases and discussed the implications of prosthetic design on the
function of amputee gait [21].
Nowadays, a human gait cycle is defined as the period from an initial contact of one
foot to the following contact with the same foot [22]. In Figure 16 a complete gait
cycle illustrating the eight gait phases is shown.
Figure 16 – A complete gait cycle showing the eight gait phases [24].
3.1. Gait Recognition Technologies
The use of gait to recognize people is not a new idea. The earliest research dates back
some 40 years, when studies from medicine [24] and psychology [25] presented
evidence that human gait has distinctive patterns from which individuals can be identified.
However, it was only in the last decade that gait was introduced as a biometric
feature. From a technical point of view, gait recognition technology can be categorized
into three different classes:
Machine Vision (MV) – MV uses a video camera to capture gait data and
video/image processing to extract features. This method is often used in
surveillance and forensics.
Floor Sensors (FS) – FS use sensors installed in the floor that can
measure gait features, such as ground reaction force and heel-to-toe ratio, when
a person walks on them. This method is not usually used in identification
biometric systems.
Wearable Sensors (WS) – WS collect the gait data through body-worn sensors
and this method is normally used to authenticate a person.
3.1.1. Machine vision
The most popular gait analysis systems use machine vision techniques to
extract gait patterns. The first attempt at automatic gait analysis was
performed in 1994 by Niyogi and Adelson [26]. Their approach consisted of deriving
side-view gait signatures by modeling the walker as a stick figure and using angles as
signatures. Since then, many reports have been published using different
techniques and methods. For instance, Lee et al. [27] localized image features that
would fit different parts of the binary silhouette of a person. Yoo et al. [28] generated
gait signatures by computer vision and extracted kinematic features in order to
recognize people; they also proposed a new method for extracting body points by
topological analysis and linear regression guided by anatomical knowledge. Cunado et
al. [29] extracted the gait data by using movement from the thighs to fit an articulated
pendulum-like motion model. Little and Boyd [30] developed a description of
instantaneous motion, which varies with the type of moving figure and the type of
motion, and used it to recognize individuals by their gait.
The most popular gait recognition method, used by e.g. Kale et al. [31]
and Wang et al. [32], is to extract the human silhouette from the video sequence and
use it as the gait feature [33]; see Figure 17.
The main purpose of video-based gait authentication is surveillance, for instance
recognizing a criminal from a security camera video [34]. In such cases, where other
biometrics are obscured (a criminal might conceal his face, or use gloves),
the gait signature will normally be present, since it is less common to conceal or
disguise one’s walking manner [33].
Figure 17 - The bottom row shows silhouette frames. The raw image corresponding to each silhouette is shown on the top row [35].
3.1.2. Floor sensors
Analyzing gait through floor sensors has been commonly used by physiologists, since
pathological gait can be a key indicator of several age-related diseases
such as diabetic polyneuropathy [36]. Although floor-based sensors have been used to
track individuals, there are not many floor sensing systems designed with
gait recognition in mind. Those systems that have been used often have insufficient
resolution or too low sample rates, or are too expensive [37]. However, some research
has been done which looks promising [33].
Orr et al. [38] introduced a floor system that may be used to transparently identify
users in their everyday living and working environments. This system uses footstep
models based on footstep profile features and a technique called Ground Reaction
Force (GRF), based on Newton’s third law (“for every action there is an equal and
opposite reaction”): the GRF is the reaction that a measuring device produces in
response to the weight and inertia of a body in contact with it. The report on this
method showed that the effect of footwear on recognition accuracy is negligible [33].
Middleton et al. [37] arranged an experiment with over 1500 individual sensors, each
covering 3 cm². The method extracted three features: stride length, stride
cadence, and the ratio of time on toe to time on heel. Stride length and cadence are
features commonly used in machine vision based recognition, but the third
feature is new to this analysis. Figure 18 shows a graphical illustration of the gait cycle
and the data collected [33].
Floor-based sensors could eventually find deployment as a standalone system (e.g. a
burglar alarm system) or as part of a multimodal biometric system.
Figure 18 – Typical gait cycle over the sensor mat. (a) shows recognized footsteps (b) shows the time spent at each location (the higher intensity of the pixels, the longer the person has stayed on that position) (c) the profile of the 4 footsteps on the sensor mat, showing heel and toe strikes [37].
3.1.3. Wearable sensors
This last category uses one or more wearable sensors, typically accelerometers, to
acquire gait data. This authentication method is quite recent and was introduced by a
research group from VTT Electronics, Finland, in 2005 [39]. The accelerometer
captures movement, most commonly in three directions (horizontal, vertical and
lateral), and recognition is performed by processing these signals. Just like the other
two methods, there are different approaches to both the
placement of the sensor(s) and the recognition method. Mäntyjärvi et al. [39] wore the
sensor on the belt and used correlation, frequency domain and histogram statistics in
the processing phase. Vildjiounaite et al. [40] tried to place the sensor in three
different places: hip pocket, breast pocket and in the hand while carrying a suitcase.
Rong et al. [41] used a method called dynamic time warping to recognize the subjects.
Figure 25 shows an example of the data collected by an accelerometer [33].
Wearable sensors are mainly envisioned as part of a security module that continuously
authenticates the user of a portable device (e.g. a smartphone).
3.2. Gait Recognition Systems
This section looks in more detail at wearable sensor (WS) based gait recognition
systems, notably those involving the carrying of mobile devices such as smartphones.
There are various methods to process the gait data retrieved from the sensor. This
section explains some of them, following the architecture of Figure 6.
3.2.1. Data acquisition
As mentioned in the previous section, there are different ways to place the sensors
in WS-based systems. The most common is to place the sensor on either the
ankle or the belt [41, 42]. By firmly attaching the sensor to the belt or ankle one does
not have to worry about compromising the gait data, since the accelerometer will not
move or rotate while walking. Other sensor placements studied are the hip pocket [40],
breast pocket [40], and the hand while carrying a suitcase [40]. The results from
previous reports do not give any particular reason to favor one placement over
another. From an application point of view, it makes sense to choose the belt or the
pocket, since this is where people often carry their mobile phones [33].
3.2.2. Pre-processing
When a sensor is sampling movement data, some noise is also collected. As the
measured acceleration signals are low-frequency, the accelerometer is sensitive to
environmental noise [41]. This means that, in order to precisely study the gait
signal, a digital filter has to be designed to remove or reduce this noise. Examples of
such filters are given below:
Moving Average was used in [33] with a sliding window of size 5. This means
that the new value of the filtered element depends on its four neighbors (two
on each side). In this filter each element has the same weight. This type of
filter is further explained in Section 5.2.2.
Weighted Moving Average was used in [33, 43], also with a window of size 5.
This filter is very similar to the Moving Average, but instead of every element
having the same weight, each element’s proximity to the filtered element defines its
weight. This filter is also explained in Section 5.2.2.
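Both filters can be sketched in a few lines. The handling of the borders (left unfiltered) and the triangular weights of the weighted variant are assumptions made for this example, not necessarily those of the cited works.

```python
# Size-5 moving-average and weighted-moving-average smoothing filters.
# Border samples are kept unfiltered for simplicity (an assumption).
def moving_average(signal):
    out = list(signal)
    for i in range(2, len(signal) - 2):
        out[i] = sum(signal[i - 2:i + 3]) / 5.0   # equal weights
    return out

def weighted_moving_average(signal, weights=(1, 2, 3, 2, 1)):
    out = list(signal)
    total = sum(weights)
    for i in range(2, len(signal) - 2):
        window = signal[i - 2:i + 3]
        # Closer neighbors get larger (assumed triangular) weights.
        out[i] = sum(w * x for w, x in zip(weights, window)) / total
    return out

raw = [1.0, 2.0, 9.0, 2.0, 1.0, 2.0, 3.0]   # spike at index 2
print(moving_average(raw))
print(weighted_moving_average(raw))
```

Both outputs flatten the spike at index 2; the weighted version preserves more of it because the central sample carries the largest weight.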
3.2.3. Feature extraction
The feature extraction phase is where the pre-processed signal is processed in order
to compute a biometric template. This template will be stored for future comparisons,
or will be compared against stored templates for recognition. This is the last phase of
the enrollment stage. Different feature extraction techniques are listed and explained
below:
Frequency domain analysis was used in [39, 40, 44]. The idea behind frequency
domain analysis is that while a normal time domain graph shows how a signal
changes over time, a frequency domain graph shows how much of the signal
lies within each given frequency band over a range of frequencies. In [40] Fast
Fourier Transform (FFT) coefficients were used for recognition of gait
patterns [33]. “The coefficients were calculated in a 256-sample window with a
100 sample overlap. The 128 FFT coefficients of each training file were clustered
with K-means algorithm into eight clusters. The FFT gait score was produced by
finding the minimum distance of the test data FFT coefficients from the trained
clusters” [39].
Average cycle length has been used in [41] and [43]. The basic idea is to
identify cycles in the signal and create an average cycle from them.
After computing the average cycle, it is stored in a database for future
comparisons. The authentication is based on a score computed by
comparing these average cycles. This technique is explained in more detail
in Chapter 5.
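A rough sketch of the average-cycle idea on synthetic data follows. Cycle detection here is a toy (fixed-length cycles), not the boundary detector used in the cited works.

```python
# Cut the signal into fixed-length cycles and average them sample by
# sample; real systems detect cycle boundaries in the signal instead.
def average_cycle(signal, cycle_len):
    cycles = [signal[i:i + cycle_len]
              for i in range(0, len(signal) - cycle_len + 1, cycle_len)]
    return [sum(c[j] for c in cycles) / len(cycles)
            for j in range(cycle_len)]

signal = [0, 1, 2, 1,   # cycle 1
          0, 3, 2, 1,   # cycle 2
          0, 2, 2, 1]   # cycle 3
print(average_cycle(signal, 4))  # [0.0, 2.0, 2.0, 1.0]
```

The averaged cycle is the stored template; a probe's averaged cycle is later compared against it to produce the authentication score.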
High-order moments were used in [39]. High-order moments such as skewness
(the third moment) and kurtosis (the fourth moment) describe, respectively, the
degree of asymmetry of the variable’s distribution and its relative peakedness or
flatness [33].
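These two moments can be computed directly from a window of samples; the acceleration values below are invented for illustration.

```python
import math

# Standardized third and fourth moments (skewness and kurtosis) of a
# sample window; invented acceleration values for illustration.
def moments(data):
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    std = math.sqrt(var)
    skew = sum(((x - mean) / std) ** 3 for x in data) / n
    kurt = sum(((x - mean) / std) ** 4 for x in data) / n
    return skew, kurt

skew, kurt = moments([0.1, 0.2, 0.2, 0.3, 0.9])
print(skew, kurt)  # positive skew: the long tail is on the right
```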
Hidden Markov Models (HMM) were used in [19]. The Hidden Markov Model
Toolkit (HTK) was used to build, train and manipulate the HMMs. This method
was already briefly explained in Section 2.2.2.
3.2.4. Matching
The matching phase is the last phase of a biometric recognition system. This is where
the computed biometric template is compared with the templates in the database in
order to find a match. Some example techniques are listed and explained below:
Correlation has been used in e.g. [39, 40, 44]. In general, correlation is an
indication of the strength and direction of a linear relationship between two
random variables. Mäntyjärvi et al. [39] applied correlation in the following way
[33]:
o In the training phase, the acceleration signals are divided into one-step-long
parts by searching for local minima and maxima; since right
and left steps are not necessarily symmetrical, they are processed
separately as a and b steps.
o All steps are normalized both in length and amplitude.
o The a and b steps are averaged in order to create templates for them.
o In the identification phase, the steps above are repeated, forming c and
d steps.
o Finally, the correlation is calculated by the following formula:
(3)
Histogram similarity has been used in [39, 45] and mainly consists of the
following steps [33]:
o Compute the n-bin histogram of the combined signal retrieved by the
accelerometer.
o Normalize the histogram by the number of recorded samples.
o Use a distance metric to compute the distance between two
histograms. In [42] the distance was calculated as
d = ∑i |xi − yi|,
where xi is the probability of a data point falling into bin i of the
enrollment’s normalized histogram and yi is the probability of a data
point falling into bin i of the verification’s normalized histogram.
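The three steps above can be sketched as follows; the bin count, value range and sample values are invented for the example.

```python
# n-bin histogram normalized by sample count, plus the absolute-
# difference distance between two normalized histograms.
def normalized_histogram(samples, n_bins, lo, hi):
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for s in samples:
        i = min(int((s - lo) / width), n_bins - 1)  # clamp top edge
        counts[i] += 1
    return [c / len(samples) for c in counts]

def histogram_distance(hx, hy):
    # d = sum over bins of |xi - yi|
    return sum(abs(x - y) for x, y in zip(hx, hy))

enroll_h = normalized_histogram([0.1, 0.2, 0.6, 0.7, 0.9], 4, 0.0, 1.0)
verify_h = normalized_histogram([0.1, 0.3, 0.6, 0.8, 0.9], 4, 0.0, 1.0)
print(histogram_distance(enroll_h, verify_h))
```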
Figure 19 shows the process of comparing two gait signals using histogram similarity.
Figure 19 - Histogram similarity method example [45].
Dynamic Time Warping (DTW): Dynamic Time Warping was used in [41, 46,
33]. In general, DTW compensates for the naturally occurring changes in walking
speed and is able to compare signals of different lengths or signals that are
shifted along the x-axis. DTW is a method that allows a computer to find an optimal
match between two given sequences (e.g. time series) under certain restrictions. The
DTW method and its restrictions are further explained in Chapter 5.
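A minimal dynamic-programming sketch of DTW, without the step restrictions discussed in Chapter 5, illustrates how sequences of different lengths are compared:

```python
# Unconstrained DTW distance via dynamic programming over a cost matrix.
def dtw(x, y):
    inf = float("inf")
    n, m = len(x), len(y)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

print(dtw([1, 2, 3, 2, 1], [1, 2, 2, 3, 2, 1]))  # 0.0: same shape, stretched
print(dtw([1, 2, 3, 2, 1], [2, 3, 4, 3, 2]))     # nonzero: different amplitude
```

The first pair differs only in walking speed (one sample is repeated), so warping aligns them at zero cost; the second pair differs in amplitude, which warping cannot remove.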
There is no single best method: the efficiency of each method
depends on the accelerometer’s sampling frequency and on the placement of the
sensor. By reviewing the articles mentioned above, one can conclude that average
cycle, DTW, correlation and histogram similarity provide the best results when the
accelerometer is placed on the hip.
Rong et al. [41] present a comparison between the DTW method, the correlation
method [39] and the data distribution statistic (histogram) method [45]. The
corresponding Cumulative Match Score (CMS), i.e., a curve that registers the
cumulative correct matching scores up to a given rank k, is presented in Figure 20. The
CMS curves show that all compared methods support the study by Ailisto et al.
[39], stating that it is possible to identify people by their gait acceleration. From
Figure 20 one can conclude that the DTW matching method performs better than the
correlation and data distribution statistic (histogram) methods.
Figure 20 - CMS curve comparing DTW, Correlation and Data Distribution Statistic (Histogram) [41].
4. Selection of the Target Platform
This chapter describes the smartphone platform used in the biometric recognition
system developed in the present dissertation, as well as the reasons supporting that
choice. Details on the selected mobile phone device and software tools are also
provided.
4.1. Operating System
As one can see in Figure 2, there are many valid platform options for the development
of a mobile application. The Android Operating System was chosen to
develop this application, since it holds the biggest market share and is an open-source
operating system.
Android offers developers an open development platform and the possibility of using
third-party tools for application development. Another advantage of choosing the
Android OS is that open-source software usually has large communities behind it,
which is a great help when developing new applications.
4.2. Target Mobile Device
The device used to develop this dissertation is the Samsung Galaxy Ace Plus S7500
(shown in Figure 21). It uses a single-core CPU at 1 GHz and 512 MB of RAM, giving it a
low processing capacity when compared with, e.g., the Samsung Galaxy S3, which uses
a quad-core CPU at 1.4 GHz and 1 GB of RAM. This difference in processing power has
a significant impact on the application’s data processing speed.
Although the Samsung Galaxy Ace Plus is not the most powerful smartphone on
today’s market, it is still powerful enough to run the proposed application, even if it
takes some time to process the information.
Figure 21 - The Samsung Galaxy Ace Plus smartphone [47].
4.3. Software Development Tools
The main software libraries and tools used in the development of the proposed
biometric recognition system are briefly described below:
Android Software Development Kit (SDK): The SDK is a set of development tools for the
Android operating system. It contains a debugger (DDMS), a handset emulator,
documentation and libraries, among other utilities. Another important platform tool included
in the SDK is the Android Debug Bridge (ADB), which enables the communication between
the development machine (through an integrated development environment like Eclipse) and a
virtual or physical Android device.
Java Development Kit (JDK): The JDK is a software development kit containing a collection
of programming tools, including a Java compiler (javac), an archiver (jar) and a debugger
(jdb), among other utilities. The JDK is an indispensable tool for developing applications
for the Android OS.
Eclipse and Android Development Tools (ADT): Eclipse is the official Integrated
Development Environment (IDE) for Android application development. Eclipse supports
many types of plugins, which greatly extend its functionality. In conjunction with the
Android Development Tools (ADT), an official plugin from the Android developer team,
Eclipse becomes a powerful tool for the development of Android applications. ADT
extends Eclipse with Android-specific functionality, such as dedicated Android file format
editors, specific XML template parsers and, most importantly, the integration of the Dalvik
Debug Monitor Server (DDMS), where applications can be debugged and analyzed.
5. Proposed Biometric Verification System
In this section the implemented smartphone-based biometric verification system is
presented. The first subsection describes the system architecture, while the second
details the various steps of the system implementation.
5.1. System Architecture
Figure 22 illustrates the developed system's architecture. Each square block represents
a phase of the system, each consisting of several steps. These steps are explained
in section 5.2.
[Block diagram: Data acquisition → Pre-processing → Feature extraction → Enrollment (template database) or Decision (yes/no) → GPS & E-mail]
Figure 22 - System architecture
5.1.1. Introduction
This section provides details on the different algorithms used in this project. An
average gait cycle (a gait cycle is composed of one step from the right leg and one
from the left leg) is computed for each component (ax, ay, az and the acceleration
modulus) using all the gait cycles previously detected. These computed cycles will then
be stored as templates for future comparisons, or compared against other averaged
cycles for user authentication. Figure 23 shows an example of the expected M-like
shape of an average cycle. The figure also indicates the key points in the cycle: 1 and 5
represent the start and the end of the step respectively, 3 represents the local
minimum, and 2 and 4 represent the local maxima between 1-3 and 3-5 respectively.
Figure 23 - An example of a good average cycle.
5.2. Implementation Details
5.2.1. Data Acquisition
In order to acquire acceleration data, the smartphone's built-in 3-axis accelerometer
was used, which operates at a maximum sampling frequency of approximately 100 Hz.
If there is no movement and the orientation of the smartphone is as shown in
Figure 24, X and Z will be null, while Y will be -g (g = 9.8 m/s²), since the only force
acting on the sensor is gravity.
Figure 24 – The accelerometer's three axes [48].
Figure 25 shows the three acceleration components (ax, ay and az) and the acceleration
modulus, M, obtained from the equation:

M(t) = √( ax(t)² + ay(t)² + az(t)² )   (4)

where ax(t), ay(t) and az(t) are the three signals read by the accelerometer at time t.
From Figure 25 one can see that both ax and ay, and consequently M, are cyclic. The az
values result from sideways movement and do not show a clear cyclic repetition like ax
and ay. The figure also shows that ax is much larger than ay and az, and consequently
the modulus closely follows the ax component. The reason is that, when the
accelerometer is located on the hip, both ay and az usually lie between -0.5 g and
0.5 g, while ax lies between -0.5 g and -2.5 g.
Figure 25 – ax, ay, az components and the corresponding acceleration modulus.
The following sections explain the signal processing algorithms in depth. All algorithms
were created for this application and are implemented in software written for Android.
5.2.2. Pre-processing
The pre-processing stage prepares the data for the feature extraction phase. The
different stages of this phase are shown in Figure 26.
Figure 26 - Pre-processing block diagram
Referential calibration
As explained later in this dissertation, a study is made to determine the best
combination of acceleration components for verifying a user. For this comparison to be
meaningful, all signals must be expressed in the same reference frame. To accomplish
this, an algorithm was created that calculates the 3D coordinates of a point in
coordinate reference frames rotated in succession, first around the z-axis and then
around the x'-axis, so as to place the templates in the same reference frame.
Given the coordinates of a point P(x,y,z) in the reference frame (x,y,z) (the data
retrieved from the accelerometer), the coordinates of this same point in the x'-y'-z'
reference frame, which is rotated θ radians around the z/z' axis (see Figure 27), will be
calculated. The final coordinates of point P in the x''-y''-z'' reference frame, which is
rotated γ radians around the x'/x'' axis, will also be calculated (see Figure 28).
Figure 27 - First rotational transformation [49].
In the x-y-z coordinate reference frame, the point P has coordinates (x,y,z). In the x'-y'-z'
coordinate frame, which is rotated θ radians around the z/z' axis, the coordinates are:

x' = x cos θ + y sin θ   (5)
y' = -x sin θ + y cos θ   (6)
z' = z   (7)
Figure 28 - Second rotational transformation [49].
In the x'-y'-z' coordinate reference frame, the point P has coordinates (x',y',z'). In the
x''-y''-z'' coordinate frame, shown in Figure 28, where x'' is coincident with x' and the
axis y'' makes an angle γ with the axis y', the coordinates are:

x'' = x'   (8)
y'' = y' cos γ + z' sin γ   (9)
z'' = -y' sin γ + z' cos γ   (10)
To apply this algorithm, both angles, θ and γ, must be known. In this application, at the
beginning of the walk the user is required to stand still for a brief moment, which
means that the vertical acceleration equals the gravity force and the horizontal
acceleration is zero. In these conditions, one knows that (x'',y'',z'') = (0,0,-9.81) and the
coordinates of point P(x,y,z) are those read by the accelerometer. From the equations
stated above one gets:

tan θ = -x / y   (10)
tan γ = -y' / z'   (11)
Time interpolation
A shortcoming of the accelerometer is that it does not always record the same number
of samples per second (each smartphone accelerometer has its own sampling rate). To
overcome this weakness, interpolation must be performed. The sensor's maximum
sample rate is close to 100 samples per second, and lower sampling rates allow lower
data processing times. With this in mind, three different sample rates were studied:
50, 75 and 100 samples per second. Resampling is achieved using linear interpolation.
Knowing two points (t0, v0) and (t1, v1), the linear interpolation is the straight line
between them (see Figure 29). For a time t in the interval (t0, t1), the value v along the
straight line is obtained through the equation:

v = v0 + (v1 - v0) · (t - t0) / (t1 - t0)   (12)
Figure 29 - Linear interpolation between the two red points.
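The resampling step can be sketched in Python as follows (the thesis implements it in Java on Android; `resample` is an illustrative helper, not the thesis code):

```python
def lerp(t0, v0, t1, v1, t):
    """Linear interpolation, equation (12)."""
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def resample(times, values, rate_hz):
    """Resample an irregularly sampled signal to a fixed rate using
    linear interpolation between neighbouring readings."""
    step = 1.0 / rate_hz
    out = []
    t = times[0]
    i = 0
    while t <= times[-1]:
        # advance to the segment [times[i], times[i+1]] containing t
        while i + 1 < len(times) - 1 and times[i + 1] < t:
            i += 1
        out.append(lerp(times[i], values[i], times[i + 1], values[i + 1], t))
        t += step
    return out
```

For example, resampling a signal recorded at uneven instants to 50 Hz simply means evaluating `lerp` at t = 0, 0.02, 0.04, ... seconds.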
Acceleration modulus computation
The raw data provided by the sensor is in g-force values, which makes this
computation easier. In order to compute the acceleration modulus the Euclidean norm
is used, being calculated through equation (4).
An example of a computed acceleration modulus is shown in Figure 30. This example
will be used to illustrate some of the techniques adopted in 5.2.3.
Figure 30 - Example of the acceleration modulus captured by the smartphone while walking along a corridor.
Other possibilities would have been to, for example, ignore the z-direction,
M = √(ax² + ay²), or to use a single direction, M = √(ax²) = |ax|. All three components
were used, rather than only the one with the highest impact on the acceleration
modulus (ax, as noted earlier in this report), because more accurate results can be
expected when all directions are used, since all the gathered data is then considered.
In this thesis the possibility of using the three acceleration components separately to
verify a user was also studied, although it has not been reported in the literature
before.
How these components will be matched is explained in section 5.2.4.
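In code, the modulus computation of equation (4) reduces to the per-sample Euclidean norm; a minimal Python sketch (the app itself is written in Java):

```python
import math

def acceleration_modulus(ax, ay, az):
    """Per-sample Euclidean norm of the three acceleration components."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
```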
Noise reduction
One weakness of this kind of sensor is that the acquired data contains some noise.
There are different approaches to reduce or eliminate the noise in the signal. In this
project, only the Moving Average (MA) and the Weighted Moving Average (WMA)
were considered, since they achieve the expected results and are simple, reducing the
processing time. This matters because the project targets a smartphone, which has
limited autonomy and processing power. Figure 31 shows the weight differences
between the two approaches.
Figure 31 - Weight differences between MA and WMA.
As one can see, the only difference between these two methods is that with the MA
method all the neighbours have equal weight, while with WMA the closest neighbours
are more relevant than those further away. The formulas for MA and WMA with a
sliding window of size 5 are given below:

ât = (at-2 + at-1 + at + at+1 + at+2) / 5   (13)

where at is the acceleration value at time t. The four closest neighbours are all given
the same weight.
ât = (at-2 + 2·at-1 + 3·at + 2·at+1 + at+2) / 9   (14)

where at is the acceleration value at position t. The value at is given weight 3, the two
closest neighbours (at-1 and at+1) have weight 2 and the next two neighbours (at-2 and
at+2) have weight 1.
Figure 32 compares the results of both approaches; as can be seen, they give similar
results.
Figure 32 - MA and WMA results example.
When the algorithm is averaging the signal, if the sliding window does not have
enough values to work correctly (on the first and last 2 values of the signal), the
window uses the border value more than once. For example, when the MA algorithm is
applied to the first value there is no at-2 or at-1, so the algorithm uses
â1 = (3·a1 + a2 + a3) / 5.
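Both filters, including the border handling just described (replicating the border sample), can be sketched in one Python function; the weights follow equations (13) and (14), and the function name is illustrative:

```python
def smooth(signal, weights=(1, 1, 1, 1, 1)):
    """Sliding-window average with the given symmetric weights.
    Near the borders the first/last sample is reused for the missing
    neighbours. weights=(1,1,1,1,1) gives the MA; (1,2,3,2,1) the WMA."""
    n = len(signal)
    half = len(weights) // 2
    total = sum(weights)
    out = []
    for t in range(n):
        acc = 0.0
        for k, w in enumerate(weights):
            # clamp the index so missing neighbours reuse the border value
            idx = min(max(t + k - half, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc / total)
    return out
```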
5.2.3. Feature Extraction
The feature extraction stage prepares the data for the final stage, which can be either
the matching phase or storing the data in the database. The feature extraction steps
are shown in Figure 33.
[Block diagram: Starting location computation → Step detection → Skip irregular cycles → Average cycle creation]
Figure 33 - Feature Extraction block diagram.
The most important processing phase is to detect where a step starts and ends, in
order to split up the signal. The first choice is between splitting the signal into single
left and right steps or taking double steps (gait cycles). The most common way to split
steps is to look at the minimum values, but the maximum values could also be used.
The last phase of feature extraction is to create the template, which will either be
stored for comparison or compared with previously saved templates. One simple way
of doing this is to align all steps at a single point and calculate the mean of the
corresponding values; the resulting mean is used as the template. Another possibility
is to use Dynamic Time Warping (DTW) to find a cycle that can represent all the steps.
In this project it was decided that a cycle contains a double step, as shown in Figure 23,
and the steps are detected by looking at the signal's minimum points. The end of a
step is automatically the beginning of the next step (except, of course, for the final
step). The feature extraction algorithm consists of four sub-phases: starting location
computation, step detection, skipping of irregular cycles and average cycle creation.
Starting location computation
After studying many sets of data, it was concluded that a gait cycle can have from 90 to
180 samples, depending on the walking speed. The computation starts in the middle of
the collected data, where the gait cycles are regular (the user has reached his normal
walking speed). If the signal's length is N, the computation starts at sample N/2. After
finding the middle sample, the closest maximum must be found, which can be position
2 or 4 of Figure 23. To do so, all the samples in the interval [N/2 - 100, N/2 + 100] (an
interval of 200 samples was chosen to make sure that a complete gait cycle is
contained) are analyzed, looking for the local maximum, M. To decide whether the
maximum M is in position 2 or 4, the minima on both sides of M, called m_left and
m_right, must be tested. From here there are two options:
If m_left > m_right, one knows that M is, in fact, in position 4, that m_right is in
position 5 (where the gait cycle ends) and that m_left is in position 3 (see Figure 23).
To compute the rest of the step, the algorithm moves back 50 positions and looks for
the local maximum (position 2) in (m_left - 50, m_left). The same computation is done
to find the local minimum in (position 2 - 50, position 2); this minimum is position 1.
If m_left < m_right, then M is in position 2, m_right is in position 3 and m_left is in
position 1 (where the gait cycle starts). A similar approach to the first option is used to
find the rest of the gait cycle.
The starting location will be position 1 of this gait cycle.
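The search just described can be sketched as follows. This Python fragment is illustrative only, not the thesis code: the 100-sample search ranges used for the neighbouring minima are an assumption, and the walk is assumed long enough for every window to stay in range:

```python
def argmax_in(signal, lo, hi):
    """Index of the largest value in signal[lo:hi]."""
    return max(range(lo, hi), key=lambda i: signal[i])

def argmin_in(signal, lo, hi):
    """Index of the smallest value in signal[lo:hi]."""
    return min(range(lo, hi), key=lambda i: signal[i])

def starting_location(signal):
    """Locate position 1 (start of a gait cycle) near the middle of the walk."""
    mid = len(signal) // 2
    # local maximum M inside the 200-sample window around the middle
    m = argmax_in(signal, mid - 100, mid + 100)
    m_left = argmin_in(signal, m - 100, m)    # minimum before M (assumed range)
    m_right = argmin_in(signal, m, m + 100)   # minimum after M (assumed range)
    if signal[m_left] > signal[m_right]:
        # M is position 4: step back to find position 2, then position 1
        pos2 = argmax_in(signal, m_left - 50, m_left)
        pos1 = argmin_in(signal, pos2 - 50, pos2)
    else:
        # M is position 2: m_left already is position 1
        pos1 = m_left
    return pos1
```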
Gait cycles detection
After the starting location has been found (position 1 in Figure 23) the rest of the step
detection can be performed. Since the computation starts at the middle of the
recorded data it has to be done in two stages, backwards and forwards. The backwards
stage is the first to be processed. From position 1 a similar computation as in the
previous stage (starting location computation) is done to find all the cycle steps before
the starting position, in order to get the first gait cycle. Once the first gait cycle is
known, the second stage (the forward stage) can start. During the forward stage, the
positions of all the minima and maxima (positions 1-5 in Figure 23) of all the gait cycles
are stored in a matrix, called cycles. A step detection example is
shown in Figure 34. Each step detected is represented in a different color while the
start and the end of the walking period are represented in black.
Figure 34 - Step detection example.
Skip irregular cycles
The irregular gait cycles are removed using DTW to compute the distance
(transformation cost) from each cycle to all the others, d(i,j) = DTW(cycle i, cycle j).
After this, the average distance of each cycle i to the others is calculated as
di = (1/(N-1)) Σj≠i d(i,j). Once the average distance of each cycle has been calculated,
the average of these values is calculated as D = (1/N) Σi di. D is thus the total average
and a reference value for all the cycles. Each distance di can now be assessed by
comparing it with the reference distance D: if di exceeds D, the cycle is removed. After
removing the irregular cycles, a new average D is calculated and the procedure is
repeated. This is done three times, which is enough to remove the most significant
irregular cycles.
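The pruning loop can be sketched as below. For brevity the sketch takes an arbitrary distance function as a parameter (the thesis uses the DTW transformation cost, section 5.2.5), and the removal criterion di > D is one plausible reading of the comparison against the reference D:

```python
def prune_irregular(cycles, dist, rounds=3):
    """Iteratively remove cycles whose average distance d_i to the
    other cycles exceeds the overall average D."""
    cycles = list(cycles)
    for _ in range(rounds):
        n = len(cycles)
        if n < 3:
            break
        # average distance of each cycle to all the others
        d = [sum(dist(cycles[i], cycles[j]) for j in range(n) if j != i) / (n - 1)
             for i in range(n)]
        D = sum(d) / n                      # reference value
        kept = [cycles[i] for i in range(n) if d[i] <= D]
        if len(kept) == n:
            break                           # nothing removed, stop early
        cycles = kept
    return cycles
```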
An irregular cycle skipping example is shown in Figure 35. The accepted step-cycles are
represented in green, the irregular step-cycles are represented in red and the start and
end of the walk are represented in black.
Figure 35 – Irregular cycle skipping example.
Average cycle creation
In this project two different ways to compute the average cycle have been studied:
Mean: This computation starts by aligning all the cycles on one of the maxima
(points 2 or 4 of Figure 23). After this, each acceleration value in the average cycle is
the mean of the corresponding values in all the cycles.
DTW: The rationale of this method is the same as for skipping irregular steps. The
distance from each gait cycle to all the others is calculated, and the cycle with the
lowest average distance is taken to represent all the others; it is used as the average cycle.
After calculating the average cycle, this data is stored on a database for future
comparisons (if enrolling) or used to compare with previously stored average cycles (if
authenticating).
In this project both methods will be studied. To create the average cycle through the
mean method, all cycles were aligned on the second maximum point (position 4 in
Figure 23). This method is expected to achieve better results, since it uses all the
gathered data rather than only a small part of it. An example of this computation and
the resulting average cycle are shown in Figure 36 and Figure 37, respectively. In
Figure 36 each of the accepted gait cycles (the green step-cycles from Figure 35) is
represented in a different color. All the cycles are aligned on the second maximum
point, as previously explained, and the average cycle is calculated. The result, shown in
Figure 37, has the expected M shape, which makes this a successful data collection and
processing example.
Figure 36 – Mean average cycle computation example.
Figure 37- Expected M-like shaped average cycle result.
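The mean method can be sketched in Python as follows: cycles of different lengths are aligned on a common anchor point (the second maximum, position 4) and averaged where they overlap. The anchor indices are assumed to come from the step detection stage, and the helper name is illustrative:

```python
def mean_cycle(cycles, anchors):
    """Average a list of cycles (lists of floats) after shifting each
    cycle so that its anchor index coincides with the others'."""
    left = max(anchors)                               # samples before the anchor
    right = max(len(c) - a for c, a in zip(cycles, anchors))
    out = []
    for pos in range(-left, right):
        # collect the value at this offset from every cycle that covers it
        vals = [c[a + pos] for c, a in zip(cycles, anchors)
                if 0 <= a + pos < len(c)]
        if vals:
            out.append(sum(vals) / len(vals))
    return out
```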
5.2.4. Matching Stage
In the matching stage, the distance of the newly created average cycle to the cycles
stored in the database is computed using DTW. If this distance is lower than a
threshold value, t, a match has been found. One advantage of using DTW is that
normalization of the average cycles is avoided, since DTW can compare two signals of
different lengths. As mentioned before, this application targets a smartphone with
limited autonomy and processing power, so minimizing computation is one of the
objectives of this dissertation.
If the acceleration modulus is used to identify a person, only one matching is required;
but if, for example, the three coordinates are used separately, three different average
cycles must be created (one for each coordinate) and three separate matchings
performed. In that case, a match is found only if the distances from all three matchings
are below their respective thresholds, tx, ty and tz.
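The all-components decision rule reduces to a conjunction of per-component threshold tests; a tiny sketch (the keys and threshold values below are illustrative, not the thesis operating point):

```python
def accept(distances, thresholds):
    """Accept only if every component's DTW distance is below its threshold."""
    return all(distances[k] < thresholds[k] for k in thresholds)
```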
Figure 38 illustrates the matching of three signals: in black, the previously enrolled
user; in green, a legitimate authentication attempt; and in red, an impostor's
authentication attempt.
Figure 38 - Matching example, with a legitimate user attempt and an impostor attempt.
5.2.5. Dynamic Time Warping
Dynamic time warping (DTW) is an algorithm for comparing series of values. The
rationale behind DTW is, given two time series, which may vary in length, to stretch or
compress them locally in order to make one resemble the other as much as possible.
The distance between the two time series is computed, after stretching, by summing
the distances of individual aligned elements [46].
The main difference between DTW and simpler distance measurements is that DTW
finds an optimal alignment between the two series, instead of just comparing the
template's sample number n with the input's sample number n. In order to better
understand how DTW works, a simple example follows. DTW performs three different
operations when transforming sequences: substitution, deletion and insertion. For
example, to transform the string “peek” into “speak”, one can see that “s” is inserted,
“p” and “e” are left unchanged, the second “e” is substituted with “a” and finally “k” is
left unchanged (see Table 1). In order to calculate the cost of the total transformation,
a cost function is introduced for each of the three operations:
Insertion: Cins(x) is the cost of inserting x. The cost of inserting a value is
normally fixed, but may also depend on the value that is inserted.
Deletion: Cdel(x) is the cost of deleting x. As with insertion the cost of deleting a
value is normally fixed, but may also depend on the value that is deleted.
Substitution: Csub(x,y) is the cost of substituting x with y. The cost of doing this
change can be as simple as the absolute distance between the values or simply
a fixed value, but it may also be more sophisticated.
Using these cost functions, the total cost of the transformation in this example is Cins(s) + Csub(e,a).
Table 1- A simple example showing a possible transformation from the two sequences. λ indicates an empty symbol.
λ p e e k
↓ ↓ ↓ ↓ ↓
s p e a k
Cins(s) - - Csub(e,a) -
The goal of DTW is to find the minimum transformation cost between the template
and the input. In order to do that, all combination possibilities must be calculated.
Continuing the last example and assuming each operation has a fixed cost of 1, then
the cost matrix is shown in Table 2. The numbers in bold indicate a possible path in
order to achieve the objective. The cost of transforming a sequence of length i into a
sequence of length j is given in the cell (i,j).
Table 2 - Example’s cost matrix. The optimal path is indicated in bold
      λ  s  p  e  a  k
λ     0  1  2  3  4  5
p     1  1  1  2  3  4
e     2  2  2  1  2  3
e     3  3  3  2  2  3
k     4  4  4  3  3  2
In this project, instead of dealing with sequences of characters, one deals with
sequences of numbers, but the concept remains the same.
DTW algorithm used
The algorithm used is known as the classical DTW, given in [50]. Suppose we have two
time series Q and C, of length n and m respectively, where Q = q1, q2, ..., qn and
C = c1, c2, ..., cm. To align the two sequences using DTW, an n-by-m matrix D must be
computed, where element (i,j) of the matrix contains the distance between qi and cj,
D(i,j) = √((qi - cj)²) = |qi - cj|. The cost matrix M is then computed, with the same
dimensions as D, and is filled as follows:

M(0,0) = D(0,0).
The first column is calculated through M(i,0) = D(i,0) + M(i-1,0).
The first row is calculated through M(0,j) = D(0,j) + M(0,j-1).
The rest of the matrix is calculated by M(i,j) = D(i,j) + min( M(i-1,j-1), M(i-1,j), M(i,j-1) ).

A warping path W = w1, w2, ..., wK is a contiguous (in the sense stated below) set of
matrix elements which defines a mapping between Q and C. The warping path is
typically subject to several constraints:

Boundary conditions: w1 = (1,1) and wK = (n,m); simply stated, this requires the
warping path to start and finish in diagonally opposite corner cells of the matrix.
Continuity: Given wk = (a,b), then wk-1 = (a',b') where a - a' ≤ 1 and b - b' ≤ 1. This
restricts the allowable steps in the warping path to adjacent cells (including diagonally
adjacent cells), which means that the indices i and j can only increase by at most 1 at
each step.
Monotonicity: Given wk = (a,b), then wk-1 = (a',b') where a - a' ≥ 0 and b - b' ≥ 0. This
constraint implies that the path will not turn back on itself: both the i and j indices
either stay the same or increase; they never decrease.
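The recurrence above translates directly into code. A Python sketch of the classical DTW cost (the thesis implements it in Java; |qi - cj| is used as the local distance, as above):

```python
def dtw(q, c):
    """Classical DTW: total transformation cost between sequences q and c."""
    n, m = len(q), len(c)
    INF = float("inf")
    # M[i][j] = minimal cost of aligning q[:i+1] with c[:j+1]
    M = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = abs(q[i] - c[j])               # local distance D(i, j)
            if i == 0 and j == 0:
                M[i][j] = d
            elif i == 0:
                M[i][j] = d + M[i][j - 1]      # first row
            elif j == 0:
                M[i][j] = d + M[i - 1][j]      # first column
            else:
                M[i][j] = d + min(M[i - 1][j - 1], M[i - 1][j], M[i][j - 1])
    return M[n - 1][m - 1]
```

Note that two identical signals give a cost of zero, and a repeated sample in one signal can be absorbed at no cost by the diagonal/insertion moves.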
5.2.6. GPS & Email
After the matching stage, the mobile device tries to obtain its GPS coordinates
(longitude and latitude), regardless of the outcome of the previous stage. This
outcome and the coordinates are then sent to a previously defined e-mail address, for
security reasons. If the device is out of satellite coverage, the last known GPS position
is sent.
The idea behind this final stage is to help retrieve the mobile device in case of theft or
loss, or simply to provide information on where the device's owner has been (in case
the device belongs to a child).
6. Experimental Results
In this section, the experimental results are discussed. All results were obtained using
the developed application, and all data was processed on the mobile device. The
system's performance was tested with the commonly used biometric performance
measures.
6.1. Experimental Setup
In order to answer the main question of this thesis, 'Is it possible to recognize people's
gait through a smartphone?', a number of subjects' gaits were analyzed.
The test subjects wore the smartphone attached to the belt, placed on the right side of
the hip. Attaching the device to the belt ensured that the accelerometer did not
change its orientation during the walk, which would compromise the readings. The
test subjects were asked to walk 10 to 14 steps along a straight line, as close as
possible to their usual way of walking (participants were asked to wear regular shoes
with a flat sole). The limit on the number of steps is due to the time the device takes to
process the readings, since the processing time increases exponentially with the
number of steps. Five walking sessions were recorded per individual, three of which
were used for the database; test subjects are often nervous during the first walking
sessions, which can compromise the data. The only constraint on the recording
sessions was that they be carried out on a solid and level surface. Each participant's
name, age and gender were registered.
In total, a database of 20 volunteers was built for this project, all healthy and without
injuries that might affect the gait (such injuries would make recognition easier). The
age of the test subjects ranged from 9 to 60 years. The age and gender distribution is
shown in Figure 39.
Figure 39 - Participant's gender and age distribution
6.2. Performance Evaluation
As mentioned in Section 2.1.1, the performance of a biometric system is evaluated by
its false accept rate (FAR) and false reject rate (FRR) and consequently by the ROC
curve and the EER, which are computed through the false accept and false reject rates.
After a recognition attempt, there are four different outcomes. A genuine user
attempting to be recognized can be accepted or rejected. If the user is accepted, it
generates a correct accept match, if not, it will count as a false reject. The same logic
applies to an imposter attempt. If the imposter is recognized as a legitimate user the
result leads to a false accept, if it is rejected, it will lead to a correct reject. Both the
FRR and FAR are functions of the system's threshold. Since the threshold here is a
distance (transformation cost), increasing it causes the FAR to increase and the FRR to
decrease, and vice versa. For a biometric system, it is not possible to decrease both
FAR and FRR simultaneously by varying the threshold. Since a biometric
system can be operated at different threshold values, the FAR and FRR at different
thresholds can be summarized in the form of Receiver Operating Characteristic (ROC)
curve. The ROC curve can be obtained by plotting the FAR against the Genuine Accept
Rate (GAR) which is calculated through GAR = 1 – FRR. The closer the ROC curve is to
the graphic’s upper left corner, the more accurate the system is. Examining a given
biometric system’s ROC curve is the best way to evaluate its matching performance.
Another important metric to measure the performance of a biometric system is the
Equal Error Rate. As explained in 2.1.1, the EER is the point where the FAR equals the
FRR. This means that if a system has a low EER point, then it has low FAR and FRR
values. The lower the EER value is, the better the system's performance will be.
However, the EER is not necessarily the system's optimal
operating point. There are systems that may require, for example, a FAR close to zero
(high security systems) which will lead to a high FRR. This means that this system will
not be working on the EER point. This dissertation studies the possibility of verifying if
a mobile device’s user is authentic in two different ways: using the acceleration
modulus or using the acceleration components separately.
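These measures can be computed directly from the genuine and impostor comparison distances; a Python sketch (the score lists used in the usage example are synthetic, not the thesis data):

```python
def rates(genuine, impostor, threshold):
    """FAR, FRR and GAR (in %) for a distance threshold: a comparison
    is accepted when its distance is below the threshold."""
    far = 100.0 * sum(d < threshold for d in impostor) / len(impostor)
    frr = 100.0 * sum(d >= threshold for d in genuine) / len(genuine)
    return far, frr, 100.0 - frr

def equal_error_rate(genuine, impostor, thresholds):
    """Approximate EER: evaluated at the threshold where FAR and FRR
    are closest to each other."""
    best = min(thresholds, key=lambda t: abs(rates(genuine, impostor, t)[0]
                                             - rates(genuine, impostor, t)[1]))
    far, frr, _ = rates(genuine, impostor, best)
    return (far + frr) / 2.0
```

Sweeping the threshold over a range of transformation costs and plotting FAR against GAR yields the ROC curve described above.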
6.3. System's Performance
The database contains 20 registered test subjects and each participant generated 3
different gait acquisitions.
In this section, experiments based on the acceleration modulus and on separate
acceleration components will be studied. The threshold in Tables 3-6 is the
transformation cost, which is the outcome of a comparison between two signals
through the DTW technique, as it was discussed in section 5.2.5.
6.3.1. Sample rate
In order to improve the data’s processing speed, several sampling rates were
considered. The component signals were interpolated in order to get 50, 75 and 100
samples per second and the acceleration modulus was calculated. Independent
assessment of the acceleration components did not show good results and was not
considered for the sample rate study. The values of the FAR, FRR and GAR for each of
the 3 sample rates are presented in Table 3. The possible operating points from a
security viewpoint, as explained in section 6.2, are marked in green. The 50 samples
per second rate has no viable operating point.
Table 3 - FAR, FRR and GAR values for the different sample rates. The possible operating points for security purposes are marked in green; the 50 samples per second rate has no viable operating point.
Sample rate:                        50                        75                        100
Threshold (transf. cost)   FAR(%) FRR(%) GAR(%)   FAR(%) FRR(%) GAR(%)   FAR(%) FRR(%) GAR(%)
10 0.00 91.67 8.33 0.00 100.00 0.00 0.00 100.00 0.00
15 1.46 58.33 41.67 0.06 91.67 8.33 0.00 98.33 1.67
20 8.07 25.00 75.00 0.29 63.33 36.67 0.00 85.00 15.00
26 22.34 15.00 85.00 2.46 26.67 73.33 0.29 41.67 58.33
30 32.75 13.33 86.67 7.02 18.33 81.67 1.11 20.00 80.00
31 34.80 13.33 86.67 8.25 15.00 85.00 1.40 16.67 83.33
32 37.49 11.67 88.33 9.94 15.00 85.00 1.81 13.33 86.67
32.8 39.18 11.67 88.33 11.40 13.33 86.67 2.05 6.67 93.33
33 39.71 11.67 88.33 11.70 13.33 86.67 2.05 5.00 95.00
33.2 40.00 11.67 88.33 12.11 13.33 86.67 2.10 5.00 95.00
34 41.17 11.67 88.33 13.92 11.67 88.33 2.51 1.67 98.33
34.2 41.81 10.00 90.00 14.09 11.67 88.33 2.51 1.67 98.33
34.4 42.05 10.00 90.00 14.39 10.00 90.00 2.57 0.00 100.00
35 43.22 8.33 91.67 15.44 8.33 91.67 2.98 0.00 100.00
36 44.74 8.33 91.67 17.37 6.67 93.33 3.97 0.00 100.00
37 46.96 5.00 95.00 20.18 5.00 95.00 5.26 0.00 100.00
38 48.77 3.33 96.67 21.46 5.00 95.00 6.72 0.00 100.00
40 51.23 3.33 96.67 25.15 5.00 95.00 9.41 0.00 100.00
50 65.26 0.00 100.00 40.47 3.33 96.67 24.09 0.00 100.00
100 95.56 0.00 100.00 80.35 0.00 100.00 68.77 0.00 100.00
190 100.00 0.00 100.00 98.48 0.00 100.00 88.83 8.33 91.67
Figure 40 presents the ROC curves for the three sampling rates.
Figure 40 - ROC curves comparison.
6.3.2. Acceleration modulus (mean method)
In this section the signal studied is the acceleration modulus, computed through
equation (4), using the mean method, as explained in section 5.2.3. The FAR, FRR and
GAR results obtained with the developed system are presented in Table 4.
Table 4 - FAR, FRR and GAR for different threshold values, using the average method.
Threshold (transformation cost)   FAR(%)   FRR(%)   GAR(%)
10 0.00 100.00 0.00
15 0.00 98.33 1.67
20 0.00 85.00 15.00
25 0.12 48.33 51.67
30 1.11 20.00 80.00
31 1.40 16.67 83.33
32 1.81 13.33 86.67
32.8 2.05 6.67 93.33
33 2.05 5.00 95.00
33.2 2.10 5.00 95.00
34 2.51 1.67 98.33
34.2 2.51 1.67 98.33
34.4 2.57 0.00 100.00
35 2.98 0.00 100.00
36 3.97 0.00 100.00
37 5.26 0.00 100.00
38 6.72 0.00 100.00
40 9.41 0.00 100.00
50 24.09 0.00 100.00
100 68.77 0.00 100.00
190 94.91 0.00 100.00
The values from Table 4 are represented graphically in Figure 41. The EER of the
developed biometric system is approximately 2.5%.
Figure 41 - FAR and FRR at different operating thresholds.
The obtained Receiver Operation Characteristic (ROC) curve is shown in the Figure 42.
Figure 42 - Obtained ROC curve.
Operating point
The operating point of the developed system was chosen taking into account the
system’s goal. Since the developed application was intended for security purposes, the
priority is to get a low FAR, so that impostor users are unlikely to access the device. With this priority in mind, one must accept that some genuine users will occasionally be rejected. In that case, the genuine user must re-authenticate or use some other verification method, possibly an override password. With a security purpose in mind (as discussed in section 6.2), the threshold used was 33. For this threshold the system's matching performance is FAR = 2.05%, FRR = 5.00% and GAR = 95.00%. This operating point was chosen due to its low FAR and FRR values, compared to the other points. The chosen operating point is marked in green in Table 5. It does not match the EER point, in order to keep the FAR as low as possible.
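The selection rule described above (cap the FAR, then take the best FRR still available) can be sketched as follows; the 2.05% cap is an illustrative parameter, chosen here only so that the sketch reproduces the threshold picked in the text:

```python
def pick_operating_point(points, far_cap):
    """points: (threshold, far, frr) tuples. Among thresholds whose FAR
    does not exceed far_cap, return the one with the lowest FRR
    (ties broken by the lower FAR, then the lower threshold)."""
    candidates = [(frr, far, t) for t, far, frr in points if far <= far_cap]
    if not candidates:
        raise ValueError("no threshold satisfies the FAR cap")
    frr, far, t = min(candidates)
    return t, far, frr

table5 = [(30, 1.11, 20.00), (31, 1.40, 16.67), (32, 1.81, 13.33),
          (32.8, 2.05, 6.67), (33, 2.05, 5.00), (33.2, 2.10, 5.00),
          (34, 2.51, 1.67), (34.2, 2.51, 1.67), (34.4, 2.57, 0.00),
          (35, 2.98, 0.00)]
t, far, frr = pick_operating_point(table5, far_cap=2.05)
# -> threshold 33, FAR = 2.05%, FRR = 5.00%, matching the chosen point
```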
Table 5 - FAR, FRR and GAR values around the chosen operating point.
Threshold (transformation cost)   FAR (%)   FRR (%)   GAR (%)
30 1.11 20.00 80.00
31 1.40 16.67 83.33
32 1.81 13.33 86.67
32.8 2.05 6.67 93.33
33 2.05 5.00 95.00
33.2 2.10 5.00 95.00
34 2.51 1.67 98.33
34.2 2.51 1.67 98.33
34.4 2.57 0.00 100.00
35 2.98 0.00 100.00
6.3.3. Acceleration modulus (DTW method)
Just like in the previous section (6.3.2), the signal studied is the acceleration modulus,
now using the DTW method, which is also explained in section 5.2.3. The FAR, FRR and
GAR results obtained with the developed system are presented in Table 6.
Table 6 - FAR, FRR and GAR values using the DTW method.
Threshold (transformation cost)   FAR (%)   FRR (%)   GAR (%)
10 0.00 100.00 0.00
15 0.00 100.00 0.00
20 0.00 91.67 8.33
25 0.23 60.00 40.00
30 1.17 38.33 61.67
33 2.28 23.33 76.67
37 4.50 13.33 86.67
37.4 4.91 10.00 90.00
37.8 5.38 8.33 91.67
38.2 5.79 6.67 93.33
38.8 6.61 6.67 93.33
39 6.78 3.33 96.67
43 11.64 1.67 98.33
52 26.37 0.00 100.00
100 75.15 0.00 100.00
190 96.67 0.00 100.00
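For reference, the transformation cost in these tables is the classic dynamic-programming DTW distance between two sequences. A minimal textbook sketch (not the application's implementation, which also handles cycle extraction):

```python
def dtw_cost(a, b):
    """Classic Dynamic Time Warping: minimum cumulative |a[i] - b[j]|
    over all monotone alignments of the two sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(a[i - 1] - b[j - 1])
            cost[i][j] = step + min(cost[i - 1][j],      # insertion
                                    cost[i][j - 1],      # deletion
                                    cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical signals align at zero cost; since DTW compares sequences of
# different lengths, a time-stretched copy also aligns cheaply.
assert dtw_cost([1, 2, 3], [1, 2, 3]) == 0.0
```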
The values from Table 6 are represented graphically in Figure 43. The EER of the
developed biometric system is approximately 5%.
Figure 43 - FAR and FRR at different operating thresholds.
The obtained ROC curve is shown in Figure 44.
Figure 44 - Obtained ROC curve.
Operating point
As discussed in sections 6.2 and 6.3.2, the developed application was intended for
security purposes, which means that the priority is to get a low FAR. With this purpose
in mind, the selected operating point was 37.4. For this threshold the system's matching performance is FAR = 4.91%, FRR = 10.00% and GAR = 90.00%. The chosen operating point is marked in green in Table 7. It does not match the EER point, in order to keep the FAR as low as possible.
Table 7 - FAR, FRR and GAR values around the chosen operating point.
Threshold (transformation cost)   FAR (%)   FRR (%)   GAR (%)
37 4.50 13.33 86.67
37.4 4.91 10.00 90.00
37.8 5.38 8.33 91.67
38.2 5.79 6.67 93.33
38.8 6.61 6.67 93.33
39 6.78 3.33 96.67
Although this operating point is acceptable, the DTW method achieved worse results than the average cycle method (FAR = 2.05%, FRR = 5.00% and GAR = 95.00%). This was expected, since more accurate results can be obtained when all the captured data is considered, rather than a single representative cycle.
The independent acceleration components are therefore studied using the mean method, since it achieved better results.
6.3.4. Independent acceleration components
In this section the components ax, ay and az are assessed separately. The FAR, FRR and GAR results obtained with the developed system are presented in Table 8.
Table 8 - FAR, FRR and GAR for different threshold values for the three coordinates.
Coordinate:                       ax                        ay                        az
Threshold (transformation cost)   FAR(%) FRR(%) GAR(%)    FAR(%) FRR(%) GAR(%)    FAR(%) FRR(%) GAR(%)
10 0.00 100.00 0.00 0.00 100.00 0.00 0.00 100.00 0.00
15 0.00 95.00 5.00 0.00 95.00 5.00 0.00 98.33 1.67
20 0.00 91.67 8.33 0.12 83.33 16.67 0.00 86.67 13.33
25 0.00 65.00 35.00 0.70 70.00 30.00 0.12 80.00 20.00
30 0.23 38.33 61.67 2.11 65.00 35.00 0.41 70.00 30.00
31 0.47 33.33 66.67 2.46 58.33 41.67 0.53 70.00 30.00
32 0.70 30.00 70.00 2.87 58.33 41.67 0.76 61.67 38.33
32.8 0.82 28.33 71.67 3.04 56.67 43.33 0.94 58.33 41.67
33 0.88 26.67 73.33 3.16 56.67 43.33 0.99 58.33 41.67
33.2 0.94 26.67 73.33 3.16 56.67 43.33 0.99 58.33 41.67
34 1.11 26.67 73.33 3.45 56.67 43.33 0.99 55.00 45.00
34.2 1.17 26.67 73.33 3.51 56.67 43.33 1.05 55.00 45.00
34.4 1.17 26.67 73.33 3.63 56.67 43.33 1.11 55.00 45.00
35 1.40 25.00 75.00 4.09 56.67 43.33 1.17 55.00 45.00
36 1.75 23.33 76.67 4.39 53.33 46.67 1.23 55.00 45.00
37 2.28 20.00 80.00 5.15 53.33 46.67 1.35 55.00 45.00
38 2.87 18.33 81.67 5.79 51.67 48.33 1.64 51.67 48.33
40 4.56 15.00 85.00 7.08 50.00 50.00 2.22 51.67 48.33
50 14.33 10.00 90.00 13.92 45.00 55.00 5.67 41.67 58.33
100 49.88 6.67 93.33 48.89 23.33 76.67 39.01 25.00 75.00
190 73.33 3.33 96.67 75.85 16.67 83.33 88.83 8.33 91.67
The values from Table 8 are shown below in Figure 45, Figure 46 and Figure 47. The
EER of the developed biometric system is near 10%, 30% and 26% for the components
ax, ay and az respectively.
Figure 45 - FAR and FRR at different operating thresholds for component ax.
Figure 46 - FAR and FRR at different operating thresholds for component ay.
Figure 47 - FAR and FRR at different operating thresholds for component az.
The obtained Receiver Operating Characteristic (ROC) curves are shown in Figure 48.
Figure 48 - ROC curves of the different acceleration components.
Operating point
Considering the developed project’s goal, and examining the values from Table 8 and
the three ROC curves from Figure 48, one can conclude that the independent
assessment of the acceleration components is not a good option. There is no threshold
providing a good outcome; that is, no threshold achieves a low FAR together with an acceptable FRR. For this reason, no operating point was chosen for this method.
6.3.5. Maximum sample rate in different devices
In this subsection other smartphones will be tested in order to study how the
accelerometer’s sample rate varies with the device’s processing power. As previously
stated, the smartphone used on this thesis was a Samsung Galaxy Ace Plus (S7500)
with 1 GHz Cortex-A5 CPU and 512 MB of RAM. This device will be compared with the
following smartphones:
HTC WildFire – CPU 528 MHz ARMv6 and 512 MB of RAM.
Sapo A5 – CPU 600 MHz and 512 MB of RAM.
Samsung Galaxy Ace (S5830) – CPU 800 MHz ARM11 and 278 MB of RAM.
Samsung Galaxy Ace 2 (I8160) – CPU Dual Core 800 MHz and 768 MB of RAM.
Two signals were captured on each walk, one with the S7500 smartphone and one
with one of the other smartphones. Both devices were held on the same side of the
hip, overlapped, so the same signal was captured. The captured signals are shown
below in Figure 49, Figure 50, Figure 51 and Figure 52.
Figure 49 - S7500 versus HTC WildFire.
Figure 50 - S7500 versus Sapo A5.
Figure 51 - S7500 versus S5830.
Figure 52 - S7500 versus I8160.
This study confirms that the accelerometer's maximum sample rate, and consequently the success rate of the developed application, depends on the processing power of the device. Older and weaker devices cannot operate at the selected sample rate (100 Hz), since their maximum sample rate is around 50 Hz, which results in a drop in the system's accuracy.
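Whether a given device actually sustains the requested rate can be checked from the intervals between consecutive sensor-event timestamps (Android reports these in nanoseconds). The helper below is an illustrative sketch, not code from the developed application:

```python
def estimated_rate_hz(timestamps_ns):
    """Estimate the achieved sampling rate from consecutive sensor-event
    timestamps (in nanoseconds), using the median interval so that a few
    delayed events do not skew the result."""
    deltas = sorted(t1 - t0 for t0, t1 in zip(timestamps_ns, timestamps_ns[1:]))
    median_ns = deltas[len(deltas) // 2]
    return 1e9 / median_ns

# Events spaced 10 ms apart correspond to the requested 100 Hz,
# while 20 ms spacing means the device only manages about 50 Hz.
rate = estimated_rate_hz([i * 10_000_000 for i in range(100)])
```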
6.3.6. Discussion
This subsection discusses the results of the tests performed with the proposed
biometric recognition system.
Table 3 shows the transformation costs obtained through Dynamic Time Warping, and the three ROC curves presented in Figure 40 show how precision differs between the sample rates. To reduce the data processing time (the time elapsed from the moment the user finishes walking until the device accepts or rejects the user), lower sampling rates were tested. Although the processing is much faster, the loss in precision is not acceptable. One can therefore conclude that the best option is to use a sampling rate of 100 samples per second (100 Hz).
Table 9 shows the results of both the Mean and DTW methods. As expected, the Mean method achieved better results, since it uses all the data gathered by the sensor instead of a single gait cycle to represent all the others. After this study, the Mean method was selected for use in this thesis.
Table 9 - Mean and DTW methods comparison.
Method FAR (%) FRR (%) GAR (%) EER (%)
Mean 2.05 5.00 95.00 2.5
DTW 4.91 10.00 90.00 5
Table 10 shows the transformation costs obtained through Dynamic Time Warping for three different walks of the same user. This user was chosen because the differences between the two approaches are clearly visible.
Table 10 - Transformation costs (DTW) between three walks (Walk1, Walk2, Walk3) of one user from the database, for the acceleration modulus (AM) and for each component. Costs below the chosen operating point (transformation cost = 33) are marked in dark red.
Coord.   AM                        ax                        ay                          az
User     Walk1  Walk2  Walk3      Walk1  Walk2  Walk3      Walk1   Walk2   Walk3       Walk1   Walk2   Walk3
Walk1    -      22.99  23.51      -      26.12  24.14      -       262.35  267.95      -       174.91  170.61
Walk2    22.99  -      27.15      26.12  -      26.19      262.35  -       19.52       174.91  -       27.83
Walk3    23.51  27.15  -          24.14  26.19  -          267.95  19.52   -           170.61  27.83   -
Looking at Table 10, one realizes that although the results for the acceleration modulus are below the chosen operating point (transformation cost = 33), this does not imply that the comparison results for the components ax, ay and az are as well. As already discussed in section 5.2.1, when calculating the acceleration modulus the component ax is the most relevant, because the components ay and az have very low values compared to ax. These low values do not cause problems when calculating the acceleration modulus, but when verifying users through the three components separately they can create an issue, since any irregularity in the walk and/or the limited precision of the smartphone's accelerometer can make the signals very different. This is the main reason for the very high transformation cost values in Table 10.
As discussed previously, processing the signals separately is not a good option. On the other hand, when combined into the acceleration modulus through equation (4), the results are much more appealing, as can be seen from the ROC curve shown in Figure 53 and from the EER calculations in Figure 54.
Figure 53 – ROC curve comparison between the acceleration modulus and each of the three components.
Figure 54 - FRR (%) against FAR (%) to obtain EER.
Although the acceleration modulus achieves better results, these are still not good enough for a reliable security system. The operating point shown in Table 5 has FAR = 2.05%, FRR = 5.00% and an EER of approximately 2.5%. Statistically, this means that for every 10 000 impostor verification attempts, 205 impostor users will be accepted as genuine users, and for the same 10 000 genuine
verification attempts, 500 genuine users will be rejected, having to make a second
recognition attempt.
Although the results are not ideal, they show that it is possible to identify users by their gait information captured with a smartphone. The FAR percentage is still too high from a security standpoint (as mentioned in section 6.2), but the developed system can be used within a multimodal biometric system, or as a complement to the usual PIN or password. Mäntyjärvi et al. [39] tested their gait recognition biometric system with a database of 36 test subjects, who wore the accelerometer (an ADXL 202JQ) on a belt. The Equal Error Rate (EER) values for the correlation, frequency domain, histogram and high order moments approaches were, respectively, 7%, 10%, 19% and 18%. No FAR or FRR results were reported in that paper.
Rong et al. [41] used the average cycle length and Dynamic Time Warping (DTW) techniques and tested their system with a database of 35 participants. The accelerometer used was a triaxial MMA7260, also attached to the belt. They achieved an EER = 6.7%, FAR = 2.5% and FRR = 11.1%.
Gafurov et al. [45] placed the accelerometer (an ADXL 202) on the ankle and tested their biometric system with a database of 21 participants. Using the histogram technique, an EER of 5% was achieved. No FAR or FRR values were reported in that paper.
Nickel et al. [19] obtained an EER = 20%, FAR = 10.29% and FMR = 10.42%. These results were obtained by placing the accelerometer (that of a G1 mobile phone) on the belt and by using the Hidden Markov Models technique. The system was tested with a database of 48 subjects.
Holien [33] obtained an EER = 1.60% by placing the accelerometer on the belt and by using the same techniques as Rong et al. [41]. The system was tested with a database of 60 participants. The sensor used was an accelerometer called Motion Recording 100, developed at Gjøvik University College.
Comparing with the results reported in those papers, the results achieved by the developed system are very promising. However, they can still be improved. One way of improving them is to ask the test subjects to take a longer walk, which implies more steps; since a lower sampling rate is not a good option, this is only viable with a more powerful device. Another way is to use a more sensitive accelerometer, which would capture a more precise signal. Data precision is one of the most important goals in biometric recognition.
6.3.7. Potential Applications
Mobile phones are no longer devices used solely for radio communications. They have evolved into small computers capable of browsing the internet, carrying out monetary transactions (online banking), performing remote work and storing users' sensitive data and information. With this increase in functionality, the need for better security is undeniable.
A biometric verification system can be applied to smartphone mobile devices in order
to increase their security, decreasing the risk of information and identity theft.
This section gives some examples of potential applications for the developed
application and for the used technology.
Login system - The developed application can be used as a login system, working as a background application. This login system would authenticate the user as he walks, and this authentication could be valid for two hours (for example) or until the application failed to identify the user while walking. Such a system can be used in a wide range of operations in a mobile scenario:
Internet security – Nowadays, it is common to use the internet for operations involving sensitive data, such as online banking or e-commerce payments. These operations could easily require the user to be logged in, proving that he is the legitimate user. The developed application would be an additional layer of security on top of the traditional methods already used (PINs or codes received via SMS).
Application and file access – In the same way that the device can require the user to be logged in to complete online banking tasks, it can reduce the risk of information theft by locking specific folders or applications.
Opening doors – Following the same rationale, this application can also be used to open doors. For example, the device comes within range of a computer, equipped with Bluetooth (for example), which controls the door's lock. By sending a Bluetooth message to the computer stating that the user is logged in and has permission to enter, the door would unlock automatically. This could prevent the issue of lost or stolen keys and access cards. To improve the security of this method, it can be used in a multimodal biometric system, working together with a fingerprint reader, a palm print verification system or even another gait analysis system (machine vision or floor sensors).
Forensic scenario – By using the email and GPS modules of this application, a complete record of the device's whereabouts is kept online. In the case of a stolen device or even a missing person, this application's output could serve as evidence.
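The login-system behaviour sketched above (authentication stays valid for a fixed window, or until a verification fails) could be structured roughly as follows; the two-hour window and all names are illustrative, not taken from the developed application:

```python
import time

class GaitSession:
    """Tracks whether a background gait verification is still valid."""

    def __init__(self, validity_seconds=2 * 60 * 60):  # e.g. two hours
        self.validity = validity_seconds
        self.verified_at = None

    def on_verification(self, accepted, now=None):
        """Called after each gait comparison against the stored template."""
        now = time.time() if now is None else now
        # A successful match (re)starts the window; a failure ends the session.
        self.verified_at = now if accepted else None

    def is_logged_in(self, now=None):
        now = time.time() if now is None else now
        return (self.verified_at is not None
                and now - self.verified_at < self.validity)
```

A sensitive operation (an online payment, unlocking a folder, opening a door) would then simply check `is_logged_in()` before proceeding.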
7. Conclusions
This dissertation presents an online, unimodal biometric recognition system based on gait, implemented on the Android operating system. Since the application's goal is to improve the security of the device, the chosen operating point achieved FAR and FRR of, respectively, 2.05% and 5.00%, with an EER of roughly 2.5%. The accelerometer's chosen sample rate was 100 samples per second (100 Hz). The obtained results show that gait can be used to identify individuals, and consequently can be used as a biometric trait.
Using the DTW technique to compare signals has proven very effective at providing good biometric recognition results for the tested database. Since DTW can compare series of values of different lengths, it allows some steps (gait cycle and average cycle normalization) to be skipped. Because this application is intended for a smartphone, where processing power and autonomy are limited, reducing the number of computations is crucial. This makes DTW very suitable for comparing data series in a mobile device environment.
In order to improve the proposed biometric system, some future work ideas can be
taken into account, specifically:
Use a superior smartphone (with more processing power), so the device can process more data faster, making it viable to take longer walks.
Design the system to be used as a background application, where the device constantly verifies the user. For this, the battery usage of the application must be minimal, which can be a hard challenge to overcome.
8. References
[1] F. Breitinger & C. Nickel, 2010, “User Survey on Phone Security and Usage” in
BIOSIG 2010 - Proceedings of the Special Interest Group on Biometrics and
Electronic Signatures, vol. 164 of LNI, pp. 139-144, Darmstadt, Germany.
[2] Nielsenwire, “Smartphones Account for Half of all Mobile Phones, Dominate New
Phone Purchases in the US” – available at
http://blog.nielsen.com/nielsenwire/online_mobile/smartphones-account-for-
half-of-all-mobile-phones-dominate-new-phone-purchases-in-the-us/, accessed on
2012/11/05.
[3] Market Research “Global Biometric Forecast to 2014” – available at
http://www.marketresearch.com/RNCOS-v3175/Biometric-Forecast-6845398/,
accessed on 2012/11/05.
[4] Mary Meeker & Liang Wu, 2013, “Internet Trends”, D11 Conference, 2013/5/29, California.
[5] Google Android, “Android Operating System” – available at
http://www.google.com/mobile/android/, accessed on 2012/11/05.
[6] Biometric research group “An Overview of Biometric Recognition” available at
http://biometrics.cse.msu.edu/info/, accessed on 2012/11/05.
[7] A. K. Jain, A. Ross & S. Prabhakar, 2004, "An Introduction to Biometric
Recognition", IEEE Transactions on Circuits and Systems for Video Technology, vol.
14, pp. 4-20.
[8] C. Rathgeb & A. Uhl, 2010,“Iris-Biometric Hash Generation for Biometric Database
Indexing”, Proc. of the 20th International Conference on Pattern Recognition
(ICPR), Aug. 23-25, 2010, Istanbul, Turkey, pp. 2848-2851.
[9] Jean-Francois Mainguet, available at http://fingerchip.pagesperso-
orange.fr/biometrics/accuracy.htm, accessed on 2012/11/19.
[10] Derawi Biometrics, available at http://biometrics.derawi.com/?page_id=51,
accessed on 2012/11/19.
[11] Geeksugar, available at http://www.geeksugar.com/Ice-Cream-Sandwich-
Features-20073482, accessed on 2013/01/20.
[12] InformationWeek, available at http://www.informationweek.com/byte/ice-cream-
sandwichs-facial-unlock-securi/232300637, accessed on 2013/01/20.
[13] Bromba Biometrics, available at http://www.bromba.com/bromba1998e.htm,
accessed on 2013/01/21.
[14] Mail Online, available at http://www.dailymail.co.uk/sciencetech/article-
1383133/James-Bond-Motorola-ATRIX-smartphone-operate-using-fingerprint-
recognition.html, accessed on 2013/01/21.
[15] N. Brown, “Mobile Verification by Palmprint Biometric” available at
http://www.stanford.edu/class/ee368/Project_11/Reports/Brown_Mobile_Identif
ication_by_Palmprint_Biometrics.pdf accessed on 2013/01/20.
[16] X. Wu, D. Zhang & K. Wang, 2006, “Palm line extraction and matching for personal
authentication”, IEEE Transactions on Systems, Man, and Cybernetics, vol. 36, pp.
978–987.
[17] N. Moço, 2012, “Biometric Recognition Based on a Smartphone”, Master’s Thesis,
Instituto Superior Técnico, UTL, Lisboa.
[18] Z. Sun, T. Tan, Y. Wang & S.Z. Li, 2005, “Ordinal Palmprint Representation for
Personal Identification”, Proc. of the IEEE Computer Society Conference on
Computer Vision and Pattern Recognition (CVPR), pp. 279-284, San Diego, CA,
USA.
[19] C. Nickel, S. Rangarajan & M. Möbius, 2007, “Using Hidden Markov Models for Accelerometer-Based Biometric Gait Recognition”, Proc. of the IEEE 7th International Colloquium on Signal Processing and its Applications, pp. 58-63, Penang, Malaysia.
[20] L. E. Baum & T. Petrie, 1966, “Statistical Inference for Probabilistic Functions of
Finite State Markov Chains”, Annals of Mathematical Statistics, 37(6), pp. 1554–
1563.
[21] American Academy of Orthotists & Prosthetists – available at http://www.oandp.org/jpo/library/1997_01_010.asp, accessed on 2013/03/02.
[22] C. W. Chan & A. Rudins, 1994, “Foot biomechanics during walking and running”, in
Mayo Clinic Proceedings, 69, pp. 448-461.
[23] Journal of the American Academy of Orthopaedic Surgeons – available on
http://www.jaaos.org/content/15/2/107/F1.expansion, accessed on 2013/03/02.
[24] M. Murray, 1967, “Gait as a total pattern of movement”, American Journal of
Physical Medicine, 46 (1), pp. 290-332.
[25] G. Johansson, 1973, “Visual perception of biological motion and a model for its analysis”, Perception and Psychophysics, 14, pp. 201-211.
[26] S. Niyogi & E. Adelson, 1994, “Analyzing and recognizing walking figures in XYT”, in Computer Vision and Pattern Recognition, 1994 IEEE Computer Society Conference, pp. 469-474, Seattle, WA.
[27] L. Lee, 2003, “Gait analysis for classification”, AI Technical Report, 014, pp. 112.
Massachusetts Institute of Technology-artificial Intelligence Laboratory.
[28] J. Yoo, M. Nixon & C. Harris, 2002, “Extracting gait signatures based on anatomical
knowledge”, Proceedings of BMVA Symposium on Advanced Biometric
Technologies, 2.
[29] D. Cunado, M. S. Nixon & J. N. Carter, 1997, “Using gait as a biometric, via phase-
weighted magnitude spectra”, Proceedings of the First International Conference
on Audio and Video-Based Biometric Person Authentication, pp. 95-102, London,
UK. Springer-Verlag.
[30] J. Little & J. Boyd, 1998, “Recognizing people by their gait: The shape of motion”,
Journal of Computer Vision Research, the MIT Press, 1, pp. 2-32.
[31] A. Kale, A. Sundaresan, A. Rajagopalan, N. Cuntoor, A. Roy-Chowdhury, V. Kruger
& R. Chellapa, 2004, “Identification of humans using gait”, Image Processing, IEEE
Transactions, 13(9), pp. 1163-1173.
[32] L. Wang, T. Tan, W. Hu & H. Ning, 2003, “Automatic gait recognition based on
statistical shape analysis”, Image Processing, IEEE Transaction, 12(9), pp. 1120-
1131.
[33] K. Holien, 2008, “Gait recognition under non-standard circumstances”, Master’s
Thesis, Gjøvik University College.
[34] M. S. Nixon, J. N. Carter, M. G. Grant, L. G. Gordon & J. B. Hayfron-Acquah, 2003,
“Automatic recognition by gait: progress and prospects”, Emerald, 23(4), pp. 323-
331.
[35] S. Sarkar, P. Phillips, Z. Liu, I. Vega, P. Grother & K. Bowyer, 2005, “The human ID
gait challenge problem: Data sets, performance, and analysis”, IEEE Trans. Pattern
Anal. Mach. Intell., 27(2), pp. 162-177.
[36] J. Perry, 1992, “Gait Analysis. Normal and Pathological Function”, SLACK
Incorporated, 576 pp.
[37] L. Middleton, A. A. Buss, A. Bazin & M. S. Nixon, 2005, “A floor sensor system for
gait recognition”, Automatic Identification Advanced Technologies, fourth IEEE
Workshop, pp. 171-176, Los Alamitos, CA, USA.
[38] R. J. Orr & G. D. Abowd, 2000, “The smart floor: A mechanism for natural user
identification and tracking”, in CHI 2000: Extended abstracts on human factors in
computing systems, pp. 275-276, New York, NY, USA. ACM Press.
[39] J. Mäntyjärvi, M. Lindholm, E. Vildjiounaite, S.-M. Mäkelä & H. Ailisto, 2005,
“Identifying users of portable devices from gait pattern with accelerometers”,
Acoustics, speech and Signal Processing. Proceedings. (ICASSP ’05). IEEE
International Conference, 2, pp. 973-976.
[40] E. Vildjiounaite, S.-M. Mäkelä, M. Lindholm, R. Riihimäki, V. Kyllönen, J. Mäntyjärvi
& H. Ailisto, 2006, “Unobtrusive multimodal biometrics for ensuring privacy and
information security with personal devices”, in Pervasive, K. P. Fishkin, B. Schiele,
P. Nixon & A. J. Quigley (eds), Lecture Notes in Computer Science, 3968, pp. 187-
201.
[41] L. Rong, D. Zhiguo, Z. Jianzhong & L. Ming, 2007, “Identification of individual
walking patterns using gait acceleration”, The 1st International Conference on
Bioinformatics and Biomedical Engineering, ICBBE 2007, pp. 543-546.
[42] D. Gafurov, E. Snekkenes & P. Bours, 2007, “Spoof attacks on gait authentication
system”, Information Forensics and Security, IEEE Transactions, 2(3), pp. 491-502.
[43] M. O. Derawi, C. Nickel, P. Bours & C. Busch, 2010, “Unobtrusive User-
Authentication on Mobile Phones using Biometric Gait Recognition”, Sixth
international conference on Intelligent Information Hiding and Multimedia Signal
Processing (IIH-MSP), pp. 306-311, Darmstadt, Germany.
[44] E. Vildjiounaite, S.-M. Mäkelä, M. Lindholm, J. Mäntyjärvi & H. Ailisto, 2005,
“Identifying people from gait pattern with accelerometers“, in Anil K. Jain; Nalini
K. Ratha (eds), Biometric Technology for Human Identification II, 5779, pp. 7-14.
[45] D. Gafurov, K. Helkala & T. Søndrol, “Gait recognition using acceleration from
MEMS”, April 2006, in Availability, Reliability and Security. ARES 2006. The first
International Conference, pp. 432-439, Orlando, Florida.
[46] M. O. Derawi, C. Nickel, P. Bours & C. Busch, 2010, “Unobtrusive User-
Authentication on Mobile Phones using Biometric Gait Recognition”, Sixth
international conference on Intelligent Information Hiding and Multimedia Signal
Processing (IIH-MSP), pp. 306-311, Darmstadt, Germany.
[47] Tecnoblog, “Samsung Galaxy Ace Plus”, available at
http://tecnoblog.net/smartphone/samsung-galaxy-ace-plus/, accessed on
2013/02/19.
[48] Stackoverflow, available at http://stackoverflow.com/questions/11175599/how-
to-measure-the-tilt-of-the-phone-in-xy-plane-using-accelerometer-in-android,
accessed on 2013/02/19.
[49] Akiti, available at http://www.akiti.ca/RotateTrans.html, accessed on 2013/05/23.
[50] T. Giorgino, 2009, “Computing and visualizing Dynamic Time Warping Alignments
in R: the dtw package”, 31 (7), pp. 1-24, University of Pavia, Italy.