H. Badioze Zaman et al. (Eds.): IVIC 2009, LNCS 5857, pp. 596–606, 2009.
© Springer-Verlag Berlin Heidelberg 2009
Extraction and Classification of Human Gait Features
Hu Ng1, Wooi-Haw Tan2, Hau-Lee Tong1, Junaidi Abdullah1, and Ryoichi Komiya3
1 Faculty of Information Technology, Multimedia University, Persiaran Multimedia, 63100 Cyberjaya, Selangor, Malaysia
2 Faculty of Engineering, Multimedia University, Persiaran Multimedia, 63100 Cyberjaya, Selangor, Malaysia
3 Department of Mechatronic and BioMedical Engineering, Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Jalan Genting Kelang, Setapak, 53300 Kuala Lumpur
{nghu,twhaw,hltong,junaidi}@mmu.edu.my, [email protected]
Abstract. In this paper, a new approach is proposed for extracting human gait
features from a walking human based on the silhouette images. The approach
consists of six stages: clearing the background noise of image by morphological
opening; measuring of the width and height of the human silhouette; dividing
the enhanced human silhouette into six body segments based on anatomical
knowledge; applying a morphological skeleton operation to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottoms of the right and left legs from the body segment skeletons. The joint angles and step-size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results demonstrate that the proposed system is feasible and achieves satisfactory results.
Keywords: Human identification, Gait analysis, Fuzzy k-nearest neighbour.
1 Introduction
Personal identification or verification schemes are widely used in systems that must determine the identity of an individual before granting permission to access or use their services. Human identification based on biometrics refers to the automatic recognition of individuals from their physical and/or behavioural characteristics, such as the face, fingerprint, gait and voice. Biometrics are becoming important and widely accepted because they are inherently personal and unique, and cannot be lost or forgotten over time.
Gait is unique, as every individual has his or her own walking pattern. Human walking is a complex locomotive action that involves synchronized motions of body parts and joints, and the interaction among them [1]. Gait is a new motion-based biometric technology, which offers the ability to identify people at a distance when other biometrics are obscured. Furthermore, there is no point of contact with any feature capturing device, and gait capture is hence unobtrusive.
Basically, gait analysis can be divided into two major categories, namely model-
based method and model-free method. Model-based method generally models the
human body structure or motion and extracts features to match them to the model
components. The extraction process involves a combination of information on the
human shape and dynamics of human gait. This implies that the gait dynamics are extracted directly by determining joint positions from model components, rather than
inferring dynamics from other measures, thus, reducing the effect of background
noise (such as movement of other objects). For instance, Johnson used activity-
specific static body parameters for gait recognition without directly analyzing gait
dynamics [2]. Cunado used thigh joint trajectories as the gait features [3]. The advantages of this method are the ability to derive gait signatures directly from model parameters and its robustness to different clothing and viewpoints. However, it is time consuming and computationally expensive due to the complex matching and searching process.
On the other hand, the model-free method normally distinguishes the entire human
body motion using a concise representation without considering the underlying struc-
ture. The advantages of this method are low computational cost and less time con-
suming. For instance, BenAbdelkader proposed an eigengait method using image self-
similarity plots [4]. Collins established a method based on template matching of body
silhouettes in key frames during a human’s walking cycle [5]. Philips characterized
the spatial-temporal distribution generated by gait motion in its continuum [6].
This paper presents a unique concept for extracting the gait features of a walking human from consecutive silhouette images. First, the height and width of the human subject are determined. Next, each human silhouette image is enhanced and
divided into six body segments to construct the two-dimension (2D) skeleton of the
body model. Then, Hough transform technique is applied to obtain the joint angle for
each body segment. The distance between the bottoms of both lower legs can also be
obtained from the body segment skeletons. This approach to joint angle calculation is faster and less complicated than model-based methods such as the linear regression approach by Yoo [7] and the temporal accumulation approach by Wagg [8].
2 Overview of the System
First, morphological opening is applied to reduce background noise on the raw human
silhouette images. The width and height of each human silhouette are then measured.
Next, each of the enhanced human silhouettes is divided into six body segments based
on the anatomical knowledge [10]. Morphological skeleton is later applied to obtain
the skeleton of each body segment. The joint angles are obtained after applying
Hough transform on the skeletons. The step-size, which is the distance between the bottoms of both legs, is also measured from the skeletons of the lower legs. The dimensions of the human silhouette, the step-size and the six joint angles from the body segments – head and neck, torso, right hip and thigh, right lower leg, left hip and thigh, and left lower leg – are then used as the gait features for classification. Fig. 1 summarizes the
process flow of the proposed system.
Fig. 1. Flow chart of the proposed system
2.1 Original Image Enhancement
The raw human silhouette images were obtained from the small subject gait database of the University of Southampton [9]. Static cameras were used to capture eleven subjects walking along an indoor track at four different angles. The video data was first preprocessed with a Gaussian averaging filter for noise suppression, followed by Sobel edge detection and a background subtraction technique to create the human silhouette images.
Due to the poor lighting conditions during the video shooting, shadows were found, especially near the feet. They appeared as part of the subject's body in the binary human silhouette image, as shown in Fig. 2. The presence of this artefact affects the gait feature extraction and the measurement of joint angles. This problem can be reduced by applying
morphological opening with a 7×7 diamond shape structuring element, as denoted by
A ∘ B = (A ⊖ B) ⊕ B . (1)
where A is the image and B is the structuring element. The opening first performs an erosion operation followed by a dilation operation. Fig. 2 shows the result of applying morphological opening on a human silhouette image.
(a) Original image (b) Enhanced image
Fig. 2. Original and enhanced image after morphological opening
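As an illustrative sketch (not the authors' code), the opening of Eq. (1) can be implemented directly on a binary silhouette represented as a set of (row, col) foreground pixels; the 7×7 diamond structuring element mirrors the one described above:

```python
def diamond(radius):
    """Diamond-shaped structuring element as offsets from the origin."""
    return {(dr, dc)
            for dr in range(-radius, radius + 1)
            for dc in range(-radius, radius + 1)
            if abs(dr) + abs(dc) <= radius}

def erode(pixels, se):
    """Keep a pixel only if every SE-translated position is also foreground."""
    return {p for p in pixels
            if all((p[0] + dr, p[1] + dc) in pixels for dr, dc in se)}

def dilate(pixels, se):
    """Union of the foreground translated by every SE offset."""
    return {(r + dr, c + dc) for r, c in pixels for dr, dc in se}

def opening(pixels, se):
    """Morphological opening: erosion followed by dilation (Eq. 1)."""
    return dilate(erode(pixels, se), se)

# A 12x12 solid block (the "body") plus one isolated pixel (shadow-like noise):
body = {(r, c) for r in range(12) for c in range(12)}
noise = {(20, 20)}
cleaned = opening(body | noise, diamond(3))  # radius 3 -> 7x7 diamond

print(noise & cleaned)    # → set()  (the isolated pixel is removed)
print((5, 5) in cleaned)  # → True   (the interior of the block survives)
```

Small isolated blobs disappear because the erosion step removes any pixel whose 7×7 diamond neighbourhood is not entirely foreground.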
[Fig. 1 block labels: original image enhancement; measurement of width and height; human silhouette segmentation; skeletonization of body segments; joint angles extraction; measurement of step-size; computation of the similarities; determination of the k-nearest neighbour; classification of the unlabelled subjects]
2.2 Measurement of Width and Height
The width and height of the subject during the walking sequences are measured from
the bounding box of the enhanced human silhouette, as shown in Fig. 3. These two
features will be used for gait analysis in the later stage.
Height
Width
Fig. 3. The width and height of a human silhouette
2.3 Dividing Human Silhouette
At this stage, the enhanced human silhouette is divided into six body segments based
on the anatomical knowledge [10]. First, the centroid of the subject is determined by
calculating the centre of mass of the human silhouette. The area above the centroid is
considered as the upper body – head, neck and torso. The area below the centroid is
considered as the lower body – hips, legs and feet.
Next, the top one third of the upper body is taken as the head and neck. The remaining two thirds of the upper body are classified as the torso. The lower body is divided into two portions – (i) hips and thighs and (ii) lower legs and feet – with a ratio of one to two.
Again, the centroid coordinate is used to divide the two portions into the final four
segments – (i) right hip and thigh (ii) lower right leg and foot (iii) left hip and thigh
and (iv) lower left leg and foot.
Fig. 4. Six body segments
Fig. 4 shows the six segments of the body, where “a” represents the head and neck, “b” represents the torso, “c” represents the right hip and thigh, “d” represents the lower right leg and foot, “e” represents the left hip and thigh and “f” represents the lower left leg and foot.
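The anatomical split above can be sketched as follows. The helper and its names are ours, not from the paper; only the proportions (one third for head and neck, a one-to-two thigh/lower-leg ratio, and the left/right split at the centroid column) come from the text:

```python
def segment_rows(top, bottom, centroid_row):
    """Row boundaries of the four horizontal bands of the silhouette."""
    upper = centroid_row - top               # height of the upper body
    lower = bottom - centroid_row            # height of the lower body
    head_cut = top + upper / 3.0             # head+neck : torso = 1 : 2
    thigh_cut = centroid_row + lower / 3.0   # thighs : lower legs = 1 : 2
    return {
        "head_neck": (top, head_cut),
        "torso": (head_cut, centroid_row),
        "hips_thighs": (centroid_row, thigh_cut),    # split L/R at centroid col
        "lower_legs_feet": (thigh_cut, bottom),      # split L/R at centroid col
    }

bands = segment_rows(top=0, bottom=180, centroid_row=90)
print(bands["head_neck"])  # → (0, 30.0)
print(bands["torso"])      # → (30.0, 90)
```

Splitting the two lower bands at the centroid column then yields the final four lower-body segments (“c” to “f”).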
2.4 Skeletonization of Body Segments
To better represent each body segment, the morphological skeleton is used to construct the skeleton for each body segment. Skeletonization involves consecutive erosion and opening operations on the image until the set difference between the two operations is zero.
S(A) = ⋃_{k=0..K} [ (A ⊖ kB) − ((A ⊖ kB) ∘ B) ] . (2)

where A is an image, B is the structuring element, A ⊖ kB denotes A eroded k successive times, and k runs from zero until the erosion becomes empty. Fig. 5 shows the skeleton of the body segments.
Fig. 5. Skeleton on a torso segment
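Eq. (2) can be sketched with the same set-of-pixels representation as before; the 3×3 cross structuring element is our assumption for illustration:

```python
def erode(pixels, se):
    """Keep a pixel only if every SE-translated position is also foreground."""
    return {p for p in pixels
            if all((p[0] + dr, p[1] + dc) in pixels for dr, dc in se)}

def dilate(pixels, se):
    """Union of the foreground translated by every SE offset."""
    return {(r + dr, c + dc) for r, c in pixels for dr, dc in se}

def opening(pixels, se):
    """Erosion followed by dilation."""
    return dilate(erode(pixels, se), se)

def skeleton(pixels, se):
    """Union over k of (A eroded k times) minus its opening (Eq. 2)."""
    skel, eroded = set(), set(pixels)
    while eroded:
        skel |= eroded - opening(eroded, se)
        eroded = erode(eroded, se)
    return skel

cross = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}    # 3x3 cross SE
strip = {(r, c) for r in range(3) for c in range(9)}  # a 3x9 "limb"
skel = skeleton(strip, cross)
print(sorted(skel)[:3])  # → [(0, 0), (0, 8), (1, 1)]
```

For this elongated strip the result keeps the middle row (the medial axis of the limb), which is exactly the thin structure the Hough transform of the next section operates on.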
2.5 Joint Angles Extraction
To extract the joint angle for each body segment, the Hough transform is applied to the skeleton. The Hough transform maps pixels in the image space to straight lines
Fig. 6. Joint angle formation
through a parameter space. The skeleton in each body segment, which is the longest
line, is indicated by the highest intensity point in the parameter space. Fig. 6 shows
the joint angle formation from the most probable straight line detected via Hough
transform, where φ is the joint angle calculated using
φ = θ + 90° . (3)
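A minimal sketch of this step, assuming a 1-degree, 1-pixel quantisation of the accumulator (our own implementation, not the authors'): each skeleton point votes for every candidate normal angle θ, the accumulator peak gives the dominant line, and Eq. (3) converts θ to the joint angle φ.

```python
import math

def hough_peak_theta(points):
    """Vote in (theta, rho) space; return theta (deg) of the strongest line."""
    votes = {}
    for x, y in points:
        for theta in range(-90, 90):  # normal angle of the candidate line
            t = math.radians(theta)
            rho = round(x * math.cos(t) + y * math.sin(t))
            votes[(theta, rho)] = votes.get((theta, rho), 0) + 1
    (theta, _), _ = max(votes.items(), key=lambda kv: kv[1])
    return theta

# A perfectly vertical skeleton segment: its normal is horizontal (theta = 0).
skeleton_pts = [(0, y) for y in range(30)]
theta = hough_peak_theta(skeleton_pts)
phi = theta + 90   # Eq. (3)
print(theta, phi)  # → 0 90
```

The “highest intensity point in the parameter space” mentioned above corresponds to the accumulator cell with the most votes.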
2.6 Measurement of Step-Size
To obtain the step-size of each walking sequence, the Euclidean distance between the bottom ends of the lower right leg and the lower left leg is measured.
Fig. 7 shows all the gait features extracted from a human silhouette, where Angle 7
is the thigh angle, calculated as
Angle 7 = Angle 6 – Angle 4 . (4)
Fig. 7. All the extracted gait features
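The two measurements of this section reduce to a couple of lines; the coordinates and angle values below are invented purely for illustration:

```python
import math

def step_size(right_bottom, left_bottom):
    """Euclidean distance between the bottom ends of the two lower legs."""
    return math.dist(right_bottom, left_bottom)

right, left = (100, 200), (130, 200)  # hypothetical (x, y) leg endpoints
print(step_size(right, left))         # → 30.0

angle4, angle6 = 75.0, 110.0          # hypothetical joint angles (degrees)
angle7 = angle6 - angle4              # Eq. (4): thigh angle
print(angle7)                         # → 35.0
```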
3 Classification Technique
For the classification, the supervised fuzzy K-Nearest Neighbour (KNN) algorithm is applied, as there is sufficient data for training and testing. Basically, KNN is a classifier that distinguishes different subjects based on the nearest training data in the feature space. In other words, subjects are classified according to the majority of their nearest neighbours.
As an extension to KNN, Keller [11] integrated fuzzy set theory with KNN. According to Keller's concept, the unlabelled subject's membership function of class i is given by Equation (5).
u_i(x) = [ Σ_{x_j ∈ KNN} u_i(x_j) · (1 / ||x − x_j||^(2/(m−1))) ] / [ Σ_{x_j ∈ KNN} (1 / ||x − x_j||^(2/(m−1))) ] . (5)
where x, x_j and u_i(x_j) represent the unlabelled subject, the labelled subjects and x_j's membership of class i, respectively, and m is the fuzzifier. Equation (5) computes the membership value of the unlabelled subject from the membership values of the labelled subjects and the distances between the unlabelled subject and its KNN labelled subjects.
Through this fuzziness, KNN assigns the appropriate class to the unlabelled subject based on the sum of similarities to the labelled subjects. The algorithm for identifying the human subjects is implemented as follows:
Step 1: Compute the distance between the unlabelled subject and all labelled (training) subjects. The distance between an unlabelled subject x_i and a labelled subject x_j is defined as:
D(x_i, x_j) = ||x_i − x_j||^(−2) . (6)
Step 2: Sort the objects based on the similarity and identify the k-nearest neighbours.
KNN = {x_1, x_2, …, x_k} . (7)
Step 3: Compute the membership value for every class using Equation (5).
Step 4: Classify unlabelled subject to the class with the maximum membership value
as shown in Fig. 8.
Fig. 8. An example for four nearest neighbours
In Fig. 8, the values on the lines denote the similarities between the unlabelled and labelled subjects. The sum of membership values for Class 1 is m1 = 0.7 and for Class 2 it is m2 = 0.3. Since m1 is greater than m2, the unlabelled subject is classified as Class 1.
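The decision rule can be sketched compactly. This is our own illustration of Keller's Eq. (5), assuming the fuzzifier m = 2 (so each neighbour is weighted by the inverse squared distance, matching Eq. (6)); the tiny feature vectors are invented, not taken from the gait data:

```python
def fuzzy_knn_memberships(x, neighbours, m=2):
    """neighbours: list of (feature_vector, {class: membership}) pairs."""
    exp = 2.0 / (m - 1)
    weights = []
    for xj, _ in neighbours:
        dist = sum((a - b) ** 2 for a, b in zip(x, xj)) ** 0.5
        weights.append(1.0 / (dist ** exp + 1e-12))  # guard against dist = 0
    classes = {c for _, labels in neighbours for c in labels}
    total = sum(weights)
    # Eq. (5): weighted average of the neighbours' class memberships.
    return {c: sum(w * labels.get(c, 0.0)
                   for w, (_, labels) in zip(weights, neighbours)) / total
            for c in classes}

# Three crisp-labelled neighbours: two near ones of class 1, a far one of class 2.
knn = [((1.0, 1.0), {1: 1.0}),
       ((1.2, 0.9), {1: 1.0}),
       ((5.0, 5.0), {2: 1.0})]
u = fuzzy_knn_memberships((1.1, 1.0), knn)
winner = max(u, key=u.get)  # Step 4: class with the maximum membership
print(winner)               # → 1
```

As in the Fig. 8 example, the class whose accumulated (distance-weighted) membership is largest wins.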
4 Experimental Results and Discussion
The experiment was carried out on nine subjects under three different conditions: walking at normal speed, walking in their own shoes and walking in boots. The main objective was to determine the accuracy of the fuzzy KNN technique for different values of k. For each subject, there were approximately twenty sets of walking data on the normal track (walking parallel to the static camera).
In order to obtain optimized results, five features were adopted for the classification. First, the maximum thigh angle, θmax, was determined from all the thigh angles collected during a walking sequence. When θmax was located, the corresponding values of the step-size S, width w and height h were determined as well. From the graph plotted, it can be observed that the width of each subject changes in a sinusoidal pattern over time, as shown in Fig. 9. Therefore, the last employed feature is the average of the maximum width, A_P.
Fig. 9. Graph of width versus time
All the features were channelled into the classification process, and the similarity distance between an unlabelled object x_i and a labelled object x_j was defined by Equation (8).
D(x_i, x_j) = (θmax_i − θmax_j)² + (w_i − w_j)² + (h_i − h_j)² + (S_i − S_j)² + (A_P,i − A_P,j)² . (8)
The adopted algorithm was supervised fuzzy KNN, which requires training and testing. For the training part, a minimum of eight sets of walking data were used for each subject. The remaining data were used for testing. The allocation of the training and testing data for each condition is shown in Table 1.
Table 1. Allocation of the data for each condition
Condition           Testing data set   Training data set
Normal speed        106                78
Wearing own shoes   101                79
Wearing boots       100                74
Different values of k were adopted for the classification, where k = 3, 4, 5, 6, 7 and 8. Since the minimum number of training sets is eight, the maximum value of k was set to eight. The results obtained are depicted in Fig. 10 and Table 2.
Fig. 10. Graph for the percentage of accuracy versus the value of k
Table 2. The percentage of accuracy for fuzzy KNN
k    Normal speed (%)   Wearing own shoes (%)   Wearing boots (%)
3    78.3               72.3                    83
4    76.4               75.2                    82
5    75.5               72.3                    81
6    75.5               71.3                    82
7    76.4               71.3                    81
8    77.4               72.3                    80
From Table 2, it can be concluded that changes in the value of k do not have a significant impact on the accuracy of the classification. However, when k = 3, the results were slightly better than for the other values. More satisfactory classification results might be obtained if more features are employed.
In addition to the evaluation for each condition, classification results for each subject were evaluated as well. This was to determine which unlabelled subjects were well identified and which were not, across all conditions. Since k = 3 provided the best result for all three conditions, the subject evaluation was carried out using k = 3.
The results obtained for each subject are shown in Table 3. From Table 3, subject 1 produces the most satisfactory classification results for all three conditions. This is because the adopted features for subject 1 were highly distinctive from those of the other subjects. Furthermore, there was little variation between the training and testing data for subject 1. In other words, subject 1 was well recognizable under all three conditions. For the rest of the subjects, the accuracy of the classification depended highly on the condition. For instance, at normal speed, subject 8 achieved an accuracy of only 36.4%. This was due to the large number of misclassifications of subject 8 as other subjects.
Table 3. The percentage of the classification results for three conditions when k = 3
Subject     Normal speed (%)   Wearing own shoes (%)   Wearing boots (%)
Subject 1   100                81.8                    100
Subject 2   81.8               58.3                    90.9
Subject 3   92.3               70                      81.8
Subject 4   80                 37.5                    81.8
Subject 5   100                53.8                    90.9
Subject 6   100                90.9                    45.5
Subject 7   57.1               85.7                    85.7
Subject 8   36.4               80                      91.7
Subject 9   50                 83.3                    90
5 Conclusion
We have described a new approach for extracting gait features from enhanced human silhouette images. The gait features are extracted from the human silhouette by determining the skeleton of each body segment. The joint angles are obtained after applying the Hough transform on the skeletons. In the future, more gait features will be extracted and applied in order to achieve a higher classification accuracy.
Acknowledgment
The authors would like to thank Prof. Mark Nixon, School of Electronics and Computer Science, University of Southampton, United Kingdom, for providing the database used in this work.
References
1. BenAbdelkader, C., Cutler, R., Nanda, H., Davis, L.: EigenGait: Motion-based Recognition of People Using Image Self-similarity. In: Proceedings of the International Conference on Audio- and Video-Based Person Authentication, pp. 284–294 (2001)
2. Bobick, A., Johnson, A.: Gait Recognition Using Static, Activity-specific Parameters. In: Proceedings of IEEE Computer Vision and Pattern Recognition, pp. 423–430 (2001)
3. Cunado, D., Nixon, M., Carter, J.: Automatic Extraction and Description of Human Gait Models for Recognition Purposes. Computer Vision and Image Understanding 90, 1–41 (2003)
4. BenAbdelkader, C., Cutler, R., Davis, L.: Motion-based Recognition of People in Eigengait Space. In: Proceedings of the Fifth IEEE International Conference, pp. 267–272 (2002)
5. Collins, R., Gross, R., Shi, J.: Silhouette-based Human Identification from Body Shape and Gait. In: Proceedings of the Fifth IEEE International Conference, pp. 366–371 (2002)
6. Phillips, P.J., Sarkar, S., Robledo, I., Grother, P., Bowyer, K.: The Gait Identification Challenge Problem: Data Set and Baseline Algorithm. In: Proceedings of the 16th International Conference on Pattern Recognition, pp. 385–389 (2002)
7. Yoo, J.H., Nixon, M.S., Harris, C.J.: Extracting Human Gait Signatures by Body Segment Properties. In: Fifth IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 35–39 (2002)
8. Wagg, D.K., Nixon, M.S.: On Automated Model-based Extraction and Analysis of Gait. In: Proceedings of the 6th IEEE International Conference on Automatic Face and Gesture Recognition, pp. 11–16 (2004)
9. Shutler, J.D., Grant, M.G., Nixon, M.S., Carter, J.N.: On a Large Sequence-based Human Gait Database. In: Proceedings of the 4th International Conference on Recent Advances in Soft Computing, pp. 66–71 (2002)
10. Dempster, W.T., Gaughran, G.R.L.: Properties of Body Segments Based on Size and Weight. American Journal of Anatomy 120, 33–54 (1967)
11. Keller, J., Gray, M., Givens, J.: A Fuzzy K-Nearest Neighbor Algorithm. IEEE Transactions on Systems, Man, and Cybernetics 15, 580–585 (1985)