
Transcript of: 2012 International Symposium on Instrumentation & Measurement, Sensor Network and Automation (IMSNA), Sanya, China, 25-28 August 2012.

Home Security System Based on Fuzzy k-NN Classifier

Ahmad Kadri Junoh
Industrial Mathematics Research Group (IMRG), Institute of Engineering Mathematics,
University Malaysia Perlis, Kuala Perlis, 02000, Perlis, Malaysia.
[email protected]

Muhammad Naufal Mansor
Intelligent Signal Processing Group (ISP),
No 70 & 71, Taman Pertiwi Indah, Seriab, University Malaysia Perlis,
01000, Kangar, Perlis.
[email protected]

Abstract—This paper describes a Fuzzy k-NN classifier for a home security system. Images were taken in an uncontrolled indoor environment using video cameras of various qualities. The database contains 4,005 static images (in the visible and infrared spectrum) of 267 subjects. Images from cameras of different quality mimic real-world conditions and enable robust testing of face recognition algorithms, emphasizing different law enforcement and surveillance use-case scenarios. In addition to describing the database, this paper also elaborates on its possible uses and proposes a testing protocol. A baseline Principal Component Analysis (PCA) face recognition algorithm was tested following the proposed protocol with a k-NN classifier. Other researchers can use these test results as a control algorithm performance score when testing their own algorithms on this dataset. The database is available to the research community through the procedure described at http://www.lrv.fri.uni-lj.si/facedb.html.

Keywords- Surveillance System; Image Processing; PCA; Fuzzy k-NN

I. INTRODUCTION

The question of the actual mechanisms behind the visual and computational perception of motion in humans has kept growing over the last decade. Researchers continue the pursuit of robust recognition and detection in video systems. However, most systems merely record the scenario of an event at a certain location [1] without analysing it further [2].

Illumination and pose variations are considered major concerns in facial recognition [3-8]. Several compensation techniques have been proposed to overcome these issues; however, they were successful for face recognition on partly lightened faces and not for facial expression recognition (FER) [7]. Attempts were made to implement FER in [5], [9], [10]; however, these did not focus on driver emotion recognition/monitoring. They lack a region of interest (ROI, in this case face detection) during processing, which is crucial for environments such as outdoor scenes [2].

Thus, the goal of this paper is to promote a new approach to the visual and computational perception of motion for monitoring. A PCA method was employed to study human behaviour. The experimental results evaluate the proposed method, which is based on PCA features combined with a k-NN algorithm.

II. SYSTEM OVERVIEW

The system works on the image sequence recorded by the camera. Each frame has the same pixel size, but each pixel has a different colour value based on RGB (red, green, blue) [11]. Even though the recorded images differ in time, the colour values are almost the same, because the background does not change. However, if objects move over time, the corresponding pixel values change accordingly.

The system also adapts itself to long-lasting changes in the background over time. The moving entities are further classified into human and non-human categories using Principal Component Analysis (PCA) [12]. A brief overview of the system is given in Fig. 1. The foreground is extracted from the video scene by learning a statistical model of the background and subtracting it from the original frame. The background model learns only the stationary parts of the scene and ignores the moving foreground. The system uses the per-pixel RGB (red, green, blue) colour differences [13] to model the background adaptively. The motion regions identified in the frame constitute the regions of interest (ROI) for the system. An ROI might contain a human figure, an animal or even a vehicle. Finally, a Haar cascade is employed for face detection and a k-NN classifier to determine the recognition efficiency.

Figure 1. Surveillance Detection System
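As a concrete illustration of the adaptive background model and motion-ROI extraction described above, the following Python sketch (our own, not taken from the paper) maintains a running-average RGB background with OpenCV and returns bounding boxes of moving regions; parameter names such as learning_rate, diff_threshold and min_area are illustrative assumptions.

import cv2
import numpy as np

def find_motion_rois(video_path, learning_rate=0.01, diff_threshold=30, min_area=500):
    """Adaptive RGB background model (running average) with frame differencing."""
    cap = cv2.VideoCapture(video_path)
    background = None
    all_rois = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_f = frame.astype(np.float32)
        if background is None:
            background = frame_f.copy()  # initialise the model with the first frame
            continue
        # Slowly blend the current frame into the model so that long-lasting
        # background changes are absorbed over time.
        cv2.accumulateWeighted(frame_f, background, learning_rate)
        # Foreground: per-pixel RGB difference between the frame and the model.
        diff = cv2.absdiff(frame_f, background).max(axis=2)
        mask = (diff > diff_threshold).astype(np.uint8) * 255
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        # Each sufficiently large connected blob becomes a region of interest.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV >= 4
        all_rois.append([cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area])
    cap.release()
    return all_rois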


A. Face Detection

The Intel OpenCV cascade classifier is based on the face detection algorithm developed in [14-15]. The algorithm scans an image and returns a set of locations that are believed to be faces. It uses an AdaBoost-based classifier that aggressively prunes the search space to quickly locate faces in the image. Figure 2 shows an example of the Haar classifier applied to an image, where the squares indicate face detections and the circles indicate ground truth. Circles with no corresponding square indicate false negatives (undetected faces).
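The snippet below is a minimal sketch of how the OpenCV Haar cascade described above is typically invoked in Python; the cascade file is the frontal-face model shipped with OpenCV, and the detectMultiScale parameters are common defaults rather than values reported in the paper.

import cv2

# Frontal-face Haar cascade model shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_bgr):
    """Return a list of (x, y, w, h) boxes believed to contain faces."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # reduce the effect of illumination variation
    faces = face_cascade.detectMultiScale(
        gray,
        scaleFactor=1.1,   # step between image-pyramid scales
        minNeighbors=5,    # overlapping detections required to accept a face
        minSize=(30, 30))
    return list(faces)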

The accuracy of this algorithm can be evaluated using a standard configuration that ships with the OpenCV source code. The accuracy is tested by detecting a set of faces in the Infant Face Dataset, which is designed for testing face detectors. A script compares the detections to manually selected ground truth, and the results are reported as the ratio of true detections to the total number of faces.

Because the Haar Classifier is part of the OpenCV library, the source code should be well optimized. OpenCV is a popular open source image library that was originally created by Intel. The data structures and algorithms in OpenCV have been carefully tuned to run efficiently on modern hardware. It is evident from the code of the Haar Classifier that hand optimization was used to improve the performance of that algorithm.

Figure 2. Face Detection Results

B. Principal Component Analysis

Principal Component Analysis (PCA) [12] is a standard technique for dimensionality reduction and has been applied to a broad class of computer vision problems, including feature selection (e.g., [16]), object recognition (e.g., [17]) and face recognition (e.g., [18]). While PCA suffers from a number of shortcomings [19, 20], such as its implicit assumption of Gaussian distributions and its restriction to orthogonal linear combinations, it remains popular due to its simplicity. The idea of applying PCA to image patches is not novel (e.g., [21]). Our contribution lies in demonstrating that PCA is well suited to representing keypoint patches (once they have been transformed into a canonical scale, position and orientation), and that this representation significantly improves face detection performance, as discussed in the Results and Discussion section.
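To make the projection step concrete, here is a small NumPy sketch of eigenface-style PCA fitting and projection on flattened face crops; the function names and the choice of 50 components are our own illustrative assumptions.

import numpy as np

def fit_pca(train_faces, n_components=50):
    """train_faces: (n_samples, n_pixels) matrix of flattened, aligned face crops."""
    mean_face = train_faces.mean(axis=0)
    centered = train_faces - mean_face
    # The right singular vectors of the centered data are the principal axes (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]        # components: (n_components, n_pixels)

def project(faces, mean_face, components):
    """Project flattened face images onto the learned PCA subspace."""
    return (faces - mean_face) @ components.T  # (n_samples, n_components)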

C. Fuzzy k-NN

In [22] a Fuzzy k-NN algorithm was presented, in which membership function values are assigned to the vector to be classified instead of assigning the vector to one of the classes represented by the prototype data. Consider

\mu_i(x) = \frac{\sum_{j=1}^{K} \mu_{ij} \, \left( 1 / \lVert x - x_j \rVert^{2/(m-1)} \right)}{\sum_{j=1}^{K} \left( 1 / \lVert x - x_j \rVert^{2/(m-1)} \right)}    (1)

As seen from (1), the assigned memberships of x are influenced by the inverse of the distances from the nearest neighbours and by their class memberships. The inverse distance weights a neighbour's membership more heavily the closer it lies to the vector under consideration. The training samples can be assigned class memberships in several ways. One way is to give the samples complete membership in their known class and non-membership in all other classes. Alternatively, the samples' membership values can be assigned based on the distance from their class mean, or based on the distances from known samples of their own class and of the other classes, and the resulting memberships are then used in the classifier.
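A minimal NumPy sketch of the membership assignment in Eq. (1) is given below, assuming the prototype memberships mu_ij are supplied (crisp or soft, as discussed above); the small eps guard against zero distances is our own addition.

import numpy as np

def fuzzy_knn_memberships(x, train_x, train_memberships, k=5, m=2.0, eps=1e-9):
    """
    Membership assignment following Eq. (1).
    train_x:           (n_samples, n_features) prototype vectors x_j
    train_memberships: (n_samples, n_classes) memberships mu_ij of the prototypes
    Returns the vector of class memberships mu_i(x).
    """
    # Euclidean distances from x to every prototype, then the K nearest neighbours.
    dists = np.linalg.norm(train_x - x, axis=1)
    nn = np.argsort(dists)[:k]
    # Inverse-distance weights 1 / ||x - x_j||^(2/(m-1)); eps avoids division by zero.
    weights = 1.0 / (dists[nn] ** (2.0 / (m - 1.0)) + eps)
    # Weighted average of the neighbours' class memberships (numerator / denominator of Eq. (1)).
    return (weights[:, None] * train_memberships[nn]).sum(axis=0) / weights.sum()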

III. RESULTS AND DISCUSSION

The proposed algorithm was evaluated on one hundred and twenty subjects of different race, gender and age. The average size of each image is 400-500 pixels. All subjects were tested over ten trials, with ten images taken per subject. The average accuracy over all subjects is shown in Figure 3.

[Figure 3 plot: Average Accuracy (%) versus FKNN Test Trial, trials 1-10, accuracy axis 0-100%.]

Figure 3. Accuracy Detection Based on PCA and Fuzzy k-NN Algorithm

The confidence value for the recognition of a person is calculated using the Euclidean distance between the PCA-projected values of the test image and the PCA-projected values of the training database. This value determines whether the recognition of a face image by this method is dependable or not: when the confidence value is low, the recognition is not dependable. The confidence values obtained for the different test images are shown in Figure 3.
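The following sketch illustrates this confidence computation under our own assumptions: Euclidean distances are measured between PCA projections, and since the paper does not give an explicit formula, the mapping from the nearest distance to a confidence score (1 / (1 + d)) is illustrative only.

import numpy as np

def recognition_confidence(test_proj, train_projs):
    """
    Confidence from the Euclidean distances between the PCA projection of the
    test face and the PCA projections of the training database: the smaller the
    nearest distance, the more dependable the recognition.
    """
    dists = np.linalg.norm(train_projs - test_proj, axis=1)
    best = int(np.argmin(dists))
    confidence = 1.0 / (1.0 + dists[best])  # illustrative mapping of distance to (0, 1]
    return best, confidence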

IV. CONCLUSION

The proposed face detection algorithm localizes the face in the given input image using the Haar classifier method. The detected face image is projected using eigenface analysis and classified using the Fuzzy k-nearest neighbour (FKNN) classifier. The algorithm is efficient, as it can be integrated with the output from multi-modal sensors and thus can be used as part of multi-sensor data fusion.

ACKNOWLEDGMENT

This research was conducted under the Fundamental Research Grant Scheme (FRGS), supported by the Ministry of Higher Education Malaysia.

REFERENCES

[1] T. C. Robert, J. L. Alan, and K. Takeo, "Introduction to the Special Edition on Video Surveillance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 745-757, August 2000.
[2] M. Harville, G. Gordon, and J. Woodfill, "Foreground segmentation using adaptive mixture models in color and depth," in Proceedings of the IEEE Workshop on Detection and Recognition of Events in Video, 2001.
[3] M. Akhloufi and A. Bendada, "Thermal faceprint: A new thermal face signature extraction for infrared face recognition," 2008, pp. 269-272.
[4] P. Buddharaju, I. Pavlidis, and P. Tsiamyrtzis, "Pose-invariant physiological face recognition in the thermal infrared spectrum," 2006, p. 53.
[5] M. M. Khan, M. Ingleby, and R. D. Ward, "Automated facial expression classification and affect interpretation using infrared measurement of facial skin temperature variations," in ACM.
[6] J. Wilder, P. Phillips, C. Jiang et al., "Comparison of visible and infra-red imagery for face recognition," 2002, pp. 182-187.
[7] F. J. Prokoski, "History, current status, and future of infrared identification," in Proceedings of the IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications, Washington, DC, USA, 2000, p. 5.
[8] B. Fasel and J. Luettin, "Automatic facial expression analysis: a survey," Pattern Recognition, 2003, pp. 259-275.
[9] I. Pavlidis, J. Levine, and P. Baukol, "Thermal imaging for anxiety detection," 2002, pp. 104-109.
[10] I. Pavlidis and J. Levine, "Thermal facial screening for deception detection," 2002, pp. 1143-1144.
[11] M. Harville, G. Gordon, and J. Woodfill, "Foreground segmentation using adaptive mixture models in color and depth," in Proceedings of the IEEE Workshop on Detection and Recognition of Events in Video, 2001.
[12] I. T. Joliffe, Principal Component Analysis. Springer-Verlag, 1986.
[13] P. Banerjee and S. Sengupta, "Human motion detection and tracking for video surveillance," in Proceedings of the National Conference on Communications, IIT Bombay, Mumbai, India, pp. 88-92, 2008.
[14] P. Viola and M. J. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
[15] R. Lienhart and J. Maydt, "An extended set of Haar-like features for rapid object detection," in Proceedings of the 2002 International Conference on Image Processing, vol. 1, pp. 900-903, 2002.
[16] K. Fukunaga and W. Koontz, "Application of the Karhunen-Loeve expansion to feature selection and ordering," IEEE Trans. Communications, vol. 19, no. 4, 1970.
[17] H. Murase and S. Nayar, "Detection of 3D objects in cluttered scenes using hierarchical eigenspace," Pattern Recognition Letters, vol. 18, no. 4, April 1997.
[18] M. Turk and A. Pentland, "Face recognition using eigenfaces," in Proceedings of Computer Vision and Pattern Recognition, 1991.
[19] J. Karhunen and J. Joutsensalo, "Generalization of principal component analysis, optimization problems and neural networks," Neural Networks, vol. 8, no. 4, 1995.
[20] D. Lee and S. Seung, "Learning the parts of objects by non-negative matrix factorization," Nature, vol. 401, 1999.
[21] R. Fergus, P. Perona, and A. Zisserman, "Object class recognition by unsupervised scale-invariant learning," in Proceedings of Computer Vision and Pattern Recognition, June 2003.
[22] M. N. Mansor, S. Yaacob, M. Hariharan, S. N. Basah, S. H. S. Ahmad Jamil, M. L. Mohd Khidir, M. N. Rejab, K. M. Y. Ku Ibrahim, A. H. S. Ahmad Jamil, A. K. Junoh, and J. Ahmad, "Fuzzy k-NN for clinical infant pain trial," in 8th International Conference on Innovations in Information Technology (IIT'12), Al Ain, UAE, March 2012.