
An Evaluation of the Human Biometric Sensor Interaction using Hand Geometry

Stephen J. Elliott, BSPA Lab, Purdue University, 401 N. Grant Street, West Lafayette, IN 47907, USA
Benny Senjaya, BSPA Lab, Purdue University, 401 N. Grant Street, West Lafayette, IN 47907, USA
Eric P. Kukula, Noblis, Inc., 3150 Fairview Park Drive, Falls Church, VA 22042, USA
Jason M. Werner, BSPA Lab, Purdue University, 401 N. Grant Street, West Lafayette, IN 47907, USA
Mark Wade, BSPA Lab, Purdue University, 401 N. Grant Street, West Lafayette, IN 47907, USA

Abstract - The purpose of this paper is to extend the Human Biometric Sensor Interaction (HBSI) model to other modalities, in this case hand geometry. Because the data was collected at different times, there was a slight modification in training between group 1 and group 2. A secondary purpose of this paper was therefore to examine the differences in the HBSI metrics when individuals are given two different types of training (one using video training, the other using small group lecture-style training). A total of 151 individuals were asked to perform an enrollment transaction and three successive post-enrollment verification attempts with the hand geometry machine, and an observational analysis was performed on their interactions. This type of analysis is novel to the field of biometrics, and the human interaction component has only recently received attention [1]. Using a framework developed specifically for studying human interaction errors, the observed hand placements on the recognition device were analyzed and mapped onto the HBSI error framework. Instead of categorizing a user error simply as a failure to enroll (FTE) or failure to acquire (FTA), a more comprehensive categorization of these errors was developed. Both incorrect and correct interactions were coded and binned into appropriate categories by a human observer. The results showed that the hand geometry modality fits the existing HBSI model. Furthermore, the experiment highlighted slight variations in errors due to training, which will be investigated further in a future paper.

Index Terms — hand geometry, HBSI, training, biometric

I. INTRODUCTION

Mansfield and Wayman are credited with creating a general biometric model that outlines the processes involved from the collection of a biometric sample through to the decision of the sub-system. This model has withstood the test of time, and although there have been slight alterations to it, it remains the foundational model for testing and evaluation standards, as well as the basis for many articles in the biometric literature [2]. Testing standards established by ISO/IEC JTC 1 SC 37 distinguish between traditional technology and scenario evaluations. Metrics documented in these standards include most of the common error rates, such as False Match Rate (FMR), False Non-Match Rate (FNMR), Failure to Enroll (FTE), and Failure to Acquire (FTA) [3]. Such metrics provide a macro-level description of the performance of a biometric system, but they do not isolate or attribute errors to specific individuals. With respect to an FTE or FTA, how does the reporting of a percentage indicate how to solve these particular errors?

Of course, the reporting of FTA and FTE was never intended to provide such detailed analysis at the individual subject level. In the context of biometric testing and evaluation, however, additional information collected at the time of data collection, such as video analysis, may be invaluable in revealing distinguishable patterns, and therefore some insight into how to reduce error. Research by [4] proposed an evaluation methodology for environmental influences in biometric systems. The article, and the subsequent working draft technical report, consists of two parts: first, the definition of the environmental factors and their respective metrics, and second, a specification of how to perform the evaluation. Another methodology for biometric testing and evaluation is that of the Human Biometric Sensor Interaction [5,6,7,8].

As biometric systems are deployed in various applications, including identification programs, border security programs, and consumer-facing applications, there is a desire, both practically and commercially, to collect good quality data during enrollment. Good enrollment images lead to better opportunities to de-duplicate a database, improved verification results, reduced organizational costs, and reduced inconvenience for those enrolling, as they will not be required to return for another enrollment. As with all systems, security-related or not, individuals need to be trained, and many systems lack a user-centered design [9]. Poor training may result in poor quality images, which increase the cost of deployment because they degrade the performance of the system (especially with fingerprints) [10,11,12,13]. Training, although more expensive, could paradoxically provide the answer to the performance problem. The aim of the Human Biometric Sensor Interaction (HBSI) model is to provide structure and definition to the errors observed while conducting evaluations across various modalities.

II. LITERATURE REVIEW

The review of literature comprises the following sections: the first is an introduction to the Human Biometric Sensor Interaction model. The second looks at previous studies that have examined the performance of a hand geometry system using the traditional approach of performance metrics. The third section reviews hand geometry studies that relate to training and habituation, to provide some insight into typical interaction errors and training deficiencies.

A. Human Biometric Sensor Interaction Model

The Human Biometric Sensor Interaction (HBSI) model has been developed and subsequently modified over a number of years in response to observations made during data collection efforts across a number of modalities, such as fingerprint, face recognition, iris recognition, dynamic signature verification, and voice recognition [11,14,15,16,17,18,19,20]. When observing individuals in these tests, errors occurred that could not be described or accounted for by existing biometric metrics. For example, with fingerprint recognition, the interaction between the sensor and the user varies depending on whether the sensor is a swipe or slap device. When evaluating systems, or training individuals to use them, errors in interaction need to be quantified in order to minimize training costs and increase user acceptance and satisfaction. Five questions are typically asked when evaluating a system's performance using the HBSI model approach:

• How do users interact with the biometric devices?
• What errors do users make?
• What are the most common errors or issues that users face?
• Why do users continually make these interaction errors, and how do we prevent or avoid them?
• What level of training and experience is necessary to successfully use biometric devices?

To answer these questions, the HBSI model was created; it finds its roots in three different areas of research: ergonomics, usability, and biometrics. The initial studies that formulated the model can be found in [8].

B. Traditional Approach to Performance

A study completed in 2005 discussed the implementation issues of installing a commercially available hand geometry system at Purdue University's Recreational Sports Center [21]. In that study, statistical analyses exposed a significant difference in the scores of attempts made by users with prior hand geometry experience and by individuals who could not straighten their hand on the device. System performance was segmented into FTE, FTA, FAR/FRR, demographic, and unique characteristic analyses, the last of which included shirt/coat sleeves covering the hand, ring placement, improper thumb placement, and little fingers that would not straighten. In the conclusion and future work of [21], the authors stated that "individual analyses reveal that either more training or stricter policies for individuals interacting with the device need to be implemented for some users, as roughly 100 attempts deviated from the training that was given" (p. 6).

C. Habituation / Acclimation Studies Relating to Biometrics

Several reports have identified a need to train users. Studies such as BANCA [22] and Biometrics for Secure Identity Verification: Trends and Developments [23] indicate that training is required to successfully use biometrics. With respect to hand geometry, [24] examined how individuals interacted with the hand geometry device. Such an activity is important because successful interaction affects the resulting performance of the device. A concept model is shown below in Fig. 1.

Fig. 1. Concept model for habituation / acclimation

Understanding how users interact with the system is also important for successful deployment of a biometric system. Thieme [25] noted that full habituation was accomplished once the match scores had stabilized, and that in order for these scores to stabilize, the user needed to be sufficiently trained.

III. HBSI METRICS AS APPLIED TO HAND GEOMETRY

Hand geometry is a verification system, and as such requires the user to claim an identity. To interact with the system, users present a credential (either a token or a PIN). Once the PIN has been accepted, four LED lights illuminate on the hand geometry device. The user's hand must be inserted fully so that it touches the five pins on the hand geometry platen. When the hand is placed correctly, the four LED display lights turn off, indicating correct placement of the fingers. A CMOS sensor with near-infrared illumination then images the user's hand from above and from the side, measuring the length, width, thickness, and surface area of the hand.
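The sequence above can be read as a simple state machine: credential, prompt, placement check, capture, match. The following sketch illustrates that flow; all device and API names here are hypothetical illustrations for the reader, not the vendor's actual interface.

```python
# Hypothetical sketch of one verification attempt as described above.
def verify_attempt(device, claimed_id, pin, timeout_s=30):
    if not device.accept_pin(claimed_id, pin):
        return "pin rejected"            # biometric sub-system never triggered
    device.leds_on()                     # four LEDs prompt hand placement
    placement = device.wait_for_placement(timeout_s)
    if placement is None:
        return "timeout"                 # nothing presented before time-out
    if not placement.all_pins_touched:   # five platen pins confirm position
        return "incorrect placement"     # LEDs stay lit
    device.leds_off()                    # LEDs extinguish on correct placement
    sample = device.capture()            # CMOS + near-IR image, top and side views
    features = device.extract(sample)    # length, width, thickness, surface area
    return device.match(claimed_id, features)
```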

The HBSI Framework for Biometric Interactions (Fig. 2) illustrates the metrics associated with the model as they relate to correct and incorrect presentations to the sensor. The purpose of this study is to map the HBSI metrics laid out in [8] and to validate the HBSI model for hand geometry. The metrics include: defective interaction, concealed interaction, false interaction, failure to detect, failure to extract, and successfully acquired sample. These definitions are mapped to the hand geometry modality below, and will subsequently be mapped to other modalities in future research. The model is broken down by presentation type, "correct" and "incorrect"; referring back to the general biometric model [26], these presentations normally occur in the data collection module. Within the "incorrect" presentation type are defective interaction, concealed interaction, and false interaction. Within the "correct" presentation type are failure to detect, failure to extract, and successfully acquired sample.

Fig. 2. HBSI Framework for Biometric Interactions

A. Defective Interaction

A defective interaction (DI) occurs when an incorrect presentation is made by the individual to the hand geometry machine, and this interaction is not detected by the system. A specific error that occurs within hand geometry would be when the subject does not present anything to the hand geometry machine and the sensor “correctly” does not detect any presentation and subsequently times out.

B. Concealed Interaction

A concealed interaction (CI) occurs when an incorrect presentation is made to the hand geometry machine that is detected by the device, but is not handled or classified correctly as an "error" by the hand geometry machine. There are two types of concealed interactions: one generated by the user (user CI) and the other by the system (system CI). In the case of the user concealed interaction, the user generates the error, either because they presented the wrong hand or did not comply with the initial instructions. A system concealed interaction occurs when the user interacts with the hand geometry machine but does not provide a good quality sample. A specific example of a system CI would be when a subject is still in the process of inserting his/her hand into the device and the system acquires the sample anyway. This is a system CI because the subject's hand is 1) not fully in the device, and 2) the subject did not properly squeeze the pins. The system acquired the sample even though the subject was still interacting with it. Such a sample will pose quality and verification issues later on, potentially leading to a false rejection.

C. False Interaction

A false interaction (FI) occurs when a user interacts incorrectly with the hand geometry machine during hand placement. The incorrect placement is detected by the device and classified as an "error". An example of this error would be when the subject's hand is either not flat on the platen or the fingers are not touching the guide pins, both resulting in the indicator lights not extinguishing.

D. Failure to Detect

There are two types of failure to detect (FTD): system and external factor. A system failure to detect occurs when the user correctly places their hand in the hand geometry machine, but the device does not respond to the proper placement and times out. An external factor FTD occurs when the user correctly presents the hand to the device, but factors outside the control of both the user and the device impact the system. An example would be external lighting causing the camera to not recognize the correct placement by the subject.

E. Successfully Acquired Sample

A Successfully Acquired Sample (SAS) happens when the sample has been detected by the system and subsequently extracted and passed through to the biometric matching system. The hand geometry machine provides a message on the display notifying the individual of a successful verification.
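Taken together, the definitions in this section form a decision tree over a few observable facts: whether the presentation was correct, whether the device detected it, whether the device flagged it as an error, and whether features were extracted. A minimal sketch of that mapping is given below; the category names come from the framework in Fig. 2, while the function and its boolean inputs are illustrative assumptions.

```python
from enum import Enum

class HBSI(Enum):
    DI = "defective interaction"          # incorrect presentation, not detected
    CI = "concealed interaction"          # incorrect, detected, not flagged as error
    FI = "false interaction"              # incorrect, detected, flagged as error
    FTD = "failure to detect"             # correct presentation, not detected
    FTX = "failure to extract"            # correct, detected, features not extracted
    SAS = "successfully acquired sample"  # correct, detected, extracted

def classify(correct: bool, detected: bool, flagged: bool, extracted: bool) -> HBSI:
    """Map one observed presentation onto the HBSI framework of Fig. 2."""
    if not correct:
        if not detected:
            return HBSI.DI
        return HBSI.FI if flagged else HBSI.CI
    if not detected:
        return HBSI.FTD
    return HBSI.SAS if extracted else HBSI.FTX
```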

IV. DATASET & METHODOLOGY

The purpose of this study was to categorize the interaction of individuals using a commercially available hand geometry machine. Data collection was a two-step process: the first step was an enrollment, in which users presented their right hand three times to the biometric sensor, assuming the samples presented were of sufficient quality. The second step was a post-enrollment verification. In this study, we were not interested in the traditional performance of the device (FAR, FRR), but in the HBSI metrics. Two groups were established, one in the Spring of 2009 (subsequently labeled T) and the other in the Spring of 2010 (subsequently labeled V).

A. Data Collection

Data was collected from 73 participants during Spring 2009 and 78 participants during Spring 2010. The testing methodology was similar for both groups, except for training. The hand geometry machine was placed on a table 28.75 inches (Spring 2009) or 26.75 inches (Spring 2010) off the ground, and the individuals were seated in a chair. Each group received slightly different training instructions: the Spring 2009 group received small group training (under 10 people per group), which included a demonstration of the hand geometry reader and time for questions and answers; the Spring 2010 group watched a video for their training, which they were able to watch as many times as they wanted. The only other difference between the two data collections was the angle of the camera used to record the interaction of the subject with the hand geometry machine (Fig. 3). The change in setup was due primarily to a revised layout of the room in which the study occurred.


Fig. 3. Angle of the Video Camera

The individuals were trained on how to properly interact with the hand geometry machine before the start of the data collection, according to their respective training groups. After the training was finished, the individuals were informed by the test administrator about the data collection procedure. The enrollment process was initialized by the test administrator through the PC-based hand geometry software. Once the video camera started recording and the hand reader was initialized, the subject would place their hand three times on the platen of the hand geometry machine, constituting a successful enrollment. A post-enrollment verification was also taken at this time. Although the subject did have to enter a PIN to trigger the hand geometry reader, the use of the PIN is outside the scope of the analysis, as this paper is concerned only with the biometric sub-system.

V. ANALYSIS

Observer® XT software from Noldus was used to review the interaction videos and to code the correct and incorrect presentations. The behaviors for this study comprised three groups: enrollment, verification, and interaction. The enrollment group consisted of the start and end of the enrollment process. The verification group consisted of the individual verification attempts that indicated the beginning of a verification process (V1, V2, V3, etc.). The interaction group consisted of the HBSI metrics (SAS, FTX, FTD, CI, DI, and FI). Modifiers were also added to the coding scheme; these were attached to a behavior to limit its scope and give a more precise description of the actual event. The interaction videos were coded one at a time, and the coded data was subsequently aggregated to form the results below.
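To make the coding scheme concrete, each observed event can be thought of as a timestamped record carrying its behavior group, HBSI code, and any modifiers, which can then be tallied into the bins reported in Section VI. The record layout below is an assumption for illustration, not the Observer XT file format.

```python
from dataclasses import dataclass, field

@dataclass
class CodedEvent:
    subject_id: str
    phase: str          # "enrollment" or a verification attempt ("V1", "V2", ...)
    code: str           # HBSI metric: "SAS", "FTX", "FTD", "CI", "DI", or "FI"
    timestamp_s: float  # offset into the interaction video
    modifiers: list[str] = field(default_factory=list)  # e.g. "hand not flat"

def tally(events: list[CodedEvent], phase_prefix: str) -> dict[str, int]:
    """Count HBSI codes for one phase ("enrollment", or "V" for verification)."""
    counts: dict[str, int] = {}
    for e in events:
        if e.phase.startswith(phase_prefix):
            counts[e.code] = counts.get(e.code, 0) + 1
    return counts
```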

VI. RESULTS

As indicated earlier in the paper, although the main focus of the analysis was to see whether the hand geometry modality aligned well with the HBSI model, the data was collected over two different periods with the training slightly altered. The results of the analysis were therefore separated by the two types of training: training by demonstration (T) and training by video (V).

A. Successfully Acquired Sample

The formula for SAS is the number of successful attempts divided by the total number of attempts in the entire data collection.

1) Enrollment: T: 221/228 × 100% = 96.93%; V: 239/252 × 100% = 94.84%.

2) Verification: T: 219/219 × 100% = 100%; V: 234/235 × 100% = 99.57%.

B. Failure to Detect

The formula for FTD is the number of sensor detection failures when the subject presents the hand correctly to the hand geometry machine, divided by the total number of attempts in the entire data collection. Only system FTDs were observed during the analysis of the data.

1) Enrollment: T: 0%; V: 0%.

2) Verification: T: 2/219 × 100% = 0.91%; V: 0%.

C. False Interaction

The formula for FI is the number of incorrect presentations to the hand geometry machine that are detected by the device, divided by the total number of attempts in the entire data collection.

1) Enrollment: T: 11/228 × 100% = 4.82%; V: 18/252 × 100% = 7.14%.

2) Verification: T: 4/219 × 100% = 1.83%; V: 8/235 × 100% = 3.40%.

D. Concealed Interaction

The formula for CI is the number of incorrect presentations to the hand geometry machine that are detected but not classified correctly by the device, divided by the total number of attempts in the entire data collection. No CIs were observed in T or V in either mode (enrollment or verification).


E. Defective Interaction

The formula for DI is the number of incorrect presentations to the hand geometry machine that are not detected by the device, divided by the total number of attempts in the entire data collection. No DIs were observed in T or V in either mode (enrollment or verification).
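All of the rates above share a single formula: the number of observed events divided by the total number of attempts in the relevant phase. The short sketch below reproduces the reported percentages from the raw counts given in this section (zero-count cells omitted).

```python
# (events, attempts) per HBSI metric, phase, and training group, from Section VI.
counts = {
    ("SAS", "enrollment"):   {"T": (221, 228), "V": (239, 252)},
    ("SAS", "verification"): {"T": (219, 219), "V": (234, 235)},
    ("FTD", "verification"): {"T": (2, 219),   "V": (0, 235)},
    ("FI",  "enrollment"):   {"T": (11, 228),  "V": (18, 252)},
    ("FI",  "verification"): {"T": (4, 219),   "V": (8, 235)},
}

for (metric, phase), groups in counts.items():
    for group, (events, attempts) in groups.items():
        rate = 100.0 * events / attempts
        print(f"{metric:3s} {phase:12s} {group}: {events:3d}/{attempts} = {rate:6.2f}%")
```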

F. Visual Representation of the Data

In Figures 4 and 5 below, the HBSI errors are graphically displayed. Values in bold italics relate to the Spring 2009 small group training (T), and values in non-bold italics relate to the Spring 2010 video training (V). Note that the small group demonstration scores are better than the video scores. Figure 5 shows the post-enrollment verification results; note that the errors decrease and the number of successfully acquired samples increases. There is one failure to detect for the small group training (T).

Fig. 4. Segmentation of errors on the HBSI model (enrollment) for Spring 2009 (T) and Spring 2010 (V) studies.

Fig. 5. Segmentation of errors (verification) for Spring 2009 (T) and Spring 2010 (V) studies.

VII. CONCLUSIONS

Several conclusions and observations can be drawn from this paper. The first centers on the HBSI model itself as applied to the hand geometry device. As indicated in the literature review, the initial model was developed using fingerprint recognition. In the case of the hand geometry experiment illustrated above, there were a couple of discussion points that the research group had to take into account. The first was that the hand geometry device was on-line, or operational; that is, the scores generated by the system were live and could not be processed or examined at a later time. Therefore, we relied solely on the video of the interactions to classify the HBSI errors. Secondly, the hand geometry system requires either a credential or a PIN, so a decision had to be made either to incorporate the credential/PIN into the model, or to assume that the correct credential and PIN had been passed through to the hand geometry machine, making the HBSI model in effect a model of the "biometric sub-system" of a larger identity management system. We also assumed that all of the interactions were made in good faith; this concept has come up a number of times in recent discussions, especially as we examine interaction errors. As this was a supervised interaction, the assumption of "good faith" is reasonable.

The next conclusion that can be drawn from the experience is that not only did we see a segmentation of error, but we can also provide mechanisms to reduce the error, usually through better training. As the figures above show, there is a slight difference between the training methodologies at both the enrollment and verification stages, with the small group training performing slightly better than the video training. Subsequent analysis indicated that the instructions for both groups could be improved by highlighting to subjects that they needed to make contact with the web-pin (which rests between the base of the index and middle fingers), as this was the cause of most of the false interactions. This segmentation of errors can improve the chances of a successfully acquired sample.

The last conclusion or observation is about the actual coding of the data. For this paper, the data was coded by three different individuals.

VIII. FUTURE WORK

The analysis provided some interesting results, especially the review of the video analysis. Future work will be done on training and how people learn. Secondly, work will continue to examine whether the HBSI model can be applied to biometric modalities other than hand geometry and fingerprint recognition. Thirdly, it is clear that video analysis can provide additional information regarding errors; further research into camera positioning, additional information, and inter-rater reliability needs to be undertaken as we continue to develop the HBSI model. Inter-rater reliability is one of the major issues that needs to be addressed when trying to evaluate the individual videos.


IX. REFERENCES

[1] S.J. Elliott and E.P. Kukula, "A definitional framework for the human/biometric sensor interaction model," Biometric Technology for Human Identification VII, Proc. SPIE, vol. 7667, 2010, pp. 76670H-1 to 76670H-8.

[2] A.J. Mansfield and J.L. Wayman, "Best Practices in Testing and Reporting Performance of Biometric Devices, ver. 2.01," National Physical Laboratory, 2002, pp. 1-36.

[3] International Organization for Standardization, ISO/IEC 19795-1: Information technology - Biometric performance testing and reporting - Part 1: Principles and framework, Geneva: ISO/IEC, 2006.

[4] B. Fernandez-Saavedra, R. Sanchez-Reillo, R. Alonso-Moreno, and O. Miguel-Hurtado, "Environmental Testing Methodology in Biometrics."

[5] E.P. Kukula and S.J. Elliott, A definitional framework for the human-biometric sensor interaction model, West Lafayette: BSPA Laboratory Publications, 2009.

[6] E. Kukula, C. Blomeke, S. Modi, and S. Elliott, "Effect of Human Interaction on Fingerprint Matching Performance, Image Quality, and Minutiae Count," International Journal of Computer Applications in Technology, vol. 34, pp. 270-277, 2009.

[7] E. Kukula and R. Proctor, "Human-Biometric Sensor Interaction: Impact of Training on Biometric System and User Performance," Human Interface, Part II, HCII 2009, M. Smith and G. Salvendy, Berlin / Heidelberg: Springer, 2009, pp. 168-177.

[8] E.P. Kukula, M.J. Sutton, and S.J. Elliott, "The Human–Biometric-Sensor Interaction Evaluation Method: Biometric Performance and Usability Measurements," IEEE Transactions on Instrumentation and Measurement, vol. 59, pp. 784-791, 2010.

[9] A. Adams and M.A. Sasse, "Users are not the enemy," Communications of the ACM, vol. 42, 1999, pp. 40-46.

[10] X. Cui, H. Kim, S. Elliott, J. Hwang, J. Kim, E. Park, and C. Jin, "Comparative assessment of fingerprint sample quality measures based on minutiae-based matching performance," Second International Symposium on Electronic Commerce and Security, Nanchang, China: IEEE Computer Society, 2009, pp. 309 - 313.

[11] S.K. Modi and S.J. Elliott, "Image quality on performance: Comparison of young and elderly fingerprints," Proceedings of the 6th International Conference on, 2006, pp. 449-454.

[12] A. Jain, Y. Chen, and S. Dass, "Fingerprint quality indices for predicting authentication performance," Audio- and Video-Based Biometric Person Authentication, T. Kanade and N. Ratha, Berlin / Heidelberg: Springer, 2005, pp. 160-170.

[13] E. Tabassi and C. Wilson, "A novel approach to fingerprint image quality," IEEE International Conference on Image Processing, Genoa, Italy: IEEE, 2005, pp. 37-40.

[14] S. Elliott, "Development of a biometric testing protocol for dynamic signature verification," 7th International Conference on Control, Automation, Robotics and Vision, 2002. ICARCV 2002., Nanyang Technological Univ, 2002, pp. 782-787.

[15] S. Elliott and A. Hunt, "The Challenge of Forgeries and Perception of Dynamic Signature Verification," Proceedings of the 6th International Conference on …, 2006.

[16] E.P. Kukula and S.J. Elliott, "Implementing Ergonomic Principles in a Biometric System: A Look at the Human Biometric Sensor Interaction (HBSI)," Proceedings 40th Annual 2006 International Carnahan Conference on Security Technology, Lexington, KY: IEEE, 2006, pp. 86-91.

[17] J. Jang, S. Elliott, and H. Kim, "On Improving Interoperability of Fingerprint Recognition Using Resolution Compensation Based on Sensor Evaluation," Advances in Biometrics, 2009, pp. 455-463.

[18] E. Kukula, S. Elliott, R. Waupotitsch, and B. Pesenti, "Effects of illumination changes on the performance of geometrix face vision 3D FRS," 38th Annual 2004 International Carnahan Conference on Security Technology, 2004., IEEE, 2004, pp. 331-337.

[19] R.T. Lazarick, E.P. Kukula, and S.J. Elliott, "Operational Times," Encyclopedia of Biometrics, 2009, pp. 1022-1025.

[20] J. Morton, C. Portell, S. Elliott, and E. Kukula, "Facial recognition at Purdue University's Airport - 2003-2008," IEEE 37th Annual 2003 International Carnahan Conference on Security Technology, 2003. Proceedings., IEEE, 2003, pp. 531-534.

[21] E. Kukula and S. Elliott, "Implementation of hand geometry at Purdue University's Recreational Center: an analysis of user perspectives and system performance," Proceedings 39th Annual 2005 International Carnahan Conference on Security Technology, IEEE, 2005, pp. 83-88.

[22] E. Bailly-Bailliere, S. Bengio, and F. Bimbot, "The BANCA database and evaluation protocol," Lecture Notes in Computer Science, vol. 2688, 2003, pp. 625-638.

[23] J. Lewis, "Biometrics for Secure Identity Verification: Trends and Developments," faculty.ed.umuc.edu, 2002, p. 26.

[24] E. Kukula, S. Elliott, B. Gresock, and N. Dunning, "Defining Habituation using Hand Geometry," 2007 IEEE Workshop on Automatic Identification Advanced Technologies, IEEE, 2007, pp. 242-246.

[25] M. Thieme, "Effect of User Habituation in the Context of Biometric Performance Testing," 2003.

[26] J. Wayman, "A Generalized Biometric Identification System Model," Conference Record of the Thirty-First Asilomar Conference on Signals, Systems and Computers (Cat. No.97CB36136), Pacific Grove, California: IEEE, 1997, pp. 291-295.


X. VITA

Stephen J. Elliott is currently an Associate Professor with the Department of Industrial Technology, Purdue University, where he is also a University Faculty Scholar and the Director of the Biometric Standards, Performance, and Assurance Laboratory. He received his Ph.D. from Purdue University in 2001. He has spoken at several conferences and is active in biometric standard initiatives. He is the Editor of ANSI/INCITS Information Technology—Biometric Performance Testing and Reporting—Part 5: Framework for Testing and Evaluation of Biometric System(s) for Access Control. He has written numerous articles on biometrics. His research interests include the testing and evaluation of biometric systems.

Benny Senjaya is a graduate student pursuing the M.S. degree in Technology at Purdue University, focusing on human interaction with biometric technology. He received his Bachelor's degree from Purdue University in Computer and Information Technology, specializing in Network Engineering Technology. His research interests include human interaction, iris recognition, and fingerprint recognition technology.

Eric P. Kukula is a Lead Biometric Engineer at Noblis in the Center for National Security and Intelligence, located in Falls Church, VA. He received his Ph.D. in Technology with a specialization in Computational Science in 2008 and his M.S. in Technology with a specialization in Information Security in 2004, both from Purdue University. Prior to joining Noblis, Eric was a Visiting Assistant Professor and Senior Researcher in the BSPA Laboratory, where his research focused on the Human-Biometric Sensor Interaction. He is co-editor of the ANSI Technical Report - Information technology - Biometric Performance Testing and Reporting - Part 7: Framework for Testing Methodologies for Specific Modalities, and is the International Representative for INCITS M1.5 to ISO/IEC JTC 1 SC 37 Biometrics WG5 Testing and Reporting.

Mark Wade is an undergraduate student in Industrial Technology at Purdue University.

Jason Werner is an undergraduate student in Supervision at Purdue University.