Gesture-Based Interaction for Games on Multi-touch Devices


Honours Project Report

Gesture-Based Interaction for Games on Multi-touch Devices

Pierre Benz

Supervisor: Prof. James Gain

Category Min Max Chosen

1 Software Engineering/System Analysis 0 15 10

2 Theoretical Analysis 0 25 0

3 Experiment Design and Execution 0 20 10

4 System Development and Implementation 0 15 15

5 Results, Findings and Conclusion 10 20 10

6 Aim Formulation and Background Work 10 15 10

7 Quality of Report Writing and Presentation 10 10

8 Adherence to Project Proposal and Quality of Deliverables 10 10

9 Overall General Project Evaluation 0 10 5

Total Marks 80

Department of Computer Science
University of Cape Town

2010.


Abstract

Multi-touch technology has become increasingly accessible with regard to the cost and diversity of products available. The ability to understand the capabilities and appropriateness of this technology with respect to computer games is of paramount importance, as games are used for their intrinsic enjoyment value. Similarly, multi-touch enables the user to interact with the device through the use of gestures, which have been shown to provide a more natural means of interaction compared to traditional input techniques. As games have traditionally been implemented using direct manipulation interfaces, it is crucial to investigate whether gestures can provide a better means of interaction than direct manipulation.

The objective of this report is to investigate the use of gestures in games, as well as their effect on game genre, and to identify the user characteristics required for gestural interaction. A gesture recognition engine has been developed, which uses a state-machine with grouped nodes, as well as a strategy game in which the objective is to capture three enemy towers using controllable units. User experiments were conducted in which participants were grouped based on their touch device and gaming experience and played three versions of the game, where each version had either a direct manipulation, gestural or mixed interface. The experiment results show that gestures are the most appropriate means of interacting with games with respect to flow, and that the use of gestures is dependent on game genre. The tests also show that the use of gestures relied on participants' touch and gaming experience and not on external factors.


Acknowledgements

I would like to thank my supervisor, Prof. James Gain, for his guidance and helpful comments during the course of this project, for his detailed feedback, helpful suggestions and numerous corrections, and for his understanding in times of bereavement. I would also like to thank my project group member, Nina Schiff, who not only provided insightful comments during the project, but also provided a shoulder to lean on during its busy periods. I would like to thank all my friends in CS Honours, for hours of laughter, support and encouragement. Lastly, I would like to thank my friends and family, for their love, support and encouragement during this year.


Contents

List of Figures

List of Tables

Glossary

1 Introduction
  1.1 Research Questions
  1.2 Project Breakdown
  1.3 Outline

2 Previous Work
  2.1 Introduction
  2.2 Multi-touch
    2.2.1 Definition
    2.2.2 Multi-touch technology
    2.2.3 Multi-touch interactions
    2.2.4 Indirect versus direct-touch interactions
    2.2.5 Bimanual interactions
    2.2.6 Multi-touch limitations
  2.3 Direct manipulation
    2.3.1 Motivation
    2.3.2 Definitions
    2.3.3 Limitations
  2.4 Gestures
    2.4.1 Definitions
    2.4.2 Limitations
  2.5 Gestures in games
    2.5.1 Motivation
    2.5.2 Limitations
  2.6 Methods of evaluating games
    2.6.1 Motivation
    2.6.2 Immersion
    2.6.3 Presence
    2.6.4 Flow
  2.7 Summary

3 Gesture Engine
  3.1 Introduction
  3.2 Gesture Engine Group breakdown
  3.3 Motivation
  3.4 Feature Extraction
  3.5 Linear Node
    3.5.1 Design
    3.5.2 Implementation
    3.5.3 Limitations
  3.6 State Machine
    3.6.1 Design
    3.6.2 Implementation
    3.6.3 Limitations
  3.7 Grouped Node State Machine
    3.7.1 Design
    3.7.2 Implementation
    3.7.3 Limitations
  3.8 Validation
  3.9 Future Improvements
  3.10 Conclusion

4 Game design
  4.1 Introduction
  4.2 Strategy game
  4.3 Game design
    4.3.1 Game-play and key features
    4.3.2 Units
    4.3.3 Gesture design
  4.4 User interface
    4.4.1 Time-limitation interface
    4.4.2 Unit interface
    4.4.3 Minimap interface
    4.4.4 Direct manipulation, mixed and gesture-based interface
  4.5 Art and design
    4.5.1 World design
    4.5.2 Unit Design
  4.6 Implementation
    4.6.1 Technical Specifications
    4.6.2 Class Layout
    4.6.3 EAGLView
    4.6.4 EventHandler
    4.6.5 GestureEngine
    4.6.6 Unit
    4.6.7 Projectile
    4.6.8 Tower
    4.6.9 UIManager
    4.6.10 Renderer
  4.7 Validation
  4.8 Conclusion

5 Testing and experiments
  5.1 Introduction
  5.2 Motivation
  5.3 Independent and dependent variables
  5.4 Participants
  5.5 Experimental design
  5.6 Control procedures
  5.7 Experiment procedure
  5.8 Pilot experiment
  5.9 Conclusion

6 Results and Analysis
  6.1 Introduction
  6.2 Results of the strategy game
  6.3 Results of all three games
    6.3.1 Interface univariate repeated measures results
    6.3.2 Interface and Genre
    6.3.3 Interface and Group
  6.4 Discussion
  6.5 Summary

7 Conclusion
  7.1 Future Work

8 Appendix A - Experiment forms
  8.1 Consent form
  8.2 Demographic information form
  8.3 Post game explanation form

9 Appendix B - In-game descriptions
  9.1 Welcome screen
  9.2 Training session
  9.3 Game description
  9.4 Direct manipulation introduction
  9.5 After direct manipulation game
  9.6 Gesture introduction
  9.7 After gesture game
  9.8 Mixed introduction
  9.9 After Mixed game
  9.10 Experiment end

10 Appendix C - Experiment results
  10.1 Strategy game results
  10.2 Univariate repeated measures
  10.3 Interface and game genre
  10.4 Interface and participant group

References


List of Figures

1.1 An overview of the project breakdown between group members.

2.1 An example of two multi-touch interaction styles. (a) shows the use of bimanual interaction, while (b) shows the use of multi-finger interaction.

2.2 An example of a direct manipulation interface, where the user interacts with the interface through buttons.

2.3 An example of a gestural interaction.

3.1 An overview of what each group participant contributed to the gesture engine.

3.2 An example of two different feature extraction methods used in the gesture engine. Both feature extractions are performed on 'V' and 'U' gestures. (a) shows the use of parabolic curve-fitting feature extraction, while (b) shows the use of the change-in-direction feature extraction. Note that the parabolic curve-fitting does not recognise the initial vertical line drawn, which often resulted in 'U' gestures having the same features as 'V' gestures.

3.3 An example of the Linear Node gesture engine, where each parent node has only one child.

3.4 The direction map the State Machine gesture engine uses to map features. Also shown are the zones in which features are rounded.

3.5 An example of a spiral gesture being performed, with feature extraction displayed as coloured lines. (a) shows that multiple ways of extracting features exist, which illustrates the difficulty of easily distinguishing a spiral gesture. (b) displays the solution the Grouped Node State Machine uses to address the curved features present in the spiral gesture.

3.6 An example of the gesture nodes present in both the State Machine and Grouped Node State Machine for a spiral gesture. As can be seen, (a) has multiple gesture definitions to handle a single gesture, whereas (b) shows a single gesture definition which handles all possible variations.

3.7 An example of two StateMachineNodes used in the Grouped Node State Machine, where a check is made against the current node's directional values, after which a check is made against the child node.

4.1 An example of the direct manipulation interface showing the placement area after a player has selected a foot-soldier to be built.

4.2 Interaction circles of two units, where (a) shows a foot-soldier within the interaction circle of an archer, causing the archer to attack the foot-soldier, and (b) shows a foot-soldier and archer within each other's interaction circles, where both units will attack each other.

4.3 An example of an attack sequence. A red line is drawn between the archer and the foot-soldier, with a pulsing animation shown in the consecutive images. In step (d) the archer has waited a sufficient amount of time to perform an attack check, while in steps (a), (b) and (c) the red circle shows how much longer the archer has to wait in order to perform an attack.

4.4 List of gestures available in the gesture-based and mixed game interfaces.

4.5 Time-limitation interface, where a coloured bar indicates the remaining time the player has to complete the game and the severity of the total time left.

4.6 A display of the different interfaces the game will display depending on the game style. (a) shows the direct manipulation interface, where buttons allow the player to interact with unit-based commands, (b) shows a mixed interface, where some direct manipulation buttons are still available, and (c) shows a gesture-based interface, where all buttons have been removed as the interactions are all gesture-based.

4.7 Minimap interface, which is located at the bottom left of the interface.

4.8 An example of a direct manipulation interface, showing a foot-soldier being selected by the player. Notice that the map movement buttons are displayed on the left, right, top and bottom of the game map and will move the screen in either direction depending on which button has been pressed.

4.9 An example of a mixed interface, showing a foot-soldier being selected by the player. A list of context-sensitive gestures is displayed on the right-hand side of the screen.

4.10 An example of a gesture-based interface, where a group one selection gesture has been performed. A list of context-sensitive gestures is displayed on the right-hand side, as well as the previously performed gesture, which is shown in the top right corner.

4.11 An overview of the main game environment, displaying the playing environment where game-play will occur and where the towers will be located. The blue rectangle displays the size of the display screen.

4.12 An example of how player and enemy units are displayed in the game. Player-controllable units have a blue circle surrounding the unit, while enemy units are surrounded by a red circle.

4.13 Representation of the class hierarchy used in the game. The lines between the nodes represent how each class uses other classes, where the arrows indicate the direction of class use.

4.14 The total game environment broken up into nine 500x500 regions. Note that the game screen, indicated by the blue rectangle, overlaps the nine regions.

4.15 Internal representation of the GestureStateMachine object nodes that were used in the game.

List of Tables

3.1 Results from the gesture engine comparison. Percentages of classification in different categories.

3.2 Statistical t-test results performed on the gesture engine comparisons.

4.1 The cost of each unit available in the game.

4.2 Unit interaction-circle radii.

4.3 A summary of the projectiles associated with units and their specifications.

4.4 A summary of the units available and their specifications.

4.5 List of units available in the game.

4.6 Unit style and design.

6.1 A summary of the different dimensions of flow.

6.2 Multivariate tests of significance on all three games. Significances are shown in red.

6.3 A summary of the results found for all three games.

10.1 Total mean flow score between different game interfaces.

10.2 Breakdown of the flow dimensions between different game interfaces.

10.3 Univariate tests for repeated measures on the strategy game.

10.4 Repeated measures analysis of variance on the strategy game.

10.5 Univariate repeated measures on the challenge-skill balance flow dimension using interface, group and gesture independent variables.

10.6 Tukey HSD test for the challenge-skill balance flow dimension. Error: within MS = 6.9370, df = 72.000. Significant values are shown in red.

10.7 Univariate repeated measures on the action-awareness merging flow dimension using interface, group and gesture independent variables.

10.8 Tukey HSD test for the action-awareness merging flow dimension. Error: within MS = 6.1981, df = 72.000. Significant values are shown in red.

10.9 Univariate repeated measures on the clear goals flow dimension using interface, group and gesture independent variables.

10.10 Tukey HSD test for the clear goals flow dimension. Error: within MS = 6.2667, df = 72.000. Significant values are shown in red.

10.11 Univariate repeated measures on the unambiguous feedback flow dimension using interface, group and gesture independent variables.

10.12 Tukey HSD test for the unambiguous feedback flow dimension. Error: between; within; Pooled MS = 9.6370, df = 95.455. Significant values are shown in red.

10.13 Univariate repeated measures on the concentration on task at hand flow dimension using interface, group and gesture independent variables.

10.14 Tukey HSD test for the concentration on task at hand flow dimension. Error: within MS = 10.756, df = 72.000. Significant values are shown in red.

10.15 Univariate repeated measures on the sense of control flow dimension using interface, group and gesture independent variables.

10.16 Tukey HSD test for the sense of control flow dimension. Error: within MS = 6.8074, df = 72.000. Significant values are shown in red.

10.17 Univariate repeated measures on the loss of self-consciousness flow dimension using interface, group and gesture independent variables.

10.18 Tukey HSD test for the loss of self-consciousness flow dimension. Error: between; within; Pooled MS = 9.6370, df = 95.455. Significant values are shown in red.

10.19 Univariate repeated measures on the transformation of time flow dimension using interface, group and gesture independent variables.

10.20 Tukey HSD test for the transformation of time flow dimension. Error: within MS = 9.8037, df = 72.000. Significant values are shown in red.

10.21 Univariate repeated measures on the autotelic experience flow dimension using interface, group and gesture independent variables.

10.22 Tukey HSD test for the autotelic experience flow dimension. Error: within MS = 10.506, df = 72.000. Significant values are shown in red.

10.23 Tukey HSD test for the challenge-skill balance flow dimension. Error: between; within; Pooled MS = 10.256, df = 89.300. Significant values are shown in red.

10.24 Tukey HSD test for the action-awareness merging flow dimension. Error: between; within; Pooled MS = 10.115, df = 83.085. Significant values are shown in red.

10.25 Tukey HSD test for the clear goals flow dimension. Error: between; within; Pooled MS = 10.667, df = 80.578. Significant values are shown in red.

10.26 Tukey HSD test for the concentration on task at hand flow dimension. Error: between; within; Pooled MS = 13.300, df = 100.63. Significant values are shown in red.

10.27 Tukey HSD test for the sense of control flow dimension. Error: between; within; Pooled MS = 11.300, df = 82.059. Significant values are shown in red.

10.28 Tukey HSD test for the transformation of time flow dimension. Error: between; within; Pooled MS = 14.722, df = 88.291. Significant values are shown in red.

10.29 Tukey HSD test for the autotelic experience flow dimension. Error: between; within; Pooled MS = 18.648, df = 78.186. Significant values are shown in red.

10.30 Tukey HSD test for the challenge-skill balance flow dimension. Error: between; within; Pooled MS = 10.256, df = 89.300. Significant values are shown in red.

10.31 Tukey HSD test for the action-awareness merging flow dimension. Error: between; within; Pooled MS = 10.115, df = 83.085. Significant values are shown in red.

10.32 Tukey HSD test for the clear goals flow dimension. Error: between; within; Pooled MS = 10.667, df = 80.578. Significant values are shown in red.

10.33 Tukey HSD test for the unambiguous feedback flow dimension. Error: between; within; Pooled MS = 9.6370, df = 95.455. Significant values are shown in red.

10.34 Tukey HSD test for the concentration on task at hand flow dimension. Error: between; within; Pooled MS = 13.00, df = 100.63. Significant values are shown in red.

10.35 Tukey HSD test for the sense of control flow dimension. Error: between; within; Pooled MS = 11.300, df = 82.059. Significant values are shown in red.

10.36 Tukey HSD test for the loss of self-consciousness flow dimension. Error: between; within; Pooled MS = 17.422, df = 82.800. Significant values are shown in red.

10.37 Tukey HSD test for the transformation of time flow dimension. Error: between; within; Pooled MS = 14.722, df = 88.291. Significant values are shown in red.

10.38 Tukey HSD test for the autotelic experience flow dimension. Error: between; within; Pooled MS = 18.648, df = 78.186. Significant values are shown in red.

Glossary

Analysis of variance [ANOVA] A statistical test used to identify any difference between groups of variables. It provides a statistical test for whether the means of several groups are equal and, as such, generalises t-tests to two or more groups. A repeated measures ANOVA is also used to address the issue of the same subjects being used in each treatment. See Chapter 5.
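To make the test concrete, the following is a small illustrative sketch; the data and the use of SciPy are assumptions made for this example and are not taken from the report.

```python
# Illustrative only: one-way ANOVA over three made-up groups of scores.
from scipy import stats

group_a = [31, 28, 35, 30]
group_b = [25, 27, 24, 26]
group_c = [33, 36, 32, 34]

f_statistic, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f_statistic, p_value)  # a small p-value suggests at least one group mean differs
```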

Application Programming Interface [API] An interface used in computer programming which enables a piece of software to be used with other software libraries and applications.

Bimanual interaction Interactions done with the use of both hands. See Chapter 2.

Dependent variables A variable which is dependent on another variable. This type of variable is often used in mathematics and statistics. When used in conjunction with an independent variable, it allows the ability to distinguish between two types of quantities and what they produce. It should be noted that the independent variables are the variables that are manipulated, while the dependent variables are only measured. See Chapters 4 and 5.

Direct manipulation An interaction style which uses a representation of objects and actions, where simplified, reversible actions that are continually visible on the interface (commonly implemented as button widgets) replace overly complex ones. See Chapter 2.

Direct touch An interaction in which a single specific location on the display is touched directly with a finger or a stylus. See Chapter 2.

Duelling games A video game genre that emphasises action and fighting challenges. The game-play usually involves one or more playable characters, with the aim of eliminating an opposing unit through some means of combat.

Electromyography [EMG] The technique of evaluating and recording the electrical activity produced by the skeletal muscles of a participant. See Chapter 5.

Factorial design An experiment whose design consists of two or more factors, where each factor consists of a discrete set of values. The aim of a factorial design is to address all possible combinations of these values across all factors in an experiment. See Chapter 4.


Feature extraction A method used to simplify the amount of resources necessary to describe a large set of data, by reducing it to a set of descriptive features. Feature extraction is typically used in image processing. See Chapter 3.
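As a hedged illustration (not the project's implementation), a change-in-direction feature extractor of the kind discussed in Chapter 3 might reduce a stroke of (x, y) touch points to a short sequence of direction bins:

```python
import math

def direction(p, q, n_bins=8):
    # Quantise the angle of the segment p -> q into one of n_bins compass bins.
    angle = math.atan2(q[1] - p[1], q[0] - p[0]) % (2 * math.pi)
    return int(round(angle / (2 * math.pi / n_bins))) % n_bins

def extract_features(points):
    # Emit a new feature only when the quantised direction changes.
    features = []
    for p, q in zip(points, points[1:]):
        d = direction(p, q)
        if not features or features[-1] != d:
            features.append(d)
    return features

# A rough 'V' stroke: down-right, then up-right (two direction features).
print(extract_features([(0, 10), (5, 0), (10, 10)]))
```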

Flow The state in which a successful balance exists between a user's ability and the perceived challenge or difficulty of a task. It is associated with an experience that is in itself so gratifying that the user is willing to do it for its own sake, with little concern for what they might get out of it in return or for whether it was difficult. See Chapter 2.

Flow State Scale [FSS] A questionnaire used to measure the flow experienced by a participant; it is commonly used in sport and physical activity settings. See Chapter 5.

Gestures An interaction style which uses a symbolic method of giving a command to a computer, where a gesture's location, size and dynamic properties play a role in its differentiation. Gestures are considered a more natural means of interacting with a computing device and are usually drawn using an input device. See Chapter 2.

Gesture recognition A system used to interpret and recognise gestures performed by a person. See Chapter 3.

Hidden Markov Model [HMM] A statistical Markov model in which the system being modelled is composed of unobservable states; only the output of each state is visible. Each state has a probability distribution over the possible output tokens, so the sequence of tokens generated by a Hidden Markov Model gives information about the underlying sequence of states. HMMs are typically used in pattern recognition systems. See Chapter 3.
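The sketch below is an assumed, illustrative example (toy parameters, not the recogniser built for this project) of the standard forward algorithm, which computes the likelihood of an observed token sequence under a discrete HMM:

```python
def hmm_forward(pi, A, B, observations):
    # pi[i]: initial probability of state i; A[i][j]: transition i -> j;
    # B[i][o]: probability that state i emits token o.
    n = len(pi)
    alpha = [pi[i] * B[i][observations[0]] for i in range(n)]
    for obs in observations[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs] for j in range(n)]
    return sum(alpha)  # total likelihood of the observation sequence

# Toy two-state model over two direction tokens (0 and 1).
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(hmm_forward(pi, A, B, [0, 0, 1]))
```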

Immersion The psychological state in which the immersant's awareness of their physical self is diminished or lost due to the level of stimuli produced by a technological system. See Chapter 2.

Independent variable A variable which is independent of other variables. This type of variable is often used in mathematics and statistics. When used in conjunction with a dependent variable, it allows the ability to distinguish between two types of quantities and what they produce. See Chapters 4 and 5.

Independent Television Commission's Sense of Presence Inventory [ITC-SOPI] A questionnaire used to quantify presence through the use of four independent sub-scales, namely sense of physical space, engagement, ecological validity and negative effects. See Chapter 2.

Indirect touch An interaction using a pointing device, such as a mouse or a track-pad, where the user interacts with the device off-screen and has to visually track a cursor on-screen. See Chapter 2.


Integrated development environment [IDE] An application or environment that provides a number of facilities to computer programmers for software development. An IDE usually consists of a source code editor, compiler and debugger. See Chap

Likert scale A psychometric scale used in questionnaires, where the respondent specifies their level of agreement with a statement. For example, a 5-point scale will have the following levels: strongly disagree, disagree, neither agree nor disagree, agree, strongly agree. See Chapter 5.

Lopez-Dahab [LD] coordinates Coordinates used to represent elliptic curve points on binary curves y^2 + xy = x^3 + ax^2 + b. The triple (X, Y, Z) represents the affine point (X/Z, Y/Z^2).

Multivariate analysis of variance [MANOVA] A statistical test used when more than one dependent variable is present. As well as identifying whether changes in the independent variables have significant effects on the dependent variables, it is also used to identify interactions between dependent and independent variables. See Chapter 5.

Multi-touch The use of two or more fingers as input onto a touch screen. See Chapter 2.

Multi-finger interaction The use of two or more fingers on a computing device. See Chapter 2.

Open Graphics Library [OpenGL] A cross-platform, cross-language graphics library used to produce 2D and 3D computer graphics. See Chapter 3.

Presence The illusion whereby the participant believes the environment that they are being presented with, through some medium, is real. See Chapter 2.

Puzzle games A video game genre that emphasises strategic, logical, pattern recognition and sequence-solving challenges. The game-play usually involves shapes, colours or symbols which must be directly or indirectly manipulated to form a specific pattern or objective.

PVRTC A lossy image format optimised for use in PowerVR MBX technologies. See Chapter3.

Significant value A result that is unlikely to have occurred by chance; the term is used extensively in statistics. See Chapter 5.

State-machine A system with a finite number of states, which transitions from state to state when certain conditions are met, typically defined by each state and by the system running the state machine. The states are composed of arbitrary data structures and are associated with a number of functions. State-machines are typically used in parsing systems. See Chapter 3.
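As a minimal sketch (assumed for illustration; the engine actually built for this project is described in Chapter 3), a gesture definition could be encoded as a state machine that advances one state per expected direction token and accepts once the final state is reached:

```python
class StateMachine:
    def __init__(self, expected_tokens):
        self.expected = expected_tokens  # the condition for each transition, in order
        self.state = 0                   # index of the current state

    def step(self, token):
        # Transition only if the token matches the condition of the current state.
        if self.state < len(self.expected) and token == self.expected[self.state]:
            self.state += 1
            return True
        return False

    def accepted(self):
        return self.state == len(self.expected)

# A hypothetical 'V' gesture: a down-right stroke followed by an up-right stroke.
v_gesture = StateMachine(["down-right", "up-right"])
for token in ["down-right", "up-right"]:
    v_gesture.step(token)
print(v_gesture.accepted())  # True
```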


Strategy computer game A video game genre that emphasises strategic, tactical and logical challenges. The goals of the game vary from eliminating all opposing units to capturing targets. Most strategy games involve a certain amount of warfare, as well as environment exploration and resource management. See Chapter 3.

Type 1 error (false positive) The error of rejecting a null hypothesis when it is actually true. See Chapter 3.

Type 2 error (false negative) The error of failing to reject a null hypothesis when it is actually false. See Chapter 3.

Unimanual interaction Interactions done with one hand. See Chapter 2.

Usability tests The evaluation of a product by testing it on users. The aim of the tests is to systematically observe how people interact with a product to discover errors or areas for improvement. Tests are measured in terms of user performance, accuracy, recall and emotional response.

Virtual reality The use of computer-simulated environments to simulate places, both real and imaginary. These environments are primarily used as visual experiences and are displayed through computer screens, projectors or stereoscopic displays.


Chapter 1

Introduction

We use our hands to interact with objects: we pick them up, move them, transform their shape and interact with them [44]. In short, we use our hands to interact with the world [17]. Computing devices have become a foundational element in our society and have brought numerous advances to our day-to-day lives. People spend most of their time inputting information into computers, yet the means with which we interact with these devices have not changed in years. As the most useful information does not reside in computing devices but in people [27], the need to optimise the way we interact with them is important. If we do not, the time spent physically performing tasks on computing devices will not improve, even if the time it takes to process that information were to become infinitely fast [17]. Two modes of interaction have become the most prominent in interactive computing devices, namely gestures and direct manipulation, and they differ in their means of display and interaction. Gestures refer to a symbolic sequence of actions drawn onto the device using an input medium, such as a finger, and provide a higher level of abstraction between the interface and the commands issued from it. Direct manipulation refers to the use of constantly visible objects and actions, generally in the form of a button, which replace overly complex syntax and commands.

With the rise in availability of computing devices in everyday society, the computer game industry has also seen significant growth over the last few decades. While this growth has equalled and at times exceeded the film industry with respect to revenue, little is known about the factors that contribute to a successful game [26]. While standard usability tests could be performed on games, they overlook a crucial element that separates games from conventional software, which is that games are used as entertainment and not as utility [42]. As such, usability tests need to be modified to include research from other fields, such as virtual reality, in order to properly evaluate computer games.

Multi-touch technologies allow the user to interact directly with the elements presented on the screen, without the use of external input technologies. They support the use of two or more fingers as input, allowing direct interaction with the device. The benefit of this is that it elicits the feeling of being more "natural" and "compelling" compared to indirect methods of interaction [14]. This affordance could result in improved efficiency and accuracy of interaction. One of the advantages of multi-touch interactions is that they are intuitive and easy to learn. The interface designer can meet the user's expectations by maintaining an analogy to the physical world [31]. This also opens up many possibilities for novel solutions that were previously constrained by single-pointer technologies. Multi-touch technology also enables two or more people to interact on the same device. This allows for collaborative interactions, which could change not only the way we interact with devices but also the way we interact with others.

1.1 Research Questions

This research project aims to address the question of whether gesture-based interaction provides a better means of interaction in games compared to direct manipulation. This is important as there is a need to identify whether gestures improve interaction when compared to more conventional techniques such as direct manipulation. In order to answer this question, three versions of a game will be created, namely a direct manipulation, a gestural and a mixed interface (the mixed interface will contain a combination of direct manipulation and gestural interaction methods). Similarly, two smaller questions are investigated in order to provide an adequate analysis of the main research question. These questions are as follows:

Gesture effects on game genres An investigation is needed as to whether gestures are an appropriate input mechanism for different game genres. In addition, an examination is needed of how the complexity of gestures affects the game-play experience within each genre and whether there are genre-dependent differences in the usage of gestures. In other words, if there are genre differences, how do these affect the appropriateness of gesture specifics? This is important to distinguish, as some genres may lend themselves to gestures more than others.

Gesture effects of participant characteristics By investigating which participant characteristics influence the usage of gestures within games, an accurate analysis can be made of the contributing factors for successful gestural use in games. As such, the investigation aims to establish whether touch device and gaming experience play a role in positive gesture game interactions. This is important not only in analysing why gestures are an appropriate means of interaction, but also in providing insights for future research on gestural interaction in games.

1.2 Project Breakdown

The project is broken up into three sections, namely the gesture engine, the games and the experimental design. Figure 1.1 shows the project breakdown, with each project member's sections displayed in a different colour. The gesture engine contains a feature extraction subsystem [39] and two gesture recognition engines, namely a Hidden Markov Model [49] and a state-machine gesture engine. The latter is outlined in this report. It allows the gestures performed by the user to be recognised by the device.

A total of three games were developed, each focusing on a different game genre. The game genres include a puzzle game [39], a duelling game [49] and a strategy game, which is described in this report. The games developed here are the games that will be used in the experiments.

Lastly, the experiments that are going to be performed were developed by Nina Schiff [39] and include the experimental process as well as the experimental techniques.


Figure 1.1: An overview of the project breakdown between group members

1.3 Outline

In Chapter 2 the previous work pertaining to this project is discussed. This is followed by the design, implementation and validation of the state-machine gesture engine in Chapter 3. The game design, implementation and validation are presented in Chapter 4. Chapter 5 presents the design and procedure of the user experiments carried out, while Chapter 6 presents and discusses the results of the experiments. Finally, Chapter 7 concludes with the main results of this project and possible future work.


Chapter 2

Previous Work

2.1 Introduction

This chapter takes a brief look at the relevant previous work that pertains to this project. This includes, among other things, a brief synopsis of multi-touch technology and interaction styles, direct manipulation, gestures, gestures in games, and the evaluation methods suitable for games.

2.2 Multi-touch

2.2.1 Definition

Multi-touch is defined as the simultaneous use of two or more fingers as input onto a touch screen [9]. This allows for the removal of external input devices, such as a mouse or a track-pad, leaving a direct interaction with the screen. The nature of the interaction is direct, meaning that the user either interacts with the interface or does not. The benefit of multi-touch is the notion that interacting with an application by touching graphical elements with one's hands provides a more "natural" and "compelling" means of interaction compared to indirect input mechanisms [14]. This might also result in improved efficiency and accuracy when interacting with devices [31]. Since objects move and are interacted with in a predictable and natural fashion, users are given the impression of "gripping" real objects [36].

2.2.2 Multi-touch technology

Currently, there is no standard hardware on which multi-touch interfaces are developed. Some devices are based on camera systems, while others employ capacitive technologies. This has led to multiple design approaches based on the capabilities of the devices [45]. For example, Apple's track-pad allows multi-touch gestures, including a four-finger touch gesture, while the HP TouchSmart can only track two points of contact. It should also be noted that not all touch-based technologies support multi-touch; in this case, they only allow single actions to be performed sequentially. Different devices also support different input mechanisms; some allow the user to interact directly with their fingers, such as the Apple iPhone, while others support the use of a stylus, such as Microsoft Windows 6.5.

Figure 2.1: An example of two multi-touch interaction styles. (a) shows the use of bimanual interaction, while (b) shows the use of multi-finger interaction.

Multi-touch interactions are very well suited to object manipulation tasks [29]. By using the analogy of physical manipulation as a model for mapping from hand positions to object parameters, one can construct easy-to-learn techniques that leverage our real-world interaction experience. One of the advantages of touch-screen interactions is that they are intuitive and easy to learn. The interface designer can meet the user's expectations by maintaining an analogy to the physical world [31]. This opens up many possibilities for novel solutions that were previously constrained by single-pointer technologies.
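As an illustration of such a mapping (a hedged sketch, not code from this project), the classic two-finger pinch/rotate manipulation derives an object's scale and rotation parameters directly from the change in distance and angle between two touch points:

```python
import math

def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
    # Scale comes from the change in distance between the two fingers,
    # rotation from the change in angle of the line joining them.
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = dist(p1_new, p2_new) / dist(p1_old, p2_old)
    rotation = angle(p1_new, p2_new) - angle(p1_old, p2_old)
    return scale, rotation

# Fingers start 10 units apart; the second finger sweeps upward and outward.
print(two_finger_transform((0, 0), (10, 0), (0, 0), (10, 10)))  # ~(1.41, 0.79 rad)
```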

The user interacts with the device by touching the screen and selecting objects with their fingers. This type of interaction leaves the assignment of a finger to specific regions up to the person using the device. Although this frees the user to interact with the device as they choose, it has implications for designers of multi-touch interaction techniques and interface designers. To date, no sufficient design patterns have emerged which adequately describe gestural use patterns [31].

2.2.3 Multi-touch interactions

Interactions on a multi-touch device can be broken up into three groups, namely direct-touch, bimanual and multi-finger interactions [23]. Direct-touch interaction refers to the ability to touch a single specific location on the display directly with a finger or a stylus. In contrast, an indirect interaction refers to the use of a pointing device, such as a mouse or a track-pad, where the user interacts with the device off-screen and has to visually track a cursor on-screen. Bimanual interaction refers to interactions done with both hands, while multi-finger interaction refers to the use of multiple fingers but not necessarily multiple hands.

The advantages that multi-touch brings to interaction are numerous. It allows for parallel interactions, meaning that interactions can be performed at the same time, which reduces the complexity required in user interfaces [29]. Another benefit of parallelism is that it potentially leads to faster performance, as the execution of sub-tasks can overlap instead of being performed sequentially. Moreover, each hand can remain in the area of proximity in which it is currently operating, reducing the movement required [11]. Multiple fingers also allow the user to grasp or reach several objects at once; for example, multiple fingers might be used to control an array of sliders in a user interface. Similarly, multiple hands allow users to perform better at tasks requiring separate control points than using only one hand. This has also been shown to be the case when the control points are within the range of one hand's fingers [31]. As described in Section 2.2.5, there are limitations in the degree of independent freedom of fingers with respect to multi-touch interactions, but these are discussed there.

2.2.4 Indirect versus direct-touch interactions

In a single-user setting, direct-touch interactions offer several benefits over indirect interactions. Where indirect interactions require the user to visually track an on-screen cursor in a separate position from the hand, a direct-touch device offers interaction where the input and display are co-located [23].

In an investigation by Sears and Shneiderman [40] into whether interaction target size affects speed and accuracy between indirect and direct-touch interaction techniques, it was found that direct-touch interaction speeds were faster for targets 0.64 cm in width or larger than with an indirect interaction. Users interacting with target sizes of 1.28 cm made 66% fewer errors with the touch screen than with a mouse. It was also shown that direct-touch interaction for single-target selection tasks outperformed indirect mouse-based interactions; however, occlusion due to the imprecision of using fingers as an input mechanism to locate touch points was a major factor in the accuracy reduction of direct-touch interaction [23]. A similar investigation by Forlines et al. [14], which focused on unimanual interaction, showed that single-target selection tasks with target sizes of 1.92 cm and larger were significantly faster for direct-touch interactions compared to an indirect, mouse-based interaction. However, the direct-touch interaction also produced twice as many errors compared to the indirect interaction.

2.2.5 Bimanual interactions

Users are better at selecting and dragging objects using a mouse or track-pad than using one or two hands on a multi-touch device; however, these tasks generally require pixel precision in their selection and manipulation, which is very difficult to achieve with direct-touch interaction [40]. Bimanual interactions have also been shown to be faster than unimanual interactions, but they tend to have higher error rates while performing tasks such as target selection [23]. Even though these errors occur more frequently in bimanual interactions, the time taken to correct mistakes and complete the tasks is shorter compared to unimanual interactions. Unfortunately, only selection tasks have been sufficiently researched, leaving other tasks commonplace in user interface interactions in need of research [31].

Performance comparisons between the use of two fingers on one hand and the use of one finger on each hand show that one-fingered, two-handed interactions are better for performing separable tasks, such as manipulating separate control points on a graphical object, whereas two-fingered, one-handed interactions are better at integral tasks, such as rotations of two-dimensional objects [31].

2.2.6 Multi-touch limitations

While multi-touch devices can detect multiple points of input, allowing the design of high-degree-of-freedom interaction techniques that make use of our real-world manipulation abilities, they are not without their limitations. Because of the physical limitations of the human hand, multi-touch devices suffer from limited precision, occlusion, and limitations on the size and proximity of the actual multi-touch display [31]. The hand and fingers often obscure the very object the user is interacting with. When a finger is used to select an object, the hand may cover certain areas of the screen, making it very difficult to interact with other areas of the screen, as well as hiding what may be crucial aspects of the interface. In contrast, if a user uses an indirect interaction, such as a mouse, no part of the screen is obstructed. In many cases, fingers are wider than many user interface widgets common today, making precise selection difficult [31]. At the same time, it must be taken into account that it is difficult to accurately point at objects that are smaller than one's finger [14]. Simply making targets bigger, so that interactions are easier, may not be possible as it reduces the amount of content displayable on the screen.

A reasonable way of increasing the interaction possible on multi-touch surfaces is with the use of two hands. Two-handed interaction is especially well suited to high-degree-of-freedom symmetric bimanual tasks, such as shape deformation. However, when two interactions are too close together it becomes difficult to distinguish which point on the touch device belongs to which finger [30]. Fingers on the same hand inherit the hand's motion, meaning their movement may be perceived as being relative to the hand's frame of reference. The motion of fingers on separate hands can be more easily uncoupled than that of fingers on a single hand, while the motion of fingers on a single hand is easier to coordinate than that of fingers on opposing hands [31].

Error rates for target selection tests have been shown to be similar across all four input methods. However, bimanual interactions tend to have a higher error rate than unimanual interactions, and there was also a difference of 6.5% in error rates between one-finger and multiple-finger interactions. The high error rates for multi-finger interactions may be due to the increase in cognitive load required to plan movements for multiple fingers independently. In addition, fingers on the same hand are physically constrained to the palm, which limits the set of targets the fingers can access simultaneously [23].

Finally, there is a reason why early scribes began writing with a reed rather than dipping their fingers in ink, and the same applies to multi-touch technologies: it is easier to control and produce lines of higher resolution when using a pen or stylus [30]. It is important for designers to take the context, use and practicality of each input mechanism into account [45]. This means that in order to take full advantage of multi-touch, many applications would have to be redesigned, as most applications only handle sequential interface actions.

2.3 Direct manipulation

2.3.1 Motivation

2.3.2 Definitions

Direct manipulation is defined as the representation of objects and actions, where actions replace overly complex syntax and reversible operations are immediately visible on the interface [41]. The benefits of direct manipulation are manifold. Users unfamiliar with the interface can learn the basic functionality very quickly, while experts can perform a wide range of tasks through successive interactions and can retain operational concepts. Error messages are rarely needed as users can immediately see if their actions are successful, meaning users are less anxious when interacting with the interface since actions are comprehensible and easily reversible. Finally, users become confident using the interface, as they feel in control [41]. Examples of direct manipulation in modern user interfaces are graphical buttons or draggable objects, which the user interacts with directly.


Figure 2.2: An example of a direct manipulation interface, where the user interacts with the interface through buttons.

2.3.3 Limitations

One of the limitations of direct manipulation is that it rises or falls with the quality of the chosen descriptive or graphical representation of the underlying action. This means that the action being represented by the direct manipulation interface must be meaningful and accurately signify the action it controls [15]. This directly affects the learnability and memorability that direct manipulation affords, as these also depend on the user's prior knowledge of the action at hand and its semantic mapping to the system.

One of the benefits that direct manipulation has over long, complex syntactical commands is that it makes complex commands more usable for novice users. Paradoxically, this has a negative effect on expert users, as the loss of language, which is inherent in direct manipulation interfaces, also means a loss of descriptive power for different classes of actions. Examples of this can be found in user interfaces where parsing the interface for advanced commands slows down the time needed to execute commands efficiently. Similarly, the use of familiar real-world metaphors might be beneficial to novice users but not to expert users, who might be familiar with different metaphors than those being presented [15].

2.4 Gestures

2.4.1 Definitions

A gesture is roughly defined as a symbol used to give a command to a computer, where the attributes of the gesture, such as its location, size, extent, orientation and dynamic properties, are mapped to parameters of that command [37]. Because of this, gestures allow a higher level of control over commands, allowing interfaces to be more streamlined, as there is no need for overly complex user interface structures. As McNeill [32] describes, "Indeed, the most important thing about gestures is that they are not fixed. They are free and reveal the idiosyncratic imagery of thought".

2.4.2 Limitations

One of the biggest limitations of gestural interactions is that they are usually completed before the gesture can be classified. This means that the attributes that define and differentiate a gesture from others are only examined once the gesture has been completed, leaving no opportunity for the gesture to be cancelled or manipulated into a different gesture. This has been described as a lack of continuous feedback and has contributed to the awkwardness of performing common tasks with gestures [37]. Similarly, the lack of feedback affects the user negatively, as the user might not be sure how they have commanded the system [48].

Figure 2.3: An example of a gestural interaction.

Another limitation surrounding gestures lies not necessarily in the properties of gestures themselves, but in the systems that classify and differentiate them. To date, gestures have primarily been defined by system designers and technicians, who employ them and teach them to users. Most of the time these gestures are arbitrary and are only chosen because the underlying engine can recognise them reliably [48]. Unfortunately, no current system exists which can recognise fully dynamic gestures [44], which does affect the design of gestures. Recognition versus recall also plays a role, as one could design a system in which all commands were executed with gestural interactions, but such a system would be very difficult to learn [50].

Finally, while gestures might provide a more natural interaction than direct manipulation, often the simplest of gestures are not simple enough. An example of this is a swipe gesture, where a finger moves from one side of the screen to the other to perform a "move back" command, which is more complicated than pressing a button. There are many cases where direct manipulation would be more intuitive and more time saving than performing a gesture.

2.5 Gestures in games

2.5.1 Motivation

Several past explorations into the usage of gestures in games have been performed, and these are invaluable to this project, as its focus is the appropriateness of gestures in games and within game genres. The advantages of using gestures in games are numerous. They replace cumbersome user interface structures and intrusive controllers, allowing players to feel in control of their actions [34]. When combined with external, physical apparatuses, they allow physical freedom of motion, which has become a very attractive scenario for new gaming experiences.

Several commercial games have utilised gestures, such as Lionhead Studios' Black & White, where the player uses the mouse to draw gestures that cast spells. The usage of gestures in such games removes the need for menu structures, allowing the player a more flexible and enjoyable game experience, and allows the player to focus on the object being manipulated instead of on interface objects.

Most past work in the field of gesture-based games uses commercial external apparatuses, such as Nintendo's Wiimote, Sony's EyeToy or Microsoft's Kinect. Other mechanisms involve camera systems, where the user is tracked in the camera's viewport. Gesture engines are usually implemented using a statistical model, such as a Hidden Markov Model [24], or with a gesture spotting method, which is frequent in camera-based systems, where gestures are distinguished from unintentional movements [22].

2.5.2 Limitations

While gestures do provide a new and exciting mechanism for the game-play experience, they have not been proven to provide a new game-play paradigm, nor are they without their limitations [46].

The use of physical gestural apparatuses, while allowing for a more natural method of interacting with a game, brings new issues, such as user feedback and human spatial motion awareness. While gestures can take the form of real-world metaphors, players preferred simple gestures, where the immediate context and actions were identifiable. Similarly, learning gestures and remembering what each gesture does severely affected game-play, at times frustrating players during critical game-play sections. This often results in gesture confusion, and gesture engines that rely on this method have trouble distinguishing similar-looking gestures from one another [34]. Camera systems also have the drawback of being sensitive to the environment in which the player is playing the game, as the surrounding environment needs to contrast sufficiently with the player who is using the device.

In cases where gestures required the use of statistical gesture engines, such as Hidden Markov Models, the accuracy with which a gesture was recognised was largely dependent on the time spent training the gesture engine for the specific player [24]. This severely affects games that rely on get-up-and-go game-play, where players generally do not want to spend time calibrating a game to their method of gesture input. Difficulty is also often found when using the gesture spotting method, as variations in player posture significantly affect which gestures are recognised.

2.6 Methods of evaluating games

2.6.1 Motivation

Games are fundamentally different from general-purpose software, as their main purpose is to entertain rather than to be a utility for accomplishing a specific task [42]. This means that standard usability evaluations will only address a small subset of features in games. As such, psychological findings and evaluation methods are needed to properly assess games.

2.6.2 Immersion

Immersion is defined as the psychological state in which the immersant's awareness of their physical self is diminished or lost due to the level of stimuli produced by a technological system [28]. It has been shown that there are a number of immersion levels through which an immersant proceeds before reaching total immersion. While individuals have given different labels to each stage, such as absorption, engagement and involvement, all these labels describe a state in which the immersant is focused on the experience at hand, whereas total immersion is described as the experience of being wholly engaged and completely involved in this separate reality [12].

However, total immersion is very difficult to achieve [47]. This is because immersion requires the immersant to be fully engaged and to overlook the artificiality of the reality they are interacting with. This engagement is often achieved with the use of an external interface and is only possible in a virtual reality environment, where the goal is to fully immerse the immersant [13].

Measuring immersion

While there has been plenty of discussion of the effects of immersion and what it could mean, there is a lack of quantitative methods for evaluating the different levels an immersant might be experiencing.

A test developed by Witmer and Singer [47] measures the inclination an immersant has to become immersed in an environment. They devised the Immersive Tendencies Questionnaire [ITQ], which uses three dimensions, namely involvement, focus and games, to test the immersive inclination an immersant might have. Involvement assesses the immersant's inclination to become immersed in a medium, while focus measures the degree to which an immersant is able to concentrate on one task. Games measures how often an immersant plays games and how involved they become while playing. The test shows reliability and validity, as well as a high internal consistency rate (α = 0.81). While this test does not directly measure the amount of immersion an immersant experiences in a game environment, it does show how readily an immersant experiences presence, with a significant positive correlation (r = 0.24, p < 0.01) [47].

Similarly, a test developed by Ermi and Mayra [12] identifies three dimensions of immersion, namely sensory immersion, challenge-based immersion and imaginative immersion. Sensory immersion describes the audio-visual experience of the environment, challenge-based immersion describes the combination of the immersant's abilities and the challenge presented in the environment, and imaginative immersion describes the narrative dimension of the environment experience [12].

Finally, there appears to be no standard way of measuring the amount of immersion a game induces in the immersant. While this might be the case, total immersion has been shown to be synonymous with the characteristics of presence. As such, presence might be a better method of gauging the effect games have on a player.

2.6.3 Presence

Presence can be defined as an illusion in which the participant believes that the environment they are being presented with, through some medium, is real [21]. It is an important concept for game environments, as high levels of presence can directly contribute to an increase in the amount of enjoyment and fun ascribed to a particular game. Presence has also been shown to have a positive effect on the ability to complete tasks in games [35].

There are a number of factors that contribute to presence. These can be loosely grouped into player and media characteristics [25]. Media characteristics refer to elements such as narrative, while player characteristics refer to elements such as the individual's perceptual, motor and cognitive abilities, as well as transient factors, such as mood [25]. Selective and focal attention also play a contributing role in presence [47], meaning that, for example, with an increase in game difficulty, players might be forced to be more attentive to the game, which gives them less time to focus on the fact that the environment they are playing in is not real [35].

Measuring presence

The Independent Television Commission's Sense of Presence Inventory [ITC-SOPI] is a questionnaire used to quantify presence through the use of four independent sub-scales, namely sense of physical space, engagement, ecological validity and negative effects [7]. Sense of physical space refers to the feeling of being part of a mediated environment, engagement refers to the emotional aspects involved with having fun and feeling psychologically involved with the environment, ecological validity refers to the belief that the elements being presented in the environment are real, and negative effects refers to the counter-effects experienced in the environment that might negatively influence the degree of presence experienced by the participant [25]. Validations of this questionnaire are promising, as the internal consistency of the test is very high for all four scales (0.77 < α < 0.94), meaning that the test is very reliable.

One of the most widely used introspective techniques for measuring presence is the Slater, Usoh and Steed [SUS] questionnaire. The questionnaire consists of six items, which require the participant to assess the level of presence they experienced in the game, and is administered after they have experienced the game environment. The main downfall of the SUS questionnaire is its subjective nature, which causes internal inconsistencies and validity issues [7].

Apart from the introspective techniques described above, a number of behavioural and physiological methods are able to measure presence. These assessment techniques rely on physiological responses in order to measure the emotions of the participant; examples include monitoring the heart-rate of the participant. It should be noted that these physiological methods are very difficult to administer and may fall prey to physical inconsistencies, such as reactivity [7].

While a number of techniques have been used to measure presence, no single method has been accepted as the standard measurement of presence [26]. Although no consensus has been reached as to the most reliable measurement of presence, the ITC-SOPI has been shown to be highly reliable and, even if not standardised, it is by far the most widely used measure of presence [7].

2.6.4 Flow

Flow can be described as the state in which a successful balance between a player's ability and the perceived challenge or difficulty exists [12]. It can be associated with an experience that is in itself so gratifying that people are willing to do it for its own sake, with little concern for what they might get out of it in return or whether it is difficult [10]. As such, it is a very important concept to analyse when evaluating games.

According to Csikszentmihalyi [10], flow consists of nine dimensions. These are challenge-skill balance, action-awareness merging, clear goals, unambiguous feedback, concentration on the task at hand, sense of control, loss of self-consciousness, transformation of time and autotelic experience. Challenge-skill balance describes the feeling of balance between the situation and the participant's personal skill level. Action-awareness merging describes the level of involvement the participant feels, which might bring about the sensation that one's actions are automatic. Clear goals describes the feeling of certainty that the participant clearly knows what they have to accomplish. Unambiguous feedback describes the feedback the participant receives, which can lead to a feeling of confirmation that everything is going according to plan. Concentration on the task at hand describes the feeling of being truly focused. Sense of control describes the distinguishing characteristic of being in complete control of the experience. Loss of self-consciousness describes the sensation in which all concern for the self disappears and the participant becomes fully part of the experience. Transformation of time describes the sensation of time passing either slowly or quickly, and autotelic experience describes the end result of flow, where something is done purely for its own sake [10].

Measuring flow

Flow has been notoriously difficult to measure, as the only way to measure it directly is by disturbing an already established flow state [26]. Even with this drawback, several attempts have been made to measure flow.

One of the main areas where flow has been studied is sport. Here, Jackson [20] found support for the nine flow dimensions in a qualitative analysis of elite athletes' flow levels. As such, they developed the 36-item Flow State Scale [FSS], using conventional item analysis techniques along with confirmatory factor analysis in order to establish the construct validity of the scale [20].

Physiological methods, such as facial electromyography [EMG], where changes in facial expressions are analysed, have been used to assess a participant's emotional state [16]. The benefit of using physiological methods, such as EMG, is that they bypass the need to interrupt a flow state, which so many other methods require. However, physiological methods of measuring flow are very expensive [7] and introduce other extraneous factors due to the use of external equipment. Because of the external nature of the measurements, it also becomes very hard to determine which element of the flow state is triggered and how. That being said, physiological methods are still highly favoured in the research community [26].

One of the first experiments to evaluate flow in games was done by Sweetser and Wyeth [42], who based their experiment on a fixed set of heuristics derived from the flow dimensions. It was found that their heuristics varied depending on the game genre played and that, while heuristics are useful for identifying potential problems, their use becomes unreliable when attempting to extract the emotional effect an environment might have on participants [42].

As mentioned above, flow is hard to measure without breaking the participant's flow state. While the use of external equipment, such as EMG, does bypass this problem, there are still problems in establishing what triggered the flow state. As such, heuristics, while not able to assess the flow state directly, do prove useful in identifying which factors contribute to it.

2.7 Summary

Multi-touch is defined as the simultaneous use of two or more fingers as input onto a touch screen, allowing the removal of other external input devices. Multi-touch technology was shown to produce a more natural means of interacting with interfaces and does affect the efficiency with which simple tasks can be accomplished. It has been shown that the fastest multi-touch interaction is about twice as fast as a mouse-based selection. Not only does multi-touch interaction "emulate" the way we interact with physical objects, but it is also far more efficient than the indirect interactions we are all used to. That is not to say that multi-touch interactions are without their limitations, as occlusion of screen objects demonstrates. Support for detecting two fingers improves performance in multi-touch interfaces, but support for detecting more than two fingers appears unnecessary for improving multi-target selection performance. In short, the results are inconclusive. While tests using target selection do analyse basic operations, more advanced interactions, such as the manipulation of three-dimensional objects, need more analysis. Factors such as learning, practice and muscle-memory also play an important part in future research.

Direct manipulation was defined as the representation of objects and actions, where simplified and reversible actions replace overly complex ones and remain continually visible on the interface. While direct manipulation does afford a consistent visual representation of objects, allowing easy learnability and anxiety-free interactions for both novice and expert users, it is dependent on the quality of the action's description and illustration. Gestures are defined as a symbolic method of giving a command to a computer, where location, size and dynamic properties play a role in their differentiation, and are considered a more natural means of interacting with a computing device. Gestures allow for a higher level of control over commands, allowing for the near removal of interface objects and structures. However, gestures suffer from a lack of feedback and from classification difficulties, which can lead to user frustration.

Several approaches have been made to combining gestures with games, including commercial games. However, most research tends to rely on external apparatuses for performing gestures. While gestures do allow for a more natural means of interacting with a game, eliminating the need for intrusive game controllers, they are not without their limitations. Issues such as memorability and distinguishing between gestures have negative effects on game-play, especially for highly interactive games which require rapid user interactions. Similarly, games which require external apparatuses face human spatial motion awareness and feedback issues.

The measurement of games proved to be slightly inconclusive, with each component of measurement displaying degrees of inter-connectivity. Immersion was defined as the state in which an immersant feels part of the environment. While different means of measuring immersion do exist, such as the questionnaire developed by Witmer and Singer, there is no standard means of measurement. However, because total immersion displays characteristics similar to those of presence, it might be more appropriate to measure presence in order to gauge immersion. Presence was defined as the illusion in which a participant believes the reality with which they are being presented is real. Similar to immersion, presence showed a resistance to being accurately measured. While more research has been done in the area of presence and how individuals experience it, there is no standardised method of measuring it; the only promising measurement of presence is the ITC-SOPI questionnaire. Flow was defined as the state where a successful balance between the player's ability and the perceived challenge or difficulty exists. Most means of measuring flow have to break the flow state in order to measure it. The only methods that bypass the interruption of the flow state are physiological methods, such as EMG. Physiological methods have drawbacks, such as the expense of the equipment and the difficulty of identifying which factors are producing a flow state. As such, a combination of physiological methods and heuristics can prove to be the most useful in measuring flow.


Chapter 3

Gesture Engine

3.1 Introduction

This chapter deals with the specifics of the gesture engine that was developed to facilitate the use of gestures within games. The gesture engine is a crucial part of the project, not only for the development of the game but also as the input mechanism on which this project is based. A breakdown of how this gesture engine relates to the other group members' work is discussed, as well as how the gesture engine was designed, implemented and validated.

3.2 Gesture Engine Group breakdown

The gesture engine was broken up into separable components that each group member of the project had to design and implement separately. Nina Schiff worked on the feature extraction, while Daniel Wood and this author worked on different implementations of gesture engines. Where Daniel Wood designed and implemented a gesture engine that used a Hidden Markov Model, Pierre Benz designed and implemented a simpler gesture engine based on a state-machine. Figure 3.1 shows the breakdown of components each group member implemented with respect to the gesture engine.

Figure 3.1: An overview of what each group participant contributed to the gesture engine.


3.3 Motivation

The motivation behind the design of the gesture engine presented in this report is threefold. Firstly, the engine is designed to be real-time. Traditionally, gesture recognition is processed after all input has been specified [44], which works well if one has computational time to spare between user input and computational output. This, however, is not the situation when it comes to games, as user input is only one among many computational operations that need to be done in real-time. The gesture engine should recognise features as they are processed, which not only alerts the user as to whether they are performing the correct gesture as they perform it, but also ensures that the user is not surprised in case they performed an incorrect gesture.

Secondly, the engine needs to be fast and light on resources. Since the games are designed to run on the Apple iPad, which is less powerful than standard gaming devices, the recognition of gestures should not take up all the resources of the device.

Lastly, the engine should allow for the simple and easy creation of new gestures. This is essential, as group members will not have time to experiment with input values in order to use gestures in their games.

3.4 Feature Extraction

In order to differentiate gestures from one another, the symbol inputted by the user needs to be broken up into a pattern that accurately represents the gesture. Because the gesture engine's main function is to distinguish and identify each feature as it arrives from the feature extractor, the development of the gesture engine had a dependency on the feature extraction implementation. Major changes in the way each feature was broken up and represented meant that the underlying gesture engine had to be able to interpret it.

Two different feature extraction methods are used. The Linear Node gesture engine uses a parabolic curve-fitting feature extraction method, while the State Machine and Grouped Node State Machine gesture engines use a change-in-direction feature extraction method.

The design and implementation of the different feature extraction methods, as well as the limitations of each method, can be found in Nina Schiff's report [39]. In order to understand the design decisions and motivations behind the gesture engine described here, a brief summary of the feature extractor limitations is presented. While the feature extraction that uses parabolic curve fitting is able to handle curved gestures reasonably successfully, it struggles to handle downward vertical lines. This causes a number of otherwise distinguishable gesture features to be dropped and seriously affects the implementation of the Linear Node gesture engine, as it cannot consistently handle gestures that incorporate downward vertical lines as a distinguishing feature. The feature extractor which uses the change-in-direction method successfully handles all gestures, but struggles to handle curved gestures consistently. Figure 3.2 shows the two different feature extraction methods used in the development of the gesture engine, as well as how each method handles certain gestures.


Figure 3.2: An example of the two different feature extraction methods used in the gesture engine. Both feature extractions are performed on 'V' and 'U' gestures. a) shows the use of parabolic curve-fitting feature extraction, while b) shows the use of change-in-direction feature extraction. Note that the parabolic curve-fitting does not recognise the initial vertical line drawn, which often resulted in 'U' gestures having the same features as 'V' gestures.

3.5 Linear Node

3.5.1 Design

The first implementation of the gesture engine took the form of linear nodes. Here, the main implementation consists of a large array containing gesture definitions. Each gesture type is defined individually and corresponds to the features sent from the feature extractor. Gesture types do not share nodes with any other gestures in the engine, which means that each node only has one parent and one child. Once a node is matched, moving to its child is simply a matter of correctly keeping track of where the gesture is in the engine.

Each gesture node is composed of values which help distinguish it from other features. Firstly, it contains a, b and c values, where each value corresponds to the parabolic equation:

y = ax² + bx + c    (3.1)

These a, b and c values are the mean values derived from performing each gesture 100 times, a process carried out by Nina Schiff. Because every person performs a gesture differently, a rough average of the gesture provides the general case. This is essential to the gesture recognition engine, as the classification and recognition of gesture features enables gestures to be distinguished from one another. However, because the computed values give only a rough estimate of a feature's values, recognition is still liable to fail. As such, error values corresponding to each of the parabolic equation variables are also added to the gesture node. Lastly, each gesture node contains an end point value in integer coordinates, which allows the direction of gesture motion to be calculated.
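To make the node composition concrete, the following is a minimal sketch of what a Linear Node entry might look like in Objective-C, the implementation language used in this project. The class and property names are illustrative and do not necessarily match the actual source:

#import <Foundation/Foundation.h>
#include <math.h>

// A single Linear Node feature: mean parabola coefficients, per-coefficient
// error padding and an integer end point used to derive the direction of motion.
@interface LinearGestureNode : NSObject
@property (nonatomic) double a, b, c;
@property (nonatomic) double aError, bError, cError;
@property (nonatomic) NSInteger endX, endY;
- (BOOL)acceptsA:(double)a b:(double)b c:(double)c;
@end

@implementation LinearGestureNode
// A feature matches the node when each coefficient lies within the padded range.
- (BOOL)acceptsA:(double)a b:(double)b c:(double)c {
    return fabs(a - self.a) <= self.aError &&
           fabs(b - self.b) <= self.bError &&
           fabs(c - self.c) <= self.cError;
}
@end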


Figure 3.3: An example of the Linear Node gesture engine, where each parent node has only one child.

There are two modes in which the gesture engine is used. Initially, the gesture engine loads gesture definitions from a file. This allows developers to specify which gestures can be recognised by the engine and what their features should be. A file format is used to specify each gesture and is loaded into the gesture engine at engine initialisation. The file format is as follows:

gesture identifier (string)

number of features (integer)

...

a b c (float)

a b c error values (float)

end points (integer)

...

In the second mode of the gesture engine, data is received from the feature extractor. This occurs at run-time. When the feature extractor detects a feature, a call to the gesture engine is made, specifying the feature's parabola and end-point. The gesture engine then checks whether the values specified by the feature extractor can be matched in the gesture engine.

3.5.2 Implementation

The Linear Node gesture engine is the first iteration of the gesture engine. It consists of a Node and an Engine class. Gesture definitions are implemented by reading in files at Engine object creation, at which point the files are parsed and loaded into the engine. The resulting nodes are packed into a two-dimensional NSMutableArray. The engine has three additional methods, namely StartGestureRecogniser, EndGestureRecogniser and doesGestureExist, which perform the actual recognition in conjunction with the feature extractor.

The StartGestureRecogniser method is an initialisation method for the gesture recogniser engine and marks the start of the recognition process. Additionally, it resets all tracking variables.


The doesGestureExist method is called every time a feature is detected. It serves as the main function for real-time gesture recognition, as it updates the gesture engine according to how far along the gesture nodes the feature has progressed. Once the method has been called, a number of steps are taken. Firstly, if no previous features have been sent to the gesture recogniser, an iteration through the gesture array is undertaken. Here, every starting node has its values checked against the values sent to the method. More specifically, a check is made to see whether the gesture direction and the input direction are aligned, that is, whether the direction is going in the positive or negative x and y direction. A simplification is imposed, in that the gesture node's x and y end-point values are rounded to either 1 or -1: the value is 1 if it is positive and -1 if it is negative. The same rounding is applied to the end-points passed to the method. The corresponding values are multiplied, and if they point in the same direction, the product will be equal to 1. If this check succeeds, the parabolic equation values are checked against the node's parabolic equation values, padded with the parabolic error values. If the inputted parabolic equation values lie within the padded parabolic values, then the node is recognised and tracker variables mark that node. The pseudo-code for the function is as follows:

doesGestureExist()

loop through gesture array

if features are in the same direction

if (a, b, c) values lie in indexed node’s padded values

track this node
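As a concrete illustration of the direction test, the sketch below builds on the LinearGestureNode class sketched earlier; the engine class and its startNodes and currentNode properties are assumptions introduced here purely for illustration:

// Rounds an end-point component to +1 or -1, as per the simplification above.
static NSInteger RoundedSign(NSInteger value) {
    return value >= 0 ? 1 : -1;
}

@interface LinearNodeEngine : NSObject
@property (nonatomic, strong) NSArray<LinearGestureNode *> *startNodes;
@property (nonatomic, strong) LinearGestureNode *currentNode;
@end

@implementation LinearNodeEngine
- (BOOL)doesGestureExistWithA:(double)a b:(double)b c:(double)c
                         endX:(NSInteger)endX endY:(NSInteger)endY {
    for (LinearGestureNode *node in self.startNodes) {
        // Directions agree when the products of the rounded components are positive.
        BOOL sameDirection = RoundedSign(node.endX) * RoundedSign(endX) == 1 &&
                             RoundedSign(node.endY) * RoundedSign(endY) == 1;
        if (sameDirection && [node acceptsA:a b:b c:c]) {
            self.currentNode = node;   // tracker variable marking the matched node
            return YES;
        }
    }
    return NO;   // no starting node matched the incoming feature
}
@end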

The EndGestureRecogniser method is the final stage in the gesture recognition engine and is called once all feature extractions have been completed for a gesture. This is the end stage of the gesture recognition process, meaning that if the engine is in a final node state, then a complete gesture has been recognised. Otherwise, a complete gesture has not been recognised, even though some nodes might have been recognised in the process.

3.5.3 Limitations

The Linear Node gesture engine has many limitations, and it was decided to modify the way in which the gesture engine handles gestures internally and externally. Firstly, the use of a gesture array has certain performance issues. Continuous iterations through the gesture array, together with the tracker variables needed to keep track of which index of the array is being recognised, not only make for complicated code but also seriously affect the rate at which other operations are executed. For example, if sixteen gestures were loaded into the gesture engine, the gesture array would be iterated sixteen times, regardless of which features have previously been recognised. Secondly, because each state is independent of the whole engine, nodes sharing the same values, or multiple nodes recognised by one inputted gesture, are all equally valid, and no differentiation is possible as to which gesture is actually being recognised. Similarly, because each node is independent of other nodes, the gesture engine grows significantly with the number of gestures. Lastly, classifying gestures by hand is a painstaking task, as numerous trial-and-error modifications need to be made to the parabolic error values to ensure that the correct gestures can be recognised. Even when gestures are recognised, this recognition only applies to the person testing and adjusting the gestures, as the values are suited to their method of drawing and not necessarily to other users.


3.6 State Machine

3.6.1 Design

To address the issues of the Linear Node gesture engine, a directed-graph state-machine is proposed as a solution. Instead of using an array of gestures, where each node is independent of all the other nodes in the engine, a StateMachineNode is designed to have multiple parents and children. This means that only one node can be accepted at each step in the sequence of gesture features being inputted, which eliminates the need for tracker variables and any post-gesture recognition methods used for distinguishing completed gestures.

It should also be noted that several adjustments are required to the feature extraction. The change-in-direction method of feature extraction was selected as the best method of extracting features from gestures being performed. As such, parabolic equation variables, parabolic error variables and end-point variables are no longer used to distinguish gestures from one another. Instead, a gesture direction map is used (as can be seen in Figure 3.4). The direction map converts features to a 360° Cartesian plane and specifies features in increments of 0, 45, 90, 135, 180, 225, 270 and 315°. This was found to adequately describe all features coming from the feature extractor. StateMachineNodes are described in terms of directions, which means that the total number of gesture nodes remains small, as the nodes are described in a manner which accounts for overlaps.
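The mapping from a raw feature angle onto the eight discretised directions can be expressed as a small helper function. The sketch below assumes angles are supplied in degrees; the tolerance bands used (±15° around the 90° multiples and ±30° around the diagonals) are those described in Section 3.6.2 below, and the function name is illustrative:

#import <Foundation/Foundation.h>
#include <math.h>

// Rounds a feature angle (in degrees) onto the direction map: 0, 45, 90, ..., 315.
// The 90-degree multiples claim a +/-15 degree band; everything else falls into
// the +/-30 degree band of the nearest diagonal.
static NSInteger DiscretiseDirection(double angleDegrees) {
    double angle = fmod(angleDegrees, 360.0);
    if (angle < 0.0) angle += 360.0;

    double nearestCardinal = 90.0 * round(angle / 90.0);
    if (fabs(angle - nearestCardinal) <= 15.0) {
        return (NSInteger)fmod(nearestCardinal, 360.0);
    }
    return (NSInteger)(90.0 * floor(angle / 90.0) + 45.0);   // nearest diagonal
}

For example, a feature at 81° is mapped to 90°, while a feature at 20° is mapped to 45°.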

Figure 3.4: The direction map the State Machine gesture engine uses to map features. Also shown are the zones in which features are rounded.

3.6.2 Implementation

The implementation follows a similar structure to the Linear Node gesture engine, in that it reads the input file at creation and contains the StartGestureRecogniser, EndGestureRecogniser and doesGestureExist methods. This was meant to keep the gesture engine Application Programming Interface [API] stable, so that different back-ends could be swapped engine-side without impacting other areas or other group members' code.

Gesture definitions are read from a file at Engine object creation, where the files are parsed and loaded into the engine. A root node, constructed from a StateMachineNode, is the highest-level node, of which the first-level gesture definition nodes are children. As the parser works through the input files, it first checks whether a node matching the next feature already exists among the current node's children; if so, it does not create a new node but simply sets the existing node as its current location. If the node does not exist in the gesture engine at that particular level, then a new StateMachineNode is created and set as the current node's child. In this way, node duplication is avoided. Once all the input files have been loaded into the gesture engine, the engine is ready to be used by the feature extractor.
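A minimal sketch of the node structure and of the duplicate-avoiding insertion step is given below; the class shape and names are assumptions based on the description above, not the actual source:

// One state in the machine: a discretised direction and the child states it leads to.
@interface StateMachineNode : NSObject
@property (nonatomic) NSInteger direction;
@property (nonatomic, strong) NSMutableArray<StateMachineNode *> *children;
@property (nonatomic, copy) NSString *gestureIdentifier;   // set on terminal nodes only
@end

@implementation StateMachineNode
- (instancetype)init {
    if ((self = [super init])) {
        _children = [NSMutableArray array];
    }
    return self;
}
@end

// Reuses an existing child with the same direction if one exists, otherwise
// creates it, so that gestures sharing a prefix also share the same nodes.
static StateMachineNode *ChildForDirection(StateMachineNode *parent, NSInteger direction) {
    for (StateMachineNode *child in parent.children) {
        if (child.direction == direction) {
            return child;
        }
    }
    StateMachineNode *node = [StateMachineNode new];
    node.direction = direction;
    [parent.children addObject:node];
    return node;
}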

The main difference between the doesGestureExist method here and that of the Linear Node engine is the removal of gesture tracker variables and of unnecessary iterations through the gesture array. A current gesture node is kept in memory, and checks are made against the current node's children to see if the feature being passed is accepted by one of them. As the children all differ from one another, only one child can be accepted at a particular node. Within the node check, all parabolic parameters are removed. Instead, the feature's rounded direction is checked. Directions around the 90° multiples (0°, 90°, 180° and 270°) accept features within 15° of difference. For example, if a feature has an angle of 81°, it will be rounded to 90°, since it falls in the range (75°, 105°). Similarly, the 45° diagonals accept features within 30° of difference. Once the direction has been discretised, the children are checked to see if the feature's direction matches that of one of the current gesture node's children. If such a child node exists, then the current node is set to that child node, marking that the feature is accepted. Otherwise, the gesture engine signals to the feature extractor that no matches have been found. The pseudo-code for this function is as follows:

doesGestureExist()

select current node’s children

if features are in the same direction as child node

set child node to current

else

signal to feature extractor that no matches have been found
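Expanding the pseudo-code, a sketch of the check, using the StateMachineNode class sketched earlier and a hypothetical engine with root and current node properties, might look as follows (the method names mirror the API described above but are otherwise illustrative):

@interface StateMachineEngine : NSObject
@property (nonatomic, strong) StateMachineNode *rootNode;
@property (nonatomic, strong) StateMachineNode *currentNode;
@end

@implementation StateMachineEngine
- (void)startGestureRecogniser {
    self.currentNode = self.rootNode;   // recognition is assumed to restart at the root
}

- (BOOL)doesGestureExistWithDirection:(NSInteger)direction {
    for (StateMachineNode *child in self.currentNode.children) {
        if (child.direction == direction) {
            self.currentNode = child;   // advance along the accepted child
            return YES;
        }
    }
    return NO;   // signal to the feature extractor that no match was found
}
@end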

3.6.3 Limitations

Several limitations arose while implementing the gesture engine using the State Machine approach. Firstly, the size of the gesture engine remains significant. Even though the engine fares better at recognising gestures than the Linear Node gesture engine, at times performing with a 100% recognition rate, multiple gesture definitions still need to be loaded into the gesture engine in order to recognise similar gestures. This is especially the case when a curved gesture is being specified. An example of this is the spiral gesture, which is composed of curves rather than straight lines. Around eight or more different gesture files need to be loaded into the gesture engine in order for it to recognise every different way of drawing the gesture, not only making it cumbersome to define the gesture but also leading to a large gesture file. Figure 3.5 demonstrates the issues with the spiral gesture, where the feature extractor successfully handles all spiral features but makes it almost impossible to handle all feature cases in a single gesture file. The gesture engine therefore needs to be modified to handle these curved gestures more appropriately.

Figure 3.5: An example of a spiral gesture being performed, with feature extraction displayed as coloured lines. a) Multiple ways of extracting features exist, which shows the difficulty in easily distinguishing a spiral gesture. b) displays the solution the Grouped Node State Machine uses to address the curved features present in the spiral gesture.

3.7 Grouped Node State Machine

3.7.1 Design

The Grouped Node State Machine is designed to address the issue of curved features in inputted gestures. As Figure 3.5 demonstrates, features with curves require multiple gesture definition cases. One way of addressing the problem is by grouping curved features into a "special" node, which allows permutations without creating multiple gesture definitions for a gesture. An example can be seen in Figure 3.5(b), where curved features are grouped, as well as in Figure 3.6, where the gesture engine states for a spiral are shown, comparing the State Machine and the Grouped Node State Machine.

A further requirement at this juncture is that the engine support multi-touch. While the multi-touch handling is done in part by the feature extraction process, the internal gesture states need to differentiate one-finger gestures from multi-finger gestures. As such, the StateMachineNode class has to include a multi-touch value, which means that multi-touch gesture nodes distinguish themselves not only by direction, but also by number of touches.

Furthermore, the input file for handling gesture definitions has to be modified to handle not only multi-touch, but also the grouping of directions in order to handle curved features. As such, the file has the following structure:

gesture identifier (string)

number of features (integer)

...

number of touches (integer)

nodes grouped (integer)

angles in groups (integer)

...

Figure 3.6: An example of the gesture nodes present in both the State Machine and Grouped Node State Machine for a spiral gesture. As can be seen, a) has multiple gesture definitions to handle a single gesture, whereas b) shows a single gesture definition which handles all possible variations.

3.7.2 Implementation

The implementation has the same structure as the State Machine gesture engine, with the modification that the StateMachineNode allows the inclusion of a "grouped" node, which allows a node to have multiple directions, and the inclusion of a touches variable, which specifies how many fingers are performing a gesture.

The doesGestureExist method is modified to incorporate multiple touches from the feature extractor, as well as to handle multiple directions within each node. Firstly, a check is made against the number of fingers used by the feature extractor. This is a quick pass-fail test, which means that the gesture engine only has to check against nodes with the same number of touches. If the number of fingers used by the feature extractor matches the number listed in a node, then the gesture engine performs the actual feature comparisons. In order to accommodate grouped nodes within a gesture, several checks are made. Firstly, a check is made against the current node's directions. If the current node has a direction that matches the inputted feature, then the node pointer stays on the current node. If the feature's direction value is not found in the current node, then a second check is made against the current node's children and their direction values. Figure 3.7 shows a simplified example of this process. The direction checks are exactly the same as in the State Machine gesture engine, where the feature extractor's directions are rounded to the nearest 45° value.
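A sketch of this two-stage check is shown below, extending the earlier StateMachineNode sketch with an NSSet of grouped direction values (directions) and a touches count; the property and function names are illustrative, and the method is shown as it might appear inside the engine class:

// Returns YES if the node accepts the feature: the finger count must match and
// the discretised direction must be one of the node's (possibly grouped) directions.
static BOOL NodeAcceptsFeature(StateMachineNode *node, NSInteger direction, NSInteger touches) {
    return node.touches == touches && [node.directions containsObject:@(direction)];
}

- (BOOL)doesGestureExistWithDirection:(NSInteger)direction touches:(NSInteger)touches {
    // A grouped node may accept several consecutive curved features without advancing.
    if (NodeAcceptsFeature(self.currentNode, direction, touches)) {
        return YES;
    }
    // Otherwise, try to advance to a child that accepts the feature.
    for (StateMachineNode *child in self.currentNode.children) {
        if (NodeAcceptsFeature(child, direction, touches)) {
            self.currentNode = child;
            return YES;
        }
    }
    return NO;
}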

Figure 3.7: An example of two StateMachineNodes used in the Grouped Node State Machine, where a check is first made against the current node's directional values, after which a check is made against the child node.

3.7.3 Limitations

While the Grouped Node State Machine implementation successfully handles spirals and other curved gestures, it still suffers from limitations. While the grouping of direction values within a "grouped" node does help simplify the detection and creation of gesture definitions, it also complicates the gesture engine. Careful design of gestures is needed to address each state, ensuring that each gesture can be successfully found and does not conflict with other gesture nodes. As such, many gestures needed to be modified to accommodate the inclusion of grouped nodes. However, several problems remain. If a grouped node "swallows" other, lone-standing nodes which are gesture endings, which one of the gesture endings did the gesture engine find? Fortunately, none of the gestures designed in this project ever exhibits this problem.

3.8 Validation

In order to test how well the gesture engine described above performs in comparison to a probabilistic gesture engine based on a Hidden Markov Model (as developed by Daniel Wood [49]), several tests were done to measure how accurate the engine is and to test for the occurrence of type 1 and type 2 errors. A type 1 error is known as a "false positive", which, in the context of this gesture engine comparison, occurs when the gesture engine recognises a gesture but classifies it as a different gesture to the one being inputted. A type 2 error is known as a "false negative", which occurs when a correctly inputted gesture fails to be recognised by the gesture engine.

The test was performed using the two engines, both loaded with 'V', 'U', 'Z' and 'spiral' gestures. Each gesture was performed a hundred times by Daniel Wood and the author. The results of these tests can be found in Table 3.1. T-tests were performed on the results of the gesture comparisons and can be found in Table 3.2. The analysis of the data shows that there is no significant difference between the scores produced by the Grouped Node State Machine and the Hidden Markov Model gesture engines for correctly classifying the gestures, with t(3) = -0.2645, p = 0.8085. Similarly, no significant differences were present for type 1 errors, with t(3) = 1.8446, p = 0.1623, nor for type 2 errors, with t(3) = -0.7108, p = 0.5285. These results suggest that both engines performed similarly and that the choice of gesture engine should not be based on accuracy.

                                          V      U      Z      Spiral

Grouped Node State Machine
    Correctly Classified                 86%    74%    97%    99%
    False positives                      14%    11%     0%     1%
    False negatives                       0%    15%     3%     0%

Hidden Markov Model
    Correctly Classified                 87%    95%    99%    83%
    False positives                       0%     0%     0%     0%
    False negatives                      13%     5%     1%    17%

Table 3.1: Results from the gesture engine comparison, showing the percentage of gestures falling into each classification category.

                       Mean of differences   t-value   df   p-value   95% confidence interval

Correctly Classified   -2.0                  -0.265    3    0.809     [-26.062, 22.062]
Type 1 Errors           6.5                   1.845    3    0.162     [ -4.714, 17.714]
Type 2 Errors          -4.5                  -0.711    3    0.530     [-24.649, 15.650]

Table 3.2: Statistical t-test results performed on the gesture engine comparisons.

3.9 Future Improvements

There are several improvements that could be made to the gesture engine described in this report. Firstly, a more robust mechanism for handling complex gestures should be incorporated into the gesture engine. This includes features such as gesture-chaining, where several sub-gestures can be executed sequentially. A more elegant method of handling complex multi-touch gestures also needs to be developed, as the current gesture engine only handles multi-touch gestures where the number of fingers remains constant for the duration of the gesture. Lastly, the gesture engine is tightly coupled to the feature extraction for efficiency reasons. If the feature extraction had to change to incorporate curvature more accurately, then the gesture engine would need to support it.

3.10 Conclusion

In conclusion, a gesture engine has been developed, in the form of a state-machine, to handle the features extracted from user input. The state machine has multiple implementations, with the final implementation handling multi-touch gestures in the form of state nodes. Certain nodes are grouped in order to accommodate curved gestures, which can be extracted in a variety of ways. It has also been shown that the gesture engine performs comparably to a probabilistic gesture engine, in the form of a Hidden Markov Model, in terms of differentiating and accepting gestures. As such, the gesture engine presented here is adequate for use in all the games used in this project.


Chapter 4

Game design

4.1 Introduction

This chapter describes the game that was designed to investigate the usage of gestures in games. The game is one of the three games used in the user experiments. A brief introduction to the game genre that the game is based on is given, followed by the design and implementation of the game and a short test study in which the game was validated. The reasons for presenting the game design here are twofold: it describes the internal mechanisms of actual game-play, providing motivations for why certain game mechanics are used, and it shows how these mechanics are incorporated within the wider framework of addressing the research questions pertaining to the effects of gestures in games and game genres.

4.2 Strategy game

The game is loosely based on the strategy game genre. The main emphasis in strategy games is the use of strategic planning and tactics in order to accomplish certain goals. The player is generally in control of a number of units, which they can build or control, as well as assign commands to. The goals of the game vary from eliminating all opposing units to capturing targets. Most strategy games involve a certain amount of warfare, as well as environment exploration and resource management.

Strategy games are usually broken up into two groups, namely real-time strategy and turn-based strategy. Real-time refers to continuous game-play, where all actions are performed in real time, whereas turn-based refers to discrete game-play, where each side has a phase in which to perform actions.

4.3 Game design

4.3.1 Game-play and key features

The game developed, Capture The Towers, is a real-time strategy game where the objective is to capture three enemy towers, which are surrounded by enemy units. The player controls a number of units, which are used to accomplish the objectives of the game.


World viewing

The player has the ability to view the world environment, which is presented in a top-down, map-like overview. This allows the player to see the terrain as well as friendly and enemy units. Moving around the map is performed either by using the map-navigation buttons, which are only available in the direct manipulation version of the game, or by using the map-movement gesture, a two-fingered swipe.

Resources

The player starts the game with five hundred resources, which can be used to build units. Every time a tower is successfully captured, the player gains two hundred and fifty additional resources, which can be used to build new units. Table 4.1 displays the resources required to build each unit. The resource count is constantly visible to the player and is situated at the top of the screen, next to the time remaining to complete the game.

Unit name       Cost

Foot soldier      20
Archer            30
Chaplain          80
Trebuchet        250

Table 4.1: The cost of each unit available in the game.
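As an illustration of how these costs might gate the build action, the sketch below hard-codes the values from Table 4.1; the enumeration, function names and resource bookkeeping are illustrative and are not taken from the actual implementation:

#import <Foundation/Foundation.h>

typedef NS_ENUM(NSInteger, UnitType) {
    UnitTypeFootSoldier,
    UnitTypeArcher,
    UnitTypeChaplain,
    UnitTypeTrebuchet
};

// Unit costs as listed in Table 4.1.
static NSInteger CostOfUnit(UnitType type) {
    switch (type) {
        case UnitTypeFootSoldier: return 20;
        case UnitTypeArcher:      return 30;
        case UnitTypeChaplain:    return 80;
        case UnitTypeTrebuchet:   return 250;
    }
    return 0;   // unreachable for the unit types above
}

// Deducts the cost from the player's resources if they can afford the unit.
static BOOL TryToBuildUnit(UnitType type, NSInteger *resources) {
    NSInteger cost = CostOfUnit(type);
    if (*resources < cost) {
        return NO;   // not enough resources; the build request is ignored
    }
    *resources -= cost;
    return YES;
}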

Time-limitation

In order to successfully complete the game, all three enemy towers need to be captured within 7 minutes of game-play. If this is not done, the player has failed to accomplish the game requirements and the game ends. The time allocated, and the time remaining for the player to accomplish the game objectives, is visually displayed to the player through a horizontal bar, which appears at the top-centre of the screen.

The enemy

Enemy units occupy the areas surrounding the towers that the player has to capture. They remain stationary and do not move from their allocated positions. They automatically resort to combat if any player units come within their interaction-circle.

The towers

Capturing the enemy towers is the main objective of the game. There are a total of three towers, located across the map. An assorted number of enemy units surround each tower, which the player has to eliminate before they can capture the tower. Capturing a tower is done by positioning a unit over the tower for an allocated time of 20 seconds and can only be performed by foot-soldiers or archers. When the capturing process has started, visual indication of the capturing process is displayed with a pulsing animation. Once a tower has been captured, it changes to a blue colour, indicating that it now belongs to the player. Additionally, the player receives 250 resources, which can be used to build additional units.

Figure 4.1: An example of the direct manipulation interface showing the placement area after a player has selected a foot-soldier to be built.

Building units

In order to add player-controllable units to the game environment, the player has to build units. This becomes especially useful when the player has captured enemy towers and wants to use the resources gained to build more units. Building a specific unit is done by selecting a unit from the user interface or, if the player is playing the gesture-based interface, by performing the unit gesture corresponding to the unit they want to build. Once a unit has been chosen to build, an area of placement is presented to the player. This area is located at the player's initial starting position and is visually indicated by a blue circle. If the player has moved away from this location, for example by moving the screen to a different area of the game map, the screen animates back to this initial location and allows the player to place units on the game map. Interacting with the placement area is done by tapping within the circle presented. If the player does not tap within the circle, the building action is cancelled and the player has to redo the unit building selection process.
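The tap test itself reduces to a point-in-circle check. A minimal sketch, assuming CGPoint screen coordinates and an illustrative placement-circle radius, is:

#import <CoreGraphics/CoreGraphics.h>

// Returns YES if the tap falls inside the circular placement area.
static BOOL TapIsInsidePlacementArea(CGPoint tap, CGPoint centre, CGFloat radius) {
    CGFloat dx = tap.x - centre.x;
    CGFloat dy = tap.y - centre.y;
    return (dx * dx + dy * dy) <= radius * radius;   // squared distances avoid a sqrt
}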


Figure 4.2: Interaction-circles of two units, where (a) shows a foot-soldier within the interaction-circle of an archer, causing the archer to attack the foot-soldier, and (b) shows a foot-soldier and an archer within each other's interaction-circles, where both units will attack each other.

Unit name       Interaction-circle radius

Foot soldier      20
Archer           200
Chaplain         150
Trebuchet        400

Table 4.2: Unit interaction-circle radii.

Combat

Combat happens whenever a unit occupies the interaction-circle of an opposing unit. Certain units, like archers, can attack from a distance, whereas foot-soldiers need to be closer to their opponents in order to attack them. Interaction-circles differ among unit types, and Table 4.2 shows the interaction-circle radius of each unit. Figure 4.2 shows an example of the unit interaction-circles used in the game.

When a unit occupies the interaction-circle of an opposing unit, the unit whose interaction-circle has been infiltrated will target the infiltrating unit. A unit can only target one unit at a time and will continue to attack that unit unless the target moves out of the interaction-circle's range, unless either unit dies, or unless the player assigns the unit to attack a different target.

Once a unit is targeting an enemy unit, it will start attacking. Attacking is performed with an attack-speed time delay, meaning that a unit can only perform its next attack once it has waited a specific amount of time since the previous one. Once an attack has been made, a secondary check is made to determine whether the attack is successful. This check is performed by generating a random number between 0 and 20, which becomes the attack value. If this attack value is greater than the targeted unit's defence modifier, the attack is successful; otherwise, the attack is unsuccessful. If the attack is successful, the attacking unit deals damage to the unit being attacked. When a unit takes damage from an attack, its hit-points decrease. Once a unit's hit-points reach 0, the unit is classified as dead and is removed from the game map.
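As a rough illustration of this attack check, the following C sketch rolls an attack value between 0 and 20 and compares it against the target's defence modifier. The struct and function names here are hypothetical and not taken from the game's source:

#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical sketch of the attack check described above. */
typedef struct {
    int hit_points;
    int attack_damage;
    int defence_modifier;
} AttackStats;

static bool resolve_attack(const AttackStats *attacker, AttackStats *target) {
    int attack_value = rand() % 21;                     /* random value in [0, 20] */
    if (attack_value > target->defence_modifier) {
        target->hit_points -= attacker->attack_damage;  /* successful attack */
        return true;
    }
    return false;                                       /* the attack misses */
}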


Visual indication is given to units attacking targeted units. In order to display that a unit is attacking a target, a red line is drawn from the attacking unit to the targeted unit. In order to animate the attack speed of each unit and to differentiate attacking units from normal, stationary units, a pulsing animation is applied to an attacking unit, indicating when each attack will take place. This pulsing animation takes the shape of a circle and fills the whole area of the unit. A special note should be made about the chaplain. The chaplain is a unit that does not deal any damage to enemy units and only heals friendly units. A chaplain can only heal friendly units if they are within the chaplain's interaction-circle. Once there, the chaplain will heal them every 10 seconds. Healed units gain an additional 5 hit-points every time they are healed by a chaplain. Chaplains also display a healing pulse radius, which indicates when a healing action will take place.

If a unit fires a projectile, like the archer or the trebuchet, the unit does not deal direct damage. Instead, a projectile is launched at the position where the targeted unit was when the attack was made. Projectiles only initiate an attack sequence if the projectile collides with an enemy unit, which also means that a unit could outmanoeuvre a projectile aimed at it. Projectiles only collide with enemy units and will not deal any damage to friendly units. Any enemy unit that intercepts an oncoming projectile will collide with the projectile, causing the projectile to initiate an attack check and destroy itself. Once a projectile has collided with a target, an attack check is made, similar to the attack sequence mentioned above. If the attack value is above the targeted unit's defence modifier, a successful attack has been made and projectile damage is applied to the targeted unit. A note should be made about the trebuchet projectile type. The trebuchet fires a boulder at a location. This boulder is projected into the air and cannot be intercepted by other enemy units. Once it reaches its target, it deals damage over an area, where any enemy unit located in the area of damage triggers an attack check. If the attack check is successful, meaning that the attack value is greater than the unit's defence modifier, the unit is hit and takes damage. Table 4.3 displays the different projectiles, with their movement speed, damage and radius of effect.

Figure 4.3: An example of an attack sequence. A red line is drawn between the archer and the foot-soldier, with a pulsing animation shown in the consecutive images. In step (d) the archer has waited a sufficient amount of time to perform an attack check, whereas in steps (a), (b) and (c) the red circle shows how much longer the archer has to wait in order to perform an attack.


Projectile origin   Movement speed   Damage   Radius of effect
Archer              7                2        -
Trebuchet           3                20       80

Table 4.3: A summary of the projectiles associated with units and their specifications.

Unit name      Hit-Points   Attack damage    Defence modifier   Attack speed   Movement speed
Foot soldier   50           5                5                  10             4
Archer         20           see Table 4.3    10                 50             3
Chaplain       15           -                -                  -              2
Trebuchet      100          see Table 4.3    1                  100            1

Table 4.4: A summary of the units available and their specifications.

Table 4.4 displays the unit specifications of all the units available in the game, including unit hit-points, attack damage, defence modifier, attack speed and movement speed.

Artificial intelligence

The enemy has very basic artificial intelligence. Enemy units do not attack unless the player attacks a specific unit or a player unit comes into their interaction-circle. They stop attacking once the opposing unit is out of their interaction-circle.

4.3.2 Units

Types of units

The game features a range of units that the player is able to control. Each unit has its own strengths, weaknesses and abilities. A summary of the units available is displayed in Table 4.5.

Foot-soldier The foot-soldier is the standard unit of the game. It has the strongest defence in the game, but comes at the price of having a short attack range, since it can only attack enemy units that stand next to it.

Archer The archer provides the standard long-range support in the game. Being able to attack from a long distance and having a fast attack speed, the archer is great for providing covering fire for units.

Chaplain The chaplain is a specialist unit, which functions as a healer. The chaplain can heal nearby units for a certain amount of time, after which it has to recharge its ability. The chaplain has the weakest defence and cannot attack enemy units. The chaplain can only heal surrounding units once it is stationary.

Trebuchet The trebuchet is a siege weapon which has the longest attack range of all the units available. While it can deal a lot of damage if it hits a unit, the weapon is very inaccurate. As such, in contrast to other units, which can only attack individual units on a one-on-one basis, the trebuchet attacks a certain area, which has a radius of damage.


Name           Characteristics
Foot soldier   Short attack distance; hand-to-hand combat; normal attack speed; strong defence; fast movement
Archer         Long attack distance; arrows (projectile); fast attack speed; weak defence; normal movement
Chaplain       Ability to heal nearby units; unable to attack; weak defence; slow movement
Trebuchet      Great attack distance; siege weapon (projectile); slowest attack speed; weakest defence; slowest movement; no close combat

Table 4.5: List of units available in the game

However, the projectile fired from the trebuchet is slow moving, meaning that units might be able to outmanoeuvre it. The trebuchet is the slowest moving unit in the game and takes the longest time to recharge between attacks. It also has a fixed attack range and cannot attack any units at a short distance.

Selection and commands

In order to interact with a specific unit, the player simply selects the unit by tapping directly on it. Once the unit is selected, the player can issue unit-specific commands. Basic commands include moving to a specific location on the map or attacking an enemy unit. If a unit is selected, a light-blue selection circle is drawn around it to indicate that it has been selected by the player.

While the player could continually control and command all their units, units can survive in the game world without any player interaction. For example, if an enemy unit comes within the range of a unit's interaction-circle, the unit will automatically attack the enemy.


4.3.3 Gesture design

Figure 4.4: List of gestures available in the gesture-based and mixed game interfaces

Gestures allow the player to interact with and control the game. Figure 4.4 displays the list of gestures available in the game, which is also displayed to the player in-game.

General Purpose Gestures

In case the player makes a mistake or wants to perform a different action than the one currently being performed, a general cancel gesture is available.

Map Gestures

Other than directly manipulating the world through touch events, the player also has access to specific map gestures that manipulate the way the player views the world.

Unit Grouping Gestures

Units have specific grouping gestures which can be applied to them by the player. This allows for a higher level of control and allows the player to command more than one unit at a time.


Figure 4.5: Time-limitation interface, where a coloured bar indicates the remaining time the player has to complete the game and the severity of the total time left

4.4 User interface

4.4.1 Time-limitation interface

The time-limitation interface is constantly visible to the player. The bar is colour coded, indicating the total time left for the player to complete the game. Figure 4.5 shows the time-limitation interface and the different colour states it can be in.

4.4.2 Unit interface

The unit interface is located at the bottom of the screen and displays the unit currently selected by the player. Three different interfaces are designed to accommodate the direct-manipulation, mixed and gesture-based versions. With regards to the selection of unit modifiers, the gesture-based interface relies on gestures to modify the unit abilities, while the direct-manipulation interface requires the player to directly interact with the user interface elements displayed at the bottom of the screen. The mixed interface combines direct manipulation and gesture elements.


Figure 4.6: A display of the different interfaces the game will display depending on the game style. (a) shows the direct-manipulation interface, where buttons allow the player to interact with unit-based commands, while (b) shows a mixed interface, where some direct-manipulation buttons are still available, and (c) shows a gesture-based interface, where all buttons have been removed as the interactions are all gesture-based.

4.4.3 Minimap interface

The minimap interface indicates where the player currently is with regards to the game map. The player's position is indicated by a light-blue rectangle which is displayed on top of the actual game map.

4.4.4 Direct manipulation, mixed and gesture-based interface

The game features three interface versions: one with a gesture-based interface, one with a combination of direct manipulation and gestures, and one with a direct-manipulation interface. The main difference between the direct-manipulation and gesture-based interfaces is that the direct-manipulation interface has more user interface controls for unit behaviour than the gesture-based interface. Figure 4.8 shows the direct-manipulation interface, while Figure 4.9 shows the mixed interface, which combines gestures for map movement and group selection with buttons for unit building and behaviour commands. Figure 4.10 shows the gesture-based interface, where all buttons have been removed to allow for purely gestural interactions.


Figure 4.7: Minimap interface which is located at the bottom left of the interface

In order to indicate which gestures the player can perform on a unit or group of units, the right-hand side of the screen displays the list of available gestures. This list is context sensitive and changes depending on the actions of the player, which allows the player to know exactly which gestures are appropriate during the course of the game. In the list of available gestures, the previously activated gesture is coloured in red, whereas the other gestures, which follow the previously selected gesture, are coloured in black, with circles around the starting points of the gestures to denote where each gesture starts and where it ends.

Figure 4.8: An example of a direct manipulation interface, showing a foot-soldier being selected by the player. Notice that the map movement buttons are displayed on the left, right, top and bottom of the game map and will move the screen in the corresponding direction depending on which button has been pressed.


Figure 4.9: An example of a mixed interface, showing a foot-soldier being selected by the player. A list of context-sensitive gestures is displayed on the right-hand side of the screen.

4.5 Art and design

4.5.1 World design

The design of the world is shown in Figure 4.11.

4.5.2 Unit Design

The design of the units is very minimalistic, with an emphasis on easy identification and location, whether the player is selecting units or finding them on the map. Figure 4.12 shows how player and enemy units are differentiated and Table 4.6 provides an elementary design of the different units available in the game.

Table 4.6: Unit style and design, showing the visual design of the Foot Soldier, Archer, Chaplain and Trebuchet


Figure 4.10: An example of a gesture-based interface, where a group one selection gesture has been performed. A list of context-sensitive gestures is displayed on the right-hand side, as well as the previously performed gesture, which is shown in the top right corner

Figure 4.11: An overview of the main game environment, displaying the playing environment where game-play will occur and where the towers will be located. The blue rectangle displays the size of the display screen.

4.6 Implementation

4.6.1 Technical Specifications

Capture The Towers is designed and implemented to be played on an Apple iPad [2]. The Apple iPad has a touchscreen display with a screen resolution of 1024 x 768 pixels and supports the use of fingers instead of a stylus as the input mechanism. The device houses a 1 GHz Apple A4 processor, 256 MB DRAM and uses the PowerVR SGX 535 GPU for graphics.

The game was developed on Apple Mac Minis [3] running OS X 10.5 (Snow Leopard) [4]. XCode [5] was used as the main development environment and Inkscape [18] was used to produce the graphics. Five different frameworks were employed in producing the game, namely CoreGraphics, QuartzCore, OpenGL ES 1.1 [33], UIKit [1] and Foundation. The programming languages used to develop the game were C and Objective-C.


Figure 4.12: An example of how player and enemy units are displayed in the game. Player-controllable units have a blue circle surrounding the unit, while enemy units are surrounded by a red circle.

4.6.2 Class Layout

Figure 4.13: Representation of the class hierarchy used in the game. The lines between the nodes represent how each class uses other classes, where the arrows indicate the direction of class use.

Figure 4.13 provides an overview of the classes used in the implementation of the game. The EAGLView class acts as the main class and provides the game logic. The Renderer class is responsible for coordinating all on-screen renderings of the game world, which includes drawing of units, unit projectiles, towers and gesture interactions. The EventHandler class handles all events received from the player, while the Unit, Projectile and Tower classes handle the specific units, projectiles and towers that are interacted with within the game. The Help and UIManager classes provide general user feedback and visual feedback from the game environment.

4.6.3 EAGLView

EAGLView is a class that Apple's XCode IDE automatically sets up if an OpenGL application is created for the iPad, and it acts as the main ViewController for the application. With this template code, the game uses the following main loop to execute all game logic:


while (game_running) {
    update_timers();      /* keep basic game-play timing consistent        */
    event_handling();     /* process touch events from the player          */
    update_units();       /* interaction-circle checks, combat and cleanup */
    render();             /* draw the frame through the Renderer class     */
}

Formal touch-based event-handling is done in this class, through Apple's prescribed TouchesBegan, TouchesMoved and TouchesEnded functions. Here, multi-touch applications detect touch specifics, such as the location touched on the screen. Basic multi-touch is supported; however, keeping track of individual touch events requires manual tracking of separate UITouch pointers. Fortunately, the game only uses a maximum of three touches. All touch events are put into a three-element NSMutableArray called touchesArray, where each element of the array accounts for one of the three touches. The TouchesMoved function detects if a touch moves and adds the new touch coordinates to touchesArray, depending on which element the touch belongs to. Independent of whether a touch event is a gesture or a single touch, they all begin at TouchesBegan and end at TouchesEnded. In order to differentiate whether a single touch or a multi-touch event occurred, the TouchesEnded function checks the length of the particular element of touchesArray. If the length is less than two, then a single touch has occurred; otherwise a gesture is expected. All touch events are further processed by the EventHandler class.
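The following C-style sketch illustrates the idea of classifying a completed touch by the number of points recorded for it; the names and the fixed-size point buffer are assumptions made for illustration, since the game itself stores this information in the Objective-C touchesArray described above:

#define MAX_TOUCH_POINTS 256

/* Hypothetical record of one tracked touch: the points it passed through. */
typedef struct {
    float x[MAX_TOUCH_POINTS];
    float y[MAX_TOUCH_POINTS];
    int   count;                 /* number of points recorded so far */
} TouchPath;

typedef enum { SINGLE_TAP, GESTURE_STROKE } TouchKind;

static TouchKind classify_touch(const TouchPath *path) {
    /* Fewer than two recorded points is treated as a single tap;
     * anything longer is handed to the gesture engine. */
    return (path->count < 2) ? SINGLE_TAP : GESTURE_STROKE;
}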

Finally, the main game-loop uses timers to ensure that basic game-play is consistent. Timers are also used to ensure that the game only lasts seven minutes and are widely used in the Unit, Projectile, Tower and UIManager classes.

Separate player and enemy unit arrays are created at the start of the game. This allows player units to be added or removed dynamically. Accessing player or enemy units means that an iteration through the player and enemy arrays is needed; however, the number of enemy or player units in the game is small enough that it never affects performance negatively. All the enemy units and towers are created and placed at the start of the game. Similarly, three group arrays, which are used in the game to assign units to different groupings, are created at the start of the game. Enemy and player-based projectiles are also created, with a total of fifty Projectile objects created within each projectile array, to ensure that no unnecessary object creation and removal is done and that game-play is not unnecessarily slowed down. More information on how the projectiles are handled is discussed in the Projectile section.
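A minimal C sketch of such a fixed-size pool is shown below; the identifiers are illustrative rather than the game's actual names:

#include <stddef.h>

#define PROJECTILE_POOL_SIZE 50

/* Hypothetical pooled projectile: fifty are created up front and recycled,
 * so no allocation or deallocation happens during game-play. */
typedef struct {
    float x, y;
    int   active;   /* 0 = free for reuse, 1 = currently in flight */
} PoolProjectile;

static PoolProjectile player_projectiles[PROJECTILE_POOL_SIZE];

/* Returns an unused slot, or NULL if all fifty are currently in flight. */
static PoolProjectile *acquire_projectile(void) {
    for (int i = 0; i < PROJECTILE_POOL_SIZE; i++) {
        if (!player_projectiles[i].active) {
            player_projectiles[i].active = 1;
            return &player_projectiles[i];
        }
    }
    return NULL;
}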

Keeping track of world coordinates is very important for the game, not only for moving the map around, but also for displaying on-screen interactions, such as gesture drawings. The total game environment is 2050x1800, with the starting screen having its origin in the bottom-left corner. If screen movements, either by gestures or movement buttons, are performed, then the screen moves left, right, down or up in increments of 500. This means that the total game environment can be broken up into nine 500x500 regions, where the game screen, which has a size of (1050, 800), will overlap the regions. The decision to do this is to show the player gradual map movement, instead of big jumps, as the context switch might confuse the player. Similarly, the (x, y) coordinates of touch events need to be offset in order to interact with what is currently being displayed to the player. Figure 4.14 displays the game environment decomposition.
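A small C sketch of this offset calculation, under the assumption that the view origin is tracked as a pair of world coordinates, might look as follows (the names are hypothetical):

/* Hypothetical conversion of an on-screen touch position into game
 * environment (world) coordinates by adding the current view offset,
 * which changes in increments of 500 as the screen is moved. */
typedef struct { float x, y; } Point2D;

static Point2D screen_to_world(Point2D touch, float view_origin_x, float view_origin_y) {
    Point2D world;
    world.x = touch.x + view_origin_x;
    world.y = touch.y + view_origin_y;
    return world;
}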


Figure 4.14: The total game environment broken up into nine 500x500 regions. Note that the game screen, indicated by the blue rectangle, overlaps the nine regions.

Updating the units is done in three main steps, namely checkInteractionCircles, unitCombat and unitStateCheck.

The checkInteractionCircles function checks if units are within the interaction-circle of enemy units or, if the unit is a chaplain, whether there are friendly units in its interaction-circle. However, because a unit can only attack one enemy unit at a time, a check to see if the unit is already attacking another unit is performed first. If the unit is not attacking another unit, then it proceeds to check the interaction-circles. This is done by iterating through the array of units, either player or enemy, and performing a simple distance check. If the unit is a chaplain, instead of iterating through the enemy unit array, it iterates through the player array. Finally, if the unit is not moving or attacking, the distance between the unit and the towers is examined to see whether the unit is able to capture a tower.
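The distance check at the heart of this function can be sketched in C as follows (hypothetical names; radii as listed in Table 4.2):

#include <math.h>

/* Hypothetical interaction-circle test: a unit notices an opponent when the
 * distance between their centres falls within its interaction-circle radius. */
typedef struct {
    float x, y;
    float interaction_radius;   /* e.g. 20 for a foot-soldier, 400 for a trebuchet */
} CircleUnit;

static int in_interaction_circle(const CircleUnit *self, const CircleUnit *other) {
    float dx = other->x - self->x;
    float dy = other->y - self->y;
    return sqrtf(dx * dx + dy * dy) <= self->interaction_radius;
}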

The unitCombat function operates the combat mechanism between units. The function operates in a similar fashion to the checkInteractionCircles function, except that only attacking units are checked. Because it is already known who the target of the attacking unit is, no iteration through enemy arrays is necessary. An attack sequence is performed, which consists of generating a random number between the values of 0 and 20 and checking whether this passes the targeted unit's defence modifier. If it does, then a successful attack has happened and the targeted unit's hit-points are modified according to the attack damage it has received. That being said, a further distinction is made between the foot-soldier, archer and trebuchet, as their methods of attack differ. If a foot-soldier attacks, then the normal attack sequence listed above will occur. If the unit is either an archer or a trebuchet, instead of performing a direct attack, it will launch a projectile at the position of the targeted unit. This projectile is placed in the corresponding player or enemy projectile array.


After the units have been checked, the projectiles are checked for collisions. An iteration through both the player and enemy projectiles is done, which in turn iterates through the respective enemy unit arrays, where a distance check between the unit and the projectile is performed. If the radius value of the projectile and the unit is greater than the distance calculated, then an attack sequence occurs. If the projectile collides with an enemy unit, the projectile is destroyed even if it does not deal damage to the unit.

The unitStateCheck function operates as a cleanup and state update function. It iterates through all the units and removes units that have 0 hit-points. Similarly, it removes references to these units from units that have them as a target, allowing those units to be free to attack elsewhere. If a unit is moving, then its position is updated, and similarly for the projectiles. At this stage a projectile should not have collided with an enemy unit and is thus safe to update.

The updateStateCheck function handles most of the user interface work. This displays the selected unit's icon in the user interface, as well as its hit-points and what group it is assigned to.

UIButtons and UILabels are used to place buttons and labels on the screen. These are added as subviews of the EAGLView and are made visible or invisible programmatically. As UIButtons have their own event handlers, no manual touch event handling needs to be performed. OpenGL rendering is executed through the Renderer class.

4.6.4 EventHandler

EventHandler is an abstraction class that handles the touch events from the device and acts as a logical extension of the event-handling done in EAGLView. This allows the handling of touch events to be more modular and leads to a streamlined codebase.

Firstly, single touch events are handled. The main use-case for single touch events is interacting with units in the game environment. The top and bottom user interface bars do not need manual event handling, as the buttons are handled by separate UIButtons. As such, all touches that are in the location of the top bar and the bottom bar are ignored, which means that only single touches with properties of (top - y > bottom + 150) and (top - y < bottom + 750) are handled.

If a touch event is handled, then the first check is made to see if the player is interacting with a unit on the screen, which is done by touching the unit with a finger. This will select a unit and the user interface will change to reflect which and how many units have been selected. The process of checking whether a unit has been selected is done by performing a distance check between the position touched and the unit's centre. If a unit is interacted with, then the unit will either be selected or, if it has already been selected by the player, it will be unselected.

If a unit has not been interacted with, it means one of two interactions has occurred. Firstly, a unit command might have happened before the touch event occurred and could have taken the form of a gesture or a button press. These actions include moving a unit to a specific location or attacking an enemy unit. To differentiate between a movement command and an attacking command, a boolean value named isAttacking is passed to the EventHandler from the EAGLView class. If a move action is active, then the event handler must ensure that the single touch event does not interact with other units. Conversely, if an attack action was performed, then the single touch event must touch an enemy unit.

Lastly, a special single touch event needs to be handled: unit building.


Unit building can either be done by gestures or button presses. The build type is set by using a TYPE variable, which is assigned depending on what unit is set to be built. Once a build type has been specified, a unit-build area is drawn on the screen by the renderer and the player is asked to specify where they want the unit to be placed within the build area. The build area has a radius of 400 and is placed in the centre of the initial screen at (500, 500). If the distance between the touch position and the build area's centre is less than 400, then a player unit can be placed at that location; otherwise the build area notifies the player that they have attempted to place a unit illegally.
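A minimal sketch of this placement test, assuming the build area centre and radius given above, could look like this in C:

#include <math.h>

#define BUILD_AREA_X      500.0f
#define BUILD_AREA_Y      500.0f
#define BUILD_AREA_RADIUS 400.0f

/* Hypothetical check that a tapped position falls inside the circular
 * build area; taps outside it are reported as illegal placements. */
static int placement_is_legal(float touch_x, float touch_y) {
    float dx = touch_x - BUILD_AREA_X;
    float dy = touch_y - BUILD_AREA_Y;
    return sqrtf(dx * dx + dy * dy) < BUILD_AREA_RADIUS;
}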

If the game version supports gestures, then gestures are handled next. All that is needed is identifying which gesture the player has performed, checking whether it follows the correct gesture logic within the game mechanics and performing the necessary function calls depending on the gesture. The maximum number of gestures in the game is fifteen. The only version that uses all fifteen gestures is the gesture-based interface, whereas the mixed interface only uses eleven, namely the cancel, movement, grouping, attack and move gestures. No extra functionality is required for gestures other than differentiating them, as all of the core functionality is also included in the direct manipulation interface.

4.6.5 GestureEngine

The internals of this class are as specified in Chapter 3; however, it should be noted that the Grouped Node State Machine engine is used. Figure 4.15 shows the GestureStateMachine structure used in the game, with all the gestures loaded. GestureStateMachine was also modified to return a string containing the gesture found.
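Since the engine returns the recognised gesture as a string, dispatching it can be as simple as the following C sketch; the gesture strings and handler functions below are illustrative stand-ins, not the names used in the game:

#include <string.h>

static void cancel_current_action(void) { /* ... */ }
static void start_attack_command(void)  { /* ... */ }
static void select_group(int group)     { (void)group; /* ... */ }

/* Hypothetical dispatch on the gesture name returned by GestureStateMachine. */
static void dispatch_gesture(const char *gesture_name) {
    if (strcmp(gesture_name, "cancel") == 0) {
        cancel_current_action();
    } else if (strcmp(gesture_name, "attack") == 0) {
        start_attack_command();
    } else if (strcmp(gesture_name, "group_one") == 0) {
        select_group(1);
    }
    /* ... the remaining gestures are matched in the same way ... */
}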

Figure 4.15: Internal representation of the GestureStateMachine object nodes that were used in the game.

4.6.6 Unit

The Unit class handles all unit functionality in the game. This includes creating units, setting the unit position in the game environment, setting the unit type, updating the unit position, receiving and sending attack damage, assigning and selecting groups, selecting individual units and rendering the unit in the game environment.

A Unit object functions in the game environment by use of states. This is defined by a STATE enumeration value as follows:

enum STATE {
    ALIVE,      /* default state                                    */
    DEAD,       /* hit-points have reached 0                        */
    MOVING,     /* unit is moving to a location                     */
    ATTACK,     /* attacking a unit inside its interaction-circle   */
    ATTACKING,  /* attacking in response to a player-issued command */
    HEALING     /* chaplain only: healing nearby units              */
};

The default state for a unit is the ALIVE state. Once a unit has 0 hit-points, its state changes to DEAD. The MOVING state describes a unit that is moving, while the ATTACK state is activated once a unit is attacking another unit that is within its interaction-circle. The ATTACKING state is activated once a unit has been issued an attack command from the player. The HEALING state only applies to the chaplain unit and specifies whether it is healing nearby units.

Units are either friendly or enemy units. FRIENDLY units can only attack ENEMY units, and vice versa. Each unit has an ALIGNMENT state, defined with the following enumeration value:

enum ALIGNMENT {
    FRIENDLY,   /* player-controlled units   */
    ENEMY       /* computer-controlled units */
};

There are four different unit types available in the game (as described in section 4.3.2). The unit types are available to both FRIENDLY and ENEMY units and are defined programmatically through a TYPE enumeration value:

enum TYPE {
    FOOTSOLDIER,
    ARCHER,
    CHAPLAIN,
    TREBUCHET
};

Unit object creation initialises all position coordinates, states and timers. Unit types and alignments are also defined at object creation.

Moving a unit to a specific location is achieved by a function call from the EAGLView class. The input values describe the x and y coordinates the unit needs to move to. Using the unit's current position and the inputted x and y coordinates, the angle at which the unit needs to move can be calculated as follows:

angle = atan2(y1 - y2, x1 - x2)    (4.1)

Attacking an enemy unit is similar to moving a unit in the game environment, where the x and y coordinates refer to an enemy unit instead of empty space. Using equation 4.1, the angle at which the unit needs to move in order to attack a unit can be calculated.

The unit position is updated every time updateUnit is called. This is especially useful when a unit's state is MOVING or ATTACKING. Using the angle calculated when moving towards or attacking an enemy unit, the unit's x and y coordinates can then be updated by:


x1 = x + movementspeed * cos(angle)
y1 = y + movementspeed * sin(angle)    (4.2)
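A C sketch combining equations 4.1 and 4.2 is given below; the struct and function names, and the convention that the unit moves towards the supplied target point, are assumptions made for illustration:

#include <math.h>

typedef struct {
    float x, y;
    float movement_speed;
} MovingUnit;

/* Hypothetical per-update movement step: take the heading towards the
 * target with atan2 (equation 4.1) and advance along it by the unit's
 * movement speed (equation 4.2). */
static void move_towards(MovingUnit *u, float target_x, float target_y) {
    float angle = atan2f(target_y - u->y, target_x - u->x);
    u->x += u->movement_speed * cosf(angle);
    u->y += u->movement_speed * sinf(angle);
}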

Rendering a unit to the screen is done through a render function call from the Renderer class. This function draws the unit texture, selection circle, attack-lines and pulsing animations to the screen. The function uses a texture pointer from the Renderer class to bind the texture to the object being drawn.

4.6.7 Projectile

The Projectile class handles projectiles created by the archer and trebuchet unit types. Projectiles have a very limited lifespan which usually lasts a few seconds at most. The class handles initial projectile placements, updating of projectile positions, renderings and, in the case of the trebuchet's projectile type, a splash damage animation.

There are two types of projectiles available in the game, namely arrows and boulders. Archers create arrows and trebuchets create boulder projectiles. Projectile types are defined by a PROJECTILE enumeration value, which is defined as follows:

enum PROJECTILE {
    ARROW,     /* fired by archers    */
    BOULDER    /* fired by trebuchets */
};

As with the Unit objects, a projectile object functions by using lifetime states:

enum LIFETIME {
    ACTIVE,      /* default: projectile in flight                 */
    INACTIVE,    /* collided and awaiting recycling               */
    SPLASH,      /* boulder only: applying splash damage          */
    ANIMATING    /* boulder only: splash-damage animation playing */
};

The default state of a projectile is the ACTIVE state. Once a projectile has collided with an enemy unit, the projectile changes to the INACTIVE state. Only the BOULDER projectile will ever be in a SPLASH state, which is triggered when a boulder has hit the ground and is causing splash damage. The ANIMATING state is for internal animation renderings of the boulder splash damage.

The initial placement of the projectile is done upon object creation. Here, the initial x and y game environment coordinates are provided, as well as the position of the target. Equation 4.1 is then used to calculate the angle between the two points, and hence the trajectory of the projectile, which is used to update the position of the projectile in the updateProjectile function.

The differences between the ARROW and BOULDER projectile types are as follows. Once an arrow collides with an object, the arrow's state is set to INACTIVE and the arrow is then recycled internally.


Once a boulder reaches a target position, instead of being switched to INACTIVE, the state switches to SPLASH. Splash damage is applied once to units that fall within a certain radius of the boulder, after which the boulder's state is changed to ANIMATING.

Rendering a projectile to the screen is done through a render function call from the Renderer class. This function draws the projectile and also draws the splash animation for the boulder.

4.6.8 Tower

The Tower class handles the tower objects required to successfully complete the game. The basic functionality that the class implements is the placement of a tower in the game environment, the timer to successfully capture the tower and the rendering of the tower.

Placing a tower in the game environment is similar to placing units and projectiles. The only difference is that the tower will never be moved from its initial position.

Timers are created once a unit is stationary over a tower. This timer is incremented every second the unit remains stationary and positioned above the tower. If the unit moves, then the timer is reset and the unit has to recapture the tower. The total time it takes for a tower to be captured is 20 seconds. Once a tower has been captured, the tower changes from green to blue.
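A simple C sketch of this capture timer, assuming it is advanced once per second, might look as follows (hypothetical names):

#define CAPTURE_TIME_SECONDS 20

/* Hypothetical capture state for one tower. */
typedef struct {
    int capture_timer;        /* whole seconds a unit has held the tower */
    int captured_by_player;   /* set once the capture completes          */
} TowerState;

static void update_tower_capture(TowerState *t, int unit_is_stationary_on_tower) {
    if (!unit_is_stationary_on_tower) {
        t->capture_timer = 0;                    /* capture progress is lost */
        return;
    }
    t->capture_timer += 1;                       /* called once per second */
    if (t->capture_timer >= CAPTURE_TIME_SECONDS)
        t->captured_by_player = 1;               /* tower changes to blue */
}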

Rendering is done whenever the render function is called and draws the tower in the game environment. A GLuint texture pointer is sent to the render function from the Renderer class so that no unnecessary textures are loaded.

4.6.9 UIManager

The UIManager class is responsible for drawing the top and bottom user interface bars on the device. The top bar consists of a time-limitation bar that changes as the game progresses, and the bottom bar consists of the mini-map, which gives a visual indication of where the units and the view are in the game map. Drawing is done in OpenGL, and the class provides a render method that the Renderer class can call.

The top bar displays the time remaining for the player to complete the game. It uses the StartTime and CurrentTime variables that are defined in EAGLView to display the running time in a graphical form. The bottom bar displays the current position and scale of the view in relation to the game map. Not only does it draw the entire game map with the player's current location, but it also displays the towers, enemy and player units as well. The entire game map is translated to a window of width 240 and height 145. As such, all tower, unit and window positions need to be translated to fit into this context.
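The translation amounts to scaling world positions by the ratio of the mini-map window to the 2050x1800 game environment, roughly as in this C sketch (hypothetical names):

#define WORLD_WIDTH     2050.0f
#define WORLD_HEIGHT    1800.0f
#define MINIMAP_WIDTH    240.0f
#define MINIMAP_HEIGHT   145.0f

/* Hypothetical mapping of a world position into the mini-map window. */
static void world_to_minimap(float world_x, float world_y, float *map_x, float *map_y) {
    *map_x = world_x * (MINIMAP_WIDTH  / WORLD_WIDTH);
    *map_y = world_y * (MINIMAP_HEIGHT / WORLD_HEIGHT);
}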

4.6.10 Renderer

The Renderer class is a controller class for all the OpenGL renderings done in the game. Upon creation, the screen resolution, frame-buffers and textures of the OpenGL environment are loaded into memory. Textures are converted to the PVRTC file format as specified by Apple, which is optimised to work with the iPad's PowerVR hardware. The textures are loaded into texture memory through the use of a texture pointer, and this is passed to each object that needs it.

The main rendering of the game environment occurs in the renderer function. This method provides the logic of which objects need to be drawn and in what order.


Because the game is two-dimensional with the z-axis at 0, the order in which each item is drawn onto the screen is crucial, as OpenGL requires items on top to be drawn last. The sequence of OpenGL renderings is as follows:

void renderer(void) {
    /* Two-dimensional scene: later draws appear on top of earlier ones. */
    draw_towers();
    draw_player_units();
    draw_enemy_units();
    draw_player_projectiles();
    draw_enemy_projectiles();
    draw_ui_elements();
    draw_multitouch_interactions();
}

4.7 Validation

A pilot game test was undertaken during the game development process, in which five peer Computer Science Honours students were asked to play the game and note what they found most enjoyable and most frustrating about the game, as well as providing comments and suggestions. The testing was informal and testers were asked to comment while they were playing the game. Several adjustments were made to the game to address the shortcomings observed during the game testing.

One issue that was encountered early on was that of unit selection. This issue was rectified by increasing the selection radius of each unit. Testers also struggled to distinguish map movement gestures from other gestures, and would often perform a unit command incorrectly, resulting in a map movement gesture being performed instead, leading to confusion and frustration. This was rectified by changing map movement gestures to use two fingers instead of one, which helped testers distinguish map movements from other unit-based actions.

Testers found the lack of visual feedback frustrating, as they were not sure if they were selecting units or operating commands correctly. This was corrected by displaying instructional messages on the screen, such as "Assigning unit to group 1".

Lastly, an animation for building units was incorporated into the game, which animates the screen movement back to the initial map position and scale. This was added because testers would select a unit to build but not know where to place it, as they were not at the original starting location and there was no indication of where the build area was.

4.8 Conclusion

A real-time strategy game was designed and implemented on an Apple iPad and three different interfaces were developed. The game's objective was to build units which could be directed to attack enemy units in order to capture three enemy towers. A total of four unit types were available for the player to build and control, each with different capabilities. Informal user testing was done and game-play adjustments were made to make the game satisfactory to the players.


Chapter 5

Testing and experiments

5.1 Introduction

The following chapter focuses on the experimental design and procedures that were undertaken in order to test whether gestures are appropriate for games and whether that appropriateness differs across game genres. The experimental specifics, such as the independent and dependent variables, how participants were selected, what the control procedures were and the process of the experimental procedure, are discussed. All of the experimental design was done by Nina Schiff [39].

5.2 Motivation

The motivation for performing user experiments is to identify whether a significant difference between direct manipulation, mixed and gesture-based interactions exists. With the results from an experiment like this, it can be deduced which interaction mechanism is more appropriate for games, whether there are significant differences within game genres and which participant characteristics contribute to the usage of gestures within games.

5.3 Independent and dependent variables

The independent variables for the experiment are identified as genre, group and interface type. Genre refers to the game genre being played, which is either a strategy game, puzzle game or a duelling game. This independent variable is used to identify which game genre is more appropriate for each interaction style. Group refers to the three participant groups that are selected on the basis of a participant's technical skills and preferences, where two of the groups are compared against a control group in order to validate the experiment. Interface type refers to the different interaction interfaces used in each game: direct manipulation, gesture and mixed.

The dependent variable was identified as flow scores and was based upon the questionnaire that the participants had to complete during the experiment. The questionnaire used was The Flow Scales [19] and was used to assess the state of flow that each participant was experiencing during each game for each interface. The questionnaire was obtained under license from Prof. James Gain and is mentioned in section 2.6.4.


5.4 Participants

Participants were selected from nearby computer labs, namely The Shuttleworth Lab and SciLab A, which are located in the Computer Science building at the University of Cape Town. Participants were presented with a form (which can be found in Appendix A). The form asked the participants to fill in their name, age, degree and year of study, as well as complete a questionnaire. The participants were also notified that twenty Rand would be made available to each participant who completed the experiment. The questionnaire was in the form of a 7-point Likert scale, and covered computer literacy, time spent using a computer, frequency of games played, preference of game genre, how much they enjoyed a particular game genre, whether they owned a touch device, and their frequency of and comfort with using a touch device. In order to keep the experiment controlled, a control was set around computer literacy, which ensured that all participants had the same level of computer literacy. Participants that did not meet the adequate level of computer literacy were excluded from the experiment. It should be noted that adequate ethics clearance was obtained from the proper authorities before any attempts were made to recruit participants for this experiment. A total of fifteen (N = 15) participants were allocated to each group member, bringing it to a total of forty-five participants (N = 45), where five (N = 5) participants were allocated to each participant group.

The participants who preferred to play strategy games were chosen to play the strategy game that is presented in this report. The participants who chose strategy games as their preference had a mean value of 4.93 for how often they played strategy games (M = 4.93, SD = 1.1813), as well as a mean value of 6.26 for the enjoyment of playing strategy games (M = 6.25, SD = 0.98). The mean age of these participants was 20.133, with a mean computer literacy of 6.2, a mean gaming experience of 5.067 and a mean touch experience of 4.4. No control was made for gender or race as this did not seem to have any relevance to the experiment.

Three groups were allocated within each game genre, where five participants were allocated to each participant group. A gaming group was allocated to participants who had experience playing games. Similarly, a touch group was allocated to participants who had experience with touch devices but not necessarily gaming experience. A control group was allocated to participants who did not have touch or gaming experience. All groups played all three game versions.

5.5 Experimental design

The experiment incorporated a mixed design, where repeated measures were used to pair and match participants between groups. The experiment featured a 3 x 3 x 3 factorial design, where three interfaces, three genres and three participant characteristics per group were used. Lastly, the experiment used a control group to determine what effect touch and gaming experience had on the use of gestures in games.

5.6 Control procedures

Certain control procedures were followed to ensure that the experiment allowed for accurate results. Firstly, each participant had to complete a survey after each game version played.


This helped counter the effects of fatigue, as the participant would have time to do another activity before playing the next game version [6], and practice effects, where the participant becomes more accomplished at the experiment the more time they spend performing the same tasks [6]. Each participant played each game version in a random sequence, meaning that the order in which they played the direct manipulation, gesture and mixed game versions was random. This addresses the issue of learning effects, where the participant becomes more practised at the game over the duration of the experiment, and experimenter bias, where the experimenter might bias the participant towards an expected result [6]. Similarly, each participant had to write a random number to identify each game played. This number, while allowing the experimenter to identify which game version was played, served to distract the participant from identifying what the experiment was about. The participant also received written instructions on the device, minimising all interactions with the experimenter, to ensure that the participant was not biased by the experimenter and to remove any self-consciousness the participant might feel about their progress during the experiment. Demand characteristics, which refer to the demands the participant has to fulfil during the experiment [6], were addressed by the experimenter not explaining the purpose of the experiment until it was completed. Situational effects, where the participant responds to the experiment differently in relation to location or position, were addressed by having each participant undertake the experiment in the same setting and for the same duration. Finally, the Hawthorne effect, which relates to how a participant might modify their behaviour under the observation of the experimenter [6], was addressed by placing the participant and the experimenter in separate rooms.

5.7 Experiment procedure

The actual experiment followed a set of procedures. Before the experiment proceeded, the experimenter was responsible for contacting the participant to remind them of the time they were allocated to undertake the experiment. Once the participant arrived at the experiment location, they were seated at a desk where the Apple iPad and documentation, containing participant information and the questionnaire, were located. The experimenter explained the sequence of events that the participant was required to follow in order to complete the experiment, which included filling out a consent form and providing their demographic information. These forms can be found in Appendix A. The questionnaire takes the form of the Flow State Scale [FSS], which is used to measure the flow experienced by a participant. Conventionally, the questionnaire is used to measure flow in sport and physical activity settings, but it was recommended [39] as a measure of the amount of flow participants experience in computer games. The questionnaire consists of thirty-six questions, which are divided into nine dimensions, each representing a different dimension of flow. Each question asks the participant to recall a particular experience that occurred whilst playing the game, to which they respond using a 5-point Likert response format, where 1 indicates "Strongly disagree" and 5 indicates "Strongly agree". Unfortunately, due to licensing and reproduction restrictions, the questionnaire cannot be included in this report. It should be noted that the topic of flow has been discussed in section 2.6.4 of this report.

Once a participant had filled out the consent and demographic forms, they continued to the device on which the games were to be played. There the participant was presented with a welcome screen, which they had to interact with in order to proceed to the next phase of the experiment.


It should be noted that all instructional messages were located on the device and not on a separate piece of paper, which was meant to help the participant become comfortable and familiar with interacting with a touch device. All the instructional messages that were presented to the participant are located in Appendix B.

Once past the welcome screen, the participant was introduced to a practice session, where they were given the chance to practise performing gestures on the device. This was presented as a separate screen, where the participant was asked to draw the gesture presented on the screen. The gestures the participant had to perform were those present in the gesture and mixed versions of the game, which meant that the participant was not presented with different gestures in the training session and in the games, and which provided a chance to learn the gestures and become comfortable in performing them before the game started. Each gesture needed to be performed ten times by the participant before proceeding to the next gesture. A total of fifteen gestures were performed by the participant before they completed the training session.

Next, the participant was presented with an introduction to the strategy game they were going to be playing. This provided a brief overview of what they could expect the game-play to be, as well as the objectives of the game and a description of the different unit types available in the game.

Once the participant had completed the description of the game, they were presented with a screen describing the game version they were about to play. This introduced the different interaction mechanisms of each game interface. They then moved on to the actual game. The game lasted seven minutes, after which the game would end and the player would be presented with a cue to complete the questionnaire before proceeding. The other game versions followed the same pattern. A total of three game versions were played by each participant.

Having completed all three games and questionnaires, the participant was informed that the experiment was over. The participant then left the room, handing the questionnaire to the examiner and collecting their money. An experiment explanation was presented to the participant, explaining what the experiment was about. It was found that each experiment ran for roughly forty-five minutes in total, meaning no fatigue effects were likely.

5.8 Pilot experiment

A pilot experiment was conducted before the start of the main experiment process and was used to gauge the length of the experiment, as well as to examine whether the experimental instructions presented in the experiment were clear and intuitive. It was found that the initial time of ten minutes per game was too long, and it was reduced to seven minutes. Similarly, certain instructions were modified to address confusion experienced by participants of the pilot study. A counter to indicate how many gestures the participant had completed in the practice session was also added.

5.9 Conclusion

This chapter outlines the design and procedures used to perform user experiments for this project. The independent variables, namely genre, group and interface type, as well as the dependent variable, flow score, were identified and explained. Participants were selected from nearby computer labs, based on their level of computer literacy. A total of fifteen participants were allocated to each game genre, where three participant groups, namely gaming, touch and control, were each allocated five participants based on their gaming and touch device experience.


A 3 x 3 x 3 factorial mixed experiment design was set up, using repeated measures between groups. The control procedures for handling the experiment were described, as well as how the experiment was performed. The results of this experiment are presented and discussed in the following chapter.


Chapter 6

Results and Analysis

6.1 Introduction

This chapter describes the results drawn from the user experiments. These results include the specific results from the strategy game, as well as the results from the combination of all three games. This is done in order to investigate which interaction style best accommodates enjoyable game-play, whether the use of gestures is genre specific, and how participant characteristics influenced the usage of gestures in game-play. All the results of the experiment, including the necessary statistical outcomes, appear in Appendix C.

6.2 Results of the strategy game

The experimental results from the flow questionnaire of the participants who played the strategy game are presented in Table 10.1. Table 10.2 shows the breakdown of the flow score for each of the nine flow dimensions. These flow dimensions, which are described in Chapter 2 of this report, are 'challenge-skill balance', 'action-awareness merging', 'clear goals', 'unambiguous feedback', 'concentration on the task at hand', 'sense of control', 'loss of self-consciousness', 'transformation of time' and 'autotelic experience'. Table 6.1 provides a summary of the flow dimensions.

The average total flow score for the direct manipulation version of the game was 33.8 (SD = 3.93), while the gesture-based interface's average total was 31.73 (SD = 4.423) and the mixed interface's was 29.68 (SD = 5.277). These results, in themselves, are not adequate to show which version of the game outperforms the other versions and why.

To address this issue, statistical analysis, in the form of a repeated measures analysis of variance [ANOVA], was performed on the data. ANOVAs were used to see if there is any difference between groups of variables; an ANOVA provides a statistical test for whether the means of several groups are equal and, as such, generalises t-tests to two or more groups. A repeated measures ANOVA is also used to address the issue of the same subjects being used in each treatment. The dependent variable for the ANOVA is flow, while the independent variable is game interface. Similarly, a multivariate analysis of variance [MANOVA], which is used when more than one dependent variable is present, was performed on the data to see how the interface and participant groups interacted. As well as identifying whether changes in the independent variables have significant effects on dependent variables, MANOVAs are also used to identify the interactions between dependent and independent variables.


Flow dimension                  Description
Challenge-skill balance         The feeling of balance between the situation and the participant's personal skill level.
Action-awareness merging        The level of involvement the participant feels, which might bring upon the sensation that their actions are automatic.
Clear goals                     The feeling of certainty that the participant clearly knows what they have to accomplish.
Unambiguous feedback            The feedback the participant receives, which could lead to a feeling of confirmation as everything might be going according to plan.
Concentration on task at hand   The feeling of being really focused.
Sense of control                The distinguishing characteristic of being in complete control of the experience.
Loss of self-consciousness      The sensation where all concern for self disappears and the participant becomes fully part of the experience.
Transformation of time          The sensation of time either passing slowly or quickly.
Autotelic experience            The feeling of doing something for its own sake, with no expectation for reward or benefit.

Table 6.1: A summary of the different dimensions of flow

The results of the ANOVA and the MANOVA applied to the results from the strategy game can be found in Tables 10.3 and 10.4. The results show no significance between the different interface versions, as well as no significance between the interface versions and participant groups.

6.3 Results of all three games

All three games' flow scores were combined to test for interactions between game genre, interface type and participant group. Similar to the strategy game analysis, a MANOVA was applied to the combination of games. The results of this analysis can be found in Table 6.2. The results of the MANOVA showed significance for all the main effects of the dependent variables except the combination of interface, genre and group. In order to understand how these significant values affect each flow dimension, a univariate repeated measures test was applied for the interface on each of the nine flow dimensions, as well as post-hoc tests for the interactions between interfaces and genres and between interfaces and participant groups. A summary of the results can be found in Table 6.3.

6.3.1 Interface univariate repeated measures results

The univariate repeated measures test was applied to all flow dimensions against the different interfaces. Significance was present in all nine flow dimensions. Interactions between the game interfaces and game genres were also found for the unambiguous feedback and loss of self-consciousness flow dimensions. A Tukey's range test, which is a post-hoc test performed to determine which groups' means are significantly different from one another and which incorporates a method for controlling Type 1 errors, was performed on the univariate values of these results. The test results show that there were a number of significant differences between the interfaces.
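As an illustration of this post-hoc step (again with a hypothetical file and column names rather than the software actually used for the project), Tukey's HSD test can be applied directly to the per-interface scores of a single flow dimension:

import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = pd.read_csv("challenge_skill_scores.csv")  # columns: interface, score

# Compares every pair of interface means and adjusts the p-values so that the
# family-wise (Type 1) error rate stays at the chosen alpha.
tukey = pairwise_tukeyhsd(endog=scores["score"], groups=scores["interface"], alpha=0.05)
print(tukey.summary())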


Effect                   Test    Value      F          Effect df   Error df   p
Intercept                Wilks   0.009898   311.2176               28.0000    0.000000
Genre                    Wilks   0.318534   2.4013     18          56.0000    0.006511
Group                    Wilks   0.396239   1.8313     18          56.0000    0.043880
Genre*Group              Wilks   0.157573   1.8885     36          106.6663   0.006588
Interface                Wilks   0.099285   9.5760     18          19.0000    0.000005
Interface*Genre          Wilks   0.094389   2.3802     36          38.0000    0.004767
Interface*Group          Wilks   0.126765   1.9091     36          38.0000    0.025903
Interface*Genre*Group    Wilks   0.087785   0.9168     72          77.0695    0.644534

Table 6.2: Multivariate tests of significance on all three games. Significant values are shown in red.


A comparison between the means for the challenge-skill balance flow dimension (table 10.6) showed that the mixed interface (M = 14.933) was greater than the direct manipulation interface (M = 11.622), which indicates that the mixed interface elicited more flow for the challenge-skill balance flow dimension than the direct manipulation interface. Similarly, the gesture interface (M = 15.289) had a greater mean than the direct manipulation interface.

The action-awareness merging flow dimension's mean comparison (table 10.8) showed that the gesture interface (M = 15.622) was greater than the direct manipulation interface (M = 13.667) and that the mixed interface (M = 15.511) had a greater mean than the direct manipulation interface, indicating that both the mixed and gesture interfaces elicited more flow for the action-awareness merging flow dimension than the direct manipulation interface.

The clear goals flow dimension mean comparison (table 10.10) showed that the mixed interface (M = 15.200) was greater than the direct manipulation interface (M = 12.578), which indicates that the mixed interface elicited more flow for this flow dimension than the direct manipulation interface. Similarly, the gesture interface (M = 15.356) had a greater mean than the direct manipulation interface.

The unambiguous feedback flow dimension showed an interaction between the interface and game genre independent variables. Tukey's test (table 10.12) showed that there was significance between the different game versions and the game genres. The duelling game's direct manipulation interface (M = 10.867) had a smaller mean value compared to its gesture interface (M = 15.267), meaning that its gesture interface elicited more flow for this flow dimension than its direct manipulation interface.

The concentration on the task at hand flow dimension mean comparison (table 10.14) showed that the gesture interface (M = 15.511) was greater than the direct manipulation interface (M = 13.267), which indicates that the gesture interface elicited more flow for this flow dimension than the direct manipulation interface.

The sense of control flow dimension mean comparison (table 10.16) showed that the gesture interface (M = 14.889) was greater than the direct manipulation interface (M = 12.467), which indicates that the gesture interface elicited more flow for the sense of control flow dimension than the direct manipulation interface. The gesture interface also had a greater mean than the mixed interface (M = 13.356), indicating that it elicited more flow than the mixed interface for this flow dimension as well.



Flow Dimension                  Effect             Interface 1 > Interface 2                          Significance
Challenge-skill balance         Interface          Gestures > Direct manipulation                     0.000111
                                Interface          Mixed > Direct manipulation                        0.000111
                                Interface * genre  Puzzle gestures > Puzzle direct manipulation       0.000293
                                Interface * genre  Puzzle mixed > Puzzle direct manipulation          0.004324
                                Interface * genre  Duelling gestures > Duelling direct manipulation   0.035357
                                Interface * genre  Duelling mixed > Duelling direct manipulation      0.023826
                                Interface * group  Gaming gestures > Gaming direct manipulation       0.001410
                                Interface * group  Touch gestures > Touch direct manipulation         0.000908
                                Interface * group  Touch mixed > Touch direct manipulation            0.000405
Action-awareness merging        Interface          Gestures > Direct manipulation                     0.001201
                                Interface          Mixed > Direct manipulation                        0.00278
Clear goals                     Interface          Gestures > Direct manipulation                     0.000114
                                Interface          Mixed > Direct manipulation                        0.000121
                                Interface * genre  Strategy gestures > Strategy direct manipulation   0.007097
                                Interface * genre  Duelling gestures > Duelling direct manipulation   0.001737
                                Interface * genre  Duelling mixed > Duelling direct manipulation      0.002781
                                Interface * group  Gaming gestures > Gaming direct manipulation       0.000855
                                Interface * group  Gaming mixed > Gaming direct manipulation          0.004454
Unambiguous feedback            Interface * genre  Duelling gestures > Duelling direct manipulation   0.000921
Concentration on task at hand   Interface          Gestures > Direct manipulation                     0.005088
Sense of control                Interface          Gestures > Direct manipulation                     0.000209
                                Interface          Gestures > Mixed                                   0.018505
                                Interface * group  Gaming gestures > Gaming direct manipulation       0.039190
                                Interface * group  Touch gestures > Touch direct manipulation         0.004790
Loss of self-consciousness      Interface * genre  Strategy gestures > Strategy mixed                 0.016831
                                Interface * genre  Strategy direct manipulation > Strategy mixed      0.000233
Transformation of time          Interface          Direct manipulation > Mixed                        0.001579
                                Interface * genre  Strategy direct manipulation > Strategy mixed      0.025650
Autotelic experience            Interface          Direct manipulation > Gestures                     0.003083
                                Interface          Direct manipulation > Mixed                        0.00863
                                Interface * genre  Strategy direct manipulation > Strategy mixed      0.006637
                                Interface * group  Gaming direct manipulation > Gaming gestures       0.030455

Table 6.3: A summary of the results found for all three games.

The loss of self-consciousness flow dimension showed an interaction between the interface and game genre independent variables. Tukey's test (table 10.18) revealed that there were significant differences between the different game versions and the game genres. The mean of the strategy game's mixed interface (M = 10.333) was less than its direct manipulation interface (M = 16.333), meaning that the strategy game's direct manipulation interface elicited more flow for this flow dimension than its mixed interface. Similarly, the strategy game's mixed interface's mean value was also less than its gesture interface (M = 14.600).

The transformation of time flow dimension comparisons (table 10.20) showed significant differences. The mean of the mixed interface (M = 13.156) was less than the direct manipulation interface (M = 15.556), indicating that the direct manipulation interface elicited more flow for the transformation of time flow dimension than the mixed interface.

Lastly, the autotelic experience mean comparisons (table 10.22) showed that there was a significant difference between the gesture and direct manipulation interfaces and between the mixed and direct manipulation interfaces. The means of the mixed interface (M = 12.156) and the gesture interface (M = 12.444) were both less than the direct manipulation interface (M = 14.778), which indicates that the direct manipulation interface elicited more of an autotelic effect than both the mixed and gesture interfaces.


6.3.2 Interface and Genre

Tukey's range test applied to the challenge-skill balance flow dimension (table 10.23) showed that there were significant differences between game genres and interfaces in this flow dimension. The puzzle game's direct manipulation interface (M = 9.333) had a smaller mean value compared to the gesture (M = 14.067) and mixed (M = 13.200) interfaces. This means that both the puzzle game's gesture and mixed interfaces elicited more flow for this flow dimension than its direct manipulation interface. Similarly, the duelling game's direct manipulation interface (M = 12.400) had a smaller mean value compared to its gesture (M = 15.600) and mixed (M = 15.733) interfaces.

Tukey's range test applied to the clear goals flow dimension (table 10.25) again showed that there were significant differences. The strategy game's gesture interface (M = 14.933) had a greater mean value compared to its direct manipulation interface (M = 11.400). Similarly, the duelling game's mixed interface (M = 16.267) had a higher mean value compared to its direct manipulation interface (M = 12.467).

Tukey's range test applied to the transformation of time flow dimension (table 10.28) showed that there were significant differences between game genres and interfaces in this flow dimension. The strategy game's direct manipulation interface (M = 15.067) had a greater mean value compared to its mixed interface (M = 11.133), indicating that the direct manipulation interface elicited more flow for this flow dimension than the mixed interface.

Tukey's range test applied to the autotelic experience flow dimension (table 10.29) showed that there were significant differences. The strategy game's direct manipulation interface (M = 13.933) had a greater mean value compared to its mixed interface (M = 9.333), which indicates that the direct manipulation interface for the strategy game elicited more flow in this flow dimension than its mixed interface.

No significant results were found for Tukey's range tests applied to the concentration on task at hand or sense of control flow dimensions, while unambiguous feedback and loss of self-consciousness were already examined in section 6.3.1.

6.3.3 Interface and Group

Tukey's range test applied to the interface and participant groups (table 10.30) showed that there were significant differences in the challenge-skill balance flow dimension. The gaming group's direct manipulation interface (M = 11.467) had a lower mean value compared to its gesture interface (M = 15.667), which indicates that the gaming group's gesture interface elicited more flow for the challenge-skill balance flow dimension than its direct manipulation interface. Similarly, the touch group's mixed interface (M = 16.667) had a greater mean value than its direct manipulation interface (M = 12.067).

The clear goals flow dimension (table 10.32) showed that there were significant differences between participant groups and interfaces. The gaming group's direct manipulation interface (M = 11.533) had a smaller mean value compared to its gesture (M = 15.667) and mixed (M = 15.200) interfaces, which indicates that both the gaming group's gesture and mixed interfaces elicited more flow than its direct manipulation interface for this flow dimension.

Tests applied to the sense of control flow dimension (table 10.35) showed that there were significant differences between participant groups and interfaces in this flow dimension. The gaming group's direct manipulation interface (M = 12.000) had a smaller mean value compared to its gesture interface (M = 15.133), indicating that the gaming group's gesture interface elicited more flow for this flow dimension than its direct manipulation interface. Similarly, the touch group's direct manipulation interface (M = 11.800) had a smaller mean value compared to its gesture interface (M = 15.600).

Lastly, tests applied to the autotelic experience flow dimension (table 10.38) showed that there was a significant difference between participant groups and interfaces in this flow dimension. The gaming group's direct manipulation interface (M = 14.667) had a greater mean value compared to its gesture interface (M = 10.667), indicating that the gaming group's direct manipulation interface elicited more flow for this flow dimension than its gesture interface.

No significant results were found for Tukey's range tests applied to the action-awareness merging, unambiguous feedback, concentration on task at hand or loss of self-consciousness flow dimensions.

6.4 Discussion

Even though the averaged flow scores for the strategy game showed that the direct manipulation version was preferred to the gesture and mixed interfaces, the lack of any significance between the different interfaces in the ANOVA and the MANOVA shows that there is no clear differentiation between the interfaces in terms of which elicits a higher flow state among the participants. These results might be due to the number of participants chosen for each game and participant group, and to the questionnaire having nine factors, rather than to the game or interface implementations.

While the games considered in isolation did not produce significant differences between the interfaces, the combination of all three games did produce significance. The MANOVA for all three game versions showed significance for all interactions except the interaction between interface, genre and participant group. As such, there is no clear indication that the combination of interface, game genre and participant group affects the flow state. However, what the significance does indicate is that there are interactions that do influence flow. These interactions, as can be seen from the MANOVA, lie in the interface itself, the interface and participant groups, and the interface and game genre.

The results of the univariate repeated measures, in particular the mean values for all the interactions in the Tukey's range tests, showed that the gesture interface was the dominant interface for all three games with regards to eliciting flow, followed by the mixed interface. The direct manipulation interface elicited the lowest level of flow in all three games. This means that the gesture interface provides a more compelling and enjoyable game interaction compared to the mixed and direct manipulation interfaces. The mixed interface also elicits more flow than the direct manipulation interface, meaning that the combination of gestures and direct manipulation elements also provides a more enjoyable game interaction than a direct manipulation interface. While gestures can be seen as the preferred method of interaction for eliciting flow, caveats should be noted. For both the transformation of time and autotelic experience dimensions, the direct manipulation interface elicited the highest flow score. While the transformation of time dimension has been shown to be less important than the other dimensions [20], the autotelic experience is problematic, as it has been identified as being central to the flow experience [10]. One explanation might be participant familiarity with the direct manipulation interface, causing it to elicit the most flow in this flow dimension. While this is an important point to consider, it should not undermine the general preference for gestures as eliciting the most flow overall.


The comparison between interface and game genre showed that the effect of gestures is dependent on game genre. While both the puzzle and strategy games showed higher flow scores in the gestural interface, with the direct manipulation interface scoring the lowest, the duelling game scored the highest in the mixed interface and the lowest in the direct manipulation interface. While the strategy game scored the highest in the gestural interface, the direct manipulation interface elicited more flow in the loss of self-consciousness, transformation of time and autotelic experience flow dimensions. While the transformation of time flow dimension is of less importance compared to the other flow dimensions, the loss of self-consciousness and autotelic experience results show that the deeper flow levels were experienced in the direct manipulation interface. To address this issue, several considerations need to be made. Firstly, these results might be due to the implementation of the strategy game eliciting more flow in the direct manipulation interface than in the other interfaces. Secondly, participants might be familiar with strategy games using direct manipulation interfaces, and as such found that they could enter a flow state more easily with the direct manipulation interface than with the gestural interface. This would certainly contribute to the loss of self-consciousness flow state, as participants who felt more familiar with an interface would feel less self-conscious interacting with it. However, these considerations fall under the realm of speculation, and as such more experimentation is needed in order to find a clear answer.

It should be noted that even though the strategy and duelling games scored higher than the puzzle game with regards to overall flow score, this does not mean that the puzzle game is less suited to gestures than the strategy and duelling games. While the strategy game demands continuous player involvement and interaction, where units might be attacking or under attack from enemy units or the player might be moving units around the map, the puzzle game provides a much slower and more strategic game-play. Similarly, the puzzle game only has the cube in continuous view, whereas the strategy game allows the player to shift focus to different areas of the map. While this is not an exhaustive list of all the differences between strategy and puzzle games, it serves to illustrate how different the game genres are and how their interaction styles differ. Because of the nature of the different game genres, a direct cross-comparison of how the interfaces interacted across genres is not possible.

Lastly, comparisons between the interface and participant groups indicated that gestures elicited more flow than the mixed and direct manipulation interfaces for the touch and gaming groups, but not for the control group, with direct manipulation the weakest of the three interfaces. The significant differences in the gaming and touch groups indicate not only a preference for the gesture interface, but also that it provided a more enjoyable interaction than the other interfaces. One caveat does appear in the gaming group for the autotelic experience flow state. This could be due to participants being very familiar and experienced with direct manipulation, resulting in a high score for this flow dimension. The only interaction that showed significance between the two groups and the control group was found in the transformation of time flow dimension, where the gaming group had a larger mean value compared to the control group. Overall, the results show that the preference for gestures in both the touch and gaming groups was based on touch and/or gaming experience and not on external factors.


6.5 Summary

The results were analysed and showed that significant interactions exist for the interface, the interface and genre, and the interface and participant groups. The interface effect showed that gestures elicited the highest level of flow among all three interfaces. However, the transformation of time and autotelic experience dimensions scored highest with the direct manipulation interface, indicating that the deepest flow states were elicited by the direct manipulation interface. Comparisons between the interfaces and game genres showed that both the puzzle and strategy games scored the highest flow with the gestural interface, with the direct manipulation interface scoring the lowest. The duelling game scored the highest with the mixed interface and the lowest with the direct manipulation interface. Compared to the other two games, the strategy game scored the highest in the loss of self-consciousness, transformation of time and autotelic experience flow states with the direct manipulation interface. While these flow states are indications that the strategy game elicited deeper flow states with the direct manipulation interface, they could be due to the game implementation and participant familiarity with direct manipulation interfaces.

Lastly, comparisons between the interface and participant groups indicated that gestures elicited more flow than the other interfaces for the touch and gaming groups, but not for the control group, with direct manipulation scoring the lowest of all three interfaces. As such, the preference for gestures is dependent on touch and/or gaming experience and not on other factors. The only caveat appears in the autotelic experience flow dimension for the gaming group, where past game familiarity and experience might contribute to this flow state being high.


Chapter 7

Conclusion

This project concludes with the successful implementation of a gesture recognition engine in the form of a Grouped Node State Machine (chapter 3) and a game which conforms to the strategy genre. The game allows the user to control a handful of units, with the objective of capturing three towers by eliminating the enemies that surround them. Three different game interfaces were developed, namely direct manipulation (where the user interacted with buttons), mixed (where a combination of gesture and direct manipulation interactions were present) and gesture-based (where the user performed actions through gestures). User experiments were conducted (chapter 5) and were controlled for computer literacy, with three participant groups identified, namely gaming, touch and control. The gaming group included participants who were experienced with games, the touch group included participants who had touch device experience but not necessarily gaming experience, and the control group included participants who did not have extensive touch device or gaming experience. Five participants were allocated to each group, which brought the total to fifteen participants who played the strategy game. A total of three games were tested, one game per project member, with a total of forty-five participants across the user experiments.

The results of the experiments were analysed (chapter 6), and while it was found that no interaction existed between the combination of interface, group and genre, interactions did exist for the interface, for the interface and genre, and for the interface and participant groups. The interface effect shows that gestures elicit the highest level of flow among all three interfaces. However, the transformation of time and autotelic experience dimensions provided the highest flow scores with the direct manipulation interface, indicating that the deepest flow states were elicited by the direct manipulation interface. Comparisons between the interfaces and game genres showed that both the puzzle and strategy games scored the highest flow with the gestural interface and the lowest with the direct manipulation interface, while the duelling game scored the highest with the mixed interface and the lowest with the direct manipulation interface. In comparison to the other two games, the strategy game scored the highest in the loss of self-consciousness, transformation of time and autotelic experience flow dimensions with the direct manipulation interface. While these dimensions indicate that the strategy game elicited deeper flow states with the direct manipulation interface, this could be due to limitations in the game's gestural implementation and participant familiarity with direct manipulation strategy games.


Comparisons between the interface and participant groups indicated that gestures elicited more flow than the other interfaces for the touch and gaming groups, but not for the control group, with direct manipulation scoring the lowest of all three. As such, the preference for gestures is dependent on touch and/or gaming experience and not on other factors. The only caveat appears in the autotelic experience flow dimension for the gaming group, where past game familiarity and experience might contribute to this flow state being high.

In summary, these results conclude that gestures are a more appropriate means of experiencing flow in games. As flow has been shown to be very important with respect to games and game enjoyment, it can be said that gestures provide a more appropriate means of interacting with games in order to elicit enjoyment. Gestures have also been shown to be genre dependent with respect to the game genres implemented in this project, and to rely on the user's touch and gaming experience in order to be experienced successfully.

7.1 Future Work

There are several aspects of the project that could be extended in future work.

Firstly, the gesture recognition engine can be extended, partly through feature extraction and partly through extension of the engine itself. The feature extraction could be extended to handle curves more accurately, as touch devices lend themselves to curved input compared to external input devices such as a mouse. With more accurate features, the gesture recognition engine could handle more complex gestures involving curvature, speed, scale, dynamic multiple touches and gesture chaining.

The strategy game could be extended to include more units and more sophisticated graphics. Different attack techniques could be added to each unit to give units more variety. The inclusion of a gesture recognition engine that could handle more complex gestures could be used to help identify and design gestures that are more natural and identifiable to the player, and could open up larger possibilities of gestural interaction within strategy games.

Lastly, the experiment could be extended to include more participants, with longer experiment durations, in order for participants to master the games being played. The experiment could be modified to include a measure of previous game familiarity, in order to eliminate forms of interaction bias. Finally, the experiment could be modified to assess how the game genres interact with one another, as this would give a better indication of why gestures are more appropriate for certain genres than for others.


Chapter 8

Appendix A - Experiment forms

8.1 Consent form

University of Cape Town

Department of Computer Science

Consent form

Researcher: __________________________

● I agree to participate in this experiment.
● I agree to my responses being used for education and research.
● I understand that my personal details will be used in aggregate form only, so that I will not be personally identifiable.
● I understand that I am under no obligation to take part in this project.
● I understand I have the right to withdraw from this experiment at any stage.
● I have read this consent form and the information it contains and had the opportunity to ask questions about them.

Name of Participant: __________________________________________
Signature of Participant: __________________________________________
Date: __________________________________________


8.2 Demographic information form

Demographic Information

Age: _________________________
Gender: _________________________
Qualification towards which you are studying: _____________________
Year of study: ______________________

Please rate your level of computer literacy:
1 2 3 4 5 6 7
(1 = Illiterate, 7 = Highly literate)

Please rate how often you play games:
1 2 3 4 5 6 7
(1 = Never, 7 = Daily)

Please rate your experience with touch devices:
1 2 3 4 5 6 7
(1 = No experience, 7 = Very experienced)


8.3 Post game explanation form

Post-experiment explanation

Thank you for taking part in this experiment. The research question being investigated was whether gestures make touch-based games more enjoyable and whether there are genre dependent differences in this. In order to investigate this, three game versions were designed. These included a gestural interface, a direct-manipulation interface and a version that contained a mix of the previous two. The questionnaire you answered assesses the level of flow present in each condition. Flow can be understood as the intrinsic enjoyment, or fun, one gets out of performing some activity. In this case, we're using flow to determine which game version was the most enjoyable to play. In order to determine whether there are genre dependent differences in the appropriateness of gestures in gaming, three different games were developed. These included a strategy, puzzle and dueling/fighting game. Once we have all the results, comparisons will be done both across game version and genre. Once again, thank you for participating in our experiment. If you have any questions, feel free to ask.


Chapter 9

Appendix B - In-game descriptions

9.1 Welcome screen

"Welcome to my iPad Game experiment. You will be required to play three games, each game lasting a total of 7 minutes. Once each game has finished, it will automatically take you to the next phase of the experiment. Have fun and ask me any questions in case you get stuck. Press the continue button to move to the next screen."

9.2 Training session

Before we begin the actual game, let's become familiar with some of the gestures presented in the game. The image on the left shows which gesture to draw. The red dots show the starting position and the arrows show the direction in which you draw the gesture. The red dots also indicate the number of fingers you have to draw with. Draw this gesture 10 times. Take it slow and try to draw as accurately as you can.

9.3 Game description

Game Description:

The game you will be playing is a real-time strategy game. You will take control of a group of units, whose aim is to capture three enemy towers. Given a starting credit of 500, you can build a number of different units, each with their own strengths, weaknesses and abilities, which can be utilised to successfully capture the towers within the time allocated. There are four different types of units available for you to build. The foot soldier has a short attack distance, but has a strong defence and a strong attack. The archer can attack from a long distance, but has a weak defence compared to the foot soldier. The chaplain can heal nearby units, but cannot attack any enemy units. The trebuchet is a siege weapon and deals a great amount of damage; however, it is the slowest moving unit in the game and is very expensive.

The enemy towers are located in different areas on the map. Use the mini-map, which is located in the bottom left of the screen, to see where they are located. You can also move around the map by using the map movement commands. In order to capture a tower, simply place either a foot soldier or an archer over the tower and keep them there for 20 seconds. Once you have successfully captured a tower, you will receive 250 credits with which to build additional units.


Basic unit controls include moving units to a desired location and attacking an enemy unit.

Note that your units can't attack the ground, so please ensure that you select an enemy unit. Selecting units is done by performing a single tap on the unit. To unselect a unit, tap it again. You can organise units into groups to allow you to move a whole group of units to a specific location or to attack an enemy unit.

Press the continue button once you’re done to start playing the different games.

9.4 Direct manipulation introduction

Version Information:

Use the interface buttons to manipulate units and/or the screen.

9.5 After direct manipulation game

Game Over

Please write the number 627 on the front of the questionnaire.
Press continue after you have completed the questionnaire.

9.6 Gesture introduction

Version Information: Gestures are used to manipulate units and/or the screen. To perform a gesture, draw the desired gesture on the screen. A help menu is available to describe the gestures available in the game. To access this help screen, tap on the credits button, which is located at the top right of the screen. A list of possible gestures is available on the right-hand side of the screen and will change depending on the gesture performed and which gestures can be performed next.

9.7 After gesture game

Game Over

Please write the number 74 on the front of the questionnaire.
Press continue after you have completed the questionnaire.

9.8 Mixed introduction

Version Information:

Use interface buttons and gestures to manipulate units and/or the screen. To perform a gesture, draw the desired gesture on the screen. A help menu is available to describe the gestures available in the game. To access this help screen, tap on the credits button, which is located at the top right of the screen. A list of possible gestures is available on the right-hand side of the screen and will change depending on the gesture performed and which gestures can be performed next.


9.9 After Mixed game

Game Over

Please write the number 21 on the front of the questionnaire.
Press continue after you have completed the questionnaire.

9.10 Experiment end

This is the end of the experiment.
Thank you so much for participating in this experiment.


Chapter 10

Appendix C - Experiment results

10.1 Strategy game results

Participant   Direct manipulation   Gestures             Mixed
1             36.5                  37.5                 28.25
2             32                    24                   32.25
3             29                    39                   37.25
4             35.25                 31.75                32.5
5             33.25                 32                   36.75
6             25.5                  38.25                26
7             39.25                 35                   33
8             36.5                  30                   22.75
9             31.5                  24.75                32.75
10            38.25                 28.5                 18.5
11            39.5                  33.75                35
12            29.5                  26.5                 23.75
13            33.25                 31                   27.75
14            36.5                  32                   32.5
15            31.25                 32                   26.25
Total         33.8 (SD = 3.933)     31.73 (SD = 4.422)   29.683 (SD = 5.277)

Table 10.1: Total mean flow score between different game interfaces

Flow Dimension                      Direct manipulation   Gestures   Mixed
Challenge-Skill Balance             3.367                 3.167      3.03
Merging of Action and Awareness     3.3167                2.867      2.9
Clear Goals                         4.183                 4          3.567
Unambiguous Feedback                3.83                  3.567      3.483
Concentration of the Task at hand   4.3                   4.1        3.967
Sense of Control                    3.4                   3.05       2.83
Loss of Self-Consciousness          4.1                   3.667      3.35
Transformation of Time              3.8167                3.7167     3.45
Autotelic Experience                3.483                 3.6        3.1

Table 10.2: Breakdown of the flow dimensions between different game interfaces


Effect            SS         Degr. of freedom   MS         F          p
Interface         127.1028   2                  63.55139   2.861109   0.076835
Interface*Group   44.5556    4                  11.13889   0.501477   0.734912
Error             533.0917   24                 22.21215

Table 10.3: Univariate tests for repeated measures on the strategy game

Effect            SS       Degr. of freedom   MS      F       p
Group             1.29     2                  0.64    0.021   0.979079
Error             362.33   12                 30.36   -       -
Interface         127.10   2                  63.55   2.861   0.076835
Interface*Group   44.56    4                  11.14   0.501   0.734912
Error             533.09   24                 22.21   -       -

Table 10.4: Repeated measures analysis of variance on the strategy game


10.2 Univariate repeated measures

Effect                  SS         Degr. of freedom   MS         F          p
Interface               368.0148   2                  184.0074   26.52536   0.000000
Interface*Genre         14.2963    4                  3.5741     0.51522    0.724746
Interface*Group         30.5185    4                  7.6296     1.09984    0.363333
Interface*Genre*Group   30.3704    8                  3.7963     0.54725    0.816957
Error                   499.4667   72                 6.9370     -          -

Table 10.5: Univariate repeated measures on the challenge-skill balance flow dimension using the interface, group and genre independent variables


Cell No.   Interface   {1} 11.622   {2} 15.289   {3} 14.933
1          1           -            0.000111     0.000111
2          2           0.000111     -            0.798469
3          3           0.000111     0.798469     -

Table 10.6: Tukey HSD test for the challenge-skill balance flow dimension. Error: within MS = 6.9370, df = 72.000. Significant values are shown in red.

Effect                  SS         Degr. of freedom   MS         F          p
Interface               108.5778   2                  54.28889   8.758889   0.000394
Interface*Genre         34.7111    4                  8.67778    1.400060   0.242642
Interface*Group         21.6889    4                  5.42222    0.874813   0.483401
Interface*Genre*Group   42.7556    8                  5.34444    0.862265   0.552170
Error                   446.2667   72                 6.19815

Table 10.7: Univariate repeated measures on the action-awareness merging flow dimension using the interface, group and genre independent variables



Cell No.   Interface   {1} 13.667   {2} 15.622   {3} 15.511
1          1           -            0.001201     0.002278
2          2           0.001201     -            0.975679
3          3           0.002278     0.975679     -

Table 10.8: Tukey HSD test for the action-awareness merging flow dimension. Error: within MS = 6.1981, df = 72.000. Significant values are shown in red.

Effect                  SS         Degr. of freedom   MS         F          p
Interface               219.2444   2                  109.6222   17.49291   0.000001
Interface*Genre         50.8444    4                  12.7111    2.02837    0.099458
Interface*Group         30.0889    4                  7.5222     1.20035    0.318159
Interface*Genre*Group   68.6222    8                  8.5778     1.36879    0.224885
Error                   451.2000   72                 6.2667

Table 10.9: Univariate repeated measures on the clear goals flow dimension using the interface, group and genre independent variables


Cell No.   Interface   {1} 12.578   {2} 15.356   {3} 15.200
1          1           -            0.000114     0.000121
2          2           0.000114     -            0.953362
3          3           0.000121     0.953362     -

Table 10.10: Tukey HSD test for the clear goals flow dimension. Error: within MS = 6.2667, df = 72.000. Significant values are shown in red.

Effect                  SS         Degr. of freedom   MS         F          p
Interface               131.3926   2                  65.69630   9.166925   0.000284
Interface*Genre         89.4519    4                  22.36296   3.120413   0.020006
Interface*Group         18.7852    4                  4.69630    0.655297   0.625051
Interface*Genre*Group   12.3704    8                  1.54630    0.215762   0.987129
Error                   516.0000   72                 7.16667

Table 10.11: Univariate repeated measures on the unambiguous feedback flow dimension using the interface, group and genre independent variables


CellNo.

Genre Interface {1} 13.200 {2} 13.800 {3} 15.533 {4} 14.000 {5} 16.000 {6} 15.00 {7} 10.867 {8} 15.267 {9} 12.667

1 Puzzle 1 0.999514 0.306979 0.998656 0.259906 0.809040 0.507560 0.666895 0.9999362 Puzzle 2 0.999514 0.699209 1.000000 0.587720 0.978651 0.205774 0.930857 0.9851173 Puzzle 3 0.306979 0.699209 0.912301 0.999977 0.999936 0.002645 1.000000 0.2317754 Strategy 1 0.998656 1.000000 0.912301 0.517559 0.982434 0.140213 0.970262 0.9596515 Strategy 2 0.259906 0.587720 0.999977 0.517559 0.982434 0.000663 0.999289 0.0921056 Strategy 3 0.809040 0.978651 0.999936 0.982434 0.982434 0.012429 1.000000 0.5075607 Duelling 1 0.507560 0.205774 0.002635 0.140213 0.000663 0.012429 0.000921 0.6550028 Duelling 2 0.666895 0.930857 1.000000 0.970262 0.999287 1.000000 0.000921 0.1812179 Duelling 3 0.999936 0.985117 0.231775 0.959651 0.092105 0.507560 0.655002 0.181217

Table 10.12: Tukey HSD test for unambiguous feedback flow dimension. Error: between; within; Pooled MS =9.6370, df=95.455. Significant values are shown in red.


Effect                  SS         Degr. of freedom   MS         F          p
Interface               117.3778   2                  58.68889   5.456612   0.006216
Interface*Genre         54.5778    4                  13.64444   1.268595   0.290307
Interface*Group         24.7111    4                  6.17778    0.574380   0.682091
Interface*Genre*Group   61.6000    8                  7.70000    0.715909   0.676722
Error                   774.4000   72                 10.75556

Table 10.13: Univariate repeated measures on the concentration on task at hand flow dimension using the interface, group and genre independent variables


Cell No.   Interface   {1} 13.267   {2} 15.511   {3} 14.022
1          1           -            0.005088     0.521689
2          2           0.005088     -            0.086483
3          3           0.521689     0.086483     -

Table 10.14: Tukey HSD test for the concentration on task at hand flow dimension. Error: within MS = 10.756, df = 72.000. Significant values are shown in red.

Effect                  SS         Degr. of freedom   MS         F          p
Interface               135.1259   2                  67.56296   9.924918   0.000156
Interface*Genre         14.3407    4                  3.58519    0.526659   0.716458
Interface*Group         63.4074    4                  15.85185   2.328618   0.064197
Interface*Genre*Group   36.3259    8                  4.54074    0.667029   0.718653
Error                   490.1333   72                 6.80741

Table 10.15: Univariate repeated measures on the sense of control flow dimension using the interface, group and genre independent variables


Cell No.   Interface   {1} 12.467   {2} 14.889   {3} 13.356
1          1           -            0.000209     0.245576
2          2           0.000209     -            0.018505
3          3           0.245576     0.018505     -

Table 10.16: Tukey HSD test for the sense of control flow dimension. Error: within MS = 6.8074, df = 72.000. Significant values are shown in red.

Effect                  SS         Degr. of freedom   MS         F          p
Interface               164.0444   2                  82.02222   7.719066   0.000918
Interface*Genre         182.0889   4                  45.52222   4.284071   0.003637
Interface*Group         32.3556    4                  8.08889    0.761241   0.553929
Interface*Genre*Group   85.1111    8                  10.63889   1.001220   0.442816
Error                   765.0667   72                 10.62593

Table 10.17: Univariate repeated measures on the loss of self-consciousness flow dimension using the interface, group and genre independent variables



CellNo.

Genre Interface {1} 13.133 {2} 15.467 {3} 13.067 {4} 16.333 {5} 14.600 {6} 10.333 {7} 13.133 {8} 13.067 {9} 12.467

1 Puzzle 1 0.575353 1.000000 0.480828 0.988243 0.657736 1.000000 1.000000 0.9999632 Puzzle 2 0.575353 0.537515 0.999729 0.99729 0.030127 0.837733 0.815617 0.5693903 Puzzle 3 1.000000 0.537515 0.452000 0.984363 0.686339 1.000000 1.000000 0.9999834 Strategy 1 0.480828 0.999729 0.452000 0.871502 0.000233 0.480828 0.452000 0.2301245 Strategy 2 0.988243 0.999729 0.984363 0.871502 0.016831 0.988243 0.988243 0.8948976 Strategy 3 0.657736 0.030127 0.686339 0.000233 0.016831 0.657736 0.686339 0.8948977 Duelling 1 1.000000 0.837733 1.000000 0.480828 0.988243 0.657736 1.000000 0.9997538 Duelling 2 1.000000 0.815617 1.000000 0.4520000 0.984363 0.686339 1.000000 0.9998889 Duelling 3 0.999963 0.569390 0.999983 0.230124 0.894897 0.999753 0.999888

Table 10.18: Tukey HSD test for loss of self-consciousness flow dimension. Error: between; within; Pooled MS= 9.6370, df=95.455. Significant values are shown in red.

Effect                  SS         Degr. of freedom   MS         F          p
Interface               129.7333   2                  64.86667   6.616547   0.002302
Interface*Genre         44.8889    4                  11.22222   1.144692   0.342549
Interface*Group         55.2889    4                  13.82222   1.409898   0.239375
Interface*Genre*Group   74.8889    8                  9.36111    0.954855   0.477916
Error                   705.8667   72                 9.80370

Table 10.19: Univariate repeated measures on the transformation of time flow dimension using the interface, group and genre independent variables


Cell No.   Interface   {1} 15.556   {2} 14.289   {3} 13.156
1          1           -            0.140792     0.001579
2          2           0.140792     -            0.205955
3          3           0.001579     0.205955     -

Table 10.20: Tukey HSD test for the transformation of time flow dimension. Error: within MS = 9.8037, df = 72.000. Significant values are shown in red.

Effect                  SS         Degr. of freedom   MS         F          p
Interface               186.0593   2                  93.02963   8.855279   0.000364
Interface*Genre         75.2296    4                  18.80741   1.790234   0.140164
Interface*Group         42.6519    4                  10.66296   1.014983   0.405488
Interface*Genre*Group   116.3259   8                  14.54074   1.384100   0.218138
Error                   756.4000   72                 10.50556

Table 10.21: Univariate repeated measures on the autotelic experience flow dimension using the interface, group and genre independent variables


Cell No.   Interface   {1} 14.778   {2} 12.444   {3} 12.156
1          1           -            0.003083     0.000863
2          2           0.003083     -            0.906370
3          3           0.000863     0.906370     -

Table 10.22: Tukey HSD test for the autotelic experience flow dimension. Error: within MS = 10.506, df = 72.000. Significant values are shown in red.


10.3 Interface and game genre

CellNo.

Genre Interface {1} 9.3333 {2} 14.067 {3} 13.200 {4} 13.133 {5} 16.200 {6} 15.867 {7} 12.400 {8} 15.600 {9} 15.733

1 Puzzle 1 0.000293 0.004324 0.041253 0.000136 0.000140 0.192342 0.000152 0.0001452 Puzzle 2 0.000293 0.992289 0.996715 0.666131 0.833858 0.885127 0.925487 0.8851273 Puzzle 3 0.004324 0.992289 1.000000 0.216286 0.365108 0.998927 0.511983 0.4363594 Strategy 1 0.041253 0.996715 1.000000 0.051325 0.121302 0.999449 0.473768 0.4000575 Strategy 2 0.000136 0.666131 0.216286 0.051325 0.999994 0.041253 0.999876 0.9999826 Strategy 3 0.000140 0.833858 0.365108 0.121302 0.999994 0.087614 1.000000 1.0000007 Duelling 1 0.192342 0.885127 0.998927 0.999449 0.041253 0.087614 0.035257 0.0238268 Duelling 2 0.000152 0.925487 0.511983 0.473768 0.999876 1.000000 0.035257 1.0000009 Duelling 3 0.000145 0.885127 0.436359 0.400057 0.999982 1.000000 0.023826 1.000000

Table 10.23: Tukey HSD test for challenge-skill balance flow dimension. Error: between; within; Pooled MS =10.256, df=89.300. Significant values are shown in red.

CellNo.

Genre Interface {1} 14.400 {2} 14.733 {3} 16.000 {4} 12.200 {5} 15.067 {6} 14.133 {7} 14.400 {8} 17.067 {9} 16.400

1 Puzzle 1 0.999990 0.707550 0.619478 0.999710 1.000000 1.000000 0.356435 0.7313142 Puzzle 2 0.999990 0.896785 0.427308 0.999999 0.999868 0.999999 0.541593 0.8808703 Puzzle 3 0.707550 0.896785 0.039442 0.996519 0.798170 0.903059 0.991350 0.9999944 Strategy 1 0.619478 0.427308 0.039442 0.056324 0.464022 0.619478 0.002246 0.0143825 Strategy 2 0.999710 0.999999 0.996519 0.056324 0.982032 0.999710 0.731314 0.9647446 Strategy 3 1.000000 0.999868 0.798170 0.464022 0.982032 1.000000 0.235152 0.5806107 Duelling 1 1.000000 0.999999 0.903059 0.619478 0.999710 1.000000 0.097875 0.4169198 Duelling 2 0.356435 0.541593 0.991350 0.002246 0.731314 0.235152 0.097875 0.9981759 Duelling 3 0.731314 0.880870 0.999994 0.014382 0.964744 0.580610 0.416919 0.998175

Table 10.24: Tukey HSD test for action-awareness merging flow dimension. Error: between; within; PooledMS = 10.115, df=83.085. Significant values are shown in red.

CellNo.

Genre Interface {1} 13.867 {2} 14.733 {3} 15.667 {4} 11.400 {5} 14.933 {6} 13.667 {7} 12.467 {8} 16.400 {9} 16.267

1 Puzzle 1 0.989211 0.569355 0.501836 0.992740 1.000000 0.959756 0.464693 0.5395412 Puzzle 2 0.989211 0.982642 0.133101 1.000000 0.992740 0.615328 0.895625 0.9327263 Puzzle 3 0.569355 0.982642 0.016400 0.999515 0.758465 0.170510 0.999515 0.9998914 Strategy 1 0.501836 0.133101 0.016400 0.007097 0.259318 0.992740 0.002280 0.0032965 Strategy 2 0.992740 1.000000 0.999515 0.007097 0.899634 0.501836 0.947441 0.9698756 Strategy 3 1.000000 0.992740 0.758465 0.259318 0.899634 0.984313 0.359334 0.4284057 Duelling 1 0.959756 0.615328 0.170510 0.992740 0.501836 0.984313 0.001737 0.0027818 Duelling 2 0.464693 0.895625 0.999515 0.002280 0.947441 0.359334 0.001737 1.0000009 Duelling 3 0.539541 0.932726 0.999891 0.003296 0.969875 0.428405 0.002781 1.000000

Table 10.25: Tukey HSD test for clear goals flow dimension. Error: between; within; Pooled MS = 10.667,df=80.578. Significant values are shown in red.


CellNo.

Genre Interface {1} 13.800 {2} 15.600 {3} 13.000 {4} 13.267 {5} 14.600 {6} 14.800 {7} 12.733 {8} 16.333 {9} 14.267

1 Puzzle 1 0.850521 0.999071 0.999982 0.999604 0.997884 0.996656 0.613771 0.9999932 Puzzle 2 0.850521 0.435262 0.712829 0.997884 0.999604 0.444399 0.999793 0.9850303 Puzzle 3 0.999071 0.435262 1.000000 0.954467 0.912784 1.000000 0.243183 0.9892544 Strategy 1 0.999982 0.712829 1.000000 0.970396 0.933881 0.999982 0.350459 0.9978845 Strategy 2 0.999604 0.997884 0.954467 0.970396 1.000000 0.894718 0.928710 1.0000006 Strategy 3 0.997884 0.999604 0.912784 0.933881 1.000000 0.827872 0.964498 0.9999827 Duelling 1 0.996656 0.444399 1.000000 0.999982 0.894718 0.827872 0.081952 0.9338818 Duelling 2 0.613771 0.999793 0.243183 0.350459 0.928710 0.964498 0.081952 0.7288759 Duelling 3 0.999993 0.985030 0.989254 0.997884 1.000000 0.999982 0.933881 0.728875

Table 10.26: Tukey HSD test for concentration on task at hand flow dimension. Error: between; within; PooledMS = 13.300, df=100.63. Significant values are shown in red.

CellNo.

Genre Interface {1} 13.067 {2} 15.667 {3} 13.200 {4} 12.400 {5} 14.267 {6} 13.467 {7} 11.933 {8} 14.733 {9} 13.400

1 Puzzle 1 0.156260 1.000000 0.999807 0.986972 0.999996 0.991032 0.910076 0.9999992 Puzzle 2 0.156260 0.209600 0.178407 0.966073 0.687022 0.073159 0.997644 0.6514483 Puzzle 3 1.000000 0.209600 0.999236 0.994032 1.000000 0.981625 0.942670 1.0000004 Strategy 1 0.999807 0.178407 0.999236 0.576016 0.969392 0.999987 0.615103 0.9961725 Strategy 2 0.986972 0.966073 0.994032 0.576016 0.995220 0.615103 0.999987 0.9986296 Strategy 3 0.999996 0.687022 1.000000 0.969392 0.995220 0.942670 0.981625 1.0000007 Duelling 1 0.991032 0.073159 0.981625 0.999987 0.615103 0.942670 0.096567 0.8330368 Duelling 2 0.910076 0.997644 0.942670 0.615103 0.999987 0.981625 0.096567 0.8944549 Duelling 3 0.999999 0.651448 1.000000 0.996172 0.998629 1.000000 0.833036 0.894454

Table 10.27: Tukey HSD test for sense of control flow dimension. Error: between; within; Pooled MS = 11.300,df=82.059. Significant values are shown in red.

CellNo.

Genre Interface {1} 14.933 {2} 14.533 {3} 13.533 {4} 15.067 {5} 13.733 {6} 11.133 {7} 16.667 {8} 14.600 {9} 14.800

1 Puzzle 1 0.999993 0.948404 1.000000 0.994639 0.158720 0.945887 1.000000 1.0000002 Puzzle 2 0.999993 0.993688 0.999987 0.999723 0.282980 0.841962 1.000000 1.0000003 Puzzle 3 0.948404 0.993688 0.973679 1.000000 0.737052 0.392091 0.997641 0.9922554 Strategy 1 1.000000 0.999987 0.973679 0.961020 0.025650 0.965962 0.999996 1.0000005 Strategy 2 0.994639 0.999723 1.000000 0.961020 0.371420 0.484334 0.999501 0.9976416 Strategy 3 0.158720 0.282980 0.737052 0.025650 0.371420 0.004822 0.258860 0.1947167 Duelling 1 0.945887 0.841962 0.392091 0.965962 0.484334 0.004822 0.677086 0.7837298 Duelling 2 1.000000 1.000000 0.997641 0.999996 0.999501 0.258860 0.677086 1.0000009 Duelling 3 1.000000 1.000000 0.992255 1.000000 0.997641 0.194716 0.783729 1.000000

Table 10.28: Tukey HSD test for transformation of time flow dimension. Error: between; within; Pooled MS= 14.722, df = 88.291. Significant values are shown in red.

CellNo.

Genre Interface {1} 15.200 {2} 12.733 {3} 13.933 {4} 13.933 {5} 12.000 {6} 9.3333 {7} 15.200 {8} 12.600 {9} 13.200

1 Puzzle 1 0.492065 0.976725 0.996510 0.528253 0.010712 1.000000 0.774791 0.9374752 Puzzle 2 0.492065 0.983392 0.997617 0.999940 0.444092 0.820797 1.000000 0.9999983 Puzzle 3 0.976725 0.983392 1.000000 0.948262 0.100395 0.996510 0.995031 0.9999404 Strategy 1 0.996510 0.997617 1.000000 0.783253 0.006637 0.996510 0.995031 0.9999405 Strategy 2 0.528253 0.999940 0.948262 0.783253 0.384045 0.528253 0.999987 0.9976176 Strategy 3 0.010712 0.444092 0.100395 0.006637 0.384045 0.010712 0.499824 0.2716607 Duelling 1 1.000000 0.820797 0.996510 0.996510 0.528253 0.010712 0.418945 0.7506158 Duelling 2 0.774791 1.000000 0.995031 0.995031 0.999987 0.499824 0.418945 0.9998839 Duelling 3 0.937475 0.999998 0.999940 0.999940 0.997617 0.271660 0.750615 0.999883

Table 10.29: Tukey HSD test for autotelic experience flow dimension. Error: between; within; Pooled MS =18.648, df = 78.186. Significant values are shown in red.


10.4 Interface and participant group

CellNo.

Genre Interface {1} 11.467 {2} 15.667 {3} 14.400 {4} 11.333 {5} 13.800 {6} 13.733 {7} 12.067 {8} 16.400 {9} 16.667

1 Gaming 1 0.001410 0.073441 1.000000 0.550692 0.589540 0.999876 0.001951 0.0009152 Gaming 2 0.001410 0.923010 0.010582 0.804371 0.772556 0.065448 0.999449 0.9947003 Gaming 3 0.073441 0.923010 0.192342 0.999876 0.999727 0.550692 0.738693 0.5895404 Control 1 1.000000 0.010582 0.192342 0.220025 0.251707 0.999449 0.001332 0.0006325 Control 2 0.550692 0.804371 0.999876 0.220025 1.000000 0.860821 0.400057 0.2701186 Control 3 0.589540 0.772556 0.999727 0.251707 1.000000 0.885127 0.365108 0.2422187 Touch 1 0.999876 0.065448 0.550692 0.999449 0.860821 0.885127 0.000908 0.0004058 Touch 2 0.001951 0.999449 0.738693 0.001332 0.400057 0.365108 0.000908 0.9999999 Touch 3 0.000915 0.994700 0.589540 0.000632 0.270118 0.242218 0.000405 0.999999

Table 10.30: Tukey HSD test for the challenge-skill balance flow dimension. Error: between; within; Pooled MS = 10.256, df = 89.300. Significant values are shown in red.

CellNo.

Genre Interface {1} 13.333 {2} 15.667 {3} 16.133 {4} 13.600 {5} 15.333 {6} 14.200 {7} 14.067 {8} 15.867 {9} 16.200

1 Gaming 1 0.219195 0.068070 1.000000 0.731314 0.997937 0.999414 0.427308 0.2626062 Gaming 2 0.219195 0.999871 0.695217 0.999999 0.939109 0.903059 1.000000 0.9999463 Gaming 3 0.068070 0.999871 0.427308 0.998862 0.765733 0.695217 1.000000 1.0000004 Control 1 1.000000 0.695217 0.427308 0.611501 0.999148 0.999981 0.580610 0.3911915 Control 2 0.731314 0.999999 0.998862 0.611501 0.942941 0.974118 0.999946 0.9979376 Control 3 0.997937 0.939109 0.765733 0.999148 0.942941 1.000000 0.880870 0.7313147 Touch 1 0.999414 0.903059 0.695217 0.999981 0.974118 1.000000 0.562018 0.3291868 Touch 2 0.427308 1.000000 1.000000 0.580610 0.999946 0.880870 0.562018 0.9999909 Touch 3 0.262606 0.999946 1.000000 0.391191 0.997937 0.731314 0.329186 0.999990

Table 10.31: Tukey HSD test for action-awareness merging flow dimension. Error: between; within; PooledMS = 10.115, df = 83.085. Significant values are shown in red.

CellNo.

Genre Interface {1} 11.533 {2} 15.667 {3} 15.200 {4} 13.000 {5} 14.400 {6} 14.800 {7} 13.200 {8} 16.000 {9} 15.600

1 Gaming 1 0.000855 0.004454 0.947441 0.296323 0.150919 0.895625 0.009769 0.0269322 Gaming 2 0.000855 0.999876 0.393231 0.977970 0.998312 0.501836 0.999999 1.0000003 Gaming 3 0.004454 0.999876 0.652699 0.999057 0.999995 0.758465 0.999057 0.9999954 Control 1 0.947441 0.393231 0.652699 0.836915 0.569355 1.000000 0.240400 0.4284055 Control 2 0.296323 0.977970 0.999057 0.836915 0.999962 0.984313 0.915488 0.9843136 Control 3 0.150919 0.998312 0.999995 0.569355 0.999962 0.915488 0.984313 0.9990577 Touch 1 0.895625 0.501836 0.758465 1.000000 0.984313 0.915488 0.071049 0.1945638 Touch 2 0.009769 0.999999 0.999057 0.240400 0.915488 0.984313 0.071049 0.9999629 Touch 3 0.026932 1.000000 0.999995 0.428405 0.984313 0.999057 0.194563 0.999962

Table 10.32: Tukey HSD test for clear goals flow dimension. Error: between; within; Pooled MS = 10.667, df= 80.578. Significant values are shown in red.


CellNo.

Genre Interface {1} 12.267 {2} 15.067 {3} 14.467 {4} 12.000 {5} 14.467 {6} 14.467 {7} 13.800 {8} 15.533 {9} 14.267

1 Gaming 1 0.115204 0.385606 1.000000 0.587720 0.587720 0.912301 0.106377 0.7050552 Gaming 2 0.115204 0.999514 0.160014 0.999845 0.999845 0.970262 0.999977 0.9986563 Gaming 3 0.385606 0.999514 0.429608 1.000000 1.000000 0.999659 0.989960 1.0000004 Control 1 1.000000 0.160014 0.429608 0.238615 0.238615 0.809040 0.058412 0.5475535 Control 2 0.587720 0.999845 1.000000 0.238615 1.000000 0.999659 0.989960 1.0000006 Control 3 0.587720 0.999845 1.000000 0.238615 1.000000 0.999659 0.989960 1.0000007 Touch 1 0.912301 0.970262 0.999659 0.809040 0.999659 0.999659 0.699209 0.9999268 Touch 2 0.106377 0.999977 0.989960 0.058412 0.989960 0.989960 0.699209 0.9294549 Touch 3 0.705055 0.998656 1.000000 0.547553 1.000000 1.000000 0.999926 0.929454

Table 10.33: Tukey HSD test for unambiguous feedback flow dimension. Error: between; within; Pooled MS =9.6370, df = 95.455. Significant values are shown in red.

Cell  Genre    Interface  Mean     {1}       {2}       {3}       {4}       {5}       {6}       {7}       {8}       {9}
1     Gaming   1          13.800   -         0.435262  1.000000  0.999899  0.989254  0.996656  0.998728  0.989254  1.000000
2     Gaming   2          16.400   0.435262  -         0.400196  0.267801  0.985030  0.964498  0.178498  0.985030  0.412069
3     Gaming   3          13.733   1.000000  0.400196  -         0.999955  0.985030  0.994925  0.999262  0.985030  1.000000
4     Control  1          13.133   0.999899  0.267801  0.999955  -         0.793870  0.875239  1.000000  0.874518  1.000000
5     Control  2          15.067   0.989254  0.985030  0.985030  0.793870  -         1.000000  0.773544  1.000000  0.954467
6     Control  3          14.867   0.996656  0.964498  0.994925  0.875239  1.000000  -         0.852215  1.000000  0.979618
7     Touch    1          12.867   0.998728  0.178498  0.999262  1.000000  0.773544  0.852215  -         0.657811  0.999893
8     Touch    2          15.067   0.989254  0.985030  0.985030  0.874518  1.000000  1.000000  0.657811  -         0.916928
9     Touch    3          13.467   1.000000  0.412069  1.000000  1.000000  0.954467  0.979618  0.999893  0.916928  -

Table 10.34: Tukey HSD test for concentration on task at hand flow dimension. Error: between; within; Pooled MS = 13.00, df = 100.63. Significant values are shown in red.

Cell  Genre    Interface  Mean     {1}       {2}       {3}       {4}       {5}       {6}       {7}       {8}       {9}
1     Gaming   1          12.000   -         0.039190  0.979364  0.927585  0.815385  0.615103  1.000000  0.095950  0.999609
2     Gaming   2          15.133   0.039190  -         0.392543  0.942670  0.986972  0.999236  0.158671  0.999987  0.578317
3     Gaming   3          13.000   0.979364  0.392543  -         0.999913  0.997644  0.974738  0.986972  0.468555  1.000000
4     Control  1          13.600   0.927585  0.942670  0.999913  -         0.999993  0.997396  0.867568  0.785966  0.998629
5     Control  2          13.933   0.815385  0.986972  0.997644  0.999993  -         0.999972  0.721512  0.910076  0.986972
6     Control  3          14.333   0.615103  0.999236  0.974738  0.997396  0.999972  -         0.504734  0.981625  0.927585
7     Touch    1          11.800   1.000000  0.158671  0.986972  0.867568  0.721512  0.504734  -         0.004790  0.986654
8     Touch    2          15.600   0.095950  0.999987  0.468555  0.785966  0.910076  0.981625  0.004790  -         0.081376
9     Touch    3          12.733   0.999609  0.578317  1.000000  0.998629  0.986972  0.927585  0.986654  0.081376  -

Table 10.35: Tukey HSD test for sense of control flow dimension. Error: between; within; Pooled MS = 11.300, df = 82.059. Significant values are shown in red.

Cell  Genre    Interface  Mean     {1}       {2}       {3}       {4}       {5}       {6}       {7}       {8}       {9}
1     Gaming   1          15.267   -         0.997037  0.426844  0.657736  0.999199  0.086526  0.999999  0.999923  0.657736
2     Gaming   2          14.333   0.997037  -         0.894187  0.948731  1.000000  0.318717  0.999993  1.000000  0.948731
3     Gaming   3          12.667   0.426844  0.894187  -         1.000000  0.979549  0.937607  0.877426  0.948731  1.000000
4     Control  1          12.467   0.657736  0.948731  1.000000  -         0.846225  0.871502  0.815617  0.910742  1.000000
5     Control  2          14.267   0.999199  1.000000  0.979549  0.846225  -         0.089924  0.999983  1.000000  0.958390
6     Control  3          10.733   0.086526  0.318717  0.937607  0.871502  0.089924  -         0.159815  0.250575  0.966669
7     Touch    1          14.867   0.999999  0.999993  0.877426  0.815617  0.999983  0.159815  -         0.999999  0.537515
8     Touch    2          14.533   0.999923  1.000000  0.948731  0.910742  1.000000  0.250575  0.999999  -         0.722399
9     Touch    3          12.467   0.657736  0.948731  1.000000  1.000000  0.958390  0.966669  0.537515  0.722399  -

Table 10.36: Tukey HSD test for loss of self-consciousness flow dimension. Error: between; within; Pooled MS = 17.422, df = 82.800. Significant values are shown in red.

Cell  Genre    Interface  Mean     {1}       {2}       {3}       {4}       {5}       {6}       {7}       {8}       {9}
1     Gaming   1          15.867   -         0.815305  0.948404  0.956755  0.979996  0.042998  0.999856  0.979996  0.818120
2     Gaming   2          14.067   0.815305  -         0.999993  1.000000  1.000000  0.581046  0.676752  1.000000  0.999999
3     Gaming   3          14.467   0.948404  0.999993  -         1.000000  1.000000  0.392091  0.841962  1.000000  0.999723
4     Control  1          14.200   0.956755  1.000000  1.000000  -         1.000000  0.246087  0.737052  1.000000  0.999987
5     Control  2          14.400   0.979996  1.000000  1.000000  1.000000  -         0.172815  0.818120  1.000000  0.999856
6     Control  3          11.333   0.042998  0.581046  0.392091  0.246087  0.172815  -         0.008976  0.422052  0.765501
7     Touch    1          16.600   0.999856  0.676752  0.841962  0.737052  0.818120  0.008976  -         0.599703  0.219657
8     Touch    2          14.400   0.979996  1.000000  1.000000  1.000000  1.000000  0.422052  0.599703  -         0.999329
9     Touch    3          13.667   0.818120  0.999999  0.999723  0.999987  0.999856  0.765501  0.219657  0.999329  -

Table 10.37: Tukey HSD test for transformation of time flow dimension. Error: between; within; Pooled MS = 14.722, df = 88.291. Significant values are shown in red.

Cell  Genre    Interface  Mean     {1}       {2}       {3}       {4}       {5}       {6}       {7}       {8}       {9}
1     Gaming   1          14.667   -         0.030455  0.165052  0.999999  0.750029  0.444092  0.999940  1.000000  0.999632
2     Gaming   2          10.667   0.030455  -         0.998988  0.365053  0.995031  0.999987  0.081621  0.231134  0.585550
3     Gaming   3          11.467   0.165052  0.998988  -         0.697668  0.999995  1.000000  0.250866  0.528253  0.879701
4     Control  1          14.267   0.999999  0.365053  0.697668  -         0.605801  0.233392  0.998434  0.999999  0.999995
5     Control  2          12.000   0.750029  0.995031  0.999995  0.605801  -         0.999479  0.444092  0.750029  0.972756
6     Control  3          11.267   0.444092  0.999987  1.000000  0.233392  0.999479  -         0.194865  0.444092  0.820797
7     Touch    1          15.400   0.999940  0.081621  0.250866  0.998434  0.444092  0.194865  -         0.999479  0.891110
8     Touch    2          14.667   1.000000  0.231134  0.528253  0.999999  0.750029  0.444092  0.999479  -         0.996918
9     Touch    3          13.733   0.999632  0.585550  0.879701  0.999995  0.972756  0.820797  0.891110  0.996918  -

Table 10.38: Tukey HSD test for autotelic experience flow dimension. Error: between; within; Pooled MS = 18.648, df = 78.186. Significant values are shown in red.
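As an illustrative sketch only, pairwise comparisons of this kind can be approximated with standard statistical tooling. The snippet below assumes a hypothetical file flow_scores.csv with invented column names (genre, interface, score), and it runs a plain one-way Tukey HSD across the nine Genre and Interface cells; it does not reproduce the pooled between-within error term used for Tables 10.30 to 10.38, so its p-values would differ from those reported above.

import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical input: one flow-dimension score per participant per interface.
# The file name and column names are assumptions for illustration only.
df = pd.read_csv("flow_scores.csv")
df["cell"] = df["genre"] + " " + df["interface"].astype(str)

# Plain one-way Tukey HSD across the nine cells; this does not use the pooled
# between-within error term reported in the table captions above.
result = pairwise_tukeyhsd(endog=df["score"], groups=df["cell"], alpha=0.05)
print(result.summary())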

References

[1] Apple iOS UIKit. http://developer.apple.com/library/ios, Last Accessed: 30th Oct 2010.

[2] Apple iPad. http://www.apple.com/ipad, Last Accessed: 30th Oct 2010.

[3] Apple Macmini. https://www.apple.com/macmini/, Last Accessed: 30th Oct 2010.

[4] Apple OS X Snow Leopard. http://www.apple.com/macosx, Last Accessed: 30th Oct 2010.

[5] Apple XCode IDE. http://developer.apple.com/technologies/tools/xcode.html, Last Accessed: 30th Oct 2010.

[6] Babbie, E., and Mouton, J. The practice of social research. Oxford University Press, South Africa, 1998.

[7] Blake, E. Measuring presence. CSC4000W Games and Virtual Environments course notes (2010).

[8] Brown, E., and Cairns, P. A grounded investigation of game immersion. In CHI ’04: CHI ’04 extended abstracts on Human factors in computing systems (New York, NY, USA, 2004), ACM, pp. 1297–1300.

[9] Buxton, B. Multi-touch systems that I have known and loved (overview). http://www.billbuxton.com/multitouchOverview.html, Last Accessed: 30th Oct 2010.

[10] Csikszentmihalyi, M. Flow: the psychology of optimal experience. Harper Perennial, New York, 1990.

[11] Dillon, R. F., Edey, J. D., and Tombaugh, J. W. Measuring the true cost of command selection: techniques and results. In CHI ’90: Proceedings of the SIGCHI conference on Human factors in computing systems (New York, NY, USA, 1990), ACM, pp. 19–26.

[12] Ermi, L., and Mayra, F. Fundamental components of the gameplay experience. DiGRA 2005: Changing Views: Worlds in Play (2005).

[13] Federoff, M. A. Heuristics and usability guidelines for the creation and evaluation of fun in video games. Tech. rep., Indiana University, Bloomington, 2002.

[14] Forlines, C., Wigdor, D., Shen, C., and Balakrishnan, R. Direct-touch vs. mouse input for tabletop displays. In CHI ’07: Proceedings of the SIGCHI conference on Human factors in computing systems (New York, NY, USA, 2007), ACM, pp. 647–656.

[15] Frohlich, D. M. Direct manipulation and other lessons. In Handbook of human-computer interaction, 2nd ed. (1997), Elsevier, pp. 463–488.

[16] Hazlett, R. L. Measuring emotional valence during interactive experiences: boys at video game play. In CHI ’06: Proceedings of the SIGCHI conference on Human Factors in computing systems (New York, NY, USA, 2006), ACM, pp. 1023–1026.

[17] Huang, T., and Pavlovic, V. Hand gesture modeling, analysis and synthesis. In Proc. 1995 IEEE International Workshop on Automatic Face and Gesture Recognition (1995), pp. 73–79.

[18] Inkscape. http://inkscape.org/, Last Accessed: 30th Oct 2010.

[19] Jackson, S., Eklund, R., and Martin, A. http://www.mindgarden.com/products/flow.htm, Last Accessed: 30th Oct 2010.

[20] Jackson, S., and Marsh, H. Development and validation of a scale to measure optimal experience: The flow state scale. Journal of Sport and Exercise Psychology (1995), 17–35.

[21] Ivory, J. D., and Kalyanaraman, S. The effects of technological advancement and violent content in video games on players’ feelings of presence, involvement, physiological arousal, and aggression. Journal of Communication 57, 1 (2007), 532–555.

[22] Kang, H., Lee, C. W., and Jung, K. Recognition-based gesture spotting in video games. Pattern Recogn. Lett. 25, 15 (2004), 1701–1714.

[23] Kin, K., Agrawala, M., and DeRose, T. Determining the benefits of direct-touch, bimanual, and multifinger input on a multitouch workstation. In GI ’09: Proceedings of Graphics Interface 2009 (Toronto, Ont., Canada, 2009), Canadian Information Processing Society, pp. 119–124.

[24] Kratz, L., Smith, M., and Lee, F. J. Wiizards: 3d gesture recognition for game play input. In Future Play ’07: Proceedings of the 2007 conference on Future Play (New York, NY, USA, 2007), ACM, pp. 209–212.

[25] Lessiter, J., Freeman, J., Keogh, E., and Davidoff, J. A cross-media presence questionnaire: The ITC-Sense of Presence Inventory. Presence: Teleoper. Virtual Environ. 10, 3 (2001), 282–297.

[26] Mandryk, R., Atkins, M. S., and Inkpen, K. A continuous and objective evaluation of emotional experience with interactive play environments. In CHI ’06: Proceedings of the SIGCHI conference on Human Factors in computing systems (New York, NY, USA, 2006), ACM, pp. 1027–1036.

[27] McAvinney, P. Telltale gestures. Byte, July (1990), 237–240.

[28] McMahan, R. P., Gorton, D., Gresock, J., McConnell, W., and Bowman, D. A. Separating the effects of level of immersion and 3d interaction techniques. In VRST ’06: Proceedings of the ACM symposium on Virtual reality software and technology (New York, NY, USA, 2006), ACM, pp. 108–111.

[29] Moscovich, T. Principles and applications of multi-touch interaction. PhD thesis, Providence, RI, USA, 2007. Adviser: Hughes, John F.

[30] Moscovich, T., and Hughes, J. F. Multi-finger cursor techniques. In GI ’06: Proceedings of Graphics Interface 2006 (Toronto, Ont., Canada, 2006), Canadian Information Processing Society, pp. 1–7.

[31] Moscovich, T., and Hughes, J. F. Indirect mappings of multi-touch input using one and two hands. In CHI ’08: Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems (New York, NY, USA, 2008), ACM, pp. 1275–1284.

[32] North, C., Shneiderman, B., and Plaisant, C. User controlled overviews of an image library: a case study of the visible human. In DL ’96: Proceedings of the first ACM international conference on Digital libraries (New York, NY, USA, 1996), ACM, pp. 74–82.

[33] Open Graphics Library. http://www.opengl.org/, Last Accessed: 30th Oct 2010.

[34] Payne, J., Keir, P., Elgoyhen, J., McLundie, M., Naef, M., Horner, M., and Anderson, P. Gameplay issues in the design of spatial 3d gestures for video games. In CHI ’06: CHI ’06 extended abstracts on Human factors in computing systems (New York, NY, USA, 2006), ACM, pp. 1217–1222.

[35] Ravaja, N., Salminen, M., Holopainen, J., Saari, T., Laarni, J., and Jarvinen, A. Emotional response patterns and sense of presence during video games: potential criterion variables for game design. In NordiCHI ’04: Proceedings of the third Nordic conference on Human-computer interaction (New York, NY, USA, 2004), ACM, pp. 339–347.

[36] Reisman, J. L., Davidson, P. L., and Han, J. Y. Generalizing multi-touch direct manipulation. In SIGGRAPH ’09: SIGGRAPH 2009: Talks (New York, NY, USA, 2009), ACM, pp. 1–1.

[37] Rubine, D. Combining gestures and direct manipulation. In CHI ’92: Proceedings of the SIGCHI conference on Human factors in computing systems (New York, NY, USA, 1992), ACM, pp. 659–660.

[38] Santello, M., Flanders, M., and Soechting, J. F. Postural hand synergies for tool use. Journal of Neuroscience 18 (1998), 10105–10115.

[39] Schiff, N. An investigation of multi-touch gesture-based gaming. Honours thesis, 2010.

[40] Sears, A., and Shneiderman, B. High precision touchscreens: design strategies and comparisons with a mouse. Int. J. Man-Mach. Stud. 34, 4 (1991), 593–613.

[41] Shneiderman, B. Direct manipulation for comprehensible, predictable and controllable user interfaces. In IUI ’97: Proceedings of the 2nd international conference on Intelligent user interfaces (New York, NY, USA, 1997), ACM, pp. 33–39.

[42] Sweetser, P., and Wyeth, P. Gameflow: a model for evaluating player enjoyment in games. Comput. Entertain. 3, 3 (2005), 3–3.

[43] Tenenbaum, G., Fogarty, G. J., and Jackson, S. A. The flow experience: a Rasch analysis of Jackson’s flow state scale. J Outcome Meas 3, 3 (1999), 278–294.

[44] Watson, R. A survey of gesture recognition techniques. Tech. rep., 1993.

[45] Wigdor, D., and Morrison, G. Designing user interfaces for multi-touch and surface-gesture devices. In CHI EA ’10: Proceedings of the 28th of the international conference extended abstracts on Human factors in computing systems (New York, NY, USA, 2010), ACM, pp. 3193–3196.

[46] Wilson, D., Dimovska, D., Selvig, S., and Jarnfelt, P. Face-off in the magic circle: getting players to look at each other, not the screen. In ACE ’09: Proceedings of the International Conference on Advances in Computer Entertainment Technology (New York, NY, USA, 2009), ACM, pp. 462–462.

[47] Witmer, B. G., and Singer, M. J. Measuring presence in virtual environments: A presence questionnaire. Presence 7 (1998), 225–240.

[48] Wobbrock, J. O., Morris, M. R., and Wilson, A. D. User-defined gestures for surface computing. In CHI ’09: Proceedings of the 27th international conference on Human factors in computing systems (New York, NY, USA, 2009), ACM, pp. 1083–1092.

[49] Wood, D. Multi-touch gestures for games on the iPad. Honours thesis, 2010.

[50] Wu, M., Shen, C., Ryall, K., Forlines, C., and Balakrishnan, R. Gesture registration, relaxation, and reuse for multi-point direct-touch surfaces. In TABLETOP ’06: Proceedings of the First IEEE International Workshop on Horizontal Interactive Human-Computer Systems (Washington, DC, USA, 2006), IEEE Computer Society, pp. 185–192.