
IADIS International Journal on WWW/Internet

Vol. 12, No. 1, pp. 113-130

ISSN: 1645-7641

113

POINTING TASK EVALUATION OF GESTURAL

INTERFACE INTERACTION IN 3D VIRTUAL

ENVIRONMENT

Joanna Camargo Coelho. Leiden Institute of Advanced Computer Science, Leiden University,

Niels Bohrweg 1, 2333CA Leiden, The Netherlands

Fons J. Verbeek. Leiden Institute of Advanced Computer Science, Leiden University,

Niels Bohrweg 1, 2333CA Leiden, The Netherlands

ABSTRACT

Performing tasks in virtual environments is increasingly becoming normal practice due to developments in graphics rendering systems and interaction techniques. Many fields profit from gestural 3D interaction, from entertainment to medical purposes. With this in mind, the aim of this study is to investigate the relevance of using particular 6DoF input devices when interacting with three-dimensional models in graphical interfaces. This paper presents an evaluation of 3D pointing tasks using the Leap Motion sensor to support 3D object manipulation. Three controlled experiments were conducted throughout the study, exposing test subjects to pointing task evaluations and object deformation and measuring the time taken to perform mesh extrusion and object translation. Qualitative data was gathered using the System Usability Scale questionnaire. The collected data shows a strong correlation between input device and performance time, suggesting a dominance of the Leap Motion gestural interface over mouse interaction in single-target three-dimensional pointing tasks. Multi-target tasks were performed better with mouse interaction due to accuracy issues of the 3D input system. Performance times for the shape deformation task showed that mouse interaction outperformed the 3D input device.

KEYWORDS

3D Object Manipulation, Pointing Task Evaluation, 3D Input device, Leap Motion

1. INTRODUCTION

Despite the developments in 3D graphics rendering systems, we still face a lack of knowledge when it comes to interaction with three-dimensional environments. The biggest challenge to date is to build an error-free, open-domain interaction system that permits the user to manipulate 3D objects in the most natural manner possible, since naturalness


directly influences the usability of the system, along with the engagement that the user might show during interaction [18].

Three-dimensional virtual objects and environments can be controlled in various ways, for example by making use of 2D or 3D input devices, providing the user with three, six or more degrees of freedom while translating and rotating objects [11]. Nowadays, the most common scenario in 3D virtual object manipulation combines 3D graphics rendering systems with simple desktop setups, which makes the interaction possible but still not optimal [14]. More sophisticated virtual reality systems tend to use 6DoF sensors: 3D input devices that enable translation along, and rotation (pitch, yaw and roll) about, all three axes (cf. Figure 1). Such devices are used to measure the position and orientation of limbs, providing three-dimensional data on the user's movement.

Figure 1. Graphical description of movements addressed in 6DoF input devices
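The translation and rotation components described above can be sketched as a simple data structure; the class and field names below are our own illustration, not part of any sensor API:

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """One 6DoF sample: translation along, and rotation about,
    each of the three axes."""
    x: float      # translation (e.g. millimeters)
    y: float
    z: float
    pitch: float  # rotation about the x axis (radians)
    yaw: float    # rotation about the y axis
    roll: float   # rotation about the z axis

# hypothetical sensor reading: hand raised and rolled slightly
sample = Pose6DoF(x=0.0, y=120.5, z=-30.2, pitch=0.0, yaw=0.1, roll=0.35)
print(sample.roll)
```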

Even though much technical development has taken place in the field within the last two decades, 6DoF interaction is still challenging due to limitations of the sensor technologies, insufficient knowledge of how humans interact with computer-generated 3D environments, and the task-specific demands and constraints of each interaction device itself [7]. A few of the most widely known areas that benefit from three-dimensional interaction in virtual environments are 3D modeling and scene composition, visual programming, medical visualization, prototyping, design for engineering purposes, browsing large datasets, Technology Enhanced Learning, and real-time 3D communication such as Web3D [7]. The relevance of researching the aspects of 3D interaction lies in its many applications, which vary from field to field. Nevertheless, the general goal of the casual user is usually related to browsing, manipulating or interacting with three-dimensional data. With that in mind, the purpose of this study is to find out whether or not it is beneficial to use 6DoF devices for the manipulation of virtual objects.

In this paper we first present, in Section 2, related work and its relevance to our study. Section 3 gives a brief introduction to the concept of natural interfaces. Section 4 discusses input devices, in order to make a clear distinction between three-dimensional input device types. In Section 5 the method used in this study is introduced, describing experiment design choices and the evaluation protocol. Section 6 presents the comparative results of the aforementioned experiments, while Sections 7 and 8 present the discussion and conclusions of the research, respectively.


2. RELATED WORK

Although much work has been done in the field of 3D object manipulation, several aspects of interaction still need further analysis. Early research in the field aimed to evaluate 3D input devices in the context of 3D interaction techniques and their relation to user performance [23]. This is still common practice due to the constant development of interaction devices and rendering systems.

To evaluate 3D input devices in virtual environments, researchers have often used Fitts's law to predict the user's movement time in pointing tasks. A study by Kouroupetroglou et al. [10] presents a pointing task evaluation comparing mouse and Wii Remote input devices. The study was divided into 2D and 3D experiments in which both the Wii Remote and mouse conditions were tested. The two-dimensional experiments were run in a planar virtual environment with 16 circular targets arranged equidistantly from the starting point, while in the 3D case 8 spherical targets were positioned at the vertices of a cube. The results from both conditions showed that the Wii Remote was outperformed by the mouse in 2D and 3D pointing tasks. It is important to note, however, that the Wii Remote's response, and therefore the interaction, was reported to be troublesome under certain lighting conditions.

Another study, by Raynal et al. [16], argues for the importance of unifying 3D pointing task evaluation, based on the ergonomic requirements stated in the ISO 9241-9 standard. In this study, the researchers adapt the standard evaluation protocol of input devices for 2D pointing tasks, considering important variations that a 3D environment may imply. The devices used in the experiment are the SpaceNavigator 3D mouse and the Polhemus Patriot motion tracking input system. One of the most striking adaptations concerns the validation of a reached target in the context of the pointing task. ISO 9241-9 states that validation is successful once the cursor is within the target's width. These authors propose, however, that a collision with the target already constitutes a valid target reach. This results in a much more positive index of performance for the users and reinforces the need for occasional adjustments to pointing task evaluation in three-dimensional environments.

In [2], Bérard et al. conducted two experiments to investigate the dominance of the mouse over 3D input devices in desktop 3D interaction. The devices used in this research were the mouse, DepthSlider, SpaceNavigator and Wii Remote. Evaluation was done by measuring the users' performance time when completing pointing tasks inside a virtual cubic environment. In addition, in an attempt to analyze the participants' bio-signals, the researchers recorded galvanic skin response (GSR), heart rate (HR) and blood volume pulse amplitude (BVP) data. The experiment demonstrated that the mouse was more efficient than the other devices for accurate placement. The researchers also concluded that the more degrees of freedom, the worse the time to complete the task, while the measured stress of the user tends to be higher. Nonetheless, it remains unclear whether the interaction design of the experiment negatively influenced the results for the 6DoF input devices.


3. NATURAL USER INTERFACES

Gestural interfaces are based on the recognition and mathematical interpretation of gestures performed by the user, resulting in interactive scenarios that vary with case-specific tasks, depending on the goal of the interaction designer. Such interfaces are part of a group of input systems denominated Natural User Interfaces, or NUI. Natural User Interfaces can be classified into two main groups, ergonomically distinguished by physical contact with the user's body: wearable and touchless interfaces. As the name suggests, wearable interfaces are input devices worn by users that contain sensors or markers in order to capture motion with the desired precision. Systems such as the Dataglove, MOVE and WiiMote can be considered wearable Natural User Interfaces. Touchless interfaces, on the other hand, are characterized by the lack of physical contact with the human body, enabling the user to issue commands without having to touch any equipment. Devices in this category can be essential for certain 3D tasks, such as sterile image-guided surgery, once again reinforcing the importance of researching the usability of such devices. Working examples of touchless NUI are the Microsoft Kinect, ASUS Xtion Pro Live, and the Leap Motion sensor.

In this research, we chose to use the Leap Motion sensor for the experiments with the treatment group (cf. Figure 2). The device combines infrared LEDs and two cameras under a black glass cover, enabling the software to track finger movements performed above the sensor. The decision to test this device rather than others was determined by its commercially announced qualities, such as portability, purported accuracy and ease of use, suggesting its possible popularization in the context of domestic 3D environments and virtual object manipulation setups.

Figure 2. Leap Motion sensor

4. 3D OBJECT MANIPULATION

Several authors provide surveys and comparisons of distinct interaction techniques, describing the main functions that input devices perform in their respective virtual environments. Chris Hand [7] reports three main operations used by the fields that profit from 3D interaction, namely object manipulation, viewpoint manipulation and application control. In this paper we focus on object manipulation, keeping in mind that the other two tasks should be researched in future work. According to Subramanian [1],


the essential atomic actions within object manipulation can be described as selection, translation and deformation. In this study we focus mainly on the translation and deformation aspects, as further depicted in our experiments.

Our aim is to draw conclusions about the system's performance through measurements made during user interaction; it is therefore important to elucidate which variables are taken into account when analyzing the executed tasks. In his study on user performance in relation to input devices, Zhai [23] defines six aspects of the usability of a 6DoF input device: speed, accuracy, ease of learning, fatigue, coordination and device persistence. Among these characteristics of three-dimensional input interaction, we quantitatively measure speed and ease of learning, while accuracy, coordination and device persistence are known variables inherent to the given system. Fatigue is measured qualitatively with the help of a usability questionnaire.

5. METHOD

Due to the variety of 3D input and interaction techniques, many different methods are used to evaluate user performance. Moreover, the novel characteristics of certain input devices may require the creation of ad-hoc approaches for evaluating 6DoF interaction. Two main approaches are widely seen in the literature of the field: the structured approach and ad-hoc approaches. In summary, the structured approach is a set of methods that aim to assess pointing task data in a structured manner, usually based on Fitts's model, while ad-hoc approaches vary with case-specific tasks and devices. In this paper we chose the structured approach, along with inferential and descriptive data analysis, in order to evaluate both input devices on the proposed experiments. Qualitative measurement was performed with the System Usability Scale (SUS), which was filled in by the test subjects right after completion of all tasks.
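SUS scoring follows a fixed scheme: odd-numbered (positively worded) items contribute their response minus 1, even-numbered items contribute 5 minus the response, and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch (the function name is ours):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses (1-5). Odd-numbered items are positively worded,
    even-numbered items negatively worded."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# example: a fairly positive questionnaire
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```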

5.1 Experiment Design

Test subjects were randomly divided into two groups under different conditions related to the type of input device. The control group was exposed to the mouse condition, while the experimental group performed its tasks with the Leap Motion gestural interface. Subjects from the control and experimental groups were exposed to the same virtual environment and target positions, differing only in their input interaction method. Performance time was measured in all tasks. The user's initial position as well as the target coordinates were known and equal for all test subjects. The experiment was implemented in Processing.js using the Onformative library to enable the gestural interaction. Overall, 35 subjects were tested: 20 under the 3D input condition and 15 under the mouse condition.

In order to observe the correlation between input device and effectiveness of object translation, we designed two 3D pointing tasks that were performed by the users in a given three-dimensional virtual environment (cf. Figure 3). It is important to note that in both cases viewpoint or camera manipulation was not enabled, providing the test subject with a single angle of vision from which to make decisions about their spatial movements. This decision was taken with the aim of isolating the distance and time variables, keeping in mind that


viewpoint manipulation should be explored in future work. In the first pointing task, test subjects were instructed to reach a point in space by positioning a red-colored sphere onto the first target, described to the user as the "intersection of all axes". The second task had two targets, demanding that the user position the sphere on target 1 and subsequently on target 2. Note that the trigonometric and statistical analyses regarding the second pointing task consider the trajectory from target 1 to target 2, not from the starting point. Validation of target selection is defined within a cubic field of 60 pixels: once the sphere is positioned partially or completely within this field, a console message returns the time taken to reach the target, in milliseconds.
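The validation step can be sketched as a point-in-cube test plus a timer. This is our own simplification with hypothetical names: it checks only the sphere's center, whereas the experiment also accepted partial overlap of the sphere with the field.

```python
import time

CUBE_HALF = 30.0  # half-side of the 60-pixel cubic validation field

def target_reached(sphere, target):
    """Return True once the sphere's center lies inside the cubic
    field centered on the target (center-only simplification)."""
    return all(abs(s - t) <= CUBE_HALF for s, t in zip(sphere, target))

start = time.monotonic()
# ... the interaction loop would update `sphere` here ...
sphere, target = (100.0, 58.0, -12.0), (90.0, 40.0, 0.0)
if target_reached(sphere, target):
    elapsed_ms = (time.monotonic() - start) * 1000.0
    print(f"target reached in {elapsed_ms:.0f} ms")
```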

Figure 3. Experiment 1

Figure 4. Mouse Interaction


Figure 5. Leap Motion Interaction

Both pointing tasks were analyzed according to variations of Fitts's law in order to measure the Index of Performance (cf. Equation 1) of the given tasks in relation to their Index of Difficulty (cf. Equation 2).

IP = ID / MT                          (1)

ID = log2(D/W + 1.0)                  (2)

MT = a + b log2(D/W + 1.0)            (3)

Equation 1 is the formula used to calculate the Index of Performance, or throughput, of a pointing task. Equation 2 calculates the Index of Difficulty of each pointing task, where D is the distance from the starting point to the center of the target and W is the width of the target. Since in our experiment all targets had the same dimensions, the distinction between the two indexes of difficulty was determined by target distances. Equation 3 can be used to predict time measurements for the pointing task, where a and b are empirical constants determined through linear regression.
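Equations 1 and 2 can be computed directly; a minimal sketch in Python (the numeric values are illustrative, not the study's):

```python
import math

def index_of_difficulty(d, w):
    """Equation 2: ID = log2(D/W + 1.0), in bits."""
    return math.log2(d / w + 1.0)

def index_of_performance(d, w, mt):
    """Equation 1: IP = ID / MT, throughput in bits per second
    when MT is given in seconds."""
    return index_of_difficulty(d, w) / mt

# illustrative task: target 135 px away, 20 px wide, reached in 2.5 s
print(round(index_of_difficulty(135, 20), 2))        # -> 2.95
print(round(index_of_performance(135, 20, 2.5), 2))  # -> 1.18
```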

Although the conventional Fitts's law is commonly used in research and multidimensional design tasks, the calculation applies only to one-dimensional movements, compromising the comprehension of three-dimensional data when it comes to the manipulation of virtual objects in 3D environments. Due to our different starting point, we adapted Fitts's law to work in a 3D environment, where c is an arbitrary constant to be determined through linear regression and θ is the angle between the starting point and the target according to Figure 6. Equation 5 shows the adaptation we made to the terms of Equation 2.

MT = a + b ID3                        (4)

ID3 = log2(D/W + 1.0) + c sin θ       (5)

Considering this adaptation of Fitts's law to three-dimensional tasks, indexes of difficulty were calculated for several values of c, as indicated in [15], with constant target width.
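The adapted index can be swept over c in the same way; a sketch with illustrative D, W and θ values (not the study's):

```python
import math

def id3(d, w, c, theta_deg):
    """Equation 5: ID3 = log2(D/W + 1.0) + c * sin(theta)."""
    return math.log2(d / w + 1.0) + c * math.sin(math.radians(theta_deg))

# sweep the arbitrary constant c, as done for Table 1
# (D = 135, W = 20 and theta = 19 degrees are illustrative values)
for c in (0.0, 0.5, 1.0):
    print(c, round(id3(135, 20, c, 19), 2))
```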


Table 1. Variation of the c value in the adaptation of Fitts's law for the analysis of three-dimensional tasks

            Task 1            Task 2
c value     D        ID       D        ID
0           135.19   3.45     315.12   5.35
0.1         135.19   3.52     315.12   5.28
0.2         135.19   3.59     315.12   5.21
0.3         135.19   3.66     315.12   5.14
0.4         135.19   3.73     315.12   5.07
0.5         135.19   3.80     315.12   5.00
0.6         135.19   3.87     315.12   4.93
0.7         135.19   3.94     315.12   4.86
0.8         135.19   4.01     315.12   4.79
0.9         135.19   4.08     315.12   4.72
1           135.19   4.15     315.12   4.65

After performing both calculations, we observed that the three-dimensional version of Fitts's law explains more clearly why the second target is harder to reach than the first: the formula takes into account the angle, expressed in a two-dimensional frontal plane, from the starting point to the target (cf. Figure 6), differentiating indexes of difficulty not only by distance and target width but also by the referred angle.

Figure 6. Reference angles

In addition to the first two pointing tasks, a third task concerning 3D object modeling was developed with the aim of evaluating the overall performance of the subjects under the two input conditions while deforming a 3D shape (cf. Figure 7), calculating the average completion time in both situations. This task consists of re-shaping a deformed cube by extruding one face


of the object. The interaction was performed by moving the cursor or tracked hand along a given axis.

Figure 7. Experiment 2

5.2 Experiment Procedure

The group under the mouse condition was shown a brief instructional video, since a pilot study revealed that a few users were confused about the goal of the tasks. The 30 seconds of audiovisual demonstration were followed by the completion of the three tasks and data logging.

Subjects under the experimental condition were also shown a short video containing instructions on how to perform the pointing tasks. However, unlike the test subjects exposed to the mouse condition, the experimental group underwent a short, individually performed training period. Each subject was introduced to the Leap Motion gestural interface by performing two minutes of interaction with a 3D environment specifically designed for learning purposes. In this environment, users did not have a predetermined task, interacting freely with a white wireframe sphere and being able to control the 3D position and rotation of the given shape. After getting acquainted with gestural interface interaction in the context of a 3D virtual world, subjects were asked to perform the two pointing task experiments and the object modeling experiment.

6. RESULTS

We tested 15 participants under the mouse condition and 20 participants under the gestural interface condition. Among the 35 participants were 23 male and 12 female subjects of different ages and nationalities (cf. Figure 8). Figure 9 shows the distribution of participants under both conditions by age group.

Although the participants involved in this study have distinct backgrounds and levels of expertise, it was a requirement for participation to have interacted with 3D design software at least once.


Figure 8. Number of participants in mouse and Leap Motion conditions by gender.

Figure 9. Number of participants in mouse and the Leap Motion conditions per age group.

6.1 The Mouse Condition

Figure 10 shows the learning rate under the mouse condition: on a second attempt, users take approximately 3 seconds less than the first time to reach the same target. A moderate learning curve is expected, since we assume that mouse input interaction is mastered by all users.

Figure 10. Learning mouse condition


6.2 The Leap Motion Condition

The learning curve for the Leap Motion device is steeper, since this input device was practically unknown to our sample population. The interaction device is nevertheless quite user-friendly: on the second attempt to reach the target, performance times were up to 8 seconds shorter than on the first.

Figure 11. Learning Leap Motion condition

6.3 Comparison Between Conditions

To assess the significance of the given values, t-tests were performed between the conditions for all three tasks. In the first pointing task, we found a significant difference in performance times between the Leap Motion (M=20.25, SD=8.96) and mouse (M=33.09, SD=16.99) conditions; t(30)=2.41, p < 0.05. The second pointing task showed the following t-test results for the Leap Motion (M=4.88, SD=2.6) and mouse (M=2.46, SD=1.32) conditions: t(30)=3.59, p < 0.05. The third task, which involved mesh extrusion, did not reach significance due to its high standard deviations: Leap Motion (M=19.45, SD=15.74) and mouse (M=33.09, SD=16.99); t(30)=0.4, p > 0.05.
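As a sketch, a two-sample t statistic under the equal-variance assumption can be computed from summary statistics alone. The exact statistic and degrees of freedom depend on the test variant and any excluded trials, so this is illustrative rather than a reproduction of the values reported above:

```python
import math

def pooled_t(m1, sd1, n1, m2, sd2, n2):
    """Two-sample t statistic with pooled variance, computed from
    summary statistics (means, standard deviations, group sizes).
    Returns (t, degrees of freedom = n1 + n2 - 2)."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df
    se = math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return (m1 - m2) / se, df

# Task 1 summary statistics as reported above (15 mouse, 20 Leap Motion)
t, df = pooled_t(33.09, 16.99, 15, 20.25, 8.96, 20)
print(round(t, 2), df)
```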

6.3.1 Task 1

The first pointing task has only one target, describing a movement from the starting point to target 1 with no obstacles or other tasks within the trajectory. As we may observe, under the given constraints, 3D input interaction outperforms mouse interaction in this single-target pointing task. It is important to note that the starting point was controlled to be equal for all test subjects.


Figure 12. Comparing overall performance time means in both conditions (Task 1)

After analyzing performance time means, a correlation was found between gender and task completion time, showing that females outperformed males in the first pointing task under both conditions (cf. Figure 13).

Figure 13. Comparison between performance time means in Task 1 for both conditions by gender

Figure 14 compares the Leap Motion's and the mouse's learning curves for the single-target task, considering mean performance times for the first and second attempts to reach the same target. From the graph we can conclude that although the time difference between the second and first attempts is similar in both conditions, mean performance times with the Leap Motion device are much faster than under the mouse condition.


Figure 14. Comparing learning process of both devices

6.3.2 Task 2

The second pointing task contains two targets, assuming a trajectory from the starting point through the 1st target to the 2nd target. The value considered in the data analysis and geometric calculations is the spatial difference between target 2 and target 1. Unlike the first pointing task, Task 2 showed faster performance times under the mouse condition. In our case this might suggest that the additional degrees of freedom inherent to the gestural interface can be misleading when consecutively aiming at targets with different z coordinates, rather than at a single target. It is important to remember that viewpoint manipulation was disabled and that under the mouse condition the z axis was accessed through the mouse scroll wheel, while under the Leap Motion condition the third dimension is reached through finger tracking.

Figure 15. Comparing overall performance time means in both conditions (Task 2)

As we may observe, male and female subjects had similar performance times under both conditions during Task 2; no significant difference was found between performance time and gender.


Figure 16. Comparison between performance time means in Task 2 for both conditions by gender

Comparison between qualitative measurement scores and performance times showed a correlation between shorter task completion times and higher scores on the System Usability Scale, on which the Leap Motion condition scored higher, indicating greater satisfaction with the device. We must, however, consider that there might be a novelty effect caused by the subjects' unfamiliarity with the device, making them score higher on qualitative evaluations out of interest in the new technology.

Figure 17. Relation between performance time means for each task and device compared to SUS score


Figure 18. Relation between individual Leap Motion performance time for task 1 compared to SUS score

Figure 19. Relation between individual Mouse performance time for task 1 compared to SUS score

6.3.3 Task 3

Although we can perceive a relevant difference between the mean performance times of the mouse and Leap Motion conditions, we must consider, as previously mentioned, that the standard deviation of these data is quite high, which prevents a firm conclusion about the effectiveness of one device over the other in the mesh extrusion task.

Figure 20. Comparing overall performance time means in both conditions (Task 3)


7. DISCUSSION

This study presents a comparative performance evaluation of pointing-task interaction, showing that although 3D input interaction is rated very well qualitatively by the participants, accuracy remains an important issue. The experiment results indicate that accuracy problems in more dynamic virtual tasks can greatly affect interaction time.

In our sample population, single-target tasks were straightforward to perform in the virtual environment with the gestural interface, but the same did not hold once multiple targets were arranged. This can be explained by the fact that the second target was located behind the first and, in the mouse condition, the z-axis was accessed through the scroll wheel, which is still much more accurate than the tracking performed by the Leap Motion; tracking inaccuracy can dramatically compromise performance times.
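The contrast between the two z-axis mappings can be sketched as follows; the step size, scale, and jitter values are illustrative assumptions for the sake of the example, not parameters measured in the study.

```python
def mouse_z_update(z, wheel_clicks, step=0.05):
    """Scroll-wheel depth control: each wheel detent moves the object a
    fixed, repeatable step along z, so depth placement is discrete and
    precise."""
    return z + wheel_clicks * step

def leap_z_update(z, delta_finger_z_mm, scale=0.01, jitter_mm=0.0):
    """Finger-tracking depth control: the sensed change in fingertip
    depth (including any tracking noise) maps continuously onto z."""
    return z + (delta_finger_z_mm + jitter_mm) * scale

# Three wheel detents always give exactly the same displacement...
print(mouse_z_update(0.0, 3))
# ...whereas tracking jitter perturbs the finger-driven update.
print(leap_z_update(0.0, 15.0, jitter_mm=1.5))
```

The discrete, noise-free wheel mapping explains why the mouse remained more accurate for targets stacked along the depth axis.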

An interesting gender correlation was also found: females outperformed males in performance time on the first pointing task.

8. CONCLUSIONS AND FUTURE WORK

Based on the results from the experiments, we can conclude that, within the constraints of the tasks developed in this research, the presented 3D input device outperformed mouse interaction only in single-target situations, showing that 3D translation is less cumbersome when the z-axis is provided as input based on real-life movement mappings. However, accuracy issues can degrade the performance times of more complex spatial movements (multiple targets).

Negative aspects of using the 3D input device for complex spatial interactions were already reported during the development stage. Device accuracy remains one of the biggest obstacles to the popularization of such 3D input devices. Concerning 3D tasks, expert users proved, in both the quantitative and qualitative studies, to be strongly biased towards mouse interaction, electing the mouse as the most reliable and practical device for manipulating 3D objects.

Further investigation of viewpoint manipulation and application control is strongly recommended, since it would provide clearer guidelines on how to interact fully with a given 3D software package by means of 3D input devices, including window and menu navigation, state changes, and camera control.

To assess the performance of other available 3D input devices when modeling and manipulating 3D virtual objects, further research should consider a broader selection of 6DoF input systems, enabling a more complete overview of the advantages of one technique over another. It is worth pointing out that assessing the weak aspects of the evaluated 3D input systems could contribute to the improvement of existing interaction devices and the development of novel ones.


ACKNOWLEDGMENT

My deepest appreciation goes to my supervisor Dr. Ir. Fons Verbeek, whose contributions were invaluable throughout the course of my study. I am also indebted to my colleague Roy van Rooijen and collaborator Alessandro Atzeni, whose suggestions and efforts made an important contribution to my work. Last but not least, I would like to express my gratitude to Rosana Camargo for her moral support and warm encouragement.

REFERENCES

1. Balakrishnan, R., Baudel, T., Kurtenbach, G., Fitzmaurice, G. The Rockin'Mouse: integral 3D manipulation on a plane. CHI '97, (1997).

2. Bérard, F., Ip, J., Benovoy, M., El-Shimy, D., Blum, J.R., Cooperstock, J.R. Did "Minority Report" get it wrong? Superiority of the mouse over 3D input devices in a 3D placement task. INTERACT (2), Springer, (2009), 400-414.

3. Brooke, J. SUS: A Quick and Dirty Usability Scale. Usability Evaluation in Industry, (1996).

4. Chen, M., Mountford, S. J., Sellen, A. A study in interactive 3-D rotation using 2-D control devices. SIGGRAPH '88: Proceedings of the 15th annual conference on Computer graphics and interactive techniques, (1988), 121-129.

5. Froehlich, B., Hochstrate, J., Skuk, V., Huckauf, A. The globefish and the globe-mouse: two new

six degree of freedom input devices for graphics applications, ACM Conference on Human Factors in Computing Systems (CHI), ACM Press (2006), 191–199.

6. Haan, G. Techniques and Architectures for 3D Interaction. TU Delft, (2009).

7. Hand, C. A survey of 3D interaction techniques, Computer Graphics Forum 16, (1997), 269-281.

8. Hinckley, K., Tullio, J., Pausch, R., Proffitt, D., Kassell. N. Usability analysis of 3d rotation

techniques. UIST ’97: Proceedings of the 10th annual ACM symposium on User interface software and technology, (1997), 1–10.

9. Jacob, R. J. K., Sibert, L. E., McFarlane, D. C., Preston, J. M. Integrality and separability of input devices. ACM Trans. Comput.-Hum. Interact. 1(1), (1994), 3-26.

10. Kouroupetroglou, G., Pino, A., Balmpakakis, A., Chalastanis, D., Golematis V., Ioannou N.,

Koutsoumpas, I. Using Wiimote for 2D and 3D pointing tasks: gesture performance evaluation. Gesture Workshop , Springer, (2011), 13-23.

11. Kulik, A., Hochstrate, J., Kunert, A., Froehlich, B. The influence of input device characteristics on spatial perception in desktop-based 3D applications. 3DUI , IEEE, (2009), 59-66.

12. Kunert, A., Huckauf, A., Froehlich, B. A comparison of tracking- and controller-based input for complex bimanual interaction in virtual environments. In B. Froehlich, R. Blach, and R. van Liere, editors, EG IPT-EGVE 2007, (2007), 43-52.

13. Lee, J., Boulanger, C.N. Direct, spatial, and dexterous interaction with see-through 3D desktop.

SIGGRAPH Posters , ACM, (2012), 69.

14. Liang, J., Green, M. JDCAD: A Highly interactive 3D modeling system. 3rd International Conference on CAD and Computer Graphics, Beijing, China, (1993), 217-222.

15. Murata, A., Iwase, H. Extending Fitts' Law to a three-dimensional pointing task. Human Movement Science 20, (2001), 791-805.

16. Raynal, M., Dubois, E., Schmitt, B. Towards unification for pointing task evaluation in 3D desktop virtual environment. South CHI 2013: 562-580.


17. Silveira, W. G. Manipulation of 3D objects in collaborative environments Using the Kinect Device. Federal University of Uberlândia. (2009).

18. Subramanian, S., Ijsselsteijn, W. Survey and classification of spatial object manipulation techniques. IPO, Center for User-System Interaction, Eindhoven University of Technology, (2000).

19. Ware, C. Using hand position for virtual object placement. The Visual Computer 6(5), (1990), 245–253.

20. Wuthrich, C.A. An analysis and a model of 3D interaction methods and devices for virtual reality. Proceedings of the Eurographics Workshop, (1999),18-29.

21. Zhai, S. Human performance in six degrees of freedom input control. Ph.D. Thesis. Univ. of Toronto, (1995).

22. Zhai, S. Interaction in 3D Graphics. SIGGRAPH Computer Graphics Newsletter 32, (1998).

23. Zhai S. User performance in relation to 3D input device design. ACM Computer Graphics 32(4), (1998), 50–54.