
ISSN: 2277 – 9043

International Journal of Advanced Research in Computer Science and Electronics Engineering

Volume 1, Issue 5, July 2012

All Rights Reserved © 2012 IJARCSEE

Roadmaps for Usability Engineering

Sanjeev Narayan Bal

Asst. Prof., Dept. of Comp. Sc.

Trident Academy of Creative Technology, Bhubaneswar

[email protected]

Abstract-- Usability engineering is a cost-effective, user-centered process that ensures a high level of effectiveness, efficiency, and safety in complex interactive systems. A major challenge, and thus opportunity, in the field of human-computer interaction (HCI), and specifically usability engineering (UE), is designing effective user interfaces for emerging technologies that have no established design guidelines or interaction metaphors, or that introduce completely new ways for users to perceive and interact with technology and the world around them. This paper presents a brief description of usability engineering activities and discusses our experiences leading usability engineering activities for different applications. We propose a usability engineering approach that employs user-based studies to inform design, iteratively inserting a series of user-based studies into a traditional usability engineering lifecycle to better inform initial user interface designs. We discuss interaction, visualization, and adaptive user support for maps on mobile devices, and we propose an entangled User-Centered System Design (UCSD) and Feature Driven Development (FDD) process for Mass Casualty Incidents (MCIs), in which a usability engineering model is used.

Index Terms--Usability engineering, mobile devices, Mass Casualty Incident, User Centered Design, Feature Driven Development, virtual environments

I. INTRODUCTION

Usability engineering is a cost-effective, user-centered process that ensures a high level of effectiveness, efficiency, and safety in complex interactive systems. Activities in this process include user analysis, user task analysis, conceptual and detailed user interface design, quantifiable usability metrics, rapid prototyping, and various kinds of user-centered evaluations of the user interface. These activities are further explained in Section II, "Activities in Usability Engineering." Usability engineering produces highly usable user interfaces that are essential to reduced manning, reduced human error, and increased productivity. Unfortunately, managers and developers often have the misconception that usability engineering activities add costs to a product's development life cycle. In fact, usability engineering can reduce costs over the life of the product by reducing the need to add missed functionality later in the development cycle, when such additions are more expensive. The process is an integral part of interactive application development, just as systems engineering and software engineering are. Usability engineering activities can be tailored as needed for a specific project or product development effort. The usability engineering process applies to any interactive system, ranging from training applications to multimedia CD-ROMs to augmented and virtual environments to simulation applications to graphical user interfaces (GUIs), and it is flexible enough to be applied at any stage of the development life cycle, although early use of the process provides the best opportunity for cost savings.

II. ACTIVITIES IN USABILITY ENGINEERING

Fig. 1 Typical user-centered activities associated with our usability engineering process. Although the usual flow is generally left-to-right from activity to activity, the arrows indicate the substantial iterations and revisions that occur in practice.


As mentioned in the introduction, usability engineering consists of numerous activities. Figure 1 shows a simple diagram of the major activities. Usability engineering includes both design and evaluations with users; it is not applicable only at the evaluation phase. Usability engineering is not typically hypothesis-testing-based experimentation, but instead is structured, iterative user-centered design and evaluation applied during all phases of the interactive system development life cycle. Most existing usability engineering methods were spawned by the development of traditional desktop graphical user interfaces (GUIs).

In the following sections, we discuss several of the major usability engineering activities, including domain analysis, expert evaluation (also sometimes called heuristic evaluation or usability inspection), formative usability evaluation, and summative usability evaluation.

Domain Analysis

Domain analysis is the process by which answers to two critical questions about a specific application context are determined:

• Who are the users?
• What tasks will they perform?

Thus, a key activity in domain analysis is user task analysis, which produces a complete description of tasks, subtasks, and actions that an interactive system should provide to support its human users, as well as other resources necessary for users and the system to cooperatively perform tasks. While it is preferable that user task analyses be performed early in the development process, like all aspects of user interface development, task analyses also need to be flexible and potentially iterative, allowing for modifications to user performance and other user interface requirements during any stage of development. In our experience, interviewing an existing and/or identified user base, along with subject matter experts and application "visionaries", provides very useful insight into what users need and expect from an application. Observation-based analysis requires a user interaction prototype and, as such, is used as a last resort. A combination of early analysis of application documentation (when available) and interviews with subject matter experts typically provides the most effective user task analysis. Domain analysis generates critical information used throughout all stages of the usability engineering life cycle. A key result is a top-down, typically hierarchical decomposition of detailed user task descriptions. This decomposition serves as an enumeration and explanation of desired functionality for use by designers and evaluators, as well as of required task sequences. Other key results are one or more detailed scenarios describing potential uses of the application, and a list of user-centered requirements. Without a clear understanding of application domain user tasks and user requirements, both evaluators and developers are forced to "best guess" or interpret desired functionality, which inevitably leads to poor user interface design.
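To make the hierarchical decomposition concrete, here is a minimal sketch in Python; the tasks, subtasks, and actions are invented for illustration (borrowing the mobile-map tasks discussed later in this paper) rather than taken from an actual task analysis.

```python
# Invented example of a top-down task decomposition produced by domain
# analysis: tasks map to subtasks, subtasks map to lists of user actions.
task_hierarchy = {
    "Plan a route": {
        "Specify destination": ["Enter address", "Pick point on map"],
        "Choose route options": ["Select fastest route", "Avoid tolls"],
        "Review route": ["Inspect turn list", "Confirm start"],
    },
    "Find nearby restaurant": {
        "Search": ["Enter cuisine", "Set search radius"],
        "Compare results": ["Sort by distance", "Open details"],
    },
}

def print_tasks(node, depth=0):
    """Walk the decomposition, printing tasks, subtasks, and actions."""
    items = node.items() if isinstance(node, dict) else ((a, None) for a in node)
    for name, children in items:
        print("  " * depth + "- " + name)
        if children:
            print_tasks(children, depth + 1)

print_tasks(task_hierarchy)
```

Such an enumeration is exactly what designers and evaluators can use to check that every required task sequence is supported by the interface.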

Expert Evaluation

Expert evaluation (also called heuristic evaluation or usability inspection) is the process of identifying potential usability problems by comparing a user interface design to established usability design guidelines. The identified problems are then used to derive recommendations for improving that design. This method is used by usability experts to identify critical usability problems early in the development cycle, so that these design issues can be addressed as part of the iterative design process. Often the usability experts rely explicitly and solely on established usability design guidelines to determine whether a user interface design effectively and efficiently supports user task performance (i.e., usability). But usability experts can also rely more implicitly on design guidelines and work through user task scenarios during their evaluation. Nielsen recommends three to five evaluators for an expert evaluation, and has shown empirically that fewer evaluators generally identify only a small subset of problems and that more evaluators produce diminishing results at higher costs. Each evaluator first inspects the design alone, independently of other evaluators' findings. Then the evaluators combine their data to analyze both common and conflicting usability findings. Results from an expert evaluation should not only identify problematic user interface components and interaction techniques, but should also indicate why a particular component or technique is problematic. This is arguably the most cost-effective type of usability evaluation, because it does not involve users.
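The diminishing returns Nielsen reports are often summarized by the Nielsen-Landauer problem-discovery model. The following sketch uses that published model (with the typical single-evaluator detection rate of about 0.31 from Nielsen's studies, not a figure from this paper) to show why three to five evaluators is a reasonable recommendation.

```python
# Nielsen-Landauer model: the expected fraction of usability problems
# found by i independent evaluators is 1 - (1 - L)**i, where L is the
# probability that one evaluator detects a given problem (L ~ 0.31).

def proportion_found(i: int, l: float = 0.31) -> float:
    """Expected fraction of problems found by i expert evaluators."""
    return 1.0 - (1.0 - l) ** i

for i in range(1, 11):
    print(f"{i:2d} evaluators -> {proportion_found(i):5.0%} of problems")

# With L = 0.31, three evaluators find about 67% of the problems and five
# find about 84%; each additional evaluator adds progressively less.
```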

Formative Usability Evaluation

Formative evaluation is the process of assessing, refining, and improving a user interface design by having representative users perform task-based scenarios, observing their performance, and collecting and analyzing data to empirically identify usability problems. This observational evaluation method can ensure usability of interactive systems by including users early and continually throughout user interface development. The method relies heavily on usage context (e.g., user tasks, user motivation), as well as a solid understanding of the application domain. Formative evaluation produces both qualitative and quantitative results collected from representative users during their performance of task scenarios. Qualitative data include critical incidents: user events that have a significant impact, either positive or negative, on users' task performance and/or satisfaction. Quantitative data include metrics such as how long it takes a user to perform a given task, the number of errors encountered during task performance, measures of user satisfaction, and so on. Collected quantitative data are then compared to appropriate baseline metrics, sometimes initially redefining or altering evaluators' perceptions of what should be considered baseline. Both qualitative and quantitative data are equally important, since each provides unique insight into a user interface design's strengths and weaknesses.
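As a concrete illustration of the quantitative side, the sketch below compares collected metrics against predefined baselines; the session data and baseline targets are invented for the example and are not from any study in this paper.

```python
from statistics import mean

# Each tuple: (task time in seconds, error count, satisfaction on a 1-7 scale).
sessions = [(48.2, 1, 6), (61.5, 3, 4), (52.9, 0, 6), (70.1, 2, 5)]

# Baseline metrics agreed on before the evaluation; "max" means the observed
# mean should not exceed the target, "min" means it should reach it.
baselines = {
    "task_time_s":  (60.0, "max"),
    "errors":       (2.0,  "max"),
    "satisfaction": (5.0,  "min"),
}

observed = {
    "task_time_s":  mean(t for t, _, _ in sessions),
    "errors":       mean(e for _, e, _ in sessions),
    "satisfaction": mean(s for _, _, s in sessions),
}

for name, (target, kind) in baselines.items():
    value = observed[name]
    met = value <= target if kind == "max" else value >= target
    print(f"{name}: observed {value:.2f} vs baseline {target} -> "
          f"{'met' if met else 'NOT met'}")
```

Critical incidents collected during the same sessions would then explain why a given metric was or was not met.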

Summative Usability Evaluation

Summative evaluation, in contrast to formative evaluation, is a process that is typically performed after a product or some part of its design is more or less complete. Its purpose is to statistically compare several different systems or candidate designs, for example, to determine which one is "better," where better is defined in advance. In practice, summative evaluation can take many forms. The most common are the comparative study, the field trial, and, more recently, the expert review. While both the field trial and expert review methods are well suited for design assessment, they typically involve assessment of single prototypes or field-delivered designs. In our experience, the empirical comparative approach employing representative users is very effective for analyzing strengths and weaknesses of various well-formed candidate designs set within appropriate user scenarios. However, it is the most costly type of evaluation, because it may need large numbers of users to achieve statistical validity and reliability, and because data analysis can be complex and challenging.
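As a hedged illustration of the comparative approach, the sketch below compares task-completion times from representative users on two candidate designs with an independent-samples t-test (using SciPy); all data are invented for the example.

```python
from scipy import stats

# Task-completion times (seconds) for the same scenario on two candidate
# designs, one group of representative users per design (invented data).
design_a = [52.1, 47.8, 60.3, 55.0, 49.7, 58.2, 51.4, 62.0]
design_b = [44.9, 41.2, 50.7, 46.3, 39.8, 48.5, 43.0, 47.6]

t_stat, p_value = stats.ttest_ind(design_a, design_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value (e.g. < 0.05) suggests the difference in mean task time
# is unlikely to be due to chance; what counts as "better" must still be
# defined in advance, as noted above.
```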


III. USABILITY ENGINEERING APPROACHES TO DESIGNING USER INTERFACES

To date, numerous approaches to software and user interface design have been developed and applied. The waterfall model, developed by Royce, was the first widely known approach to software engineering. This model takes a top-down approach based on functional decomposition. Royce admitted that while this process was designed to support large software development efforts, it was inherently flawed since it did not support iteration, a property that he eventually added to the model. The spiral model (Boehm [1]) was the first widely recognized approach that utilized and promoted iteration. It is useful for designing user interfaces (as well as software), because it allows the details of user interfaces to emerge over time, with iterative feedback from evaluation sessions feeding design and redesign. As with usability engineering approaches, the spiral model first creates a set of user-centered requirements through a suite of traditional domain analysis activities (e.g., structured interviews, participatory design, etc.). Following requirements analysis, the second step simply states that a "preliminary design is created for the new system". Hix and Hartson [2] describe a star life cycle that is explicitly designed to support the creation of user interfaces. The points of the star represent typical design/development activities such as "user analyses", "requirements/usability specifications", "rapid prototyping", etc., with each activity connected through a single central "usability evaluation" activity. The points of the star are not ordered, so one can start at any point in the process, but can only proceed to another point via usability evaluation. The design activities focus on moving from a conceptual design to a detailed design. Mayhew [3] describes a usability engineering lifecycle that is iterative and centered on integrating users throughout the entire development process. With respect to design, this lifecycle relies on screen design standards, which are iteratively evaluated and updated. Both the screen design standards and the detailed user interface designs rely on style guides, which can take the form of a "platform" style guide (e.g., Mac, Windows, etc.), a "corporate" style guide (applying a corporate "look and feel"), a "product family" style guide (e.g., MS Office Suite), etc.

Gabbard, Hix and Swan [4] present a cost-effective, structured, iterative methodology for user-centered design and evaluation of virtual environment (VE) user interfaces and interaction. Fig. 2 depicts the general methodology, which is based on sequentially performing:

1. user task analysis,
2. expert guidelines-based evaluation,
3. formative user-centered evaluation, and
4. summative comparative evaluations.

While similar methodologies have been applied to traditional (GUI-based) computer systems, this methodology is novel because we specifically designed it for, and applied it to, VEs, and it leverages a set of heuristic guidelines specifically designed for VEs. These sets of heuristic guidelines were derived from Gabbard's taxonomy of usability characteristics for VEs [5, 6]. A shortcoming of this approach is that it does not give much guidance for design activities. The approach does not describe how to engage in design activities, but instead asserts that initial designs can be created using input from task descriptions, sequences, and dependencies, as well as guidelines and heuristics from the field. Since this methodology assumes the presence of guidelines and heuristics to aid in the designs to be evaluated during the "expert guidelines-based evaluation" phase, it is not applicable to emerging technologies such as augmented reality, where user interface design guidelines and heuristics have not yet been established. Examining many of the approaches described above, and specifically their design and evaluation activities, shows that in most cases design activities rely on leveraging existing metaphors, style guides, or standards in the field (e.g., drop-down menus, a browser's "back" button, etc.). However, in cases where an application falls within an emerging technological field, designers often have no existing metaphors or style guides, much less standards, on which to base their design. Moreover, in cases where the technology provides novel approaches to user interaction or fundamentally alters the way users perceive the interaction space (i.e., where technology and the real world come together), designers often have little understanding of the perceptual or cognitive ramifications of "best guess" designs. As a result, a process is needed to help designers of novel user interfaces iteratively create and evaluate designs, to gain a better understanding of effective design parameters, and to determine under what conditions these parameters are best applied. Without this process, applications developed using traditional usability engineering approaches can only improve incrementally from initial designs, which, again, are often based on developers' best guesses, given the absence of guidelines, metaphors, and standards.

Fig. 2 The user-centered design and evaluation methodology for virtual environment user interaction described by Gabbard.

User Interface Design Activities for Augmented Reality

As shown in Fig. 3, it can be argued that user-based experiments are critical for driving design activities, usability, and discovery early in an emerging technology's development (such as AR). As a technological field evolves, lessons learned from conducting user-based studies are not only critical for the usability of a particular application, but provide value to the field as a whole in terms of insight into a part of the user interface design space (e.g., occlusion or text legibility). As time progresses, contributions to the field (from many researchers) begin to form a collection of informal design guidelines and metaphors from which researchers and application designers alike can draw. Eventually, the informal design guidelines are shaken down into a collection of tried-and-true guidelines and metaphors that are adopted by the community. Finally, the guidelines and metaphors become "de facto" standards or, at best, are deemed "standards" by appropriate panels and committees. The context of the work reported here, however, falls within the application of user-based studies to inform user interface design: the left-most box of Fig. 3. Based on our experiences performing usability engineering, and specifically design and evaluation activities for the Battlefield Augmented Reality System, we propose an updated approach to user interface design activities for augmented reality systems. This approach emphasizes iterative design activities between the user task analysis phase (where requirements are gathered and user tasks understood) and the formative user-centered evaluation phase (where an application-level user interface prototype has been developed and is under examination) (Fig. 2). With this approach, we couple expert evaluation and user-based studies to assist in the user interface design activity (Fig. 3). Expert evaluations can be iteratively combined with well-designed user-based studies to refine designers' understanding of the design space and of effective design parameters (e.g., to identify subsequent user-based studies), and most importantly to refine user interface designs. A strength of this approach is that interface design activities are driven by a number of inputs: the user task analysis phase (Fig. 2), user interface design parameters correlated with good user interface performance (derived from user-based studies), and expert evaluation results.

With this methodology, expert evaluations along with user-based studies are iteratively applied to refine the user interface design space.

Of the three main activities shown in Fig. 3, there are two logical starting points: user interface design and user-based studies. An advantage of starting with user interface design activities is that designers can start exploring the design space prior to investing time in system development and, moreover, can explore a number of candidate designs quickly and easily. In the past, we have successfully used PowerPoint mockups to examine dozens of AR design alternatives. If mocked up correctly, the static designs can be presented through an optical see-through display, which allows designers to get an idea of how the designs may be perceived when viewed through an AR display in a representative context (e.g., indoors versus outdoors). Once a set of designs has been created, expert evaluations can be applied to assess the static user interface designs, culling those that are likely to be less effective than others. The expert evaluations are also useful for further understanding the design space by identifying potential user-based experimental factors and levels. Once identified, user-based studies can be conducted to further examine those factors and levels to determine, for example, whether the findings of the expert evaluation match those of user-based studies. In cases where the design space is somewhat understood and designers have specific questions about how different design parameters might support user task performance, designers may be able to conduct a user-based study as a starting point. Under this approach, designers start with experimental design parameters as opposed to specific user interface designs. As shown in Fig. 4, user-based studies not only identify user interface design parameters to assist in UI design, but also have the potential to produce UI design guidelines and lessons learned, as well as to generate innovation, which provides tangible contributions to the field while also improving the usability of a specific application. Ultimately, a set of iteratively refined user interface designs is produced that forms the basis for the overall application user interface design. This design can then be evaluated using formative user-centered evaluation, as described by Gabbard, Hix, and Swan [4].

Fig. 3 User-based experiments are a critical vehicle for discovery and usability early in an emerging field's development. Over time, contributions from the field emerge, leading eventually to adopted user interface design guidelines and standards.

Fig. 4 We applied the depicted user-centered design activities as part of our overall usability engineering approach when developing the Battlefield Augmented Reality System.


IV. USABILITY ENGINEERING FOR MOBILE DEVICES

Usability determines to a major extent the success of products and services that are based on information and communication technology. Usability engineering [10] is an approach to developing software that is easy to use, effective, and efficient, and is in our opinion based on three principles: 1) early and continuous focus on users and tasks, 2) empirical measurement, and 3) iterative design. In UE, several development cycles (Figure 5), with assessments and re-specifications, are worked through. A complex and interesting scenario is developed with users, from which user requirements and features can be derived. The features are implemented and their quality is assessed by human-computer interaction (HCI) metrics. These metrics are closely related to the social challenges regarding comfort and acceptance, but the technological challenges are also addressed in the metrics. When taking the user as the center of the design, it is also important to take into account his or her cognitive task load [9], for which several metrics exist. The assessment of the metrics can be done in several different ways, either by experts or by users.
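As one concrete example of such a metric, the sketch below computes a weighted workload score in the style of NASA-TLX, a widely used cognitive-workload instrument; NASA-TLX is not named in this paper, and the ratings and weights are invented for illustration.

```python
# NASA-TLX-style workload score: six subscale ratings (0-100) combined
# using weights from pairwise comparisons (the 15 weights sum to 15).
ratings = {  # 0 = low demand .. 100 = high demand; invented values
    "mental demand": 70, "physical demand": 20, "temporal demand": 60,
    "performance": 30, "effort": 65, "frustration": 40,
}
weights = {  # how often each factor "won" a pairwise comparison
    "mental demand": 5, "physical demand": 1, "temporal demand": 3,
    "performance": 2, "effort": 3, "frustration": 1,
}

tlx = sum(ratings[k] * weights[k] for k in ratings) / sum(weights.values())
print(f"Weighted workload score: {tlx:.1f} / 100")  # 56.3 for these values
```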

UE is a good design approach for a usable mobile map, but a thorough understanding of the dynamic use context is crucial for user-centered design of mobile applications [7, 12]. With mobile devices it is necessary to eventually test them while the user is mobile, which makes evaluation difficult. Depending on the task and the application, usability can differ greatly between a sunny day and a rainy day, or between a noisy environment and a silent environment. The choice between a lab experiment and a field experiment is therefore far from trivial for mobile devices [13]. The use of the mobile map is mostly a secondary task; the primary task (route planning or looking for a nearby restaurant) has a strong influence on the way of use. Not only does the device have to have a high degree of fidelity, but the primary task has to be simulated realistically too. Zhang and Adipat [13] give an overview of challenges in usability testing of mobile devices. These challenges are the same as the design challenges for mobile applications: mobile context, connectivity, small screen size, different display resolutions, limited processing capability and power, and different data entry methods. For example, low screen resolution can have disastrous effects on the usability of a mobile application. There are different frameworks for usability testing of mobile devices [11, 13]. In all frameworks, it is an important question whether to do a usability test in the laboratory or in the field, and whether experts or prospective users are used. Lab experiments are appropriate for improvement of the interface design, for which the real device or an emulator can be used. Field experiments, on the other hand, are more appropriate when the final application is tested [13]. The framework of Streefkerk et al. [11] extends the framework of Zhang and Adipat in that it not only gives instructions on which experimental method to use, but also gives constraints on when to use which experimental method.

Fig. 5 The usability engineering method.

We conclude that difficulties exist for usability testing of mobile devices, but several frameworks are available that can be used for designing usable mobile map applications. Because there are many challenges for mobile applications, ease of use and effectiveness are crucial. Using a sound usability engineering approach, we decide which interface features for mobile maps help deal with the technological, environmental, and social challenges.

Example-- Tourist information and navigation support by using 3D maps displayed on mobile devices

Laakso, Gjesdal and Sulebak [8] developed 3D maps with tourist information and GPS for mobile devices. In this study there was a strong focus on user requirements and on feedback from potential users on the prototypes. The study consisted of three iterations. In the first iteration the intended user group was asked how they performed the task that the application was going to support, and which functionalities they would like to have in the proposed application. With the answers it was possible to create a prototype, which was tested in the second iteration. The application was tested in a usability test and with focus groups. Both the participants of the usability test and the participants of the focus groups belonged to the group of intended users. The tasks that were performed in the usability test were typical tasks for the application and were performed in a realistic field environment. There was one drawback in the design of the experiment: the 3D map was shown on a mobile device whereas the 2D map was on paper, so it was difficult to compare the two views. In the final iteration another prototype was evaluated with the use of usability tests and questionnaires. The 3D map was found fun but less usable than the 2D map, for which an explanation could be that participants are used to 2D maps. Furthermore, results showed that location positioning, for example using GPS, is very important for map information on mobile devices. The experimental set-up of this study is a sound example of usability engineering. All three general principles are followed: there is an early and continuous focus on the user, empirical measurements are used, and the design is iterative. In this experiment the technological challenge of visualizing the viewpoint was examined, and several HCI metrics were used to measure the usability of the design. Users were included in both the design and the usability testing.

V. USABILITY ENGINEERING FOR MASS CASUALTY INCIDENTS

To design an application system that is usable in case of an MCI, we propose an entangled approach of User-Centered System Design (UCSD) and Feature Driven Development (FDD) [14, 15, 16, 17]. To prepare users to be able to handle a system in case of an MCI, we propose that the MDGS provide training for MCIs within the regular day-to-day business. This should be accomplished by using similar support systems for handling MCIs and regular rescue and transport missions. In software engineering terms, support for an MCI should be provided by an additional module of the same MDGS framework that is used for handling the regular day-to-day business.

Development Process

As already mentioned, we follow an entangled approach that combines UCSD and FDD to keep the project and development process focused (Figure 6). Classic elements of UCSD (e.g. user studies, interviews) will be complemented by:

• observing MCI exercises;
• accompanying paramedics and emergency physicians while they are using the MDGS in regular missions;
• evaluating the usability of the MDGS in regular missions;
• attending emergency medical aid and MCI-related workshops.

Combining these with scientific information, features can be derived. Natural dependencies between these "small, client valued function[s]" [17] and a prioritization process will lead to a sorted list of feature sets. To work through the list, we use the iterative and incremental process of FDD. By using an entangled FDD/UCSD process as our software engineering paradigm, we are able to quickly roll out feature sets, as well as keep them close to the users' needs and expectations through repeated user feedback.
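A minimal sketch of this prioritization step, assuming invented feature names, dependencies, and client-value scores (the paper does not specify these), might order features by dependency first and client value second:

```python
# Python 3.9+: graphlib provides a dependency-respecting (topological) order.
from graphlib import TopologicalSorter

dependencies = {  # feature -> features it depends on (invented)
    "patient triage form": set(),
    "location tracking": set(),
    "incident overview map": {"location tracking"},
    "ambulance dispatch": {"incident overview map", "patient triage form"},
}

client_value = {  # higher = more client value (invented scores)
    "patient triage form": 9,
    "location tracking": 7,
    "incident overview map": 6,
    "ambulance dispatch": 8,
}

sorter = TopologicalSorter(dependencies)
sorter.prepare()
feature_list = []
while sorter.is_active():
    # Among features whose dependencies are met, take the most valued first.
    for feature in sorted(sorter.get_ready(), key=client_value.get, reverse=True):
        feature_list.append(feature)
        sorter.done(feature)

print(feature_list)
# ['patient triage form', 'location tracking', 'incident overview map',
#  'ambulance dispatch']
```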

General Principles for MDGSs

Figure 7 gives an overview of our proposed MDGS. The overall design is based on the assumption that an MDGS that follows this principle has to be technically feasible and suitable for handling day-to-day rescue and transport missions as well as MCIs. The basic features, as shown in the figure, are:

• All rescue workers (paramedics, emergency physicians, team leaders, and incident commanders) use the same handheld device, most likely a rugged tablet PC.
• All stakeholders (public-safety answering point, crisis squad, hospital staff, and rescue teams) are kept in the loop and are aware of all necessary information. The users are guided by dialogues that are simple enough to remain useful even in very demanding situations.
• The location of every patient and rescue worker is made available through location-based services to the stakeholders in command (a minimal data sketch follows this list).
• Ambulance crews are informed on their way to the area of operation as well as while waiting at the ambulance assembly area. Supplying this information to the rescue workers can help them cope with anxieties and prepare them for the situations they will be confronted with.
• All relevant information is stored on central servers for ad-hoc as well as post-hoc analyses. This information can be very useful for implementing organizational learning and gradually improving the whole man-machine system over time.
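Purely as a hypothetical data sketch (the paper does not define a data model), a location update of the kind the stakeholders in command would receive could look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LocationUpdate:
    """One position report sent to the central servers; field names invented."""
    subject_id: str   # patient or rescue-worker identifier
    role: str         # e.g. "patient", "paramedic", "incident commander"
    latitude: float
    longitude: float
    timestamp: datetime

    def to_message(self) -> dict:
        """Serialize for the stakeholders kept in the loop."""
        return {
            "id": self.subject_id, "role": self.role,
            "lat": self.latitude, "lon": self.longitude,
            "ts": self.timestamp.isoformat(),
        }

update = LocationUpdate("P-017", "patient", 52.5200, 13.4050,
                        datetime.now(timezone.utc))
print(update.to_message())
```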

At the moment our project is still at an early prototype stage. The goal is to integrate the system as a module into the R2-System, an end-to-end solution for regular transport and rescue missions from DIGITALYS GmbH.

Fig. 6 Process model combining FDD and UCSD.

Fig. 7 Handling of MCIs as an integrated part of a general MDGS.


VI. CONCLUSION

From these three applications, we have learned many lessons on how to improve the process of usability engineering. We have presented a modified usability engineering approach to design that employs a combination of user interface design, user-based studies, and expert evaluation to iteratively design a usable user interface as well as refine designers' understanding of a specific design space. In this paper we also looked at the usability of maps on mobile devices. Different methods of visualization, interaction, and adaptive user support to obtain good usability were discussed in view of the technical, environmental, and social challenges. Finally, we have presented a model for usability engineering for Mass Casualty Incidents.

REFERENCES

[1] B.W. Boehm, "A Spiral Model of Software Development and Enhancement", IEEE Computer, 21(5), 61-72, 1988.

[2] D. Hix and H.R. Hartson, "Developing User Interfaces: Ensuring Usability Through Product and Process", New York, NY, John Wiley & Sons, 1993.

[3] D.J. Mayhew, "The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design", San Francisco, CA, Morgan Kaufmann Publishers, 1999.

[4] J.L. Gabbard, D. Hix and J.E. Swan II, "User-Centered Design and Evaluation of Virtual Environments", invited paper, IEEE Computer Graphics and Applications, 19(6), 51-59, November/December 1999.

[5] J.L. Gabbard, "A Taxonomy of Usability Characteristics in Virtual Environments", Master's thesis, Department of Computer Science, Virginia Tech, 1997.

[6] J.E. Swan II and J.L. Gabbard, "Survey of User-Based Experimentation in Augmented Reality", in Proceedings of the 1st International Conference on Virtual Reality, HCI International 2005, Las Vegas, Nevada, USA, July 22-27, 2005.

[7] L. Gorlenko and R. Merrick, "No Wires Attached: Usability Challenges in the Connected Mobile World", IBM Systems Journal, 42, 639-651, 2003.

[8] K. Laakso, O. Gjesdal and J.R. Sulebak, "Tourist Information and Navigation Support by Using 3D Maps Displayed on Mobile Devices", in Proceedings of HCI in Mobile Guides (Udine, Italy, in conjunction with Mobile HCI 2003), 2003.

[9] M.A. Neerincx and J. Lindenberg, "Situated Cognitive Engineering for Complex Task Environments", in Proceedings of the Seventh International Naturalistic Decision Making Conference, 2005.

[10] J. Nielsen, "Usability Engineering", Morgan Kaufmann, 1994.

[11] J.W. Streefkerk, M.P. Esch-Bussemakers and M.A. Neerincx, "Designing Personal Attentive User Interfaces in the Mobile Public Safety Domain", Computers in Human Behavior, 22, 749-770, 2006.

[12] F. Vetere, S. Howard, S. Pedell and S. Balbo, "Walking Through Mobile Use: Novel Heuristics and Their Application", in Proceedings of OzCHI 2003, 24-32, 2003.

[13] D. Zhang and B. Adipat, "Challenges, Methodologies, and Issues in the Usability Testing of Mobile Applications", International Journal of Human-Computer Interaction, 18, 293-308, 2005.

[14] Y. Chu and A. Ganz, "WISTA: A Wireless Telemedicine System for Disaster Patient Care", Mobile Networks and Applications, 12, 201-214, 2007.

[15] B. Demchak, T.C. Chan, W.G. Griswold and L.A. Lenert, "Situational Awareness During Mass-Casualty Events: Command and Control", AMIA Annual Symposium Proceedings, 905, 2006.

[16] D.A. Norman and S.W. Draper, "User Centered System Design: New Perspectives on Human-Computer Interaction", Erlbaum, Hillsdale, NJ, 1986.

[17] S.R. Palmer and J.M. Felsing, "A Practical Guide to Feature-Driven Development", Prentice Hall PTR, Upper Saddle River, NJ, 2002.

Author

Sanjeev Narayan Bal, MBA, MCA, M.Tech (CSE), is an Assistant Professor (CSE) at Trident Academy of Creative Technology, Bhubaneswar, and a Life Member of ISTE. He has published many papers in national and international journals.