User Interface Prototyping using UML Specifications
Abstract
Recently, the use of scenarios for requirements acquisition has gained a lot of attention in the research community. Yet,
the transition from scenarios to formal specifications, the target of the requirements engineering process, remains ill-
defined. Over the past decade, the rapid prototyping of user interfaces has become quite a common technique in industry;
however, prototyping remains weak in linking the application domain with the user interface, and automated prototyping
is mostly limited to database-oriented applications. Most importantly, the prototyping and the scenario approaches lack
integration in the overall requirements engineering process.
In this paper¹, we suggest a requirements engineering process that generates a user interface prototype from scenarios
and yields a formal specification of the application. Scenarios are acquired in the form of collaboration diagrams as
defined by the Unified Modeling Language (UML), and are enriched with user interface (UI) information. These
diagrams are automatically transformed into UML Statechart specifications of the UI objects involved. From the set of
obtained specifications, a UI prototype is generated that is embedded in a UI builder environment for further refinement.
Based on end user feedback, the collaboration diagrams and the UI prototype may be iteratively refined, and the result of
the overall process is a specification consisting of the Statechart diagrams of all the objects involved, together with the
generated and refined prototype of the UI. The algorithms underlying this process have been implemented and exercised
on a number of examples.
1 Introduction
Over the past years, scenarios have received significant attention and have been used for different purposes such as
understanding requirements [27], human computer interaction analysis [6, 26, 31], specification generation [1], and
object-oriented analysis and design [4, 16, 32, 33].
Scenarios are identified as a promising technique for requirements engineering [15]. A typical scenario-based process
for requirements engineering, such as the one proposed by Hsia et al. [15], has two main tasks. The first task consists of
generating scenario specifications that describe system behavior. The second task concerns scenario validation with users
by simulation and prototyping. These tasks remain tedious activities as long as they are not supported by automated tools.
Object-oriented analysis and design methods offer a good framework for scenarios [4, 16, 33]. In our work, we have
adopted the Unified Modeling Language (UML) [34], which is emerging as a unified notation for object-oriented analysis
and design. It directly unifies the methods of Booch [4], Rumbaugh (OMT) [33], and Jacobson (OOSE) [16]. The UML
allows for the description of all major views of a system, but does not define any specific process for requirements
engineering or for software development in general, beyond some preliminary process descriptions reported, for instance,
1 This work is in part supported by FCAR (Fonds pour la formation des chercheurs et l’aide à la recherche au Québec) and by the SPOOL project
organized by CSER (Consortium Software Engineering Research), which is funded by Bell Canada, NSERC (Natural Sciences and Engineering Research Council of Canada), and NRC (National Research Council of Canada).
Mohammed Elkoutbi, Ismaïl Khriss, Rudolf K. Keller
Département IRO, Université de Montréal
C.P. 6128, succ. Centre-ville, Montréal, QC H3C 3J7, Canada
[email protected], [email protected], [email protected]
in [28]. As a result of our work, we suggest a process for requirements engineering that is fully compliant with the UML
notation.
For the purpose of validation in early development stages, rapid prototyping tools are commonly and widely used.
Recently, many advances have been made in user interface (UI) prototyping tools like interface builders and UI
management systems. Using these tools helps speeding up the design and implementation of UIs in comparison to
programming with UI toolkits [24]. Yet, the development of UIs is still time-consuming, since every UI object has to be
created and laid out explicitly. Specifications of dialogue controls have to be added by programming (for UI builders) or
via a specialized language (for UI management systems). Moreover, linking the UI and the application domain remains a
time-consuming activity since it has to be done manually. A number of systems, such as Genius [18], are described in the
literature that automatically generate UI prototypes from specifications of the application domain; yet, these systems use
data structure specifications and ignore task analysis in their process of UI generation. Therefore, the scope of these
systems is limited to data-oriented applications.
In this paper², we suggest an approach to requirements engineering that links UML models with UI prototypes. It provides a
five-activity process for deriving a prototype of the UI from scenarios and generating a formal specification of the
application. Scenarios are acquired in the form of UML collaboration diagrams and are enriched with UI information.
These diagrams are automatically transformed, based on our previous work [19, 35], into the UML Statechart
specifications of all objects involved. An algorithm is applied to generate a UI prototype from the set of obtained
specifications. The prototype is embedded in a UI builder environment for further refinement. Based on end user
feedback, the collaboration diagrams and the UI prototype may be iteratively refined, and the result of the overall process
is a specification consisting of the Statechart diagrams of all the objects involved, together with the generated and refined
prototype of the UI.
Section 2 of this paper gives a brief overview of the UML diagrams relevant for our work and introduces a running
example. Section 3 describes in detail the five activities of the process for deriving a UI prototype from scenario
specifications. Section 4 addresses related work. In Section 5, we discuss several aspects of our work. Finally, Section 6
provides some concluding remarks and points out future work.
2 Unified Modeling Language
The UML [34] is an expressive language that can be used for problem conceptualization, software system specification, as
well as implementation. It covers a wide range of issues from use cases and scenarios to state behavior and operation
declarations. The UML provides a syntactic notation to describe all major views of a system using different kinds of
diagrams. In this section, we first discuss the UML diagrams that are relevant for our approach: Class diagram (ClassD),
Use Case diagram (UsecaseD), Collaboration diagram (CollD) and Statechart diagram (StateD). We conclude the section
with an overview of the Object Constraint Language (OCL), which was adopted by the UML for capturing constraints. In
our approach, OCL is used for complementing ClassDs, and simple constraints for annotating CollDs. As a running example,
we have chosen to study a part of an extended version of the Automatic Teller Machine (ATM) described in [29].
2.1 Class diagram (ClassD)
The ClassD represents the static structure of the system. It identifies all the classes of a proposed system and specifies for
each class its attributes, operations, and relationships to other classes. Relationships include inheritance, association, and
aggregation. The ClassD is the central diagram of a UML model. Figure 1 depicts the ClassD for the ATM system.
2 A preliminary and less detailed version of this work can be found in [10b].
[Figure 1 shows the classes User, ATM, Transaction, and Account, connected by the associations interact, carry_out, and use, all with * multiplicities.]
Figure 1: ClassD of the ATM system.
2.2 Use case diagram (UsecaseD)
The UsecaseD is concerned with the interaction between the system and actors (objects outside the system that interact
directly with it). It presents a collection of use cases and their corresponding external actors. A use case is a generic
description of an entire transaction involving several objects of the system. Use cases are represented as ellipses, and
actors are depicted as icons connected with solid lines to the use cases they interact with. One use case can call upon the
services of another use case. Such a relation is called a uses relation and is represented by a directed dashed line. The
direction of a uses relation does not imply any order of execution. Figure 2 shows an example of a UsecaseD
corresponding to the ATM system. In this UsecaseD, we find one actor (User) interacting with four use cases (Identify,
Withdraw, Deposit, and Balance). There are also several uses relations; for instance, the use case Withdraw uses the
services of the Identify use case.
A UsecaseD is helpful in visualizing the context of a system and the boundaries of the system’s behavior. A given use
case is typically characterized by multiple scenarios.
[Figure 2 shows the actor User connected to the use cases Identify, Withdraw, Deposit, and Balance; <<uses>> relations link Withdraw, Deposit, and Balance to Identify.]
Figure 2: UsecaseD of the ATM system.
Referring to the two recent workshops on Object Modeling and User Interface Design at CHI’97 and CHI’98,
respectively, we find that UsecaseD modeling is functionally equivalent to task analysis. The hierarchy of tasks found in
task analysis can be modeled in a UsecaseD with the extends relation. Other types of relationships between tasks may be
captured in a UsecaseD via UML stereotypes. Moreover, scenarios that refine use cases in a UsecaseD allow for a precise
description of the details of user interaction.
2.3 Collaboration diagram (CollD)
A scenario shows a particular series of interactions among objects in a single execution of a use case of a system
(execution instance of a use case). Scenarios can be viewed in two different ways: through sequence diagrams
(SequenceDs) or through CollDs. Both types of diagrams rely on the same underlying semantics, and conversion from one to the
other is possible. For our work, we chose to use CollDs because the UML documentation defines them more precisely
than SequenceDs.
A SequenceD shows interactions among a set of objects in temporal order, which is good for understanding timing issues.
A CollD concentrates on the structure of the interaction between objects and their inter-relationships rather than on the
temporal dimensions of a scenario. A CollD is a graph where nodes are objects participating in the scenario and edges
represent structural relations between objects (association, aggregation, inheritance, etc.). Messages sent between objects
are labeled with a text string and a direction arrow. To a given edge, multiple messages in both directions can be attached.
Each message label includes a sequence number representing the nested procedural calling sequence throughout the
scenario, and the message signature. UI information may be specified as user-defined constraints on messages (see
Section 3.1).
Sequence numbers contain a list of sequence elements separated by dots. Each sequence element consists of the following parts:
• a compulsory number showing the sequential position of the message;
• an optional letter indicating a concurrent thread (see messages 8a and 8b in Figure 3(a)); and
• an optional iteration indicator * indicating that several messages of the same form are sent sequentially to a single target or
concurrently to a set of targets.
For a complete definition of CollDs refer to [34].
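To make the label structure concrete, here is a small Python sketch that splits a CollD message label into these parts. The function name and the exact label grammar (in particular, how guard and signature are separated) are our own assumptions for illustration, not part of the UML definition.

```python
import re

# Hypothetical label grammar: dotted sequence number, optional thread
# letter, optional iteration indicator, optional guard, then signature.
LABEL_RE = re.compile(
    r"^(?P<seq>\d+(?:\.\d+)*)"      # sequence number, e.g. "3" or "1.2"
    r"(?P<thread>[a-z]?)"           # concurrent-thread letter, e.g. the "a" in "8a"
    r"(?P<iter>\*?)"                # iteration indicator
    r"(?:\[(?P<guard>[^\]]*)\])?"   # guard, e.g. "[balance>=mnt]"
    r":\s*(?P<sig>.+)$"             # message signature
)

def parse_message_label(label: str) -> dict:
    """Split a CollD message label into its sequence-element parts."""
    m = LABEL_RE.match(label)
    if m is None:
        raise ValueError(f"not a well-formed message label: {label!r}")
    return m.groupdict()

parts = parse_message_label("8a[balance>=mnt]: create_transaction(pin, mnt, kind)")
# parts["seq"] == "8", parts["thread"] == "a", parts["guard"] == "balance>=mnt"
```
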
Figures 3(a) and 3(b) depict two scenarios (CollDs) of the use case Withdraw. Figure 3(a) represents the scenario where
the withdrawal is correctly registered (regularWithdraw), and Figure 3(b) represents the case where the account balance is
insufficient (balanceError).
[Figure 3(a) is a CollD over the objects :User, :ATM, :Transaction, and :Account, with the following messages:]
1: insert_card(pin) {userAction}
2: passwd:=enter_password() {inputData(Account.password)}
3: ok:=check_account(pin, passwd)
4: [ok=true]: kind:=enter_operation() {inputData(Transaction.kind)}
5: mnt:=enter_amount() {inputData(Transaction.amount)}
6: balance:=check_balance(mnt)
7 [balance>=mnt]: ok:=deliver_cash(mnt) {outputData("Take your Cash")}
8a [balance>=mnt]: create_transaction(pin, mnt, kind)
8b [balance>=mnt]: update(mnt, kind)
9: get_cash() {userAction}
10: get_card() {userAction}
Figure 3(a): Scenario regularWithdraw of the use case Withdraw.
[Figure 3(b) is a CollD over the objects :User, :ATM, and :Account, with the following messages:]
1: insert_card(pin) {userAction}
2: passwd:=enter_password() {inputData(Account.password)}
3: ok:=check_account(pin, passwd)
4: [ok=true]: kind:=enter_operation() {inputData(Transaction.kind)}
5: mnt:=enter_amount() {inputData(Transaction.amount)}
6: balance:=check_balance(mnt)
7 [balance<mnt]: ok:=display_error() {outputData("Insufficient funds")}
8: get_card() {userAction}
Figure 3(b): Scenario balanceError of the use case Withdraw.
2.4 Statechart diagram (StateD)
A StateD shows the sequence of states that an object goes through during its life cycle in response to stimuli. Generally, a
StateD may be attached to a class of objects with an interesting dynamic behavior.
The formalism (notation and semantics) used in StateDs is derived from Statecharts as defined by Harel [12]. Statecharts
are an extension of state-event diagrams to include hierarchy and concurrency. Any state in a Statechart can be
recursively decomposed into exclusive states (or-state) or concurrent states (and-state). When a transition in a Statechart
is triggered (event received and guard condition tested), the object leaves its current state, initiates the action(s) for that
transition and enters a new state. Any internal or external event is broadcast to all states of all objects in the system.
Transitions between concurrent states are not allowed, but synchronization and information exchange are possible through
events. For sample StateDs, refer to figures 7(a) through 12.
2.5 Object Constraint Language (OCL)
OCL offers UML modellers a means to describe a system more accurately than with diagrams alone. OCL is a language
in which one can write constraints that contain extra information or restrictions to UML diagrams. Constraints are
semantic conditions on UML model elements. They are displayed in braces ({constraint}), either directly in diagrams or
separately in a textual form. In the UML, fourteen standard constraints are defined [11]: association, global, local,
parameter, self, complete, disjoint, incomplete, overlapping, implicit, or, ordered, vote, and broadcast. It is also possible
to introduce user-defined constraints, by describing them as OCL expressions.
The OCL was originally developed by IBM and subsequently adopted by the Object Management Group (OMG) as a part
of the UML specification. It is intended to be simple to read and write and easy to use for non-programmers. The
principles of OCL are based on set theory and first-order logic, and many of its concepts are borrowed from the formal
specification language Z [38]. OCL has a number of fundamental datatypes (such as boolean, string, and numeric) and
collection types which are useful when working with lists of objects.
OCL can be used to specify class invariants, to describe event guard conditions, and to define pre- and postconditions of class methods. Furthermore,
OCL makes navigation through the class model easy and controllable. A detailed description of the OCL syntax can be
found in [34]. In our approach, OCL is used for enriching ClassDs (see Figure 6), and simple constraints are used for
annotating CollDs (see Figures 3(a) and 3(b)).
3 Description of the Approach
In this section, we describe the process for deriving a UI prototype from scenarios using the UML artifacts. We aim to
provide a process that bridges two iterative software processes: the formal specification process as illustrated at the top of
Figure 4, and the UI prototyping process at the bottom of that figure.
[Figure 4 shows the activities of the two subprocesses: Data specification, Scenario acquisition, User interface specification, and Specification verification (formal specification, top), and Prototype generation and Prototype evaluation (UI prototyping, bottom).]
Figure 4: View of the overall process combining formal specification and UI prototyping.
Data specifications (see Figure 4) are captured in a detailed ClassD which shows structural relationships between classes,
and specifies class attributes and methods together with pre- and postconditions. This information is used for scenario
acquisition via CollDs, and for prototype generation to enhance the visual aspect of the generated prototypes. User
interface specifications are derived from scenario descriptions, and are used for both generation of UI prototypes and for
specification verification (verifying coherence and completeness of the UI specification). The generated prototypes are
evaluated with end users to validate the users’ needs.
In this work, we focus on the UI prototyping process, essentially on the transformations represented by the bold arrows in
Figure 4. This process can be decomposed into five activities (see Figure 5) which are detailed below:
• Requirements acquisition (Section 3.1)
• Generation of partial specifications from scenarios (Section 3.2)
• Analysis of partial specifications (Section 3.3)
• Integration of partial specifications (Section 3.4)
• User interface prototype generation (Section 3.5).
[Figure 5 shows the flow of artifacts through the five activities: from the ClassD and UseCaseD, Requirements acquisition produces CollDs; Generation of partial specifications from scenarios produces StateDs; Analysis of partial specifications produces Labelled StateDs; Integration of partial specifications produces Integrated StateDs; and User interface prototype generation produces UI Prototypes.]
Figure 5: The five activities of the UI prototyping process.
3.1 Requirements acquisition
Scenario modeling is the key technique used in this activity. It is used in object-oriented methodologies [4, 16,
33] as an approach to requirements engineering. The UML proposes a suitable framework for scenario acquisition, using
UsecaseDs for capturing system functionalities and SequenceDs or CollDs for describing scenarios.
In this activity, the analyst first elaborates the ClassD of the system (Figure 1 shows an example), and for each class of
the ClassD, a detailed analysis can be done by identifying attributes and methods and defining pre- and postconditions.
An example of a detailed analysis of the class ATM is given in Figure 6. Secondly, the analyst elaborates the UsecaseD
for the system (see Figure 2). Finally, the analyst acquires scenarios as CollDs for each use case in the UsecaseD. Figures
3(a) and 3(b) show the two sample CollDs corresponding to the use case Withdraw of the ATM system.
Scenarios of a given use case are classified by type and ordered by frequency of use. We have considered two types of
scenarios: normal scenarios, which are executed in normal situations, and scenarios of exception, which are executed in case of
errors and abnormal situations. The frequency of use (or the frequency of execution) of a scenario is a number between 1
and 10 assigned by the analyst to indicate how often a given scenario is likely to occur. In our example, the use case
Withdraw has one normal scenario (scenario regularWithdraw with frequency 10) and a scenario of exception (scenario
balanceError with frequency 4). This classification is used for the composition of UI blocks (see Section 3.5.4).
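This classification can be pictured with a small Python sketch. The Scenario record and the ordering function are hypothetical names of ours; the normal/exception types and the 1 to 10 frequency scale come from the text.

```python
from typing import NamedTuple

class Scenario(NamedTuple):
    name: str
    kind: str        # "normal" or "exception"
    frequency: int   # 1..10, assigned by the analyst

def order_scenarios(scenarios):
    """Normal scenarios first, then by decreasing frequency of use."""
    return sorted(scenarios, key=lambda s: (s.kind != "normal", -s.frequency))

withdraw = [Scenario("balanceError", "exception", 4),
            Scenario("regularWithdraw", "normal", 10)]
# order_scenarios(withdraw) puts regularWithdraw first
```
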
ATM
cash_available: boolean = true
screen: String = "main"
cash_slot: String = "closed"
card_slot: String = "empty"

insert_card(String pin)
pre: cash_available=true and screen="main" and cash_slot="closed" and card_slot="empty"
post: cash_available=true and screen="enter password" and cash_slot="closed" and card_slot="full"

enter_password()
pre: cash_available=true and screen="enter password" and cash_slot="closed" and card_slot="full"
post: cash_available=true and (screen="enter kind" or screen="password incorrect") and cash_slot="closed" and card_slot="full"

enter_kind()
pre: cash_available=true and screen="enter kind" and cash_slot="closed" and card_slot="full"
post: cash_available=true and (screen="deposit" or screen="withdraw") and cash_slot="closed" and card_slot="full"

enter_amount()
pre: cash_available=true and (screen="deposit" or screen="withdraw") and cash_slot="closed" and card_slot="full"
post: cash_available=true and (screen="deposit in progress" or screen="withdraw in progress" or screen="insufficient funds") and cash_slot="closed" and card_slot="full"

verify_cash(mnt: float)
pre: cash_available=true and screen="withdraw in progress" and cash_slot="closed" and card_slot="full"
post: cash_available=true and (screen="take cash" or screen="insufficient funds") and (cash_slot="opened" or cash_slot="closed") and card_slot="full"

get_cash()
pre: cash_available=true and screen="take cash" and cash_slot="opened" and card_slot="full"
post: cash_available=true and screen="take card" and cash_slot="closed" and card_slot="ejected"

get_card()
pre: cash_available=true and screen="take card" and cash_slot="closed" and card_slot="ejected"
post: cash_available=true and screen="take card" and cash_slot="closed" and card_slot="empty"
Figure 6: The ATM class.
In the ATM system, the object ATM is a special object called interface object. An interface object is defined as an object
through which the user interacts with the system to enter input data and receive results. An interactive message is defined
as a message in a CollD that is sent to an interface object. For UI generation purposes, we propose three user-defined
constraints associated with interactive messages. Note that the UML defines two standard constraints for messages: vote
and broadcast. The vote constraint restricts a collection of return messages, and the broadcast constraint specifies that the
constrained messages are not invoked in any particular order.
The three additional constraints for interactive messages are: inputData, outputData, and userAction. The inputData
constraint indicates that the corresponding message holds information input from the user. The outputData constraint
specifies that the corresponding message carries information for display. The userAction constraint indicates that a
control action such as a button action or a menu selection is associated with the corresponding message. When an
inputData or outputData constraint is attached to a message, a dependency relationship between this message and an
attribute of the class diagram is generally needed. This attribute represents the model part of the Model View Controller
(MVC) pattern [11b]. Dependencies in the UML are semantic connections between model elements. We define UI
dependency as a dependency between an interactive message with inputData or outputData constraints and the class
attribute affected by the interaction. A UI dependency can be visualized in a CollD either as a dashed line between the
dependent elements, or simply as a constraint parameter. In our work, we use the latter form. In Figure 3(a), for instance,
the message 2:passwd:=enter_password() has the inputData constraint {inputData(Account.password)}, indicating a UI
dependency with the attribute password of the class Account.
Given an interactive message that has been annotated with constraints and dependencies, our approach determines a UI
widget through which the interaction will take place in the UI prototype. This widget information replaces the original constraints
and dependencies in the CollD (see Figure 7). UI widget information is automatically determined based on the kind of
constraint and the type of the dependent attributes at hand. In its current form, our approach applies the
following rules for generation:
• a button widget is generated for a userAction constraint;
• an enabled textfield widget is generated in case of an inputData constraint with a dependent attribute of type String,
Real, or Integer;
• a group of radio button widgets is generated in case of an inputData constraint with a dependent attribute of type
Enumeration having a size less than or equal to 6;
• an enabled list widget is generated in case of an inputData constraint with a dependent attribute of type
Enumeration having a size greater than 6, or with a dependent attribute of a collection type;
• an enabled table widget is generated in case of an inputData constraint with multiple dependent attributes;
• a disabled textfield widget is generated for an outputData constraint with one dependent attribute;
• a label widget is generated for an outputData constraint with no dependent attribute;
• a disabled list widget is generated in case of an outputData constraint with a dependent attribute of type
Enumeration having a size greater than 6, or with a dependent attribute of a collection type;
• a disabled table widget is generated in case of an outputData constraint with multiple dependent attributes.
The above list is based on heuristics and results found in the literature [3, 17, 23].
Applying these rules to the CollD of Figure 3(a), we obtain the CollD shown in Figure 7.
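These rules can be read as a simple decision function. The following Python sketch mirrors the list above; the function name, the widget strings, and the way dependent attributes are passed are our own assumptions for illustration.

```python
def choose_widget(constraint, attr_types=(), enum_size=0):
    """Pick a widget kind for an interactive message.

    constraint : 'userAction', 'inputData' or 'outputData'
    attr_types : types of the dependent attributes, e.g. ('String',)
    enum_size  : cardinality when a dependent attribute is an Enumeration
    """
    if constraint == "userAction":
        return "button"
    if constraint == "inputData":
        if len(attr_types) > 1:
            return "table (enabled)"                 # multiple dependent attributes
        if attr_types and attr_types[0] in ("String", "Real", "Integer"):
            return "textfield (enabled)"
        if attr_types and attr_types[0] == "Enumeration":
            return "radio buttons" if enum_size <= 6 else "list (enabled)"
        if attr_types and attr_types[0] == "Collection":
            return "list (enabled)"
    if constraint == "outputData":
        if len(attr_types) > 1:
            return "table (disabled)"
        if not attr_types:
            return "label"                           # no dependent attribute
        if attr_types[0] == "Enumeration" and enum_size > 6:
            return "list (disabled)"
        if attr_types[0] == "Collection":
            return "list (disabled)"
        return "textfield (disabled)"
    return None

# e.g. message 2: passwd := enter_password() {inputData(Account.password)}
assert choose_widget("inputData", ("String",)) == "textfield (enabled)"
```
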
[Figure 7 shows the CollD of Figure 3(a) with the UI constraints replaced by widget annotations:]
1: insert_card(pin) {BUT}
2: passwd:=enter_password() {INP}
3: ok:=check_account(pin, passwd)
4: [ok=true]: kind:=enter_operation() {INP}
5: mnt:=enter_amount() {INP}
6: balance:=check_balance(mnt)
7 [balance>=mnt]: ok:=deliver_cash(mnt) {LAB}
8a [balance>=mnt]: create_transaction(pin, mnt, kind)
8b [balance>=mnt]: update(mnt, kind)
9: get_cash() {BUT}
10: get_card() {BUT}
Legend: BUT = Button, INP = InputField, LAB = Label, TEX = TextField.
Figure 7: Scenario regularWithdraw of the use case Withdraw with UI widgets.
3.2 Generation of partial specifications from scenarios
In this activity, we repeatedly apply the CTS (CollD To StateD) algorithm [35] to each CollD of the system, in order to
automatically generate partial specifications for all the objects participating in the input scenarios.
Transforming one CollD into StateDs is, according to the CTS algorithm, a five-step process. Step 1 creates a StateD
for every distinct class implied by the objects in the CollD. Step 2 introduces as state variables all variables that are not
attributes of the objects of the CollD. Step 3 creates transitions for the objects from which messages are sent. Step 4
creates transitions for the objects to which messages are sent. Finally, step 5 brings, for each StateD, the set of generated
transitions into the correct sequence, connecting them by states, split bars, and merge bars. The sequencing follows the types
of messages in a CollD: iteration messages, conditional messages, concurrent messages, and messages with multiple
predecessors. After applying the CTS algorithm to the scenarios regularWithdraw and balanceError, we obtain for the
object ATM the partial StateDs shown in Figure 8(a) and Figure 8(b), respectively.
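As a rough illustration of steps 3 to 5 only, the following Python sketch gives each object one transition per message it sends or receives and chains the transitions in message order. It ignores iteration, conditional, and concurrent messages, and it is our own simplification for illustration, not the published CTS algorithm.

```python
from collections import defaultdict

def ctslike(messages):
    """messages: ordered list of (sender, receiver, signature) tuples.
    Returns, per object, a StateD as a list of (state, event, next_state)
    transitions chained in message order."""
    stateds = defaultdict(list)
    counters = defaultdict(int)
    for sender, receiver, sig in messages:
        for obj in (sender, receiver):
            i = counters[obj]
            stateds[obj].append((f"{obj}_s{i}", sig, f"{obj}_s{i+1}"))
            counters[obj] += 1
    return dict(stateds)

scenario = [
    ("User", "ATM", "insert_card(pin)"),
    ("ATM", "Account", "check_account(pin, passwd)"),
]
stateds = ctslike(scenario)
# stateds["ATM"] has two transitions: one as receiver, one as sender
```
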
[Figure 8(a) shows the unlabeled StateD for the object ATM (state variables: passwd: String, ok: boolean, kind: char, mnt, balance: float, num: integer) with the transition sequence: insert_card {BUT}; enter_password {INP} ^passwd → ok:=check_account(pin, passwd); enter_operation [ok=true] {INP} ^kind; enter_amount ^mnt {INP} → balance:=check_balance(mnt); then three concurrent transitions guarded by [balance>=mnt]: /deliver_cash {TEX}, → num:=Transaction.create(pin, mnt, kind), and → Account.update(mnt, kind); followed by get_cash {BUT} and get_card {BUT}.]
Figure 8(a): StateD for the object ATM generated by applying the CTS algorithm on the scenario regularWithdraw.
[Figure 8(b) shows the unlabeled StateD for the object ATM (state variables: passwd: String, ok: boolean, kind: char, mnt, balance: float, num: integer) with the transition sequence: insert_card {BUT}; enter_password {INP} ^passwd → ok:=check_account(pin, passwd); enter_operation [ok=true] {INP} ^kind; enter_amount ^mnt {INP} → balance:=check_balance(mnt); [balance<mnt] /display_error {TEX}; get_card {BUT}.]
Figure 8(b): StateD for the object ATM generated by applying the CTS algorithm on the scenario balanceError.
3.3 Analysis of partial specifications
The partial StateDs generated in the previous activity are unlabeled, i.e., their states do not carry names. However, the
scenario integration algorithm (see Section 3.4) is state based, requiring labeled StateDs as input. To obtain labeled
StateDs, our approach uses the pre- and postconditions of the underlying ClassD (cf. Figure 6) to add state names and
structuring information (grouping states). Given an unlabeled StateD, its state names are identified via the preconditions
of outgoing events and the postconditions of incoming events. Note that the events in a StateD correspond to the methods of its
underlying class. Furthermore, structural information like grouping states may also be added.
Recall that pre- and postconditions of class methods are described using OCL. Let m be a class method, and pre(m) and
post(m) be the pre- and postconditions of that method, respectively. The conditions are expressed in disjunctive canonical
form, referring to class attributes. Syntactically, they adhere to the following subset of OCL:
pre(m) := ORexpression
post(m) := ORexpression [(or IFexpression)*]
IFexpression := if ORexpression then ORexpression endif
ORexpression := ANDexpression (or ANDexpression)*
ANDexpression := Basicexpression (and Basicexpression)*
Basicexpression := Identifier OP Value
Identifier := Attribute | Parameter
OP := < | <= | = | != | >= | >
In the operation of state labeling, two main cases are considered (the details of the state labeling algorithm can be found
in [21]). The first case occurs when both the pre- and postconditions of the method m associated with the event e being
considered are ORexpressions. In this case, the state from which the transition with event e emanates will be labeled
using pre(m) (see Figure 9(a)). The second case occurs when the postcondition of the event e is an IFexpression. Then,
the postcondition can be written in the following form: post(m) = (if C1 then D1 endif) or (if C2 then D2 endif) or … or
(if Cn then Dn endif), where Ci and Di are all ORexpressions. The algorithm takes the first pair of ORexpressions (C1, D1)
and assigns C1 to the state from which the transition with event e emanates, and D1 to the target state of that transition
(see Figure 9(b)). If coherence errors occur when handling events that are consecutive to event e, the algorithm
backtracks and chooses the next pair of ORexpressions (C2, D2), and so on. There is at least one valid labeling for a given
StateD, provided that the conditions in the underlying class are semantically correct. In case several valid labelings are
possible, the algorithm leaves it up to the analyst to select the appropriate one.
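The condition subset above is simple enough to tokenize with a few string splits. The following Python sketch (our own, ignoring parenthesized subexpressions) breaks an ORexpression into its ANDexpressions, and each ANDexpression into (identifier, operator, value) triples.

```python
import re

# Basicexpression: Identifier OP Value; the two-character operators must
# be tried before the one-character ones.
BASIC = re.compile(r"(\w+(?:\.\w+)?)\s*(<=|>=|!=|<|>|=)\s*(.+)")

def parse_and(expr):
    """ANDexpression -> list of (identifier, op, value) triples."""
    triples = []
    for basic in expr.split(" and "):
        m = BASIC.match(basic.strip())
        if m:
            triples.append(m.groups())
    return triples

def parse_or(expr):
    """ORexpression -> list of ANDexpressions (each a list of triples)."""
    return [parse_and(a) for a in expr.split(" or ")]

pre = 'cash_available=true and screen="main"'
# parse_or(pre) == [[("cash_available", "=", "true"), ("screen", "=", '"main"')]]
```
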
[Figure 9(a): the source state of the transition with event e is labeled pre(m). Figure 9(b): the source state is labeled Ci and the target state Di.]
Figure 9: Labeling states depending on post(m) of the event e. (a) post(m) is an ORexpression; (b) post(m) is an IFexpression.
Applying this algorithm to the StateD of Figure 8(a), we obtain the StateD shown in Figure 10(a), annotated with the
labels explained in the legend of the figure. Applying the algorithm to the StateD of Figure 8(b), we obtain the StateD
shown in Figure 10(b). Note that no new state label is generated for this particular StateD.
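The backtracking step can be sketched as follows. This toy Python version is entirely our own: labels are opaque strings, each event carries a precondition and a list of (Ci, Di) branches (an ORexpression postcondition being the single-pair case), and the chain of events is assumed linear.

```python
def label_chain(events, initial):
    """events: list of (pre, branches), where branches is a list of
    (Ci, Di) label pairs. Walks the chain, choosing for each event the
    first branch whose Ci matches the current state label, and backtracks
    when a choice makes a later event incoherent. Returns the list of
    state labels along the chain, or None if no coherent labeling exists."""
    def rec(i, current):
        if i == len(events):
            return [current]
        pre, branches = events[i]
        if current != pre:              # coherence check against pre(m)
            return None
        for ci, di in branches:
            if ci == current:           # branch applicable in this state
                rest = rec(i + 1, di)
                if rest is not None:
                    return [current] + rest
        return None                     # triggers backtracking one level up
    return rec(0, initial)

events = [("s0", [("s0", "s1")]),
          ("s1", [("sX", "sY"), ("s1", "s2")])]  # first branch is rejected
# label_chain(events, "s0") == ["s0", "s1", "s2"]
```
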
Legend:S0 = cash_available=true and screen=”main” and cash_slot=”closed” and card_slot=”empty”S1 = cash_available=true and screen=”main” and cash_slot=”closed” and card_slot=”fill”S2 = cash_available=true and (screen=”enter_kind” or screen=”Password incrorrect”) and cash_slot=”closed” and card_slot=”fill”S3 = cash_available=true and (screen=”deposit” or screen=”withdraw”) and cash_slot=”closed” and card_slot=”fill”S4 = cash_available=true and (screen=”deposit in progress” or screen=”withdraw in progress” or screen=”insuffisient balance”) and cash_slot=”closed” and card_slot=”fill”S5 = cash_available=true and screen=”take cash” and cash_slot=”opened” and card_slot=”fill” and num=0S6 = cash_available=true and screen=”take cash” and cash_slot=”opened” and card_slot=”fill” and num!=0S7 = cash_available=true and screen=”take cash” and cash_slot=”opened” and card_slot=”fill” and Account.balance=balanceS8 = cash_available=true and screen=”take cash” and cash_slot=”opened” and card_slot=”fill” and Account.balance=balance-mntS9 = (S6 or S5) and (S7 or S8)S10 = cash_available=true and screen=”take card” and cash_slot=”closed” and card_slot=”fill”
[StateD diagram of the ATM object (attributes passwd: String, ok: boolean, kind: char, mnt, balance: float, num: integer), with states S0-S10 and the transitions insert_card {BUT}, enter_password {INP}, enter_operation {INP}, enter_amount {INP}, deliver_cash {TEX}, Transaction.create, Account.update, get_cash {BUT}, and get_card {BUT}.]
Figure 10(a): The labeled StateD obtained from the StateD of Figure 8(a).
[StateD diagram of the ATM object (attributes passwd: String, ok: boolean, kind: char, mnt: float), with states S0-S4 and S10 and the transitions insert_card {BUT}, enter_password {INP}, enter_operation {INP}, enter_amount {INP}, display_error {TEX} on [balance<mnt], and get_card {BUT}.]
Figure 10(b): The labeled StateD obtained from the StateD of Figure 8(b).
3.4 Integration of partial specifications

The objective of this activity is to integrate, for each object of the system, all its partial labeled StateDs into one single StateD [19, 20]. We first merge the StateDs within each use case, and then merge the StateDs of the different use cases. This two-step integration is required for the activity of UI prototype generation (see Section 3.5).
Figure 11, for instance, shows the resultant StateD of the ATM object after integrating the two scenarios of the use case
Withdraw.
[StateD diagram of the ATM object with states S0-S10 and transitions T1 = insert_card {BUT}, T2 = enter_password {INP}, T3 = enter_operation {INP}, T4 = enter_amount {INP}, T5 = deliver_cash {TEX}, T6 = Transaction.create, T7 = Account.update, T8 = get_cash {BUT}, T9 = display_error {TEX}, and T10 = get_card {BUT}.]
Figure 11: The resultant StateD for the ATM object after integration of the two scenarios of the use case Withdraw.
The integration algorithm is incremental and consists of three substeps: state checking, state merging, and transition merging.
3.4.1 State checking

Before merging the states of two StateDs, the algorithm checks whether the same state appears at different levels of hierarchy in the two StateDs. Suppose that the algorithm has to merge, for the object Obj, the StateD Std1 of Figure 12 (which corresponds to the result of integrating two scenarios sc1 and sc2) and the StateD Std2 (corresponding to the scenario sc3). The following errors will be detected:
• The state b exists more than once in Std2.
• The state e is not at the same level in Std1 and Std2.
If errors are detected, the integration algorithm fails, and the analyst has to correct these incoherences in the StateDs to be merged. We suppose that the analyst has fixed the detected errors by replacing e with l in the StateD Std1 and b with k in the StateD Std2, as shown in Figure 12.
[StateDs Std1 (transitions T1-T4, with the annotation "l in place of e") and Std2 (transitions T1-T7, with the annotation "k in place of b"), together with the composition variables scenarioList := {Sc1, Sc2}, dynamicScenarioList := scenarioList, and transScenarioList := [Sc1, Sc2, Sc2, Sc1].]
Figure 12: Examples of detected errors in the state checking operation.
3.4.2 State merging

When no errors are detected in the state checking operation, the algorithm proceeds to merge the states of the two StateDs (Std1 and Std2) level by level, from top to bottom. For each level, the algorithm looks for the initial states in the two StateDs. For instance, the state a is the initial state in the top level of Std1, and the state e is the initial state in the top level of Std2. If the initial states at the same level of the two StateDs are similar, an or-merging takes place, which is equivalent to the union of the states at the same level of the two StateDs. If the initial states are different, we merge the two initial states into an and-state (which is the case in Std1 and Std2, see Figure 13). For the rest of the states, we do an or-merging.
3.4.3 Transition merging

In a StateD, transitions are written in the form event[condition]/actions. The algorithm looks, in the StateDs to be merged, for transitions having the same triplet of fromstate, tostate, and event field. Suppose that the algorithm finds trans1 (E[C1]/A1) in the first StateD and trans2 (E[C2]/A2) in the second StateD, having the same triplet. Three cases have to be considered. The first case occurs when trans1 and trans2 have the same condition fields (C1=C2) and different action parts (A1!=A2); the algorithm outputs a message that the resultant StateD will have a non-deterministic behavior. The second case occurs when trans1 and trans2 have different condition fields (C1!=C2) and the same action parts (A1=A2); the merged transition will have the form E[C1 or C2]/A (A=A1=A2). The last case occurs when trans1 and trans2 have different condition fields (C1!=C2) and different action parts (A1!=A2); the two transitions are not merged in this case. The algorithm also checks the case where the same transition leads to different states in the two StateDs; an and-state is then created as the tostate of this transition. For example, the transitions T5 in Std1 and T2 in Std2 have the same fromstate but different tostates, so an and-state is created as the tostate of the merged transition (see Figure 13).
[Integrated StateD with composition variables scenarioList := {Sc1, Sc2, Sc3}, dynamicScenarioList := scenarioList, and transScenarioList := [{Sc1,Sc3}, {Sc2,Sc3}, {Sc2}, {Sc2}, {Sc3}, {Sc3}, {Sc3}, {Sc3}, {Sc3}]; each transition carries the special condition sc and the special action sa.]
Figure 13: Resultant StateD corresponding to the integration of the StateDs Std1 and Std2 of Figure 12.
3.4.4 Interleaving problem

In general, after integrating several scenarios, the resulting specification will capture more than the initial scenarios. Figure 14 provides an example illustrating this problem (scenarios are represented as StateDs). Suppose we merge the two scenarios Sc1 and Sc2. The resultant specification Sc will not only capture Sc1 and Sc2, but also two new scenarios, corresponding to the transition sequences (T1, T2, T7, T8) and (T5, T6, T3, T4), respectively.
[Diagrams: Sc1 = a -T1-> b -T2-> c -T3-> d -T4-> e; Sc2 = a -T5-> f -T6-> c -T7-> g -T8-> e; and the merged specification Sc combining both paths through the shared state c.]
Figure 14: Interleaving problem between Sc1 and Sc2.
To fix this problem, we have defined three composition variables: scenarioList, dynamicScenarioList, and transScenarioList. scenarioList is a set of scenario names (see Figure 13); it keeps the scenario names that the StateD captures. dynamicScenarioList is also a set of scenario names; it is initialized to scenarioList and can change during the execution of the StateD. At each step of execution, it saves the scenario names that remain possible in the next execution step. transScenarioList is an array of sets of scenario names; it keeps the scenario names concerned by each transition of the StateD.
For each transition in a StateD, we introduce a special condition sc, which is equal to [(transScenarioList[tr] ∩ dynamicScenarioList) ≠ ∅] (tr is the index of the transition), and a special action sa, which is equal to dynamicScenarioList := dynamicScenarioList ∩ transScenarioList[tr], except for transitions that end a scenario, where we introduce instead a re-initialization action ra, which is equal to dynamicScenarioList := scenarioList.
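The composition variables and the sc/sa/ra guards can be sketched as a small class; the class itself and its example transition sets are our own illustrative assumptions:

```python
# Hypothetical sketch of the composition variables of Section 3.4.4.

class ScenarioGuard:
    def __init__(self, scenario_list, trans_scenario_list):
        self.scenarioList = set(scenario_list)
        self.dynamicScenarioList = set(scenario_list)
        self.transScenarioList = trans_scenario_list  # per-transition scenario sets

    def sc(self, tr):
        """Special condition: tr is enabled only if it belongs to a scenario
        that is still possible."""
        return bool(self.transScenarioList[tr] & self.dynamicScenarioList)

    def sa(self, tr):
        """Special action: narrow the set of possible scenarios after firing tr."""
        self.dynamicScenarioList &= self.transScenarioList[tr]

    def ra(self):
        """Re-initialization action fired when a scenario ends."""
        self.dynamicScenarioList = set(self.scenarioList)
```

With, say, transScenarioList = {"T1": {"Sc1"}, "T9": {"Sc2"}}, firing T1 disables T9 (the interleaving of Section 3.4.4) until ra() resets the dynamic list at the end of the scenario.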
3.5 User interface prototype generation

In this activity, we derive UI prototypes for all the interface objects found in the system. Both the static and the dynamic aspects of the UI prototypes are generated from the StateDs of the underlying interface objects. For each interface object, we generate from its StateDs, as found in the various use cases, a standalone prototype. This prototype comprises a menu to switch between the different use cases. The different screens of the prototype visualize the static aspect of the object; the dynamic aspect of the object maps into the dialog control of the prototype. In our current implementation, prototypes are Java applications, each comprising a number of frames and navigation functionality (see Figures 15 and 19).
The process of prototype generation from interface object behavior specifications can be summarized in the following
algorithm (in the pseudocode, we use the “dot”-notation known from object-oriented languages).
Let IO be the set of interface objects in the system
Let UC = {uc1, uc2, ..., ucn} be the set of use cases of the system
For each io in IO
    For each uci in UC
        If io.usedInUsecase(uci) then
            sd = io.getStateDforUsecase(uci)
            sd.generatePrototype()
        End If
    End For
    io.generateCompletePrototype()
End For
The operation usedInUsecase(uci), applied to the object io, checks whether the object io participates in one or more of the CollDs associated with use case uci. If the operation returns true, the operation getStateDforUsecase(uci) is called,
which retrieves sd, the StateD capturing the behavior of object io that is related to this use case. From StateD sd, a UI
prototype is generated using the operation generatePrototype().
The operation generateCompletePrototype() integrates the prototypes generated for the various use cases into one single
application. This application comprises a menu (see Figure 15) providing as options the different use cases in which the
interface object participates.
Figure 15: Menu generated for the interface object ATM.
The operation of prototype generation (generatePrototype()) is composed of the following five operations:
• Generating graph of transitions (Section 3.5.1)
• Masking non-interactive transitions (Section 3.5.2)
• Identifying user interface blocks (Section 3.5.3)
• Composing user interface blocks (Section 3.5.4)
• Generating frames from composed user interface blocks (Section 3.5.5).
3.5.1 Generating graph of transitions

This operation consists of deriving a directed graph of transitions (GT) from the StateD of an interface object o related to a use case uci. The transitions of the StateD represent the nodes of the GT, and the edges indicate the precedence of execution between transitions: if transition t1 precedes transition t2 in execution, there is an edge between the nodes representing t1 and t2.
A GT has a list of nodes nodeList, a list of edges edgeList, and a list of initial nodes initialNodeList (entry nodes for the
graph).
The list of nodes nodeList of a GT is easily obtained since it corresponds to the transition list of the StateD at hand. The list of edges edgeList of a GT is obtained by identifying, for each transition t, all the transitions that enter the state from which t can be triggered. All these transitions precede the transition t, and hence each defines an edge to node t.
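The derivation just described can be sketched as follows; the flat (name, fromstate, tostate) representation of transitions is our own assumption:

```python
# Hypothetical sketch of GT construction (Section 3.5.1).

def build_gt(transitions):
    """Derive (nodeList, edgeList, initialNodeList) from a flat transition list
    of (name, fromstate, tostate) triples."""
    node_list = [name for name, _, _ in transitions]
    # t1 precedes t2 if t1 enters the state from which t2 can be triggered
    edge_list = [(t1, t2)
                 for t1, _, to1 in transitions
                 for t2, from2, _ in transitions
                 if to1 == from2]
    # initial nodes fire from states that no transition enters
    entered = {to for _, _, to in transitions}
    initial = [name for name, frm, _ in transitions if frm not in entered]
    return node_list, edge_list, initial
```

On a small chain S0 -T1-> S1 with a branch T2/T3 out of S1, this yields the edges (T1, T2) and (T1, T3) and marks T1 as the initial node, matching the star notation of Figure 16(a).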
In the ATM system, given the StateD of the ATM object for the use case Withdraw (see Figure 11), the graph of
transitions generated is shown in Figure 16(a). The star character (*) is used to mark initial nodes in the graph.
Figure 16: (a) Transition graph for the object ATM and the use case Withdraw (GT). (b) Transition graph after masking non-interactive transitions (GT').
[Graphs: (a) GT with nodes *T1, T2, T3, T4, T5, T6, T7, T10, T12, T13; (b) GT' with nodes *T1, T2, T3, T4, T5, T10, T12, T13.]
3.5.2 Masking non-interactive transitions

This operation consists of removing all transitions that do not directly affect the UI (i.e., that do not carry widgets). These transitions are called non-interactive transitions. All such transitions are removed from the list of nodes nodeList and from the list of initial nodes initialNodeList, and all edges defined by those transitions are removed from edgeList.
When a transition t is removed from nodeList, we remove all edges in which t takes part, and we add new edges in order to "bridge" the removed transition nodes. If the list of initial transitions initialNodeList contains any non-interactive transitions, they are replaced by their successor nodes. The result of this operation on the graph of Figure 16(a) is the updated graph GT' shown in Figure 16(b).
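The masking operation with edge bridging can be sketched as follows; the node/edge-list representation of the graph is our own assumption:

```python
# Hypothetical sketch of masking non-interactive transitions (Section 3.5.2).

def mask_non_interactive(node_list, edge_list, initial, interactive):
    """Remove transitions that carry no widgets and bridge over them."""
    for t in [n for n in node_list if n not in interactive]:
        preds = [a for a, b in edge_list if b == t]
        succs = [b for a, b in edge_list if a == t]
        # drop every edge in which t takes part, then bridge its neighbors
        edge_list = [(a, b) for a, b in edge_list if t not in (a, b)]
        edge_list += [(p, s) for p in preds for s in succs
                      if (p, s) not in edge_list]
        node_list = [n for n in node_list if n != t]
        if t in initial:  # replace an initial non-interactive node by its successors
            initial = [n for n in initial if n != t] + succs
    return node_list, edge_list, initial
```

For example, removing a non-interactive T6 from the chain T1 -> T6 -> T10 leaves the bridged edge (T1, T10), as in the transition from GT to GT' in Figure 16.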
3.5.3 Identifying user interface blocks

This operation consists of constructing a directed graph whose nodes represent UI blocks (UIBs). A UIB is a subgraph of GT' consisting of a sequence of transition nodes that is characterized by a single input and a single output edge. The beginning and the end of each UIB are identified from the graph GT' based on the following rules:
(Rule 1) An initial node of GT’ is the beginning of a UIB.
(Rule 2) A node that has more than one input edge is the beginning of a UIB.
(Rule 3) A successor of a node that has more than one output edge is the beginning of a UIB.
(Rule 4) A predecessor of a node that has more than one input edge ends a UIB.
(Rule 5) A node that has more than one output edge ends a UIB.
(Rule 6) A node that has an output edge to an initial node ends a UIB.
Applying these rules to the graph of Figure 16(b), we obtain the graph GB shown in Figure 17(a).
In this example, Rule 1 determines the beginning of B1 (T2) and Rule 5 the end of B1 (T3). Rules 3 and 5 determine the UIBs B2 and B4. The UIB B3 is generated by applying Rules 2 and 6.
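A direct reading of Rules 1-6 can be sketched as follows, assuming (as we do throughout these sketches, not as the paper prescribes) that GT' is given as node and edge lists:

```python
# Hypothetical sketch of UIB boundary detection (Section 3.5.3).

def uib_boundaries(nodes, edges, initial):
    """Apply Rules 1-6 to find where UIBs begin and end in GT'."""
    indeg = {n: sum(1 for _, b in edges if b == n) for n in nodes}
    outdeg = {n: sum(1 for a, _ in edges if a == n) for n in nodes}
    begins = set(initial)                                   # Rule 1
    begins |= {n for n in nodes if indeg[n] > 1}            # Rule 2
    begins |= {b for a, b in edges if outdeg[a] > 1}        # Rule 3
    ends = {a for a, b in edges if indeg[b] > 1}            # Rule 4
    ends |= {n for n in nodes if outdeg[n] > 1}             # Rule 5
    ends |= {a for a, b in edges if b in set(initial)}      # Rule 6
    return begins, ends
```

Cutting the node sequence at these boundaries then yields the single-entry, single-exit subgraphs that form the UIBs.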
3.5.4 Composing user interface blocks

Generally, the UI blocks obtained from the previous operation contain only a few widgets and represent small parts of the overall use case functionality. Our approach supports the combination of UIBs in order to obtain larger blocks that can be transformed into suitable graphic windows. We use the following rules (heuristics) to merge the UIBs of a use case:
(Rule 7) Adjacent UIBs belonging to the same scenario are merged (scenario membership).
(Rule 8) The operation of composition begins with scenarios having the highest frequency (scenario classification, see
Section 3.1).
(Rule 9) Two UIBs can only be grouped if the total of their widgets is less than 20 (ergonomic criterion).
Applying these rules to the GB of Figure 17(b) results in the graph GB' of UIBs shown in Figure 17(c).
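A greedy sketch of Rules 7 and 9 follows; Rule 8 (processing scenarios by decreasing frequency) is omitted for brevity, and the triple representation of a block is our own assumption:

```python
# Hypothetical sketch of UIB composition (Section 3.5.4, Rules 7 and 9).

def compose_uibs(blocks, limit=20):
    """blocks: ordered list of (name, scenario, widget_count) triples.
    Merge adjacent same-scenario blocks while the widget total stays below limit."""
    merged = [list(blocks[0])]
    for name, scenario, widgets in blocks[1:]:
        last = merged[-1]
        if last[1] == scenario and last[2] + widgets < limit:
            last[0] += "+" + name       # Rule 7: adjacent, same scenario
            last[2] += widgets          # Rule 9: ergonomic widget bound
        else:
            merged.append([name, scenario, widgets])
    return [tuple(b) for b in merged]
```

A block that would push the widget total to 20 or beyond starts a new composed block, which later becomes its own frame.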
Figure 17: Graph GB resulting from UIB identification on the graph GT' of Figure 16(b): (a) expanded view; (b) collapsed view; (c) GB', the result after UIB composition.
3.5.5 Generating frames from composed user interface blocks

In this operation, we generate a graphic frame for each UIB of GB'. The generated frame contains the widgets of all the transitions belonging to the concerned UIB. Edges between UIBs in GB' are transformed into function calls in the appropriate frame classes. In our current implementation, Java code is generated that is compatible with the interface builder of Visual Café [37]. This gives the analyst the opportunity to customize the visual aspect of the generated frames. The two frames derived from the composed building blocks of Figure 17(c) are shown in Figure 18.
The dynamic aspect of the UI is controlled by the behavior specification (StateD) of the underlying interface object. Running the generated prototype means symbolic execution of the StateD or, in our case, traversal of the transition graph GT'. The prototype responds to all user interaction events captured in GT' and ignores all other events.
To support prototype execution, a Simulation Window is generated (Figure 19, bottom window), as well as a dialog box to
Choose Scenarios (Figure 19, middle-right window). For example, after selecting the use case Withdraw from the
UseCases menu (Figure 19, top window), a message is displayed in the simulation window that confirms the use case
selection and prompts the user to click the button Insert_card. When this button is clicked, the fields Password, Kind of
Transaction, and Amount are enabled, and the simulator prompts the user for information entry. When execution reaches
a node in GT’ from which several continuation paths are possible, the prototype displays the dialog box for scenario
selection. In the example of Figure 19, the upper selection corresponds to the scenario regularWithdraw and the lower one to the scenario balanceError. Once a path has been selected, the traversal of GT' continues.
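The traversal just described can be sketched as a small loop; the function and its edge-list representation are our own assumptions, with the `choose` callback standing in for the Choose Scenarios dialog:

```python
# Hypothetical sketch of prototype execution as a traversal of GT'.

def simulate(edges, start, choose, max_steps=100):
    """Walk GT' from `start`; at nodes with several continuation paths,
    `choose` picks the successor (the scenario selection dialog)."""
    node, trace = start, [start]
    while len(trace) < max_steps:
        successors = [b for a, b in edges if a == node]
        if not successors:
            break                       # end of the use case
        node = successors[0] if len(successors) == 1 else choose(successors)
        trace.append(node)
    return trace
```

In the Withdraw example, when execution reaches the branch after enter_amount, choosing the balanceError path routes the traversal to the display_error transition.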
[Figure 17 graphs: (a) GT' nodes *T1, T2, T3, T4, T5, T10, T12, T13 grouped into the blocks B1, B2, B3, B4; (b) collapsed view of B1-B4; (c) composed blocks B1+B2+B3 and B4.]
Figure 18: Frames generated from the graph GB’ of Figure 17(c) (use case Withdraw).
Figure 19: Prototype execution involving main window for use case selection (top window), a frame of the prototype UI (middle-left window), the simulation window (bottom window), and the dialog box for scenario selection
(middle-right window).
4 Related Work

In this section, we first review some related work in the area of automatic generation of UIs from specifications. Then, we address research dealing with the simulation of specifications.
A number of methods have been suggested for deriving the UI from specifications of the application domain. Typically,
data attributes serve as input for the selection of interaction objects according to rules based on style guidelines such as
CUA (Common User Access) [17], OpenLook [36], and Motif [25]. Such methods include the Genius, Janus, and
TRIDENT approaches.
In Genius [18], the application domain is captured in data models that are extended entity-relationship models [5]. The
analyst defines a number of views, where each view is a subset of entities, relationships, and attributes of the overall data
model, and specifies how these views are interconnected by means of a Petri-net based dialogue description. From these
view and dialog specifications, Genius generates the UI. Note, however, that the specification process is completely
manual.
Janus [2] derives the different windows of a UI from object models as introduced by Coad and Yourdon [8]. Non-abstract
classes are transformed into windows, with attributes and methods marked as irrelevant for the UI being ignored in the
transformation process. Janus does not address the dynamic aspect of UIs.
Note that, in contrast to our approach, both Genius and Janus use data structure specifications for UI generation, but ignore task analysis altogether. As a consequence, such methods are of little use for systems other than data-oriented applications.
TRIDENT [3] leverages both task analysis and functional requirements analysis. Task analysis proceeds by decomposing
the application into interactive tasks and by determining task attributes such as importance and user stereotype (user’s
task experience, user’s system experience, etc.). Functional requirements analysis builds an entity-relationship model for
the data and extracts from task analysis the tasks that should be treated as internal functions. An activity-chaining graph is
drawn to connect interactive tasks to data and functions. This graph serves as the input for the selection of different
windows, referred to as presentation units. TRIDENT provides three types of assistance in defining presentation units,
but is not specific about this assistance, nor does it address the dynamic aspect of UIs.
Simulation of specifications is supported by a variety of methods and tools, including STATEMATE, SCR, Albert, and the
work by Koskimies et al.
STATEMATE [13] is a commercial tool, which provides graphical and diagrammatic languages for describing a system
under development in three different views: structural, functional, and behavioral. Behavioral views are captured by
StateDs. The tool supports system simulation for verification purposes as well as automatic code generation. UI
generation is not supported. We consider STATEMATE a complementary tool with respect to our approach: StateDs
synthesized by a tool such as ours may be passed to STATEMATE for simulation and analysis, and conversely, StateDs
of interface objects specified with STATEMATE may be complemented with a UI prototype using our approach. Thus,
with the two tools combined, both the functional and the UI aspect of a system can be simulated.
The SCR method [14] suggests a tabular notation for specifying requirements and provides a set of tools for simulation
and for automatic error detection. The formal model of specifications is the classic state machine model, and therefore, in
contrast to StateDs, concurrency is not supported. The SCR simulator tool allows for the integration of UIs; yet, the UIs
must be constructed manually using a GUI builder.
Albert [9] is a formal language for requirements engineering. It organizes a specification around active entities called
agents. Albert provides tools for incompleteness and inconsistency checks, and proposes an animation tool. This latter
tool, announced for the end of 1999, will allow for the exploration of the different possible “lives” associated with a given
specification.
Koskimies et al. [22], finally, present an algorithm for synthesizing state machines (StateDs) from a set of scenarios (the
differences to our synthesis algorithm are detailed in [35]). They propose an approach for design called design by
animation. During the simulation of the synthesized state machines, new scenarios are generated which may in turn fuel
the synthesis of more comprehensive state machines. Scenario generation can be supported via a UI, which must be
crafted manually.
6 Discussion of Approach

Below, we discuss our approach with respect to the following points: scope and limitations, scenario-based approach, rapid and evolutionary prototyping, validation, and practicality.
Scope and limitations of approach
The scope of our approach is threefold: (1) it proposes a process for requirements engineering compliant with the UML,
(2) it provides automatic support for building object specifications, and (3) it supports UI prototyping. Yet, at least three
limitations apply. First, the main direction of the approach is forward; that is, generation always starts with scenarios, whereas modifications in the resultant object specifications and UI prototypes cannot be mapped back automatically into
the scenario set. Eventually, automatic modification of scenarios through the UI prototype should be supported. Second,
our approach may be applied to reactive systems exhibiting windows and widgets interfaces. However, in its current
form, it fails to support alternative UI paradigms. Finally, verification of characteristics such as coherence, completeness,
etc. is not supported. Rather, we have to rely on external tools such as STATEMATE to verify the specifications.
Scenario-based approach
Our approach to UI generation exhibits the advantages of scenario-based techniques. In contrast to many data-oriented
methods (see previous section), UIs are generated from specifications describing dynamic system behavior, which are
derived from task analysis. Once they are generated, data specifications may be used as the basis for further refinements.
In line with Rosson [30], who advocates a “model-first, middle-out design approach” that interleaves the modeling of
objects and scenarios, we put the emphasis on the (dynamic) modeling aspect, and generate the dynamic core of UIs
rather than focus on screen design and user-system dialog.
As scenarios describe only partial system views, there is a need to elicit all possible scenarios of the system. We have
defined transition variables as described in Section 3.4 to prohibit scenario interleaving, that is, the resulting
specifications will capture exactly the input scenarios and not more. However, our scenario integration algorithm can be
configured to allow scenario interleaving and to capture more than the mere input scenarios. In this way, new scenarios
may be generated from the already existing ones [19].
Rapid and evolutionary prototyping
In the proposed framework, we aim at rapid prototyping for the purpose of end user validation at an early stage of
development. The generated prototype serves as a vehicle for evaluating and enhancing the UI and the underlying
specification. Since the prototype is generated in Java source code, it can be evolved at the code level towards the target
application, to cover data and functional aspects. Since our framework is embedded in the UML, these aspects are
provided as class diagrams and activity diagrams, respectively, that may be transformed into Java classes by use of a
CASE tool such as Rational Rose [29].
Validation of approach
The four algorithms (see Sections 3.2, 3.3, 3.4, and 3.5) that constitute the core of our approach have all been
implemented in Java. For scenario acquisition and for the presentation of the resulting specifications, we have adopted
two textual formats. The analyst may eventually be shielded from these formats by graphical editors for CollDs and
StateDs, like the ones found in commercial CASE tools. The Java code generated for the UI prototype is fully compatible
with the interface builder of Visual Café [37]. This permits the analyst to enhance the visual aspect of the UI before and
during end user validation. Note that the four algorithms have polynomial complexity.
Our approach has been successfully applied to a number of examples such as a library system presented in [10b], a gas
station simulator [7], the ATM (Automatic Teller Machine) system [33] presented in this paper, and a filing system [10].
On average, half an hour per scenario was spent to convert an informal scenario description into a newly generated UI
prototype. For instance, in the case of the ATM example, comprising three main use cases with a total of seven scenarios,
half a day’s work yielded the overall UI prototype, as well as the complete set of StateDs of all interface and non-
interface objects involved. We estimate that coding the prototype and synthesizing the StateDs by hand would have taken
us three times this effort or more.
Practicality of approach
Our vision of a professional tool that supports our approach is a CASE tool supplying, beyond the functionality of the
algorithms of the approach, graphical editors for the UML diagrams needed, as well as a “widget tool” for editing CollDs.
Such a tool may visualize the widgets generated from interactive messages and may allow changing widget styles by
direct manipulation. Furthermore, such a tool may support a wider range of widget types than is currently being provided.
At the conceptual level, to further practicality, the rules for UI generation (see Sections 3.5.3 and 3.5.4) may be refined.
7 Conclusion and Future Work

The work presented in this paper proposes a new approach to the generation of UI prototypes from scenarios. Scenarios
are acquired as CollDs enriched with UI information. These CollDs are transformed into dynamic specifications of the UI
objects involved in the system. Static and dynamic aspects of the UI are then derived from the obtained dynamic
specifications of the UI objects.
The most interesting features of our approach lie in the automation brought about by the deployed algorithms, in the use of the scenario approach, addressing not only sequential scenarios but also scenarios that exhibit concurrent behaviors, and in
the derivation of executable prototypes that are embedded in a UI builder environment for refinement. The obtained
prototypes can be used for scenario validation with end users and can be evolved towards the target application.
As future work, we aim to provide automatic support for verification of scenarios and specifications (cf. top of Figure 4).
Furthermore, we plan to support backward engineering by allowing the automatic modification of scenarios through the
UI prototypes.
References

[1] J. S. Anderson and B. Durney, "Using Scenarios in Deficiency-driven Requirements Engineering", Requirements Engineering '93,
IEEE Computer Society Press, 1993, pp. 134-141.
[2] H. Balzert, “From OOA to GUIs: The Janus System”, IEEE Software, 8(9), February 1996, pp. 43-47.
[3] F. Bodart, A.-M. Hennebert, J.-M. Leheureux, I. Provot, and J. Vanderdonckt, “A Model-based Approach to Presentation: A
Continuum from Task Analysis to Prototype”, Proceedings of the Eurographics Workshop on Design, Specification, Verification
of Interactive Systems, Carrara, Italy, June 1994, Focus on Computer Graphics, Springer-Verlag, Berlin, pp.77-94.
[4] G. Booch, Object Oriented Analysis and Design with Applications, Benjamin/Cummings Publishing Company Inc., Redwood City,
CA, 1994.
[5] P. Chen, “The Entity-Relationship Model – Toward a Unified View of Data”, ACM Transactions on Database Systems, 1(1), 1976,
pp. 9-36.
[6] G. Chin and M.B. Rosson, "Progressive Design: Staged Evolution of Scenarios in the Design of a Collaborative Science Learning Environment", Proc. of the Conference on Human Factors in Computing Systems (CHI'98), Los Angeles, April 1998, pp. 611-618.
[7] D. Coleman, P. Arnold, S. Bodoff, C. Dollin, H. Gilchrist, F. Hayes, and P. Jeremaes, Object-Oriented Development: The Fusion
Method, Prentice-Hall, Inc., 1994.
[8] P. Coad and E. Yourdon, Object-oriented Analysis, Prentice Hall, Englewood Cliffs, NJ, 1991.
[9] P. Du Bois, E. Dubois, and J-M. Zeippen, "On the use of a formal requirements engineering language: The generalized railroad crossing problem", Proc. of the Third IEEE International Symposium on Requirements Engineering, IEEE CS Press, 1997.
[10] K. W. Derr, Applying OMT: A practical step-by-step guide to using the Object Modeling Technique, SIGS BOOKS/Prentice Hall,
1996.
[10b] M. Elkoutbi, I. Khriss, and R. K. Keller, Generating User Interface Prototypes from Scenarios, to appear in Proceedings of the
Fourth IEEE International Symposium on Requirements Engineering (RE’99), Limerick, Ireland, June 1999.
[11] H-E. Eriksson, and M. Pinker, UML-Toolkit, John Wiley and Sons, 1998.
[11b] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns. Elements of Reusable Object-Oriented Software. Addison-
Wesley, 1995.
[12] D. Harel, “Statecharts: A visual formalism for complex systems”, Science of Computer Programming, 8, June 1987, pp. 231-274.
[13] D. Harel, H. Lachover, A. Naamad, A. Pnueli, M. Politi, R. Sherman, A. Shtull-Trauring, and M. Trakhtenbrot, “STATEMATE:
A Working Environment for the Development of Complex Reactive Systems”, IEEE Transactions on Software Engineering,
(16)4, April 1990, pp. 403-414.
[14] C. Heitmeyer, J. Kirby, B. Labaw, and R. Bharadwaj, “SCR*: A Toolset for Specifying and Analyzing Software Requirements”,
Proc. of the 10th Annual Conference on Computer-Aided Verification, (CAV’98), Vancouver, Canada, 1998, pp. 526-531.
[15] P. Hsia, J. Samuel, J. Gao, D. Kung, Y. Toyoshima, and C. Chen, “Formal approach to scenario analysis”, IEEE Software, (11)2,
March 1994, pp. 33-41.
[16] I. Jacobson, M. Christerson, P. Jonson, and G. Overgaard, Object-Oriented Software Engineering: A Use Case Driven Approach,
Addison-Wesley, 1992.
[17] IBM, Systems Application Architecture: Common User Access – Guide to User Interface Design – Advanced Interface Design
Reference, IBM, 1991.
[18] C. Janssen, A. Weisbecker, and U. Ziegler, “Generating User Interfaces from Data Models and Dialogue Net Specifications”,
Proc. of the Conference on Human Factors in Computing Systems (CHI’93), Amsterdam, The Netherlands, April 1993, pp. 418-
423.
[19] I. Khriss, M. Elkoutbi, and R. K. Keller, “Automating the Synthesis of UML Statechart Diagrams from Multiple Collaboration
Diagrams”, Proc. of the International Workshop on the Unified Modelling Language UML’98: Beyond the Notation, Mulhouse,
France, June 1998, pp. 115-126bis.
[20] I. Khriss, M. Elkoutbi and R. K. Keller, A New Approach to the Synthesis of Behavioral Specifications from Scenarios, Technical
Report GELO-82, Université de Montréal, Montréal, Québec, Canada, January 1998.
[21] I. Khriss, M. Elkoutbi and R. K. Keller, Automatic Synthesis of Behavioral Specifications from Scenarios, Technical Report
GELO-96 (in preparation), Université de Montréal, Montréal, Québec, Canada, January 1999.
[22] K. Koskimies, T. Systä, J. Tuomi, and T. Männistö, “Automatic support for modeling OO software”, IEEE Software, 15(1),
January/February 1998, pp. 42-50.
[23] T. P. Maher, Automated Generation of the User Interfaces, Computer Science Thesis, Department of Computer Science and
Computer Engineering, La Trobe University, Melbourne, Australia, 1994.
[24] B. A. Myers, “User Interface Software Tools”, ACM Transactions on Computer-Human Interaction, 2(1), March 1995, pp. 64-
103.
[25] Open Software Foundation, OSF/Motif Style Guide, Prentice Hall, Englewood Cliffs, NJ, USA, 1990.
[26] B. A. Nardi, “The Use Of Scenarios In Design”, SIGCHI Bulletin, 24(4), October 1992.
[27] C. Potts, K. Takahashi and A. Anton, Inquiry-Based Scenario Analysis of System Requirements, Technical Report GIT-CC-94/14,
Georgia Institute of Technology, 1994.
[28] Rational Software Corporation, Rational Objectory Process 2.1 – Your UML Process, Santa Clara, CA, 1998.
[29] Rational Software Corporation, Rational Rose, Santa Clara, CA, 1998.
[30] M. B. Rosson, “Integrating Development of Task and Object Models”, Communications of the ACM, 42(1), January 1999, pp. 49-
56.
[31] M. B. Rosson and J. M. Carroll, “Integrating task and software development in object-oriented applications”, Proc. of Human
Factors in Computing Systems, CHI’95, ACM Press, 1995, pp. 377-384.
[32] K. S. Rubin and A. Goldberg, “Object Behavior Analysis”, Communications of the ACM, 35(9), September 1992, pp. 48-62.
[33] J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy, and W. Lorensen, Object-oriented Modeling and Design, Prentice-Hall, Inc.,
1991.
[34] J. Rumbaugh, I. Jacobson, and G. Booch, The Unified Modeling Language Reference Manual, Addison Wesley, Inc., 1999.
[35] S. Schönberger, R. K. Keller and I. Khriss, Algorithmic Support for Transformations in Object-Oriented Software Development,
Technical Report GELO-83, Université de Montréal, Montréal, Québec, Canada, April 1998.
[36] Sun Microsystems, Inc. and AT&T, OPEN LOOK GUI Application Style Guidelines, Addison-Wesley, USA, 1990.
[37] Symantec, Inc., Visual Café for Java: User Guide, Symantec, Inc., 1997.
[38] J.B. Wordsworth, Software Development with Z: A Practical Approach to Formal Methods in Software Engineering, Addison-
Wesley, Wokingham, 1992.