
SFM 2012: MDE slide 1

Software Performance Modeling

Dorina C. Petriu, Mohammad Alhaj, Rasha Tawhid
Carleton University

Department of Systems and Computer Engineering
Ottawa, Canada, K1S 5B6

http://www.sce.carleton.ca/faculty/petriu.html


SFM 2012: MDE slide 2

Analysis of Non-Functional Properties

Model-Driven Engineering enables the analysis of non-functional properties (NFPs) of software models:
- examples of NFPs: performance, scalability, reliability, security, etc.
- many formalisms and tools for NFP analysis already exist: queueing networks, Petri nets, process algebras, Markov chains, fault trees, probabilistic timed automata, formal logic, etc.
- research challenge: bridge the gap between MDD and the existing NFP analysis formalisms and tools, rather than 'reinventing the wheel'.

Approach:
- add annotations for expressing different NFPs to the software models
- define model transformations from annotated software models to different NFP analysis models
- using existing solvers, analyze the NFP models and give feedback to the designers.

In the UML world, the extensions are defined as UML profiles for expressing NFPs:
- UML Profile for Schedulability, Performance and Time (SPT)
- UML Profile for Modeling and Analysis of Real-Time and Embedded systems (MARTE)


SFM 2012: MDE slide 3

Software performance evaluation in MDE

Software performance evaluation in the context of Model-Driven Engineering:
- starting point: a UML software model, also used for code generation
- add performance annotations (using the MARTE profile)
- generate a performance analysis model (queueing network, Petri net, stochastic process algebra, Markov chain, etc.)
- solve the analysis model to obtain quantitative results
- analyze the results and give feedback to the designers

[Figure: tool chain — the UML+MARTE software model, built in a UML tool, goes through a model-to-model transformation into a performance model; a performance analysis tool solves it and produces performance analysis results, which are fed back to the designers; a model-to-code transformation (code generation) produces the software code.]


SFM 2012: MDE slide 4

PUMA transformation approach

PUMA project: Performance from Unified Model Analysis

[Figure: PUMA tool chain — software model with performance annotations (Smodel) → transform Smodel to CSM (S2C) → Core Scenario Model (CSM) → transform CSM to some Pmodel (C2P) → performance model (Pmodel) → explore solution space → performance results and design advice, with a feedback loop to improve the Smodel.]


SFM 2012: MDE slide 5

Transformation Target: Performance Models


SFM 2012: MDE slide 6

Performance modeling formalisms

Analytic models
- Queueing Networks (QN): capture contention for resources well; efficient analytical solutions exist for a class of QN ("separable" QN), making it possible to derive steady-state performance measures without resorting to the underlying state space.
- Stochastic Petri Nets: good flow models, but not as good for resource contention; the Markov-chain-based solution suffers from state-space explosion.
- Stochastic Process Algebra: introduced in the mid-90s by merging process algebra and Markov chains.
- Stochastic Automata Networks: communicating automata synchronized by events, with random execution times; Markov-chain-based solution (corresponding to the system state space).

Simulation models
- less constrained in their modeling power, can capture more detail
- harder to build and more expensive to solve (the model must be run repeatedly).


SFM 2012: MDE slide 7

Queueing Networks (QN)

A queueing network model is a directed graph:
- nodes are service centres, each representing a resource
- customers, representing the jobs, flow through the system and compete for these resources
- arcs with associated routing probabilities (or visit ratios) determine the paths that customers take through the network.

QNs are used to model systems with stochastic characteristics. With multiple customer classes, each class has its own workload intensity (arrival rate or number of customers), service demands and visit ratios. The bottleneck service centre is the one that saturates first (highest demand and utilization).

[Figure: an open QN system (jobs arrive, visit the CPU, Disk 1 and Disk 2, then depart) and a closed QN system (a fixed set of terminal users cycling through the CPU and disks).]
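These quantities follow from the standard operational laws. The sketch below, in Python, is not part of the lecture and uses hypothetical service demands; it computes per-centre utilization and residence time for an open separable QN with single-server FCFS centres and identifies the bottleneck.

def open_qn(arrival_rate, demands):
    """Per-centre utilization and residence time, plus system response time.

    demands: dict centre-name -> total service demand per job (sec/job).
    Valid only while every utilization stays below 1 (no saturation).
    """
    results = {}
    for centre, demand in demands.items():
        util = arrival_rate * demand            # utilization law: U_k = X * D_k
        if util >= 1.0:
            raise ValueError(f"{centre} saturates at this arrival rate")
        residence = demand / (1.0 - util)       # residence time at a single-server centre
        results[centre] = (util, residence)
    response_time = sum(r for _, r in results.values())
    return results, response_time

demands = {"CPU": 0.005, "Disk1": 0.030, "Disk2": 0.027}       # sec/job (hypothetical)
per_centre, R = open_qn(arrival_rate=20.0, demands=demands)    # 20 jobs/sec
bottleneck = max(demands, key=demands.get)                     # highest demand saturates first
print(per_centre, R, bottleneck)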


SFM 2012: MDE slide 8

Single Service Center: Non-linear Performance

Typical non-linear behaviour of queue length and waiting time:
- the server reaches saturation at a certain arrival rate (utilization close to 1)
- at low workload intensity an arriving customer meets little competition, so its residence time is roughly equal to its service demand
- as the workload intensity rises, congestion increases, and the residence time rises along with it
- as the service centre approaches saturation, small increases in the arrival rate result in dramatic increases in residence time.

[Figure: utilization, residence time and queue length plotted against the arrival rate for a single service centre.]
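A minimal sketch (not from the slides) that reproduces these curves for an M/M/1 centre, assuming a hypothetical mean service demand of one time unit:

# M/M/1 service centre: utilization, residence time and queue length
# as functions of the arrival rate (service demand S assumed to be 1.0).

S = 1.0  # mean service demand per customer (hypothetical)

for arrival_rate in [0.1, 0.3, 0.5, 0.7, 0.9]:
    rho = arrival_rate * S          # utilization
    R = S / (1.0 - rho)             # mean residence time (queueing + service)
    N = rho / (1.0 - rho)           # mean number in the centre (Little's law: N = arrival_rate * R)
    print(f"lambda={arrival_rate:.1f}  U={rho:.2f}  R={R:.2f}  N={N:.2f}")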


SFM 2012: MDE slide 9

Layered Queueing Network (LQN) model — http://www.sce.carleton.ca/rads/lqn/lqn-documentation

- LQN is an extension of QN that models both software tasks (rectangles) and hardware devices (circles)
- it represents nested services (a server is also a client to other servers)
- software components have entries corresponding to their different services
- arcs represent service requests (synchronous, asynchronous and forwarding)
- multi-servers are used to model components with internal concurrency

[Figure: example LQN — a Client task (entry clientE) on ClientCPU, an Appl task (entries service1, service2) on ApplCPU, and a DB task (entries query1, query2) on DBCPU with Disk1 and Disk2; callouts mark tasks, entries and devices.]


SFM 2012: MDE slide 10

LQN extensions: activities, fork/join

[Figure: LQN example with activities and fork/join — Local Client (entry e1, multiplicity 1..n) and Remote Client (entry e2, multiplicity 1..m) tasks call a Web Server (entries e3, e4); e4 is refined into an activity graph with a fork/join over activities a1, a2, a3, a4; the Web Server calls the eComm Server (e5), Secure (e6) and DB (e7); processors include Local Wks, RemoteWks, Web Proc, eComm Proc, Secure Proc and DB Proc; Internet, SDisk and DB Disk model the network and disks.]


SFM 2012: MDE slide 11

LQN Metamodel (package LQNmetamodel)

[Figure: LQN metamodel — classes Processor (multiplicity, schedulerType), Task (multiplicity, priorityOnHost, schedulerType), Entry, Phase1 (replyFlag=true), Phase2 (replyFlag=false), Activity (thinkTime, hostDemand, hostDemCV, deterministicFlag, repetitionsForLoop, probForBranch, replyFwdFlag), Precedence (Sequence, Branch, Merge, Fork, Join), Call (meanCount; SyncCall, AsyncCall) and Forward (probForward), together with the associations host/allocatedTask, taskOperation/schedulableProcess, entry and task activity sets, predecessor/successor precedences, callTo/callByActivity and fwdTo/fwdByEntry.]


SFM 2012: MDE slide 12

Performance versus Schedulability

Difference between performance and schedulability analysis:
- performance analysis: timing properties of best-effort and soft real-time systems (e.g., information processing systems, web-based applications and services, enterprise systems, multimedia, telecommunications)
- schedulability analysis: applied to hard real-time systems with strict deadlines; the analysis is often based on worst-case execution times and deterministic assumptions.

Statistical performance results (analysis outputs): mean (and variance) of throughput, delay (response time) and queue length; resource utilization; probability of missing a target response time.

Input parameters to the analysis are also probabilistic: random arrival process, random execution time for an operation, probability of requesting a resource.

Performance models represent a system at runtime and must include characteristics of both the software application and the underlying platforms.
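For example, the probability of missing a target response time can be estimated from the mean response time once a distribution is assumed; the sketch below (not from the slides) assumes an exponentially distributed response time, a common simplification for open systems.

import math

def miss_probability(mean_response_time, deadline):
    """P(R > deadline), assuming the response time R is exponentially distributed."""
    return math.exp(-deadline / mean_response_time)

# e.g. mean response time 2 s, target 5 s -> about 8% of requests miss the target
print(miss_probability(2.0, 5.0))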


SFM 2012: MDE slide 13

UML Profiles for performance annotations:

SPT and MARTE


SFM 2012: MDE slide 14

UML SPT Profile Structure

[Figure: SPT profile structure — the General Resource Modeling Framework («profile» RTresourceModeling, imported by «profile» RTconcurrencyModeling and «profile» RTtimeModeling), the Analysis Models («profile» SAProfile, «profile» RSAprofile and «profile» PAprofile, all importing the framework), and the Infrastructure Models («modelLibrary» RealTimeCORBAModel).]


SFM 2012: MDE slide 15

SPT Performance Profile: Fundamental concepts

Scenarios define execution paths with externally visible end points. QoS requirements can be placed on scenarios.

Each scenario is executed by a workload:
- open workload: requests arrive in some predetermined pattern
- closed workload: a fixed number of active or potential users or jobs.

Scenario steps are the elements of scenarios, joined by predecessor-successor relationships which may include forks, joins and loops; a step may be an elementary operation or a whole sub-scenario.

Resources are used by scenario steps; quantitative resource demands for each step must be given in the performance annotations. The main reason for building performance models is to compute the additional delays due to competition for resources.

Performance results include resource utilizations, waiting times, response times and throughputs.

Performance analysis is applied to real-time systems with stochastic characteristics and soft deadlines (using mean value analysis methods).
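For closed workloads, the interactive response time law relates these outputs to the workload parameters; a minimal sketch (not from the slides) with hypothetical numbers:

# Interactive response time law: R = N / X - Z
# (N users, system throughput X, think time Z). Values are hypothetical.

def response_time(users, throughput, think_time):
    return users / throughput - think_time

print(response_time(users=50, throughput=4.0, think_time=5.0))   # 7.5 s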


SFM 2012: MDE slide 16

SPT Performance Profile: the domain model

[Figure: SPT performance domain model — a PerformanceContext relates Workloads (responseTime, priority; ClosedWorkload: population, externalDelay; OpenWorkload: occurrencePattern), PScenarios (hostExecDemand, responseTime) composed of ordered PSteps (probability, repetition, delay, operations, interval, executionTime) with predecessor/successor links and a root step, and PResources (utilization, schedulingPolicy, throughput) specialized into PProcessingResource (processingRate, contextSwitchTime, priorityRange, isPreemptible) acting as hosts and PPassiveResource (waitingTime, responseTime, capacity, accessTime).]


SFM 2012: MDE slide 17

MARTE overview

The MARTE domain model is organized into three packages:
- MarteFoundations: foundations for modeling and analysis of RT/E systems — CoreElements, NFPs, Time, generic resource modeling, generic component modeling, Allocation
- MarteAnalysisModel: specialization of the MARTE foundations for annotating models for analysis purposes — generic quantitative analysis, schedulability analysis, performance analysis
- MarteDesignModel: specialization of the MARTE foundations for modeling purposes (specification, design, etc.) — RT/E model of computation and communication, software resource modeling, hardware resource modeling


SFM 2012: MDE slide 18

GQAM dependencies and architecture

GQAM (Generic Quantitative Analysis Modeling) provides the common concepts for analysis; SAM adds modeling support for schedulability analysis techniques, and PAM adds modeling support for performance analysis techniques.

[Figure: package dependencies — GQAM imports Time, GRM, NFPs and the «modelLibrary» MARTE_Library; SAM and PAM import GQAM; GQAM is internally organized into GQAM_Workload, GQAM_Resources and GQAM_Observers.]


SFM 2012: MDE slide 19

Annotated deployment diagram

[Figure: annotated deployment — «execHost» dbHost {commRcvOverhead=(0.14,ms/KB), commTxOverhead=(0.07,ms/KB), resMult=3} hosting artifact databaseA (manifests Database); «execHost» ebHost {commRcvOverhead=(0.15,ms/KB), commTxOverhead=(0.1,ms/KB), resMult=5} hosting artifact ebA (manifests EBrowser); «execHost» webServerHost {commRcvOverhead=(0.1,ms/KB), commTxOverhead=(0.2,ms/KB)} hosting artifact webServerA (manifests WebServer); «commHost» internet {blockT=(100,us)}; «commHost» lan {blockT=(10,us), capacity=(100,Mb/s)}.]

blockT describes a pure latency for the link

commRcvOvh and commTxOvh are host-specific costs of receiving and sending messages

resMult = 5 describes a symmetric multiprocessor with 5 processors
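A minimal sketch (not from the slides) of how such annotations can be combined into a per-message network cost; the way the link latency and the per-KB overheads are added here is an assumption for illustration only.

def message_delay_ms(msg_size_kb, block_t_ms, tx_ovh_ms_per_kb, rcv_ovh_ms_per_kb):
    """Rough per-message cost: link latency plus sender and receiver overheads."""
    return block_t_ms + msg_size_kb * (tx_ovh_ms_per_kb + rcv_ovh_ms_per_kb)

# 2 KB request from webServerHost to dbHost over the LAN (values from the diagram):
# lan blockT = 10 us = 0.01 ms, webServerHost commTxOverhead = 0.2 ms/KB,
# dbHost commRcvOverhead = 0.14 ms/KB
print(message_delay_ms(2, 0.01, 0.2, 0.14))   # ~0.69 ms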


SFM 2012: MDE slide 20

Simple scenario

[Figure: annotated sequence diagram — lifelines eb:EBrowser, «PaRunTInstance» webServer:WebServer {poolSize=(webthreads=80), instance=webserver} and «PaRunTInstance» database:Database {poolSize=(dbthreads=5), instance=database}; step 1 «PaStep» «PaWorkloadEvent» {open(interArrT=exp(17,ms)), hostDemand=(4.5,ms)}; step 2 «PaStep» «PaCommStep» {hostDemand=(12.4,ms), rep=(1.3,-,mean), msgSize=(2,KB)}; step 3 «PaCommStep» {msgSize=(50,KB)}; step 4 «PaCommStep» {msgSize=(75,KB)}.]

- the initial step is stereotyped for the workload (open), the execution demand and the request message size
- a swimlane or lifeline stereotyped «PaRunTInstance» references a runtime active instance; poolSize specifies its multiplicity


SFM 2012: MDE slide 21

Transformation Principles from SModels to PModels


SFM 2012: MDE slide 22

UML model for performance analysis

For performance analysis, a UML model should contain:
- Key use cases realized by representative scenarios
  • frequently executed, with performance constraints
  • each scenario is a graph of steps (partial ordering)
- Resources used by each scenario
  • resource types: active or passive, physical or logical, hardware or software (examples: processor, disk, process, software server, lock, buffer)
  • quantitative resource demands for each scenario step (how much, how many times?)
- Workload intensity for each scenario
  • open workload: arrival rate of requests for the scenario
  • closed workload: number of simultaneous users


SFM 2012: MDE slide 23

Direct UML to LQN Transformation: our first approach

Mapping principle:
- software and hardware resources → service centres
- scenarios → job flow from centre to centre

Generate the LQN model structure (tasks, devices and their connections) from the structural view:
- active software instances → LQN tasks
- deployment nodes → LQN devices

Generate the detailed LQN elements (entries, phases, activities and their parameters) from the behavioural view:
- identify communication patterns in key scenarios due to architectural patterns (client/server, forwarding server chain, pipeline, blackboard, etc.)
- aggregate scenario steps according to each pattern and map them to entries, phases, etc.
- compute LQN parameters from the resource demands of the scenario steps.


SFM 2012: MDE slide 24

Generating the LQN model structure

[Figure: (a) high-level architecture — Client, WebServer and Database components connected through ClientServer collaborations; (b) deployment — client nodes ProcC1..ProcCN with modems, Internet, LAN, server node ProcS, and database node ProcDB with Disk1.]

Generated LQN model structure:
- software tasks are generated for the high-level software components, according to the architectural patterns used
- hardware tasks are generated for the devices in the deployment diagram

[Figure: the generated LQN structure for the example — «process» User tasks (1..n) on ProcC1..ProcCN send client-server requests to the «process» WebServer task on ProcS, which in turn requests the «process» Database task on ProcDB (with Disk1); Internet, modems and the LAN model the network links.]


SFM 2012: MDE slide 25

Client Server Pattern

[Figure: (a) ClientServer collaboration — Client (1..n) and Server participants and their relationship.]

Structure: the participants and their relationship.

Behaviour: synchronous communication style — the client sends the request and remains blocked until the server replies.

[Figure: (b) client-server behaviour — the client requests a service, waits for the reply and then continues its work; the server serves the request and replies, optionally completing the service afterwards.]


SFM 2012: MDE slide 26

Mapping the Client Server Pattern to LQN

[Figure: mapping the client-server pattern to LQN — the User/Client steps 'request service' and 'continue work' map to entry e1, phase 1 of the Client task; the WebServer/Server steps 'serve request and reply' map to entry e2, phase 1, and the optional 'complete service' maps to e2, phase 2; the Client and Server tasks run on the Client CPU and Server CPU.]

For each subset of scenario steps mapped to an LQN phase or activity, compute the execution time

S = Σ_{i=1}^{n} r_i · s_i

where r_i is the number of repetitions and s_i the execution time of step i.
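A minimal sketch of this aggregation (the step names are borrowed from the scenario on the next slide; the repetitions and execution times are hypothetical):

# Aggregate scenario steps into one LQN phase/activity demand: S = sum(r_i * s_i).

steps = [
    {"name": "check valid item code", "repetitions": 1, "exec_time_ms": 0.8},
    {"name": "add item to query",     "repetitions": 5, "exec_time_ms": 0.3},
    {"name": "generate page",         "repetitions": 1, "exec_time_ms": 2.1},
]

S = sum(step["repetitions"] * step["exec_time_ms"] for step in steps)
print(f"phase host demand = {S:.1f} ms")   # 0.8 + 1.5 + 2.1 = 4.4 ms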


SFM 2012: MDE slide 27

Identify patterns in a scenario

[Figure: an annotated scenario with UserInterface, ECommServ and DBMS swimlanes — steps such as 'browse and select items', 'check valid item code', 'add item to query', 'sanitize query', 'add to invoice', 'log transaction', 'generate page' and 'display' carry «PAstep» annotations like {PArep=$r} and {PAdemand=('assm','mean',$md1,'ms')}; the steps are grouped into phase 1 and phase 2 per component and mapped to the LQN entries e1 [ph1] of UserInterface, e2 [ph1] of ECommServer and e3 [ph1, ph2] of DBMS.]


SFM 2012: MDE slide 28

Transformation using a pivot language


SFM 2012: MDE slide 29

Pivot languages

A pivot language (also called a bridge or intermediate language) can be used as an intermediary for translation. It avoids the combinatorial explosion of translators across every combination of languages.

Examples of pivot languages for performance analysis: Core Scenario Model (CSM), Klaper, PMIF + S-PMIF, Palladio model.

Direct transformations from N source languages to M target languages require N×M transformations.

[Figure: direct translation — every source language L1..LN needs its own translator to every target language L'1..L'M.]

Using a pivot language Lp, only N+M transformations are needed (e.g., N = 5 sources and M = 4 targets need 20 direct translators but only 9 via the pivot). Each individual transformation also has a smaller semantic gap to bridge.

[Figure: translation through a pivot — L1..LN are transformed into the pivot language Lp, and Lp is transformed into L'1..L'M.]


SFM 2012: MDE slide 30

Core Scenario Model

CSM is a pivot domain-specific language used in the PUMA project at Carleton University (Performance from Unified Model Analysis).

Semantically, CSM sits between the software and the performance domains:
- it focuses on scenarios and resources
- performance data is intrinsic to CSM: the quantitative resource demands made by scenario steps and the workload.

[Figure: PUMA transformation chain — UML+SPT, UML+MARTE and UCM source models are transformed to CSM, and from CSM to LQN, QN, Petri net and simulation models.]


SFM 2012: MDE slide 31

CSM Metamodel

[Figure: CSM metamodel — a CSM contains Scenarios composed of Steps and PathConnections (Sequence, Branch, Merge, Fork, Join, Start, End) with source/target multiplicities; a Start carries a Workload (OpenWorkload or ClosedWorkload); ResourceAcquire and ResourceRelease steps operate on GeneralResources, specialized into ActiveResource/ProcessingResource (hosts), PassiveResource and Component; ExternalOperation and Message are also modeled. The regions of the diagram correspond to workload, scenario/steps and resources.]


SFM 2012: MDE slide 32

CSM metamodel: basic scenario elements, similar to the SPT Performance Profile
- a scenario is composed of steps; a step may be refined as a sub-scenario
- precedence relationships among steps: sequence, branch, merge, fork, join, loop
- steps are performed by components running on hosts (processing resources)
- resources and acquire/release operations on resources; these are inferred for component-based resources (processes)

Four kinds of resources in CSM:
- ProcessingResource (a node in a deployment diagram)
- ComponentResource (a process or active object): a component in a deployment; a lifeline in an SD or a swimlane in an AD may correspond to a runtime component
- LogicalResource (declared as a GRM resource)
- extOp resource: an implied resource used to execute external operations


SFM 2012: MDE slide 33

CORBA-based case study

Two CORBA-based client-server systems:
- H-ORB (handle-driven ORB): the client gets the address of the server from the agent and communicates with the server directly.
- F-ORB (forwarding ORB): the agent forwards the client request to the appropriate server, which returns the results of the computation directly to the client.

Synthetic application:
- contains two services, A and B; two copies of each service are provided; the clients connect to these services through the ORB
- each client executes a cycle repeatedly, making one request to Server A (distributed randomly between copies A1 and A2) and one to Server B (distributed randomly between copies B1 and B2)
- the client performs a bind operation before every request
- since the experiments were performed on a local area network, the inter-node delay that would appear in a wide-area network was simulated by making the sender process sleep for D units of time before sending a message.


SFM 2012: MDE slide 34

H-ORB deployment and scenario

[Figure: H-ORB deployment — client1..clientN (artifact Client-art) deployed on «GaExecHost» PC1..PCN, the agent (Agent-art) on PA, ServerA1/ServerA2 (ServerA-art) on PA1/PA2 and ServerB1/ServerB2 (ServerB-art) on PB1/PB2, all connected through a «GaCommHost» LAN.]

[Figure: H-ORB scenario as an activity diagram («GaAnalysisContext» ad HORB) with «PaRunTInstance» partitions Client, Agent, ServerA1, ServerA2, ServerB1 and ServerB2 — the first step carries «GaWorkloadEvent» {pattern=(closed(Population=$N))}; the GetHandle steps on the Agent are «PaStep» {hostDemand=(4,ms)}; the alternative server work steps are «PaStep» {prob=0.5, hostDemand=($SA,ms)} and «PaStep» {prob=0.5, hostDemand=($SB,ms)}; Sleep steps model the simulated inter-node delay.]


SFM 2012: MDE slide 35

H-ORB scenario as sequence diagram

[Figure: «GaAnalysisContext» sd HORB — the Client calls GetHandle() on the Agent («PaStep» {hostDemand=(4,ms)}), then an alt fragment calls A1Work() or A2Work() on ServerA1/ServerA2 («PaStep» {prob=0.5}, {hostDemand=($SA,ms)}); a second GetHandle() and alt fragment call B1Work() or B2Work() ({hostDemand=($SB,ms)}); ref Sleep fragments model the simulated network delay; the workload is «GaWorkloadEvent» {pattern=(closed(Population=$N))}; all lifelines are «PaRunTInstance».]


SFM 2012: MDE slide 36

Transformation from UML+MARTE to CSM

Structural elements are generated first: CSM ProcessingResources and CSM Components.

For scenarios described by a SD, the CSM Start PathConnection is generated first and the workload information is attached to it. Lifelines stereotyped «PaRunTInstance» correspond to active runtime instances.

The translation then follows the message flow of the scenario, generating the corresponding Steps and PathConnections:
- a UML execution occurrence generates a simple Step
- complex CSM Steps with a nested scenario correspond to operand regions of UML combined fragments and to interaction occurrences.

Mapping of MARTE stereotypes to CSM model elements:
- «GaWorkloadEvent» → Closed/OpenWorkload
- «GaScenario» → Scenario
- «PaStep» → Step
- «PaCommStep» → Step (for the message)
- «GaResAcq» → ResourceAcquire
- «GaResRel» → ResourceRelease
- «PaResPass» → ResourcePass
- «GaExecHost» → ProcessingResource
- «GaCommHost» → ProcessingResource
- «PaRunTInstance» → Component
- «PaLogicalResource» → LogicalResource
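A minimal sketch (not the actual PUMA implementation) of how such a stereotype-to-CSM mapping can drive the transformation, assuming the annotated UML elements are already available as simple Python values:

# Illustrative lookup table deciding which CSM element to create for each
# MARTE-annotated UML element (simplified; the helper code is hypothetical).

MARTE_TO_CSM = {
    "GaWorkloadEvent":   "Workload",            # refined into Open/ClosedWorkload
    "GaScenario":        "Scenario",
    "PaStep":            "Step",
    "PaCommStep":        "Step",                # a Step representing the message
    "GaResAcq":          "ResourceAcquire",
    "GaResRel":          "ResourceRelease",
    "GaExecHost":        "ProcessingResource",
    "GaCommHost":        "ProcessingResource",
    "PaRunTInstance":    "Component",
    "PaLogicalResource": "LogicalResource",
}

def csm_element_for(uml_element_stereotypes):
    """Return the CSM metaclass name for the first recognized MARTE stereotype."""
    for stereotype in uml_element_stereotypes:
        if stereotype in MARTE_TO_CSM:
            return MARTE_TO_CSM[stereotype]
    return None

print(csm_element_for(["PaRunTInstance"]))   # -> Component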


SFM 2012: MDE slide 37

Transformation from SD to CSM

[Figure: the H-ORB sequence diagram (as on slide 35) and the CSM generated from it — Start:HORB carries the closed workload, followed by R_Acquire:client, R_Acquire:agent / Step:GetHandle() / R_Release:agent, Sleep steps, a Branch/Merge over Step:OpA1 and Step:OpA2, a second GetHandle(), a Branch/Merge over Step:OpB1 and Step:OpB2, R_Release:client and End; the refined sub-scenarios Sleep, OpA1 and OpB1 acquire and release ServerS, ServerA1 and ServerB1 around their sleep()/A1Work()/B1Work() steps.]


SFM 2012: MDE slide 38

Transformation from CSM to LQN

The first transformation phase parses the CSM resources and generates:
- an LQN Task for each CSM Component
- an LQN Processor for each CSM ProcessingResource.

The second transformation phase traverses the CSM to determine:
- the branching structure and the sequencing of Steps within branches
- the calling interactions between Components.

A new LQN Entry is generated whenever a task receives a call. The entry internals are described either by LQN activities that represent a graph of CSM Steps, or by phases.
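A minimal sketch (not the actual PUMA code) of the first phase, assuming the CSM has been loaded into hypothetical Python objects exposing components and processing_resources collections:

# First CSM-to-LQN phase (sketch): one LQN task per CSM component,
# one LQN processor per CSM processing resource.

def csm_to_lqn_structure(csm):
    lqn = {"processors": {}, "tasks": {}}
    for pr in csm.processing_resources:
        lqn["processors"][pr.name] = {"multiplicity": pr.multiplicity}
    for comp in csm.components:
        lqn["tasks"][comp.name] = {
            "processor": comp.host.name,     # allocation taken from the CSM host link
            "multiplicity": comp.multiplicity,
            "entries": [],                   # filled in by the second phase,
        }                                    # one entry per distinct incoming call
    return lqn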

[Figure: LQN model for the H-ORB system — the Client task (entry client_e) on processor PC calls the Agent task (entry GetHandle) on PA, a Sleep task (entry sleep_e) on a dummy processor, and the ServerA1, ServerA2, ServerB1 and ServerB2 tasks (entries A1Work, A2Work, B1Work, B2Work) on PA1, PA2, PB1 and PB2; call multiplicities (2), (4) and (1) annotate the request arcs.]


SFM 2012: MDE slide 39

Validation of LQN against measurements

The LQN results were compared with measurements of a performance prototype implemented on top of a commercial off-the-shelf (COTS) middleware product, running a synthetic workload on a network of Sun workstations under Solaris 2.6.

[Figures: predicted versus measured results for the H-ORB and for the F-ORB.]


SFM 2012: MDE slide 40

Extending PUMA for Service-Oriented Architecture


SFM 2012: MDE slide 41

PUMA4SOA approach

Extensions Smodel adapted to service-

based systems: Business process model Service model

Separation between: PIM: platform independent PSM: platform specific

Use Performance Completion feature model to specify platform variability

Techniques Use Aspect-oriented Models for

platform operations aspect composition may

take place at different levels: UML, CSM, LQN

Traceability between different kinds of models


SFM 2012: MDE slide 42

Source PIM: (a) Business Process Model

Eligibility Referral System (ERS)


SFM 2012: MDE slide 43

Source PIM: (b) Service Architecture Model

[Figure: SoaML service architecture — «Participant» es:EstimatorServer, na:NursingAccount, dm:Datamanagement, pa:PhysicianAccount and as:AdmissionServer, connected through «Request» and «Service» ports for physicianAuth, payorAuth, validateTransfer, confirmTransfer, recordTransfer, scheduleTransfer and requestReferral.]

SoaML stereotypes: «Participant» indicates parties that provide or consume services; «Request» indicates the consumption of a service; «Service» indicates the offered service.


SFM 2012: MDE slide 44

Source PIM: (c) Service Behaviour Model

[Figure: service behaviour model with the join points for platform aspects marked.]


SFM 2012: MDE slide 45

Deployment of the primary model

[Figure: deployment of the primary model on the Admission, Transferring and Insurance nodes, together with models describing the platform.]


SFM 2012: MDE slide 46

Performance Completion Feature Model
- describes the variability in the service platform
- the Service Platform feature model in the example defines three mandatory feature groups (Operation, Message Protocol and Realization) and two optional feature groups (Communication and Data Compression)
- each feature is described by an aspect model to be composed with the base model.

[Figure: Service Platform PC-feature model — feature groups Operation (Invocation, Publishing, Discovery, Subscribing), Communication, Message Protocol, Realization (Web service, REST, DCOM, CORBA) and Data Compression (Compressed, Uncompressed), with features such as Http, SOAP, Secure/Unsecure and the SSL and TSL protocols, and <1-1> selection cardinalities on the groups.]


SFM 2012: MDE slide 47

Generic Aspect Model: Service Invocation

Generic aspect model (e.g., the Service Invocation aspect):
- defines the structure and behavior of the platform aspect in a generic format
- uses generic names (i.e., formal parameters) for software and hardware resources
- uses generic performance annotations (MARTE variables)
- advantage: reusability.

Context-specific aspect model:
- after identifying a join point, the generic aspect model is bound to the context of that join point
- the context-specific aspect model is then composed with the primary model.


SFM 2012: MDE slide 48

Binding generic to concrete resources

- Generic names (parameters) are bound to concrete names corresponding to the context of the join point.
- Sometimes new resources are added to the primary model.
- User input is required (e.g., in the form of an Excel spreadsheet, as discussed later).


SFM 2012: MDE slide 49

Binding performance annotation variables

- Annotation variables allowed in MARTE are used as generic performance annotations.
- They are bound to concrete, reusable platform-specific annotations.
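A minimal sketch (not part of PUMA4SOA) of this binding step: generic MARTE $-variables in an annotation string are substituted with concrete platform-specific values, e.g. as read from the user-supplied parameter spreadsheet. The variable names and values below are hypothetical.

import re

bindings = {"$InvOvh": "2.5", "$MsgSize": "4", "$MarshalD": "0.8"}

def bind_annotation(annotation, bindings):
    """Replace every $variable in a MARTE annotation string with its concrete value."""
    return re.sub(r"\$\w+", lambda m: bindings.get(m.group(0), m.group(0)), annotation)

generic = "hostDemand=($MarshalD,ms), msgSize=($MsgSize,KB)"
print(bind_annotation(generic, bindings))
# -> hostDemand=(0.8,ms), msgSize=(4,KB)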


SFM 2012: MDE slide 50

PSM: scenario after composition in UML

[Figure: the PSM scenario after composition in UML, with the composed service invocation aspect and the composed service response aspect highlighted.]


SFM 2012: MDE slide 51

Aspect composition at CSM level

Steps:
- the PIM and the platform aspect models in UML are transformed into CSM separately
- the UML workflow model is transformed into the CSM top-level model
- the UML service behavior models are transformed into a set of CSM sub-scenario models
- AOM is used to perform the aspect composition, generating the CSM of the PSM
- the CSM is then transformed into LQN.

Advantages:
- CSM has a lightweight metamodel compared with UML, so it is easier to implement the aspect composition in CSM and to ensure its consistency.

Drawbacks:
- pointcuts cannot be defined completely at the CSM level, because not all details of the UML service architecture model are transformed to CSM.


SFM 2012: MDE slide 52

Aspect composition at LQN level

Steps:
- the PIM and the platform aspect models are first transformed from CSM into LQN separately
- the CSM top-level scenario model is transformed into a top-level LQN activity graph
- the CSM sub-scenario models are transformed into a set of tasks with entries that represent the steps
- AOM aspect composition is performed, generating the LQN of the PSM.

Advantages:
- LQN has a lightweight metamodel, similar to CSM.

Drawbacks:
- pointcuts cannot be defined completely at the LQN level, because many details of the UML service architecture model are lost
- the granularity of the aspects should correspond to entries; otherwise the composition becomes more difficult.


SFM 2012: MDE slide 53

Traceability of Model Transformations

PUMA4SOA defines two modeling layers for the software:
- the workflow layer, represented by the workflow model
- the service layer, represented by the service architecture model and the service behaviour model.

Model traceability is maintained by separating the transformation of the workflow layer from that of the service layer.

Why traceability is desirable:
- it makes the model transformation more modular, especially when there are many workflows in the UML design model
- it facilitates reporting the performance results in software model terms.


SFM 2012: MDE slide 54

LQN Model for ERF case study

[Figure: LQN model for the case study — a User task drives the ReferralBusinessProcess workflow task, whose activities (dProcessEligibilityReferral, dSelectingReferral, dInitialPatientTransfer, dPerformPhysicianAuthorization, dPerformPayorAuthorization, dValidatingRequest, dConfirmTransfer, dSchedulingTransfer, dProcessEligibilityTransfer, dCompleteTransfer) call the service tasks NursingAccount, AdmissionServer, PhysicianAccount, EstimatorServer and DataManagement through middleware send/receive tasks (MW-PA, MW-ES, MW-AS, MW-DM, MW-NA); processors include UserP, admission, transferring, insurance and dm with their disks, and a Net delay task models the network.]


SFM 2012: MDE slide 55

Finer service granularity

[Figure: (a) response time vs. number of users and (b) throughput vs. number of users for configurations A..E under finer service granularity.]

- A: the base case; the multiplicity of all tasks and hardware devices is 1, except for the number of users. The Transferring processor is the system bottleneck.
- B: resolve the bottleneck by increasing the multiplicity of the bottleneck processor node to 4 processors. Only a slight improvement, because the next bottleneck, the middleware task MW-NA, kicks in.
- C: the software bottleneck is resolved by multi-threading MW-NA.
- D: increase the number of disk units of Disk1 to 2 and add threads to the next software bottleneck tasks, dm1 and MW-DM1. Throughput goes up by 24% with respect to case C; the bottleneck moves to the DM1 processor.
- E: increasing the number of DM1 processors to 2 has a considerable effect.
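The reasoning behind cases A..E follows the usual asymptotic bottleneck bound: the system saturates at min_k(m_k / D_k) over all resources k, where m_k is the multiplicity and D_k the demand per response. The sketch below uses hypothetical demands (not the case-study values) to show why raising the multiplicity of the saturated resource simply shifts the bottleneck to the next most loaded one.

# Asymptotic throughput bound: the system saturates at min_k (m_k / D_k).
# Multiplicities and demands below are hypothetical, for illustration only.

servers = {
    "Transferring": (1, 0.9),   # (multiplicity, demand per response in seconds)
    "MW-NA":        (1, 0.7),
    "Disk1":        (1, 0.5),
}

def bottleneck(servers):
    """Return (max throughput, name of the resource that saturates first)."""
    name = min(servers, key=lambda k: servers[k][0] / servers[k][1])
    m, d = servers[name]
    return m / d, name

print(bottleneck(servers))              # (~1.11 resp/s, 'Transferring')
servers["Transferring"] = (4, 0.9)      # case B: give the bottleneck node 4 processors
print(bottleneck(servers))              # (~1.43 resp/s, 'MW-NA') -- the bottleneck moves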


SFM 2012: MDE slide 56

Coarser service granularity

- A: the base case. The software task dm1 is the initial bottleneck.
- B: the software bottleneck is resolved by multi-threading dm1. The response time is reduced slightly and the bottleneck moves to Disk1.
- C: increasing the number of disk units of Disk1 to 2 has a considerable effect. The maximum throughput goes up by 60% with respect to case B; the bottleneck moves to the Transferring processor.
- D: increasing the multiplicity of the Transferring processor to 2 and adding threads to the next software bottleneck task, MW-NA; the throughput grows by 11%.

[Figure: (c) response time vs. number of users and (d) throughput vs. number of users for configurations A..D under coarser service granularity.]


SFM 2012: MDE slide 57

Coarser versus finer service granularity

Difference between the D cases of the two alternatives: the compared configurations are similar in the number of processors, disks and threads, except that the system with coarser granularity performs fewer service invocations through the web service middleware.

[Figure: (e) response time vs. number of users and (f) throughput vs. number of users, comparing the finer and coarser service granularity alternatives.]


SFM 2012: MDE slide 58

Extending PUMA for Software Product Lines


SFM 2012: MDE slide 59

Software Product Line (SPL)

Software Product Line (SPL) engineering takes advantage of the commonality and variability between the products of a family.

SPL challenges:
- model/manage the commonality and variability between family members
- support the generation of a specific product by reusing the core family assets.

Objective of the research:
- integrate performance analysis into the UML-based model-driven development process for SPL
- SPL parametric performance annotations become part of the reusable family assets.

Why? Early performance analysis helps developers gain insight into the performance properties of a system during its development and to choose between design alternatives early in the lifecycle, so as to build systems that will meet their performance requirements.


SFM 2012: MDE slide 60

Challenge

Main research challenge: there is a large semantic difference between an SPL model and a performance model.

SPL model:
- a collection of core "generic" asset models, which are building blocks for many different products, with all kinds of options and alternatives
- cannot be implemented, run and measured as such.

Performance model:
- a "model @ runtime" focusing on how a running system uses its resources in a certain operational mode, under a well-defined workload.

Proposed approach: a two-phase process for automating the derivation of a performance model for a concrete product from an annotated SPL model:
- transformation of the annotated SPL model to an annotated model for a given product, including binding of the parametric performance annotations
- further transformation to a performance model by known techniques (PUMA).


SFM 2012: MDE slide 61

Features

Feature is a concept for modeling variability that represents a requirement or characteristic provided by one or more members of the product line

Feature model is essential for both variability management and product derivation

Feature models are used in this work to represent two different variability spaces: Regular feature model: representing functional variability between

products Performance Completion (PC) feature model: representing variability

in the platform Mapping feature to the model elements realizing it:

Regular feature: by PL stereotypes indicating the feature or condition PC feature: by MARTE performance-related stereotypes and

attributes


SFM 2012: MDE slide 62

Transformation approach

[Figure: two-phase transformation approach — first phase (software domain, domain/application engineering): the UML+MARTE+SPLV SPL model, the feature model and a feature configuration feed an M2M transformation that instantiates the specific product model with generic annotations and generates a parameter spreadsheet; the user enters concrete values, and a binding transformation produces the UML+MARTE product model. Second phase (performance domain): the PUMA transformation produces the LQN performance model, the LQN solver computes the performance results, and diagnosis/performance feedback is returned to the user; the PC-feature model captures the platform variability.]


SFM 2012: MDE slide 63

E-commerce SPL Feature Model: FODA notation


SFM 2012: MDE slide 64

E-commerce SPL Feature Model: UML

[Figure: the e-commerce SPL feature model in UML notation — «common feature» E-Commerce Kernel; «exactly-one-of» feature groups Customer (Business Customer | Home Customer), Catalog (Static | Dynamic) and Data Storage (Distributed | Centralized), each {mutually exclusive feature}; «at-least-one-of» feature groups Payment (CreditCard, Check, DebitCard), Delivery (Electronic, Shipping), Invoices (On-line Display, Printed Invoice), Customer Attractions (Promotions, Membership Discount, Sales), Customer Inquiries (Help Desk, Call Center), ShippingType (Normal, Express) and International Sale (Currency Conversion, Tariffs Calculation); «optional feature»s Purchase Order, Package Slip, Several Language, I/E Laws and Switching Menu; «requires», «mutually includes» and «more-than-one-required» relationships connect the features.]


SFM 2012: MDE slide 65

Modeling Variability in design models

Variability in use case models:
- stereotypes applied to use cases: «kernel», «optional», «alternative»
- feature-based grouping of use cases in packages
- variation points inside a use case: complex variations use "extend" and "include" relationships; small variations define variation points in the scenarios realizing the use case.

Variability in structure models:
- stereotypes for classes: "kernel class", "optional class", "variant class".

Variability in behavior models:
- scenario models (one for every scenario corresponding to every use case): stereotypes for interaction diagrams («kernel», «optional», «alternative»); variation points defined as alternative fragments
- it is also possible to model variability by using inherited and parameterized statecharts (not used in this paper).


SFM 2012: MDE slide 66

E-commerce SPL: Use Case Model

[Figure: SPL use case diagram — actors Customer, Authorization Center, Supplier, Wholesaler and Bank; «kernel» use cases Browse Catalog {vp=Catalog}, Make Purchase Order, Confirm Shipping and Process Delivery Order {ext point=Delivery}; «alternative» use cases Check Customer Account, Send Invoice, Bill Customer {ext point=Payment}{vp=Switching Menu}, Create Requisition {vp=DataStorage} and Confirm Delivery {vp=Data Storage}; «optional» use cases International Sales {vp=International}, Customer Inquiry {vp=Inquiries}, Customer Attractions {vp=Attractions}, Deliver Purchase Order, Prepare Purchase Order, Shipping {vp=ShippingType}, Electronic, CreditCard, DebitCard and Check; «extend» relationships and feature conditions (e.g. Feature=BusinessCustomer, Feature=HomeCustomer, Feature=Purchase Order, Feature=Electronic Delivery) guard the variants.]


SFM 2012: MDE slide 67

E-commerce SPL: Fragment of Class Diagram


SFM 2012: MDE slide 68

E-commerce SPL: Browse Catalog scenario

[Figure: sd Browse Catalog («GaAnalysisContext» {contextParams=$N1, $Z1, $ReqT, $FSize, $Blocks}) — «variant» :CustomerInterface {instance=$CBrowser, host=$CustNode}, «kernel» :Catalog {instance=$CatServer, host=$CatNode}, and «optional» :StaticStorage, :ProductDB and :ProductDisplay lifelines, all «PaRunTInstance»; the getList request carries «GaWorkloadEvent» {pattern=$PattBC}, «PaStep» {hostDemand=($GetLD,ms), respT=($ReqT,ms), calc} and «PaCommStep» {msgSize=($GetL*0.2,KB), commTxOvh=($GetLSend,ms), commRcvOvh=($GetLRcv,ms)}; an alt fragment stereotyped «AltDesignTime» {VP=Catalog} «SingleChoiceFeature» {RegB=True} separates the Static and Dynamic catalog variants; the catalogInfo reply is a «PaCommStep» {msgSize=($RetL,KB)}.]


SFM 2012: MDE slide 69

E-commerce SPL: Bill Customer Scenario

[Figure: sd Bill Customer («GaAnalysisContext» {contextParams=$N1, $Z1, $ReqT, $FSize, $Blocks}) — «variant» :CustomerInterface and :SupplierInterface, «kernel» :DeliveryOrder, and «optional» :Billing, :CustomerAccount and :DisplayMenu lifelines, all «PaRunTInstance» with $instance/$host parameters; an opt fragment [Switching Menu] stereotyped «OptDesignTime» {VP=Switching Menu} «SingleChoiceFeature» {OptB=True}; opt fragments [CreditCard], [DebitCard] and [Check] reference the Pay by CreditCard / Pay by DebitCard / Pay by Check interactions, each stereotyped «OptDesignTime» {VP=Payment} «MultiChoiceFeature» {AltB=True} «SingleChoiceFeature» {RegB=True}.]


SFM 2012: MDE slide 70

Select the desired feature configuration for the product:
- a feature configuration is a set of compatible features that uniquely characterize a product
A generated UML+MARTE model for a specific product contains:
- the use case model for the specific product
- the product class diagram
- sequence diagrams for each scenario in each selected use case of the product
Each diagram of the generated product model is obtained from an SPL diagram by selecting only the model elements that realize the desired features (a sketch of this selection follows).
Profile use in the generated product model:
- only MARTE is used (still with generic parameters)
- the PL profile has been eliminated, as the variability dependent on regular features has been resolved.

Product Model Derivation
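To make the selection rule above concrete, here is a minimal Python sketch; it is not the actual model-to-model transformation used in the approach, and all element names, stereotypes and the feature sets are simplified placeholders. Elements stereotyped «kernel» are always copied; «optional»/«alternative» elements are copied only when their guarding features belong to the chosen configuration.

from dataclasses import dataclass, field

@dataclass
class SplElement:
    name: str
    stereotype: str = "kernel"                            # "kernel", "optional" or "alternative"
    required_features: set = field(default_factory=set)   # empty set => no feature guard

def derive_product(spl_elements, feature_configuration):
    # keep kernel elements, plus variable elements whose guarding features
    # are all present in the chosen feature configuration
    return [e for e in spl_elements
            if e.stereotype == "kernel" or e.required_features <= feature_configuration]

spl = [
    SplElement("Browse Catalog"),
    SplElement("Bill Customer", "alternative", {"Home Customer"}),
    SplElement("Pay by DebitCard", "optional", {"DebitCard"}),
    SplElement("Prepare Purchase Order", "optional", {"Purchase Order"}),
]
home_customer = {"Home Customer", "Dynamic", "Electronic", "Shipping", "Normal",
                 "Printed Invoice", "On-line Display", "CreditCard", "DebitCard",
                 "Sales", "Switching Menu"}

for element in derive_product(spl, home_customer):
    print(element.name)    # Browse Catalog, Bill Customer, Pay by DebitCard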


SFM 2012: MDE slide 71

[Feature model residue, reorganized. The e-commerce SPL feature model contains:
- «common feature» E-Commerce Kernel
- «optional feature» purchaseOrder
- «exactly-one-of feature group» Catalog {mutually exclusive}: «alternative feature» Static, «alternative feature» Dynamic
- «exactly-one-of feature group» Customer {mutually exclusive}: «alternative feature» Business Customer, «alternative feature» Home Customer
- «exactly-one-of feature group» Data Storage {mutually exclusive}: «alternative feature» Distributed, «alternative feature» Centralized
- «at-least-one-of feature group» Delivery: «optional feature» Electronic, «optional feature» Shipping
- «at-least-one-of feature group» Invoices: «optional feature» On-line Display, «optional feature» Printed Invoice
- «at-least-one-of feature group» Payment: «optional feature» CreditCard, «optional feature» Check, «optional feature» DebitCard
- «at-least-one-of feature group» Customer Attractions: «optional feature» Promotions, «optional feature» Membership Discount, «optional feature» Sales
- «at-least-one-of feature group» Customer Inquiries: «optional feature» Help Desk, «optional feature» Call Center
- «at-least-one-of feature group» ShippingType: «optional feature» Normal, «optional feature» Express
- «at-least-one-of feature group» International Sale: «optional feature» Currency Conversion, «optional feature» Tariffs Calculation
- further «optional feature»s: Package Slip, Several Language, I/E Laws, Switching Menu
- dependency links between features: «requires», «mutually includes», «more-than-one-required» (their exact endpoints are not recoverable from the extracted text).

Feature configuration selected for the Home Customer product: «common feature» E-Commerce Kernel plus Home Customer, Dynamic, Electronic, Shipping, Normal, Printed Invoice, On-line Display, CreditCard, DebitCard, Sales and Switching Menu, related by «mutually includes» links.]

Feature configuration for Home Customer Product


SFM 2012: MDE slide 72

[Use case diagram residue, reorganized. Actors: Customer, Authorization Center, Supplier, Wholesaler, Bank.
Use cases with variability and feature annotations:
- «kernel» Browse Catalog {vp=Catalog}, «kernel» Make Purchase Order, «kernel» Confirm Shipping, «kernel» Process Delivery Order {ext point=Delivery}
- «alternative» Bill Customer {ext point=Payment} {vp=Switching Menu}, «alternative» Create Requisition {vp=DataStorage}, «alternative» Confirm Delivery {vp=Data Storage}, «alternative» Check Customer Account, «alternative» Send Invoice
- «optional» Deliver Purchase Order, «optional» Prepare Purchase Order (Feature=Purchase Order)
- «optional» CreditCard (Feature=Credit Card), «optional» DebitCard (Feature=Debit Card), «optional» Check (Feature=Check), attached to Bill Customer through «extend»
- «optional» Electronic (Feature=Electronic Delivery), «optional» Shipping {vp=ShippingType} (Feature=Shipping Delivery), attached to Process Delivery Order through «extend»
- «optional» International Sales {vp=International} (Feature=International Sale), «optional» Customer Attractions {vp=Attractions} (Feature=Customer Attractions), «optional» Customer Inquiry {vp=Inquiries} (Feature=Customer Inquiries)
- elements guarded by Feature=HomeCustomer or Feature=BusinessCustomer are kept or dropped according to the Customer alternative chosen.]

Use Case Model for Home Customer Product


SFM 2012: MDE slide 73

[Abstract syntax residue, reorganized. ModelA:Model (id=MA) owns as packagedElements: Browse:UseCase (id=UCC, with extensionPoint ExP2:ExtensionPoint, id=ExP2), Customer:Actor (id=ActA), and AC:Association (id=AC1, memberEnd=P1, P2). The association owns the ends Pro1:Property (id=P1, type=UCC, isUnique=false, association=AC1, upperValue=1, lowerValue=1) and Pro2:Property (id=P2, type=ActA, isUnique=false, association=AC1, upperValue=1, lowerValue=1); the endType links point back to the use case and the actor. In the concrete syntax, the actor Customer is associated with the use case «kernel» Browse; both are marked as selected.]

Use case – actor association: stereotype only the use cases; the actor and the association are selected implicitly because the use case they are attached to is selected.

Implicit selection of non-annotated elements
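A minimal Python sketch, with invented helper names, of how such an implicit-selection rule can be realized: the actor and the use case–actor association carry no stereotype of their own, but they are copied into the product model because the use case at one end of the association was selected.

def implicitly_select(selected_use_cases, use_case_actor_links):
    # use_case_actor_links: (actor, use_case) pairs read from the abstract syntax
    selected_links = [(actor, uc) for (actor, uc) in use_case_actor_links
                      if uc in selected_use_cases]
    selected_actors = {actor for (actor, _) in selected_links}
    return selected_actors, selected_links

# Example matching the figure: Customer -- «kernel» Browse
links = [("Customer", "Browse")]
actors, assocs = implicitly_select({"Browse"}, links)
print(actors)   # {'Customer'}
print(assocs)   # [('Customer', 'Browse')]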


SFM 2012: MDE slide 74

Class Diagram (fragment) for Home Customer


SFM 2012: MDE slide 75

[Abstract syntax residue, reorganized. Payment:Class (id=CA) and CreditCard:Class (id=CB) are packagedElements; AB:Association (id=A1) has memberEnd=PP1, PP3. PA1:Property (id=PP1, type=CB, isUnique=false, association=A1, upperValue=*, lowerValue=1) and PB1:Property (id=PP3, type=CA, isUnique=false, association=A1, upperValue=*, lowerValue=1) are ownedAttributes of the two classes; PA2:Property (id=PP2, isUnique=false, upperValue=1, lowerValue=1, defaultValue) is a further attribute. In the concrete syntax, «optional» Payment is associated with «optional» CreditCard; both classes are marked as selected.]

Association between two classes: stereotype only the classes; the association is selected implicitly once both of its end classes are selected.

Implicit selection of non-annotated elements
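The corresponding sketch for class diagrams, again with illustrative names only: the association and its member-end properties carry no stereotype; the association is copied only when both classes it connects have been selected.

def select_associations(selected_classes, associations):
    # associations: (class_a, class_b, association_id) triples
    return [a for a in associations
            if a[0] in selected_classes and a[1] in selected_classes]

associations = [("Payment", "CreditCard", "A1")]
print(select_associations({"Payment", "CreditCard"}, associations))  # [('Payment', 'CreditCard', 'A1')]
print(select_associations({"Payment"}, associations))                # []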


SFM 2012: MDE slide 76

[Figure: the annotated Browse Catalog SPL scenario of slide 68, shown again as input to the product derivation. Its «AltDesignTime» {VP=Catalog} «SingleChoiceFeature» {RegB=True} alt fragment, with operands [Dynamic] and [Static], is the part to be resolved; for the Home Customer configuration the Dynamic alternative is the one selected.]

Transformation of Browse Catalog scenario for Home Customer Product


SFM 2012: MDE slide 77

Generated Browse Catalog Scenario


SFM 2012: MDE slide 78

[Figure: the annotated Bill Customer SPL scenario of slide 69, shown again as input to the product derivation, with its «OptDesignTime» variation points (VP=Payment, stereotyped «MultiChoiceFeature» {AltB=True} «SingleChoiceFeature» {RegB=True}, and VP=Switching Menu, stereotyped «SingleChoiceFeature» {OptB=True}) to be resolved according to the Home Customer configuration.]

Transformation of Bill Customer scenario


SFM 2012: MDE slide 79

Generated Bill Customer scenario


SFM 2012: MDE slide 80

Propose a user-friendly solution compared to an older approach, in which the binding information was given as a set of pairs <param, value> created manually by the developer after inspecting the generated product model.
New solution:
- collect automatically all generic parameters that need binding from the generated UML+MARTE product model
- present them to developers in a spreadsheet format, together with context and guiding information
- developers enter the concrete binding values on the same spreadsheet
- collect automatically the hardware resources (e.g., hosts) and present their list when the developer needs to choose a resource for software-to-hardware allocation
- automate the mapping between PC-features and MARTE annotations
- a transformation performs the actual binding after reading the concrete values from the spreadsheet (a sketch of the collection step follows).

Handling generic parameters
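As a rough illustration of the collection step (not the ATL/Excel-based implementation referenced on slide 87; the annotation strings, the regular expression and the file layout are assumptions), the following Python sketch scans MARTE annotation strings for $-parameters and writes one spreadsheet row per parameter, with its context and an empty value column for the developer to fill in.

import csv
import re

# (annotated element, stereotype, annotation string) -- illustrative values
annotations = [
    ("getList", "PaStep", "hostDemand=($GetLD, ms), respT=($ReqT, ms, calc)"),
    ("getList", "PaCommStep", "msgSize=($GetL*0.2, KB), commTxOvh=($GetLSend, ms)"),
    (":ProductDisplay", "PaRunTInstance", "instance=$ProDis, host=$ProDisNode"),
]

PARAM = re.compile(r"\$\w+")

def collect_parameters(annotated):
    rows = []
    for element, stereotype, text in annotated:
        for param in PARAM.findall(text):
            rows.append({"parameter": param, "element": element,
                         "stereotype": stereotype, "context": text,
                         "value": ""})      # to be filled in by the developer
    return rows

with open("parameters.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["parameter", "element",
                                           "stereotype", "context", "value"])
    writer.writeheader()
    writer.writerows(collect_parameters(annotations))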


SFM 2012: MDE slide 81

[Toolchain diagram, reorganized. Software Domain, domain engineering: the UML+MARTE+PL SPL Model and its Feature model. Application engineering: a Feature Configuration drives M2MT: Instantiate Specific Product Model, producing the Product Model with Generic Annotations; M2MT: Generate Parameter Spreadsheet (which also uses the PC-Feature model from the Performance Domain) produces the Parameter Spreadsheet; the user enters concrete values, yielding the Concrete Annotations Spreadsheet; M2MT: Perform Binding produces the UML+MARTE Product Model; M2MT: PUMA Transformation produces the LQN Performance Model; the LQN Solver computes the Performance Results, which feed Performance Diagnosis and Feedback back to the user.]

Parameter spreadsheet: derivation and use


SFM 2012: MDE slide 82

The generic parameters of a product model derived from the SPL model are of different kinds:
- product-specific resource demands, such as execution times, numbers of repetitions, probabilities of different steps
- software-to-hardware allocation, such as the mapping of component instances to processors
- platform/environment-specific performance details (a.k.a. performance completions).
Binding to concrete values:
- the performance analyst needs to provide concrete values for all generic parameters
- this transforms the generic product model into a platform-specific model describing the run-time behaviour of the product for a specific run-time environment.

Kinds of generic parameters
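The three kinds of parameters can be captured in a simple data structure; the sketch below (illustrative names and values only) also shows the completeness check implied above: every generic parameter must receive a concrete value before the product model becomes platform-specific.

from enum import Enum

class ParamKind(Enum):
    RESOURCE_DEMAND = "resource demand"           # e.g. execution times, repetitions
    ALLOCATION = "software-to-hardware allocation"
    COMPLETION = "performance completion"         # platform/environment details

parameters = {
    "$GetLD": ParamKind.RESOURCE_DEMAND,
    "$ProDisNode": ParamKind.ALLOCATION,
    "$GetLSend": ParamKind.COMPLETION,
}

bindings = {"$GetLD": "12 ms", "$ProDisNode": "AppServerNode"}

unbound = [p for p in parameters if p not in bindings]
if unbound:
    # all generic parameters must be bound before the model can be analyzed
    print("still generic:", unbound)    # ['$GetLSend']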


SFM 2012: MDE slide 83

[Performance completion (PC) feature model residue, reorganized. Each group offers a <1-1> (exactly one) choice:
- securityLevel: highSecurity | mediumSecurity | lowSecurity
- secureCommunication: secured | unsecured, with SSL Protocol | TLS Protocol as protocol alternatives
- channelType: LAN | Internet | PAN
- internetConnection: Wireless | DSL | Power-line
- externalDeviceType: disk | monitor, with USB | DVD | CD | Hard Disk as device alternatives
- PlatformChoice: .NET | CORBA | Enterprise JavaBeans | COM
- dataCompression: compressed | uncompressed]

Performance completion feature model

Performance completions close the gap between the high-level design model and its different implementations, by introducing details of the execution environment/platform in the product model.


SFM 2012: MDE slide 84

PC-feature | Affected performance attribute | MARTE stereotype | MARTE attribute
secureCommunication | communication overhead | PaCommStep | commRcvOverhead, commTxOverhead
channelType | channel capacity, channel latency | GaCommHost | capacity, blockT
dataCompression | message size, communication overhead | PaCommStep | msgSize, commRcvOverhead, commTxOverhead
messageType | communication overhead | PaCommStep | commTxOverhead

Mapping PC-features to MARTE
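A small Python sketch of how such a mapping table can be used: the dictionary entries mirror the table above, while the overhead values attached to the secured/SSL alternative are invented placeholders, not measured numbers.

# Sketch: turn a PC-feature choice into concrete MARTE attribute values.
PC_FEATURE_TO_MARTE = {
    # pc-feature            (stereotype,   affected attributes)
    "secureCommunication": ("PaCommStep", ["commTxOverhead", "commRcvOverhead"]),
    "channelType":         ("GaCommHost", ["capacity", "blockT"]),
    "dataCompression":     ("PaCommStep", ["msgSize", "commTxOverhead", "commRcvOverhead"]),
}

# assumed platform library of values per selected alternative (placeholders)
PLATFORM_VALUES = {
    ("secureCommunication", "secured/SSL"): {"commTxOverhead": "(1.2, ms)",
                                             "commRcvOverhead": "(0.9, ms)"},
}

def completion_annotations(pc_feature, alternative):
    stereotype, attrs = PC_FEATURE_TO_MARTE[pc_feature]
    values = PLATFORM_VALUES.get((pc_feature, alternative), {})
    return stereotype, {a: values.get(a, "TBD") for a in attrs}

print(completion_annotations("secureCommunication", "secured/SSL"))
# ('PaCommStep', {'commTxOverhead': '(1.2, ms)', 'commRcvOverhead': '(0.9, ms)'})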


SFM 2012: MDE slide 85

[Figure: the same toolchain as on slide 81, repeated here to highlight where M2MT: Generate Parameter Spreadsheet fits: it takes the Product Model with Generic Annotations and the PC-Feature model as input and produces the Parameter Spreadsheet handed to the user.]

Generate parameter spreadsheet - context


SFM 2012: MDE slide 86

Multi-step transformation based on: Hugo Brunelière, “ATL Transformation Example: Microsoft Office Excel Extractor”, from the Eclipse ATL website.

Generate parameter spreadsheet - details


SFM 2012: MDE slide 87

Generated Spreadsheet Example


SFM 2012: MDE slide 88

Message and its context


SFM 2012: MDE slide 89

Mapping PC-features to MARTE


SFM 2012: MDE slide 90

[Spreadsheet screenshot; one column provides a “Guideline for Value” for each parameter.]

Guidelines for choosing concrete values


SFM 2012: MDE slide 91

Spreadsheet with the user input


SFM 2012: MDE slide 92

«optional» «PaRunTInstance» {instance=$ProDis, host=$ProDisNode} :ProductDisplay

Using attribute “host” for allocation
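A minimal sketch (with invented node names) of how the host attribute supports software-to-hardware allocation: the list of available hosts is collected from the deployment, and the user's choice binds each generic host parameter to one of them.

available_hosts = ["CustNode", "AppServerNode", "DBServerNode"]   # collected automatically

runtime_instances = {
    # instance parameter -> host parameter (still generic)
    "$ProDis": "$ProDisNode",
    "$ProDB":  "$ProDBNode",
}

# the user picks a concrete host for each generic host parameter
allocation = {"$ProDisNode": "AppServerNode", "$ProDBNode": "DBServerNode"}

for instance, host_param in runtime_instances.items():
    host = allocation[host_param]
    assert host in available_hosts       # only deployed nodes are legal choices
    print(f"{instance} allocated to {host}")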


SFM 2012: MDE slide 93

[Figure: the same toolchain as on slide 81, repeated here to highlight the M2MT: Perform Binding step: it combines the Product Model with Generic Annotations and the Concrete Annotations Spreadsheet to produce the UML+MARTE Product Model passed on to the PUMA transformation.]

Perform Binding - transformation context


SFM 2012: MDE slide 94

[Transformation chain, reorganized. Starting from the Concrete Annotations Spreadsheet:
- M2MT (a): Generate XML Model → XML Model
- M2MT (b): Generate XML model with required syntax → XML Model with required syntax
- M2MT (c): Generate XML File → XML File with required syntax
- M2MT (d): Perform Binding, which combines the XML file of concrete values with the Product Model with Generic Annotations and the product deployment to produce the UML+MARTE Product Model.]

Perform Binding - details
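A much-simplified stand-in for steps (a)–(d), assuming the concrete values have already been extracted into a Python dictionary: the binding itself boils down to substituting each $-parameter occurring in a MARTE annotation with the value supplied by the user.

import re

concrete_values = {      # normally read from the XML file with required syntax
    "$GetLD": "12", "$ReqT": "500", "$GetL": "4", "$GetLSend": "0.3",
}

def bind(annotation: str, values: dict) -> str:
    # leave unknown parameters untouched so the model stays generic for them
    return re.sub(r"\$\w+", lambda m: values.get(m.group(0), m.group(0)), annotation)

generic = "hostDemand=($GetLD, ms), respT=($ReqT, ms, calc), msgSize=($GetL*0.2, KB)"
print(bind(generic, concrete_values))
# hostDemand=(12, ms), respT=(500, ms, calc), msgSize=(4*0.2, KB)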


SFM 2012: MDE slide 95

Conclusions

Integrating performance analysis within the model-driven development of service-oriented systems has many potential benefits:
- For service consumers: how to choose the “best” services available
- For service providers: how to design and configure their systems to optimize the use of resources and meet performance requirements
- For software developers: analyze design and configuration alternatives, evaluate tradeoffs
- For performance analysts: automate the generation of Pmodels from Smodels to keep them in sync, and reuse platform performance annotations

Benefits of integrating performance analysis in the early phases of the SPL development process:
- Reusability applied to performance annotations
- Annotate the SPL model once with generic performance annotations instead of starting from scratch for every product
- A user-friendly approach for handling a large number of generic performance annotations.


SFM 2012: MDE slide 96

Challenges (1)

Human qualifications
- Software developers are not trained in all the formalisms used for the analysis of non-functional properties (NFPs)
- Need to hide the analysis from developers, yet the software models have to be annotated with extra information for each NFP
- Who interprets the analysis results and gives feedback to developers for changing the software?
Abstraction level
- Different NFPs may require source software models at different levels of abstraction/detail
- How to keep all the models consistent?
Tool interoperability
- It is difficult to integrate so many different tools
- Some tools run on different platforms


SFM 2012: MDE slide 97

Challenges (2)

Integrate NFP analysis in the software development process
- For each NFP, explore the state space for different design alternatives, configurations, workload parameters, etc.
- In what order should the NFPs be evaluated? Is there a leading NFP?
Impact of software model changes on the NFP analysis
- Propagate changes throughout the transformation chain
- Use incremental transformation instead of starting from scratch after every change
A lot more to do, theoretically and practically
- Merge performance modeling and measurements: use runtime monitoring data for better performance models; use performance models to support runtime changes (autonomic systems)
- Apply variability modeling to service-oriented systems: manage runtime changes, adapt to context
- Provide ideas and background for building better tools.