
Page 1: From UML to Performance Models: High Level View

Dorina C. Petriu, Gordon Gu

Carleton University

- well-formed annotated UML models
- introduction to LQN
- high-level view of the transformation

www.sce.carleton.ca/rads/puma/

Page 2: Well-formed annotated UML model

- key use cases described by representative scenarios
  - frequently executed, have performance constraints
- resources used by each scenario
  - resource types: active or passive, physical or logical, hardware or software
  - examples: processor, disk, process, software server, lock, buffer
- quantitative resource demands must be given for each scenario step
  - how much, how many times?
- workload intensity for each scenario
  - open workload: arrival rate of requests for the scenario
  - closed workload: number of simultaneous users
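These requirements can be made concrete with a small data sketch. The Python records below (hypothetical names, ours rather than the SPT profile's or the PUMA tools') show the kind of information a well-formed scenario must state: its workload intensity and the demand of each step.

```python
from dataclasses import dataclass, field
from typing import List, Union

# Hypothetical records (names are ours) for the information
# a well-formed annotated scenario must carry.

@dataclass
class OpenWorkload:
    arrival_rate_per_sec: float      # open workload: arrival rate of requests

@dataclass
class ClosedWorkload:
    population: int                  # closed workload: number of simultaneous users

@dataclass
class Step:
    name: str
    host_demand_ms: float            # how much processing per execution
    repetitions: float = 1.0         # how many times the step is executed

@dataclass
class Scenario:
    name: str
    workload: Union[OpenWorkload, ClosedWorkload]
    steps: List[Step] = field(default_factory=list)

# Example: a closed workload of 50 users driving a two-step scenario.
retrieve = Scenario("RetrieveDocument", ClosedWorkload(population=50),
                    [Step("parse request", host_demand_ms=1.5),
                     Step("read from disk", host_demand_ms=2.0, repetitions=3)])
```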

Page 3: Software architecture and deployment

[Figure: UML software architecture and deployment. Components Client and Server with objects DEclient and DEserver (both <<PAresource>>, multiplicities 1..n and 1..k); the server side contains Retrieve and SDiskIO. <<GRMdeploy>> associations deploy the software onto the <<PAhost>> nodes ClientCPU and ServerCPU, which are connected by an Ethernet <<PAresource>> and use the disk Sdisk (<<PAresource>>).]

Page 4: Scenario with performance annotations

[Figure: activity diagram with swimlanes Client, RetrieveT and SDiskIOT. Steps: request document, wait_S, accept request, read request, parse request, get document, read from disk, send document, update logfile, write to logfile, wait_D, recycle thread, receive document. The steps carry SPT performance annotations, reconstructed below:]

<<PAclosedLoad>> {PApopulation = $Nusers}
<<PAstep>> {PAdemand=('asmd','mean',(0.5,'ms')), PAextOp=('net1',1), PArespTime=(('req','mean',(1,'sec')),('pred','mean',$RespT))}
<<PAstep>> {PAdemand=('msrd','mean',(220/$cpuS,'ms'))}
<<PAstep>> {PAdemand=('msrd','mean',(1.30 + 130/$cpuS,'ms'))}
<<PAstep>> {PAdemand=('asmd','mean',(1.5,'ms'))}
<<PAstep>> {PAdemand=('msrd','mean',(35/$cpuS,'ms'))}
<<PAstep>> {PAdemand=('msrd','mean',(25/$cpuS,'ms'))}
<<PAstep>> {PAdemand=('msrd','mean',($cdS,'ms')), PAextOp=('readDisk',$DocS)}
<<PAstep>> {PAdemand=('msrd','mean',(0.70,'ms')), PAextOp=('writeDisk',$RP)}
<<PAstep>> {PAdemand=('msrd','mean',(170/$cpuS,'ms'))}
<<PAstep>> {PAdemand=('msrd','mean',($scdC/$cpuS,'ms')), PAextOp=('net2',$DocS)}
<<PAstep>> {PAdemand=('msrd','mean',($gcdC/$cpuS,'ms'))}
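The annotations above are parameterized by $-variables (e.g. $cpuS, $Nusers, $DocS), so one annotated model can drive many experiments. Below is a minimal sketch of how the mean-demand expressions could be bound to concrete values for a single run; the variable meanings are our assumptions and the evaluation code is ours, not part of the PUMA tool chain.

```python
# Assumed meanings for the $-variables used on this slide; the numbers are
# illustrative bindings for one experiment, not values from the slides.
params = {
    "cpuS": 2.0,     # relative server CPU speed factor (assumed)
    "Nusers": 100,   # closed-workload population
    "DocS": 8,       # document size passed to readDisk / net2 (assumed)
    "cdS": 5.0,      # get-document demand in ms (assumed)
    "scdC": 200.0,   # send-document CPU cost (assumed)
    "gcdC": 150.0,   # get-document CPU cost (assumed)
    "RP": 1,         # writeDisk argument (assumed)
}

def demand_ms(expr: str, p: dict) -> float:
    """Evaluate a PAdemand mean expression such as '220/$cpuS' or '1.30 + 130/$cpuS'."""
    return eval(expr.replace("$", ""), {}, dict(p))

print(demand_ms("220/$cpuS", params))         # -> 110.0 ms
print(demand_ms("1.30 + 130/$cpuS", params))  # -> 66.3 ms
```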

Page 5: Layered Queueing Network (LQN) model

http://www.sce.carleton.ca/rads/lqn/lqn-documentation

Advantages of LQN modeling:
- models software tasks (rectangles) and hardware devices (circles)
- represents nested services (a server is also a client to other servers)
- software components have entries corresponding to different services
- arcs represent service requests (synchronous and asynchronous)
- multi-servers are used to model components with internal concurrency

What can we get from the LQN solver:
- service time (mean, variance)
- waiting time
- probability of missing a deadline
- throughput
- utilization

[Figure: example LQN with tasks ClientT (entry clientE), DB (entries DBRead, DBWrite) and Disk (entries DKRead, DKWrite), running on the devices ClientCPU, DBCPU and DBDisk.]
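The concepts listed above can be illustrated with a small in-memory sketch of an LQN. This Python fragment mirrors the example in the figure (ClientT, DB, Disk and their devices); the class names and all demand/request numbers are ours, and this is not the LQNS input format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Entry:
    name: str
    demand_ms: float = 0.0                                  # host demand (placeholder values)
    calls: Dict[str, float] = field(default_factory=dict)   # target entry -> mean requests per invocation

@dataclass
class Task:
    name: str
    processor: str                                           # device the task runs on
    entries: List[Entry]
    multiplicity: int = 1                                    # >1: multi-server (internal concurrency)

# The example LQN from the figure: a client task calling a database task,
# which in turn calls a disk task (nested services).
lqn = [
    Task("ClientT", "ClientCPU", [Entry("clientE", 2.0, {"DBRead": 1.0, "DBWrite": 0.2})]),
    Task("DB",      "DBCPU",     [Entry("DBRead", 1.0, {"DKRead": 1.0}),
                                  Entry("DBWrite", 1.5, {"DKWrite": 1.0})]),
    Task("Disk",    "DBDisk",    [Entry("DKRead", 5.0), Entry("DKWrite", 5.0)]),
]
```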

Page 6: UML -> LQN Transformations: Mapping the structure

[Figure: six structural mapping rules, numbered (1)-(6), from annotated UML elements to LQN elements: (1) a component Comp stereotyped <<PAresource>> with multiplicity 1..n; (2) an active object Active stereotyped <<PAresource>>; (3) a node XCPU stereotyped <<PAhost>>; (4) a device Ydisk stereotyped <<PAresource>>; (5), (6) <<GRMdeploy>> deployments of a component and of its threads (Thread) onto XCPU. On the LQN side the figure shows the tasks CompT and ThreadT, the processor XCPU and the device Ydisk.]
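A sketch of how these structural rules could be applied mechanically. The element records and output names are ours and this is only an illustrative reading of the figure; the actual PUMA transformation is richer than this.

```python
# Illustrative reading of rules (1)-(6); element records and names are ours.
def map_structure(elements):
    tasks, processors, devices = [], [], []
    for e in elements:
        st, kind = e["stereotype"], e["kind"]
        if st == "PAhost" and kind == "node":
            processors.append(e["name"])                 # rule (3): node -> LQN processor
        elif st == "PAresource" and kind == "device":
            devices.append(e["name"])                    # rule (4): device (e.g. Ydisk) -> LQN device
        elif st == "PAresource" and kind in ("component", "active-object", "thread-pool"):
            tasks.append({                               # rules (1), (2), (5), (6):
                "name": e["name"] + "T",                 # software resource -> LQN task
                "multiplicity": e.get("multiplicity", 1),  # 1..n -> multi-server task
                "processor": e.get("deployed_on"),       # <<GRMdeploy>> -> task's processor
            })
    return tasks, processors, devices

# Example input mirroring the figure's labels.
tasks, procs, devs = map_structure([
    {"stereotype": "PAresource", "kind": "component", "name": "Comp",
     "multiplicity": "1..n", "deployed_on": "XCPU"},
    {"stereotype": "PAhost", "kind": "node", "name": "XCPU"},
    {"stereotype": "PAresource", "kind": "device", "name": "Ydisk"},
])
```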

Page 7: UML->LQN Transformation: Mapping the Behavior

[Figure: two annotated User/WebServer scenarios and the LQN they map to. The client (Client task, on the Client CPU) works, issues a synchronous 'request service', waits for the reply, then continues; the server (Server task, on the Server CPU) serves the request, replies, and optionally completes the service afterwards. The client steps map to entry e1 [ph1] of the Client task; the server steps map to entry e2 of the Server task, with the work done before the reply in phase 1 (e2, ph1) and the optional work after the reply in phase 2 (e2, ph2).]
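The key behavioral idea in the figure is splitting the server's work around the reply: everything done before the reply becomes phase 1 of its entry, and the optional work after the reply becomes phase 2, which overlaps with the client that has already continued. Below is a minimal sketch of that split; the step names come from the figure, the helper is ours.

```python
def split_into_phases(steps, reply_step):
    """Phase 1: steps up to and including the one that sends the reply; phase 2: the rest."""
    i = steps.index(reply_step)
    return steps[: i + 1], steps[i + 1:]

server_steps = ["serve request and reply", "complete service (opt)"]
ph1, ph2 = split_into_phases(server_steps, "serve request and reply")
# ph1 -> entry e2, phase 1;  ph2 -> entry e2, phase 2 (done after the client continues)
```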

Page 8: Mapping software architecture and physical devices to LQN

a) Mapping software architecture to LQN tasks

[Figure: the software objects of page 3 (DEclient, and DEserver with Retrieve and SDiskIO) map to the LQN tasks DEclientT, RetrieveT and SDiskIOT.]

b) Mapping physical resources (processors and I/O devices) to LQN devices

[Figure: the deployment nodes ClientCPU and ServerCPU (<<PAhost>>) and the disk Sdisk (<<PAresource>>) map to the corresponding LQN devices; the Ethernet (<<PAresource>>) is treated on page 9.]

Page 9: Effect of communication network

[Figure: the LQN of page 8 (tasks DEclientT, RetrieveT, SDiskIOT on ClientCPU, ServerCPU and using Sdisk) extended with net1, net2, a dummy CPU and the Ethernet, which model the communication network between the client and server sides.]

Page 10: Groups of scenario steps to LQN entries

[Figure: the scenario steps (swimlanes DEclientT, RetrieveT, SDiskIOT; steps request document, wait_r, accept request, read request, parse request, get document, read from disk, send document, update logfile, write to logfile, wait_d) are grouped into LQN entries and phases: entry clientE phase 2, entry retrieveE phase 1, entry retrieveE phase 2, entry write phase 1, entry read phase 1. The resulting LQN contains DEclientT (clientE[ph2]) on the Client CPU, net1 (net1E) on the dummy CPU, RetrieveT (retrieveE[ph1,ph2]) on the server CPU, net2 (net2E), SDiskIOT (write[ph1], read[ph1]), and the devices Sdisk and Ethernet.]
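A sketch of the aggregation this slide implies: the demand of each LQN entry/phase is obtained by summing the demands of the scenario steps grouped into it. The step-to-group assignment below is our reading of the figure, and the per-step demands are placeholders rather than the annotated values from page 4.

```python
# Our reading of the grouping in the figure (entry/phase -> scenario steps).
groups = {
    ("clientE",   "ph2"): ["request document", "receive document"],
    ("retrieveE", "ph1"): ["accept request", "read request", "parse request",
                           "get document", "send document"],
    ("retrieveE", "ph2"): ["update logfile"],
    ("write",     "ph1"): ["write to logfile"],
    ("read",      "ph1"): ["read from disk"],
}

# Placeholder per-step demands (ms); in the real transformation these come
# from the PAdemand annotations of page 4.
step_demand_ms = {s: 1.0 for steps in groups.values() for s in steps}

entry_demand_ms = {(entry, phase): sum(step_demand_ms[s] for s in steps)
                   for (entry, phase), steps in groups.items()}
```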