Transcript of MIT6_262S11_chap06VIP

Page 1: MIT6_262S11_chap06VIP

8/13/2019 MIT6_262S11_chap06VIP

http://slidepdf.com/reader/full/mit6262s11chap06vip 1/53

Chapter 6

MARKOV PROCESSES WITH COUNTABLE STATE SPACES

6.1 Introduction

Recall that a Markov chain is a discrete-time process {X_n; n ≥ 0} for which the state at each time n ≥ 1 is an integer-valued random variable (rv) that is statistically dependent on X_0, ..., X_{n−1} only through X_{n−1}. A countable-state Markov process¹ (Markov process for short) is a generalization of a Markov chain in the sense that, along with the Markov chain {X_n; n ≥ 1}, there is a randomly-varying holding interval in each state which is exponentially distributed with a parameter determined by the current state.

To be more specific, let X_0 = i, X_1 = j, X_2 = k, ..., denote a sample path of the sequence of states in the Markov chain (henceforth called the embedded Markov chain). Then the holding interval U_n between the time that state X_{n−1} = ℓ is entered and X_n is entered is a nonnegative exponential rv with parameter ν_ℓ, i.e., for all u ≥ 0,

Pr{U_n ≤ u | X_{n−1} = ℓ} = 1 − exp(−ν_ℓ u). (6.1)

Furthermore, U_n, conditional on X_{n−1}, is jointly independent of X_m for all m ≠ n − 1 and of U_m for all m ≠ n.

If we visualize starting this process at time 0 in state X_0 = i, then the first transition of the embedded Markov chain enters state X_1 = j with the transition probability P_ij of the embedded chain. This transition occurs at time U_1, where U_1 is independent of X_1 and exponential with rate ν_i. Next, conditional on X_1 = j, the next transition enters state X_2 = k with the transition probability P_jk. This transition occurs after an interval U_2, i.e., at time U_1 + U_2, where U_2 is independent of X_2 and exponential with rate ν_j. Subsequent transitions occur similarly, with the new state, say X_n = i, determined from the old state, say X_{n−1} = ℓ, via P_{ℓi}, and the new holding interval U_n determined via the exponential rate ν_ℓ. Figure 6.1 illustrates the statistical dependencies between the rv's {X_n; n ≥ 0} and {U_n; n ≥ 1}.

¹These processes are often called continuous-time Markov chains.
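The embedded-chain construction above translates directly into a short simulation. The sketch below (Python; the function and variable names are ours, not from the text) generates a sample path of states {X_n}, holding intervals {U_n}, and transition epochs S_n = U_1 + ··· + U_n, using the M/M/1 rates of Example 6.1.1 as a concrete instance.

```python
import random

def sample_path(nu, next_states, x0, n_steps, rng):
    """Simulate (states, holding intervals, epochs) of a Markov process.

    nu(i)          -> holding rate nu_i in state i
    next_states(i) -> list of (j, P_ij) pairs for the embedded chain
    """
    X, U, S = [x0], [], [0.0]
    x = x0
    for _ in range(n_steps):
        u = rng.expovariate(nu(x))      # U_n is exponential with rate nu_i, per (6.1)
        U.append(u)
        S.append(S[-1] + u)             # S_n = S_{n-1} + U_n
        r, acc = rng.random(), 0.0
        for j, pij in next_states(x):   # choose X_n from the embedded chain
            acc += pij
            if r < acc:
                x = j
                break
        X.append(x)
    return X, U, S

# Concrete instance: M/M/1 with lambda = 1, mu = 2 (rates of Example 6.1.1).
lam, mu = 1.0, 2.0
nu = lambda i: lam if i == 0 else lam + mu
next_states = lambda i: [(1, 1.0)] if i == 0 else [
    (i - 1, mu / (lam + mu)), (i + 1, lam / (lam + mu))]
X, U, S = sample_path(nu, next_states, 0, 10, random.Random(1))
print(X)    # embedded-chain states, e.g. [0, 1, 0, ...]
print(S)    # strictly increasing transition epochs
```

Note that the state sequence and the holding intervals are generated separately, exactly as in the definition: the next state depends only on the current state, and each U_n depends only on the state being left.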


Figure 6.1: The statistical dependencies between the rv's of a Markov process. Each holding interval U_i, conditional on the current state X_{i−1}, is independent of all other states and holding intervals. (The figure shows the states X_0, X_1, X_2, X_3 with the holding intervals U_1, U_2, U_3, U_4 attached.)

The epochs at which successive transitions occur are denoted S_1, S_2, ..., so we have S_1 = U_1, S_2 = U_1 + U_2, and in general S_n = Σ_{m=1}^{n} U_m for n ≥ 1, with S_0 = 0. The state of a Markov process at any time t > 0 is denoted by X(t) and is given by

X(t) = X_n for S_n ≤ t < S_{n+1}, for each n ≥ 0.

This defines a stochastic process {X(t); t ≥ 0} in the sense that each sample point ω ∈ Ω maps to a sequence of sample values of {X_n; n ≥ 0} and {S_n; n ≥ 1}, and thus into a sample function of {X(t); t ≥ 0}. This stochastic process is what is usually referred to as a Markov process, but it is often simpler to view {X_n; n ≥ 0}, {S_n; n ≥ 1} as a characterization of the process. Figure 6.2 illustrates the relationship between all these quantities.

Figure 6.2: The relationship of the holding intervals {U_n; n ≥ 1} and the epochs {S_n; n ≥ 1} at which state changes occur. The state X(t) of the Markov process and the corresponding state of the embedded Markov chain are also illustrated. Note that if X_n = i, then X(t) = i for S_n ≤ t < S_{n+1}. (In the figure, X_0 = i with rate ν_i over U_1, X_1 = j with rate ν_j over U_2, and X_2 = k with rate ν_k over U_3.)

This can be summarized in the following definition.

Definition 6.1.1. A countable-state Markov process {X(t); t ≥ 0} is a stochastic process mapping each nonnegative real number t to the nonnegative integer-valued rv X(t) in such a way that for each t ≥ 0,

X(t) = X_n for S_n ≤ t < S_{n+1}; S_0 = 0; S_n = Σ_{m=1}^{n} U_m for n ≥ 1, (6.2)

where {X_n; n ≥ 0} is a Markov chain with a countably infinite or finite state space and each U_n, given X_{n−1} = i, is exponential with rate ν_i > 0 and is conditionally independent of all other U_m and X_m.

The tacit assumptions that the state space is the set of nonnegative integers and that the process starts at t = 0 are taken only for notational simplicity but will serve our needs here. We assume throughout this chapter (except in a few places where specified otherwise) that the embedded Markov chain has no self-transitions, i.e., P_ii = 0 for all states i. One


reason for this is that such transitions are invisible in {X(t); t ≥ 0}. Another is that with this assumption, the sample functions of {X(t); t ≥ 0} and the joint sample functions of {X_n; n ≥ 0} and {U_n; n ≥ 1} uniquely specify each other.

We are not interested for the moment in exploring the probability distribution of X(t) for given values of t, but one important feature of this distribution is that for any times t > τ > 0 and any states i, j,

Pr{X(t) = j | X(τ) = i, {X(s) = x(s); s < τ}} = Pr{X(t − τ) = j | X(0) = i}. (6.3)

This property arises because of the memoryless property of the exponential distribution. That is, if X(τ) = i, it makes no difference how long the process has been in state i before τ; the time to the next transition is still exponential with rate ν_i, and subsequent states and holding intervals are determined as if the process starts in state i at time 0. This will be seen more clearly in the following exposition. This property is the reason why these processes are called Markov, and is often taken as the defining property of Markov processes.

Example 6.1.1 (The M/M/1 queue). An M/M/1 queue has Poisson arrivals at a rate denoted by λ and has a single server with an exponential service distribution of rate µ > λ (see Figure 6.3). Successive service times are independent, both of each other and of arrivals. The state X(t) of the queue is the total number of customers either in the queue or in service. When X(t) = 0, the time to the next transition is the time until the next arrival, i.e., ν_0 = λ. When X(t) = i, i ≥ 1, the server is busy and the time to the next transition is the time until either a new arrival occurs or a departure occurs. Thus ν_i = λ + µ. For the embedded Markov chain, P_01 = 1 since only arrivals are possible in state 0, and they increase the state to 1. In the other states, P_{i,i−1} = µ/(λ+µ) and P_{i,i+1} = λ/(λ+µ).

Figure 6.3: The embedded Markov chain for an M/M/1 queue. Each node i is labeled with the corresponding rate ν_i of the exponentially distributed holding interval to the next transition (ν_0 = λ, ν_i = λ+µ for i ≥ 1). Each transition, say i to j, is labeled with the corresponding transition probability P_ij in the embedded Markov chain (P_01 = 1, P_{i,i+1} = λ/(λ+µ), P_{i,i−1} = µ/(λ+µ)).

The embedded Markov chain is a birth-death chain, and its steady-state probabilities can be calculated easily using (5.25). The result is

π_0 = (1 − ρ)/2 where ρ = λ/µ;   π_n = ((1 − ρ²)/2) ρ^{n−1} for n ≥ 1. (6.4)

Note that if λ << µ, then π_0 and π_1 are each close to 1/2 (i.e., the embedded chain mostly alternates between states 0 and 1, and higher-ordered states are rarely entered), whereas, because of the large holding interval in state 0, the process spends most of its time in state 0 waiting for arrivals. The steady-state probability π_i of state i in the embedded chain


is the long-term fraction of the total transitions that go to state i. We will shortly learn how to find the long-term fraction of time spent in state i as opposed to this fraction of transitions, but for now we return to the general study of Markov processes.

The evolution of a Markov process can be visualized in several ways. We have already looked at the first, in which for each state X_{n−1} = i in the embedded chain, the next state X_n is determined by the probabilities {P_ij; j ≥ 0} of the embedded Markov chain, and the holding interval U_n is independently determined by the exponential distribution with rate ν_i.

For a second viewpoint, suppose an independent Poisson process of rate ν_i > 0 is associated with each state i. When the Markov process enters a given state i, the next transition occurs at the next arrival epoch in the Poisson process for state i. At that epoch, a new state is chosen according to the transition probabilities P_ij. Since the choice of next state, given state i, is independent of the interval in state i, this view describes the same process as the first view.

For a third visualization, suppose, for each pair of states i and j, that an independent Poisson process of rate ν_i P_ij is associated with a possible transition to j conditional on being in i. When the Markov process enters a given state i, both the time of the next transition and the choice of the next state are determined by the set of i-to-j Poisson processes over all possible next states j. The transition occurs at the epoch of the first arrival, for the given i, to any of the i-to-j processes, and the next state is the j for which that first arrival occurred. Since such a collection of independent Poisson processes is equivalent to a single process of rate ν_i followed by an independent selection according to the transition probabilities P_ij, this view again describes the same process as the other views.

It is convenient in this third visualization to define the rate from any state i to any other state j as

q_ij = ν_i P_ij.

If we sum over j, we see that ν_i and P_ij are also uniquely determined by {q_ij; i, j ≥ 0} as

ν_i = Σ_j q_ij;   P_ij = q_ij/ν_i. (6.5)

This means that the fundamental characterization of the Markov process in terms of the P_ij and the ν_i can be replaced by a characterization in terms of the set of transition rates q_ij. In many cases, this is a more natural approach. For the M/M/1 queue, for example, q_{i,i+1} is simply the arrival rate λ. Similarly, q_{i,i−1} is the departure rate µ when there are customers to be served, i.e., when i > 0. Figure 6.4 shows Figure 6.3 incorporating this notational simplification.

Note that the interarrival density for the Poisson process, from any given state i to other state j, is given by q_ij exp(−q_ij x). On the other hand, given that the process is in state i, the probability density for the interval until the next transition, whether conditioned on the next state or not, is ν_i exp(−ν_i x) where ν_i = Σ_j q_ij. One might argue, incorrectly, that, conditional on the next transition being to state j, the time to that transition has density q_ij exp(−q_ij x). Exercise 6.1 uses an M/M/1 queue to provide a guided explanation of why this argument is incorrect.
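Both characterizations can be checked numerically. The sketch below (Python; our own illustration) builds the embedded chain of Example 6.1.1 with λ = 1 and µ = 2, truncated to N states (truncation is an assumption made for computability; the neglected tail is geometric), confirms the steady-state formula (6.4), and verifies the round trip between {ν_i, P_ij} and {q_ij} in (6.5).

```python
lam, mu = 1.0, 2.0
rho = lam / mu
N = 60                      # truncation level (assumed; tail mass ~ rho^N is negligible)
p_up, p_dn = lam / (lam + mu), mu / (lam + mu)

# Unnormalized stationary weights of the embedded birth-death chain from the
# cut equations: pi_0 * 1 = pi_1 * p_dn and pi_i * p_up = pi_{i+1} * p_dn.
w = [1.0, 1.0 / p_dn]
for i in range(1, N - 1):
    w.append(w[-1] * p_up / p_dn)
Z = sum(w)
pi = [x / Z for x in w]

# Closed form (6.4): pi_0 = (1 - rho)/2 and pi_n = ((1 - rho^2)/2) rho^(n-1).
assert abs(pi[0] - (1 - rho) / 2) < 1e-9
assert all(abs(pi[n] - (1 - rho**2) / 2 * rho**(n - 1)) < 1e-9 for n in range(1, 10))

# Rates q_ij = nu_i P_ij, and the inverse map nu_i = sum_j q_ij of (6.5).
nu = [lam] + [lam + mu] * (N - 1)
q = {(0, 1): nu[0] * 1.0}
for i in range(1, N):
    q[(i, i - 1)] = nu[i] * p_dn
    if i < N - 1:
        q[(i, i + 1)] = nu[i] * p_up
nu_back = lambda i: sum(rate for (a, _), rate in q.items() if a == i)
assert abs(nu_back(0) - lam) < 1e-9 and abs(nu_back(5) - (lam + mu)) < 1e-9
print([round(x, 4) for x in pi[:4]])    # 0.25, 0.375, 0.1875, ...
```

Note how the rates come out as the text says: q_{i,i+1} = (λ+µ)·λ/(λ+µ) = λ and q_{i,i−1} = µ, so the q_ij description of the M/M/1 queue needs no mention of the embedded chain.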


Figure 6.4: The Markov process for an M/M/1 queue. Each transition (i, j) is labelled with the corresponding transition rate q_ij (λ for each upward transition, µ for each downward transition).

6.1.1 The sampled-time approximation to a Markov process

As yet another way to visualize a Markov process, consider approximating the process by viewing it only at times separated by a given increment size δ. The Poisson processes above are then approximated by Bernoulli processes where the transition probability from i to j in the sampled-time chain is defined to be q_ij δ for all j ≠ i.

The Markov process is then approximated by a Markov chain. Since each δq_ij decreases with decreasing δ, there is an increasing probability of no transition out of any given state in the time increment δ. These must be modeled with self-transition probabilities, say P_ii(δ), which must satisfy

P_ii(δ) = 1 − Σ_j q_ij δ = 1 − ν_i δ for each i ≥ 0.

This is illustrated in Figure 6.5 for the M/M/1 queue. Recall that this sampled-time M/M/1 Markov chain was analyzed in Section 5.4 and the steady-state probabilities were shown to be

p_i(δ) = (1 − ρ)ρ^i for all i ≥ 0, where ρ = λ/µ. (6.6)

We have denoted the steady-state probabilities here by p_i(δ) to avoid confusion with the steady-state probabilities for the embedded chain. As discussed later, the steady-state probabilities in (6.6) do not depend on δ, so long as δ is small enough that the self-transition probabilities are nonnegative.

Figure 6.5: Approximating an M/M/1 queue by a sampled-time Markov chain. Each upward transition has probability λδ, each downward transition has probability µδ, and the self-transition probabilities are 1 − λδ in state 0 and 1 − (λ+µ)δ in each state i ≥ 1.

This sampled-time approximation is an approximation in two ways. First, transitions occur only at integer multiples of the increment δ, and second, q_ij δ is an approximation to Pr{X(δ) = j | X(0) = i}. From (6.3), Pr{X(δ) = j | X(0) = i} = q_ij δ + o(δ), so this second approximation is increasingly good as δ → 0.

As observed above, the steady-state probabilities for the sampled-time approximation to an M/M/1 queue do not depend on δ. As seen later, whenever the embedded chain is positive


recurrent and a sampled-time approximation exists for a Markov process, then the steady-state probability of each state i is independent of δ and represents the limiting fraction of time spent in state i with probability 1.

Figure 6.6 illustrates the sampled-time approximation of a generic Markov process. Note that P_ii(δ) is equal to 1 − δν_i for each i in any such approximation, and thus it is necessary for δ to be small enough to satisfy ν_i δ ≤ 1 for all i. For a finite state space, this is satisfied for any δ ≤ [max_i ν_i]^{−1}. For a countably infinite state space, however, the sampled-time approximation requires the existence of some finite B such that ν_i ≤ B for all i. The consequences of having no such bound are explored in the next section.

Figure 6.6: Approximating a generic Markov process by its sampled-time Markov chain. Each transition rate q_ij becomes a transition probability q_ij δ, and each state i acquires a self-transition probability 1 − δν_i. (In the three-state example shown, ν_1 = q_12 + q_13, ν_2 = q_23, and ν_3 = q_31.)

6.2 Steady-state behavior of irreducible Markov processes

Definition 6.2.1 (Irreducible Markov processes). An irreducible Markov process is a Markov process for which the embedded Markov chain is irreducible (i.e., all states are in the same class).

The analysis in this chapter is restricted almost entirely to irreducible Markov processes. The reason for this restriction is not that Markov processes with multiple classes of states are unimportant, but rather that they can usually be best understood by first looking at the embedded Markov chains for the various classes making up that overall chain.

We will ask the same types of steady-state questions for Markov processes as we asked about Markov chains. In particular, under what conditions is there a set of steady-state probabilities, p_0, p_1, ..., with the property that for any given starting state i, the limiting fraction of time spent in any given state j is p_j with probability 1? Do these probabilities also have the property that p_j = lim_{t→∞} Pr{X(t) = j | X_0 = i}?

We will find that simply having a positive-recurrent embedded Markov chain is not quite enough to ensure that such a set of probabilities exists. It is also necessary for the embedded-chain steady-state probabilities {π_i; i ≥ 0} and the holding-interval parameters {ν_i; i ≥ 0} to satisfy Σ_i π_i/ν_i < ∞. We will interpret this latter condition as asserting that the limiting long-term rate at which transitions occur must be strictly positive. Finally we will show that when these conditions are satisfied, the steady-state probabilities for the process are


related to those of the embedded chain by

p_j = (π_j/ν_j) / (Σ_k π_k/ν_k). (6.7)

Definition 6.2.2. The steady-state process probabilities, p_0, p_1, ..., for a Markov process are a set of numbers satisfying (6.7), where {π_i; i ≥ 0} and {ν_i; i ≥ 0} are the steady-state probabilities for the embedded chain and the holding-interval rates respectively.

As one might guess, the appropriate approach to answering these questions comes from applying renewal theory to various renewal processes associated with the Markov process. Many of the needed results for this have already been developed in looking at the steady-state behavior of countable-state Markov chains.

We start with a very technical lemma that will perhaps appear obvious, and the reader is welcome to ignore the proof until perhaps questioning the issue later. The lemma is not restricted to irreducible processes, although we only use it in that case.

Lemma 6.2.1. Consider a Markov process for which the embedded chain starts in some given state i. Then the holding time intervals, U_1, U_2, ..., are all rv's. Let M_i(t) be the number of transitions made by the process up to and including time t. Then with probability 1 (WP1),

lim_{t→∞} M_i(t) = ∞. (6.8)

Proof: The first holding interval U_1 is exponential with rate ν_i > 0, so it is clearly a rv (i.e., not defective). In general, the state after the (n−1)th transition has the PMF P_{ij}^{n−1}, so the complementary distribution function of U_n is

Pr{U_n > u} = lim_{k→∞} Σ_{j=1}^{k} P_{ij}^{n−1} exp(−ν_j u)

            ≤ Σ_{j=1}^{k} P_{ij}^{n−1} exp(−ν_j u) + Σ_{j=k+1}^{∞} P_{ij}^{n−1}   for every k.

For each k, the first sum above approaches 0 with increasing u and the second sum approaches 0 with increasing k, so the limit as u → ∞ must be 0 and U_n is a rv.

It follows that each S_n = U_1 + ··· + U_n is also a rv. Now {S_n; n ≥ 1} is the sequence of arrival epochs in an arrival process, so we have the set equality {S_n ≤ t} = {M_i(t) ≥ n} for each choice of n. Since S_n is a rv, we have lim_{t→∞} Pr{S_n ≤ t} = 1 for each n. Thus lim_{t→∞} Pr{M_i(t) ≥ n} = 1 for all n. This means that the set of sample points ω for which lim_{t→∞} M_i(t, ω) < n has probability 0 for all n, and thus lim_{t→∞} M_i(t, ω) = ∞ WP1.

6.2.1 Renewals on successive entries to a given state

For an irreducible Markov process with X_0 = i, let M_ij(t) be the number of transitions into state j over the interval (0, t]. We want to find when this is a delayed renewal counting


process. It is clear that the sequence of epochs at which state j is entered form renewal points, since they form renewal points in the embedded Markov chain and the holding intervals between transitions depend only on the current state. The questions are whether the first entry to state j must occur within some finite time, and whether recurrences to j must occur within finite time. The following lemma answers these questions for the case where the embedded chain is recurrent (either positive recurrent or null recurrent).

Lemma 6.2.2. Consider a Markov process with an irreducible recurrent embedded chain {X_n; n ≥ 0}. Given X_0 = i, let {M_ij(t); t ≥ 0} be the number of transitions into a given state j in the interval (0, t]. Then {M_ij(t); t ≥ 0} is a delayed renewal counting process (or an ordinary renewal counting process if j = i).

Proof: Given X_0 = i, let N_ij(n) be the number of transitions into state j that occur in the embedded Markov chain by the nth transition of the embedded chain. From Lemma 5.1.4, {N_ij(n); n ≥ 0} is a delayed renewal process, so from Lemma 4.8.2, lim_{n→∞} N_ij(n) = ∞

with probability 1. Note that M_ij(t) = N_ij(M_i(t)), where M_i(t) is the total number of state transitions (between all states) in the interval (0, t]. Thus, with probability 1,

lim_{t→∞} M_ij(t) = lim_{t→∞} N_ij(M_i(t)) = lim_{n→∞} N_ij(n) = ∞,

where we have used Lemma 6.2.1, which asserts that lim_{t→∞} M_i(t) = ∞ with probability 1.

It follows that the interval W_1 until the first transition to state j, and the subsequent interval W_2 until the next transition to state j, are both finite with probability 1. Subsequent intervals have the same distribution as W_2, and all intervals are independent, so {M_ij(t); t ≥ 0} is a delayed renewal process with inter-renewal intervals {W_k; k ≥ 1}. If i = j, then all W_k are identically distributed and we have an ordinary renewal process, completing the proof.

The inter-renewal intervals W_2, W_3, ... for {M_ij(t); t ≥ 0} above are well-defined nonnegative IID rv's whose distribution depends on j but not i. They either have an expectation as a finite number or have an infinite expectation. In either case, this expectation is denoted W(j). This is the mean time between successive entries to state j, and we will see later that in some cases this mean time can be infinite.

6.2.2 The limiting fraction of time in each state

In order to study the fraction of time spent in state j, we define a delayed renewal-reward process, based on {M_ij(t); t ≥ 0}, for which unit reward is accumulated whenever the process is in state j. That is (given X_0 = i), R_ij(t) = 1 when X(t) = j and R_ij(t) = 0 otherwise (see Figure 6.7). Given that transition n−1 of the embedded chain enters state j, the interval U_n is exponential with rate ν_j, so E[U_n | X_{n−1} = j] = 1/ν_j.

Let p_j(i) be the limiting time-average fraction of time spent in state j. We will see later that such a limit exists WP1, that the limit does not depend on i, and that it is equal to


Figure 6.7: The delayed renewal-reward process {R_ij(t); t ≥ 0} for time in state j. The reward is one whenever the process is in state j, i.e., R_ij(t) = I_{X(t)=j}. A renewal occurs on each entry to state j, so the reward starts at each such entry and continues until a state transition, assumed to enter a state other than j. The reward then ceases until the next renewal, i.e., the next entry to state j. The figure illustrates the kth inter-renewal interval, of duration W_k, which is assumed to start on the n−1st state transition. The expected interval over which a reward is accumulated is U(j) = 1/ν_j and the expected duration of the inter-renewal interval is W(j).

the steady-state probability p_j in (6.7). Since U(j) = 1/ν_j, Theorems 4.4.1 and 4.8.4, for ordinary and delayed renewal-reward processes respectively, state that²

p_j(i) = lim_{t→∞} (1/t) ∫_0^t R_ij(τ) dτ   WP1 (6.9)

       = U(j)/W(j) = 1/(ν_j W(j)). (6.10)

This shows that the limiting time average, p_j(i), exists with probability 1 and is independent of the starting state i. We show later that it is the steady-state process probability given by (6.7). We can also investigate the limit, as t → ∞, of the probability that X(t) = j. This is equal to lim_{t→∞} E[R(t)] for the renewal-reward process above. Because of the exponential holding intervals, the inter-renewal times are non-arithmetic, and from Blackwell's theorem, in the form of (4.105),

lim_{t→∞} Pr{X(t) = j} = 1/(ν_j W(j)) = p_j(i). (6.11)

We summarize these results in the following lemma.

Lemma 6.2.3. Consider an irreducible Markov process with a recurrent embedded Markov chain starting in X_0 = i. Then with probability 1, the limiting time average in state j is given by p_j(i) = 1/(ν_j W(j)). This is also the limit, as t → ∞, of Pr{X(t) = j}.
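Lemma 6.2.3 can be illustrated by simulation. The sketch below (Python; our own illustration, with a fixed seed) runs the M/M/1 process of Example 6.1.1 with λ = 1 and µ = 2, estimates the time-average fraction of time in state 0 and the mean inter-renewal time W(0), and checks that the fraction agrees with 1/(ν_0 W(0)); both should be near 1 − ρ = 1/2.

```python
import random

lam, mu = 1.0, 2.0
rng = random.Random(42)

x, total_t = 0, 0.0
time_in_0, entries_to_0 = 0.0, 0
for _ in range(400_000):
    nu_x = lam if x == 0 else lam + mu
    u = rng.expovariate(nu_x)          # holding interval in the current state
    total_t += u
    if x == 0:
        time_in_0 += u
        x = 1                           # P_01 = 1
    else:
        x += 1 if rng.random() < lam / (lam + mu) else -1
        if x == 0:
            entries_to_0 += 1           # a renewal: another entry to state 0

frac = time_in_0 / total_t              # time-average fraction of time in state 0
W0 = total_t / entries_to_0             # estimate of the inter-renewal mean W(0)
print(round(frac, 3), round(1 / (lam * W0), 3))   # both should be near 0.5
```

The two estimates are computed from different renewal quantities (accumulated reward versus renewal count), so their agreement is a check of (6.10) rather than a tautology.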

6.2.3 Finding {p_j(i); j ≥ 0} in terms of {π_j; j ≥ 0}

Next we must express the mean inter-renewal time, W(j), in terms of more accessible quantities that allow us to show that p_j(i) = p_j, where p_j is the steady-state process probability

²Theorems 4.4.1 and 4.8.4 do not cover the case where W(j) = ∞, but, since the expected reward per renewal interval is finite, it is not hard to verify (6.9) in that special case.


of Definition 6.2.2. We now assume that the embedded chain is not only recurrent but also positive recurrent with steady-state probabilities {π_j; j ≥ 0}. We continue to assume a given starting state X_0 = i. Applying the strong law for delayed renewal processes (Theorem 4.8.1) to the Markov process,

lim_{t→∞} M_ij(t)/t = 1/W(j)   WP1. (6.12)

As before, M_ij(t) = N_ij(M_i(t)). Since lim_{t→∞} M_i(t) = ∞ with probability 1,

lim_{t→∞} M_ij(t)/M_i(t) = lim_{t→∞} N_ij(M_i(t))/M_i(t) = lim_{n→∞} N_ij(n)/n = π_j   WP1. (6.13)

In the last step, we applied the same strong law to the embedded chain. Combining (6.12) and (6.13), the following equalities hold with probability 1:

1/W(j) = lim_{t→∞} M_ij(t)/t
       = lim_{t→∞} [M_ij(t)/M_i(t)] [M_i(t)/t]
       = π_j lim_{t→∞} M_i(t)/t. (6.14)

This tells us that W(j)π_j is the same for all j. Also, since π_j > 0 for a positive-recurrent chain, it tells us that if W(j) < ∞ for one state j, it is finite for all states. Also these expected recurrence times are finite if and only if lim_{t→∞} M_i(t)/t > 0. Finally, it says implicitly that lim_{t→∞} M_i(t)/t exists WP1 and has the same value for all starting states i.

There is relatively little left to do, and the following theorem does most of it.

Theorem 6.2.1. Consider an irreducible Markov process with a positive-recurrent embedded Markov chain. Let {π_j; j ≥ 0} be the steady-state probabilities of the embedded chain and let X_0 = i be the starting state. Then, with probability 1, the limiting time-average fraction of time spent in any arbitrary state j is the steady-state process probability in (6.7), i.e.,

p_j(i) = p_j = (π_j/ν_j) / (Σ_k π_k/ν_k). (6.15)

The expected time between returns to state j is

W(j) = (Σ_k π_k/ν_k) / π_j, (6.16)

and the limiting rate at which transitions take place is independent of the starting state and given by

lim_{t→∞} M_i(t)/t = 1 / (Σ_k π_k/ν_k)   WP1. (6.17)
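For the M/M/1 example, these formulas can be evaluated directly from (6.4). The sketch below (Python; our own illustration, with λ = 1, µ = 2 and truncation of the sums at N terms assumed, the neglected tails being geometric) computes p_j, W(j), and the limiting transition rate, and confirms that p_j reduces to (1 − ρ)ρ^j.

```python
lam, mu = 1.0, 2.0
rho = lam / mu
N = 200   # truncation (assumed); the summands decay geometrically

# Embedded-chain probabilities from (6.4) and holding rates nu_i.
pi = [(1 - rho) / 2] + [(1 - rho**2) / 2 * rho**(n - 1) for n in range(1, N)]
nu = [lam] + [lam + mu] * (N - 1)

beta = sum(pi[k] / nu[k] for k in range(N))     # sum_k pi_k / nu_k
p = [pi[j] / nu[j] / beta for j in range(N)]    # (6.15)
W = [beta / pi[j] for j in range(N)]            # (6.16)
rate = 1 / beta                                  # (6.17)

assert all(abs(p[j] - (1 - rho) * rho**j) < 1e-9 for j in range(10))
print(rate)        # limiting transitions per unit time (2*lam for this queue)
print(W[0], W[1])  # expected times between returns to states 0 and 1
```

With these numbers, Σ_k π_k/ν_k = 1/(2λ), so transitions occur at long-term rate 2λ: each arrival is eventually matched by a departure.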


Discussion: Recall that p_j(i) was defined as a time average WP1, and we saw earlier that this time average existed with a value independent of i. The theorem states that this time average (and the limiting ensemble average) is given by the steady-state process probabilities in (6.7). Thus, after the proof, we can stop distinguishing these quantities.

At a superficial level, the theorem is almost obvious from what we have done. In particular, substituting (6.14) into (6.9), we see that

p_j(i) = (π_j/ν_j) lim_{t→∞} M_i(t)/t   WP1. (6.18)

Since p_j(i) = lim_{t→∞} Pr{X(t) = j}, and since X(t) is in some state at all times, we would conjecture (and even insist if we didn't read on) that Σ_j p_j(i) = 1. Adding that condition to normalize (6.18), we get (6.15), and (6.16) and (6.17) follow immediately. The trouble is that if Σ_j π_j/ν_j = ∞, then (6.15) says that p_j = 0 for all j, and (6.17) says that lim M_i(t)/t = 0, i.e., the process 'gets tired' with increasing t and the rate of transitions decreases toward 0. The rather technical proof to follow deals with these limits more carefully.

Proof: We have seen in (6.14) that lim_{t→∞} M_i(t)/t is equal to a constant, say α, with probability 1 and that this constant is the same for all starting states i. We first consider the case where α > 0. In this case, from (6.14), W(j) < ∞ for all j. Choosing any given j and any positive integer ℓ, consider a renewal-reward process with renewals on transitions to j and a reward R_ij^ℓ(t) = 1 when X(t) ≤ ℓ. This reward is independent of j and equal to Σ_{k=1}^{ℓ} R_ik(t). Thus, from (6.9), we have

lim_{t→∞} (1/t) ∫_0^t R_ij^ℓ(τ) dτ = Σ_{k=1}^{ℓ} p_k(i). (6.19)

If we let E[R_j^ℓ] be the expected reward over a renewal interval, then, from Theorem 4.8.4,

lim_{t→∞} (1/t) ∫_0^t R_ij^ℓ(τ) dτ = E[R_j^ℓ] / W(j). (6.20)

Note that E[R_j^ℓ] above is non-decreasing in ℓ and goes to the limit W(j) as ℓ → ∞. Thus, combining (6.19) and (6.20), we see that

lim_{ℓ→∞} Σ_{k=1}^{ℓ} p_k(i) = 1.

With this added relation, (6.15), (6.16), and (6.17) follow as in the discussion. This completes the proof for the case where α > 0.

For the remaining case, where lim_{t→∞} M_i(t)/t = α = 0, (6.14) shows that W(j) = ∞ for all j and (6.18) then shows that p_j(i) = 0 for all j. We give a guided proof in Exercise 6.6 that, for α = 0, we must have Σ_i π_i/ν_i = ∞. It follows that (6.15), (6.16), and (6.17) are all satisfied.


This has been quite a difficult proof for something that might seem almost obvious for simple examples. However, the fact that these time averages are valid over all sample points with probability 1 is not obvious, and the fact that π_j W(j) is independent of j is certainly not obvious.

The most subtle thing here, however, is that if Σ_i π_i/ν_i = ∞, then p_j = 0 for all states j. This is strange because the time-average state probabilities do not add to 1, and also strange because the embedded Markov chain continues to make transitions, and these transitions, in steady state for the Markov chain, occur with the probabilities π_i. Example 6.2.1 and Exercise 6.3 give some insight into this. Some added insight can be gained by looking at the embedded Markov chain starting in steady state, i.e., with probabilities {π_i; i ≥ 0}. Given X_0 = i, the expected time to a transition is 1/ν_i, so the unconditional expected time to a transition is Σ_i π_i/ν_i, which is infinite for the case under consideration. This is not a phenomenon that can be easily understood intuitively, but Example 6.2.1 and Exercise 6.3 will help.

6.2.4 Solving for the steady-state process probabilities directly

Let us return to the case where Σ_k π_k/ν_k < ∞, which is the case of virtually all applications. We have seen that a Markov process can be specified in terms of the transition rates q_ij = ν_i P_ij, and it is useful to express the steady-state equations for p_j directly in terms of q_ij rather than indirectly in terms of the embedded chain. As a useful prelude to this, we first express the π_j in terms of the p_j. Denote Σ_k π_k/ν_k as β < ∞. Then, from (6.15), p_j = π_j/(ν_j β), so π_j = p_j ν_j β. Expressing this along with the normalization Σ_k π_k = 1, we obtain

π_i = p_i ν_i / (Σ_k p_k ν_k). (6.21)

Thus β = 1/(Σ_k p_k ν_k), so

Σ_k π_k/ν_k = 1/(Σ_k p_k ν_k). (6.22)

We can now substitute π_i as given by (6.21) into the steady-state equations for the embedded Markov chain, i.e., π_j = Σ_i π_i P_ij for all j, obtaining

p_j ν_j = Σ_i p_i ν_i P_ij

for each state j. Since ν_i P_ij = q_ij,

p_j ν_j = Σ_i p_i q_ij;   Σ_i p_i = 1. (6.23)

This set of equations is known as the steady-state equations for the Markov process. The normalization condition Σ_i p_i = 1 is a consequence of (6.22) and also of (6.15). Equation (6.23) has a nice interpretation in that the term on the left is the steady-state rate at which transitions occur out of state j and the term on the right is the rate at which transitions occur into state j. Since the total number of entries to j must differ by at most 1 from the exits from j for each sample path, this equation is not surprising.

The embedded chain is positive recurrent, so its steady-state equations have a unique solution with all π_i > 0. Thus (6.23) also has a unique solution with all p_i > 0 under the added condition that Σ_i π_i/ν_i < ∞. However, we would like to solve (6.23) directly without worrying about the embedded chain.

If we find a solution to (6.23), however, and if Σ_i p_i ν_i < ∞ in that solution, then the corresponding set of π_i from (6.21) must be the unique steady-state solution for the embedded chain. Thus the solution for p_i must be the corresponding steady-state solution for the Markov process. This is summarized in the following theorem.

Theorem 6.2.2. Assume an irreducible Markov process and let {p_i; i ≥ 0} be a solution to (6.23). If Σ_i p_i ν_i < ∞, then, first, that solution is unique, second, each p_i is positive, and third, the embedded Markov chain is positive recurrent with steady-state probabilities satisfying (6.21). Also, if the embedded chain is positive recurrent, and Σ_i π_i/ν_i < ∞, then the set of p_i satisfying (6.15) is the unique solution to (6.23).
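As an illustration of solving (6.23) directly, the sketch below (Python, stdlib only; our own code, with the state space truncated at N states as a computational assumption) sets up the rates q_ij of the M/M/1 queue with λ = 1 and µ = 2, solves the balance equations plus normalization by Gaussian elimination, and recovers both p_i = (1 − ρ)ρ^i and, via (6.21), the embedded-chain probabilities π_i.

```python
lam, mu = 1.0, 2.0
N = 30   # truncation (assumed); answers differ from the infinite chain by O(rho^N)

# Transition rates q_ij for the (truncated) M/M/1 process of Figure 6.4.
q = [[0.0] * N for _ in range(N)]
for i in range(N - 1):
    q[i][i + 1] = lam                    # arrival in state i
for i in range(1, N):
    q[i][i - 1] = mu                     # departure in state i
nu = [sum(row) for row in q]             # nu_i = sum_j q_ij, per (6.5)

# Linear system: rows 0..N-2 are p_j nu_j - sum_i p_i q_ij = 0 from (6.23);
# the last row enforces the normalization sum_i p_i = 1.
A = [[(nu[j] if i == j else 0.0) - q[i][j] for i in range(N)] for j in range(N)]
A[N - 1] = [1.0] * N
b = [0.0] * (N - 1) + [1.0]

# Plain Gaussian elimination with partial pivoting.
for c in range(N):
    piv = max(range(c, N), key=lambda r: abs(A[r][c]))
    A[c], A[piv], b[c], b[piv] = A[piv], A[c], b[piv], b[c]
    for r in range(c + 1, N):
        f = A[r][c] / A[c][c]
        for k in range(c, N):
            A[r][k] -= f * A[c][k]
        b[r] -= f * b[c]
p = [0.0] * N
for r in reversed(range(N)):
    p[r] = (b[r] - sum(A[r][k] * p[k] for k in range(r + 1, N))) / A[r][r]

Z = sum(p[k] * nu[k] for k in range(N))
pi = [p[i] * nu[i] / Z for i in range(N)]          # embedded chain, via (6.21)
print([round(x, 4) for x in p[:3]])    # near (1 - rho) rho^i: 0.5, 0.25, 0.125
print([round(x, 4) for x in pi[:2]])   # near 0.25 and 0.375, matching (6.4)
```

Note that the embedded chain never appears in the setup: only the rates q_ij are specified, and the π_i fall out afterwards through (6.21).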

6.2.5 The sampled-time approximation again

For an alternative view of the probabilities {p_i; i ≥ 0}, consider the special case (but the typical case) where the transition rates {ν_i; i ≥ 0} are bounded. Consider the sampled-time approximation to the process for a given increment size δ ≤ [max_i ν_i]^{-1} (see Figure 6.6). Let {p_i(δ); i ≥ 0} be the set of steady-state probabilities for the sampled-time chain, assuming that they exist. These steady-state probabilities satisfy

    p_j(δ) = Σ_{i≠j} p_i(δ) q_ij δ + p_j(δ)(1 − ν_j δ);    p_j(δ) ≥ 0;    Σ_j p_j(δ) = 1.    (6.24)

The first equation simplifies to p_j(δ) ν_j = Σ_{i≠j} p_i(δ) q_ij, which is the same as (6.23). It follows that the steady-state probabilities {p_i; i ≥ 0} for the process are the same as the steady-state probabilities {p_i(δ); i ≥ 0} for the sampled-time approximation. Note that this is not an approximation; p_i(δ) is exactly equal to p_i for all values of δ ≤ 1/sup_i ν_i. We shall see later that the dynamics of a Markov process are not quite so well modeled by the sampled-time approximation except in the limit δ → 0.
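The exact agreement between p_i(δ) and p_i can be illustrated numerically. A minimal added sketch (not from the text), reusing the hypothetical 2-state process with q_01 = 2, q_10 = 5 and an arbitrary admissible δ:

```python
# Check that the sampled-time chain has the same steady state as the process.
q = [[0.0, 2.0], [5.0, 0.0]]   # hypothetical transition rates, q_ii = 0
nu = [2.0, 5.0]                # nu_i = sum_j q_ij
delta = 0.1                    # delta <= 1/max_i nu_i = 0.2

# sampled-time transition matrix (the first equation of (6.24) in matrix form)
W = [[1.0 - delta * nu[0], delta * q[0][1]],
     [delta * q[1][0], 1.0 - delta * nu[1]]]

# power iteration p <- p W converges to the sampled-time steady state
p = [0.5, 0.5]
for _ in range(200):
    p = [p[0] * W[0][j] + p[1] * W[1][j] for j in range(2)]

# direct solution of (6.23): p_0 q_01 = p_1 q_10 with p_0 + p_1 = 1
p_direct = [5.0 / 7.0, 2.0 / 7.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(p, p_direct))
```

The same p results for any δ ≤ 0.2; only the convergence speed of the iteration changes.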

6.2.6 Pathological cases

Example 6.2.1 (Zero transition rate). Consider the Markov process with a positive-recurrent embedded chain in Figure 6.8. This models a variation of an M/M/1 queue in which the server becomes increasingly rattled and slow as the queue builds up, and the customers become almost equally discouraged about entering. The downward drift in the transitions is more than overcome by the slow-down in large-numbered states. Transitions continue to occur, but the number of transitions per unit time goes to 0 with increasing time. Although the embedded chain has a steady-state solution, the process cannot be viewed as having any sort of steady state. Exercise 6.3 gives some added insight into this type of situation.

Figure 6.8: The Markov process for a variation on M/M/1 where arrivals and services get slower with increasing state. Each node i has a rate ν_i = 2^{-i}. The embedded chain transition probabilities are P_{i,i+1} = 0.4 for i ≥ 1 and P_{i,i-1} = 0.6 for i ≥ 1, thus ensuring that the embedded Markov chain is positive recurrent. Note that q_{i,i+1} > q_{i+1,i}, thus ensuring that the Markov process drifts to the right.
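The divergence of Σ_i π_i/ν_i for this chain can be seen directly from the birth-death balance equations of the embedded chain. An added numerical sketch (not part of the text), using unnormalized π (normalization does not affect divergence):

```python
# Embedded-chain balance: pi_i P_{i,i+1} = pi_{i+1} P_{i+1,i}.
# From state 0 the only transition is upward, so P_01 = 1.
pi = [1.0, 1.0 / 0.6]              # pi_0 = 1 (unnormalized), pi_1 = pi_0 / 0.6
for i in range(1, 40):
    pi.append(pi[-1] * 0.4 / 0.6)  # geometric decay, ratio 2/3

partial = 0.0
sums = []
for i, x in enumerate(pi):
    partial += x / (2.0 ** -i)     # nu_i = 2^{-i}, so pi_i/nu_i grows like (4/3)^i
    sums.append(partial)

# pi_i is summable (the embedded chain is positive recurrent) ...
assert pi[-1] < 1e-6
# ... but the partial sums of pi_i/nu_i grow geometrically, so the sum diverges
assert sums[-1] > 1e4
```

Each π_i/ν_i term grows like (4/3)^i, so the expected holding times swamp the geometric decay of π_i, consistent with the claim that the process has no steady state.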

It is also possible for (6.23) to have a solution for {p_i; i ≥ 0} with Σ_i p_i = 1, but Σ_i p_i ν_i = ∞. This is not possible for a positive-recurrent embedded chain, but is possible both if the embedded Markov chain is transient and if it is null recurrent. A transient chain means that there is a positive probability that the embedded chain will never return to a state after leaving it, and thus there can be no sensible kind of steady-state behavior for the process. These processes are characterized by arbitrarily large transition rates from the various states, and these allow the process to transit through an infinite number of states in a finite time.

Processes for which there is a non-zero probability of passing through an infinite number of states in a finite time are called irregular. Exercises 6.8 and 6.7 give some insight into irregular processes. Exercise 6.9 gives an example of a process that is not irregular, but for which (6.23) has a solution with Σ_i p_i = 1 and the embedded Markov chain is null recurrent. We restrict our attention in what follows to irreducible Markov chains for which (6.23) has a solution, Σ_i p_i = 1, and Σ_i p_i ν_i < ∞. This is slightly more restrictive than necessary, but processes for which Σ_i p_i ν_i = ∞ (see Exercise 6.9) are not very robust.

6.3 The Kolmogorov differential equations

Let P_ij(t) be the probability that a Markov process {X(t); t ≥ 0} is in state j at time t given that X(0) = i,

    P_ij(t) = Pr{X(t) = j | X(0) = i}.    (6.25)

P_ij(t) is analogous to the nth-order transition probabilities P_ij^n for Markov chains. We have already seen that lim_{t→∞} P_ij(t) = p_j for the case where the embedded chain is positive recurrent and Σ_i π_i/ν_i < ∞. Here we want to find the transient behavior, and we start by deriving the Chapman-Kolmogorov equations for Markov processes. Let s and t be arbitrary times, 0 < s < t. By including the state at time s, we can rewrite (6.25) as

    P_ij(t) = Σ_k Pr{X(t) = j, X(s) = k | X(0) = i}
            = Σ_k Pr{X(s) = k | X(0) = i} Pr{X(t) = j | X(s) = k};    all i, j,    (6.26)

where we have used the Markov condition, (6.3). Given that X(s) = k, the residual time until the next transition after s is exponential with rate ν_k, and thus the process starting at time s in state k is statistically identical to that starting at time 0 in state k. Thus, for any s, 0 ≤ s ≤ t, we have

    Pr{X(t) = j | X(s) = k} = P_kj(t − s).

Substituting this into (6.26), we have the Chapman-Kolmogorov equations for a Markov process,

    P_ij(t) = Σ_k P_ik(s) P_kj(t − s).    (6.27)

These equations correspond to (3.8) for Markov chains. We now use these equations to derive two types of sets of differential equations for P_ij(t). The first are called the Kolmogorov forward differential equations, and the second the Kolmogorov backward differential equations. The forward equations are obtained by letting s approach t from below, and the backward equations are obtained by letting s approach 0 from above. First we derive the forward equations.

For t − s small and positive, P_kj(t − s) in (6.27) can be expressed as (t − s)q_kj + o(t − s) for k ≠ j. Similarly, P_jj(t − s) can be expressed as 1 − (t − s)ν_j + o(t − s). Thus (6.27) becomes

    P_ij(t) = Σ_{k≠j} [P_ik(s)(t − s)q_kj] + P_ij(s)[1 − (t − s)ν_j] + o(t − s).    (6.28)

We want to express this, in the limit s → t, as a differential equation. To do this, subtract P_ij(s) from both sides and divide by t − s:

    [P_ij(t) − P_ij(s)] / (t − s) = Σ_{k≠j} (P_ik(s) q_kj) − P_ij(s) ν_j + o(t − s)/(t − s).

Taking the limit as s → t from below,³ we get the Kolmogorov forward equations,

    dP_ij(t)/dt = Σ_{k≠j} (P_ik(t) q_kj) − P_ij(t) ν_j.    (6.29)

The first term on the right side of (6.29) is the rate at which transitions occur into state j at time t and the second term is the rate at which transitions occur out of state j. Thus the difference of these terms is the net rate at which transitions occur into j, which is the rate at which P_ij(t) is increasing at time t.

³We have assumed that the sum and the limit in (6.28) can be interchanged. This is certainly valid if the state space is finite, which is the only case we analyze in what follows.


Example 6.3.1 (A queueless M/M/1 queue). Consider the following 2-state Markov process where q_01 = λ and q_10 = µ. This can be viewed as a model for an M/M/1 queue with no storage for waiting customers. When the system is empty (state 0), memoryless customers arrive at rate λ, and when the server is busy, an exponential server operates at rate µ, with the system returning to state 0 when service is completed.

To find P_01(t), the probability of state 1 at time t conditional on state 0 at time 0, we use the Kolmogorov forward equations for P_01(t), getting

    dP_01(t)/dt = P_00(t) q_01 − P_01(t) ν_1 = P_00(t) λ − P_01(t) µ.

Using the fact that P_00(t) = 1 − P_01(t), this becomes

    dP_01(t)/dt = λ − P_01(t)(λ + µ).

Using the boundary condition P_01(0) = 0, the solution is

    P_01(t) = [λ/(λ + µ)] [1 − e^{−(λ+µ)t}].    (6.30)

Thus P_01(t) is 0 at t = 0 and increases as t → ∞ to its steady-state value in state 1, which is λ/(λ + µ).

In general, for any given starting state i in a Markov process with M states, (6.29) provides a set of M simultaneous linear differential equations, one for each j, 1 ≤ j ≤ M. As we saw in the example, one of these is redundant because Σ_{j=1}^M P_ij(t) = 1. This leaves M − 1 simultaneous linear differential equations to be solved.

For more than 2 or 3 states, it is more convenient to express (6.29) in matrix form. Let [P(t)] (for each t > 0) be an M by M matrix whose i,j element is P_ij(t). Let [Q] be an M by M matrix whose i,j element is q_ij for each i ≠ j and −ν_j for i = j. Then (6.29) becomes

    d[P(t)]/dt = [P(t)][Q].    (6.31)

For Example 6.3.1, P_ij(t) can be calculated for each i, j as in (6.30), resulting in

    [P(t)] = [ µ/(λ+µ) + (λ/(λ+µ)) e^{−(λ+µ)t}    λ/(λ+µ) − (λ/(λ+µ)) e^{−(λ+µ)t} ]
             [ µ/(λ+µ) − (µ/(λ+µ)) e^{−(λ+µ)t}    λ/(λ+µ) + (µ/(λ+µ)) e^{−(λ+µ)t} ]

    [Q] = [ −λ    λ ]
          [  µ   −µ ]

In order to provide some insight into the general solution of (6.31), we go back to the sampled-time approximation of a Markov process. With an increment of size δ between samples, the probability of a transition from i to j, i ≠ j, is q_ij δ + o(δ), and the probability


of remaining in state i is 1 − ν_i δ + o(δ). Thus, in terms of the matrix [Q] of transition rates, the transition probability matrix in the sampled-time model is [I] + δ[Q], where [I] is the identity matrix. We denote this matrix by [W_δ] = [I] + δ[Q]. Note that λ is an eigenvalue of [Q] if and only if 1 + λδ is an eigenvalue of [W_δ]. Also the eigenvectors of these corresponding eigenvalues are the same. That is, if v is a right eigenvector of [Q] with eigenvalue λ, then v is a right eigenvector of [W_δ] with eigenvalue 1 + λδ, and conversely. Similarly, if p is a left eigenvector of [Q] with eigenvalue λ, then p is a left eigenvector of [W_δ] with eigenvalue 1 + λδ, and conversely.

We saw in Section 6.2.5 that the steady-state probability vector p of a Markov process is the same as that of any sampled-time approximation. We have now seen that, in addition, all the eigenvectors are the same and the eigenvalues are simply related. Thus study of these differential equations can be largely replaced by studying the sampled-time approximation. The following theorem uses our knowledge of the eigenvalues and eigenvectors of transition matrices such as [W_δ] in Section 3.4 to be more specific about the properties of [Q].

Theorem 6.3.1. Consider an irreducible finite-state Markov process with M states. Then the matrix [Q] for that process has an eigenvalue λ equal to 0. That eigenvalue has a right eigenvector e = (1, 1, . . . , 1)^T which is unique within a scale factor. It has a left eigenvector p = (p_1, . . . , p_M) that is positive, sums to 1, satisfies (6.23), and is unique within a scale factor. All the other eigenvalues of [Q] have strictly negative real parts.

Proof: Since all M states communicate, the sampled-time chain is recurrent. From Theorem 3.4.1, [W_δ] has a unique eigenvalue λ = 1. The corresponding right eigenvector is e and the left eigenvector is the steady-state probability vector p as given in (3.9). Since [W_δ] is recurrent, the components of p are strictly positive. From the equivalence of (6.23) and (6.24), p, as given by (6.23), is the steady-state probability vector of the process. Each eigenvalue λ_δ of [W_δ] corresponds to an eigenvalue λ of [Q] with the correspondence λ_δ = 1 + λδ, i.e., λ = (λ_δ − 1)/δ. Thus the eigenvalue 1 of [W_δ] corresponds to the eigenvalue 0 of [Q]. Since |λ_δ| ≤ 1 and λ_δ ≠ 1 for all other eigenvalues, the other eigenvalues of [Q] all have strictly negative real parts, completing the proof.

We complete this section by deriving the Kolmogorov backward equations. For s small and positive, the Chapman-Kolmogorov equations in (6.27) become

    P_ij(t) = Σ_k P_ik(s) P_kj(t − s)
            = Σ_{k≠i} s q_ik P_kj(t − s) + (1 − s ν_i) P_ij(t − s) + o(s).

Subtracting P_ij(t − s) from both sides and dividing by s,

    [P_ij(t) − P_ij(t − s)] / s = Σ_{k≠i} q_ik P_kj(t − s) − ν_i P_ij(t − s) + o(s)/s

    dP_ij(t)/dt = Σ_{k≠i} q_ik P_kj(t) − ν_i P_ij(t).    (6.32)


In matrix form, this is expressed as

    d[P(t)]/dt = [Q][P(t)].    (6.33)

By comparing (6.33) and (6.31), we see that [Q][P(t)] = [P(t)][Q], i.e., that the matrices [Q] and [P(t)] commute. Simultaneous linear differential equations appear in so many applications that we leave the further exploration of these forward and backward equations as simple differential equation topics rather than topics that have special properties for Markov processes.
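Since (6.31) with [P(0)] = [I] is solved by [P(t)] = exp(t[Q]), the closed-form matrix of Example 6.3.1 can be checked against a truncated power series for the matrix exponential. An added numerical sketch (not from the text); the rates λ = 1, µ = 3 and time t = 0.7 are arbitrary:

```python
import math

lam, mu, t = 1.0, 3.0, 0.7
Q = [[-lam, lam], [mu, -mu]]   # [Q] for the queueless M/M/1 example

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# exp(tQ) = sum_{n>=0} (tQ)^n / n!, truncated at n = 30
P = [[1.0, 0.0], [0.0, 1.0]]      # running sum, starts at the n = 0 term [I]
term = [[1.0, 0.0], [0.0, 1.0]]   # current term (tQ)^n / n!
for n in range(1, 31):
    term = matmul(term, [[t * Q[i][j] / n for j in range(2)] for i in range(2)])
    P = [[P[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# closed form from (6.30) and its companions
s = lam + mu
e = math.exp(-s * t)
P_exact = [[mu / s + (lam / s) * e, lam / s - (lam / s) * e],
           [mu / s - (mu / s) * e, lam / s + (mu / s) * e]]
for i in range(2):
    for j in range(2):
        assert abs(P[i][j] - P_exact[i][j]) < 1e-9
```

The rows of the computed [P(t)] also sum to 1, as they must for transition probabilities.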

6.4 Uniformization

Up until now, we have discussed Markov processes under the assumption that q_ii = 0 (i.e., no transitions from a state into itself are allowed). We now consider what happens if this restriction is removed. Suppose we start with some Markov process defined by a set of transition rates q_ij with q_ii = 0, and we modify this process by some arbitrary choice of q_ii ≥ 0 for each state i. This modification changes the embedded Markov chain, since ν_i is increased from Σ_{k≠i} q_ik to Σ_{k≠i} q_ik + q_ii. From (6.5), P_ij is changed to q_ij/ν_i for the new value of ν_i for each i, j. Thus the steady-state probabilities π_i for the embedded chain are changed. The Markov process {X(t); t ≥ 0} is not changed, since a transition from i into itself does not change X(t) and does not change the distribution of the time until the next transition to a different state. The steady-state probabilities for the process still satisfy

    p_j ν_j = Σ_k p_k q_kj;    Σ_i p_i = 1.    (6.34)

The addition of the new term q_jj increases ν_j by q_jj, thus increasing the left-hand side by p_j q_jj. The right-hand side is similarly increased by p_j q_jj, so that the solution is unchanged (as we already determined it must be).

A particularly convenient way to add self-transitions is to add them in such a way as to make the transition rate ν_j the same for all states. Assuming that the transition rates {ν_i; i ≥ 0} are bounded, we define ν* as sup_j ν_j for the original transition rates. Then we set q_jj = ν* − Σ_{k≠j} q_jk for each j. With this addition of self-transitions, all transition rates become ν*. From (6.21), we see that the new steady-state probabilities, π_i*, in the embedded Markov chain become equal to the steady-state process probabilities, p_i. Naturally, we could also choose any ν greater than ν* and increase each q_jj to make all transition rates equal to that value of ν. When the transition rates are changed in this way, the resulting embedded chain is called a uniformized chain and the Markov process is called the uniformized process. The uniformized process is the same as the original process, except that quantities like the number of transitions over some interval are different because of the self-transitions.

Assuming that all transition rates are made equal to ν*, the new transition probabilities in the embedded chain become P_ij* = q_ij/ν*. Let N(t) be the total number of transitions that occur from 0 to t in the uniformized process. Since the rate of transitions is the same from all states and the inter-transition intervals are independent and identically exponentially


distributed, N(t) is a Poisson counting process of rate ν*. Also, N(t) is independent of the sequence of transitions in the embedded uniformized Markov chain. Thus, given that N(t) = n, the probability that X(t) = j given that X(0) = i is just the probability that the embedded chain goes from i to j in n steps, i.e., P_ij^{*n}. This gives us another formula for calculating P_ij(t) (i.e., the probability that X(t) = j given that X(0) = i):

    P_ij(t) = Σ_{n=0}^∞ [e^{−ν*t} (ν*t)^n / n!] P_ij^{*n}.    (6.35)

Another situation where the uniformized process is useful is in extending Markov decision theory to Markov processes, but we do not pursue this.
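The uniformization formula (6.35) can be checked against the closed form of Example 6.3.1. An added sketch (not from the text); λ = 1, µ = 3, t = 0.7 are arbitrary, and here ν* = max(λ, µ):

```python
import math

lam, mu, t = 1.0, 3.0, 0.7
nu_star = max(lam, mu)

# uniformized embedded chain: P*_ij = q_ij / nu*, with self-loops on the diagonal
Pstar = [[1.0 - lam / nu_star, lam / nu_star],
         [mu / nu_star, 1.0 - mu / nu_star]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# (6.35): P_ij(t) = sum_n e^{-nu* t} (nu* t)^n / n! * [P*^n]_ij, truncated at 60
P = [[0.0, 0.0], [0.0, 0.0]]
power = [[1.0, 0.0], [0.0, 1.0]]   # P*^n, starting at n = 0
for n in range(60):
    w = math.exp(-nu_star * t) * (nu_star * t) ** n / math.factorial(n)
    P = [[P[i][j] + w * power[i][j] for j in range(2)] for i in range(2)]
    power = matmul(power, Pstar)

# compare with the closed form of Example 6.3.1
s = lam + mu
e = math.exp(-s * t)
assert abs(P[0][1] - (lam / s) * (1.0 - e)) < 1e-9
assert abs(P[1][0] - (mu / s) * (1.0 - e)) < 1e-9
```

This works because ν*([P*] − [I]) = [Q], so the Poisson-weighted sum in (6.35) is just another expression for exp(t[Q]).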

6.5 Birth-death processes

Birth-death processes are very similar to the birth-death Markov chains that we studied earlier. Here transitions occur only between neighboring states, so it is convenient to define λ_i as q_{i,i+1} and µ_i as q_{i,i-1} (see Figure 6.9). Since the number of transitions from i to i+1 is within 1 of the number of transitions from i+1 to i for every sample path, we conclude that

    p_i λ_i = p_{i+1} µ_{i+1}.    (6.36)

This can also be obtained inductively from (6.23) using the same argument that we used earlier for birth-death Markov chains.

Figure 6.9: Birth-death process.

Define ρ_i as λ_i/µ_{i+1}. Then applying (6.36) iteratively, we obtain the steady-state equations

    p_i = p_0 ∏_{j=0}^{i-1} ρ_j;    i ≥ 1.    (6.37)

We can solve for p_0 by substituting (6.37) into Σ_i p_i = 1, yielding

    p_0 = 1 / (1 + Σ_{i=1}^∞ ∏_{j=0}^{i-1} ρ_j).    (6.38)

For the M/M/1 queue, the state of the Markov process is the number of customers in the system (i.e., customers either in queue or in service). The transitions from i to i+1 correspond to arrivals, and since the arrival process is Poisson of rate λ, we have λ_i = λ for all i ≥ 0. The transitions from i to i−1 correspond to departures, and since the service


time distribution is exponential with parameter µ, say, we have µ_i = µ for all i ≥ 1. Thus, (6.38) simplifies to p_0 = 1 − ρ, where ρ = λ/µ, and thus

    p_i = (1 − ρ)ρ^i;    i ≥ 0.    (6.39)

We assume that ρ < 1, which is required for positive recurrence. The probability that there are i or more customers in the system in steady state is then given by Pr{X(t) ≥ i} = ρ^i, and the expected number of customers in the system is given by

    E[X(t)] = Σ_{i=1}^∞ Pr{X(t) ≥ i} = ρ/(1 − ρ).    (6.40)

The expected time that a customer spends in the system in steady state can now be determined by Little's formula (Theorem 4.5.3),

    E[System time] = E[X(t)]/λ = ρ/(λ(1 − ρ)) = 1/(µ − λ).    (6.41)

The expected time that a customer spends in the queue (i.e., before entering service) is just the expected system time less the expected service time, so

    E[Queueing time] = 1/(µ − λ) − 1/µ = ρ/(µ − λ).    (6.42)

Finally, the expected number of customers in the queue can be found by applying Little's formula to (6.42),

    E[Number in queue] = λρ/(µ − λ).    (6.43)

Note that the expected number of customers in the system and in the queue depend only on ρ, so that if the arrival rate and service rate were both speeded up by the same factor, these expected values would remain the same. The expected system time and queueing time, however, would decrease by the factor of the rate increases. Note also that as ρ approaches 1, all these quantities approach infinity as 1/(1 − ρ). At the value ρ = 1, the embedded Markov chain becomes null recurrent and the steady-state probabilities (both {π_i; i ≥ 0} and {p_i; i ≥ 0}) can be viewed as being all 0 or as failing to exist.

There are many types of queueing systems that can be modeled as birth-death processes. For example, the arrival rate could vary with the number in the system and the service rate could vary with the number in the system. All of these systems can be analyzed in steady state in the same way, but (6.37) and (6.38) can become quite messy in these more complex systems. As an example, we analyze the M/M/m system. Here there are m servers, each with exponentially distributed service times with parameter µ. When i customers are in the system, there are i servers working for i < m and all m servers are working for i ≥ m. With i servers working, the probability of a departure in an incremental time δ is iµδ, so that µ_i is iµ for i < m and mµ for i ≥ m (see Figure 6.10).
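The M/M/1 formulas (6.39)–(6.43) can be checked against each other numerically. An added sketch (not from the text), with arbitrary rates λ = 2 < µ = 5:

```python
# Numerical cross-check of (6.39)-(6.43) for M/M/1.
lam, mu = 2.0, 5.0
rho = lam / mu

p = [(1 - rho) * rho ** i for i in range(2000)]   # (6.39), truncated far out
EX = sum(i * p[i] for i in range(len(p)))         # expected number in system

assert abs(EX - rho / (1 - rho)) < 1e-9               # (6.40)
assert abs(EX / lam - 1.0 / (mu - lam)) < 1e-12       # (6.41), Little's formula
Wq = 1.0 / (mu - lam) - 1.0 / mu
assert abs(Wq - rho / (mu - lam)) < 1e-12             # (6.42)
assert abs(lam * Wq - lam * rho / (mu - lam)) < 1e-12  # (6.43), Little again
```

Doubling both λ and µ leaves ρ, and hence the expected numbers, unchanged while halving the expected times, as the text notes.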


Define ρ = λ/(mµ). Then in terms of our general birth-death process notation, ρ_i = mρ/(i + 1) for i < m and ρ_i = ρ for i ≥ m. From (6.37), we have

    p_i = p_0 (mρ/1)(mρ/2) ··· (mρ/i) = p_0 (mρ)^i / i!;    i ≤ m    (6.44)

    p_i = p_0 ρ^i m^m / m!;    i ≥ m.    (6.45)

Figure 6.10: M/M/m queue for m = 3.

We can find p_0 by summing p_i and setting the result equal to 1; a solution exists if ρ < 1. Nothing simplifies much in this sum, except that Σ_{i≥m} p_i = p_0 (mρ)^m / [m!(1 − ρ)], and the solution is

    p_0 = [ (mρ)^m / (m!(1 − ρ)) + Σ_{i=0}^{m-1} (mρ)^i / i! ]^{-1}.    (6.46)
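Equations (6.44)–(6.46) can be checked for consistency numerically. An added sketch (not from the text), using the hypothetical values m = 3, λ = 2, µ = 1, so ρ = 2/3 < 1:

```python
import math

m, lam, mu = 3, 2.0, 1.0
rho = lam / (m * mu)

# (6.46): p0 = [ (m rho)^m / (m!(1-rho)) + sum_{i<m} (m rho)^i / i! ]^{-1}
p0 = 1.0 / ((m * rho) ** m / (math.factorial(m) * (1 - rho))
            + sum((m * rho) ** i / math.factorial(i) for i in range(m)))

# distribution from (6.44)-(6.45), truncated far out in the tail
p = []
for i in range(2000):
    if i <= m:
        p.append(p0 * (m * rho) ** i / math.factorial(i))   # (6.44)
    else:
        p.append(p0 * rho ** i * m ** m / math.factorial(m))  # (6.45)
assert abs(sum(p) - 1.0) < 1e-9

# detailed balance (6.36): p_i lam = p_{i+1} mu_{i+1}, mu_{i+1} = min(i+1, m) mu
for i in range(20):
    assert abs(p[i] * lam - p[i + 1] * min(i + 1, m) * mu) < 1e-12
```

For these values p_0 works out to 1/9; (6.44) and (6.45) agree at i = m, as they must.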

6.6 Reversibility for Markov processes

In Section 5.3 on reversibility for Markov chains, (5.37) showed that the backward transition probabilities P_ij* in steady state satisfy

    π_i P_ij* = π_j P_ji.    (6.47)

These equations are then valid for the embedded chain of a Markov process. Next, consider backward transitions in the process itself. Given that the process is in state i, the probability of a transition in an increment δ of time is ν_i δ + o(δ), and transitions in successive increments are independent. Thus, if we view the process running backward in time, the probability of a transition in each increment δ of time is also ν_i δ + o(δ), with independence between increments. Thus, going to the limit δ → 0, the distribution of the time backward to a transition is exponential with parameter ν_i. This means that the process running backwards is again a Markov process with transition probabilities P_ij* and transition rates ν_i. Figure 6.11 helps to illustrate this.

Since the steady-state probabilities {p_i; i ≥ 0} for the Markov process are determined by

    p_i = (π_i/ν_i) / Σ_k (π_k/ν_k),    (6.48)


Figure 6.11: The forward process enters state j at time t1 and departs at t2. The backward process enters state j at time t2 and departs at t1. In any sample function, as illustrated, the interval in a given state is the same in the forward and backward process. Given X(t) = j, the time forward to the next transition and the time backward to the previous transition are each exponential with rate ν_j.

and since {π_i; i ≥ 0} and {ν_i; i ≥ 0} are the same for the forward and backward processes, we see that the steady-state probabilities in the backward Markov process are the same as the steady-state probabilities in the forward process. This result can also be seen by the correspondence between sample functions in the forward and backward processes.

The transition rates in the backward process are defined by q_ij* = ν_i P_ij*. Using (6.47), we have

    q_ij* = ν_i P_ij* = ν_i π_j P_ji / π_i = ν_i π_j q_ji / (π_i ν_j).    (6.49)

From (6.48), we note that p_j = απ_j/ν_j and p_i = απ_i/ν_i for the same value of α. Thus the ratio of π_j/ν_j to π_i/ν_i is p_j/p_i. This simplifies (6.49) to q_ij* = p_j q_ji/p_i, and

    p_i q_ij* = p_j q_ji.    (6.50)

This equation can be used as an alternate definition of the backward transition rates. To interpret this, let δ be a vanishingly small increment of time and assume the process is in steady state at time t. Then δ p_j q_ji ≈ Pr{X(t) = j} Pr{X(t+δ) = i | X(t) = j}, whereas δ p_i q_ij* ≈ Pr{X(t+δ) = i} Pr{X(t) = j | X(t+δ) = i}.

A Markov process is defined to be reversible if q_ij* = q_ij for all i, j. If the embedded Markov chain is reversible (i.e., P_ij* = P_ij for all i, j), then one can repeat the above steps using P_ij and q_ij in place of P_ij* and q_ij* to see that p_i q_ij = p_j q_ji for all i, j. Thus, if the embedded chain is reversible, the process is also. Similarly, if the Markov process is reversible, the above argument can be reversed to see that the embedded chain is reversible. Thus, we have the following useful lemma.

Lemma 6.6.1. Assume that steady-state probabilities {p_i; i ≥ 0} exist in an irreducible Markov process (i.e., (6.23) has a solution and Σ_i p_i ν_i < ∞). Then the Markov process is reversible if and only if the embedded chain is reversible.

One can find the steady-state probabilities of a reversible Markov process and simultaneously show that it is reversible by the following useful theorem (which is directly analogous to Theorem 5.3.2 of Chapter 5).


Theorem 6.6.1. For an irreducible Markov process, assume that {p_i; i ≥ 0} is a set of nonnegative numbers summing to 1, satisfying Σ_i p_i ν_i < ∞, and satisfying

    p_i q_ij = p_j q_ji    for all i, j.    (6.51)

Then {p_i; i ≥ 0} is the set of steady-state probabilities for the process, p_i > 0 for all i, the process is reversible, and the embedded chain is positive recurrent.

Proof: Summing (6.51) over i, we obtain

    Σ_i p_i q_ij = p_j ν_j    for all j.

These, along with Σ_i p_i = 1, are the steady-state equations for the process. These equations have a solution, and by Theorem 6.2.2, p_i > 0 for all i, the embedded chain is positive recurrent, and p_i = lim_{t→∞} Pr{X(t) = i}. Comparing (6.51) with (6.50), we see that q_ij = q_ij*, so the process is reversible.

There are many irreducible Markov processes that are not reversible but for which the backward process has interesting properties that can be deduced, at least intuitively, from the forward process. Jackson networks (to be studied shortly) and many more complex networks of queues fall into this category. The following simple theorem allows us to use whatever combination of intuitive reasoning and wishful thinking we desire to guess both the transition rates q_ij* in the backward process and the steady-state probabilities, and to then verify rigorously that the guess is correct. One might think that guessing is somehow unscientific, but in fact, the art of educated guessing and intuitive reasoning leads to much of the best research.

Theorem 6.6.2. For an irreducible Markov process, assume that a set of positive numbers {p_i; i ≥ 0} satisfy Σ_i p_i = 1 and Σ_i p_i ν_i < ∞. Also assume that a set of nonnegative numbers {q_ij*} satisfy the two sets of equations

    Σ_j q_ij = Σ_j q_ij*    for all i    (6.52)

    p_i q_ij = p_j q_ji*    for all i, j.    (6.53)

Then {p_i} is the set of steady-state probabilities for the process, p_i > 0 for all i, the embedded chain is positive recurrent, and {q_ij*} is the set of transition rates in the backward process.

Proof: Sum (6.53) over i. Using the fact that Σ_j q_ij = ν_i and using (6.52), we obtain

    Σ_i p_i q_ij = p_j ν_j    for all j.    (6.54)

These, along with Σ_i p_i = 1, are the steady-state equations for the process. These equations thus have a solution, and by Theorem 6.2.2, p_i > 0 for all i, the embedded chain is positive recurrent, and p_i = lim_{t→∞} Pr{X(t) = i}. Finally, q_ij* as given by (6.53) is the backward transition rate as given by (6.50) for all i, j.
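Theorem 6.6.1 is easy to exercise on a birth-death process, where only neighboring q_ij are nonzero and (6.51) reduces to (6.36). An added truncated sketch (not from the text), guessing the M/M/1 distribution p_i = (1 − ρ)ρ^i with arbitrary λ = 2 < µ = 5:

```python
# Verify the hypotheses of Theorem 6.6.1 for the guess p_i = (1-rho) rho^i
# on an M/M/1 birth-death process (q_{i,i+1} = lam, q_{i+1,i} = mu).
lam, mu = 2.0, 5.0
rho = lam / mu
N = 200
p = [(1 - rho) * rho ** i for i in range(N)]

# detailed balance (6.51): only neighbor pairs have nonzero rates
for i in range(N - 1):
    assert abs(p[i] * lam - p[i + 1] * mu) < 1e-12

# sum p_i nu_i is finite: nu_0 = lam and nu_i = lam + mu for i >= 1
total = p[0] * lam + sum(p[i] * (lam + mu) for i in range(1, N))
assert total < lam + mu
```

With both hypotheses holding, the theorem delivers that this guess is the steady state and that the process is reversible, with no need to solve (6.23) from scratch.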


Figure 6.12: FCFS arrivals and departures in right-moving (forward) and left-moving (backward) M/M/1 processes.

Burke's theorem remains true for any birth-death Markov process in steady state for which q_{i,i+1} = λ for all i ≥ 0. For example, parts a and b are valid for M/M/m queues; part c is also valid (see [22]), but the argument here is not adequate since the first customer to enter the system might not be the first to depart.

We next show how Burke's theorem can be used to analyze a tandem set of queues. As shown in Figure 6.13, we have an M/M/1 queueing system with Poisson arrivals at rate λ and service at rate µ_1. The departures from this queueing system are the arrivals to a second queueing system, and we assume that a departure from queue 1 at time t instantaneously enters queueing system 2 at the same time t. The second queueing system has a single server and the service times are IID and exponentially distributed with rate µ_2. The successive service times at system 2 are also independent of the arrivals to systems 1 and 2, and independent of the service times in system 1. Since we have already seen that the departures from the first system are Poisson with rate λ, the arrivals to the second queue are Poisson with rate λ. Thus the second system is also M/M/1.

Figure 6.13: A tandem queueing system. Assuming that λ < µ1 and λ < µ2, the departures from each queue are Poisson of rate λ.

Let X(t) be the state of queueing system 1 and Y(t) be the state of queueing system 2. Since X(t) at time t is independent of the departures from system 1 prior to t, X(t) is independent of the arrivals to system 2 prior to time t. Since Y(t) depends only on the arrivals to system 2 prior to t and on the service times that have been completed prior to t, we see that X(t) is independent of Y(t). This leaves a slight nit-picking question about what happens at the instant of a departure from system 1. We have considered the state X(t) at the instant of a departure to be the number of customers remaining in system 1 not counting the departing customer. Also the state Y(t) is the state in system 2 including the new arrival at instant t. The state X(t) then is independent of the departures up to and including t, so that X(t) and Y(t) are still independent.

Next assume that both systems use FCFS service. Consider a customer that leaves system 1 at time t. The time at which that customer arrived at system 1, and thus the waiting time in system 1 for that customer, is independent of the departures prior to t. This means that the state of system 2 immediately before the given customer arrives at time t is independent of the time the customer spent in system 1. It therefore follows that the time that the customer spends in system 2 is independent of the time spent in system 1. Thus the total system time that a customer spends in both system 1 and system 2 is the sum of two independent random variables.

This same argument can be applied to more than 2 queueing systems in tandem. It can also be applied to more general networks of queues, each with single servers with exponentially distributed service times. The restriction here is that there cannot be any cycle of queueing systems where departures from each queue in the cycle can enter the next queue in the cycle. The problem posed by such cycles can be seen easily in the following example of a single queueing system with feedback (see Figure 6.14).
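A consequence of this independence, worth a quick check, is that the total expected time in the tandem system is the sum of two independent M/M/1 system times, each given by (6.41). An added sketch (not from the text) with arbitrary rates λ = 1 < µ_1 = 3, µ_2 = 2:

```python
# Expected total time in the tandem system of Figure 6.13.
lam, mu1, mu2 = 1.0, 3.0, 2.0
rho1, rho2 = lam / mu1, lam / mu2

# each queue behaves as an isolated M/M/1 in steady state, so by (6.40)
EX = rho1 / (1 - rho1)   # expected number in system 1
EY = rho2 / (1 - rho2)   # expected number in system 2

# Little's formula applied to the whole tandem system
E_total_time = (EX + EY) / lam
assert abs(E_total_time - (1.0 / (mu1 - lam) + 1.0 / (mu2 - lam))) < 1e-12
```

Little's formula applied to the pair of queues jointly agrees with summing (6.41) over the two queues, consistent with the independence argument above.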

Figure 6.14: A queue with feedback. Assuming that µ > λ/Q, the exogenous output is Poisson of rate λ.

We assume that the queueing system in Figure 6.14 has a single server with IID exponentially distributed service times that are independent of arrival times. The exogenous arrivals from outside the system are Poisson with rate λ. With probability Q, the departures from the queue leave the entire system, and, alternatively, with probability 1 − Q, they return instantaneously to the input of the queue. Successive choices between leaving the system and returning to the input are IID and independent of exogenous arrivals and of service times. Figure 6.15 shows a sample function of the arrivals and departures in the case in which the service rate µ is very much greater than the exogenous arrival rate λ. Each exogenous arrival spawns a geometrically distributed set of departures and simultaneous re-entries. Thus the overall arrival process to the queue, counting both exogenous arrivals and feedback from the output, is not Poisson. Note, however, that if we look at the Markov process description, the departures that are fed back to the input correspond to self-loops from one state to itself. Thus the Markov process is the same as one without the self-loops with a service rate equal to µQ. Thus, from Burke's theorem, the exogenous departures are Poisson with rate λ. Also the steady-state distribution of X(t) is Pr{X(t) = i} = (1 − ρ)ρ^i where ρ = λ/(µQ) (assuming, of course, that ρ < 1).
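The self-loop argument says X(t) behaves exactly like an M/M/1 queue with effective service rate µQ. A tiny added check (not from the text) that the claimed distribution satisfies detailed balance for that effective process, with arbitrary λ = 1, µ = 4, Q = 0.5:

```python
# Feedback queue of Figure 6.14: removing self-loops gives M/M/1 with rate mu*Q.
lam, mu, Q = 1.0, 4.0, 0.5
rho = lam / (mu * Q)
assert rho < 1   # stability: mu > lam/Q

p = [(1 - rho) * rho ** i for i in range(100)]
for i in range(99):
    # upward rate lam, effective downward rate mu*Q (true departures only)
    assert abs(p[i] * lam - p[i + 1] * mu * Q) < 1e-12
```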


Figure 6.15: Sample path of arrivals and departures for queue with feedback.

The tandem queueing system of Figure 6.13 can also be regarded as a combined Markov process in which the state at time t is the pair (X(t), Y(t)). The transitions in this process correspond to, first, exogenous arrivals in which X(t) increases, second, exogenous departures in which Y(t) decreases, and third, transfers from system 1 to system 2 in which X(t) decreases and Y(t) simultaneously increases. The combined process is not reversible since there is no transition in which X(t) increases and Y(t) simultaneously decreases. In the next section, we show how to analyze these combined Markov processes for more general networks of queues.

6.7 Jackson networks

In many queueing situations, a customer has to wait in a number of different queues before completing the desired transaction and leaving the system. For example, when we go to the registry of motor vehicles to get a driver's license, we must wait in one queue to have the application processed, in another queue to pay for the license, and in yet a third queue to obtain a photograph for the license. In a multiprocessor computer facility, a job can be queued waiting for service at one processor, then go to wait for another processor, and so forth; frequently the same processor is visited several times before the job is completed. In a data network, packets traverse multiple intermediate nodes; at each node they enter a queue waiting for transmission to other nodes.

Such systems are modeled by a network of queues, and Jackson networks are perhaps the simplest models of such networks. In such a model, we have a network of k interconnected queueing systems which we call nodes. Each of the k nodes receives customers (i.e., tasks or jobs) both from outside the network (exogenous inputs) and from other nodes within the network (endogenous inputs). It is assumed that the exogenous inputs to each node i form a Poisson process of rate r_i and that these Poisson processes are independent of each other. For analytical convenience, we regard this as a single Poisson input process of rate λ_0, with each input independently going to each node i with probability Q_0i = r_i/λ_0.

Each node i contains a single server, and the successive service times at node i are IID random variables with an exponentially distributed service time of rate µ_i. The service times at each node are also independent of the service times at all other nodes and independent of the exogenous arrival times at all nodes. When a customer completes service at a given


Figure 6.16: A Jackson network with 3 nodes. Given a departure from node i, the probability that the departure goes to node j (or, for j = 0, departs the system) is Qij. Note that a departure from node i can re-enter node i with probability Qii. The overall exogenous arrival rate is λ0 and, conditional on an arrival, the probability that the arrival enters node i is Q0i.

node i, that customer is routed to node j with probability Qij (see Figure 6.16). It is also possible for the customer to depart from the network entirely (called an exogenous departure), and this occurs with probability Qi0 = 1 − Σ_{j≥1} Qij. For a customer departing from node i, the next node j is a random variable with PMF Qij, 0 ≤ j ≤ k.

Successive choices of the next node for customers at node i are IID, independent of the customer routing at other nodes, independent of all service times, and independent of the exogenous inputs. Notationally, we are regarding the outside world as a fictitious node 0 from which customers appear and to which they disappear.

When a customer is routed from node i to node j, it is assumed that the routing is instantaneous; thus at the epoch of a departure from node i, there is a simultaneous endogenous arrival at node j. Thus a node j receives Poisson exogenous arrivals from outside the system at rate λ0 Q0j and receives endogenous arrivals from other nodes according to the probabilistic rules just described. We can visualize these combined exogenous and endogenous arrivals as being served in FCFS fashion, but it really makes no difference in which order they are served, since the customers are statistically identical and simply give rise to service at node j at rate µj whenever there are customers to be served.

The Jackson queueing network, as just defined, is fully described by the exogenous input rate λ0, the service rates µi, and the routing probabilities {Qij ; 0 ≤ i, j ≤ k}. The network as a whole is a Markov process in which the state is a vector m = (m1, m2, . . . , mk), where mi, 1 ≤ i ≤ k, is the number of customers at node i. State changes occur upon exogenous arrivals to the various nodes, exogenous departures from the various nodes, and departures from one node that enter another node. In a vanishingly small interval δ of time, given that the state at the beginning of that interval is m, an exogenous arrival at node j occurs in the interval with probability λ0 Q0j δ and changes the state to m′ = m + ej, where ej is a unit vector with a one in position j. If mi > 0, an exogenous departure from node i occurs


in the interval with probability µi Qi0 δ and changes the state to m′ = m − ei. Finally, if mi > 0, a departure from node i entering node j occurs in the interval with probability µi Qij δ and changes the state to m′ = m − ei + ej. Thus, the transition rates are given by

    q_{m,m′} = λ0 Q0j     for m′ = m + ej,  1 ≤ j ≤ k                      (6.55)
             = µi Qi0     for m′ = m − ei,  mi > 0,  1 ≤ i ≤ k             (6.56)
             = µi Qij     for m′ = m − ei + ej,  mi > 0,  1 ≤ i, j ≤ k     (6.57)
             = 0          for all other choices of m′.

Note that a departure from node i that re-enters node i causes a transition from state m back into state m; we disallowed such transitions in Sections 6.1 and 6.2, but showed that they caused no problems in our discussion of uniformization. It is convenient to allow these self-transitions here, partly for the added generality and partly to illustrate that the single node network with feedback of Figure 6.14 is an example of a Jackson network.

Our objective is to find the steady-state probabilities p(m) for this type of process, and our plan of attack is in accordance with Theorem 6.6.2; that is, we shall guess a set of transition rates for the backward Markov process, use these to guess p(m), and then verify that the guesses are correct. Before making these guesses, however, we must find out a little more about how the system works, so as to guide the guesswork. Let us define λi for each i, 1 ≤ i ≤ k, as the time-average overall rate of arrivals to node i, including both exogenous and endogenous arrivals. Since λ0 is the rate of exogenous inputs, we can interpret λi/λ0 as the expected number of visits to node i per exogenous input. The endogenous arrivals to node i are not necessarily Poisson, as the example of a single queue with feedback shows, and we are not even sure at this point that such a time-average rate exists in any reasonable sense. However, let us assume for the time being that such rates exist and that the time-average rate of departures from each node equals the time-average rate of arrivals (i.e., the queue sizes do not grow linearly with time). Then these rates must satisfy the equation

    λj = Σ_{i=0}^{k} λi Qij ;   1 ≤ j ≤ k.                                 (6.58)

To see this, note that λ0 Q0j is the rate of exogenous arrivals to j. Also λi is the time-average rate at which customers depart from queue i, and λi Qij is the rate at which customers go from node i to node j. Thus, the right hand side of (6.58) is the sum of the exogenous and endogenous arrival rates to node j. Note the distinction between the time-average rate of customers going from i to j in (6.58) and the rate q_{m,m′} = µi Qij for m′ = m − ei + ej, mi > 0, in (6.57). The rate in (6.57) is conditioned on a state m with mi > 0, whereas that in (6.58) is the overall time-average rate, averaged over all states.

Note that {Qij ; 0 ≤ i, j ≤ k} forms a stochastic matrix and (6.58) is formally equivalent to the equations for steady-state probabilities (except that steady-state probabilities sum to 1). The usual equations for steady-state probabilities include an equation for j = 0, but that equation is redundant. Thus we know that, if there is a path between each pair of nodes (including the fictitious node 0), then (6.58) has a solution for {λi ; 0 ≤ i ≤ k}, and that solution is unique within a scale factor. The known value of λ0 determines this scale factor and makes the solution unique. Note that we don't have to be careful at this point
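Equation (6.58) is a small linear system and can be checked numerically. The sketch below (not from the text) uses a hypothetical 2-node network with illustration-only rates, solves (6.58) in the matrix form (I − Qᵀ)λ = λ0 Q0, and confirms that the exogenous input rate equals the total rate of exogenous departures:

```python
# Sketch: solve the traffic equations (6.58) for a hypothetical 2-node
# open Jackson network. All numeric values are illustration-only.
def solve_linear(A, b):
    """Tiny Gauss-Jordan elimination, sufficient for these small systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

lam0 = 1.0
Q0  = [0.5, 0.5]                    # Q_{0j}: routing of exogenous arrivals
Q   = [[0.0, 0.3], [0.2, 0.0]]      # Q_{ij} for 1 <= i, j <= k
Qi0 = [1 - sum(row) for row in Q]   # Q_{i0} = 1 - sum_{j>=1} Q_{ij}

# (6.58) in matrix form: (I - Q^T) lam = lam0 * Q0
k = 2
A = [[(1.0 if i == j else 0.0) - Q[j][i] for j in range(k)] for i in range(k)]
b = [lam0 * Q0[j] for j in range(k)]
lam = solve_linear(A, b)

# conservation: exogenous input rate = total exogenous departure rate
assert abs(sum(lam[i] * Qi0[i] for i in range(k)) - lam0) < 1e-9
```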


about whether these rates are time-averages in any nice sense, since this will be verified later; we do have to make sure that (6.58) has a solution, however, since it will appear in our solution for p(m). Thus we assume in what follows that a path exists between each pair of nodes, and thus that (6.58) has a unique solution as a function of λ0.

We now make the final necessary assumption about the network, which is that µi > λi for each node i. This will turn out to be required in order to make the process positive recurrent. We also define ρi as λi/µi. We shall find that, even though the inputs to an individual node i are not Poisson in general, there is a steady-state distribution for the number of customers at i, and that distribution is the same as that of an M/M/1 queue with the parameter ρi.

Now consider the backward time process. We have seen that only three kinds of transitions are possible in the forward process. First, there are transitions from m to m′ = m + ej for any j, 1 ≤ j ≤ k. Second, there are transitions from m to m − ei for any i, 1 ≤ i ≤ k, such that mi > 0. Third, there are transitions from m to m′ = m − ei + ej for 1 ≤ i, j ≤ k with mi > 0. Thus in the backward process, transitions from m′ to m are possible only for the m, m′ pairs above. Corresponding to each arrival in the forward process, there is a departure in the backward process; for each forward departure, there is a backward arrival; and for each forward passage from i to j, there is a backward passage from j to i.

We now make the conjecture that the backward process is itself a Jackson network with Poisson exogenous arrivals at rates λ0 Q*0j, service times that are exponential with rates µi, and routing probabilities Q*ij. The backward routing probabilities Q*ij must be chosen to be consistent with the transition rates in the forward process. Since each transition from i to j in the forward process must correspond to a transition from j to i in the backward process, we should have

    λi Qij = λj Q*ji ;   0 ≤ i, j ≤ k.                                     (6.59)

Note that λi Qij represents the rate at which forward transitions go from i to j, and λi represents the rate at which forward transitions leave node i. Equation (6.59) takes advantage of the fact that λi is also the rate at which forward transitions enter node i, and thus the rate at which backward transitions leave node i. Using the conjecture that the backward time system is a Jackson network with routing probabilities {Q*ij ; 0 ≤ i, j ≤ k}, we can write down the backward transition rates in the same way as (6.55)-(6.57),

    q*_{m,m′} = λ0 Q*0j     for m′ = m + ej                                (6.60)
              = µi Q*i0     for m′ = m − ei,  mi > 0,  1 ≤ i ≤ k           (6.61)
              = µi Q*ij     for m′ = m − ei + ej,  mi > 0,  1 ≤ i, j ≤ k.  (6.62)

If we substitute (6.59) into (6.60)-(6.62), we obtain

    q*_{m,m′} = λj Qj0             for m′ = m + ej,  1 ≤ j ≤ k                       (6.63)
              = (µi/λi) λ0 Q0i     for m′ = m − ei,  mi > 0,  1 ≤ i ≤ k              (6.64)
              = (µi/λi) λj Qji     for m′ = m − ei + ej,  mi > 0,  1 ≤ i, j ≤ k.     (6.65)

This gives us our hypothesized backward transition rates in terms of the parameters of the original Jackson network. To use Theorem 6.6.2, we must verify that there is a set of positive


numbers, p(m), satisfying Σ_m p(m) = 1 and Σ_m νm p(m) < ∞, and a set of nonnegative numbers q*_{m′,m} satisfying the following two sets of equations:

    p(m) q_{m,m′} = p(m′) q*_{m′,m}       for all m, m′                    (6.66)
    Σ_{m′} q_{m,m′} = Σ_{m′} q*_{m,m′}    for all m.                       (6.67)

We verify (6.66) by substituting (6.55)-(6.57) on the left side of (6.66) and (6.63)-(6.65) on the right side. Recalling that ρi is defined as λi/µi, and cancelling out common terms on each side, we have

    p(m) = p(m′)/ρj        for m′ = m + ej                                 (6.68)
    p(m) = p(m′) ρi        for m′ = m − ei,  mi > 0                        (6.69)
    p(m) = p(m′) ρi/ρj     for m′ = m − ei + ej,  mi > 0.                  (6.70)

Looking at the case m′ = m − ei, and using this equation repeatedly to get from state (0, 0, . . . , 0) up to an arbitrary m, we obtain

    p(m) = p(0, 0, . . . , 0) Π_{i=1}^{k} ρi^{mi}.                         (6.71)

It is easy to verify that (6.71) satisfies (6.68)-(6.70) for all possible transitions. Summing over all m to solve for p(0, 0, . . . , 0), we get

    1 = Σ_m p(m) = p(0, 0, . . . , 0) Σ_{m1, m2, ..., mk} ρ1^{m1} ρ2^{m2} · · · ρk^{mk}
      = p(0, 0, . . . , 0) (1 − ρ1)^{−1} (1 − ρ2)^{−1} · · · (1 − ρk)^{−1}.

Thus, p(0, 0, . . . , 0) = (1 − ρ1)(1 − ρ2) · · · (1 − ρk), and substituting this in (6.71), we get

    p(m) = Π_{i=1}^{k} pi(mi) = Π_{i=1}^{k} (1 − ρi) ρi^{mi},              (6.72)

where pi(m) = (1 − ρi) ρi^m is the steady-state distribution of a single M/M/1 queue. Now that we have found the steady-state distribution implied by our assumption about the backward process being a Jackson network, our remaining task is to verify (6.67).

To verify (6.67), i.e., Σ_{m′} q_{m,m′} = Σ_{m′} q*_{m,m′}, first consider the right side. Using (6.60) to sum over all m′ = m + ej, then (6.61) to sum over m′ = m − ei (for i such that mi > 0), and finally (6.62) to sum over m′ = m − ei + ej (again for i such that mi > 0), we get

    Σ_{m′} q*_{m,m′} = Σ_{j=1}^{k} λ0 Q*0j + Σ_{i: mi>0} µi Q*i0 + Σ_{i: mi>0} Σ_{j=1}^{k} µi Q*ij.    (6.73)

Using the fact that Q* is a stochastic matrix, then,

    Σ_{m′} q*_{m,m′} = λ0 + Σ_{i: mi>0} µi.                                (6.74)
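The balance condition (6.66) can also be checked numerically. The sketch below (not from the text) uses a hypothetical 2-node network whose rates λi solve (6.58) for these particular illustration-only parameters, forms the product-form guess (6.72), and verifies p(m) q_{m,m′} = p(m′) q*_{m′,m} for each of the three transition types at a sample state:

```python
import math

# Sketch: verify (6.66) for the product-form guess (6.72) on a hypothetical
# 2-node network. All numeric values are illustration-only; lam solves (6.58)
# for exactly these parameters.
lam0 = 1.0
Q0  = [0.5, 0.5]
Q   = [[0.0, 0.3], [0.2, 0.0]]
Qi0 = [0.7, 0.8]
mu  = [2.0, 2.0]
lam = [0.6 / 0.94, 0.65 / 0.94]
rho = [lam[i] / mu[i] for i in range(2)]

def p(m):
    """Product-form guess (6.72)."""
    return math.prod((1 - rho[i]) * rho[i]**m[i] for i in range(2))

m = (2, 1)                            # sample state with m_i > 0 at both nodes
for j in range(2):                    # arrivals: forward (6.55) vs backward (6.64)
    m2 = (m[0] + (j == 0), m[1] + (j == 1))
    assert abs(p(m)*lam0*Q0[j] - p(m2)*(mu[j]/lam[j])*lam0*Q0[j]) < 1e-12
for i in range(2):                    # departures: forward (6.56) vs backward (6.63)
    m2 = (m[0] - (i == 0), m[1] - (i == 1))
    assert abs(p(m)*mu[i]*Qi0[i] - p(m2)*lam[i]*Qi0[i]) < 1e-12
for i in range(2):                    # transfers: forward (6.57) vs backward (6.65)
    for j in range(2):
        m2 = (m[0] - (i == 0) + (j == 0), m[1] - (i == 1) + (j == 1))
        assert abs(p(m)*mu[i]*Q[i][j] - p(m2)*(mu[j]/lam[j])*lam[i]*Q[i][j]) < 1e-12
```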


The left hand side of (6.67) can be summed in the same way to get the result on the right side of (6.74), but we can see that this must be the result by simply observing that λ0 is the rate of exogenous arrivals and Σ_{i: mi>0} µi is the overall rate of service completions in state m. Note that this also verifies that νm = Σ_{m′} q_{m,m′} ≤ λ0 + Σ_i µi, and since νm is bounded, Σ_m νm p(m) < ∞. Since all the conditions of Theorem 6.6.2 are satisfied, p(m), as given in (6.72), gives the steady-state probabilities for the Jackson network. This also verifies that the backward process is a Jackson network, and hence the exogenous departures are Poisson and independent.

Although the exogenous arrivals and departures in a Jackson network are Poisson, the endogenous processes of customers travelling from one node to another are typically not Poisson if there are feedback paths in the network. Also, although (6.72) shows that the numbers of customers at the different nodes are independent random variables at any given time in steady-state, it is not generally true that the number of customers at one node at one time is independent of the number of customers at another node at another time.

There are many generalizations of the reversibility arguments used above, and many network situations in which the nodes have independent states at a common time. We discuss just two of them here and refer to Kelly, [13], for a complete treatment.

For the first generalization, assume that the service time at each node depends on the number of customers at that node, i.e., µi is replaced by µ_{i,mi}. Note that this includes the M/M/m type of situation in which each node has several independent exponential servers. With this modification, the transition rates in (6.56) and (6.57) are modified by replacing µi with µ_{i,mi}. The hypothesized backward transition rates are modified in the same way, and the only effect of these changes is to replace ρi and ρj for each i and j in (6.68)-(6.70) with ρ_{i,mi} = λi/µ_{i,mi} and ρ_{j,mj} = λj/µ_{j,mj}. With this change, (6.71) becomes

    p(m) = Π_{i=1}^{k} pi(mi) ;    pi(mi) = pi(0) Π_{j=1}^{mi} ρ_{i,j}     (6.75)

    pi(0) = [ 1 + Σ_{m=1}^{∞} Π_{j=1}^{m} ρ_{i,j} ]^{−1}.                  (6.76)

Thus, p(m) is given by the product distribution of k individual birth-death systems.

6.7.1 Closed Jackson networks

The second generalization is to a network of queues with a fixed number M of customers in the system and with no exogenous inputs or outputs. Such networks are called closed Jackson networks, whereas the networks analyzed above are often called open Jackson networks. Suppose a k node closed network has routing probabilities Qij, 1 ≤ i, j ≤ k, where Σ_j Qij = 1, and has exponential service times of rate µi (this can be generalized to µ_{i,mi} as above). We make the same assumptions as before about independence of service variables and routing variables, and assume that there is a path between each pair of nodes. Since {Qij ; 1 ≤ i, j ≤ k} forms an irreducible stochastic matrix, there is a one-dimensional


set of solutions to the steady-state equations

    λj = Σ_i λi Qij ;   1 ≤ j ≤ k.                                         (6.77)

We interpret λi as the time-average rate of transitions that go into node i. Since this set of equations can only be solved within an unknown multiplicative constant, and since this constant can only be determined at the end of the argument, we define {πi ; 1 ≤ i ≤ k} as the particular solution of (6.77) satisfying

    πj = Σ_i πi Qij ;   1 ≤ j ≤ k ;    Σ_i πi = 1.                         (6.78)

Thus, for all i, λi = απi, where α is some unknown constant. The state of the Markov process is again taken as m = (m1, m2, . . . , mk) with the condition Σ_i mi = M. The transition rates of the Markov process are the same as for open networks, except that there are no exogenous arrivals or departures; thus (6.55)-(6.57) are replaced by

    q_{m,m′} = µi Qij     for m′ = m − ei + ej,  mi > 0,  1 ≤ i, j ≤ k.    (6.79)

We hypothesize that the backward time process is also a closed Jackson network, and as before, we conclude that if the hypothesis is true, the backward transition rates should be

    q*_{m,m′} = µi Q*ij     for m′ = m − ei + ej,  mi > 0,  1 ≤ i, j ≤ k   (6.80)

where

    λi Qij = λj Q*ji     for 1 ≤ i, j ≤ k.                                 (6.81)

In order to use Theorem 6.6.2 again, we must verify that a PMF p(m) exists satisfying p(m) q_{m,m′} = p(m′) q*_{m′,m} for all possible states and transitions, and we must also verify that Σ_{m′} q_{m,m′} = Σ_{m′} q*_{m,m′} for all possible m. This latter verification is virtually the same as before and is left as an exercise. The former verification, with the use of (6.79), (6.80), and (6.81), becomes

    p(m)(µi/λi) = p(m′)(µj/λj)     for m′ = m − ei + ej,  mi > 0.          (6.82)

Using the open network solution to guide our intuition, we see that the following choice of p(m) satisfies (6.82) for all possible m (i.e., all m such that Σ_i mi = M):

    p(m) = A Π_{i=1}^{k} (λi/µi)^{mi} ;    for m such that Σ_i mi = M.     (6.83)

The constant A is a normalizing constant, chosen to make p(m) sum to unity. The problem with (6.83) is that we do not know λi (except within a multiplicative constant independent of i). Fortunately, however, if we substitute πi/α for λi, we see that α is raised to the power −M, independent of the state m. Thus, letting A′ = Aα^{−M}, our solution becomes

    p(m) = A′ Π_{i=1}^{k} (πi/µi)^{mi} ;    for m such that Σ_i mi = M.    (6.84)

    1/A′ = Σ_{m: Σ_i mi = M} Π_{i=1}^{k} (πi/µi)^{mi}.                     (6.85)
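When k and M are small, the normalizing constant in (6.85) can be computed by direct enumeration of the states with Σ_i mi = M. The sketch below (not from the text) uses a hypothetical 3-node closed network with cyclic routing, for which the solution of (6.78) is πi = 1/3; the service rates are illustration-only. The last line computes µ1 Pr{m1 > 0}, the throughput quantity discussed below.

```python
import math
from itertools import product as iproduct

# Sketch: closed Jackson network, k = 3 nodes, M = 4 customers.
# Cyclic routing matrix [[0,1,0],[0,0,1],[1,0,0]] gives pi_i = 1/3 in (6.78).
k, M = 3, 4
pi = [1/3, 1/3, 1/3]
mu = [1.0, 2.0, 3.0]                     # illustration-only service rates

states = [m for m in iproduct(range(M + 1), repeat=k) if sum(m) == M]
def weight(m):
    return math.prod((pi[i] / mu[i])**m[i] for i in range(k))

A = 1.0 / sum(weight(m) for m in states)     # normalizing constant, (6.85)
p = {m: A * weight(m) for m in states}       # steady-state PMF, (6.84)
assert abs(sum(p.values()) - 1.0) < 1e-12

# throughput at node 1: lambda_1 = mu_1 * Pr{m_1 > 0}
lam1 = mu[0] * sum(pr for m, pr in p.items() if m[0] > 0)
```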


Note that the steady-state distribution of the closed Jackson network has been found without solving for the time-average transition rates. Note also that the steady-state distribution looks very similar to that for an open network; that is, it is a product distribution over the nodes with a geometric type distribution within each node. This is somewhat misleading, however, since the constant A′ can be quite difficult to calculate. It is surprising at first that the parameter of the geometric distribution can be changed by a constant multiplier in (6.84) and (6.85) (i.e., πi could be replaced with λi) and the solution does not change; the important quantity is the relative values of πi/µi from one value of i to another rather than the absolute value.

In order to find λi (and this is important, since it says how quickly the system is doing its work), note that λi = µi Pr{mi > 0}. Solving for Pr{mi > 0} requires finding the constant A′ in (6.84). In fact, the major difference between open and closed networks is that the relevant constants for closed networks are tedious to calculate (even by computer) for large networks and large M.

6.8 Semi-Markov processes

Semi-Markov processes are generalizations of Markov processes in which the time intervals between transitions have an arbitrary distribution rather than an exponential distribution. To be specific, there is an embedded Markov chain, {Xn ; n ≥ 0}, with a finite or countably infinite state space, and a sequence {Un ; n ≥ 1} of holding intervals between state transitions. The epochs at which state transitions occur are then given, for n ≥ 1, as Sn = Σ_{m=1}^{n} Um. The process starts at time 0 with S0 defined to be 0. The semi-Markov process is then the continuous-time process {X(t) ; t ≥ 0} where, for each n ≥ 0, X(t) = Xn for t in the interval Sn ≤ t < Sn+1. Initially, X0 = i where i is any given element of the state space.

The holding intervals {Un ; n ≥ 1} are nonnegative rv's that depend only on the current state Xn−1 and the next state Xn. More precisely, given Xn−1 = j and Xn = k, say, the interval Un is independent of {Um ; m < n} and independent of {Xm ; m < n−1}. The conditional distribution function for such an interval Un is denoted by Gjk(u), i.e.,

    Pr{Un ≤ u | Xn−1 = j, Xn = k} = Gjk(u).                                (6.86)

The dependencies between the rv's {Xn ; n ≥ 0} and {Un ; n ≥ 1} are illustrated in Figure 6.17. The conditional mean of Un, conditional on Xn−1 = j, Xn = k, is denoted U(j, k), i.e.,

    U(j, k) = E[Un | Xn−1 = j, Xn = k] = ∫_{u≥0} [1 − Gjk(u)] du.          (6.87)

A semi-Markov process evolves in essentially the same way as a Markov process. Given an initial state, X0 = i at time 0, a new state X1 = j is selected according to the embedded chain with probability Pij. Then U1 = S1 is selected using the distribution Gij(u). Next a new state X2 = k is chosen according to the probability Pjk; then, given X1 = j and


Figure 6.17: The statistical dependencies between the rv's of a semi-Markov process. Each holding interval Un, conditional on the current state Xn−1 and next state Xn, is independent of all other states and holding intervals. Note that, conditional on Xn, the holding intervals Un, Un−1, . . . are statistically independent of Un+1, Xn+2, . . . .

X2 = k, the interval U2 is selected with distribution function Gjk(u). Successive state transitions and transition times are chosen in the same way.

The steady-state behavior of semi-Markov processes can be analyzed in virtually the same way as Markov processes. We outline this in what follows, and often omit proofs where they are the same as the corresponding proof for Markov processes. First, since the holding intervals, Un, are rv's, the transition epochs, Sn = Σ_{m=1}^{n} Um, are also rv's. The following lemma then follows in the same way as Lemma 6.2.1 for Markov processes.

Lemma 6.8.1. Let Mi(t) be the number of transitions in a semi-Markov process in the interval (0, t] for some given initial state X0 = i. Then lim_{t→∞} Mi(t) = ∞ WP1.

In what follows, we assume that the embedded Markov chain is irreducible and positive-recurrent. We want to find the limiting fraction of time that the process spends in any given state, say j. We will find that this limit exists WP1, and will find that it depends only on the steady-state probabilities of the embedded Markov chain and on the expected holding interval in each state. This is given by

    U(j) = E[Un | Xn−1 = j] = Σ_k Pjk E[Un | Xn−1 = j, Xn = k] = Σ_k Pjk U(j, k),    (6.88)

where U(j, k) is given in (6.87). The steady-state probabilities {πi ; i ≥ 0} for the embedded chain tell us the fraction of transitions that enter any given state i. Since U(i) is the expected holding interval in i per transition into i, we would guess that the fraction of time spent in state i should be proportional to πi U(i). Normalizing, we would guess that the time-average probability of being in state j should be

    pj = πj U(j) / Σ_k πk U(k).                                            (6.89)

Identifying the mean holding interval, U(j), with 1/νj, this is the same result that we established for the Markov process case. Using the same arguments, we find this is valid for the semi-Markov case. It is valid both in the conventional case where each pj is positive and Σ_j pj = 1, and also in the case where Σ_k πk U(k) = ∞, where each pj = 0. The analysis is based on the fact that successive transitions to some given state, say j, given X0 = i, form a delayed renewal process.


Lemma 6.8.2. Consider a semi-Markov process with an irreducible recurrent embedded chain {Xn ; n ≥ 0}. Given X0 = i, let {Mij(t) ; t ≥ 0} be the number of transitions into a given state j in the interval (0, t]. Then {Mij(t) ; t ≥ 0} is a delayed renewal process (or, if j = i, is an ordinary renewal process).

This is the same as Lemma 6.2.2, but it is not quite so obvious that successive intervals between visits to state j are statistically independent. This can be seen, however, from Figure 6.17, which makes it clear that, given Xn = j, the future holding intervals, Un+1, Un+2, . . . , are independent of the past intervals Un, Un−1, . . . .

Next, using the same renewal reward process as in Lemma 6.2.3, assigning reward 1 whenever X(t) = j, we define Wn as the interval between the (n−1)st and the nth entry to state j and get the following lemma:

Lemma 6.8.3. Consider a semi-Markov process with an irreducible, recurrent, embedded Markov chain starting in X0 = i. Then with probability 1, the limiting time-average in state j is given by

    pj(i) = U(j) / W(j).

This lemma has omitted any assertion about the limiting ensemble probability of state j, i.e., lim_{t→∞} Pr{X(t) = j}. This follows easily from Blackwell's theorem, but depends on whether the successive entries to state j, i.e., {Wn ; n ≥ 1}, are arithmetic or non-arithmetic. This is explored in Exercise 6.33. The lemma shows (as expected) that the limiting time-average in each state is independent of the starting state, so we henceforth replace pj(i) with pj.

Next, let Mi(t) be the total number of transitions in the semi-Markov process up to and including time t, given X0 = i. This is not a renewal counting process, but, as with Markov processes, it provides a way to combine the time-average results for all states j. The following theorem is the same as that for Markov processes, except for the omission of ensemble average results.

Theorem 6.8.1. Consider a semi-Markov process with an irreducible, positive-recurrent, embedded Markov chain. Let {πj ; j ≥ 0} be the steady-state probabilities of the embedded chain and let X0 = i be the starting state. Then, with probability 1, the limiting time-average fraction of time spent in any arbitrary state j is given by

    pj = πj U(j) / Σ_k πk U(k).                                            (6.90)

The expected time between returns to state j is

    W(j) = Σ_k πk U(k) / πj,                                               (6.91)

and the rate at which transitions take place is independent of X0 and given by

    lim_{t→∞} Mi(t)/t = 1 / Σ_k πk U(k)     WP1.                           (6.92)
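Theorem 6.8.1 can be illustrated with a small numeric example. The sketch below (not from the text) uses a hypothetical two-state semi-Markov process whose embedded chain alternates deterministically (P01 = P10 = 1), with illustration-only mean holding times:

```python
# Sketch: a hypothetical 2-state semi-Markov process; the embedded chain
# alternates deterministically, so its steady state is pi = (1/2, 1/2).
pi = [0.5, 0.5]
Ubar = [2.0, 1.0]                      # mean holding times U(0), U(1)

denom = sum(pi[k] * Ubar[k] for k in range(2))
p = [pi[j] * Ubar[j] / denom for j in range(2)]   # time-average fractions, (6.90)
W = [denom / pi[j] for j in range(2)]             # mean return times, (6.91)
rate = 1 / denom                                  # overall transition rate, (6.92)

assert abs(p[0] - 2/3) < 1e-12 and abs(W[0] - 3.0) < 1e-12
# Sanity: with deterministic holding times the sample path spends exactly
# 2 of every 3 time units in state 0, matching p[0].
```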


For a semi-Markov process, knowing the steady-state probability of X(t) = j for large t does not completely specify the steady-state behavior. Another important steady-state question is to determine the fraction of time involved in i to j transitions. To make this notion precise, define Y(t) as the residual time until the next transition after time t (i.e., t + Y(t) is the epoch of the next transition after time t). We want to determine the fraction of time t over which X(t) = i and X(t + Y(t)) = j. Equivalently, for a non-arithmetic process, we want to determine Pr{X(t) = i, X(t + Y(t)) = j} in the limit as t → ∞. Call this limit Q(i, j).

Consider a renewal process, starting in state i and with renewals on transitions to state i. Define a reward R(t) = 1 for X(t) = i, X(t + Y(t)) = j and R(t) = 0 otherwise (see Figure 6.18). That is, for each n such that X(Sn) = i and X(Sn+1) = j, R(t) = 1 for Sn ≤ t < Sn+1. The expected reward in an inter-renewal interval is then Pij U(i, j). It follows that Q(i, j) is given by

    Q(i, j) = lim_{t→∞} (1/t) ∫_0^t R(τ) dτ = Pij U(i, j) / W(i) = pi Pij U(i, j) / U(i).    (6.93)
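As a consistency check on (6.93), note that Σ_j Pij U(i, j) = U(i) implies Σ_j Q(i, j) = pi, so the Q(i, j) sum to 1 over all i, j. A numeric sketch (not from the text) with a hypothetical two-state semi-Markov process, all parameters illustration-only:

```python
# Sketch: compute Q(i, j) of (6.93) for a hypothetical 2-state semi-Markov
# process and check that the Q(i, j) sum to 1, with row sums equal to p_i.
P = [[0.5, 0.5], [1.0, 0.0]]         # embedded chain
pi = [2/3, 1/3]                      # solves pi = pi P with sum 1
U = [[1.0, 2.0], [3.0, 0.0]]         # mean holding times U(i, j)

Ubar = [sum(P[i][j] * U[i][j] for j in range(2)) for i in range(2)]  # (6.88)
denom = sum(pi[k] * Ubar[k] for k in range(2))
p = [pi[i] * Ubar[i] / denom for i in range(2)]                      # (6.90)

def Qfrac(i, j):
    """Fraction of time spent in an i-to-j transition interval, (6.93)."""
    return p[i] * P[i][j] * U[i][j] / Ubar[i]

assert abs(sum(Qfrac(i, j) for i in range(2) for j in range(2)) - 1.0) < 1e-12
assert all(abs(sum(Qfrac(i, j) for j in range(2)) - p[i]) < 1e-12 for i in range(2))
```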

Figure 6.18: The renewal-reward process for i to j transitions. The expected value of Un if Xn = i and Xn+1 = j is U(i, j) and the expected interval between entries to i is W(i).

6.8.1 Example: the M/G/1 queue

As one example of a semi-Markov chain, consider an M/G/1 queue. Rather than the usual interpretation in which the state of the system is the number of customers in the system, we view the state of the system as changing only at departure times; the new state at a departure time is the number of customers left behind by the departure. This state then remains fixed until the next departure. New customers still enter the system according to the Poisson arrival process, but these new customers are not considered as part of the state until the next departure time. The number of customers in the system at arrival epochs does not in general constitute a "state" for the system, since the age of the current service is also necessary as part of the statistical characterization of the process.

One purpose of this example is to illustrate that it is often more convenient to visualize the transition interval Un = Sn − Sn−1 as being chosen first and the new state Xn as being chosen second rather than choosing the state first and the transition time second. For the M/G/1 queue, first suppose that the state is some i > 0. In this case, service begins on


the next customer immediately after the old customer departs. Thus Un+1, conditional on Xn = i for i > 0, has the distribution of the service time, say G(u). The mean interval until a state transition occurs is

    U(i) = ∫_0^∞ [1 − G(u)] du ;   i > 0.                                  (6.94)

Given the interval u for a transition from state i > 0, the number of arrivals in that period is a Poisson random variable with mean λu, where λ is the Poisson arrival rate. Since the next state j is the old state i, plus the number of new arrivals, minus the single departure,

    Pr{Xn+1 = j | Xn = i, Un+1 = u} = (λu)^{j−i+1} exp(−λu) / (j−i+1)!     (6.95)

for j ≥ i − 1. For j < i − 1, the probability above is 0. The unconditional probability Pij of a transition from i to j can then be found by multiplying the right side of (6.95) by the probability density g(u) of the service time and integrating over u:

    Pij = ∫_0^∞ g(u) (λu)^{j−i+1} exp(−λu) / (j−i+1)! du ;   j ≥ i−1,  i > 0.    (6.96)

For the case i = 0, the server must wait until the next arrival before starting service. Thus the expected time from entering the empty state until a service completion is

    U(0) = 1/λ + ∫_0^∞ [1 − G(u)] du.                                      (6.97)

We can evaluate P0j by observing that the departure of that first arrival leaves j customers in the system if and only if j customers arrive during the service time of that first customer; i.e., the new state doesn't depend on how long the server waits for a new customer to serve, but only on the arrivals while that customer is being served. Letting g(u) be the density of the service time,

    P0j = ∫_0^∞ g(u) (λu)^j exp(−λu) / j! du ;   j ≥ 0.                    (6.98)
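For a service distribution concentrated at a single point D (deterministic service), the integrals in (6.96) and (6.98) collapse to Poisson probabilities with mean λD, which makes the embedded chain easy to tabulate. A sketch (not from the text) with hypothetical values of λ and D; the final loop checks that each row of the transition matrix sums to 1 (up to truncation):

```python
import math

# Sketch: embedded-chain transition probabilities for an M/G/1 queue with
# deterministic service time D, so g(u) is a point mass at D and (6.96),
# (6.98) reduce to Poisson probabilities with mean lam*D. Values hypothetical.
lam, D = 0.8, 1.0

def P(i, j):
    if i == 0:
        return math.exp(-lam*D) * (lam*D)**j / math.factorial(j)       # (6.98)
    if j < i - 1:
        return 0.0
    n = j - i + 1                                                      # arrivals
    return math.exp(-lam*D) * (lam*D)**n / math.factorial(n)           # (6.96)

for i in range(3):
    row = sum(P(i, j) for j in range(60))   # truncated row sum
    assert abs(row - 1.0) < 1e-9
```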

6.9 Summary

We have seen that Markov processes with countable state spaces are remarkably similar to Markov chains with countable state spaces, and throughout the chapter, we frequently made use of both the embedded chain corresponding to the process and of the sampled-time approximation to the process.

For irreducible processes, the steady-state equations, (6.23) and Σ_i pi νi = 1, were found to specify the steady-state probabilities, pi, which have significance both as time-averages and as limiting probabilities. If the transition rates νi are bounded, then the sampled-time approximation exists and has the same steady-state probabilities as the Markov process itself. If the transition rates νi are unbounded but Σ_i pi νi < ∞, then the embedded chain


6.10 Exercises

Exercise 6.1. Consider an M/M/1 queue as represented in Figure 6.4. Assume throughout that X0 = i where i > 0. The purpose of this exercise is to understand the relationship between the holding interval until the next state transition and the interval until the next arrival to the M/M/1 queue. Your explanations in the following parts can and should be very brief.

a) Explain why the expected holding interval E[U1 | X0 = i] until the next state transition is 1/(λ + µ).

b) Explain why the expected holding interval U1, conditional on X0 = i and X1 = i + 1, is

    E[U1 | X0 = i, X1 = i + 1] = 1/(λ + µ).

Show that E[U1 | X0 = i, X1 = i − 1] is the same.

c) Let V be the time of the first arrival after time 0 (this may occur either before or after the time W of the first departure.) Show that

    E[V | X0 = i, X1 = i + 1] = 1/(λ + µ)
    E[V | X0 = i, X1 = i − 1] = 1/(λ + µ) + 1/λ.

Hint: In the second equation, use the memorylessness of the exponential rv and the fact that V under this condition is the time to the first departure plus the remaining time to an arrival.

d) Use your solution to part c) plus the probability of moving up or down in the Markov chain to show that E[V] = 1/λ. (Note: you already know that E[V] = 1/λ. The purpose here is to show that your solution to part c) is consistent with that fact.)

Exercise 6.2. Consider a Markov process for which the embedded Markov chain is a birth-death chain with transition probabilities P_{i,i+1} = 2/5 for all i ≥ 0, P_{i,i−1} = 3/5 for all i ≥ 1, P01 = 1, and Pij = 0 otherwise.

a) Find the steady-state probabilities {πi ; i ≥ 0} for the embedded chain.

b) Assume that the transition rate νi out of state i, for i ≥ 0, is given by νi = 2^i. Find the transition rates qij between states and find the steady-state probabilities pi for the Markov process. Explain heuristically why πi ≠ pi.

c) Explain why there is no sampled-time approximation for this process. Then truncate the embedded chain to states 0 to m and find the steady-state probabilities for the sampled-time approximation to the truncated process.

d) Show that as m → ∞, the steady-state probabilities for the sequence of sampled-time approximations approaches the probabilities pi in part b).


Exercise6.3. ConsideraMarkovprocessforwhichtheembeddedMarkovchainisabirth-deathchainwithtransitionprobabilitiesP i,i+1 = 2/5foralli≥1,P i,i−1 = 3/5foralli≥1,P 01 =1,andP ij =0otherwise.a)Findthesteady-stateprobabilitiesπi;i≥0fortheembeddedchain.b) Assumethatthetransitionrateoutof statei,fori≥0,isgivenbyν i = 2−i. Findthetransitionratesq ijbetweenstatesandshowthatthere isnoprobabilityvectorsolution pi;i≥0to(6.23).e) Argue that the expected time between visits to any given state i is infinite. Find theexpected number of transitions between visits to any given state i. Argue that, startingfromanystatei,aneventualreturntostateioccurswithprobability1.f)Considerthesampled-timeapproximationof thisprocesswithδ =1. Drawthegraphof theresultingMarkovchainandarguewhy itmustbenull-recurrent.Exercise6.4. ConsidertheMarkovprocessfortheM/M/1queue,asgiveninFigure6.4.a) Find thesteadystate processprobabilities(as a functionof ρ=λ/µ) from (6.15)andalsoasthesolutionto(6.23). Verifythatthetwosolutionsarethesame.b)Fortheremainingpartsof theexercise,assumethatρ= 0.01,thusensuring(foraidingintuition)thatstates0and1aremuchmoreprobablethantheotherstates. Assumethattheprocesshasbeenrunningforaverylongtimeandisinsteadystate. Explaininyourownwords the diff erence between π1 (the steady-state probability of state 1 in the embeddedchain)and p1 (thesteady-stateprobabilitythattheprocess is instate1. Moreexplicitly,whatexperimentscouldyouperform(repeatedly)ontheprocesstomeasureπ1 and p1.c)Nowsupposeyouwanttostarttheprocessinsteadystate. Showthatitisimpossibletochooseinitialprobabilitiessothatboththeprocessandtheembeddedchainstartinsteadystate. Whichversionof steadystate isclosesttoyour intuitiveview? 
(There is no correct answer here, but it is important to realize that the notion of steady state is not quite as simple as you might imagine.)
d) Let M(t) be the number of transitions (counting both arrivals and departures) that take place by time t in this Markov process and assume that the embedded Markov chain starts in steady state at time 0. Let U_1, U_2, . . . , be the sequence of holding intervals between transitions (with U_1 being the time to the first transition). Show that these rv's are identically distributed. Show by example that they are not independent (i.e., M(t) is not a renewal process).

Exercise 6.5. Consider the Markov process illustrated below. The transitions are labelled by the rate q_ij at which those transitions occur. The process can be viewed as a single server queue where arrivals become increasingly discouraged as the queue lengthens. The word time-average below refers to the limiting time-average over each sample-path of the process, except for a set of sample paths of probability 0.

[Figure: birth-death process with states 0, 1, 2, 3, 4, . . . ; upward rates q_i,i+1 = λ/(i+1), i.e., λ, λ/2, λ/3, λ/4, . . . , and downward rate q_i,i−1 = µ for each i ≥ 1.]
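Before attacking the parts below analytically, the chain can be sanity-checked numerically: truncate it, solve the stationary equations, and confirm detailed balance. A sketch (the values λ = µ = 1 and the truncation level N are illustrative assumptions, not part of the exercise):

```python
import numpy as np

# Truncated generator for the discouraged-arrivals chain of Exercise 6.5:
# q[i, i+1] = lam / (i + 1), q[i, i-1] = mu  (illustrative lam, mu).
lam, mu, N = 1.0, 1.0, 60
Q = np.zeros((N + 1, N + 1))
for i in range(N):
    Q[i, i + 1] = lam / (i + 1)
    Q[i + 1, i] = mu
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve p Q = 0 with sum(p) = 1 by replacing one balance equation
# with the normalization condition.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(N + 1)
b[-1] = 1.0
p = np.linalg.solve(A, b)

# Detailed balance p_i q_{i,i+1} = p_{i+1} q_{i+1,i} should hold for
# this birth-death chain.
balance_gap = max(abs(p[i] * Q[i, i + 1] - p[i + 1] * Q[i + 1, i])
                  for i in range(N))
```

Because the chain is birth-death, the numerically solved vector must satisfy detailed balance up to round-off, which is a useful check on the closed form asked for in part a).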


a) Find the time-average fraction of time p_i spent in each state i > 0 in terms of p_0 and then solve for p_0. Hint: First find an equation relating p_i to p_i+1 for each i. It also may help to recall the power series expansion of e^x.
b) Find a closed form solution to Σ_i p_i ν_i where ν_i is the departure rate from state i. Show that the process is positive recurrent for all choices of λ > 0 and µ > 0 and explain intuitively why this must be so.
c) For the embedded Markov chain corresponding to this process, find the steady-state probabilities π_i for each i ≥ 0 and the transition probabilities P_ij for each i, j.
d) For each i, find both the time-average interval and the time-average number of overall state transitions between successive visits to i.

Exercise 6.6. (Detail from proof of Theorem 6.2.1)
a) Let U_n be the nth holding interval for a Markov process starting in steady state, with Pr{X_0 = i} = π_i. Show that E[U_n] = Σ_k π_k/ν_k for each integer n.
b) Let S_n be the epoch of the nth transition. Show that Pr{S_n ≥ nβ} ≤ Σ_k π_k/(ν_k β) for all β > 0.
c) Let M(t) be the number of transitions of the Markov process up to time t, given that X_0 is in steady state. Show that Pr{M(nβ) ≥ n} > 1 − Σ_k π_k/(ν_k β).
d) Show that if Σ_k π_k/ν_k is finite, then lim_{t→∞} M(t)/t = 0 WP1 is impossible. (Note that this is equivalent to showing that lim_{t→∞} M(t)/t = 0 WP1 implies Σ_k π_k/ν_k = ∞.)
e) Let M_i(t) be the number of transitions by time t starting in state i. Show that if Σ_k π_k/ν_k is finite, then lim_{t→∞} M_i(t)/t = 0 WP1 is impossible.

Exercise 6.7. a) Consider the process in the figure below. The process starts at X(0) = 1, and for all i ≥ 1, P_i,i+1 = 1 and ν_i = i^2. Let S_n be the epoch when the nth transition occurs. Show that

E[S_n] = Σ_{i=1}^n i^−2 < 2   for all n.

Hint: Upper bound the sum from i = 2 by integrating x^−2 from x = 1.

[Figure: states 1, 2, 3, 4, . . . with departure rates ν_1 = 1, ν_2 = 4, ν_3 = 9, ν_4 = 16, . . . ; each transition goes to the next higher state.]
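The claim in part a) that the partial sums Σ_{i=1}^n i^−2 never reach 2 is easy to check numerically; a small sketch (the cutoff 10^5 is arbitrary):

```python
# Partial sums of E[S_n] = sum_{i=1}^n i**-2 for Exercise 6.7 a).
partial = 0.0
sums = []
for i in range(1, 100001):
    partial += i ** -2
    sums.append(partial)

# Every partial sum stays below 2, consistent with the integral bound
# 1 + integral of x**-2 from 1 to infinity = 2; the limit is pi**2/6.
max_sum = sums[-1]
```

The sums creep toward π²/6 ≈ 1.645, well below the bound of 2 used in part b).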

b) Use the Markov inequality to show that Pr{S_n > 4} ≤ 1/2 for all n. Show that the probability of an infinite number of transitions by time 4 is at least 1/2.

Exercise 6.8. a) Consider a Markov process with the set of states {0, 1, . . .} in which the transition rates q_ij between states are given by q_i,i+1 = (3/5)2^i for i ≥ 0, q_i,i−1 = (2/5)2^i


for i ≥ 1, and q_ij = 0 otherwise. Find the transition rate ν_i out of state i for each i ≥ 0 and find the transition probabilities P_ij for the embedded Markov chain.
b) Find a solution {p_i; i ≥ 0} with Σ_i p_i = 1 to (6.23).
c) Show that all states of the embedded Markov chain are transient.

d) Explain in your own words why your solution to part b) is not in any sense a set of steady-state probabilities.

Exercise 6.9. Let q_i,i+1 = 2^{i−1} for all i ≥ 0 and let q_i,i−1 = 2^{i−1} for all i ≥ 1. All other transition rates are 0.
a) Solve the steady-state equations and show that p_i = 2^{−i−1} for all i ≥ 0.
b) Find the transition probabilities for the embedded Markov chain and show that the chain is null recurrent.
c) For any state i, consider the renewal process for which the Markov process starts in state i and renewals occur on each transition to state i. Show that, for each i ≥ 1, the expected inter-renewal interval is equal to 2. Hint: Use renewal-reward theory.
d) Show that the expected number of transitions between each entry into state i is infinite. Explain why this does not mean that an infinite number of transitions can occur in a finite time.

Exercise 6.10. a) Consider the two state Markov process of Example 6.3.1 with q_01 = λ and q_10 = µ. Find the eigenvalues and eigenvectors of the transition rate matrix [Q].
b) If [Q] has M distinct eigenvalues, the differential equation d[P(t)]/dt = [Q][P(t)] can be solved by the equation

[P(t)] = Σ_{i=1}^M ν_i e^{λ_i t} p_i^T,

where p_i and ν_i are the left and right eigenvectors of eigenvalue λ_i. Show that this equation gives the same solution as that given for Example 6.3.1.

Exercise 6.11. Consider the three state Markov process below; the number given on edge (i, j) is q_ij, the transition rate from i to j. Assume that the process is in steady state.

[Figure: three-state Markov process on states 1, 2, 3, with each edge (i, j) labelled by its transition rate q_ij.]


a) Is this process reversible?
b) Find p_i, the time-average fraction of time spent in state i for each i.
c) Given that the process is in state i at time t, find the mean delay from t until the process leaves state i.
d) Find π_i, the time-average fraction of all transitions that go into state i for each i.
e) Suppose the process is in steady state at time t. Find the steady state probability that the next state to be entered is state 1.
f) Given that the process is in state 1 at time t, find the mean delay until the process first returns to state 1.
g) Consider an arbitrary irreducible finite-state Markov process in which q_ij = q_ji for all i, j. Either show that such a process is reversible or find a counterexample.

Exercise 6.12. a) Consider an M/M/1 queueing system with arrival rate λ, service rate µ, µ > λ. Assume that the queue is in steady state. Given that an arrival occurs at time t, find the probability that the system is in state i immediately after time t.
b) Assuming FCFS service, and conditional on i customers in the system immediately after the above arrival, characterize the time until the above customer departs as a sum of random variables.
c) Find the unconditional probability density of the time until the above customer departs. Hint: You know (from splitting a Poisson process) that the sum of a geometrically distributed number of IID exponentially distributed random variables is exponentially distributed. Use the same idea here.

Exercise 6.13. a) Consider an M/M/1 queue in steady state. Assume ρ = λ/µ < 1. Find the probability Q(i, j) for i ≥ j > 0 that the system is in state i at time t and that i − j departures occur before the next arrival.
b) Find the PMF of the state immediately before the first arrival after time t.
c) There is a well-known queueing principle called PASTA, standing for "Poisson arrivals see time-averages". Given your results above, give a more precise statement of what that principle means in the case of the M/M/1 queue.

Exercise 6.14. A small bookie shop has room for at most two customers. Potential customers arrive at a Poisson rate of 10 customers per hour; they enter if there is room and are turned away, never to return, otherwise. The bookie serves the admitted customers in order, requiring an exponentially distributed time of mean 4 minutes per customer.
a) Find the steady-state distribution of number of customers in the shop.
b) Find the rate at which potential customers are turned away.
c) Suppose the bookie hires an assistant; the bookie and assistant, working together, now serve each customer in an exponentially distributed time of mean 2 minutes, but there is


only room for one customer (i.e., the customer being served) in the shop. Find the new rate at which customers are turned away.
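Both shop configurations are finite birth-death chains, so the steady-state distribution is a truncated geometric and the turn-away rate is λ times the probability of a full shop. A computational sketch, assuming the M/M/1/2 model for parts a) and b) and an M/M/1/1 model for part c) (rates in customers per hour):

```python
def truncated_mm1_blocking(lam, mu, capacity):
    """Steady-state PMF and turn-away rate for an M/M/1 queue truncated
    at `capacity` customers (arrivals finding a full system are lost)."""
    weights = [(lam / mu) ** n for n in range(capacity + 1)]
    total = sum(weights)
    p = [w / total for w in weights]
    return p, lam * p[-1]          # turn-away rate = lam * Pr{full}

# Parts a), b): one server (mean service 4 min -> mu = 15/hr), room for 2.
p_ab, lost_ab = truncated_mm1_blocking(lam=10, mu=15, capacity=2)

# Part c): bookie plus assistant modelled as one faster server
# (mean 2 min -> mu = 30/hr), room only for the customer in service.
p_c, lost_c = truncated_mm1_blocking(lam=10, mu=30, capacity=1)
```

With ρ = 2/3 in the first case the truncated geometric gives a full-shop probability of 4/19, so about 2.1 potential customers per hour are lost; halving the mean service time but also the capacity changes the loss rate, which is the comparison part c) is after.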

Exercise 6.15. This exercise explores a continuous time version of a simple branching process.
Consider a population of primitive organisms which do nothing but procreate and die. In particular, the population starts at time 0 with one organism. This organism has an exponentially distributed lifespan T_0 with rate µ (i.e., Pr{T_0 ≥ τ} = e^−µτ). While this organism is alive, it gives birth to new organisms according to a Poisson process of rate λ. Each of these new organisms, while alive, gives birth to yet other organisms. The lifespan and birth rate for each of these new organisms are independently and identically distributed to those of the first organism. All these and subsequent organisms give birth and die in the same way, again independently of all other organisms.
a) Let X(t) be the number of (live) organisms in the population at time t. Show that {X(t); t ≥ 0} is a Markov process and specify the transition rates between the states.
b) Find the embedded Markov chain {X_n; n ≥ 0} corresponding to the Markov process in part a). Find the transition probabilities for this Markov chain.
c) Explain why the Markov process and Markov chain above are not irreducible. Note: The majority of results you have seen for Markov processes assume the process is irreducible, so be careful not to use those results in this exercise.
d) For purposes of analysis, add an additional transition of rate λ from state 0 to state 1. Show that the Markov process and the embedded chain are irreducible. Find the values of λ and µ for which the modified chain is positive recurrent, null-recurrent, and transient.
e) Assume that λ < µ. Find the steady state process probabilities for the modified Markov process.
f) Find the mean recurrence time between visits to state 0 for the modified Markov process.
g) Find the mean time T for the population in the original Markov process to die out. Note: We have seen before that removing transitions from a Markov chain or process to create a trapping state can make it easier to find mean recurrence times. This is an example of the opposite, where adding an exit from a trapping state makes it easy to find the recurrence time.
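Parts a) and b) lead to a birth-death description of the population; taking that description as an assumed answer (each jump of the embedded chain is a birth with probability λ/(λ+µ) whenever the population is positive), extinction for λ < µ can be illustrated by simulating the embedded jump chain. The rates, trial count, and step cap below are arbitrary choices:

```python
import random

def extinct_within(p_up, max_steps, rng):
    """Follow the embedded jump chain of the branching process from one
    organism: each jump is a birth w.p. p_up, else a death. Returns True
    if the population hits 0 within max_steps jumps."""
    n = 1
    for _ in range(max_steps):
        n += 1 if rng.random() < p_up else -1
        if n == 0:
            return True
    return False

lam, mu = 1.0, 2.0                      # illustrative rates with lam < mu
p_up = lam / (lam + mu)                 # assumed birth probability per jump
rng = random.Random(0)
trials = 2000
frac_extinct = sum(extinct_within(p_up, 10_000, rng)
                   for _ in range(trials)) / trials
```

With λ < µ the population dies out with probability 1, so the estimate should be essentially 1; trying λ > µ instead shows a positive survival probability, which connects to the transience discussion in part d).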

Exercise 6.16. Consider the job sharing computer system illustrated below. Incoming jobs arrive from the left in a Poisson stream. Each job, independently of other jobs, requires pre-processing in system 1 with probability Q. Jobs in system 1 are served FCFS and the service times for successive jobs entering system 1 are IID with an exponential distribution of mean 1/µ_1. The jobs entering system 2 are also served FCFS and successive service times are IID with an exponential distribution of mean 1/µ_2. The service times in the two systems are independent of each other and of the arrival times. Assume that µ_1 > λQ and that µ_2 > λ. Assume that the combined system is in steady state.


[Figure: jobs arrive at rate λ; with probability Q a job passes through System 1 (rate µ_1) and then System 2 (rate µ_2); with probability 1 − Q it goes directly to System 2.]

a) Is the input to system 1 Poisson? Explain.
b) Are each of the two input processes coming into system 2 Poisson? Explain.
d) Give the joint steady-state PMF of the number of jobs in the two systems. Explain briefly.
e) What is the probability that the first job to leave system 1 after time t is the same as the first job that entered the entire system after time t?
f) What is the probability that the first job to leave system 2 after time t both passed through system 1 and arrived at system 1 after time t?

Exercise 6.17. Consider the following combined queueing system. The first queue system is M/M/1 with service rate µ_1. The second queue system has IID exponentially distributed service times with rate µ_2. Each departure from system 1 independently enters system 2 with probability Q_1 and leaves the combined system with probability 1 − Q_1. System 2 has an additional Poisson input of rate λ_2, independent of inputs and outputs from the first system. Each departure from the second system independently leaves the combined system with probability Q_2 and re-enters system 2 with probability 1 − Q_2. For parts a), b), c) assume that Q_2 = 1 (i.e., there is no feedback).

[Figure: System 1 (rate µ_1) receives Poisson arrivals at rate λ_1; each departure enters System 2 (rate µ_2) with probability Q_1 or leaves with probability 1 − Q_1. System 2 also receives Poisson arrivals at rate λ_2; each of its departures leaves with probability Q_2 or re-enters System 2 with probability 1 − Q_2 (part d)).]

a) Characterize the process of departures from system 1 that enter system 2 and characterize the overall process of arrivals to system 2.
b) Assuming FCFS service in each system, find the steady state distribution of time that a customer spends in each system.
c) For a customer that goes through both systems, show why the time in each system is independent of that in the other; find the distribution of the combined system time for such a customer.
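For parts a)–c) (Q_2 = 1) the pair of queue lengths forms a Jackson-type network, so one expects a product-form joint PMF p(n_1, n_2) = (1 − ρ_1)ρ_1^n1 (1 − ρ_2)ρ_2^n2 with ρ_1 = λ_1/µ_1 and ρ_2 = (λ_1 Q_1 + λ_2)/µ_2. That is an expectation to be verified, not a given; the sketch below checks the global balance equations numerically at interior states of a truncated grid, with illustrative rates:

```python
import numpy as np

lam1, lam2, mu1, mu2, Q1 = 1.0, 0.5, 3.0, 4.0, 0.7   # illustrative rates
rho1, rho2 = lam1 / mu1, (lam1 * Q1 + lam2) / mu2

N = 30
n = np.arange(N)
p = np.outer((1 - rho1) * rho1 ** n,
             (1 - rho2) * rho2 ** n)                  # candidate joint PMF

# Global balance at interior states (i, j): rate in = rate out, with
#   (i-1,j) --lam1-->, (i,j-1) --lam2-->, (i+1,j-1) --mu1*Q1-->,
#   (i+1,j) --mu1*(1-Q1)-->, (i,j+1) --mu2--> (i,j);
#   out of (i,j): lam1 + lam2 + mu1 + mu2.
worst = 0.0
for i in range(1, N - 1):
    for j in range(1, N - 1):
        inflow = (p[i - 1, j] * lam1 + p[i, j - 1] * lam2
                  + p[i + 1, j - 1] * mu1 * Q1
                  + p[i + 1, j] * mu1 * (1 - Q1)
                  + p[i, j + 1] * mu2)
        outflow = p[i, j] * (lam1 + lam2 + mu1 + mu2)
        worst = max(worst, abs(inflow - outflow) / outflow)
```

The relative balance residual is at round-off level, which is consistent with (but of course does not replace) the independence argument asked for in part c).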

d) Now assume that Q_2 < 1. Is the departure process from the combined system Poisson? Which of the three input processes to system 2 are Poisson? Which of the input processes are independent? Explain your reasoning, but do not attempt formal proofs.

Exercise 6.18. Suppose a Markov chain with transition probabilities P_ij is reversible. Suppose we change the transition probabilities out of state 0 from {P_0j; j ≥ 0} to {P'_0j; j ≥ 0} . . .


a) Find the mean time between busy periods (i.e., the time until a new arrival occurs after the system becomes empty).
b) Find the time-average fraction of time that the system is busy.
c) Find the mean duration, E[B], of a busy period. Hint: use a) and b).
d) Explain briefly why the customer that starts a busy period remains in the system for the entire busy period; use this to find the expected system time of a customer given that that customer arrives when the system is empty.
e) Is there any statistical dependence between the system time of a given customer (i.e., the time from the customer's arrival until departure) and the number of customers in the system when the given customer arrives?
f) Show that a customer's expected system time is equal to E[B]. Hint: Look carefully at your answers to d) and e).
g) Let C be the expected system time of a customer conditional on the service time X of that customer being 1. Find (in terms of C) the expected system time of a customer conditional on X = 2. (Hint: compare a customer with X = 2 to two customers with X = 1 each.) Repeat for arbitrary X = x.
h) Find the constant C. Hint: use f) and g); don't do any tedious calculations.

Exercise 6.21. Consider a queueing system with two classes of customers, type A and type B. Type A customers arrive according to a Poisson process of rate λ_A and customers of type B arrive according to an independent Poisson process of rate λ_B. The queue has a FCFS server with exponentially distributed IID service times of rate µ > λ_A + λ_B. Characterize the departure process of class A customers; explain carefully. Hint: Consider the combined arrival process and be judicious about how to select between A and B types of customers.

Exercise 6.22. Consider a pre-emptive resume last come first serve (LCFS) queueing system with two classes of customers. Type A customer arrivals are Poisson with rate λ_A and Type B customer arrivals are Poisson with rate λ_B. The service time for type A customers is exponential with rate µ_A and that for type B is exponential with rate µ_B. Each service time is independent of all other service times and of all arrival epochs.
Define the "state" of the system at time t by the string of customer types in the system at t, in order of arrival. Thus state AB means that the system contains two customers, one of type A and the other of type B; the type B customer arrived later, so is in service. The set

of possible states arising from transitions out of AB is as follows:
ABA if another type A arrives.
ABB if another type B arrives.
A if the customer in service (B) departs.
Note that whenever a customer completes service, the next most recently arrived resumes service, so the state changes by eliminating the final element in the string.
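The string-valued dynamics just described are easy to simulate directly; the sketch below does so and tallies the time-average PMF of the number in system, which the later parts of the exercise characterize analytically. The specific rates, seed, and run length are assumptions for illustration:

```python
import random

lamA, lamB, muA, muB = 0.3, 0.2, 1.0, 0.8   # illustrative rates
mu = {"A": muA, "B": muB}
rng = random.Random(1)

state = []                      # string of customer types, in arrival order
time_in_n = {}                  # total time spent with n customers present
for _ in range(500_000):
    # Competing exponentials: two arrival streams, plus service of the
    # most recent arrival (the last element of the string), if any.
    rates = [lamA, lamB] + ([mu[state[-1]]] if state else [])
    total = sum(rates)
    hold = rng.expovariate(total)
    n = len(state)
    time_in_n[n] = time_in_n.get(n, 0.0) + hold
    u = rng.random() * total
    if u < lamA:
        state.append("A")       # type A arrival preempts current service
    elif u < lamA + lamB:
        state.append("B")       # type B arrival preempts current service
    else:
        state.pop()             # customer in service (last arrival) departs

elapsed = sum(time_in_n.values())
q0_estimate = time_in_n.get(0, 0.0) / elapsed
```

With ρ_A = 0.3 and ρ_B = 0.25 here, the empty-system fraction should come out near 1 − (ρ_A + ρ_B) = 0.45, matching the geometric form that part e) asks you to derive.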


a) Draw a graph for the states of the process, showing all states with 2 or fewer customers and a couple of states with 3 customers (label the empty state as E). Draw an arrow, labelled by the rate, for each state transition. Explain why these are states in a Markov process.
b) Is this process reversible? Explain. Assume positive recurrence. Hint: If there is a transition from one state S to another state S', how is the number of transitions from S to S' related to the number from S' to S?
c) Characterize the process of type A departures from the system (i.e., are they Poisson?; do they form a renewal process?; at what rate do they depart?; etc.)
d) Express the steady-state probability Pr{A} of state A in terms of the probability of the empty state Pr{E}. Find the probability Pr{AB} and the probability Pr{ABBA} in terms of Pr{E}. Use the notation ρ_A = λ_A/µ_A and ρ_B = λ_B/µ_B.
e) Let Q_n be the probability of n customers in the system, as a function of Q_0 = Pr{E}. Show that Q_n = (1 − ρ)ρ^n where ρ = ρ_A + ρ_B.

Exercise 6.23. a) Generalize Exercise 6.22 to the case in which there are m types of customers, each with independent Poisson arrivals and each with independent exponential service times. Let λ_i and µ_i be the arrival rate and service rate respectively of the ith user. Let ρ_i = λ_i/µ_i and assume that ρ = ρ_1 + ρ_2 + · · · + ρ_m < 1. In particular, show, as before, that the probability of n customers in the system is Q_n = (1 − ρ)ρ^n for 0 ≤ n < ∞.
b) View the customers in part a) as a single type of customer with Poisson arrivals of rate λ = Σ_i λ_i and with a service density Σ_i (λ_i/λ) µ_i exp(−µ_i x). Show that the expected service time is ρ/λ. Note that you have shown that, if a service distribution can be represented as a weighted sum of exponentials, then the distribution of customers in the system for LCFS service is the same as for the M/M/1 queue with equal mean service time.

Exercise 6.24. Consider a k node Jackson type network with the modification that each node i has s servers rather than one server. Each server at i has an exponentially distributed service time with rate µ_i. The exogenous input rate to node i is ρ_i = λ_0 Q_0i and each output from i is switched to j with probability Q_ij and switched out of the system with probability Q_i0 (as in the text). Let λ_i, 1 ≤ i ≤ k, be the solution, for given λ_0, to

λ_j = Σ_{i=0}^k λ_i Q_ij;   1 ≤ j ≤ k,

and assume that λ_i < sµ_i, 1 ≤ i ≤ k. Show that the steady-state probability of state m is

Pr{m} = Π_{i=1}^k p_i(m_i),

where p_i(m_i) is the probability of state m_i in an (M, M, s) queue. Hint: simply extend the argument in the text to the multiple server case.
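The factor p_i(m_i) in the product is the stationary PMF of an M/M/s queue, which follows from the birth-death balance p(m)λ = p(m+1) min(m+1, s)µ. A sketch of that computation (the truncation level and sample rates are assumptions):

```python
def mms_pmf(lam, mu, s, trunc=200):
    """Stationary PMF of an M/M/s queue (requires lam < s*mu), truncated
    at `trunc` customers. Built from the birth-death balance equations:
    p[m] * lam = p[m+1] * min(m+1, s) * mu."""
    p = [1.0]
    for m in range(1, trunc + 1):
        p.append(p[-1] * lam / (min(m, s) * mu))
    total = sum(p)
    return [x / total for x in p]

# With s = 1 this reduces to the geometric M/M/1 PMF (1 - rho) * rho**m.
p1 = mms_pmf(lam=0.5, mu=1.0, s=1)
p3 = mms_pmf(lam=2.4, mu=1.0, s=3)
```

A product of such one-node PMFs over the k nodes is exactly the form Pr{m} claimed above; the exercise asks you to justify why the joint distribution factors this way.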


Exercise 6.25. Suppose a Markov process with the set of states A is reversible and has steady-state probabilities {p_i; i ∈ A}. Let B be a subset of A and assume that the process is changed by setting q_ij = 0 for all i ∈ B, j ∉ B. Assuming that the new process (starting in B and remaining in B) is irreducible, show that the new process is reversible and find its steady-state probabilities.
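A concrete instance to experiment with: truncate a reversible birth-death process (an M/M/1-type chain with illustrative rates) to B = {0, . . . , m} and renormalize; the conditional probabilities p_i/Σ_{j∈B} p_j should then satisfy detailed balance inside B, which is the general conclusion the exercise asks you to prove. A sketch:

```python
lam, mu, m = 1.0, 2.0, 6   # illustrative rates; truncation set B = {0..m}

# Steady-state probabilities of the original (untruncated) M/M/1 process.
rho = lam / mu
p = [(1 - rho) * rho ** i for i in range(m + 1)]

# Renormalize over B, as suggested by Exercise 6.25.
total = sum(p)
p_B = [x / total for x in p]

# Detailed balance within B: p_B[i] * q_{i,i+1} = p_B[i+1] * q_{i+1,i}.
gaps = [abs(p_B[i] * lam - p_B[i + 1] * mu) for i in range(m)]
```

Renormalizing cannot break detailed balance, since it scales both sides of each balance equation by the same constant; that observation is the heart of the proof.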

Exercise 6.26. Consider a queueing system with two classes of customers. Type A customer arrivals are Poisson with rate λ_A and Type B customer arrivals are Poisson with rate λ_B. The service time for type A customers is exponential with rate µ_A and that for type B is exponential with rate µ_B. Each service time is independent of all other service times and of all arrival epochs.
a) First assume there are infinitely many identical servers, and each new arrival immediately enters an idle server and begins service. Let the state of the system be (i, j) where i and j are the numbers of type A and B customers respectively in service. Draw a graph of the state transitions for i ≤ 2, j ≤ 2. Find the steady-state PMF, {p(i, j); i, j ≥ 0}, for the Markov process. Hint: Note that the type A and type B customers do not interact.
b) Assume for the rest of the exercise that there is some finite number m of servers. Customers who arrive when all servers are occupied are turned away. Find the steady-state PMF, {p(i, j); i, j ≥ 0, i + j ≤ m}, in terms of p(0, 0) for this Markov process. Hint: Combine part a) with the result of Exercise 6.25.
c) Let Q_n be the probability that there are n customers in service at some given time in steady state. Show that Q_n = p(0, 0)ρ^n/n! for 0 ≤ n ≤ m where ρ = ρ_A + ρ_B, ρ_A = λ_A/µ_A, and ρ_B = λ_B/µ_B. Solve for p(0, 0).

Exercise 6.27. a) Generalize Exercise 6.26 to the case in which there are K types of customers, each with independent Poisson arrivals and each with independent exponential service times. Let λ_k and µ_k be the arrival rate and service rate respectively for the kth user type, 1 ≤ k ≤ K. Let ρ_k = λ_k/µ_k and ρ = ρ_1 + ρ_2 + · · · + ρ_K. In particular, show, as before, that the probability of n customers in the system is Q_n = p(0, . . . , 0)ρ^n/n! for 0 ≤ n ≤ m.
b) View the customers in part a) as a single type of customer with Poisson arrivals of rate λ = Σ_k λ_k and with a service density Σ_k (λ_k/λ) µ_k exp(−µ_k x). Show that the expected service time is ρ/λ. Note that what you have shown is that, if a service distribution can be represented as a weighted sum of exponentials, then the distribution of customers in the system is the same as for the M/M/m,m queue with equal mean service time.

Exercise 6.28. Consider a sampled-time M/D/m/m queueing system. The arrival process is Bernoulli with probability λ << 1 of arrival in each time unit. There are m servers; each arrival enters a server if a server is not busy and otherwise the arrival is discarded. If an arrival enters a server, it keeps the server busy for d units of time and then departs; d is some integer constant and is the same for each server.
Let n, 0 ≤ n ≤ m be the number of customers in service at a given time and let x_i be the number of time units that the ith of those n customers (in order of arrival) has been


in service. Thus the state of the system can be taken as (n, x) = (n, x_1, x_2, . . . , x_n) where 0 ≤ n ≤ m and 1 ≤ x_1 < x_2 < . . . < x_n ≤ d.
Let A(n, x) denote the next state if the present state is (n, x) and a new arrival enters service. That is,

A(n, x) = (n + 1, 1, x_1 + 1, x_2 + 1, . . . , x_n + 1)      for n < m and x_n < d      (6.99)
A(n, x) = (n, 1, x_1 + 1, x_2 + 1, . . . , x_{n−1} + 1)      for n ≤ m and x_n = d.     (6.100)

That is, the new customer receives one unit of service by the next state time, and all the old customers receive one additional unit of service. If the oldest customer has received d units of service, then it leaves the system by the next state time. Note that it is possible for a customer with d units of service at the present time to leave the system and be replaced by an arrival at the present time (i.e., (6.100) with n = m, x_n = d). Let B(n, x) denote the next state if either no arrival occurs or if a new arrival is discarded.

B(n, x) = (n, x_1 + 1, x_2 + 1, . . . , x_n + 1)             for x_n < d                (6.101)
B(n, x) = (n − 1, x_1 + 1, x_2 + 1, . . . , x_{n−1} + 1)     for x_n = d.               (6.102)

a) Hypothesize that the backward chain for this system is also a sampled-time M/D/m/m queueing system, but that the state (n, x_1, . . . , x_n) (0 ≤ n ≤ m, 1 ≤ x_1 < x_2 < . . . < x_n ≤ d) has a different interpretation: n is the number of customers as before, but x_i is now the remaining service required by customer i. Explain how this hypothesis leads to the following steady-state equations:

λπ_{n,x} = (1 − λ)π_{A(n,x)};        n < m, x_n < d     (6.103)
λπ_{n,x} = λπ_{A(n,x)};              n ≤ m, x_n = d     (6.104)
(1 − λ)π_{n,x} = λπ_{B(n,x)};        n ≤ m, x_n = d     (6.105)
(1 − λ)π_{n,x} = (1 − λ)π_{B(n,x)};  n ≤ m, x_n < d.    (6.106)

b) Using this hypothesis, find π_{n,x} in terms of π_0, where π_0 is the probability of an empty system. Hint: Use (6.105) and (6.106); your answer should depend on n, but not x.
c) Verify that the above hypothesis is correct.
d) Find an expression for π_0.
e) Find an expression for the steady-state probability that an arriving customer is discarded.
b)Usingthishypothesis,findπn,x intermsof π0,whereπ0 istheprobabilityof anemptysystem. Hint: Use(6.105)and(6.106);youranswershoulddependonn,butnotx .c)Verifythattheabovehypothesisiscorrect.d)Findanexpressionforπ0.e)Findanexpressionforthesteady-stateprobabilitythatanarrivingcustomerisdiscarded.Exercise6.29. A taxi alternates between three locations. When it reaches location 1 itis equally likely to go next to either 2 or 3. When it reaches 2 it will next go to 1 withprobability1/3andto3withprobability2/3. From3italwaysgoesto1. Themeantimebetween locationsiand j aret12 =20,t13 =30,t23 =30. Assumetij =t ji).Findthe(limiting)probabilitythatthetaxi’smostrecentstopwasatlocationi,i= 1,2,3?Whatisthe(limiting)probabilitythatthetaxi isheadingfor location2?What fraction of time is the taxi traveling from 2 to 3. Note: Upon arrival at a locationthetaxiimmediatelydeparts.


Exercise 6.30. (Semi-Markov continuation of Exercise 6.5) a) Assume that the Markov process in Exercise 6.5 is changed in the following way: whenever the process enters state 0, the time spent before leaving state 0 is now a uniformly distributed rv, taking values from 0 to 2/λ. All other transitions remain the same. For this new process, determine whether the successive epochs of entry to state 0 form renewal epochs, whether the successive epochs of exit from state 0 form renewal epochs, and whether the successive entries to any other given state i form renewal epochs.
b) For each i, find both the time-average interval and the time-average number of overall state transitions between successive visits to i.
c) Is this modified process a Markov process in the sense that Pr{X(t) = i | X(τ) = j, X(s) = k} = Pr{X(t) = i | X(τ) = j} for all 0 < s < τ < t and all i, j, k? Explain.

Exercise 6.31. Consider an M/G/1 queueing system with Poisson arrivals of rate λ and expected service time E[X]. Let ρ = λE[X] and assume ρ < 1. Consider a semi-Markov process model of the M/G/1 queueing system in which transitions occur on departures from the queueing system and the state is the number of customers immediately following a departure.
a) Suppose a colleague has calculated the steady-state probabilities {π_i} of being in state i of the embedded Markov chain for each i ≥ 0. For each i ≥ 0, find the steady-state probability p_i that the semi-Markov process is in state i. Give your solution as a function of ρ, π_i, and p_0.
b) Calculate p_0 as a function of ρ.
c) Find π_i as a function of ρ and p_i.
d) Is p_i the same as the steady-state probability that the queueing system contains i customers at a given time? Explain carefully.

Exercise 6.32. Consider an M/G/1 queue in which the arrival rate is λ and the service time distribution is uniform (0, 2W) with λW < 1. Define a semi-Markov chain following the framework for the M/G/1 queue in Section 6.8.1.
a) Find P_0j; j ≥ 0.
b) Find P_ij for i > 0; j ≥ i − 1.

Exercise 6.33. Consider a semi-Markov process for which the embedded Markov chain is irreducible and positive-recurrent. Assume that the distribution of inter-renewal intervals for one state j is arithmetic with span d. Show that the distribution of inter-renewal intervals for all states is arithmetic with the same span.


MIT OpenCourseWare
http://ocw.mit.edu

6.262 Discrete Stochastic Processes

Spring 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.