Discrete Time Markov Chains, Limiting Distribution and Classification
Bo Friis Nielsen
DTU Informatics
02407 Stochastic Processes 3, September 19, 2017
Bo Friis Nielsen Limiting Distribution and Classification
Discrete time Markov chains

Today:
- Discrete time Markov chains: invariant probability distribution
- Classification of states
- Classification of chains

Next week:
- Poisson process

Two weeks from now:
- Birth and death processes
Regular Transition Probability Matrices

P = ‖P_ij‖, 0 ≤ i, j ≤ N

Regular: P^k > 0 (elementwise) for some k. In that case lim_{n→∞} P_ij^{(n)} = π_j.

Theorem 4.1 (page 168): Let P be a regular transition probability matrix on the states 0, 1, …, N. Then the limiting distribution π = (π_0, π_1, …, π_N) is the unique nonnegative solution of the equations

π_j = Σ_{k=0}^{N} π_k P_kj   (in matrix form, π = πP)

Σ_{k=0}^{N} π_k = 1   (π1 = 1, where 1 is a column vector of ones)
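Theorem 4.1 turns into a small linear solve: since π = πP together with Σ_k π_k = 1 determines π uniquely, we can replace one balance equation with the normalisation row. A minimal sketch assuming NumPy; the two-state matrix is a hypothetical example, not from the slides:

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P together with sum(pi) = 1 for a regular matrix P."""
    n = P.shape[0]
    # Balance equations (P^T - I) pi = 0; replace the last (redundant)
    # equation with the normalisation row [1, 1, ..., 1] pi = 1.
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Hypothetical two-state chain; both rows strictly positive, so P is regular.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = stationary_distribution(P)   # pi = (2/3, 1/3)
```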
Interpretation of the π_j's

- Limiting probabilities: lim_{n→∞} P_ij^{(n)} = π_j
- Long-term averages: lim_{m→∞} (1/m) Σ_{n=1}^{m} P_ij^{(n)} = π_j
- Stationary distribution: π = πP
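The long-term-average interpretation can be checked numerically: the Cesàro average (1/m) Σ_{n=1}^{m} P^n has rows approaching π. A sketch assuming NumPy, with a hypothetical two-state chain whose stationary distribution is (2/3, 1/3):

```python
import numpy as np
from numpy.linalg import matrix_power

# Hypothetical two-state regular chain; solving pi = pi P by hand
# gives pi = (2/3, 1/3).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

m = 500
# Cesaro average (1/m) * sum_{n=1}^{m} P^n; every row tends to pi.
avg = sum(matrix_power(P, n) for n in range(1, m + 1)) / m
```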
A Social Mobility Example

                     Son's class
                  Lower  Middle  Upper
Father's   Lower   0.40   0.50   0.10
class      Middle  0.05   0.70   0.25
           Upper   0.05   0.50   0.45

P^8 =
| 0.0772  0.6250  0.2978 |
| 0.0769  0.6250  0.2981 |
| 0.0769  0.6250  0.2981 |

π_0 = 0.40π_0 + 0.05π_1 + 0.05π_2
π_1 = 0.50π_0 + 0.70π_1 + 0.50π_2
π_2 = 0.10π_0 + 0.25π_1 + 0.45π_2
1 = π_0 + π_1 + π_2
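The numbers above can be reproduced directly: raise P to the 8th power and solve the balance equations. A sketch assuming NumPy:

```python
import numpy as np
from numpy.linalg import matrix_power

# Father-to-son transition matrix (rows/columns: Lower, Middle, Upper).
P = np.array([[0.40, 0.50, 0.10],
              [0.05, 0.70, 0.25],
              [0.05, 0.50, 0.45]])

P8 = matrix_power(P, 8)   # rows are already nearly identical

# Solve pi = pi P with sum(pi) = 1: keep two balance equations,
# append the normalisation row.
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
# pi is approximately (0.0769, 0.6250, 0.2981), matching the rows of P^8.
```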
Classification of Markov chain states

- States which cannot be left once entered: absorbing states
- States where return at some future time is certain: recurrent (persistent) states
  - The mean time to return can be:
    - finite: positive recurrent (non-null recurrent)
    - infinite: null recurrent
- States where return at some future time is uncertain: transient states
- States which can only be visited at certain time epochs: periodic states
Classification of States

- j is accessible from i if P_ij^{(n)} > 0 for some n
- If j is accessible from i and i is accessible from j, we say that the two states communicate
- Communicating states constitute equivalence classes (communication is an equivalence relation)
- If i communicates with j and j communicates with k, then i and k communicate
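Because communication is an equivalence relation, the classes can be computed from the reachability relation (accessibility in some number of steps, including n = 0). A sketch assuming NumPy; the 3-state chain is a hypothetical example:

```python
import numpy as np

def communicating_classes(P):
    """Partition the states of a chain with matrix P into communicating classes."""
    n = P.shape[0]
    # reach[i, j]: j is accessible from i (n = 0 steps allowed, hence the identity).
    reach = np.eye(n, dtype=bool) | (P > 0)
    for k in range(n):                       # Warshall transitive closure
        reach |= reach[:, k:k + 1] & reach[k:k + 1, :]
    comm = reach & reach.T                   # i and j communicate
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = sorted(j for j in range(n) if comm[i, j])
            classes.append(cls)
            seen.update(cls)
    return classes

# Hypothetical chain: states 0 and 1 communicate, state 2 is absorbing.
P = np.array([[0.5, 0.5, 0.0],
              [0.4, 0.4, 0.2],
              [0.0, 0.0, 1.0]])
classes = communicating_classes(P)   # [[0, 1], [2]]
```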
First passage and first return times

We can formalise the discussion of state classification by means of a certain class of probability distributions: first passage time distributions. Define the first passage probability

f_ij^{(n)} = P{X_1 ≠ j, X_2 ≠ j, …, X_{n−1} ≠ j, X_n = j | X_0 = i}

This is the probability of reaching j for the first time at time n, having started in i.

What is the probability of ever reaching j?

f_ij = Σ_{n=1}^{∞} f_ij^{(n)} ≤ 1

The probabilities f_ij^{(n)} constitute a (possibly defective) probability distribution. By contrast, we cannot say anything in general about Σ_{n=1}^{∞} P_ij^{(n)}, the sum of the n-step transition probabilities.
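The f_ij^{(n)} satisfy a taboo-state recursion: f_ij^{(1)} = P_ij and f_ij^{(n)} = Σ_{k≠j} P_ik f_kj^{(n−1)}, since the first step must avoid j. A sketch assuming NumPy; the example chain is hypothetical:

```python
import numpy as np

def first_passage_probs(P, j, nmax):
    """f[n][i] = P(first visit to state j happens at step n | X0 = i)."""
    Q = P.copy()
    Q[:, j] = 0.0                  # taboo: forbid entering j before step n
    f = np.zeros((nmax + 1, P.shape[0]))
    f[1] = P[:, j]                 # reach j in exactly one step
    for n in range(2, nmax + 1):
        f[n] = Q @ f[n - 1]        # avoid j on step 1, then first passage in n-1 steps
    return f

# Hypothetical two-state chain.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
f = first_passage_probs(P, j=1, nmax=3)
# f[1][0] = 0.1, f[2][0] = 0.9 * 0.1 = 0.09, f[3][0] = 0.9**2 * 0.1 = 0.081
```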
State classification by f_ii^{(n)}

- A state is recurrent (persistent) if f_ii (= Σ_{n=1}^{∞} f_ii^{(n)}) = 1
- A recurrent state is positive (non-null) recurrent if E(T_i) < ∞, where E(T_i) = Σ_{n=1}^{∞} n f_ii^{(n)} = μ_i
- A state is null recurrent if E(T_i) = μ_i = ∞
- A state is transient if f_ii < 1; in this case we define μ_i = ∞ for later convenience
- A periodic state has P_ii^{(n)} > 0 only when n is a multiple of some period k > 1
- A state is ergodic if it is positive recurrent and aperiodic
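Using the same taboo recursion as for first passage times, f_ii = Σ_n f_ii^{(n)} can be approximated by truncating the sum, which classifies a state as recurrent (f_ii = 1) or transient (f_ii < 1). A sketch assuming NumPy, with a hypothetical chain in which state 1 is absorbing, so state 0 is transient with f_00 = 0.5:

```python
import numpy as np

def return_prob(P, i, nmax=200):
    """Truncated f_ii = sum_{n=1}^{nmax} f_ii^{(n)}: probability of ever returning to i."""
    Q = P.copy()
    Q[:, i] = 0.0                  # taboo: avoid i strictly before the return step
    f = P[:, i].copy()             # f^{(1)}
    total = f[i]
    for _ in range(2, nmax + 1):
        f = Q @ f                  # f^{(n)} from f^{(n-1)}
        total += f[i]
    return total

# Hypothetical chain: state 1 is absorbing, state 0 is transient.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])
# return_prob(P, 0) = 0.5 < 1 (transient); return_prob(P, 1) = 1 (recurrent)
```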
Bo Friis Nielsen Limiting Distribution and Classification
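The mean return time E(T_i) = μ_i that separates positive from null recurrence can be estimated by Monte Carlo. A sketch for a made-up 3-state chain (not from the slides); for this chain the stationary probabilities give μ_0 = 1/π_0 = 6.625:

```python
import numpy as np

# Monte Carlo estimate of a mean return time mu_i = E(T_i).
rng = np.random.default_rng(1)
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

def mean_return_time(P, i, n_returns=10_000):
    """Average the return time T_i over n_returns excursions from i."""
    total = 0
    for _ in range(n_returns):
        s, t = i, 0
        while True:
            s = rng.choice(len(P), p=P[s])
            t += 1
            if s == i:      # first return to i
                break
        total += t
    return total / n_returns

mu0 = mean_return_time(P, 0)
print(mu0)    # close to 6.625 = 1/pi_0
```

For a positive recurrent state, μ_i = 1/π_i; this identity is the content of the basic limit theorem quoted later in the slides.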
Classification of Markov chains

- We can identify subclasses of states with the same properties
- All states which can mutually reach each other will be of the same type
- Once again the formal analysis is a little heavy, but try to stick to the fundamentals: definitions (concepts) and results
Bo Friis Nielsen Limiting Distribution and Classification
Properties of sets of intercommunicating states

If i ↔ j, then:

- (a) i and j have the same period
- (b) i is transient if and only if j is transient
- (c) i is null persistent (null recurrent) if and only if j is null persistent
Bo Friis Nielsen Limiting Distribution and Classification
A set C of states is called

- (a) closed if p_ij = 0 for all i ∈ C, j ∉ C
- (b) irreducible if i ↔ j for all i, j ∈ C.

Theorem (Decomposition Theorem)
The state space S can be partitioned uniquely as

S = T ∪ C_1 ∪ C_2 ∪ ...

where T is the set of transient states, and the C_i are irreducible closed sets of persistent states.

Lemma
If S is finite, then at least one state is persistent (recurrent) and all persistent states are non-null (positive recurrent).
Bo Friis Nielsen Limiting Distribution and Classification
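The decomposition above can be carried out mechanically: compute reachability, group states into communicating classes, and mark the classes that are closed (for a finite chain these are the recurrent ones). A sketch for a made-up 5-state chain, in which states 0,1 and 3,4 form closed classes and state 2 is transient:

```python
import numpy as np
from itertools import product

# Decompose the state space S = T ∪ C1 ∪ C2 ∪ ... for an example chain.
P = np.array([[0.4, 0.6, 0.0, 0.0, 0.0],
              [0.7, 0.3, 0.0, 0.0, 0.0],
              [0.2, 0.0, 0.5, 0.3, 0.0],
              [0.0, 0.0, 0.0, 0.1, 0.9],
              [0.0, 0.0, 0.0, 0.8, 0.2]])

n = len(P)
# reach[i, j]: can j be reached from i in zero or more steps?
reach = (P > 0) | np.eye(n, dtype=bool)
for k, i, j in product(range(n), repeat=3):   # Warshall's algorithm
    if reach[i, k] and reach[k, j]:
        reach[i, j] = True

# Communicating classes: i ~ j iff i -> j and j -> i.
classes = []
for i in range(n):
    cls = frozenset(j for j in range(n) if reach[i, j] and reach[j, i])
    if cls not in classes:
        classes.append(cls)

# A class is closed iff nothing outside it is reachable from it.
closed = [c for c in classes
          if all(not reach[i, j] or j in c for i in c for j in range(n))]
transient = sorted(set(range(n)) - set().union(*closed))
print(closed, transient)
```

By the lemma, since S is finite the closed classes consist of positive recurrent states, and every state outside them is transient.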
Basic Limit Theorem

Theorem 4.3 (The basic limit theorem of Markov chains)

(a) Consider a recurrent, irreducible, aperiodic Markov chain. Let P_ii^(n) be the probability of entering state i at the nth transition, n = 1, 2, ..., given that X_0 = i (by our earlier convention P_ii^(0) = 1). Let f_ii^(n) be the probability of first returning to state i at the nth transition, n = 1, 2, ..., where f_ii^(0) = 0. Then

lim_{n→∞} P_ii^(n) = 1 / Σ_{n=0}^∞ n f_ii^(n) = 1/m_i

(b) Under the same conditions as in (a), lim_{n→∞} P_ji^(n) = lim_{n→∞} P_ii^(n) for all j.
Bo Friis Nielsen Limiting Distribution and Classification
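Part (a) can be checked numerically: raise P to a large power and compare its diagonal with 1/m_i, which for a positive recurrent chain equals the stationary probability π_i. A sketch with an illustrative 3-state chain (not from the slides):

```python
import numpy as np

# Numerical check of the basic limit theorem: P_ii^(n) -> 1/m_i = pi_i
# for a recurrent, irreducible, aperiodic chain.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

Pn = np.linalg.matrix_power(P, 200)       # P^(n) for large n

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

m = 1.0 / pi                              # mean return times m_i
print(np.diag(Pn))                        # close to 1/m_i
print(1.0 / m)
```

Part (b) can be checked the same way: every row of P^200 is (numerically) identical, so the limit does not depend on the starting state j.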
An example chain (random walk with reflecting barriers)

P =
| 0.6 0.4 0.0 0.0 0.0 0.0 0.0 0.0 |
| 0.3 0.3 0.4 0.0 0.0 0.0 0.0 0.0 |
| 0.0 0.3 0.3 0.4 0.0 0.0 0.0 0.0 |
| 0.0 0.0 0.3 0.3 0.4 0.0 0.0 0.0 |
| 0.0 0.0 0.0 0.3 0.3 0.4 0.0 0.0 |
| 0.0 0.0 0.0 0.0 0.3 0.3 0.4 0.0 |
| 0.0 0.0 0.0 0.0 0.0 0.3 0.3 0.4 |
| 0.0 0.0 0.0 0.0 0.0 0.0 0.3 0.7 |

With initial probability distribution p(0) = (1, 0, 0, 0, 0, 0, 0, 0), i.e. X_0 = 1.
Bo Friis Nielsen Limiting Distribution and Classification
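The limiting distribution of this chain can be computed directly by solving π P = π with Σ π_i = 1. For this birth-and-death structure, detailed balance gives π_{i+1}/π_i = 0.4/0.3, so π_i is proportional to (4/3)^i; the linear-algebra solution below confirms this:

```python
import numpy as np

# Stationary distribution of the reflecting random walk from the slide.
P = np.zeros((8, 8))
P[0, 0], P[0, 1] = 0.6, 0.4
for i in range(1, 7):
    P[i, i - 1], P[i, i], P[i, i + 1] = 0.3, 0.3, 0.4
P[7, 6], P[7, 7] = 0.3, 0.7

# solve (P^T - I) pi = 0 together with the normalisation sum(pi) = 1
A = np.vstack([P.T - np.eye(8), np.ones(8)])
b = np.zeros(9)
b[8] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

geometric = (4 / 3) ** np.arange(8)
print(pi)
print(geometric / geometric.sum())   # the two vectors agree
```

Note that the mass piles up at the upper barrier, since up-steps (0.4) are more likely than down-steps (0.3).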
Properties of that chain

- We have a finite number of states
- From state 1 we can reach state j with probability f_1j ≥ 0.4^(j-1), j > 1
- From state j we can reach state 1 with probability f_j1 ≥ 0.3^(j-1), j > 1
- Thus all states communicate and the chain is irreducible. Generally we won't bother with bounds for the f_ij's.
- Since the chain is finite, all states are positive recurrent
- A look at the behaviour of the chain
Bo Friis Nielsen Limiting Distribution and Classification
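"A look at the behaviour of the chain" can be sketched by simulation: start in X_0 = 1 (index 0 below) and compare the long-run occupation frequencies with the stationary distribution π_i ∝ (4/3)^i derived for this chain:

```python
import numpy as np

# Simulate the reflecting random walk and compare empirical occupation
# frequencies with the stationary distribution pi_i ∝ (4/3)^i.
rng = np.random.default_rng(0)
P = np.zeros((8, 8))
P[0, 0], P[0, 1] = 0.6, 0.4
for i in range(1, 7):
    P[i, i - 1], P[i, i], P[i, i + 1] = 0.3, 0.3, 0.4
P[7, 6], P[7, 7] = 0.3, 0.7

n_steps = 100_000
counts = np.zeros(8)
s = 0                                   # X_0 = 1 in the slide's numbering
for _ in range(n_steps):
    s = rng.choice(8, p=P[s])
    counts[s] += 1

pi = (4 / 3) ** np.arange(8)
pi /= pi.sum()
print(counts / n_steps)                 # close to pi for a long run
print(pi)
```

Because the chain is ergodic (positive recurrent, aperiodic, irreducible), the empirical frequencies converge to π regardless of the initial state, which is exactly what parts (a) and (b) of the basic limit theorem assert.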
![Page 86: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/86.jpg)
Properties of that chain
I We have a finite number of statesI From state 1 we can reach state j with a probability
f1j ≥ 0.4j−1,
j > 1.I From state j we can reach state 1 with a probability
fj1 ≥ 0.3j−1, j > 1.I Thus all states communicate and the chain is irreducible.
Generally we won’t bother with bounds for the fij ’s.I Since the chain is finite all states are positive recurrentI A look on the behaviour of the chain
Bo Friis Nielsen Limiting Distribution and Classification
![Page 87: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/87.jpg)
Properties of that chain
I We have a finite number of statesI From state 1 we can reach state j with a probability
f1j ≥ 0.4j−1, j > 1.
I From state j we can reach state 1 with a probabilityfj1 ≥ 0.3j−1, j > 1.
I Thus all states communicate and the chain is irreducible.Generally we won’t bother with bounds for the fij ’s.
I Since the chain is finite all states are positive recurrentI A look on the behaviour of the chain
Bo Friis Nielsen Limiting Distribution and Classification
![Page 88: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/88.jpg)
Properties of that chain
I We have a finite number of statesI From state 1 we can reach state j with a probability
f1j ≥ 0.4j−1, j > 1.I From state j we can reach state 1 with a probability
fj1 ≥ 0.3j−1, j > 1.I Thus all states communicate and the chain is irreducible.
Generally we won’t bother with bounds for the fij ’s.I Since the chain is finite all states are positive recurrentI A look on the behaviour of the chain
Bo Friis Nielsen Limiting Distribution and Classification
![Page 89: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/89.jpg)
Properties of that chain
I We have a finite number of statesI From state 1 we can reach state j with a probability
f1j ≥ 0.4j−1, j > 1.I From state j we can reach state 1 with a probability
fj1 ≥ 0.3j−1,
j > 1.I Thus all states communicate and the chain is irreducible.
Generally we won’t bother with bounds for the fij ’s.I Since the chain is finite all states are positive recurrentI A look on the behaviour of the chain
Bo Friis Nielsen Limiting Distribution and Classification
![Page 90: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/90.jpg)
Properties of that chain
I We have a finite number of statesI From state 1 we can reach state j with a probability
f1j ≥ 0.4j−1, j > 1.I From state j we can reach state 1 with a probability
fj1 ≥ 0.3j−1, j > 1.
I Thus all states communicate and the chain is irreducible.Generally we won’t bother with bounds for the fij ’s.
I Since the chain is finite all states are positive recurrentI A look on the behaviour of the chain
Bo Friis Nielsen Limiting Distribution and Classification
![Page 91: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/91.jpg)
Properties of that chain
I We have a finite number of statesI From state 1 we can reach state j with a probability
f1j ≥ 0.4j−1, j > 1.I From state j we can reach state 1 with a probability
fj1 ≥ 0.3j−1, j > 1.I Thus all states communicate and the chain is irreducible.
Generally we won’t bother with bounds for the fij ’s.I Since the chain is finite all states are positive recurrentI A look on the behaviour of the chain
Bo Friis Nielsen Limiting Distribution and Classification
![Page 92: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/92.jpg)
Properties of that chain
I We have a finite number of statesI From state 1 we can reach state j with a probability
f1j ≥ 0.4j−1, j > 1.I From state j we can reach state 1 with a probability
fj1 ≥ 0.3j−1, j > 1.I Thus all states communicate and the chain is irreducible.
Generally we won’t bother with bounds for the fij ’s.
I Since the chain is finite all states are positive recurrentI A look on the behaviour of the chain
Bo Friis Nielsen Limiting Distribution and Classification
![Page 93: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/93.jpg)
Properties of that chain
I We have a finite number of statesI From state 1 we can reach state j with a probability
f1j ≥ 0.4j−1, j > 1.I From state j we can reach state 1 with a probability
fj1 ≥ 0.3j−1, j > 1.I Thus all states communicate and the chain is irreducible.
Generally we won’t bother with bounds for the fij ’s.I Since the chain is finite all states are positive recurrent
I A look on the behaviour of the chain
Bo Friis Nielsen Limiting Distribution and Classification
![Page 94: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/94.jpg)
Properties of that chain
I We have a finite number of statesI From state 1 we can reach state j with a probability
f1j ≥ 0.4j−1, j > 1.I From state j we can reach state 1 with a probability
fj1 ≥ 0.3j−1, j > 1.I Thus all states communicate and the chain is irreducible.
Generally we won’t bother with bounds for the fij ’s.I Since the chain is finite all states are positive recurrentI A look on the behaviour of the chain
Bo Friis Nielsen Limiting Distribution and Classification
![Page 95: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/95.jpg)
A number of different sample paths Xn's

[Figure: four simulated sample paths Xn of the chain, state (1–8) plotted against n (0–70).]
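Sample paths like those in the figure can be generated directly from P by inverse-transform sampling on each row. A minimal simulation sketch (the function name and seed are illustrative):

```python
import random

# Transition matrix of the reflecting random walk (states 1..8).
P = [
    [0.6, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.3, 0.3, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.3, 0.3, 0.4, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.3, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.3, 0.3, 0.4, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.3, 0.3, 0.4, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.3, 0.4],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.7],
]

def simulate(P, x0, steps, rng):
    """Simulate X_0, ..., X_steps, drawing X_{n+1} from row X_n of P."""
    path = [x0]
    for _ in range(steps):
        u, cum = rng.random(), 0.0
        for j, pj in enumerate(P[path[-1] - 1], start=1):
            cum += pj
            if u < cum:          # inverse-transform sampling on the row
                path.append(j)
                break
    return path

rng = random.Random(42)
path = simulate(P, 1, 70, rng)   # one path of length 70, X_0 = 1
print(path)
```

Each call with a different seed produces another path like the panels in the figure; every step moves at most one state up or down, as the tridiagonal structure of P dictates.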
The state probabilities p(n)j

[Figure: the state probabilities p(n)j plotted against n (0–70), values between 0 and 1.]
Limiting distribution

For an irreducible aperiodic chain, we have that

p(n)ij → 1/μj as n → ∞, for all i and j,

where μj is the mean recurrence time of state j (with the convention 1/μj = 0 when μj = ∞).

Three important remarks
I If the chain is transient or null-persistent (null-recurrent), p(n)ij → 0
I If the chain is positive recurrent, p(n)ij → 1/μj
I The limiting probability of Xn = j does not depend on the starting state X0 = i
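The third remark, independence of the starting state, can be seen numerically by raising P to a high power: all rows of P^n become (to numerical precision) the same distribution. A sketch, reusing the reflecting-random-walk matrix from the earlier slides:

```python
# Reflecting-random-walk transition matrix (states 1..8).
P = [
    [0.6, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.3, 0.3, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.3, 0.3, 0.4, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.3, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.3, 0.3, 0.4, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.3, 0.3, 0.4, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.3, 0.4],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.7],
]

def matmul(A, B):
    """Plain dense matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pn = P
for _ in range(499):   # Pn = P^500
    Pn = matmul(Pn, P)

# Maximum difference between any row of P^500 and its first row:
# if this is ~0, the limit of p(n)ij does not depend on i.
spread = max(abs(Pn[i][j] - Pn[0][j]) for i in range(8) for j in range(8))
print(spread)
```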
The stationary distribution

I A distribution that does not change with n
I The elements of p(n) are all constant
I The implication of this is p(n) = p(n−1)P = p(n−1), by our assumption that p(n) is constant
I Expressed differently, π = πP
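π = πP together with the normalisation ∑j πj = 1 is just a linear system, so π can be computed directly. A self-contained sketch for the reflecting-random-walk chain, using plain Gaussian elimination (the construction of A and the solver are illustrative, not from the slides):

```python
# Reflecting-random-walk transition matrix (states 1..8).
P = [
    [0.6, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.3, 0.3, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.3, 0.3, 0.4, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.3, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.3, 0.3, 0.4, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.3, 0.3, 0.4, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.3, 0.4],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.7],
]
n = len(P)

# pi = pi P  <=>  sum_i pi_i (P_ij - delta_ij) = 0 for every j.
# One of these equations is redundant, so replace the last row by the
# normalisation sum_j pi_j = 1.
A = [[P[i][j] - (1.0 if i == j else 0.0) for i in range(n)] for j in range(n)]
A[n - 1] = [1.0] * n
b = [0.0] * (n - 1) + [1.0]

# Gaussian elimination with partial pivoting.
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]

# Back substitution.
pi = [0.0] * n
for r in range(n - 1, -1, -1):
    s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
    pi[r] = (b[r] - s) / A[r][r]

print([round(x, 4) for x in pi])  # the stationary distribution
```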
Stationary distribution

Definition
The vector π is called a stationary distribution of the chain if π has entries (πj : j ∈ S) such that
I (a) πj ≥ 0 for all j, and ∑j πj = 1
I (b) π = πP, which is to say that πj = ∑i πi pij for all j.

VERY IMPORTANT
An irreducible chain has a stationary distribution π if and only if all the states are non-null persistent (positive recurrent); in this case, π is the unique stationary distribution and is given by πi = 1/μi for each i ∈ S, where μi is the mean recurrence time of i.
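The relation πi = 1/μi can be checked by simulation. For the reflecting-random-walk chain, π1 ≈ 0.0371, so the mean recurrence time of state 1 should be μ1 = 1/π1 ≈ 27. A Monte Carlo sketch with a fixed seed (the cycle count and seed are arbitrary choices, not from the slides):

```python
import random

# Reflecting-random-walk transition matrix (states 1..8).
P = [
    [0.6, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.3, 0.3, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.3, 0.3, 0.4, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.3, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.3, 0.3, 0.4, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.3, 0.3, 0.4, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.3, 0.4],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.7],
]

def next_state(i, rng):
    """Draw the successor of state i from row i of P."""
    u, cum = rng.random(), 0.0
    for j, pj in enumerate(P[i - 1], start=1):
        cum += pj
        if u < cum:
            return j
    return i  # numerical safety net; each row sums to 1

rng = random.Random(1)
cycles, total = 20_000, 0
for _ in range(cycles):
    x, steps = next_state(1, rng), 1   # leave state 1 ...
    while x != 1:                      # ... and count steps until return
        x, steps = next_state(x, rng), steps + 1
    total += steps

mu1 = total / cycles
print(round(mu1, 2))  # should be close to 1/pi_1, i.e. about 27
```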
The example chain (random walk with reflecting barriers)

P =

0.6 0.4 0.0 0.0 0.0 0.0 0.0 0.0
0.3 0.3 0.4 0.0 0.0 0.0 0.0 0.0
0.0 0.3 0.3 0.4 0.0 0.0 0.0 0.0
0.0 0.0 0.3 0.3 0.4 0.0 0.0 0.0
0.0 0.0 0.0 0.3 0.3 0.4 0.0 0.0
0.0 0.0 0.0 0.0 0.3 0.3 0.4 0.0
0.0 0.0 0.0 0.0 0.0 0.3 0.3 0.4
0.0 0.0 0.0 0.0 0.0 0.0 0.3 0.7

π = πP

Elementwise the matrix equation is πi = ∑j πj pji

π1 = π1 · 0.6 + π2 · 0.3
π2 = π1 · 0.4 + π2 · 0.3 + π3 · 0.3
π3 = π2 · 0.4 + π3 · 0.3 + π4 · 0.3
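One way to solve these equations numerically is by iteration: since the chain is regular, repeatedly applying π ← πP from any starting distribution converges to the stationary distribution. A sketch (Python/NumPy assumed):

```python
import numpy as np

# Transition matrix of the reflecting random walk from the slide
P = np.array([
    [0.6, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.3, 0.3, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.3, 0.3, 0.4, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.3, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.3, 0.3, 0.4, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.3, 0.3, 0.4, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.3, 0.4],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.7],
])

pi = np.full(8, 1 / 8)           # start from the uniform distribution
for _ in range(10_000):          # power iteration: pi <- pi P
    pi = pi @ P

assert np.allclose(pi @ P, pi)   # converged to the stationary distribution
```

Because P is row-stochastic, each step preserves the total probability, so no renormalisation is needed along the way.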
![Page 147: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/147.jpg)
π1 = π1 · 0.6 + π2 · 0.3
πj = πj−1 · 0.4 + πj · 0.3 + πj+1 · 0.3
π8 = π7 · 0.4 + π8 · 0.7

Or

π2 = ((1 − 0.6)/0.3) π1

πj+1 = (1/0.3) ((1 − 0.3)πj − 0.4 πj−1)

Can be solved recursively to find:

πj = (0.4/0.3)^(j−1) π1
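The closed form can be checked directly against the balance equations. A quick numerical verification (Python assumed; π1 left unnormalised at 1):

```python
# Unnormalised solution pi_j = (0.4/0.3)**(j-1), taking pi_1 = 1
r = 0.4 / 0.3
pi = [r ** (j - 1) for j in range(1, 9)]

# Boundary and interior balance equations from the slide
assert abs(pi[0] - (pi[0] * 0.6 + pi[1] * 0.3)) < 1e-9
for j in range(1, 7):  # interior states (0-based indices 1..6)
    assert abs(pi[j] - (pi[j-1] * 0.4 + pi[j] * 0.3 + pi[j+1] * 0.3)) < 1e-9
assert abs(pi[7] - (pi[6] * 0.4 + pi[7] * 0.7)) < 1e-9
```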
![Page 154: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/154.jpg)
The normalising condition

I We note that we don’t have to use the last equation
I We need a solution which is a probability distribution

∑_{j=1}^{8} πj = 1,   ∑_{j=1}^{8} (0.4/0.3)^(j−1) π1 = π1 ∑_{k=0}^{7} (0.4/0.3)^k

∑_{i=0}^{N} a^i =  (1 − a^(N+1))/(1 − a)   N < ∞, a ≠ 1
                   N + 1                   N < ∞, a = 1
                   1/(1 − a)               N = ∞, |a| < 1

Such that

1 = π1 · (1 − (0.4/0.3)^8)/(1 − 0.4/0.3)   ⇔   π1 = (1 − 0.4/0.3)/(1 − (0.4/0.3)^8)
![Page 167: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/167.jpg)
The solution of π = πP
I More or less straightforward, but one problem
I if x is a solution such that x = xP then obviously (kx) = (kx)P is also a solution.
I Recall the definition of eigenvalues/eigenvectors
I If Ay = λy we say that λ is an eigenvalue of A with an associated eigenvector y. Here y is a right eigenvector; there is also a left eigenvector
![Page 174: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/174.jpg)
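The left/right distinction can be checked numerically. A minimal sketch with NumPy (the 2×2 matrix is an assumed example, not from the slides): the all-ones vector is a right eigenvector of any stochastic matrix for the eigenvalue 1, while a left eigenvector for the eigenvalue 1, suitably normalised, is the stationary distribution.

```python
import numpy as np

# A small (assumed) regular transition probability matrix.
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])

# Right eigenvectors satisfy P y = lambda y.  For a stochastic matrix the
# vector of ones is a right eigenvector for the eigenvalue 1 (rows sum to 1).
ones = np.ones(2)
assert np.allclose(P @ ones, ones)

# Left eigenvectors satisfy x P = lambda x, i.e. P^T x = lambda x,
# so we take eigenvectors of the transpose.
vals, vecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(vals - 1.0))   # pick the eigenvalue closest to 1
x = np.real(vecs[:, i])
pi = x / x.sum()                    # normalise so the entries sum to 1
print(pi)                           # stationary distribution, here (2/7, 5/7)
```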
The solution of π = πP continued

I The vector π is a left eigenvector of P.
I The main theorem says that there is a unique eigenvector associated with the eigenvalue 1 of P
I In practice this means that we can determine π only up to a multiplicative constant, so we need a normalising condition
I But we have the normalising condition ∑j πj = 1; this can be expressed as π1 = 1, where 1 = (1, 1, . . . , 1)^T is the column vector of ones
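Numerically, π = πP together with π1 = 1 is an ordinary linear system: the balance equations (P − I)ᵀπᵀ = 0 are linearly dependent, so one of them can be replaced by the normalising condition. A sketch, with an assumed 3-state example matrix:

```python
import numpy as np

# An assumed example transition matrix (a small birth-death chain).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
n = P.shape[0]

# pi (P - I) = 0  <=>  (P - I)^T pi^T = 0.  These n equations have rank
# n - 1, so we overwrite the last one with the condition sum(pi) = 1.
A = (P - np.eye(n)).T
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)                # (0.25, 0.5, 0.25) for this P
```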
Roles of the solution to π = πP

For an irreducible Markov chain (this is the condition we need to verify):

I The stationary distribution: if p(0) = π then p(n) = π for all n.
I The limiting distribution, i.e. p(n) → π for n → ∞ (the Markov chain also has to be aperiodic). Also p(n)ij → πj.
I The mean recurrence time for state i is μi = 1/πi.
I The mean number of visits to state j between two successive visits to state i is πj/πi.
I The long-run average probability of finding the Markov chain in state i is πi: πi = limn→∞ (1/n) ∑nk=1 p(k)i, also true for periodic chains.
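Two of these interpretations are easy to check numerically for an assumed small regular chain: the rows of Pⁿ converge to π, and the simulated mean recurrence time of a state comes out near 1/πi. A sketch:

```python
import numpy as np

# Assumed example chain with stationary distribution pi = (2/7, 5/7).
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
pi = np.array([2 / 7, 5 / 7])

# Limiting distribution: every row of P^n approaches pi.
Pn = np.linalg.matrix_power(P, 50)
assert np.allclose(Pn, np.vstack([pi, pi]))

# Mean recurrence time of state 0: visits to 0 occur with long-run
# frequency pi[0], so steps/returns should approach 1/pi[0] = 3.5.
rng = np.random.default_rng(1)
state, returns, steps = 0, 0, 0
for _ in range(100_000):
    state = rng.choice(2, p=P[state])   # one step of the chain
    steps += 1
    if state == 0:
        returns += 1
mean_recurrence = steps / returns
print(mean_recurrence)                  # close to 3.5
```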
Example (null-recurrent) chain

P =

p1 p2 p3 p4 p5 . . .
 1  0  0  0  0 . . .
 0  1  0  0  0 . . .
 0  0  1  0  0 . . .
 0  0  0  1  0 . . .
 0  0  0  0  1 . . .
 .  .  .  .  . . . .

For pj > 0 the chain is obviously irreducible.
The main theorem tells us that we can look directly for a solution of π = πP.

π1 = π1p1 + π2    π2 = π1p2 + π3    πj = π1pj + πj+1
π1 = π1p1 + π2    π2 = π1p2 + π3    πj = π1pj + πj+1

we get

π2 = (1 − p1)π1    π3 = (1 − p1 − p2)π1    πj = (1 − p1 − · · · − pj−1)π1

πj = (1 − p1 − · · · − pj−1)π1  ⇔  πj = π1(1 − ∑_{i=1}^{j−1} pi)  ⇔  πj = π1 ∑_{i=j}^{∞} pi

Normalisation: ∑_{j=1}^{∞} πj = 1, and

∑_{j=1}^{∞} π1 ∑_{i=j}^{∞} pi = π1 ∑_{i=1}^{∞} ∑_{j=1}^{i} pi = π1 ∑_{i=1}^{∞} i pi

Hence π1 = 1/∑_{i=1}^{∞} i pi. A normalisable solution exists precisely when ∑_{i=1}^{∞} i pi < ∞ (positive recurrence); when the sum diverges the chain is null recurrent.
![Page 209: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/209.jpg)
π1 = π1p1 + π2 π2 = π1p2 + π3 πj = π1pj + πj+1
we get
π2
= (1−p1)π1 π3 = (1−p1−p2)π1 πj = (1−p1 · · ·−pj−1)π1
πj = (1− p1 · · · − pj−1)π1 ⇔ πj = π1
1−j−1∑i=1
pi
⇔ πj = π1
∞∑i=j
pi
Normalisation
∞∑j=1
πj = 1∞∑
j=1
π1
∞∑i=j
pi = π1
∞∑i=1
i∑j=1
pi = π1
∞∑i=1
ipi
Bo Friis Nielsen Limiting Distribution and Classification
![Page 210: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/210.jpg)
π1 = π1p1 + π2 π2 = π1p2 + π3 πj = π1pj + πj+1
we get
π2 = (1−p1)π1
π3 = (1−p1−p2)π1 πj = (1−p1 · · ·−pj−1)π1
πj = (1− p1 · · · − pj−1)π1 ⇔ πj = π1
1−j−1∑i=1
pi
⇔ πj = π1
∞∑i=j
pi
Normalisation
∞∑j=1
πj = 1∞∑
j=1
π1
∞∑i=j
pi = π1
∞∑i=1
i∑j=1
pi = π1
∞∑i=1
ipi
Bo Friis Nielsen Limiting Distribution and Classification
![Page 226: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/226.jpg)
(State diagram: two states 1 and 2, with self-loop probabilities p_11, p_22 and transition probabilities p_12, p_21.)

    P = [ p_11  p_12 ]
        [ p_21  p_22 ]
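For the two-state chain the stationary distribution has a simple closed form, π = (p_21, p_12)/(p_12 + p_21) when p_12 + p_21 > 0. A small check with illustrative values:

```python
import numpy as np

p12, p21 = 0.3, 0.1              # illustrative values (assumptions)
P = np.array([[1 - p12, p12],
              [p21, 1 - p21]])

# Closed form: pi = (p21, p12) / (p12 + p21)
pi = np.array([p21, p12]) / (p12 + p21)
print(pi, np.allclose(pi @ P, pi))   # [0.25 0.75] True
```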
![Page 228: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/228.jpg)
Reversible Markov chains
I Solve a sequence of linear equations instead of the whole system
I Local balance in probability flow as opposed to global balance
I Nice theoretical construction
![Page 232: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/232.jpg)
Local balance equations
Starting from the global balance equations and using ∑_j p_ij = 1:

    π_i = ∑_j π_j p_ji

    π_i · 1 = ∑_j π_j p_ji

    π_i ∑_j p_ij = ∑_j π_j p_ji

    ∑_j π_i p_ij = ∑_j π_j p_ji

Term for term we get

    π_i p_ij = π_j p_ji

If these are fulfilled for each i and j, the global balance equations can be obtained by summation.
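The local balance equations can be solved one at a time, as the recursion π_{i+1} = π_i p_{i,i+1} / p_{i+1,i}. A sketch for a small birth-death chain (transition probabilities are invented for illustration), verifying that local balance indeed implies global balance:

```python
import numpy as np

up, down = 0.4, 0.2              # assumed p_{i,i+1} and p_{i+1,i}
n = 5
P = np.zeros((n, n))
for i in range(n):
    if i + 1 < n:
        P[i, i + 1] = up
    if i - 1 >= 0:
        P[i, i - 1] = down
    P[i, i] = 1.0 - P[i].sum()   # remaining mass stays put

# Recursion from local balance: pi_{i+1} = pi_i * up / down
pi = np.ones(n)
for i in range(n - 1):
    pi[i + 1] = pi[i] * up / down
pi /= pi.sum()                   # normalise

# Summing the local balance equations gives global balance: pi P = pi
print(np.allclose(pi @ P, pi))   # True
```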
![Page 244: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/244.jpg)
Why reversible?
P{X_{n−1} = i ∩ X_n = j} = P{X_{n−1} = i} P{X_n = j | X_{n−1} = i}
                         = P{X_{n−1} = i} p_ij

and for a stationary chain this is

    π_i p_ij

For a reversible chain (local balance) this equals

    π_i p_ij = π_j p_ji = P{X_{n−1} = j} P{X_n = i | X_{n−1} = j} = P{X_{n−1} = j ∩ X_n = i},

the corresponding probability for the reversed sequence.
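In matrix terms, the stationary joint probability P{X_{n−1} = i, X_n = j} is the (i, j) entry of diag(π)P, and reversibility says this matrix is symmetric. A check on a small reversible chain (the numbers are illustrative; the chain is birth-death, hence reversible):

```python
import numpy as np

P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.4, 0.4],
              [0.0, 0.2, 0.8]])
pi = np.array([1.0, 2.0, 4.0]) / 7.0      # solves pi P = pi for this P

joint = pi[:, None] * P                   # entry (i, j) is pi_i p_ij
print(np.allclose(joint, joint.T))        # True -> time-reversible
```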
![Page 257: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/257.jpg)
Another look at a similar question
P{X_{n−1} = j | X_n = i} = P{X_{n−1} = j ∩ X_n = i} / P{X_n = i}
                         = P{X_{n−1} = j} P{X_n = i | X_{n−1} = j} / P{X_n = i}
                         = P{X_{n−1} = j} p_ji / P{X_n = i}

For a stationary chain we get

    π_j p_ji / π_i

The chain is reversible if P{X_{n−1} = j | X_n = i} = p_ij, leading to the local balance equations

    p_ij = π_j p_ji / π_i
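The formula q_ij = π_j p_ji / π_i defines the transition matrix of the reversed chain even when the chain is not reversible. A sketch with a deterministic 3-cycle (an illustrative non-reversible example): Q is a proper transition matrix with the same stationary distribution, but Q differs from P, so the chain is not reversible.

```python
import numpy as np

# Cyclic chain 1 -> 2 -> 3 -> 1 (illustrative, not reversible)
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
pi = np.array([1.0, 1.0, 1.0]) / 3.0      # uniform solves pi P = pi

Q = (pi[None, :] * P.T) / pi[:, None]     # q_ij = pi_j p_ji / pi_i

print(np.allclose(Q.sum(axis=1), 1.0))    # True: Q is a transition matrix
print(np.allclose(Q, P))                  # False: chain is not reversible
```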
![Page 268: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/268.jpg)
Exercise 10 (16/12/91 ex.1)
In connection with an examination of the reliability of some software intended for use in the control of modern ferries, one is interested in a stochastic model of the use of a control program. The control program works as a "state machine", i.e. it can be in a number of different levels; four are considered here. The levels depend on the physical state of the ferry. At every shift of time unit while the program is run, the program will change from level $j$ to level $k$ with probability $p_{jk}$.
![Page 269: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/269.jpg)
Two possibilities are considered:
The program has no errors and will run continuously, shifting between the four levels.

The program has a critical error. In this case it is possible that the error is found; this happens with probability $q_i$, $i = 1, 2, 3, 4$, depending on the level. The error will be corrected immediately, and the program will from then on be without faults. Alternatively, the program can stop with a critical error (the ferry will continue to sail, but without control). This happens with probability $r_i$, $i = 1, 2, 3, 4$. In general $q_i + r_i < 1$; a program with errors can thus work, and the error is not necessarily discovered. It is assumed that detection of an error, as well as the appearance of a fault, happens coincident with a shift between levels. The program starts running in level 1, and it is known that the program contains one critical error.
![Page 275: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/275.jpg)
Solution: Question 1
Formulate a stochastic process (Markov chain) in discrete time describing this system.

The model is a discrete time Markov chain. A possible definition of states could be

0: The programme has stopped.
1-4: The programme is operating safely in level $i$, $i = 1, \dots, 4$.
5-8: The programme is operating in level $i-4$, the critical error not yet detected.

The transition matrix $A$ is

$$A = \begin{pmatrix} 1 & \vec{0} & \vec{0} \\ \vec{0} & P & 0 \\ \vec{r} & \mathrm{Diag}(q_i)P & \mathrm{Diag}(S_i)P \end{pmatrix}$$
![Page 295: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/295.jpg)
Question 1 - continued
The model is a discrete time Markov chain, where $P = \{p_{ij}\}$,

$$\vec{r} = \begin{pmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \end{pmatrix}, \qquad \mathrm{Diag}(S_i) = \begin{pmatrix} S_1 & 0 & 0 & 0 \\ 0 & S_2 & 0 & 0 \\ 0 & 0 & S_3 & 0 \\ 0 & 0 & 0 & S_4 \end{pmatrix}, \qquad S_i = 1 - r_i - q_i,$$

$$\mathrm{Diag}(q_i) = \begin{pmatrix} q_1 & 0 & 0 & 0 \\ 0 & q_2 & 0 & 0 \\ 0 & 0 & q_3 & 0 \\ 0 & 0 & 0 & q_4 \end{pmatrix}$$
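Given these blocks, the full transition matrix can be assembled directly; a NumPy sketch, where the concrete values of $P$, $q$ and $r$ are made-up placeholders (the exercise specifies none):

```python
import numpy as np

# Hypothetical numbers: the exercise gives no concrete values.
P = np.full((4, 4), 0.25)                  # level-change probabilities p_jk
q = np.array([0.05, 0.10, 0.02, 0.08])     # error detected in level i
r = np.array([0.01, 0.02, 0.01, 0.03])     # program stops in level i
S = 1.0 - q - r                            # error survives undetected

A = np.zeros((9, 9))
A[0, 0] = 1.0                              # state 0 (stopped) is absorbing
A[1:5, 1:5] = P                            # fault-free levels 1-4
A[5:9, 0] = r                              # error levels 5-8: program stops
A[5:9, 1:5] = np.diag(q) @ P               # error detected, then level shift
A[5:9, 5:9] = np.diag(S) @ P               # error still undetected

assert np.allclose(A.sum(axis=1), 1.0)     # every row is a distribution
```

The row-sum check holds for any valid $P$, $q$, $r$, since $r_i + q_i + S_i = 1$ and each row of $P$ sums to one.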
![Page 304: Discrete Time Markov Chains, Limiting Distribution and ... · Discrete Time Markov Chains, Limiting Distribution and Classification Bo Friis Nielsen1 1DTU Informatics 02407 Stochastic](https://reader033.fdocuments.us/reader033/viewer/2022051512/6029034bad19ea0d03043bc4/html5/thumbnails/304.jpg)
Question 1 - continued
Or without matrix notation:
$$A = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & p_{11} & p_{12} & p_{13} & p_{14} & 0 & 0 & 0 & 0 \\
0 & p_{21} & p_{22} & p_{23} & p_{24} & 0 & 0 & 0 & 0 \\
0 & p_{31} & p_{32} & p_{33} & p_{34} & 0 & 0 & 0 & 0 \\
0 & p_{41} & p_{42} & p_{43} & p_{44} & 0 & 0 & 0 & 0 \\
r_1 & q_1 p_{11} & q_1 p_{12} & q_1 p_{13} & q_1 p_{14} & S_1 p_{11} & S_1 p_{12} & S_1 p_{13} & S_1 p_{14} \\
r_2 & q_2 p_{21} & q_2 p_{22} & q_2 p_{23} & q_2 p_{24} & S_2 p_{21} & S_2 p_{22} & S_2 p_{23} & S_2 p_{24} \\
r_3 & q_3 p_{31} & q_3 p_{32} & q_3 p_{33} & q_3 p_{34} & S_3 p_{31} & S_3 p_{32} & S_3 p_{33} & S_3 p_{34} \\
r_4 & q_4 p_{41} & q_4 p_{42} & q_4 p_{43} & q_4 p_{44} & S_4 p_{41} & S_4 p_{42} & S_4 p_{43} & S_4 p_{44}
\end{pmatrix}$$
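As a sanity check of the model, first-step analysis on the error states 5-8 gives the probability $h_i$ that the error is eventually detected when the chain is in error level $i$: $h = q + \mathrm{Diag}(S_i)P\,h$, so $h = (I - \mathrm{Diag}(S_i)P)^{-1}q$; the complementary event is that the program stops. A sketch with made-up values for $P$, $q$ and $r$ (the exercise specifies none):

```python
import numpy as np

# Hypothetical numbers; the exercise itself gives no concrete values.
P = np.full((4, 4), 0.25)
q = np.array([0.05, 0.10, 0.02, 0.08])
r = np.array([0.01, 0.02, 0.01, 0.03])
S = 1.0 - q - r

# First-step analysis on the error levels:
#   h = q + Diag(S) P h   =>   (I - Diag(S) P) h = q
h = np.linalg.solve(np.eye(4) - np.diag(S) @ P, q)   # error detected
g = np.linalg.solve(np.eye(4) - np.diag(S) @ P, r)   # program stops

print(h[0])                       # detection probability starting in level 1
assert np.allclose(h + g, 1.0)    # the chain is absorbed with probability 1
```

That $h + g = \vec{1}$ holds exactly, since $(I - \mathrm{Diag}(S_i)P)\vec{1}$ has entries $1 - S_i = q_i + r_i$.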
Bo Friis Nielsen Limiting Distribution and Classification
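The block structure of A can be assembled numerically. The numeric P, q, and r below are hypothetical stand-ins (the slide keeps them symbolic), and S_i is taken to be 1 − q_i − r_i, an assumption chosen so that every row of A sums to one:

```python
import numpy as np

# Hypothetical 4x4 level-transition matrix (row-stochastic) and small
# error/failure probabilities; these values are illustrative only.
P = np.array([[0.4, 0.6, 0.0, 0.0],
              [0.2, 0.2, 0.6, 0.0],
              [0.0, 0.2, 0.2, 0.6],
              [0.0, 0.0, 0.2, 0.8]])
q = np.array([1e-3, 1e-6, 1e-9, 1e-12])
r = np.array([1e-8, 1e-11, 1e-14, 1e-17])
s = 1.0 - q - r                  # assumed: remaining mass per state

A = np.zeros((9, 9))
A[0, 0] = 1.0                    # state 0 is absorbing
A[1:5, 1:5] = P                  # states 1-4 evolve by P alone
A[5:9, 0] = r                    # r_i: absorption from states 5-8
A[5:9, 1:5] = q[:, None] * P     # q_i * p_ij block
A[5:9, 5:9] = s[:, None] * P     # S_i * p_ij block

print(np.allclose(A.sum(axis=1), 1.0))  # prints True: rows sum to 1
```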
Solution question 2

Characterise the states in the Markov chain.

With reasonable assumptions on P (i.e. irreducible) we get:

State 0: Absorbing
State 1: Positive recurrent
State 2: Positive recurrent
State 3: Positive recurrent
State 4: Positive recurrent
State 5: Transient
State 6: Transient
State 7: Transient
State 8: Transient
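The classification can be checked mechanically: in a finite chain a state is recurrent iff every state it can reach can reach it back, and recurrent states in a finite chain are automatically positive recurrent. A sketch under the assumed structure (state 0 absorbing, states 1-4 a closed block with tridiagonal P, states 5-8 copies of the levels that can leak to 0):

```python
# Boolean adjacency of the 9-state chain; tridiagonal P assumed.
n = 9
adj = [[False] * n for _ in range(n)]
adj[0][0] = True                               # absorbing state

def p_pos(i, j):
    # p_ij > 0 iff |i - j| <= 1 for the assumed tridiagonal P
    return abs(i - j) <= 1

for i in range(1, 5):
    for j in range(1, 5):
        adj[i][j] = p_pos(i, j)                # closed block 1-4
for i in range(1, 5):                          # states 5-8 = levels 1-4
    adj[4 + i][0] = True                       # r_i: absorption
    for j in range(1, 5):
        if p_pos(i, j):
            adj[4 + i][j] = True               # q_i p_ij
            adj[4 + i][4 + j] = True           # S_i p_ij

# Warshall transitive closure: reach[i][j] == "j reachable from i"
reach = [row[:] for row in adj]
for k in range(n):
    for i in range(n):
        if reach[i][k]:
            for j in range(n):
                reach[i][j] = reach[i][j] or reach[k][j]

labels = ["recurrent" if all(not reach[i][j] or reach[j][i]
                             for j in range(n))
          else "transient" for i in range(n)]
print(labels)   # states 0-4 recurrent, states 5-8 transient
```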
Solution question 3

We now consider the case where the stability of the system has been assured, i.e. the error has been found and corrected, and the program has been running for a long time without errors. The parameters are as follows:

Pi,i+1 = 0.6 for i = 1,2,3    Pi,i−1 = 0.2 for i = 2,3,4
Pi,j = 0 for |i − j| > 1      q_i = 10^(−3i)    r_i = 10^(−3i−5)

Characterise the stochastic process that describes the stable system.

The system becomes stable by reaching one of the states 1-4. From then on the process is a reversible, ergodic Markov chain in discrete time.
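Reversibility can be verified numerically via detailed balance: the probability flow up between neighbouring levels must equal the flow down. A minimal check, assuming the tiny q_i and r_i terms are negligible so the remaining mass sits on the diagonal:

```python
# Detailed-balance check for the stable 4-level birth-death chain:
# up-probability 0.6 (levels 1-3), down-probability 0.2 (levels 2-4).
up, down = 0.6, 0.2
pi = [3 ** (i - 1) for i in range(1, 5)]   # unnormalised: 1, 3, 9, 27
total = sum(pi)                            # 40
pi = [x / total for x in pi]

for i in range(3):                         # compare levels i+1 and i+2
    flow_up = pi[i] * up
    flow_down = pi[i + 1] * down
    assert abs(flow_up - flow_down) < 1e-12

print(pi[0])   # prints 0.025, i.e. 1/40
```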
Solution question 4

For what fraction of time will the system be in level 1?

We obtain the following steady state equations (from detailed balance, π(i+1) · 0.2 = πi · 0.6, so each level carries three times the mass of the one below):

πi = 3^(i−1) π1,  i = 1, 2, 3, 4

∑_{i=1}^{4} 3^(i−1) π1 = 1  ⇔  40 π1 = 1  ⇒  π1 = 1/40

The sum ∑_{i=1}^{4} 3^(i−1) can be obtained from the geometric series:

∑_{i=1}^{4} 3^(i−1) = (1 − 3^4)/(1 − 3) = 40

so that

∑_{i=1}^{4} 3^(i−1) π1 = 1  ⇔  ((1 − 3^4)/(1 − 3)) π1 = 1
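The hand computation can be cross-checked by solving the stationary equations πP = π, ∑πi = 1 numerically. As above, the negligible q_i and r_i terms are dropped, so the diagonal absorbs the leftover probability:

```python
import numpy as np

# 4-level birth-death chain: up 0.6 (levels 1-3), down 0.2 (levels 2-4).
up, down = 0.6, 0.2
P = np.zeros((4, 4))
for i in range(4):
    if i < 3:
        P[i, i + 1] = up
    if i > 0:
        P[i, i - 1] = down
    P[i, i] = 1.0 - P[i].sum()       # diagonal takes the remainder

# Stationary distribution: solve (P^T - I) pi = 0 together with the
# normalisation row sum(pi) = 1, as one least-squares system.
Aeq = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(Aeq, b, rcond=None)

print(np.round(pi, 4))   # prints [0.025 0.075 0.225 0.675]
```

The first entry reproduces π1 = 1/40 = 0.025, the fraction of time spent in level 1.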