Source: pluto.huji.ac.il/~haviv/solmanu1.pdf
Solution set for ‘Queues: A Course in Queueing
Theory’ by Moshe Haviv
October 30, 2017
I received much help in composing this solution set from Yoav Kerner, Binyamin Oz, and Liron Ravner. Credit is given, where due, next to the appropriate questions. I am indebted to all three of them.
Contents

1 Chapter 1 (Questions 1–22)
2 Chapter 2 (Questions 1–11)
3 Chapter 3 (Questions 1–9)
4 Chapter 4 (Questions 1–5)
5 Chapter 5 (Questions 1–6)
6 Chapter 6 (Questions 1–21)
7 Chapter 7 (Questions 1–3)
8 Chapter 8 (Questions 1–19)
9 Chapter 9 (Questions 1–4)
10 Chapter 10 (Questions 1–6)
11 Chapter 11 (Questions 1–3)
12 Chapter 12 (Questions 1–3)
1 Chapter 1
1.1 Question 1
We use induction. First, E(X^0) = E(1) = 1, which proves the formula for the case where n = 0. Assume the induction hypothesis that E(X^{n-1}) = (n-1)!/λ^{n-1}. Next,

$$E(X^n) = \int_{x=0}^{\infty} x^n \lambda e^{-\lambda x}\,dx = -x^n e^{-\lambda x}\Big|_{x=0}^{\infty} + \int_{x=0}^{\infty} n x^{n-1} e^{-\lambda x}\,dx$$
$$= 0 + \frac{n}{\lambda}\int_{x=0}^{\infty} x^{n-1}\lambda e^{-\lambda x}\,dx = \frac{n}{\lambda}E(X^{n-1}) = \frac{n}{\lambda}\cdot\frac{(n-1)!}{\lambda^{n-1}} = \frac{n!}{\lambda^n},$$

where the penultimate equality is due to the induction hypothesis.
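As a numerical sanity check (not part of the original solution), the moment formula can be verified by Monte Carlo; the rate and sample size below are arbitrary illustrative choices.

```python
import math
import random

# Monte Carlo check of E(X^n) = n!/lam^n for X ~ Exp(lam);
# lam = 2.0 and the sample size are illustrative choices.
random.seed(0)
lam, trials = 2.0, 200_000
samples = [random.expovariate(lam) for _ in range(trials)]

for n in range(4):
    estimate = sum(x**n for x in samples) / trials
    exact = math.factorial(n) / lam**n
    assert abs(estimate - exact) < 0.05 * exact + 0.01
```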
1.2 Question 2
$$h(t) = \frac{f(t)}{\bar F(t)} = -\frac{d}{dt}\log_e \bar F(t).$$

Hence, for some constant $C$,

$$-\int_{t=0}^{x} h(t)\,dt = \log_e \bar F(x) + C,$$

and therefore, for some constant $K$,

$$\bar F(x) = K e^{-\int_{t=0}^{x} h(t)\,dt}.$$

Since $\bar F(0) = 1$ and $\int_{t=0}^{0} h(t)\,dt = 0$, we conclude that $K = 1$; the assumption that $\int_{t=0}^{\infty} h(t)\,dt = \infty$ then gives $\bar F(\infty) = 0$, which completes the proof.
1.3 Question 3
a) E(X) = 1/λ, E(Y) = 1/µ, and since min{X,Y} follows an exponential distribution with parameter λ+µ, E(min{X,Y}) = 1/(λ+µ). Hence, with the help of the hint, E(max{X,Y}) = 1/λ + 1/µ − 1/(λ+µ).
b)
$$F_W(t) = P(W\le t) = P(\max\{X,Y\}\le t) = P(X\le t)P(Y\le t) = (1-e^{-\lambda t})(1-e^{-\mu t}) = 1 - e^{-\lambda t} - e^{-\mu t} + e^{-(\lambda+\mu)t}.$$
c) It was already proved that E(min_{i=1}^n {X_i}) = 1/(nλ). Once the minimum among the n is realized, then, by the memoryless property, everything starts from scratch but now with n − 1 random variables. Again, we look at the minimum among them, and its expected value equals 1/((n−1)λ). This is then repeated, now with n − 2, and so on. These mean values need to be added, and we get the required result. The fact that the ratio between this summation (the harmonic series) and log_e n goes to 1 is well known. The moral here is that if you are waiting for a group to be formed where each individual's time of arrival follows an exponential distribution, you will wait quite a lot.
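The harmonic-sum formula of item c) can be checked by simulation; this is a sketch with illustrative values of n and λ, not part of the original solution.

```python
import random

# Monte Carlo check that E(max of n iid Exp(lam)) = (1/lam)(1 + 1/2 + ... + 1/n);
# n = 5 and lam = 1.0 are illustrative choices.
random.seed(1)
lam, n, trials = 1.0, 5, 200_000
est = sum(max(random.expovariate(lam) for _ in range(n)) for _ in range(trials)) / trials
harmonic = sum(1.0 / k for k in range(1, n + 1))
assert abs(est - harmonic / lam) < 0.02
```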
1.4 Question 4
We need to show that

$$\int_{x_1=0}^{y}\int_{x_2=x_1}^{y}\cdots\int_{x_{i-1}=x_{i-2}}^{y}\int_{x_{i+1}=y}^{x}\int_{x_{i+2}=x_{i+1}}^{x}\cdots\int_{x_n=x_{n-1}}^{x} 1\,dx_1\ldots dx_{i-1}\,dx_{i+1}\ldots dx_n$$

equals

$$\frac{y^{i-1}}{(i-1)!}\cdot\frac{(x-y)^{n-i}}{(n-i)!},\qquad 1\le i\le n.$$
Indeed, taking the innermost integral, with respect to $x_n$, we get $(x-x_{n-1})$. Repeating, now with respect to $x_{n-1}$, we get $(x-x_{n-2})^2/2$. Next, with respect to $x_{n-2}$, the integral equals $(x-x_{n-3})^3/3!$. Doing this until (and inclusive of) $x_{i+1}$ results in $(x-y)^{n-i}/(n-i)!$. This is now a constant with respect to the rest of the integration, so we have

$$\frac{(x-y)^{n-i}}{(n-i)!}\int_{x_1=0}^{y}\int_{x_2=x_1}^{y}\cdots\int_{x_{i-1}=x_{i-2}}^{y} 1\,dx_1\ldots dx_{i-1}.$$

Again doing the integrations one by one leads first to $(y-x_{i-2})$, then to $(y-x_{i-3})^2/2$, and eventually to $(y-0)^{i-1}/(i-1)!$, concluding the proof.
An alternative (somewhat heuristic) way is as follows. Fix which observation comes as the i-th smallest; there are n options here. Also, fix the indices of those to precede it; there are n−1 choose i−1 options, so there are n!/((i−1)!(n−i)!) options in total. The probability (in fact, density) that the selected one has exactly the value y is 1/x. The probability that the selected i−1 all have a smaller value than y is (y/x)^{i−1}, and for the rest to have higher values it is ((x−y)/x)^{n−i}. All that is needed now is to multiply all these probabilities.
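The product of probabilities in the heuristic argument can be checked numerically; the following sketch (with illustrative n, i, x, y) estimates the density of the i-th smallest of n uniforms on (0, x) and compares it to the formula.

```python
import math
import random

# Monte Carlo check of the density of the i-th smallest among n iid Uniform(0, x)
# observations at the point y; n, i, x, y, dy are illustrative choices.
random.seed(2)
n, i, x, y, dy = 5, 3, 2.0, 0.8, 0.02
trials = 400_000
hits = sum(1 for _ in range(trials)
           if y <= sorted(random.uniform(0, x) for _ in range(n))[i - 1] < y + dy)
density = (math.factorial(n) / (math.factorial(i - 1) * math.factorial(n - i))
           * (y / x) ** (i - 1) * ((x - y) / x) ** (n - i) / x)
assert abs(hits / trials / dy - density) < 0.05 * density + 0.01
```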
1.5 Question 5
$$A(t) = \sum_{i=1}^{\infty} p(1-p)^{i-1}t^i = tp\sum_{i=0}^{\infty}[(1-p)t]^i = \frac{tp}{1-(1-p)t}.$$
1.6 Question 6
$$A(t) = \sum_{i=0}^{\infty} e^{-\lambda}\frac{\lambda^i}{i!}t^i = e^{-\lambda}\sum_{i=0}^{\infty}\frac{(\lambda t)^i}{i!} = e^{-\lambda}e^{\lambda t} = e^{-\lambda(1-t)}.$$
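Both closed forms (the geometric z-transform above and the Poisson z-transform of this question) can be checked against a direct, truncated summation; p, λ and t below are arbitrary illustrative values with |t| ≤ 1.

```python
import math

# Direct numerical check of the two z-transforms; truncation error is negligible
# at these illustrative parameter values.
p, lam, t = 0.3, 2.0, 0.6
geo = sum(p * (1 - p) ** (i - 1) * t ** i for i in range(1, 200))
assert abs(geo - t * p / (1 - (1 - p) * t)) < 1e-12
poi = sum(math.exp(-lam) * lam ** i / math.factorial(i) * t ** i for i in range(60))
assert abs(poi - math.exp(-lam * (1 - t))) < 1e-12
```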
1.7 Question 7
Assume X and Y are two independent nonnegative integer-valued random variables. Then,

$$A_{X+Y}(t) = \sum_{i=0}^{\infty} P(X+Y=i)t^i = \sum_{i=0}^{\infty}\sum_{k=0}^{i} P(X=k)P(Y=i-k)t^i,$$

which by changing the order of summation equals

$$\sum_{k=0}^{\infty} P(X=k)t^k\sum_{i=k}^{\infty} P(Y=i-k)t^{i-k} = \sum_{k=0}^{\infty} P(X=k)t^k\sum_{i=0}^{\infty} P(Y=i)t^i = A_Y(t)\sum_{k=0}^{\infty} P(X=k)t^k = A_Y(t)A_X(t).$$

The proof can now be concluded by induction: $\sum_{i=1}^{n}X_i = (\sum_{i=1}^{n-1}X_i) + X_n$.
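The product rule for z-transforms can be illustrated concretely: the pmf of X + Y is the convolution of the two pmfs, and its z-transform factors. The two pmfs below are arbitrary illustrative examples on {0, 1, 2}.

```python
# Check that the z-transform of a sum of two independent discrete random
# variables equals the product of their z-transforms.
pX = [0.2, 0.5, 0.3]
pY = [0.1, 0.6, 0.3]
conv = [0.0] * (len(pX) + len(pY) - 1)
for k, a in enumerate(pX):
    for j, b in enumerate(pY):
        conv[k + j] += a * b          # pmf of X + Y

def ztrans(pmf, t):
    return sum(q * t ** i for i, q in enumerate(pmf))

for t in (0.2, 0.7, 1.0):
    assert abs(ztrans(conv, t) - ztrans(pX, t) * ztrans(pY, t)) < 1e-12
```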
1.8 Question 8
$f_X(t) = \sum_n p_n\lambda_n e^{-\lambda_n t}$. Hence, $\bar F_X(t) = \sum_n p_n e^{-\lambda_n t}$ and finally $F_X(t) = 1 - \sum_n p_n e^{-\lambda_n t}$.
1.9 Question 9
The proof is, in fact, given in the footnote.
1.10 Question 10
(Benny) Let X|Y be Erlang with parameters Y and λ where Y is geometricwith parameter p. Then
$$F^*_X(s) = E(e^{-sX}) = E(E(e^{-sX}\mid Y)).$$

From (1.24) we learn that

$$E(e^{-sX}\mid Y) = \Big(\frac{\lambda}{\lambda+s}\Big)^Y,$$

and therefore, using (1.19), we get

$$F^*_X(s) = E\Big(\Big(\frac{\lambda}{\lambda+s}\Big)^Y\Big) = A_Y\Big(\frac{\lambda}{\lambda+s}\Big) = \frac{p\frac{\lambda}{\lambda+s}}{1-(1-p)\frac{\lambda}{\lambda+s}} = \frac{p\lambda}{\lambda+s-(1-p)\lambda} = \frac{p\lambda}{s+p\lambda},$$

which is the LST of an exponential random variable with parameter $p\lambda$.
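The conclusion that a geometric sum of exponentials is again exponential can be checked by simulation; the rates below are illustrative, and the empirical LST is compared with pλ/(s + pλ) at a few points.

```python
import math
import random

# Monte Carlo check that a geometric (parameter p) sum of iid Exp(lam) variables
# is Exp(p*lam); lam and p are illustrative choices.
random.seed(3)
lam, p, trials = 3.0, 0.25, 200_000

def geometric_sum():
    y = 1                               # geometric number of terms, support 1, 2, ...
    while random.random() > p:
        y += 1
    return sum(random.expovariate(lam) for _ in range(y))

xs = [geometric_sum() for _ in range(trials)]
assert abs(sum(xs) / trials - 1 / (p * lam)) < 0.02
for s in (0.5, 1.0, 2.0):
    emp_lst = sum(math.exp(-s * x) for x in xs) / trials
    assert abs(emp_lst - p * lam / (s + p * lam)) < 0.01
```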
1.11 Question 11
$$F^*_X(s) = \int_{x=0}^{\infty} e^{-sx}\lambda e^{-\lambda x}\,dx = \frac{\lambda}{\lambda+s}\int_{x=0}^{\infty}(\lambda+s)e^{-(\lambda+s)x}\,dx = \frac{\lambda}{\lambda+s}.$$
1.12 Question 12
It is sufficient to show the result for the case where n = 2, as the rest follows by an inductive argument. This case can be done as follows:

$$f^*_{X_1+X_2}(s) = \int_{x=0}^{\infty} e^{-sx}f_{X_1+X_2}(x)\,dx = \int_{x=0}^{\infty} e^{-sx}\int_{y=0}^{x} f_{X_1}(y)f_{X_2}(x-y)\,dy\,dx$$
$$= \int_{x=0}^{\infty}\int_{y=0}^{x} f_{X_1}(y)e^{-sy}f_{X_2}(x-y)e^{-s(x-y)}\,dy\,dx = \int_{y=0}^{\infty} e^{-sy}f_{X_1}(y)\int_{x=y}^{\infty} e^{-s(x-y)}f_{X_2}(x-y)\,dx\,dy$$
$$= \int_{y=0}^{\infty} e^{-sy}f_{X_1}(y)\,dy\int_{x=0}^{\infty} e^{-sx}f_{X_2}(x)\,dx = F^*_{X_1}(s)F^*_{X_2}(s).$$
1.13 Question 13
We prove the result by induction and the use of the memoryless property. The case where n = 2 was established in Lemma 2.1(2). Now,

$$P\Big(X_1 \ge \sum_{i=2}^{n} X_i\Big) = P\Big(X_1 \ge \sum_{i=2}^{n} X_i \,\Big|\, X_1 \ge X_2\Big)P(X_1 \ge X_2),$$

which by the memoryless property equals

$$P\Big(X_1 \ge \sum_{i=3}^{n} X_i\Big)P(X_1 \ge X_2).$$

The first term equals, by the induction hypothesis, $\prod_{i=3}^{n}\lambda_i/(\lambda_i+\lambda_1)$, while the second, due to Lemma 2.1(2), equals $\lambda_2/(\lambda_2+\lambda_1)$. This completes the proof.
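The product formula can be verified by simulation; the rates below are arbitrary illustrative choices.

```python
import random

# Monte Carlo check of P(X_1 >= X_2 + ... + X_n) = prod_{i>=2} lam_i/(lam_i + lam_1)
# for independent exponentials.
random.seed(4)
lams = [1.0, 2.0, 3.0, 4.0]
trials = 400_000
hits = 0
for _ in range(trials):
    xs = [random.expovariate(l) for l in lams]
    if xs[0] >= sum(xs[1:]):
        hits += 1
prod = 1.0
for l in lams[1:]:
    prod *= l / (l + lams[0])
assert abs(hits / trials - prod) < 0.005
```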
1.14 Question 14
In the case where all parameters in the previous question coincide, the righthand side there becomes 1/2^{n−1}. If we ask for the probability that X_2 is larger than the sum of all the others, we will of course get the same answer, and likewise for any X_i, 1 ≤ i ≤ n, being greater than or equal to the sum of all the others. The union of these events, which are clearly disjoint, is the event that one of them is greater than or equal to the sum of the others; this one is in fact the maximum among them (and if the maximum is greater than the sum of the others, then clearly such a one exists). Summing up the probabilities, we get n/2^{n−1}.
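A quick simulation check of n/2^{n−1}, with an illustrative n:

```python
import random

# Monte Carlo check that the probability that the maximum of n iid exponentials
# exceeds the sum of the others equals n / 2^(n-1); n = 4 is illustrative.
random.seed(5)
n, trials = 4, 400_000
hits = 0
for _ in range(trials):
    xs = [random.expovariate(1.0) for _ in range(n)]
    if max(xs) >= sum(xs) - max(xs):
        hits += 1
assert abs(hits / trials - n / 2 ** (n - 1)) < 0.005
```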
1.15 Question 15
Proof 1. We next show that the reciprocal of the hazard function $h_X(t)$ is monotone decreasing in t. Dividing (1.12) by (1.10) gives, up to a multiplicative constant,

$$\frac{\sum_{k=0}^{n-1}(\lambda t)^k/k!}{t^{n-1}} \propto \sum_{k=0}^{n-1}\frac{(\lambda t)^{k+1-n}}{k!},$$

which is clearly decreasing in t since $k+1-n\le 0$ for all $0\le k\le n-1$.
Proof 2. Recall that an Erlang random variable with parameters n and λ is the sum of n independent exponential random variables with parameter λ. Thus, while one waits for its realization, it can be looked at as being in one of n possible stages, which progress as a Poisson process. Also, in order to see its termination in the next instant, it needs to be in its final stage; moreover, given that, the hazard rate is λ (as the time until conclusion is then exponential with parameter λ). Note that the hazard rate given any other stage is zero, as one would need to conclude more than one stage in no time. Using the notation of Poisson processes, what we look for is

$$P(N(t) = n-1 \mid N(t) \le n-1).$$

By (1.14) this probability equals

$$\frac{e^{-\lambda t}\frac{(\lambda t)^{n-1}}{(n-1)!}}{\sum_{k=0}^{n-1} e^{-\lambda t}\frac{(\lambda t)^k}{k!}}.$$

As in the previous proof, it is easily seen that the reciprocal of this value is monotone decreasing in t.
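The IFR property of the Erlang distribution can be checked numerically on a grid; n = 3 and λ = 1.0 below are illustrative.

```python
import math

# Check numerically that the hazard rate of Erlang(n, lam) is increasing in t
# and approaches lam for large t.
def erlang_hazard(t, n, lam):
    pdf = lam * (lam * t) ** (n - 1) * math.exp(-lam * t) / math.factorial(n - 1)
    sf = sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k) for k in range(n))
    return pdf / sf

hs = [erlang_hazard(0.1 * k, 3, 1.0) for k in range(1, 80)]
assert all(h2 > h1 for h1, h2 in zip(hs, hs[1:]))
assert abs(erlang_hazard(200.0, 3, 1.0) - 1.0) < 0.02   # hazard approaches lam = 1
```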
1.16 Question 16
a)
$$f_X(t) = \sum_{i=1}^{n}\alpha_i\lambda_i e^{-\lambda_i t}.$$
b) $E(X\mid I=i) = 1/\lambda_i$, $1\le i\le n$, and since $E(X) = E(E(X\mid I))$, $E(X) = \sum_{i=1}^n\alpha_i/\lambda_i$. As for the variance, first $\mathrm{Var}(X\mid I=i) = 1/\lambda_i^2$, so $E(\mathrm{Var}(X\mid I)) = \sum_{i=1}^n\alpha_i/\lambda_i^2$. Second, $\mathrm{Var}(E(X\mid I)) = E(E(X\mid I)^2) - E^2(E(X\mid I))$, which equals $\sum_{i=1}^n\alpha_i/\lambda_i^2 - (\sum_{i=1}^n\alpha_i/\lambda_i)^2$. Finally, use the fact that $\mathrm{Var}(X) = E(\mathrm{Var}(X\mid I)) + \mathrm{Var}(E(X\mid I))$ and sum up these two values to get

$$\mathrm{Var}(X) = \sum_{i=1}^{n}\frac{2\alpha_i}{\lambda_i^2} - \Big(\sum_{i=1}^{n}\frac{\alpha_i}{\lambda_i}\Big)^2.$$
c) Bayes' formula leads to

$$P(I=i\mid X\ge t) = \frac{P(I=i)P(X\ge t\mid I=i)}{P(X\ge t)} = \frac{\alpha_i e^{-\lambda_i t}}{\sum_{j=1}^{n}\alpha_j e^{-\lambda_j t}},\qquad 1\le i\le n.$$
d)

$$h_X(t) = \frac{f_X(t)}{\bar F_X(t)} = \frac{\sum_{i=1}^{n}\alpha_i\lambda_i e^{-\lambda_i t}}{\sum_{i=1}^{n}\alpha_i e^{-\lambda_i t}}.\qquad(1)$$

We need to show that this ratio is decreasing in t. We differentiate it with respect to t and observe that the derivative is negative for any value of t. Taking the derivative in the standard way for a quotient, we need only look at the numerator of the derivative (since the denominator is always positive). Specifically, this numerator equals

$$-\sum_{i=1}^{n}\alpha_i\lambda_i^2 e^{-\lambda_i t}\sum_{i=1}^{n}\alpha_i e^{-\lambda_i t} + \Big(\sum_{i=1}^{n}\alpha_i\lambda_i e^{-\lambda_i t}\Big)^2.$$

We argue that the above is nonpositive by the Cauchy–Schwarz inequality. It says that for any two positive series $\{a_i\}_{i=1}^n$ and $\{b_i\}_{i=1}^n$,

$$\sum_{i=1}^{n}a_i^2\sum_{i=1}^{n}b_i^2 - \Big(\sum_{i=1}^{n}a_ib_i\Big)^2 \ge 0.$$

Using this inequality with $a_i = \lambda_i\sqrt{\alpha_i e^{-\lambda_i t}}$ and $b_i = \sqrt{\alpha_i e^{-\lambda_i t}}$, $1\le i\le n$, concludes the proof.
e) Inspect the ratio in (1). Multiply numerator and denominator by $e^{\min_{j=1}^n\{\lambda_j\}t}$ and take the limit as t goes to infinity. All terms of the type $e^{-(\lambda_i-\min_{j=1}^n\{\lambda_j\})t}$ go to zero, with the exception of the term for $j^*\equiv\arg\min_{i=1}^n\{\lambda_i\}$, which is constant. This leaves us with $\min_{j=1}^n\{\lambda_j\}$ as the limit, as required.
Comment: It is possible here to see that given X ≥ t, namely that the age equals t, the residual life time follows a hyper-exponential distribution with α_i replaced by P(I = i|X ≥ t), whose explicit expression is given above. In words, as time progresses the weights on the possible exponential distributions move. In particular, the hazard function is a weighted average of the current posteriors for each of the options on I. When time is sent to infinity, full weight is given to the slowest option possible. To visualize this, suppose a lightbulb follows an exponential distribution conditional on quality, where quality is measured by the rate of burning, the slower the better. The longer the lightbulb has been functioning, the more likely it is to be of the best quality (with the corresponding exponential distribution) and hence the more time (stochastically) is ahead, as said by the DFR property.
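Items d) and e) can be checked numerically for a concrete two-component hyperexponential; the weights and rates below are illustrative.

```python
import math

# Check that the hyperexponential hazard rate (1) is decreasing and that its
# limit is the smallest rate.
alphas, lams = [0.3, 0.7], [1.0, 4.0]

def hazard(t):
    num = sum(a * l * math.exp(-l * t) for a, l in zip(alphas, lams))
    den = sum(a * math.exp(-l * t) for a, l in zip(alphas, lams))
    return num / den

hs = [hazard(0.2 * k) for k in range(100)]
assert all(h2 <= h1 + 1e-12 for h1, h2 in zip(hs, hs[1:]))
assert abs(hazard(50.0) - min(lams)) < 1e-9
```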
1.17 Question 17
Given that server i finishes first (a probability λ_i/(λ_1+λ_2) event), the probability that this customer is not the last to leave is λ_i/(λ_1+λ_2), i = 1, 2. Hence, the prior probability of this event is

$$\sum_{i=1}^{2}\Big(\frac{\lambda_i}{\lambda_1+\lambda_2}\Big)^2.$$
1.18 Question 18
Integration by parts (note that s is considered a constant here) leads to

$$\int_{x=0}^{\infty} F_X(x)e^{-sx}\,dx = -\frac{1}{s}F_X(x)e^{-sx}\Big|_{x=0}^{\infty} + \frac{1}{s}\int_{x=0}^{\infty} f_X(x)e^{-sx}\,dx.$$

The fact that the first term is zero concludes the proof.
1.19 Question 19
One needs to assume here that $F_X(0) = 0$, as in the previous exercise.

$$P(X\le Y) = \int_{y=0}^{\infty} P(X\le y)f_Y(y)\,dy = \int_{y=0}^{\infty} P(X\le y)se^{-sy}\,dy = s\int_{y=0}^{\infty} F_X(y)e^{-sy}\,dy = s\,\frac{F^*_X(s)}{s} = F^*_X(s),$$

where the penultimate equality is based on the previous exercise.
1.20 Question 20
(Was solved by Liron Ravner) Taking the derivative of the tail function and negating, we get the density function $f(x) = \alpha\lambda(\lambda x)^{\alpha-1}e^{-(\lambda x)^{\alpha}}$. Dividing this density by the tail function, we get that the hazard function equals

$$\alpha\lambda(\lambda x)^{\alpha-1}.$$
a) The hazard is decreasing with x when 0 < α < 1.
b) The hazard is increasing with x when α > 1. Note that in the casewhere α = 1 we get an exponential distribution which is both IHR andDHR.
13
Chapter 1
1.21 Question 21
a) Straightforward differentiation leads to $f_X(t) = \alpha\beta^{\alpha}/(\beta+t)^{\alpha+1}$. This implies that $h_X(t) = \alpha/(\beta+t)$, which is decreasing with t.
b)

$$P(X\ge x\mid X\ge t) = \frac{P(X\ge x)}{P(X\ge t)} = \frac{\beta^{\alpha}}{(\beta+x)^{\alpha}}\Big/\frac{\beta^{\alpha}}{(\beta+t)^{\alpha}} = \frac{(\beta+t)^{\alpha}}{(\beta+x)^{\alpha}},\qquad x\ge t.$$

Hence,

$$P(X-t\ge x\mid X\ge t) = \frac{(\beta+t)^{\alpha}}{(\beta+x+t)^{\alpha}},\qquad x\ge 0.$$

Integrating this from zero to infinity leads to the expected value we are after (see Lemma 1.1). Specifically, the integral equals

$$\frac{(\beta+t)^{\alpha}}{-\alpha+1}(\beta+x+t)^{-\alpha+1}\Big|_{x=0}^{\infty} = \frac{\beta+t}{\alpha-1},$$

which is increasing with t (for α > 1), in line with the DFR property.
1.22 Question 22
$$P(\lfloor X\rfloor = i) = \int_{x=i}^{i+1}\lambda e^{-\lambda x}\,dx = -e^{-\lambda x}\Big|_{x=i}^{i+1} = e^{-\lambda i} - e^{-\lambda(i+1)} = (e^{-\lambda})^i(1-e^{-\lambda}),\qquad i\ge 0,$$

namely a geometric distribution (starting at zero) with success probability $1-e^{-\lambda}$.
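A simulation check of this geometric identity, with an illustrative rate:

```python
import math
import random

# Monte Carlo check that floor(X), for X ~ Exp(lam), is geometric on {0, 1, ...}
# with 'failure' probability q = e^(-lam); lam = 0.7 is illustrative.
random.seed(6)
lam, trials = 0.7, 200_000
counts = {}
for _ in range(trials):
    i = int(random.expovariate(lam))    # floor, since the variate is positive
    counts[i] = counts.get(i, 0) + 1
q = math.exp(-lam)
for i in range(5):
    assert abs(counts.get(i, 0) / trials - q ** i * (1 - q)) < 0.01
```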
2 Chapter 2
2.1 Question 1
$$E(X) = \sum_{i=1}^{\infty} ip_i = \sum_{i=1}^{\infty}\sum_{j=1}^{i} p_i.$$

Changing the order of summation, we get

$$E(X) = \sum_{j=1}^{\infty}\sum_{i=j}^{\infty} p_i = \sum_{j=1}^{\infty} q_j,$$
as required.
2.2 Question 2
$E(X) = \sum_n p_n/\lambda_n$ and $\bar F_X(x) = \sum_n p_n e^{-\lambda_n x}$. Dividing the latter by the former gives the age density.
2.3 Question 3
Recall that E(X) = n/λ. The tail function is given in formula (1.12) (see Chapter 1). Dividing the latter by the former implies that the age density equals

$$f_A(x) = \frac{1}{n}\sum_{k=0}^{n-1}\frac{\lambda e^{-\lambda x}(\lambda x)^k}{k!}.$$

This density is a mixture, with equal probabilities, of n Erlang densities with parameters (1, λ), …, (n, λ).
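The mixture identity can be confirmed pointwise; n and λ below are illustrative.

```python
import math

# Check that the age density of Erlang(n, lam) — the survival function divided
# by E(X) = n/lam — equals the equal-weights mixture of Erlang(1..n, lam) densities.
lam, n = 2.0, 4

def age_density(x):
    sf = sum(math.exp(-lam * x) * (lam * x) ** k / math.factorial(k) for k in range(n))
    return sf / (n / lam)

def mixture_density(x):
    return sum(lam * math.exp(-lam * x) * (lam * x) ** k / math.factorial(k)
               for k in range(n)) / n

for x in (0.1, 0.5, 1.3, 2.7):
    assert abs(age_density(x) - mixture_density(x)) < 1e-12
```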
2.4 Question 4
(Benny) The first step is to compute the LST of the right-hand side; then we show that it coincides with the LST of the left-hand side. Before that, note the following facts:

a) The LST of a sum of N iid random variables, each with LST $F^*_X(s)$, equals

$$\Pi_N(F^*_X(s)),$$

where $\Pi_N(t) = E(t^N)$ is the z-transform of N.

b) The LST of a sum of N − 1 such random variables equals

$$\Pi_{N-1}(F^*_X(s)) = E(F^*_X(s)^{N-1}) = \frac{\Pi_N(F^*_X(s))}{F^*_X(s)}.$$

c) The z-transform of the length-biased distribution of N equals

$$\Pi_{L_N}(t) = \sum_{i=1}^{\infty} t^i P(L_N=i) = \sum_{i=1}^{\infty} t^i\frac{iP(N=i)}{E(N)} = \frac{t}{E(N)}\sum_{i=1}^{\infty} it^{i-1}P(N=i) = \frac{t}{E(N)}\frac{d\Pi_N(t)}{dt}.$$

Now, recalling Lemma 2.3 and the fact that the LST of a sum of independent random variables is the product of their LSTs, the LST of the right-hand side equals

$$F^*_{L_X}(s)\frac{\Pi_{L_N}(F^*_X(s))}{F^*_X(s)} = -\frac{dF^*_X(s)}{ds}\frac{1}{E(X)}\frac{\Pi_{L_N}(F^*_X(s))}{F^*_X(s)}.$$

The LST of the left-hand side equals

$$F^*_{L_Y}(s) = -\frac{dF^*_Y(s)}{ds}\frac{1}{E(Y)} = -\frac{d\Pi_N(F^*_X(s))}{ds}\frac{1}{E(N)E(X)} = -\frac{d\Pi_N(t)}{dt}\Big|_{t=F^*_X(s)}\frac{dF^*_X(s)}{ds}\frac{1}{E(N)E(X)}$$
$$= -\frac{t}{E(N)}\frac{d\Pi_N(t)}{dt}\Big|_{t=F^*_X(s)}\frac{1}{F^*_X(s)}\frac{dF^*_X(s)}{ds}\frac{1}{E(X)} = -\frac{\Pi_{L_N}(F^*_X(s))}{F^*_X(s)}\frac{dF^*_X(s)}{ds}\frac{1}{E(X)},$$

as required. An explanation for this theorem is as follows: sampling a length-biased Y is equivalent to sampling a length-biased X and a length-biased N, and then computing Y as this (biased) X plus a sum of N − 1 (unbiased) X's, where N is the biased copy.
2.5 Question 5
a) The event of being at stage i at time t means that up to this time i − 1 stages have been completed while the i-th is still in process. Its probability in the case where i = 1 is $e^{-\lambda t}$, as this is the probability that a single exponential random variable with parameter λ is greater than or equal to t. In the case where i ≥ 2, it equals

$$\int_{y=0}^{t}\lambda e^{-\lambda y}\frac{(\lambda y)^{i-2}}{(i-2)!}e^{-\lambda(t-y)}\,dy.$$

Note that we are using the complete probability theorem, where the integration is with respect to where exactly stage i − 1 ended. Simple calculus leads to the value

$$e^{-\lambda t}\frac{(\lambda t)^{i-1}}{(i-1)!}.$$

This needs to be divided by P(X ≥ t) in order to get $p_i(t)$; that value can be read from formula (1.12) (see Chapter 1). Hence, we conclude that

$$p_i(t) = \frac{(\lambda t)^{i-1}/(i-1)!}{\sum_{j=0}^{n-1}(\lambda t)^j/j!},\qquad 1\le i\le n.$$

b)

$$h_X(t) = \frac{f_X(t)}{\bar F_X(t)} = \frac{\lambda\frac{(\lambda t)^{n-1}}{(n-1)!}e^{-\lambda t}}{\sum_{i=0}^{n-1} e^{-\lambda t}\frac{(\lambda t)^i}{i!}} = \lambda p_n(t).$$

The interpretation is as follows. The hazard rate corresponds to an immediate end. This can take place only if the current stage is stage n (an immediate end in any other stage would require the completion of a number of stages in no time). Once the current stage is stage n, the rate of 'death' is λ. Hence the product of λ and $p_n(t)$ is the hazard rate at time t.
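Both claims can be checked numerically; λ, n and t below are illustrative values.

```python
import math

# Check that the stage probabilities p_i(t) sum to one and that the Erlang
# hazard rate equals lam * p_n(t).
lam, n, t = 1.5, 4, 0.8
norm = sum((lam * t) ** j / math.factorial(j) for j in range(n))
p = [((lam * t) ** (i - 1) / math.factorial(i - 1)) / norm for i in range(1, n + 1)]
assert abs(sum(p) - 1.0) < 1e-12

pdf = lam * (lam * t) ** (n - 1) * math.exp(-lam * t) / math.factorial(n - 1)
sf = sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k) for k in range(n))
assert abs(pdf / sf - lam * p[-1]) < 1e-12
```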
2.6 Question 6
(Benny & Liron) Let X be a non-negative random variable and denote the corresponding age variable by A. Suppose that X has a decreasing hazard rate, i.e., $f_X(x)/\bar F_X(x)$ is monotone decreasing with respect to x. We prove that E(A) ≥ E(X) by showing that A stochastically dominates X: $F_X(x)\ge F_A(x)$ for all $x\ge 0$. Recall that the density of the age is $f_A(x) = \bar F_X(x)/E(X)$. Since $\bar F_X(x)$ is monotone decreasing in x, so is $f_A(x)$. The decreasing hazard rate clearly implies that $f_X(x)$ is also monotone decreasing in x, and at a faster rate than $\bar F_X(x)$, for any $x\ge 0$. Consider the equation $f_X(x) = f_A(x)$, i.e.,

$$f_X(x) = \frac{\bar F_X(x)}{E(X)}.\qquad(2)$$

It has a unique solution, denoted by x′, because it is equivalent to

$$h_X(x) = \frac{1}{E(X)},\qquad(3)$$

where the LHS is decreasing and the RHS is constant. All of the above lead to the following observations:

a) $f_X(x) > f_A(x)$ for any $x < x'$.

b) $f_X(x') = f_A(x')$.

c) $f_X(x) < f_A(x)$ for any $x > x'$.

[Figure: the densities $f_X$ and $f_A$, crossing exactly once at $x'$.]

The first and second imply that $F_X(x) > F_A(x)$ for all $0 < x\le x'$. The third, together with the fact that $\lim_{x\to\infty}F_X(x) = \lim_{x\to\infty}F_A(x) = 1$, implies that $F_X(x)\ge F_A(x)$ also for $x > x'$, with equality only at the bounds of the support of X. Finally, we can conclude that $F_X(x)\ge F_A(x)$ for all $x\ge 0$, which implies E(A) ≥ E(X) and thus completes the proof.
As for the second part, recall that the coefficient of variation of a random variable equals the ratio between its standard deviation and its mean. We next show that the square of this ratio is greater than or equal to one if and only if E(A) ≥ E(X). Indeed,

$$\frac{E(X^2)-E^2(X)}{E^2(X)}$$

is greater than or equal to one if and only if

$$E(X^2)\ge 2E^2(X),$$

which is easily seen to be the case if and only if E(A) ≥ E(X), since $E(A) = E(X^2)/(2E(X))$.
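A concrete DFR example makes both conclusions tangible; the hyperexponential weights and rates below are illustrative, and the moments are computed exactly.

```python
# Hyperexponential example: mixture of Exp(1) and Exp(5) with equal weights (DFR).
a1, a2, l1, l2 = 0.5, 0.5, 1.0, 5.0
EX = a1 / l1 + a2 / l2
EX2 = 2 * a1 / l1 ** 2 + 2 * a2 / l2 ** 2
EA = EX2 / (2 * EX)                     # E(A) = E(X^2) / (2 E(X))
cv2 = (EX2 - EX ** 2) / EX ** 2         # squared coefficient of variation
assert EA >= EX
assert cv2 >= 1
```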
2.7 Question 7
The approach in the first two items is to consider the kernel of a density function in the family, then to multiply it by x and get yet again a kernel in this family.

a) The kernel of the density of a gamma random variable with parameters α and β equals

$$x^{\alpha-1}e^{-\beta x},\qquad x\ge 0.$$

Multiplying it by x makes it a gamma kernel again, but with parameters α + 1 and β.

b) The kernel of the density of a beta random variable with parameters α and β equals

$$x^{\alpha-1}(1-x)^{\beta-1},\qquad 0\le x\le 1.$$

Multiplying it by x just replaces α with α + 1.
c) This is a special case of the previous item with β = 1. In order to conclude the proof, we need to show that what is claimed to be the density function of L is a proper density function, i.e., that θ + 1 is the correct normalizing constant. Indeed,

$$\int_{x=0}^{1} x^{\theta}\,dx = \frac{1}{\theta+1}x^{\theta+1}\Big|_{x=0}^{1} = \frac{1}{\theta+1}.$$
2.8 Question 8
a) By (2.14), for the case where X follows a binomial distribution,

$$P(L-1=l) = \frac{(l+1)p_{l+1}}{E(X)},\qquad 0\le l\le n-1.$$

Since E(X) = np, the righthand side equals

$$\frac{(n-1)!}{l!(n-1-l)!}p^l(1-p)^{n-1-l},\qquad 0\le l\le n-1,$$

which is indeed the probability of l for a binomial random variable with parameters n − 1 and p.
b) The random variable X follows a negative binomial distribution with parameters r and p, for some integer r ≥ 1 and a fraction 0 < p < 1. This means that

$$P(X=k) = \binom{k-1}{r-1}p^r(1-p)^{k-r},\qquad k\ge r.$$

Hence,

$$P(L-1=\ell) = P(L=\ell+1) = \frac{(\ell+1)P(X=\ell+1)}{E(X)},\qquad \ell+1\ge r.$$

Since E(X) = r/p, we get that

$$P(L-1=\ell) = \frac{(\ell+1)\frac{\ell!}{(r-1)!(\ell+1-r)!}p^r(1-p)^{\ell+1-r}}{r/p}.$$

With minimal algebra this equals

$$\binom{\ell+1}{r}p^{r+1}(1-p)^{(\ell+2)-(r+1)},$$

which is the probability of ℓ + 2 for a negative binomial random variable with parameters r + 1 and p; in other words, L + 1 follows a negative binomial distribution with parameters r + 1 and p.
c)
$$P(L-1=l) = \frac{(l+1)p_{l+1}}{E(X)} = (l+1)e^{-\lambda}\frac{\lambda^{l+1}}{(l+1)!}\cdot\frac{1}{\lambda}.$$

This is easily seen to equal P(X = l). Next we show the converse. If P(L − 1 = l) = P(X = l), we can conclude that

$$P(L=l+1) = (l+1)P(X=l+1)/E(X) = P(X=l).$$

Hence, P(X = l + 1) = E(X)P(X = l)/(l + 1). By induction we get that $P(X=l) = P(X=0)E^l(X)/l!$. Since the probabilities sum up to one, we get that $P(X=0) = e^{-E(X)}$ and that

$$P(X=l) = e^{-E(X)}\frac{E^l(X)}{l!}.$$

This is the Poisson distribution, where E(X) is usually denoted by λ.
2.9 Question 9
a) If X ≤ a + r, the process is never on. Otherwise, it is on only in the 'middle' of its life: after it crosses the age of a and before it enters the period when its residual is smaller than or equal to r. Moreover, the length of this period is X − a − r. Thus, the expected 'on' time is E(max{X − a − r, 0}). Clearly,

$$E(\max\{X-a-r,0\}) = \int_{x=a+r}^{\infty}(x-a-r)f_X(x)\,dx.$$

b) By the analysis of Section 2.3, we learn that

$$P(\text{'on'}) = P(A\ge a, R\ge r) = \frac{E(\max\{X-a-r,0\})}{E(X)}.$$

Taking the derivative with respect to both a and r leads to the joint density of (A, R) at the point (a, r). This is done in detail next, but first note that

$$\int_{x=a+r}^{\infty}(x-a-r)f_X(x)\,dx = \int_{x=a+r}^{\infty}xf_X(x)\,dx - a\int_{x=a+r}^{\infty}f_X(x)\,dx - r\int_{x=a+r}^{\infty}f_X(x)\,dx.$$

Taking the derivative with respect to a leads to

$$-(a+r)f_X(a+r) - \int_{x=a+r}^{\infty}f_X(x)\,dx + af_X(a+r) + rf_X(a+r).$$

And now with respect to r,

$$-f_X(a+r) - (a+r)f'_X(a+r) + f_X(a+r) + af'_X(a+r) + rf'_X(a+r) + f_X(a+r),$$

which is easily seen to equal $f_X(a+r)$. Hence, the joint density function of A and R at (a, r) equals $f_X(a+r)/E(X)$.
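For X exponential, the tail P(A ≥ a, R ≥ r) has the closed form e^{−λ(a+r)}, so the mixed-derivative computation can be checked by finite differences; λ, a, r below are illustrative.

```python
import math

# For X ~ Exp(lam), P(A >= a, R >= r) = E(max{X-a-r,0})/E(X) = e^(-lam(a+r));
# a mixed finite difference should recover f_X(a+r)/E(X) = lam^2 e^(-lam(a+r)).
lam, a, r, h = 1.5, 0.4, 0.7, 1e-4

def tail(a, r):
    return math.exp(-lam * (a + r))

mixed = (tail(a + h, r + h) - tail(a + h, r) - tail(a, r + h) + tail(a, r)) / h ** 2
joint = lam ** 2 * math.exp(-lam * (a + r))
assert abs(mixed - joint) < 1e-3
```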
2.10 Question 10
a) (Benny) The values of a and r split the possible values of X into four cases:

I: 0 < X < min(a, r); II: min(a, r) < X < max(a, r); III: max(a, r) < X < a + r; IV: X > a + r.

In case I, X is smaller than both a and r; therefore the condition holds for all 0 < t < X, so the 'on' length is X.

In case II, X is between a and r, which means a < X < r or r < X < a. In the first case, the age is smaller than a only for 0 < t < a, while the residual is smaller than r for all 0 < t < X, so the 'on' length is a. In the second case, the age is smaller than a for all 0 < t < X, but the residual is smaller than r only for X − r < t < X, so the 'on' length is r. Combining both cases, in case II the 'on' length is min(a, r).

In cases III and IV, X is greater than both a and r; therefore the age is smaller than a only for 0 < t < a and the residual is smaller than r only for X − r < t < X, so both conditions hold for X − r < t < a. The 'on' length is greater than 0 only if X − r < a, namely X < a + r, which holds in case III but not in case IV. Combining both cases (III and IV), the 'on' length is max(a + r − X, 0).

Now it is easy to see that, given X, the 'on' length equals

$$\min(X,a) + \min(X,r) - \min(X,a+r),$$

by simply checking the four cases above. Thus,

$$P(A\le a, R\le r) = \frac{E(\min\{a,X\} + \min\{r,X\} - \min\{a+r,X\})}{E(X)}.$$

b) We next deal with the numerator above. We need to take its derivative first with respect to a and then with respect to r (or in the reverse order). The first two terms will of course contribute zero, so look only at the third:

$$-E(\min\{a+r,X\}) = -\int_{x=0}^{a+r}xf_X(x)\,dx - (a+r)\bar F_X(a+r).$$

Taking the derivative with respect to a, we get

$$-(a+r)f_X(a+r) - \bar F_X(a+r) + (a+r)f_X(a+r) = -\bar F_X(a+r).$$

Taking now the derivative with respect to r, we get $f_X(a+r)$, as required.
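The case analysis of item a) reduces to a one-line identity, which can be spot-checked over random triples (x, a, r); the sampling ranges are arbitrary.

```python
import random

# Spot check that the 'on' length -- the time t in (0, x) with age t < a and
# residual x - t < r -- equals min(x,a) + min(x,r) - min(x,a+r).
random.seed(7)

def on_length(x, a, r):
    lo, hi = max(0.0, x - r), min(x, a)
    return max(hi - lo, 0.0)

for _ in range(10_000):
    x, a, r = random.uniform(0, 3), random.uniform(0, 2), random.uniform(0, 2)
    assert abs(on_length(x, a, r) - (min(x, a) + min(x, r) - min(x, a + r))) < 1e-12
```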
2.11 Question 11
(Benny) First observe that for any function g(·), E(g(L)) = E(Xg(X))/E(X). Thus,

$$-\frac{dF^*_X(s)}{ds}\frac{1}{E(X)} = -\frac{dE(e^{-sX})}{ds}\frac{1}{E(X)} = \frac{E(Xe^{-sX})}{E(X)} = E(e^{-sL}) = F^*_L(s).$$
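This derivative identity can be illustrated for X exponential: the length-biased version of Exp(λ) is Erlang(2, λ), with LST (λ/(λ+s))²; the rate below is an illustrative choice and the derivative is taken numerically.

```python
# For X ~ Exp(lam), F*(s) = lam/(lam+s), E(X) = 1/lam, and the length-biased
# variable is Erlang(2, lam) with LST (lam/(lam+s))^2.
lam, h = 2.0, 1e-6

def lst(s):
    return lam / (lam + s)

for s in (0.3, 1.0, 2.5):
    deriv = (lst(s + h) - lst(s - h)) / (2 * h)   # central difference for dF*/ds
    assert abs(-deriv * lam - lst(s) ** 2) < 1e-6
```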
3 Chapter 3
3.1 Question 1
$$P(X_{n+2}=i_{n+2}, X_{n+1}=i_{n+1}\mid X_0=i_0,\ldots,X_n=i_n)$$
$$= P(X_{n+2}=i_{n+2}\mid X_{n+1}=i_{n+1}, X_0=i_0,\ldots,X_n=i_n)\,P(X_{n+1}=i_{n+1}\mid X_0=i_0,\ldots,X_n=i_n),$$

which by the definition of a Markov process equals

$$P(X_{n+2}=i_{n+2}\mid X_{n+1}=i_{n+1}, X_n=i_n)\,P(X_{n+1}=i_{n+1}\mid X_n=i_n) = P(X_{n+2}=i_{n+2}, X_{n+1}=i_{n+1}\mid X_n=i_n),$$
as required.
3.2 Question 2
$$P(X_{n-1}=i_{n-1}, X_{n+1}=i_{n+1}\mid X_n=i_n) = \frac{P(X_{n-1}=i_{n-1}, X_{n+1}=i_{n+1}, X_n=i_n)}{P(X_n=i_n)}$$
$$= \frac{P(X_{n+1}=i_{n+1}\mid X_n=i_n, X_{n-1}=i_{n-1})\,P(X_n=i_n, X_{n-1}=i_{n-1})}{P(X_n=i_n)},$$

which, due to the fact that we have a Markov process, equals

$$\frac{P(X_{n+1}=i_{n+1}\mid X_n=i_n)\,P(X_n=i_n, X_{n-1}=i_{n-1})}{P(X_n=i_n)} = P(X_{n+1}=i_{n+1}\mid X_n=i_n)\,P(X_{n-1}=i_{n-1}\mid X_n=i_n),$$
as required.
3.3 Question 3
$$P(X_n=i_n\mid X_{n+2}=i_{n+2}, X_{n+1}=i_{n+1}) = \frac{P(X_{n+2}=i_{n+2}, X_{n+1}=i_{n+1}, X_n=i_n)}{P(X_{n+2}=i_{n+2}, X_{n+1}=i_{n+1})}$$
$$= \frac{P(X_{n+2}=i_{n+2}\mid X_{n+1}=i_{n+1}, X_n=i_n)\,P(X_{n+1}=i_{n+1}, X_n=i_n)}{P(X_{n+2}=i_{n+2}\mid X_{n+1}=i_{n+1})\,P(X_{n+1}=i_{n+1})}.$$

Since the process is a Markov process, the first terms in the numerator and the denominator coincide. Hence, we get

$$\frac{P(X_{n+1}=i_{n+1}, X_n=i_n)}{P(X_{n+1}=i_{n+1})} = P(X_n=i_n\mid X_{n+1}=i_{n+1}),$$
as required.
3.4 Question 4
The result can be established with the help of an induction argument. The cases where n = 1 and n = 2 are stated on page 39. Assume it holds for n. Then, by the complete probability theorem,

$$P(X_{n+1}=j) = \sum_{i}P(X_n=i)P_{ij}.$$

Inserting above the value of $P(X_n=i)$ as stated by the induction hypothesis concludes the proof.
3.5 Question 5
a) We next prove that if two matrices (of the same dimension) are stochastic, then so is their product; the theorem is then established by the use of induction. Specifically, suppose A and B are two stochastic matrices in $\mathbb{R}^{n\times n}$. Nonnegativity of the entries of AB is immediate, and for any i, 1 ≤ i ≤ n,

$$\sum_{j=1}^{n}(AB)_{ij} = \sum_{j=1}^{n}\sum_{k=1}^{n}A_{ik}B_{kj} = \sum_{k=1}^{n}A_{ik}\sum_{j=1}^{n}B_{kj} = \sum_{k=1}^{n}A_{ik}\cdot 1 = 1.$$
b) This item is also shown with the help of an induction argument. The result is correct for the case $n=1$ by definition. Then,
\[
P(X_{n+1}=j\mid X_0=i)=\sum_k P(X_{n+1}=j\mid X_n=k,\,X_0=i)\,P(X_n=k\mid X_0=i)
=\sum_k P(X_{n+1}=j\mid X_n=k)\,P(X_n=k\mid X_0=i),
\]
since the process is a Markov chain. By the time-homogeneity of the process and the induction hypothesis, this equals
\[
\sum_k P_{kj}P^n_{ik}=P^{n+1}_{ij},
\]
as required.
c) $Y_{n+1}$ and $Y_n$ are two observations of the original Markov process which are $k$ time epochs apart. Hence, given $Y_n$, $Y_{n+1}$ does not depend on $Y_0,\ldots,Y_{n-1}$, which makes $(Y_n)$ a Markov chain itself. Also, $Y_{n+1}\mid Y_n$ is distributed as $Y_1\mid Y_0$. The transition matrix of this process is $P^k$, as required.
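Both parts (a) and (c) are easy to check numerically; a short sketch with invented 2-state matrices:

```python
import numpy as np

A = np.array([[0.2, 0.8], [0.7, 0.3]])
B = np.array([[0.5, 0.5], [0.9, 0.1]])

# Part (a): a product of stochastic matrices is stochastic.
AB = A @ B
row_sums = AB.sum(axis=1)

# Part (c): observing the chain every k epochs gives transition matrix P^k,
# which is therefore stochastic as well.
k = 3
Pk = np.linalg.matrix_power(A, k)
```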
3.6 Question 6
Let $v$ be a probability vector for which it is given that $v_j=\sum_i v_iP_{ij}$ for all states but some state $j_0$. We next show that the same equation holds for $j_0$ too. This is done by some algebra as follows:
\[
v_{j_0}=1-\sum_{j\ne j_0}v_j=1-\sum_{j\ne j_0}\sum_i v_iP_{ij}=1-\sum_i v_i\sum_{j\ne j_0}P_{ij}=1-\sum_i v_i(1-P_{ij_0}),
\]
the last equality being true due to the fact that $P$ is stochastic. This then equals
\[
1-\sum_i v_i+\sum_i v_iP_{ij_0}=1-1+\sum_i v_iP_{ij_0}=\sum_i v_iP_{ij_0},
\]
as required.
3.7 Question 7
(Liron) Let $P$ be the transition matrix of a time-reversible discrete-time Markov process with some finite state space $S$. According to Theorem (3.4), the limit probability vector $u$ satisfies
\[
u_j=\sum_{i\in S}u_iP_{ij},\quad j\in S \tag{4}
\]
and
\[
\sum_{j\in S}u_j=1. \tag{5}
\]
Now let us divide the state space into two disjoint parts $J$ and $J'$ such that $J\cup J'=S$. We will prove that the normalized vector $\bar u_J\in\mathbb{R}^{|J|}$,
\[
\bar u_j=\frac{u_j}{\sum_{j\in J}u_j},\quad j\in J, \tag{6}
\]
is the limit probability vector of the Markov process with transition matrix
\[
\bar P_{ij}=
\begin{cases}
P_{ij}, & i\ne j,\ i,j\in J,\\[2pt]
P_{ii}+\sum_{k\in J'}P_{ik}, & i=j,\ i\in J.
\end{cases} \tag{7}
\]
Recall the definition of time-reversibility: $u_iP_{ij}=u_jP_{ji}$ for all $i,j\in S$. We use this property to show that
\[
u_j=\sum_{i\in J}u_i\bar P_{ij}. \tag{8}
\]
We compute the right-hand side:
\[
\sum_{i\in J}u_i\bar P_{ij}
=\sum_{i\ne j,\,i\in J}u_iP_{ij}+u_j\Big(P_{jj}+\sum_{k\in J'}P_{jk}\Big)
=\sum_{i\in J}u_iP_{ij}+\sum_{k\in J'}u_jP_{jk}
\stackrel{\mathrm{TR}}{=}\sum_{i\in J}u_iP_{ij}+\sum_{k\in J'}u_kP_{kj}
=\sum_{i\in S}u_iP_{ij}=u_j.
\]
We have shown that $u_J$ solves the balance equations of the process defined by $\bar P$. Therefore, the normalized vector $\bar u_J$ clearly solves them too, along with the probability-vector condition.
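The claim can be verified on a small example. Below, a 4-state birth-and-death chain (reversible by construction; values invented) is restricted to J = {0, 1} as in (7), and the normalized vector is checked against the balance equations:

```python
import numpy as np

# A 4-state birth-and-death chain: time-reversible by construction.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.3, 0.2, 0.5, 0.0],
              [0.0, 0.3, 0.2, 0.5],
              [0.0, 0.0, 0.3, 0.7]])

# Stationary distribution: the left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(P.T)
u = np.real(v[:, np.argmax(np.real(w))])
u = u / u.sum()
flux = u[:, None] * P               # detailed-balance flux; symmetric iff reversible

# Restriction to J as in (7): mass leaving J is folded onto the diagonal.
J, Jc = [0, 1], [2, 3]
Pbar = P[np.ix_(J, J)].astype(float).copy()
for idx, i in enumerate(J):
    Pbar[idx, idx] += P[i, Jc].sum()

uJ = u[J] / u[J].sum()              # the normalized vector of (6)
```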
3.8 Question 8
We start with the necessity part. Assume the Markov chain is time-reversible, and hence that the detailed balance equations (see (3.7)) hold. Then, for any given path $(i_1,i_2,\ldots,i_k)$ one gets that
\[
u_{i_1}P_{i_1i_2}=u_{i_2}P_{i_2i_1},\quad
u_{i_2}P_{i_2i_3}=u_{i_3}P_{i_3i_2},\quad\ldots,\quad
u_{i_k}P_{i_ki_1}=u_{i_1}P_{i_1i_k}.
\]
Multiplying all left-hand sides and then (separately) all right-hand sides, and noting that the products of the $u$ components contribute equally to both sides, leads to (3.8). For the converse, fix a pair of states $i$ and $j$. Then for any $k$ intermediate states $i_1,\ldots,i_k$, condition (3.8) implies that
\[
P_{ii_1}P_{i_1i_2}\cdots P_{i_{k-1}i_k}P_{i_kj}P_{ji}=P_{ij}P_{ji_k}P_{i_ki_{k-1}}\cdots P_{i_2i_1}P_{i_1i}.
\]
Summing the above over all $k$-length paths, we get that
\[
P^{k+1}_{ij}P_{ji}=P_{ij}P^{k+1}_{ji}.
\]
Taking the limit with respect to $k$ and recalling that $\lim_{k\to\infty}P^{k+1}_{ij}=u_j$ and that $\lim_{k\to\infty}P^{k+1}_{ji}=u_i$, we conclude that
\[
u_jP_{ji}=u_iP_{ij},
\]
as required.

In the special case where all off-diagonal entries of $P$ are positive, it is claimed that the condition
\[
P_{ij}P_{jk}P_{ki}=P_{ik}P_{kj}P_{ji}
\]
for any three states $i$, $j$ and $k$ is sufficient for time-reversibility (necessity is trivial, since the corresponding condition is necessary for all cycles). Indeed, fix a state, call it $i_0$, and pick any positive value $u_0$. Then define $u_j$ as $u_0P_{i_0j}/P_{ji_0}$. It is possible to show that this choice for $u_j$ solves the detailed balance equations. Indeed, it is easy to check that for any pair $j$ and $k$, $u_jP_{jk}=u_kP_{kj}$:
\[
u_jP_{jk}=u_0\frac{P_{i_0j}}{P_{ji_0}}P_{jk}
\quad\text{and}\quad
u_kP_{kj}=u_0\frac{P_{i_0k}}{P_{ki_0}}P_{kj},
\]
which are easily seen to be equal since $P_{i_0j}P_{jk}P_{ki_0}=P_{i_0k}P_{kj}P_{ji_0}$.
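A quick numerical illustration of the construction (the chain below is built from symmetric weights, so it is reversible and the cycle condition holds automatically; all values are invented):

```python
import numpy as np

# A reversible chain from symmetric positive weights: P_ij = w_ij / sum_k w_ik.
rng = np.random.default_rng(0)
W = rng.uniform(0.5, 1.5, size=(4, 4))
W = (W + W.T) / 2                      # symmetric weights => reversible chain
P = W / W.sum(axis=1, keepdims=True)   # all off-diagonal entries positive

# The construction from the solution: fix state i0 = 0, set u_j = u0 P_{0j} / P_{j0}.
u0 = 1.0
u = np.array([u0 * P[0, j] / P[j, 0] for j in range(4)])
u = u / u.sum()

flux = u[:, None] * P                  # should be symmetric (detailed balance)
```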
3.9 Question 9
Consider two states $i$ and $j$, both in $J$. What we look for is the probability that, given that $i$ is the initial state, the next state visited in $J$ is $j$. Of course, $P_{ij}$ is the probability of moving there directly, but to this we need to add the probability of hopping from $i$ into $J'$ and, when $J'$ is left, having $j$ be the first state visited in $J$. Minding all possible states through which $J'$ can be entered, all possible lengths of stay there, and finally all possible states from which $J'$ is exited, this additional probability equals
\[
\sum_{k\in J'}P_{ik}\sum_{n=0}^{\infty}\sum_{l\in J'}(P^n_{J'J'})_{kl}P_{lj}.
\]
Recalling that $\sum_{n=0}^{\infty}P^n_{J'J'}=(I-P_{J'J'})^{-1}$ since $P_{J'J'}$ is a transient matrix (see Lemma 3.1), the last display equals
\[
\sum_{k\in J'}\sum_{l\in J'}P_{ik}\big((I-P_{J'J'})^{-1}\big)_{kl}P_{lj}.
\]
In matrix notation this leads to $P_{JJ'}(I-P_{J'J'})^{-1}P_{J'J}$.
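A sketch checking that the resulting matrix is stochastic, and that it inherits the stationary distribution restricted to J (a known property of such censored chains); the 4-state matrix is invented:

```python
import numpy as np

P = np.array([[0.10, 0.40, 0.30, 0.20],
              [0.20, 0.20, 0.40, 0.20],
              [0.30, 0.30, 0.20, 0.20],
              [0.25, 0.25, 0.25, 0.25]])
J, Jc = [0, 1], [2, 3]

PJJ   = P[np.ix_(J, J)]
PJJc  = P[np.ix_(J, Jc)]
PJcJ  = P[np.ix_(Jc, J)]
PJcJc = P[np.ix_(Jc, Jc)]

# Transition matrix of the chain watched only while in J.
PJ = PJJ + PJJc @ np.linalg.inv(np.eye(len(Jc)) - PJcJc) @ PJcJ

# Stationary distribution of the full chain, restricted to J and renormalized.
w, v = np.linalg.eig(P.T)
u = np.real(v[:, np.argmax(np.real(w))])
u = u / u.sum()
uJ = u[J] / u[J].sum()
```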
4 Chapter 4
4.1 Question 1
Denote the mean of this type of busy period by $b_s$. Then, by an argument similar to the one leading to (4.11), one gets that
\[
b_s=\bar s+\lambda\bar s\,b,
\]
where $b$ is the mean of a standard busy period. Indeed, each arrival during the first service time can be seen as opening a standard busy period, and $\lambda\bar s$ is the mean number of such arrivals. Finally, from (4.10) we learn that $b=\bar x/(1-\rho)$. The rest is trivial algebra. Next, let $n_s$ be the mean number served during this type of busy period. Then,
\[
n_s=1+\lambda\bar s\,n,
\]
where $n$ is the mean number served during a standard busy period. From (4.13) we learn that $n=1/(1-\rho)$. Hence,
\[
n_s=1+\frac{\lambda\bar s}{1-\rho}.
\]
4.2 Question 2
a) Applying Little's rule to the case where the system under consideration is the server alone, we get that the $\lambda W$ product is in fact $\lambda\bar x$, namely $\rho$. This equals $L$, the mean number in service, which should be smaller than 1. Hence, $\rho<1$ is the condition needed for stability. Note that it does not matter how long the vacation is or under which policy the server resumes service. All that is required is that once he is back in service, he does not take another vacation prior to emptying the system.
b) The above argument leads to the conclusion that ρ is the utilizationlevel.
c) The reduction of the queue length from $n$ to $n-1$, measured from the instant of a service commencement, is equivalent to a busy period. Thus, this server needs to complete $n$ busy periods prior to his next vacation. Hence, the final answer is $nb=n\bar x/(1-\rho)$. Note that the reduction of the queue length from $n$ to $n-1$, when observed from an arbitrary point in time, is not a standard busy period. This is due to the fact that the residual service time of the one in service is not $\bar x$ (which is true only upon service commencement). In fact, it has a value which is a function of $n$. More on that, and at length, in Chapter 6. Finally, note that this disclaimer does not hold when service is exponentially distributed: by the memoryless property, the residual service time always has the same mean $\bar x$.
d) With probability $\rho$ the server is busy, and hence he will be ready for the next customer after a time whose mean equals $\overline{x^2}/(2\bar x)$. With the complementary probability $1-\rho$ he is on vacation, which will end after a time whose mean equals $(n-1)/(2\lambda)$. Note that once on vacation, the number in queue is uniformly distributed between $0$ and $n-1$, implying that the vacation will end after a number of arrivals which is uniformly distributed between $0$ and $n-1$, each of which takes on average $1/\lambda$ units of time. Using PASTA and the same argument used in Proof 1 of Theorem 4.2, we conclude that
\[
W_q=L_q\bar x+\rho\frac{\overline{x^2}}{2\bar x}+(1-\rho)\frac{n-1}{2\lambda}. \tag{9}
\]
Replacing $L_q$ above by $\lambda W_q$ (Little's rule) leads to an equation for $W_q$ which is solved by
\[
W_q=\frac{\lambda\overline{x^2}}{2(1-\rho)}+\frac{n-1}{2\lambda}.
\]
This is once again a decomposition result.
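The algebra leading from (9) to the decomposition can be checked numerically (all parameter values below are invented):

```python
lam = 0.8       # arrival rate
xbar = 1.0      # mean service time
x2bar = 2.0     # second moment of service time
n = 5           # vacation threshold
rho = lam * xbar

# Equation (9) with Little's rule L_q = lam * W_q becomes linear in W_q:
#   W_q = lam*W_q*xbar + rho*x2bar/(2*xbar) + (1 - rho)*(n - 1)/(2*lam)
const = rho * x2bar / (2 * xbar) + (1 - rho) * (n - 1) / (2 * lam)
Wq_solved = const / (1 - rho)

# Decomposition: standard M/G/1 delay plus the vacation term.
Wq_formula = lam * x2bar / (2 * (1 - rho)) + (n - 1) / (2 * lam)
```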
e) The second term in (9) needs to be replaced with
\[
(1-\rho)\,\frac{\frac{n}{\lambda}\Big(\frac{n-1}{2\lambda}+\bar s\Big)+\bar s\,\frac{\overline{s^2}}{2\bar s}}{\frac{n}{\lambda}+\bar s}.
\]
The reasoning is as follows. Suppose an arrival finds the server idle (we consider the setup time as part of idleness). The probability that the idleness is due to waiting for the queue to reach size $n$ is $\frac{n/\lambda}{n/\lambda+\bar s}$, in which case the mean time until (true) service commencement equals $(n-1)/(2\lambda)+\bar s$. With the complementary probability, the idleness is due to the server being engaged in setup, in which case the mean time until (true) service commencement equals the mean of its residual, namely $\overline{s^2}/(2\bar s)$.
4.3 Question 3
We start with the case where $z\ge0$. Then, using the convolution formula,
\[
f_Z(z)=\int_{x=z}^{\infty}f_X(x)f_Y(x-z)\,dx=\int_{x=z}^{\infty}\lambda\mu e^{-\lambda x}e^{-\mu(x-z)}\,dx
=\lambda\mu e^{\mu z}\,\frac{-1}{\lambda+\mu}\Big[e^{-(\lambda+\mu)x}\Big]_{x=z}^{\infty}
=\frac{\lambda\mu}{\lambda+\mu}\,e^{-\lambda z}.
\]
For the case where z ≤ 0, note the symmetry: A negative difference is as thecorresponding positive difference with the roles of λ and µ being swapped.
4.4 Question 4
This is Little's rule one more time. Specifically, consider the set of servers as the 'system.' $L$ now is the mean number of busy servers, $\lambda$ is the same arrival rate, and $W$ is the mean time in the system, which is $\bar x$. Thus, $L=\lambda\bar x$. Of course, $1/\bar x$ is the service rate. Hence, on the right-hand side we get the ratio between the arrival rate and the service rate. Note that we have reached an insensitivity result: the mean number of busy servers is a function of the service time distribution (in fact, also of the inter-arrival time distribution) only through its mean.
4.5 Question 5
The proof is straightforward and follows the definition of $N$. The point of this question is that the number of arrivals during a busy period is a stopping time with respect to the arrival process, commencing at the first interarrival time which follows the first arrival. Note that the issue of stopping times is not dealt with elsewhere in the textbook.
5 Chapter 5
5.1 Question 1
First,
\[
\frac{1}{1-\sigma_i}-\frac{1}{1-\sigma_{i-1}}
=\frac{1-\sigma_{i-1}-(1-\sigma_i)}{(1-\sigma_i)(1-\sigma_{i-1})}
=\frac{\rho_i}{(1-\sigma_i)(1-\sigma_{i-1})}.
\]
Then, by (5.3),
\[
\sum_{i=1}^{N}\rho_iW^q_i=\sum_{i=1}^{N}\frac{\rho_iW_0}{(1-\sigma_i)(1-\sigma_{i-1})}
=W_0\sum_{i=1}^{N}\Big(\frac{1}{1-\sigma_i}-\frac{1}{1-\sigma_{i-1}}\Big)
=W_0\Big(\frac{1}{1-\sigma_N}-\frac{1}{1-\sigma_0}\Big)
=W_0\Big(\frac{1}{1-\rho}-1\Big)=W_0\,\frac{\rho}{1-\rho},
\]
as required.
5.2 Question 2
a) We have an M/G/1 model with non-preemptive priorities. The number of classes, $N$, equals 2. The arrival rate of class 1, $\lambda_1$, equals $\lambda G(x_0)$. Thus,
\[
\sigma_1=\rho_1=\lambda G(x_0)\int_{x=0}^{x_0}x\,\frac{g(x)}{G(x_0)}\,dx=\lambda\int_{x=0}^{x_0}xg(x)\,dx.
\]
In a similar way,
\[
\rho_2=\lambda\int_{x=x_0}^{\infty}xg(x)\,dx.
\]
For these values,
\[
W^q_1=\frac{W_0}{1-\rho_1}\quad\text{and}\quad W^q_2=\frac{W_0}{(1-\rho)(1-\rho_1)}.
\]
b) In order to find the overall mean waiting time, we need to average these two values with weights $G(x_0)$ and $1-G(x_0)$, respectively. Hence,
\[
W^q=G(x_0)\frac{W_0}{1-\rho_1}+(1-G(x_0))\frac{W_0}{(1-\rho)(1-\rho_1)},
\]
which by trivial algebra is shown to equal
\[
\frac{W_0}{1-\rho}\cdot\frac{1-\rho G(x_0)}{1-\rho_1}.
\]
Since $W^q_{\mathrm{FCFS}}=W_0/(1-\rho)$ (see (4.5)), we easily get that
\[
W^q=W^q_{\mathrm{FCFS}}\,\frac{1-\rho G(x_0)}{1-\rho_1}.
\]
As for the rightmost inequality, we need to show that $1-\rho G(x_0)<1-\rho_1$, or, equivalently, that $\rho G(x_0)>\rho_1$, or, recalling the definition of $\rho_1$ above, that $\bar x>\bar x_1$. This is certainly the case, since $\bar x$ is the overall mean service time, while $\bar x_1$ is the mean over only those whose service time is smaller than or equal to $x_0$.
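A numerical check of items (a)-(b) for exponential service ($\lambda$, $\mu$ and the threshold $x_0$ below are invented):

```python
import math

lam, mu, x0 = 0.5, 1.0, 1.5          # arrival rate, service rate, class threshold
rho = lam / mu
G_x0 = 1.0 - math.exp(-mu * x0)      # G(x0) for exponential service

# rho1 = lam * int_0^{x0} x g(x) dx, computed in closed form for g(x) = mu e^{-mu x}.
rho1 = lam * ((1 - math.exp(-mu * x0)) / mu - x0 * math.exp(-mu * x0))

W0 = 1.0                             # the common factor W0 cancels in the comparison
Wq1 = W0 / (1 - rho1)
Wq2 = W0 / ((1 - rho) * (1 - rho1))

weighted = G_x0 * Wq1 + (1 - G_x0) * Wq2
closed   = (W0 / (1 - rho)) * (1 - rho * G_x0) / (1 - rho1)
W_fcfs   = W0 / (1 - rho)
```

The weighted average agrees with the closed form, and both lie below the FCFS value, as claimed.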
c) In the case where $x_0=0$, $G(x_0)=\rho_1=0$ and hence $W^q=W_0/(1-\rho)=W^q_{\mathrm{FCFS}}$. Likewise, in the case where $x_0=\infty$, $\rho_1=\rho$ and $G(x_0)=1$, and hence (again) $W^q=W^q_{\mathrm{FCFS}}$. The explanation is as follows: in both cases there is essentially only one class (the other class, regardless of whether it consists of premium or of disadvantaged customers, is of measure zero), and hence the mean waiting time is as in a standard FCFS queue.
d) The optimization problem we face here is that of minimizing $W^q$ with respect to $x_0$. In other words, we look for
\[
\min_{0\le x_0<\infty}\frac{1-\rho G(x_0)}{1-\rho_1}.
\]
e) Denote $\lambda\int_{x=0}^{x_0}xg(x)\,dx$ by $\rho(x_0)$, now viewed as a function of $x_0$. Recall that our goal is
\[
\min_{x_0}\frac{1-\rho G(x_0)}{1-\rho(x_0)}.
\]
Taking the derivative with respect to $x_0$, we get that the numerator of the derivative equals
\[
-\rho g(x_0)(1-\rho(x_0))+(1-\rho G(x_0))\lambda x_0g(x_0).
\]
Replacing $\rho$ here by $\lambda\bar x$ and equating to zero leads to the fact that at the optimal $x_0$,
\[
\bar x(1-\rho(x_0))=x_0(1-\rho G(x_0)),
\]
from which we easily get
\[
\frac{\bar x}{x_0}=\frac{1-\rho G(x_0)}{1-\rho(x_0)}=\frac{W^q}{W^q_{\mathrm{FCFS}}}<1.
\]
5.3 Question 3
a) (i) This item's result follows from Theorem 4.8. Specifically, when a class $i$ customer arrives, he faces an amount of work $R_i$ which, by definition, needs to be completed before he enters service. He indeed enters service at the instant at which the server would have been idle for the first time, had classes $1,\ldots,i$ been the only existing classes. Note that this includes class $i$, since all class $i$ customers who arrive while the tagged customer is in queue overtake him. This implies that $W^q_i=R_i/(1-\sigma_i)$, $1\le i\le N$. In the case where $i=1$, the amount of work that such a customer faces in the system, and which needs to be processed prior to his entrance, is the residual service time of the one in service (if there is one). This amount of work has a mean of $W_0$; in other words, $R_1=W_0$. We need to show that $W^q_i=W_0/[(1-\sigma_i)(1-\sigma_{i-1})]$, $1\le i\le N$. Since $\sigma_0=0$, we are done for the case $i=1$, and this fact is the anchor of our induction argument.

(ii) We next claim that $R_{i+1}=R_i+L^q_i\bar x_i$, $1\le i\le N-1$. Indeed, the amount of work in the system faced by a class $i+1$ customer is as much as that faced by a class $i$ customer, with the additional work due to the class $i$ customers who are in the queue upon one's arrival (these are overtaken by a class $i$ arrival). The mean number of such customers is by definition $L^q_i$, each of them adding a mean amount of work of $\bar x_i$. Note, by Little's rule, that $L^q_i\bar x_i=\rho_iW^q_i$, $1\le i\le N$.

(iii) Using all of the above, we get that for $1\le i\le N-1$,
\[
W^q_{i+1}=\frac{R_{i+1}}{1-\sigma_{i+1}}=\frac{R_i+\rho_iW^q_i}{1-\sigma_{i+1}}
=\frac{W^q_i(1-\sigma_i)+\rho_iW^q_i}{1-\sigma_{i+1}}
=\frac{W^q_i(1-\sigma_{i-1})}{1-\sigma_{i+1}}.
\]
Finally, invoke the induction hypothesis that $W^q_i=W_0/[(1-\sigma_i)(1-\sigma_{i-1})]$ and get that $W^q_{i+1}=W_0/[(1-\sigma_{i+1})(1-\sigma_i)]$, as required.
b) The proof below has much in common with the proof for the case where LCFS without preemption is used. There is a preliminary observation needed here: if two customers of the same class happen to be in the queue at the same time, then each of them enters service before the other with probability $1/2$, regardless of who arrived earlier.

(i) In the previous case, $R_1$ was equal to $W_0$. In the random-order regime, a class 1 arrival will find (on average) half of the class 1 customers present upon his arrival entering service ahead of him. Hence, $R_1=W_0+\bar x_1L^q_1/2$.

(ii) We use Theorem 4.8 again. The reasoning is as in the previous model, with the exception that the traffic intensity of those who overtake a tagged class $i$ customer is $\sigma_{i-1}+\rho_i/2$, so that $W^q_i=R_i/(1-\sigma_{i-1}-\rho_i/2)$. In particular, the rationale behind $\rho_i/2$ is that the arrival rate of those of his own class who overtake him is $\lambda_i/2$.

(iii) Note: the $r$ on the right-hand side is a typo. We are asked here to compare $R_{i+1}$ with $R_i$; in fact, we need to check what needs to be added to $R_i$ in order to get $R_{i+1}$. Indeed, this added work has two sources. The first is those class $i$ customers who are overtaken by a tagged class $i$ customer but, of course, enter before a class $i+1$ customer; this has a mean value of $L^q_i\bar x_i/2$. The second source is the half of his own class that a class $i+1$ arrival finds in the queue upon arrival (the other half being overtaken by him); this contributes an additional value of $L^q_{i+1}\bar x_{i+1}/2$.
(iv) We prove the claim with the help of an induction argument. We start with $i=1$. From the first item here, coupled with Little's rule, we get that
\[
R_1=W_0+\rho_1W^q_1/2.
\]
From the second item we learn (since $\sigma_0=0$) that
\[
R_1=W^q_1(1-\rho_1/2).
\]
Combining the two and solving for $W^q_1$, we get that
\[
W^q_1=\frac{W_0}{1-\rho_1}=\frac{W_0}{(1-\sigma_1)(1-\sigma_0)},
\]
concluding the proof for the case $i=1$. Next we assume the result holds for $i$ and establish it for $i+1$. Indeed, from the previous item we get that
\[
W^q_{i+1}(1-\sigma_i-\rho_{i+1}/2)=W^q_i(1-\sigma_{i-1}-\rho_i/2)+\rho_iW^q_i/2+\rho_{i+1}W^q_{i+1}/2,
\]
from which we easily get $W^q_{i+1}$ in terms of $W^q_i$. Specifically,
\[
W^q_{i+1}=W^q_i\,\frac{1-\sigma_{i-1}}{1-\sigma_{i+1}}.
\]
Using the induction hypothesis that $W^q_i=W_0/[(1-\sigma_i)(1-\sigma_{i-1})]$ completes the proof.
5.4 Question 4
a) Under any possible scenario of a state $n=(n_1,\ldots,n_M)$, it is possible to see that given the event that one of two tagged customers, one a class $i$ customer and the other a class $j$ customer, has just entered service, it is actually the class $i$ customer with probability $p_i/(p_i+p_j)$, $1\le i,j\le M$. Note that this is also the case when $i=j$, namely when the two belong to the same class, making the corresponding probability $1/2$.
b) A class $i$ arrival finds the following mean amount of work in the system that he/she has to wait for before commencing service. First, the standard $W_0$. Second, each class $j$ customer present in the queue will enter service prior to him/her with probability $p_j/(p_i+p_j)$ (see the previous item), contributing work of mean $\bar x_j$. Since the expected number of such class $j$ customers is $L^q_j$, we conclude that
\[
R_i=W_0+\sum_{j=1}^{N}\frac{p_j}{p_i+p_j}L^q_j\bar x_j,\quad 1\le i\le N. \tag{10}
\]
c) The value of $\tau_i$ as defined in the question, i.e.,
\[
\tau_i=\sum_{j=1}^{N}\frac{p_j}{p_i+p_j}\,\rho_j,\quad 1\le i\le N,
\]
is the traffic intensity of those who in fact have priority over a tagged class $i$ customer and overtake him even if they arrive while he/she is in line. Then, again by Theorem 4.8, his/her instant of service commencement can be looked at as the first time of idleness when $R_i$ is the amount of work in the system and $\tau_i$ is the traffic intensity.
d) Observe (10). Replace $R_i$ on the left-hand side by $W^q_i(1-\tau_i)$ (which was proved in the previous item) and $L^q_j\bar x_j$ on the right-hand side by $\rho_jW^q_j$ (by Little's rule), and get
\[
(1-\tau_i)W^q_i=W_0+\sum_{j=1}^{N}\frac{p_j\rho_j}{p_i+p_j}W^q_j,\quad 1\le i\le N.
\]
This is a system of $N$ linear equations in the unknowns $W^q_i$, $1\le i\le N$. Note that the right-hand side is a vector of identical entries, all equal to $W_0$. Thus, one can solve the system assuming all right-hand-side entries equal one, and then multiply the solution by $W_0$. As for the matrix which needs to be inverted in order to solve the equations, note that its off-diagonal $ij$ entry equals $-p_j\rho_j/(p_i+p_j)$, $1\le i\ne j\le N$, while its $i$-th diagonal entry equals $1-\tau_i-\rho_i/2$, $1\le i\le N$.
5.5 Question 5
Consider a customer whose service time is $x$. All customers with service time $y<x$ who arrive while he/she is in the system leave before he/she does; from the point of view of his/her delay, it is as if all these customers get preemptive priority over him/her. As for customers with $y>x$, they also inflict additional waiting time on him, yet they do so as if they were all customers with service time $x$ having preemptive priority over the tagged customer. Hence, we can apply the formula for $T_x$ which appears in the middle of page 77, but with $\sigma_x$ updated to $\sigma'_x$ and $\sigma^{(2)}_x$ to $\sigma'^{(2)}_x$, to reflect the fact that an additional set of customers, with arrival rate $\lambda(1-G(x))$ and a service time of (practically) $x$, is of concern, as they practically overtake the tagged customer. Note that the fact that he/she also inflicts waiting time on them is irrelevant.
5.6 Question 6
(The question is not phrase well.) Assuming the SJF policy, let x be the(infinitesimal) class of all those whose service time equals x. Their arrivalrate is λg(x), their mean service time is (of course) x and hence their trafficintensity equals λg(x)x. Finally, their mean time in the queues, denoted byW qx equals (see above (5.5)) to W0/(1 − σx)2. Hence, using the continuous
version of (5.1), we get that∫ ∞x=0
λxg(x)W0
(1− σx)2dx = W0
ρ
1− ρ,
37
Chapter 5
from which we can deduce that∫ ∞x=0
xg(x)
(1− σx)2dx =
x
1− ρ
6 Chapter 6
6.1 Question 1
(By Yoav)
a) Given the service time $X$, $v_n\sim\mathrm{Po}(\lambda X)$. Thus,
\[
E(v_n)=E(\lambda X)=\lambda\bar x=\rho.
\]
Also,
\[
E(v_n^2)=E\big((\lambda X)^2+\lambda X\big)=\lambda^2\overline{x^2}+\rho.
\]
b) The difference between the number of customers left behind in twoconsecutive departures is the number of arrivals minus the one whohas been served, if there was any.
c)
Writing $\Delta q_n=1_{\{q_n>0\}}$,
\[
q_{n+1}^2=(q_n-\Delta q_n+v_{n+1})^2
=q_n^2+\Delta q_n^2+v_{n+1}^2-2q_n\Delta q_n+2q_nv_{n+1}-2v_{n+1}\Delta q_n
\]
\[
=q_n^2+\Delta q_n+v_{n+1}^2-2q_n+2q_nv_{n+1}-2v_{n+1}\Delta q_n,
\]
using $\Delta q_n^2=\Delta q_n$ and $q_n\Delta q_n=q_n$.
d) Taking expected values in the above yields
\[
E(q_{n+1}^2)=E(q_n^2)+E(\Delta q_n)+\lambda^2\overline{x^2}+\rho-2E(q_n)(1-\rho)-2\rho E(\Delta q_n).
\]
Taking the limit $n\to\infty$, using $\lim_{n\to\infty}E(\Delta q_n)=\rho$, and cancelling the second moment from both sides, we get
\[
0=\lambda^2\overline{x^2}+2\rho(1-\rho)-2\bar q(1-\rho).
\]
Dividing both sides by $2(1-\rho)$ and rearranging yields the result.
e) By Little's formula,
\[
W=L/\lambda=\frac{\lambda\overline{x^2}}{2(1-\rho)}+\bar x.
\]
Since $W=W_q+\bar x$, we have
\[
W_q=\frac{\lambda\overline{x^2}}{2(1-\rho)}.
\]
6.2 Question 2
(By Yoav)
a) First, each one of the $j$ present customers waits $\bar x$ (in expectation). During the service time, each arriving customer (who arrived in the middle of it) waits $\overline{x^2}/(2\bar x)$. The number of arriving customers is $\lambda\bar x$ (in expectation), and hence $w_j=j\bar x+\lambda\overline{x^2}/2$.
b) If 0 customers were left behind, there is no waiting until the first arrival. From then on, the behavior is the same as if one customer had been left behind.
c)
\[
\phi=\sum_{j=0}^{\infty}w_ju_j=\sum_{j=0}^{\infty}\Big(j\bar x+\frac{\lambda\overline{x^2}}{2}\Big)u_j
=\sum_{j=0}^{\infty}ju_j\,\bar x+\frac{\lambda\overline{x^2}}{2}
=L\bar x+\frac{\lambda\overline{x^2}}{2}.
\]
d) Each departure contributes an expected amount of waiting $\phi$. As there are $\lambda$ departures per time unit, the product $\lambda\phi$ is the expected waiting time accumulated per time unit.
e) The latter is also the expected number of customers in the system. Hence,
\[
L=\lambda\phi=\lambda L\bar x+\frac{\lambda^2\overline{x^2}}{2}=\rho L+\frac{\lambda^2\overline{x^2}}{2}.
\]
Note: the quantity this yields, $\lambda^2\overline{x^2}/(2(1-\rho))$, is $L_q$ and not $L$.
6.3 Question 3
(By Yoav) Let $Y,Y_1,Y_2,\ldots$ be i.i.d. with $E(t^Y)=N(t)$. Let $X$ be the service time that opens the busy period and let $A(X)$ be the number of arrivals during $X$. Note that $A(X)\mid X\sim\mathrm{Po}(\lambda X)$. Since each arrival during $X$ opens a new busy period, the distribution of $Y$ is the same as the distribution of $1+\sum_{i=1}^{A(X)}Y_i$. Thus we have
\[
E(t^Y)=E\Big(t^{\,1+\sum_{i=1}^{A(X)}Y_i}\Big)=tE\big(N(t)^{A(X)}\big)
=tE\Big[E\big(N(t)^{A(X)}\mid X\big)\Big]
=tE\big[e^{-\lambda X(1-N(t))}\big]=tG^*(\lambda(1-N(t))).
\]
In the M/M/1 case, with $G^*(s)=\frac{\mu}{\mu+s}$, we have
\[
N(t)=\frac{t\mu}{\mu+\lambda(1-N(t))}\quad\text{or}\quad(\mu+\lambda)N(t)-\lambda N^2(t)=t\mu,
\]
which implies our result.
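The resulting quadratic can be solved explicitly; the root with $N(0)=0$ is the PGF branch. A quick check of the fixed-point equation (rates invented):

```python
import math

lam, mu = 0.5, 1.0

def N(t):
    """Smaller root of lam*N^2 - (lam+mu)*N + t*mu = 0 (the branch with N(0) = 0)."""
    return ((lam + mu) - math.sqrt((lam + mu) ** 2 - 4 * lam * mu * t)) / (2 * lam)

t = 0.7
fixed_point_gap = N(t) - t * mu / (mu + lam * (1 - N(t)))
at_one = N(1.0)            # a proper PGF must satisfy N(1) = 1 when rho < 1
```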
6.4 Question 4
(By Yoav)
a) The second item in (6.2) is now Zn+1 instead of Yn+1, where Zn+1
follows the probabilities bj .
b) The first row represents the transition probabilities from state 0, that is, transitions from an empty system. Thus, after a departure who leaves an empty system behind him, the next service time will be the first in a busy period, and hence the number of arrivals will follow the $b_j$ probabilities. Thus, the first row in (6.3) is with $b_j$ replacing $a_j$, and the rest of the rows are the same.
c) In each equation in (6.4), the first addend on the right-hand side stands for the product of $u_0$ and the relevant $a_j$ from the first row. In our model this $a_j$ is replaced by the corresponding $b_j$. The rest of the elements of course remain the same.
d) Assuming FCFS, the number of customers left behind upon departure, given the sojourn time $W$ of the departing customer, follows the $\mathrm{Po}(\lambda W)$ distribution. Hence, $\Pi(t)=E\big(e^{-\lambda(1-t)W}\big)=W^*(\lambda(1-t))$. Inserting $s=\lambda(1-t)$ implies (6.11).
e)
\[
\Pi(t)=\sum_{j=0}^{\infty}u_jt^j=\sum_{j=0}^{\infty}t^j\sum_{i=0}^{\infty}u_iP_{ij}
=u_0\sum_{j=0}^{\infty}t^jb_j+\sum_{i=1}^{\infty}\sum_{j=i-1}^{\infty}t^ju_ia_{j-i+1}
\]
\[
=u_0B(t)+\sum_{i=1}^{\infty}u_it^{i-1}\sum_{k=0}^{\infty}a_kt^k
=u_0B(t)+\frac{1}{t}\big(\Pi(t)-u_0\big)A(t)
=u_0\Big(B(t)-\frac{A(t)}{t}\Big)+\Pi(t)\,\frac{A(t)}{t}.
\]
Hence
\[
\Pi(t)\Big(1-\frac{A(t)}{t}\Big)=u_0\Big(B(t)-\frac{A(t)}{t}\Big),
\]
or
\[
\Pi(t)=u_0\,\frac{tB(t)-A(t)}{t-A(t)}.
\]
f) As in the standard M/G/1 case, we insert $t=1$ on both sides of the latter and apply L'Hôpital's rule:
\[
1=u_0\,\frac{B(1)+B'(1)-A'(1)}{1-A'(1)}=u_0\,\frac{1+\lambda b-\rho}{1-\rho},
\]
and hence
\[
u_0=\frac{1-\rho}{1+\lambda b-\rho},
\]
where $\lambda b=B'(1)=\sum_{j=0}^{\infty}jb_j$, i.e., $b$ is the mean of the first service time in a busy period.
g) We can refer to the sum of the setup time and the first service time as the "different" service time that opens a busy period. Let
\[
c_j=\int_{x=0}^{\infty}e^{-\lambda x}\frac{(\lambda x)^j}{j!}f(x)\,dx.
\]
The number of arrivals during the "different" service time is the sum of the number of arrivals during the setup time and the number of arrivals during the first service time. Thus,
\[
b_j=\sum_{i=0}^{j}a_ic_{j-i}\quad\text{and}\quad B(t)=C(t)A(t)=F^*(\lambda(1-t))\,G^*(\lambda(1-t)).
\]
6.5 Question 5
(By Yoav)
a) The time from the first arrival during a vacation until the end of that vacation plays the role of the "setup time" in terms of 4(g).
b) At the beginning of the last vacation, a vacation time $V$ and an inter-arrival time $A$ started competing, and the inter-arrival time occurred first. The remaining vacation time is the difference between the two, $V-A$; its distribution is that of $V-A\mid V>A$. First note that $P(V>A)=E\big(1-e^{-\lambda V}\big)=1-V^*(\lambda)$. Second, we have
\[
F^*(s)=E\big(e^{-s(V-A)}\mid V>A\big)=\frac{E\big(e^{-s(V-A)}1_{\{A<V\}}\big)}{1-V^*(\lambda)}
=\frac{1}{1-V^*(\lambda)}\int_{x=0}^{\infty}\int_{a=0}^{x}e^{-s(x-a)}\lambda e^{-\lambda a}v(x)\,da\,dx
\]
\[
=\frac{\lambda}{1-V^*(\lambda)}\int_{x=0}^{\infty}e^{-sx}v(x)\int_{a=0}^{x}e^{-a(\lambda-s)}\,da\,dx
=\frac{\lambda}{(1-V^*(\lambda))(\lambda-s)}\int_{x=0}^{\infty}e^{-sx}v(x)\big(1-e^{-(\lambda-s)x}\big)\,dx
\]
\[
=\frac{\lambda}{(1-V^*(\lambda))(\lambda-s)}\int_{x=0}^{\infty}v(x)\big(e^{-sx}-e^{-\lambda x}\big)\,dx
=\frac{\lambda}{\lambda-s}\cdot\frac{V^*(s)-V^*(\lambda)}{1-V^*(\lambda)}.
\]
6.6 Question 6
(By Benny)
a) Let $X$ be the number of arrivals during one service period. Then
\[
P_{0j}=P(X=j+1\mid X>0)=\frac{a_{j+1}}{1-a_0}.
\]
b) The balance equations are:
\[
u_0=u_0\frac{a_1}{1-a_0}+u_1a_0,
\]
\[
u_1=u_0\frac{a_2}{1-a_0}+u_1a_1+u_2a_0,
\]
\[
u_2=u_0\frac{a_3}{1-a_0}+u_1a_2+u_2a_1+u_3a_0,
\]
\[
\vdots
\]
Multiplying equation $j$ by $t^j$ and summing up gives
\[
\Pi(t)=u_0\frac{\sum_{j=0}^{\infty}t^ja_{j+1}}{1-a_0}+\frac{1}{t}A(t)\big(\Pi(t)-u_0\big)
=u_0\frac{\sum_{j=0}^{\infty}t^{j+1}a_{j+1}}{t(1-a_0)}+\frac{1}{t}A(t)\big(\Pi(t)-u_0\big)
=u_0\frac{A(t)-a_0}{t(1-a_0)}+\frac{1}{t}A(t)\big(\Pi(t)-u_0\big).
\]
Now
\[
\Pi(t)\big[t-A(t)\big]=u_0\frac{a_0}{1-a_0}\big[A(t)-1\big],
\]
so
\[
\Pi(t)=C\,\frac{A(t)-1}{t-A(t)},
\]
where $C$ is a constant. Since $\Pi(t)$ is a z-transform,
\[
\lim_{t\to1}C\,\frac{A(t)-1}{t-A(t)}=1\;\Longrightarrow\;C=\lim_{t\to1}\frac{t-A(t)}{A(t)-1}.
\]
Using L'Hôpital's rule,
\[
C=\lim_{t\to1}\frac{1-A'(t)}{A'(t)}=\frac{1-\lambda\bar x}{\lambda\bar x}=\frac{1-\rho}{\rho}.
\]
c) In the above item we showed that
\[
u_0\frac{a_0}{1-a_0}=C=\frac{1-\rho}{\rho}\;\Longrightarrow\;u_0=\frac{1-\rho}{\rho}\cdot\frac{1-a_0}{a_0}.
\]
If service times are exponentially distributed,
\[
a_0=\int_{t=0}^{\infty}e^{-\lambda t}\mu e^{-\mu t}\,dt=\frac{\mu}{\lambda+\mu}=\frac{1}{\rho+1},
\]
hence
\[
u_0=\frac{1-\rho}{\rho}\cdot\frac{\rho/(\rho+1)}{1/(\rho+1)}=1-\rho.
\]
d)
\[
\Pi'(1)=\lim_{t\to1}\frac{1-\rho}{\rho}\cdot\frac{A'(t)(t-A(t))-(A(t)-1)(1-A'(t))}{(t-A(t))^2}
=\lim_{t\to1}\frac{1-\rho}{\rho}\cdot\frac{(t-1)A'(t)-A(t)+1}{(t-A(t))^2}
\]
\[
=\lim_{t\to1}\frac{1-\rho}{\rho}\cdot\frac{A'(t)+(t-1)A''(t)-A'(t)}{2(t-A(t))(1-A'(t))}
=\lim_{t\to1}\frac{1-\rho}{\rho}\cdot\frac{t-1}{t-A(t)}\cdot\frac{A''(t)}{2(1-A'(t))}
\]
\[
=\lim_{t\to1}\frac{1-\rho}{\rho}\cdot\frac{1}{1-A'(t)}\cdot\frac{A''(t)}{2(1-A'(t))}
=\frac{1-\rho}{\rho}\cdot\frac{A''(1)}{2(1-A'(1))^2}.
\]
e) The limit distributions at arrival and departure instants coincide, and by the PASTA property this is also the distribution at arbitrary instants. Hence, $\Pi'(1)$ is the expected number of customers in the system, $L$, and by Little's law we get
\[
W=L/\lambda=\frac{1}{\lambda}\cdot\frac{1-\rho}{\rho}\cdot\frac{A''(1)}{2(1-A'(1))^2}
=\frac{1}{\lambda}\cdot\frac{1-\rho}{\rho}\cdot\frac{\lambda^2\overline{x^2}}{2(1-\rho)^2}
=\frac{\lambda\overline{x^2}}{2\rho(1-\rho)}.
\]
6.7 Question 7
a) The distribution of $B_{ex}$ is the distribution of $\sum_{i=1}^{A(X)}B_i$, where the $B_i$ are standard busy periods. Hence
\[
B^*_{ex}(s)=E\big(e^{-sB_{ex}}\big)=E\big[(B^*(s))^{A(X)}\big]=E\big(e^{-\lambda X(1-B^*(s))}\big)=G^*_0(\lambda(1-B^*(s))).
\]
b) Since $B_{in}=X+B_{ex}$, we have
\[
E\big(e^{-sB_{in}}\big)=E\big(e^{-sX-sB_{ex}}\big)=E\big(e^{-sX-\lambda X(1-B^*(s))}\big)=G^*_0\big(s+\lambda(1-B^*(s))\big).
\]
6.8 Question 8
There is an error in the question; it should read: the queueing time (exclusive of service) is as a busy period where the first service time is the residual service time. The latter has the LST $(1-G^*(s))/(\bar xs)$. Hence, we are in the situation described in Q7a above with $G^*_0(s)$ having this expression. Using this for $G^*_0(s)$, we get that the LST of this queueing time is derived by inserting $\lambda-\lambda B^*(s)$ for $s$ in $(1-G^*(s))/(\bar xs)$. The final result is thus
\[
\frac{1-G^*(\lambda-\lambda B^*(s))}{(\lambda-\lambda B^*(s))\,\bar x}.
\]
6.9 Question 9
By definition,
\[
a_0=\int_{x=0}^{\infty}e^{-\lambda x}g(x)\,dx=G^*(\lambda).
\]
Using the first equation in (6.4) and the fact that $u_0=1-\rho$, we have $1-\rho=(1-\rho)G^*(\lambda)+u_1G^*(\lambda)$, and hence
\[
u_1=\frac{(1-\rho)(1-G^*(\lambda))}{G^*(\lambda)}.
\]
6.10 Question 10
\[
\frac{d}{ds}W^*_q(s)=\frac{d}{ds}\,\frac{1-\rho}{1-\rho G^*_r(s)}
=\frac{(1-\rho)\rho\,\frac{d}{ds}G^*_r(s)}{(1-\rho G^*_r(s))^2}.
\]
Multiplying by $-1$ and inserting $s=0$ yields
\[
E(W_q)=\frac{(1-\rho)\rho E(R)}{(1-\rho)^2}=\frac{\rho E(R)}{1-\rho}.
\]
Similarly,
\[
\frac{d^2}{ds^2}W^*_q(s)=\frac{d}{ds}\,\frac{(1-\rho)\rho\,\frac{d}{ds}G^*_r(s)}{(1-\rho G^*_r(s))^2}
=\frac{(1-\rho)\rho\Big(2\rho\big(\frac{d}{ds}G^*_r(s)\big)^2+(1-\rho G^*_r(s))\frac{d^2}{ds^2}G^*_r(s)\Big)}{(1-\rho G^*_r(s))^3}.
\]
Inserting $s=0$ yields
\[
E(W_q^2)=\frac{(1-\rho)\rho\big(2\rho(E(R))^2+(1-\rho)E(R^2)\big)}{(1-\rho)^3}
=\frac{2\rho^2(E(R))^2}{(1-\rho)^2}+\frac{\rho E(R^2)}{1-\rho}.
\]
6.11 Question 11
(Benny)
a) First note that
\[
a_0=\int_0^{\infty}e^{-\lambda t}\frac{(\lambda t)^0}{0!}g(t)\,dt=\int_0^{\infty}e^{-\lambda t}g(t)\,dt=G^*(\lambda),
\]
and by (6.7) and (6.5) we get
\[
u_1=\frac{1}{a_0}\big(u_0-(1-\rho)a_0\big)
=\frac{1}{G^*(\lambda)}\big(1-\rho-(1-\rho)G^*(\lambda)\big)
=(1-\rho)\,\frac{1-G^*(\lambda)}{G^*(\lambda)}.
\]
Now, from (6.17) it follows that
\[
r_1=\frac{1-\rho}{\lambda}\cdot\frac{1-h_1}{h_1}
=\frac{1-\rho}{\lambda}\cdot\frac{1-u_1/(1-u_0)}{u_1/(1-u_0)}.
\]
Substituting $u_0$ and $u_1$ gives
\[
r_1=\frac{\rho G^*(\lambda)}{\lambda(1-G^*(\lambda))}-\frac{1-\rho}{\lambda}
=\frac{\bar x}{1-G^*(\lambda)}-\frac{1}{\lambda}.
\]
b) Using (6.17) again,
\[
r_{n+1}=\frac{1-\rho}{\lambda}\cdot\frac{1-h_{n+1}}{h_{n+1}}
=\frac{1-\rho}{\lambda}\cdot\frac{\sum_{i=n+1}^{\infty}u_i-u_{n+1}}{u_{n+1}}
=\frac{1-\rho}{\lambda}\cdot\frac{\sum_{i=n}^{\infty}u_i-u_n-u_{n+1}}{u_{n+1}}
\]
\[
=\frac{1-\rho}{\lambda}\Bigg[\frac{\sum_{i=n}^{\infty}u_i/u_n-1}{u_{n+1}/u_n}-1\Bigg]
=\frac{1-\rho}{\lambda}\cdot\frac{u_n}{u_{n+1}}\cdot\frac{1-h_n}{h_n}-\frac{1-\rho}{\lambda}
=\frac{u_n}{u_{n+1}}\,r_n-\frac{1-\rho}{\lambda}.
\]
6.12 Question 12
a) Let $S_n$ be the number of customers present in the system upon the commencement of the $n$th service, and let $X_n$ be the number of customers left behind upon the $n$th departure. We have $S_n=X_{n-1}+1_{\{X_{n-1}=0\}}$. That is, $X$ and $S$ coincide unless $X=0$ (in which case $S=1$). Since in the transition matrix of $X$ we have $P_{0j}=P_{1j}$ for all $j\ge0$, we can merge states 0 and 1. Hence, $S_n$ is a Markov chain as well.
b) By the latter, after merging the states 0 and 1, we get the desiredresult.
6.13 Question 13
First note that $u_n/\rho$ is the probability of $n$ customers in the system, given that this number is greater than or equal to one, $n\ge1$. Denote such a random variable by $L^+$. Second, from the previous question we learn that upon entrance to service, the probability of $n$ customers (denoted by $e_n$) is also $u_n$, but only for $n\ge2$; for $n=1$ it is $u_0+u_1$, while for $n=0$ it is zero, of course. Then, by Bayes' rule we get, for $n\ge1$,
\[
f_{A\mid L^+=n}(a)=\frac{f_A(a)\,P(L^+=n\mid A=a)}{P(L^+=n)}. \tag{11}
\]
Clearly, $f_A(a)=(1-G(a))/\bar x$ (see (2.4) on page 25) and, as said, $P(L^+=n)=u_n/\rho$, $n\ge1$. Next, note that $L^+$ is the sum of two random variables: how many were in the system upon the current service commencement (which is independent of the age of this service), plus how many arrived during it (which, given an age of service $a$, follows a Poisson distribution with parameter $\lambda a$). Hence,
\[
P(L^+=n\mid A=a)=\sum_{i=1}^{n}e_ie^{-\lambda a}\frac{(\lambda a)^{n-i}}{(n-i)!}
=u_0e^{-\lambda a}\frac{(\lambda a)^{n-1}}{(n-1)!}+\sum_{i=1}^{n}u_ie^{-\lambda a}\frac{(\lambda a)^{n-i}}{(n-i)!},\quad n\ge1.
\]
Putting all this in (11) (recall that $u_0=1-\rho$) concludes the proof. For the case where $n=1$, one needs to use the expression for $u_1$ as given in Question 9 above, coupled with some minimal algebra, in order to get that
\[
f_{A\mid L^+=1}(a)=\lambda\,\frac{1-G(a)}{1-G^*(\lambda)}\,e^{-\lambda a}.
\]
6.14 Question 14
By (2.6) (see page 28), we get that $f_{A,R}(a,r)=g(a+r)/\bar x$. Likewise, by (2.4) (see page 25) we get that $f_A(a)=(1-G(a))/\bar x$. Hence,
\[
f_{R\mid A=a}(r)=\frac{f_{A,R}(a,r)}{f_A(a)}=\frac{g(a+r)}{1-G(a)}.
\]
Insert this and (6.26) into (6.28) and conclude the proof.
6.15 Question 15
(Benny) The second moment of the queueing time equals $\frac{d^2W^*_q(s)}{ds^2}\big|_{s=0}$. Define $\varphi(s):=\lambda G^*(s)+s-\lambda$. From (6.13) we get
\[
\frac{dW^*_q(s)}{ds}=(1-\rho)\,\frac{\varphi(s)-s\varphi'(s)}{\varphi^2(s)}
\]
and
\[
\frac{d^2W^*_q(s)}{ds^2}
=(1-\rho)\,\frac{\big(\varphi'(s)-\varphi'(s)-s\varphi''(s)\big)\varphi^2(s)-\big(\varphi(s)-s\varphi'(s)\big)2\varphi(s)\varphi'(s)}{\varphi^4(s)}
=(1-\rho)\,\frac{-s\varphi''(s)\varphi(s)-2\varphi(s)\varphi'(s)+2s(\varphi'(s))^2}{\varphi^3(s)}. \tag{12}
\]
Note that $\varphi(0)=0$, and hence $\lim_{s\to0}\frac{d^2W^*_q(s)}{ds^2}$ has the $\frac{0}{0}$ form. Applying L'Hôpital's rule three times (each application is a routine, if tedious, differentiation of numerator and denominator; all terms carrying a factor of $s$ or $\varphi(s)$ eventually vanish in the limit) yields
\[
\lim_{s\to0}\frac{d^2W^*_q(s)}{ds^2}
=(1-\rho)\,\frac{-2\varphi^{(3)}(0)\varphi'(0)+3(\varphi''(0))^2}{6(\varphi'(0))^3}.
\]
Now,
\[
\varphi'(s)=\lambda\frac{dG^*(s)}{ds}+1,\qquad
\varphi'(0)=\lambda(-\bar x)+1=1-\rho,
\]
\[
\varphi^{(k)}(s)=\lambda\frac{d^kG^*(s)}{ds^k},\qquad
\varphi^{(k)}(0)=(-1)^k\lambda\overline{x^k},\quad k\ge2.
\]
Hence
\[
\lim_{s\to0}\frac{d^2W^*_q(s)}{ds^2}
=(1-\rho)\,\frac{-2(-\lambda\overline{x^3})(1-\rho)+3(\lambda\overline{x^2})^2}{6(1-\rho)^3}
=\frac{2\lambda\overline{x^3}(1-\rho)+3(\lambda\overline{x^2})^2}{6(1-\rho)^2}.
\]
6.16 Question 16
We have here an M/G/1 queue. Denote the arrival rate by $\lambda$. Service times follow an Erlang distribution with parameters 2 and $\mu$. Then, by (1.24) (see page 16), $G^*(s)=\mu^2/(s+\mu)^2$. Also, the mean service time equals $2/\mu$, and hence $\rho=2\lambda/\mu$. The rest follows (6.10) (see page 85). Specifically,
\[
G^*(\lambda(1-t))=\frac{\mu^2}{(\mu+\lambda(1-t))^2},
\]
and hence
\[
\Pi(t)=\frac{(1-\rho)(t-1)\mu^2}{t(\mu+\lambda(1-t))^2-\mu^2}.
\]
Some algebra shows that the denominator here equals $\lambda^2(1-t)^2t+2\lambda\mu t(1-t)-\mu^2(1-t)$. Inserting this and dividing throughout by $\mu^2$ concludes the proof.
6.17 Question 17
a) Some thought leads to the conclusion that the event $A\ge W$ coincides with the event that one who has just concluded service in a FCFS M/G/1 queue leaves behind an empty system. This probability is $1-\rho$. See (6.5) on page 83.
b) Denote by $f_W(w)$ the density function of the waiting time (service inclusive). Then,
\[
P(A\ge W)=\int_{w=0}^{\infty}f_W(w)P(A\ge w)\,dw=\int_{w=0}^{\infty}f_W(w)e^{-\lambda w}\,dw=W^*(\lambda).
\]
The rest of the proof is by the previous item.
c) The insensitivity property was defined as some parameter of a queueing system being a function of the service distribution only through its mean value, denoted $\bar x$; see the first main paragraph on page 56. In the previous two items of this question, the left-hand sides are functions of $G(\cdot)$, while the right-hand side is a function only of $\bar x$.

Note that the order of proving the first two items can be reversed. Specifically, inserting $s=\lambda$ in (6.12) on page 86 immediately leads to the fact that $W^*(\lambda)=1-\rho$.
An alternative proof: (Yoav’s)
a) Consider an arbitrary customer whose sojourn time is W. The event {A ≥ W} is the event that the customer who arrives after the arbitrary customer finds an empty system (because his interarrival time is larger than the sojourn time of the previous customer). The probability of finding an empty system is 1 − ρ.
b) Recall that for any random variable Y, P(Y ≤ A) = Y*(λ). This fact is sufficient for the result. Yet, a direct derivation is possible: recalling that

W*(s) = (1 − ρ)G*(s) / (1 − ρG_r*(s)) = s(1 − ρ)G*(s) / (s − λ(1 − G*(s)))

and inserting s = λ, we get W*(λ) = 1 − ρ.
c) These results imply that the value of W*(λ) depends on the service time distribution only through its mean, and is insensitive to the other properties of the distribution.
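As a numeric illustration of this insensitivity (the parameter values and the two service distributions below are arbitrary choices), plugging s = λ into (6.12) gives 1 − ρ for both an exponential and a deterministic service time with the same mean:

```python
import math

def W_star(s, lam, G_star, xbar):
    # the sojourn-time LST of an M/G/1 queue, as in (6.12)
    rho = lam * xbar
    return s * (1 - rho) * G_star(s) / (s - lam * (1 - G_star(s)))

lam, xbar = 0.6, 1.0                       # arbitrary values; rho = 0.6
mu = 1 / xbar
exp_lst = lambda s: mu / (mu + s)          # exponential service time
det_lst = lambda s: math.exp(-s * xbar)    # deterministic service, same mean

for G in (exp_lst, det_lst):
    assert abs(W_star(lam, lam, G, xbar) - (1 - lam * xbar)) < 1e-12
```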
6.18 Question 18
The corollary refers to an M/G/1 queue. We first prove (6.23), although it was already established as Theorem 4.7 (page 62). Our point of departure is Equation (6.19). Taking the derivative with respect to s on both sides, we get

(B*)′(s) = (G*)′(s + λ(1 − B*(s)))(1 − λ(B*)′(s)). (13)

Recalling that any LST gets the value 1 when the variable s gets the value zero, and that the negative of its derivative there coincides with the corresponding mean value, we get

−b̄ = −x̄(1 + λb̄).
Solving this for b̄ gives (6.23). Our next mission is to prove (6.24). Taking the derivative with respect to s in (13), we get

(B*)′′(s) = (G*)′′(s + λ(1 − B*(s)))(1 − λ(B*)′(s))² − λ(B*)′′(s)(G*)′(s + λ(1 − B*(s))).

Recalling that the value of the second derivative of an LST at zero yields the corresponding second moment, we get

b̄2 = x̄2(1 + λb̄)² + λb̄2x̄.

Using the value of b̄ just derived and solving for b̄2 gives the expression we are after.
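The closed forms b̄ = x̄/(1 − ρ) and b̄2 = x̄2/(1 − ρ)³ of (6.23) and (6.24) can be checked against the two moment equations above (the numeric values below are arbitrary):

```python
lam, xbar, x2 = 0.5, 1.2, 3.0     # arbitrary values; rho = lam * xbar = 0.6
rho = lam * xbar
b = xbar / (1 - rho)              # (6.23)
b2 = x2 / (1 - rho) ** 3          # (6.24)
# first-moment equation: b = xbar (1 + lam b)
assert abs(b - xbar * (1 + lam * b)) < 1e-12
# second-moment equation: b2 = x2 (1 + lam b)^2 + lam b2 xbar
assert abs(b2 - (x2 * (1 + lam * b) ** 2 + lam * b2 * xbar)) < 1e-12
```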
6.19 Question 19
a) Consider the hint: T = 0 in case the system is empty, while T is distributed as the residual of a busy period in case T > 0. Using Equation (2.10) (page 30), we get that the LST of the latter equals (1 − B*(s))/(b̄s). Hence,

T*(s) = (1 − ρ) + ρ(1 − B*(s))/(b̄s).

Since b̄ = [(1 − ρ)µ]^{−1} (see Equation (4.10)), we get that

T*(s) = (1 − ρ) + (1 − ρ)λ(1 − B*(s))/s.

Note that this derivation holds for any M/G/1 queue.
b) The first thing to observe here is that in an M/M/1 queue the time to reduce the number in the system from n to n − 1 is distributed as the length of a busy period.1 Hence, conditioning on L, T is distributed as the sum of L independent busy periods. The random variable L ≥ 0 itself follows a geometric distribution with parameter 1 − ρ: P(L = l) = (1 − ρ)ρ^l, l ≥ 0. Hence,

T*(s) = E(e^{−sT}) = E(E(e^{−sT}|L)) = Σ_{l=0}^{∞} (1 − ρ)ρ^l (B*(s))^l = (1 − ρ)/(1 − ρB*(s)),

as required.
c) Equating the two expressions for T*(s) derived above leads to an equation in the unknown B*(s). Some algebra shows its equivalence to Equation (6.21).
1 The argument at the beginning of Section 6.3.2 (page 89) explains in fact why this result does not extend to an M/G/1 queue.
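For a concrete M/M/1 instance (arbitrary parameter values below), B*(s) is the root in (0, 1) of the quadratic λB² − (λ + µ + s)B + µ = 0 implied by (6.21), and the two expressions for T*(s) indeed agree:

```python
import math

lam, mu, s = 1.0, 2.0, 0.7        # arbitrary stable instance; rho = 0.5
rho = lam / mu
a = lam + mu + s
B = (a - math.sqrt(a * a - 4 * lam * mu)) / (2 * lam)   # busy-period LST B*(s)
T_geometric = (1 - rho) / (1 - rho * B)                 # expression of item (b)
T_residual = (1 - rho) * (1 + lam * (1 - B) / s)        # expression of item (a)
assert abs(T_geometric - T_residual) < 1e-9
```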
6.20 Question 20
There are two typos here. First, the recursion should read

u_i = (1/a_0)(u_0 β_i + Σ_{j=1}^{i−1} u_j β_{i−j+1}), i ≥ 1.

Second, the definition of β_i is Σ_{j=i}^{∞} a_j. Finally, this recursion is in fact given in Equation (6.6) (see page 84); the derivation which precedes it there proves it.
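The recursion u_i = (1/a_0)(u_0 β_i + Σ_{j=1}^{i−1} u_j β_{i−j+1}) can be checked on the M/M/1 special case, where a_k — the probability of k Poisson(λ) arrivals during an Exp(µ) service — is geometric, and where u_i = (1 − ρ)ρ^i (the parameter values below are arbitrary):

```python
lam, mu = 1.0, 2.0                    # arbitrary values; rho = 0.5
rho = lam / mu
p = lam / (lam + mu)                  # for M/M/1, a_k = (1 - p) p^k
a0 = 1 - p
beta = lambda i: p ** i               # beta_i = sum_{j >= i} a_j
u = [(1 - rho) * rho ** i for i in range(25)]   # stationary distribution
for i in range(1, 20):
    rhs = (u[0] * beta(i) + sum(u[j] * beta(i - j + 1) for j in range(1, i))) / a0
    assert abs(u[i] - rhs) < 1e-12
```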
6.21 Question 21
a) The sojourn time in an M/M/1 queue follows an exponential distribution with parameter (1 − ρ)µ (see Theorem 6.5 on page 87). Hence the residual and the age of Mr. Smith's time in the system follow the same distribution (see Example 1 on page 27). The total time in the system, given that one is there, follows the corresponding length-biased distribution, which is the sum of two independent such exponential random variables, namely Erlang with parameters 2 and (1 − ρ)µ (see Example 2 on page 22).
b) The number in an M/M/1 queue (service inclusive) follows a geometric distribution with parameter 1 − ρ. Those ahead of Mr. Smith (inclusive of him) can be viewed as the residual, and those behind him (inclusive of him) as the age, of this number. Hence, by Example 4 on page 33, we learn that both follow the same distribution, which is hence geometric with parameter 1 − ρ. It is also shown there that these two are independent. The total number is hence their sum minus one (in order not to count Mr. Smith twice). This sum, again see Example 4 on page 33, follows a negative binomial distribution with parameters 2 and 1 − ρ.
7 Chapter 7
7.1 Question 1
a) The LST of the random variable corresponding to an interarrival time is

λ^n/(λ + s)^n

(see (1.24) on page 16). This, coupled with (7.4) from page 101, leads to the conclusion that the value we are after solves

σ = λ^n/(λ + µ(1 − σ))^n.

Viewing this as an equation which might have more than one solution, we are interested in the unique root which is a fraction between zero and one (see Theorem 7.1 on page 100).
b) In the case where n = 1 we get the equation σ(λ + µ(1 − σ)) = λ. This is a quadratic equation. It has two solutions, 1 and λ/µ. We are after the latter (which is assumed to be smaller than one). Indeed, in the case where n = 1 we in fact have an M/M/1 model and, for example, (7.6) on page 103 hence coincides with what we know for M/M/1 (see page 85 and/or Example 2 on page 120; note that the latter corresponds to a later chapter). In the case where n = 2 we get the equation

σ(λ + µ(1 − σ))² = λ²,

or, equivalently,

µ²σ³ − 2µ(λ + µ)σ² + (λ + µ)²σ − λ² = 0.

This cubic equation has a root which equals one, so what we look for solves

µ²σ² − (2λµ + µ²)σ + λ² = 0.

This quadratic equation has two positive roots. We look for the smaller among them (it is impossible to have two fractions as solutions). Hence, the value we are looking for equals

[(2λµ + µ²) − √((2λµ + µ²)² − 4µ²λ²)] / (2µ²).
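The fraction root σ can also be found by fixed-point iteration on σ ← (λ/(λ + µ(1 − σ)))^n, which converges to the small root under stability; the sketch below (arbitrary parameter values) checks the n = 1 and n = 2 cases against the closed forms derived above:

```python
import math

def sigma_root(lam, mu, n, iters=300):
    # fixed-point iteration on sigma = (lam / (lam + mu (1 - sigma)))^n
    s = 0.5
    for _ in range(iters):
        s = (lam / (lam + mu * (1 - s))) ** n
    return s

lam, mu = 1.0, 3.0                    # arbitrary stable instance
# n = 1: the model is M/M/1, so the fraction root is lam/mu
assert abs(sigma_root(lam, mu, 1) - lam / mu) < 1e-10

# n = 2: compare with the root of mu^2 s^2 - (2 lam mu + mu^2) s + lam^2 = 0
b = 2 * lam * mu + mu ** 2
root = (b - math.sqrt(b ** 2 - 4 * mu ** 2 * lam ** 2)) / (2 * mu ** 2)
assert abs(sigma_root(lam, mu, 2) - root) < 1e-10
```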
7.2 Question 2
a) (i) Initiate with Bayes' rule:

f_{A|L=n}(a) = (f_A(a)/P(L = n)) P(L = n|A = a).

Next, by (2.4) (see page 25), f_A(a) = Ḡ(a)/t̄, and by (7.6) (see page 103), P(L = n) = ρ(1 − σ)σ^{n−1} for n ≥ 1. Thus, all that is needed is to derive P(L = n|A = a) for n ≥ 1. This value can be computed by conditioning on the number in the system at the beginning of the current interarrival time. This number ought to equal some value m with n − 1 ≤ m < ∞, where m − (n − 1) is the number of service completions during the current (age of the) interarrival time. The number of completions, given an age of a, equals k for 0 ≤ k < m with probability e^{−µa}(µa)^k/k! (note that this expression does not hold for the case k = m, as k = m gets the complementary value of all the cases k < m). Hence,

P(L = n|A = a) = Σ_{m=n−1}^{∞} (1 − σ)σ^m e^{−µa}(µa)^{m−n+1}/(m − n + 1)!.

This value equals

(1 − σ)σ^{n−1} e^{−µa} Σ_{i=0}^{∞} (σµa)^i/i! = (1 − σ)σ^{n−1} e^{−µa} e^{σµa} = (1 − σ)σ^{n−1} e^{−µa(1−σ)}.

Putting it all together, some minimal algebra concludes the proof.
(ii) First, trivially,

∫_{a=0}^{∞} Ḡ(a) µe^{−µ(1−σ)a} da = t̄µ ∫_{a=0}^{∞} (Ḡ(a)/t̄) e^{−µ(1−σ)a} da.

Since Ḡ/t̄ is the density of the age of the interarrival time, we get that the last expression equals t̄µ times the LST of this age at the point µ(1 − σ). Using (2.10) (see page 30), we get

t̄µ (1 − G*(µ(1 − σ))) / (t̄µ(1 − σ)) = (1 − G*(µ(1 − σ))) / (1 − σ).

This, by the definition of σ (see (7.4) on page 101), equals

(1 − σ)/(1 − σ) = 1,

as required.
(iii)

P(S ≤ s|S ≤ Y) = P(S ≤ s, S ≤ Y)/P(S ≤ Y) = ∫_{t=0}^{s} µ(1 − σ)e^{−µ(1−σ)t} Ḡ(t) dt / ∫_{t=0}^{∞} µ(1 − σ)e^{−µ(1−σ)t} Ḡ(t) dt.

Consider the numerator. Taking its derivative with respect to s, we get µ(1 − σ)e^{−µ(1−σ)s} Ḡ(s). This function is proportional to the density we are looking for and, in particular, there is no need to consider the denominator any further. This value, up to a multiplicative constant, is what appears in (7.7), and of course (7.7) is hence the density we are after.
b) First, note that P(L = 0) = 1 − ρ and P(L ≥ 1) = ρ. Second, note that the unconditional density of A, the age of the interarrival time, is Ḡ(a)/t̄ (see (2.4) on page 25). Hence,

Ḡ(a)/t̄ = (1 − ρ)f_{A|L=0}(a) + ρf_{A|L≥1}(a).

The value of f_{A|L≥1}(a) is in fact given in (7.7) (as said above, the expression there is free of n as long as n ≥ 1, and hence it coincides with the corresponding expression when L ≥ 1). All that is left is to solve for f_{A|L=0}(a), which can be done by some trivial algebra.
c) The point of departure should be that

f_R(r) = ∫_{a=0}^{∞} f_A(a) g(a + r)/Ḡ(a) da.

This can be seen by integrating (2.6) (see page 28) with respect to a. The same is the case if the density of A is replaced by some conditional density of A, say A|X, as long as, given A, R and X are independent. This is the case here: L and R are certainly not independent; yet, given A, L and R are independent.
7.3 Question 3
(Benny)
a) In order to determine the distribution of the number in the system at the next arrival, all that is needed is the number in the system at the current arrival. This is true since the service times are memoryless.
b) Let Yn be the (random) number of customers served during the n-thinterarrival time. Clearly,
Xn+1 = Xn + 1− Yn
c) When s servers serve s customers simultaneously, the time until the next service completion follows an exponential distribution with rate sµ. Hence, assuming there is no shortage of customers in the queue, i.e., all service completions during this interarrival time occurred while all s servers were busy,

b_k = ∫_{t=0}^{∞} e^{−sµt} (sµt)^k/k! g(t) dt.
d) In order that all service completions during this interarrival time oc-curred while all s servers were busy we need that j ≥ s − 1 which isassumed. Hence,
P (Xn+1 = j|Xn = i) = P (Yn = i+ 1− j|Xn = i) = bi+1−j
e) The corresponding balance equations are

u_i = Σ_{j=i−1}^{∞} u_j b_{j+1−i}, i ≥ s,

or, alternatively,

u_i = Σ_{n=0}^{∞} u_{i−1+n} b_n, i ≥ s.

Now, it can be checked that u_i = u_s σ^{i−s}, i ≥ s, with σ solving σ = Σ_{n=0}^{∞} b_n σ^n, solves the above equations:

Σ_{n=0}^{∞} u_{i−1+n} b_n = Σ_{n=0}^{∞} u_s σ^{i−1+n−s} b_n = u_s σ^{i−1−s} Σ_{n=0}^{∞} σ^n b_n = u_s σ^{i−1−s} σ = u_s σ^{i−s} = u_i.

f) The proof is the same as the proof of Theorem 7.1, but with sµt̄ > 1.
8 Chapter 8
8.1 Question 1
a) The state space is {0, 1, . . . , N}, representing the number of customers in the system. It is a birth and death process with λ_{i,i+1} = λ, 0 ≤ i ≤ N − 1, and µ_{i,i−1} = µ, 1 ≤ i ≤ N. All other transition rates equal zero.
b) The limit probabilities can be derived as those leading to equation (8.17), but now with 1 ≤ i ≤ N − 1. Using induction, this leads to π_i = π_0 ρ^i, 0 ≤ i ≤ N, where ρ = λ/µ. Using the fact that the sum of the limit probabilities is one, we get that π_0 = [Σ_{i=0}^{N} ρ^i]^{−1}. This completes the proof.
c) The fact that L(0) = 1 is trivial. The rest is algebra:

L(N + 1) = ρ^{N+1} / Σ_{i=0}^{N+1} ρ^i = ρ^{N+1} / (Σ_{i=0}^{N} ρ^i + ρ^{N+1}) = ρ^{N+1} / (ρ^N/L(N) + ρ^{N+1}) = ρ / (1/L(N) + ρ) = ρL(N) / (1 + ρL(N)),
as required.
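The recursion of this item can be checked against the direct expression L(N) = ρ^N/Σ_{i=0}^{N} ρ^i; a small sketch (the values of ρ are arbitrary, and need not be below one since the system is finite):

```python
def L_direct(N, rho):
    return rho ** N / sum(rho ** i for i in range(N + 1))

def L_recursive(N, rho):
    L = 1.0                            # L(0) = 1
    for _ in range(N):
        L = rho * L / (1 + rho * L)    # L(N+1) = rho L(N) / (1 + rho L(N))
    return L

for rho in (0.5, 1.0, 2.0):
    for N in range(8):
        assert abs(L_direct(N, rho) - L_recursive(N, rho)) < 1e-12
```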
d) Note first that

λπ_i = µπ_{i+1}, 0 ≤ i ≤ N − 1.

Summing up both sides from i = 0 through i = N − 1, we get

λ(1 − π_N) = µ(1 − π_0),

as required.
e) No conditions are required on the transition rates as long as they are positive.
f) As any birth and death process, this is a time-reversible Markov pro-cess and the detailed balance equations hold. They are
λπi = µπi+1, 0 ≤ i ≤ N − 1.
8.2 Question 2
(Liron) Proof of formula (8.14). For the case where the service rate is equal for all classes, i.e., µ_c = µ for all c ∈ C (where C is the set of classes), the balance equations can be rewritten as:

π(∅) Σ_{c∈C} ρ_c = Σ_{c∈C} π(c),

π(c_1, . . . , c_n)(Σ_{c∈C} ρ_c + 1) = ρ_{c_n} π(c_1, . . . , c_{n−1}) + Σ_{c∈C} π(c, c_1, . . . , c_n),

where ρ_c = λ_c/µ. The solution is of a product form:

π(c_1, . . . , c_n) = π(∅) Π_{i=1}^{n} ρ_{c_i}. (14)
We find the multiplicative constant by summing over the probabilities and equating to one. One must be careful in the summation in order to cover the entire state space, which consists of ∅ together with all finite sequences (c_1, . . . , c_n), n ∈ {1, 2, . . .}, with c_i ∈ C — i.e., all possible queue sizes, and all possible class compositions for every queue size. If we denote m_c := Σ_{i=1}^{n} 1(c_i = c), then the number of possible vectors (c_1, . . . , c_n) with these class counts is n!/Π_{c∈C} m_c! (because the order within a class does not matter).
Finally, the normalization equation is:

1 = π(∅) + Σ_{n=1}^{∞} Σ_{{Σ_{c∈C} m_c = n}} (n!/Π_{c∈C} m_c!) π(c_1, . . . , c_n)

= π(∅) + Σ_{n=1}^{∞} Σ_{{Σ_{c∈C} m_c = n}} (n!/Π_{c∈C} m_c!) π(∅) Π_{c∈C} ρ_c^{m_c},

which yields:

π(∅) = (1 + A)^{−1}, (15)

where:

A = Σ_{n=1}^{∞} Σ_{{Σ_{c∈C} m_c = n}} (n!/Π_{c∈C} m_c!) Π_{c∈C} ρ_c^{m_c}. (16)

By the multinomial theorem, the inner sum equals (Σ_{c∈C} ρ_c)^n, so A = Σ_{n=1}^{∞} (Σ_{c∈C} ρ_c)^n and hence π(∅) = 1 − Σ_{c∈C} ρ_c.
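The multiplicity count n!/Π_{c∈C} m_c! can be checked by brute force: summing Π_i ρ_{c_i} over all class sequences of a given length n reproduces (Σ_{c∈C} ρ_c)^n, which is why A reduces to a geometric series (the per-class loads below are arbitrary):

```python
import itertools, math

rho = {"a": 0.2, "b": 0.3}            # arbitrary per-class loads; total rho = 0.5
for n in range(1, 7):
    total = sum(math.prod(rho[c] for c in seq)
                for seq in itertools.product(rho, repeat=n))
    assert abs(total - sum(rho.values()) ** n) < 1e-12
```

Hence π(∅)(1 + Σ_{n≥1}(Σ_c ρ_c)^n) = 1, i.e., π(∅) = 1 − Σ_c ρ_c.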
8.3 Question 3
This exercise is solved verbatim as Exercise 3 from Chapter 3.
8.4 Question 4
From (8.23) (swapping the roles of the indices) we get that q*_{ij} = π_j q_{ji}/π_i. Substituting this in (8.22), we get

Σ_{j∈N} q_{ij} = Σ_{j∈N} π_j q_{ji}/π_i,

from which the balance equation

π_i Σ_{j∈N} q_{ij} = Σ_{j∈N} π_j q_{ji}
is immediate. The symmetry between the original and the time-reversedprocesses, namely the fact that (q∗)∗ = q, takes care of the second part ofthe exercise.
8.5 Question 5
a) In the case where the server is busy, the time until the next departure follows an exponential distribution with parameter µ.
b) In the case where the server is idle, the time until the first departure is the sum of two independent exponential random variables, one with parameter λ (arrival) and one with parameter µ (service). Denote the first by X and the second by Y. Then,

f_{X+Y}(x) = ∫_{t=0}^{x} f_X(t) f_Y(x − t) dt = ∫_{t=0}^{x} λe^{−λt} µe^{−µ(x−t)} dt = (λµe^{−µx}/(λ − µ)) ∫_{t=0}^{x} (λ − µ)e^{−(λ−µ)t} dt

= (λµ/(λ − µ)) e^{−µx}(1 − e^{−(λ−µ)x}) = (λµ/(λ − µ))(e^{−µx} − e^{−λx}),
as required.
c) We considered in the previous two items the two possible cases leading to the next departure. The former is the case with probability ρ (which is the probability that the server is busy), while the latter is the case with probability 1 − ρ (which is the probability that the server is idle). Thus, the density function at point x of the time until the next departure is

ρµe^{−µx} + (1 − ρ)(λµ/(λ − µ))(e^{−µx} − e^{−λx}).

Minimal algebra shows that this equals

λe^{−λx},

which is the exponential density with parameter λ. Note that the departure rate is λ (and not µ): what comes in must come out, so the value of the parameter was quite expected.
An alternative proof using LSTs: Let T be the random time between two departures. Its LST is

E(e^{−sT}) = (1 − ρ)E(e^{−sT}|L = 0) + ρE(e^{−sT}|L ≥ 1).

In the case where L = 0, T = X + Y, where X is exponential with parameter λ and Y is exponential with parameter µ. In the case where L ≥ 1, T = Y. Hence,

E(e^{−sT}) = (1 − ρ)(λ/(λ + s))(µ/(µ + s)) + ρ(µ/(µ + s)).

Some algebra shows that this equals λ/(λ + s), which is the LST of an exponential random variable with parameter λ, as required.
d) There is one more thing which needs to be established and which the analysis above does not address: in order to show that the departure process is Poisson, we need to show that consecutive interdeparture times are independent.
8.6 Question 6
(Liron)
a) The only transition rates that need to be updated are q_{0,1A} = pλ and q_{0,1B} = (1 − p)λ. The new flow diagram appears in Figure 1.
b) The balance equations:

π_0(λp + λ(1 − p)) = π_{1A}µ_1 + π_{1B}µ_2 (17)

π_{1A}(µ_1 + λ) = π_0λp + π_2µ_2 (18)

π_{1B}(µ_2 + λ) = π_0λ(1 − p) + π_2µ_1 (19)

π_2(λ + µ_1 + µ_2) = π_{1A}λ + π_{1B}λ + (µ_1 + µ_2)π_3 (20)

π_i(λ + µ_1 + µ_2) = π_{i−1}λ + (µ_1 + µ_2)π_{i+1}, i ≥ 3. (21)
Figure 1: Two non-identical servers. The first server in a busy period is chosen according to p.
We can apply the cut-balancing Theorem 8.3 to states i ≥ 2:

π_iλ = π_{i+1}(µ_1 + µ_2) ⇔ π_{i+1} = π_iλ/(µ_1 + µ_2), (22)

which is also equivalent to π_i = π_2(λ/(µ_1 + µ_2))^{i−2} for all i ≥ 2. We are left with computing π_0, π_{1A}, π_{1B} and π_2. From equations (18) and (19) we derive:

π_{1A} = π_0 λp/(λ + µ_1) + π_2 µ_2/(λ + µ_1) (23)

π_{1B} = π_0 λ(1 − p)/(λ + µ_2) + π_2 µ_1/(λ + µ_2). (24)
By plugging the above equations into (17) we get:

π_0λ = π_0 λpµ_1/(λ + µ_1) + π_2 µ_1µ_2/(λ + µ_1) + π_0 λ(1 − p)µ_2/(λ + µ_2) + π_2 µ_1µ_2/(λ + µ_2). (25)

Simple algebra yields π_2 = π_0C, where:

C = λ²(λ + (1 − p)µ_1 + pµ_2) / (µ_1µ_2(2λ + µ_1 + µ_2)). (26)
We can now conclude that the solution to the balance equations is
given by:

π_{1A} = π_0 (λp + µ_2C)/(λ + µ_1) (27)

π_{1B} = π_0 (λ(1 − p) + µ_1C)/(λ + µ_2) (28)

π_i = π_0 C (λ/(µ_1 + µ_2))^{i−2}, i ≥ 2. (29)
Finally, to derive π_0 we sum the probabilities and equate them to one:

1 = π_0 + π_{1A} + π_{1B} + Σ_{i=2}^{∞} π_i

= π_0 (1 + (λp + µ_2C)/(λ + µ_1) + (λ(1 − p) + µ_1C)/(λ + µ_2) + C Σ_{i=2}^{∞} (λ/(µ_1 + µ_2))^{i−2})

= π_0 (1 + (λp + µ_2C)/(λ + µ_1) + (λ(1 − p) + µ_1C)/(λ + µ_2) + C(µ_1 + µ_2)/(µ_1 + µ_2 − λ)),

yielding:

π_0 = (1 + (λp + µ_2C)/(λ + µ_1) + (λ(1 − p) + µ_1C)/(λ + µ_2) + C(µ_1 + µ_2)/(µ_1 + µ_2 − λ))^{−1}. (30)
c) From Theorem 8.4 we know that the transition rates of the time-reversed process are q*_{ij} = (π_j/π_i)q_{ji}. We can apply the results of (b):

q*_{0,1A} = µ_1 (λp + µ_2C)/(λ + µ_1)

q*_{0,1B} = µ_2 (λ(1 − p) + µ_1C)/(λ + µ_2)

q*_{1A,0} = λp (λ + µ_1)/(λp + µ_2C)

q*_{1B,0} = λ(1 − p)(λ + µ_2)/(λ(1 − p) + µ_1C)

q*_{1A,2} = µ_2C (λ + µ_1)/(λp + µ_2C)

q*_{1B,2} = µ_1C (λ + µ_2)/(λ(1 − p) + µ_1C)

q*_{2,1A} = λ (λp + µ_2C)/(C(λ + µ_1))

q*_{2,1B} = λ (λ(1 − p) + µ_1C)/(C(λ + µ_2))

q*_{i,i+1} = λ, i ≥ 2

q*_{i,i−1} = µ_1 + µ_2, i ≥ 3.
d) The process is not time-reversible in general. The Kolmogorov criterion on the product of the transition rates is a necessary and sufficient condition for time reversibility, so it is enough to show one cycle where the criterion is not met. Consider the cycle (0, 1A, 2, 1B, 0). The product of the transition rates along this cycle in the original order equals pλ²µ_1µ_2. Reversing the order, one gets (1 − p)λ²µ_2µ_1. These two products are equal if and only if p = 1/2. This shows that p = 1/2 is a necessary condition for time-reversibility.
e) As it turns out, it is also sufficient: all other cycles are either like this one, or they are based on simply traversing the states in their order, which trivially obeys the condition.
8.7 Question 7
Inspect Figure 8.11 on page 117. The states are the same. Likewise is the case with regard to the service rates µ_i, i = 1, 2, where applicable. The differences are as follows: q_{00,10} = q_{00,01} = λ/2, and all other λ_1 and λ_2 need to be replaced with the common value of λ. The process is certainly time-reversible: the Markov process has a tree shape and, in fact, by re-labeling the states, we have here a birth-and-death process. See the discussion at the top of page 132. As for the limit probabilities, keep the definition of π_{ij} as stated in Example 12 on page 126. By the cut-balancing theorem, we get that π_{i+1,0} = ρ_1π_{i,0} for i ≥ 1, where ρ_1 = λ/µ_1. Hence, π_{i,0} = ρ_1^{i−1}π_{1,0}, i ≥ 1. Likewise, π_{1,0} = ρ_1π_{0,0}/2, and hence π_{i,0} = ρ_1^iπ_{00}/2. Similarly, π_{0,i} = ρ_2^iπ_{00}/2 for i ≥ 1, where ρ_2 = λ/µ_2. Since π_{00} + Σ_{i=1}^{∞}π_{i0} + Σ_{i=1}^{∞}π_{0i} = 1, we get that

π_{00}(1 + ρ_1/(2(1 − ρ_1)) + ρ_2/(2(1 − ρ_2))) = 1,

from which the value of π_{00} is easily found to equal

2(1 − ρ_1)(1 − ρ_2) / (2 − ρ_1 − ρ_2).
8.8 Question 8
Assume that for all pairs of i and j, π_iq_{ij} = π_jq_{ji}, namely the vector π obeys the detailed balance equations. Then,

Σ_j π_iq_{ij} = Σ_j π_jq_{ji},

or

π_i Σ_j q_{ij} = Σ_j π_jq_{ji},

which is the balance equation corresponding to state i.
8.9 Question 9
a) Exponential with parameter µ. This is by the fact that the distributionof the age of an exponential distribution coincides with the distributionof the original random variable. See Example 1 in Section 2.2.2.
b) Consider the time epoch of the current service commencement. Looking back from this point of time, no arrival took place and likewise no service completion. This means that the time since service commencement, namely its age, is the minimum between two independent exponential random variables with parameters λ and µ. Since this minimum is also exponential, with parameter λ + µ, and its age follows the same distribution (since it is exponential), we conclude with the answer: exponential with parameter λ + µ.
c) The answer was in fact given in the previous item.

d) Using the results of the previous items and replacing densities with probabilities,

P(empty|Age = t) = P(empty) f(Age = t|empty)/f(Age = t) = (1 − ρ)(λ + µ)e^{−(λ+µ)t}/(µe^{−µt}) = (1 − ρ²)e^{−λt}.
Another possible question here is: given that one is in service, what is the distribution of one's service time? The answer: an Erlang distribution with parameters 2 and µ. One way to look at this is by the fact that the length-biased distribution of an exponential distribution is an Erlang with these parameters. See Example 1 in Section 2.2.1. Another way is to notice that the future (i.e., residual) service time is exponential with parameter µ by the memoryless property. Likewise, by the time-reversibility of the process, this is also the distribution of the age. Both are independent. Finally, the sum of two iid exponentially distributed random variables with parameter µ follows an Erlang distribution with parameters 2 and µ. See Section 1.2.5.
8.10 Question 10
a) The amount of work in the system is less than or equal to that in the corresponding FCFS system (without reneging), due to work that leaves without being served. Hence this is not a work-conserving system.
b) Since the number in the system at some time determines the future (statistically) in the same way as in the case when the entire past history is given, the number in the system serves as the state of a Markov chain.
c) We have a birth and death process with {0, 1, 2, . . .} as the state space.The transition rates are λi,i+1 = λ, i ≥ 0, and µi,i−1 = µ+ iθ, i ≥ 1.
d) The balance equations are

λπ_0 = (µ + θ)π_1

and

(λ + µ + iθ)π_i = λπ_{i−1} + (µ + (i + 1)θ)π_{i+1}, i ≥ 1.

The detailed balance equations are

λπ_i = (µ + (i + 1)θ)π_{i+1}, i ≥ 0.
Their solution is

π_i = π_0 λ^i / Π_{j=1}^{i}(µ + jθ), i ≥ 0,

where π_0 is the constant which makes the limit probabilities sum to one:

π_0 = [Σ_{i=0}^{∞} λ^i / Π_{j=1}^{i}(µ + jθ)]^{−1}.

This summation is always finite (as long as all the parameters λ, µ and θ are positive). The model resembles mostly the M/M/∞ model.
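The limit probabilities are easy to evaluate numerically; note that, in line with the comparison to M/M/∞, the sum converges even when λ > µ (the parameter values below are arbitrary):

```python
lam, mu, theta = 2.0, 1.0, 0.5        # arbitrary values; stable although lam > mu

def weight(i):                        # lam^i / prod_{j=1}^{i} (mu + j theta)
    w = 1.0
    for j in range(1, i + 1):
        w *= lam / (mu + j * theta)
    return w

K = 200                               # truncation; the terms vanish factorially fast
norm = sum(weight(i) for i in range(K))
pi = [weight(i) / norm for i in range(K)]
# detailed balance: lam pi_i = (mu + (i+1) theta) pi_{i+1}
for i in range(50):
    assert abs(lam * pi[i] - (mu + (i + 1) * theta) * pi[i + 1]) < 1e-12
```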
e) It is only the third process which is Poisson. The argument is that the abovementioned Markov process is time-reversible, as is any birth and death process. The arrival process in the time-reversed process is Poisson with rate λ, but any such arrival corresponds to a departure in the original process (without 'telling' the reason for this departure: service completion or abandonment).
f) λP is the long-run abandonment rate. Also, θ is the individual abandonment rate and, since L is the mean number in the system, θL is the long-run abandonment rate too. Hence λP = θL. Since L = λW by Little's law, we get λP = θλW and hence P = θW.
8.11 Question 11
a) Consider the time-reversed process and recall that an M/M/1 queue is time-reversible. Looking from the time-reversed process's angle, the age of service (in the original process) is the time until the first arrival or until the system is empty, whichever comes first. The next event will take place within time whose mean equals 1/(λ + µ). It is a departure with probability λ/(λ + µ), a case which adds zero to the mean value. It is a service completion with probability µ/(λ + µ), a case which adds a_{n−1} to the mean. Hence,

a_n = 1/(λ + µ) + (µ/(λ + µ)) a_{n−1}, n ≥ 1,

with a_0 = 0.
b) Note first that the case n = 1 was already dealt with in Q9(c). From there we can conclude that a_1 = 1/(λ + µ), which will be used as the anchor for the induction argument. Using the previous item, coupled with the induction hypothesis, we get that

a_n = 1/(λ + µ) + (µ/(λ + µ))(1/λ − (1/λ)(1/(1 + ρ)^{n−1}))

= 1/λ − (µ/(λ(λ + µ)))(1/(1 + ρ)^{n−1}) = 1/λ − (1/λ)(1/(1 + ρ)^n),
as required.
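The recursion of item (a) and the closed form just derived can be compared numerically (the parameter values below are arbitrary):

```python
lam, mu = 1.0, 3.0                    # arbitrary values
rho = lam / mu
a = 0.0                               # a_0 = 0
for n in range(1, 30):
    a = 1 / (lam + mu) + mu / (lam + mu) * a         # the recursion of item (a)
    closed = (1 / lam) * (1 - 1 / (1 + rho) ** n)    # the closed form of item (b)
    assert abs(a - closed) < 1e-12
```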
c) This item is trivial.
8.12 Question 12
a) This is a birth and death process with state space {0, 1, . . . , s}. The birth rates are λ_{i,i+1} = λ, 0 ≤ i ≤ s − 1, and the death rates are µ_{i,i−1} = iµ, 1 ≤ i ≤ s. The detailed balance equations are

λπ_i = (i + 1)µπ_{i+1}, 0 ≤ i ≤ s − 1.

From that we get that

π_{i+1} = (λ/((i + 1)µ))π_i, 0 ≤ i ≤ s − 1,

and then, by induction, that

π_i = (ρ^i/i!)π_0, 0 ≤ i ≤ s.

The value of π_0 is found by noting that the sum of the probabilities equals one. Hence,

π_0 = [Σ_{i=0}^{s} ρ^i/i!]^{−1}.

This concludes our proof. Finally, note that both numerator and denominator can be multiplied by e^{−ρ}. Then, in the numerator one gets P(X = i), where X follows a Poisson distribution with parameter ρ, while the denominator equals P(X ≤ s).
b) B(0) = 1, as this is the loss probability in the case of no servers. Next,

B(s + 1) = (ρ^{s+1}/(s + 1)!) / Σ_{i=0}^{s+1} ρ^i/i!

= (ρ^{s+1}/(s + 1)!) / (Σ_{i=0}^{s} ρ^i/i! + ρ^{s+1}/(s + 1)!)

= (ρ^{s+1}/(s + 1)!) / (ρ^s/(s!B(s)) + ρ^{s+1}/(s + 1)!)

= (ρ/(s + 1)) / (1/B(s) + ρ/(s + 1))

= ρB(s) / (1 + s + ρB(s)),

as required.
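This recursion is the standard numerically stable way to compute the Erlang loss formula; a small sketch comparing it with the direct expression (arbitrary loads):

```python
from math import factorial

def erlang_b_direct(s, rho):
    return (rho ** s / factorial(s)) / sum(rho ** i / factorial(i) for i in range(s + 1))

def erlang_b_recursive(s, rho):
    B = 1.0                                   # B(0) = 1
    for k in range(1, s + 1):
        B = rho * B / (k + rho * B)           # B(s+1) = rho B(s) / (1 + s + rho B(s))
    return B

for rho in (0.5, 2.0, 10.0):
    for s in range(0, 8):
        assert abs(erlang_b_direct(s, rho) - erlang_b_recursive(s, rho)) < 1e-12
```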
c) (Liron) From Example 4 we know that the probability of all s servers being busy in an M/M/s system is:

Q(s) = Σ_{j=s}^{∞} (λ/µ)^j/(s!s^{j−s}) / (Σ_{i=0}^{s−1} (λ/µ)^i/i! + ((λ/µ)^s/s!)(1/(1 − λ/(µs)))). (31)

In the previous items we showed that:

B(s) = π_s = ((λ/µ)^s/s!) / Σ_{i=0}^{s} (λ/µ)^i/i!. (32)
Combining both of the above, and writing ρ = λ/µ:

Q(s) = ((ρ^s/s!)(1/(1 − ρ/s))) / (Σ_{i=0}^{s−1} ρ^i/i! + (ρ^s/s!)(1/(1 − ρ/s)))

= (B(s)/(1 − ρ/s)) / (1 − B(s) + B(s)/(1 − ρ/s))

= B(s) / (1 − ρ/s + (ρ/s)B(s))

= sB(s) / (s − ρ(1 − B(s))),

where the second equality follows by dividing the numerator and the denominator by Σ_{i=0}^{s} ρ^i/i!.
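The resulting relation between the Erlang C and Erlang B formulas can be checked numerically on arbitrary stable instances (ρ = λ/µ < s):

```python
from math import factorial

def erlang_b(s, rho):
    B = 1.0
    for k in range(1, s + 1):
        B = rho * B / (k + rho * B)           # the recursion of item (b)
    return B

def erlang_c_from_b(s, rho):
    B = erlang_b(s, rho)
    return s * B / (s - rho * (1 - B))        # Q(s) = s B(s) / (s - rho (1 - B(s)))

def erlang_c_direct(s, rho):
    tail = (rho ** s / factorial(s)) / (1 - rho / s)
    head = sum(rho ** i / factorial(i) for i in range(s))
    return tail / (head + tail)               # Q(s) as in (31)

for s, rho in ((1, 0.5), (3, 2.0), (10, 7.5)):
    assert abs(erlang_c_from_b(s, rho) - erlang_c_direct(s, rho)) < 1e-12
```

Note that for s = 1 both give Q(1) = ρ, the M/M/1 probability of waiting.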
d) P(W_q > 0) = Q(s) holds due to the PASTA property. As for the mean waiting time,

E(W_q) = E(W_q|W_q = 0)P(W_q = 0) + E(W_q|W_q > 0)P(W_q > 0) = E(W_q|W_q > 0)P(W_q > 0).

The second factor here is Q(s), as derived in the previous item. As for the first factor, note that it equals the mean waiting time (service inclusive) of a customer in an M/M/1 queue with λ as the arrival rate and sµ as the service rate. In other words, E(W_q|W_q > 0) = 1/(sµ − λ) (see Equation (4.9)).
8.13 Question 13
a) In this system an idleness period follows a busy period, which follows an idleness period, etc. The mean of the idleness period equals 1/(λ_1 + λ_2), as this period is the minimum between exponentially distributed random variables with parameters λ_1 and λ_2. This is the numerator. The mean busy period is b_1 (respectively, b_2) if the arrival who opens the busy period is of type 1 (respectively, type 2), a probability λ_1/(λ_1 + λ_2) (respectively, λ_2/(λ_1 + λ_2)) event. Hence, the denominator here is the mean cycle time. The ratio between the two is the probability of idleness which, by definition, equals π_{00}.
b) The value of b_1 is 1/(µ − λ_1) and likewise b_2 = 1/(µ − λ_2). Hence, π_{00} as stated in the question equals

1 / (1 + λ_1/(µ − λ_1) + λ_2/(µ − λ_2)).

Minimal algebra leads to Equation (8.18).
8.14 Question 14
a) Work conservation is defined on page 53 (see Definition 4.1). In this model of retrials it is possible that jobs will be in the queue (or orbit) while the server is idle. Had this been a FCFS model, the server would have served a customer at such an instant. Hence this is not a work-conserving system. In particular, the amount of work kept in the system is larger than or equal to that in a similar system operating under the FCFS regime.
b) We repeat below what appears in the book by Hassin and Haviv, pp. 131–132. First, notice that, using the notation of the exercise, what we are looking for is

L_q = Σ_{j=1}^{∞} jπ_{1j} + Σ_{j=1}^{∞} jπ_{0j}.
Consider the unnumbered formula after (8.21) and denote it by G_s(x), as indeed it is a function of s and x. Taking the derivative with respect to x on both sides and letting x = ρ and s = λ/η, one gets

Σ_{j=1}^{∞} jρ^{j−1} Π_{i=1}^{j}(1 + λ/(ηi)) = (1 + λ/η)(1 − ρ)^{−2−λ/η}. (33)
Then, by (8.21),

Σ_{j=1}^{∞} jπ_{1j} = Σ_{j=1}^{∞} jρ^j Π_{i=1}^{j}(1 + λ/(ηi)) π_{10}.

Using the value of π_{10} given at the bottom of page 127, coupled with (33), leads to

Σ_{j=1}^{∞} jπ_{1j} = (ρ²/(1 − ρ))(1 + λ/η). (34)
As for the second summation, using (8.20) and (8.21),

π_{0j} = (µ/(λ + ηj))π_{1j} = (µ/(λ + ηj))ρ^j Π_{i=1}^{j}(1 + λ/(ηi)) π_{10} = (µ/(λ + ηj))ρ^j Π_{i=1}^{j}(1 + λ/(ηi)) ρ(1 − ρ)^{1+λ/η}.

Multiplying by j and summing up from j = 1 to infinity, we get, by the use of (33), that

Σ_{j=1}^{∞} jπ_{0j} = (λ/η)ρ. (35)

All that is left is to sum (34) and (35) and apply some minimal algebra.
c) First note that the probability that the server is busy is ρ. This can be argued by Little's law: the arrival rate to the server (counting only those who commence service) is λ, since eventually all customers enter service (and only once); the mean time there is 1/µ, so we end up with λ/µ as the mean number of customers there. Second, we use Little's law once again, now with the system defined as the orbit. The expected number in this system is what was just derived in the previous item. The arrival rate to this system is λρ since, as was just argued, ρ is the probability of finding the server busy. Note that we count an arrival to the orbit only once (we do not consider an unsuccessful retrial as a fresh arrival to the system, and look at it as 'nothing happened'). Dividing one by the other leads to the mean time in orbit, for one who enters the orbit, which coincides with what is asked in this item.
8.15 Question 15
All one needs to do is to replace all the transition probabilities in Theorem 3.8 with transition rates.
8.16 Question 16
We start with showing that B_n, n ≥ 1, obeys the recursion stated there for n ≥ 2. Suppose a customer opens a busy period in an M/M/1/n queue. The next event takes place within 1/(λ + µ) units of time on average. With probability µ/(λ + µ) it is a service completion, and the busy period ends. Otherwise, and this is with probability λ/(λ + µ), a new customer arrives. The question we are after is when the server will first be idle again. In order to be idle again, the number of customers there needs first to drop from two to one. The reader can be convinced that this takes exactly as long as a busy period in an M/M/1/(n − 1) queue. Once this occurs, it will in fact take another busy period until the system is empty for the first time. It is clear that B_1 = 1/µ, so the recursion leads to a way of computing B_n, n ≥ 2, one by one. This implies that the recursion has a unique solution, which consists of the values of the mean busy periods. What finally needs to be shown is that (1/µ)Σ_{j=0}^{n−1} ρ^j is a solution. Minimal algebra leads to

(λ + µ)B_n = 1 + λ(B_{n−1} + B_n),

or

B_n = 1/µ + ρB_{n−1}.

Assume the induction hypothesis that B_{n−1} = (1/µ)Σ_{j=0}^{n−2} ρ^j and conclude.
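A short numeric check of the recursion B_n = 1/µ + ρB_{n−1} against the claimed solution B_n = (1/µ)Σ_{j=0}^{n−1} ρ^j (arbitrary parameter values):

```python
lam, mu = 1.0, 2.0                    # arbitrary values; rho = 0.5
rho = lam / mu
B = 1 / mu                            # B_1 = 1/mu
for n in range(2, 20):
    B = 1 / mu + rho * B              # B_n = 1/mu + rho B_{n-1}
    closed = sum(rho ** j for j in range(n)) / mu
    assert abs(B - closed) < 1e-12
```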
8.17 Question 17
Theorem 8.2 says in fact that e_i is proportional to π_iq_i, i ∈ N. In other words, up to some multiplicative constant, e_i equals π_iq_i. Note that the vector e_i, i ∈ N, obeys e_j = Σ_i e_iP_{ij} = Σ_i e_iq_{ij}/q_i. We need to show that π_iq_i is such a solution. We plug it into the two sides of this equation and check for equality. Indeed, π_jq_j is what we get on the left-hand side. On the right-hand side we get Σ_i π_iq_iq_{ij}/q_i = Σ_i π_iq_{ij}. The two are equal for all j, as the vector π_i, i ∈ N, solves the balance equations.
8.18 Question 18
During a time interval of length t, the number of jumps from state to state follows a Poisson distribution with parameter ct. Hence, the number of such jumps equals n with probability e^{−ct}(ct)^n/n!. The dynamics among the states follow the transition matrix P. Hence, given n jumps, the probability of being in state j, given that i is the initial state, is P^n_{ij}. The rest is the use of the law of total probability.
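This uniformization series is easy to evaluate; the sketch below uses a hypothetical two-state chain with rates q01 = 2 and q10 = 3 and uniformization constant c = 5, truncates the Poisson sum, and compares the result with the known closed form for a two-state chain:

```python
import math

q01, q10, c = 2.0, 3.0, 5.0           # assumed two-state chain; c >= every exit rate
P = [[1 - q01 / c, q01 / c],
     [q10 / c, 1 - q10 / c]]          # P = I + Q/c

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def P_t(t, N=60):                     # P(t) ~ sum_n e^{-ct}(ct)^n/n! * P^n
    out = [[0.0, 0.0], [0.0, 0.0]]
    Pn = [[1.0, 0.0], [0.0, 1.0]]     # P^0 = I
    w = math.exp(-c * t)              # Poisson weight for n = 0
    for n in range(N + 1):
        for i in range(2):
            for j in range(2):
                out[i][j] += w * Pn[i][j]
        Pn = matmul(Pn, P)
        w *= c * t / (n + 1)          # next Poisson weight
    return out

t = 0.4
Pt = P_t(t)
r = q01 + q10
assert abs(Pt[0][1] - (q01 / r) * (1 - math.exp(-r * t))) < 1e-10
assert abs(Pt[0][0] + Pt[0][1] - 1.0) < 1e-10   # rows of P(t) sum to one
```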
8.19 Question 19
a) The answer is Poisson with parameter λW. First, note that this is easily the answer had the question been referring to the time instant of departure. Yet, by considering the time-reversed process (which has the same dynamics), the instants of arrivals and departures swap, and hence have the same statistics.
b) Denote by S the service time. First note that, given W = w, S gets only values between 0 and w. Moreover, there is an atom at w, due to the fact that 'no waiting' is a non-zero probability event. Indeed, we start with the atom. Since, regardless of the service time, S coincides with W when one finds an empty system upon arrival, a probability 1 − ρ event,

P(S = w|W = w) = f_S(w)P(W = w|S = w)/f_W(w) = µe^{−µw}(1 − ρ)/((µ − λ)e^{−(µ−λ)w}),

which by minimal algebra is seen to equal e^{−λw}. Looking from the time-reversed angle, the event W = S is equivalent to no arrivals during a waiting time, in this case a time of length w. This probability equals e^{−λw}.
Next, for the case f_{S|W=w}(s) for 0 ≤ s ≤ w:

f_{S|W=w}(s) = f_S(s)f_{W|S=s}(w)/f_W(w) = µe^{−µs}f_{W_q}(w − s)/((µ − λ)e^{−(µ−λ)w}),

where W_q is the random variable of the time in queue (exclusive of service). This equals

µe^{−µs}ρ(µ − λ)e^{−(µ−λ)(w−s)} / ((µ − λ)e^{−(µ−λ)w}) = λe^{−λs}.

Looking for a time-reversibility argument: the time of departure corresponds to a time of arrival, and the event S = s means that this instant, which is the last departure during the last w units of time, is in fact the first arrival in the reversed process. This comes with density λe^{−λs}.
9 Chapter 9
9.1 Question 1
a)

Σ_j P*_{ij} = Σ_j (γ_j/γ_i)P_{ji} = (1/γ_i) Σ_j γ_jP_{ji},

which by (9.1) equals

(1/γ_i)(γ_i − λ_i) = 1 − λ_i/γ_i < 1.
b) By what was just shown,

λ_i + γ_i Σ_j P*_{ij} = λ_i + γ_i(1 − λ_i/γ_i) = γ_i,

as required.
c) Consider the departure rate from station i in the time-reversed process. The external rate equals λ_i, as this is the external arrival rate in the original process. The internal departure rate in the time-reversed process equals γ_i Σ_j P*_{ij}. Summing the two, we get the total departure rate. This should of course equal the throughput of this station, which is γ_i.
d) The left-hand side equals

λ_iπ(n)(1/ρ_i) + Σ_{j≠i} π(n)(ρ_j/ρ_i)µ_jP_{ji} = π(n)[λ_iµ_i/γ_i + Σ_{j≠i} (γ_j/γ_i)µ_iP_{ji}]

= π(n)(µ_i/γ_i)[λ_i + Σ_{j≠i} γ_jP_{ji}] = π(n)(µ_i/γ_i)γ_i = π(n)µ_i,

where the one-before-last equality is due to (9.1).
9.2 Question 2
(Liron)
a) The Markov process of an open network of exponential queues is time-reversible if and only if γ_iP_ij = γ_jP_ji for every pair 1 ≤ i, j ≤ M.

Proof: By Theorem (8.4), a Markov process is time-reversible if and only if for every pair of states i ∈ N, j ∈ N: q*_ij = q_ij, where q*_ij = (π_j/π_i)q_ji.

Recall that the state space is the set of vectors n ∈ N^M. We apply Theorem (9.1) in order to compute q* for the different possible states:
q*(n, n + e_i) = [π(n + e_i)/π(n)]q(n + e_i, n) = ρ_iµ_iP′_i, 1 ≤ i ≤ M,

q*(n, n − e_i) = [π(n − e_i)/π(n)]q(n − e_i, n) = λ_i/ρ_i, 1 ≤ i ≤ M, n_i ≥ 1,

q*(n, n − e_i + e_j) = [π(n − e_i + e_j)/π(n)]q(n − e_i + e_j, n) = (ρ_j/ρ_i)µ_jP_ji, 1 ≤ i, j ≤ M, n_i ≥ 1.
Applying simple algebra and plugging the rates into the time-reversibility condition (q* = q), we obtain the following conditions:

γ_i(1 − Σ_{j=1}^M P_ij) = λ_i, 1 ≤ i ≤ M (36)

γ_iP_ij = γ_jP_ji, 1 ≤ i, j ≤ M (37)
If we plug (37) into (36) we get

γ_i = λ_i + Σ_{j=1}^M γ_jP_ji,

which is met by definition in this model: γ = λ(I − P)^{−1}. We can therefore conclude that the second condition is a necessary and sufficient condition for time reversibility.
b) If the process is time-reversible, we show that the internal arrival rate to each server equals its internal departure rate. The internal arrival rate to server i, 1 ≤ i ≤ M, is Σ_{j=1}^M γ_jP_ji. From the condition of (a), γ_jP_ji = γ_iP_ij for all 1 ≤ i, j ≤ M, so this rate equals Σ_{j=1}^M γ_iP_ij, which is exactly the internal departure rate. Since the total arrival rate and the total departure rate at a station both equal its throughput γ_i, the external rates then coincide as well.
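The detailed-balance condition γ_iP_ij = γ_jP_ji is also easy to test numerically. A sketch for two hypothetical two-station networks, one satisfying the condition and one not (all rates are illustrative choices):

```python
from math import isclose

def gamma_2station(lam, P):
    # solve the traffic equations gamma = lam + gamma P for two stations
    g0 = (lam[0] + lam[1]*P[1][0]) / (1 - P[0][1]*P[1][0])
    return [g0, lam[1] + g0*P[0][1]]

def reversible(lam, P):
    # condition (37): gamma_i P_ij = gamma_j P_ji for all pairs
    g = gamma_2station(lam, P)
    return all(isclose(g[i]*P[i][j], g[j]*P[j][i])
               for i in range(2) for j in range(2))

assert reversible([1.0, 1.0], [[0.0, 0.5], [0.5, 0.0]])       # symmetric case
assert not reversible([1.0, 0.2], [[0.0, 0.7], [0.1, 0.0]])   # asymmetric case
```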
9.3 Question 3
a) CANCELED
b) (i) Establishing the fact that the waiting times in the first two queues are independent is easy: there is no queueing time prior to the second service, and this service time is drawn independently of the waiting time in queue 1. By looking at the time-reversed process and using the time-reversibility of this system, we get, by the same reasoning used to show the independence between the waiting times at the first two queues, that the waiting time in the second queue (i.e., the corresponding service time) and the waiting time at the third queue are independent.
(ii) Let us look first at the unconditional probability. The probability that the customer behind overtakes him is 1/4. The reason is that the one behind must first complete his service before this customer finishes his second service; this is a probability-1/2 event. Due to the memoryless property, the probability that he then also finishes service at the second server first is again 1/2, leading to 1/4. However, if the customer under consideration is served for a very long time, he is very likely to be overtaken. In fact, this probability can be made as large as one wants by taking this service time to be long enough: for any such service time x, we look at P(X ≤ x), where X follows an Erlang distribution with parameters 2 (for two stages) and 1 (the rate of service at each stage). Clearly, lim_{x→∞} P(X ≤ x) = 1.
9.4 Question 4
a) The departure rate from station i equals γ_iP′_i, 1 ≤ i ≤ M. Hence, given a departure, it is from station i with probability

γ_iP′_i / Σ_{j=1}^M γ_jP′_j, 1 ≤ i ≤ M.
b) Denote by W^d_i the mean time spent in the system by a customer who departs from station i, 1 ≤ i ≤ M. Then, in a similar way to (9.6), but for the time-reversed process, we get that

W^d_i = 1/(µ_i(1 − ρ_i)) + Σ_{j=1}^M P*_ij W^d_j, 1 ≤ i ≤ M.

Then, as in (9.7),

W^d_i = Σ_{j=1}^M [(I − P*)^{−1}]_{ij} · 1/(µ_j(1 − ρ_j)), 1 ≤ i ≤ M.
Using the previous item, we conclude that the mean time in the system of one who has just departed is

W^d ≡ Σ_{i=1}^M [γ_iP′_i / Σ_{j=1}^M γ_jP′_j] W^d_i.
c) Of course, W = W d. Hence, this is an alternative way for computingthe mean time in the system for a random customer.
10 Chapter 10
10.1 Question 1
Although in the model dealt with in this chapter there are M state variables, we in fact have only M − 1. This is the case since one degree of freedom is lost due to the fact that the sum of the state variables is fixed at the value N. In particular, in the case where M = 2 we in fact have only one state variable, namely how many are in station 1. The number there is a random variable between 0 and N. The birth rate is µ_2 as long as the number there is below N: a birth is due to a service completion at the other server (recall that we assume P_ii = 0 and hence customers hop from one server to the other). The death rate is µ_1 for the same reason, and this is the case as long as this station is not empty. Since the M/M/1/N queue is also a birth and death process with the same state space and rates, our proof is completed.
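The equivalence can be observed numerically: the birth-death chain with birth rate µ_2 and death rate µ_1 has the same stationary distribution as the closed two-station product form with γ_1 = γ_2. A sketch with illustrative rates:

```python
from math import isclose

mu1, mu2, N = 2.0, 3.0, 5     # illustrative service rates and population

# birth-death chain on {0,...,N}: birth rate mu2, death rate mu1
bd = [1.0]
for n in range(1, N + 1):
    bd.append(bd[-1] * mu2 / mu1)
Z = sum(bd)
bd = [x / Z for x in bd]

# closed-network product form: gamma1 = gamma2, so rho_i is proportional to 1/mu_i
rho1, rho2 = 1/mu1, 1/mu2
pf = [rho1**n * rho2**(N - n) for n in range(N + 1)]
Zp = sum(pf)
pf = [x / Zp for x in pf]

assert all(isclose(a, b) for a, b in zip(bd, pf))
```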
10.2 Question 2
Since e(n − e_i + e_j) = e(n)ρ_j/ρ_i for any pair i and j, 1 ≤ i, j ≤ M (as long as n_i ≥ 1), we get that the left-hand side equals

e(n)(1/ρ_i) Σ_{j≠i} ρ_jµ_jP_ji = e(n)(1/ρ_i) Σ_{j≠i} γ_jP_ji,

which equals, by the definition of the vector γ and the assumption that P_ii = 0,

e(n)(1/ρ_i)γ_i = e(n)µ_i,
as required.
10.3 Question 3
Suppose µ_i is changed to tµ_i, 1 ≤ i ≤ M. Then G(N), as defined in (10.4), is multiplied by 1/t^N. Of course, G(N − 1) is multiplied by 1/t^{N−1}. Inserting this in (10.11) leads to the fact that C(N) is multiplied by t. The same is the case with the throughputs X_i(N), 1 ≤ i ≤ M (see (10.10)). Indeed, if µ_i measures the number of service completions per minute, 60µ_i is the number of service completions per hour. Of course, the throughputs measured per hour are 60 times larger than the throughputs measured per minute.
10.4 Question 4
(It is recommended to first read the first remark on page 154.) Note that all parameters of the model depend on the matrix P only through the vector γ. In particular, two transition matrices having the same γ end up with the same values computed throughout this chapter. Next, for a transition matrix as specified in the question, the corresponding γ has uniform entries, which can be assumed to equal one. Replacing µ_i by µ_i/γ_i (both γ_i and µ_i from some given model) leads to a new ρ_i which equals 1/(µ_i/γ_i) = γ_i/µ_i, which coincides with the original ρ_i, 1 ≤ i ≤ M. Since G(N) (see (10.4)), e(n) (see (10.4)), and then the throughputs as defined in (10.10) are functions only of the (unchanged) ρ_i, 1 ≤ i ≤ M, the proof is completed.
10.5 Question 5
The first thing to do is to compute p_k = Σ_{i=1}^M ρ_i^k for k = 1, …, N. This is an O(NM) task. Next, initialize with G(0) = 1, which is fixed, and with G(n) = 0 for n = 1, …, N, which are only temporary values. Then:

For n = 1, …, N:
  For k = 1, …, n:
    G(n) ← G(n) + G(n − k) Σ_{j=1}^M ρ_j^k
  G(n) ← G(n)/n

Since we have a double loop, each with at most N steps, the complexity of this part is O(N²). Thus, the complexity of the algorithm is O(max{NM, N²}), which is inferior to the O(MN) complexity of both the convolution algorithm and the MVA algorithm, which were described in Sections 10.2 and 10.5.1, respectively.
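The recursion above is Newton's identity n·h_n = Σ_{k=1}^n p_k h_{n−k} for the complete homogeneous symmetric polynomials G(n) in ρ_1, …, ρ_M. A sketch comparing it against brute-force enumeration of the states (the ρ values are arbitrary illustrative choices):

```python
import itertools
from math import isclose

def G_newton(rho, N):
    # p[k] = sum_j rho_j^k (power sums), used for k = 1..N
    p = [sum(r**k for r in rho) for k in range(N + 1)]
    G = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        G[n] = sum(p[k] * G[n - k] for k in range(1, n + 1)) / n
    return G

def G_direct(rho, N):
    # brute force: sum of prod rho_i^{n_i} over all n with sum n_i = N
    total = 0.0
    for n in itertools.product(range(N + 1), repeat=len(rho)):
        if sum(n) == N:
            prod = 1.0
            for r, ni in zip(rho, n):
                prod *= r**ni
            total += prod
    return total

rho = [0.5, 0.8, 1.2]
G = G_newton(rho, 6)
assert all(isclose(G[n], G_direct(rho, n)) for n in range(7))
```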
10.6 Question 6
The Markov process underlying the model is time-reversible if and only if

e(n)µ_iP_ij = e(n + e_j − e_i)µ_jP_ji

for the case where n_i ≥ 1. Since e(n + e_j − e_i) = e(n)ρ_j/ρ_i, this condition is met if and only if

µ_iP_ij = (ρ_j/ρ_i)µ_jP_ji,

or equivalently, after some algebra, if and only if

γ_iP_ij = γ_jP_ji, 1 ≤ i, j ≤ M.

In the case where P_ij = 1/(M − 1), 1 ≤ i ≠ j ≤ M, all entries in the vector γ are identical, which immediately leads to this conclusion.
11 Chapter 11
11.1 Question 1
a) For the M/G/1/1 case:
(i) This is not a renewal process: from past loss instants one can learn (statistically) about the age of the service in progress at a loss, which in turn affects the time of the next loss.
(ii) This is a renewal process. The time between two renewals is distributed as the independent sum of a service time (distributed G) and an interarrival time (exponentially distributed). Note the need for Poisson arrivals here: as soon as service is completed, the next service commencement is within an exponentially distributed time, regardless of the service length.
(iii) This is also a renewal process. In fact, the underlying distribution is the same as for the entrance-to-service process.
(iv) This is a renewal process. In fact, it is also a Poisson process with rate equal to the arrival rate. The reason is that this is a symmetric queue (see the bottom of page 166), for which Theorem 11.3 holds. Note that the departure process here is the superposition of lost arrivals and service completions (and not only the latter process).
b) For the M/M/1/1 case:
(i) Now the loss process is a renewal process, as the age of service upon the current loss is irrelevant. As for the underlying distribution, we claim that its LST equals

λ(λ + s) / (s² + (2λ + µ)s + λ²).
This is proved as follows. Denote by F*_X(s) the LST we are after. Consider the first event after a loss, which can be either an arrival or a departure. It occurs after an exponential time with parameter λ + µ; hence, its LST equals (λ + µ)/(λ + µ + s). Independently, the next loss will take no additional time (with probability λ/(λ + µ), which corresponds to an arrival taking place first), or the next loss will require the sum of two independent random variables: an arrival time plus an independent replica of X (with the complementary probability µ/(λ + µ)). In summary,
F*_X(s) = [(λ + µ)/(λ + µ + s)] · [λ/(λ + µ) + (µ/(λ + µ)) · (λ/(λ + s)) · F*_X(s)].
The rest follows by some minimal algebra.
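That algebra can be double-checked numerically: the claimed LST satisfies the fixed-point equation at any s ≥ 0, and equals 1 at s = 0, as every LST must. A sketch with arbitrary illustrative rates:

```python
from math import isclose

lam, mu = 1.3, 2.7          # illustrative arrival and service rates

def F(s):
    # claimed LST of the time between losses in M/M/1/1
    return lam * (lam + s) / (s**2 + (2*lam + mu)*s + lam**2)

for s in [0.1, 0.5, 1.0, 3.0]:
    rhs = ((lam + mu)/(lam + mu + s)
           * (lam/(lam + mu) + mu/(lam + mu) * lam/(lam + s) * F(s)))
    assert isclose(F(s), rhs)

assert isclose(F(0.0), 1.0)   # an LST evaluates to 1 at s = 0
```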
(ii) The entrance-to-service process is a renewal process, as in any M/G/1/1 queue above. Here we can be specific and say that the underlying distribution is the sum of two independent exponentially distributed random variables, one with parameter λ, the other with parameter µ (the arrival rate and the service rate, respectively).
(iii) Looking at the general case of M/G/1/1, we again reach the sameconclusion as in the previous item (that of the loss processes).
(iv) This is just a special case of the M/G/1/1 model dealt with above. In particular, the answer is that this is a Poisson process with rate λ.
11.2 Question 2
For the M/G/s/s model:
a) The loss process is not a renewal process, just as it is not in the M/G/1/1 case. Yet, it is one in the M/M/2/2 case.
b) The entrance to service is not a renewal process and this is also thecase in the M/M/2/2 model.
c) The previous conclusion holds here too.
d) The situation here is as in the M/G/1/1 case, due to Theorem 11.3. See Question 1 above. In particular, this is a Poisson process.
11.3 Question 3
What Theorem 11.5 basically says is that, as long as a new arrival commences service as soon as it arrives (and with the full dedication of the server), then in order to get a product form for the limit probabilities (and the other phenomena associated with it, such as insensitivity, a Poisson departure process, etc.), the order of return to service of those previously preempted is not important, as long as it is done without anticipation, namely without reference to past or future service times of those in line from whom one needs to be selected to resume service. Thus, the variation suggested here does not change anything. More formally, in the presentation of the model, the fourth bullet from the top of page 176 needs to state that the transition rate from (d_1, d_2, …, d_n) to (d_{π_{n−1}^{−1}(1)+1}, d_{π_{n−1}^{−1}(2)+1}, …, d_{π_{n−1}^{−1}(n−1)+1}) equals µ(q_{d_1+1}/q_{d_1})s_{π(n−1)}. Next, when we update the proof, we need to consider only Case 2, as the rest stays unchanged. Specifically, x = (d_1, d_2, …, d_n) and y = (d_{π_{n−1}^{−1}(1)+1}, d_{π_{n−1}^{−1}(2)+1}, …, d_{π_{n−1}^{−1}(n−1)+1}), and the transition rate from x to y equals µ(q_{d_1+1}/q_{d_1})s_{π(n−1)}. The transition rate from y to x in the time-reversed process equals µs_{π(n−1)}. The rest of the proof is the same: an extra term equal to s_{π(n−1)} now appears in both (11.17) and (11.18).
12 Chapter 12
12.1 Question 1
(Benny)
a) (i) [Transition-rate diagram, with levels j = 0, 1, 2, … on the horizontal axis and phases i = 0, 1 on the vertical axis: rate λ moves one level up within either phase, rate µ moves one level down in phase 0, and rate τ moves from phase 1 to phase 0 within a level.]
(ii)

Q0(s, t) = λ if s = t, and 0 otherwise.

Q2(s, t) = µ if s = t = 0, and 0 otherwise.

Q1(s, t) = τ if s = 1, t = 0; 0 if s = 0, t = 1; −(λ + µ) if s = t = 0; −(λ + τ) if s = t = 1.
(iii)

Q0 = ( λ  0
       0  λ )

Q1 = ( −(λ+µ)      0
          τ    −(λ+τ) )

Q2 = ( µ  0
       0  0 )
(iv) For the case where j ≥ 1, the balance equations, in matrix notation, are

π_jQ0 + π_{j+1}Q1 + π_{j+2}Q2 = 0,

or, in detail,

(λ + µ)π_{0,j+1} = λπ_{0,j} + τπ_{1,j+1} + µπ_{0,j+2}

and

(λ + τ)π_{1,j+1} = λπ_{1,j}.
b) We use a probabilistic argument (as stated at the end of Theorem 12.1) to show that the rate matrix equals

R = ( λ/µ       0
      λ/µ   λ/(λ+τ) ).
First, we look at the corresponding discrete-time process which we getby uniformization (we take the value of c to equal λ+ µ+ τ).
[Transition-probability diagram of the uniformized chain: the same transitions as in the rate diagram above, now with probabilities λ/c, µ/c, and τ/c, plus self-loops so that the transition probabilities out of each state sum to one.]
The following observation will be needed in the sequel. Once the process is in state (0, j + 1), unless the level goes down, a probability µ/c event, an additional visit to state (0, j + 1) is guaranteed. Thus, the number of visits to state (0, j + 1) prior to reaching level j follows a geometric distribution with mean c/µ.
For a process that commences at (0, j), in order to reach state (0, j + 1) prior to returning to level j, it must move immediately to state (0, j + 1), which is a probability λ/c event. By the argument above, the number of future visits there (inclusive of the first one) has mean c/µ. Thus, R_00 equals (λ/c)(c/µ) = λ/µ.

For a process that commences at (1, j), in order to reach state (0, j + 1) prior to returning to level j, it must move immediately to state (1, j + 1), which is a probability λ/c event. Once in state (1, j + 1), it will reach state (0, j + 1) for sure, sooner or later. By the argument above, the number of future visits there (inclusive of the first one) has mean c/µ. Thus, R_10 equals λ/µ. On the other hand, once in state (1, j + 1), the process will revisit this state with probability µ/c (a self-loop in the uniformized chain) or leave it with no possibility of revisiting prior to returning to level j, implying that the number of visits there (inclusive of the first one) follows a geometric distribution with mean 1/(1 − µ/c). Thus, R_11 equals (λ/c) · 1/(1 − µ/c) = λ/(λ + τ).

Finally, a process that commences at (0, j) will never reach state (1, j + 1) prior to returning to level j, and hence R_01 = 0.
An alternative way is to verify that R is a solution to (12.23):

R²Q2 + RQ1 + Q0

= ( (λ/µ)²                   0
    (λ/µ)² + λ²/((λ+τ)µ)   (λ/(λ+τ))² ) ( µ  0
                                          0  0 )
+ ( λ/µ       0
    λ/µ   λ/(λ+τ) ) ( −(λ+µ)      0
                         τ    −(λ+τ) )
+ ( λ  0
    0  λ )

= ( λ²/µ                 0
    λ²/µ + λ²/(λ+τ)      0 )
+ ( −λ(λ+µ)/µ                     0
    −λ(λ+µ)/µ + λτ/(λ+τ)        −λ )
+ ( λ  0
    0  λ )

= 0.
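The matrix computation above can be replayed numerically; a sketch with illustrative rates, using plain 2 × 2 matrix arithmetic:

```python
from math import isclose

lam, mu, tau = 1.0, 2.0, 0.5    # illustrative rates

Q0 = [[lam, 0.0], [0.0, lam]]
Q1 = [[-(lam + mu), 0.0], [tau, -(lam + tau)]]
Q2 = [[mu, 0.0], [0.0, 0.0]]
R  = [[lam/mu, 0.0], [lam/mu, lam/(lam + tau)]]

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

# R^2 Q2 + R Q1 + Q0 should be the zero matrix
res = add(add(mul(mul(R, R), Q2), mul(R, Q1)), Q0)
assert all(isclose(res[i][j], 0.0, abs_tol=1e-12)
           for i in range(2) for j in range(2))
```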
c) The characteristic polynomial of R is

(λ/µ − w)(λ/(λ + τ) − w),

and the two solutions, i.e., the two eigenvalues, are w_1 = λ/µ and w_2 = λ/(λ + τ).
(i) A necessary and sufficient condition for stability is |w_i| < 1, 1 ≤ i ≤ 2 (see 12.5). Since 0 < w_2 < 1, it is left to guarantee that w_1 < 1, i.e., λ < µ.
(ii) From

π_{j+1} = π_jR = ((λ/µ)(π_{0j} + π_{1j}), (λ/(λ + τ))π_{1j})

it is possible to see that the probability of phase 1 decays with a geometric factor of λ/(λ + τ). If τ is small enough, more specifically τ < µ − λ, then w_2 > w_1, so w_2 is the largest eigenvalue, and its corresponding left eigenvector v_2, which by the Perron–Frobenius theorem is known to be positive, real, and unique (up to normalization), satisfies

(v_2)_i = lim_{j→∞} π_{ij}/(π_{0j} + π_{1j}).

Iterating the first component shows that π_{0j}/π_{1j} → w_1/(w_2 − w_1), so v_2 is proportional to (w_1/(w_2 − w_1), 1) and, for large j, both π_{0j} and π_{1j} decay with the same geometric factor λ/(λ + τ).
12.2 Question 2
CANCELED
12.3 Question 3
(Benny)
a)
Q0(s, t) = λ if s = r, t = 1, and 0 otherwise.

Q2(s, t) = µ if 1 ≤ s = t ≤ r, and 0 otherwise.

Q1(s, t) = −(λ + µ) if s = t; λ if 1 ≤ s ≤ r − 1, t = s + 1; and 0 otherwise.
Or, equivalently, Q0 is the r × r matrix whose entries are all zero except (Q0)_{r,1} = λ,
Q2 = µI (with I the r × r identity matrix),
and
Q1 is the upper bidiagonal r × r matrix with −(λ + µ) on the diagonal and λ on the superdiagonal:

Q1 = ( −(λ+µ)     λ       0    · · ·      0
          0    −(λ+µ)     λ    · · ·      0
          ⋮               ⋱      ⋱        ⋮
          0       0       0    · · ·   −(λ+µ) ).
b) The process can be viewed as having its phases ordered in a cycle, where the move to the next phase occurs upon each completion of a stage in the arrival process. These transitions are not affected by changes in the level. The phase process has generator matrix
Q = ( −λ    λ    0   · · ·    0    0
       0   −λ    λ   · · ·    0    0
       ⋮          ⋱    ⋱           ⋮
       0    0    0   · · ·   −λ    λ
       λ    0    0   · · ·    0   −λ )
and clearly π = (1/r, …, 1/r) solves the balance equations 0 = πQ; hence, the limit distribution is uniform on the integers 1 through r.
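Checking that the uniform vector solves 0 = πQ takes only a few lines; a sketch for an illustrative r and λ:

```python
from math import isclose

lam, r = 1.7, 4     # illustrative stage rate and number of stages

# generator of the cyclic phase process: -lam on the diagonal,
# lam to the next phase (wrapping from phase r back to phase 1)
Q = [[0.0]*r for _ in range(r)]
for s in range(r):
    Q[s][s] = -lam
    Q[s][(s + 1) % r] = lam

pi = [1.0/r]*r
balance = [sum(pi[s]*Q[s][t] for s in range(r)) for t in range(r)]
assert all(isclose(b, 0.0, abs_tol=1e-12) for b in balance)
```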
c) (i) When one approximates the solution of (12.23) via the entry-wise monotone matrix sequence {X(k) | k ≥ 0}, defined through the recursion stated in (12.24) while initializing with X(0) = 0, one gets X(1) = A_0. Due to the shape of Q0, all the entries of X(2) but those in the last row equal zero too. Moreover, as the iterative procedure continues, the same is the case with all the matrices X(k), k ≥ 0. As {X(k)}_{k=0}^∞ converges, as k goes to infinity, to a solution of (12.23), R itself possesses the same shape. In summary, R_ij = 0 for 1 ≤ i ≤ r − 1 and 1 ≤ j ≤ r.
Thus, for some row vector w ∈ R^r,

R = (  0   · · ·   0
       ⋮           ⋮
       0   · · ·   0
      w_1  · · ·  w_r )
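The structural claim — every iterate, and hence R, has non-zero entries only in its last row — can be observed numerically. A sketch assuming the standard successive-substitution recursion X(k+1) = −(Q0 + X(k)²Q2)Q1^{−1} (an equivalent rearrangement of (12.23), since Q1 is invertible; the specific rates below are illustrative, and phases 1, …, r are mapped to indices 0, …, r−1):

```python
lam, mu, r = 1.0, 1.5, 4    # illustrative rates; stable since lam/r < mu
d = -(lam + mu)             # diagonal entry of Q1

Q0 = [[0.0]*r for _ in range(r)]
Q0[r-1][0] = lam            # completing stage r: level up, phase back to 1
Q2 = [[mu if i == j else 0.0 for j in range(r)] for i in range(r)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def solve_XQ1(B):
    # solve X*Q1 = B, exploiting that Q1 is upper bidiagonal
    # (diagonal d, superdiagonal lam): column-by-column substitution
    X = [[0.0]*r for _ in range(r)]
    for i in range(r):
        X[i][0] = B[i][0] / d
        for j in range(1, r):
            X[i][j] = (B[i][j] - lam*X[i][j-1]) / d
    return X

X = [[0.0]*r for _ in range(r)]
for _ in range(500):
    X2Q2 = mul(mul(X, X), Q2)
    B = [[-(Q0[i][j] + X2Q2[i][j]) for j in range(r)] for i in range(r)]
    X = solve_XQ1(B)

# every row but the last is (numerically) zero; the last row is not
assert all(abs(X[i][j]) < 1e-10 for i in range(r-1) for j in range(r))
assert X[r-1][0] > 1e-6
```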
(ii) Due to the structure of R, we get that

R^j = w_r^{j−1}R, j ≥ 1,

and as π_j = π_0R^j, j ≥ 0, we get that

π_j = π_{r0}w_r^{j−1}w, j ≥ 1.

In other words,

π_{ij} = π_{r0}w_iw_r^{j−1}, 1 ≤ i ≤ r, j ≥ 1. (38)

Denote the number of customers in the system by L and the number of completed stages in the arrival process by S. Then

P(S = i | L = j) = π_{ij}/Σ_{k=1}^r π_{kj} = w_i/Σ_{k=1}^r w_k, 1 ≤ i ≤ r, j ≥ 1,

and as long as j ≥ 1, this probability is not a function of j. Hence, given that L ≥ 1, i.e., the server is busy, S and L are independent.
(iii) Using (38) and the fact that the limit distribution of the phase is uniform, we get

1/r = π_{r·} = π_{r0} + π_{r0} Σ_{j=1}^∞ w_rw_r^{j−1} = π_{r0}/(1 − w_r),

implying π_{r0} = (1 − w_r)/r, and

1/r = π_{i·} = π_{i0} + π_{r0} Σ_{j=1}^∞ w_iw_r^{j−1} = π_{i0} + [(1 − w_r)/r] · w_i/(1 − w_r), 1 ≤ i ≤ r,

implying π_{i0} = (1 − w_i)/r. Finally, from (38) we get

π_{ij} = [(1 − w_r)/r]w_iw_r^{j−1}, 1 ≤ i ≤ r, j ≥ 1.