
BULLETIN OF MATHEMATICAL BIOLOGY, VOLUME 38, 1976

ON SOME MARKOV MODELS OF CERTAIN INTERACTING POPULATIONS

D. KANNAN
Department of Mathematics, University of Georgia, Athens, Georgia 30602, U.S.A.

We give a stochastic foundation to the Volterra prey-predator population in the following case. We take Volterra's predator equation and let a free host birth and death process support the evolution of the predator population. The purpose of this article is to present a rigorous population sample path construction of this interacted predator process and to study the properties of this interacted process. The construction yields a strong Markov process. The existence of a steady-state distribution for the interacted predator process means the existence of an equilibrium population level. We find a necessary and sufficient condition for the existence of a steady-state distribution. Next we see that if the host process possesses a steady-state distribution, so does the interacted predator process, and this distribution satisfies a difference equation. For special choices of the auto death and interaction parameters a and b of the predator, whenever the host process visits the particular state a* = a/b the predator takes rest (saturates) from its evolution. We find the probability of asymptotic saturation of the predator.

1. Introduction. Volterra, Lotka and Kolmogorov laid the mathematical foundation of the deterministic theory of population dynamics. Volterra's deterministic models provided a starting point for the development of several stochastic models. From the deterministic theory it is well known that, given the population size at any moment (and the system parameters), it is possible to determine the subsequent behavior of the population. Translating this into probabilistic terms, we have that Volterra's population systems behave like Markovian systems. Since 1939 several stochastic methods have been used to study Volterra's prey-predator system. While some used a birth-death process approach, several others used the Fokker-Planck equation method for the joint probability density of the population sizes, and other methods are known.


To understand the dynamical evolutionary behavior of the prey-predator system in random environments we used white-noise type perturbations (cf. Gard and Kannan, 1976). In this article we present a stochastic foundation of the evolution of the interacted predator. Since we are looking at a Markovian behavior, we want to give a constructive definition of this Markov process. Because of the approach taken here, the modern theory of jump Markov processes can be used in further investigations of our model.

As can be seen from (1.1) below, the encounters between predator and prey result in instantaneous effects on the predator size. However, biosystems are not temporally isolated and they have memory. Such systems have not yet received the necessary stochastic foundation. In a future publication we will present analytical and probabilistic foundations for prey-predator systems which incorporate hereditary actions.

Transport processes, autocatalytic reactions, prey-predator evolutions, biochemical oscillators, neural systems, carrier-borne epidemics, and political parties are some of the numerous examples of assemblies of elements which influence each other through competition or cooperation. Several stochastic methods have been developed to study such interacting systems. In systems such as transport processes, population dynamics, carrier-borne epidemics, etc., one approach is to look at the evolution of one element of the system as an interacted process under the action of the other element(s) of the system (cf. Becker, 1970; Bharucha-Reid, 1960; Puri, 1975; Watanabe, 1968). A much-used procedure in this kind of modeling is to begin with the probability function for transition to neighboring states during an infinitesimal time period (cf. Becker, 1970; Puri, 1975; and Bharucha-Reid, 1960 for general ideas). In this article we present a population sample path set-up for a predator population evolving under the influence of a free prey (host) population which we take as a birth-death process (see also Watanabe, 1968). This set-up can easily be modified to other systems acted upon by pure jump Markov processes. The ideas used here are extensions of those in the construction of jump processes (for a discussion of jump Markov processes we refer to Blumenthal and Getoor (1968), and also Doob (1953), Dynkin (1965), Feller (1966) and Moyal (1957)).

The purpose of this article is to study the evolution of a predator population as an interacted or reacted process under the influence of a free prey population. We take this host process as a birth-death process. (Using a similar analysis one can look at the evolution of the prey under the action of the predator.) We present a population sample path set-up to follow the evolution of the predator governed by the differential equation

dx/dt = -ax + bxy,   x(0) = ξ,   t ≥ 0,   (1.1)

where y(t) = y(t, ω) is a birth-death process influencing the predators, and ω ∈ Ω, the supporting probability space. The object of study is the reacted process (x(t), y(t)), where x(t) is the solution of (1.1) with y(0) = i, the initial size of the host population.

First we construct the reacted predator population process (x(t, ω), y(t, ω)) and show that this interacted process is a Markov process. Next we look at the corresponding transition function from different angles, and show that the reacted process is a strong Markov process. The classification of the reacted predator population into transient/recurrent processes follows from the classification of the prey birth-death process. The steady-state distribution of the population is of fundamental importance in population dynamics. First, without special reference to the host population, we give a necessary and sufficient condition for the existence of a steady-state distribution for the reacted predator process. Next we show that if the host birth-death process has a steady-state distribution, then the reacted predator population process has a steady-state distribution, and this distribution satisfies a second-order difference equation.

2. Construction of the Interacted Predator Population Process. In a Volterra model the evolution of the predator population is governed by the differential equation

(d/dt) x(t) = -a x(t) + b x(t) y(t),   x(0) = ξ,   t ≥ 0.   (2.1)

Here a is the resultant auto death rate, b is the interaction rate, x(t) is the predator size, and y(t) is the prey or host size at time t. We will stay with this model even though, without much difficulty, we could consider more general equations with interaction terms or the Kolmogorov model (Rescigno and Richardson, 1973). In order to pursue a stochastic analysis we assume that the host population y(t) is a birth-death process y(t, ω) on the state space S = {0, 1, 2, 3, ...}. For the sake of convenience we shall take the state space of x(t) to be the right half line R₊: x > 0. Thus, we want to study the evolution of the predator population under the influence of a host birth-death process. In this section we construct the interacted predator process (x(t, ω), y(t, ω)), ω ∈ Ω, the basic sample space. We show that this process is a Markov process, and, under a boundedness assumption on the waiting times between interactions, that with probability one only a finite number of interactions occur in a finite length of time.

We can and do assume that, for almost every ω, the differential equation (2.1) has a solution {x(t, ω), t ≥ 0}. We shall write (2.1) in the equivalent form

x(t, ω) = ξ e^{-at} + b ∫_0^t x(s) y(s, ω) e^{-a(t-s)} ds.   (2.2)


Denote the solution x(t, ω) by x*(t, ξ; y) to represent the evolution of the predator x(t, ω) guided by the action y (i.e. y preys). Let y(0) = i ∈ S. For a fixed action i ∈ S, the predator evolves according to x*(t, ξ; i) until a new action j ∈ S is taken (i.e. until there is an interaction at a random time τ₁). Now it evolves according to x*(t, x(τ₁); j), t ≥ τ₁, until the next interaction at random time τ₂, when a new action k ∈ S is prescribed. We continue this until the explosion or killing time ζ. Between interactions the predator evolves independently of the host, constrained only by an appropriate prey size. After we piece these processes together we get the evolution of the predator population, and we are interested in this interacted or reacted process. We shall take the killing time ζ = ∞, almost surely (this holds under some boundedness assumptions).
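Between interactions the action i is fixed, so (2.1) reduces to a scalar linear ODE and x*(t, ξ; i) has a closed form. The following short derivation is immediate from (2.1), though it is not displayed in the original:

```latex
% With y(s) \equiv i on an interjump interval, (2.1) reads
%   dx/dt = (bi - a) x,  x(0) = \xi,
% whence
\[
  x^{*}(t, \xi; i) = \xi \, e^{(bi - a)t}, \qquad t \ge 0 .
\]
% The exponent vanishes exactly when i = a/b = a^{*}; this is the
% "saturation" state discussed in Section 3.
```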

Define the evolution or propagation operator

e(·, ·, ·): R₊ × R₊ × S → R₊ × S

as follows:

e(t, ξ, y) = (x*(t, ξ; y), y),   (2.3)

for all (ξ, y) ∈ R₊ × S and all t ∈ R₊. From (2.2)-(2.3) and the theory of differential equations (cf. Hille, 1969) we get the following:

(I) e(0, ξ, i) = (x*(0, ξ; i), i) = (ξ, i);   (2.4)

(II) since x*(t, ξ; i) is the unique solution of the initial value problem (2.1), we have

e(t + s, ξ, i) = (x*(t + s, ξ; i), i) = (x*(t, x*(s, ξ; i); i), i) = e(t, x*(s, ξ; i), i);   (2.5)

(III) e(t, ξ, i) is continuous in t, for every (ξ, i) ∈ R₊ × S;

(IV) e(t, ξ, i) is continuous in ξ (from variation of initial data).

Clearly the interacted predator process will be a jump process. The predators encounter the prey at random times, and at the end of each hunt the population jumps to a new level. Because the interacted predator evolves under the support of a prey birth-death process, the jump times and jump distributions of the interacted predator process depend on similar characteristics of the prey.

Let a(i) be the parameter of the waiting time τ(i) in i ∈ S for the process y(t, ω). Using the same a, denote the parameter a: R₊ × S → (0, ∞] of the waiting time between interactions by

a(ξ, i) = a(i).   (2.6)

We shall use the same K to denote different jump distributions. Let K(y, B) be the jump distribution of the host birth-death process y(t, ω), where y ∈ S, B ∈ 2^S = 𝒮.

Define, for x ∈ R₊, A ∈ ℛ₊, y ∈ S, B ∈ 2^S = 𝒮,

K((x, y), A × B) = K(x, A) K(y, B) = δ_x(A) K(y, B).   (2.7)

Due to interaction, the system jumps according to the jump distribution K(·, ·): (R₊ × S) × (ℛ₊ × 𝒮) → [0, 1]. In the Volterra model that we consider, the evolution of the system changes as soon as the action occurs. Thus we have

K((x, i), R₊ × {i}) = 0,   (x, i) ∈ R₊ × S.   (2.8)

Now we proceed to construct the reacted predator process evolving under the action of the host birth-death process y(t, ω) (see also Blumenthal and Getoor (1968), Ch. I). Let

Ω = (R₊ × R₊ × S)^N = {ω: N → R₊ × R₊ × S},

where N = {0, 1, 2, 3, ...}, and let ℳ = ⊗_N (ℛ₊ × ℛ₊ × 𝒮). For a generic ω = {(t_n, x_n, i_n), n ≥ 0} ∈ Ω, define

X_n(ω) = (t_n, x_n, i_n),
T_n(ω) = t_n,
S_n(ω) = (x_n, i_n),   (2.9)
p_n(ω) = x_n,
h_n(ω) = i_n.

Define the kernel q: (R₊ × R₊ × S, ℛ₊ × ℛ₊ × 𝒮) → [0, 1] as follows:

q((t, x, y), ds × du × dv)
   = 0,   if s ≤ t,
   = K((x*(s - t, x; y), y), du × dv) a(x, y) e^{-a(x,y)(s-t)} ds,   if s > t   (2.10)
   (= K(e(s - t, x, y), du × dv) a(x, y) e^{-a(x,y)(s-t)} ds).

Now we define the reacted predator population process by

X(t) = (x(t), y(t)) = (x*(t - T_n, x_n; i_n), i_n) = e(t - T_n, x_n, i_n),   T_n ≤ t < T_{n+1}.   (2.11)
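As a concrete illustration of (2.11), here is a minimal simulation sketch in Python. The constant host rates λ_i ≡ λ (births) and μ_i ≡ μ (deaths, with μ₀ = 0) and all numerical values are illustrative assumptions, not part of the paper; between jumps the predator is advanced by the closed form x*(τ, x; i) = x e^{(bi-a)τ} noted above.

```python
import math
import random

# A minimal sketch of the interacted predator process X(t) = (x(t), y(t)) of (2.11).
# The host y is a birth-death process on S = {0, 1, 2, ...}; between interaction
# times the predator follows dx/dt = -a x + b x y with y frozen at its current
# state i, i.e. x is multiplied by exp((b*i - a) * dt). The rates lam, mu and all
# parameter values are illustrative assumptions.

def simulate(xi, i0, t_end, a=1.0, b=0.5, lam=0.8, mu=1.0, seed=0):
    rng = random.Random(seed)
    t, x, i = 0.0, xi, i0
    path = [(t, x, i)]                        # the skeleton (T_n, x_n, i_n)
    while True:
        rate = lam + (mu if i > 0 else 0.0)   # waiting-time parameter a(i), cf. (2.6)
        tau = rng.expovariate(rate)           # exponential holding time in state i
        if t + tau >= t_end:                  # horizon reached before the next jump
            path.append((t_end, x * math.exp((b * i - a) * (t_end - t)), i))
            return path
        x *= math.exp((b * i - a) * tau)      # x*(tau, x; i) between interactions
        t += tau
        # host jump distribution K(i, .), cf. (2.7)-(2.8): y must change state
        i += 1 if rng.random() < lam / rate else -1
        path.append((t, x, i))

for t_n, x_n, i_n in simulate(xi=1.0, i0=3, t_end=5.0)[:6]:
    print(f"t={t_n:6.3f}  x={x_n:8.4f}  y={i_n}")
```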

Next define the shift operator θ_t: Ω → Ω (θ_t(ω) = {ω_k, k ≥ 0}) by

ω₀ = (0, x(t, ω), y(t, ω)),   T_n(ω) ≤ t < T_{n+1}(ω),
ω_k = ((T_{n+k} - t) ∨ 0, (x_{n+k}, i_{n+k})),   k ≥ 1.   (2.12)

Note that X_s ∘ θ_t = X_{s+t}, s, t ∈ R₊ = [0, ∞].


Define ℳ_t^s = σ[X_u: s ≤ u ≤ t], ℳ_t = ℳ_t^0; and 𝒜_s = σ[X_u: s ≤ u < ∞], 𝒜 = 𝒜_0.

Next we proceed to show that the reacted predator process X = (Ω, ℳ, ℳ_t, X_t, θ_t, P^{(ξ,i)}) is a Markov process.

Because of the continuity properties of the evolution operator, the following is clear.

LEMMA 2.1. The evolution operator e(t, ξ, i) is ℛ₊ × ℛ₊ × 𝒮 / ℛ₊ × 𝒮-measurable.

Set, for s ∈ [kt2^{-n}, (k+1)t2^{-n}),

e_n(s, ξ, i) = e((k+1)t2^{-n}, ξ, i).

Then e_n is clearly measurable, and hence e = lim_n e_n is also measurable.

LEMMA 2.2. The kernel q: (R₊ × R₊ × S, ℛ₊ × ℛ₊ × 𝒮) → [0, 1] defined by (2.10) is a Markovian kernel.

This readily follows from theorems on product spaces (cf. Neveu, 1965, Ch. III (also Ch. V)).

LEMMA 2.3. If μ is a probability measure on R₊ × R₊ × S, there is a probability measure P_μ on ℳ such that the coordinate mappings {X_n, n ≥ 0} form a temporally homogeneous Markov chain over (Ω, P_μ) with initial distribution μ and transition kernel q.

This is a consequence of a theorem of Ionescu Tulcea (cf. Neveu, 1965).

Here

P_μ(X_{n+1} ∈ A | X_0, ..., X_n) = ∫_A q((T_n, p_n, h_n), ds × du × dv),   (2.13)

and, for any bounded (ℛ₊ × ℛ₊ × 𝒮)^{n+1}-measurable function f,

E_μ[f(X_0, ..., X_n)] = ∫ μ(dz_0) ∫ q(z_0, dz_1) ⋯ ∫ q(z_{n-1}, dz_n) f(z_0, ..., z_n),   (2.14)

where X_i = (t_i, x_i, y_i). Also, for α > 0,

E_μ{exp[-α(T_{n+1} - T_n)]} = E_μ[a(S_n)/(α + a(S_n))].   (2.15)
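Equation (2.15) is simply the Laplace transform of an exponential holding time: conditionally on S_n, T_{n+1} - T_n is exponentially distributed with parameter a(S_n) (see (2.6) and (2.10)), so that

```latex
\[
  E\!\left[ e^{-\alpha (T_{n+1} - T_n)} \,\middle|\, S_n \right]
  = \int_0^{\infty} e^{-\alpha s} \, a(S_n)\, e^{-a(S_n) s} \, ds
  = \frac{a(S_n)}{\alpha + a(S_n)} ,
\]
```

and taking expectations over S_n gives (2.15).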

Now, letting α → ∞, we get P_μ[T_{n+1} ≤ T_n] = 0. From (2.8), P_μ[h_{n+1} = h_n] = 0. Also, for (ξ, i) ∈ R₊ × S, P^{(ξ,i)}[T_0 = 0] = 1. Hence we can and do assume, without any loss of generality, that

Ω = {ω = {X_n = (t_n, x_n, i_n), n ≥ 0}: 0 = t_0 < t_1 < t_2 < ⋯, and i_{n+1} ≠ i_n for n ≥ 0}.

Hereafter this is our Ω, and the σ-algebras are the corresponding traces.


LEMMA 2.4. {ℳ_t, t ≥ 0} is a right continuous family of σ-algebras.

Define φ(t, ω) = (x_n(ω), i_n(ω)), T_n ≤ t < T_{n+1}. Let 𝔣_t = σ[φ(u): u ≤ t] and β_t = σ[y(u): u ≤ t]. Since φ(t) is a right continuous step process, it is not hard to see that {𝔣_t} is a right continuous family; also 𝔣_t = 𝔣_{t+}. Note, from the construction of the reacted process, that 𝔣_t ⊂ ℳ_t ∩ β_t, and that the T_n's are 𝔣_t-stopping times, and hence ℳ_t- and β_t-stopping times. Let B = {X(s) ∈ A}, for A ∈ ℛ₊ × 𝒮 and s ≤ t. Then

B = ∪_n [B ∩ {T_n ≤ s < T_{n+1}}]
  = ∪_n {(x*(s - T_n, x_n; i_n), i_n) ∈ A} ∩ {T_n ≤ s < T_{n+1}}
  = ∪_n {e(s - T_n, x_n, i_n) ∈ A} ∩ {T_n ≤ s < T_{n+1}}.

From Lemma 2.1 it is now clear that ℳ_t = 𝔣_t = 𝔣_{t+}. This gives us our lemma.

THEOREM 2.1. The reacted predator process X = (Ω, ℳ, ℳ_t, X_t, θ_t, P^{(ξ,i)}) is a Markov process.

Proof. To establish the theorem we have to show that, for all bounded continuous functions f and A ∈ ℳ_s (see Blumenthal and Getoor, 1968),

E^{(ξ,i)}[f(X_{s+t}); A] = E^{(ξ,i)}[E^{X_s}[f(X_t)]; A],

or, using Laplace transforms, it suffices to show that

E^{(ξ,i)}[∫_s^∞ e^{-αt} f(X_t) dt; A] = E^{(ξ,i)}[e^{-αs} R_α f(X_s); A],   (2.16)

where R_α is the resolvent operator R_α f(z) = E^z ∫_0^∞ e^{-αt} f(X_t) dt, α ≥ 0.

Define 𝔉_n = σ[X_k, k = 1, 2, ..., n]. For A ∈ ℳ_s there is an A_n ∈ 𝔉_n such that A ∩ {T_n ≤ s < T_{n+1}} = A_n ∩ {s < T_{n+1}}. For, if A is of the form A = X_u^{-1}(B), then

A ∩ {T_k ≤ u < T_{k+1}} = {(x*(u - T_k, S_k), h_k) ∈ B} ∩ {T_k ≤ u < T_{k+1}} = A_k(u) ∩ {u < T_{k+1}},

where A_k(u) ∈ 𝔉_n for k ≤ n and u ≤ s. Thus it suffices to show (2.16) with A replaced by B_n = A_n ∩ {s < T_{n+1}}. Rewrite the integral on the left side of (2.16) as

J = ∫_s^{T_{n+1}} + Σ_{k=n+1}^∞ ∫_{T_k}^{T_{k+1}} = J_0 + Σ_{k=n+1}^∞ J_k.   (2.17)

Set μ = δ_{(ξ,i)} in (2.14). Denote {s < T_{n+1}} by C_{n+1}. We continue to use A_n and C_{n+1} to denote their corresponding sets in 𝔉_{n+1} and 𝔉_{n+2} respectively. Let Q^n(x, A) denote the nth iterate of a kernel Q(x, A): Q^0(x, A) = δ_x(A), Q^{n+1}(x, A) = ∫ Q^n(x, dy) Q(y, A). Now, denoting the right side of (2.14) by Q_μ^n f with μ = δ_{(ξ,i)}, we see that

E^μ[e^{-αs} R_α f(X_s); B_n]
  = Q_μ^{n+1}[e^{-αs} R_α f(x*(s - T_n, x_n; i_n), i_n) I_{A_n} I_{C_{n+1}}]
  = Q_μ^{n+1}[e^{-a(x_n,i_n)(s-T_n)} I_{A_n} e^{-αs} R_α f(x*(s - T_n, x_n; i_n), i_n)],   (2.18)

where I_A is the indicator function of the set A. Returning to the left hand side of (2.16), a similar computation gives

E^μ[∫_s^∞ e^{-αt} f(X_t) dt; B_n] = Q_μ^{n+1}[e^{-a(x_n,i_n)(s-T_n)} I_{A_n} e^{-αs} R_α f(e(s - T_n, S_n))].   (2.19)

Combining (2.16)-(2.19) we obtain the Markov property of X(t), and hence the theorem.

That X(t) is a strong Markov process will be seen in the next section. We have assumed that the killing time ζ of X(t) is infinite a.s. One can also consider the case ζ < ∞ by adjoining, in a standard fashion, a fictitious state Δ to the state space. Since we are studying the action of the prey process on the predator process, we can take the waiting-time parameter a to be bounded. This implies that ζ = ∞, a.s. We have the following theorem.

THEOREM 2.2. If a(x, i) is bounded, 0 < a(x, i) ≤ M < ∞, then in any finite time interval only a finite number of interactions can occur, with probability one.

Proof. Recall that, if τ is any {ℳ_t}-stopping time, then ℳ_τ denotes the σ-algebra of events A ∈ ℳ such that A ∩ {τ ≤ t} ∈ ℳ_t for all t. Also note that the T_n's are ℳ_t-stopping times.

For any α > 0,

P_μ[T_n ≤ t] ≤ E_μ[exp{α(t - T_n)}] = e^{αt} E_μ[exp(-αT_n)]
   = e^{αt} E_μ[e^{-αT_{n-1}} E^{ℳ_{T_{n-1}}}{exp(-α(T_n - T_{n-1}))}].


From (2.15),

E^{ℳ_{T_{n-1}}}{exp[-α(T_n - T_{n-1})]} ≤ M/(α + M) < 1.

Therefore

P_μ[T_n ≤ t] ≤ e^{αt} (M/(α + M))^n → 0,   as n → ∞.

This completes the proof.

In the above discussion we could have taken the jump time distributions in a much more general form:

P[T_1 > t] = exp{-∫_0^t λ(s, x(s), y(s)) ds},

where λ is a non-negative bounded function on R₊ × R₊ × S. Next,

P[T_2 - T_1 > t] = exp{-∫_{T_1}^{T_1 + t} λ(s, x(s, x_1), y(s, i_1)) ds},

and so on. Proceeding as before, one could establish Theorems 2.1 and 2.2 in this case also.
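The paper does not prescribe a sampling scheme for such state-dependent jump times. One standard possibility, sketched here under stated assumptions, is Lewis-Shedler thinning, which uses only the boundedness of λ: candidate points are drawn from a homogeneous Poisson process with the bounding rate Λ and accepted with probability λ/Λ. The intensity lam_fn below is hypothetical, as are all parameter values.

```python
import math
import random

# Sampling the first interaction time T1 when
# P[T1 > t] = exp(-int_0^t lam(s, x(s), y(s)) ds), by Lewis-Shedler thinning.
# Assumes only that 0 <= lam <= LAMBDA, as in the text.

LAMBDA = 2.0                                   # assumed bound on the intensity

def lam_fn(s, x, y):
    # hypothetical bounded intensity; any measurable function in [0, LAMBDA] works
    return LAMBDA * (1.0 - math.exp(-x * (y + 1) / 10.0))

def first_jump_time(x0, i, a=1.0, b=0.5, rng=random.Random(0)):
    """Sample T1 for the predator started at x0 with the host frozen at state i."""
    s = 0.0
    while True:
        s += rng.expovariate(LAMBDA)           # next candidate of a rate-LAMBDA Poisson process
        x_s = x0 * math.exp((b * i - a) * s)   # predator level at time s (no jump yet)
        if rng.random() <= lam_fn(s, x_s, i) / LAMBDA:
            return s                           # accept with probability lam/LAMBDA

print(first_jump_time(1.0, 3))
```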

3. Transition Function; Steady-State Distribution. We have seen that the reacted predator process X_t is a Markov process. Using the notation in the construction of X_t one can write down the corresponding transition probability function. Equivalently, one can also follow Doob (1953), Ch. VI, to express the transition function. It is also possible to derive the transition function of the reacted predator process in terms of the transition function of the host birth-death process. It is not hard to see that on a suitable common domain the infinitesimal generators corresponding to these transition functions coincide. In this section we show that the reacted predator process is a strong Markov process. In the stochastic analysis of an interacted predator population, conditions for the existence of a steady-state distribution are of basic importance. We obtain a necessary and sufficient condition for the existence of a steady-state distribution; this is without any explicit reference to the host process. We also see that if the host birth-death process possesses a steady-state distribution, so does the reacted predator process. This distribution satisfies a second-order difference equation. For any choices of the auto death parameter a and interaction parameter b such that a* = a/b is a (positive) integer, whenever the prey birth-death process visits the state a*, the reacted predator population saturates (the rate of change of the population vanishes) until the next interaction time. We obtain, under suitable conditions, the probability of asymptotic saturation of the predator population.


Let B be the Banach space, under the sup-norm, of all bounded measurable functions f(x, i) on R₊ × S, let C be the set of all continuous functions in B, and let Ĉ be the set of all functions in C vanishing at infinity (that is, f ∈ Ĉ if, for any ε > 0, there is a compact set K = K_ε ⊂ R₊ × S such that |f(x, i)| < ε whenever (x, i) ∈ K^c). Without any loss of generality in applicability we can and do assume that a(x, i) is bounded.

Corresponding to the Markovian reacted predator process {X(t), t ≥ 0}, the transition semigroup of operators on B is given by

T_t f(x, i) = E_{(x,i)}[f(x(t), y(t))] = Σ_{n=0}^∞ Q_{δ(x,i)}^{n+1}[f(x*(t - T_n, x_n; i_n), i_n)],   T_n ≤ t < T_{n+1},   (3.1)

where we recall that

x*(t, ξ; y) = ξ e^{-at} + b ∫_0^t x(s) y(s) e^{-a(t-s)} ds,

and Q_μ^n is as in the proof of Theorem 2.1. The corresponding transition function is given by

P(t, z, A) = Σ_{n=0}^∞ Q_z^{n+1}[I_A(x*(t - T_n, x_n; i_n), i_n)],   T_n ≤ t < T_{n+1}.

More convenient forms of the transition function can be derived directly from the population behavior.

Fix η ∈ S. Let x*(t, ξ; η) be the solution of

dx/dt = -ax + bxy,   x(0) = ξ;   y(t) ≡ η,   t ≥ 0.

Set

P(t, ξ, B; η) = δ(x*(t, ξ; η); B),   (3.2)

where δ(·, ·) is the Kronecker delta.

LEMMA 3.1. P^ξ(t, B; η) = P(t, ξ, B; η) defined by (3.2) is a stationary transition function.

Clearly it is a probability measure in B. From the continuous dependence of the solution on the initial data and the uniqueness of the solution, it is obvious that P^ξ is measurable in ξ and that P satisfies the Chapman-Kolmogorov relation (see (2.5)).

Next we borrow some ideas from Doob (1953), Ch. VI. Let nP(t, (x, i), (B, j)) be the probability that, in exactly n interactions during a period of time t, the host population passes from size i to size j while the predator population evolves from state x into the Borel set B. Then 0P is defined as follows.


0P(t, (x, i), (B, j)) =
   0,   if i ≠ j,
   e^{-a(x,i)t} P^x(t, B; i),   if i = j.

Next, 1P(t, (x, i), (B, j)) is obtained as follows: the reacted predator evolves according to x*(u, x; i) until an interaction occurs at time s (with probability a(x, i) e^{-a(x,i)s} ds), at which time it jumps to a state in (B, j) (with probability K(i, dk) P^x(s, dξ; i)), and then during the remaining time until t it continues to evolve on its own with no interactions (with probability 0P). Thus,

1P(t, (x, u), (B, F)) = ∫_0^t ds ∫_{R₊ × S} a(x, u) e^{-a(x,u)s} 0P(t - s, (y, v), (B, F)) P^x(s, dy; u) K(u, dv).

(This relation can be derived with more rigor.) Now, by induction, we have

(n+1)P(t, (x, u), (B, F)) = ∫_0^t ds Σ_{v ∈ S∖{u}} ∫_{R₊} a(x, u) e^{-a(x,u)s} nP(t - s, (y, v), (B, F)) P^x(s, dy; u) K(u, {v}).

LEMMA 3.2. The (stationary) transition function of the Markov process X(t) = (x(t), y(t)) is given by

P(t, (x, u), (B, F)) = Σ_{n=0}^∞ nP(t, (x, u), (B, F)).   (3.3)

The transition function P(t, x, i; B, j) is called stochastically continuous if, for any open set G ⊂ R₊ with x ∈ G and for any i ∈ S, we have

lim_{t↓0} P(t, x, i; G, i) = 1.

We claim that the transition function given in Lemma 3.2 is stochastically continuous. For,

P(t, x, i; G, i) = Σ_{n=0}^∞ nP(t, x, i; G, i)
   ≥ 0P(t, x, i; G, i)
   = e^{-a(x,i)t} P^x(t, G; i)
   → 1,   as t ↓ 0,

since the continuous dependence of the solution on the initial condition implies that lim_{t↓0} P^x(t, G; i) = 1.

A better direct form of P is the following:

P(t, (x, i), (B, j)) = P_{(x,i)}[x(t) ∈ B, y(t) = j]
   = P^i[x*(t, x; y(·)) ∈ B, y(t) = j]
   = p_{ij}(t) P[x*(t, x; y(·)) ∈ B | y(0) = i, y(t) = j],   (3.4)


where [p_{ij}(t)] is the transition matrix of the host birth-death process y(t, ω).

PROPOSITION 3.1. The transition probability P(t, (x, i), (B,j)) is a Feller transition function.

By definition (cf. Dynkin, 1965), P is a Feller transition function if and only if the semigroup

T_t f(x, i) = Σ_j p_{ij}(t) ∫_{R₊} P[x*(t, x; y(·)) ∈ dz | y(0) = i, y(t) = j] f(z, j)

maps C into C. Towards this it suffices to show that, for fixed i and j,

∫_{R₊} P[x*(t, x; y(·)) ∈ dz | y(0) = i, y(t) = j] f(z, j)

is continuous in x. But it is not hard to see that this holds because of the continuous dependence of the (unique) solution on the initial conditions.

THEOREM 3.1. The reacted predator process (x(t), y(t)) is a strong Markov process.

This follows from the fact that P is a Feller transition function of the right continuous process (x(t), y(t)).

Let f(x, i) ∈ B be continuously partially differentiable with respect to x. Then f is in the domain of the weak infinitesimal generator Ã of the reacted predator process (x(t), y(t)) and, noting that y(t) is a birth-death process,

Ã f(x, i) = (∂f(x, i)/∂x)(-ax + bxi) + Σ_j q_{ij} f(x, j) - (λ_i + μ_i) f(x, i),

where

q_{ij} = λ_i,   if j = i + 1,
      = μ_i,   if j = i - 1,
      = 0,    if |i - j| > 1.

Now let f ∈ Ĉ be such that ∂f/∂x and (bxi - ax)(∂f/∂x) are in Ĉ. Then f belongs to the domain of the characteristic operator 𝔄, and 𝔄 = A, the infinitesimal generator. In this case

A f(x, i) = (bxi - ax) ∂f(x, i)/∂x + λ_i f(x, i + 1) + μ_i f(x, i - 1) - (λ_i + μ_i) f(x, i).   (3.5)
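As a quick numerical sanity check of (3.5) (not in the paper), one can compare A f(x, i) with the finite-difference quotient (E[f(X_h)] - f(x, i))/h, using the first-order expansion of the process over a short step h. Constant rates λ_i ≡ λ, μ_i ≡ μ and the test function below are illustrative assumptions.

```python
import math

a, b, lam, mu = 1.0, 0.5, 0.8, 1.0             # illustrative parameters

def f(x, i):
    return math.exp(-x) * (i + 1)              # a smooth test function

def df_dx(x, i):
    return -math.exp(-x) * (i + 1)

def A(x, i):
    # the generator (3.5) with lam_i = lam, mu_i = mu
    return ((b * x * i - a * x) * df_dx(x, i)
            + lam * f(x, i + 1) + mu * f(x, i - 1) - (lam + mu) * f(x, i))

def one_step_mean(x, i, h):
    # E[f(X_h)] to first order in h: either no jump (drift only) or one host jump
    drift = f(x * math.exp((b * i - a) * h), i)
    return (1 - (lam + mu) * h) * drift + lam * h * f(x, i + 1) + mu * h * f(x, i - 1)

x, i = 2.0, 3
for h in (1e-2, 1e-3, 1e-4):
    print(h, (one_step_mean(x, i, h) - f(x, i)) / h, "vs", A(x, i))
```

The difference quotient converges to A f(x, i) as h ↓ 0, as (3.5) predicts.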

The next object we shall look at is the steady-state distribution of the reacted predator process. First we give a necessary and sufficient condition for its existence, without any reference to the existence of the steady-state distribution of the host process. Following Dynkin (1965), let V = V(R₊ × S, ℛ₊ × 𝒮) denote the Banach space of all finite measures ν(Γ), Γ ∈ ℛ₊ × 𝒮, with ‖ν‖ = var ν. For the theory of dual semigroups we refer to Dynkin (1965).

Let ν(B, i) denote the initial distribution of the reacted process, and let g₀(x, i) be its density function. Let ψ(t; x, i) be the density function of U_t ν(B, i), where U_t is the dual of T_t. Let ν(B, i) be concentrated on a compact K ⊂ R₊, and let g₀ be continuously differentiable in x. From the results on dual semigroups it is not hard to see that

A* ν(B, i) = -∫_B {(∂/∂x)[g₀(x, i)(bxi - ax)] + (λ_i + μ_i) g₀(x, i) - λ_i g₀(x, i + 1) - μ_i g₀(x, i - 1)} dx,

and also

(∂/∂t) ψ(t; x, i) = -(∂/∂x)[ψ(t; x, i)(bxi - ax)] - (λ_i + μ_i) ψ(t; x, i) + λ_i ψ(t; x, i + 1) + μ_i ψ(t; x, i - 1).

Consequently, the steady-state distribution of the reacted predator exists if and only if

(∂/∂x)[g₀(x, i)(bxi - ax)] + (λ_i + μ_i) g₀(x, i) - λ_i g₀(x, i + 1) - μ_i g₀(x, i - 1) = 0,

which can be written as the following equation:

A g₀(x, i) = 2(bxi - ax) (∂g₀(x, i)/∂x) + (a - bi) g₀(x, i).

Summarizing these arguments, we get the following.

THEOREM 3.2. Let g₀(x, i) be the initial density function of the reacted predator process X(t) such that the initial distribution ν(B, i) is concentrated on a compact K, and g₀ is continuously differentiable in x. Then there exists a steady-state distribution of the reacted predator process if and only if the equation

A g₀(x, i) = 2(bxi - ax) (∂g₀(x, i)/∂x) + (a - bi) g₀(x, i)

is soluble, with A as in (3.5).



Naturally one would like to know whether, if the prey population assumes an equilibrium level, there exists an equilibrium level for the interacted predator population. We show next, based on the construction of the reacted predator process, that this expectation is valid. For a nonabsorbing state (x, i), define, for B ∈ ℛ₊ and j ∈ S,

p(x, i; B, j) = P_{(x,i)}[τ_{(B,j)} < ∞],

where τ_{(B,j)}(ω) = min{t > 0: X(t, ω) ∈ (B, j)}. If X(0) = (x, i), then τ_{(x,i)} is defined as the first return to (x, i) after the first interaction time T₁: τ_{(x,i)}(ω) = min{t > T₁: X(t, ω) = (x, i)}. Also, define the mean return time m(x, i) = E_{(x,i)}[τ_{(x,i)}]. A state (x, i) of the reacted predator process X(t) is recurrent (respectively, transient) if

p(x, i) = p(x, i; x, i) = 1   (respectively, p(x, i) < 1).

A nonabsorbing recurrent state (x, i) is called positive recurrent (resp. null recurrent) if m(x, i) < ∞ (resp. m(x, i) = ∞).

We shall call the Markov chain {X_n, n ≥ 0} given in Lemma 2.3 the embedded predator chain associated with the reacted predator process.

From the construction in Section 2 it is easy to see that the characteristics p(x, i; y, j) for the embedded predator chain are equal to the corresponding quantities for the reacted predator process. Now let the host (birth-death) process be irreducible. Then, from the construction of the jump distributions, both the host process and the predator process are either recurrent or transient. Let the host process be an irreducible positive recurrent process. Then the reacted predator process is also positive recurrent. Since a steady-state distribution lives on the positive recurrent states, in the present case there is a unique steady-state distribution of the reacted predator process.

Let the host process be irreducible positive recurrent. Then the predator process has a unique steady-state distribution. Let ν(B, i) be the initial stationary distribution of the predator process. Then

ν(B, i) = ∫_{R₊} δ_x(B) K(i + 1, i) ν(dx, i + 1) + ∫_{R₊} δ_x(B) K(i - 1, i) ν(dx, i - 1),

where

K(j, i) = λ_j/(λ_j + μ_j),   if i = j + 1,
        = μ_j/(λ_j + μ_j),   if i = j - 1,
        = 0,   otherwise.


Hence,

ν(B, i) = (μ_{i+1}/(λ_{i+1} + μ_{i+1})) ∫_{R₊} δ_x(B) ν(dx, i + 1) + (λ_{i-1}/(λ_{i-1} + μ_{i-1})) ∫_{R₊} δ_x(B) ν(dx, i - 1)
        = (μ_{i+1}/(λ_{i+1} + μ_{i+1})) ν(B, i + 1) + (λ_{i-1}/(λ_{i-1} + μ_{i-1})) ν(B, i - 1).   (3.6)

Thus the steady-state (initial) distribution ν(B, i) satisfies the difference equation (3.6).
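Setting B = R₊ in (3.6) gives a pure birth-death relation for the host marginal ν_i = ν(R₊, i), which can be checked numerically. The sketch below assumes hypothetical M/M/1-type rates (λ_i = 1, μ_i = 2 for i ≥ 1, μ₀ = 0, so the host is positive recurrent); the paper leaves λ_i, μ_i general.

```python
# Verify the difference equation (3.6) for a truncated birth-death host:
# nu_i = mu_{i+1}/(lam_{i+1}+mu_{i+1}) * nu_{i+1}
#        + lam_{i-1}/(lam_{i-1}+mu_{i-1}) * nu_{i-1}.

N = 50                                          # truncation level for the illustration

def lam(i): return 1.0                          # assumed birth rates
def mu(i):  return 0.0 if i == 0 else 2.0       # assumed death rates

# Build nu from the detailed-balance relation of the embedded host jump chain:
# nu_{i+1} * mu_{i+1}/(lam_{i+1}+mu_{i+1}) = nu_i * lam_i/(lam_i+mu_i).
nu = [1.0]
for i in range(N):
    k_up = lam(i) / (lam(i) + mu(i))
    k_dn = mu(i + 1) / (lam(i + 1) + mu(i + 1))
    nu.append(nu[i] * k_up / k_dn)
total = sum(nu)
nu = [v / total for v in nu]

# Check (3.6) at the interior states.
for i in range(1, N - 1):
    rhs = (mu(i + 1) / (lam(i + 1) + mu(i + 1)) * nu[i + 1]
           + lam(i - 1) / (lam(i - 1) + mu(i - 1)) * nu[i - 1])
    assert abs(nu[i] - rhs) < 1e-12, (i, nu[i], rhs)
print(f"(3.6) holds at states 1..{N - 2}")
```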

Between interaction times the predator population evolves on its own. Let the rates a and b be such that a/b = a* is a (positive) integer, and let y(0) = a*. Then, for x(0) = ξ,

dx/dt = -aξ e^{-at} + a x(t) - a b a* ∫_0^t x(s) e^{-a(t-s)} ds = 0,   T₀ = 0 ≤ t < T₁.

Thus, whenever the host process visits the state a*, the reacted predator process saturates (takes rest from its evolution) until the next interaction time. That is, for all t ∈ [0, T₁), x(t) = ξ with probability 1. Let a* be a recurrent state of the host process. Then the predator returns to infinitely many rest (saturating) periods during its evolution (ζ = ∞ a.s.). Hence the asymptotic saturating probability for the predator population is 1/(λ_{a*} m(a*)), where m(a*) is the mean return time to a* of the host process.

LITERATURE

Becker, N. G. 1970. "A stochastic model for two interacting populations." J. Appl. Prob., 7, 544-564.

Bharucha-Reid, A. T. 1960. Elements of the Theory of Markov Processes and Their Applications. New York: McGraw-Hill.

Blumenthal, R. M. and R. K. Getoor. 1968. Markov Processes and Potential Theory. New York: Academic Press.

Doob, J. L. 1953. Stochastic Processes. New York: Wiley.

Dynkin, E. B. 1965. Markov Processes, Vol. I. New York: Springer-Verlag.

Feller, W. 1966. An Introduction to Probability Theory and its Applications, Vol. II. New York: Wiley.

Gard, T. C. and D. Kannan. 1976. "On a stochastic differential equation modeling of prey-predator evolution." J. Appl. Prob., 13.

Goel, N. S., S. C. Maitra and E. W. Montroll. 1971. "On the Volterra and other non-linear models of interacting populations." Rev. Mod. Phys., 43, 231-276.

Hille, E. 1969. Lectures in Ordinary Differential Equations. Massachusetts: Addison-Wesley.

Moyal, J. E. 1957. "Discontinuous Markoff Processes." Acta Math., 98, 221-264.


Neveu, J. 1965. Mathematical Foundations of the Calculus of Probability. San Francisco: Holden-Day.

Puri, P. S. 1975. "A linear birth and death process under the influence of another process." J. Appl. Prob., 12, 1-17.

Rescigno, A. and I. W. Richardson. 1973. "The deterministic theory of population dynamics." In Foundations of Mathematical Biology, Vol. III, ed. R. Rosen. New York: Academic Press.

Watanabe, T. 1968. "Approximation of uniform transport process on a finite interval to Brownian motion." Nagoya Math. J., 32, 297-314.

RECEIVED 10-17-75

REVISED 3-1-76