Martingale problems and stochastic equations
•First •Prev •Next •Go To •Go Back •Full Screen •Close •Quit 1
Martingale problems and stochastic equations
• Characterizing stochastic processes by their martingale properties
• Markov processes and their generators
• Forward equations
• Other types of martingale problems
• Change of measure
• Martingale problems for conditional distributions
• Stochastic equations for Markov processes
• Filtering
• Time change equations
• Bits and pieces
• Supplemental material
• Exercises
• References
Supplemental material
• Background: Basics of stochastic processes
• Stochastic integrals for Poisson random measures
• Technical lemmas
1. Characterizing stochastic processes by their martingale properties
• Lévy’s characterization of Brownian motion
• Definition of a counting process
• Poisson processes
• Martingale properties of the Poisson process
• Strong Markov property for the Poisson process
• Intensities
• Counting processes as time changes of Poisson processes
• Martingale characterizations of a counting process
• Multivariate counting processes
Jacod (1974/75), Anderson and Kurtz (2011, 2015)
Brownian motion
A Brownian motion is a continuous process with independent, Gaussian increments. A Brownian motion W is standard if the increments W(t+r) − W(t), t, r ≥ 0, have mean zero and variance r.
W is a martingale:
E[W(t+r)|F_t^W] = E[W(t+r) − W(t)|F_t^W] + W(t) = W(t),

where F_t^W = σ(W(s) : s ≤ t).
W has quadratic variation t, that is,

[W]_t = lim_{max_i|t_{i+1}−t_i|→0} Σ_i (W(t ∧ t_{i+1}) − W(t ∧ t_i))² = t

in probability, or more precisely,

lim_{max_i|t_{i+1}−t_i|→0} sup_{t≤T} |Σ_i (W(t ∧ t_{i+1}) − W(t ∧ t_i))² − t| = 0

in probability for each T > 0. Consequently, W(t)² − t is a martingale.
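The quadratic-variation identity can be checked numerically. A minimal sketch, standard library only; the step size and horizon are illustrative choices, not from the text:

```python
import math
import random

random.seed(1)

T = 1.0
n = 20_000
dt = T / n

# Simulate Brownian increments W(t_{i+1}) - W(t_i) ~ N(0, dt) and
# accumulate the sum of squared increments over the partition of [0, T].
qv = 0.0
for _ in range(n):
    dW = random.gauss(0.0, math.sqrt(dt))
    qv += dW * dW

print(qv)  # close to T = 1.0 for a fine partition
```

The sum of squares concentrates around T with standard deviation of order sqrt(2 dt T), so the approximation sharpens as the mesh shrinks.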
Lévy’s characterization of Brownian motion
Theorem 1.1 Let M be a continuous local martingale with [M]_t = t. Then M is standard Brownian motion.
Remark 1.2 Note that, for {τ_n} a localizing sequence for M and M^{τ_n}(t) = M(τ_n ∧ t),

E[M^{τ_n}(t)²] = E[[M^{τ_n}]_t] = E[τ_n ∧ t] ≤ t

and E[sup_{s≤t} M^{τ_n}(s)²] ≤ 4E[M^{τ_n}(t)²] ≤ 4t, and it follows by the dominated convergence theorem and Fatou's lemma that M is a square integrable martingale.
Proof. Applying Itô's formula,

e^{iθM(t)} = 1 + ∫_0^t iθ e^{iθM(s)} dM(s) − (1/2)θ² ∫_0^t e^{iθM(s)} ds,

where the second term on the right is a martingale. Consequently,

E[e^{iθM(t+r)}|F_t] = e^{iθM(t)} − (1/2)θ² ∫_t^{t+r} E[e^{iθM(s)}|F_t] ds

and

φ_t(θ, r) ≡ E[e^{iθ(M(t+r)−M(t))}|F_t] = 1 − (1/2)θ² ∫_t^{t+r} E[e^{iθ(M(s)−M(t))}|F_t] ds
= 1 − (1/2)θ² ∫_0^r φ_t(θ, u) du,

so φ_t(θ, r) = e^{−θ²r/2}. It follows that for 0 = t_0 < t_1 < · · · < t_m,

E[∏_{k=1}^m e^{iθ_k(M(t_k)−M(t_{k−1}))}] = ∏_k e^{−θ_k²(t_k−t_{k−1})/2}

and hence M has independent Gaussian increments.
Definition of a counting process
Definition 1.3 N is a counting process if N(0) = 0 and N is constant except for jumps of +1.

D_c[0,∞) will denote the space of counting paths that are finite for all time. D_c^∞[0,∞) ⊃ D_c[0,∞) is the larger space allowing the paths to hit infinity in finite time.
Poisson processes
A Poisson process is a model for a series of random observations occurring in time. For example, the process could model the arrivals of customers in a bank, the arrivals of telephone calls at a switch, or the counts registered by radiation detection equipment.
[Figure: observation times marked by x along a time axis, with t placed after the sixth observation.]
Let N(t) denote the number of observations by time t. In the figure above, N(t) = 6. Note that for t < s, N(s) − N(t) is the number of observations in the time interval (t, s]. We make the following assumptions about the model.
1) Observations occur one at a time.
2) Numbers of observations in disjoint time intervals are independent random variables, i.e., if t_0 < t_1 < · · · < t_m, then N(t_k) − N(t_{k−1}), k = 1, . . . , m, are independent random variables.
3) The distribution of N(t+ a)−N(t) does not depend on t.
Characterization of a Poisson process
Theorem 1.4 Under assumptions 1), 2), and 3), there is a constant λ > 0 such that, for t < s, N(s) − N(t) is Poisson distributed with parameter λ(s − t), that is,

P{N(s) − N(t) = k} = ((λ(s − t))^k / k!) e^{−λ(s−t)}.

Proof. Let N_n(t) be the number of time intervals (k/n, (k+1)/n], k = 0, . . . , [nt], that contain at least one observation. Then N_n(t) is binomially distributed with parameters n and p_n = P{N(1/n) > 0}. Then

P{N(1) = 0} = P{N_n(1) = 0} = (1 − p_n)^n

and np_n → λ ≡ −log P{N(1) = 0}, and the rest follows by standard Poisson approximation of the binomial.
Interarrival times
Let S_k be the time of the kth observation. Then

P{S_k ≤ t} = P{N(t) ≥ k} = 1 − Σ_{i=0}^{k−1} ((λt)^i / i!) e^{−λt}, t ≥ 0.

Differentiating to obtain the probability density function gives

f_{S_k}(t) = (1/(k−1)!) λ(λt)^{k−1} e^{−λt} for t ≥ 0, and f_{S_k}(t) = 0 for t < 0.

Theorem 1.5 Let T_1 = S_1 and for k > 1, T_k = S_k − S_{k−1}. Then T_1, T_2, . . . are independent and exponentially distributed with parameter λ.
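Theorem 1.5 gives the standard way to simulate a Poisson process: sum independent Exp(λ) interarrival times. A small sketch; the rate and horizon are illustrative choices:

```python
import random

random.seed(2)

lam = 2.0   # rate parameter λ
T = 50.0    # time horizon

# S_k = T_1 + ... + T_k with T_i ~ Exp(lam); N(T) = #{k : S_k <= T}
jump_times = []
s = 0.0
while True:
    s += random.expovariate(lam)
    if s > T:
        break
    jump_times.append(s)

count = len(jump_times)
print(count)  # E[N(T)] = lam * T = 100
```

By Theorem 1.4, `count` is Poisson(λT), so it fluctuates around 100 with standard deviation 10.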
Martingale properties of the Poisson process
Theorem 1.6 (Watanabe) If N is a Poisson process with parameter λ, then N(t) − λt is a martingale. Conversely, if N is a counting process and N(t) − λt is a martingale, then N is a Poisson process.
Proof. Let t = s_0 < s_1 < · · · < s_n = t + r. Using the martingale property of N(t) − λt,

E[e^{iθ(N(t+r)−N(t))}|F_t]
= 1 + Σ_{k=0}^{n−1} E[(e^{iθ(N(s_{k+1})−N(s_k))} − 1 − (e^{iθ} − 1)(N(s_{k+1}) − N(s_k))) e^{iθ(N(s_k)−N(t))}|F_t]
+ Σ_{k=0}^{n−1} λ(s_{k+1} − s_k)(e^{iθ} − 1) E[e^{iθ(N(s_k)−N(t))}|F_t].

As max_k(s_{k+1} − s_k) → 0, the first sum converges to zero by the dominated convergence theorem, so we have

E[e^{iθ(N(t+r)−N(t))}|F_t] = 1 + λ(e^{iθ} − 1) ∫_0^r E[e^{iθ(N(t+s)−N(t))}|F_t] ds,

and hence E[e^{iθ(N(t+r)−N(t))}|F_t] = e^{λ(e^{iθ}−1)r}.
Strong Markov property
A Poisson process N is compatible with a filtration {F_t} if N is F_t-adapted and N(t+·) − N(t) is independent of F_t for every t ≥ 0.

Lemma 1.7 Let N be a Poisson process with parameter λ > 0 that is compatible with {F_t}, and let τ be an {F_t}-stopping time such that τ < ∞ a.s. Define N_τ(t) = N(τ + t) − N(τ). Then N_τ is a Poisson process that is independent of F_τ and compatible with {F_{τ+t}}.
Proof. Let M(t) = N(t)− λt. By the optional sampling theorem,
E[M((τ + t+ r) ∧ T )|Fτ+t] = M((τ + t) ∧ T ),
so
E[N((τ + t+ r) ∧ T )−N((τ + t) ∧ T )|Fτ+t] = λ((τ + t+ r) ∧ T − (τ + t) ∧ T ).
By the monotone convergence theorem
E[N(τ + t+ r)−N(τ + t)|Fτ+t] = λr
which gives the lemma.
Intensity for a counting process
If N is a Poisson process with parameter λ and N is compatible with {F_t}, then

P{N(t + Δt) > N(t)|F_t} = 1 − e^{−λΔt} ≈ λΔt.

For a general counting process N, at least intuitively, a nonnegative, F_t-adapted stochastic process λ(·) is an F_t-intensity for N if

P{N(t + Δt) > N(t)|F_t} ≈ E[∫_t^{t+Δt} λ(s) ds|F_t] ≈ λ(t)Δt.
Let Sn be the nth jump time of N .
Definition 1.8 λ is an {F_t}-intensity for N if and only if for each n = 1, 2, . . .,

N(t ∧ S_n) − ∫_0^{t∧S_n} λ(s) ds

is an {F_t}-martingale.
Modeling with intensities
Let Z be a stochastic process (cadlag, E-valued for simplicity) that models “external noise.” Let D_c[0,∞) denote the space of counting paths (zero at time zero and constant except for jumps of +1).

Condition 1.9

λ : [0,∞) × D_E[0,∞) × D_c[0,∞) → [0,∞)

is measurable and satisfies λ(t, z, v) = λ(t, z^t, v^t), where z^t(s) = z(s ∧ t) (λ is nonanticipating), and

∫_0^t λ(s, z, v) ds < ∞

for all t ≥ 0, z ∈ D_E[0,∞), and v ∈ D_c[0,∞).

Let Y be a unit Poisson process that is {G_u}-compatible and assume that Z(s) is G_0-measurable for every s ≥ 0. (In particular, Z is independent of Y.) Consider

N(t) = Y(∫_0^t λ(s, Z, N) ds).   (1.1)
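Equation (1.1) can be solved path by path, from one jump to the next: between jumps of N the intensity is constant, so the internal clock ∫λ ds advances linearly until it hits the next jump time of Y. A sketch for a hypothetical intensity λ(s, N) = 1 + N(s−), with no external noise Z; both choices are for illustration only:

```python
import random

random.seed(3)

T = 5.0

# Interarrival representation of the unit Poisson process Y:
# independent unit-exponential gaps between its jumps.
def next_gap():
    return random.expovariate(1.0)

# Solve N(t) = Y(∫_0^t λ(s, N) ds) with λ(s, N) = 1 + N(s-).
# Between jumps the intensity is the constant 1 + n, so the internal
# clock reaches Y's next jump after real time gap / (1 + n).
jump_times = []          # jump times of N
t, n = 0.0, 0            # current time and current value N(t)
while True:
    gap = next_gap()             # internal time until Y's next jump
    t += gap / (1.0 + n)         # convert to real time at rate 1 + n
    if t > T:
        break
    n += 1
    jump_times.append(t)

print(n, jump_times[:3])
```

This jump-to-jump construction is exactly the existence argument in the theorem that follows.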
Solution of the stochastic equation
Theorem 1.10 There exists a unique solution of (1.1) up to time S_∞ ≡ lim_{n→∞} S_n,

τ(t) = ∫_0^t λ(s, Z, N) ds

is a {G_u}-stopping time, and for each n = 1, 2, . . .,

N(t ∧ S_n) − ∫_0^{t∧S_n} λ(s, Z, N) ds

is a {G_{τ(t)}}-martingale. (Let F_t = G_{τ(t)}.)
Proof. Existence and uniqueness follow by solving from one jump to the next. Let Y^r(u) = Y(r ∧ u) and let

N^r(t) = Y^r(∫_0^t λ(s, Z, N^r) ds).

Then N^r(t) = N(t) if τ(t) = ∫_0^t λ(s, Z, N) ds ≤ r. Consequently,

{τ(t) ≤ r} = {∫_0^t λ(s, Z, N^r) ds ≤ r} ∈ G_r,

as is {τ(t ∧ S_n) ≤ r}. Since M(u) = Y(u) − u is a martingale, by the optional sampling theorem

E[M(τ((t+v) ∧ S_n) ∧ T)|G_{τ(t)}] = M(τ(t) ∧ τ((t+v) ∧ S_n) ∧ T) = M(τ(t ∧ S_n) ∧ T).
Martingale problems for counting processes
Definition 1.11 (Jacod (1974/75)) Let Z be a cadlag, E-valued stochastic process, and let λ satisfy Condition 1.9. A counting process N is a solution of the martingale problem for (λ, Z) if

N(t ∧ S_n) − ∫_0^{t∧S_n} λ(s, Z, N) ds

is a martingale with respect to the filtration

F_t = σ(N(s), Z(r) : s ≤ t, r ≥ 0).

Theorem 1.12 If N is a solution of the martingale problem for (λ, Z), then N has the same distribution as the solution of the stochastic equation (1.1).
Proof. Suppose λ is an intensity for a counting process N and ∫_0^∞ λ(s) ds = ∞ a.s. Let γ(u) satisfy

γ(u) = inf{t : ∫_0^t λ(s) ds ≥ u}.

Then, since γ(u + v) ≥ γ(u),

E[N(γ(u+v) ∧ S_n ∧ T) − ∫_0^{γ(u+v)∧S_n∧T} λ(s) ds | F_{γ(u)}] = N(γ(u) ∧ S_n ∧ T) − ∫_0^{γ(u)∧S_n∧T} λ(s) ds.

The monotone convergence argument lets us send T and n to infinity. We then have

E[N(γ(u + v)) − (u + v)|F_{γ(u)}] = N(γ(u)) − u,

so Y(u) = N(γ(u)) is a Poisson process. But γ(τ(t)) = t, so (1.1) is satisfied.

If ∫_0^∞ λ(s) ds < ∞ with positive probability, then let Y* be a unit Poisson process that is independent of F_t for all t ≥ 0 and consider N^ε(t) = N(t) + Y*(εt). N^ε has intensity λ(t) + ε, and Y^ε, obtained as above, converges to

Y(u) = N(γ(u)) for u < τ(∞), and Y(u) = N(∞) + Y*(u − τ(∞)) for u ≥ τ(∞)

(except at points of discontinuity).
Multivariate counting processes
D_c^d[0,∞): the collection of d-dimensional counting paths.

Condition 1.13 λ_k : [0,∞) × D_c^d[0,∞) × D_E[0,∞) → [0,∞), measurable and nonanticipating with

∫_0^t Σ_k λ_k(s, z, v) ds < ∞, v ∈ D_c^d[0,∞), z ∈ D_E[0,∞).

Let Z be cadlag, E-valued, and independent of independent Poisson processes Y_1, . . . , Y_d, and set

N_k(t) = Y_k(∫_0^t λ_k(s, Z, N) ds),   (1.2)

where N = (N_1, . . . , N_d). Existence and uniqueness holds (including for d = ∞) and

N_k(t ∧ S_n) − ∫_0^{t∧S_n} λ_k(s, Z, N) ds

is a martingale for S_n = inf{t : Σ_k N_k(t) ≥ n}, but what is the correct filtration?
Multiparameter optional sampling theorem
I is a directed set with partial ordering t ≤ s: if t_1, t_2 ∈ I, there exists t_3 ∈ I such that t_1 ≤ t_3 and t_2 ≤ t_3.

{F_t, t ∈ I} is a filtration: s ≤ t implies F_s ⊂ F_t.

A stochastic process X(t) indexed by I is a martingale if and only if for s ≤ t,

E[X(t)|F_s] = X(s).

An I-valued random variable τ is a stopping time if and only if {τ ≤ t} ∈ F_t, t ∈ I, and

F_τ = {A ∈ F : A ∩ {τ ≤ t} ∈ F_t, t ∈ I}.

Lemma 1.14 Let X be a martingale and let τ_1 and τ_2 be stopping times assuming countably many values and satisfying τ_1 ≤ τ_2 a.s. If there exists a sequence {T_m} ⊂ I such that lim_{m→∞} P{τ_2 ≤ T_m} = 1, lim_{m→∞} E[|X(T_m)|1_{{τ_2≤T_m}^c}] = 0, and E[|X(τ_2)|] < ∞, then

E[X(τ_2)|F_{τ_1}] = X(τ_1).
Proof. (Kurtz (1980b)) Let Γ ⊂ I be countable and satisfy P{τ_i ∈ Γ} = 1 and {T_m} ⊂ Γ. Define

τ_i^m = τ_i on {τ_i ≤ T_m}, τ_i^m = T_m on {τ_i ≤ T_m}^c.

Then τ_i^m is a stopping time, since

{τ_i^m ≤ t} = ({τ_i^m ≤ t} ∩ {τ_i ≤ T_m}) ∪ ({τ_i^m ≤ t} ∩ {τ_i ≤ T_m}^c)
= (∪_{s∈Γ, s≤t, s≤T_m}{τ_i = s}) ∪ ({T_m ≤ t} ∩ {τ_i ≤ T_m}^c).

For A ∈ F_{τ_1},

∫_{A∩{τ_1^m=t}} X(τ_2^m) dP = Σ_{s∈Γ, s≤T_m} ∫_{A∩{τ_1^m=t}∩{τ_2^m=s}} X(s) dP
= Σ_{s∈Γ, s≤T_m} ∫_{A∩{τ_1^m=t}∩{τ_2^m=s}} X(T_m) dP
= ∫_{A∩{τ_1^m=t}} X(T_m) dP
= ∫_{A∩{τ_1^m=t}} X(t) dP = ∫_{A∩{τ_1^m=t}} X(τ_1^m) dP.
Multiple time change
Let I = [0,∞)^d, u ∈ I, and F_u = σ(Y_k(s_k) : s_k ≤ u_k, k = 1, . . . , d). Then

M_k(u) ≡ Y_k(u_k) − u_k

is an {F_u}-martingale. For

N_k(t) = Y_k(∫_0^t λ_k(s, Z, N) ds),

define τ_k(t) = ∫_0^t λ_k(s, Z, N) ds and τ(t) = (τ_1(t), . . . , τ_d(t)). Then τ(t) is an {F_u}-stopping time.

Lemma 1.15 Let G_t = F_{τ(t)}. If σ is a {G_t}-stopping time, then τ(σ) is an {F_u}-stopping time.
Approximation by discrete stopping times
Lemma 1.16 If τ is an {F_u}-stopping time, then τ^(n) defined by

τ_k^(n) = ([τ_k 2^n] + 1)/2^n

is an {F_u}-stopping time.

Proof.

{τ^(n) ≤ u} = ∩_k {τ_k^(n) ≤ u_k} = ∩_k {[τ_k 2^n] + 1 ≤ [u_k 2^n]} = ∩_k {τ_k < [u_k 2^n]/2^n} ∈ F_u.

Note that τ_k^(n) decreases to τ_k.
Martingale problems for multivariate counting processes
Let S_n = inf{t : Σ_k N_k(t) ≥ n}.

Theorem 1.17 Let Condition 1.13 hold. For n = 1, 2, . . ., there exists a unique solution of (1.2) up to S_n, τ_k(t) = ∫_0^t λ_k(s, Z, N) ds defines an {F_u}-stopping time τ(t) = (τ_1(t), . . . , τ_d(t)), and

N_k(t ∧ S_n) − ∫_0^{t∧S_n} λ_k(s, Z, N) ds

is an {F_{τ(t)}}-martingale.

Definition 1.18 Let Z be a cadlag, E-valued stochastic process, and let λ = (λ_1, . . . , λ_d) satisfy Condition 1.13. A multivariate counting process N is a solution of the martingale problem for (λ, Z) if for each k,

N_k(t ∧ S_n) − ∫_0^{t∧S_n} λ_k(s, Z, N) ds

is a martingale with respect to the filtration

G_t = σ(N(s), Z(r) : s ≤ t, r ≥ 0).
Existence and uniqueness for the martingale problem
Theorem 1.19 Let Z be a cadlag, E-valued stochastic process, and let λ = (λ1, . . . , λd)satisfy Condition 1.13. Then there exists a unique solution of the martingale problem for(λ, Z).
Proof. Existence follows from the time-change equation, and uniqueness follows by a result in Meyer (1971) which essentially shows that every solution of the martingale problem can be written as a solution of the time-change equation. (See also Kurtz (1980c).) Jacod (1974/75) gave the first characterization of multivariate counting processes as solutions of a martingale problem.
2. Markov processes and their generators
• Markov chains
• Diffusion processes
• Definition of a martingale problem
• Equivalent formulations
• Uniqueness of 1-dimensional distributions implies uniqueness of fdd
• Markov property
• Building Markov generators (sums of generators are generators–usually)
• Uniqueness under the Hille-Yosida conditions
• Quasi-left continuity
Dynkin (1965); Meyer (1967); Stroock and Varadhan (1979); Ethier and Kurtz (1986)
Markov chains

Consider X ∈ Z^d satisfying

X(t) = X(0) + Σ_{k=1}^m Y_k(∫_0^t β_k(X(s)) ds) ζ_k = X(0) + Σ_{k=1}^m R_k(t) ζ_k,

where the Y_k are independent unit Poisson processes and the ζ_k are distinct. Note that

P{X(t + Δt) = X(t) + ζ_k|F_t^X} ≈ P{R_k(t + Δt) > R_k(t)|F_t^X} ≈ β_k(X(t))Δt.

Let R̃_k(t) = R_k(t) − ∫_0^t β_k(X(s)) ds (which will typically be a martingale). Then

f(X(t)) = f(X(0)) + Σ_{k=1}^m ∫_0^t (f(X(s−) + ζ_k) − f(X(s−))) dR_k(s)
= f(X(0)) + Σ_{k=1}^m ∫_0^t (f(X(s−) + ζ_k) − f(X(s−))) dR̃_k(s)
+ ∫_0^t Σ_{k=1}^m β_k(X(s))(f(X(s) + ζ_k) − f(X(s))) ds.
Martingale properties of Markov chains
Define

Af(x) = Σ_{k=1}^m β_k(x)(f(x + ζ_k) − f(x)).

Then, at least for f with compact support,

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds

will be an {F_t^X}-martingale.
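The time-change representation translates directly into a simulation scheme: track, for each Y_k, its internal clock τ_k(t) = ∫_0^t β_k(X(s)) ds and the internal time of its next jump. A sketch for a hypothetical birth-death chain with constant birth rate 2, per-capita death rate x, and jumps ζ = ±1 (all choices are illustrative):

```python
import random

random.seed(4)

# Reactions: jump directions ζ_k and state-dependent rates β_k(x)
zetas = [+1, -1]
rates = [lambda x: 2.0, lambda x: float(x)]

T = 10.0
x = 0                       # X(0)
t = 0.0
tau = [0.0, 0.0]            # internal clocks τ_k(t) = ∫_0^t β_k(X(s)) ds
nxt = [random.expovariate(1.0), random.expovariate(1.0)]  # next jumps of Y_k

while True:
    betas = [r(x) for r in rates]
    # Real time until Y_k's next jump: (nxt[k] - tau[k]) / β_k(x)
    waits = [(nxt[k] - tau[k]) / betas[k] if betas[k] > 0 else float("inf")
             for k in range(2)]
    k = min(range(2), key=lambda i: waits[i])
    if t + waits[k] > T:
        break
    t += waits[k]
    # Advance every internal clock by the elapsed real time
    for i in range(2):
        tau[i] += betas[i] * waits[k]
    x += zetas[k]
    nxt[k] += random.expovariate(1.0)   # Y_k's next unit-Poisson jump

print(x)
```

This is the "next reaction" form of the algorithm; the stationary distribution of this particular chain is Poisson(2), so the terminal state is typically small.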
Diffusion processes

Consider the Itô equation

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds,

and for a(x) = σ(x)σ(x)^T, define

Af(x) = (1/2) Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x) + Σ_i b_i(x) ∂_i f(x).

Then for C² functions f, by Itô's formula,

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds = ∫_0^t ∇f(X(s))^T σ(X(s)) dW(s),

which, at least for f with compact support, is an {F_t^{X,W}}-martingale (and hence an {F_t^X}-martingale).
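The Itô equation is also a simulation recipe: an Euler–Maruyama step replaces dW by a N(0, h) increment. A sketch for the Ornstein–Uhlenbeck case b(x) = −x, σ(x) = 1, chosen here only for illustration because its second moment is known in closed form:

```python
import math
import random

random.seed(5)

def b(x):      # drift coefficient
    return -x

def sigma(x):  # diffusion coefficient
    return 1.0

T, h = 3.0, 0.01
steps = int(T / h)

# Euler-Maruyama: X_{t+h} ≈ X_t + b(X_t) h + σ(X_t) (W(t+h) - W(t))
def simulate(x0):
    x = x0
    for _ in range(steps):
        dW = random.gauss(0.0, math.sqrt(h))
        x += b(x) * h + sigma(x) * dW
    return x

# For OU started at 0, E[X(T)^2] = (1 - e^{-2T})/2 ≈ 0.4988
m2 = sum(simulate(0.0) ** 2 for _ in range(500)) / 500
print(m2)
```

Averaging over paths estimates one-dimensional distributions, i.e., a solution of the forward equation discussed later.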
Martingale problems: Definition
E: state space (a complete, separable metric space).

A: generator (usually a linear operator with domain and range in B(E)).

µ ∈ P(E): initial distribution.

X is a solution of the martingale problem for (A, µ) if and only if µ = PX(0)^{−1} and there exists a filtration {F_t} such that

M_f(t) = f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds

is an {F_t}-martingale for each f ∈ D(A).

In these lectures, we will assume that D(A) ⊂ C_b(E) and that all processes are cadlag.
Examples of generators
Standard Brownian motion (E = R^d):

Af = (1/2)Δf, D(A) = C_c^2(R^d).

Poisson process (E = {0, 1, 2, . . .}, D(A) = B(E)):

Af(k) = λ(f(k + 1) − f(k)).

Pure jump process (E arbitrary):

Af(x) = λ(x) ∫_E (f(y) − f(x)) µ(x, dy).

Diffusion (E = R^d, D(A) = C_c^2(R^d)):

Af(x) = (1/2) Σ_{i,j} a_{ij}(x) (∂²/∂x_i∂x_j) f(x) + Σ_i b_i(x) (∂/∂x_i) f(x).   (2.1)
Equivalent formulations

Suppose, without loss of generality, that D(A) is closed under addition of constants (A1 = 0). Then the following are equivalent:

a) X is a solution of the martingale problem for (A, µ).

b) PX(0)^{−1} = µ and there exists a filtration {F_t} such that for each λ > 0 and each f ∈ D(A),

e^{−λt} f(X(t)) − f(X(0)) + ∫_0^t e^{−λs}(λf(X(s)) − Af(X(s))) ds

is an {F_t}-martingale.

c) PX(0)^{−1} = µ and there exists a filtration {F_t} such that for each f ∈ D(A) with inf_{x∈E} f(x) > 0,

R_f(t) = (f(X(t))/f(X(0))) exp{−∫_0^t (Af(X(s))/f(X(s))) ds}

is an {F_t}-martingale.
Proof.

f(X(t)) exp{−∫_0^t (Af(X(s))/f(X(s))) ds}
= f(X(0)) + ∫_0^t exp{−∫_0^r (Af(X(s))/f(X(s))) ds} df(X(r))
− ∫_0^t f(X(r)) (Af(X(r))/f(X(r))) exp{−∫_0^r (Af(X(s))/f(X(s))) ds} dr
= f(X(0)) + ∫_0^t exp{−∫_0^r (Af(X(s))/f(X(s))) ds} dM_f(r),

so if M_f is a martingale, then R_f is a martingale.

Conversely, if R_f is a martingale, then

f(X(0)) ∫_0^t exp{∫_0^r (Af(X(s))/f(X(s))) ds} dR_f(r) = M_f(t)

is a martingale.

Note that considering only f that are strictly positive is no restriction since we can always add a constant to f.
Conditions for the martingale property
Lemma 2.1 For (f, g) ∈ A, h_1, . . . , h_m ∈ C(E), and t_1 ≤ t_2 ≤ · · · ≤ t_{m+1}, let

η(Y) ≡ η(Y, (f, g), {h_i}, {t_i}) = (f(Y(t_{m+1})) − f(Y(t_m)) − ∫_{t_m}^{t_{m+1}} g(Y(s)) ds) ∏_{i=1}^m h_i(Y(t_i)).

Then Y is a solution of the martingale problem for A if and only if E[η(Y)] = 0 for all such η.

The assertion that Y is a solution of the martingale problem for A is an assertion about the finite dimensional distributions of Y.
Uniqueness of 1-dimensional distributions implies unique-ness of fdd
Theorem 2.2 If any two solutions of the martingale problem for A satisfying PX_1(0)^{−1} = PX_2(0)^{−1} also satisfy PX_1(t)^{−1} = PX_2(t)^{−1} for all t ≥ 0, then the f.d.d. of a solution X are uniquely determined by PX(0)^{−1}.

Proof. If X is a solution of the MGP for A and X_a(t) = X(a + t), then X_a is a solution of the MGP for A. Furthermore, for positive f_i ∈ B(E) and 0 ≤ t_1 < t_2 < · · · < t_m = a,

Q(B) = E[1_B(X_a) ∏_{i=1}^m f_i(X(t_i))] / E[∏_{i=1}^m f_i(X(t_i))]

defines a probability measure on F = σ(X_a(s), s ≥ 0), and under Q, X_a is a solution of the martingale problem for A with initial distribution

µ(Γ) = E[1_Γ(X(a)) ∏_{i=1}^m f_i(X(t_i))] / E[∏_{i=1}^m f_i(X(t_i))].
Proceeding by induction, fix m and suppose E[∏_{i=1}^m f_i(X(t_i))] is uniquely determined for all 0 ≤ t_1 < t_2 < · · · < t_m and all f_i. Then µ is uniquely determined and the one dimensional distributions of X_a under Q are uniquely determined, that is,

E[f_{m+1}(X(t_{m+1})) ∏_{i=1}^m f_i(X(t_i))] / E[∏_{i=1}^m f_i(X(t_i))]

is uniquely determined for t_{m+1} ≥ a. Since a is arbitrary and the denominator is uniquely determined, the numerator is uniquely determined, completing the induction step.
Time homogeneous Markov processes
A process X is Markov with respect to a filtration Ft provided
E[f(X(t+ r))|Ft] = E[f(X(t+ r))|X(t)]
for all t, r ≥ 0 and all f ∈ B(E).
The conditional expectation on the right can be written as g_{f,t,r}(X(t)) for a measurable function g_{f,t,r} depending on f, t, and r.

If the function can be selected independently of t, that is,

E[f(X(t + r))|X(t)] = g_{f,r}(X(t)),

then the Markov process is time homogeneous. A time inhomogeneous Markov process can be made time homogeneous by including time in the state, that is, by setting Z(t) = (X(t), t).

Note that g_{f,r} will be linear in f, so we can write g_{f,r} = T(r)f, where T(r) is a linear operator on B(E) (the bounded measurable functions on E). The Markov property then implies T(r + s)f = T(r)T(s)f.
Adding a time component
Lemma 2.3 Suppose that g(t, x) has the property that g(t, ·) ∈ D(A) for each t and that g, ∂_t g, and Ag are all bounded in t and x and are continuous functions of t. If X is a solution of the martingale problem for A, then

g(t, X(t)) − ∫_0^t (∂_s g(s, X(s)) + Ag(s, X(s))) ds

is a martingale.
Proof. Let 0 = s_0 < s_1 < · · · < s_n = r. Then

E[g(t + r, X(t + r)) − g(t, X(t))|F_t]
= Σ_k E[g(t + s_{k+1}, X(t + s_{k+1})) − g(t + s_k, X(t + s_k))|F_t]
= Σ_k E[g(t + s_{k+1}, X(t + s_{k+1})) − g(t + s_{k+1}, X(t + s_k))|F_t]
+ Σ_k E[g(t + s_{k+1}, X(t + s_k)) − g(t + s_k, X(t + s_k))|F_t]
= Σ_k E[∫_{s_k}^{s_{k+1}} Ag(t + s_{k+1}, X(t + u)) du|F_t]
+ Σ_k E[∫_{s_k}^{s_{k+1}} ∂_u g(t + u, X(t + s_k)) du|F_t].

To complete the proof, see Exercise 14.
Markov property
Theorem 2.4 Suppose the conclusion of Theorem 2.2 holds. If X is a solution of the martingale problem for A with respect to a filtration {F_t}, then X is Markov with respect to {F_t}.
Proof. Let F ∈ F_r with P(F) > 0, and for B ∈ F, define

P_1(B) = E[1_F E[1_B|F_r]]/P(F),  P_2(B) = E[1_F E[1_B|X(r)]]/P(F).

Define Y(t) = X(r + t). Then

P_1{Y(0) ∈ Γ} = E[1_F E[1_{{X(r)∈Γ}}|F_r]]/P(F) = E[1_F 1_{{X(r)∈Γ}}]/P(F)
= E[1_F E[1_{{X(r)∈Γ}}|X(r)]]/P(F) = P_2{Y(0) ∈ Γ}.
Check that E^{P_1}[η(Y)] = E^{P_2}[η(Y)] = 0 for all η(Y) as in Lemma 2.1, so under both P_1 and P_2, Y is a solution of the martingale problem for A with the same initial distribution, and by Theorem 2.2 its f.d.d. under P_1 and P_2 agree. Therefore

E[1_F E[f(X(r + t))|F_r]] = P(F) E^{P_1}[f(Y(t))] = P(F) E^{P_2}[f(Y(t))] = E[1_F E[f(X(r + t))|X(r)]].

Since F ∈ F_r is arbitrary, E[f(X(r + t))|F_r] = E[f(X(r + t))|X(r)], and the Markov property follows.
Building Markov generators (sums of generators are generators–usually)

Suppose A and B are generators. Let X_n be a stochastic process such that for k = 0, 2, 4, . . ., X_n evolves as if it has generator A on the time interval [k/n, (k+1)/n) and as if it has generator B on the time interval [(k+1)/n, (k+2)/n). Then for f ∈ D = D(A) ∩ D(B),

f(X_n(t)) − f(X_n(0)) − ∫_0^t ( ((1 + (−1)^{[ns]})/2) Af(X_n(s)) + ((1 − (−1)^{[ns]})/2) Bf(X_n(s)) ) ds

is a martingale. Letting n → ∞, at least along a subsequence, X_n should converge to a process such that

f(X(t)) − f(X(0)) − ∫_0^t (1/2)(A + B)f(X(s)) ds

is a martingale for f ∈ D. (True, for example, if E is compact, A, B ⊂ C(E) × C(E), and D is dense in C(E).)
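The alternation argument is the probabilistic form of the Trotter product formula, and for a finite state space it can be checked with matrices: alternating the flows e^{A/n} and e^{B/n} over [0, t] approximates e^{t(A+B)/2}. A sketch with two hypothetical 2-state generator matrices (the matrices and tolerances are illustrative; matrix exponentials via truncated power series):

```python
def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_scale(M, c):
    return [[M[i][j] * c for j in range(2)] for i in range(2)]

def mat_exp(M, terms=30):
    # exp(M) via truncated power series; adequate for these small matrices
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_scale(mat_mul(term, M), 1.0 / k)
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

A = [[-1.0, 1.0], [2.0, -2.0]]   # a 2-state generator
B = [[-3.0, 3.0], [0.5, -0.5]]   # another 2-state generator

t, n = 1.0, 200
# Alternate the exact flows over n*t intervals of length 1/n:
# (e^{A/n} e^{B/n})^{nt/2} should approximate e^{t(A+B)/2}.
step = mat_mul(mat_exp(mat_scale(A, 1.0 / n)), mat_exp(mat_scale(B, 1.0 / n)))
alt = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(int(n * t) // 2):
    alt = mat_mul(alt, step)

C = mat_scale([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)], t / 2)
target = mat_exp(C)

err = max(abs(alt[i][j] - target[i][j]) for i in range(2) for j in range(2))
print(err)
```

The discrepancy shrinks like O(1/n), driven by the commutator [A, B], mirroring why the limit process picks up the averaged generator (A + B)/2.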
Markov processes and semigroups

{T(t) : B(E) → B(E), t ≥ 0} is an operator semigroup if T(t)T(s)f = T(t + s)f.

X is a Markov process with operator semigroup T(t) if and only if

E[f(X(t + s))|F_t^X] = T(s)f(X(t)), t, s ≥ 0, f ∈ B(E).

Note that

T(s + r)f(X(t)) = E[f(X(t + s + r))|F_t^X] = E[E[f(X(t + s + r))|F_{t+s}^X]|F_t^X]
= E[T(r)f(X(t + s))|F_t^X] = T(s)T(r)f(X(t)).
Hille-Yosida theorem
Theorem 2.5 The closure of A is the generator of a strongly continuous contraction semigroup on L_0 if and only if

• D(A) is dense in L_0.

• ‖λf − Af‖ ≥ λ‖f‖, f ∈ D(A), λ > 0.

• R(λ − A) is dense in L_0.

Proof. (of sufficiency) Assuming A is closed (otherwise, replace A by its closure), the conditions imply R(λ − A) = L_0, and the semigroup is obtained by

T(t)f = lim_{n→∞} (I − (1/n)A)^{−[nt]} f.

(One must show that the right side is Cauchy.)
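The limit T(t)f = lim (I − A/n)^{−[nt]} f can be watched converging when the generator is a finite matrix, where the resolvent is just a matrix inverse. A sketch for a hypothetical 2-state chain with jump rates a and b, whose exact semigroup is known in closed form:

```python
import math

a, b = 1.5, 0.5            # jump rates of a hypothetical 2-state chain
A = [[-a, a], [b, -b]]     # its generator matrix

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

t, n = 1.0, 2000
# Resolvent (I - A/n)^{-1}, raised to the power [nt]
I_minus = [[(1.0 if i == j else 0.0) - A[i][j] / n for j in range(2)]
           for i in range(2)]
R = inv2(I_minus)
Tn = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(int(n * t)):
    Tn = mat_mul(Tn, R)

# Exact semigroup: e^{tA} = Pi + e^{-(a+b)t}(I - Pi),
# where the rows of Pi are the stationary distribution (b, a)/(a+b)
pi = [b / (a + b), a / (a + b)]
exact = [[pi[j] + math.exp(-(a + b) * t) * ((1.0 if i == j else 0.0) - pi[j])
          for j in range(2)] for i in range(2)]

err = max(abs(Tn[i][j] - exact[i][j]) for i in range(2) for j in range(2))
print(err)
```

Each factor (I − A/n)^{−1} is a contraction with nonnegative entries, which is the probabilistic content of the dissipativity condition.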
Uniqueness under the Hille-Yosida conditions
Theorem 2.6 If A satisfies the conditions of Theorem 2.5 and D(A) is separating, then there is at most one solution to the martingale problem.

Proof. If X is a solution of the martingale problem for A, then by Lemma 2.3, for each t > 0 and each f ∈ D(A), T(t − s)f(X(s)) is a martingale in s. This martingale property extends to all f in the closure of D(A). Consequently,

E[f(X(t))|F_s] = T(t − s)f(X(s)),

and E[f(X(t))] = E[T(t)f(X(0))], which determines the one dimensional distributions, implying uniqueness.
Cadlag versions
Lemma 2.7 Suppose E is compact and A ⊂ C(E) × B(E). If D(A) is separating, then any solution of the martingale problem for A has a cadlag modification.

Proof. By the upcrossing inequality, there exists a set Ω_f ⊂ Ω with P(Ω_f) = 1 such that for ω ∈ Ω_f, lim_{s→t+, s∈Q} f(X(s, ω)) exists for each t ≥ 0 and lim_{s→t−, s∈Q} f(X(s, ω)) exists for each t > 0.
Quasi-left continuity
X is quasi-left continuous if and only if for each sequence of stopping times τ_1 ≤ τ_2 ≤ · · · such that τ ≡ lim_{n→∞} τ_n < ∞ a.s.,

lim_{n→∞} X(τ_n) = X(τ) a.s.

Lemma 2.8 Let A ⊂ C(E) × B(E), and suppose that D(A) is separating. Let X be a cadlag solution of the martingale problem for A. Then X is quasi-left continuous.

Proof. For (f, g) ∈ A,

lim_{n→∞} f(X(τ_n ∧ t)) = lim_{n→∞} E[f(X(τ ∧ t)) − ∫_{τ_n∧t}^{τ∧t} g(X(s)) ds|F_{τ_n}] = E[f(X(τ ∧ t))|∨_n F_{τ_n}].

Since X is cadlag,

lim_{n→∞} X(τ_n ∧ t) = X(τ ∧ t) if τ_n ∧ t = τ ∧ t for n sufficiently large, and
lim_{n→∞} X(τ_n ∧ t) = X(τ ∧ t−) if τ_n ∧ t < τ ∧ t for all n.

To complete the proof, see Exercise 6.
Continuity of diffusion process
Lemma 2.9 Suppose E = R^d and

Af(x) = (1/2) Σ_{i,j} a_{ij}(x) (∂²/∂x_i∂x_j) f(x) + Σ_i b_i(x) (∂/∂x_i) f(x), D(A) = C_c^2(R^d).

If X is a solution of the martingale problem for A, then X has a modification that is cadlag in R^d ∪ {∞}. If X is cadlag, then X is continuous.

Proof. The existence of a cadlag modification follows by Lemma 2.7. To show continuity, it is enough to show that for f ∈ C_c^∞(R^d), f∘X is continuous. To show f∘X is continuous, it is enough to show

lim_{max|t_{i+1}−t_i|→0} Σ_i (f(X(t_{i+1} ∧ t)) − f(X(t_i ∧ t)))^4 = 0.
From the martingale properties,

E[(f(X(t + h)) − f(X(t)))^4]
= ∫_t^{t+h} E[Af^4(X(s)) − 4f(X(t))Af^3(X(s)) + 6f^2(X(t))Af^2(X(s)) − 4f^3(X(t))Af(X(s))] ds.

Check that

Af^4(x) − 4f(x)Af^3(x) + 6f^2(x)Af^2(x) − 4f^3(x)Af(x) = 0.   (2.2)
3. Forward equations
• Conditions for relative compactness for cadlag processes
• Forward equations
• Uniqueness of the forward equation under a range condition
• Construction of a solution of a martingale problem
• Stationary distributions
• Echeverria’s theorem
• Equivalence of the forward equation and the MGP
Conditions for relative compactness

Let (E, r) be a complete, separable metric space, and define q(x, y) = 1 ∧ r(x, y) (so q is an equivalent metric under which E is complete). Let {X_n} be a sequence of cadlag processes with values in E, X_n adapted to {F_t^n}.

Theorem 3.1 Assume the following:

a) For t ∈ T_0, a dense subset of [0,∞), {X_n(t)} is relatively compact (in the sense of convergence in distribution).

b) For T > 0, there exist β > 0 and random variables γ_n(δ, T) such that for 0 ≤ t ≤ T, 0 ≤ u ≤ δ,

E[q^β(X_n(t + u), X_n(t))|F_t^n] ≤ E[γ_n(δ, T)|F_t^n]

and lim_{δ→0} lim sup_{n→∞} E[γ_n(δ, T)] = 0.

Then {X_n} is relatively compact in D_E[0,∞).
Relative compactness for martingale problems
Theorem 3.2 Let E be compact, and let A_n be a sequence of generators. Suppose there exists a dense subset D ⊂ C(E) such that for each f ∈ D there exist f_n ∈ D(A_n) such that lim_{n→∞} ‖f_n − f‖ = 0 and C_f = sup_n ‖A_n f_n‖ < ∞. If {X_n} is a sequence of cadlag processes in E such that for each n, X_n is a solution of the martingale problem for A_n, then {X_n} is relatively compact in D_E[0,∞).

Remark 3.3 The assumption of compactness is not as much of a restriction as it might appear. For processes in R^d, typically A ⊂ C(R^d) × C(R^d). Replace R^d by the one point compactification R^d ∪ {∞} and add (1, 0) to A.
Proof. Let f ∈ D, and for δ > 0, let h_δ ∈ D be such that lim sup_{δ→0} √δ C_{h_δ} < ∞ and lim_{δ→0} ‖h_δ − f²‖ = 0, with h_{δ,n} ∈ D(A_n) the corresponding approximations of h_δ. Then for 0 ≤ u ≤ δ,

E[(f(X_n(t + u)) − f(X_n(t)))²|F_t^n]
= E[f²(X_n(t + u)) − f²(X_n(t))|F_t^n] − 2f(X_n(t)) E[f(X_n(t + u)) − f(X_n(t))|F_t^n]
≤ 2‖f² − h_{δ,n}‖ + |E[∫_t^{t+u} A_n h_{δ,n}(X_n(s)) ds|F_t^n]| + 4‖f‖‖f − f_n‖ + 2‖f‖ |E[∫_t^{t+u} A_n f_n(X_n(s)) ds|F_t^n]|
≤ 2‖f² − h_{δ,n}‖ + C_{h_δ}δ + 4‖f‖‖f − f_n‖ + 2‖f‖C_f δ ≡ γ_n^f(δ),

and lim_{δ→0} lim sup_{n→∞} E[γ_n^f(δ)] = 0, so relative compactness for {f(X_n)} follows. Since the collection of such f is dense in C(E), relative compactness of {X_n} follows.
The forward equation for a general Markov process

Let A ⊂ B(E) × B(E). If X is a solution of the martingale problem for A and ν_t is the distribution of X(t), then

0 = E[f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds] = ν_t f − ν_0 f − ∫_0^t ν_s Af ds,

so

ν_t f = ν_0 f + ∫_0^t ν_s Af ds, f ∈ D(A).   (3.1)

(3.1) gives the weak form of the forward equation.

Definition 3.4 A measurable mapping t ∈ [0,∞) → ν_t ∈ P(E) is a solution of the forward equation for A if (3.1) holds for all f ∈ D(A).
Fokker-Planck equation
Let Af = (1/2)a(x)f″(x) + b(x)f′(x), f ∈ D(A) = C_c^2(R). If ν_t has a C² density ν(t, x), then, integrating by parts,

ν_t Af = ∫_{−∞}^∞ Af(x) ν(t, x) dx = ∫_{−∞}^∞ f(x) ( (1/2)(∂²/∂x²)(a(x)ν(t, x)) − (∂/∂x)(b(x)ν(t, x)) ) dx

and the forward equation is equivalent to

(∂/∂t)ν(t, x) = (1/2)(∂²/∂x²)(a(x)ν(t, x)) − (∂/∂x)(b(x)ν(t, x)),

known as the Fokker-Planck equation in the physics literature.
Forward/master equation for Markov chains

By the martingale property, E[f(X(t))] = E[f(X(0))] + ∫_0^t E[Af(X(s))] ds, and taking f(y) = 1_{{x}}(y), we have

P{X(t) = x} = P{X(0) = x} + ∫_0^t ( Σ_l β_l(x − ζ_l) P{X(s) = x − ζ_l} − Σ_l β_l(x) P{X(s) = x} ) ds,

giving the Kolmogorov forward or master equation for the distribution of X. In particular, defining p_x(t) = P{X(t) = x} and ν_x = P{X(0) = x}, p_x satisfies the system of differential equations

ṗ_x(t) = Σ_l β_l(x − ζ_l) p_{x−ζ_l}(t) − (Σ_l β_l(x)) p_x(t),   (3.2)

with initial condition p_x(0) = ν_x.
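For a concrete chain, (3.2) is a countable ODE system that can be truncated and integrated directly. A sketch for a hypothetical immigration-death chain (constant birth rate β, per-capita death rate µ; the rates, truncation level, and step size are illustrative choices), comparing the mean of the numerical solution with the known value β(1 − e^{−µt})/µ:

```python
import math

beta, mu = 5.0, 1.0        # constant birth rate, per-capita death rate
K = 60                     # truncation level: state space {0, ..., K}
T, dt = 2.0, 0.001

# p[x] ≈ P{X(t) = x}; start from X(0) = 0
p = [0.0] * (K + 1)
p[0] = 1.0

def rhs(p):
    # Right-hand side of the master equation (3.2), truncated at K
    d = [0.0] * (K + 1)
    for x in range(K + 1):
        out_rate = (beta if x < K else 0.0) + mu * x
        d[x] -= out_rate * p[x]
        if x > 0:
            d[x] += beta * p[x - 1]          # birth into x from x - 1
        if x < K:
            d[x] += mu * (x + 1) * p[x + 1]  # death into x from x + 1
    return d

for _ in range(int(T / dt)):   # forward Euler time stepping
    d = rhs(p)
    p = [p[x] + dt * d[x] for x in range(K + 1)]

mean = sum(x * p[x] for x in range(K + 1))
exact = (beta / mu) * (1.0 - math.exp(-mu * T))
print(mean, exact)
```

The flows in `rhs` balance exactly, so total probability is conserved by the scheme up to rounding; the truncation at K only matters if the distribution puts visible mass near K.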
Uniqueness for the forward equation
Lemma 3.5 If ν_t and µ_t are solutions of the forward equation for A with ν_0 = µ_0 and R(λ − A) is separating for each λ > 0, then ∫_0^∞ e^{−λt}ν_t dt = ∫_0^∞ e^{−λt}µ_t dt and µ_t = ν_t for almost every t. Consequently, if ν and µ are weakly right continuous or if D(A) is separating, ν_t = µ_t for all t ≥ 0.

Proof.

λ ∫_0^∞ e^{−λt} ν_t f dt = ν_0 f + λ ∫_0^∞ e^{−λt} ∫_0^t ν_s Af ds dt
= ν_0 f + λ ∫_0^∞ ∫_s^∞ e^{−λt} ν_s Af dt ds
= ν_0 f + ∫_0^∞ e^{−λs} ν_s Af ds,

and hence

∫_0^∞ e^{−λt} ν_t(λf − Af) dt = ν_0 f.

Since R(λ − A) is separating, the result holds.
Uniqueness for the FEQ implies uniqueness for the MGP
Lemma 3.6 If uniqueness holds for the forward equation, then uniquenessholds for the martingale problem.
Proof. Uniqueness for the forward equation gives uniqueness for the one-dimensional distributions, which by Theorem 2.2 gives uniqueness for the martingale problem.
The semigroup and the forward equation
If A generates a semigroup in the Hille-Yosida sense, then

T(t)f = f + ∫_0^t T(s)Af ds = f + ∫_0^t AT(s)f ds.

If ν_0 ∈ P(E), then

ν_0 T(t)f = ν_0 f + ∫_0^t ν_0 T(s)Af ds,

and if T(t) is given by a transition function, ν_t = ∫_E P(t, x, ·)ν_0(dx) satisfies

ν_t f = ν_0 f + ∫_0^t ν_s Af ds, f ∈ D(A).

If A satisfies the conditions of the Hille-Yosida theorem and D(A) is separating, then uniqueness follows by Lemma 3.5.
Dissipativity and the positive maximum principle
For λ > 0,

‖λf − t^{−1}(T(t)f − f)‖ ≥ (λ + t^{−1})‖f‖ − t^{−1}‖T(t)f‖ ≥ λ‖f‖,

so A is dissipative:

‖λf − Af‖ ≥ λ‖f‖, λ > 0.

Definition 3.7 A satisfies the positive maximum principle if f(x) = ‖f‖ implies Af(x) ≤ 0. (For our operators, we can always add a constant to f making it positive. Consequently, f(x) = sup_{y∈E} f(y) implies Af(x) ≤ 0.)

Lemma 3.8 Let E be compact and D(A) ⊂ C(E). If A satisfies the positive maximum principle, then A is dissipative.
Digression on the proof of the Hille-Yosida theorem

The conditions of the Hille-Yosida theorem imply (I − n^{−1}A)^{−1} exists and

‖(I − n^{−1}A)^{−1}f‖ ≤ ‖f‖.

In addition,

‖(I − n^{−1}A)^{−1}f − f‖ ≤ (1/n)‖Af‖.

One proof of the Hille-Yosida theorem is to show that

T_n(t)f = (I − n^{−1}A)^{−[nt]}f

is a Cauchy sequence and to observe that

T_n(t)f = f + (1/n)∑_{k=1}^{[nt]} (I − n^{−1}A)^{−k}Af = f + ∫_0^{[nt]/n} T_n(s + n^{−1})Af ds.
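For a finite-state generator matrix the convergence T_n(t)f → T(t)f = e^{tA}f can be observed directly. A sketch with an arbitrary 2×2 generator (t is chosen so that nt is an exact integer in floating point):

```python
import numpy as np

A = np.array([[-1.0, 1.0],
              [ 0.5, -0.5]])   # generator matrix of a two-state chain
f = np.array([1.0, -2.0])
t = 0.75

def semigroup(t, A, f, terms=60):
    # T(t)f = e^{tA} f via the (rapidly converging) power series
    out, term = f.copy(), f.copy()
    for k in range(1, terms):
        term = (t / k) * (A @ term)
        out = out + term
    return out

def yosida(t, A, f, n):
    # T_n(t)f = (I - A/n)^{-[nt]} f, applied by repeated linear solves
    M = np.eye(2) - A / n
    out = f.copy()
    for _ in range(int(n * t)):
        out = np.linalg.solve(M, out)
    return out

exact = semigroup(t, A, f)
err = [np.max(np.abs(yosida(t, A, f, n) - exact)) for n in (8, 80, 800)]
```

The error decays at rate O(1/n), consistent with the resolvent bound ‖(I − n^{−1}A)^{−1}f − f‖ ≤ n^{−1}‖Af‖ above.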
Probabilistic interpretation

(n − A)^{−1} = ∫_0^∞ e^{−nt}T(t)dt and (I − n^{−1}A)^{−1} = n∫_0^∞ e^{−nt}T(t)dt. If T(t) is given by a transition function, then

η_n(x, dy) = n∫_0^∞ e^{−nt}P(t, x, dy)dt

is a discrete time transition function. If {Y_k^n} is a Markov chain with transition function η_n, then

E[f(Y_k^n)] = E[(I − n^{−1}A)^{−k}f(Y_0^n)] = E[f(X((∆_1 + ··· + ∆_k)/n))],

where the ∆_i are independent unit exponentials, and X_n(t) = Y_{[nt]}^n can be written as

X_n(t) = X((1/n)∑_{k=1}^{[nt]} ∆_k) → X(t).
Construction of a solution of a martingale problem

Theorem 3.9 Assume that E is compact, A ⊂ C(E) × C(E), (1, 0) ∈ A, A is linear, and D(A) is dense in C(E). Assume that A satisfies the positive maximum principle (and is consequently dissipative). Then there exists a transition function η_n such that

∫_E f(y)η_n(x, dy) = (I − n^{−1}A)^{−1}f(x)   (3.3)

for all f ∈ R(I − n^{−1}A).
Proof. Note that D((I − n^{−1}A)^{−1}) = R(I − n^{−1}A).

For each x ∈ E,

η_x h = (I − n^{−1}A)^{−1}h(x)

is a linear functional on R(I − n^{−1}A). If

h(x) = f(x) − (1/n)Af(x)

for some f ∈ D(A), then η_x h = f(x). Since A satisfies the positive maximum principle, η_x is a positive linear functional on R(I − n^{−1}A), |η_x h| = |f(x)| ≤ ‖h‖, and η_x 1 = 1. The Hahn-Banach theorem implies η_x extends to a positive linear functional on C(E) (hence a probability measure).

Γ_x = {η ∈ P(E) : ηf = (I − n^{−1}A)^{−1}f(x), f ∈ R(I − n^{−1}A)}

is closed and lim sup_{y→x} Γ_y ⊂ Γ_x. The measurable selection theorem implies the existence of η_n satisfying (3.3).
Approximating Markov chain

For η_n as in (3.3), define

A_n f(x) = n(∫_E f(y)η_n(x, dy) − f(x)).

Then A_n is the generator of a pure-jump Markov process of the form X_n(t) = Y^n_{N_n(t)}, where {Y_k^n} is a Markov chain with transition function η_n and N_n is a Poisson process with parameter n.

Then

f(X_n(t)) − f(X_n(0)) − ∫_0^t A_n f(X_n(s))ds

is a martingale, and in particular, if f ∈ D(A) and f_n = f − n^{−1}Af, then A_n f_n = Af and

M_f^n(t) = f_n(X_n(t)) − f_n(X_n(0)) − ∫_0^t Af(X_n(s))ds

is a martingale.
Theorem 3.2 implies {X_n} is relatively compact, and (see Lemma 2.1), if X is a limit point of {X_n}, for 0 ≤ t_1 < ··· < t_{m+1},

0 = E[(f_n(X_n(t_{m+1})) − f_n(X_n(t_m)) − ∫_{t_m}^{t_{m+1}} Af(X_n(s))ds) ∏_{i=1}^m h_i(X_n(t_i))]
 → E[(f(X(t_{m+1})) − f(X(t_m)) − ∫_{t_m}^{t_{m+1}} Af(X(s))ds) ∏_{i=1}^m h_i(X(t_i))],

at least if the t_i are selected outside the at most countable set of times at which X has a fixed point of discontinuity. Since X is right continuous, the right side is in fact zero for all choices of t_i, so X is a solution of the martingale problem for A.
Stationary distributions

Definition 3.10 A stochastic process X is stationary if the distribution of X_t ≡ X(t + ·) does not depend on t.

Definition 3.11 µ is a stationary distribution for the martingale problem for A if there exists a stationary solution of the martingale problem for A with marginal distribution µ.

Theorem 3.12 Suppose that D(A) and R(λ − A), λ > 0, are separating and that for each ν ∈ P(E), there exists a solution of the martingale problem for (A, ν). If µ ∈ P(E) satisfies

∫_E Af dµ = 0,  f ∈ D(A),

then µ is a stationary distribution for A. (See Lemma 3.5.)
Echeverria's theorem

Theorem 3.13 (Echeverría (1982)) Let E be compact, and let A ⊂ C(E) × C(E) be linear and satisfy the positive maximum principle. Suppose that D(A) is closed under multiplication and dense in C(E). If µ ∈ P(E) satisfies

∫_E Af dµ = 0,  f ∈ D(A),

then µ is a stationary distribution of A.

Example 3.14 E = [0, 1], Af(x) = (1/2)f″(x),

D(A) = {f ∈ C²[0, 1] : f′(0) = f′(1) = 0, f′(1/3) = f′(2/3)}.

Let µ(dx) = 3·1_{[1/3,2/3]}(x)dx.
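In this example the constraint f′(1/3) = f′(2/3) forces ∫_E Af dµ = (3/2)(f′(2/3) − f′(1/3)) = 0 for every f ∈ D(A), even though this D(A) is not closed under multiplication, so Theorem 3.13 does not apply (and indeed µ is not the stationary distribution of reflecting Brownian motion on [0, 1], which is uniform). A quick numerical check of ∫Af dµ = 0 using the test function f(x) = cos(3πx), chosen here because f′(x) = −3π sin(3πx) vanishes at 0, 1/3, 2/3, and 1:

```python
import numpy as np

# f(x) = cos(3*pi*x) is in D(A): f' vanishes at 0, 1/3, 2/3, 1.
# Integrate Af = (1/2) f'' against mu(dx) = 3 * 1_[1/3,2/3](x) dx.
N = 200000
dx = (2/3 - 1/3) / N
x = 1/3 + (np.arange(N) + 0.5) * dx           # midpoint rule on [1/3, 2/3]
Af = -0.5 * (3*np.pi)**2 * np.cos(3*np.pi*x)  # (1/2) f''(x)
val = np.sum(Af * 3.0) * dx                   # integral of Af against mu
```

The integral is zero analytically since ∫_{1/3}^{2/3} f″ dx = f′(2/3) − f′(1/3) = 0.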
Outline of proof

In the proof of Theorem 3.9, we constructed η_n so that

∫_E f_n(y)η_n(x, dy) = ∫_E (f(y) − (1/n)Af(y))η_n(x, dy) = f(x).

Consequently,

∫_E ∫_E f_n(y)η_n(x, dy)µ(dx) = ∫_E f(x)µ(dx) = ∫_E (f(x) − (1/n)Af(x))µ(dx) = ∫_E f_n(x)µ(dx),

and µ should be a stationary distribution for η_n.
For F(x, y) = ∑_{i=1}^m h_i(x)(f_i(y) − (1/n)Af_i(y)) + h_0(y), f_i ∈ D(A), define

Λ_n F = ∫ [∑_{i=1}^m h_i(x)f_i(x) + h_0(x)] µ(dx).

We want to apply the Hahn-Banach theorem to Λ_n considered as a linear functional on C(E × E).

If Λ_n is given by a measure ν_n, then both marginals are µ, and letting η_n satisfy ν_n(dx, dy) = η_n(x, dy)µ(dx), for f ∈ D(A),

∫ (f(y) − (1/n)Af(y))η_n(x, dy) = f(x),  µ-a.s.

The work is to show that Λ_n is a positive linear functional.
Extensions

Theorem 3.15 Let E be locally compact (e.g., E = R^d), and let A ⊂ C(E) × C(E) satisfy the positive maximum principle. Suppose that D(A) is an algebra and dense in C(E). If µ ∈ P(E) satisfies

∫_E Af dµ = 0,  f ∈ D(A),

then µ is a stationary distribution of A.

Proof. Let Ē = E ∪ {∞} and extend A to include (1, 0). There exists an Ē-valued stationary solution X of the martingale problem for the extended A, but P{X(t) ∈ E} = µ(E).

Example: Let E = R and Af(x) = a(x)f″(x), f ∈ C_c², where a(x) > 0 and ∫_{−∞}^∞ (1/a(x))dx = 1. Then µ(dx) = (1/a(x))dx is a stationary measure for A.
Complete, separable E  Bhatt and Karandikar (1993)

E complete, separable. A ⊂ C_b(E) × C_b(E).

Assume that {g_k} is closed under multiplication. Let I be the collection of finite subsets of positive integers, and for I ∈ I, let k(I) satisfy g_{k(I)} = ∏_{i∈I} g_i. For each k, there exists a_k ≥ ‖g_k‖. Let

Ê = {z ∈ ∏_{i=1}^∞ [−a_i, a_i] : z_{k(I)} = ∏_{i∈I} z_i, I ∈ I}.

Note that Ê is compact, and define G : E → Ê by G(x) = (g_1(x), g_2(x), . . .). Then G has a measurable inverse defined on the (measurable) set G(E).

Lemma 3.16 Let µ ∈ P(E). Then there exists a unique measure ν ∈ P(Ê) satisfying

∫_E g_k dµ = ∫_Ê z_k ν(dz).

In particular, if Z has distribution ν, then G^{−1}(Z) has distribution µ.
Equivalence of the forward equation and the MGP

Suppose

ν_t f = ν_0 f + ∫_0^t ν_s Af ds.

Define

B_λ f(x, θ) = Af(x, θ) + λ(∫_E f(y, −θ)ν_0(dy) − f(x, θ))

and

µ_λ = λ∫_0^∞ e^{−λt}ν_t dt × ((1/2)δ_1 + (1/2)δ_{−1}).

Then

∫ B_λ f dµ_λ = 0,  f(x, θ) = f_1(x)f_2(θ), f_1 ∈ D(A).

Let τ_1 = inf{t > 0 : Θ(t) ≠ Θ(0)}, τ_{k+1} = inf{t > τ_k : Θ(t) ≠ Θ(τ_k)}.
Theorem 3.17 Let (Y, Θ) be a stationary solution of the martingale problem for B_λ with marginal distribution µ_λ. Let τ_1 = inf{t > 0 : Θ(t) ≠ Θ(0)}, τ_{k+1} = inf{t > τ_k : Θ(t) ≠ Θ(τ_k)}. Define X(t) = Y(τ_1 + t). Then, conditioned on {τ_2 − τ_1 > t_0}, X is a solution of the martingale problem for A, and the distribution of X(t) is ν_t for 0 ≤ t ≤ t_0.

Extension of Echeverria's result to the forward equation is given in Theorem 4.9.19 of Ethier and Kurtz (1986) for locally compact spaces and in Theorem 3.1 of Bhatt and Karandikar (1993) for general complete separable metric spaces.
4. Other types of martingale problems
• Stopped martingale problems
• Local martingale problems
• Notions of uniqueness
• Controlled martingale problems
• Constrained martingale problems
Kurtz and Nappo (2011), Kurtz and Stockbridge (2001)
Stopped martingale problems

We weaken the assumption that A ⊂ C_b(E) × B(E) and consider A ⊂ C_b(E) × M(E), but assume the existence of a ψ ≥ 1 such that for each f ∈ D(A), there exists a_f > 0 such that

|Af(x)| ≤ a_f ψ(x).

Definition 4.1 An E-valued process X and a nonnegative random variable τ are a solution of the stopped martingale problem for A if there exists a filtration {F_t} such that X is {F_t}-adapted, τ is an {F_t}-stopping time,

E[∫_0^{t∧τ} ψ(X(s))ds] < ∞,  t ≥ 0,   (4.1)

and for each f ∈ D(A),

f(X(t ∧ τ)) − f(X(0)) − ∫_0^{t∧τ} Af(X(s))ds   (4.2)

is an {F_t}-martingale.
Local martingale problems

Definition 4.2 An E-valued process X is a solution of the local-martingale problem for A if there exist a filtration {F_t} such that X is {F_t}-adapted and a sequence {τ_n} of {F_t}-stopping times such that τ_n → ∞ a.s. and, for each n, (X, τ_n) is a solution of the stopped martingale problem for A using the filtration {F_t}.

Remark 4.3 If X is a solution of the local martingale problem for A, then the localizing sequence {τ_n} can be taken to be predictable. In particular, we can take

τ_n = inf{t : ∫_0^t ψ(X(s))ds ≥ n}.
Notions of uniqueness

Definition 4.4 Uniqueness holds for the (local) martingale problem for (A, ν_0) if and only if all solutions have the same finite-dimensional distributions. Stopped uniqueness holds if for any two solutions (X_1, τ_1), (X_2, τ_2) of the stopped martingale problem for (A, ν_0), there exist a stochastic process X̃ and nonnegative random variables τ̃_1, τ̃_2 such that (X̃, τ̃_1 ∨ τ̃_2) is a solution of the stopped martingale problem for (A, ν_0), (X̃(· ∧ τ̃_1), τ̃_1) has the same distribution as (X_1(· ∧ τ_1), τ_1), and (X̃(· ∧ τ̃_2), τ̃_2) has the same distribution as (X_2(· ∧ τ_2), τ_2).

Remark 4.5 Note that stopped uniqueness implies uniqueness. Stopped uniqueness holds if uniqueness holds and every solution of the stopped martingale problem can be extended (beyond the stopping time) to a solution of the (local) martingale problem. (See Lemma 4.5.16 of Ethier and Kurtz (1986) for conditions under which this extension can be done.)
Local forward equation

Definition 4.6 A pair of measure-valued functions (ν_t^0, ν_t^1), t ≥ 0, is a solution of the stopped forward equation for A if for each t ≥ 0, ν_t ≡ ν_t^0 + ν_t^1 ∈ P(E) and ∫_0^t ν_s^1 ψ ds < ∞, t → ν_t^0(C) is nondecreasing for all C ∈ B(E), and for each f ∈ D(A),

ν_t f = ν_0 f + ∫_0^t ν_s^1 Af ds.   (4.3)

A P(E)-valued function ν_t, t ≥ 0, is a solution of the local forward equation for A if there exists a sequence (ν^{0,n}, ν^{1,n}) of solutions of the stopped forward equation for A such that for each C ∈ B(E) and t ≥ 0, ν_t^{1,n}(C) is nondecreasing in n and lim_{n→∞} ν_t^{1,n}(C) = ν_t(C).

Any solution of the stopped martingale problem for A gives a solution of the stopped forward equation for A, that is,

ν_t^0 f = E[1_{[τ,∞)}(t)f(X(τ))] and ν_t^1 f = E[1_{[0,τ)}(t)f(X(t))].
Technical conditions

Condition 4.7

1. A ⊂ C_b(E) × C(E) with 1 ∈ D(A) and A1 = 0.

2. D(A) is closed under multiplication and separates points.

3. There exist ψ ∈ C(E), ψ ≥ 1, and constants a_f such that f ∈ D(A) implies |Af(x)| ≤ a_f ψ(x).

4. Defining A_0 = {(f, ψ^{−1}Af) : f ∈ D(A)}, A_0 is separable in the sense that there exists a countable collection {g_k} ⊂ D(A) such that every solution of the martingale problem for Ã_0 = {(g_k, A_0 g_k) : k = 1, 2, . . .} is a solution for A_0.

5. A_0 is a pre-generator, that is, A_0 is dissipative and there are sequences of functions µ_n : E → P(E) and λ_n : E → [0, ∞) such that for each (f, g) ∈ A_0 and each x ∈ E,

g(x) = lim_{n→∞} λ_n(x)∫_E (f(y) − f(x))µ_n(x, dy).   (4.4)
Equivalence of forward equations and martingale problems

The primary consequence of Condition 4.7 is an extension of Theorem 3.13.

Theorem 4.8 If A satisfies Condition 4.7 and (ν_t^0, ν_t^1), t ≥ 0, is a solution of the stopped forward equation for A, then there exists a solution (X, τ) of the stopped martingale problem for A such that ν_t^0 f = E[1_{[τ,∞)}(t)f(X(τ))] and ν_t^1 f = E[1_{[0,τ)}(t)f(X(t))].

If A satisfies Condition 4.7 and ν_t, t ≥ 0, is a solution of the local forward equation for A, then there exists a solution X of the local martingale problem for A satisfying ν_t f = E[f(X(t))].
Proof. First enlarge the state space to Ê = E × {0, 1} and define ν̂_t h = ν_t^0 h(·, 0) + ν_t^1 h(·, 1). Setting D(Â) = {f(x)g(y) : f ∈ D(A), g ∈ B({0, 1})}, for h = fg ∈ D(Â), define Âh(x, y) = yg(y)Af(x) and Bh(x, y) = y(h(x, 0) − h(x, y)). Then

0 = ν_t^0 h(·, 1) + ν_t^1 h(·, 1) − ν_0^0 h(·, 1) − ν_0^1 h(·, 1) − ∫_0^t ν̂_s Âh ds
 = ν̂_t h − ν̂_0 h − ∫_0^t ν̂_s Âh ds + ν_t^0 h(·, 1) − ν_t^0 h(·, 0) − ν_0^0 h(·, 1) + ν_0^0 h(·, 0)
 = ν̂_t h − ν̂_0 h − ∫_0^t ν̂_s Âh ds − ∫_{E×{0,1}×[0,t]} Bh(x, y)µ(dx × dy × ds),

where, noting that ν_t^0(C) is an increasing function of t, µ is the measure determined by

µ(C × {1} × [0, t_2]) = ν_{t_2}^0(C) − ν_0^0(C),  µ(C × {0} × [t_1, t_2]) = 0.
Controlled martingale problems

X(t) = X(0) + ∫_0^t σ(X(s), U(s))dW(s) + ∫_0^t b(X(s), U(s))ds

Z(t) = 1 − Y(∫_0^t Z(s)V(s)ds)

Define a(x, u) = σ(x, u)²,

Af(x, u) = (1/2)a(x, u)f″(x) + b(x, u)f′(x),  x ∈ R, u ∈ U,

vBg(z) = vz(g(z − 1) − g(z)),  z ∈ {0, 1}, v ∈ [0, ∞).

Then the following is a martingale:

f(X(t))g(Z(t)) − f(X(0))g(Z(0)) − ∫_0^t g(Z(s))Af(X(s), U(s))ds − ∫_0^t f(X(s))V(s)Bg(Z(s))ds.
Optimal stopping

If τ = inf{t : Z(t) = 0} and only Z is controlled, that is, we have an optimal stopping problem, then

f(X(t ∧ τ))g(Z(t)) − f(X(0))g(Z(0)) − ∫_0^t g(Z(s))Z(s)Af(X(s ∧ τ))ds − ∫_0^t f(X(s ∧ τ))V(s)Bg(Z(s))ds

is a martingale.

If ν_t h = E[h(X(t ∧ τ), Z(t))] and µ_t h = E[∫_0^t V(s)h(X(s ∧ τ), Z(s))ds], then for h(x, z) = f(x)g(z),

ν_t h = ν_0 h + ∫_0^t ν_s Ah ds + ∫_{E×{0,1}×[0,t]} Bh(x, z)µ(dx × dz × ds).
Constrained martingale problems

Consider the Skorohod equation for reflecting Brownian motion

X(t) = X(0) + ∫_0^t σ(X(s))dW(s) + ∫_0^t b(X(s))ds + ∫_0^t η(X(s))dλ(s),

where X ∈ D̄ and λ increases only when X ∈ ∂D. Then, defining Bf = η · ∇f,

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s))ds − ∫_0^t Bf(X(s))dλ(s) = ∫_0^t ∇f(X(s))^T σ(X(s))dW(s)

is a martingale.
Constrained forward equations

The forward equation becomes

ν_t f = ν_0 f + ∫_0^t ν_s Af ds + ∫_{∂D×[0,t]} Bf(x)µ(dx × ds),

and a stationary distribution must satisfy

πAf + µ_π Bf = 0

for an appropriate boundary measure µ_π on ∂D, which in the queueing literature is known as the basic adjoint relationship (see Dai and Harrison (1991, 1992)).
General controlled martingale problem

A ⊂ C_b(E) × C(E × U), B ⊂ C_b(E) × C(E × U). We want

f(X(t)) − f(X(0)) − ∫_0^t ∫_U Af(X(s), u)Λ_s(du)ds − ∫_{E×U×[0,t]} Bf(x, u)Γ(dx × du × ds)

to be martingales with respect to some {F_t}. The corresponding "forward equation" becomes

ν_t^E f = ν_0^E f + ∫_0^t ν_s Af ds + ∫_{E×U×[0,t]} Bf(x, u)µ(dx × du × ds),

where ν_t ∈ P(E × U) and ν_t^E is the E-marginal.
5. Change of measure
• Absolute continuity and the Radon-Nikodym theorem
• Bayes formula
• Local absolute continuity
• Martingales under a change of measure
• Change of measure for Brownian motion
• Change of measure for Poisson processes
• Applications of absolute continuity
Absolute continuity and the Radon-Nikodym theorem

Definition 5.1 Let P and Q be probability measures on (Ω, F). Then P is absolutely continuous with respect to Q (P << Q) if and only if Q(A) = 0 implies P(A) = 0.

Theorem 5.2 If P << Q, then there exists a random variable L ≥ 0 such that

P(A) = E^Q[1_A L] = ∫_A L dQ,  A ∈ F.

Consequently, Z is P-integrable if and only if ZL is Q-integrable, and

E^P[Z] = E^Q[ZL].

Standard notation: dP/dQ = L.
Bayes Formula

Recall the definition of conditional expectation: Y = E[Z|D] if Y is D-measurable and for each D ∈ D, ∫_D Y dP = ∫_D Z dP.

Lemma 5.3 (Bayes Formula) If dP = LdQ, then

E^P[Z|D] = E^Q[ZL|D] / E^Q[L|D].   (5.1)

Proof. Clearly the right side of (5.1) is D-measurable. Let D ∈ D. Then

∫_D (E^Q[ZL|D]/E^Q[L|D]) dP = ∫_D (E^Q[ZL|D]/E^Q[L|D]) L dQ
 = ∫_D (E^Q[ZL|D]/E^Q[L|D]) E^Q[L|D] dQ
 = ∫_D E^Q[ZL|D] dQ
 = ∫_D ZL dQ = ∫_D Z dP,

which verifies the identity.
Examples

For real-valued random variables with a joint density, X, Y ∼ f_{XY}(x, y), conditional expectations can be computed by

E[g(X)|Y = y] = ∫_{−∞}^∞ g(z)f_{XY}(z, y)dz / f_Y(y),

that is, setting h(y) equal to the right side,

E[g(X)|Y] = h(Y).

For general random variables, suppose X and Y are independent on (Ω, F, Q). Let L = H(X, Y) ≥ 0 with E[H(X, Y)] = 1. Define

ν_X(Γ) = Q{X ∈ Γ},  dP = H(X, Y)dQ.

Bayes formula becomes

E^P[g(X)|Y] = E^Q[g(X)H(X, Y)|Y] / E^Q[H(X, Y)|Y] = ∫g(x)H(x, Y)ν_X(dx) / ∫H(x, Y)ν_X(dx).
Local absolute continuity

Theorem 5.4 Let (Ω, F) be a measurable space, and let P and Q be probability measures on F. Let {F_t} be a filtration, and suppose P|_{F_t} << Q|_{F_t}, t ≥ 0. Define L(t) = dP/dQ|_{F_t}. Then L(t) is a nonnegative {F_t}-martingale on (Ω, F, Q), and L_∞ = lim_{t→∞} L(t) satisfies E^Q[L_∞] ≤ 1. If E^Q[L_∞] = 1, then P << Q on F_∞ = ∨_t F_t.

Proof. If t < s and D ∈ F_t ⊂ F_s, then P(D) = E^Q[L(t)1_D] = E^Q[L(s)1_D], which implies E[L(s)|F_t] = L(t). If E[L_∞] = 1, then L(t) → L_∞ in L¹, so

P(D) = E^Q[L_∞ 1_D],  D ∈ ∪_t F_t,

hence for all D ∈ ∨_t F_t.

Proposition 5.5 P << Q on F_∞ if and only if P{lim_{t→∞} L(t) < ∞} = 1.

Proof. The dominated convergence theorem implies

P{sup_t L(t) ≤ K} = lim_{r→∞} E^Q[1_{{sup_{t≤r} L(t) ≤ K}} L(r)] = E^Q[1_{{sup_t L(t) ≤ K}} L_∞].

Letting K → ∞, we see that E^Q[L_∞] = 1.
Martingales and change of measure (See Protter (2004), Section III.6.)

Let {F_t} be a filtration, and assume that P|_{F_t} << Q|_{F_t} and that L(t) is the corresponding Radon-Nikodym derivative. Then, as before, L is an {F_t}-martingale on (Ω, F, Q).

Lemma 5.6 Z is a P-local martingale if and only if LZ is a Q-local martingale.

Proof. For a bounded stopping time τ, Z(τ) is P-integrable if and only if L(τ)Z(τ) is Q-integrable. Furthermore, if L(τ ∧ t)Z(τ ∧ t) is Q-integrable, then L(t)Z(τ ∧ t) is Q-integrable and E^Q[L(τ ∧ t)Z(τ ∧ t)] = E^Q[L(t)Z(τ ∧ t)].

By Bayes formula, E^P[Z(t + h) − Z(t)|F_t] = 0 if and only if

E^Q[L(t + h)(Z(t + h) − Z(t))|F_t] = 0,

which is equivalent to

E^Q[L(t + h)Z(t + h)|F_t] = E^Q[L(t + h)Z(t)|F_t] = L(t)Z(t),

so Z is a martingale under P if and only if LZ is a martingale under Q.
Semimartingale decompositions under a change of measure

Theorem 5.7 If M is a Q-local martingale, then

Z(t) = M(t) − ∫_0^t (1/L(s))d[L, M]_s   (5.2)

is a P-local martingale, where [L, M]_t is the covariation of L and M. (Note that the integrand is 1/L(s), not 1/L(s−).)

Proof. Note that LM − [L, M] is a Q-local martingale. We need to show that LZ is a Q-local martingale. But letting V denote the second term on the right of (5.2), we have

L(t)V(t) = ∫_0^t V(s−)dL(s) + ∫_0^t L(s)dV(s),

and hence

L(t)Z(t) = L(t)M(t) − [L, M]_t − ∫_0^t V(s−)dL(s).

Both terms on the right are Q-local martingales.
Change of measure for Poisson processes

Theorem 5.8 Let N be a unit Poisson process on (Ω, F, Q) that is compatible with {F_t}. If λ is nonnegative, cadlag, {F_t}-adapted, and satisfies

∫_0^t λ(s)ds < ∞ a.s., t ≥ 0,

then

L(t) = exp{∫_0^t ln λ(s−)dN(s) − ∫_0^t (λ(s) − 1)ds}

satisfies

L(t) = 1 + ∫_0^t (λ(s−) − 1)L(s−)d(N(s) − s)   (5.3)

and is a Q-local martingale. If E[L(T)] = 1 and we define dP = L(T)dQ on F_T, then N(t) − ∫_0^t λ(s)ds is a P-local martingale.
Proof. Note that at the jump times of N, L(s) = λ(s−)L(s−). By Theorem 5.7,

Z(t) = N(t) − t − ∫_0^t (1/L(s))(λ(s−) − 1)L(s−)dN(s) = ∫_0^t (1/λ(s−))dN(s) − t

is a local martingale under P. Consequently,

∫_0^t λ(s−)dZ(s) = N(t) − ∫_0^t λ(s)ds

is a local martingale under P.
Construction of counting processes by change of measure

Let D_c[0, ∞) denote the collection of nonnegative integer-valued cadlag functions x with x(0) = 0 and x constant except for jumps of +1. Suppose that λ : D_c[0, ∞) × [0, ∞) → [0, ∞),

∫_0^t λ(x, s)ds < ∞,  t ≥ 0, x ∈ D_c[0, ∞),

and that λ(x, s) = λ(x(· ∧ s), s) (that is, λ is nonanticipating). If we take λ(t) = λ(N, t) and let S_n = inf{t : N(t) = n}, then defining dP = L(S_n)dQ on F_{S_n},

N(t ∧ S_n) − ∫_0^{t∧S_n} λ(N, s)ds

is an {F_{t∧S_n}}-martingale on (Ω, F_{S_n}, P). In particular, N^{S_n} ≡ N(· ∧ S_n) can be written as the solution of

N^{S_n}(t) = Y(∫_0^{t∧S_n} λ(N^{S_n}, s)ds)

for Y a unit Poisson process.
Multivariate counting processes

Theorem 5.9 Let λ_k(t, Z, N) be as in Condition 1.13, and let N_k, k = 1, . . . , m, be independent unit Poisson processes that are independent of Z on (Ω, F, Q). Define

L(t) = ∏_{k=1}^m exp{∫_0^t log λ_k(s−, Z, N)dN_k(s) − ∫_0^t (λ_k(s, Z, N) − 1)ds}

and dP|_{F_{T∧S_n}} = L(T ∧ S_n)dQ|_{F_{T∧S_n}}. Then on (Ω, F_{T∧S_n}, P), for k = 1, . . . , m,

N_k(t ∧ S_n) − ∫_0^{t∧S_n} λ_k(s, Z, N)ds

is an {F_{t∧S_n}}-martingale.
Change of measure for Brownian motion

Let W be standard Brownian motion, and let ξ be an adapted process. Define

L(t) = exp{∫_0^t ξ(s)dW(s) − (1/2)∫_0^t ξ²(s)ds}

and note that

L(t) = 1 + ∫_0^t ξ(s)L(s)dW(s).

Then L(t) is a local martingale.

Assume E^Q[L(t)] = 1 for all t ≥ 0. Then L is a martingale. Fix a time T, and restrict attention to the probability space (Ω, F_T, Q). On F_T, define dP = L(T)dQ.

For t < T, let A ∈ F_t. Then

P(A) = E^Q[1_A L(T)] = E^Q[1_A E^Q[L(T)|F_t]] = E^Q[1_A L(t)],

which has no dependence on T (it is crucial here that L is a martingale).
New Brownian motion

Theorem 5.10 W̃(t) = W(t) − ∫_0^t ξ(s)ds is a standard Brownian motion on (Ω, F_T, P).

Proof. W̃ is continuous with quadratic variation [W̃]_t = t a.s., so by Levy's characterization of Brownian motion, it is enough to show that W̃ is a local martingale (and hence a martingale). But since W is a Q-martingale and [L, W]_t = ∫_0^t ξ(s)L(s)ds, Theorem 5.7 gives the desired result.
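With constant ξ ≡ c, the change of measure simply adds drift c: under P, W(T) should be distributed N(cT, T). A Monte Carlo sketch of Theorem 5.10 at the single time T (sample size and seed are arbitrary), checking the first two reweighted moments:

```python
import numpy as np

rng = np.random.default_rng(1)
T, c, n = 1.0, 0.8, 400000

W = rng.normal(0.0, np.sqrt(T), size=n)      # W(T) under Q
L = np.exp(c * W - 0.5 * c**2 * T)           # Girsanov density with xi == c

# Under dP = L dQ, W(T) ~ N(cT, T): check mean and second moment.
m1 = (L * W).mean()          # ~ c*T
m2 = (L * W**2).mean()       # ~ T + (c*T)^2
```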
Changing the drift of a diffusion

Suppose that

X(t) = X(0) + ∫_0^t σ(X(s))dW(s)

and set ξ(s) = b(X(s)). Note that X is a diffusion with generator (1/2)σ²(x)f″(x). Define

L(t) = exp{∫_0^t b(X(s))dW(s) − (1/2)∫_0^t b²(X(s))ds},

and assume that E^Q[L(T)] = 1 (e.g., if b is bounded). Set dP = L(T)dQ on (Ω, F_T).
Transformed SDE

Define W̃(t) = W(t) − ∫_0^t b(X(s))ds. Then

X(t) = X(0) + ∫_0^t σ(X(s))dW(s)   (5.4)
 = X(0) + ∫_0^t σ(X(s))dW̃(s) + ∫_0^t σ(X(s))b(X(s))ds,

so under P, X is a diffusion with generator

Af(x) = (1/2)σ²(x)f″(x) + σ(x)b(x)f′(x).   (5.5)
Conditions that imply local absolute continuity

Let Af(x) = (1/2)σ²(x)f″(x) + σ(x)b(x)f′(x).

Condition 5.11 If ν ∈ P(R) and (X_n, τ_n), n = 1, 2, . . ., satisfy: X_n(0) has distribution ν,

f(X_n(t ∧ τ_n)) − f(X_n(0)) − ∫_0^{t∧τ_n} Af(X_n(s))ds

is an {F_t^n}-martingale for each f ∈ C_c²(R), and

τ_n = inf{t : ∫_0^t b²(X_n(s))ds ≥ n},

then lim_{n→∞} P{τ_n ≤ T} = 0 for each T > 0.

Theorem 5.12 Suppose Condition 5.11 holds, and let W be a Brownian motion on (Ω, F, Q). If X is a solution of

X(t) = X(0) + ∫_0^t σ(X(s))dW(s)

on (Ω, F, Q) and X(0) has distribution ν, then for each T, there is a change of measure dP = L(T)dQ such that X on (Ω, F_T, P) is a solution of the martingale problem for A on [0, T].
Proof. Let

L(t) = exp{∫_0^t b(X(s))dW(s) − (1/2)∫_0^t b²(X(s))ds},

and define τ_n = inf{t : ∫_0^t b²(X(s))ds > n}. Then E^Q[L(T ∧ τ_n)] = 1, and we can define dP = L(T ∧ τ_n)dQ on F_{T∧τ_n}. On (Ω, F_{T∧τ_n}, P),

W̃(t ∧ τ_n) = W(t ∧ τ_n) − ∫_0^{t∧τ_n} b(X(s))ds

is a Brownian motion stopped at τ_n, and

X(t) = X(0) + ∫_0^t σ(X(s))dW̃(s) + ∫_0^t σ(X(s))b(X(s))ds

for t ≤ T ∧ τ_n. Then (X, τ_n), n = 1, 2, . . ., satisfies Condition 5.11, and since

P{L(T ∧ τ_n) > K} = P{∫_0^{T∧τ_n} b(X(s))dW̃(s) + (1/2)∫_0^{T∧τ_n} b²(X(s))ds > log K},

we can apply Proposition 5.5 to conclude that P << Q on F_T, that is, E[L(T)] = 1.
Maximum likelihood estimation

Suppose for each α ∈ A,

P_α(Γ) = ∫_Γ L_α dQ

and

L_α = H(α, X_1, X_2, . . . , X_n)

for random variables X_1, . . . , X_n. The maximum likelihood estimate α̂ for the "true" parameter α_0 ∈ A based on observations of the random variables X_1, . . . , X_n is the value of α that maximizes H(α, X_1, X_2, . . . , X_n).

For example, under certain conditions the distribution of

X_α(t) = X(0) + ∫_0^t σ(X_α(s))dW(s) + ∫_0^t b(X_α(s), α)ds

will be absolutely continuous with respect to the distribution of X satisfying

X(t) = X(0) + ∫_0^t σ(X(s))dW(s).   (5.6)
Sufficiency

If dP_α = L_α dQ where

L_α(X, Y) = H_α(X)G(X, Y),

then X is a sufficient statistic for α. Without loss of generality, we can assume E^Q[G(X, Y)] = 1, and hence dQ̃ = G(X, Y)dQ defines a probability measure.

Example 5.13 If (X_1, . . . , X_n) are iid N(µ, σ²) under P_{(µ,σ)} and Q = P_{(0,1)}, then

L_{(µ,σ)} = (1/σⁿ) exp{−((1 − σ²)/(2σ²))∑_{i=1}^n X_i² + (µ/σ²)∑_{i=1}^n X_i − nµ²/(2σ²)},

so (∑_{i=1}^n X_i², ∑_{i=1}^n X_i) is a sufficient statistic for (µ, σ).
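The likelihood ratio in Example 5.13 depends on the data only through (∑X_i², ∑X_i), so any two samples sharing those sums give the same likelihood for every (µ, σ). A sketch that also checks the displayed formula against the direct density ratio (the grid of parameter values is an arbitrary choice):

```python
import numpy as np

def loglik_ratio(x, mu, sigma):
    # log L_{(mu,sigma)} from Example 5.13, a function of the data only
    # through s2 = sum(x_i^2) and s1 = sum(x_i)
    n = len(x)
    s1, s2 = np.sum(x), np.sum(x**2)
    return (-n*np.log(sigma) - (1 - sigma**2)/(2*sigma**2)*s2
            + mu/sigma**2*s1 - n*mu**2/(2*sigma**2))

def loglik_direct(x, mu, sigma):
    # log of the density ratio prod_i phi_{mu,sigma}(x_i)/phi_{0,1}(x_i)
    return np.sum(-np.log(sigma) - (x - mu)**2/(2*sigma**2) + x**2/2)

x1 = np.array([1.0, -1.0, 2.0])
x2 = np.array([-1.0, 2.0, 1.0])   # same sums, hence same sufficient statistics
grid = [(m, s) for m in (-1.0, 0.0, 2.0) for s in (0.5, 1.0, 3.0)]
err_form = max(abs(loglik_ratio(x1, m, s) - loglik_direct(x1, m, s))
               for m, s in grid)
err_suff = max(abs(loglik_ratio(x1, m, s) - loglik_ratio(x2, m, s))
               for m, s in grid)
```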
Parameter estimates and sufficiency

Theorem 5.14 If θ̂(X, Y) is an estimator of θ(α) and ϕ is convex, then

E^{P_α}[ϕ(θ(α) − θ̂(X, Y))] ≥ E^{P_α}[ϕ(θ(α) − E^Q[θ̂(X, Y)|X])].

Proof.

E^{P_α}[ϕ(θ(α) − θ̂(X, Y))] = E^Q[ϕ(θ(α) − θ̂(X, Y))H_α(X)]
 = E^Q[E^Q[ϕ(θ(α) − θ̂(X, Y))|X]H_α(X)]
 ≥ E^Q[ϕ(θ(α) − E^Q[θ̂(X, Y)|X])H_α(X)].
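Theorem 5.14 is the Rao-Blackwell principle: replacing an estimator by its conditional expectation given the sufficient statistic cannot increase a convex loss. A Monte Carlo sketch in a deliberately simple (hypothetical) setup where X carries all the information and Y is independent noise, so the conditioning is explicit:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n = 1.7, 200000

X = rng.normal(theta, 1.0, size=n)       # informative observation
Y = rng.normal(0.0, 1.0, size=n)         # independent pure noise

crude = X + Y                            # unbiased but noisy estimator
rb = X                                   # E[crude | X] = X ("Rao-Blackwellized")

mse_crude = np.mean((crude - theta)**2)  # ~ 2 (variance of X plus Y)
mse_rb = np.mean((rb - theta)**2)        # ~ 1
```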
Other applications

Finance: Asset pricing models depend on finding a change of measure under which the price process becomes a martingale.

Stochastic Control: For a controlled diffusion process

X(t) = X(0) + ∫_0^t σ(X(s))dW(s) + ∫_0^t b(X(s), u(s))ds,

where the control only enters the drift coefficient, the controlled process can be obtained from an uncontrolled process satisfying (5.6) via a change of measure.
6. Martingale problems for conditional distributions
• Conditional distributions for martingale problems
• Partially observed processes
• Filtered martingale problem
• Markov mapping theorem
• Burke’s theorem
A martingale lemma

Let {F_t} and {G_t} be filtrations with G_t ⊂ F_t.

Lemma 6.1 Suppose that

M(t) = U(t) − ∫_0^t V(s)ds

is an {F_t}-martingale. Then

E[U(t)|G_t] − ∫_0^t E[V(s)|G_s]ds

is a {G_t}-martingale.

Proof. The lemma follows by the definition and properties of conditional expectations.
Martingale properties of conditional distributions

Corollary 6.2 If X is a solution of the martingale problem for A with respect to the filtration {F_t} and π_t is the conditional distribution of X(t) given G_t ⊂ F_t, then

π_t f − π_0 f − ∫_0^t π_s Af ds   (6.1)

is a {G_t}-martingale for each f ∈ D(A).
Forward equation

Recall that a P(E)-valued function ν_t, t ≥ 0, is a solution of the forward equation for A if for each t > 0, ∫_0^t ν_s ψ ds < ∞ (see Condition 4.7) and for each f ∈ D(A),

ν_t f = ν_0 f + ∫_0^t ν_s Af ds.   (6.2)

Note that if π satisfies (6.1), then ν_t = E[π_t] satisfies (6.2).
Martingale characterization of conditional distributions

Theorem 6.3 Suppose that {π_t, t ≥ 0} is a cadlag, P(E)-valued process with no fixed points of discontinuity adapted to {G_t} satisfying

E[∫_0^t π_s ψ ds] < ∞,  t > 0,

and that

π_t f − π_0 f − ∫_0^t π_s Af ds

is a {G_t}-martingale for each f ∈ D(A). Then there exist a solution X̃ of the martingale problem for A, a P(E)-valued process {π̃_t, t ≥ 0} with the same distribution as {π_t, t ≥ 0}, and a filtration {G̃_t} such that π̃_t is the conditional distribution of X̃(t) given G̃_t.
Conditioning on a process

Theorem 6.4 If {G_t} in Theorem 6.3 is generated by a cadlag process Y with no fixed points of discontinuity and π(0), that is,

G_t = F_t^Y ∨ σ(π(0)),

then there exist a solution X̃ of the martingale problem for A, a P(E)-valued process {π̃_t, t ≥ 0}, and a process Ỹ such that {π̃_t, t ≥ 0} and Ỹ have the same joint distribution as {π_t, t ≥ 0} and Y, and π̃_t is the conditional distribution of X̃(t) given F_t^Ỹ ∨ σ(π̃(0)).
Idea of proof

Enlarge the state space so that the current state of the process contains all information about the past of the observation Y.

Let {b_k}, {c_k} ⊂ C_b(E_0) satisfy 0 ≤ b_k, c_k ≤ 1, and suppose that the spans of {b_k} and {c_k} are bounded, pointwise dense in B(E_0).

Let a_1, a_2, . . . be an ordering of the rationals with a_i ≥ 1, and

V_{ki}(t) = c_k(Y(0)) − a_i ∫_0^t V_{ki}(s)ds + ∫_0^t b_k(Y(s))ds   (6.3)
 = c_k(Y(0))e^{−a_i t} + ∫_0^t e^{−a_i(t−s)}b_k(Y(s))ds.
Set V(t) = (V_{ki}(t) : k, i ≥ 1) ∈ [0, 1]^∞,

D(Â) = {f(x)∏_{k,i=1}^m g_{ki}(v_{ki}) : f ∈ D(A), g_{ki} ∈ C¹[0, 1], m = 1, 2, . . .},

and

Â(fg)(x, v, u) = g(v)Af(x) + f(x)∑(−a_i v_{ki} + b_k(u))∂_{ki}g(v).

For fg ∈ D(Â),

π_t f g(V(t)) − π_0 f g(V(0)) − ∫_0^t (g(V(s))π_s Af + π_s f ∑(−a_i V_{ki}(s) + b_k(Y(s)))∂_{ki}g(V(s)))ds

is an {F_t^Y}-martingale, and ν_t defined by

ν_t(fgh) = E[π_t f g(V(t))h(Y(t))]

is a solution of the controlled forward equation for Â.
Partially observed processes

Let γ : E → E_0 be Borel measurable.

Corollary 6.5 If, in Theorem 6.4, Y and π satisfy

∫_E h ∘ γ(x)π_t(dx) = h(Y(t)) a.s.

for all h ∈ B(E_0) and t ≥ 0, then Y(t) = γ(X(t)).

(cf. Kurtz and Ocone (1988) and Kurtz (1998))
The filtered martingale problem

Definition 6.6 A P(E)-valued process π and an E_0-valued process Y are a solution of the filtered martingale problem for (A, γ) if

π_t f − π_0 f − ∫_0^t π_s Af ds

is an {F_t^Y ∨ σ(π(0))}-martingale for each f ∈ D(A) and ∫_E h ∘ γ(x)π_t(dx) = h(Y(t)) a.s. for all h ∈ B(E_0) and t ≥ 0.

Theorem 6.7 Let ϕ_0 ∈ P(P(E)) and define µ_0 = ∫_{P(E)} µ ϕ_0(dµ). If uniqueness holds for the martingale problem for (A, µ_0), then uniqueness holds for the filtered martingale problem for (A, γ, ϕ_0). If uniqueness holds for the filtered martingale problem for (A, γ, ϕ_0), then {π_t, t ≥ 0} is a Markov process.
Markov mappings

Theorem 6.8 Let γ : E → E_0 be Borel measurable, and let α be a transition function from E_0 into E satisfying

α(y, γ^{−1}(y)) = 1.

Let µ_0 ∈ P(E_0), ν_0 = ∫ α(y, ·)µ_0(dy), and define

C = {(∫_E f(z)α(·, dz), ∫_E Af(z)α(·, dz)) : f ∈ D(A)}.

If Y is a solution of the MGP for (C, µ_0), then there exists a solution Z of the MGP for (A, ν_0) such that γ ∘ Z and Y have the same distribution on M_{E_0}[0, ∞), and

E[f(Z(t))|F_t^Y] = ∫ f(z)α(Y(t), dz)

(at least for almost every t; for all t if Y has no fixed points of discontinuity).
Uniqueness

Corollary 6.9 If uniqueness holds for the MGP for (A, ν_0), then uniqueness holds for the M_{E_0}[0, ∞)-MGP for (C, µ_0). If Y has sample paths in D_{E_0}[0, ∞), then uniqueness holds for the D_{E_0}[0, ∞)-martingale problem for (C, µ_0).

Existence for (C, µ_0) and uniqueness for (A, ν_0) imply existence for (A, ν_0) and uniqueness for (C, µ_0), and hence that Y is Markov.
Intertwining condition

Let α(y, Γ) be a transition function from E_0 to E satisfying

α(y, γ^{−1}(y)) = 1,

and define S(t) : B(E_0) → B(E_0) by

S(t)g(y) = αT(t)g ∘ γ(y) ≡ ∫_E T(t)g ∘ γ(x)α(y, dx).

Theorem 6.10 (Rogers and Pitman (1981), cf. Rosenblatt (1966)) If for each t ≥ 0,

αT(t)f = S(t)αf,  f ∈ B(E)  (so that S(t) is a semigroup),

and X is a Markov process with initial distribution α(y, ·) and semigroup T(t), then Y = γ ∘ X is a Markov process with Y(0) = y and

P{X(t) ∈ Γ|F_t^Y} = α(Y(t), Γ).
•First •Prev •Next •Go To •Go Back •Full Screen •Close •Quit 122
Generator for Y

Note that the identity αT(t)f = S(t)αf, f ∈ B(E), suggests that the generator for Y is given by

	Cαf = αAf.
Burke's output theorem (Kliemann, Koch, and Marchetti)

Let X = (Q, D) be an M/M/1 queue together with its departure process:

	Af(k, l) = λ(f(k + 1, l) − f(k, l)) + µ1_{k>0}(f(k − 1, l + 1) − f(k, l)),

	γ(k, l) = l.

Assume λ < µ and define

	α(l, {(k, l)}) = (1 − λ/µ)(λ/µ)^k,  k = 0, 1, 2, . . . ,   α(l, {(k, m)}) = 0,  m ≠ l,

that is, α(l, ·) puts the stationary distribution of the queue length on the k-coordinate. Since this distribution is stationary for the queue-length motion alone, only the departure terms survive, and

	αAf(l) = µ Σ_{k=1}^∞ (1 − λ/µ)(λ/µ)^k (f(k − 1, l + 1) − f(k − 1, l)) = λ(αf(l + 1) − αf(l)).
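The intertwining computation above can be checked numerically. The sketch below (truncating the geometric sum, with an arbitrary bounded test function f of my own choosing) verifies αAf(l) = λ(αf(l+1) − αf(l)) when α(l, ·) carries the stationary weights (1 − λ/µ)(λ/µ)^k:

```python
import math

lam, mu = 1.0, 2.0       # arrival and service rates, λ < µ
rho = lam / mu
K = 200                  # series truncation (rho**K is negligible)

def alpha(k):            # stationary queue-length weights (1 − ρ)ρ^k
    return (1.0 - rho) * rho ** k

def f(k, l):             # arbitrary bounded test function
    return math.exp(-0.3 * k) * math.cos(0.5 * l)

def Af(k, l):            # generator of (Q, D) applied to f
    out = lam * (f(k + 1, l) - f(k, l))
    if k > 0:
        out += mu * (f(k - 1, l + 1) - f(k, l))
    return out

def avg(g, l):           # αg(l) = Σ_k α(k) g(k, l)
    return sum(alpha(k) * g(k, l) for k in range(K))

l = 3
lhs = avg(Af, l)
rhs = lam * (avg(f, l + 1) - avg(f, l))
print(lhs - rhs)  # ≈ 0
```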
Poisson output

Therefore, there exists a solution (Q, D) of the martingale problem for A such that D is a Poisson process with parameter λ and

	P{Q(t) = k | F_t^D} = (1 − λ/µ)(λ/µ)^k,

that is, Q(t) is independent of F_t^D and is geometrically distributed.
Pitman's theorem

Let Z be a standard Brownian motion, M(t) = sup_{s≤t} Z(s), and V(t) = M(t) − Z(t). Set

	X(t) = (Z(t), M(t) − Z(t)) = (Z(t), V(t)),

	Y(t) = 2M(t) − Z(t) = γ(X(t)) = 2V(t) + Z(t),

	Af(z, v) = (1/2) f_zz(z, v) − f_zv(z, v) + (1/2) f_vv(z, v),  with boundary condition f_v(z, 0) = 0.

Then

	F(y) = αf(y) = (1/y) ∫_0^y f(y − 2v, v) dv,

	αAf(y) = (1/2) F′′(y) + (1/y) F′(y).
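The identity αAf(y) = (1/2)F′′(y) + (1/y)F′(y) (the BES(3) generator applied to F) can be checked numerically. A minimal sketch, using a test function of my own choosing, f(z, v) = z·v², which satisfies f_v(z, 0) = 0, together with midpoint quadrature and finite differences:

```python
def f(z, v):   # test function with f_v(z, 0) = 0
    return z * v * v

def Af(z, v):  # (1/2)f_zz − f_zv + (1/2)f_vv for f = z v²
    return -2.0 * v + z

def F(y, n=4000):  # F(y) = αf(y) = (1/y) ∫_0^y f(y − 2v, v) dv (midpoint rule)
    h = y / n
    return sum(f(y - 2 * (i + 0.5) * h, (i + 0.5) * h) for i in range(n)) * h / y

def aAf(y, n=4000):  # αAf(y) = (1/y) ∫_0^y Af(y − 2v, v) dv
    h = y / n
    return sum(Af(y - 2 * (i + 0.5) * h, (i + 0.5) * h) for i in range(n)) * h / y

y, eps = 1.7, 1e-3
Fp = (F(y + eps) - F(y - eps)) / (2 * eps)
Fpp = (F(y + eps) - 2 * F(y) + F(y - eps)) / eps ** 2
lhs = 0.5 * Fpp + Fp / y   # BES(3) generator applied to F
rhs = aAf(y)
print(lhs, rhs)  # both ≈ −y = −1.7
```

For this f one can also check by hand: F(y) = −y³/6, so (1/2)F′′ + F′/y = −y/2 − y/2 = −y, matching αAf(y) = (1/y)∫_0^y (y − 4v) dv = −y.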
7. Stochastic equations for Markov processes in Rd
• Markov property
• Gaussian white noise
• Poisson random measure
• Pure jump processes
• Generators in Rd
• Stochastic equations for Markov processes
• Martingale problems
• Change of measure for Poisson random measures
• Conditions for uniqueness
• Uniqueness and the Markov property
• Equations for spatial birth and death processes
Markov chains

Markov property: "Given the present, the future is independent of the past," or "the future is a function of the present and inputs independent of the past."

Discrete time: X0, ξ1, ξ2, . . . independent,

	X_{n+1} = H_{n+1}(X_n, ξ_{n+1}).

By iteration,

	X_{n+1} = H_{k,n+1}(X_k, ξ_{k+1}, . . . , ξ_{n+1}).
Continuous time: Processes with independent increments

Replace the sequence {ξ_k} by a process ξ with independent increments:

	X(s) = H_{t,s}(X(t), ξ(s ∧ (t + ·)) − ξ(t)).

There are essentially two choices for ξ: standard Brownian motion and a Poisson process.
Gaussian white noise integral

Let µ0 be a σ-finite Borel measure on (S0, r0), and set

	A(S0) = {A ∈ B(S0) : µ0(A) < ∞}.

W(A × [0, t]) is normal with E[W(A × [0, t])] = 0, Var(W(A × [0, t])) = µ0(A)t, and

	E[W(A × [0, t])W(B × [0, s])] = µ0(A ∩ B) t ∧ s.

Then W(∪A_i × [0, t]) = Σ W(A_i × [0, t]) for disjoint A_i, and

	Z(t) = ∫_{S0×[0,t]} Y(u, s) W(du × ds)

satisfies E[Z(t)] = 0,

	[Z]_t = ∫_0^t ∫_{S0} Y²(u, s) µ0(du) ds,

	E[Z²(t)] = E[[Z]_t] = ∫_0^t ∫_{S0} E[Y²(u, s)] µ0(du) ds.
Space-time Poisson random measures

Let µ1 be a σ-finite Borel measure on (U, r1), and let ξ be a Poisson random measure on U × [0,∞) with mean measure µ1 × ℓ (where ℓ denotes Lebesgue measure). Then:

ξ(A, t) ≡ ξ(A × [0, t]) is a Poisson process with parameter µ1(A).

ξ̃(A, t) ≡ ξ(A × [0, t]) − µ1(A)t is a martingale.

Definition 7.1 ξ is {F_t}-compatible if, for each A ∈ A(U), ξ(A, ·) is {F_t}-adapted and, for all t, s ≥ 0, ξ(A × (t, t + s]) is independent of F_t.
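A space-time Poisson random measure is easy to simulate when µ1 is finite: jump times form a rate-µ1(U) Poisson process, and each point carries an independent µ1-distributed mark. A minimal sketch with U = [0, 1] and µ1 = Lebesgue measure, checking that ξ(A, T) has Poisson mean and variance µ1(A)·T:

```python
import random

random.seed(0)

def sample_prm(T, rate=1.0):
    # Points of a Poisson random measure on U × [0, T], U = [0, 1], µ1 = Lebesgue:
    # jump times are a rate-µ1(U) Poisson process; marks are iid uniform on U.
    pts, t = [], 0.0
    while True:
        t += random.expovariate(rate)
        if t > T:
            break
        pts.append((random.random(), t))  # (u, s)
    return pts

T, A = 10.0, (0.2, 0.7)  # A ⊂ U with µ1(A) = 0.5, so ξ(A, T) ~ Poisson(µ1(A)·T) = Poisson(5)
counts = []
for _ in range(2000):
    counts.append(sum(1 for u, s in sample_prm(T) if A[0] <= u < A[1]))
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, var)  # both ≈ 5
```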
Stochastic integrals for Poisson random measures

For t_i < r_i, A_i ∈ B(U), and η_i F_{t_i}-measurable, let

	X(u, t) = Σ_i η_i 1_{A_i}(u) 1_{[t_i, r_i)}(t),  so  X(u, t−) = Σ_i η_i 1_{A_i}(u) 1_{(t_i, r_i]}(t).   (7.1)

Define

	I_ξ(X, t) = ∫_{U×[0,t]} X(u, s−) ξ(du × ds) = Σ_i η_i ξ(A_i × (t_i ∧ t, r_i ∧ t]).

Then

	E[|I_ξ(X, t)|] ≤ E[∫_{U×[0,t]} |X(u, s−)| ξ(du × ds)] = ∫_{U×[0,t]} E[|X(u, s)|] µ1(du) ds,

and if the right side is finite, E[I_ξ(X, t)] = ∫_{U×[0,t]} E[X(u, s)] µ1(du) ds.
Stochastic integrals for centered Poisson random measures

Let ξ̃(du × ds) = ξ(du × ds) − µ1(du)ds. For

	X(u, t−) = Σ_i η_i 1_{A_i}(u) 1_{(t_i, r_i]}(t)

as in (7.1), define

	I_ξ̃(X, t) = ∫_{U×[0,t]} X(u, s−) ξ̃(du × ds) = ∫_{U×[0,t]} X(u, s−) ξ(du × ds) − ∫_0^t ∫_U X(u, s) µ1(du) ds,

and note that

	E[I_ξ̃(X, t)²] = ∫_{U×[0,t]} E[X(u, s)²] µ1(du) ds

if the right side is finite. Then I_ξ̃(X, ·) is a square-integrable martingale.
An equation for a pure jump process

Consider a generator of the form

	Af(x) = λ(x) ∫_{R^d} (f(x + y) − f(x)) η(x, dy)

for a process in R^d. There exists a function H : R^d × [0, 1] → R^d such that for V uniform on [0, 1], H(x, V) has distribution η(x, ·).

Let ξ be a Poisson random measure on [0,∞) × [0, 1] × [0,∞) with mean measure ℓ × ℓ × ℓ. Then the solution of

	X(t) = X(0) + ∫_{[0,∞)×[0,1]×[0,t]} 1_{[0,λ(X(s−))]}(v) H(X(s−), u) ξ(dv × du × ds)

is a solution of the martingale problem for A.
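Because a solution of this equation is constant between jumps, it can be simulated exactly: from state x, wait an Exp(λ(x)) time, then jump by H(x, V) with V uniform on [0, 1]. A minimal sketch with a hypothetical rate λ(x) and jump map H of my own choosing:

```python
import random

random.seed(1)

lam = lambda x: 1.0 + 0.5 * abs(x)   # hypothetical jump rate λ(x)

def H(x, v):                          # H(x, ·): inverse CDF of η(x, ·); here ±1 jumps
    return 1.0 if v < 0.6 else -1.0   # +1 with probability 0.6, −1 with probability 0.4

def simulate(x0, T):
    # Between jumps X is constant, so: wait Exp(λ(x)), then jump by H(x, V).
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        t += random.expovariate(lam(x))
        if t > T:
            break
        x += H(x, random.random())
        path.append((t, x))
    return path

path = simulate(0.0, 5.0)
print(len(path), path[-1])
```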
Markov processes in R^d

Typically, a Markov process X in R^d has a generator of the form

	Af(x) = (1/2) Σ_{i,j=1}^d a_{ij}(x) ∂²f(x)/∂x_i∂x_j + b(x)·∇f(x) + ∫_{R^d} (f(x + y) − f(x) − 1_{B_1}(y) y·∇f(x)) η(x, dy),

where B_1 is the ball of radius 1 centered at the origin and η satisfies ∫ 1 ∧ |y|² η(x, dy) < ∞ for each x. (See, for example, Stroock (1975), Cinlar, Jacod, Protter, and Sharpe (1980).)

η(x, Γ) gives the "rate" at which jumps satisfying X(s) − X(s−) ∈ Γ occur.

B_1 can be replaced by any set C containing an open neighborhood of the origin, provided the drift term is replaced by

	b_C(x)·∇f(x) = (b(x) + ∫_{R^d} y (1_C(y) − 1_{B_1}(y)) η(x, dy))·∇f(x).
A representation for η

We will assume that there exist λ : R^d × S → [0, 1], γ : R^d × S → R^d, and a σ-finite measure µ1 on a complete, separable metric space S such that

	η(x, Γ) = ∫_S λ(x, u) 1_Γ(γ(x, u)) µ1(du).

This representation is always possible but in no way unique.
Reformulation of the generator

For simplicity, assume that there exists a fixed set S1 ∈ B(S) such that, for S2 = S − S1,

	∫_S λ(x, u)(1_{S1}(u)|γ(x, u)|² + 1_{S2}(u)|γ(x, u)|) µ1(du) < ∞

and

	∫_S λ(x, u)|γ(x, u)| |1_{S1}(u) − 1_{B_1}(γ(x, u))| µ1(du) < ∞.

Then

	Af(x) = (1/2) Σ_{i,j=1}^d a_{ij}(x) ∂²f(x)/∂x_i∂x_j + b̂(x)·∇f(x) + ∫_S λ(x, u)(f(x + γ(x, u)) − f(x) − 1_{S1}(u) γ(x, u)·∇f(x)) µ1(du),

where

	b̂(x) = b(x) + ∫_S λ(x, u) γ(x, u)(1_{S1}(u) − 1_{B_1}(γ(x, u))) µ1(du).
Technical assumptions

For each compact K ⊂ R^d,

	sup_{x∈K} (|b(x)| + ∫_{S0} |σ(x, u)|² µ0(du) + ∫_{S1} λ(x, u)|γ(x, u)|² µ1(du) + ∫_{S2} λ(x, u)(|γ(x, u)| ∧ 1) µ1(du)) < ∞,

	a(x) = ∫_{S0} σ(x, u) σ(x, u)^T µ0(du).

Let D(A) = C_c²(R^d) and assume that for f ∈ D(A), Af ∈ C_b(R^d). (The continuity assumption can be removed and the boundedness assumption relaxed using existing technology.)

For x outside the support of f,

	Af(x) = ∫_S λ(x, u) f(x + γ(x, u)) µ1(du).
Itô equations

X should satisfy a stochastic differential equation of the form

	X(t) = X(0) + ∫_{S0×[0,t]} σ(X(s), u) W(du × ds) + ∫_0^t b(X(s)) ds   (7.2)
	     + ∫_{[0,1]×S1×[0,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ̃(dv × du × ds)
	     + ∫_{[0,1]×S2×[0,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ(dv × du × ds),

for t < τ∞ ≡ lim_{k→∞} inf{t : |X(t−)| or |X(t)| ≥ k}, where W is Gaussian white noise determined by µ0 and ξ is a Poisson random measure on [0, 1] × S × [0,∞) with mean measure ℓ × µ1 × ℓ.
Martingale properties/problems

Assume that τ∞ = ∞. Applying Itô's formula,

	f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds
	  = ∫_{S0×[0,t]} ∇f(X(s))^T σ(X(s), u) W(du × ds)
	  + ∫_{[0,1]×S×[0,t]} 1_{[0,λ(X(s−),u)]}(v)(f(X(s−) + γ(X(s−), u)) − f(X(s−))) ξ̃(dv × du × ds).

The right side is a local martingale, so under the assumption that Af is bounded, it is a martingale.

Definition 7.2 X is a solution of the martingale problem for A if there exists a filtration {F_t} such that

	f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds   (7.3)

is an {F_t}-martingale for each f ∈ D(A).
Change of measure for Poisson random measures

Let λ be a nonnegative, {F_t}-adapted process, cadlag in t, satisfying

	∫_{S×[0,t]} (λ(u, s) − 1)² ∧ |λ(u, s) − 1| µ1(du) ds < ∞  a.s., t ≥ 0.

Let M_λ(t) = ∫_{S×[0,t]} (λ(u, s−) − 1) ξ̃(du × ds) (M_λ is an {F_t}-martingale), and let L be the solution (which will be a local martingale) of

	L(t) = 1 + ∫_0^t L(s−) dM_λ(s) = 1 + ∫_{S×[0,t]} (λ(u, s−) − 1) L(s−) ξ̃(du × ds).   (7.4)

If ∫_{S×[0,t]} |λ(u, s) − 1| µ1(du) ds < ∞ a.s., t ≥ 0, then

	L(t) = exp{∫_{S×[0,t]} log λ(u, s) ξ(du × ds) − ∫_{S×[0,t]} (λ(u, s) − 1) µ1(du) ds},

and in general,

	L(t) = exp{∫_{S×[0,t]} log λ(u, s) ξ̃(du × ds) + ∫_{S×[0,t]} (log λ(u, s) − λ(u, s) + 1) µ1(du) ds}.
Intensity for the transformed counting measure

If E[L(T)] = 1, then for A ∈ B(S) with µ1(A) < ∞,

	M_A(t) = ∫_{A×[0,t]} λ(u, s) ξ̃(du × ds)

is a local martingale under Q, and since

	[M_A, L]_t = ∫_{A×[0,t]} λ(u, s)(λ(u, s) − 1) L(s−) ξ(du × ds),

	Z_A(t) = ∫_{A×[0,t]} λ(u, s) ξ̃(du × ds) − ∫_{A×[0,t]} (1/L(s)) λ(u, s)(λ(u, s) − 1) L(s−) ξ(du × ds)
	       = ξ(A, t) − ∫_{A×[0,t]} λ(u, s) µ1(du) ds

is a local martingale under dP = L(T)dQ.
Change for stochastic equations

Assuming S = S1 ∪ S2, S1 ∩ S2 = ∅, consider

	Af(x) = ∫_{S1} (f(x + α(x, u)) − f(x) − α(x, u)·∇f(x)) µ1(du) + ∫_{S2} (f(x + α(x, u)) − f(x)) µ1(du).

Let D(A) = C_c²(R^d) and suppose that Af is bounded for f ∈ D(A). Then X satisfying

	X(t) = X(0) + ∫_{S1×[0,t]} α(X(s−), u) ξ̃(du × ds) + ∫_{S2×[0,t]} α(X(s−), u) ξ(du × ds)

is a solution of the martingale problem for A.
New martingale problem

Let λ(u, s) = λ(u, X(s−)). Under Q,

	f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds = ∫_{S×[0,t]} (f(X(s−) + α(X(s−), u)) − f(X(s−))) ξ̃(du × ds)

is a local martingale, so under P,

	f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds − ∫_0^t ∫_S (f(X(s) + α(X(s), u)) − f(X(s)))(λ(u, X(s)) − 1) µ1(du) ds
	  = ∫_{S×[0,t]} (f(X(s−) + α(X(s−), u)) − f(X(s−)))(ξ(du × ds) − λ(u, X(s)) µ1(du) ds)

is a local martingale.
New generator

Under P, X is a solution of the martingale problem for

	A1f(x) = ∫_{S1} (f(x + α(x, u)) − f(x) − α(x, u)·∇f(x)) λ(u, x) µ1(du)
	       + ∫_{S2} (f(x + α(x, u)) − f(x)) λ(u, x) µ1(du)
	       + ∫_{S1} α(x, u)(λ(u, x) − 1) µ1(du) · ∇f(x).
Conditions for uniqueness for general SDE

In Ito (1951), as well as in later presentations (for example, Skorokhod (1965) and Ikeda and Watanabe (1989)), L2-estimates are used to prove uniqueness for (7.2).

Graham (1992) points out the possibility and desirability of using L1-estimates. (In fact, for equations controlling jump rates with factors like 1_{[0,λ(X(t),u)]}(v), L1-estimates are essential.)

Kurtz and Protter (1996) develop methods that allow a mixing of L1, L2, and other estimates.

The uniqueness proof given here uses only L1-estimates.
Theorem 7.3 Suppose there exists a constant M such that

	|b(x)| + ∫_{S0} |σ(x, u)|² µ0(du) + ∫_{S1} |γ(x, u)|² λ(x, u) µ1(du) + ∫_{S2} λ(x, u)|γ(x, u)| µ1(du) < M,   (7.5)

and

	(∫_{S0} |σ(x, u) − σ(y, u)|² µ0(du))^{1/2} ≤ M|x − y|,   (7.6)
	|b(x) − b(y)| ≤ M|x − y|,   (7.7)
	∫_{S1} |γ(x, u) − γ(y, u)|² (λ(x, u) ∧ λ(y, u)) µ1(du) ≤ M|x − y|²,   (7.8)
	∫_{S2} λ(x, u)|γ(x, u) − γ(y, u)| µ1(du) ≤ M|x − y|,   (7.9)
	∫_S |λ(x, u) − λ(y, u)| |γ(y, u)| µ1(du) ≤ M|x − y|.   (7.10)

Then there exists a unique solution of (7.2).
Proof. Suppose X and Y are solutions of (7.2). Then

	X(t) = X(0) + ∫_{S0×[0,t]} σ(X(s), u) W(du × ds) + ∫_0^t b(X(s)) ds   (7.11)
	     + ∫_{[0,∞)×S1×[0,t]} 1_{[0,λ(X(s−),u)∧λ(Y(s−),u)]}(v) γ(X(s−), u) ξ̃(dv × du × ds)
	     + ∫_{[0,∞)×S1×[0,t]} 1_{(λ(X(s−),u)∧λ(Y(s−),u), λ(X(s−),u)]}(v) γ(X(s−), u) ξ̃(dv × du × ds)
	     + ∫_{[0,∞)×S2×[0,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ(dv × du × ds),

and similarly with the roles of X and Y interchanged. Then (7.6) and (7.7) give the necessary Lipschitz conditions for the coefficient functions in the first two integrals on the right, (7.8) gives an L2-Lipschitz condition for the third integral term, and (7.9) and (7.10) give L1-Lipschitz conditions for the fourth and fifth integral terms. Theorem 7.1 of Kurtz and Protter (1996) gives uniqueness.
Extension to unbounded coefficients

Corollary 7.4 Suppose that there exists a function M(r), defined for r > 0, such that (7.5) through (7.10) hold with M replaced by M(|x| ∨ |y|). Then there exists a stopping time τ∞ and a process X(t) defined for t ∈ [0, τ∞) such that (7.2) is satisfied on [0, τ∞) and τ∞ = lim_{k→∞} inf{t : |X(t)| or |X(t−)| ≥ k}. If (Y, τ) also has this property, then τ = τ∞ and Y(t) = X(t), t < τ∞.

Proof. The corollary follows by a standard localization argument.
A martingale inequality

If M is a local square integrable martingale, there exists a nondecreasing predictable process 〈M〉 such that M(t)² − 〈M〉_t is a local martingale. In particular, [M] − 〈M〉 is a local martingale. Recall that a left-continuous, adapted process is predictable.

Lemma 7.5 For 0 < p ≤ 2 there exists a constant C_p such that for any local square integrable martingale M with Meyer process 〈M〉 and any stopping time τ,

	E[sup_{s≤τ} |M(s)|^p] ≤ C_p E[〈M〉_τ^{p/2}].

The discrete-time version is due to Burkholder (1973) and the continuous-time version to Lenglart, Lepingle, and Pratelli (1980). The following proof is due to Ichikawa (1986).
Proof. For p = 2 the result is an immediate consequence of Doob's inequality. Let 0 < p < 2. For x > 0, let σ_x = inf{t : 〈M〉_t > x²}. Since σ_x is predictable, there exists a strictly increasing sequence of stopping times σ_x^n → σ_x. Noting that 〈M〉_{σ_x^n} ≤ x², we have

	P{sup_{s≤τ} |M(s)| > x} ≤ P{σ_x^n < τ} + P{sup_{s≤τ∧σ_x^n} |M(s)| > x}
	  ≤ P{σ_x^n < τ} + E[〈M〉_{τ∧σ_x^n}]/x² ≤ P{σ_x^n < τ} + E[x² ∧ 〈M〉_τ]/x²,

and letting n → ∞, we have

	P{sup_{s≤τ} |M(s)| > x} ≤ P{〈M〉_τ ≥ x²} + E[x² ∧ 〈M〉_τ]/x².   (7.12)

Using the identity ∫_0^∞ E[x² ∧ X²] p x^{p−3} dx = (2/(2−p)) E[|X|^p], the lemma follows by multiplying both sides of (7.12) by p x^{p−1} and integrating.
Computation of the Meyer process

Let ξ be a Poisson random measure on S × [0,∞) with mean measure µ1 × ℓ, and let W be a Gaussian white noise on S0 × [0,∞) determined by µ0 × ℓ. Assume that ξ and W are compatible with a filtration {F_t}.

If X is predictable and ∫_{S×[0,t]} X²(u, s) µ1(du) ds < ∞ a.s., then M(t) = ∫_{S×[0,t]} X(u, s) ξ̃(du × ds) is a local square integrable martingale with

	〈M〉_t = ∫_{S×[0,t]} X²(u, s) µ1(du) ds.

If X is adapted and ∫_0^t ∫_{S0} X(u, s)² µ0(du) ds < ∞ a.s., then M(t) = ∫_{S0×[0,t]} X(u, s) W(du × ds) is a local square integrable martingale with

	〈M〉_t = ∫_0^t ∫_{S0} X(u, s)² µ0(du) ds.
Estimate for solutions of (7.11)

	E[sup_{s≤t} |X(s) − Y(s)|]
	  ≤ E[|X(0) − Y(0)|] + C1 E[(∫_0^t ∫_{S0} |σ(X(s), u) − σ(Y(s), u)|² µ0(du) ds)^{1/2}]
	  + C1 E[(∫_0^t ∫_{S1} |γ(X(s), u) − γ(Y(s), u)|² (λ(X(s), u) ∧ λ(Y(s), u)) µ1(du) ds)^{1/2}]
	  + 2 E[∫_0^t ∫_{S1} |λ(X(s), u) − λ(Y(s), u)| (|γ(X(s), u)| + |γ(Y(s), u)|) µ1(du) ds]
	  + E[∫_0^t ∫_{S2} λ(X(s), u)|γ(X(s), u) − γ(Y(s), u)| µ1(du) ds]
	  + E[∫_0^t ∫_{S2} |λ(X(s), u) − λ(Y(s), u)| |γ(Y(s), u)| µ1(du) ds]
	  + E[∫_0^t |b(X(s)) − b(Y(s))| ds]
	  ≤ E[|X(0) − Y(0)|] + D(√t + t) E[sup_{s≤t} |X(s) − Y(s)|]
For t small enough that D(√t + t) ≤ 1/2,

	E[sup_{s≤t} |X(s) − Y(s)|] ≤ 2 E[|X(0) − Y(0)|].
Uniqueness and the Markov property

If X is a solution of (7.2), then

	X(t) = X(r) + ∫_{S0×[r,t]} σ(X(s), u) W(du × ds) + ∫_r^t b(X(s)) ds   (7.13)
	     + ∫_{[0,1]×S1×[r,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ̃(dv × du × ds)
	     + ∫_{[0,1]×S2×[r,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ(dv × du × ds).

Uniqueness implies X(r) is independent of W(· + r) − W(r) and ξ(A × (r, ·]) and that {X(t), t ≥ r} is determined by X(r), W(· + r) − W(r), and ξ(A × (r, ·]), which gives the Markov property.
8. Filtering
• Bayes and Kallianpur-Striebel formula
• “Particle representations” of conditional distributions
• Continuous time filtering in Gaussian white noise
• Derivation of filtering equations
• Zakai equation
• Kushner-Stratonovich equation
• Point process observations
• Spatial observations with additive white noise
• Uniqueness
Bayes and Kallianpur-Striebel formulas

Let X and Y be random variables defined on (Ω, F, P). Suppose we want to calculate the conditional expectation E^P[f(X)|Y].

If P << Q with dP = L dQ, the Bayes formula says

	E^P[f(X)|Y] = E^Q[f(X)L|Y] / E^Q[L|Y].

If X = h(U, Y), L = L(U, Y), and U and Y are independent under Q, then

	E^P[f(X)|Y] = ∫ f(h(u, Y)) L(u, Y) µ_U(du) / ∫ L(u, Y) µ_U(du),

where µ_U is the distribution of U.

Method: To compute a conditional expectation, find a reference measure under which what we don't know is independent of what we do know.
Monte Carlo integration

Let U1, U2, . . . be iid with distribution µ_U. Then

	E^P[f(X)|Y] = lim_{n→∞} Σ_{i=1}^n f(h(U_i, Y)) L(U_i, Y) / Σ_{i=1}^n L(U_i, Y).

Note that (U1, Y), (U2, Y), . . . is a stationary (in fact exchangeable) sequence. Consequently,

	lim_{n→∞} (1/n) Σ_{i=1}^n f(h(U_i, Y)) L(U_i, Y) = E^Q[f(h(U1, Y)) L(U1, Y) | I]
	  = E^Q[f(h(U1, Y)) L(U1, Y) | Y]
	  = ∫ f(h(u, Y)) L(u, Y) µ_U(du)   a.s. Q
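A minimal numerical sketch of this weighted Monte Carlo scheme, in a hypothetical conjugate case of my own choosing where the answer is known in closed form: U ~ N(0, 1) under Q, observation model Y | U = u ~ N(u, 1), so L(u, y) = exp(uy − u²/2) (the Gaussian likelihood ratio against a reference N(0, 1) observation independent of U), and E^P[U | Y = y] is the posterior mean y/2:

```python
import math
import random

random.seed(2)

y = 1.3                                        # observed value
L = lambda u: math.exp(u * y - 0.5 * u * u)    # dP/dQ as a function of u

n = 200_000
us = [random.gauss(0.0, 1.0) for _ in range(n)]  # iid samples from µ_U = N(0, 1)
num = sum(u * L(u) for u in us)
den = sum(L(u) for u in us)
print(num / den)  # ≈ posterior mean y/2 = 0.65
```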
Continuous time filtering in Gaussian white noise

To understand the intuition behind the standard "observation in Gaussian white noise" filtering model, suppose X is the signal of interest and noisy observations of the form

	O_n(k/n) = h(X(k/n)) (1/n) + (1/√n) ζ_k

are made every n^{-1} time units. For large n, the noise swamps the signal at any one time point.

Assume that the ζ_k are iid with mean zero and variance σ². Then Y_n(t) = Σ_{k=1}^{[nt]} O_n(k/n) is approximately

	Y(t) = ∫_0^t h(X(s)) ds + σW(t).   (8.1)

Note, however, that E[f(X(t))|F_t^{Y_n}] does not necessarily converge to E[f(X(t))|F_t^Y]; we nonetheless take (8.1) as our observation model.
Reference measure

By the Girsanov formula,

	E^P[g(X(t))|F_t^Y] = E^Q[g(X(t))L(t)|F_t^Y] / E^Q[L(t)|F_t^Y],

where under Q, X and Y are independent, Y is a Brownian motion with mean zero and variance σ²t, and

	L(t) = exp{∫_0^t (h(X(s))/σ²) dY(s) − (1/2) ∫_0^t (h²(X(s))/σ²) ds},

that is,

	L(t) = 1 + ∫_0^t (h(X(s))/σ²) L(s) dY(s).

Under Q, σ^{-1}Y is a standard Brownian motion, so under P,

	W(t) = σ^{-1}Y(t) − ∫_0^t σ^{-1} h(X(s)) ds

is a standard Brownian motion.
Monte Carlo solution

Let X1, X2, . . . be iid copies of X that are independent of Y under Q, and let

	L_i(t) = 1 + ∫_0^t (h(X_i(s))/σ²) L_i(s) dY(s).

Note that

	φ(g, t) ≡ E^Q[g(X(t))L(t)|F_t^Y] = E^Q[g(X_i(t))L_i(t)|F_t^Y].

Claim:

	(1/n) Σ_{i=1}^n g(X_i(t)) L_i(t) → E^Q[g(X(t))L(t)|F_t^Y]
Zakai equation

For simplicity, assume σ = 1 and X is a diffusion

	X(t) = X(0) + ∫_0^t σ(X(s)) dB(s) + ∫_0^t b(X(s)) ds,

where under Q, B and Y are independent standard Brownian motions. Since

	g(X(t)) = g(X(0)) + ∫_0^t g′(X(s))σ(X(s)) dB(s) + ∫_0^t Ag(X(s)) ds,

	g(X(t))L(t) = g(X(0)) + ∫_0^t L(s) dg(X(s)) + ∫_0^t g(X(s)) dL(s)
	  = g(X(0)) + ∫_0^t L(s)g′(X(s))σ(X(s)) dB(s) + ∫_0^t L(s)Ag(X(s)) ds + ∫_0^t g(X(s))h(X(s))L(s) dY(s)
Monte Carlo derivation of the Zakai equation

	X_i(t) = X_i(0) + ∫_0^t σ(X_i(s)) dB_i(s) + ∫_0^t b(X_i(s)) ds,

where the (X_i(0), B_i) are iid copies of (X(0), B). Then

	g(X_i(t))L_i(t) = g(X_i(0)) + ∫_0^t L_i(s)g′(X_i(s))σ(X_i(s)) dB_i(s)
	  + ∫_0^t L_i(s)Ag(X_i(s)) ds + ∫_0^t g(X_i(s))h(X_i(s))L_i(s) dY(s),

and hence

	φ(g, t) = φ(g, 0) + ∫_0^t φ(Ag, s) ds + ∫_0^t φ(gh, s) dY(s)
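A discrete-time analogue of this weighted Monte Carlo representation can be tested against an exact filter. The sketch below uses a hypothetical linear-Gaussian model of my own choosing (signal X_{k+1} = aX_k + ε_k, observation Y_k = X_k + δ_k with δ_k ~ N(0, 1)): particles are simulated from the signal law independently of Y, carry likelihood weights L_i, and the weighted average is compared to the Kalman filter mean:

```python
import math
import random

random.seed(5)

a, q = 0.5, 0.1        # hypothetical signal coefficient and signal noise variance
K, n = 5, 50_000       # time steps, particles

# one observed path, simulated from the model (X(0) = 0 known)
x, ys = 0.0, []
for _ in range(K):
    x = a * x + random.gauss(0.0, math.sqrt(q))
    ys.append(x + random.gauss(0.0, 1.0))

# exact Kalman filter for comparison
m, P = 0.0, 0.0
for y in ys:
    m, P = a * m, a * a * P + q               # predict
    G = P / (P + 1.0)                         # gain (observation noise variance 1)
    m, P = m + G * (y - m), (1.0 - G) * P     # update

# weighted particles: simulate the signal independently of Y, weight by the likelihood
est_num = est_den = 0.0
for _ in range(n):
    xi, Li = 0.0, 1.0
    for y in ys:
        xi = a * xi + random.gauss(0.0, math.sqrt(q))
        Li *= math.exp(-0.5 * (y - xi) ** 2)  # ∝ N(y; xi, 1); constants cancel
    est_num += xi * Li
    est_den += Li
print(est_num / est_den, m)  # weighted estimate ≈ Kalman mean
```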
Kushner-Stratonovich equation

	π_t g = E^P[g(X(t))|F_t^Y] = φ(g, t)/φ(1, t)
	= φ(g, 0)/φ(1, 0) + ∫_0^t (1/φ(1, s)) dφ(g, s) − ∫_0^t (φ(g, s)/φ(1, s)²) dφ(1, s)
	  + ∫_0^t (φ(g, s)/φ(1, s)³) d[φ(1, ·)]_s − ∫_0^t (1/φ(1, s)²) d[φ(g, ·), φ(1, ·)]_s
	= π_0 g + ∫_0^t π_s Ag ds + ∫_0^t σ^{-2}(π_s gh − π_s g π_s h) dY(s)
	  + ∫_0^t σ^{-2} π_s g (π_s h)² ds − ∫_0^t σ^{-2} π_s gh π_s h ds
	= π_0 g + ∫_0^t π_s Ag ds + ∫_0^t σ^{-2}(π_s gh − π_s g π_s h)(dY(s) − π_s h ds)
Point process observations

Model: X is adapted to {F_t}, and

	Y(t, Γ) − ∫_0^t ∫_Γ λ(X(s), u) ν(du) ds

is an {F_t}-martingale for all Γ ∈ B(U) with ν(Γ) < ∞. Assume

	sup_x ∫_U |λ(x, u) − 1| ∧ |λ(x, u) − 1|² ν(du) < ∞.

Reference measure construction: Under Q, X is the diffusion given by (8.2) and Y is an independent Poisson random measure with mean measure ν × ℓ. The change of measure is given by (7.4):

	L(t) = 1 + ∫_0^t L(s−) dM_λ(s) = 1 + ∫_{U×[0,t]} (λ(X(s−), u) − 1) L(s−) Ỹ(du × ds).
Zakai equation

	g(X(t))L(t) = g(X(0)) + ∫_0^t g(X(s)) dL(s) + ∫_0^t L(s) dg(X(s))
	= g(X(0)) + ∫_0^t L(s)g′(X(s))σ(X(s)) dB(s) + ∫_0^t L(s)Ag(X(s)) ds
	  + ∫_{U×[0,t]} g(X(s))(λ(X(s), u) − 1) L(s−) Ỹ(du × ds)

Assume that ν(U) < ∞. Setting λ̄(x) = ∫_U λ(x, u) ν(du) and Cg(x) = (λ̄(x) − 1)g(x), the unnormalized conditional distribution φ(g, t) = E^Q[g(X(t))L(t)|F_t^Y] satisfies

	φ(g, t) = φ(g, 0) + ∫_0^t φ((A − C)g, s) ds + ∫_{U×[0,t]} φ((λ(·, u) − 1)g, s−) Y(du × ds).
Solution of the Zakai equation

Let T(t) be the semigroup given by

	T(t)f(x) = E[f(X(t)) e^{−∫_0^t (λ̄(X(s))−1) ds} | X(0) = x].

Suppose Y = Σ_k δ_{(τ_k, u_k)} and 0 < τ1 < · · · < τm < t < τ_{m+1}. Then

	φ(g, t) = φ(T(t − τm)g, τm),
	φ(g, τ_{k+1}−) = φ(T(τ_{k+1} − τ_k)g, τ_k),
	φ(g, τ_{k+1}) = φ(λ(·, u_{k+1})g, τ_{k+1}−) = φ(T(τ_{k+1} − τ_k)(λ(·, u_{k+1})g), τ_k).

If φ(dx, t) = φ(x, t)dx, then

	φ(·, t) = T*(t − τm)φ(·, τm),
	φ(·, τ_{k+1}−) = T*(τ_{k+1} − τ_k)φ(·, τ_k),
	φ(·, τ_{k+1}) = λ(·, u_{k+1})φ(·, τ_{k+1}−) = λ(·, u_{k+1}) T*(τ_{k+1} − τ_k)φ(·, τ_k).
Counting process observations

Model: X is a diffusion

	X(t) = X(0) + ∫_0^t σ(X(s)) dB(s) + ∫_0^t b(X(s)) ds   (8.2)

and

	Y(t) = V(∫_0^t λ(X(s)) ds),

where V is a unit Poisson process independent of B.

Reference measure construction: Under Q, X is the diffusion given by (8.2) and Y is an independent, unit Poisson process. The change of measure is given by (5.3):

	L(t) = 1 + ∫_0^t (λ(X(s)) − 1) L(s−) d(Y(s) − s)
Zakai equation

	g(X(t))L(t) = g(X(0)) + ∫_0^t g(X(s)) dL(s) + ∫_0^t L(s) dg(X(s))
	= g(X(0)) + ∫_0^t L(s)g′(X(s))σ(X(s)) dB(s) + ∫_0^t L(s)Ag(X(s)) ds
	  + ∫_0^t g(X(s))(λ(X(s)) − 1) L(s−) d(Y(s) − s)

The unnormalized conditional distribution φ(g, t) = E^Q[g(X(t))L(t)|F_t^Y] satisfies

	φ(g, t) = φ(g, 0) + ∫_0^t φ((A − C)g, s) ds + ∫_0^t φ(Cg, s−) dY(s),

where Cg(x) = (λ(x) − 1)g(x).
Kushner-Stratonovich equation

For π_t g ≡ φ(g, t)/φ(1, t),

	π_t g = π_0 g + ∫_0^t (π_s Ag − π_s λg + π_s λ π_s g) ds + ∫_0^t ((π_{s−}λg − π_{s−}λ π_{s−}g)/π_{s−}λ) dY(s)
	= π_0 g + ∫_0^t π_s Ag ds + ∫_0^t ((π_{s−}λg − π_{s−}λ π_{s−}g)/π_{s−}λ)(dY(s) − π_s λ ds)
Solution of the Zakai equation

Let T(t) be the semigroup given by

	T(t)f(x) = E[f(X(t)) e^{−∫_0^t (λ(X(s))−1) ds} | X(0) = x].

Suppose the jump times of Y satisfy 0 < τ1 < · · · < τm < t < τ_{m+1}. Then

	φ(g, t) = φ(T(t − τm)g, τm),
	φ(g, τ_{k+1}−) = φ(T(τ_{k+1} − τ_k)g, τ_k),
	φ(g, τ_{k+1}) = φ(λg, τ_{k+1}−) = φ(T(τ_{k+1} − τ_k)(λg), τ_k).

If φ(dx, t) = φ(x, t)dx, then

	φ(·, t) = T*(t − τm)φ(·, τm),
	φ(·, τ_{k+1}−) = T*(τ_{k+1} − τ_k)φ(·, τ_k),
	φ(·, τ_{k+1}) = λ(·)φ(·, τ_{k+1}−) = λ(·) T*(τ_{k+1} − τ_k)φ(·, τ_k).
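This propagate-then-multiply recursion becomes finite-dimensional when the signal is a finite-state Markov chain instead of a diffusion. A minimal sketch of that analogue (hypothetical two-state signal with generator matrix Q and rates λ_i of my own choosing): between observed jumps the unnormalized weights solve dφ/dt = (Qᵀ − diag(λ − 1))φ, and at each jump they are multiplied by λ. With a frozen signal (Q = 0) the recursion has the closed form φ_i(T) = π_i λ_i^N e^{−(λ_i − 1)T}, which the test checks:

```python
import math

def propagate(Q, lam, t, v, n=2000):
    # Euler steps for dφ/dt = (Qᵀ − diag(λ − 1)) φ, i.e. the action of T*(t)
    h = t / n
    for _ in range(n):
        v = [v[i] + h * (sum(Q[j][i] * v[j] for j in range(len(v)))
                         - (lam[i] - 1.0) * v[i])
             for i in range(len(v))]
    return v

def zakai(Q, lam, phi0, jump_times, T):
    phi, last = list(phi0), 0.0
    for tau in jump_times:
        phi = propagate(Q, lam, tau - last, phi)
        phi = [lam[i] * phi[i] for i in range(len(phi))]  # multiply by λ at a jump
        last = tau
    return propagate(Q, lam, T - last, phi)

Q0 = [[0.0, 0.0], [0.0, 0.0]]       # frozen signal: closed form available
lam = [0.5, 2.0]
phi = zakai(Q0, lam, [0.5, 0.5], [0.3, 1.1, 1.4], 2.0)
exact = [0.5 * l ** 3 * math.exp(-(l - 1.0) * 2.0) for l in lam]
print(phi, exact)
```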
Spatial observations with additive white noise

Signal:

	X(t) = X(0) + ∫_0^t σ(X(s)) dB(s) + ∫_0^t b(X(s)) ds + ∫_{S0×[0,t]} α(X(s), u) W(du × ds)
	= X(0) + ∫_0^t σ(X(s)) dB(s) + ∫_{S0×[0,t]} α(X(s), u) Y(du × ds)
	  + ∫_0^t (b(X(s)) − ∫_{S0} α(X(s), u) h(X(s), u) µ0(du)) ds

Observation:

	Y(A, t) = ∫_0^t ∫_A h(X(s), u) µ0(du) ds + W(A, t).

Under Q, Y is Gaussian white noise on S0 × [0,∞) with

	E[Y(A, t)Y(B, s)] = µ0(A ∩ B) t ∧ s,

and dP|_{F_t} = L(t) dQ|_{F_t}, where

	L(t) = 1 + ∫_{S0×[0,t]} L(s) h(X(s), u) Y(du × ds)
Apply Itô's formula

Under P, X is a diffusion with generator

	Af(x) = (1/2) Σ a_{ij}(x) ∂_i∂_j f(x) + Σ b_i(x) ∂_i f(x),

where

	a(x) = σ(x)σ(x)^T + ∫_{S0} α(x, u) α(x, u)^T µ0(du).

Then

	f(X(t))L(t) = f(X(0)) + ∫_0^t L(s)∇f(X(s))^T σ(X(s)) dB(s)
	  + ∫_{S0×[0,t]} L(s)(∇f(X(s))·α(X(s), u) + f(X(s))h(X(s), u)) Y(du × ds)
	  + ∫_0^t L(s)Af(X(s)) ds
Particle representation

Let the B_i be independent standard Brownian motions, independent of Y, on (Ω, F, P0), and let

	X_i(t) = X_i(0) + ∫_0^t σ(X_i(s)) dB_i(s) + ∫_{S0×[0,t]} α(X_i(s−), u) Y(du × ds)
	  + ∫_0^t (b(X_i(s)) − ∫_{S0} α(X_i(s), u) h(X_i(s), u) µ0(du)) ds,

	L_i(t) = 1 + ∫_{S0×[0,t]} L_i(s) h(X_i(s), u) Y(du × ds).

Then

	φ(f, t) = E^{P0}[f(X(t))L(t)|F_t^Y] = lim_{n→∞} (1/n) Σ_{i=1}^n f(X_i(t)) L_i(t)
Zakai equation

Since

	f(X_i(t))L_i(t) = f(X_i(0)) + ∫_0^t L_i(s)∇f(X_i(s))^T σ(X_i(s)) dB_i(s)
	  + ∫_{S0×[0,t]} L_i(s)(∇f(X_i(s))·α(X_i(s), u) + f(X_i(s))h(X_i(s), u)) Y(du × ds)
	  + ∫_0^t L_i(s)Af(X_i(s)) ds,

	φ(f, t) = φ(f, 0) + ∫_0^t φ(Af, s) ds + ∫_{S0×[0,t]} φ(∇f·α(·, u) + f h(·, u), s) Y(du × ds)
Kushner-Stratonovich equation

It follows that

	π_t f = φ(f, t)/φ(1, t)
	= π_0 f + ∫_0^t π_s Af ds
	  + ∫_{S0×[0,t]} (π_s(∇f·α(·, u) + f h(·, u)) − π_s f π_s h(·, u)) Y(du × ds)
	  + ∫_0^t ∫_{S0} (π_s f π_s h(·, u) − π_s(∇f·α(·, u) + f h(·, u))) π_s h(·, u) µ0(du) ds
	= π_0 f + ∫_0^t π_s Af ds
	  + ∫_{S0×[0,t]} (π_s(∇f·α(·, u) + f h(·, u)) − π_s f π_s h(·, u)) Ỹ(du × ds),

where

	Ỹ(A, t) = Y(A, t) − ∫_0^t ∫_A π_s h(·, u) µ0(du) ds.
Uniqueness for the Kushner-Stratonovich equation

Each of the Kushner-Stratonovich equations is of the form

	π_t g = π_0 g + ∫_0^t π_s Ag ds + ∫_0^t G(π_{s−}) d(Y(s) − π_s h ds).

The stochastic integrator would be a martingale if π were the conditional distribution. Suppose that is not known.

In each case, under some restrictions on h and π, we can do a change of measure to make Y(t) − ∫_0^t π_s h ds a martingale. Under the "new" measure, (Y, π) is a solution of the corresponding filtered martingale problem.

By uniqueness for the filtered martingale problem, π_t = H(t, Y) for some appropriately measurable transformation. But this transformation doesn't depend on the change of measure, so π was already the conditional distribution process.
9. Time change equations
• Stochastic equations for Markov chains
• Diffusion limits??
• General multiple time-change equations
• Compatibility for multiple time-changes
• Martingale problem for the time-change equation
• Uniqueness question
Stochastic equations for Markov chains

Specify a continuous-time Markov chain by specifying the intensities of its possible jumps:

	P{X(t + ∆t) = X(t) + ζ_k | F_t^X} ≈ β_k(X(t))∆t.

Given the intensities, the Markov chain satisfies

	X(t) = X(0) + Σ_k Y_k(∫_0^t β_k(X(s)) ds) ζ_k,

where the Y_k are independent unit Poisson processes. (Assume that there are only finitely many ζ_k.)
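The time-change equation can be simulated directly by tracking the internal clock τ_k(t) = ∫_0^t β_k(X(s)) ds of each unit Poisson process Y_k and advancing real time until some clock reaches its next internal jump (the "next reaction" scheme). A minimal sketch for a hypothetical birth-death chain (ζ = ±1, rates of my own choosing):

```python
import random

random.seed(3)

zeta = [+1, -1]
beta = [lambda x: 2.0,        # birth rate (hypothetical)
        lambda x: 0.5 * x]    # death rate (hypothetical)

def simulate(x0, T):
    x, t = x0, 0.0
    tau = [0.0, 0.0]                               # internal times τ_k(t)
    nxt = [random.expovariate(1.0) for _ in zeta]  # next internal jump of each Y_k
    path = [(0.0, x0)]
    while True:
        rates = [b(x) for b in beta]
        # real time needed for clock k to reach its next internal jump
        waits = [(nxt[k] - tau[k]) / rates[k] if rates[k] > 0 else float("inf")
                 for k in range(len(zeta))]
        k = min(range(len(zeta)), key=lambda i: waits[i])
        if t + waits[k] > T:
            break
        t += waits[k]
        for i in range(len(zeta)):                 # all internal clocks advance
            if rates[i] > 0:
                tau[i] += rates[i] * waits[k]
        nxt[k] += random.expovariate(1.0)
        x += zeta[k]
        path.append((t, x))
    return path

path = simulate(0, 10.0)
print(len(path), path[-1])
```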
Diffusion limits??

Possible scaling limits of the form

	X_n(t) = X_n(0) + (1/n) Σ_k Ỹ_k(n² ∫_0^t β_k^n(X_n(s)) ds) ζ_k + ∫_0^t F_n(X_n(s)) ds,

where Ỹ_k(u) = Y_k(u) − u and F_n(x) = Σ_k n β_k^n(x) ζ_k.

Note that (1/n) Ỹ_k(n² ·) ≈ W_k.

Assuming X_n(0) → X(0), β_k^n → β_k, and F_n → F, we might expect a limit satisfying

	X(t) = X(0) + Σ_k W_k(∫_0^t β_k(X(s)) ds) ζ_k + ∫_0^t F(X(s)) ds.

Kurtz (1980a); Ethier and Kurtz (1986), Section 6.5.
General multiple time-change equations

Let Y1, . . . , Ym be independent Markov processes, Y_k with generator A_k ⊂ C_b(E_k) × C_b(E_k). Consider the system

	X_k(t) = Y_k(∫_0^t β_k(X(s)) ds),

where X = (X1, . . . , Xm) and the β_k are continuous.
Compatibility

Definition 9.1 A process X in D_{E1}[0,∞) is temporally compatible with Y in D_{E2}[0,∞) if, for each t ≥ 0 and h ∈ B(D_{E2}[0,∞)),

	E[h(Y)|F_t^{X,Y}] = E[h(Y)|F_t^Y]  a.s.   (9.1)

(cf. (4.5) of Jacod (1980).)

If Y has independent increments, then X is (temporally) compatible with Y if Y(t + ·) − Y(t) is independent of F_t^{X,Y} for all t ≥ 0.

If X is {F_t^Y}-adapted, then X is compatible with Y.
Compatibility for time-change equations

Let Y1, . . . , Ym be independent, cadlag Markov processes, Y_k with state space E_k and generator A_k. Suppose E = E1 × · · · × Em, X = (X1, . . . , Xm), and X satisfies

	X_k(t) = Y_k(∫_0^t β_k(X(s)) ds).

Set τ_k(t) = ∫_0^t β_k(X(s)) ds, and for α ∈ [0,∞)^m, define

	F_α^Y = σ(Y_k(s_k) : s_k ≤ α_k, k = 1, 2, . . .) ∨ σ(X(0))

and

	F_α^τ = σ({τ_1(t) ≤ s_1, τ_2(t) ≤ s_2, . . .} : s_i ≤ α_i, i = 1, 2, . . . , t ≥ 0).

Then X is compatible with Y if for each h ∈ B(D_E[0,∞)),

	E[h(Y)|F_α^τ ∨ F_α^Y] = E[h(Y)|F_α^Y].   (9.2)
Existence of compatible solutions

Construct a compatible approximation

	X_k^n(t) = Y_k(∫_0^t β_k(X^n([ns]/n)) ds)

(essentially an Euler approximation). Then

	τ_k^n(t) = ∫_0^t β_k(X^n([ns]/n)) ds

and

	F_α^{τ^n} = σ({τ_1^n(t) ≤ s_1, τ_2^n(t) ≤ s_2, . . .} : s_i ≤ α_i, i = 1, 2, . . . , t ≥ 0) ⊂ F_α^Y.

Let C_α^1 = {g(· ∧ α_1, . . . , · ∧ α_m) : g ∈ C_R([0,∞)^m)} and C_α^2 = {g(y_1(· ∧ α_1), . . . , y_m(· ∧ α_m)) : g ∈ C(∏_{k=1}^m D_{E_k}[0,∞))}.

Lemma 9.2 Suppose A_k ⊂ C_b(E_k) × C_b(E_k), Y_k has no fixed points of discontinuity, and the β_k are bounded and continuous. Then {X^n} is relatively compact. Pick a subsequence along which (X^n, Y) ⇒ (X, Y). Then, setting τ_k(t) = ∫_0^t β_k(X(s)) ds, τ is compatible with Y.
Proof. For f ∈ C_b(∏ D_{E_k}[0,∞)), g_1 ∈ C_α^1, and g_2 ∈ C_α^2,

	E[f(Y)g_1(τ^n)g_2(Y)] = E[E[f(Y)|F_α^Y] g_1(τ^n) g_2(Y)].

For each α and ε > 0 there exists f_{α,ε} ∈ C_b(∏ D_{E_k}[0,∞)) such that

	E[|E[f(Y)|F_α^Y] − f_{α,ε}(Y)|] ≤ ε.

Consequently,

	lim_{n→∞} E[f(Y)g_1(τ^n)g_2(Y)] = E[f(Y)g_1(τ)g_2(Y)]
	  = lim_{n→∞} E[E[f(Y)|F_α^Y]g_1(τ^n)g_2(Y)]
	  = lim_{ε→0} lim_{n→∞} E[f_{α,ε}(Y)g_1(τ^n)g_2(Y)]
	  = E[E[f(Y)|F_α^Y]g_1(τ)g_2(Y)],

verifying compatibility.
Martingales

Assume (1, 0) ∈ A_k, k = 1, . . . , m. If f_k ∈ D(A_k) and inf_{x∈E_k} f_k(x) > 0, then

	M_f(α) = ∏_{k=1}^m f_k(Y_k(α_k)) exp{−∫_0^{α_k} (A_k f_k(Y_k(s))/f_k(Y_k(s))) ds}

is an {F_α^Y}-martingale. If the solution X is compatible with Y, then M_f is a {G_α} ≡ {F_α^Y ∨ F_α^X}-martingale.

By the definition of F_α^X, τ(t) = (τ_1(t), . . . , τ_m(t)) is an {F_α^Y ∨ F_α^X}-stopping time. For simplicity, assume that the β_k are bounded. Then, by the optional sampling theorem,

	M_f(τ(t)) = ∏_{k=1}^m f_k(Y_k(τ_k(t))) exp{−∫_0^{τ_k(t)} (A_k f_k(Y_k(s))/f_k(Y_k(s))) ds}
	  = ∏_{k=1}^m f_k(X_k(t)) exp{−∫_0^t (β_k(X(s)) A_k f_k(X_k(s))/f_k(X_k(s))) ds}

is a {G_{τ(t)}}-martingale.
Martingale problem for the solution

Defining f(x) = ∏ f_k(x_k) and

	Bf(x) = Σ_{k=1}^m β_k(x) ∏_{l≠k} f_l(x_l) A_k f_k(x_k),

we have

	M_f(τ(t)) = f(X(t)) exp{−∫_0^t (Bf(X(s))/f(X(s))) ds},

and it follows that X is a solution of the martingale problem for B.
Uniqueness question

	X(t) = X(0) + Σ_k W_k(∫_0^t β_k(X(s)) ds) ζ_k + ∫_0^t F(X(s)) ds

Let τ_k(t) = ∫_0^t β_k(X(s)) ds and γ(t) = ∫_0^t F(X(s)) ds. Then

	τ̇_l(t) = β_l(X(0) + Σ_k W_k(τ_k(t)) ζ_k + γ(t)),
	γ̇(t) = F(X(0) + Σ_k W_k(τ_k(t)) ζ_k + γ(t)).

Problem: Find conditions under which pathwise uniqueness holds.
Compatibility for multiple time-changes
X(t) = X(0) + ∑_{k=1}^m W_k(∫_0^t β_k(X(s)) ds) ζ_k + ∫_0^t F(X(s)) ds

Set τ_k(t) = ∫_0^t β_k(X(s)) ds, and for α ∈ [0,∞)^m, define

F^W_α = σ(W_k(s_k) : s_k ≤ α_k, k = 1, 2, . . .) ∨ σ(X(0))

and

F^τ_α = σ({τ_1(t) ≤ s_1, τ_2(t) ≤ s_2, . . .} : s_i ≤ α_i, i = 1, 2, . . . , t ≥ 0).

If τ is compatible with W, then the W_k(∫_0^t β_k(X(s)) ds), k = 1, . . . , m, are martingales with respect to the same filtration, and hence X is a solution of the martingale problem for

Af(x) = (1/2) ∑_{i,j} a_{ij}(x) ∂_i ∂_j f(x) + F(x) · ∇f(x),   a(x) = ∑_k β_k(x) ζ_k ζ_k^T.
Two-dimensional case
For i = 1, 2, let W_i be standard Brownian motions and β_i : R² → (0,∞) bounded.

X_1(t) = W_1(∫_0^t β_1(X(s)) ds)   X_2(t) = W_2(∫_0^t β_2(X(s)) ds)

or, equivalently,

τ̇_i(t) = β_i(W_1(τ_1(t)), W_2(τ_2(t))), i = 1, 2.

A strong, compatible solution exists, and weak uniqueness holds by Stroock–Varadhan, so pathwise uniqueness holds.
10. Bits and pieces
• Equivalence of martingale problems and SDEs
Kurtz (2011)
• Cluster detection
• Signal in noise with point process observations
• Filtering equations
• Computable cases
• Analysis of earthquake data
Wu (2009)
Equivalence of SDE and MGP for diffusions
X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds,   (10.1)

By Ito's formula,

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds = ∫_0^t ∇f(X(s))^T σ(X(s)) dW(s)

for

Af(x) = (1/2) ∑_{ij} a_{ij}(x) ∂²f(x)/∂x_i∂x_j + ∑_i b_i(x) ∂f(x)/∂x_i,

where ((a_{ij})) = σσ^T.

If X is a solution of the MGP for A and σ is invertible, then

W(t) = ∫_0^t σ^{−1}(X(s)) dX(s) − ∫_0^t σ^{−1}(X(s)) b(X(s)) ds

is a standard Brownian motion and (10.1) is satisfied.
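The correspondence can be checked numerically. The following sketch (not from the notes; the scalar coefficients σ(x) = 1 + 0.5 sin x and b(x) = −x are assumptions chosen for illustration) simulates (10.1) by an Euler scheme and then recovers the driving Brownian motion from X using the formula above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed scalar coefficients for illustration only:
sigma = lambda x: 1.0 + 0.5 * np.sin(x)
b = lambda x: -x

T, n = 1.0, 20000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)

# Euler scheme for X(t) = X(0) + int sigma(X) dW + int b(X) ds
X = np.empty(n + 1)
X[0] = 0.3
for i in range(n):
    X[i + 1] = X[i] + sigma(X[i]) * dW[i] + b(X[i]) * dt

# Recover W(t) = int sigma(X)^{-1} dX - int sigma(X)^{-1} b(X) ds
dX = np.diff(X)
W_rec = np.cumsum(dX / sigma(X[:-1]) - b(X[:-1]) / sigma(X[:-1]) * dt)
W_true = np.cumsum(dW)

print(np.max(np.abs(W_rec - W_true)))  # zero up to floating-point rounding
```

For the discretized path the recovery is exact up to rounding, since each Euler increment of X is inverted term by term.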
A stochastic equation and a martingale problem
Assume W is one-dimensional, λ(x, u) ≤ 1, and ξ is a Poisson random measure on [0, 1]² × [0,∞) with Lebesgue mean measure. Suppose X satisfies

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds + ∫ 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ(du× dv × ds).

Then, setting a(x) = σ(x)², X will be a solution of the martingale problem for

Af(x) = (1/2) a(x) f′′(x) + b(x) f′(x) + ∫_0^1 λ(x, u)(f(x+ γ(x, u))− f(x)) du.
Augmented process

Let ξ be given by

ξ(C × [0, t]) = ∑_{i=0}^{N(t)−1} 1_C(V_i),   C ∈ B([0, 1]²),

where N is a unit Poisson process and V_0, V_1, . . . are iid and uniformly distributed over [0, 1]².

Define

Y(t) = Y(0) + W(t) mod 1
Z(t) = V_{N(t)}
Martingale problem for (X, Y, Z)
Then (X(t), Y(t), Z(t)) is a solution of the martingale problem for the operator B defined as follows: D(B) consists of functions of the form F(x, y, z) = f(x)g(y)h(z), f ∈ C²_c(R), g ∈ C²[0, 1] with g(0) = g(1), g′(0) = g′(1), g′′(0) = g′′(1), and h ∈ C([0, 1]²). Then

BF(x, y, z) = g(y)h(z)((1/2) a(x)f′′(x) + b(x)f′(x)) + σ(x)h(z)f′(x)g′(y) + (1/2) f(x)h(z)g′′(y)
 + g(y)(∫_{[0,1]²} h(u) υ(du) f(x+ 1_{[0,λ(x,z_1)]}(z_2) γ(x, z_1))− h(z)f(x)),

where υ is the uniform distribution over [0, 1]².
Application of Markov mapping theorem
Let α(x, dy × dz) = dy υ(dz). Setting ḡ = ∫_0^1 g(y) dy and h̄ = ∫_{[0,1]²} h(z) υ(dz),

αBF(x) = ḡ h̄ Af(x).

By the Markov mapping theorem, every solution of the martingale problem for A comes from a solution of the martingale problem for B; moreover, W can be recovered from Y and ξ can be recovered from Z.
Cluster detection: Possible applications
• Internet packets that form a malicious attack on a computer sys-tem.
• Financial transactions that form a collusive trading scheme.
• Earthquakes that form a single seismic event.
The model
The observations form a marked point process O with marks in E:

O(A, t) = N(A, t) + C(A, t)

with

N(A, t) = ∫_{A×[0,∞)×[0,t]} 1_{[0,γ(u)]}(v) ξ_1(du× dv × ds)

C(A, t) = ∫_{A×[0,∞)×[0,t]} 1_{[0,λ(u,η_{s−})]}(v) ξ_2(du× dv × ds),

where ξ_1 and ξ_2 are independent Poisson random measures on E × [0,∞)× [0,∞) with mean measure ν × ℓ× ℓ, ℓ denoting Lebesgue measure, and

η_t(A× [0, r]) = ∫_{A×[0,t]} 1_A(u) 1_{[0,r]}(s) C(du× ds).
Radon-Nikodym derivative
Lemma 10.1 On (Ω,F, Q), let N and C be independent Poisson random measures with mean measures ν_0(du× ds) = γ(u)ν(du)ds and ν_1(du× ds) = λ(u)ν(du)ds, respectively, that are compatible with {Ft}. Let L satisfy

L(t) = 1 + ∫_{E×[0,t]} (λ(u, η_{s−})/λ(u)− 1) L(s−) (C(du× ds)− λ(u)ν(du)ds),   (10.2)

and assume that L is an {Ft}-martingale.

Define dP|_{Ft} = L(t) dQ|_{Ft}. Under P, for all A such that ∫_0^t ∫_A λ(u, η_s) ν(du) ds <∞, t > 0,

C(A, t)− ∫_{A×[0,t]} λ(u, η_s) ν(du) ds

is a local martingale, and N is independent of C and is a Poisson random measure with mean measure ν_0.
The general filtering equations
Theorem 10.2

φ(f, t) = φ(f, 0)− ∫_{E×[0,t]} φ(f(·)(λ(u, ·)− λ(u)), s) ν(du) ds
 + ∫_{E×[0,t]} φ(f(·+ δ_{(u,s)}) λ(u, ·)/λ(u)− f(·), s−) [λ(u)/(λ(u) + γ(u))] O(du× ds)

and

π_t f = π_0 f + ∫_{E×[0,t]} [π_{s−}(f(·+ δ_{(u,s)}) λ(u, ·))− π_{s−}λ(u, ·) π_{s−}f] / [π_{s−}λ(u, ·) + γ(u)] O(du× ds)
 − ∫_{E×[0,t]} (π_s(f(·)λ(u, ·))− π_s f π_s λ(u, ·)) ν(du) ds
Simplify

Problem: the difficulty of computing the conditional distribution: 2^{O(E,t)} possible states.

Need to compromise: compute π_t f = E^P[f(η_t)|F_t] for a "small" collection of f.

Suppose one observes u_i at time τ_i, and set y_i = (u_i, τ_i).

θ(y_i)(·) = 1_{{y_i is a point in the cluster}}
θ_0(y_i)(·) = 1_{{y_i is the latest point in the cluster}}

Need to be able to evaluate

π_t λ(u, ·)
A Markov scenario
Consider

λ(u, η_t) = ∑_{i=1}^{O(E,t)} λ(u, y_i) θ_0(y_i) + ε(u),

where θ_0(y_i) = 1_{{y_i is the latest point in the cluster}}. Get a closed system for the π_t θ_0(y_i).

Let θ(y)(·) = 1_{{y is a point in the cluster}}. Get a closed system for

π_t θ_0(y_i), π_t θ(y_i), π_t θ(y_i)θ_0(y_j).
Earthquake declustering
• Earthquakes often occur in clusters.

• A large number of earthquakes strike without any foreshocks or aftershocks.

• Dataset: the earthquakes in the period 1926–1995, in the rectangular area 34–39°N and 131–140°E, with magnitudes greater than 4.0 and depths less than 100 km.

Zhuang, J., Ogata, Y., and Vere-Jones, D. (2002). Stochastic Declustering of Space-Time Earthquake Occurrences. J. Amer. Statist. Assoc., 97, 369–380.
Model for seismic events producing clusters
• D = 1: a cluster is active; D = 0: no active cluster.

• The cluster terminates at each quake in the cluster with probability p.

• A new cluster is initiated with probability ε. (Only one cluster is active at a time.)

• λ(u, D_t, η_t) = 1_{{D_t=1}} ∑_{i=1}^k λ(u, y_i) θ_0(y_i) + 1_{{D_t=0}} ε.

• ν is assumed to be the uniform measure on the rectangular region.

• λ(u, y_i) is proportional to a 2-D normal density:

λ(u, y_i) = λ exp(−‖u− u_i‖²/2d)/(2πd)

• γ(du) = γ du.
Figure 1: time-space plot of all earthquakes (3771)
Figure 2: 1500 clustered earthquakes: (a) under ETAS model; (b) under the “mother quake” model
Basics of stochastic processes
• Filtrations
• Conditional expectations
• Stopping times
• Martingales
• Optional sampling theorem
• Doob’s inequalities
• Stochastic integrals
• Local martingales
• Semimartingales
• Computing quadratic variations
• Covariation
• Ito’s formula
Conventions and caveats
In these lectures, state spaces will always be complete, separable metric spaces (sometimes called Polish spaces), usually denoted (E, r).
All probability spaces are complete.
All identities involving conditional expectations (or conditional prob-abilities) only hold almost surely (even when I don’t say so).
If the filtration Ft involved is obvious, I will say adapted, ratherthan Ft-adapted, stopping time, rather than Ft-stopping time,etc.
All processes are cadlag (right continuous with left limits at each t >0), unless otherwise noted.
A process is real-valued if that is the only way the formula makessense.
Assumed background
A general knowledge of stochastic processes and martingales in con-tinuous time and of stochastic integration will be assumed. Thisbackground is summarized in the next several slides. I will be morethan happy to answer questions on any of this material. More detailcan be found in the following:
The first five chapters of the notes Lectures on Stochastic Analysis whichhas the advantage of being free.
http://www.math.wisc.edu/˜kurtz/m735.htm
Chapter 2 of Ethier and Kurtz, Markov Processes: Characterization andConvergence
The first two chapters of Protter, Stochastic Integration and DifferentialEquations, Second Edition
Filtrations
(Ω,F , P ) a probability space
Available information is modeled by a sub-σ-algebra of F
Ft is the σ-algebra representing the information available at time t
{Ft} is a filtration, that is, t < s implies Ft ⊂ Fs.

{Ft} is complete if F0 contains all subsets of sets of probability zero.
A stochastic process X is adapted to Ft if X(t) is Ft-measurable foreach t ≥ 0.
An E-valued stochastic process X adapted to Ft is Ft-Markov if
E[f(X(t+ r))|Ft] = E[f(X(t+ r))|X(t)], t, r ≥ 0, f ∈ B(E)
Conditional expectations

The conditional expectation E[X|D] is the best estimate of a random variable X using the information in D. If the information in D is the information obtained by observing Y and Z (denoted D = σ(Y, Z)), then there is a function f_X such that

E[X|σ(Y, Z)] = f_X(Y, Z).

If E[X²] <∞, then f_X is the minimizer of E[(X − f(Y, Z))²],

E[(X − f_X(Y, Z))²] = inf_f E[(X − f(Y, Z))²].

In general, if E[|X|] <∞, then E[X|D] is the D-measurable random variable Z (a random variable whose value is determined by the information in D) satisfying

E[X 1_D] = E[Z 1_D] for all D ∈ D.
Measurability for stochastic processes
A stochastic process is an indexed family of random variables, but if the index set is [0,∞), then we may want to know more about X(t, ω) than that it is a measurable function of ω for each t. For example, for an R-valued process X, when are

∫_a^b X(s, ω) ds and X(τ(ω), ω)

random variables?

X is measurable if (t, ω) ∈ [0,∞)× Ω→ X(t, ω) ∈ E is B([0,∞))×F-measurable.

Lemma B.1 If X is measurable and ∫_a^b |X(s, ω)| ds <∞, then ∫_a^b X(s, ω) ds is a random variable.

If, in addition, τ is a nonnegative random variable, then X(τ(ω), ω) is a random variable.
Proof. The first part is a standard result for measurable functionson a product space. Verify the result for X(s, ω) = 1A(s)1B(ω), A ∈B[0,∞), B ∈ F and apply the Dynkin class theorem to extend theresult to 1C , C ∈ B[0,∞)×F .
If τ is a nonnegative random variable, then ω ∈ Ω→ (τ(ω), ω) ∈ [0,∞)× Ω is measurable. Consequently, X(τ(ω), ω) is the composition of two measurable functions.
Measurability continued
A stochastic process X is Ft-adapted if for all t ≥ 0, X(t) is Ft-measurable.
If X is measurable and adapted, the restriction of X to [0, t]× Ω is B[0, t]×F-measurable, but it may not be B[0, t]×Ft-measurable.

X is progressive if for each t ≥ 0, (s, ω) ∈ [0, t]× Ω→ X(s, ω) ∈ E is B[0, t]×Ft-measurable. (Note that every cadlag and adapted process is progressive.)
Let

W = {A ∈ B[0,∞)×F : A ∩ ([0, t]× Ω) ∈ B[0, t]×Ft, t ≥ 0}.

Then W is a σ-algebra, and X is progressive if and only if (s, ω)→ X(s, ω) is W-measurable.
Since pointwise limits of measurable functions are measurable, pointwise limits of progressive processes are progressive.
Stopping times
Let {Ft} be a filtration. τ is an Ft-stopping time if and only if {τ ≤ t} ∈ Ft for each t ≥ 0.

If τ is a stopping time, the information available at time τ is represented by the σ-algebra Fτ ≡ {A ∈ F : A ∩ {τ ≤ t} ∈ Ft, t ≥ 0}.

The definition of Fτ may look strange, but it has all the right properties. For example:

If τ_1 and τ_2 are stopping times with τ_1 ≤ τ_2, then F_{τ_1} ⊂ F_{τ_2}.

If τ_1 and τ_2 are stopping times, then τ_1 and τ_1 ∧ τ_2 are F_{τ_1}-measurable.
A process observed at a stopping time
If X is measurable and τ is a stopping time, then X(τ(ω), ω) is a ran-dom variable.
Lemma B.2 If τ is a stopping time and X is progressive, then X(τ) is Fτ -measurable.
Proof. ω ∈ Ω→ (τ(ω) ∧ t, ω) ∈ [0, t]× Ω is measurable as a mappingfrom (Ω,Ft) to ([0, t]× Ω,B[0, t]×Ft). Consequently,
ω → X(τ(ω) ∧ t, ω)
is Ft-measurable, and
{X(τ) ∈ A} ∩ {τ ≤ t} = {X(τ ∧ t) ∈ A} ∩ {τ ≤ t} ∈ Ft.
Right continuous processes
Most of the processes you know are either continuous (e.g., Brownian motion) or right continuous (e.g., Poisson process).
Lemma B.3 If X is right continuous and adapted, then X is progressive.
Proof. If X is adapted, then

(s, ω) ∈ [0, t]× Ω→ Y_n(s, ω) ≡ X( ([ns] + 1)/n ∧ t, ω ) = ∑_k X( (k+1)/n ∧ t, ω ) 1_{[k/n,(k+1)/n)}(s)

is B[0, t]×Ft-measurable. By the right continuity of X, Y_n(s, ω)→ X(s, ω) on [0, t]× Ω, so (s, ω) ∈ [0, t]× Ω→ X(s, ω) is B[0, t]×Ft-measurable and X is progressive.
Examples and properties
Define Ft+ ≡ ∩_{s>t} Fs. {Ft} is right continuous if Ft = Ft+ for all t ≥ 0. If {Ft} is right continuous, then τ is a stopping time if and only if {τ < t} ∈ Ft for all t > 0.

Let X be cadlag and adapted. If K ⊂ E is closed, τ^h_K = inf{t : X(t) or X(t−) ∈ K} is a stopping time, but inf{t : X(t) ∈ K} may not be; however, if {Ft} is right continuous and complete, then for any B ∈ B(E), τ_B = inf{t : X(t) ∈ B} is an Ft-stopping time. This result is a special case of the debut theorem, a very technical result from set theory. Note that

{ω : τ_B(ω) < t} = {ω : ∃ s < t with X(s, ω) ∈ B} = proj_Ω {(s, ω) : X(s, ω) ∈ B, s < t}.
Piecewise constant approximationsε > 0, τ ε0 = 0,
τ εi+1 = inft > τ εi : r(X(t), X(τ εi )) ∨ r(X(t−), X(τ εi )) ≥ ε
Define Xε(t) = X(τ εi ), τ εi ≤ t < τ εi+1. Then r(X(t), Xε(t)) ≤ ε.
If X is adapted to Ft, then the τ εi are Ft-stopping times and Xε
is Ft-adapted. See Exercise 4.
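For a discretely sampled path, the construction reduces to a simple scan (a sketch, not from the notes; it assumes r(x, y) = |x − y| and ignores the left-limit term, which a discrete sample cannot see):

```python
import numpy as np

def piecewise_constant_approx(path, eps):
    """Hold the value at the last refresh time tau_i, refreshing whenever
    |X(t) - X(tau_i)| >= eps; this is the discrete analogue of X^eps."""
    out = np.empty_like(path)
    anchor = path[0]
    for i, x in enumerate(path):
        if abs(x - anchor) >= eps:
            anchor = x  # a new stopping time tau_i occurs at this index
        out[i] = anchor
    return out

rng = np.random.default_rng(1)
w = np.cumsum(rng.normal(0, 0.01, 5000))  # a discretized Brownian path
eps = 0.05
w_eps = piecewise_constant_approx(w, eps)
print(np.max(np.abs(w - w_eps)) < eps)  # True: r(X(t), X^eps(t)) < eps here
```

By construction the approximation never strays more than ε from the sampled path, mirroring the bound r(X(t), X^ε(t)) ≤ ε.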
Martingales
An R-valued stochastic process M adapted to {Ft} is an Ft-martingale if

E[M(t+ r)|Ft] = M(t), t, r ≥ 0.

Every martingale has finite quadratic variation:

[M]_t = lim ∑_i (M(t ∧ t_{i+1})−M(t ∧ t_i))²,

where 0 = t_0 < t_1 < · · ·, t_i →∞, and the limit is in probability as max(t_{i+1} − t_i)→ 0. More precisely, for ε > 0 and t_0 > 0,

lim P{ sup_{t≤t_0} |[M]_t − ∑_i (M(t ∧ t_{i+1})−M(t ∧ t_i))²| > ε } = 0.

For standard Brownian motion W, [W]_t = t.
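A quick numerical check of the last statement (an illustration, not part of the notes): summing squared Brownian increments over a fine partition of [0, 1] should give a value near 1.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 200000
dW = rng.normal(0.0, np.sqrt(T / n), n)  # Brownian increments on a uniform grid

# Sum of squared increments over the partition approximates [W]_T = T
qv = np.sum(dW**2)
print(qv)  # close to 1.0; the error is O(n^{-1/2}) in probability
```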
Optional sampling theorem
A real-valued process X is a submartingale if E[|X(t)|] <∞, t ≥ 0, and

E[X(t+ s)|Ft] ≥ X(t), t, s ≥ 0.

If τ_1 and τ_2 are stopping times, then

E[X(t ∧ τ_2)|F_{τ_1}] ≥ X(t ∧ τ_1 ∧ τ_2).

If τ_2 is finite a.s., E[|X(τ_2)|] <∞, and lim_{t→∞} E[|X(t)| 1_{{τ_2>t}}] = 0, then

E[X(τ_2)|F_{τ_1}] ≥ X(τ_1 ∧ τ_2).

Of course, if X is a martingale,

E[X(t ∧ τ_2)|F_{τ_1}] = X(t ∧ τ_1 ∧ τ_2).
Square integrable martingales
Let M be a martingale satisfying E[M(t)²] <∞. Then

M(t)² − [M]_t

is a martingale. In particular, for t > s,

E[(M(t)−M(s))²] = E[[M]_t − [M]_s].
Doob’s inequalities
Let X be a submartingale. Then for x > 0,

P{sup_{s≤t} X(s) ≥ x} ≤ x^{−1} E[X(t)^+]

P{inf_{s≤t} X(s) ≤ −x} ≤ x^{−1}(E[X(t)^+]− E[X(0)])

If X is nonnegative and α > 1, then

E[sup_{s≤t} X(s)^α] ≤ (α/(α− 1))^α E[X(t)^α].

Note that by Jensen's inequality, if M is a martingale, then |M| is a submartingale. In particular, if M is a square integrable martingale, then

E[sup_{s≤t} |M(s)|²] ≤ 4E[M(t)²].
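The L² inequality is easy to probe by Monte Carlo (a sketch, not from the notes; the toy martingale, a scaled symmetric random walk, is an assumed choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps = 20000, 200
# Scaled symmetric random walk: a martingale with E[M(1)^2] = 1
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps)) / np.sqrt(n_steps)
M = np.cumsum(steps, axis=1)

lhs = np.mean(np.max(np.abs(M), axis=1) ** 2)  # estimates E[sup_{s<=1} |M(s)|^2]
rhs = 4 * np.mean(M[:, -1] ** 2)               # estimates 4 E[M(1)^2]
print(lhs <= rhs)  # Doob's L^2 bound holds, with plenty of slack here
```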
Stochastic integrals
Definition B.4 For cadlag processes X, Y,

X_− · Y(t) ≡ ∫_0^t X(s−) dY(s) = lim_{max|t_{i+1}−t_i|→0} ∑ X(t_i)(Y(t_{i+1} ∧ t)− Y(t_i ∧ t)),

whenever the limit exists in probability.

Sample paths of bounded variation: If Y is a finite variation process, the stochastic integral exists (apply the dominated convergence theorem) and

∫_0^t X(s−) dY(s) = ∫_{(0,t]} X(s−) α_Y(ds),

where α_Y is the signed measure with

α_Y(0, t] = Y(t)− Y(0).
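As an illustration (not in the notes), the left-endpoint sums in Definition B.4 can be computed for X = Y = W, a Brownian path, where the limit is known from Ito calculus to be (W(t)² − t)/2; evaluating the integrand at the left endpoint is exactly what makes this the Ito rather than the Stratonovich integral.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n = 1.0, 100000
dW = rng.normal(0.0, np.sqrt(T / n), n)
W = np.concatenate([[0.0], np.cumsum(dW)])

# Left-endpoint sums: sum_i W(t_i) (W(t_{i+1}) - W(t_i)) -> int_0^T W dW
ito = np.sum(W[:-1] * dW)
print(ito, (W[-1] ** 2 - T) / 2)  # the two values agree as the mesh -> 0
```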
Existence for square integrable martingales
If M is a square integrable martingale, then

E[(M(t+ s)−M(t))²|Ft] = E[[M]_{t+s} − [M]_t|Ft].

For partitions {t_i} and {r_i},

E[(∑ X(t_i)(M(t_{i+1} ∧ t)−M(t_i ∧ t))− ∑ X(r_i)(M(r_{i+1} ∧ t)−M(r_i ∧ t)))²]
 = E[∫_0^t (X(t(s−))−X(r(s−)))² d[M]_s]
 = E[∫_{(0,t]} (X(t(s−))−X(r(s−)))² α_{[M]}(ds)],

where t(s) = t_i for s ∈ [t_i, t_{i+1}) and r(s) = r_i for s ∈ [r_i, r_{i+1}).
Cauchy property
Let X be bounded by a constant. As sup(t_{i+1} − t_i) + sup(r_{i+1} − r_i)→ 0, the right side converges to zero by the dominated convergence theorem.

M^{{t_i}}_X(t) ≡ ∑ X(t_i)(M(t_{i+1} ∧ t)−M(t_i ∧ t)) is a square integrable martingale, so

E[sup_{t≤T} (∑ X(t_i)(M(t_{i+1} ∧ t)−M(t_i ∧ t))− ∑ X(r_i)(M(r_{i+1} ∧ t)−M(r_i ∧ t)))²]
 ≤ 4E[∫_{(0,T]} (X(t(s−))−X(r(s−)))² α_{[M]}(ds)].

A completeness argument gives existence of the stochastic integral, and the uniformity implies the integral is cadlag.
Local martingales
Definition B.5 M is a local martingale if there exist stopping times τ_n satisfying τ_1 ≤ τ_2 ≤ · · · and τ_n →∞ a.s. such that M^{τ_n}, defined by M^{τ_n}(t) = M(τ_n ∧ t), is a martingale. M is a local square-integrable martingale if the τ_n can be selected so that M^{τ_n} is square integrable.
{τ_n} is called a localizing sequence for M.

Remark B.6 If {τ_n} is a localizing sequence for M, and {γ_n} is another sequence of stopping times satisfying γ_1 ≤ γ_2 ≤ · · ·, γ_n →∞ a.s., then the optional sampling theorem implies that {τ_n ∧ γ_n} is localizing.
Local martingales with bounded jumps
Remark B.7 If M is a continuous local martingale, then

τ_n = inf{t : |M(t)| ≥ n}

will be a localizing sequence. More generally, if

|∆M(t)| ≤ c

for some constant c, then

τ_n = inf{t : |M(t)| ∨ |M(t−)| ≥ n}

will be a localizing sequence.

Note that |M^{τ_n}| ≤ n + c, so M is a local square-integrable martingale.
Semimartingales
Definition B.8 Y is an Ft-semimartingale if and only if Y = M + V ,where M is a local square integrable martingale with respect to Ft and Vis an Ft-adapted finite variation process.
In particular, if X is cadlag and adapted and Y is a semimartingale, then ∫ X_− dY exists.
Computing quadratic variations
Let ∆Z(t) = Z(t)− Z(t−).
Lemma B.9 If Y is finite variation, then

[Y]_t = ∑_{s≤t} ∆Y(s)².

Lemma B.10 If Y is a semimartingale, X is adapted, and Z(t) = ∫_0^t X(s−) dY(s), then

[Z]_t = ∫_0^t X(s−)² d[Y]_s.

Proof. Check first for piecewise constant X and then approximate general X by piecewise constant processes.
Covariation
The covariation of Y_1, Y_2 is defined by

[Y_1, Y_2]_t ≡ lim ∑_i (Y_1(t_{i+1} ∧ t)− Y_1(t_i ∧ t)) (Y_2(t_{i+1} ∧ t)− Y_2(t_i ∧ t)).
Ito’s formula
If f : R→ R is C² and Y is a semimartingale, then

f(Y(t)) = f(Y(0)) + ∫_0^t f′(Y(s−)) dY(s) + ∫_0^t (1/2) f′′(Y(s−)) d[Y]^c_s
 + ∑_{s≤t} (f(Y(s))− f(Y(s−))− f′(Y(s−)) ∆Y(s)),

where [Y]^c is the continuous part of the quadratic variation, given by

[Y]^c_t = [Y]_t − ∑_{s≤t} ∆Y(s)².
Ito’s formula for vector-valued semimartingales
If f : R^m → R is C², Y_1, . . . , Y_m are semimartingales, and Y = (Y_1, . . . , Y_m), then defining

[Y_k, Y_l]^c_t = [Y_k, Y_l]_t − ∑_{s≤t} ∆Y_k(s)∆Y_l(s),

f(Y(t)) = f(Y(0)) + ∑_{k=1}^m ∫_0^t ∂_k f(Y(s−)) dY_k(s)
 + ∑_{k,l=1}^m (1/2) ∫_0^t ∂_k ∂_l f(Y(s−)) d[Y_k, Y_l]^c_s
 + ∑_{s≤t} (f(Y(s))− f(Y(s−))− ∑_{k=1}^m ∂_k f(Y(s−)) ∆Y_k(s)).
Examples
Let W be a standard Brownian motion. Then

Z(t) = exp{W(t)− (1/2)t} = 1 + ∫_0^t Z(s) d(W(s)− (1/2)s) + ∫_0^t (1/2) Z(s) ds = 1 + ∫_0^t Z(s) dW(s).
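Numerically (an illustrative sketch, not from the notes), an Euler scheme for dZ = Z dW, Z(0) = 1, reproduces exp{W(t) − t/2} up to discretization error:

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 1.0, 50000
dW = rng.normal(0.0, np.sqrt(T / n), n)
W = np.cumsum(dW)
t = np.linspace(T / n, T, n)

Z_exact = np.exp(W - 0.5 * t)  # the exponential martingale

# Euler scheme for dZ = Z dW with Z(0) = 1
Z = np.empty(n + 1)
Z[0] = 1.0
for i in range(n):
    Z[i + 1] = Z[i] + Z[i] * dW[i]

print(abs(Z[-1] - Z_exact[-1]))  # small discretization error at t = T
```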
Integration with respect to Poisson random measures
• Poisson random measures
• Stochastic integrals for space-time Poisson random measures
• The predictable σ-algebra
• Martingale properties
• Representation of counting processes
• Stochastic integrals for centered space-time Poisson random mea-sures
• Quadratic variation
• Levy processes
• Gaussian white noise
Poisson distribution
Definition P.1 A random variable X has a Poisson distribution with parameter λ > 0 (write X ∼ Poisson(λ)) if for each k ∈ {0, 1, 2, . . .},

P{X = k} = (λ^k/k!) e^{−λ}.

Then

E[X] = λ,  Var(X) = λ,

and the characteristic function of X is

E[e^{iθX}] = e^{λ(e^{iθ}−1)}.

Since the characteristic function of a random variable characterizes its distribution, a direct computation gives
Proposition P.2 If X_1, X_2, . . . are independent random variables with X_i ∼ Poisson(λ_i) and ∑_{i=1}^∞ λ_i <∞, then

X = ∑_{i=1}^∞ X_i ∼ Poisson(∑_{i=1}^∞ λ_i).
Poisson sums of Bernoulli random variables
Proposition P.3 Let N ∼ Poisson(λ), and suppose that Y_1, Y_2, . . . are i.i.d. Bernoulli random variables with parameter p ∈ [0, 1]. If N is independent of the Y_i, then ∑_{i=1}^N Y_i ∼ Poisson(λp).
For j = 1, . . . , m, let e_j be the vector in R^m that has all its entries equal to zero, except for the jth, which is 1. For θ, y ∈ R^m, let ⟨θ, y⟩ = ∑_{j=1}^m θ_j y_j.

Proposition P.4 Let N ∼ Poisson(λ). Suppose that Y_1, Y_2, . . . are independent R^m-valued random variables such that for all k ≥ 1 and j ∈ {1, . . . , m},

P{Y_k = e_j} = p_j,

where ∑_{j=1}^m p_j = 1. Define X = (X_1, . . . , X_m) = ∑_{k=1}^N Y_k. If N is independent of the Y_k, then X_1, . . . , X_m are independent random variables and X_j ∼ Poisson(λ p_j).
Poisson random measures

Let (U, d_U) be a complete, separable metric space, and let ν be a σ-finite measure on U. Let N(U) denote the collection of counting measures on U.

Definition P.5 A Poisson random measure on U with mean measure ν is a random counting measure ξ (that is, an N(U)-valued random variable) such that

a) For A ∈ B(U), ξ(A) has a Poisson distribution with expectation ν(A).

b) ξ(A) and ξ(B) are independent if A ∩ B = ∅.

For f ∈ M(U), f ≥ 0, define

ψ_ξ(f) = E[exp{−∫_U f(u) ξ(du)}] = exp{−∫_U (1− e^{−f}) dν}.

(Verify the second equality by approximating f by simple functions.)
Existence
Proposition P.6 Suppose that ν is a measure on U such that ν(U) < ∞.Then there exists a Poisson random measure with mean measure ν.
Proof. The case ν(U) = 0 is trivial, so assume that ν(U) ∈ (0,∞). Let N be a Poisson random variable defined on a probability space (Ω,F, P) with E[N] = ν(U). Let X_1, X_2, . . . be iid U-valued random variables such that for every A ∈ B(U),

P{X_j ∈ A} = ν(A)/ν(U),

and assume that N is independent of the X_j.

Define ξ by ξ(A) = ∑_{k=1}^N 1_{{X_k∈A}}; in other words, ξ = ∑_{k=1}^N δ_{X_k}, where, for each x ∈ U, δ_x is the Dirac mass at x.

Extend the existence result to σ-finite measures by partitioning U = ∪_i U_i, where ν(U_i) <∞.
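The proof is constructive and translates directly into code (a sketch, not from the notes; taking U = [0,1]² and ν equal to 5 times Lebesgue measure is an assumed example):

```python
import numpy as np

rng = np.random.default_rng(6)

# U = [0,1]^2 with nu = 5 * Lebesgue, so nu(U) = 5 (assumed for illustration)
nu_U = 5.0
N = rng.poisson(nu_U)                    # total number of points
points = rng.uniform(0.0, 1.0, (N, 2))  # iid with law nu(.)/nu(U)

# xi(A) = number of points in A; counts over a disjoint partition sum to N
def xi(lo, hi):
    return np.sum((points[:, 0] >= lo) & (points[:, 0] < hi))

counts = [xi(k / 4, (k + 1) / 4) for k in range(4)]
print(sum(counts) == N)  # True: the strips [k/4,(k+1)/4) x [0,1] exhaust U
```

Independence of the counts over disjoint sets (property b) is what the Poisson total N buys; with a deterministic number of points, the counts would be negatively correlated.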
Identities
Let ξ be a Poisson random measure with mean measure ν.
Lemma P.7 Suppose f ∈ M(U), f ≥ 0. Then

E[∫ f(y) ξ(dy)] = ∫ f(y) ν(dy).

Lemma P.8 Suppose ν is nonatomic and let f ∈ M(N(U)× U), f ≥ 0. Then

E[∫_U f(ξ, y) ξ(dy)] = E[∫_U f(ξ + δ_y, y) ν(dy)].
Proof. Suppose 0 ≤ f ≤ 1_{U_0}, where ν(U_0) <∞. Let U_0 = ∪_k U^n_k, where the U^n_k are disjoint and diam(U^n_k) ≤ n^{−1}. If ξ(U^n_k) is 0 or 1, then

∫_{U^n_k} f(ξ, y) ξ(dy) = ∫_{U^n_k} f(ξ(· ∩ U^{n,c}_k) + δ_y, y) ξ(dy).

Consequently, if max_k ξ(U^n_k) ≤ 1,

∫_{U_0} f(ξ, y) ξ(dy) = ∑_k ∫_{U^n_k} f(ξ(· ∩ U^{n,c}_k) + δ_y, y) ξ(dy).

Since ξ(U_0) <∞, for n sufficiently large, max_k ξ(U^n_k) ≤ 1, so

E[∫_U f(ξ, y) ξ(dy)] = E[∫_{U_0} f(ξ, y) ξ(dy)]
 = lim_{n→∞} ∑_k E[∫_{U^n_k} f(ξ(· ∩ U^{n,c}_k) + δ_y, y) ξ(dy)]
 = lim_{n→∞} ∑_k E[∫_{U^n_k} f(ξ(· ∩ U^{n,c}_k) + δ_y, y) ν(dy)]
 = E[∫_U f(ξ + δ_y, y) ν(dy)].

Note that the last equality follows from the fact that f(ξ(· ∩ U^{n,c}_k) + δ_y, y) ≠ f(ξ + δ_y, y) only if ξ(U^n_k) > 0, and hence, assuming 0 ≤ f ≤ 1_{U_0},

|∑_k ∫_{U^n_k} f(ξ(· ∩ U^{n,c}_k) + δ_y, y) ν(dy)− ∫_{U_0} f(ξ + δ_y, y) ν(dy)| ≤ ∑_k ξ(U^n_k) ν(U^n_k),

where the expectation of the right side is ∑_k ν(U^n_k)² = ∫_{U_0} ν(U^n(y)) ν(dy) ≤ ∫_{U_0} ν(U_0 ∩ B_{1/n}(y)) ν(dy), with U^n(y) = U^n_k if y ∈ U^n_k. Since ν is nonatomic, lim_{n→∞} ν(U_0 ∩ B_{1/n}(y)) = 0.
Space-time Poisson random measures
Let ξ be a Poisson random measure on U × [0,∞) with mean measure ν × ℓ (where ℓ denotes Lebesgue measure).

ξ(A, t) ≡ ξ(A× [0, t]) is a Poisson process with parameter ν(A).

If ν(A) <∞, ξ̃(A, t) ≡ ξ(A× [0, t])− ν(A)t is a martingale.

Definition P.9 ξ is {Ft}-compatible if, for each A ∈ B(U), ξ(A, ·) is Ft-adapted and, for all t, s ≥ 0, ξ(A× (t, t+ s]) is independent of Ft.
Stochastic integrals for Poisson random measures

For i = 1, . . . , m, let t_i < r_i and A_i ∈ B(U), and let η_i be F_{t_i}-measurable. Let X(u, t) = ∑_i η_i 1_{A_i}(u) 1_{[t_i,r_i)}(t), and note that

X(u, t−) = ∑_i η_i 1_{A_i}(u) 1_{(t_i,r_i]}(t).   (P.1)

Define

I_ξ(X, t) = ∫_{U×[0,t]} X(u, s−) ξ(du× ds) = ∑_i η_i ξ(A_i × (t_i, r_i]).

Then

E[|I_ξ(X, t)|] ≤ E[∫_{U×[0,t]} |X(u, s−)| ξ(du× ds)] = ∫_{U×[0,t]} E[|X(u, s)|] ν(du) ds,

and if the right side is finite, E[I_ξ(X, t)] = ∫_{U×[0,t]} E[X(u, s)] ν(du) ds.
Estimates in L^{1,0}

If

∫_{U×[0,t]} |X(u, s−)| ∧ 1 ξ(du× ds) <∞,

then ξ{(u, s) : |X(u, s−)| > 1} <∞ and

E[sup_{t≤T} |I_ξ(X ∧ 1, t)|] ≤ ∫_{U×[0,T]} E[|X(u, s)| ∧ 1] ν(du) ds.

Definition P.10 Let L^{1,0}(U, ν) denote the space of B(U)× B[0,∞)×F-measurable mappings (u, s, ω)→ X(u, s, ω) such that

∫_0^∞ e^{−s} ∫_U E[|X(u, s)| ∧ 1] ν(du) ds <∞.

Let S^− denote the collection of B(U)× B[0,∞)×F-measurable mappings (u, t, ω)→ ∑_{i=1}^m η_i(ω) 1_{A_i}(u) 1_{(t_i,r_i]}(t) defined as in (P.1).
Lemma P.11

d_{1,0}(X, Y) = ∫_0^∞ e^{−s} ∫_U E[|X(u, s)− Y(u, s)| ∧ 1] ν(du) ds

defines a metric on L^{1,0}(U, ν), and the definition of I_ξ extends to the closure of S^− in L^{1,0}(U, ν).
The predictable σ-algebra

Warning: Let N be a unit Poisson process. Then ∫_0^∞ e^{−s} E[|N(s)− N(s−)| ∧ 1] ds = 0, but P{∫_0^t N(s) dN(s) ≠ ∫_0^t N(s−) dN(s)} = 1− e^{−t}.

Definition P.12 Let (Ω,F, P) be a probability space and let {Ft} be a filtration in F. The σ-algebra P of predictable sets is the smallest σ-algebra in B(U)× B[0,∞)×F containing sets of the form A× (t_0, t_0 + r_0]× B for A ∈ B(U), t_0, r_0 ≥ 0, and B ∈ F_{t_0}.

Remark P.13 Note that for B ∈ F_{t_0}, 1_{A×(t_0,t_0+r_0]×B}(u, t, ω) is left continuous in t and adapted, and that the mapping (u, t, ω)→ X(u, t−, ω), where X(u, t−, ω) is defined in (P.1), is P-measurable.

Definition P.14 A stochastic process X on U × [0,∞) is predictable if the mapping (u, t, ω)→ X(u, t, ω) is P-measurable.
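The warning can be made concrete (an illustration, not from the notes): for a counting-process path with k jumps on [0, t], the integrand N(s−) (left continuous, hence predictable) and the integrand N(s) differ exactly at the jump times, and the measure dN charges precisely those times.

```python
# Jump times of a counting-process path N on [0, t] (an assumed realization)
jump_times = [0.3, 0.7, 1.1, 1.6]
k = len(jump_times)

# int_0^t N(s-) dN(s) = sum_j N(t_j -) = 0 + 1 + ... + (k-1)
left_limit_integral = sum(j for j in range(k))
# int_0^t N(s) dN(s) = sum_j N(t_j) = 1 + 2 + ... + k
right_value_integral = sum(j + 1 for j in range(k))

print(left_limit_integral, right_value_integral)  # 6 10
```

The two integrals differ by k, the number of jumps, which is why predictability (left continuity) of the integrand matters for the martingale properties below.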
Lemma P.15 If the mapping (u, t, ω)→ X(u, t, ω) is B(U)× B[0,∞)×F-measurable and adapted and is left continuous in t, then X is predictable.

Proof. Let 0 = t^n_0 < t^n_1 < · · · with t^n_{i+1} − t^n_i ≤ n^{−1}. Define X_n(u, t, ω) = X(u, t^n_i, ω) for t^n_i < t ≤ t^n_{i+1}. Then X_n is predictable and

lim_{n→∞} X_n(u, t, ω) = X(u, t, ω)

for all (u, t, ω).
Stochastic integrals for predictable processes
Lemma P.16 Let G ∈ P, B ∈ B(U) with ν(B) <∞, and b > 0. Then 1_{B×[0,b]}(u, t) 1_G(u, t, ω) is a predictable process,

I_ξ(1_{B×[0,b]} 1_G, t)(ω) = ∫_{U×[0,t]} 1_{B×[0,b]}(u, s) 1_G(u, s, ω) ξ(du× ds, ω) a.s.,   (P.2)

and

E[∫_{U×[0,t]} 1_{B×[0,b]}(u, s) 1_G(u, s, ·) ξ(du× ds)] = E[∫_{U×[0,t]} 1_{B×[0,b]}(u, s) 1_G(u, s, ·) ν(du) ds].   (P.3)
Proof. Let

A = {∪_{i=1}^m A_i × (t_i, t_i + r_i]× G_i : t_i, r_i ≥ 0, A_i ∈ B(U), G_i ∈ F_{t_i}}.

Then A is an algebra. For G ∈ A, (P.2) holds by definition, and (P.3) holds by direct calculation. The collection of G that satisfy (P.2) and (P.3) is closed under increasing unions and decreasing intersections, and the monotone class theorem (see Theorem 4.1 of the Appendix of Ethier and Kurtz (1986)) gives the lemma.
Lemma P.17 Let X be a predictable process satisfying

∫_0^∞ e^{−s} ∫_U E[|X(u, s)| ∧ 1] ν(du) ds <∞.

Then ∫_{U×[0,t]} |X(u, s)| ∧ 1 ξ(du× ds) <∞ a.s. and

I_ξ(X, t)(ω) = ∫_{U×[0,t]} X(u, s, ω) ξ(du× ds, ω) a.s.

Proof. Approximate by simple functions.
Consequences of predictability
Lemma P.18 If X is predictable and ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du) ds <∞ a.s. for all t, then

∫_{U×[0,t]} |X(u, s)| ξ(du× ds) <∞ a.s.   (P.4)

and

∫_{U×[0,t]} X(u, s) ξ(du× ds)

exists a.s.

Proof. Let τ_c = inf{t : ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du) ds ≥ c}, and consider X_c(u, s) = 1_{[0,τ_c]}(s) X(u, s). Then X_c satisfies the conditions of Lemma P.17, so

∫_{U×[0,t]} |X(u, s)| ∧ 1 ξ(du× ds) <∞ a.s.

But this implies ξ{(u, s) : s ≤ t, |X(u, s)| > 1} <∞, so (P.4) holds.
Martingale properties
Theorem P.19 Suppose X is predictable and ∫_{U×[0,t]} E[|X(u, s)|] ν(du) ds <∞ for each t > 0. Then

∫_{U×[0,t]} X(u, s) ξ(du× ds)− ∫_0^t ∫_U X(u, s) ν(du) ds

is an {Ft}-martingale.
Proof. Let A ∈ Ft and define X_A(u, s) = 1_A X(u, s) 1_{(t,t+r]}(s). Then X_A is predictable and

E[1_A ∫_{U×(t,t+r]} X(u, s) ξ(du× ds)] = E[∫_{U×[0,t+r]} X_A(u, s) ξ(du× ds)]
 = E[∫_{U×[0,t+r]} X_A(u, s) ν(du) ds]
 = E[1_A ∫_{U×(t,t+r]} X(u, s) ν(du) ds],

and hence

E[∫_{U×(t,t+r]} X(u, s) ξ(du× ds) | Ft] = E[∫_{U×(t,t+r]} X(u, s) ν(du) ds | Ft].
Local martingales
Lemma P.20 If

∫_{U×[0,t]} |X(u, s)| ν(du) ds <∞ a.s., t ≥ 0,

then

∫_{U×[0,t]} X(u, s) ξ(du× ds)− ∫_{U×[0,t]} X(u, s) ν(du) ds

is a local martingale.
Proof. If τ is a stopping time and X is predictable, then 1_{[0,τ]}(t) X(u, t) is predictable. Let

τ_c = inf{t > 0 : ∫_{U×[0,t]} |X(u, s)| ν(du) ds ≥ c}.

Then

∫_{U×[0,t∧τ_c]} X(u, s) ξ(du× ds)− ∫_{U×[0,t∧τ_c]} X(u, s) ν(du) ds
 = ∫_{U×[0,t]} 1_{[0,τ_c]}(s) X(u, s) ξ(du× ds)− ∫_{U×[0,t]} 1_{[0,τ_c]}(s) X(u, s) ν(du) ds

is a martingale.
Representation of counting processes

Let U = [0,∞) and ν = ℓ. Let λ be a nonnegative, predictable process, and define G = {(u, t) : u ≤ λ(t)}. Then

N(t) = ∫_{[0,∞)×[0,t]} 1_G(u, s) ξ(du× ds) = ∫_{[0,∞)×[0,t]} 1_{[0,λ(s)]}(u) ξ(du× ds)

is a counting process with intensity λ.

Stochastic equation for a counting process

Let λ : [0,∞)× D_E[0,∞)× D_c[0,∞)→ [0,∞) satisfy λ(t, z, v) = λ(t, z^{t−}, v^{t−}), t ≥ 0, and let Z be independent of ξ. Consider

N(t) = ∫_{[0,∞)×[0,t]} 1_{[0,λ(s,Z,N)]}(u) ξ(du× ds).   (P.5)
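For a deterministic intensity, this representation is the classical thinning construction, sketched below (not from the notes; λ(t) = 1 + 0.5 sin t and the horizon are assumed choices — the predictable case would evaluate λ at the history up to s−):

```python
import numpy as np

rng = np.random.default_rng(7)
T, lam_max = 10.0, 2.0

# Poisson random measure on [0, lam_max] x [0, T] with Lebesgue mean measure:
n_pts = rng.poisson(lam_max * T)
u = rng.uniform(0.0, lam_max, n_pts)  # the auxiliary "u" coordinate
s = rng.uniform(0.0, T, n_pts)        # the time coordinate

lam = lambda t: 1.0 + 0.5 * np.sin(t)  # assumed deterministic intensity <= lam_max
# N(T) counts the points of xi lying below the graph of lam:
N_T = int(np.sum(u <= lam(s)))
print(N_T <= n_pts)  # True: thinning can only discard points
```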
Semimartingale property
Corollary P.21 If X is predictable and ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du) ds <∞ a.s. for all t, then ∫_{U×[0,t]} |X(u, s)| ξ(du× ds) <∞ a.s., and

∫_{U×[0,t]} X(u, s) ξ(du× ds)
 = ∫_{U×[0,t]} 1_{{|X(u,s)|≤1}} X(u, s) ξ(du× ds)− ∫_0^t ∫_U 1_{{|X(u,s)|≤1}} X(u, s) ν(du) ds   [local martingale]
 + ∫_0^t ∫_U 1_{{|X(u,s)|≤1}} X(u, s) ν(du) ds + ∫_{U×[0,t]} 1_{{|X(u,s)|>1}} X(u, s) ξ(du× ds)   [finite variation]

is a semimartingale.
Stochastic integrals for centered Poisson random measures

Let $\tilde\xi(du\times ds) = \xi(du\times ds) - \nu(du)\,ds$. For
\[
X(u,t-) = \sum_i \eta_i 1_{A_i}(u)1_{(t_i,r_i]}(t)
\]
as in (P.1), define
\[
I_{\tilde\xi}(X,t) = \int_{U\times[0,t]} X(u,s-)\,\tilde\xi(du\times ds) = \int_{U\times[0,t]} X(u,s-)\,\xi(du\times ds) - \int_0^t\!\!\int_U X(u,s)\,\nu(du)\,ds,
\]
and note that
\[
E[I_{\tilde\xi}(X,t)^2] = \int_{U\times[0,t]} E[X(u,s)^2]\,\nu(du)\,ds
\]
if the right side is finite. Then $I_{\tilde\xi}(X,\cdot)$ is a square-integrable martingale.
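The isometry can be checked by Monte Carlo for concrete choices. The sketch below assumes $U=[0,1]$, $\nu$ = Lebesgue measure, $t=2$, and the deterministic (hence predictable) integrand $X(u,s)=u$, all of which are illustrative choices rather than part of the text:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup: U = [0,1], nu = Lebesgue, t = 2, X(u,s) = u.  Then
# I(t) = sum_i X(u_i, s_i) - int_0^t int_0^1 u du ds, and the isometry
# predicts E[I(t)^2] = int_0^t int_0^1 u^2 du ds = t/3.
t, n_samples = 2.0, 200000

def centered_integral(rng):
    n = rng.poisson(1.0 * t)     # xi has nu(U)*t = t points on average
    u = rng.uniform(size=n)      # u-coordinates of the points of xi
    return u.sum() - t / 2.0     # compensator: int int u du ds = t/2

samples = np.array([centered_integral(rng) for _ in range(n_samples)])
mean_I = float(samples.mean())   # martingale started at 0: near 0
second_moment = float(np.mean(samples**2))
predicted = t / 3.0
```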
Extension of integral

The integral extends to predictable integrands satisfying
\[
\int_{U\times[0,t]} |X(u,s)|^2\wedge|X(u,s)|\,\nu(du)\,ds < \infty \quad\text{a.s.} \tag{P.6}
\]
so that
\[
\int_{U\times[0,t\wedge\tau]} X(u,s)\,\tilde\xi(du\times ds) = \int_{U\times[0,t]} 1_{[0,\tau]}(s)X(u,s)\,\tilde\xi(du\times ds) \tag{P.7}
\]
is a martingale for any stopping time $\tau$ satisfying
\[
E\Big[\int_{U\times[0,t\wedge\tau]} |X(u,s)|^2\wedge|X(u,s)|\,\nu(du)\,ds\Big] < \infty,
\]
and (P.7) is a local square-integrable martingale if
\[
\int_{U\times[0,t]} |X(u,s)|^2\,\nu(du)\,ds < \infty \quad\text{a.s.}
\]
Quadratic variation

Note that if $X$ is predictable and
\[
\int_{U\times[0,t]} |X(u,s)|\wedge 1\,\nu(du)\,ds < \infty \quad\text{a.s.},\ t\ge 0,
\]
then
\[
\int_{U\times[0,t]} |X(u,s)|^2\wedge 1\,\nu(du)\,ds < \infty \quad\text{a.s.},\ t\ge 0,
\]
and
\[
[I_{\tilde\xi}(X,\cdot)]_t = \int_{U\times[0,t]} X^2(u,s)\,\xi(du\times ds).
\]
Similarly, if
\[
\int_{U\times[0,t]} |X(u,s)|^2\wedge|X(u,s)|\,\nu(du)\,ds < \infty \quad\text{a.s.},
\]
then
\[
[I_{\tilde\xi}(X,\cdot)]_t = \int_{U\times[0,t]} X^2(u,s)\,\xi(du\times ds).
\]
Semimartingale properties

Theorem P.22 Let $Y$ be a cadlag, adapted process. If $X$ is predictable and satisfies (P.4), then $I_\xi(X,\cdot)$ is a semimartingale and
\[
\int_0^t Y(s-)\,dI_\xi(X,s) = \int_{U\times[0,t]} Y(s-)X(u,s)\,\xi(du\times ds),
\]
and if $X$ satisfies (P.6), then $I_{\tilde\xi}(X,\cdot)$ is a semimartingale and
\[
\int_0^t Y(s-)\,dI_{\tilde\xi}(X,s) = \int_{U\times[0,t]} Y(s-)X(u,s)\,\tilde\xi(du\times ds).
\]
Levy processes

Theorem P.23 Let $U = \mathbb{R}$ and $\int_{\mathbb{R}} |u|^2\wedge 1\,\nu(du) < \infty$. Then
\[
Z(t) = \int_{[-1,1]\times[0,t]} u\,\tilde\xi(du\times ds) + \int_{[-1,1]^c\times[0,t]} u\,\xi(du\times ds)
\]
is a process with stationary, independent increments with
\[
E[e^{i\theta Z(t)}] = \exp\Big\{t\int_{\mathbb{R}} \big(e^{i\theta u} - 1 - i\theta u 1_{[-1,1]}(u)\big)\,\nu(du)\Big\}.
\]
Proof. By Ito's formula,
\begin{align*}
e^{i\theta Z(t)} &= 1 + \int_0^t i\theta e^{i\theta Z(s-)}\,dZ(s) + \sum_{s\le t}\big(e^{i\theta Z(s)} - e^{i\theta Z(s-)} - i\theta e^{i\theta Z(s-)}\Delta Z(s)\big)\\
&= 1 + \int_{[-1,1]\times[0,t]} i\theta e^{i\theta Z(s-)}u\,\tilde\xi(du\times ds) + \int_{[-1,1]^c\times[0,t]} i\theta e^{i\theta Z(s-)}u\,\xi(du\times ds)\\
&\quad + \int_{\mathbb{R}\times[0,t]} \big(e^{i\theta(Z(s-)+u)} - e^{i\theta Z(s-)} - i\theta e^{i\theta Z(s-)}u\big)\,\xi(du\times ds)\\
&= 1 + \int_{[-1,1]\times[0,t]} i\theta e^{i\theta Z(s-)}u\,\tilde\xi(du\times ds)\\
&\quad + \int_{\mathbb{R}\times[0,t]} e^{i\theta Z(s-)}\big(e^{i\theta u} - 1 - i\theta u 1_{[-1,1]}(u)\big)\,\xi(du\times ds).
\end{align*}
Taking expectations, $\varphi(\theta,t) = E[e^{i\theta Z(t)}]$ satisfies
\[
\varphi(\theta,t) = 1 + \int_{\mathbb{R}\times[0,t]} \varphi(\theta,s)\big(e^{i\theta u} - 1 - i\theta u 1_{[-1,1]}(u)\big)\,\nu(du)\,ds,
\]
so
\[
\varphi(\theta,t) = \exp\Big\{t\int_{\mathbb{R}} \big(e^{i\theta u} - 1 - i\theta u 1_{[-1,1]}(u)\big)\,\nu(du)\Big\}.
\]
Approximation of Levy processes

For $0 < \varepsilon < 1$, let
\begin{align*}
Z_\varepsilon(t) &= \int_{([-1,-\varepsilon)\cup(\varepsilon,1])\times[0,t]} u\,\tilde\xi(du\times ds) + \int_{[-1,1]^c\times[0,t]} u\,\xi(du\times ds)\\
&= \int_{((-\infty,-\varepsilon)\cup(\varepsilon,\infty))\times[0,t]} u\,\xi(du\times ds) - t\int_{[-1,-\varepsilon)\cup(\varepsilon,1]} u\,\nu(du),
\end{align*}
that is, throw out all jumps of size less than or equal to $\varepsilon$ and the corresponding centering. Then
\[
E[|Z_\varepsilon(t) - Z(t)|^2] = t\int_{[-\varepsilon,\varepsilon]} u^2\,\nu(du).
\]
Consequently, since $Z_\varepsilon - Z$ is a square-integrable martingale, Doob's inequality gives
\[
\lim_{\varepsilon\to 0} E[\sup_{s\le t}|Z_\varepsilon(s) - Z(s)|^2] = 0.
\]
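The $L^2$ error formula can be illustrated numerically. The sketch below assumes the Lévy measure $\nu(du)=|u|^{-3/2}\,du$ on $[-1,1]\setminus\{0\}$ (an illustrative choice) and compares two truncation levels $a < b$: the difference of the truncated processes keeps exactly the jumps with $a < |u| \le b$, and by symmetry its compensator vanishes, so it is a centered compound Poisson sum whose variance the formula predicts.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative Levy measure nu(du) = |u|^{-3/2} du on [-1,1]\{0}
# (infinite total mass, but int u^2 nu(du) < infinity).
a, b, t = 0.02, 0.1, 1.0
mass_one_side = 2.0 * (a**-0.5 - b**-0.5)   # int_a^b u^{-3/2} du
total_mass = 2.0 * mass_one_side            # nu{a < |u| <= b}

def sample_difference(rng):
    """One sample of Z_a(t) - Z_b(t), a centered compound Poisson sum."""
    n = rng.poisson(total_mass * t)
    f = rng.uniform(size=n)                 # inverse-CDF sampling of |u|
    mag = (a**-0.5 - f * (a**-0.5 - b**-0.5))**-2.0
    sign = rng.choice([-1.0, 1.0], size=n)
    return (sign * mag).sum()

samples = np.array([sample_difference(rng) for _ in range(100000)])
emp_var = float(np.mean(samples**2))
# predicted: t * int_{a<|u|<=b} u^2 nu(du) = t * (4/3)(b^{3/2} - a^{3/2})
pred_var = t * (4.0 / 3.0) * (b**1.5 - a**1.5)
```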
Summary on stochastic integrals

If $X$ is predictable and $\int_{U\times[0,t]} |X(u,s)|\wedge 1\,\nu(du)\,ds < \infty$ a.s. for all $t$, then $\int_{U\times[0,t]} |X(u,s)|\,\xi(du\times ds) < \infty$ a.s. and
\begin{align*}
\int_{U\times[0,t]} X(u,s)\,\xi(du\times ds)
&= \underbrace{\int_{U\times[0,t]} 1_{\{|X(u,s)|\le 1\}}X(u,s)\,\xi(du\times ds) - \int_0^t\!\!\int_U 1_{\{|X(u,s)|\le 1\}}X(u,s)\,\nu(du)\,ds}_{\text{local martingale}}\\
&\quad + \underbrace{\int_0^t\!\!\int_U 1_{\{|X(u,s)|\le 1\}}X(u,s)\,\nu(du)\,ds + \int_{U\times[0,t]} 1_{\{|X(u,s)|>1\}}X(u,s)\,\xi(du\times ds)}_{\text{finite variation}}
\end{align*}
is a semimartingale.
If $X$ is predictable and
\[
\int_{U\times[0,t]} |X(u,s)|^2\wedge|X(u,s)|\,\nu(du)\,ds < \infty \quad\text{a.s.},
\]
then
\begin{align*}
\int_{U\times[0,t]} X(u,s)\,\tilde\xi(du\times ds)
&= \lim_{\varepsilon\to 0+} \int_{U\times[0,t]} 1_{\{|X(u,s)|\ge\varepsilon\}}X(u,s)\,\tilde\xi(du\times ds)\\
&= \lim_{\varepsilon\to 0+}\Big(\int_{U\times[0,t]} 1_{\{|X(u,s)|\ge\varepsilon\}}X(u,s)\,\xi(du\times ds)\\
&\qquad - \int_0^t\!\!\int_U 1_{\{|X(u,s)|\ge\varepsilon\}}X(u,s)\,\nu(du)\,ds\Big)
\end{align*}
exists and is a local martingale.
Gaussian white noise

Let $(U,d_U)$ be a complete, separable metric space with Borel sets $\mathcal{B}(U)$, let $\mu$ be a (Borel) measure on $U$, and let $\mathcal{A}(U) = \{A\in\mathcal{B}(U) : \mu(A) < \infty\}$.

$W(A,t) \equiv W(A\times[0,t])$ is a mean zero, Gaussian process indexed by $\mathcal{A}(U)\times[0,\infty)$ with
\[
E[W(A,t)W(B,s)] = t\wedge s\,\mu(A\cap B).
\]
For simple $\varphi(u) = \sum_i a_i 1_{A_i}(u)$, define
\[
W(\varphi,t) = \int \varphi(u)\,W(du,t) = \sum_i a_i W(A_i,t),
\]
so that
\[
E[W(\varphi_1,t)W(\varphi_2,s)] = t\wedge s\int_U \varphi_1(u)\varphi_2(u)\,\mu(du).
\]
This identity extends the definition of $W(\varphi,t)$ to all $\varphi\in L^2(\mu)$.
Definition of integral

For $X(t) = \sum_i \xi_i(t)\varphi_i$, an adapted process with values in $L^2(\mu)$, define
\[
I_W(X,t) = \int_{U\times[0,t]} X(s,u)\,W(du\times ds) = \sum_i \int_0^t \xi_i(s)\,dW(\varphi_i,s).
\]
Then
\begin{align*}
E[I_W(X,t)^2] &= E\Big[\sum_{i,j}\int_0^t \xi_i(s)\xi_j(s)\,ds\int_U \varphi_i\varphi_j\,d\mu\Big]\\
&= E\Big[\int_0^t\!\!\int_U X(s,u)^2\,\mu(du)\,ds\Big].
\end{align*}
The integral extends to measurable and adapted processes satisfying
\[
\int_0^t\!\!\int_U X(s,u)^2\,\mu(du)\,ds < \infty \quad\text{a.s.},
\]
so that
\[
I_W(X,t)^2 - \int_0^t\!\!\int_U X(s,u)^2\,\mu(du)\,ds
\]
is a local martingale.
Technical lemmas
• Some definitions
• Caratheodory extension theorem
• Dynkin class theorem
• Essential supremum
• Martingale convergence theorem
• Kronecker’s lemma
• Law of large numbers for martingales
• Geometric rates
• Uniform integrability
• Measurable functions
• Measurable selection
• Dominated convergence theorem
• Metric spaces
• Sequential compactness
• Completeness
Some definitions

Definition T.1

$D_E[0,\infty)$ is the space of cadlag functions with the Skorohod ($J_1$) topology.

A collection of functions $D\subset B(E)$ is separating if $\int f\,d\mu_1 = \int f\,d\mu_2$ for all $f\in D$ implies $\mu_1 = \mu_2$.

$t$ is a fixed point of discontinuity for a cadlag process $X$ if
\[
P\{X(t)\ne X(t-)\} > 0.
\]
Caratheodory extension theorem

Theorem T.2 Let $M$ be a set, and let $\mathcal{A}$ be an algebra of subsets of $M$. If $\mu$ is a $\sigma$-finite measure on $\mathcal{A}$, then there exists a unique extension of $\mu$ to a measure on $\sigma(\mathcal{A})$.
Dynkin class theorem

A collection $\mathcal{D}$ of subsets of $\Omega$ is a Dynkin class if $\Omega\in\mathcal{D}$; $A,B\in\mathcal{D}$ and $A\subset B$ imply $B-A\in\mathcal{D}$; and $A_n\in\mathcal{D}$ with $A_1\subset A_2\subset\cdots$ imply $\cup_n A_n\in\mathcal{D}$.

Theorem T.3 Let $\mathcal{S}$ be a collection of subsets of $\Omega$ such that $A,B\in\mathcal{S}$ implies $A\cap B\in\mathcal{S}$. If $\mathcal{D}$ is a Dynkin class with $\mathcal{S}\subset\mathcal{D}$, then $\sigma(\mathcal{S})\subset\mathcal{D}$.

Corollary T.4 Suppose two finite measures $\mu$, $\nu$ have the same total mass and agree on a collection of subsets $\mathcal{S}$ that is closed under intersection. Then they agree on $\sigma(\mathcal{S})$.
Essential supremum

Let $\{Z_\alpha,\alpha\in I\}$ be a collection of random variables. Note that if $I$ is uncountable, $\sup_{\alpha\in I} Z_\alpha$ may not be a random variable; however, we have the following:

Lemma T.5 There exists a random variable $Z$ such that $P\{Z_\alpha\le Z\} = 1$ for each $\alpha\in I$, and there exist $\alpha_k$, $k = 1,2,\ldots$, such that $Z = \sup_k Z_{\alpha_k}$.

Proof. Without loss of generality, we can assume $0 < Z_\alpha < 1$. (Otherwise, replace $Z_\alpha$ by $\frac{1}{1+e^{-Z_\alpha}}$.) Let $C = \sup\{E[Z_{\alpha_1}\vee\cdots\vee Z_{\alpha_m}] : \alpha_1,\ldots,\alpha_m\in I,\ m = 1,2,\ldots\}$. Then there exist $(\alpha_1^n,\ldots,\alpha_{m_n}^n)$ such that
\[
C = \lim_{n\to\infty} E[Z_{\alpha_1^n}\vee\cdots\vee Z_{\alpha_{m_n}^n}].
\]
Define $Z = \sup\{Z_{\alpha_i^n} : 1\le i\le m_n,\ n = 1,2,\ldots\}$, and note that $C = E[Z]$ and $C = E[Z\vee Z_\alpha]$ for each $\alpha\in I$. Consequently, $P\{Z_\alpha\le Z\} = 1$.
Martingale convergence theorem

Theorem T.6 Suppose $X_n$ is a submartingale and $\sup_n E[|X_n|] < \infty$. Then $\lim_{n\to\infty} X_n$ exists a.s.
Kronecker's lemma

Lemma T.7 Let $A_n$ and $Y_n$ be sequences of random variables with $A_0 > 0$ and $A_{n+1}\ge A_n$, $n = 0,1,2,\ldots$. Define
\[
R_n = \sum_{k=1}^n \frac{1}{A_{k-1}}(Y_k - Y_{k-1}),
\]
and suppose that $\lim_{n\to\infty} A_n = \infty$ and that $\lim_{n\to\infty} R_n$ exists a.s. Then $\lim_{n\to\infty} \frac{Y_n}{A_n} = 0$ a.s.

Proof. Summing by parts,
\begin{align*}
A_nR_n &= \sum_{k=1}^n (A_kR_k - A_{k-1}R_{k-1}) = \sum_{k=1}^n R_{k-1}(A_k - A_{k-1}) + \sum_{k=1}^n A_k(R_k - R_{k-1})\\
&= Y_n - Y_0 + \sum_{k=1}^n R_{k-1}(A_k - A_{k-1}) + \sum_{k=1}^n \frac{1}{A_{k-1}}(Y_k - Y_{k-1})(A_k - A_{k-1}),
\end{align*}
and
\[
\frac{Y_n}{A_n} = \frac{Y_0}{A_n} + R_n - \frac{1}{A_n}\sum_{k=1}^n R_{k-1}(A_k - A_{k-1}) - \frac{1}{A_n}\sum_{k=1}^n \frac{1}{A_{k-1}}(Y_k - Y_{k-1})(A_k - A_{k-1}).
\]
Law of large numbers for martingales

Lemma T.8 Suppose $A_n$ is as in Lemma T.7 and is adapted to $\{\mathcal{F}_n\}$, and suppose $M_n$ is an $\{\mathcal{F}_n\}$-martingale such that for each $\{\mathcal{F}_n\}$-stopping time $\tau$, $E[(M_\tau - M_{\tau-1})^2 1_{\{\tau<\infty\}}] < \infty$. If
\[
\sum_{k=1}^\infty \frac{1}{A_{k-1}^2}(M_k - M_{k-1})^2 < \infty \quad\text{a.s.},
\]
then $\lim_{n\to\infty} \frac{M_n}{A_n} = 0$ a.s.

Proof. Without loss of generality, we can assume that $A_n\ge 1$. Let
\[
\tau_c = \min\Big\{n : \sum_{k=1}^n \frac{1}{A_{k-1}^2}(M_k - M_{k-1})^2 \ge c\Big\}.
\]
Then
\[
\sum_{k=1}^\infty \frac{1}{A_{k-1}^2}(M_{k\wedge\tau_c} - M_{(k-1)\wedge\tau_c})^2 \le c + (M_{\tau_c} - M_{\tau_c-1})^2 1_{\{\tau_c<\infty\}}.
\]
It follows that $R_n^c = \sum_{k=1}^n \frac{1}{A_{k-1}}(M_{k\wedge\tau_c} - M_{(k-1)\wedge\tau_c})$ converges a.s. and hence, by Lemma T.7, that $\lim_{n\to\infty} \frac{M_{n\wedge\tau_c}}{A_n} = 0$.
Geometric convergence

Lemma T.9 Let $M_n$ be a martingale with $M_0 = 0$ and $|M_{n+1} - M_n|\le c$ a.s. for each $n$. Then for each $\varepsilon > 0$, there exist $C$ and $\eta > 0$ such that
\[
P\Big\{\frac1n|M_n|\ge\varepsilon\Big\} \le Ce^{-n\eta}.
\]

Proof. Let $\varphi(x) = e^{-x} + e^x$ and $\psi(x) = e^x - 1 - x$. Then, setting $X_k = M_k - M_{k-1}$,
\begin{align*}
E[\varphi(aM_n)] &= 2 + \sum_{k=1}^n E[\varphi(aM_k) - \varphi(aM_{k-1})]\\
&= 2 + \sum_{k=1}^n E[e^{aM_{k-1}}\psi(aX_k) + e^{-aM_{k-1}}\psi(-aX_k)]\\
&\le 2 + \sum_{k=1}^n \psi(ac)E[\varphi(aM_{k-1})],
\end{align*}
and hence $E[\varphi(aM_n)]\le 2e^{n\psi(ac)}$. Consequently,
\[
P\Big\{\sup_{k\le n}\frac1n|M_k|\ge\varepsilon\Big\} \le \frac{E[\varphi(aM_n)]}{\varphi(an\varepsilon)} \le 2e^{n(\psi(ac)-a\varepsilon)}.
\]
Then $\eta = \sup_a(a\varepsilon - \psi(ac)) > 0$, and the lemma follows.
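Writing $\psi(x) = e^x - 1 - x$ for the centered exponential in the proof, the rate $\eta = \sup_a(a\varepsilon - \psi(ac))$ can be evaluated numerically; for $c = 1$ the supremum has the closed form $(1+\varepsilon)\log(1+\varepsilon) - \varepsilon$. A sketch with illustrative values of $\varepsilon$ and $c$:

```python
import math

# psi(x) = e^x - 1 - x, as in the proof of Lemma T.9.  For increments
# bounded by c, the bound reads P{|M_n|/n >= eps} <= 2 exp(-n*eta)
# with eta = sup_a (a*eps - psi(a*c)) > 0.
def psi(x):
    return math.exp(x) - 1.0 - x

def rate(eps, c, a_max=5.0, grid=10000):
    # crude grid search for the supremum over a >= 0
    return max(a_max * k / grid * eps - psi(a_max * k / grid * c)
               for k in range(grid + 1))

eta = rate(eps=0.1, c=1.0)
# closed form for c = 1: (1+eps) log(1+eps) - eps
eta_exact = 1.1 * math.log(1.1) - 0.1
```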
Uniform integrability

Lemma T.10 If $X$ is integrable, then for $\varepsilon > 0$ there exists a $K > 0$ such that
\[
\int_{\{|X|>K\}} |X|\,dP < \varepsilon.
\]

Proof. $\lim_{K\to\infty} |X|1_{\{|X|>K\}} = 0$ a.s.

Lemma T.11 If $X$ is integrable, then for $\varepsilon > 0$ there exists a $\delta > 0$ such that $P(F) < \delta$ implies $\int_F |X|\,dP < \varepsilon$.

Proof. Let $F_n = \{|X|\ge n\}$. Then $nP(F_n)\le E[|X|1_{F_n}]\to 0$. Select $n$ so that $E[|X|1_{F_n}]\le\varepsilon/2$, and let $\delta = \frac{\varepsilon}{2n}$. Then $P(F) < \delta$ implies
\[
\int_F |X|\,dP \le \int_{F_n} |X|\,dP + \int_{F_n^c\cap F} |X|\,dP < \frac{\varepsilon}{2} + n\delta = \varepsilon.
\]
Theorem T.12 Let $\{X_\alpha\}$ be a collection of integrable random variables. The following are equivalent:

a) $\sup_\alpha E[|X_\alpha|] < \infty$, and for $\varepsilon > 0$ there exists $\delta > 0$ such that $P(F) < \delta$ implies $\sup_\alpha\int_F |X_\alpha|\,dP < \varepsilon$.

b) $\lim_{K\to\infty}\sup_\alpha E[|X_\alpha|1_{\{|X_\alpha|>K\}}] = 0$.

c) $\lim_{K\to\infty}\sup_\alpha E[|X_\alpha| - |X_\alpha|\wedge K] = 0$.

d) There exists a convex function $\varphi$ with $\lim_{|x|\to\infty}\frac{\varphi(x)}{|x|} = \infty$ such that $\sup_\alpha E[\varphi(|X_\alpha|)] < \infty$.
Proof. a) implies b) follows from
\[
P\{|X_\alpha| > K\} \le \frac{E[|X_\alpha|]}{K}.
\]
b) implies d): Select $N_k$ such that
\[
\sum_{k=1}^\infty k\sup_\alpha E[1_{\{|X_\alpha|>N_k\}}|X_\alpha|] < \infty.
\]
Define $\varphi(0) = 0$ and $\varphi'(x) = k$ for $N_k\le x < N_{k+1}$. Recall that $E[\varphi(|X|)] = \int_0^\infty \varphi'(x)P\{|X| > x\}\,dx$, so
\[
E[\varphi(|X_\alpha|)] = \sum_{k=1}^\infty k\int_{N_k}^{N_{k+1}} P\{|X_\alpha| > x\}\,dx \le \sum_{k=1}^\infty k\sup_\alpha E[1_{\{|X_\alpha|>N_k\}}|X_\alpha|].
\]
d) implies b):
\[
E[1_{\{|X_\alpha|>K\}}|X_\alpha|] \le \frac{E[\varphi(|X_\alpha|)]}{\varphi(K)/K}.
\]
b) implies a): $\int_F |X_\alpha|\,dP \le P(F)K + E[1_{\{|X_\alpha|>K\}}|X_\alpha|]$.

To see that (b) is equivalent to (c), observe that
\[
E[|X_\alpha| - |X_\alpha|\wedge K] \le E[|X_\alpha|1_{\{|X_\alpha|>K\}}] \le 2E\Big[|X_\alpha| - |X_\alpha|\wedge\frac{K}{2}\Big].
\]
Uniformly integrable families

• For $X$ integrable, $\Gamma = \{E[X|\mathcal{D}] : \mathcal{D}\subset\mathcal{F}\}$.

• For $X_1,X_2,\ldots$ integrable and identically distributed,
\[
\Gamma = \Big\{\frac{X_1+\cdots+X_n}{n} : n = 1,2,\ldots\Big\}.
\]

• For $Y\ge 0$ integrable, $\Gamma = \{X : |X|\le Y\}$.
Uniform integrability and L1 convergence

Theorem T.13 $X_n\to X$ in $L^1$ if and only if $X_n\to X$ in probability and $\{X_n\}$ is uniformly integrable.

Proof. If $X_n\to X$ in $L^1$, then
\[
\lim_{n\to\infty} E[|X_n| - |X_n|\wedge K] = E[|X| - |X|\wedge K]
\]
for each $K$, and Part (c) of Theorem T.12 follows from the fact that
\[
\lim_{K\to\infty} E[|X| - |X|\wedge K] = 0.
\]
Measurable functions

Let $(M_i,\mathcal{M}_i)$ be measurable spaces. $f : M_1\to M_2$ is measurable if $f^{-1}(A) = \{x\in M_1 : f(x)\in A\}\in\mathcal{M}_1$ for each $A\in\mathcal{M}_2$.

Lemma T.14 If $f : M_1\to M_2$ and $g : M_2\to M_3$ are measurable, then $g\circ f : M_1\to M_3$ is measurable.
Measurable selection theorem

Theorem T.15 Let $(M,\mathcal{M})$ be a measurable space and $(S,\rho)$ a complete, separable metric space. Suppose that for each $x\in M$, $\Gamma_x$ is a closed subset of $S$, and that for every open set $U\subset S$, $\{x\in M : \Gamma_x\cap U\ne\emptyset\}\in\mathcal{M}$. Then there exist $f_n : M\to S$, $n = 1,2,\ldots$, such that $f_n$ is $\mathcal{M}$-measurable, $f_n(x)\in\Gamma_x$ for every $x\in M$, and $\Gamma_x$ is the closure of $\{f_1(x),f_2(x),\ldots\}$.
Dominated convergence theorem

Theorem T.16 Let $X_n\to X$ and $Y_n\to Y$ in probability. Suppose that $|X_n|\le Y_n$ a.s. and $E[Y_n|\mathcal{D}]\to E[Y|\mathcal{D}]$ in probability. Then
\[
E[X_n|\mathcal{D}]\to E[X|\mathcal{D}] \quad\text{in probability.}
\]

Proof. A sequence converges in probability if and only if every subsequence has a further subsequence that converges a.s., so we may as well assume almost sure convergence. Let $D_{m,c} = \{\sup_{n\ge m} E[Y_n|\mathcal{D}]\le c\}$. Then
\[
E[Y_n1_{D_{m,c}}|\mathcal{D}] = E[Y_n|\mathcal{D}]1_{D_{m,c}} \stackrel{L^1}{\longrightarrow} E[Y|\mathcal{D}]1_{D_{m,c}} = E[Y1_{D_{m,c}}|\mathcal{D}].
\]
Consequently, $E[Y_n1_{D_{m,c}}]\to E[Y1_{D_{m,c}}]$, so $Y_n1_{D_{m,c}}\to Y1_{D_{m,c}}$ in $L^1$ by the ordinary dominated convergence theorem. It follows that $X_n1_{D_{m,c}}\to X1_{D_{m,c}}$ in $L^1$ and hence
\[
E[X_n|\mathcal{D}]1_{D_{m,c}} = E[X_n1_{D_{m,c}}|\mathcal{D}] \stackrel{L^1}{\longrightarrow} E[X1_{D_{m,c}}|\mathcal{D}] = E[X|\mathcal{D}]1_{D_{m,c}}.
\]
Since $m$ and $c$ are arbitrary, the theorem follows.
Metric spaces

$d : S\times S\to[0,\infty)$ is a metric on $S$ if and only if $d(x,y) = d(y,x)$; $d(x,y) = 0$ if and only if $x = y$; and $d(x,y)\le d(x,z) + d(z,y)$. If $d$ is a metric, then $d\wedge 1$ is a metric.

Examples

• $\mathbb{R}^m$: $d(x,y) = |x-y|$

• $C[0,1]$: $d(x,y) = \sup_{0\le t\le 1}|x(t)-y(t)|$

• $C[0,\infty)$: $d(x,y) = \int_0^\infty e^{-t}\sup_{s\le t} 1\wedge|x(s)-y(s)|\,dt$
Sequential compactness

$K\subset S$ is sequentially compact if every sequence $\{x_n\}\subset K$ has a convergent subsequence with limit in $K$.

Lemma T.17 If $(S,d)$ is a metric space, then $K\subset S$ is compact if and only if $K$ is sequentially compact.

Proof. Suppose $K$ is compact. Let $\{x_n\}\subset K$. If $x$ is not a limit point of $\{x_n\}$, then there exists $\varepsilon_x > 0$ such that $\max\{n : x_n\in B_{\varepsilon_x}(x)\} < \infty$. If $\{x_n\}$ has no limit points, then $\{B_{\varepsilon_x}(x), x\in K\}$ is an open cover of $K$, and the existence of a finite subcover contradicts the definition of $\varepsilon_x$.

Suppose $K$ is sequentially compact, and let $\{U_\alpha\}$ be an open cover of $K$. Let $x_1\in K$ and $\varepsilon_1 > \frac12\sup_\alpha\sup\{r : B_r(x_1)\subset U_\alpha\}$, and define recursively $x_{k+1}\in K\cap(\cup_{l=1}^k B_{\varepsilon_l}(x_l))^c$ and $\varepsilon_{k+1} > \frac12\sup_\alpha\sup\{r : B_r(x_{k+1})\subset U_\alpha\}$. (If $x_{k+1}$ does not exist, then there is a finite subcover in $\{U_\alpha\}$.) By sequential compactness, $\{x_k\}$ has a limit point $x$, and $x\notin B_{\varepsilon_k}(x_k)$ for any $k$. But setting $\varepsilon = \frac12\sup_\alpha\sup\{r : B_r(x)\subset U_\alpha\}$, we have $\varepsilon_k > \varepsilon - d(x,x_k)$, so if $d(x,x_k) < \varepsilon/2$, then $x\in B_{\varepsilon_k}(x_k)$, a contradiction.
Completeness

A metric space $(S,d)$ is complete if and only if every Cauchy sequence has a limit.

Completeness depends on the metric, not the topology. For example,
\[
r(x,y) = \Big|\frac{x}{1+|x|} - \frac{y}{1+|y|}\Big|
\]
is a metric giving the usual topology on the real line, but $\mathbb{R}$ is not complete under this metric.
Exercises

1. $\tau$ is an $\mathcal{F}_t$-stopping time if for each $t\ge 0$, $\{\tau\le t\}\in\mathcal{F}_t$. For a stopping time $\tau$,
\[
\mathcal{F}_\tau = \{A\in\mathcal{F} : \{\tau\le t\}\cap A\in\mathcal{F}_t,\ t\ge 0\}.
\]
(a) Show that $\mathcal{F}_\tau$ is a $\sigma$-algebra.
(b) Show that for $\mathcal{F}_t$-stopping times $\sigma$, $\tau$, $\sigma\le\tau$ implies $\mathcal{F}_\sigma\subset\mathcal{F}_\tau$. In particular, $\mathcal{F}_{\tau\wedge t}\subset\mathcal{F}_t$.
(c) Let $\tau$ be a discrete $\mathcal{F}_t$-stopping time satisfying $\{\tau<\infty\} = \cup_{k=1}^\infty\{\tau = t_k\} = \Omega$. Show that $\mathcal{F}_\tau = \sigma\{A\cap\{\tau = t_k\} : A\in\mathcal{F}_{t_k},\ k = 1,2,\ldots\}$.
(d) Show that the minimum of two stopping times is a stopping time and that the maximum of two stopping times is a stopping time.

2. Prove that $\mathcal{W}$ as defined in the background material is a $\sigma$-algebra.

3. Let $N$ be a Poisson process with parameter $\lambda$. Then $M(t) = N(t) - \lambda t$ is a martingale. Compute $[M]_t$.

4. Let $0 = \tau_0 < \tau_1 < \cdots$ be stopping times satisfying $\lim_{k\to\infty}\tau_k = \infty$, and for $k = 0,1,2,\ldots$, let $\xi_k\in\mathcal{F}_{\tau_k}$. Define
\[
X(t) = \sum_{k=0}^\infty \xi_k 1_{[\tau_k,\tau_{k+1})}(t).
\]
Show that $X$ is adapted.

Example: Let $X$ be a cadlag adapted process and let $\varepsilon > 0$. Define $\tau_0^\varepsilon = 0$ and, for $k = 0,1,2,\ldots$,
\[
\tau_{k+1}^\varepsilon = \inf\{t > \tau_k^\varepsilon : |X(t) - X(\tau_k^\varepsilon)|\vee|X(t-) - X(\tau_k^\varepsilon)|\ge\varepsilon\}.
\]
Note that the $\tau_k^\varepsilon$ are stopping times, by Problem 1. Define
\[
X^\varepsilon(t) = \sum_{k=0}^\infty X(\tau_k^\varepsilon)1_{[\tau_k^\varepsilon,\tau_{k+1}^\varepsilon)}(t).
\]
Then $X^\varepsilon$ is a piecewise constant, adapted process satisfying
\[
\sup_t |X(t) - X^\varepsilon(t)|\le\varepsilon.
\]

5. Show that $E[f(X)|\mathcal{D}] = E[f(X)]$ for all bounded continuous functions (all bounded measurable functions) if and only if $X$ is independent of $\mathcal{D}$.

6. Let $X$ and $Y$ be $S$-valued random variables defined on $(\Omega,\mathcal{F},P)$, and let $\mathcal{G}\subset\mathcal{F}$ be a sub-$\sigma$-algebra. Suppose that $M\subset C(S)$ is separating and
\[
E[f(X)|\mathcal{G}] = f(Y) \quad\text{a.s.}
\]
for every $f\in M$. Show that $X = Y$ a.s.

7. Let $\tau$ be a discrete stopping time with values $\{t_i\}$. Show that
\[
E[Z|\mathcal{F}_\tau] = \sum_i E[Z|\mathcal{F}_{t_i}]1_{\{\tau=t_i\}}.
\]

8. Let $N$ be a stochastically continuous counting process with independent increments. Show that the increments of $N$ are Poisson distributed.

9. Let $N_n$ and $N$ be counting processes. Suppose that $(N_n(t_1),\ldots,N_n(t_m))\Rightarrow(N(t_1),\ldots,N(t_m))$ for all choices of $t_1,\ldots,t_m$ in $T_0$, where $T_0$ is dense in $[0,\infty)$. Show that $N_n\Rightarrow N$ under the Skorohod topology.

10. Suppose $Y_1,\ldots,Y_m$ are independent Poisson random variables with $E[Y_i] = \lambda_i$. Let $Y = \sum_{i=1}^m Y_i$. Compute $P\{Y_i = 1|Y = 1\}$. More generally, compute $P\{Y_i = k|Y = m\}$.

11. Let $\xi$ be a Poisson random measure on $(U,d_U)$ with mean measure $\nu$. For $f\in M(U)$, $f\ge 0$, show that
\[
E\Big[\exp\Big\{-\int_U f(u)\,\xi(du)\Big\}\Big] = \exp\Big\{-\int(1 - e^{-f})\,d\nu\Big\}.
\]
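The Laplace functional identity in Exercise 11 can be checked by Monte Carlo for concrete choices. The sketch below assumes $U = [0,1]$, $\nu = 3\cdot$Lebesgue measure, and $f(u) = u$, all of which are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)

# xi has a Poisson(3) number of points, iid uniform on [0,1], and the
# claim is E[exp(-int f dxi)] = exp(-int (1 - e^{-f}) dnu).
total_mass, n_samples = 3.0, 100000

vals = np.empty(n_samples)
for i in range(n_samples):
    n = rng.poisson(total_mass)
    u = rng.uniform(size=n)
    vals[i] = np.exp(-u.sum())            # exp(-int f dxi) for this draw

lhs = float(vals.mean())
integral = 1.0 - (1.0 - np.exp(-1.0))     # int_0^1 (1 - e^{-u}) du
rhs = float(np.exp(-total_mass * integral))
```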
12. Give an example of a right continuous process $X$ such that $\int_{U\times[0,t]} |X(u,s)|\wedge 1\,\nu(du)\,ds < \infty$ a.s. but $\int_{U\times[0,t]} |X(u,s)|\,\xi(du\times ds) = \infty$ a.s.

13. Let $Y$, $\xi_1$, and $\xi_2$ be independent random variables with values in complete, separable metric spaces $E$, $S_1$, and $S_2$. Suppose that $G : E\times S_1\to E_0$ and $H : E\times S_2\to E_0$ are Borel measurable functions and that $G(Y,\xi_1) = H(Y,\xi_2)$ a.s. Show that there exists a Borel measurable function $F : E\to E_0$ such that $F(Y) = G(Y,\xi_1) = H(Y,\xi_2)$ a.s.
14. Let $X$ be a measurable stochastic process that is bounded by a constant $c$, and let $N_n$ be a Poisson process with parameter $n$ that is independent of $X$. Let $S_k^n$ denote the jump times of $N_n$. Show that
\[
\sum_k X(S_k^n)(S_{k+1}^n\wedge t - S_k^n\wedge t)
\]
converges in $L^1$ to $\int_0^t X(s)\,ds$.

15. Suppose that $X$ is a Markov process with generator $A$ and semigroup $T(t)$. Let $V$ be a unit Poisson process independent of $X$, and define
\[
X_n(t) = X\Big(\frac1n V(nt)\Big).
\]
Show that $X_n$ is a Markov process. What is its generator?

16. Let $\{Y_k\}$ be a discrete time Markov chain with state space $E$ and transition operator $\mu(x,\Gamma) = P\{Y_{k+1}\in\Gamma|Y_k = x\}$. Let $\lambda : E\to[0,\infty)$, and let $\Delta_i$, $i = 0,1,\ldots$, be independent, unit exponential random variables. Define
\[
X(t) = Y_k \quad\text{for } \sum_{i=0}^{k-1}\frac{\Delta_i}{\lambda(Y_i)}\le t < \sum_{i=0}^{k}\frac{\Delta_i}{\lambda(Y_i)}.
\]
Show that $X$ is Markov and compute its generator. (Assume $\lambda$ is bounded to avoid technicalities.)
17. For a one-dimensional diffusion with generator
\[
Af(x) = \frac12 a(x)f''(x) + b(x)f'(x),
\]
check the claim in (2.2).

18. Suppose that $A\subset B(E)\times B(E)$ and that a solution of the martingale problem for $(A,\delta_x)$ exists for each $x\in E$. Show that $A$ is dissipative. (Recall the equivalent forms of the martingale problem.)
19. Let $E = (0,1]$ and $L = \{f\in C((0,1]) : \lim_{x\to 0} f(x)\text{ exists}\}$. Consider the operator $Af(x) = -f'(x)$ with domain
\[
\mathcal{D}(A) = \Big\{f\in L : f'\in L,\ \lim_{x\to 0} f(x) = \int_0^1 f(y)\,dy\Big\}.
\]
a) Show that $\mathcal{R}(\lambda - A) = L$ (implying uniqueness for the corresponding martingale problem), but $\mathcal{D}(A)$ is not dense in $L$.
b) Describe the behavior of the process.
c) Is this process quasi-left continuous? Explain.

20. Give an example of a right continuous, adapted process $X$ (not predictable!) such that
\[
\int_{U\times[0,t]} |X(u,s)|\wedge 1\,\nu(du)\,ds < \infty \quad\text{a.s.}
\]
but
\[
\int_{U\times[0,t]} |X(u,s)|\,\xi(du\times ds) = \infty \quad\text{a.s.}
\]

21. Develop a stochastic differential equation model for the movement of a toad in a hail storm. Assume that when a hail stone lands, the toad jumps directly away from where the hail stone landed, and assume that the distance the toad jumps depends on how close the hail stone lands to the toad and how large the hail stone is. For simplicity, assume that the toad is on an infinite, flat surface and that the statistics of the hail storm are translationally and rotationally invariant.

22. Derive the generator for the model developed in Problem 21.

23. For the model developed in Problem 21, give conditions under which the movement of the toad can be approximated by a Brownian motion.
24. Use the martingale central limit theorem to prove a central limit theorem for a continuous time finite Markov chain $\xi(t)$, $t\ge 0$. Specifically, assume that the chain is irreducible, so that there is a unique stationary distribution $\{\pi_k\}$, and prove convergence for the vector-valued process $U_n = (U_n^1,\ldots,U_n^d)$ defined by
\[
U_n^k(t) = \frac{1}{\sqrt n}\int_0^{nt} (1_{\{\xi(s)=k\}} - \pi_k)\,ds.
\]

25. Show that $U_n$ defined in Problem 24 is not a "good" sequence of semimartingales. (The easiest approach is probably to show that the conclusion of the stochastic integral convergence theorem is not valid.)

26. Show that $U_n$ can be written as $M_n + Z_n$ where $\{M_n\}$ is a "good" sequence and $Z_n\Rightarrow 0$.

27. (Random evolutions) Let $\xi$ be as in Problem 24, and let $X_n$ satisfy
\[
\dot X_n(t) = \sqrt n\,F(X_n(t),\xi(nt)).
\]
Suppose $\sum_i F(x,i)\pi_i = 0$. Write $X_n$ as a stochastic differential equation driven by $U_n$, give conditions under which $X_n$ converges in distribution to a limit $X$, and identify the limit.
28. Prove the following:

Lemma T.18 Suppose $\{X_n\}$ is relatively compact in $D_S[0,\infty)$ and that for each $n$, $\{\mathcal{F}_t^n\}$ is a filtration and $\pi_t^{(n)}$ is the conditional distribution of $X_n(t)$ given $\mathcal{F}_t^n$. Then for each $\varepsilon > 0$ and $T > 0$, there exists a compact $K_{\varepsilon,T}\subset S$ such that
\[
\sup_n P\Big\{\sup_{t\le T}\pi_t^{(n)}(K_{\varepsilon,T}^c)\ge\varepsilon\Big\}\le\varepsilon.
\]
29. Let $\xi_1,\xi_2,\ldots$ be iid uniform $[0,r]$ and independent of $N(0)$. Let $U_i(0) = \xi_i$, $i = 1,\ldots,N(0)$, and for $b > 0$, let
\[
\dot U_i(t) = bU_i(t), \quad\text{so}\quad U_i(t) = U_i(0)e^{bt}.
\]
Define $N(t) = \#\{i : U_i(t) < r\}$. Let $f(u,n) = \prod_{i=1}^n g(u_i)$, where $0\le g\le 1$, $g$ is continuously differentiable, and $g(u_i) = 1$ for $u_i > r$. The generator for $U(t) = (U_1(t),\ldots,U_N(t))$ is
\[
Af(u,n) = f(u,n)\sum_{i=1}^n bu_ig'(u_i)/g(u_i),
\]
and
\[
f(U(t),N(t)) - f(U(0),N(0)) - \int_0^t Af(U(s),N(s))\,ds = 0
\]
is a martingale.

(a) Apply the Markov mapping theorem with $\gamma(u_1,\ldots,u_n) = n$ and
\[
\alpha(n,du) = \frac{1}{r^n}\,du_1\cdots du_n
\]
to show that $N$ is a Markov process.

(b) Give a direct argument that shows why $N$ is a Markov process.
30. Verify the following moment identities for a Poisson random measure $\xi$ with mean measure $\nu$:
\[
E[e^{\int f(z)\xi(dz)}] = e^{\int(e^f-1)d\nu},
\]
or, letting $\xi = \sum_i\delta_{Z_i}$,
\[
E\Big[\prod_i g(Z_i)\Big] = e^{\int(g-1)d\nu}.
\]
Similarly,
\[
E\Big[\sum_j h(Z_j)\prod_i g(Z_i)\Big] = \int hg\,d\nu\; e^{\int(g-1)d\nu},
\]
and
\[
E\Big[\sum_{i\ne j} h(Z_i)h(Z_j)\prod_k g(Z_k)\Big] = \Big(\int hg\,d\nu\Big)^2 e^{\int(g-1)d\nu}.
\]

31. Consider a process with state space $(\{-1,1\}\times E\times[0,r])^n$ and generator given by $f(z) = \prod_{i=1}^n g(x_i,y_i,u_i)$, $z_i = (x_i,y_i,u_i)$, and
\[
Af(z) = \sum_{i=1}^n f(z)\lambda\Big(\frac{g(-x_i,y_i,u_i)}{g(x_i,y_i,u_i)} - 1\Big) + \sum_{i\ne j}\mu 1_{\{u_i<u_j,\,x_i=x_j\}}f(z)\Big(\frac{g(x_j,y_i,u_j)}{g(x_j,y_j,u_j)} - 1\Big).
\]
Apply the Markov mapping theorem to characterize the process $\eta_n(t) = \sum_{i=1}^n\delta_{(x_i,y_i)}$ and, under appropriate scaling, use the lookdown model to derive a limiting model for $\eta_n$ as $r\to\infty$.
Stochastic analysis exercises

1. Show that if $Y_1$ is cadlag and $Y_2$ is finite variation, then
\[
[Y_1,Y_2]_t = \sum_{s\le t}\Delta Y_1(s)\Delta Y_2(s).
\]

2. Using the fact that martingales have finite quadratic variation, show that semimartingales have finite quadratic variation.

3. Using the above results, show that the covariation of two semimartingales exists.

4. Consider the stochastic differential equation
\[
X(t) = X(0) + \int_0^t aX(s)\,dW(s) + \int_0^t bX(s)\,ds.
\]
Find $\alpha$ and $\beta$ so that
\[
X(t) = X(0)\exp\{\alpha W(t) + \beta t\}
\]
is a solution.

5. Let $W$ be standard Brownian motion and suppose $(X,Y)$ satisfies
\[
X(t) = x + \int_0^t Y(s)\,ds, \qquad Y(t) = y - \int_0^t X(s)\,ds + \int_0^t cX(s-)\,dW(s),
\]
where $c\ne 0$ and $x^2 + y^2 > 0$. Assuming all moments are finite, define $m_1(t) = E[X(t)^2]$, $m_2(t) = E[X(t)Y(t)]$, and $m_3(t) = E[Y(t)^2]$. Find a system of linear differential equations satisfied by $(m_1,m_2,m_3)$, and show that the expected "total energy" $E[X(t)^2 + Y(t)^2]$ is asymptotic to $ke^{\lambda t}$ for some $k > 0$ and $\lambda > 0$.
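For Exercise 4, applying Ito's formula to $X(0)\exp\{\alpha W(t)+\beta t\}$ suggests $\alpha = a$ and $\beta = b - a^2/2$. A quick numerical sanity check of that candidate (the parameter values and step size below are illustrative), comparing the closed form with an Euler scheme driven by the same Brownian increments:

```python
import numpy as np

rng = np.random.default_rng(7)

# Candidate solution X(t) = X(0) exp{a W(t) + (b - a^2/2) t} for
# dX = a X dW + b X dt.  Compare it pathwise with the Euler scheme
# X_{k+1} = X_k (1 + a dW_k + b dt) on a fine grid.
a, b, x0, T, n = 0.5, 0.2, 1.0, 1.0, 200000
dt = T / n
dW = rng.standard_normal(n) * np.sqrt(dt)

x_euler = x0 * float(np.prod(1.0 + a * dW + b * dt))
x_exact = x0 * float(np.exp(a * dW.sum() + (b - 0.5 * a**2) * T))
rel_err = abs(x_euler - x_exact) / x_exact
```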
6. Let $X$ and $Y$ be independent Poisson processes. Show that with probability one, $X$ and $Y$ do not have simultaneous discontinuities and that $[X,Y]_t = 0$ for all $t\ge 0$.

7. Two local martingales, $M$ and $N$, are called orthogonal if $[M,N]_t = 0$ for all $t\ge 0$.
(a) Show that if $M$ and $N$ are orthogonal, then $[M+N]_t = [M]_t + [N]_t$.
(b) Show that if $M$ and $N$ are orthogonal, then $M$ and $N$ do not have simultaneous discontinuities.
(c) Show that if $M$ is finite variation, then $M$ and $N$ are orthogonal if and only if they have no simultaneous discontinuities.

8. Let $W$ be standard Brownian motion. Use Ito's formula to show that
\[
M(t) = e^{\alpha W(t) - \frac12\alpha^2 t}
\]
is a martingale. (Note that the martingale property can be checked easily by direct calculation; however, the problem asks you to use Ito's formula to check the martingale property.)

9. Let $N$ be a Poisson process with parameter $\lambda$. Use Ito's formula to show that
\[
M(t) = e^{\alpha N(t) - \lambda(e^\alpha-1)t}
\]
is a martingale.

10. Let $X$ satisfy
\[
X(t) = x + \int_0^t \sigma X(s)\,dW(s) + \int_0^t bX(s)\,ds.
\]
Let $Y = X^2$.
(a) Derive a stochastic differential equation satisfied by $Y$.
(b) Find $E[X(t)^2]$ as a function of $t$.
11. Suppose that the solution of $dX = b(X)dt + \sigma(X)dW$, $X(0) = x$, is unique for each $x$. Let $\tau = \inf\{t > 0 : X(t)\notin(\alpha,\beta)\}$, and suppose that for some $\alpha < x < \beta$, $P\{\tau<\infty,\ X(\tau) = \alpha|X(0) = x\} > 0$ and $P\{\tau<\infty,\ X(\tau) = \beta|X(0) = x\} > 0$.
(a) Show that $P\{\tau < T,\ X(\tau) = \alpha|X(0) = x\}$ is a nonincreasing function of $x$, $\alpha < x < \beta$.
(b) Show that there exists a $T > 0$ such that
\[
\inf_x\max\big(P\{\tau < T,\ X(\tau) = \alpha|X(0) = x\},\ P\{\tau < T,\ X(\tau) = \beta|X(0) = x\}\big) > 0.
\]
(c) Let $\gamma$ be a nonnegative random variable. Suppose that there exist $T > 0$ and $\rho < 1$ such that for each $n$, $P\{\gamma > (n+1)T|\gamma > nT\} < \rho$. Show that $E[\gamma] < \infty$.
(d) Show that $E[\tau] < \infty$.
References

David F. Anderson and Thomas G. Kurtz. Continuous time Markov chain models for chemical reaction networks. In H. Koeppl, D. Densmore, G. Setti, and M. di Bernardo, editors, Design and Analysis of Biomolecular Circuits, pages 3–42. Springer, 2011.

David F. Anderson and Thomas G. Kurtz. Stochastic Analysis of Biochemical Systems. Springer-Verlag, 2015.

Abhay G. Bhatt and Rajeeva L. Karandikar. Invariant measures and evolution equations for Markov processes characterized via martingale problems. Ann. Probab., 21(4):2246–2268, 1993. ISSN 0091-1798.

D. L. Burkholder. Distribution function inequalities for martingales. Ann. Probability, 1:19–42, 1973.

E. Çinlar, J. Jacod, P. Protter, and M. J. Sharpe. Semimartingales and Markov processes. Z. Wahrsch. Verw. Gebiete, 54(2):161–219, 1980. ISSN 0044-3719.

J. G. Dai and J. M. Harrison. Steady-state analysis of RBM in a rectangle: numerical methods and a queueing application. Ann. Appl. Probab., 1(1):16–35, 1991. ISSN 1050-5164.

J. G. Dai and J. M. Harrison. Reflected Brownian motion in an orthant: numerical methods for steady-state analysis. Ann. Appl. Probab., 2(1):65–86, 1992. ISSN 1050-5164.

E. B. Dynkin. Markov Processes. Vols. I, II, volumes 121–122 of Die Grundlehren der Mathematischen Wissenschaften. Translated with the authorization and assistance of the author by J. Fabius, V. Greenberg, A. Maitra, G. Majone. Academic Press Inc., Publishers, New York, 1965.

Pedro Echeverría. A criterion for invariant measures of Markov processes. Z. Wahrsch. Verw. Gebiete, 61(1):1–16, 1982. ISSN 0044-3719.
Stewart N. Ethier and Thomas G. Kurtz. Markov Processes: Characterization and Convergence. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons Inc., New York, 1986. ISBN 0-471-08186-8.

Carl Graham. McKean-Vlasov Itô-Skorohod equations, and nonlinear diffusions with discrete jump sets. Stochastic Process. Appl., 40(1):69–82, 1992. ISSN 0304-4149.

Akira Ichikawa. Filtering and control of stochastic differential equations with unbounded coefficients. Stochastic Anal. Appl., 4(2):187–212, 1986. ISSN 0736-2994.

Nobuyuki Ikeda and Shinzo Watanabe. Stochastic Differential Equations and Diffusion Processes, volume 24 of North-Holland Mathematical Library. North-Holland Publishing Co., Amsterdam, second edition, 1989. ISBN 0-444-87378-3.

Kiyosi Itô. On stochastic differential equations. Mem. Amer. Math. Soc., 1951(4):51, 1951. ISSN 0065-9266.

Jean Jacod. Multivariate point processes: predictable projection, Radon-Nikodym derivatives, representation of martingales. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 31:235–253, 1974/75.

Jean Jacod. Weak and strong solutions of stochastic differential equations. Stochastics, 3(3):171–191, 1980. ISSN 0090-9491.

Thomas G. Kurtz. Representations of Markov processes as multiparameter time changes. Ann. Probab., 8(4):682–715, 1980a. ISSN 0091-1798.

Thomas G. Kurtz. The optional sampling theorem for martingales indexed by directed sets. Ann. Probab., 8(4):675–681, 1980b. ISSN 0091-1798.
Thomas G. Kurtz. Martingale problems for conditional distributions of Markov processes. Electron. J. Probab., 3: no. 9, 29 pp. (electronic), 1998. ISSN 1083-6489.

Thomas G. Kurtz. Equivalence of stochastic equations and martingale problems. In Stochastic Analysis 2010, pages 113–130. Springer, Heidelberg, 2011. doi: 10.1007/978-3-642-15358-7_6.

Thomas G. Kurtz and Giovanna Nappo. The filtered martingale problem. In Dan Crisan and Boris Rozovskii, editors, Handbook on Nonlinear Filtering, chapter 5, pages 129–165. Oxford University Press, 2011.

Thomas G. Kurtz and Daniel L. Ocone. Unique characterization of conditional distributions in nonlinear filtering. Ann. Probab., 16(1):80–107, 1988. ISSN 0091-1798.

Thomas G. Kurtz and Philip E. Protter. Weak convergence of stochastic integrals and differential equations. II. Infinite-dimensional case. In Probabilistic Models for Nonlinear Partial Differential Equations (Montecatini Terme, 1995), volume 1627 of Lecture Notes in Math., pages 197–285. Springer, Berlin, 1996.

Thomas G. Kurtz and Richard H. Stockbridge. Stationary solutions and forward equations for controlled and singular martingale problems. Electron. J. Probab., 6: no. 17, 52 pp. (electronic), 2001. ISSN 1083-6489.

E. Lenglart, D. Lépingle, and M. Pratelli. Présentation unifiée de certaines inégalités de la théorie des martingales. In Séminaire de Probabilités, XIV (Paris, 1978/1979), volume 784 of Lecture Notes in Math., pages 26–52. Springer, Berlin, 1980. With an appendix by Lenglart.

P. A. Meyer. Démonstration simplifiée d'un théorème de Knight. In Séminaire de Probabilités, V (Univ. Strasbourg, année universitaire 1969–1970), pages 191–195. Lecture Notes in Math., Vol. 191. Springer, Berlin, 1971.

Paul-André Meyer. Processus de Markov. Lecture Notes in Mathematics, No. 26. Springer-Verlag, Berlin-New York, 1967.
Philip E. Protter. Stochastic Integration and Differential Equations, volume 21 of Applications of Mathematics (New York). Springer-Verlag, Berlin, second edition, 2004. ISBN 3-540-00313-4.

L. C. G. Rogers and J. W. Pitman. Markov functions. Ann. Probab., 9(4):573–582, 1981. ISSN 0091-1798.

Murray Rosenblatt. Functions of Markov processes. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 5:232–243, 1966.

A. V. Skorokhod. Studies in the Theory of Random Processes. Translated from the Russian by Scripta Technica, Inc. Addison-Wesley Publishing Co., Inc., Reading, Mass., 1965.

Daniel W. Stroock. Diffusion processes associated with Lévy generators. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 32(3):209–244, 1975.

Daniel W. Stroock and S. R. Srinivasa Varadhan. Multidimensional Diffusion Processes, volume 233 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin-New York, 1979. ISBN 3-540-90353-4.

Zhengxiao Wu. A cluster identification framework illustrated by a filtering model for earthquake occurrences. Bernoulli, 15(2):357–379, 2009. ISSN 1350-7265. doi: 10.3150/08-BEJ159.
Abstract

Martingale problems and stochastic equations

Beginning with Lévy's characterization of Brownian motion, the martingale properties of stochastic processes have proven to be fundamental in the formulation and analysis of stochastic models, in particular in the study of Markov processes. Stochastic equations of various types provide a second fundamental approach to defining stochastic models. These lectures will explore aspects of both of these approaches and their relationship. The martingale problem of Stroock and Varadhan will be formulated and some of its properties described. The notion of a weak solution of a stochastic differential equation will be introduced, and the fact that solutions of appropriate martingale problems give weak solutions of stochastic differential equations will be discussed. Time change equations for Markov chains and diffusions will be given, and the equivalence of these equations to corresponding martingale problems shown.