COMPARISON PRINCIPLES FOR PARABOLIC
STOCHASTIC PARTIAL DIFFERENTIAL
EQUATIONS
by
Shiu-Tang Li
A dissertation submitted to the faculty of The University of Utah
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
Department of Mathematics
The University of Utah
May 2017
Copyright © Shiu-Tang Li 2017
All Rights Reserved
ABSTRACT
We show that a large class of stochastic heat equations can be approximated by systems of interacting stochastic differential equations. We use this fact to build moment comparison principles for stochastic heat equations with smooth spatially homogeneous noises (SHE(1)), then use these principles to approximate the solution of stochastic heat equations with spatially homogeneous noise given by Riesz kernels (SHE(2)), and so obtain moment comparison principles for SHE(2) as well.
CONTENTS
ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
ACKNOWLEDGEMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
CHAPTERS
1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2. PRELIMINARIES ON SPACE–TIME STOCHASTIC INTEGRALS . . . . . . . . . . 5
2.1 Integration against spatially homogeneous noise . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Coupling of space–time noises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Representation of Ito integrals as space–time integrals . . . . . . . . . . . . . . . . . . . 9
3. COMPARISON THEOREMS FOR INFINITE INTERACTING SDES . . . . . . . . 10
3.1 Existence and uniqueness of (SDE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Approximation of (SDE) by other SDEs under simplifications . . . . . . . . . . . . . 20
3.3 Comparison principles for (SDE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.1 Proof of Theorem 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4. FROM INTERACTING SDES TO SHE(1): L^k(P) APPROXIMATION . . . . . . . . 32
4.1 Proof of Theorem 23 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.1.1 The approximation of u_t(x) by u^(1,δ)_t(x) . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.1.2 The approximation of u^(1,δ)_t(x) by u^(2,ε,δ)_t(x) . . . . . . . . . . . . . . . . . . . 40
4.1.3 The approximation of u^(2,ε,δ)_t(x) by u^(3,ε,δ)_t(x) . . . . . . . . . . . . . . . . . 42
4.1.4 The approximation of u^(3,ε,δ)_t(x) by u^(4,ε,δ)_t(x) . . . . . . . . . . . . . . . . . 43
4.1.5 The approximation of u^(4,ε,δ)_t(x) by u^(5,ε)_t(x) . . . . . . . . . . . . . . . . . . . 44
4.1.6 The approximation of ∫_{R^d} p_t(x−y) u_0(y) dy . . . . . . . . . . . . . . . . . . . . 45
4.1.7 Proof of Theorem 23, final step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.2 Proof of Theorem 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5. L^k(P) APPROXIMATION FROM SHE(1) TO SHE(2) . . . . . . . . . . . . . . . . . . . . . . 49
5.1 Proof of Theorem 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2 Proof of Theorem 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
APPENDICES
A. FOURIER TRANSFORM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
B. GRONWALL’S INEQUALITY FOR MEASURABLE FUNCTIONS . . . . . . . . . . . 58
REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
CHAPTER 1
INTRODUCTION
Consider the stochastic heat equation with multiplicative noise
∂/∂t u_t(x) = −ν(−∆)^{α/2} u_t(x) + σ(u_t(x)) η(t, x),   t ≥ 0, x ∈ R^d,   (1.1)
which satisfies the following assumptions:
1. −ν(−∆)^{α/2}, 0 < α ≤ 2, is the fractional Laplacian operator, which is also the generator of an isotropic α-stable process; ν > 0.
2. σ : R→ [0, ∞) is Lipschitz continuous and σ(0) = 0.
3. The initial data u0(x) is a nonnegative, bounded continuous, and nonrandom func-
tion.
By "the solution to the above heat equation with noise η" we mean the space–time random field u_t(x) that satisfies the following mild form:

u_t(x) = ∫_{R^d} p_t(x−y) u_0(y) dy + ∫_{(0,t)×R^d} p_{t−s}(y−x) σ(u_s(y)) η(ds, dy).   (1.2)
We refer the reader to [3, 11, 19] for background and motivation for solutions in the mild form. Later, in Chapter 2, we review the basic properties of space–time stochastic integrals. Here,

p_t(x) := (2π)^{−d} ∫_{R^d} e^{−iz·x} e^{−νt|z|^α} dz,   (1.3)

which is the transition density of the α-stable process X_t with characteristic function E[e^{iz·X_t}] = e^{−νt|z|^α}.
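As a quick numerical sanity check on (1.3) (an illustrative aside, not part of the dissertation), one can invert the Fourier integral by quadrature; for α = 2 and d = 1 the density p_t(x) must reduce to the Gaussian heat kernel (4πνt)^{−1/2} e^{−x²/(4νt)}. A minimal numpy sketch:

```python
import numpy as np

def p_t(x, t=1.0, nu=1.0, alpha=2.0, L=30.0, n=200001):
    # Numerical inversion of (1.3) in d = 1:
    # p_t(x) = (2*pi)^{-1} * integral of exp(-i z x) exp(-nu t |z|^alpha) dz.
    z = np.linspace(-L, L, n)
    integrand = np.cos(z * x) * np.exp(-nu * t * np.abs(z) ** alpha)
    dz = z[1] - z[0]
    # trapezoidal rule, written out to avoid depending on np.trapz
    integral = dz * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return integral / (2.0 * np.pi)

def gauss(x, t=1.0, nu=1.0):
    # Exact alpha = 2 transition density: N(0, 2 * nu * t).
    return np.exp(-x ** 2 / (4.0 * nu * t)) / np.sqrt(4.0 * np.pi * nu * t)

for x in (0.0, 0.7, 2.0):
    assert abs(p_t(x) - gauss(x)) < 1e-6
```

For 0 < α < 2 the same quadrature gives the (non-Gaussian) stable density, for which no elementary closed form exists.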
In this dissertation, we consider stochastic heat equations with two different types of
noises for η(t, x):
1. Stochastic Heat Equation with smooth spatially homogeneous noise (SHE(1)).
Let η(t, x) be a Gaussian random field with covariance

Cov(η(t, x), η(s, y)) = δ_0(t−s) f(x−y),   (1.4)

where f is a bounded, continuous, symmetric, positive definite function on R^d. We say that f is the covariance kernel of η. To guarantee that a unique solution to (SHE(1)) exists, we need (1.9) in [5] to hold; that is,

∫_{R^d} [1 / (1 + 2ν|ξ|^α)] F[f](ξ) dξ < ∞.   (1.5)

Here F[f] is the Fourier transform of f, and F[f](ξ) dξ is a finite positive measure on R^d. Therefore (1.5) is satisfied for all α > 0, and as a result (SHE(1)) has a unique solution for all 0 < α ≤ 2 due to [5].
2. Stochastic Heat Equation with spatially homogeneous noise with Riesz kernels
(SHE(2)).
Let η(t, x) = η_β(t, x), 0 < β < d, be a family of Gaussian random fields with covariance

Cov(η_β(t, x), η_β(s, y)) = δ_0(t−s) f_β(x−y),   (1.6)

where the covariance kernel of η_β is given by f_β(z) := const · |z|^{−β}, 0 < β < d. We note that

f_β = h_β ∗ h_β,   (1.7)

where

h_β(x) = const · |x|^{−(d+β)/2}.   (1.8)

Moreover (see Appendix A.1 for details),

F[f_β](ξ) = const · |ξ|^{−(d−β)},   F[h_β](ξ) = const · |ξ|^{−(d−β)/2}.   (1.9)
For (SHE(2)), condition (1.9) in [5] becomes

∫_{R^d} [1 / (1 + 2ν|ξ|^α)] · |ξ|^{−(d−β)} dξ < ∞,   (1.10)

which holds if and only if α > β. Therefore, to ensure that a unique solution to (SHE(2)) exists, we assume that α > β.
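The dichotomy α > β in (1.10) can also be seen numerically (an illustrative aside; we take d = 1 and 2ν = 1): the integrand behaves like |ξ|^{β−α−1} at infinity, so the dyadic tail blocks ∫_R^{2R} shrink when α > β and approach the constant log 2 when α = β.

```python
import numpy as np

def tail_block(R, alpha, beta, pts=20001):
    # Integral over [R, 2R] of (1 + xi^alpha)^{-1} * xi^{beta - 1}  (d = 1, 2*nu = 1).
    xi = np.linspace(R, 2.0 * R, pts)
    g = xi ** (beta - 1.0) / (1.0 + xi ** alpha)
    dxi = xi[1] - xi[0]
    return dxi * (g.sum() - 0.5 * (g[0] + g[-1]))   # trapezoidal rule

# alpha > beta: dyadic blocks decay, so the integral in (1.10) converges.
conv10, conv100 = tail_block(10, 1.5, 0.5), tail_block(100, 1.5, 0.5)
assert conv100 < 0.25 * conv10

# alpha = beta: blocks stay of constant size; the integral diverges.
crit10, crit100 = tail_block(10, 0.5, 0.5), tail_block(100, 0.5, 0.5)
assert crit100 > 0.8 * crit10
```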
Let us now consider the system of interacting SDEs indexed by x ∈ Z^d given by

dU_t(x) = (L U_t)(x) dt + σ(U_t(x)) dB_t(x),   (SDE)

with the following assumptions:

1. {B_t(x)}_{x∈Z^d} is a family of correlated Brownian motions such that

Cov(B_s(x), B_t(y)) = const · (s ∧ t) · R(|x−y|),   (1.11)

where R : [0, ∞) → [0, ∞) is a bounded function.

2. L is defined by

(L g)(j) := ν ∑_{i∈Z^d} p_{j,i} (g(i) − g(j)),   (1.12)

for every g : Z^d → R, where for all i, j ∈ Z^d, p_{i,j} = p_{j,i} = µ(i−j) for a probability measure µ on Z^d.

3. σ : R → [0, ∞) is a Lipschitz continuous function with σ(0) = 0.

4. The initial condition U_0(x) ≥ 0 is a bounded, nonrandom function on Z^d.
Later, in Chapter 3, we will prove the following comparison principle for (SDE), which generalizes Theorem 1 of [2] in that the underlying Brownian motions are no longer assumed independent.

Theorem 1. Consider two solutions U_t and V_t to (SDE) with the same initial condition U_0 ≡ V_0, but with σ = σ_1 and σ = σ_2, respectively, such that σ_1 ≤ σ_2. Then for every x_1, ..., x_n ∈ Z^d, t_1, ..., t_n ≥ 0, and k_1, ..., k_n ∈ [0, ∞),

E[U_{t_1}(x_1)^{k_1} ··· U_{t_n}(x_n)^{k_n}] ≤ E[V_{t_1}(x_1)^{k_1} ··· V_{t_n}(x_n)^{k_n}].   (1.13)
We may construct a family of SDEs indexed by (εZ)^d, whose solutions we denote by U^(ε)_t(x). Let u_t(x) solve (SHE(1)). We have

U^(ε)_t(ε[x/ε]) → u_t(x) in L^k(P)   (1.14)

for every k ∈ [2, ∞), uniformly for x ∈ R^d and t ∈ [T_1, T_2], where T_2 > T_1 > 0 are arbitrary. A complete statement of this result will be given as Theorem 23 in Chapter 4. In the proof we use ideas from [10], in which a similar approximation for the stochastic heat equation with space–time white noise is presented.
We may now combine the above results with Lemma 22 of Chapter 3, in order to deduce
the following.
Theorem 2. Consider two solutions u_t(x) and v_t(x) to (SHE(1)) with the same initial condition u_0(x) ≡ v_0(x), but with σ_1 and σ_2, respectively, such that σ_1(x) ≤ σ_2(x) for all x ∈ R. Then for any x_1, ..., x_n ∈ R^d, t_1, ..., t_n ≥ 0, and k_1, ..., k_n ∈ [0, ∞),

E[u_{t_1}(x_1)^{k_1} ··· u_{t_n}(x_n)^{k_n}] ≤ E[v_{t_1}(x_1)^{k_1} ··· v_{t_n}(x_n)^{k_n}].   (1.15)
In Chapter 5, we will establish the following approximation of (SHE(2)) by (SHE(1)).

Theorem 3. Let u_t(x) be the solution to (SHE(2)) with covariance kernel f_β(z) := C_1 · |z|^{−β}, and f_β(z) = (h_β ∗ h_β)(z), where h_β(x) := C_2 · |x|^{−(d+β)/2}. Then there exists a sequence u^δ_t(x) of solutions to (SHE(1)), with covariance kernels f^δ_β, such that

lim_{δ↓0} sup_{t∈[0,T]} sup_{x∈R^d} ‖u_t(x) − u^δ_t(x)‖_k = 0   (1.16)

for all k ∈ [2, ∞) and T > 0.
Thanks to Theorem 3, Theorem 2 now holds for (SHE(2)) as well, in the following form. The proof will be given in Chapter 5.

Theorem 4. Consider two solutions u_t(x) and v_t(x) to (SHE(2)) with the same initial condition u_0(x) ≡ v_0(x), but with σ_1 and σ_2, respectively, such that σ_1(x) ≤ σ_2(x) for all x ∈ R. Then for any x_1, ..., x_n ∈ R^d, t_1, ..., t_n ≥ 0, and k_1, ..., k_n ∈ [0, ∞),

E[u_{t_1}(x_1)^{k_1} ··· u_{t_n}(x_n)^{k_n}] ≤ E[v_{t_1}(x_1)^{k_1} ··· v_{t_n}(x_n)^{k_n}].   (1.17)
To simplify notation, throughout this dissertation we define

‖X‖_k := E[|X|^k]^{1/k}   (1.18)

for every k > 0 and every random variable X.
CHAPTER 2
PRELIMINARIES ON SPACE–TIME
STOCHASTIC INTEGRALS
This chapter collects background on the space–time stochastic integrals that arise when the noise is as defined in (SHE(1)) or (SHE(2)). For the construction of spatially homogeneous noise, we refer the reader to [3].
Throughout this chapter, the probability space is denoted by (Ω, F, P). Given a spatially homogeneous noise η from either (1.4) or (1.6), we let

F_t := σ( ∫_{(0,t)×R^d} h(s, y) η(ds, dy) : h ∈ C ),

where C := { h : ∫_{R_+} ∫_{R^d} ∫_{R^d} h(s, x) f(x−y) h(s, y) dx dy ds < ∞ }. A space–time process φ(t, x, ω) is called elementary if φ(t, x, ω) = X(ω) 1_{(a,b]}(t) 1_A(x), where X ∈ F_a and A is a compact subset of R^d. We define the predictable σ-field on R^d × R_+ × Ω to be σ( φ^{−1}(B) : B ∈ B(R), φ ∈ A ); here B(R) is the Borel σ-field on R and A is the class of all elementary processes. Any space–time process measurable with respect to the predictable σ-field is called a predictable process.
2.1 Integration against spatially homogeneous noise

Consider the norm ‖·‖_{β,2} for space–time processes (see [11]), defined by

‖g‖_{β,2} := sup_{t≥0} sup_{x∈R^d} e^{−βt} ‖g(t, x)‖_2,   β > 0,   (2.1)

and let S be the space of all space–time processes φ(t, x, ω) = ∑_{i=1}^∞ X_i(ω) 1_{(a_i,b_i]}(t) 1_{A_i}(x), where each X_i(ω) 1_{(a_i,b_i]}(t) 1_{A_i}(x) is an elementary process, {(a_i, b_i] × A_i} is a disjoint family of sets, and ‖φ‖_{β,2} < ∞. It is not hard to see that S is closed under addition. We then define L^{β,2} to be the completion of S with respect to the norm ‖·‖_{β,2}. Note that our definition of L^{β,2} is slightly different from the one in [11].
Given h such that

∫_0^t ∫_{R^d} ∫_{R^d} h(s, x) f(x−y) h(s, y) dx dy ds < ∞,   (2.2)

and φ(t, x, ω) = ∑_{i=1}^n X_i(ω) 1_{(a_i,b_i]}(t) 1_{A_i}(x) ∈ S, we define

∫_{[0,t)×R^d} h(s, x) φ(s, x, ω) η(dx, ds) := ∑_{i=1}^n X_i(ω) · ∫_{[0,∞)×R^d} h(s, x) 1_{(a_i,b_i]∩(0,t]}(s) 1_{A_i}(x) η(dx, ds).   (2.3)
It is easily seen that

M_t := ∫_{[0,t)×R^d} h(s, x) φ(s, x, ω) η(dx, ds)   (2.4)

is a martingale. Moreover, M_t has a continuous modification, because it is a sum of time-changed Brownian motions multiplied by random variables. We define

X^(i)_t := X_i(ω) · ∫_{[0,∞)×R^d} h(s, x) 1_{(a_i,b_i]∩(0,t]}(s) 1_{A_i}(x) η(dx, ds),   (2.5)

and for all 1 ≤ i, j ≤ n,

⟨X^(i), X^(j)⟩_t = X_i · X_j · ∫_{[0,t)×R^d×R^d} h(s, x) h(s, y) 1_{(a_i,b_i]∩(a_j,b_j]}(s) 1_{A_i}(x) 1_{A_j}(y) f(x−y) dx dy ds.   (2.6)
It turns out that

⟨M⟩_t = ∫_{[0,t)×R^d×R^d} h(s, x) h(s, y) ( ∑_{i=1}^n X_i 1_{(a_i,b_i]}(s) 1_{A_i}(y) ) · ( ∑_{j=1}^n X_j 1_{(a_j,b_j]}(s) 1_{A_j}(x) ) f(x−y) dx dy ds.   (2.7)
Now we consider φ(t, x, ω) = ∑_{i=1}^∞ X_i(ω) 1_{(a_i,b_i]}(t) 1_{A_i}(x) ∈ S, and define

∫_{[0,t)×R^d} h(s, x) φ(s, x, ω) η(dx, ds) := L²-lim_{n→∞} ∑_{i=1}^n X_i(ω) · ∫_{[0,∞)×R^d} h(s, x) 1_{(a_i,b_i]∩(0,t]}(s) 1_{A_i}(x) η(dx, ds).   (2.8)

We would like to show that (2.8) is well defined. Let φ_n(t, x, ω) := ∑_{i=1}^n X_i(ω) 1_{(a_i,b_i]}(t) 1_{A_i}(x); then for any n < m,

E[ | ∫_{[0,t)×R^d} h(s, x) φ_n(s, x, ω) η(dx, ds) − ∫_{[0,t)×R^d} h(s, x) φ_m(s, x, ω) η(dx, ds) |² ]
≤ ∑_{i=n+1}^m ∑_{j=n+1}^m E[|X_i X_j|] · ∫_{[0,t)×R^d×R^d} h(s, x) h(s, y) 1_{(a_i,b_i]∩(a_j,b_j]}(s) 1_{A_i}(x) 1_{A_j}(y) f(x−y) dx dy ds
≤ const · ‖φ‖_{β,2} × ∑_{i=n+1}^m ∑_{j=n+1}^m ∫_{[0,t)×R^d×R^d} h(s, x) h(s, y) 1_{(a_i,b_i]∩(a_j,b_j]}(s) 1_{A_i}(x) 1_{A_j}(y) f(x−y) dx dy ds.   (2.9)
It follows that the sequence of integrals of the {φ_n}_n is Cauchy in L²(Ω). Therefore, (2.4) is a continuous L²(Ω)-martingale when φ(t, x, ω) = ∑_{i=1}^∞ X_i(ω) 1_{(a_i,b_i]}(t) 1_{A_i}(x) ∈ S, with quadratic variation

∫_{[0,t)×R^d×R^d} h(s, x) h(s, y) ( ∑_{i=1}^∞ X_i 1_{(a_i,b_i]}(s) 1_{A_i}(y) ) · ( ∑_{j=1}^∞ X_j 1_{(a_j,b_j]}(s) 1_{A_j}(x) ) f(x−y) dx dy ds.   (2.10)

Remark: the above derivation does not imply that φ_n(t, x, ω) → φ(t, x, ω) in L^{β,2}. This is why we define L^{β,2} as a completion over infinite sums instead of finite sums; this revision of the definition of L^{β,2} makes statements like Proposition 4.6 of [11] work correctly.
Following calculations similar to (2.9), for any φ(t, x), ψ(t, x) ∈ S we have

E[ | ∫_{[0,t)×R^d} h φ η(dx, ds) − ∫_{[0,t)×R^d} h ψ η(dx, ds) |² ]
≤ const · ‖φ − ψ‖²_{β,2} · ∫_0^t ∫_{R^d} ∫_{R^d} h(s, x) f(x−y) h(s, y) dx dy ds.   (2.11)

Thanks to (2.11), we can define ∫_{[0,t)×R^d} h φ η(dx, ds) for all φ ∈ L^{β,2}. We then follow the argument of Section 4.2 of [11] to show that the following proposition holds as well.
Proposition 5. Let h satisfy (2.2) for all t ≥ 0, and let φ ∈ L^{β,2} for some β > 0. Then

M_t := ∫_{(0,t)×R^d} h(s, x) φ(s, x, ω) η(dx, ds)   (2.12)

defines a continuous L²(Ω)-martingale with quadratic variation

⟨M⟩_t = ∫_{(0,t)×R^d×R^d} h(s, x) h(s, y) φ(s, x) φ(s, y) f(x−y) dx dy ds.   (2.13)
Applying the Burkholder–Davis–Gundy (BDG) inequality and Minkowski's integral inequality to the previous proposition, we obtain:

Proposition 6 (The BDG inequality for the spatially homogeneous noise integral). Let h satisfy (2.2) for all t ≥ 0, and let φ ∈ L^{β,2} for some β > 0. Then for all real numbers k ≥ 2 and t > 0,

‖ ∫_{(0,t)×R^d} h(s, x) φ(s, x, ω) η(dx, ds) ‖²_k ≤ 4k · ∫_{(0,t)×R^d×R^d} h(s, x) h(s, y) ‖φ(s, x) φ(s, y)‖_{k/2} dx dy ds.   (2.14)
We remark, finally, that in [5] the authors have shown that the solution u_t(x) to (SHE(1)) or (SHE(2)) is predictable, and that for all β > 0 and p ≥ 2,

( sup_{t≥0} sup_{x∈R^d} e^{−βt} E[|u_t(x)|^p] )^{1/p} < ∞.   (2.15)
2.2 Coupling of space–time noises

As is done in (3.5) of [1], given a space–time white noise ξ defined on (0, ∞) × R^d, we may define random fields η_β(t, x) on (0, ∞) × R^d such that

η_β(φ) := ∫_{(0,∞)×R^d} ( ∫_{R^d} φ(s, y) h_β(y−x) dy ) ξ(ds, dx),   (2.16)

for all φ such that

(t, x) ↦ ∫_{R^d} φ(t, y) h_β(y−x) dy belongs to L²((0, ∞) × R^d),   (2.17)

where h_β is defined in (1.8).
It can be shown that η_β satisfies (1.6) by the following calculation: for all φ, ψ satisfying (2.17),

Cov(η_β(φ), η_β(ψ))
= ∫_{(0,∞)×R^d} ( ∫_{R^d} φ(s, y) h_β(y−x) dy ) · ( ∫_{R^d} ψ(s, y) h_β(y−x) dy ) ds dx
= ∫_{(0,∞)×R^d×R^d} φ(s, y) ψ(s, z) ( ∫_{R^d} h_β(y−x) h_β(x−z) dx ) ds dy dz
= ∫_{(0,∞)×R^d×R^d} φ(s, y) ψ(s, z) ( ∫_{R^d} h_β(y−z−x) h_β(x) dx ) ds dy dz
= ∫_{(0,∞)×R^d×R^d} φ(s, y) ψ(s, z) f_β(y−z) ds dy dz.   (2.18)
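The mechanism behind this coupling, namely that smoothing white noise with a kernel h produces a field whose covariance is h ∗ h, as in f_β = h_β ∗ h_β, can be illustrated in a discrete toy model (our own aside, not from [1]): convolve i.i.d. Gaussians with a symmetric kernel and compare sample covariances with the autoconvolution.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
h = np.array([0.1, 0.3, 0.5, 0.3, 0.1])    # symmetric smoothing kernel
N, L = 200_000, 40                          # samples x lattice sites

xi = rng.standard_normal((N, L))            # discrete 'white noise'
eta = sliding_window_view(xi, len(h), axis=1) @ h   # eta = h * xi (valid part)

hh = np.convolve(h, h)                      # (h * h); zero lag sits at index len(h) - 1
for lag in range(3):
    sample_cov = np.mean(eta[:, 10] * eta[:, 10 + lag])
    theory_cov = hh[len(h) - 1 + lag]
    assert abs(sample_cov - theory_cov) < 0.02
```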
Now, given h satisfying (2.2) and φ(t, x, ω) = ∑_{i=1}^n X_i(ω) 1_{(a_i,b_i]}(t) 1_{A_i}(x) ∈ S, we have

∫_{(0,∞)×R^d} h(s, x) φ(s, x, ω) η_β(ds, dx)
= ∑_{i=1}^n X_i(ω) ∫_{(0,∞)×R^d} 1_{(a_i,b_i]}(s) ( ∫_{R^d} h(s, y) 1_{A_i}(y) h_β(y−x) dy ) ξ(ds, dx)
= ∫_{(0,∞)×R^d} ( ∫_{R^d} h(s, y) φ(s, y, ω) h_β(y−x) dy ) ξ(ds, dx).   (2.19)
When φ(t, x, ω) = ∑_{i=1}^∞ X_i(ω) 1_{(a_i,b_i]}(t) 1_{A_i}(x) ∈ S, we may define

φ_n(t, x, ω) = ∑_{i=1}^n X_i(ω) 1_{(a_i,b_i]}(t) 1_{A_i}(x),   (2.20)

where each φ_n satisfies (2.19). Taking L²(P) limits on both sides of (2.19) with φ replaced by φ_n, we obtain the same identity for this φ. We may then repeat the same approximation procedure to show that for any φ ∈ L^{β,2},

∫_{(0,∞)×R^d} h(s, x) φ(s, x, ω) η_β(ds, dx) = ∫_{(0,∞)×R^d} ( ∫_{R^d} h(s, y) φ(s, y, ω) h_β(y−x) dy ) ξ(ds, dx).   (2.21)
2.3 Representation of Ito integrals as space–time integrals

Consider the following family of correlated Brownian motions indexed by x ∈ Z^d:

W_t(x) := ∫_{(0,t)×R^d} 1_{[x,x+1)^d}(y) η(ds, dy).   (2.22)

We would like to show that for any continuous process X ∈ L²_loc(W(x)) (see Definition 4.2.6 of [13]) with X_t ∈ F^W_t for all t ≥ 0, we have

∫_0^t X_s dW_s(x) = ∫_{(0,t)×R^d} X_s 1_{[x,x+1)^d}(y) η(ds, dy).   (2.23)

It is easily seen that (2.23) is true when X is an elementary process (see the comments after Definition 4.2.3 of [13]). The general case of (2.23) then follows by taking limits of elementary processes; for this we refer the reader to Proposition 4.2.13 of [13].
CHAPTER 3
COMPARISON THEOREMS FOR INFINITE
INTERACTING SDES
Throughout this chapter, (SDE) denotes the system of SDEs defined earlier in Chapter 1. Before stating the main results on (SDE), we first make a few remarks. We note that the operator L defined in (1.12) is the infinitesimal generator of a continuous-time random walk X_t with jump matrix P := (p_{i,j})_{i,j∈Z^d}, whose jump rate from state to state is always ν. We establish this fact using three lemmas. Let us define P_t(x, y) := P^x(X_t = y).
Lemma 7. Let E_1, E_2, ... be a sequence of i.i.d. Exp(ν) random variables. Then

P(E_1 + ··· + E_{n−1} < t ≤ E_1 + ··· + E_n) = e^{−νt}(νt)^n / n!.

Proof: Let N_t be a Poisson process with jump rate ν. Then

P(E_1 + ··· + E_{n−1} < t ≤ E_1 + ··· + E_n) = P(N_t = n) = e^{−νt}(νt)^n / n!.   (3.1)

Q.E.D.
Lemma 8. P_t(x, y) = (e^{−νt(I−P)})_{x,y}.

Proof: By the previous lemma,

P_t(x, y) = ∑_{n=1}^∞ (P^n)_{x,y} · P(E_1 + ··· + E_{n−1} < t ≤ E_1 + ··· + E_n) + (P^0)_{x,y} · P(E_1 ≥ t)
= ∑_{n=1}^∞ (P^n)_{x,y} · e^{−νt}(νt)^n / n! + (P^0)_{x,y} · e^{−νt}
= ∑_{n=0}^∞ (P^n)_{x,y} · e^{−νt}(νt)^n / n!
= e^{−νt} · (e^{νtP})_{x,y}
= (e^{−νt(I−P)})_{x,y}.   (3.2)

Q.E.D.
Lemma 9. L = −ν(I − P).

Proof: Because p_{i,j} = p_{j,i} for all i, j ∈ Z^d, we have

(L)_{i,j} := (L δ_i)(j) = ν ∑_{k∈Z^d} p_{j,k} (δ_i(k) − δ_i(j))
= ν ∑_{k∈Z^d} p_{k,j} (δ_i(k) − δ_i(j))
= ν · (p_{i,j} − δ(i, j)) = −ν(I − P)_{i,j}.   (3.3)

Q.E.D.
The previous lemmas show that

(d/dt) P_t(x, y) = (L P_t)_{x,y},   (3.4)

(d/dt) P_t(x, y) |_{t=0} = (L)_{x,y}.   (3.5)

Now we define

P_t(x) := P^0(X_t = x).   (3.6)
The characteristic function ψ of the probability measure P_t equals

ψ(z) = ∑_{x∈Z^d} P_t(x) e^{iz·x} = ∑_{x∈Z^d} ∑_{n=0}^∞ (P^n)_{0,x} · e^{−νt}(νt)^n / n! · e^{iz·x}
= ∑_{n=0}^∞ ( ∑_{x∈Z^d} p_{0,x} e^{iz·x} )^n · e^{−νt}(νt)^n / n!
= ∑_{n=0}^∞ ( µ̂(z) )^n · e^{−νt}(νt)^n / n!
= e^{−νt(1−µ̂(z))},   (3.7)

where µ̂(z) := ∑_{x∈Z^d} µ(x) e^{iz·x} is the characteristic function of µ.
3.1 Existence and uniqueness of (SDE)

We would like to show that (SDE) has a unique solution. References [2] and [15] treat this problem in the case where the underlying Brownian motions are independent. As a side remark, the assumption σ(0) = 0 is not used in the proof of existence and uniqueness for (SDE).

Before we get started, we first establish a BDG inequality. A version of this inequality for independent Brownian motions can be found in Lemma 2.1 of [8].
Lemma 10 (BDG inequality). Let Z := {Z_t(x)}_{t≥0, x∈Z^d} be a random field, predictable with respect to the Brownian motions {B_t(x)}_{t≥0, x∈Z^d} of assumption (1) of (SDE). We also assume that

∑_{y∈Z^d} E[ ∫_0^t Z_s(y)² ds ] < ∞.   (3.8)

Then the following Ito integral,

∫_0^t Z_s · dB_s := ∑_{y∈Z^d} ∫_0^t Z_s(y) dB_s(y),   (3.9)

exists in L²(P). Furthermore, for any k ∈ [2, ∞), we have

E[ | ∫_0^t Z_s · dB_s |^k ] ≤ | 4k · (1 + C_R) · ∑_{y∈Z^d} ∫_0^t ‖Z_s(y)‖²_k ds |^{k/2}   (3.10)

for any t ≥ 0, where C_R < ∞ is a constant with R ≤ C_R, and R is the function defined in (1.11).
Proof: First we enumerate the elements of Z^d as x_1, x_2, ..., and define F_n := {x_1, ..., x_n} for all n ≥ 1. For n > m,

‖ ∑_{y∈F_n} ∫_0^t Z_s(y) dB_s(y) − ∑_{y∈F_m} ∫_0^t Z_s(y) dB_s(y) ‖_2 = ‖ ∑_{y∈F_n\F_m} ∫_0^t Z_s(y) dB_s(y) ‖_2
≤ const · ( ∑_{y∈F_n\F_m} ∫_0^t E[Z_s(y)²] ds )^{1/2}.

This shows that { ∑_{y∈F_n} ∫ Z_s(y) dB_s(y) }_n is a Cauchy sequence in L²(P). In particular, we may define ∑_{y∈Z^d} ∫ Z_s(y) dB_s(y) := lim_{n→∞} ∑_{y∈F_n} ∫ Z_s(y) dB_s(y).
Next we compute the quadratic variation of the preceding as follows:

⟨ ∑_{y∈F_N} ∫_0^· Z_s(y) dB_s(y) ⟩_t = ∑_{y∈F_N} ∫_0^t Z_s(y)² ds + ∑_{x,y∈F_N, x≠y} ∫_0^t Z_s(x) Z_s(y) R(|x−y|) ds
≤ ∑_{y∈F_N} ∫_0^t Z_s(y)² ds + (C_R / 2) · ∑_{x,y∈F_N} ∫_0^t ( Z_s(x)² + Z_s(y)² ) ds
≤ (1 + C_R) · ∑_{y∈F_N} ∫_0^t Z_s(y)² ds.   (3.11)
A more standard form of the BDG inequality (see, for example, Theorem B.1 in [11]) implies that for any k ∈ [2, ∞),

E( | ∫_0^t Z_s(y) dB_s(y) |^k ) ≤ (4k)^{k/2} · E( | ∫_0^t Z_s(y)² ds |^{k/2} ).   (3.12)
We apply Minkowski's integral inequality to see that

E( | ∑_{y∈F_n} ∫ Z_s dB_s(y) |^k ) ≤ (4k)^{k/2} · (1 + C_R)^{k/2} · E( | ∑_{y∈F_n} ∫_0^t Z_s(y)² ds |^{k/2} )
≤ (4k)^{k/2} · (1 + C_R)^{k/2} · ( ∑_{y∈F_n} ∫_0^t ‖Z_s(y)‖²_k ds )^{k/2}
≤ ( 4k · (1 + C_R) · ∑_{y∈Z^d} ∫_0^t ‖Z_s(y)‖²_k ds )^{k/2}.   (3.13)

Because ∑_{y∈F_n} ∫ Z_s dB_s(y) → ∑_{y∈Z^d} ∫ Z_s dB_s(y) in L²(P), we may let n → ∞ in (3.13) along a subsequence for which ∑_{y∈F_n} ∫ Z_s dB_s(y) → ∑_{y∈Z^d} ∫ Z_s dB_s(y) a.s. The lemma then follows from Fatou's lemma. Q.E.D.
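The covariance correction behind (3.11) can be illustrated by simulation (an aside, with const = 1 and R(r) = e^{−r} assumed for concreteness): for a deterministic, time-independent integrand Z, the sum ∑_y Z(y) B_t(y) is Gaussian with variance t · ∑_{x,y} Z(x) Z(y) R(|x−y|).

```python
import numpy as np

rng = np.random.default_rng(2)
sites = np.arange(4)
z = np.array([1.0, 0.5, -0.3, 2.0])        # deterministic integrand Z(y)
t, N = 1.0, 300_000

# Covariance-rate matrix C_{xy} = R(|x - y|) with R(r) = exp(-r) (positive definite).
C = np.exp(-np.abs(sites[:, None] - sites[None, :]))
Lc = np.linalg.cholesky(C)

B_t = np.sqrt(t) * (Lc @ rng.standard_normal((len(sites), N)))  # correlated B_t(y)
M = z @ B_t                                 # sum over y of Z(y) * B_t(y)

theory = t * z @ C @ z
assert abs(np.var(M) - theory) < 0.05 * theory
```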
Theorem 11. (SDE) has a solution U_t(x) that is almost surely continuous in the variable t for every x ∈ Z^d. Moreover, there exists a constant C depending only on k ∈ [2, ∞), U_0, T > 0, and σ such that

sup_{t∈[0,T]} sup_{x∈Z^d} ‖U_t(x)‖_k ≤ C.   (3.14)

Furthermore, the solution U_t(x) is unique among all solutions that satisfy

sup_{t∈[0,T]} sup_{x∈Z^d} ‖U_t(x)‖_2 < ∞ for all T > 0.   (3.15)
Proof: Let U^(0)_t(x) := U_0(x), and for n ∈ N ∪ {0} define iteratively

U^(n+1)_t(x) := U_0(x) + ∫_0^t (L U^(n)_s)(x) ds + ∫_0^t σ(U^(n)_s(x)) dB_s(x).   (3.16)
We first note that

| (L U^(n)_t)(x) − (L U^(n−1)_t)(x) |
= | ν ∑_{y∈Z^d} p_{x,y} (U^(n)_t(y) − U^(n)_t(x)) − ν ∑_{y∈Z^d} p_{x,y} (U^(n−1)_t(y) − U^(n−1)_t(x)) |
≤ ν · | U^(n)_t(x) − U^(n−1)_t(x) | + ν · | ∑_{y∈Z^d} p_{x,y} ( U^(n)_t(y) − U^(n−1)_t(y) ) |,   (3.17)

for all t ∈ [0, T], n ≥ 1, and x ∈ Z^d. Also,
sup_{x∈Z^d} ‖U^(1)_t(x) − U^(0)_t(x)‖_k
= sup_{x∈Z^d} ‖ ∫_0^t ν ∑_{y∈Z^d} p_{x,y} (U_0(y) − U_0(x)) ds + σ(U_0(x)) · B_t(x) ‖_k
≤ sup_{x∈Z^d} ‖ ∫_0^t ν ∑_{y∈Z^d} p_{x,y} (U_0(y) − U_0(x)) ds ‖_k + sup_{x∈Z^d} ‖ σ(U_0(x)) · B_t(x) ‖_k
≤ c_1 := 2νT · sup_{x∈Z^d} U_0(x) + √T · sup_{x∈Z^d} σ(U_0(x)) · ‖N‖_k < ∞,   (3.18)

where N is a standard normal random variable.
Therefore, Lemma 10, (3.17), and Minkowski's integral inequality together imply that for all n ∈ N, k ∈ [2, ∞), and x ∈ Z^d,

‖U^(n+1)_t(x) − U^(n)_t(x)‖_k
≤ ‖ ∫_0^t (L U^(n)_s)(x) − (L U^(n−1)_s)(x) ds ‖_k + ‖ ∫_0^t σ(U^(n)_s(x)) − σ(U^(n−1)_s(x)) dB_s(x) ‖_k
≤ ∫_0^t ‖ (L U^(n)_s)(x) − (L U^(n−1)_s)(x) ‖_k ds + (4k)^{1/2} · Lip_σ · ( ∫_0^t ‖U^(n)_s(x) − U^(n−1)_s(x)‖²_k ds )^{1/2}
≤ 2ν ∫_0^t sup_{x∈Z^d} ‖U^(n)_s(x) − U^(n−1)_s(x)‖_k ds + (4k)^{1/2} · Lip_σ · ( ∫_0^t sup_{x∈Z^d} ‖U^(n)_s(x) − U^(n−1)_s(x)‖²_k ds )^{1/2}.   (3.19)
This implies

sup_{x∈Z^d} ‖U^(n+1)_t(x) − U^(n)_t(x)‖²_k ≤ c_2 · ∫_0^t sup_{x∈Z^d} ‖U^(n)_s(x) − U^(n−1)_s(x)‖²_k ds,   (3.20)

where c_2 := 8k · Lip²_σ + 8ν²t.
We iterate (3.20) and use (3.18) to deduce that

sup_{t∈[0,T]} sup_{x∈Z^d} ‖U^(n+1)_t(x) − U^(n)_t(x)‖_k ≤ ( (1/n!) · c_1² · c_2^n · T^n )^{1/2}.   (3.21)

Due to (3.21), we may define U_t(x) := lim_{n→∞} U^(n)_t(x), and we have

sup_{t∈[0,T]} sup_{x∈Z^d} ‖U_t(x)‖_k ≤ sup_{x∈Z^d} U_0(x) + ∑_{n=0}^∞ ( (1/n!) · c_1² · c_2^n · T^n )^{1/2} < ∞.   (3.22)
In particular,

sup_{t∈[0,T]} sup_{x∈Z^d} ‖U_t(x) − U^(n)_t(x)‖_k → 0 as n → ∞.   (3.23)
Now we return to (3.16). For each x ∈ Z^d, we have

lim sup_{n→∞} E( | ∫_0^t σ(U_s(x)) dB_s(x) − ∫_0^t σ(U^(n)_s(x)) dB_s(x) |² )
≤ const · lim sup_{n→∞} ∫_0^t E( | U_s(x) − U^(n)_s(x) |² ) ds
≤ const · lim sup_{n→∞} sup_{s∈[0,t]} sup_{y∈Z^d} ‖U_s(y) − U^(n)_s(y)‖²_2 = 0.   (3.24)
Moreover,

lim sup_{n→∞} E( | ∫_0^t (L U_s)(x) ds − ∫_0^t (L U^(n)_s)(x) ds |² )
≤ const · lim sup_{n→∞} E[ ( ∫_0^t |U_s(x) − U^(n)_s(x)| + ∑_{y∈Z^d} p_{x,y} |U_s(y) − U^(n)_s(y)| ds )² ]
≤ const · lim sup_{n→∞} ∫_0^t E( |U_s(x) − U^(n)_s(x)|² ) + E[ ( ∑_{y∈Z^d} p_{x,y} |U_s(y) − U^(n)_s(y)| )² ] ds
≤ const · lim sup_{n→∞} ∫_0^t E( |U_s(x) − U^(n)_s(x)|² ) + ∑_{y∈Z^d} p_{x,y} E( |U_s(y) − U^(n)_s(y)|² ) ds
≤ const · lim sup_{n→∞} sup_{s∈[0,t]} sup_{y∈Z^d} ‖U_s(y) − U^(n)_s(y)‖²_2 = 0.   (3.25)
Letting n → ∞ in (3.16), by (3.24) and (3.25) we obtain, for all x ∈ Z^d,

U_t(x) = U_0(x) + ∫_0^t (L U_s)(x) ds + ∫_0^t σ(U_s(x)) dB_s(x).   (3.26)

In particular, (3.26) implies that U_t(x) has a continuous modification for every x ∈ Z^d.
Now we prove uniqueness for (SDE). Let U_t(x) and V_t(x) be two solutions of (SDE) that satisfy (3.15), with U_0 ≡ V_0. Carrying out the same calculation as in (3.19), we see that

sup_{x∈Z^d} ‖U_t(x) − V_t(x)‖²_2 ≤ const · ∫_0^t sup_{x∈Z^d} ‖U_s(x) − V_s(x)‖²_2 ds.   (3.27)

By Gronwall's inequality (see Appendix B), U_t(x) = V_t(x) a.s. Q.E.D.
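The Picard scheme (3.16) can be run numerically on a small periodic lattice (an illustrative sketch under simplifying assumptions not in the text: independent driving Brownian motions as a special case of (1.11), nearest-neighbour p, and an Euler time grid); the gap between successive iterates settles down rapidly, in line with the factorial decay in (3.21).

```python
import numpy as np

rng = np.random.default_rng(3)
m, steps, dt, nu, lip = 8, 100, 0.002, 1.0, 0.2
sigma = lambda u: lip * u                 # Lipschitz, sigma(0) = 0
dB = np.sqrt(dt) * rng.standard_normal((steps, m))   # independent BM increments
U0 = np.ones(m)

def gen(u):
    # (L u)(x) for the nearest-neighbour walk on the cycle Z/mZ.
    return nu * (0.5 * (np.roll(u, 1) + np.roll(u, -1)) - u)

def picard_step(U):
    # U has shape (steps + 1, m); returns the next iterate of (3.16).
    V = np.empty_like(U); V[0] = U0
    drift = np.zeros(m); noise = np.zeros(m)
    for s in range(steps):
        drift += gen(U[s]) * dt
        noise += sigma(U[s]) * dB[s]
        V[s + 1] = U0 + drift + noise
    return V

U = np.tile(U0, (steps + 1, 1))           # U^(0) := U_0
gaps = []
for n in range(8):
    V = picard_step(U)
    gaps.append(np.max(np.abs(V - U)))    # sup over (t, x) of |U^(n+1) - U^(n)|
    U = V

assert gaps[-1] < gaps[0] and gaps[-1] < 1e-3
```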
Now we define the mild solution U_t(x) to (SDE):

U_t(x) = ∑_{y∈Z^d} P_t(x−y) · U_0(y) + ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x) σ(U_s(y)) dB_s(y),   (3.28)

where P_t(x) is defined in (3.6).
It was shown in [8] that when the underlying Brownian motions of (SDE) are independent, (SDE) has a unique mild solution. Here we would like to show that a unique mild solution to (SDE) exists in the present setting, where the Brownian motions are correlated.
Theorem 12. (SDE) has a mild solution U_t(x) that is continuous in the variable t for each x ∈ Z^d. Moreover, there exists C < ∞ depending only on k ∈ [2, ∞), U_0, T > 0, and σ such that

sup_{t∈[0,T]} sup_{x∈Z^d} E[|U_t(x)|^k] < C.   (3.29)

In addition, the mild solution to (SDE) is unique among all solutions that satisfy

sup_{t∈[0,T]} sup_{x∈Z^d} E[|U_t(x)|^k] < ∞.   (3.30)
Proof: Let U^(0)_t(x) := U_0(x), and for n ∈ N ∪ {0} define iteratively

U^(n+1)_t(x) := ∑_{y∈Z^d} P_t(x−y) · U_0(y) + ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x) σ(U^(n)_s(y)) dB_s(y).   (3.31)
Because for any x ∈ Z^d and t ≥ 0,

∑_{y∈Z^d} E[ ∫_0^t ( P_{t−s}(y−x) σ(U_0(y)) )² ds ] ≤ const · ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x)² ds
≤ const · ∫_0^t ∑_{y∈Z^d} P_{t−s}(y−x) ds < ∞,

it follows that the random field Z^(0)_s(y) := P_{t−s}(y−x) σ(U_0(y)) satisfies (3.8). By Lemma 10, for any 0 ≤ t ≤ T,
‖U^(1)_t(x)‖_k ≤ sup_{y∈Z^d} U_0(y) + ( 4k · (1 + C_R) · ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x)² ‖σ(U_0(y))‖²_k ds )^{1/2}
≤ sup_{y∈Z^d} U_0(y) + ( 4k · (1 + C_R) · sup_{y∈Z^d} σ(U_0(y))² · ∫_0^T ∑_{y∈Z^d} P_s(y−x) ds )^{1/2}
≤ c_1 < ∞,   (3.32)
where the constant c_1 is independent of the choice of x ∈ Z^d and t ∈ [0, T]. Next, for n ∈ N, if sup_{t∈[0,T]} sup_{x∈Z^d} E[|U^(n)_t(x)|^k] < ∞ for some k ∈ [2, ∞), then for any x ∈ Z^d and t ∈ [0, T] we have

∑_{y∈Z^d} E[ ∫_0^t ( P_{t−s}(y−x) σ(U^(n)_s(y)) )² ds ] ≤ const · ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x)² ( 1 + E[U^(n)_s(y)²] ) ds < ∞.   (3.33)
Therefore, Lemma 10 implies that

‖U^(n+1)_t(x)‖_k ≤ sup_{y∈Z^d} U_0(y) + ( 4k · (1 + C_R) · ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x)² ‖σ(U^(n)_s(y))‖²_k ds )^{1/2}
≤ sup_{y∈Z^d} U_0(y) + ( const · ∑_{y∈Z^d} ∫_0^T P_{T−s}(y−x)² ds )^{1/2} ≤ c_2,

where the constant c_2 is finite and independent of the choice of x ∈ Z^d and t ∈ [0, T]. Due to the above discussion, for any n ∈ N ∪ {0}, T > 0, and k ∈ [2, ∞),

sup_{t∈[0,T]} sup_{x∈Z^d} ‖U^(n+1)_t(x) − U^(n)_t(x)‖²_k < ∞.   (3.34)
By (3.34) and Lemma 10, for any x ∈ Z^d we have

‖U^(n+1)_t(x) − U^(n)_t(x)‖²_k ≤ 4k · (1 + C_R) · ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x)² ‖σ(U^(n)_s(y)) − σ(U^(n−1)_s(y))‖²_k ds
≤ 4k · (1 + C_R) · Lip²_σ · ∫_0^t sup_{y∈Z^d} ‖U^(n)_s(y) − U^(n−1)_s(y)‖²_k ds.   (3.35)
This and Gronwall's lemma together imply that for any t ∈ [0, T],

sup_{x∈Z^d} ‖U^(n+1)_t(x) − U^(n)_t(x)‖²_k ≤ (1/n!) · sup_{t∈[0,T]} sup_{x∈Z^d} ‖U^(1)_t(x) − U^(0)_t(x)‖²_k · ( 4k · (1 + C_R) · Lip²_σ · T )^n.   (3.36)
By (3.36), there exists a space–time random field U_t(x) such that for all k ∈ [2, ∞),

sup_{t∈[0,T]} sup_{x∈Z^d} ‖U_t(x)‖_k ≤ sup_{x∈Z^d} U_0(x) + sup_{t∈[0,T]} sup_{x∈Z^d} ‖U^(1)_t(x) − U^(0)_t(x)‖_k · ∑_{n=0}^∞ (1/(n!)^{1/2}) ( 4k · (1 + C_R) · Lip²_σ · T )^{n/2} < ∞,   (3.37)
and, as n → ∞,

sup_{t∈[0,T]} sup_{x∈Z^d} ‖U_t(x) − U^(n)_t(x)‖_k → 0.   (3.38)
For each x ∈ Z^d, by Lemma 10 we have

lim sup_{n→∞} ‖ ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x) ( σ(U_s(y)) − σ(U^(n)_s(y)) ) dB_s(y) ‖²_k
≤ 4k · (1 + C_R) · lim sup_{n→∞} ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x)² ‖σ(U_s(y)) − σ(U^(n)_s(y))‖²_k ds
≤ 4k · (1 + C_R) · Lip²_σ · lim sup_{n→∞} ∫_0^t sup_{y∈Z^d} ‖U_s(y) − U^(n)_s(y)‖²_k ds = 0.   (3.39)
Therefore, letting n → ∞ in (3.31) yields (3.28). Moreover, (3.28) shows that U_t(x) has a continuous modification for every x ∈ Z^d.
Now we prove the uniqueness of the mild solution of (SDE). Let U_t(x) and V_t(x) be two mild solutions of (SDE) satisfying (3.30), with U_0 ≡ V_0. By Lemma 10, for any x ∈ Z^d,

‖U_t(x) − V_t(x)‖²_k ≤ 4k · (1 + C_R) · Lip²_σ · ∫_0^t sup_{y∈Z^d} ‖U_s(y) − V_s(y)‖²_k ds.   (3.40)

Gronwall's inequality (see Appendix B) implies that sup_{x∈Z^d} ‖U_t(x) − V_t(x)‖_k = 0, and hence U_t(x) = V_t(x) a.s. Q.E.D.
A natural question arises: are these two notions of 'solution' to (SDE) actually the same? The answer is affirmative. We state it as the following theorem.

Theorem 13. Let U_t(x) be the unique solution to (SDE) satisfying (3.14) with initial data U_0(x), and let V_t(x) be the unique mild solution to (SDE) with the same initial data U_0(x), satisfying (3.30). Then U_t(x) = V_t(x) a.s.
Proof: The proof is complete if we can show that U_t(x) is also the mild solution to (SDE). For each x, y ∈ Z^d, because U_t(x) satisfies (SDE), the associativity property of stochastic integrals gives

∫_0^t P_{t−s}(y−x) σ(U_s(y)) dB_s(y) = ∫_0^t P_{t−s}(y−x) dU_s(y) − ∫_0^t P_{t−s}(y−x) (L U_s)(y) ds.   (3.41)
We enumerate Z^d as x_1, x_2, ..., and define F_n := {x_1, ..., x_n} for all n ≥ 1. Summing (3.41) over y ∈ F_n, we obtain the following: for any x ∈ F_n,

∑_{y∈F_n} ∫_0^t P_{t−s}(y−x) σ(U_s(y)) dB_s(y)
= ∑_{y∈F_n} ∫_0^t P_{t−s}(y−x) dU_s(y) − ∑_{y∈F_n} ∫_0^t P_{t−s}(y−x) (L U_s)(y) ds
= U_t(x) − ∑_{y∈F_n} P_t(y−x) · U_0(y) − ∑_{y∈F_n} ∫_0^t U_s(y) dP_{t−s}(y−x) − ∑_{y∈F_n} ∫_0^t P_{t−s}(y−x) (L U_s)(y) ds
= U_t(x) − ∑_{y∈F_n} P_t(y−x) · U_0(y) + ∑_{y∈F_n} ∫_0^t U_s(y) (L P_{t−s})(y−x) ds − ∑_{y∈F_n} ∫_0^t P_{t−s}(y−x) (L U_s)(y) ds.   (3.42)

Here we have used (3.41) and the integration by parts formula for Ito integrals (see Proposition IV.3.1 of [13]). Because U_t(x) satisfies (3.14), the left-hand side of (3.42) converges in L²(P) and the right-hand side of (3.42) converges in L¹(P) as n → ∞:

∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x) σ(U_s(y)) dB_s(y) − U_t(x) + ∑_{y∈Z^d} P_t(y−x) · U_0(y)
= ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x) σ(U_s(y)) dB_s(y) − U_t(x) + ∑_{y∈Z^d} P_t(x−y) · U_0(y)
= ∑_{y∈Z^d} ∫_0^t U_s(y) (L P_{t−s})(y−x) ds − ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x) (L U_s)(y) ds
= ν ∑_{y∈Z^d} ∫_0^t U_s(y) ∑_{z∈Z^d} p_{y−x,z} ( P_{t−s}(z) − P_{t−s}(y−x) ) ds − ν ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x) ∑_{z∈Z^d} p_{y,z} ( U_s(z) − U_s(y) ) ds
= ν ∑_{y∈Z^d} ∫_0^t U_s(y) ∑_{z∈Z^d} p_{y−x,z} P_{t−s}(z) ds − ν ∑_{y∈Z^d} ∫_0^t P_{t−s}(y−x) ∑_{z∈Z^d} p_{y,z} U_s(z) ds
= ν ∑_{y∈Z^d} ∑_{z∈Z^d} ∫_0^t U_s(y) p_{y−x,z} P_{t−s}(z) ds − ν ∑_{y∈Z^d} ∑_{z∈Z^d} ∫_0^t P_{t−s}(z−x) p_{y,z} U_s(y) ds = 0.   (3.43)

In the above calculation we have used the facts that p_{i,j} = p_{j,i} and p_{i+k,j+k} = p_{i,j} for all i, j, k ∈ Z^d. Q.E.D.
3.2 Approximation of (SDE) by other SDEs under simplifications

In this section we show that the solution to (SDE) can be approximated in L^k(P) by the solutions to a sequence of SDEs, each of which is indexed by a finite subset of Z^d rather than by Z^d itself, and in which σ is twice differentiable and compactly supported on [a, b], a > 0.
We define a family of finite systems of SDEs as follows:

dU^(N)_t(x) = (L^(N) U^(N)_t)(x) dt + σ(U^(N)_t(x)) dB_t(x),   x ∈ K_N := {−N, −N+1, ..., N−1, N}^d,   (3.44)

where

1. The Brownian motions {B_t(x)}_{x∈K_N} are the same as in the assumptions of (SDE);

2. L^(N) is defined by

(L^(N) g)(j) := ν ∑_{i∈K_N} p_{j,i} (g(i) − g(j)),   (3.45)

for any g : K_N → R, where the p_{i,j} are the same as in (SDE);

3. σ : R → [0, ∞) is the same function as in (SDE);

4. The initial condition is U^(N)_0(x) = U_0(x) for all x ∈ K_N.
We start with the following approximation result.

Theorem 14. Let U^(N)_t(x) solve (3.44), and let U_t(x) solve (SDE). Then

lim_{N→∞} sup_{t∈[0,T]} sup_{x∈K_N} ‖U^(N)_t(x) − U_t(x)‖_k = 0   (3.46)

for every k ∈ [2, ∞).
Proof: We note that by (3.14), there exists C < ∞ such that sup_{t∈[0,T]} sup_{x∈Z^d} ‖U_t(x)‖_k ≤ C. Moreover, by the same techniques as in Theorem 11, it can be shown that

sup_{t∈[0,T]} sup_{x∈K_N} ‖U^(N)_t(x)‖_k < ∞   (3.47)
for every N ∈ N. Now, for any x ∈ K_N, Minkowski's integral inequality and Lemma 10 give

‖U^(N)_t(x) − U_t(x)‖_k
≤ ‖ ∫_0^t (L^(N) U^(N)_s)(x) − (L U_s)(x) ds ‖_k + ‖ ∫_0^t σ(U^(N)_s(x)) − σ(U_s(x)) dB_s(x) ‖_k
≤ ∫_0^t ‖ (L^(N) U^(N)_s)(x) − (L U_s)(x) ‖_k ds + Lip_σ · (4k)^{1/2} · ( ∫_0^t ‖U^(N)_s(x) − U_s(x)‖²_k ds )^{1/2}.   (3.48)
Therefore, for any t ∈ [0, T],

‖U^(N)_t(x) − U_t(x)‖²_k
≤ 2 ( ∫_0^t ‖ (L^(N) U^(N)_s)(x) − (L U_s)(x) ‖_k ds )² + 8k · Lip²_σ · ∫_0^t ‖U^(N)_s(x) − U_s(x)‖²_k ds
≤ 2ν² ( ∫_0^t [ ∑_{y∈Z^d\K_N} p_{x,y} ‖U_s(y) − U_s(x)‖_k + ‖ ∑_{y∈K_N} p_{x,y} ( (U_s(y) − U_s(x)) − (U^(N)_s(y) − U^(N)_s(x)) ) ‖_k ] ds )² + 8k · Lip²_σ · ∫_0^t ‖U^(N)_s(x) − U_s(x)‖²_k ds
≤ 2ν² ( 2Ct · ∑_{y∈Z^d\K_N} p_{x,y} + 2 ∫_0^t sup_{x∈K_N} ‖U^(N)_s(x) − U_s(x)‖_k ds )² + 8k · Lip²_σ · ∫_0^t sup_{x∈K_N} ‖U^(N)_s(x) − U_s(x)‖²_k ds
≤ 16C²ν²t² · ( ∑_{y∈Z^d\K_N} p_{x,y} )² + ( 16ν²t + 8k · Lip²_σ ) · ∫_0^t sup_{x∈K_N} ‖U^(N)_s(x) − U_s(x)‖²_k ds.   (3.49)
By (3.49) and Gronwall's inequality (see Appendix B), for any t ∈ [0, T] we have

sup_{x∈K_N} ‖U^(N)_t(x) − U_t(x)‖²_k ≤ 16C²ν²T² · ( ∑_{y∈Z^d\K_N} p_{x,y} )² · e^{(16ν²T + 8k·Lip²_σ)·T}.   (3.50)

Letting N → ∞ in (3.50) completes the proof. Q.E.D.
Next we present Theorem 1.2 of [6], which will be used to prove the nonnegativity of the solution to (3.44).
Theorem 15 (Geiß, Manthey). Consider two systems of SDEs

X_j(t) = X_j(0) + \int_0^t a_j(s, X(s)) \, ds + \sum_{k=1}^r \int_0^t \sigma_{jk}(s, X(s)) \, dW_k(s),
Y_j(t) = Y_j(0) + \int_0^t b_j(s, Y(s)) \, ds + \sum_{k=1}^r \int_0^t \sigma_{jk}(s, Y(s)) \, dW_k(s),

where $1 \le j \le n$, which satisfy:
(1) $X_j(0) \le Y_j(0)$ for $1 \le j \le n$;
(2) $a_j(t, x) \le b_j(t, x)$ for $1 \le j \le n$;
(3) for any $1 \le j \le n$, $a_j(t, x) \le a_j(t, y)$ and $b_j(t, x) \le b_j(t, y)$ whenever $x_j = y_j$ and $x_l \le y_l$, $l \ne j$;
(4) there exists a strictly increasing function $\rho : [0, \infty) \to [0, \infty)$ with $\rho(0) = 0$ and $\int_0^1 [\rho(u)]^{-2} \, du = \infty$, such that for each $1 \le j \le n$, $\sum_{k=1}^r |\sigma_{jk}(t, x) - \sigma_{jk}(t, y)| \le \rho(|x_j - y_j|)$;
(5) $W_1, \cdots, W_r$ are standard Brownian motions.
Then we have $P(X(t) \le Y(t), \, t \in [0, \theta_X \wedge \theta_Y)) = 1$, where $\theta_X, \theta_Y$ denote the explosion times of $X, Y$, respectively.
We remark that, as can be seen from its proof, the Brownian motions $W_1, \cdots, W_r$ in the above theorem are not required to be independent of each other.
Corollary 16. Let $U^{(N)}_t(x)$ denote the solution to (3.44). If there exists $m \in \mathbb{R}$ such that $\sigma(m) = 0$ and $\inf_{x \in K_N} U_0(x) \ge m$, then $U^{(N)}_t(x) \ge m$ for every $t \ge 0$ and $x \in K_N$, a.s. If there exists $M \in \mathbb{R}$ such that $\sigma(M) = 0$ and $\sup_{x \in K_N} U_0(x) \le M$, then $U^{(N)}_t(x) \le M$ for every $t \ge 0$ and $x \in K_N$, a.s.
Proof: For each $N$, set $n = (2N+1)^d$. Because $\sigma(m) = 0$, the constant process $V_t(x) \equiv m$ solves (3.44) with initial condition $V_0(x) \equiv m$, so the first assertion follows by comparing $U^{(N)}_t$ to this constant solution via Theorem 15. Assumption (1) of Theorem 15 holds since $\inf_{x \in K_N} U_0(x) \ge m$; it remains to check assumptions (2), (3), and (4). Let $a_j(x) = b_j(x) = \nu \sum_{i \in K_N} p_{j,i} \cdot (x_i - x_j)$ for all $1 \le j \le n$; it is then easy to check that (2) and (3) are both true. Assumption (4) holds because $r$ in Theorem 15 equals 1, $\sigma_{j1}(x) = \sigma(x_j)$ for all $j$, and we may take $\rho(x) := \mathrm{Lip}_\sigma \cdot x$. For the second assertion, we compare $U^{(N)}_t$ to the constant solution $M$. Q.E.D.
Corollary 17. Let $U_t(x)$ denote the solution to (SDE). If there exists $m \in \mathbb{R}$ such that $\sigma(m) = 0$ and $\inf_{x \in \mathbb{Z}^d} U_0(x) \ge m$, then $U_t(x) \ge m$ for every $t \ge 0$ and $x \in \mathbb{Z}^d$, a.s. If there exists $M \in \mathbb{R}$ such that $\sigma(M) = 0$ and $\sup_{x \in \mathbb{Z}^d} U_0(x) \le M$, then $U_t(x) \le M$ for every $t \ge 0$ and $x \in \mathbb{Z}^d$, a.s.
Proof: The proof follows from Corollary 16 and Theorem 14. Q.E.D.
Theorem 18. Let $\sigma^{(N)}$ be the Lipschitz function constructed from $\sigma$ by

\sigma^{(N)}(x) =
  2N \cdot (x - \tfrac{1}{2N}) \cdot \sigma(\tfrac{1}{N}),   for x \in [\tfrac{1}{2N}, \tfrac{1}{N}),
  \sigma(x),   for x \in [\tfrac{1}{N}, N],
  (N + 1 - x) \cdot \sigma(N),   for x \in (N, N+1),
  0,   otherwise.   (3.51)
Let $U_t(x)$ solve (SDE). Then there exists a sequence of solutions $U^{(N)}_t(x)$ solving (SDE) with $\sigma$ replaced by $\sigma^{(N)}$, such that

\lim_{N \to \infty} \sup_{t \in [0,T]} \sup_{x \in \mathbb{Z}^d} \| U^{(N)}_t(x) - U_t(x) \|_k = 0   (3.52)

for every $k \in [2, \infty)$.
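The piecewise truncation (3.51) is elementary; as a quick sanity check, the following sketch (with a toy Lipschitz $\sigma$ vanishing at 0, chosen purely for illustration and not part of the text) implements $\sigma^{(N)}$ and verifies that it agrees with $\sigma$ on $[1/N, N]$ and vanishes outside $[\tfrac{1}{2N}, N+1]$.

```python
def make_sigma_N(sigma, N):
    """The Lipschitz truncation sigma^(N) of (3.51) (numerical sketch)."""
    def sigma_N(x):
        if 1.0/(2*N) <= x < 1.0/N:
            return 2*N*(x - 1.0/(2*N))*sigma(1.0/N)   # linear ramp up near 0
        if 1.0/N <= x <= N:
            return sigma(x)                            # untouched on [1/N, N]
        if N < x < N + 1:
            return (N + 1 - x)*sigma(N)                # linear ramp down past N
        return 0.0
    return sigma_N

sigma = lambda x: min(abs(x), 2.0)   # toy Lipschitz sigma with sigma(0) = 0
N = 10
sN = make_sigma_N(sigma, N)
assert sN(0.5) == sigma(0.5)         # agreement on [1/N, N]
assert sN(1.0/(4*N)) == 0.0          # vanishes below 1/(2N)
assert sN(N + 2) == 0.0              # vanishes above N + 1
```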
Proof: We note that $\sigma^{(N)}$ is a Lipschitz continuous function with compact support $\mathrm{supp}(\sigma^{(N)}) \subset [\tfrac{1}{2N}, N+1]$. Besides, for all $x \in (N, N+1)$,

| \sigma^{(N)}(x) - \sigma(x) | \le (N + 1 - x) \cdot \sigma(N) + \sigma(x) \le \mathrm{Lip}_\sigma \cdot N + \mathrm{Lip}_\sigma \cdot x \le 2 \mathrm{Lip}_\sigma \cdot x.   (3.53)
Also, for all $x \in [\tfrac{1}{2N}, \tfrac{1}{N})$,

| \sigma^{(N)}(x) - \sigma(x) | \le 2N \cdot \big( x - \tfrac{1}{2N} \big) \cdot \sigma(\tfrac{1}{N}) + \sigma(x) \le \mathrm{Lip}_\sigma \cdot \tfrac{1}{N} + \mathrm{Lip}_\sigma \cdot \tfrac{1}{N} \le 2 \mathrm{Lip}_\sigma \cdot \tfrac{1}{N}.   (3.54)
We write $U^{(N)}_t(x) - U_t(x) = (I) + (II)$, where

(I) = \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}(y - x) \cdot \big[ \sigma^{(N)}(U^{(N)}_s(y)) - \sigma(U^{(N)}_s(y)) \big] \, dB_s(y),   (3.55)
(II) = \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}(y - x) \cdot \big[ \sigma(U^{(N)}_s(y)) - \sigma(U_s(y)) \big] \, dB_s(y).   (3.56)
By the Cauchy–Schwarz inequality and Theorem 11,

\| U^{(N)}_s(y) \cdot 1_{U^{(N)}_s(y) \ge N} \|_k \le E\big[ (U^{(N)}_s(y))^{2k} \big]^{1/2k} \cdot P\big( U^{(N)}_s(y) \ge N \big)^{1/2k} \le \| U^{(N)}_s(y) \|_{2k} \cdot \| U^{(N)}_s(y) \|_{2k} \cdot \tfrac{1}{N} \le \mathrm{const} \cdot \tfrac{1}{N}.   (3.57)
Therefore, Corollary 17, (3.53), (3.54), (3.57), Theorem 11, and Lemma 10 together imply that

\| (I) \|_k^2 \le \mathrm{const} \cdot \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}^2(y - x) \big\| \sigma^{(N)}(U^{(N)}_s(y)) - \sigma(U^{(N)}_s(y)) \big\|_k^2 \, ds
  \le \mathrm{const} \cdot \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}^2(y - x) \Big( \big\| U^{(N)}_s(y) \cdot 1_{U^{(N)}_s(y) \ge N} \big\|_k^2 + \big\| \tfrac{1}{N} \cdot 1_{\frac{1}{N} \ge U^{(N)}_s(y) \ge 0} \big\|_k^2 \Big) ds
  \le \mathrm{const} \cdot \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}^2(y - x) \cdot \tfrac{1}{N^2} \, ds = \mathrm{const} \cdot t \cdot \tfrac{1}{N^2}.   (3.58)
By Theorem 11 and Lemma 10, we have

\| (II) \|_k^2 \le \mathrm{const} \cdot \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}^2(y - x) \big\| \sigma(U^{(N)}_s(y)) - \sigma(U_s(y)) \big\|_k^2 \, ds
  \le \mathrm{const} \cdot \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}^2(y - x) \big\| U^{(N)}_s(y) - U_s(y) \big\|_k^2 \, ds.   (3.59)
Now define

D_{k,N}(t) := \sup_{x \in \mathbb{Z}^d} \| U_t(x) - U^{(N)}_t(x) \|_k^2.   (3.60)

Theorem 11 ensures that $D_{k,N}$ is bounded on $t \in [0, T]$ for every $k \in [2, \infty)$ and $N \in \mathbb{N}$. As a result, (3.58) and (3.59) together imply that

D_{k,N}(t) \le \mathrm{const} \cdot t \cdot \tfrac{1}{N^2} + \mathrm{const} \cdot \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}(y)^2 D_{k,N}(s) \, ds \le \mathrm{const} \cdot t \cdot \tfrac{1}{N^2} + \mathrm{const} \cdot \int_0^t D_{k,N}(s) \, ds   for all t \ge 0.   (3.61)
By Gronwall's inequality (see Appendix A.2), we have

D_{k,N}(t) \le \mathrm{const} \cdot t \cdot \tfrac{1}{N^2} \cdot e^{\mathrm{const} \cdot t}   for all N \ge 1, t \ge 0.   (3.62)

This completes the proof. Q.E.D.
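The form of Gronwall's inequality used to pass from (3.61) to (3.62) — a bound $D(t) \le a t + b \int_0^t D(s)\,ds$ forces $D(t) \le a t e^{bt}$ — can be illustrated numerically on the extremal case, where the integral inequality is an equality (a toy check, not part of the proof; the constants $a, b$ are arbitrary):

```python
import math

# If D(t) <= a*t + b * int_0^t D(s) ds with D >= 0, Gronwall gives
# D(t) <= a*t*e^{b*t}.  The extremal case D'(s) = a + b*D(s), D(0) = 0,
# has the explicit solution D(t) = (a/b)(e^{bt} - 1); it must obey the bound.
a, b = 3.0, 2.0
for t in (0.1, 0.5, 2.0):
    D = (a/b)*math.expm1(b*t)
    assert D <= a*t*math.exp(b*t) + 1e-12
```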
Theorem 19. Let $U_t(x)$ solve (SDE) with compactly supported $\sigma$ such that $\mathrm{supp}(\sigma) \subset [a, b]$, $a > 0$. Then there exists a sequence of $U^{(N)}_t(x)$ solving (SDE) with $\sigma$ replaced by $\sigma^{(N)}$, where $\sigma^{(N)} \in C_c^\infty(\mathbb{R})$ for all $N \in \mathbb{N}$ and $\mathrm{supp}(\sigma^{(N)}) \subset [a_N, b_N]$, $a_N > 0$ for all $N$ large, such that

\lim_{N \to \infty} \sup_{t \in [0,T]} \sup_{x \in K_N} \big\| U^{(N)}_t(x) - U_t(x) \big\|_k = 0   (3.63)

for every $k \in [2, \infty)$.
Proof: Let $\phi \in C_c^\infty((0,1))$ with $\int_{\mathbb{R}} \phi(x) \, dx = 1$, and define

\sigma^{(N)}(x) := N \int_{\mathbb{R}} \phi\big( N(y - x) \big) \sigma(y) \, dy.   (3.64)
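A numerical sketch of the mollification (3.64), with a smooth bump serving as a stand-in for $\phi$ and a toy Lipschitz $\sigma$ (both chosen for illustration only): the mollified function stays within $\mathrm{Lip}_\sigma / N$ of $\sigma$, which is exactly the rate used in (3.67).

```python
import math

def bump(u):
    """Smooth bump supported in (0, 1) (unnormalized), a stand-in for phi."""
    return math.exp(-1.0/(u*(1.0 - u))) if 0.0 < u < 1.0 else 0.0

m = 2000                                             # midpoint-rule quadrature points
Z = sum(bump((j + 0.5)/m) for j in range(m))/m       # normalizing constant of phi

def sigma_mollified(sigma, N, x):
    """sigma^(N)(x) = N * int phi(N(y - x)) sigma(y) dy, cf. (3.64);
    substituting u = N(y - x) turns it into int phi(u) sigma(x + u/N) du."""
    return sum(bump((j + 0.5)/m)/Z*sigma(x + (j + 0.5)/(m*N)) for j in range(m))/m

sigma = lambda s: min(abs(s), 1.0)                   # toy Lipschitz sigma, constant 1
N = 100
err = max(abs(sigma_mollified(sigma, N, x) - sigma(x)) for x in (0.1, 0.5, 0.9))
assert err <= 1.0/N + 1e-9                           # the const/N rate of (3.67)
```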
As we did in the proof of Theorem 18, we write $U^{(N)}_t(x) - U_t(x) = (I) + (II)$, where

(I) = \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}(y - x) \cdot \big[ \sigma^{(N)}(U^{(N)}_s(y)) - \sigma(U^{(N)}_s(y)) \big] \, dB_s(y),   (3.65)
(II) = \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}(y - x) \cdot \big[ \sigma(U^{(N)}_s(y)) - \sigma(U_s(y)) \big] \, dB_s(y).   (3.66)
Note that

\big| \sigma^{(N)}(U_s(y)) - \sigma(U_s(y)) \big| \le N \int_{\mathbb{R}} \phi\big( N(z - U_s(y)) \big) \big| \sigma(z) - \sigma(U_s(y)) \big| \, dz \le \mathrm{const} \cdot N \int_{\mathbb{R}} \phi\big( N(z - U_s(y)) \big) \cdot \tfrac{1}{N} \, dz \le \mathrm{const} \cdot \tfrac{1}{N}.   (3.67)
Therefore, Theorem 11 and Lemma 10 imply that

\| (I) \|_k^2 \le \mathrm{const} \cdot \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}^2(y - x) \big\| \sigma^{(N)}(U^{(N)}_s(y)) - \sigma(U^{(N)}_s(y)) \big\|_k^2 \, ds
  \le \mathrm{const} \cdot \int_0^t \sum_{y \in \mathbb{Z}^d} P_{t-s}^2(y - x) \cdot \tfrac{1}{N^2} \, ds = \mathrm{const} \cdot t \cdot \tfrac{1}{N^2}.   (3.68)

Because $\sup_{x \in \mathbb{Z}^d} \| U_t(x) - U^{(N)}_t(x) \|_k^2$ is bounded on $t \in [0, T]$ for every $k \ge 1$ and $N \in \mathbb{N}$ (see Theorem 14), the rest of the proof follows exactly as in Theorem 18. Q.E.D.
We conclude this section with the following result, which combines several of the theorems above.
Theorem 20. Let $U_t(x)$ solve (SDE). Then there exists a sequence of solutions $U^{(N)}_t(x)$ solving (3.44) with $\sigma$ replaced by $\sigma^{(N)}$, where $\sigma^{(N)} \in C_c^\infty(\mathbb{R})$ for all $N \in \mathbb{N}$ and $\mathrm{supp}(\sigma^{(N)}) \subset [a_N, b_N]$, $a_N > 0$ for all $N$ large, such that

\lim_{N \to \infty} \sup_{t \in [0,T]} \sup_{x \in \mathbb{Z}^d} \big\| U^{(N)}_t(x) - U_t(x) \big\|_k = 0,   (3.69)

for every $k \in [2, \infty)$ and $T > 0$.
Proof: By Theorem 18, the solution $U_t(x)$ to (SDE) can be approximated by solutions with compactly supported $\sigma$. By Theorem 19, the solution to (SDE) with compactly supported $\sigma$ can be further approximated by solutions with smooth, compactly supported $\sigma$. Finally, by Theorem 14, the solution to (SDE) with smooth, compactly supported $\sigma$ can be approximated by the solutions to (3.44) with the same $\sigma$. Q.E.D.
3.3 Comparison principles for (SDE)
The goal of this section is to prove Theorem 1. We start with the comparison result under simplifications.
Theorem 21. Consider two solutions $U^{(N)}_t$ and $V^{(N)}_t$ to (3.44) with the same initial conditions $U^{(N)}_0 \equiv V^{(N)}_0$, but with different $\sigma = \sigma_1, \sigma_2$ such that $\sigma_1 \le \sigma_2$, and both $\sigma_1$ and $\sigma_2$ are twice continuously differentiable. In addition, we suppose $\mathrm{supp}(\sigma_1), \mathrm{supp}(\sigma_2) \subset [0, a]$, $a > 0$, and we define $I := [0, a]$. We write $K_N = \{x_1, \cdots, x_m\}$, and let $F_0$ be the class of functions $f : I^{K_N} \to \mathbb{R}$ such that $f$ is twice differentiable in $x_1, \cdots, x_m$ with bounded continuous first and second derivatives, $\frac{\partial^2 f}{\partial x_i \partial x_j} \ge 0$ for all $1 \le i, j \le m$, and $f$ is nondecreasing in each $x_i$, $1 \le i \le m$. Then for any $f_1, \cdots, f_n \in F_0$ and $t_n > t_{n-1} > \cdots > t_1 \ge 0$,

E\Big[ \prod_{j=1}^n f_j(U^{(N)}_{t_j}) \Big] \le E\Big[ \prod_{j=1}^n f_j(V^{(N)}_{t_j}) \Big].   (3.70)
Proof: When both $U^{(N)}_t$ and $V^{(N)}_t$ are solutions to (3.44) and the underlying Brownian motions are independent, this theorem is a special case of Theorem 1 of [2]. We demonstrate here how to follow the proof in [2], with a few adjustments, to prove the comparison result for (3.44).
By Corollary 16, we know that $U^{(N)}_t(x), V^{(N)}_t(x) \in I$ for every $x \in K_N$. Define two semigroups $S^{\sigma_1}$ and $S^{\sigma_2}$ associated with $U^{(N)}_t$ and $V^{(N)}_t$ by

S^{\sigma_1}_t f(z) = E_z \big[ f(U^{(N)}_t(x_1), \cdots, U^{(N)}_t(x_m)) \big],   (3.71)
S^{\sigma_2}_t f(z) = E_z \big[ f(V^{(N)}_t(x_1), \cdots, V^{(N)}_t(x_m)) \big],   (3.72)

for any $t \ge 0$, $z \in I^{K_N}$, and Borel measurable function $f \ge 0$.
It is known that $U^{(N)}_t$ and $V^{(N)}_t$ are Feller processes; for a proof the reader may refer to, for example, Theorem 19.9 of [14], whose proof also applies to the case of correlated Brownian motions. Furthermore, given $f \in C^2(I^{K_N})$, following the same proof as that of Theorem 8.4.3 in [9], we have $S^\sigma_t f \in C^2(I^{K_N})$ for $\sigma = \sigma_1, \sigma_2$.
When $\nu = 0$ in (3.44), the same proof as that of Proposition 16 in [2] also shows that $S^\sigma_t f \in F_0$ when $f \in F_0$ in our case. When $\sigma \equiv 0$ in (3.44), the solution $X^{(N)}_t$ is given by

X^{(N)}_t(x) = \sum_{y \in K_N} \big( e^{\nu t (P - A)} \big)_{x,y} \cdot X^{(N)}_0(y),   (3.73)

where $x \in K_N$, $P := (p_{ij})_{i,j \in K_N}$, and $A = (a_{ij})_{i,j \in K_N}$ with $a_{ij} := \big( \sum_{k \in K_N} p_{ik} \big) \cdot \delta_{ij}$. The semigroup $S$ associated with $X^{(N)}_t$ is given by

S_t f(z) = E_z \big[ f(X^{(N)}_t(x_1), \cdots, X^{(N)}_t(x_m)) \big] = f \Big( \sum_{1 \le i \le m} \big( e^{\nu t (P - A)} \big)_{x_1, x_i} \cdot z_i, \; \cdots, \; \sum_{1 \le i \le m} \big( e^{\nu t (P - A)} \big)_{x_m, x_i} \cdot z_i \Big).   (3.74)
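The $\sigma \equiv 0$ dynamics (3.73) is just a linear ODE system. The toy sketch below (with an arbitrary symmetric nearest-neighbour choice of $p_{ij}$ on a small state space, purely illustrative and not the weights of the text) checks that the matrix-exponential formula agrees with direct Euler integration of the drift, and that $e^{\nu t (P-A)}$ has nonnegative entries — the fact used for (3.75).

```python
# Sketch of the sigma = 0 dynamics (3.73): dX = nu (P - A) X dt on a toy
# K_N = {0, ..., m-1} with a hypothetical symmetric nearest-neighbour p_ij.
m, nu, t = 6, 1.0, 0.7
p = [[0.5 if abs(i - j) == 1 else 0.0 for j in range(m)] for i in range(m)]
A = [sum(p[i]) for i in range(m)]          # a_ii = sum_k p_ik, cf. (3.73)

def drift(v):
    """Apply the matrix nu*(P - A) to a vector."""
    return [nu*(sum(p[i][j]*v[j] for j in range(m)) - A[i]*v[i]) for i in range(m)]

def expm_apply(v, t, terms=60):
    """e^{nu t (P - A)} v via the exponential power series."""
    term, out = v[:], v[:]
    for n in range(1, terms):
        term = [t/n*w for w in drift(term)]
        out = [a_ + b_ for a_, b_ in zip(out, term)]
    return out

X0 = [float(i) for i in range(m)]
expX = expm_apply(X0, t)                   # the closed form (3.73)
X, steps = X0[:], 20000                    # Euler integration of the same ODE
for _ in range(steps):
    g = drift(X)
    X = [X[i] + (t/steps)*g[i] for i in range(m)]
assert max(abs(a_ - b_) for a_, b_ in zip(X, expX)) < 1e-3

# Nonnegativity of e^{nu t (P - A)}, the fact used for (3.75): every column
# (image of a basis vector) should have nonnegative entries.
for i in range(m):
    e = [1.0 if j == i else 0.0 for j in range(m)]
    assert min(expm_apply(e, t)) > -1e-9
```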
If $f \in F_0$, then

\frac{\partial^2}{\partial z_i \partial z_j} (S_t f)(z) = \sum_{1 \le i', j' \le m} S_t \Big( \frac{\partial^2 f}{\partial z_{i'} \partial z_{j'}} \Big)(z) \cdot \big( e^{\nu t (P - A)} \big)_{x_{i'}, x_i} \cdot \big( e^{\nu t (P - A)} \big)_{x_{j'}, x_j} \ge 0.   (3.75)

In (3.75), we have used the fact that $e^{\nu t (P - A)}$ is a nonnegative matrix: the off-diagonal entries of $\nu(P - A)$ are nonnegative, so for $c > 0$ large enough the matrix $\nu(P - A) + cI$ is entrywise nonnegative, and $e^{\nu t (P - A)} = e^{-ct} \cdot e^{t(\nu(P - A) + cI)} \ge 0$. The monotonicity of $S_t f(z)$ in each $z_i$ also holds due to this fact. Therefore, $S_t f \in F_0$.
To see that $S^{\sigma_1}_t f, S^{\sigma_2}_t f \in F_0$ when $f \in F_0$, we first use the Trotter product formula (see, for example, Corollary 1.6.7 of [4]), which shows

S^\sigma_t f = \lim_{n \to \infty} \big[ S^{\sigma, \nu=0}_{t/n} S^{\sigma=0}_{t/n} \big]^n f,   (3.76)
where the limit exists in $C_0(I^{K_N})$. The Trotter product formula is applicable because the infinitesimal generators $G^\sigma$, $G^{\sigma,\nu=0}$, and $G^{\sigma=0}$ of $S^\sigma_t$, $S^{\sigma,\nu=0}_t$, and $S^{\sigma=0}_t$ are given respectively by

G^\sigma := \nu \sum_{1 \le i, j \le m} (p_{j,i} - \delta_{i,j}) z_j \frac{\partial}{\partial z_i} + \frac{1}{2} \sum_{1 \le i, j \le m} \sigma(z_i) R(i - j) \sigma(z_j) \frac{\partial^2}{\partial z_i \partial z_j},   (3.77)
G^{\sigma,\nu=0} := \frac{1}{2} \sum_{1 \le i, j \le m} \sigma(z_i) R(i - j) \sigma(z_j) \frac{\partial^2}{\partial z_i \partial z_j},   (3.78)
G^{\sigma=0} := \nu \sum_{1 \le i, j \le m} (p_{j,i} - \delta_{i,j}) z_j \frac{\partial}{\partial z_i},   (3.79)

where $z \in I^{K_N}$. See, for instance, Theorem 19.9 of [14] for reference.
If we define, for any $f \in F_0$,

S^\sigma_{n,t} f := \big[ S^{\sigma, \nu=0}_{t/n} S^{\sigma=0}_{t/n} \big]^n f,   (3.80)

then $S^\sigma_{n,t} f \in F_0$ by the previous discussion in this proof. We now follow the arguments in Proposition 16 of [2]: let

u_0 = z;  u_i = z + h_i e_i;  u_j = z + h_j e_j;  u_{ij} = z + h_i e_i + h_j e_j;   (3.81)

for any $i \ne j$, $1 \le i, j \le m$. Then

S^\sigma_{n,t} f(u_{ij}) - S^\sigma_{n,t} f(u_i) - S^\sigma_{n,t} f(u_j) + S^\sigma_{n,t} f(u_0) \ge 0,   (3.82)

for any $i \ne j$, $1 \le i, j \le m$, and

S^\sigma_{n,t} f(u_i) - S^\sigma_{n,t} f(u_0) \ge 0,   (3.83)

for any $1 \le i \le m$. Due to (3.76), (3.82) and (3.83) hold for $S^\sigma_t f$ as well. Because $S^\sigma_t f \in C^2(I^{K_N})$, it follows that $S^\sigma_t f \in F_0$.
We would like to show that

(S^{\sigma_1}_t f)(z) \le (S^{\sigma_2}_t f)(z),   (3.84)

for $t \ge 0$ and $f \in F_0$. By the fundamental theorem of calculus on $C(I^{K_N})$, we have

S^{\sigma_1}_s S^{\sigma_2}_{t-s} f \Big|_{s=0}^{s=t} = \int_0^t \big[ S^{\sigma_1}_s (-G^{\sigma_2} S^{\sigma_2}_{t-s}) f + (S^{\sigma_1}_s G^{\sigma_1}) S^{\sigma_2}_{t-s} f \big] \, ds.   (3.85)

Thus, we have the following integration by parts formula (as is done in [2]):

(S^{\sigma_1}_t) f - (S^{\sigma_2}_t) f = \int_0^t \big( S^{\sigma_1}_s (G^{\sigma_1} - G^{\sigma_2}) S^{\sigma_2}_{t-s} \big) f \, ds.   (3.86)

(3.84) then follows from (3.86) because $(G^{\sigma_1} - G^{\sigma_2}) \le 0$ by (3.77).
To show that (3.70) is true, we apply the Markov property of $U^{(N)}$ to see that

E\big[ \prod_{j=1}^n f_j(U^{(N)}_{t_j}) \big]
  = E\Big[ f_1(U^{(N)}_{t_1}) \, E_{U^{(N)}_{t_1}} \big[ \prod_{j=2}^n f_j(U^{(N)}_{t_j - t_1}) \big] \Big]
  = E\Big[ f_1(U^{(N)}_{t_1}) \, E_{U^{(N)}_{t_1}} \Big[ f_2(U^{(N)}_{t_2 - t_1}) \, E_{U^{(N)}_{t_2 - t_1}} \big[ \prod_{j=3}^n f_j(U^{(N)}_{t_j - t_2}) \big] \Big] \Big]
  = \cdots = E\Big[ f_1(U^{(N)}_{t_1}) \, E_{U^{(N)}_{t_1}} \Big[ f_2(U^{(N)}_{t_2 - t_1}) \cdots E_{U^{(N)}_{t_{n-1}}} \big[ f_n(U^{(N)}_{t_n - t_{n-1}}) \big] \Big] \Big].   (3.87)

Because $F_0$ is closed under multiplication, and $S^\sigma_t f \in F_0$ if $f \in F_0$, by (3.87) we may find some $g \in F_0$ such that

E\Big[ \prod_{j=1}^n f_j(U^{(N)}_{t_j}) \Big] = E\big[ g(U^{(N)}_{t_1}) \big].   (3.88)

This reduces (3.70) to (3.84), which has already been shown. So (3.70) is proved, and hence the theorem. Q.E.D.
Lemma 22. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a continuous function and let $\{X^{(N)}_1, \cdots, X^{(N)}_n\}_{N \in \mathbb{N}}$ be a family of random variables such that for all $1 \le i \le n$, $X^{(N)}_i \to X_i$ in probability. Also, assume that there exists $M < \infty$ so that for all $N \ge 1$,

\| f(X^{(N)}_1, \cdots, X^{(N)}_n) \|_{k_1} \le M,   (3.89)
\| f(X_1, \cdots, X_n) \|_{k_2} \le M,   (3.90)
\max_{1 \le i \le n} E\big( |X^{(N)}_i|^{k_3} \big) \le M,   (3.91)
\max_{1 \le i \le n} E\big( |X_i|^{k_4} \big) \le M,   (3.92)
for some $k_1, k_2 \in (1, \infty)$ and $k_3, k_4 \in (0, \infty)$. Then we have

\lim_{N \to \infty} E\big[ f(X^{(N)}_1, \cdots, X^{(N)}_n) \big] = E\big[ f(X_1, \cdots, X_n) \big].   (3.93)
Proof: Fix $A > 0$, and let $X^{(N,A)}_i := X^{(N)}_i 1_{|X^{(N)}_i| \le A}$ and $X^{(A)}_i := X_i 1_{|X_i| \le A}$. Then

\big| E[ f(X^{(N)}_1, \cdots, X^{(N)}_n) ] - E[ f(X_1, \cdots, X_n) ] \big|
  \le \big| E[ f(X^{(N)}_1, \cdots, X^{(N)}_n) ] - E[ f(X^{(N,A)}_1, \cdots, X^{(N,A)}_n) ] \big|
  + \big| E[ f(X^{(N,A)}_1, \cdots, X^{(N,A)}_n) ] - E[ f(X^{(A)}_1, \cdots, X^{(A)}_n) ] \big|
  + \big| E[ f(X^{(A)}_1, \cdots, X^{(A)}_n) ] - E[ f(X_1, \cdots, X_n) ] \big|
  = (I) + (II) + (III).   (3.94)
To estimate (I), we note that

\big| E[ f(X^{(N)}_1, \cdots, X^{(N)}_n) ] - E[ f(X^{(N,A)}_1, \cdots, X^{(N,A)}_n) ] \big|
  \le E\Big[ | f(X^{(N)}_1, \cdots, X^{(N)}_n) | \cdot 1_{\bigcup_{i=1}^n \{|X^{(N)}_i| > A\}} \Big]
  \le \big\| f(X^{(N)}_1, \cdots, X^{(N)}_n) \big\|_{k_1} \cdot \Big( \sum_{i=1}^n P(|X^{(N)}_i| > A) \Big)^{\frac{k_1 - 1}{k_1}}
  \le M \cdot \Big( \sum_{i=1}^n \frac{M}{A^{k_3}} \Big)^{(k_1 - 1)/k_1},   (3.95)

which is small when $A$ is large. To estimate (II), we note that for each fixed $A > 0$, $f(z_1 1_{|z_1| \le A}, \cdots, z_n 1_{|z_n| \le A})$ is a bounded continuous function on $\mathbb{R}^n$. So (II) is small when $N$ is large, by the convergence in probability of each $X^{(N)}_i$ to $X_i$ as $N \to \infty$ ($1 \le i \le n$).
To estimate (III), we use the same technique as for (I). Namely,

\big| E[ f(X_1, \cdots, X_n) ] - E[ f(X^{(A)}_1, \cdots, X^{(A)}_n) ] \big|
  \le E\Big[ | f(X_1, \cdots, X_n) | \cdot 1_{\bigcup_{i=1}^n \{|X_i| > A\}} \Big]
  \le \big\| f(X_1, \cdots, X_n) \big\|_{k_2} \cdot \Big( \sum_{i=1}^n P(|X_i| > A) \Big)^{\frac{k_2 - 1}{k_2}}
  \le M \cdot \Big( \sum_{i=1}^n \frac{M}{A^{k_4}} \Big)^{\frac{k_2 - 1}{k_2}}.   (3.96)
We first pick $A > 0$ large so that (I) and (III) are both small. Then we fix this $A$ and let $N$ go to infinity. The lemma is thus proved. Q.E.D.
3.3.1 Proof of Theorem 1
Given $t_n > t_{n-1} > \cdots > t_1 \ge 0$, we define $f_i(x) = x_{i,1}^{k_{i,1}} \cdots x_{i,n_i}^{k_{i,n_i}}$, where $x \in I^{K_N}$, $x_{i,1}, \cdots, x_{i,n_i} \in K_N$, and $k_{i,j} \ge 0$ for $1 \le i \le n$ and $1 \le j \le n_i$. It can be seen that $f_i \in F_0$ for all $1 \le i \le n$.
By Theorem 21, if $U^{(N)}_t$ and $V^{(N)}_t$ are solutions to (3.44) with the same initial conditions $U^{(N)}_0 \equiv V^{(N)}_0$ and with $\sigma = \sigma_1, \sigma_2$ respectively, $\sigma_1 \le \sigma_2$, then

E\Big[ \prod_{i=1}^n U^{(N)}_{t_i}(x_{i,1})^{k_{i,1}} \cdots U^{(N)}_{t_i}(x_{i,n_i})^{k_{i,n_i}} \Big] \le E\Big[ \prod_{i=1}^n V^{(N)}_{t_i}(x_{i,1})^{k_{i,1}} \cdots V^{(N)}_{t_i}(x_{i,n_i})^{k_{i,n_i}} \Big].   (3.97)
Now we apply Lemma 22 with $f = f_1 \cdots f_n$; assumptions (3.89), (3.90), (3.91), and (3.92) are satisfied due to (3.14) and Theorem 20. Also, Theorem 20 implies the convergence in probability of $U^{(N)}_t(x)$ to $U_t(x)$. Therefore, we let $N \to \infty$ in (3.97) to obtain

E\Big[ \prod_{i=1}^n U_{t_i}(x_{i,1})^{k_{i,1}} \cdots U_{t_i}(x_{i,n_i})^{k_{i,n_i}} \Big] \le E\Big[ \prod_{i=1}^n V_{t_i}(x_{i,1})^{k_{i,1}} \cdots V_{t_i}(x_{i,n_i})^{k_{i,n_i}} \Big].   (3.98)

Relabel all $t_i$'s and $x_i$'s to conclude the proof of Theorem 1. Q.E.D.
CHAPTER 4
FROM INTERACTING SDES TO SHE(1): $L^k(P)$ APPROXIMATION
Throughout this chapter, SHE(1) denotes the stochastic heat equation defined on page
2. Let ut(x) be the solution to (SHE(1)) with initial data u0(x). We would like to define a
family of SDE systems accordingly.
We say that $U^{(\varepsilon)}_t(x)$ solves (SDE(ε)) if

dU^{(\varepsilon)}_t(x) = (\mathcal{L}^{(\varepsilon)} U^{(\varepsilon)}_t)(x) \, dt + \sigma(U^{(\varepsilon)}_t(x)) \, dB^{(\varepsilon)}_t(x),   x \in (\varepsilon\mathbb{Z})^d, d \ge 1,   (4.1)

such that:
1. $\{B^{(\varepsilon)}_t(x) := \varepsilon^{-d} \int_{(0,t) \times C^{(\varepsilon)}(x)} \eta(ds, dy)\}_{x \in (\varepsilon\mathbb{Z})^d}$ is a family of correlated Brownian motions, where $C^{(\varepsilon)}(x) := \prod_{j=1}^d [x_j, x_j + \varepsilon)$ for $x = (x_1, \cdots, x_d)$.
2. $\mathcal{L}^{(\varepsilon)} g(j) := \nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \sum_{i \in (\varepsilon\mathbb{Z})^d} p^{(\varepsilon,\alpha,d)}_{j,i} (g(i) - g(j))$ for all $j \in (\varepsilon\mathbb{Z})^d$ and $g : (\varepsilon\mathbb{Z})^d \to \mathbb{R}$, where

n_{\alpha,d} := 2\zeta(\alpha + 1)   when 0 < \alpha < 2, d = 1,   (4.2)
n_{\alpha,d} := \sum_{n = (n_1, \cdots, n_d) \in \mathbb{Z}^d, \; n_1 \ne 0, \cdots, n_d \ne 0} |n|^{-\alpha-d} + 2d   when 0 < \alpha < 2, d > 1,   (4.3)
C_{\alpha,d} := n_{\alpha,d} \cdot \Big( \int_{\mathbb{R}^d} \frac{1 - \cos(x \cdot e_1)}{|x|^{\alpha+d}} \, dx \Big)^{-1}   when 0 < \alpha < 2,   (4.4)
C_{\alpha,d} := 2d   when \alpha = 2.   (4.5)

Here, in (4.2), $\zeta(s) := \sum_{n=1}^\infty \frac{1}{n^s}$ is the Riemann zeta function. When $0 < \alpha < 2$, $d = 1$,

p^{(\varepsilon,\alpha,d)}_{i,j} = p^{(\varepsilon,\alpha,d)}_{j,i} = n^{-1}_{\alpha,d} \cdot \frac{1}{2 |i/\varepsilon - j/\varepsilon|^{\alpha+1}}   (4.6)

for all $i \ne j$, $i, j \in \varepsilon\mathbb{Z}$, and $p^{(\varepsilon,\alpha,d)}_{i,i} = 0$ for all $i \in \varepsilon\mathbb{Z}$.
When $0 < \alpha < 2$, $d > 1$,

p^{(\varepsilon,\alpha,d)}_{i,j} = p^{(\varepsilon,\alpha,d)}_{j,i} = n^{-1}_{\alpha,d} \cdot \frac{1}{|i/\varepsilon - j/\varepsilon|^{\alpha+d}}   (4.7)

for all $i = (i_1, \cdots, i_d), j = (j_1, \cdots, j_d) \in (\varepsilon\mathbb{Z})^d$ such that either $i_k \ne j_k$ for all $1 \le k \le d$, or $\sum_{k=1}^d |i_k - j_k| = \varepsilon$; and $p^{(\varepsilon,\alpha,d)}_{i,j} = 0$ otherwise.
When $\alpha = 2$,

p^{(\varepsilon,\alpha,d)}_{i,j} = p^{(\varepsilon,\alpha,d)}_{j,i} = \frac{1}{2d}   (4.8)

for all $i, j \in (\varepsilon\mathbb{Z})^d$ such that $|i - j| = \varepsilon$, and $p^{(\varepsilon,\alpha,d)}_{i,j} = 0$ otherwise.
3. $\sigma$, $\nu$, $\alpha$, and the noise $\eta(t, x)$ appearing in (SDE(ε)) are the same as those in (SHE(1)).
4. The initial condition is $U^{(\varepsilon)}_0(x) = u_0(x)$ for all $x \in (\varepsilon\mathbb{Z})^d$.
The goal of this chapter is to prove the following result:
Theorem 23. Let $U^{(\varepsilon)}_t(x)$ be the solution to (SDE(ε)), and let $u_t(x)$ be the solution to (SHE(1)). Then

\lim_{\varepsilon \downarrow 0} \sup_{t \in [T_1, T_2]} \sup_{x \in \mathbb{R}^d} \big\| U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon]) - u_t(x) \big\|_k = 0   (4.9)

for every $k \in [2, \infty)$ and $T_2 > T_1 > 0$. If, in particular, $u_0 \equiv c$ for some constant $c \ge 0$, then

\lim_{\varepsilon \downarrow 0} \sup_{t \in [0, T]} \sup_{x \in \mathbb{R}^d} \big\| U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon]) - u_t(x) \big\|_k = 0   (4.10)

for every $k \in [2, \infty)$ and $T > 0$.
Following the arguments in Sections 2.1 and 2.2.1, the solution $U^{(\varepsilon)}_t(x)$ to (SDE(ε)) satisfies

U^{(\varepsilon)}_t(x) = \sum_{y \in (\varepsilon\mathbb{Z})^d} P^{(\varepsilon)}_t(y - x) \cdot u_0(y) + \sum_{y \in (\varepsilon\mathbb{Z})^d} \int_0^t P^{(\varepsilon)}_{t-s}(y - x) \sigma(U^{(\varepsilon)}_s(y)) \, dB^{(\varepsilon)}_s(y),   (4.11)

where

P^{(\varepsilon)}_t(x) := \sum_{n=0}^\infty (P_\varepsilon)^n_{0,x} \, \frac{e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t} (\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t)^n}{n!},   (4.12)
and $P_\varepsilon := \big( p^{(\varepsilon,\alpha,d)}_{i,j} \big)_{i,j \in (\varepsilon\mathbb{Z})^d}$ is a probability transition matrix. When $0 < \alpha < 2$, $d > 1$, the characteristic function $\psi_{\varepsilon,\alpha,d}$ of $P^{(\varepsilon)}_t$ equals

\psi_{\varepsilon,\alpha,d}(z) = \sum_{x \in (\varepsilon\mathbb{Z})^d} P^{(\varepsilon)}_t(x) e^{iz \cdot x}
  = \sum_{n=0}^\infty \Big( \sum_{x \in (\varepsilon\mathbb{Z})^d} p^{(\varepsilon,\alpha,d)}_{0,x} e^{iz \cdot x} \Big)^n \cdot \frac{e^{-\nu C_{\alpha,d} \varepsilon^{-\alpha} t} (\nu C_{\alpha,d} \varepsilon^{-\alpha} t)^n}{n!}
  = \sum_{n=0}^\infty \Big( n^{-1}_{\alpha,d} \sum_{x \in (\varepsilon\mathbb{Z})^d, \; x_1 \ne 0, \cdots, x_d \ne 0 \text{ or } |x_1| + \cdots + |x_d| = \varepsilon} \frac{1}{|x/\varepsilon|^{\alpha+d}} e^{iz \cdot x} \Big)^n \cdot \frac{e^{-\nu C_{\alpha,d} \varepsilon^{-\alpha} t} (\nu C_{\alpha,d} \varepsilon^{-\alpha} t)^n}{n!}
  = \sum_{n=0}^\infty \Big( n^{-1}_{\alpha,d} \sum_{x \in \mathbb{Z}^d, \; x_1 \ne 0, \cdots, x_d \ne 0 \text{ or } |x_1| + \cdots + |x_d| = 1} \frac{1}{|x|^{\alpha+d}} e^{i\varepsilon z \cdot x} \Big)^n \cdot \frac{e^{-\nu C_{\alpha,d} \varepsilon^{-\alpha} t} (\nu C_{\alpha,d} \varepsilon^{-\alpha} t)^n}{n!}
  = \sum_{n=0}^\infty (\phi_{\alpha,d}(\varepsilon z))^n \cdot \frac{e^{-\nu C_{\alpha,d} \varepsilon^{-\alpha} t} (\nu C_{\alpha,d} \varepsilon^{-\alpha} t)^n}{n!}
  = e^{-\nu C_{\alpha,d} \varepsilon^{-\alpha} t (1 - \phi_{\alpha,d}(\varepsilon z))},   (4.13)
where

\phi_{\alpha,d}(z) := n^{-1}_{\alpha,d} \cdot \Big( \sum_{x \in \mathbb{Z}^d, \; x_1 \ne 0, \cdots, x_d \ne 0} \frac{1}{|x|^{\alpha+d}} e^{iz \cdot x} + \sum_{x \in \mathbb{Z}^d, \; |x_1| + \cdots + |x_d| = 1} \frac{1}{|x|^{\alpha+d}} e^{iz \cdot x} \Big).   (4.14)

When $0 < \alpha < 2$, $d = 1$, (4.13) holds with

\phi_{\alpha,d}(z) := n^{-1}_{\alpha,d} \cdot \sum_{x \in \mathbb{Z}, \, x \ne 0} \frac{1}{|x|^{\alpha+d}} e^{iz \cdot x}.   (4.15)

When $\alpha = 2$, (4.13) still holds with

\phi_{\alpha,d}(z) := \sum_{j=1}^d \frac{1}{d} \cos(z_j).   (4.16)
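For the simplest case $\alpha = 2$, $d = 1$ (so $C_{2,1} = 2$ by (4.5), and the walk is nearest-neighbour with step probability $1/2$ by (4.8)), the Poissonization (4.12) and the resummation leading to (4.13) can be checked numerically. This is an illustrative sketch only, with arbitrary toy parameters:

```python
import math

# Sketch of (4.12)/(4.13) for alpha = 2, d = 1: C_{2,1} = 2 by (4.5), and the
# walk is nearest-neighbour with p(i, i +- eps) = 1/2 by (4.8).
def P_eps_t(t, eps, nu=1.0):
    """P^(eps)_t as a dict {x in lattice units: probability}."""
    lam = nu * 2.0 * eps**(-2) * t        # Poisson rate nu*C_{alpha,d}*eps^{-alpha}*t
    nmax = int(lam + 12.0*math.sqrt(lam)) + 30
    dist, out = {0: 1.0}, {}
    w = math.exp(-lam)                    # Poisson weight of n = 0 steps
    for n in range(nmax + 1):
        for x, m in dist.items():         # accumulate n-step walk, Poisson weighted
            out[x] = out.get(x, 0.0) + m*w
        w *= lam/(n + 1)
        new = {}
        for x, m in dist.items():         # one more nearest-neighbour step
            new[x - 1] = new.get(x - 1, 0.0) + 0.5*m
            new[x + 1] = new.get(x + 1, 0.0) + 0.5*m
        dist = new
    return out

eps, t, nu = 0.5, 0.1, 1.0
P = P_eps_t(t, eps, nu)
assert abs(sum(P.values()) - 1.0) < 1e-9   # P^(eps)_t is a probability kernel
# Its characteristic function matches the closed form in (4.13):
z = 1.1
lam = nu*2.0*eps**(-2)*t
lhs = sum(m*complex(math.cos(z*eps*x), math.sin(z*eps*x)) for x, m in P.items())
assert abs(lhs - math.exp(-lam*(1.0 - math.cos(eps*z)))) < 1e-9
```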
Lemma 24. Let $\phi_{\alpha,d}(z)$ be as defined in (4.14), (4.15), and (4.16), and let $C_{\alpha,d}$ be as defined in (4.4) and (4.5). Write

1 - \phi_{\alpha,d}(z) - C^{-1}_{\alpha,d} |z|^\alpha = R_{\alpha,d}(z).   (4.17)

Then:
(1) There exists $C > 0$ such that for all $|z| \le 1$,

R_{\alpha,d}(z) \le C|z|^{1+\alpha}   when 0 < \alpha < 1,
R_{\alpha,d}(z) \le C|z|^2 \ln(|z|^{-1})   when \alpha = 1,
R_{\alpha,d}(z) \le C|z|^2   when 1 < \alpha < 2,
R_{\alpha,d}(z) \le C|z|^3   when \alpha = 2.   (4.18)

(2) For every constant $0 < c_1 < \pi$, there exists $c_2 > 0$ such that

1 - \phi_{\alpha,d}(z) > c_2   (4.19)

for all $z \in [-\pi, \pi]^d \setminus [-c_1, c_1]^d$.
Proof: First we prove (1). This is done in three cases.
Case 1: $0 < \alpha < 2$, $d > 1$. We have

n_{\alpha,d} \cdot (1 - \phi_{\alpha,d}(z)) = \sum_{x \in \mathbb{Z}^d, \; |x_1| + \cdots + |x_d| = 1} \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} + \sum_{x \in \mathbb{Z}^d, \; x_1 \ne 0, \cdots, x_d \ne 0} \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} - \int_{\mathbb{R}^d} \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} \, dx + \int_{\mathbb{R}^d} \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} \, dx,   (4.20)

where

\int_{\mathbb{R}^d} \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} \, dx = \int_{\mathbb{R}^d} \frac{1 - \cos(|z| e_1 \cdot x)}{|x|^{\alpha+d}} \, dx = \int_{\mathbb{R}^d} \frac{1 - \cos(x \cdot e_1)}{|x|^{\alpha+d}} \, dx \cdot |z|^\alpha.   (4.21)

Also,

\sum_{x \in \mathbb{Z}^d, \; |x_1| + \cdots + |x_d| = 1} \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} \le \sum_{x \in \mathbb{Z}^d, \; |x_1| + \cdots + |x_d| = 1} \frac{|z|^2 |x|^2}{|x|^{\alpha+d}} \le \mathrm{const} \cdot |z|^2.   (4.22)

Now we define

R := \{ (x_1, \ldots, x_d) : x_1 > 0, \ldots, x_d > 0 \},   (4.23)
D := \{ (x_1, \ldots, x_d) : 0 \le x_j \le 1 \text{ for all } 1 \le j \le d \},   (4.24)
\mathbf{1} := (1, \ldots, 1),   (4.25)
[x] := ([x_1], \ldots, [x_d]) \text{ for } x = (x_1, \ldots, x_d).   (4.26)
To estimate

\sum_{x \in \mathbb{Z}^d, \; x_1 \ne 0, \cdots, x_d \ne 0} \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} - \int_{\mathbb{R}^d} \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} \, dx,   (4.27)

it suffices to consider

Q := \Big| \sum_{x \in \mathbb{N}^d} \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} - \int_R \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} \, dx \Big|   (4.28)

(the estimates on the other quadrants are essentially the same).
Now

Q = \Big| \int_R \frac{1 - \cos(z \cdot ([x] + \mathbf{1}))}{|[x] + \mathbf{1}|^{\alpha+d}} \, dx - \int_R \frac{1 - \cos(z \cdot x)}{|x|^{\alpha+d}} \, dx \Big|
  \le \int_{R \setminus D} \Big| (1 - \cos(z \cdot ([x] + \mathbf{1}))) \cdot \Big( \frac{1}{|[x] + \mathbf{1}|^{\alpha+d}} - \frac{1}{|x|^{\alpha+d}} \Big) \Big| \, dx
  + \int_{R \setminus D} \Big| \frac{\cos(z \cdot ([x] + \mathbf{1})) - \cos(z \cdot x)}{|x|^{\alpha+d}} \Big| \, dx + \frac{1 - \cos(z \cdot \mathbf{1})}{|\mathbf{1}|^{\alpha+d}} + \int_D \frac{|1 - \cos(z \cdot x)|}{|x|^{\alpha+d}} \, dx
  = (I) + (II) + (III) + (IV).   (4.29)
Estimate of (I). When $|z| \le 1$ we have

(I) \le \mathrm{const} \cdot \int_{R \setminus D} 2 \sin^2\Big( \frac{z \cdot ([x] + \mathbf{1})}{2} \Big) \cdot \frac{1}{|x|^{\alpha+d+1}} \, dx
  \le \mathrm{const} \cdot \int_{R \setminus D} \Big( 2 \wedge \tfrac{1}{4} |[x] + \mathbf{1}|^2 |z|^2 \Big) \cdot \frac{1}{|x|^{\alpha+d+1}} \, dx
  \le \mathrm{const} \cdot \int_{R \setminus D} \big( 2 \wedge 2|x|^2 |z|^2 \big) \cdot \frac{1}{|x|^{\alpha+d+1}} \, dx
  \le \mathrm{const} \cdot \int_1^\infty \big( 1 \wedge r^2 |z|^2 \big) \cdot \frac{1}{r^{\alpha+d+1}} r^{d-1} \, dr
  = \mathrm{const} \cdot \int_1^{|z|^{-1}} r^2 |z|^2 \cdot \frac{1}{r^{\alpha+2}} \, dr + \mathrm{const} \cdot \int_{|z|^{-1}}^\infty \frac{1}{r^{\alpha+2}} \, dr
  \le \mathrm{const} \cdot \Big( \int_1^{|z|^{-1}} \frac{1}{r^\alpha} \, dr \Big) \cdot |z|^2 + \mathrm{const} \cdot |z|^{\alpha+1},   (4.30)

where (4.30) $\le \mathrm{const} \cdot |z|^{\alpha+1}$ when $0 < \alpha < 1$, (4.30) $\le \mathrm{const} \cdot |z|^2$ when $1 < \alpha < 2$, and (4.30) $\le \mathrm{const} \cdot |z|^2 \ln(|z|^{-1})$ when $\alpha = 1$.
Estimate of (II). When $|z| \le 1$ we have

(II) \le \mathrm{const} \cdot \int_{R \setminus D} 2 \Big| \sin\Big( \frac{([x] + x + \mathbf{1}) \cdot z}{2} \Big) \sin\Big( \frac{([x] - x + \mathbf{1}) \cdot z}{2} \Big) \Big| \cdot \frac{1}{|x|^{\alpha+d}} \, dx
  \le \mathrm{const} \cdot \int_{R \setminus D} \big( 1 \wedge |[x] + x + \mathbf{1}| \cdot |z| \big) \cdot |z| \cdot \frac{1}{|x|^{\alpha+d}} \, dx
  \le \mathrm{const} \cdot \int_{R \setminus D} \big( 1 \wedge |x| \cdot |z| \big) \cdot |z| \cdot \frac{1}{|x|^{\alpha+d}} \, dx
  \le \mathrm{const} \cdot \int_1^\infty \big( 1 \wedge r|z| \big) \cdot |z| \cdot \frac{1}{r^{\alpha+d}} \cdot r^{d-1} \, dr
  \le \mathrm{const} \cdot \Big( \int_1^{|z|^{-1}} \frac{1}{r^\alpha} \, dr \Big) \cdot |z|^2 + \mathrm{const} \cdot \Big( \int_{|z|^{-1}}^\infty \frac{1}{r^{\alpha+1}} \, dr \Big) \cdot |z|,   (4.31)

where (4.31) $\le \mathrm{const} \cdot |z|^{\alpha+1}$ when $0 < \alpha < 1$, (4.31) $\le \mathrm{const} \cdot |z|^2$ when $1 < \alpha < 2$, and (4.31) $\le \mathrm{const} \cdot |z|^2 \ln(|z|^{-1})$ when $\alpha = 1$.
Estimates of (III) and (IV). Because $|1 - \cos(z \cdot \mathbf{1})| \le \mathrm{const} \cdot |z|^2$, we have (III) $\le \mathrm{const} \cdot |z|^2$. Also, it follows that

(IV) \le \mathrm{const} \cdot \Big( \int_D \frac{1}{|x|^{\alpha+d-2}} \, dx \Big) \cdot |z|^2 = \mathrm{const} \cdot |z|^2.   (4.32)
Case 2: $0 < \alpha < 2$, $d = 1$. This can be handled by the same approach as the previous case, and the end result is that (4.18) holds.
Case 3: $\alpha = 2$. We have

1 - \phi_{\alpha,d}(z) = \frac{1}{d} \sum_{j=1}^d \big( 1 - \cos(z_j) \big) = \frac{2}{d} \sum_{j=1}^d \sin^2\Big( \frac{z_j}{2} \Big) = \frac{1}{2d} |z|^2 + \frac{2}{d} \sum_{j=1}^d \Big( \sin^2\Big( \frac{z_j}{2} \Big) - \frac{z_j^2}{4} \Big).   (4.33)
Because $|x - \sin(x)| \le |x|^3/6$ for all $x \in \mathbb{R}$, when $|z| \le 1$,

\sum_{j=1}^d \Big| \sin^2\Big( \frac{z_j}{2} \Big) - \frac{z_j^2}{4} \Big| = \sum_{j=1}^d \Big| \sin\Big( \frac{z_j}{2} \Big) + \frac{z_j}{2} \Big| \cdot \Big| \sin\Big( \frac{z_j}{2} \Big) - \frac{z_j}{2} \Big| \le \mathrm{const} \cdot \sum_{j=1}^d |z_j|^3 \le \mathrm{const} \cdot |z|^3.   (4.34)
Now we prove assertion (2). For all $0 < \alpha \le 2$ and $d \ge 1$,

1 - \phi_{\alpha,d}(z) \ge \mathrm{const} \cdot \sum_{x \in \mathbb{Z}^d, \; |x_1| + \cdots + |x_d| = 1} (1 - \cos(z \cdot x)) = \mathrm{const} \cdot \sum_{j=1}^d \big( 1 - \cos(z_j) \big).   (4.35)

It can then be easily checked that assertion (2) is true. Q.E.D.
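In the case $\alpha = 2$, $d = 1$, the remainder in (4.17) is explicit: $\phi_{2,1}(z) = \cos(z)$ by (4.16) and $C_{2,1} = 2$ by (4.5), so $R_{2,1}(z) = 1 - \cos(z) - z^2/2$, whose Taylor expansion is of order $z^4$. A quick numerical check of the $|z|^3$ bound claimed in (4.18):

```python
import math

# For alpha = 2, d = 1: phi_{2,1}(z) = cos(z) by (4.16) and C_{2,1} = 2 by (4.5),
# so the remainder (4.17) is R(z) = 1 - cos(z) - z^2/2, of order z^4 near 0.
def R(z):
    return 1.0 - math.cos(z) - z*z/2.0

ratios = [abs(R(j/1000.0))/(j/1000.0)**3 for j in range(1, 1001)]
assert max(ratios) < 1.0   # the |z|^3 bound of (4.18) holds comfortably on |z| <= 1
```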
We are now ready for the next lemma, which is one of the keys to the proof of Theorem 23.
Lemma 25. Fix $T_2 > T_1 > 0$. Then $\varepsilon^{-d} P^{(\varepsilon)}_t(x) \to p_t(x)$ uniformly for $t \in [T_1, T_2]$ and $x \in (\varepsilon\mathbb{Z})^d$ as $\varepsilon \downarrow 0$, where $P^{(\varepsilon)}_t(x)$ is defined in (4.12) and $p_t(x)$ is defined in (1.3).
Proof: Since $P^{(\varepsilon)}_t$ is supported on $(\varepsilon\mathbb{Z})^d$,

\Big( \frac{2\pi}{\varepsilon} \Big)^d P^{(\varepsilon)}_t(x) = \int_{[-\pi/\varepsilon, \pi/\varepsilon]^d} e^{-iz \cdot x} \cdot \psi_{\varepsilon,\alpha,d}(z) \, dz = \int_{[-\pi/\varepsilon, \pi/\varepsilon]^d} e^{-iz \cdot x} \cdot e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t (1 - \phi_{\alpha,d}(\varepsilon z))} \, dz.   (4.36)
By assertion (1) of Lemma 24, there exist $0 < C < d^{-1/2} < \pi$ and $C' > 0$ such that for all $z = (z_1, \cdots, z_d)$ with $|z_1| \le C, \cdots, |z_d| \le C$,

1 - \phi_{\alpha,d}(z) \ge C' |z|^\alpha.   (4.37)

Therefore,

(2\pi)^d \Big| \frac{P^{(\varepsilon)}_t(x)}{\varepsilon^d} - p_t(x) \Big| \le \int_{[-C/\varepsilon, C/\varepsilon]^d} \big| e^{-\nu t |z|^\alpha} - e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t (1 - \phi_{\alpha,d}(\varepsilon z))} \big| \, dz + \int_{[-\pi/\varepsilon, \pi/\varepsilon]^d \setminus [-C/\varepsilon, C/\varepsilon]^d} e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t (1 - \phi_{\alpha,d}(\varepsilon z))} \, dz + \int_{\mathbb{R}^d \setminus [-C/\varepsilon, C/\varepsilon]^d} e^{-\nu t |z|^\alpha} \, dz.
It is easily seen that $\int_{\mathbb{R}^d \setminus [-C/\varepsilon, C/\varepsilon]^d} e^{-\nu t |z|^\alpha} \, dz \to 0$ uniformly for $t \ge T_1$ as $\varepsilon \downarrow 0$. By assertion (2) of Lemma 24, there exists $C'' > 0$ so that

\int_{[-\pi/\varepsilon, \pi/\varepsilon]^d \setminus [-C/\varepsilon, C/\varepsilon]^d} e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t (1 - \phi_{\alpha,d}(\varepsilon z))} \, dz \le \int_{[-\pi/\varepsilon, \pi/\varepsilon]^d \setminus [-C/\varepsilon, C/\varepsilon]^d} e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t \cdot C''} \, dz \le e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t \cdot C''} \cdot \frac{(2\pi)^d}{\varepsilon^d},   (4.38)

which goes to 0 uniformly for $t \ge T_1$ as $\varepsilon \downarrow 0$.
which goes to 0 uniformly for t ≥ T1 as ε ↓ 0.
The last estimate is given by∫[−C/ε,C/ε]d
∣∣∣e−νt|z|α − e−ν·Cα,d·ε−α·t(1−φα,d(εz))∣∣∣ dz
≤∫[−C/ε,C/ε]d
e−νt|z|α∣∣∣1− e−ν·Cα,d·ε−α·t(1−φα,d(εz)−C−1
α,d ·|εz|α)∣∣∣ dz. (4.39)
Due to assertion (1) of Lemma 24, because |εz| < 1, there exists some a, b, C > 0
depends only on α, d so that
∣∣ν · Cα,d · ε−α · t(1− φα,d(εz)− C−1α,d · |εz|α)
∣∣ ≤ C · t · ε−α · Rα,d(εz) ≤ C · t · εa|z|b, (4.40)
which goes to 0 as $\varepsilon \downarrow 0$. For all $t \le T_2$, either

\big| 1 - e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t (1 - \phi_{\alpha,d}(\varepsilon z) - C^{-1}_{\alpha,d} \cdot |\varepsilon z|^\alpha)} \big| \le \big| 1 - e^{-C \cdot T_2 \cdot \varepsilon^a |z|^b} \big|   (4.41)

or

\big| 1 - e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t (1 - \phi_{\alpha,d}(\varepsilon z) - C^{-1}_{\alpha,d} \cdot |\varepsilon z|^\alpha)} \big| \le \big| e^{C \cdot T_2 \cdot \varepsilon^a |z|^b} - 1 \big|   (4.42)

holds. So the integrand in (4.39) converges to 0 pointwise as $\varepsilon \downarrow 0$, uniformly in $t \le T_2$. Besides, by (4.37), for $t \ge T_1$ we have

1_{[-C/\varepsilon, C/\varepsilon]^d}(z) \cdot \big| e^{-\nu t |z|^\alpha} - e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot t (1 - \phi_{\alpha,d}(\varepsilon z))} \big| \le 1_{[-C/\varepsilon, C/\varepsilon]^d}(z) \cdot \big( e^{-\nu T_1 |z|^\alpha} + e^{-\nu \cdot C_{\alpha,d} \cdot \varepsilon^{-\alpha} \cdot T_1 \cdot C' |\varepsilon z|^\alpha} \big) \le e^{-\nu T_1 |z|^\alpha} + e^{-\nu \cdot C_{\alpha,d} \cdot T_1 \cdot C' |z|^\alpha}.   (4.43)

This function is integrable on $\mathbb{R}^d$, so the dominated convergence theorem applies to (4.39) and yields uniform convergence of (4.39) to 0 in $t \in [T_1, T_2]$ as $\varepsilon \downarrow 0$. Finally, we remind the reader that the convergence is uniform in $x \in \mathbb{R}^d$, because none of the estimates in the proof depend on $x$. Q.E.D.
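Lemma 25 can be illustrated numerically in the simplest case $\alpha = 2$, $d = 1$, $\nu = 1$, where $p_t(0) = (4\pi\nu t)^{-1/2}$: the rescaled lattice kernel at the origin approaches this value as $\varepsilon \downarrow 0$ (an illustrative sketch with arbitrary toy parameters, not part of the proof):

```python
import math

def P0(t, eps, nu=1.0):
    """P^(eps)_t(0) for the alpha = 2, d = 1 Poissonized walk of (4.12)."""
    lam = nu*2.0*eps**(-2)*t                 # Poisson rate nu*C_{2,1}*eps^{-2}*t
    nmax = int(lam + 12.0*math.sqrt(lam)) + 30
    dist = {0: 1.0}
    val, w = 0.0, math.exp(-lam)
    for n in range(nmax + 1):
        val += w*dist.get(0, 0.0)            # weight of returning to 0 in n steps
        w *= lam/(n + 1)
        new = {}
        for x, m in dist.items():            # nearest-neighbour step, prob 1/2 each
            new[x - 1] = new.get(x - 1, 0.0) + 0.5*m
            new[x + 1] = new.get(x + 1, 0.0) + 0.5*m
        dist = new
    return val

t, nu = 0.5, 1.0
pt0 = 1.0/math.sqrt(4.0*math.pi*nu*t)        # p_t(0) for the Gaussian density
err_big = abs(P0(t, 0.2)/0.2 - pt0)
err_small = abs(P0(t, 0.1)/0.1 - pt0)
assert err_small < err_big                   # the error shrinks with eps
assert err_small < 0.01
```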
4.1 Proof of Theorem 23
We recall from (1.2) that the solution $u_t(x)$ to (SHE(1)) satisfies

u_t(x) := (p_t * u_0)(x) + \int_{(0,t) \times \mathbb{R}^d} p_{t-s}(y - x) \sigma(u_s(y)) \, \eta(ds\, dy),

where $(p_t * u_0)(x) := \int_{\mathbb{R}^d} p_t(x - y) u_0(y) \, dy$. Fix $\delta > 0$. We introduce the following random fields, indexed by $x \in \mathbb{R}^d$ and $t \ge \delta$:

u^{(1,\delta)}_t(x) := (p_t * u_0)(x) + \int_{(0,t-\delta) \times \mathbb{R}^d} p_{t-s}(y - x) \cdot \sigma(u_s(y)) \, \eta(ds\, dy),   (4.44)
u^{(2,\varepsilon,\delta)}_t(x) := (p_t * u_0)(x) + \int_{(0,t-\delta) \times \mathbb{R}^d} p_{t-s}(y - x) \cdot \sigma(u_s(\varepsilon[y/\varepsilon])) \, \eta(ds\, dy),   (4.45)
u^{(3,\varepsilon,\delta)}_t(x) := (p_t * u_0)(x) + \int_{(0,t-\delta) \times \mathbb{R}^d} p_{t-s}(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon]) \cdot \sigma(u_s(\varepsilon[y/\varepsilon])) \, \eta(ds\, dy),   (4.46)
u^{(4,\varepsilon,\delta)}_t(x) := (p_t * u_0)(x) + \int_{(0,t-\delta) \times \mathbb{R}^d} \frac{P^{(\varepsilon)}_{t-s}(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \cdot \sigma(u_s(\varepsilon[y/\varepsilon])) \, \eta(ds\, dy).   (4.47)
When $t \le \delta$,

u^{(2,\varepsilon,\delta)}_t(x) = u^{(3,\varepsilon,\delta)}_t(x) = u^{(4,\varepsilon,\delta)}_t(x) := (p_t * u_0)(x).   (4.48)

We then define another random field, indexed by $x \in \mathbb{R}^d$ and $t \ge 0$:

u^{(5,\varepsilon)}_t(x) := (p_t * u_0)(x) + \int_{(0,t) \times \mathbb{R}^d} \frac{P^{(\varepsilon)}_{t-s}(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \cdot \sigma(u_s(\varepsilon[y/\varepsilon])) \, \eta(ds\, dy).   (4.49)
Finally, we also recall from (4.11) and (2.23) that

U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])
  = \sum_{y \in (\varepsilon\mathbb{Z})^d} P^{(\varepsilon)}_t(y - \varepsilon[x/\varepsilon]) \cdot u_0(y) + \sum_{k \in \mathbb{Z}^d} \int_0^t P^{(\varepsilon)}_{t-s}(\varepsilon k - \varepsilon[x/\varepsilon]) \cdot \sigma(U^{(\varepsilon)}_s(\varepsilon k)) \, dB^{(\varepsilon)}_s(\varepsilon k)
  = \sum_{y \in (\varepsilon\mathbb{Z})^d} P^{(\varepsilon)}_t(y - \varepsilon[x/\varepsilon]) \cdot u_0(y) + \sum_{k \in \mathbb{Z}^d} \int_{(0,t) \times \mathbb{R}^d} 1_{[\varepsilon k, \varepsilon(k+\mathbf{1}))}(y) \frac{P^{(\varepsilon)}_{t-s}(\varepsilon k - \varepsilon[x/\varepsilon])}{\varepsilon^d} \cdot \sigma(U^{(\varepsilon)}_s(\varepsilon k)) \, \eta(ds, dy)
  = \sum_{y \in (\varepsilon\mathbb{Z})^d} P^{(\varepsilon)}_t(y - \varepsilon[x/\varepsilon]) \cdot u_0(y) + \int_{(0,t) \times \mathbb{R}^d} \frac{P^{(\varepsilon)}_{t-s}(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \cdot \sigma(U^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])) \, \eta(ds, dy).

In the next few subsections, we carry out a few approximations and then proceed to the main proof of (4.9).
4.1.1 The approximation of $u_t(x)$ by $u^{(1,\delta)}_t(x)$
By (2.15), Proposition 6, and Plancherel's theorem, for any $x \in \mathbb{R}^d$ and $t \ge 0$,

\| u^{(1,\delta)}_t(x) - u_t(x) \|_k^2
  \le 4k \cdot \int_{(t-\delta, t)} \int_{\mathbb{R}^d \times \mathbb{R}^d} p_{t-s}(z - x) p_{t-s}(y - x) f(y - z) \| \sigma(u_s(y)) \sigma(u_s(z)) \|_{k/2} \, dy \, dz \, ds
  \le \mathrm{const} \cdot \int_{(t-\delta, t)} \Big( \int_{\mathbb{R}^d} p_{t-s}(y - x) \, dy \cdot \int_{\mathbb{R}^d} p_{t-s}(z - x) \, dz \Big) ds
  \le \mathrm{const} \cdot \delta.   (4.50)
4.1.2 The approximation of $u^{(1,\delta)}_t(x)$ by $u^{(2,\varepsilon,\delta)}_t(x)$
We first note that

| p_t(x) - p_t(x') | = \Big| (2\pi)^{-d} \int_{\mathbb{R}^d} (e^{-iz \cdot x} - e^{-iz \cdot x'}) e^{-\nu t |z|^\alpha} \, dz \Big| \le \mathrm{const} \cdot |x - x'| \cdot \int_{\mathbb{R}^d} |z| e^{-\nu t |z|^\alpha} \, dz,   (4.51)

and that

p_t(x) = (2\pi)^{-d} \int_{\mathbb{R}^d} e^{-iz \cdot x} e^{-\nu t |z|^\alpha} \, dz = (2\pi)^{-d} \int_{\mathbb{R}^d} e^{-i t^{-1/\alpha} y \cdot x} e^{-\nu |y|^\alpha} t^{-d/\alpha} \, dy = t^{-d/\alpha} p_1(t^{-1/\alpha} x).   (4.52)
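For $\alpha = 2$, $d = 1$, the density with Fourier transform $e^{-\nu t |z|^2}$ is Gaussian, $p_t(x) = (4\pi\nu t)^{-1/2} e^{-x^2/(4\nu t)}$, and the scaling relation (4.52) can be verified directly (illustrative check only):

```python
import math

# For alpha = 2, d = 1, the density with Fourier transform e^{-nu t |z|^2} is
# Gaussian: p_t(x) = (4 pi nu t)^{-1/2} exp(-x^2/(4 nu t)).  Check (4.52):
# p_t(x) = t^{-d/alpha} p_1(t^{-1/alpha} x).
nu, alpha = 1.0, 2.0

def p(t, x):
    return math.exp(-x*x/(4.0*nu*t))/math.sqrt(4.0*math.pi*nu*t)

for t in (0.3, 1.7):
    for x in (-1.0, 0.4, 2.5):
        assert abs(p(t, x) - t**(-1.0/alpha)*p(1.0, t**(-1.0/alpha)*x)) < 1e-12
```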
By (4.51), (2.15), Proposition 6, and Plancherel's theorem, for any $x, x' \in \mathbb{R}^d$ with $|x - x'| \le \varepsilon$ and $t \ge 0$,

\| u_t(x) - u_t(x') \|_k^2
  \le 4k \cdot \int_0^t \int_{\mathbb{R}^d \times \mathbb{R}^d} \big| p_{t-s}(y - x) - p_{t-s}(y - x') \big| \cdot \big| p_{t-s}(z - x) - p_{t-s}(z - x') \big| f(y - z) \cdot \| \sigma(u_s(y)) \sigma(u_s(z)) \|_{k/2} \, dy \, dz \, ds
  \le \mathrm{const} \cdot \int_0^t \Big( \int_{\mathbb{R}^d} \big| p_s(y) - p_s(y - (x' - x)) \big| \, dy \Big)^2 ds
  \le \mathrm{const} \cdot \int_0^{\varepsilon'} 4 \, ds + \mathrm{const} \cdot \int_{\varepsilon'}^t \int_{|y| \ge K} \big| p_s(y) - p_s(y - (x' - x)) \big| \, dy \, ds + \mathrm{const} \cdot \int_{\varepsilon'}^t \Big( \int_{|y| < K} dy \cdot \varepsilon \cdot \int_{\mathbb{R}^d} |z| e^{-\nu \varepsilon' |z|^\alpha} \, dz \Big) ds
  = (I) + (II) + (III).   (4.53)
Here (I) is small when $\varepsilon'$ is small. To estimate (II), by (4.52) we have

(II) = \mathrm{const} \cdot \int_{\varepsilon'}^t \int_{|y| \ge K} s^{-d/\alpha} \big| p_1(s^{-1/\alpha} y) - p_1(s^{-1/\alpha} \cdot (y - (x' - x))) \big| \, dy \, ds
  \le \mathrm{const} \cdot \int_{\varepsilon'}^t \Big( \int_{|z| \ge K s^{-1/\alpha}} p_1(z) \, dz + \int_{|z| \ge (K - \varepsilon) s^{-1/\alpha}} p_1(z) \, dz \Big) ds
  \le \mathrm{const} \cdot t \cdot \Big( \int_{|z| \ge K t^{-1/\alpha}} p_1(z) \, dz + \int_{|z| \ge (K - \varepsilon) t^{-1/\alpha}} p_1(z) \, dz \Big).   (4.54)

This quantity can again be made small by selecting $K$ large, for $\varepsilon' > 0$ fixed and all $\varepsilon$ small enough. (III) is small by letting $\varepsilon \downarrow 0$, when $\varepsilon'$ and $K$ are both fixed. These estimates imply that

\lim_{|x - x'| \downarrow 0, \; x, x' \in \mathbb{R}^d} \sup_{t \in [0,T]} \| u_t(x) - u_t(x') \|_k^2 = 0.   (4.55)
By Proposition 6,

\| u^{(1,\delta)}_t(x) - u^{(2,\varepsilon,\delta)}_t(x) \|_k^2
  \le 4k \cdot \int_0^{t-\delta} \int_{\mathbb{R}^d \times \mathbb{R}^d} p_{t-s}(y - x) p_{t-s}(z - x) f(y - z) \cdot \| \sigma(u_s(y)) - \sigma(u_s(\varepsilon[y/\varepsilon])) \|_k \cdot \| \sigma(u_s(z)) - \sigma(u_s(\varepsilon[z/\varepsilon])) \|_k \, dy \, dz \, ds
  \le \mathrm{const} \cdot t \cdot \Big( \sup_{s \in [0, t]} \sup_{y \in \mathbb{R}^d} \| u_s(y) - u_s(\varepsilon[y/\varepsilon]) \|_k \Big)^2.   (4.56)

Equation (4.56) implies, by (4.55), for any $T > 0$,

\lim_{\varepsilon \downarrow 0} \sup_{t \in [0,T]} \sup_{x \in \mathbb{R}^d} \| u^{(1,\delta)}_t(x) - u^{(2,\varepsilon,\delta)}_t(x) \|_k = 0.   (4.57)
4.1.3 The approximation of $u^{(2,\varepsilon,\delta)}_t(x)$ by $u^{(3,\varepsilon,\delta)}_t(x)$
We recall from Theorem 7.3.1 of [12] that for any $K > 0$ there exists $c = c(K) > 0$ such that

p_t(x) \le \frac{c \nu t}{|x|^{\alpha+d}}   for all |x| \ge (\nu t)^{1/\alpha} \cdot K,   (4.58)
p_t(x) \le c (\nu t)^{-d/\alpha}   for all |x| < (\nu t)^{1/\alpha} \cdot K.   (4.59)

Equations (4.58) and (4.59) together imply that for all $\delta \le s \le t$,

p_s(x) \le c (\nu\delta)^{-d/\alpha} 1_{\{z : |z| \le (\nu t)^{1/\alpha}\}}(x) + \frac{c \nu t}{|x|^{\alpha+d}} 1_{\{z : |z| \ge (\nu\delta)^{1/\alpha}\}}(x).   (4.60)

Therefore, for all $\delta \le s \le t$ and $|y - x| \le \frac{1}{3} (\nu\delta)^{1/\alpha}$,

p_s(y) \le c (\nu\delta)^{-d/\alpha} 1_{\{z : |z| \le \frac{3}{2} (\nu t)^{1/\alpha}\}}(x) + \frac{c \nu t}{(|x| - \frac{1}{3} (\nu\delta)^{1/\alpha})^{\alpha+d}} 1_{\{z : |z| \ge \frac{2}{3} (\nu\delta)^{1/\alpha}\}}(x).   (4.61)
By (2.15) and Proposition 6,

\| u^{(2,\varepsilon,\delta)}_t(x) - u^{(3,\varepsilon,\delta)}_t(x) \|_k^2
  \le 4k \cdot \int_0^{t-\delta} \int_{\mathbb{R}^d \times \mathbb{R}^d} \big| p_{t-s}(y - x) - p_{t-s}(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon]) \big| \cdot \big| p_{t-s}(z - x) - p_{t-s}(\varepsilon[z/\varepsilon] - \varepsilon[x/\varepsilon]) \big| \cdot f(y - z) \| \sigma(u_s(\varepsilon[y/\varepsilon])) \sigma(u_s(\varepsilon[z/\varepsilon])) \|_{k/2} \, dy \, dz \, ds
  \le \mathrm{const} \cdot \int_\delta^t \Big( \int_{\mathbb{R}^d} \big| p_s(y) - p_s(\varepsilon[y/\varepsilon] + x - \varepsilon[x/\varepsilon]) \big| \, dy \Big)^2 ds.   (4.62)

Now, by (4.61), for all $\varepsilon < \frac{1}{3} (\nu\delta)^{1/\alpha}$ and $\delta \le s \le t$, we have

p_s(\varepsilon[y/\varepsilon] + x - \varepsilon[x/\varepsilon]) \le c (\nu\delta)^{-d/\alpha} 1_{\{z : |z| \le \frac{3}{2} (\nu t)^{1/\alpha}\}}(y) + \frac{c \nu t}{(|y| - \frac{1}{3} (\nu\delta)^{1/\alpha})^{\alpha+d}} 1_{\{z : |z| \ge \frac{2}{3} (\nu\delta)^{1/\alpha}\}}(y).   (4.63)
Therefore, due to (4.63) and (4.51), we can apply the dominated convergence theorem to obtain

\lim_{\varepsilon \downarrow 0} \sup_{t \in [0,T]} \sup_{x \in \mathbb{R}^d} \| u^{(2,\varepsilon,\delta)}_t(x) - u^{(3,\varepsilon,\delta)}_t(x) \|_k = 0.   (4.64)
4.1.4 The approximation of $u^{(3,\varepsilon,\delta)}_t(x)$ by $u^{(4,\varepsilon,\delta)}_t(x)$
By (2.15) and Proposition 6,

\| u^{(3,\varepsilon,\delta)}_t(x) - u^{(4,\varepsilon,\delta)}_t(x) \|_k^2
  \le 4k \cdot \int_0^{t-\delta} \int_{\mathbb{R}^d \times \mathbb{R}^d} \Big| p_{t-s}(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon]) - \frac{P^{(\varepsilon)}_{t-s}(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \Big| \cdot f(y - z) \cdot \Big| p_{t-s}(\varepsilon[z/\varepsilon] - \varepsilon[x/\varepsilon]) - \frac{P^{(\varepsilon)}_{t-s}(\varepsilon[z/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \Big| \cdot \| \sigma(u_s(\varepsilon[y/\varepsilon])) \sigma(u_s(\varepsilon[z/\varepsilon])) \|_{k/2} \, dy \, dz \, ds
  \le \mathrm{const} \cdot \int_\delta^t \Big( \int_{\mathbb{R}^d} \Big| p_s(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon]) - \frac{P^{(\varepsilon)}_s(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \Big| \, dy \Big)^2 ds
  = \mathrm{const} \cdot \int_\delta^t \Big( \int_{\mathbb{R}^d} \Big| p_s(\varepsilon[y/\varepsilon]) - \frac{P^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])}{\varepsilon^d} \Big| \, dy \Big)^2 ds.   (4.65)
By (4.12), for $\delta \le s \le t$,

\int_{\mathbb{R}^d \setminus [-M, M]^d} \frac{P^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])}{\varepsilon^d} \, dy \le \sum_{y = (y_1, \cdots, y_d) \in (\varepsilon\mathbb{Z})^d, \; |y_i| > \varepsilon([M/\varepsilon] - 1) \text{ for some } 1 \le i \le d} P^{(\varepsilon)}_s(y)
  \le \sum_{y = (y_1, \cdots, y_d) \in \mathbb{Z}^d, \; |y_i| \ge [M/\varepsilon] - 1 \text{ for some } 1 \le i \le d} P^{(1)}_s(y)
  = \sum_{y \in \mathbb{Z}^d, \; |y_i| \ge [M/\varepsilon] - 1 \text{ for some } i} \; \sum_{n=0}^\infty (P_1)^n_{0,y} \, \frac{e^{-\nu C_{\alpha,d} s} (\nu C_{\alpha,d} s)^n}{n!}
  \le \sum_{y \in \mathbb{Z}^d, \; |y_i| \ge [M/\varepsilon] - 1 \text{ for some } i} \; \sum_{n=0}^N (P_1)^n_{0,y} \, \frac{e^{-\nu C_{\alpha,d} s} (\nu C_{\alpha,d} s)^n}{n!} + \sum_{n=N+1}^\infty \frac{e^{-\nu C_{\alpha,d} s} (\nu C_{\alpha,d} s)^n}{n!},   (4.66)

where $P_1 := \big( p^{(1,\alpha,d)}_{i,j} \big)_{i,j \in \mathbb{Z}^d}$. We may select $N$ large so that

\sum_{n=N+1}^\infty \frac{e^{-\nu C_{\alpha,d} s} (\nu C_{\alpha,d} s)^n}{n!}   (4.67)

is small for all $\delta \le s \le t$, by Markov's inequality for Poisson random variables. We can then make (4.66) uniformly small for all $\varepsilon$ small enough and $\delta \le s \le t$ by selecting $M$ large.
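The tail (4.67) is the tail of a Poisson variable whose rate stays bounded over $s \in [\delta, t]$; a quick numerical illustration of how it is controlled (with an arbitrary toy rate $\lambda$, purely for illustration):

```python
import math

# The tail (4.67) is a Poisson(lam) tail with lam bounded over s in [delta, t];
# it is small for large N (lam = 5.0 is an arbitrary example rate).
lam = 5.0

def pois_tail(N):
    return 1.0 - sum(math.exp(-lam)*lam**n/math.factorial(n) for n in range(N + 1))

assert pois_tail(50) < 1e-12     # tiny for N well beyond lam
assert pois_tail(20) <= lam/20   # consistent with Markov: P(X > N) <= E[X]/N
```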
By (4.61), for all $\varepsilon < \frac{1}{3} (\nu\delta)^{1/\alpha}$ and $\delta \le s \le t$, we have

p_s(\varepsilon[y/\varepsilon]) \le c (\nu\delta)^{-d/\alpha} 1_{\{z : |z| \le \frac{3}{2} (\nu t)^{1/\alpha}\}}(y) + \frac{c \nu t}{(|y| - \frac{1}{3} (\nu\delta)^{1/\alpha})^{\alpha+d}} 1_{\{z : |z| \ge \frac{2}{3} (\nu\delta)^{1/\alpha}\}}(y).   (4.68)

We also note that

\int_{\mathbb{R}^d} \frac{P^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])}{\varepsilon^d} \, dy = 1.   (4.69)
Therefore, (4.68), (4.69), and the fact that (4.66) becomes small as $M$ gets large, uniformly for all $\varepsilon < \frac{1}{3} (\nu\delta)^{1/\alpha}$ and $\delta \le s \le t$, imply

\lim_{M \to \infty} \sup_{\varepsilon < \frac{1}{3} (\nu\delta)^{1/\alpha}} \int_\delta^t \Bigg( \Big( \int_{\mathbb{R}^d} \Big| p_s(\varepsilon[y/\varepsilon]) - \frac{P^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])}{\varepsilon^d} \Big| \, dy \Big)^2 - \Big( \int_{[-M,M]^d} \Big| p_s(\varepsilon[y/\varepsilon]) - \frac{P^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])}{\varepsilon^d} \Big| \, dy \Big)^2 \Bigg) ds = 0.   (4.70)
By Lemma 25, for any fixed $M > 0$,

\lim_{\varepsilon \downarrow 0} \int_\delta^t \Big( \int_{[-M,M]^d} \Big| p_s(\varepsilon[y/\varepsilon]) - \frac{P^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])}{\varepsilon^d} \Big| \, dy \Big)^2 ds = 0.   (4.71)

Therefore, (4.65), (4.70), and (4.71) together imply

\lim_{\varepsilon \downarrow 0} \sup_{t \in [0,T]} \sup_{x \in \mathbb{R}^d} \| u^{(3,\varepsilon,\delta)}_t(x) - u^{(4,\varepsilon,\delta)}_t(x) \|_k = 0.   (4.72)
4.1.5 The approximation of $u^{(4,\varepsilon,\delta)}_t(x)$ by $u^{(5,\varepsilon)}_t(x)$
By (2.15) and Proposition 6, when $t \ge \delta$,

\| u^{(4,\varepsilon,\delta)}_t(x) - u^{(5,\varepsilon)}_t(x) \|_k^2
  \le 4k \cdot \int_{t-\delta}^t \int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{P^{(\varepsilon)}_{t-s}(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \cdot \frac{P^{(\varepsilon)}_{t-s}(\varepsilon[z/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \cdot f(y - z) \cdot \| \sigma(u_s(\varepsilon[y/\varepsilon])) \sigma(u_s(\varepsilon[z/\varepsilon])) \|_{k/2} \, dy \, dz \, ds
  \le \mathrm{const} \cdot \int_0^\delta \int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{P^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])}{\varepsilon^d} \cdot \frac{P^{(\varepsilon)}_s(\varepsilon[z/\varepsilon])}{\varepsilon^d} \, dy \, dz \, ds
  \le \mathrm{const} \cdot \delta.   (4.73)

When $t < \delta$,

\| u^{(4,\varepsilon,\delta)}_t(x) - u^{(5,\varepsilon)}_t(x) \|_k^2 \le \mathrm{const} \cdot \int_0^t \int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{P^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])}{\varepsilon^d} \cdot \frac{P^{(\varepsilon)}_s(\varepsilon[z/\varepsilon])}{\varepsilon^d} \, dy \, dz \, ds \le \mathrm{const} \cdot \delta.   (4.74)
4.1.6 The approximation of $\int_{\mathbb{R}^d} p_t(x - y) u_0(y) \, dy$
We note that

\Big| \sum_{y \in (\varepsilon\mathbb{Z})^d} P^{(\varepsilon)}_t(\varepsilon[x/\varepsilon] - y) \cdot u_0(y) - \int_{\mathbb{R}^d} p_t(x - y) u_0(y) \, dy \Big|
  = \Big| \int_{\mathbb{R}^d} \frac{P^{(\varepsilon)}_t(\varepsilon[x/\varepsilon] - \varepsilon[y/\varepsilon])}{\varepsilon^d} \cdot u_0(\varepsilon[y/\varepsilon]) \, dy - \int_{\mathbb{R}^d} p_t(x - y) \cdot u_0(y) \, dy \Big|
  \le \int_{\mathbb{R}^d} \Big| \frac{P^{(\varepsilon)}_t(\varepsilon[x/\varepsilon] - \varepsilon[y/\varepsilon])}{\varepsilon^d} - p_t(y - x) \Big| \cdot u_0(\varepsilon[y/\varepsilon]) \, dy + \int_{\mathbb{R}^d} p_t(x - y) \cdot \big| u_0(y) - u_0(\varepsilon[y/\varepsilon]) \big| \, dy
  \le \mathrm{const} \cdot \int_{\mathbb{R}^d} \Big| \frac{P^{(\varepsilon)}_t(\varepsilon[x/\varepsilon] - \varepsilon[y/\varepsilon])}{\varepsilon^d} - p_t(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon]) \Big| \, dy + \int_{\mathbb{R}^d} p_t(x - y) \cdot | u_0(y) - u_0(\varepsilon[y/\varepsilon]) | \, dy + \mathrm{const} \cdot \int_{\mathbb{R}^d} \big| p_t(\varepsilon[x/\varepsilon] - \varepsilon[y/\varepsilon]) - p_t(x - y) \big| \, dy
  = (I) + (II) + (III).   (4.75)
Using the same method as we used to bound (4.65), it can be shown that for any $T_2 > T_1 > 0$,

\lim_{\varepsilon \downarrow 0} \sup_{T_1 \le t \le T_2} |(I)| = 0.   (4.76)
Due to (4.60), for any $T_2 > T_1 > 0$ and $T_1 \le t \le T_2$,

|(II)| \le \int_{\mathbb{R}^d} \Big( c (\nu T_1)^{-d/\alpha} 1_{\{z : |z| \le (\nu T_2)^{1/\alpha}\}}(x - y) + \frac{c \nu T_2}{|x - y|^{\alpha+d}} 1_{\{z : |z| \ge (\nu T_1)^{1/\alpha}\}}(x - y) \Big) \cdot \big| u_0(y) - u_0(\varepsilon[y/\varepsilon]) \big| \, dy.   (4.77)

Therefore, by the continuity of $u_0$ and the dominated convergence theorem,

\lim_{\varepsilon \downarrow 0} \sup_{T_1 \le t \le T_2} |(II)| = 0.   (4.78)
Our estimate of (III) follows the same approach as was used to obtain (4.64). Thus,

\lim_{\varepsilon \downarrow 0} \sup_{T_1 \le t \le T_2} |(III)| = 0.   (4.79)
By (4.76), (4.78), and (4.79), we have proved

\lim_{\varepsilon \downarrow 0} \sup_{T_1 \le t \le T_2} \Big| \sum_{y \in (\varepsilon\mathbb{Z})^d} P^{(\varepsilon)}_t(\varepsilon[x/\varepsilon] - y) \cdot u_0(y) - \int_{\mathbb{R}^d} p_t(x - y) u_0(y) \, dy \Big| = 0.   (4.80)
We also point out here that if, in particular, $u_0 \equiv c$, then

\sum_{y \in (\varepsilon\mathbb{Z})^d} P^{(\varepsilon)}_t(\varepsilon[x/\varepsilon] - y) \cdot u_0(y) = \int_{\mathbb{R}^d} p_t(x - y) u_0(y) \, dy = c.   (4.81)
4.1.7 Proof of Theorem 23, final step
First we let

v^{(\varepsilon,\delta)}_t(x) := \| u_t(x) - u^{(1,\delta)}_t(x) \|_k + \| u^{(1,\delta)}_t(x) - u^{(2,\varepsilon,\delta)}_t(x) \|_k + \| u^{(2,\varepsilon,\delta)}_t(x) - u^{(3,\varepsilon,\delta)}_t(x) \|_k + \| u^{(3,\varepsilon,\delta)}_t(x) - u^{(4,\varepsilon,\delta)}_t(x) \|_k + \| u^{(4,\varepsilon,\delta)}_t(x) - u^{(5,\varepsilon)}_t(x) \|_k + \Big| \sum_{y \in (\varepsilon\mathbb{Z})^d} P^{(\varepsilon)}_t(\varepsilon[x/\varepsilon] - y) \cdot u_0(y) - \int_{\mathbb{R}^d} p_t(x - y) u_0(y) \, dy \Big|.   (4.82)
By (4.82) and Proposition 6,

\| u_t(x) - U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon]) \|_k^2
  \le 2 \big( v^{(\varepsilon,\delta)}_t(x) \big)^2 + 2 \Big\| \int_{(0,t) \times \mathbb{R}^d} \frac{P^{(\varepsilon)}_{t-s}(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \cdot \big( \sigma(u_s(\varepsilon[y/\varepsilon])) - \sigma(U^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])) \big) \, \eta(ds\, dy) \Big\|_k^2
  \le 2 \big( v^{(\varepsilon,\delta)}_t(x) \big)^2 + \mathrm{const} \cdot \int_0^t \int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{P^{(\varepsilon)}_s(\varepsilon[y/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \cdot \frac{P^{(\varepsilon)}_s(\varepsilon[z/\varepsilon] - \varepsilon[x/\varepsilon])}{\varepsilon^d} \cdot f(y - z) \cdot \big\| \big( \sigma(u_s(\varepsilon[y/\varepsilon])) - \sigma(U^{(\varepsilon)}_s(\varepsilon[y/\varepsilon])) \big) \cdot \big( \sigma(u_s(\varepsilon[z/\varepsilon])) - \sigma(U^{(\varepsilon)}_s(\varepsilon[z/\varepsilon])) \big) \big\|_{k/2} \, dy \, dz \, ds
  \le 2 \big( v^{(\varepsilon,\delta)}_t(x) \big)^2 + \mathrm{const} \cdot \int_{\delta'}^t \Big( \sup_{z \in \mathbb{R}^d} \| u_s(\varepsilon[z/\varepsilon]) - U^{(\varepsilon)}_s(\varepsilon[z/\varepsilon]) \|_k \Big)^2 ds + \mathrm{const} \cdot \delta'.   (4.83)
Now we replace $x$ with $\varepsilon[x/\varepsilon]$ in (4.83), and then take the supremum over $x\in\mathbb{R}^d$, to see that
$$
\begin{aligned}
\Bigl(\sup_{x\in\mathbb{R}^d}\|u_t(\varepsilon[x/\varepsilon])-U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])\|_k\Bigr)^2
&=\sup_{x\in\mathbb{R}^d}\|u_t(\varepsilon[x/\varepsilon])-U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])\|^2_k\\
&\le2\Bigl(\sup_{x\in\mathbb{R}^d}v^{(\varepsilon,\delta)}_t(\varepsilon[x/\varepsilon])\Bigr)^2+\text{const}\cdot\int_{\delta'}^t\Bigl(\sup_{z\in\mathbb{R}^d}\|u_s(\varepsilon[z/\varepsilon])-U_s(\varepsilon[z/\varepsilon])\|_k\Bigr)^2ds+\text{const}\cdot\delta'. \tag{4.84}
\end{aligned}
$$
Here all the constants in (4.84) are independent of our choice of $t\in[0,T]$. In view of (2.15) and (3.14), we may now apply Gronwall's inequality to (4.84) (see Appendix B) to see that, for all $\delta'\le t\le T$,
$$
\begin{aligned}
\Bigl(\sup_{x\in\mathbb{R}^d}\|u_t(\varepsilon[x/\varepsilon])-U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])\|_k\Bigr)^2
&\le\Bigl(2\sup_{t\in[\delta',T]}\bigl(\sup_{x\in\mathbb{R}^d}v^{(\varepsilon,\delta)}_t(\varepsilon[x/\varepsilon])\bigr)^2+\text{const}\cdot\delta'\Bigr)\cdot e^{\text{const}\cdot(t-\delta')}\\
&\le\Bigl(2\sup_{t\in[\delta',T]}\bigl(\sup_{x\in\mathbb{R}^d}v^{(\varepsilon,\delta)}_t(\varepsilon[x/\varepsilon])\bigr)^2+\text{const}\cdot\delta'\Bigr)\cdot e^{\text{const}\cdot T}. \tag{4.85}
\end{aligned}
$$
By (4.50), (4.56), (4.64), (4.72), (4.74), and (4.80),
$$\limsup_{\varepsilon\downarrow0}\sup_{t\in[\delta',T]}\Bigl(\sup_{x\in\mathbb{R}^d}v^{(\varepsilon,\delta)}_t(\varepsilon[x/\varepsilon])\Bigr)^2\le\text{const}\cdot\delta^2. \tag{4.86}$$
It follows from (4.85) and (4.86) that, for any $T_2>T_1>0$,
$$
\begin{aligned}
\limsup_{\varepsilon\downarrow0}\sup_{t\in[T_1,T_2]}\Bigl(\sup_{x\in\mathbb{R}^d}\|u_t(\varepsilon[x/\varepsilon])-U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])\|_k\Bigr)^2
&\le\inf_{0\le\delta'\le T_1}\limsup_{\varepsilon\downarrow0}\sup_{t\in[\delta',T_2]}\Bigl(\sup_{x\in\mathbb{R}^d}\|u_t(\varepsilon[x/\varepsilon])-U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])\|_k\Bigr)^2\\
&\le\inf_{0\le\delta'\le T_1}\limsup_{\varepsilon\downarrow0}\Bigl(2\sup_{t\in[\delta',T_2]}\bigl(\sup_{x\in\mathbb{R}^d}v^{(\varepsilon,\delta)}_t(\varepsilon[x/\varepsilon])\bigr)^2+\text{const}\cdot\delta'\Bigr)\cdot e^{\text{const}\cdot T_2}\\
&\le\inf_{0\le\delta'\le T_1}\bigl(\text{const}\cdot\delta^2+\text{const}\cdot\delta'\bigr)\cdot e^{\text{const}\cdot T_2}\\
&\le\text{const}\cdot\delta^2. \tag{4.87}
\end{aligned}
$$
The first assertion of Theorem 23 is proved because $\delta>0$ can be chosen arbitrarily. If, in particular, $u_0\equiv c$ for some constant $c\ge0$, then (4.83), (4.84), (4.85), and (4.86) become
$$\|u_t(x)-U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])\|^2_k\le2\bigl(v^{(\varepsilon,\delta)}_t(x)\bigr)^2+\text{const}\cdot\int_0^t\Bigl(\sup_{z\in\mathbb{R}^d}\|u_s(\varepsilon[z/\varepsilon])-U_s(\varepsilon[z/\varepsilon])\|_k\Bigr)^2ds, \tag{4.88}$$
$$\Bigl(\sup_{x\in\mathbb{R}^d}\|u_t(\varepsilon[x/\varepsilon])-U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])\|_k\Bigr)^2\le2\Bigl(\sup_{x\in\mathbb{R}^d}v^{(\varepsilon,\delta)}_t(\varepsilon[x/\varepsilon])\Bigr)^2+\text{const}\cdot\int_0^t\Bigl(\sup_{z\in\mathbb{R}^d}\|u_s(\varepsilon[z/\varepsilon])-U_s(\varepsilon[z/\varepsilon])\|_k\Bigr)^2ds, \tag{4.89}$$
and, for all $0\le t\le T$,
$$\Bigl(\sup_{x\in\mathbb{R}^d}\|u_t(\varepsilon[x/\varepsilon])-U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])\|_k\Bigr)^2\le\Bigl(2\sup_{t\in[0,T]}\bigl(\sup_{x\in\mathbb{R}^d}v^{(\varepsilon,\delta)}_t(\varepsilon[x/\varepsilon])\bigr)^2\Bigr)\cdot e^{\text{const}\cdot T}. \tag{4.90}$$
For all $T>0$,
$$\limsup_{\varepsilon\downarrow0}\sup_{t\in[0,T]}\Bigl(\sup_{x\in\mathbb{R}^d}\|u_t(\varepsilon[x/\varepsilon])-U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])\|_k\Bigr)^2\le\text{const}\cdot\delta^2. \tag{4.91}$$
Thus the second assertion of Theorem 23 is proved, by letting δ→ 0 in (4.91).
4.2 Proof of Theorem 2

Let $u_t(x)$ and $v_t(x)$ satisfy the assumptions in Theorem 2. By Theorem 23, we can find solutions $U^{(\varepsilon)}_t(x)$ and $V^{(\varepsilon)}_t(x)$ to (SDE$(\varepsilon)$), respectively, such that (4.9) holds.

By Theorem 1 applied to $U^{(\varepsilon)}_t(x)$ and $V^{(\varepsilon)}_t(x)$, we have for any $x_1,x_2,\dots,x_n\in(\varepsilon\mathbb{Z})^d$, $t_1,t_2,\dots,t_n\ge0$, and $k_1,\dots,k_n\in[0,\infty)$,
$$\mathrm{E}\bigl[U^{(\varepsilon)}_{t_1}(x_1)^{k_1}\cdots U^{(\varepsilon)}_{t_n}(x_n)^{k_n}\bigr]\le\mathrm{E}\bigl[V^{(\varepsilon)}_{t_1}(x_1)^{k_1}\cdots V^{(\varepsilon)}_{t_n}(x_n)^{k_n}\bigr]. \tag{4.92}$$

By Lemma 22 and Theorem 23, Theorem 2 is proved. Note that the assumptions (3.89), (3.90), (3.91), and (3.92) are all satisfied, due to (2.15) and Theorem 23. Theorem 23 also implies the convergence in probability of $U^{(\varepsilon)}_t(\varepsilon[x/\varepsilon])$ to $u_t(x)$.
CHAPTER 5
L^k(P) APPROXIMATION FROM SHE(1) TO SHE(2)

Throughout this chapter, SHE(1) and SHE(2) denote the stochastic heat equations defined in Chapter 1 on page 2. The goal of this chapter is to prove Theorem 3. Before we proceed to its proof, we first prove the following lemma.
Lemma 26. Let $p_t(x)$ be defined as in (1.3), and let $f_\beta(z):=|z|^{-\beta}$, $\beta<d$. Then we have
$$\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}p_t(y)\,p_t(z)\,f_\beta(y-z)\,dy\,dz=\text{const}\cdot t^{-\beta/\alpha}. \tag{5.1}$$
Proof: We first show that
$$\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}p_1(y)\,p_1(z)\,f_\beta(y-z)\,dy\,dz<\infty. \tag{5.2}$$
By (4.58) and (4.59), we have
$$p_1(x)\le\frac{c}{|x|^{\alpha+d}}\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(x)+c\,\mathbf{1}_{B(0;1)}(x) \tag{5.3}$$
for some constant $c>0$ depending only on $\nu$. (5.2) then follows from the following three approximations.
Approximation 1.
$$\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}\mathbf{1}_{B(0;1)}(y)\,\mathbf{1}_{B(0;1)}(z)\,f_\beta(y-z)\,dy\,dz\le\int_{B(0;1)}\Bigl(\int_{B(0;1)}f_\beta(z)\,dz\Bigr)dy<\infty. \tag{5.4}$$
Approximation 2.
$$
\begin{aligned}
&\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}\mathbf{1}_{B(0;1)}(y)\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(z)\,\frac{1}{|z|^{\alpha+d}}\,f_\beta(y-z)\,dy\,dz
=\int_{\mathbb{R}^d\setminus B(0;1)}\frac{1}{|z|^{\alpha+d}}\Bigl(\int_{\mathbb{R}^d}\mathbf{1}_{B(0;1)}(y+z)\,f_\beta(y)\,dy\Bigr)dz\\
&\quad\le\text{const}\cdot\int_{\mathbb{R}^d\setminus B(0;1)}\frac{1}{|z|^{\alpha+d}}\Bigl(\mathbf{1}_{B(0;2)}(z)+\frac{1}{|z|^{\beta}}\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;2)}(z)\Bigr)dz
\le\text{const}+\text{const}\cdot\int_{\mathbb{R}^d\setminus B(0;2)}\frac{dz}{|z|^{\alpha+d+\beta}}<\infty. \tag{5.5}
\end{aligned}
$$
Approximation 3. First we note that, by Hölder's inequality, for all $z\in\mathbb{R}^d$,
$$g(z):=\int_{\mathbb{R}^d}\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(y-z)\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(y)\,\frac{1}{|y-z|^{\alpha+d}}\cdot\frac{1}{|y|^{\alpha+d}}\,dy\le\int_{\mathbb{R}^d\setminus B(0;1)}\frac{dy}{|y|^{2(\alpha+d)}}<\infty. \tag{5.6}$$
We note that $g(z)$ is a radial function. For $|z|>3$,
$$
\begin{aligned}
g(z)&=g(|z|e_1)=\int_{\mathbb{R}^d}\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(y'-|z|e_1)\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(y')\,\frac{1}{\bigl|y'-|z|e_1\bigr|^{\alpha+d}}\cdot\frac{1}{|y'|^{\alpha+d}}\,dy'\\
&=\text{const}\cdot|z|^{d}\int_{\mathbb{R}^d}\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}\Bigl(\frac{|z|}{3}y-|z|e_1\Bigr)\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}\Bigl(\frac{|z|}{3}y\Bigr)\,\frac{1}{\bigl|\frac{|z|}{3}y-|z|e_1\bigr|^{\alpha+d}}\cdot\frac{1}{|z|^{\alpha+d}\,|y|^{\alpha+d}}\,dy\\
&=\text{const}\cdot|z|^{-2\alpha-d}\int_{\mathbb{R}^d}\mathbf{1}_{\mathbb{R}^d\setminus B(0;3/|z|)}(y-3e_1)\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;3/|z|)}(y)\,\frac{dy}{|y-3e_1|^{\alpha+d}\,|y|^{\alpha+d}}\\
&=\text{const}\cdot|z|^{-2\alpha-d}\Bigl(\int_{\mathbb{R}^d}\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(y-3e_1)\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(y)\,\frac{dy}{|y-3e_1|^{\alpha+d}\,|y|^{\alpha+d}}\\
&\qquad\qquad+\int_{3/|z|<|y-3e_1|<1}\frac{dy}{|y-3e_1|^{\alpha+d}\,|y|^{\alpha+d}}+\int_{3/|z|<|y|<1}\frac{dy}{|y-3e_1|^{\alpha+d}\,|y|^{\alpha+d}}\Bigr)\\
&\le\text{const}\cdot|z|^{-2\alpha-d}\Bigl(g(3e_1)+\int_{3/|z|<|y'|<1}\frac{dy'}{|y'|^{\alpha+d}}+\int_{3/|z|<|y|<1}\frac{dy}{|y|^{\alpha+d}}\Bigr)\\
&\le\text{const}\cdot|z|^{-2\alpha-d}\cdot|z|^{\alpha}=\text{const}\cdot|z|^{-\alpha-d}. \tag{5.7}
\end{aligned}
$$
Therefore, by (5.6) and (5.7),
$$
\begin{aligned}
&\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(z)\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(y)\,\frac{1}{|z|^{\alpha+d}}\cdot\frac{1}{|y|^{\alpha+d}}\,f_\beta(y-z)\,dy\,dz\\
&\quad=\text{const}\cdot\int_{\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(z+y)\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(y)\,\frac{1}{|z+y|^{\alpha+d}}\cdot\frac{1}{|y|^{\alpha+d}}\,dy\Bigr)f_\beta(z)\,dz\\
&\quad=\text{const}\cdot\int_{\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(y)\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;1)}(y-z)\,\frac{1}{|y|^{\alpha+d}}\cdot\frac{1}{|y-z|^{\alpha+d}}\,dy\Bigr)f_\beta(z)\,dz\\
&\quad\le\text{const}\cdot\int_{\mathbb{R}^d}\Bigl(\mathbf{1}_{B(0;3)}(z)+\frac{1}{|z|^{\alpha+d}}\,\mathbf{1}_{\mathbb{R}^d\setminus B(0;3)}(z)\Bigr)f_\beta(z)\,dz\\
&\quad\le\text{const}\cdot\Bigl(\int_{B(0;3)}\frac{dz}{|z|^{\beta}}+\int_{\mathbb{R}^d\setminus B(0;3)}\frac{dz}{|z|^{\alpha+d+\beta}}\Bigr)<\infty. \tag{5.8}
\end{aligned}
$$
Hence (5.2) is proved. Now, by (4.52), the scaling property of $p_t(x)$, we have
$$
\begin{aligned}
\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}p_t(y)\,p_t(z)\,f_\beta(y-z)\,dy\,dz
&=t^{-2d/\alpha}\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}p_1(t^{-1/\alpha}y)\,p_1(t^{-1/\alpha}z)\,f_\beta(y-z)\,dy\,dz\\
&=t^{-2d/\alpha}\,t^{2d/\alpha}\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}p_1(y')\,p_1(z')\,f_\beta(t^{1/\alpha}y'-t^{1/\alpha}z')\,dy'\,dz'\\
&=t^{-\beta/\alpha}\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}p_1(y')\,p_1(z')\,f_\beta(y'-z')\,dy'\,dz'. \tag{5.9}
\end{aligned}
$$
This completes the proof. Q.E.D.
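For $\alpha=2$ and $d=1$, Lemma 26 can be illustrated by Monte Carlo. Writing the double integral as $\mathrm{E}[|Y-Z|^{-\beta}]$ with $Y,Z$ i.i.d. with density $p_t$, the scaling argument of (5.9) predicts $I(1)/I(4)=4^{\beta/2}=2^{\beta}$. The sketch assumes the Gaussian normalization $p_t=N(0,2\nu t)$ (matching $\alpha=2$); all numerical choices are illustrative. With common random numbers, the simulated ratio matches the prediction exactly, not just in the limit:

```python
import math, random

# Lemma 26 for alpha = 2, d = 1:
# I(t) = E[|Y - Z|^{-beta}], Y, Z i.i.d. N(0, 2*nu*t), so Y - Z ~ N(0, 4*nu*t)
# and I(t) = (4*nu*t)^{-beta/2} * E[|N(0,1)|^{-beta}] = const * t^{-beta/alpha}.

def mc_integral(t, beta, samples, nu=1.0):
    sigma = math.sqrt(4.0 * nu * t)      # standard deviation of Y - Z
    return sum(abs(sigma * g) ** (-beta) for g in samples) / len(samples)

random.seed(0)
beta = 0.5
gs = [random.gauss(0.0, 1.0) for _ in range(10000)]   # common random numbers

ratio = mc_integral(1.0, beta, gs) / mc_integral(4.0, beta, gs)
print(ratio, 2.0 ** beta)   # the ratio equals 4^{beta/2} = 2^beta
```

The exact agreement is no accident: with common random numbers the estimator factors as $(4\nu t)^{-\beta/2}$ times a $t$-independent average, which is precisely the change of variables used in (5.9).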
5.1 Proof of Theorem 3

Let $u^\delta_t(x)$ solve (SHE(1)) with spatially homogeneous noise $\eta^\delta_\beta$, whose covariance kernel is given by
$$f^\delta_\beta(x):=(h^\delta_\beta*h^\delta_\beta)(x), \tag{5.10}$$
where
$$h^\delta_\beta(x):=\frac{C_2}{\delta+|x|^{(d+\beta)/2}},\qquad\delta>0. \tag{5.11}$$
Here $C_2$ is the same constant as in Theorem 3. First we would like to show that $f^\delta_\beta$ satisfies all the assumptions in (SHE(1)), as in the following theorem.

Theorem 27. $f^\delta_\beta$ is a bounded, continuous, symmetric, positive definite function.
Proof: We prove the assertions of the theorem separately.

1. Proof of boundedness: for all $x\in\mathbb{R}^d$, by Hölder's inequality,
$$
\begin{aligned}
f^\delta_\beta(x)&=\int_{\mathbb{R}^d}\frac{1}{\delta+|x-y|^{(d+\beta)/2}}\cdot\frac{1}{\delta+|y|^{(d+\beta)/2}}\,dy\\
&\le\Bigl(\int_{\mathbb{R}^d}\frac{dy}{(\delta+|x-y|^{(d+\beta)/2})^2}\Bigr)^{1/2}\cdot\Bigl(\int_{\mathbb{R}^d}\frac{dy}{(\delta+|y|^{(d+\beta)/2})^2}\Bigr)^{1/2}\\
&=\int_{\mathbb{R}^d}\frac{dy}{(\delta+|y|^{(d+\beta)/2})^2}\le C<\infty. \tag{5.12}
\end{aligned}
$$
2. Proof of symmetry:
$$
\begin{aligned}
f^\delta_\beta(x)&=\int_{\mathbb{R}^d}\frac{1}{\delta+|x-y|^{(d+\beta)/2}}\cdot\frac{1}{\delta+|y|^{(d+\beta)/2}}\,dy
=\int_{\mathbb{R}^d}\frac{1}{\delta+|-x+y|^{(d+\beta)/2}}\cdot\frac{1}{\delta+|-y|^{(d+\beta)/2}}\,dy\\
&=\int_{\mathbb{R}^d}\frac{1}{\delta+|-x-y'|^{(d+\beta)/2}}\cdot\frac{1}{\delta+|y'|^{(d+\beta)/2}}\,dy'
=f^\delta_\beta(-x). \tag{5.13}
\end{aligned}
$$
3. Proof of continuity: let $x_n\to x$; then there exists $C>0$ such that $|x_n|\le C$ for all $n\in\mathbb{N}$. Because
$$\frac{1}{\delta+|x_n-y|^{(d+\beta)/2}}\cdot\frac{1}{\delta+|y|^{(d+\beta)/2}}\le\frac{1}{\delta+\max(|y|-C,0)^{(d+\beta)/2}}\cdot\frac{1}{\delta+|y|^{(d+\beta)/2}}, \tag{5.14}$$
where the right-hand side of (5.14) is integrable on $\mathbb{R}^d$, the dominated convergence theorem gives $f^\delta_\beta(x_n)\to f^\delta_\beta(x)$ as $n\to\infty$.
4. Proof of positive-definiteness: the proof of this part is included in Appendix A (Theorem 29). Q.E.D.
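The first three assertions can also be sanity-checked numerically. The sketch below takes $d=1$, $\beta=0.5$, $\delta=0.5$, and $C_2=1$ (all illustrative choices, not the parameters of the text), evaluates $f^\delta_\beta=h^\delta_\beta*h^\delta_\beta$ by quadrature, and tests the bound from (5.12) and the symmetry (5.13):

```python
import math

# d = 1, beta = 0.5, delta = 0.5, C_2 = 1: illustrative choices only
p_exp = (1 + 0.5) / 2                 # exponent (d + beta)/2 = 0.75
delta = 0.5

def h(x):
    # h_beta^delta in d = 1, with C_2 = 1
    return 1.0 / (delta + abs(x) ** p_exp)

L, n = 200.0, 200000
step = 2.0 * L / n
ys = [-L + (k + 0.5) * step for k in range(n)]

def f(x):
    # f_beta^delta(x) = (h * h)(x), midpoint quadrature truncated to [-L, L]
    return step * sum(h(x - y) * h(y) for y in ys)

B = step * sum(h(y) ** 2 for y in ys)  # the Cauchy-Schwarz bound in (5.12)

f0, f1, fm1, f3 = f(0.0), f(1.0), f(-1.0), f(3.0)
print(f0 <= B + 1e-9, f1 <= B, f3 <= B)   # boundedness; equality at x = 0
print(abs(f1 - fm1) < 1e-9)               # symmetry (5.13)
```

Note that the Hölder step in (5.12) is an equality at $x=0$, which the quadrature reproduces: $f^\delta_\beta(0)$ coincides with the bound $B$.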
Now we recall Section 2.2: given a space–time white noise $\xi$ defined on $(0,\infty)\times\mathbb{R}^d$, we may construct $\eta_\beta$ and $\eta^\delta_\beta$ by
$$\eta_\beta(\varphi):=\int_{(0,\infty)\times\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}\varphi(s,y)\,h_\beta(y-x)\,dy\Bigr)\xi(ds\,dx), \tag{5.15}$$
$$\eta^\delta_\beta(\psi):=\int_{(0,\infty)\times\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}\psi(s,y)\,h^\delta_\beta(y-x)\,dy\Bigr)\xi(ds\,dx), \tag{5.16}$$
for all $\varphi$, $\psi$ such that
$$(t,x)\mapsto\int_{\mathbb{R}^d}\varphi(t,y)\,h_\beta(y-x)\,dy\in L^2((0,\infty)\times\mathbb{R}^d), \tag{5.17}$$
$$(t,x)\mapsto\int_{\mathbb{R}^d}\psi(t,y)\,h^\delta_\beta(y-x)\,dy\in L^2((0,\infty)\times\mathbb{R}^d). \tag{5.18}$$
Note that all the results we have shown in Section 2.2 are for $h_\beta$, but they remain true if we replace $h_\beta$ with $h^\delta_\beta$. From the mild forms of $u_t(x)$ and $u^\delta_t(x)$, we have
$$u_t(x)=\int_{\mathbb{R}^d}p_t(x-y)\,u_0(y)\,dy+\int_{(0,t)\times\mathbb{R}^d}p_{t-s}(x-y)\,\sigma(u_s(y))\,\eta_\beta(dy\,ds), \tag{5.19}$$
$$u^\delta_t(x)=\int_{\mathbb{R}^d}p_t(x-y)\,u_0(y)\,dy+\int_{(0,t)\times\mathbb{R}^d}p_{t-s}(x-y)\,\sigma(u^\delta_s(y))\,\eta^\delta_\beta(dy\,ds). \tag{5.20}$$
By (2.21),
$$
\begin{aligned}
u_t(x)&=\int_{\mathbb{R}^d}p_t(x-y)\,u_0(y)\,dy+\int_{(0,t)\times\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}p_{t-s}(x-z)\,\sigma(u_s(z))\,h_\beta(z-y)\,dz\Bigr)\xi(dy\,ds),\\
u^\delta_t(x)&=\int_{\mathbb{R}^d}p_t(x-y)\,u_0(y)\,dy+\int_{(0,t)\times\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}p_{t-s}(x-z)\,\sigma(u^\delta_s(z))\,h^\delta_\beta(z-y)\,dz\Bigr)\xi(dy\,ds). \tag{5.21}
\end{aligned}
$$
Therefore, by the BDG inequality for space–time stochastic integrals against space–time white noise (see Proposition 4.4 of [11]), Minkowski's integral inequality, and (2.15),
$$
\begin{aligned}
\|u_t(x)-u^\delta_t(x)\|^2_k
&\le\text{const}\cdot\int_{(0,t)\times\mathbb{R}^d}\Bigl\|\int_{\mathbb{R}^d}p_{t-s}(x-z)\bigl(\sigma(u_s(z))\,h_\beta(z-y)-\sigma(u^\delta_s(z))\,h^\delta_\beta(z-y)\bigr)\,dz\Bigr\|^2_k\,dy\,ds\\
&\le2\,\text{const}\cdot\int_{(0,t)\times\mathbb{R}^d}\Bigl\|\int_{\mathbb{R}^d}p_{t-s}(x-z)\bigl(\sigma(u_s(z))\,h_\beta(z-y)-\sigma(u_s(z))\,h^\delta_\beta(z-y)\bigr)\,dz\Bigr\|^2_k\,dy\,ds\\
&\quad+2\,\text{const}\cdot\int_{(0,t)\times\mathbb{R}^d}\Bigl\|\int_{\mathbb{R}^d}p_{t-s}(x-z)\bigl(\sigma(u_s(z))\,h^\delta_\beta(z-y)-\sigma(u^\delta_s(z))\,h^\delta_\beta(z-y)\bigr)\,dz\Bigr\|^2_k\,dy\,ds\\
&\le2\,\text{const}\cdot\int_{(0,t)\times\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}p_{t-s}(x-z)\,\|\sigma(u_s(z))\|_k\cdot\bigl|h_\beta(z-y)-h^\delta_\beta(z-y)\bigr|\,dz\Bigr)^2dy\,ds\\
&\quad+2\,\text{const}\cdot\int_{(0,t)\times\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}p_{t-s}(x-z)\,\|\sigma(u_s(z))-\sigma(u^\delta_s(z))\|_k\cdot h^\delta_\beta(z-y)\,dz\Bigr)^2dy\,ds\\
&\le\text{const}\cdot\int_{(0,t)\times\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}p_{t-s}(x-z)\cdot\bigl|h_\beta(z-y)-h^\delta_\beta(z-y)\bigr|\,dz\Bigr)^2dy\,ds\\
&\quad+\text{const}\cdot\int_{(0,t)\times\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}p_{t-s}(x-z)\,\|u_s(z)-u^\delta_s(z)\|_k\cdot h^\delta_\beta(z-y)\,dz\Bigr)^2dy\,ds\\
&=:(\mathrm{I})+(\mathrm{II}). \tag{5.22}
\end{aligned}
$$
By Lemma 26,
$$
\begin{aligned}
\int_{(0,t)\times\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d}p_{t-s}(x-z)\,h_\beta(z-y)\,dz\Bigr)^2dy\,ds
&=\int_{(0,t)\times\mathbb{R}^d}\Bigl(\int_{\mathbb{R}^d\times\mathbb{R}^d}p_s(x-z_1)\,p_s(x-z_2)\,h_\beta(z_1-y)\,h_\beta(z_2-y)\,dz_1\,dz_2\Bigr)dy\,ds\\
&=\int_{(0,t)\times\mathbb{R}^d\times\mathbb{R}^d}p_s(x-z_1)\,p_s(x-z_2)\Bigl(\int_{\mathbb{R}^d}h_\beta(z_1-y)\,h_\beta(z_2-y)\,dy\Bigr)dz_1\,dz_2\,ds\\
&=\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}p_s(x-z_1)\,p_s(x-z_2)\,f_\beta(z_1-z_2)\,dz_1\,dz_2\,ds\\
&=\int_0^t\int_{\mathbb{R}^d\times\mathbb{R}^d}p_s(z_1)\,p_s(z_2)\,f_\beta(z_1-z_2)\,dz_1\,dz_2\,ds\\
&=\text{const}\cdot\int_0^t s^{-\beta/\alpha}\,ds<\infty. \tag{5.23}
\end{aligned}
$$
Therefore, $(\mathrm{I})\downarrow0$ as $\delta\downarrow0$, by the dominated convergence theorem. By Lemma 26, the BDG inequality for space–time white noise, and the inequality $h^\delta_\beta\le h_\beta$, we have
$$
\begin{aligned}
(\mathrm{II})&\le\text{const}\cdot\int_0^t\Bigl(\sup_{x'\in\mathbb{R}^d}\|u_s(x')-u^\delta_s(x')\|_k\Bigr)^2\Bigl(\int_{\mathbb{R}^d\times\mathbb{R}^d}p_{t-s}(x-y)\,p_{t-s}(x-z)\,f_\beta(y-z)\,dy\,dz\Bigr)ds\\
&\le\text{const}\cdot\int_0^t\Bigl(\sup_{x'\in\mathbb{R}^d}\|u_s(x')-u^\delta_s(x')\|_k\Bigr)^2(t-s)^{-\beta/\alpha}\,ds. \tag{5.24}
\end{aligned}
$$
As a result,
$$\sup_{x\in\mathbb{R}^d}\|u_t(x)-u^\delta_t(x)\|^2_k\le(\mathrm{I})+\text{const}\cdot\int_0^t\Bigl(\sup_{x\in\mathbb{R}^d}\|u_s(x)-u^\delta_s(x)\|_k\Bigr)^2(t-s)^{-\beta/\alpha}\,ds. \tag{5.25}$$
By Gronwall’s inequality (see Appendix A.2),
supx∈Rd‖ut(x)− uδ
t (x)‖k ≤ (I) · exp(
const ·∫ t
0s−β/α ds
). (5.26)
Therefore, we let δ ↓ 0 to complete the proof. Q.E.D.
5.2 Proof of Theorem 4

Let $u_t(x)$ and $v_t(x)$ satisfy the assumptions in Theorem 4. By Theorem 3, we can find solutions $u^\delta_t(x)$ and $v^\delta_t(x)$ to (SHE(1)) such that (1.16) holds.

By Theorem 2 applied to $u^\delta_t(x)$ and $v^\delta_t(x)$, we have for any $x_1,x_2,\dots,x_n\in\mathbb{R}^d$, $t_1,t_2,\dots,t_n\ge0$, and $k_1,\dots,k_n\in[0,\infty)$,
$$\mathrm{E}\bigl[u^\delta_{t_1}(x_1)^{k_1}\cdots u^\delta_{t_n}(x_n)^{k_n}\bigr]\le\mathrm{E}\bigl[v^\delta_{t_1}(x_1)^{k_1}\cdots v^\delta_{t_n}(x_n)^{k_n}\bigr]. \tag{5.27}$$

Theorem 4 then follows from Theorem 2 and Lemma 22. Again, the assumptions (3.89), (3.90), (3.91), and (3.92) are all satisfied, due to (2.15) and Theorem 3. Theorem 3 also implies the convergence in probability of $u^\delta_t(x)$ to $u_t(x)$.
APPENDIX A
FOURIER TRANSFORM
Recall that the Schwartz space $\mathscr{S}(\mathbb{R}^d)$ is the collection of all smooth functions $f:\mathbb{R}^d\to\mathbb{C}$ such that
$$\sup_{x\in\mathbb{R}^d}\bigl|x^\alpha D^\beta f(x)\bigr|<\infty \tag{A.1}$$
for all multi-indices $\alpha$, $\beta$.
The space $\mathscr{S}(\mathbb{R}^d)$ is a metric space; we may construct its metric topology as follows (see also [17]). First we define the metrics $\rho_{\alpha,\beta}(f,g):=\sup_{x\in\mathbb{R}^d}|x^\alpha D^\beta(f-g)(x)|$ on $\mathscr{S}(\mathbb{R}^d)$ for every pair of multi-indices $\alpha$, $\beta$, and then enumerate these metrics as $\rho_1,\rho_2,\dots$. Then we define a new metric on $\mathscr{S}(\mathbb{R}^d)$:
$$\rho(f,g):=\sum_{n=1}^{\infty}\frac{1}{2^n}\cdot\frac{\rho_n(f,g)}{1+\rho_n(f,g)}. \tag{A.2}$$
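As a concrete finite illustration of this construction, the sketch below truncates the sum in (A.2) to a few multi-index pairs and replaces $D^\beta$ by forward differences on a bounded grid; the grid, the pairs, and the test functions are all illustrative simplifications, not part of the text.

```python
import math

def seminorm(vals, xs, a, b, h):
    # discrete analogue of sup_x |x^a D^b f(x)|: b-fold forward differences
    v = list(vals)
    for _ in range(b):
        v = [(v[i + 1] - v[i]) / h for i in range(len(v) - 1)]
    return max(abs(xs[i] ** a * v[i]) for i in range(len(v)))

def rho(f, g, xs, h, pairs):
    # rho(f, g) = sum_n 2^{-n} rho_n/(1 + rho_n), rho_n enumerated by `pairs`
    total = 0.0
    for n, (a, b) in enumerate(pairs, start=1):
        diff = [f(x) - g(x) for x in xs]
        r = seminorm(diff, xs, a, b, h)
        total += 2.0 ** (-n) * r / (1.0 + r)
    return total

h = 0.01
xs = [-10.0 + h * i for i in range(int(20.0 / h) + 1)]
pairs = [(0, 0), (1, 0), (0, 1), (2, 1)]   # a few multi-index pairs (a, b)

gauss = lambda x: math.exp(-x * x)
zero = lambda x: 0.0
d_same = rho(gauss, gauss, xs, h, pairs)   # 0.0, as for any metric
d_zero = rho(gauss, zero, xs, h, pairs)    # positive, and < sum_n 2^{-n} < 1
print(d_same, d_zero)
```

The weights $2^{-n}$ and the map $r\mapsto r/(1+r)$ are what make the infinite sum in (A.2) converge while preserving the topology generated by the individual seminorms.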
We define the Fourier transform $\mathscr{F}$ on $\mathscr{S}(\mathbb{R}^d)$ by
$$\mathscr{F}[f](\xi):=\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}f(x)\,e^{-ix\cdot\xi}\,dx\qquad\forall\xi\in\mathbb{R}^d. \tag{A.3}$$
It is known that $\mathscr{F}[f]\in\mathscr{S}(\mathbb{R}^d)$ (see [16], [17]).

The space of tempered distributions, namely the dual space $\mathscr{S}'(\mathbb{R}^d)$ of the Schwartz space, is the space of all continuous linear functionals $\mathscr{S}(\mathbb{R}^d)\to\mathbb{C}$.

Given $f\in\mathscr{S}'(\mathbb{R}^d)$, the Fourier transform of $f$ is defined by (see [17])
$$\langle\mathscr{F}[f],\varphi\rangle=\langle f,\mathscr{F}[\varphi]\rangle \tag{A.4}$$
for all $\varphi\in\mathscr{S}(\mathbb{R}^d)$. When $f$ is a function on $\mathbb{R}^d$, $\langle f,\varphi\rangle:=\int_{\mathbb{R}^d}f(x)\varphi(x)\,dx$. Note that (A.4) extends the definition of the Fourier transform on $\mathscr{S}(\mathbb{R}^d)$.
Here we would like to prove two results. The first is a well-known identity (see, e.g., Section 3.3 of [7]), and the second is part of the proof of Theorem 27.

Theorem 28. Let $f(x)=|x|^{-\beta}$, $0<\beta<d$, $x\in\mathbb{R}^d$. Then $\mathscr{F}[f](\xi)=\text{const}\cdot|\xi|^{-d+\beta}$.
Proof: We first note that for $k>0$,
$$\Gamma(k)=\int_0^\infty s^{k-1}e^{-s}\,ds=\int_0^\infty(t|\xi|^2)^{k-1}e^{-t|\xi|^2}\,|\xi|^2\,dt=|\xi|^{2k}\int_0^\infty t^{k-1}e^{-t|\xi|^2}\,dt. \tag{A.5}$$
Letting $k=\beta/2>0$ in (A.5), we have
$$|\xi|^{-\beta}=\Gamma\Bigl(\frac{\beta}{2}\Bigr)^{-1}\int_0^\infty t^{\beta/2-1}e^{-t|\xi|^2}\,dt.$$
Now, viewing $|\xi|^{-\beta}$ as a tempered distribution, for any $\varphi\in\mathscr{S}(\mathbb{R}^d)$ we have
$$
\begin{aligned}
\Bigl\langle\frac{1}{|\xi|^{\beta}},\mathscr{F}[\varphi]\Bigr\rangle
&=\int_{\mathbb{R}^d}\Gamma\Bigl(\frac{\beta}{2}\Bigr)^{-1}\Bigl(\int_0^\infty t^{\frac{\beta}{2}-1}e^{-t|\xi|^2}\,dt\Bigr)\mathscr{F}[\varphi](\xi)\,d\xi\\
&=\Gamma\Bigl(\frac{\beta}{2}\Bigr)^{-1}\int_0^\infty t^{\frac{\beta}{2}-1}\Bigl(\int_{\mathbb{R}^d}e^{-t|\xi|^2}\,\mathscr{F}[\varphi](\xi)\,d\xi\Bigr)dt\\
&=\Gamma\Bigl(\frac{\beta}{2}\Bigr)^{-1}\int_0^\infty t^{\frac{\beta}{2}-1}\Bigl(\int_{\mathbb{R}^d}e^{-t|\xi|^2}\Bigl(\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}\varphi(x)\,e^{-ix\cdot\xi}\,dx\Bigr)d\xi\Bigr)dt\\
&=\Gamma\Bigl(\frac{\beta}{2}\Bigr)^{-1}\int_0^\infty t^{\frac{\beta}{2}-1}\Bigl(\int_{\mathbb{R}^d}\varphi(x)\Bigl(\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}e^{-t|\xi|^2}e^{-ix\cdot\xi}\,d\xi\Bigr)dx\Bigr)dt\\
&=\Gamma\Bigl(\frac{\beta}{2}\Bigr)^{-1}\int_0^\infty t^{\frac{\beta}{2}-1}\Bigl(\int_{\mathbb{R}^d}\varphi(x)\,\frac{1}{(2t)^{d/2}}\,e^{-|x|^2/4t}\,dx\Bigr)dt\\
&=\Gamma\Bigl(\frac{\beta}{2}\Bigr)^{-1}\int_{\mathbb{R}^d}\varphi(x)\Bigl(\int_0^\infty t^{\frac{\beta}{2}-1}\,\frac{1}{(2t)^{d/2}}\,e^{-|x|^2/4t}\,dt\Bigr)dx, \tag{A.6}
\end{aligned}
$$
where we have used the Fubini–Tonelli theorem three times. Applying the substitution $s=|x|^2/4t$, (A.6) equals
$$
\begin{aligned}
&\Gamma\Bigl(\frac{\beta}{2}\Bigr)^{-1}\frac{1}{2^{d/2}}\int_{\mathbb{R}^d}\varphi(x)\Bigl(\int_0^\infty\Bigl(\frac{|x|^2}{4s}\Bigr)^{\frac{\beta}{2}-\frac{d}{2}-1}e^{-s}\,\frac{|x|^2}{4}\cdot\frac{1}{s^2}\,ds\Bigr)dx\\
&\quad=\Gamma\Bigl(\frac{\beta}{2}\Bigr)^{-1}2^{d/2-\beta}\int_{\mathbb{R}^d}\varphi(x)\,\frac{1}{|x|^{d-\beta}}\Bigl(\int_0^\infty s^{\frac{d-\beta}{2}-1}e^{-s}\,ds\Bigr)dx\\
&\quad=\Gamma\Bigl(\frac{\beta}{2}\Bigr)^{-1}2^{d/2-\beta}\,\Gamma\Bigl(\frac{d-\beta}{2}\Bigr)\int_{\mathbb{R}^d}\varphi(x)\,\frac{dx}{|x|^{d-\beta}}
=\Bigl\langle\varphi,\;2^{d/2-\beta}\,\frac{\Gamma\bigl(\frac{d-\beta}{2}\bigr)}{\Gamma\bigl(\frac{\beta}{2}\bigr)}\cdot\frac{1}{|x|^{d-\beta}}\Bigr\rangle. \tag{A.7}
\end{aligned}
$$
The calculations above show that
$$\mathscr{F}[f](x)=2^{d/2-\beta}\,\frac{\Gamma\bigl(\frac{d-\beta}{2}\bigr)}{\Gamma\bigl(\frac{\beta}{2}\bigr)}\cdot\frac{1}{|x|^{d-\beta}}. \tag{A.8}$$
Q.E.D.
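Theorem 28 can be sanity-checked numerically through the pairing (A.4). The sketch below takes $d=1$ and the Gaussian test function $\varphi(x)=e^{-x^2/2}$, which satisfies $\mathscr{F}[\varphi]=\varphi$ under the convention (A.3); the theorem then predicts $\int|\xi|^{-\beta}\varphi(\xi)\,d\xi=C_\beta\int\varphi(x)\,|x|^{\beta-1}\,dx$ with $C_\beta=2^{1/2-\beta}\,\Gamma(\tfrac{1-\beta}{2})/\Gamma(\tfrac{\beta}{2})$. The quadrature scheme and the choice $\beta=0.4$ are illustrative.

```python
import math

def integral_power_gauss(p, n=20000, L=12.0):
    # int_{-inf}^{inf} |x|^p e^{-x^2/2} dx for p > -3/4, via the substitution
    # x = v^4, which removes the singularity at 0; midpoint rule on [0, L^{1/4}]
    m = 4
    V = L ** (1.0 / m)
    h = V / n
    s = sum(m * ((k + 0.5) * h) ** (m * (p + 1) - 1)
            * math.exp(-((k + 0.5) * h) ** (2 * m) / 2.0) for k in range(n))
    return 2.0 * h * s

beta = 0.4
lhs = integral_power_gauss(-beta)
C = 2.0 ** (0.5 - beta) * math.gamma((1.0 - beta) / 2.0) / math.gamma(beta / 2.0)
rhs = C * integral_power_gauss(beta - 1.0)
print(lhs, rhs)   # the two pairings agree to quadrature accuracy
```

The agreement of the two sides is exactly the distributional identity $\langle|\xi|^{-\beta},\mathscr{F}[\varphi]\rangle=\langle\varphi,C_\beta|x|^{\beta-d}\rangle$ of (A.7), specialized to $d=1$.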
Theorem 29. Let $f^\delta_\beta(x)=(h^\delta_\beta*h^\delta_\beta)(x)$, where $h^\delta_\beta(x):=\bigl(\delta+|x|^{(d+\beta)/2}\bigr)^{-1}$, $0<\beta<d$, $x\in\mathbb{R}^d$. Then $f^\delta_\beta$ is a positive definite function.
Proof: Let $h^\delta_{\beta,N}(x):=\frac{1}{\delta+|x|^{(d+\beta)/2}}\,\mathbf{1}_{[-N,N]^d}(x)$ and $f^\delta_{\beta,N}:=h^\delta_{\beta,N}*h^\delta_{\beta,N}$. We have
$$\mathscr{F}[f^\delta_{\beta,N}](\xi)=\bigl|\mathscr{F}[h^\delta_{\beta,N}](\xi)\bigr|^2, \tag{A.9}$$
and thus for any $\varphi\in\mathscr{S}(\mathbb{R}^d)$,
$$\int_{\mathbb{R}^d}\mathscr{F}[f^\delta_{\beta,N}](\xi)\,\varphi(\xi)\,d\xi=\int_{\mathbb{R}^d}\bigl|\mathscr{F}[h^\delta_{\beta,N}](\xi)\bigr|^2\varphi(\xi)\,d\xi. \tag{A.10}$$
Because $h^\delta_{\beta,N}\to h^\delta_\beta$ in $L^2(\mathbb{R}^d)$, we have $\mathscr{F}[h^\delta_{\beta,N}]\to\mathscr{F}[h^\delta_\beta]$ in $L^2(\mathbb{R}^d)$ (see, for example, [17]). Therefore, along with the fact that $\varphi$ is bounded, we have
$$\lim_{N\to\infty}\int_{\mathbb{R}^d}\bigl|\mathscr{F}[h^\delta_{\beta,N}](\xi)\bigr|^2\varphi(\xi)\,d\xi=\int_{\mathbb{R}^d}\bigl|\mathscr{F}[h^\delta_\beta](\xi)\bigr|^2\varphi(\xi)\,d\xi. \tag{A.11}$$
Besides, by the monotone convergence theorem, $f^\delta_{\beta,N}(x)\uparrow f^\delta_\beta(x)$ as $N\to\infty$ for every $x\in\mathbb{R}^d$. This shows
$$
\begin{aligned}
\lim_{N\to\infty}\Bigl|\int_{\mathbb{R}^d}\mathscr{F}[f^\delta_{\beta,N}](\xi)\,\varphi(\xi)\,d\xi-\int_{\mathbb{R}^d}\mathscr{F}[f^\delta_\beta](\xi)\,\varphi(\xi)\,d\xi\Bigr|
&=\lim_{N\to\infty}\Bigl|\int_{\mathbb{R}^d}f^\delta_{\beta,N}(x)\,\mathscr{F}[\varphi](x)\,dx-\int_{\mathbb{R}^d}f^\delta_\beta(x)\,\mathscr{F}[\varphi](x)\,dx\Bigr|\\
&\le\lim_{N\to\infty}\int_{\mathbb{R}^d}\bigl|f^\delta_{\beta,N}(x)-f^\delta_\beta(x)\bigr|\cdot\bigl|\mathscr{F}[\varphi](x)\bigr|\,dx=0, \tag{A.12}
\end{aligned}
$$
by the dominated convergence theorem. (A.10), (A.11), and (A.12) together show that the Fourier transform of $f^\delta_\beta$ is given by the finite positive measure $\bigl|\mathscr{F}[h^\delta_\beta](\xi)\bigr|^2\,d\xi$ (the measure is finite because $\mathscr{F}[h^\delta_\beta]\in L^2(\mathbb{R}^d)$). Therefore, $f^\delta_\beta$ is positive definite by Bochner's theorem (see, for example, Theorem 2.7 of [18]). Q.E.D.
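The mechanism of this proof, that the spectrum of a self-convolution is a square and hence nonnegative, has a discrete analogue that can be checked directly. The sketch below works on a periodic grid (so the DFT plays the role of $\mathscr{F}$ and circular convolution the role of $*$); the grid size and the parameters $d=1$, $\beta=0.5$, $\delta=0.5$ are illustrative choices.

```python
import cmath

# On a periodic grid, f = h (*) h has DFT equal to (DFT h)^2, which is
# nonnegative because h is real and even; hence f is a positive
# semidefinite (circulant) kernel, mirroring Theorem 29.

N, delta, beta = 128, 0.5, 0.5
L = 10.0
xs = [L * (k / N) - L / 2 for k in range(N)]
h = [1.0 / (delta + abs(x) ** ((1 + beta) / 2)) for x in xs]
h = h[N // 2:] + h[:N // 2]          # rotate so h is even about index 0

def dft(a):
    return [sum(a[n] * cmath.exp(-2j * cmath.pi * m * n / N)
                for n in range(N)) for m in range(N)]

# circular convolution f = h (*) h
f = [sum(h[n] * h[(m - n) % N] for n in range(N)) for m in range(N)]

spec_f = dft(f)
spec_h = dft(h)
ok = all(v.real > -1e-6 and abs(v.imag) < 1e-6 for v in spec_f)
print(ok)   # True: the spectrum of f is real and nonnegative
```

This is the finite-dimensional shadow of (A.9): the spectral values of `f` coincide with the squares of the (real) spectral values of `h`.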
APPENDIX B
GRONWALL’S INEQUALITY FOR
MEASURABLE FUNCTIONS
Gronwall’s inequality is a famous one with many variations; the one we need for this
thesis is proved below. We also refer the reader to Appendix 5 of [4].
Theorem 30. Let $u:[a,b]\to\mathbb{R}$ be a bounded Lebesgue measurable function, let $g:[a,b]\to\mathbb{R}$ be a nondecreasing function, and let $\beta:[a,b]\to\mathbb{R}$ be a nonnegative, integrable Lebesgue measurable function. Assume that for all $t\in[a,b]$ we have
$$u(t)\le g(t)+\int_a^t\beta(s)u(s)\,ds. \tag{B.1}$$
Then for all $t\in[a,b]$,
$$u(t)\le g(t)\,e^{\int_a^t\beta(s)\,ds}. \tag{B.2}$$
Proof: Let $v(z):=\bigl(\int_a^z\beta(s)u(s)\,ds\bigr)\,e^{-\int_a^z\beta(s)\,ds}$, which is differentiable for a.e. $z\in[a,b]$, and its derivative, where it exists, is given by
$$v'(z)=\beta(z)u(z)\,e^{-\int_a^z\beta(s)\,ds}-\beta(z)\Bigl(\int_a^z\beta(s)u(s)\,ds\Bigr)e^{-\int_a^z\beta(s)\,ds}. \tag{B.3}$$
Therefore, by (B.1), for a.e. $z\in[a,b]$,
$$v'(z)\le\beta(z)g(z)\,e^{-\int_a^z\beta(s)\,ds}. \tag{B.4}$$
Because $v$ is absolutely continuous on $[a,b]$, for all $t\in[a,b]$ we have
$$v(t)=v(a)+\int_a^t v'(z)\,dz\le\int_a^t\beta(z)g(z)\,e^{-\int_a^z\beta(s)\,ds}\,dz. \tag{B.5}$$
Therefore, for all $t\in[a,b]$,
$$
\begin{aligned}
u(t)&\le g(t)+\int_a^t\beta(s)u(s)\,ds=g(t)+v(t)\,e^{\int_a^t\beta(s)\,ds}\\
&\le g(t)+\int_a^t\beta(z)g(z)\,e^{\int_z^t\beta(s)\,ds}\,dz
\le g(t)+g(t)\int_a^t\beta(z)\,e^{\int_z^t\beta(s)\,ds}\,dz\\
&=g(t)+g(t)\Bigl(-1+e^{\int_a^t\beta(s)\,ds}\Bigr)
=g(t)\,e^{\int_a^t\beta(s)\,ds}. \tag{B.6}
\end{aligned}
$$
Q.E.D.
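The extremal case of Theorem 30, where (B.1) holds with equality, can be checked numerically. The sketch below uses the illustrative choices $g(t)=1+t$, $\beta\equiv2$, $[a,b]=[0,2]$, builds $u$ by an Euler-type iteration on the integral equation, and verifies the conclusion (B.2):

```python
import math

# Extremal case of Gronwall: u(t) = g(t) + int_a^t beta(s) u(s) ds,
# with g(t) = 1 + t (nondecreasing) and beta = 2 (nonnegative) on [0, 2].
a, b, n = 0.0, 2.0, 20000
h = (b - a) / n

us, integral = [], 0.0
for k in range(n + 1):
    t = a + k * h
    u = (1.0 + t) + integral          # u = g(t) + accumulated integral
    us.append(u)
    integral += 2.0 * u * h           # left-endpoint Riemann sum of beta*u

# Gronwall bound (B.2): u(t) <= g(t) * exp(int_0^t 2 ds) = (1 + t) e^{2t}
bound_ok = all(us[k] <= (1.0 + a + k * h) * math.exp(2.0 * (a + k * h)) + 1e-6
               for k in range(n + 1))
print(bound_ok)   # True
```

For these choices the exact solution is $u(t)=\tfrac{3}{2}e^{2t}-\tfrac{1}{2}$, which indeed lies below $(1+t)e^{2t}$ on $(0,2]$, with equality only at $t=0$.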
REFERENCES
[1] D. Conus, M. Joseph, D. Khoshnevisan, and S.-Y. Shiu, On the chaotic character of the stochastic heat equation, II, Probab. Theory Rel. Fields, 156 (2013), pp. 483–533.
[2] J. T. Cox, K. Fleischmann, and A. Greven, Comparison of interacting diffusions and an application to their ergodic theory, Probab. Theory Rel. Fields, 105 (1996), pp. 513–528.
[3] R. C. Dalang, Extending the martingale measure stochastic integral with applications to spatially homogeneous s.p.d.e.'s, Electron. J. Probab., 4 (1999), pp. 1–29.
[4] S. N. Ethier and T. G. Kurtz, Markov Processes: Characterization and Convergence, Wiley, 1986.
[5] M. Foondun and D. Khoshnevisan, On the stochastic heat equation with spatially-colored random forcing, Trans. Amer. Math. Soc., 365 (2013), pp. 409–458.
[6] C. Geiß and R. Manthey, Comparison theorems for stochastic differential equations in finite and infinite dimensions, Stochastic Process. Appl., 53 (1994), pp. 23–35.
[7] I. M. Gelfand and N. Y. Vilenkin, Generalized Functions, vol. 1, Academic Press, 1964.
[8] N. Georgiou, M. Joseph, D. Khoshnevisan, and S.-Y. Shiu, Semi-discrete semi-linear parabolic SPDEs, Ann. Appl. Probab., 25 (2015), pp. 2959–3006.
[9] I. I. Gikhman and A. V. Skorokhod, Introduction to the Theory of Random Processes, W. B. Saunders Company, 1969.
[10] M. Joseph, D. Khoshnevisan, and C. Mueller, Strong invariance and noise comparison principles for some parabolic SPDEs, 2014.
[11] D. Khoshnevisan, Analysis of Stochastic Partial Differential Equations, vol. 119 of CBMS Regional Conference Series in Mathematics, American Mathematical Society, Washington, DC, 2014.
[12] V. Kolokoltsov, Markov Processes, Semigroups and Generators, De Gruyter, 2011.
[13] D. Revuz and M. Yor, Continuous Martingales and Brownian Motion, vol. 293, Springer-Verlag, Berlin, 2004.
[14] R. L. Schilling and L. Partzsch, Brownian Motion, De Gruyter, Berlin, 2012.
[15] T. Shiga and A. Shimizu, Infinite dimensional stochastic differential equations and their applications, J. Math. Kyoto Univ., 20 (1980), pp. 395–416.
[16] E. M. Stein and R. Shakarchi, Fourier Analysis: An Introduction, Princeton Lectures in Analysis, Princeton University Press, 2011.
[17] E. M. Stein and G. Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, 1971.
[18] S. R. S. Varadhan, Probability Theory, Courant Lecture Notes, AMS, 2001.
[19] J. B. Walsh, An introduction to stochastic partial differential equations, in École d'été de Probabilités de Saint-Flour, XIV–1984, vol. 1180 of Lecture Notes in Math., Springer-Verlag, Berlin, 1986, pp. 265–439.