Modeling with Itô Stochastic Differential Equations, §2.1–2.3
E. Allen
Presentation by T. Perälä, 13.10.2009
Postgraduate seminar on applied mathematics 2009
Outline
Introduction to Stochastic Processes (§2.1)
Discrete Stochastic Processes (§2.2)
Markov process
Markov chains
Continuous Stochastic Processes (§2.3)
Continuous Markov process
Wiener process
Introduction (§2.1)
A stochastic process is a family of random variables $\{X(t),\ t \in T\}$ defined on a probability space $(\Omega, \mathcal{A}, P)$
If the index set $T$ is discrete, the stochastic process is called discrete
If the index set $T$ is continuous, the stochastic process is called continuous
The random variables $X(t)$ can be discrete-valued or continuous-valued at each $t \in T$
Solutions of stochastic differential equations are stochastic processes
Discrete Stochastic Processes (§2.2)
Let $\{t_0, t_1, t_2, \dots\}$ be a set of discrete times
Let the sequence of random variables $X_0, X_1, X_2, \dots$ each be defined on the sample space $\Omega$
If only the present value $X_n$ is needed to determine the future value $X_{n+1}$, the sequence is called a Markov process
A discrete-valued Markov process is called a Markov chain
Let $p_{ij}(n) = P(X_{n+1} = i \mid X_n = j)$ define the one-step transition probabilities for a Markov chain
If the transition probabilities are independent of time $n$, that is, $p_{ij}(n) = p_{ij}$, then the Markov chain is said to have stationary transition probabilities and is called a homogeneous Markov chain
Example 2.1. A continuous-valued Markov process
Let $\Delta t = T/N$ and $t_k = k\,\Delta t$ for $k = 0, 1, \dots, N$, where $N$ is a positive integer. Let $\{X_k\}_{k=0}^{N}$ be defined by
$X_{k+1} = X_k + \eta_k, \qquad X_0 = 0,$
where the $\eta_k \sim N(0, \Delta t)$ are independent.
Then, $\{X_k\}$ is a Markov process with continuous values of $X_k$ and discrete values of time $t_k$.
Note that $X_k = \sum_{i=0}^{k-1} \eta_i$, so $X_k \sim N(0, t_k)$.
Example 2.2. A homogeneous Markov chain
Let $t_k = k\,\Delta t$ with $\Delta t = T/N$ for $k = 0, 1, \dots, N$, where $N$ is a positive integer and $T > 0$. Define the probability distribution of the discrete random variable $X$ so that $X$ takes on the values $-\Delta x$, $0$, $\Delta x$ with probabilities
$P(X = \pm\Delta x) = \frac{\lambda}{2}, \qquad P(X = 0) = 1 - \lambda, \qquad \lambda = \frac{\Delta t}{(\Delta x)^2},$
assuming that $\lambda \le 1$.
Let
$S_n = \sum_{i=1}^{n} X_i, \qquad S_0 = 0,$
where the $X_i$ are independent identically distributed (i.i.d.) random variables with the same distribution as $X$.
Then, $E(S_n) = 0$ and $\operatorname{Var}(S_n) = n\lambda(\Delta x)^2 = n\,\Delta t = t_n$
The stochastic process $\{S_n\}$ is discrete in time and discrete-valued.
The transition probabilities are
$P(S_{n+1} = s \pm \Delta x \mid S_n = s) = \frac{\lambda}{2}, \qquad P(S_{n+1} = s \mid S_n = s) = 1 - \lambda.$
Furthermore, we note that $E(S_{n+1} \mid S_n) = S_n$
Then, by the central limit theorem, $S_n$ is approximately normally distributed for large $n$
In particular, if $\Delta x = \sqrt{\Delta t}$, then $S_n \approx N(0, n\,\Delta t)$ for large $n$. Thus, as $n$ increases, the distribution of $S_n$ approaches the same distribution as the random variable $X_n$ in Example 2.1.
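The central-limit behavior of the random walk can be checked by simulation. The sketch below is a minimal pure-Python illustration; the values $T = 1$, $N = 400$, $\Delta x = \sqrt{\Delta t}$, the sample size, and the seed are assumptions for demonstration, not taken from the slides.

```python
import math
import random

random.seed(42)

# Illustrative parameters (not from the slides): T = 1, N = 400 steps,
# dt = T/N and dx = sqrt(dt), so that Var(S_N) = N*dt = 1.
N = 400
dt = 1.0 / N
dx = math.sqrt(dt)

def random_walk(n):
    """One realization of S_n with i.i.d. steps +/- dx, each with probability 1/2."""
    s = 0.0
    for _ in range(n):
        s += dx if random.random() < 0.5 else -dx
    return s

# Compare the empirical mean and variance of S_N with the CLT
# prediction: S_N is approximately N(0, N*dt) = N(0, 1).
samples = [random_walk(N) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # mean near 0, variance near 1
```

With more walks and steps the empirical distribution approaches the normal distribution predicted above.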
Homogeneous Markov chain
Let $\{X_n\}$ be a homogeneous Markov chain defined at the discrete times $t_n = n\,\Delta t$ for $n = 0, 1, 2, \dots$, taking values in the finite state space $\{x_1, x_2, \dots, x_M\}$. For each pair of states, let
$p_{ij} = P(X_{n+1} = x_i \mid X_n = x_j)$
define the transition probabilities.
The transition probability matrix is defined as $P = (p_{ij})$, where $p_{ij} \ge 0$ and $\sum_{i=1}^{M} p_{ij} = 1$ for each $j$.
The probability distribution of $X_n$ can be computed using the transition probability matrix $P$.
Define the $n$th power of $P$ as $P^{(n)} = P^n$. As $P^{(0)} = I$, then by matrix multiplication
$P^{(n+m)} = P^{(n)} P^{(m)},$
where $n, m \ge 0$. This relation is known as the Chapman–Kolmogorov formula for a homogeneous Markov chain.
Let $p^{(n)} = (p_1^{(n)}, p_2^{(n)}, \dots, p_M^{(n)})^T$, with $p_i^{(n)} = P(X_n = x_i)$, be the probability distribution of $X_n$
Let $p^{(0)}$ be the initial probability distribution of $X_0$. We see that $p^{(n+1)} = P\,p^{(n)}$
Thus, $p^{(n)} = P^n p^{(0)}$
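The iteration $p^{(n+1)} = P\,p^{(n)}$ can be sketched in a few lines of pure Python. The 3-state transition matrix below is an illustrative assumption, not taken from the slides; its columns sum to one, matching the column-vector convention used here.

```python
# Illustrative 3-state transition matrix (an assumption for the sketch).
# Column j holds the probabilities of moving from state j; columns sum to 1.
P = [
    [0.9, 0.2, 0.0],
    [0.1, 0.7, 0.3],
    [0.0, 0.1, 0.7],
]

def step(P, p):
    """One step of the chain: (P p)_i = sum_j P[i][j] * p[j]."""
    return [sum(P[i][j] * p[j] for j in range(len(p))) for i in range(len(P))]

p = [1.0, 0.0, 0.0]   # initial distribution p^(0): start in state 1
for _ in range(50):   # p^(50) = P^50 p^(0)
    p = step(P, p)

print(p)  # components still sum to 1; the chain is near its stationary distribution
```

After many steps the distribution is nearly invariant under a further application of $P$, illustrating convergence to a stationary distribution for this chain.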
Example 2.3. Approximation to a Poisson process
Consider the discrete homogeneous stochastic process defined by the transition probabilities
$p_{k+1,k} = \lambda\,\Delta t, \qquad p_{k,k} = 1 - \lambda\,\Delta t, \qquad k = 0, 1, 2, \dots$
Assume that $\lambda\,\Delta t < 1$. In this example, the transition probability matrix is bidiagonal and the equation $p^{(n+1)} = P\,p^{(n)}$ has the componentwise form:
$p_k^{(n+1)} = (1 - \lambda\,\Delta t)\,p_k^{(n)} + \lambda\,\Delta t\,p_{k-1}^{(n)}, \qquad k \ge 1,$
and
$p_0^{(n+1)} = (1 - \lambda\,\Delta t)\,p_0^{(n)}.$
Rearranging these expressions yields
$\frac{p_k^{(n+1)} - p_k^{(n)}}{\Delta t} = \lambda\,\big(p_{k-1}^{(n)} - p_k^{(n)}\big),$
where $t_n = n\,\Delta t$ and $p_{-1}^{(n)} \equiv 0$. As $\Delta t \to 0$, the above Markov chain probabilities approach those satisfied by the Poisson process. That is,
$\frac{dp_k(t)}{dt} = \lambda\,\big(p_{k-1}(t) - p_k(t)\big).$
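The limit can be checked numerically: integrating $dp_k/dt = \lambda(p_{k-1} - p_k)$ with forward Euler reproduces the Poisson probabilities $e^{-\lambda t}(\lambda t)^k/k!$. In the sketch below, $\lambda$, the final time, the step size, and the truncation level $K$ are illustrative choices.

```python
import math

# Integrate dp_k/dt = lam*(p_{k-1} - p_k) with forward Euler and compare
# against the exact Poisson probabilities e^{-lam t}(lam t)^k / k!.
lam = 2.0
t_final = 1.0
dt = 1e-4
K = 30                      # truncate the state space at k = K

p = [0.0] * (K + 1)
p[0] = 1.0                  # zero observations at t = 0: p_0(0) = 1
for _ in range(int(round(t_final / dt))):
    new_p = [0.0] * (K + 1)
    for k in range(K + 1):
        inflow = p[k - 1] if k > 0 else 0.0   # p_{-1} is identically 0
        new_p[k] = p[k] + dt * lam * (inflow - p[k])
    p = new_p

exact = [math.exp(-lam * t_final) * (lam * t_final) ** k / math.factorial(k)
         for k in range(K + 1)]
err = max(abs(a - b) for a, b in zip(p, exact))
print(err)  # small: the chain approaches the Poisson process as dt -> 0
```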
Nonhomogeneous Markov chain
Let $t_n = n\,\Delta t$ for $n = 0, 1, 2, \dots$, where $\Delta t > 0$. Let $\{X_n\}$ be the Markov chain taking values in the states $x_k = k\,\Delta x$ for $k = 0, 1, \dots, M$, where $\Delta x$ is a positive number.
Let
$p_{ij}(n) = P(X_{n+1} = x_i \mid X_n = x_j)$
define the transition probabilities, which now may depend on the time $t_n$ (nonhomogeneous Markov chain)
The transition probability matrix at time $t_n$ is defined as the matrix $P(n) = (p_{ij}(n))$
Similarly to the homogeneous case, the probability distribution of $X_n$ can be computed using the probability transition matrices $P(m)$ for $m = 0, 1, \dots, n - 1$.
Let $p^{(n)}$ denote the probability distribution at time $t_n$, and let $p^{(0)}$ be the initial probability distribution. Noticing that $p^{(n+1)} = P(n)\,p^{(n)}$, we see that
$p^{(n)} = P(n-1)\,P(n-2) \cdots P(0)\,p^{(0)}.$
Example 2.4. Forward Kolmogorov equations
Let $t_n = n\,\Delta t$ and let $x_k = k\,\Delta x$ for $k = 0, 1, \dots, M$. Let $X_0 = x_{k_0}$ be given. Define the transition probabilities of a discrete stochastic process by the following:
$p_{k+1,k}(n) = b(x_k)\,\Delta t, \qquad p_{k-1,k}(n) = d(x_k)\,\Delta t, \qquad p_{k,k}(n) = 1 - \big(b(x_k) + d(x_k)\big)\,\Delta t,$
where $b$ and $d$ are smooth nonnegative functions. Notice that with the above transition probabilities, if $\Delta X$ is the change in the stochastic process at time $t_n$ fixing $X_n = x_k$, then
$E(\Delta X) = \big(b(x_k) - d(x_k)\big)\,\Delta x\,\Delta t \quad \text{and} \quad E\big((\Delta X)^2\big) = \big(b(x_k) + d(x_k)\big)\,(\Delta x)^2\,\Delta t.$
It is assumed that $\Delta t$ is small so that $p_{k,k}(n)$ is positive.
Let $p_k(t_n) = P(X_n = x_k)$ be the probability distribution at time $t_n$. Then, $p_k$ satisfies
$p_k(t_{n+1}) = p_k(t_n) + \Delta t\,\big[b(x_{k-1})\,p_{k-1}(t_n) + d(x_{k+1})\,p_{k+1}(t_n) - \big(b(x_k) + d(x_k)\big)\,p_k(t_n)\big].$ (2.1)
Rearranging yields
$\frac{p_k(t_{n+1}) - p_k(t_n)}{\Delta t} = b(x_{k-1})\,p_{k-1}(t_n) + d(x_{k+1})\,p_{k+1}(t_n) - \big(b(x_k) + d(x_k)\big)\,p_k(t_n),$
where $p_{-1} \equiv 0$ and $p_{M+1} \equiv 0$. As $\Delta t \to 0$, the discrete stochastic process approaches a continuous-time process whose distribution $p_k(t)$ satisfies the initial-value problem:
$\frac{dp_k(t)}{dt} = b(x_{k-1})\,p_{k-1}(t) + d(x_{k+1})\,p_{k+1}(t) - \big(b(x_k) + d(x_k)\big)\,p_k(t),$ (2.2)
with initial values $p_k(0) = \delta_{k,k_0}$. These are the forward Kolmogorov equations for the continuous-time stochastic process.
Example 2.4. continued
Now assume that $\Delta x$ is also small, so that the stochastic process approaches a continuous-valued process. As, by the mean value theorem,
$\frac{g(x_{k-1}) - 2\,g(x_k) + g(x_{k+1})}{(\Delta x)^2} = \frac{\partial^2 g(\xi_k)}{\partial x^2}$
for some values $\xi_k$ such that $x_{k-1} \le \xi_k \le x_{k+1}$, the system (2.2) approximates the partial differential equation:
$\frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\big[\mu(x)\,p(x,t)\big] + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\big[\sigma^2(x)\,p(x,t)\big],$ (2.3)
where $\mu(x) = \big(b(x) - d(x)\big)\,\Delta x$ and $\sigma^2(x) = \big(b(x) + d(x)\big)\,(\Delta x)^2$.
Equation (2.2) is a central-difference approximation to the above one. This approximation is accurate for small $\Delta x$, and when comparing the solutions of (2.1) and (2.3), it can be shown that they agree closely for small $\Delta t$ and $\Delta x$.
It can be shown that (2.3) is the forward Kolmogorov equation corresponding to a diffusion process having the stochastic differential equation
$dX(t) = \mu(X(t))\,dt + \sigma(X(t))\,dW(t).$ (2.4)
The probability density of solutions to the above stochastic differential equation satisfies the partial differential equation (2.3).
The coefficients of (2.4) are related to the discrete stochastic model (2.1) through the mean and the variance of the change in the process over a short time interval, fixing $X(t) = x$. Specifically,
$\mu(x)\,\Delta t \approx E(\Delta X) \quad \text{and} \quad \sigma^2(x)\,\Delta t \approx E\big((\Delta X)^2\big).$
Example 2.5. A specific example of forward Kolmogorov equations
Consider a birth-death process, where $X(t)$ is the population size, $b$ is the per capita birth rate, and $d$ is the per capita death rate. It is assumed that $b$ and $d$ are constants. The transition probabilities for this example have the form
$p_{k+1,k} = b\,k\,\Delta t, \qquad p_{k-1,k} = d\,k\,\Delta t, \qquad p_{k,k} = 1 - (b + d)\,k\,\Delta t.$
It follows that the probability distribution $p_k(t)$ in continuous time (letting $\Delta t \to 0$) satisfies the forward Kolmogorov equations
$\frac{dp_k(t)}{dt} = b\,(k-1)\,p_{k-1}(t) + d\,(k+1)\,p_{k+1}(t) - (b + d)\,k\,p_k(t),$
with $p_k(0) = \delta_{k,N_0}$, assuming an initial population of size $N_0$. Note that, fixing $X(t) = x$ at time $t$, $E(\Delta X) = (b - d)\,x\,\Delta t$ and $E\big((\Delta X)^2\big) = (b + d)\,x\,\Delta t$ to order $O\big((\Delta t)^2\big)$. For large $N_0$, the above equations approximately satisfy the Fokker–Planck equation
$\frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\big[(b - d)\,x\,p(x,t)\big] + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\big[(b + d)\,x\,p(x,t)\big]$
with $p(x, 0) = \delta(x - N_0)$.
The probability distribution $p(x,t)$ is the probability distribution of solutions to the Itô stochastic differential equation
$dX(t) = (b - d)\,X(t)\,dt + \sqrt{(b + d)\,X(t)}\;dW(t)$
with $X(0) = N_0$.
Thus the solutions of the above stochastic differential equation have approximately the same probability distribution as the discrete birth-death stochastic process, and the above stochastic differential equation is therefore a reasonable model for the simple birth-death process.
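A hedged Euler–Maruyama sketch of the birth-death SDE $dX = (b - d)X\,dt + \sqrt{(b + d)X}\,dW$: the sample mean of $X(T)$ should track $E[X(T)] = N_0\,e^{(b-d)T}$. The rates, initial size, step count, sample size, and seed below are illustrative assumptions.

```python
import math
import random

random.seed(7)

# Illustrative parameters (assumptions, not from the slides).
b, d = 1.0, 0.5          # per capita birth and death rates
N0 = 100.0               # initial population size
T, steps = 1.0, 500
dt = T / steps

def simulate():
    """One Euler-Maruyama path of dX = (b-d) X dt + sqrt((b+d) X) dW."""
    x = N0
    for _ in range(steps):
        dw = random.gauss(0.0, math.sqrt(dt))
        x += (b - d) * x * dt + math.sqrt((b + d) * max(x, 0.0)) * dw
        x = max(x, 0.0)  # a population size cannot become negative
    return x

samples = [simulate() for _ in range(1000)]
mean = sum(samples) / len(samples)
print(mean, N0 * math.exp((b - d) * T))  # sample mean vs exact E[X(T)]
```

The clipping at zero is a pragmatic device for the simulation; near the origin the square-root diffusion coefficient needs more careful treatment.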
Continuous Stochastic Processes (§2.3)
Let a continuous stochastic process $\{X(t),\ t \in I\}$ be defined on the probability space $(\Omega, \mathcal{A}, P)$, where $I$ is an interval in time and the process is defined at all time instants in the interval.
A continuous-time stochastic process is a function of two variables, $t \in I$ and $\omega \in \Omega$, and may be discrete-valued or continuous-valued. In particular, $X(t, \cdot)$ is a random variable for each $t \in I$, and $X(\cdot, \omega)$ maps the interval $I$ into $\mathbb{R}$ and is called a sample path, realization, or trajectory of the stochastic process for each $\omega \in \Omega$.
Specific knowledge of $\omega$ is generally unnecessary, but each $\omega$ results in a different trajectory. The normal convention is that the variable $\omega$ is suppressed: $X(t)$ represents a random variable for each fixed $t$, and $X(t)$, $t \in I$, represents a trajectory over the interval $I$.
The stochastic process is a Markov process if the state of the process at any time $s$ determines the future state of the process. Specifically,
$P\big(X(t) \le y \mid X(r),\ r \le s\big) = P\big(X(t) \le y \mid X(s)\big)$
whenever $s < t$.
Example 2.6. Poisson process with intensity $\lambda$
Let $X(t)$ equal the number of observations in time $t$. Assume that the probability of one observation in a time interval of length $\Delta t$ is equal to $\lambda\,\Delta t + o(\Delta t)$. This is a continuous stochastic process, and the probability of $k$ observations in time $t$ is
$p_k(t) = P(X(t) = k) = \frac{e^{-\lambda t}\,(\lambda t)^k}{k!}.$
The process is a continuous-time stochastic process which is discrete-valued. Specifically, $X(t)$ is a Poisson process with intensity $\lambda$. Note that $X(0) = 0$ and the number of observations at any time $t$ is Poisson-distributed with mean $\lambda t$. That is, for any $t \ge 0$,
$E(X(t)) = \lambda t.$
Indeed, the process is a Markov process and
$P\big(X(t) = k \mid X(s) = j\big) = \frac{e^{-\lambda (t-s)}\,\big(\lambda (t-s)\big)^{k-j}}{(k-j)!} \quad \text{for } 0 \le s < t,\ k \ge j,$
and the probability distribution at time $t$ only depends on the state of the system at time $s$ and not on the history of the system. Also, the probabilities $p_k(t)$ satisfy
$\frac{dp_k(t)}{dt} = \lambda\,\big(p_{k-1}(t) - p_k(t)\big), \qquad p_{-1} \equiv 0.$
The relations satisfied by the probabilities of the discrete stochastic process of Example 2.3 are finite-difference approximations to the above differential equations and approach these differential equations as $\Delta t \to 0$.
In addition, if $s < t$, then $X(t) - X(s)$ is also Poisson-distributed with mean $\lambda (t - s)$, and increments over non-overlapping intervals are independent.
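A Poisson process can be simulated from exponentially distributed inter-arrival times; the counts at time $t$ should then have mean and variance both equal to $\lambda t$, as a Poisson distribution requires. The values of $\lambda$, $t$, the sample size, and the seed below are illustrative assumptions.

```python
import random

random.seed(1)

# Illustrative values (assumptions): intensity lam and observation time t.
lam, t = 3.0, 2.0

def count_at(t):
    """Number of arrivals in [0, t] with Exp(lam) inter-arrival times."""
    n, s = 0, 0.0
    while True:
        s += random.expovariate(lam)
        if s > t:
            return n
        n += 1

samples = [count_at(t) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var)  # both near lam*t = 6 (a Poisson mean equals its variance)
```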
Transition probability for a continuous Markov process
Consider the transition probability density function $p(y, t \mid x, s)$ for the transition from $X(s) = x$ at time $s$ to $X(t) = y$ at time $t$ for a continuous Markov process.
Analogous to the discrete Markov process, the transition probability density function satisfies the Chapman–Kolmogorov equation:
$p(y, t \mid x, s) = \int_{-\infty}^{\infty} p(y, t \mid z, u)\,p(z, u \mid x, s)\,dz \qquad \text{for } s < u < t.$
A Markov process is said to be homogeneous if its transition probability satisfies
$p(y, t + \Delta t \mid x, s + \Delta t) = p(y, t \mid x, s) \quad \text{for any } \Delta t.$
That is, the transition probability only depends on the elapsed time $t - s$. In this case, it can be written as $p(y, t - s \mid x)$.
Example 2.7. An approximate Wiener process
Let $P_1(t), P_2(t), \dots, P_n(t)$ be independent Poisson processes with intensity $\lambda$ as described in Example 2.6. Let $W_n(t)$ be another stochastic process defined by
$W_n(t) = \frac{1}{\sqrt{n\lambda}} \sum_{i=1}^{n} \big(P_i(t) - \lambda t\big).$
By the central limit theorem, as $n$ increases, $W_n(t)$ approaches a random variable distributed normally with mean $0$ and variance $t$. Indeed, by considering Example 2.6, $W_n(t) - W_n(s)$ approaches a normally distributed variable with mean $0$ and variance $t - s$ for every $0 \le s < t$.
In this example, $W_n(t)$ approaches a Wiener process, or Brownian motion, as $n$ increases. A Wiener process $W(t)$ is a continuous stochastic process with stationary independent increments such that
$W(0) = 0, \qquad E(W(t)) = 0, \qquad W(t) - W(s) \sim N(0,\ t - s) \quad \text{for } 0 \le s \le t.$
In particular, the increments
$W(t_1) - W(t_0),\ W(t_2) - W(t_1),\ \dots,\ W(t_n) - W(t_{n-1})$
are independent Gaussian random variables for $0 \le t_0 < t_1 < \dots < t_n$. Notice that a Wiener process is a homogeneous Markov process.
Generation of a sample path of a Wiener process
How to generate a sample path of a Wiener process at a finite number of points?
Suppose that a Wiener process trajectory is desired on the interval $[0, T]$ at the points $t_i = i\,\Delta t$, $i = 0, 1, \dots, N$, where $\Delta t = T/N$. Then $W(t_0) = W(0) = 0$, and a recurrence relation that gives the values of a Wiener process trajectory at the points $t_i$ is given by
$W(t_{i+1}) = W(t_i) + \sqrt{\Delta t}\;\eta_i,$
where the $\eta_i \sim N(0, 1)$ are independent standard normally distributed numbers for $i = 0, 1, \dots, N - 1$.
The values $W(t_0), W(t_1), \dots, W(t_N)$ determine a Wiener sample path at the points $t_i$. Using these values, the Wiener process sample path can be approximated everywhere on the interval $[0, T]$.
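The recurrence $W(t_{i+1}) = W(t_i) + \sqrt{\Delta t}\,\eta_i$ with $\eta_i \sim N(0,1)$ translates directly into code; $T$, $N$, and the seed below are illustrative choices.

```python
import math
import random

random.seed(2009)

# Illustrative discretization: T = 1 split into N = 1000 steps.
T, N = 1.0, 1000
dt = T / N

w = [0.0]                          # W(0) = 0
for _ in range(N):
    w.append(w[-1] + math.sqrt(dt) * random.gauss(0.0, 1.0))

print(len(w), w[0], w[-1])         # N+1 points; the path starts at 0
```

Between grid points the path can be filled in by linear interpolation, or more faithfully by a Brownian-bridge construction.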
Another way is the Karhunen–Loève expansion, which is derived from a Fourier series expansion of the Wiener process:
$W(t) = \sum_{k=1}^{\infty} \eta_k\,\frac{2\sqrt{2T}}{(2k-1)\pi}\,\sin\!\Big(\frac{(2k-1)\pi t}{2T}\Big)$ (2.9)
for $0 \le t \le T$, where the $\eta_k$ are i.i.d. standard normal random variables.
Conversely, the standard normal random variables $\eta_k$ can be recovered from a Wiener path by projecting it onto the orthonormal basis functions $\phi_k(t) = \sqrt{2/T}\,\sin\big((2k-1)\pi t/(2T)\big)$.
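One deterministic consistency check on the expansion: since the $\eta_k$ are i.i.d. standard normal, truncating the series after $n$ terms gives $\operatorname{Var} W(t) = \sum_{k=1}^{n} \big(2\sqrt{2T}/((2k-1)\pi)\big)^2 \sin^2\big((2k-1)\pi t/(2T)\big)$, which should approach $t$. The values of $T$, $t$, and $n$ below are illustrative.

```python
import math

# Variance of the truncated Karhunen-Loeve series at a fixed time t:
# sum of (coefficient * basis value)^2, which should approach t as n grows.
T, t, n = 1.0, 0.3, 10000

var = sum(
    (2.0 * math.sqrt(2.0 * T) / ((2 * k - 1) * math.pi)) ** 2
    * math.sin((2 * k - 1) * math.pi * t / (2.0 * T)) ** 2
    for k in range(1, n + 1)
)
print(var)  # close to t = 0.3
```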
[Figure: sample paths of a Wiener process generated by the recurrence relation (t = 1, 2, ..., 200) and by the Karhunen–Loève expansion (n = 1, 2, ..., 10000).]
Generation of a sample path of a Wiener process, continued
Let's check that the series (2.9) has the required properties of the Wiener process.
The partial sum is
$W_n(t) = \sum_{k=1}^{n} \eta_k\,\frac{2\sqrt{2T}}{(2k-1)\pi}\,\sin\!\Big(\frac{(2k-1)\pi t}{2T}\Big).$
It can be shown that $E\big((W_n(t) - W_m(t))^2\big) \to 0$ as $n, m \to \infty$ for each $t \in [0, T]$, and that $\{W_n(t)\}$ is therefore Cauchy in $L^2(\Omega)$. Hence $W_n(t) \to W(t)$ in $L^2(\Omega)$ as $n \to \infty$ for each $t \in [0, T]$.
As $E(W_n(t)) = 0$ for each $n$, then $E(W(t)) = 0$.
Noting that
$E(\eta_j\,\eta_k) = \delta_{jk}$
for $j, k = 1, 2, \dots$
In addition, it can be shown using the trigonometric identity
$\sin A\,\sin B = \tfrac{1}{2}\big[\cos(A - B) - \cos(A + B)\big]$
that
$E\big(W(t)\,W(s)\big) = \min(t, s).$
Continuity and differentiability of a Wiener process
Notice that at each $t$, $W(t) \sim N(0, t)$. In addition, $W(t)$ is continuous in the mean square sense:
$E\big((W(t + \Delta t) - W(t))^2\big) = \Delta t,$
thus
$E\big((W(t + \Delta t) - W(t))^2\big) \to 0 \quad \text{as } \Delta t \to 0,$
so given $\varepsilon > 0$ there exists a $\delta > 0$ such that
$E\big((W(t + \Delta t) - W(t))^2\big) < \varepsilon$
when $|\Delta t| < \delta$.
However, $W(t)$ does not have a derivative in the mean square sense, as
$E\!\left(\Big(\frac{W(t + \Delta t) - W(t)}{\Delta t}\Big)^2\right) = \frac{1}{\Delta t} \to \infty \quad \text{as } \Delta t \to 0;$
there is no random variable $Z$ such that $E\big(\big(\tfrac{W(t + \Delta t) - W(t)}{\Delta t} - Z\big)^2\big) \to 0$ as $\Delta t \to 0$.
Expectations of functions of a Wiener process
Let the Wiener process be $W(t)$ for $0 \le t \le T$.
First, recall that the probability density of a normally distributed random variable with mean $\mu$ and variance $\sigma^2$ is
$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\!\Big(-\frac{(x - \mu)^2}{2\sigma^2}\Big).$
For $0 \le s < t \le T$, $W(t) - W(s) \sim N(0, t - s)$.
In addition, $E\big(W(t) - W(s)\big) = 0$ and $E\big((W(t) - W(s))^2\big) = t - s$.
Now consider a partition $0 = t_0 < t_1 < \dots < t_n = T$ of $[0, T]$. For $x_0 = 0$, the increments $W(t_i) - W(t_{i-1})$ are independent with $W(t_i) - W(t_{i-1}) \sim N(0,\ t_i - t_{i-1})$.
Furthermore, for $i = 1, \dots, n$, the joint density of $\big(W(t_1), W(t_2), \dots, W(t_n)\big)$ is
$f(x_1, x_2, \dots, x_n) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi (t_i - t_{i-1})}}\,\exp\!\Big(-\frac{(x_i - x_{i-1})^2}{2(t_i - t_{i-1})}\Big).$
These densities define a set of finite-dimensional probability measures on $\mathbb{R}^n$.
Expectations of functions of a Wiener process continued
The probability distribution over the partition satisfies
$P\big(W(t_1) \le y_1, \dots, W(t_n) \le y_n\big) = \int_{-\infty}^{y_1} \!\!\cdots \int_{-\infty}^{y_n} f(x_1, \dots, x_n)\,dx_n \cdots dx_1.$
It is interesting that this probability measure can be extended through finer and finer partitions to all of $[0, T]$, where the measure is identical to the finite-dimensional measure for any partition.
As these finite-dimensional probability measures satisfy certain symmetry and compatibility conditions, Kolmogorov's extension theorem can be applied, which says that there exists a probability space and a stochastic process such that the finite-dimensional probability distributions are identical to those defined above.
The stochastic process is the Wiener process, or Brownian motion, and over any partition of $[0, T]$ the finite-dimensional distributions of the process reduce to the above expression.
Transition probabilities of a Wiener process
Finally, consider the transition probability density for the Wiener process from $x$ at time $s$ to $y$ at time $t$. In this case,
$p(y, t \mid x, s) = \frac{1}{\sqrt{2\pi (t - s)}}\,\exp\!\Big(-\frac{(y - x)^2}{2(t - s)}\Big),$
and we see that
$p(y, t + \Delta t \mid x, s + \Delta t) = p(y, t \mid x, s),$
so the transition probability depends only on the elapsed time $t - s$, and thus the Wiener process is a continuous homogeneous Markov process.
In addition, one can directly verify the Chapman–Kolmogorov equation for this transition probability, that is, for $s < u < t$,
$p(y, t \mid x, s) = \int_{-\infty}^{\infty} p(y, t \mid z, u)\,p(z, u \mid x, s)\,dz.$
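The verification can also be done numerically: the sketch below evaluates the Chapman–Kolmogorov integral for the Gaussian transition density of the Wiener process by trapezoidal quadrature. The times $s < u < t$, the points $x$ and $y$, and the quadrature grid are illustrative choices.

```python
import math

def p(y, t, x, s):
    """Wiener transition density from x at time s to y at time t."""
    v = t - s
    return math.exp(-(y - x) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

# Illustrative times s < u < t and endpoints x, y.
s, u, t = 0.0, 0.4, 1.0
x, y = 0.2, -0.5

# Trapezoidal rule for the integral of p(y,t|z,u) p(z,u|x,s) over z.
lo, hi, n = -10.0, 10.0, 4000
h = (hi - lo) / n
zs = [lo + i * h for i in range(n + 1)]
vals = [p(y, t, z, u) * p(z, u, x, s) for z in zs]
integral = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

print(integral, p(y, t, x, s))  # the two values agree
```

The agreement reflects the fact that the convolution of $N(0, u - s)$ and $N(0, t - u)$ densities is the $N(0, t - s)$ density.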