Research internship on optimal stochastic theory with financial application using finite differences...
2nd Year Internship at LAMSIN: Optimal stochastic control problem with financial applications
Asma BEN SLIEMENE
ENSIIE
from June 2016 to September 2016
Overview
1 Optimal stochastic problem theory: Dynamic Programming Principle; Hamilton-Jacobi-Bellman equation
2 Resolution methods: probabilistic approach; numerical/deterministic approach with PDEs
3 Financial applications: Merton portfolio allocation problem; investment/consumption problem
4 Numerical results in C++ and Scilab: for the investment problem; for the investment/consumption problem
LAMSIN
Training objective: an open door into financial mathematics research.
LAMSIN is located at the École Nationale d'Ingénieurs de Tunis (Tunisia). It comprises 83 researchers, including 40 doctoral students. Each year, 6 to 8 students complete their Master's theses within the laboratory.
1983: creation of a research group in numerical analysis at ENIT.
2001: becomes a research laboratory associated with INRIA (e-didon team).
July 2003: selected by the Agence Universitaire de la Francophonie (AUF) as a regional center of excellence in Applied Mathematics.
Fields of research: inverse problems, financial mathematics including optimization and control problems, etc.
I) Introduction to the optimal stochastic problem
1 Optimal stochastic problem theory
2 Applications in finance
3 Dynamic programming principle
4 Hamilton-Jacobi-Bellman equation
1 State of the system: X_t(ω), with dynamics given by the SDE
dX_t = b(X_t, α_t) dt + σ(X_t, α_t) dW_t.   (1)
2 Control: a process α = (α_t)_t satisfying some constraints and taking values in A, the set of admissible controls.
3 Performance/cost criterion: maximize (or minimize) J(x, α) over all admissible controls.
Consider objective functionals of the form
E[ ∫_0^T f(X_s, ω, α_s) ds + g(X_T, ω) | X_0 = x ]   on a finite horizon T,
and
E[ ∫_0^∞ e^{−βs} f(X_s, ω, α_s) ds | X_0 = x ]   on an infinite horizon,
where f is a running profit function, g is a terminal reward function, and β > 0 is a discount factor.
Objective: find the value function v(x) = sup_{α∈A} J(x, α).
Applications in finance:
Portfolio allocation; production-consumption model; irreversible investment model; quadratic hedging of options; superreplication cost under uncertain volatility; optimal selling of an asset; valuation of natural resources; ergodic and risk-sensitive control problems; superreplication under gamma constraints; robust utility maximization and risk measures; forward performance criterion.
Definition
Bellman’s principle of optimality
"An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."
Mathematical formulation of Bellman's principle, or Dynamic Programming Principle (DPP).
The usual version of the DPP is written as
v(t, x) = sup_{α∈A(t,x)} E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]
for any stopping time θ ∈ T_{t,T} (the set of stopping times valued in [t, T]).
Usual version of the DPP
(1) Finite horizon: let (t, x) ∈ [0,T] × R^n. Then
v(t, x) = sup_{α∈A(t,x)} sup_{θ∈T_{t,T}} E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]   (2)
        = sup_{α∈A(t,x)} inf_{θ∈T_{t,T}} E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]   (3)
(2) Infinite horizon: let x ∈ R^n. Then for all stopping times θ ∈ T we have
v(x) = sup_{α∈A(x)} sup_{θ∈T} E[ ∫_0^θ e^{−βs} f(X_s^x, α_s) ds + e^{−βθ} v(X_θ^x) ]   (4)
     = sup_{α∈A(x)} inf_{θ∈T} E[ ∫_0^θ e^{−βs} f(X_s^x, α_s) ds + e^{−βθ} v(X_θ^x) ]   (5)
Strong version of the DPP
Lemma (Dynamic programming principle)
(i) For all α ∈ A(t, x) and θ ∈ T_{t,T}:
v(t, x) ≥ E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]   (6)
(ii) For all ε > 0, there exists α ∈ A(t, x) such that for all θ ∈ T_{t,T}:
v(t, x) − ε ≤ E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]   (7)
Combining (i) and (ii), we obtain
v(t, x) = sup_{α∈A(t,x)} E[ ∫_t^θ f(s, X_s^{t,x}, α_s) ds + v(θ, X_θ^{t,x}) ]   (8)
Proof of the DPP
Formal derivation of HJB
Assume that the value function is smooth enough (i.e. C²) to apply Itô's formula.
For any a ∈ A and the controlled process X^{t,x}, apply Itô's formula to v(s, X_s^{t,x}) between s = t and s = t + h:
v(t+h, X_{t+h}^{t,x}) = v(t, x) + ∫_t^{t+h} ( ∂v/∂t + L^a v )(s, X_s^{t,x}) ds + (local) martingale,
where for a ∈ A, L^a is the second-order operator associated with the diffusion X under the constant control a:
L^a w = b(x, a) · ∇_x w + ½ tr( σ(x, a) σ'(x, a) ∇²_x w ).
Plug into the DPP, divide by h, send h to zero, and obtain by the mean-value theorem the so-called HJB equation.
Formal derivation of HJB
The parabolic HJB equation:
−∂v/∂t(t, x) − H₁(t, x, ∇_x v(t, x), ∇²_x v(t, x)) = 0,  ∀(t, x) ∈ [0,T) × R^n,   (9)
together with the terminal condition v(T, x) = g(x), where for all (t, x, p, M) ∈ [0,T] × R^n × R^n × S^n:
H₁(t, x, p, M) = sup_{a∈A} [ b(x, a) · p + ½ tr( σ(x, a)σ'(x, a) M ) + f(t, x, a) ].   (10)
The elliptic HJB equation:
β v(x) − H₂(x, ∇_x v(x), ∇²_x v(x)) = 0,  ∀x ∈ R^n,
where for all (x, p, M) ∈ R^n × R^n × S^n:
H₂(x, p, M) = sup_{a∈A} [ b(x, a) · p + ½ tr( σ(x, a)σ'(x, a) M ) + f(x, a) ].
II) Resolution methods
1 Probabilistic approach
2 PDE approach
Probabilistic approach
Approximate the process X_t by a Markov chain (ε_n) with ε_0 = x. Under some conditions, ε_n converges in law to X_t.
Monte Carlo algorithms are among the methods most widely used to obtain a numerical approximation.
Case where the functional reduces to a terminal term E[ f(X_T^{t,x}) ]: let (X^{(1)}, ..., X^{(n)}) be an i.i.d. sample drawn from the distribution of X_T^{t,x}, and compute the empirical mean
v_n(t, x) := (1/n) Σ_{i=1}^n f(X^{(i)}).
Law of Large Numbers: v_n(t, x) → v(t, x) P-a.s.
Central Limit Theorem: √n (v_n(t, x) − v(t, x)) → N( 0, Var[ f(X_T^{t,x}) ] ) in distribution.
Steps
The PDE approach is based on:
Step 1: discretization of the time and space domains / approximation of the derivatives.
Step 2: discretization of the boundary conditions (Dirichlet/Neumann).
Step 3: solving the discrete problem (policy/value iteration, Howard algorithm).
Output: v, the value function, and the optimal control strategy/stopping time.
Time and space discretization
Let Ω = [0,1], Δt = T/N with N ∈ N*, t_k = kΔt for k = 0,...,N, h the space step, x_j = jh. Then Ω_h, L^α_h, v^k_j, b^{k,α}_j, a^{k,α}_j approximate Ω, L^α, v(t_k, x_j), b(t_k, x_j, α), a(t_k, x_j, α).
Approximation of the first derivative:
∂v/∂x(t_k, x_j) := (v^k_{j+1} − v^k_{j−1}) / (2h)   (11)
∂v/∂x(t_k, x_j) := (v^k_{j+1} − v^k_j) / h   (12)
or
∂v/∂x(t_k, x_j) := (v^k_j − v^k_{j−1}) / h   (13)
Approximation of the second derivative:
∂²v/∂x²(t_k, x_j) := (v^k_{j+1} − 2v^k_j + v^k_{j−1}) / h²   (14)
Approximation of the time derivative:
∂v/∂t(t_k, x_j) := (v^k_j − v^{k−1}_j) / Δt   (15)
or
∂v/∂t(t_k, x_j) := (v^{k+1}_j − v^k_j) / Δt   (16)
Boundary conditions
Dirichlet boundary conditions: v = g₁ on ∂Ω × [0,T).
Neumann boundary conditions: ∂v/∂x = g₂ on ∂Ω × [0,T).
In the case f = 0 and g = x^p / p, p ∈ (0,1):
v^N_j = g_j = x_j^p / p, and (v^k_M − v^k_{M−1}) / h = (p / x_M) v^k_M, for k ∈ {0,...,N−1}, j ∈ {0,...,M}.
Alternative conditions: v^k_M = v^k_{M−1}, or v^k_M = 0 and v^k_0 = 0.
NB: in the portfolio allocation problem, the stock follows the Black-Scholes-Merton model:
dS_t = μ S_t dt + σ S_t dW_t,
dS⁰_t = r S⁰_t dt.
III) Financial applications
1 Merton portfolio allocation problem
2 Investment/consumption problem
Application 1: Merton portfolio allocation problem on a finite horizon
An agent invests at any time t a proportion α_t of his wealth X in a stock of price S, and 1 − α_t in a bond of price S⁰ with interest rate r. The dynamics of the controlled wealth process are:
dX_t = (X_t α_t / S_t) dS_t + (X_t (1 − α_t) / S⁰_t) dS⁰_t.
Utility maximization problem at a finite horizon T:
v(t, x) = sup_{α∈A} E[ U(X_T^{t,x}) ],  ∀(t, x) ∈ [0,T] × (0,∞).
HJB equation for Merton's problem
v_t + r x v_x + sup_{a∈A} [ a(μ − r) x v_x + ½ x² a² σ² v_xx ] = 0,   (17)
v(T, x) = U(x).   (18)
Utility function
U is C¹, strictly increasing and concave on (0,∞), and satisfies the Inada conditions:
U'(0) = ∞,  U'(∞) = 0.
Convex conjugate of U:
Ũ(y) := sup_{x>0} [ U(x) − xy ].
We use the CRRA utility function:
U(x) = x^p / p,  p < 1,  p ≠ 0.
Relative risk aversion (RRA): −x U''(x) / U'(x) = 1 − p.
→ If the agent experiences an increase in wealth, he/she will choose to increase (or keep unchanged, or decrease) the fraction of the portfolio held in the risky asset if relative risk aversion is decreasing (or constant, or increasing).
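For the CRRA utility, the sup in the HJB (17) can be computed in closed form; a short sketch of the standard argument, using the homogeneity ansatz for v:

```latex
\begin{aligned}
&\text{Ansatz: } v(t,x) = \varphi(t)\,\frac{x^p}{p}
  \;\Rightarrow\; v_x = \varphi(t)\,x^{p-1},\qquad v_{xx} = (p-1)\,\varphi(t)\,x^{p-2}.\\
&\text{First-order condition in } a \text{ for the sup in (17):}\qquad
  (\mu - r)\,x\,v_x + a\,\sigma^2 x^2\,v_{xx} = 0,\\
&\Rightarrow\; a^{*} = -\frac{(\mu - r)\,v_x}{\sigma^2\,x\,v_{xx}}
  = \frac{\mu - r}{(1-p)\,\sigma^2}\quad\text{(constant in } t \text{ and } x\text{: the Merton ratio).}
\end{aligned}
```

The optimal proportion invested in the risky asset is thus constant, which is the classical Merton result.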
Investment/consumption problem on an infinite horizon
The SDE governing the wealth process:
dX_t = X_t (α_t μ + (1 − α_t) r − c_t) dt + X_t α_t σ dW_t.
The goal is to maximize, over strategies (α, c), the expected utility from intertemporal consumption up to a random time horizon τ:
v(x) = sup_{(α,c)∈A×C} E[ ∫_0^τ e^{−βt} u(c_t X_t^x) dt ].
τ is independent of F_∞; denote by F(t) = P[τ ≤ t] = P[τ ≤ t | F_∞] the distribution function of τ. Assume an exponential distribution for the random time horizon: 1 − F(t) = e^{−λt} for some positive constant λ.
Infinite horizon problem:
v(x) = sup_{(α,c)∈A×C} E[ ∫_0^∞ e^{−(β+λ)t} u(c_t X_t^x) dt ].
The associated HJB equation is
(β + λ) v(x) − sup_{a∈A, c≥0} [ L^{a,c} v(x) + u(cx) ] = 0,  x ≥ 0,   (19)
where L^{a,c} v(x) = x (aμ + (1 − a)r − c) v'(x) + ½ x² a² σ² v''(x).
Explicit solution
The discount factor β must satisfy β > ρ − λ. Then v(x) = K u(x) solves the HJB equation, where
K = ( (1 − p) / (β + λ − ρ) )^{1−p}  and  ρ = ((μ − r)² / (2σ²)) · (p / (1 − p)) + r p.
The optimal controls are the constants (a, c):
a = argmax_{a∈A} [ a(μ − r) + r − ½ a² (1 − p) σ² ],
c = (1/x) (v'(x))^{1/(p−1)}.
Why the Markov chain approach?
Solving the discretized system requires some conditions on the matrix A of the differential operator L^α.
When A is not positive definite, we can still obtain a discretized system satisfying the "discrete maximum principle".
Under a specific condition on the space step h, we obtain a convergent Markov chain (see page 89 of J.-P. Chancelier and A. Sulem, Méthodes numériques en contrôle stochastique).
The convergence of the scheme can be established using standard arguments of H.J. Kushner (Numerical Methods for Stochastic Control Problems in Continuous Time).
NB: depending on the sign of the drift b of X_t, we use the right-sided upwind scheme when b is positive and the left-sided upwind scheme when b is negative, so that the scheme's coefficients can be interpreted as transition probabilities (in [0,1]).
IV) Numerical results in C++ and Scilab
1. Results for the investment problem: approximated scheme; resolution method/coding; results.
2. Results for the investment/consumption problem: approximated scheme; resolution method/coding; results.
Approximated scheme
Two different schemes were used.
The forward upwind scheme. The approximated HJB is:
v^{k−1}_j = sup_α [ (1 − (Δt/h)|b^{k,α}_j| − (Δt/h²) a^{k,α}_j) v^k_j
            + ((Δt/h)(b^{k,α}_j)⁺ + ½ (Δt/h²) a^{k,α}_j) v^k_{j+1}
            + ((Δt/h)(b^{k,α}_j)⁻ + ½ (Δt/h²) a^{k,α}_j) v^k_{j−1} ],
v^N_j = g_j.
Denote
p^α_j = p(x_j, x_j | α),  (p^α)⁺_j = p(x_j, x_{j+1} | α),  (p^α)⁻_j = p(x_j, x_{j−1} | α)
the transition probabilities that define the transition matrix A^α.
Matrix notation: v^{k−1} = sup_α ( (I − Δt A^α) v^k ).
The explicit solution is given in [1].
Algorithm in C++
Algorithm of the forward scheme:
Initialization: ∀j ∈ {0,...,M}, v^N_j = √x_j.
For k from N−1 down to 0:
  set v^k_0 = 0;
  compute v^k_j ≈ v(t_k, x_j) = sup_{α_i} w(t_k, x_j, α_i):
  for all j in {1,...,M−1} and each α_i in [α − ε, α + ε]:
    compute (b^{α_i}_j)⁺ and (b^{α_i}_j)⁻, and solve
    v^k_j = sup_{α_i} [ (1 − (Δt/h)|b^{α_i}_j| − (Δt/h²) a^{α_i}_j) v^{k+1}_j
            + ((Δt/h)(b^{α_i}_j)⁺ + ½ (Δt/h²) a^{α_i}_j) v^{k+1}_{j+1}
            + ((Δt/h)(b^{α_i}_j)⁻ + ½ (Δt/h²) a^{α_i}_j) v^{k+1}_{j−1} ];
  boundary: v^k_M = v^k_{M−1}.
Results
The shapes of the approximated value function and of the explicit solution are very close at time 0. A very small difference is observed at the boundary x = x_M.
Error in the value function: of order 10⁻³. The implementation requires a large number of points (the larger N is, the larger M must be).
Control: the results are satisfactory. The error grows from one time step to the next near the boundary of Ω.
The error is estimated at 2·10⁻².
The shape of the value function: we can plot the approximated value function as a function of time and space, since we store the successive values in an Excel file.
Backward scheme
The backward upwind scheme. The approximated HJB is:
v^k_j = v^{k+1}_j + sup_α [ (−(Δt/h)|b^α_j| − (Δt/h²) a^α_j) v^k_j
            + ((Δt/h)(b^α_j)⁺ + ½ (Δt/h²) a^α_j) v^k_{j+1}
            + ((Δt/h)(b^α_j)⁻ + ½ (Δt/h²) a^α_j) v^k_{j−1} ],
v^N_j = g_j,  (v^k_M − v^k_{M−1}) / h = (p / x_M) v^k_M,  k ∈ {0,...,N−1}, j ∈ {0,...,M}.
Denote
p^α_j = −(Δt/h)|b^α_j| − (Δt/h²) a^α_j,
(p^α)⁺_j = (Δt/h)(b^α_j)⁺ + ½ (Δt/h²) a^α_j,
(p^α)⁻_j = (Δt/h)(b^α_j)⁻ + ½ (Δt/h²) a^α_j,
the transition coefficients that define a Markov chain with transition matrix A^α.
Matrix notation: sup_α ( (I + Δt A^α_h) v^{k+1} ) − v^k = 0.
Algorithm in Scilab
The Howard algorithm
We set up the Howard algorithm [3][7], which allows us to solve min_{α∈A} (B(α)x − b) = 0, where B(α) is defined by B(α)_ij = (I + δt A(α_i))_ij.
1. Initialize α⁰ ∈ A^N.
2. Iterate for k ≥ 0:
   (i) find x^k ∈ R^N, the solution of B(α^k) x^k = b;
   (ii) α^{k+1} := argmin_{α∈A^N} ( B(α) x^k − b );
   (iii) k := k + 1.
Note that at each iteration we have to solve a linear system and then find the minimizing control α.
Results: value function
The approximated value function is very close to the explicit optimal solution.
Results: error between value functions
The error between the two functions is of order 10⁻³. It increases near the boundary in x, which can be explained by the boundary conditions used in the model.
Results: optimal control α
The shape of the optimal control α compared to the explicit solution; the same comments apply regarding the terminal condition imposed on x.
Results: error between control solutions
In the Howard algorithm, Dirichlet-type and then Neumann-type boundary conditions were both used ⇒ the Neumann conditions give better results.
Introduction to the Markov chain approach
There exists k > 0 and a Markov matrix M^α_h verifying
A^α_h = −β I_h + (1/k)(M^α_h − I_h),  i.e.  M^α_h = I_h + k (A^α_h + β I_h).   (20)
Hence
(M^α_h)_ij = 1 + k(β + (A^α_h)_ii) if i = j,  and  k (A^α_h)_ij if i ≠ j.
We choose k such that k ≤ 1 / (β + |(A^α_h)_ii|) for all i = 1,...,d, which makes all the coefficients (M^α_h)_ij nonnegative; the row sums Σ_j (M^α_h)_ij equal 1 for Neumann conditions and are < 1 for Dirichlet conditions.
The discrete HJB equation can then be written as sup_{α∈A} [ (M^α_h − I_h − kβ I_h) v_h + k u_h ] = 0
⇒ the HJB equation of a control problem for a Markov chain with a discount rate, instantaneous cost k u_h, and transition matrix M^α_h.
Explicit value function: the shape of the explicit solution of the problem, using the CRRA utility function.
Approximated value function: near the terminal part of the domain, the value function blows up. Away from that region, the shapes of the explicit and approximated solutions agree well.
Error: the error is estimated at 5·10⁻² and is larger near the boundary in x.
Conclusion
Optimal stochastic control is an interesting field of research, with Merton portfolio allocation without and with consumption as classic examples.
Numerical methods (forward and backward schemes, Howard and policy iteration) approximate the optimal solutions; they must satisfy stability, consistency and convergence ⇒ a controlled Markov chain has been used.
The numerical results were satisfactory, despite the errors related to the delicate boundary conditions.
The DPP requires a minimum of smoothness of the value function in order to apply Itô's formula, which is not always the case ⇒ the viscosity-solution approach is widely used in finance.
One can imagine more complicated problems, such as investment problems with transaction costs (singular optimal control problems): what methods should be used to model their solutions?
References
[1] D. Lamberton and B. Lapeyre. Une Introduction au Calcul Stochastique Appliquée à la Finance. Éditions Eyrolles, 1997.
[2] H. Pham. Continuous-time Stochastic Control and Optimization with Financial Applications. Springer, 2008.
[3] J.-P. Chancelier and A. Sulem. Méthodes numériques en contrôle stochastique. Le Cermics, 22 February 2005.
[4] H.J. Kushner and P. Dupuis. Numerical Methods for Stochastic Control Problems in Continuous Time. Springer-Verlag, 1992.
[5] S. Crépey. Financial Modeling. Springer, 2013.
[6] http://www.cmap.polytechnique.fr/~touzi/Fields-LN.pdf
[7] http://www.math.fsu.edu/~pgarreau/files/merton.pdf
The END