Necessary condition for optimality of forward–backward doubly system

Afr. Mat. DOI 10.1007/s13370-014-0227-1

Chala Adel

Received: 8 February 2012 / Accepted: 22 January 2014
© African Mathematical Union and Springer-Verlag Berlin Heidelberg 2014

Abstract We consider a stochastic control problem in the case where the control domain is convex and the system is governed by a nonlinear forward–backward doubly stochastic differential equation with a given terminal condition. The criterion to be minimized is of general form, with initial and terminal costs. We derive a maximum principle of optimality.

Keywords Forward–backward doubly stochastic differential equation · Maximum principle · Adjoint equation · Variational inequality

Mathematics Subject Classification (2000) 60H10 · 93E20


1 Introduction

Nonlinear backward stochastic differential equations (BSDEs in short) were introduced by Pardoux and Peng [1]. The original motivation for the study of these equations was to provide a probabilistic interpretation for solutions of both parabolic and elliptic semilinear partial differential equations; see Peng [2]. Since then, the theory of BSDEs has been intensively developed thanks to its connections with many other fields of research, such as mathematical finance (see El Karoui et al. [3] and the references therein), stochastic control and stochastic games (see Hamadène and Lepeltier [4], and the works of Bahlali and Chala, for example [5–7]). The classical condition on the coefficients for proving the existence and uniqueness result is a uniform Lipschitz one. Many authors have attempted to relax this condition. Among others, we refer to El Karoui and Huang [8] for a stochastic Lipschitz condition, to Bahlali et al. [9] for a stochastic monotonicity condition, and to Jia for discontinuous

This work is partially supported by the Algerian PNR Project No. 8/u07/857.

C. Adel (B): Laboratory of Applied Mathematics, University Mouhamed Khider, P.O. Box 145, Biskra 07000, Algeria. e-mail: [email protected]; [email protected]


coefficients. After they introduced the theory of BSDEs, Pardoux and Peng [10] considered a new kind of BSDE, namely a class of backward doubly stochastic differential equations (BDSDEs in short) with two different directions of stochastic integrals, i.e., the equations involve both a standard (forward) stochastic Itô integral $d\overrightarrow{W}_t$ and a backward stochastic Itô integral $d\overleftarrow{B}_t$. More precisely, they dealt with the following BDSDE:
$$
\begin{cases}
dY_t = f(t, Y_t, Z_t)\,dt + g(t, Y_t, Z_t)\,d\overleftarrow{B}_t - Z_t\,d\overrightarrow{W}_t,\\
Y_T = \xi.
\end{cases}
\tag{1}
$$

They proved that if f and g are uniformly Lipschitz, then Eq. (1) has, for any square integrable terminal value $\xi$, a unique solution $(Y_t, Z_t)$ on the interval $[0, T]$. They also showed that BDSDEs can provide a probabilistic representation for solutions of some quasi-linear stochastic partial differential equations. Since this first existence and uniqueness result, many papers have been devoted to existence and/or uniqueness results under weaker assumptions. Among these papers, we can distinguish two different classes: scalar BDSDEs and multidimensional BDSDEs. In the first case, one can take advantage of the comparison theorem: Shi et al. [11] weakened the uniform Lipschitz assumptions to linear growth and continuity conditions by virtue of a comparison theorem introduced by themselves; they obtained the existence of solutions to BDSDEs, but without uniqueness. In this spirit, let us mention the contribution of N'zi and Owo [12], which dealt with discontinuous coefficients. For multidimensional BDSDEs there is no comparison theorem, and to overcome this difficulty a monotonicity assumption on the generator f in the variable y is used. This appears in the work of Peng and Shi [13], who introduced a class of forward–backward doubly stochastic differential equations under a Lipschitz condition and a monotonicity assumption. Unfortunately, the uniform Lipschitz condition cannot be satisfied in many applications. More recently, N'zi and Owo [14] established an existence and uniqueness result under non-Lipschitz assumptions. In this paper we study a stochastic control problem where the system is governed by a nonlinear forward–backward doubly stochastic differential equation (F-BDSDE for short) of the type
$$
\begin{cases}
dx^v_t = b(t, x^v_t, v_t)\,dt + \sigma(t, x^v_t, v_t)\,dW_t,\\
dy^v_t = f(t, x^v_t, y^v_t, z^v_t, v_t)\,dt + g(t, x^v_t, y^v_t, z^v_t, v_t)\,dB_t + z^v_t\,dW_t,\\
x^v_0 = a, \qquad y^v_T = \xi,
\end{cases}
$$

where $B = (B_t)_{t\ge 0}$ is a standard $d$-dimensional Brownian motion defined on a probability space $(\Omega, \mathcal{F}, (\mathcal{F}^{(W,B)}_t)_{t\ge 0}, P)$ satisfying the usual conditions. The control variable $v = (v_t)$, called a strict (classical) control, is an $\mathcal{F}_t$-adapted process with values in some convex set $U$ of $\mathbb{R}^k$. We denote by $\mathcal{U}$ the class of all strict controls. The criterion to be minimized over the set $\mathcal{U}$ has the form

$$
J(v) = E\left[\Phi(x^v_T) + \Psi(y^v(0)) + \int_0^T l(t, x^v_t, y^v_t, z^v_t, v_t)\,dt\right],
$$

where $\Phi$, $\Psi$ and $l$ are given maps and $(x_t, y_t, z_t)$ is the trajectory of the system controlled by $v$. A control $u \in \mathcal{U}$ is called optimal if it satisfies $J(u) = \inf_{v\in\mathcal{U}} J(v)$. Stochastic control problems for forward–backward systems have been studied by many authors. The first contribution to the control problem of forward–backward systems is due to Peng [15], who


obtained the maximum principle with the control domain being convex. Xu [16] established the maximum principle for this kind of problem in the case where the control domain is not necessarily convex, with uncontrolled diffusion coefficient and a restricted cost functional. The work of Peng (convex control domain) was generalized by Wu [17], where the system is governed by a fully coupled FBSDE. Shi and Wu [18] extended the result of Xu [16] to fully coupled FBSDEs with convex control domain and uncontrolled diffusion coefficient. Ji and Zhou [19] used the Ekeland variational principle to establish a maximum principle for controlled FBSDE systems in which the forward state is constrained in a convex set at the terminal time, and applied the result to state-constrained stochastic linear-quadratic control models and a recursive utility optimization problem. All the previously cited works on stochastic control of FBSDEs are obtained by introducing two adjoint equations. In recent work on the subject, Bahlali and Labed [20] introduced three adjoint equations to establish necessary as well as sufficient optimality conditions, in the case where the control domain is nonconvex and the diffusion coefficient is uncontrolled. For more details we refer to [21–24].

Our objective in this paper is to establish necessary optimality conditions of the Pontryagin maximum principle type. First, we derive necessary optimality conditions for strict controls. Since the set of strict controls is convex, the classical approach is the convex perturbation method. More precisely, if $u$ is an optimal strict control and $v$ is arbitrary, then for a sufficiently small $\theta > 0$ we define a convex perturbation of the control. We then derive the variational equation from the state equation, and the variational inequality from the fact that $0 \le J(u^\theta_t) - J(u_t)$. From the variational inequality, we derive the necessary conditions of optimality by introducing two adjoint equations, respectively associated with the forward equation and with the backward equation together with its terminal value. The paper is organized as follows. In Sect. 2, we formulate the problem and give the various assumptions used throughout the paper. In Sect. 3, we give some preliminary results. In Sect. 4, we give our main result, the necessary optimality conditions for the forward–backward doubly stochastic differential equation control problem in the case where the control domain is convex. Throughout this paper, we denote by $C$ some positive constant, by $\mathcal{M}_{n\times d}(\mathbb{R})$ the space of $n\times d$ real matrices, and by $\mathcal{M}^d_{n\times d}(\mathbb{R})$ the linear space of vectors $M = (M_1, M_2, \ldots, M_d)$ with $M_i \in \mathcal{M}_{n\times d}(\mathbb{R})$. We use the standard inner and matrix products.

2 Formulation of the problem

Let $(\Omega, \mathcal{F}, (\mathcal{F}^{(B,W)}_t)_{t\ge 0}, P)$ be a probability space equipped with a filtration satisfying the usual conditions, on which a $d$-dimensional Brownian motion $W = (W_t : 0 \le t \le T)$ and $B = (B_t : 0 \le t \le T)$ are defined. We assume that $(\mathcal{F}_t)$ is the $P$-augmentation of the natural filtration of $(W_t)$ and $(B_t)$ defined, for all $t \ge 0$, by
$$
\mathcal{F}_t = \sigma[W(r) - W(0);\ 0 \le r \le t] \vee \sigma[B(r) - B(t);\ t \le r \le T] \vee \mathcal{N},
$$
where $\mathcal{N}$ denotes the totality of $P$-null sets and $\sigma_1 \vee \sigma_2$ denotes the $\sigma$-field generated by $\sigma_1 \cup \sigma_2$. For any $n \in \mathbb{N}$, let $\mathcal{M}^2(0, T; \mathbb{R}^n)$ denote the set of $n$-dimensional jointly measurable random processes $\{\varphi_t,\ t \in [0, T]\}$ which satisfy: (i) $E\big[\int_0^T |\varphi_t|^2\,dt\big] < \infty$; (ii) $\varphi_t$ is $\mathcal{F}^{(B,W)}_t$-measurable, for a.e. $t \in [0, T]$.


We denote similarly by $\mathcal{S}^2([0, T]; \mathbb{R}^n)$ the set of continuous $n$-dimensional random processes which satisfy: (i) $E\big[\sup_{0\le t\le T} |\varphi_t|^2\big] < \infty$; (ii) $\varphi_t$ is $\mathcal{F}^{(B,W)}_t$-measurable, for any $t \in [0, T]$. Let $T$ be a strictly positive real number and $U$ a nonempty subset of $\mathbb{R}^k$.

Definition 1 An admissible control $v$ is an $\mathcal{F}_t$-adapted process with values in $U$ such that $E\big[\sup_{0\le t\le T} |v_t|^2\big] < \infty$. We denote by $\mathcal{U}$ the set of all admissible controls.

For any $v \in \mathcal{U}$, we consider the following forward–backward doubly system:
$$
\begin{cases}
dx^v_t = b(t, x^v_t, v_t)\,dt + \sigma(t, x^v_t, v_t)\,dW_t,\\
dy^v_t = f(t, x^v_t, y^v_t, z^v_t, v_t)\,dt + g(t, x^v_t, y^v_t, z^v_t, v_t)\,dB_t + z^v_t\,dW_t,\\
x^v_0 = a, \qquad y^v_T = \xi,
\end{cases}
\tag{2}
$$

where $b : [0, T]\times\mathbb{R}^n\times U \to \mathbb{R}^n$, $\sigma : [0, T]\times\mathbb{R}^n\times U \to \mathcal{M}_{n\times d}(\mathbb{R})$, $f : [0, T]\times\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^{n\times d}\times U \to \mathbb{R}^m$, $g : [0, T]\times\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^{n\times d}\times U \to \mathbb{R}^{m\times k}$. We define the criterion to be minimized, with initial and terminal costs, as follows:

$$
J(v) = E\left[\int_0^T h(t, x^v_t, y^v_t, z^v_t, v_t)\,dt + \Phi(x^v_T) + \Psi(y^v(0))\right], \tag{3}
$$

where $\Phi : \mathbb{R}^n \to \mathbb{R}$, $\Psi : \mathbb{R}^m \to \mathbb{R}$, and $h : [0, T]\times\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^{n\times d}\times U \to \mathbb{R}$. The control problem is to minimize the functional $J$ over $\mathcal{U}$: $u \in \mathcal{U}$ is an optimal solution if
$$
J(u) = \inf_{v\in\mathcal{U}} J(v). \tag{4}
$$

A control that solves this problem is called optimal. Our goal is to establish necessary conditions of optimality, satisfied by a given optimal control, in the form of a stochastic maximum principle.

We also assume:

(H1)
(i) $b, \sigma, f, g, h, \Phi$ and $\Psi$ are continuously differentiable with respect to $(x, y, z)$.
(ii) The derivatives of $b, \sigma, f, g$ and $h$ are bounded by $C(1 + |x| + |y| + |z| + |v|)$.
(iii) The derivatives of $\Phi$ and $\Psi$ are bounded by $C(1 + |x|)$ and $C(1 + |y|)$, respectively.

Under the above assumptions, for every $v \in \mathcal{U}$, Eq. (2) has a unique strong solution and the cost functional $J$ is well defined from $\mathcal{U}$ into $\mathbb{R}$.
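Since (2) is well posed, a cost of the form (3) can in principle be approximated by Monte Carlo simulation of the state. The sketch below is only an illustration for a drastically simplified model: a scalar forward equation dx_t = v_t dt + dW_t (the backward doubly component (y, z), the dB_t integral and Ψ are dropped), with assumed data h = v² and Φ(x) = x²; none of these coefficient choices come from the paper.

```python
import math
import random

def estimate_cost(control, T=1.0, n_steps=100, n_paths=5000, x0=0.0, seed=7):
    """Euler-Maruyama / Monte Carlo estimate of the toy cost
    J(v) = E[ x_T^2 + int_0^T v_t^2 dt ]
    for the simplified dynamics dx_t = v_t dt + dW_t, x_0 = x0.
    `control` maps (t, x) to the control value v_t."""
    rng = random.Random(seed)
    dt = T / n_steps
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x, running = x0, 0.0
        for k in range(n_steps):
            v = control(k * dt, x)
            running += v * v * dt                          # running cost h = v^2
            x += v * dt + sqrt_dt * rng.gauss(0.0, 1.0)    # Euler step
        total += x * x + running                           # terminal cost Phi = x^2
    return total / n_paths

print(estimate_cost(lambda t, x: 0.0))
```

With the null control the exact value is J = E[x_T²] = T, which the estimate reproduces up to Monte Carlo error.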

3 Preliminary results

Definition 2 $(x_t, y_t, z_t)$ is said to be a solution of (2) if and only if $(x_t, y_t, z_t) \in \mathcal{M}^2(\mathbb{R}^n)\times\mathcal{M}^2(\mathbb{R}^{n\times d})\times\mathcal{S}^2(\mathbb{R}^{n\times m})$ and it satisfies (2).


3.1 Variational equation and variational inequality

Let $u(\cdot)$ be an optimal control and let $(x_t, y_t, z_t)$ be the corresponding trajectory. Let $v(\cdot)$ be such that $u(\cdot) + v(\cdot) \in \mathcal{U}$. Since $\mathcal{U}$ is convex, for any $\theta \in [0, 1]$, $u^\theta_t = u_t + \theta v_t = (1-\theta)u_t + \theta(u_t + v_t)$ is a convex combination of admissible controls and is therefore also in $\mathcal{U}$. Let $(X_t, Y_t, Z_t)$ be a solution of
$$
\begin{cases}
dX_t = \big[b_x(t, x_t, u_t) X_t + b_v(t, x_t, u_t) v_t\big]\,dt + \big[\sigma_x(t, x_t, u_t) X_t + \sigma_v(t, x_t, u_t) v_t\big]\,dW_t,\\
X(0) = 0,
\end{cases}
\tag{5}
$$
$$
\begin{cases}
dY_t = \big[f_x(t, x_t, y_t, z_t, u_t) X_t + f_y(t, x_t, y_t, z_t, u_t) Y_t + f_z(t, x_t, y_t, z_t, u_t) Z_t\big]\,dt\\
\qquad\; + \big[g_x(t, x_t, y_t, z_t, u_t) X_t + g_y(t, x_t, y_t, z_t, u_t) Y_t + g_z(t, x_t, y_t, z_t, u_t) Z_t\big]\,dB_t\\
\qquad\; + f_v(t, x_t, y_t, z_t, u_t) v_t\,dt + g_v(t, x_t, y_t, z_t, u_t) v_t\,dB_t + Z_t\,dW_t,\\
Y(T) = 0.
\end{cases}
\tag{6}
$$
From Definition 2 we can find a unique triple $(X_t, Y_t, Z_t) \in \mathcal{M}^2(\mathbb{R}^n)\times\mathcal{M}^2(\mathbb{R}^{n\times d})\times\mathcal{S}^2(\mathbb{R}^{n\times m})$ which solves (5) and (6). Equations (5) and (6) are called variational equations.

We denote by $(x^\theta_t, y^\theta_t, z^\theta_t)$ the trajectory corresponding to $u^\theta$. Set
$$
\tilde{x}^\theta(t) \equiv \frac{1}{\theta}\big(x^\theta_t - x_t\big) - X_t, \qquad \tilde{y}^\theta(t) \equiv \frac{1}{\theta}\big(y^\theta_t - y_t\big) - Y_t, \qquad \tilde{z}^\theta(t) \equiv \frac{1}{\theta}\big(z^\theta_t - z_t\big) - Z_t.
$$
We have the following convergence result:

Lemma 1 Suppose H1 holds. Then
$$
\lim_{\theta\to 0}\Big[\sup_{t\in[0,T]} E\,|\tilde{x}^\theta(t)|^2\Big] = 0, \tag{7}
$$
$$
\lim_{\theta\to 0}\Big[\sup_{t\in[0,T]} E\,|\tilde{y}^\theta(t)|^2\Big] = 0, \tag{8}
$$
$$
\lim_{\theta\to 0}\Big[E\int_0^T |\tilde{z}^\theta(t)|^2\,dt\Big] = 0. \tag{9}
$$

Proof The proof of the convergence for $\tilde{x}^\theta(t)$ can be found in Lemma 4.1 of [21]. We need only treat (8) and (9). We have
$$
\begin{aligned}
d\tilde{y}^\theta(t) &= \frac{1}{\theta}\big(dy^\theta_t - dy_t\big) - dY_t\\
&= \frac{1}{\theta}\big(f(t, x^\theta_t, y^\theta_t, z^\theta_t, u^\theta_t) - f(t, x_t, y_t, z_t, u_t)\big)\,dt\\
&\quad + \frac{1}{\theta}\big(g(t, x^\theta_t, y^\theta_t, z^\theta_t, u^\theta_t) - g(t, x_t, y_t, z_t, u_t)\big)\,dB_t\\
&\quad - \big[f_x(t, x_t, y_t, z_t, u_t) X_t + f_y(t, x_t, y_t, z_t, u_t) Y_t + f_z(t, x_t, y_t, z_t, u_t) Z_t\big]\,dt\\
&\quad - \big[g_x(t, x_t, y_t, z_t, u_t) X_t + g_y(t, x_t, y_t, z_t, u_t) Y_t + g_z(t, x_t, y_t, z_t, u_t) Z_t\big]\,dB_t\\
&\quad - f_v(t, x_t, y_t, z_t, u_t) v_t\,dt - g_v(t, x_t, y_t, z_t, u_t) v_t\,dB_t + \tilde{z}^\theta(t)\,dW_t\\
&= \Big[\int_0^1 f_x(\Lambda^\theta_t)\,\tilde{x}^\theta(t)\,d\lambda + \int_0^1 f_y(\Lambda^\theta_t)\,\tilde{y}^\theta(t)\,d\lambda + \int_0^1 f_z(\Lambda^\theta_t)\,\tilde{z}^\theta(t)\,d\lambda\Big]\,dt\\
&\quad + \Big[\int_0^1 g_x(\Lambda^\theta_t)\,\tilde{x}^\theta(t)\,d\lambda + \int_0^1 g_y(\Lambda^\theta_t)\,\tilde{y}^\theta(t)\,d\lambda + \int_0^1 g_z(\Lambda^\theta_t)\,\tilde{z}^\theta(t)\,d\lambda\Big]\,dB_t\\
&\quad + \tilde{z}^\theta(t)\,dW_t + \mu^\theta_t\,dt + \eta^\theta_t\,dB_t,
\end{aligned}
$$


where $\Lambda^\theta_t = \big(t,\ x + \theta\lambda(X + \tilde{x}^\theta),\ y + \theta\lambda(Y + \tilde{y}^\theta),\ z + \theta\lambda(Z + \tilde{z}^\theta),\ u + \theta\lambda v\big)$, and
$$
\begin{aligned}
\mu^\theta_t &= \int_0^1 \big(f_x(\Lambda^\theta_t) - f_x(t)\big) X_t\,d\lambda + \int_0^1 \big(f_y(\Lambda^\theta_t) - f_y(t)\big) Y_t\,d\lambda\\
&\quad + \int_0^1 \big(f_z(\Lambda^\theta_t) - f_z(t)\big) Z_t\,d\lambda + \int_0^1 \big(f_v(\Lambda^\theta_t) - f_v(t)\big) v_t\,d\lambda,\\
\eta^\theta_t &= \int_0^1 \big(g_x(\Lambda^\theta_t) - g_x(t)\big) X_t\,d\lambda + \int_0^1 \big(g_y(\Lambda^\theta_t) - g_y(t)\big) Y_t\,d\lambda\\
&\quad + \int_0^1 \big(g_z(\Lambda^\theta_t) - g_z(t)\big) Z_t\,d\lambda + \int_0^1 \big(g_v(\Lambda^\theta_t) - g_v(t)\big) v_t\,d\lambda.
\end{aligned}
$$

Applying Itô's formula to $|\tilde{y}^\theta(t)|^2$, we have
$$
\begin{aligned}
&E\,|\tilde{y}^\theta(t)|^2 + E\int_t^T |\tilde{z}^\theta(s)|^2\,ds \le \frac{1}{\theta}\,E\int_t^T |\tilde{y}^\theta(s)|^2\,ds\\
&\quad + \theta\,E\int_t^T \Big|\int_0^1 f_x(\Lambda^\theta_s)\,\tilde{x}^\theta(s)\,d\lambda + \int_0^1 f_y(\Lambda^\theta_s)\,\tilde{y}^\theta(s)\,d\lambda + \int_0^1 f_z(\Lambda^\theta_s)\,\tilde{z}^\theta(s)\,d\lambda + \mu^\theta_s\Big|^2 ds\\
&\quad + E\int_t^T \Big|\int_0^1 g_x(\Lambda^\theta_s)\,\tilde{x}^\theta(s)\,d\lambda + \int_0^1 g_y(\Lambda^\theta_s)\,\tilde{y}^\theta(s)\,d\lambda + \int_0^1 g_z(\Lambda^\theta_s)\,\tilde{z}^\theta(s)\,d\lambda + \eta^\theta_s\Big|^2 ds.
\end{aligned}
$$

Since $f_x, f_y, f_z, g_x, g_y$ and $g_z$ are bounded with respect to the variables $(x, y, z)$, we have
$$
\begin{aligned}
E\,|\tilde{y}^\theta(t)|^2 + E\int_t^T |\tilde{z}^\theta(s)|^2\,ds
&\le \Big(\frac{1}{\theta} + \theta M\Big) E\int_t^T |\tilde{y}^\theta(s)|^2\,ds + (\theta M + M)\,E\int_t^T |\tilde{x}^\theta(s)|^2\,ds\\
&\quad + (\theta M + M)\,E\int_t^T |\tilde{z}^\theta(s)|^2\,ds + \theta M\,E\int_t^T |\mu^\theta_s|^2\,ds + M\,E\int_t^T |\eta^\theta_s|^2\,ds.
\end{aligned}
$$

Choosing $M = \frac{1}{2(1+\theta)}$, we get
$$
E\,|\tilde{y}^\theta(t)|^2 + \frac{1}{2}E\int_t^T |\tilde{z}^\theta(s)|^2\,ds \le K\,E\int_t^T |\tilde{y}^\theta(s)|^2\,ds + K\,E\int_t^T |\tilde{x}^\theta(s)|^2\,ds + \beta^\theta_t, \tag{10}
$$


with
$$
\beta^\theta_t = \theta M\,E\int_t^T |\mu^\theta_s|^2\,ds + M\,E\int_t^T |\eta^\theta_s|^2\,ds.
$$

We have $\lim_{\theta\to 0}\beta^\theta_t = 0$. Thus we can apply Gronwall's inequality and the Burkholder–Davis–Gundy inequality to (10) to prove the two convergences (8) and (9). $\square$

Since $u(\cdot)$ is an optimal control,
$$
\frac{1}{\theta}\big(J(u_t + \theta v_t) - J(u_t)\big) \ge 0. \tag{11}
$$

From (11) and Lemma 1, we have the following.

Lemma 2 Suppose H1 holds; then the following variational inequality holds:
$$
\begin{aligned}
0 \le\ & E\int_0^T \big[h_x(t, x_t, y_t, z_t, u_t) X_t + h_y(t, x_t, y_t, z_t, u_t) Y_t + h_z(t, x_t, y_t, z_t, u_t) Z_t\big]\,dt\\
&+ E\int_0^T h_v(t, x_t, y_t, z_t, u_t)\,v_t\,dt + E\big[\Phi_x(x_T) X_T + \Psi_y(y_0) Y_0\big]. \tag{12}
\end{aligned}
$$

Proof Let $\theta \to 0$ in (11). From the first estimate (7) of Lemma 1, we get
$$
\frac{1}{\theta}E\big[\Phi(x^\theta_T) - \Phi(x_T)\big] = E\Big[\int_0^1 \Phi_x\big(x_T + \lambda(x^\theta_T - x_T)\big)\,d\lambda\ \big(\tilde{x}^\theta(T) + X_T\big)\Big] \to E\big[\Phi_x(x_T) X_T\big].
$$
Similarly, we have
$$
\frac{1}{\theta}E\big[\Psi(y^\theta_0) - \Psi(y_0)\big] = E\Big[\int_0^1 \Psi_y\big(y_0 + \lambda(y^\theta_0 - y_0)\big)\,d\lambda\ \big(\tilde{y}^\theta(0) + Y_0\big)\Big] \to E\big[\Psi_y(y_0) Y_0\big],
$$

and
$$
\begin{aligned}
&\frac{1}{\theta}E\Big[\int_0^T \big[h(t, x^\theta_t, y^\theta_t, z^\theta_t, u_t + \theta v_t) - h(t, x_t, y_t, z_t, u_t)\big]\,dt\Big]\\
&\quad \to E\Big[\int_0^T \big(h_x(t, x_t, y_t, z_t, u_t) X_t + h_y(t, x_t, y_t, z_t, u_t) Y_t + h_z(t, x_t, y_t, z_t, u_t) Z_t\big)\,dt\Big]\\
&\qquad + E\Big[\int_0^T h_v(t, x_t, y_t, z_t, u_t)\,v_t\,dt\Big].
\end{aligned}
$$
Thus (12) follows. $\square$


Notation We write, for example, $A(t) = A(t, x_t, u_t)$ for $A = b, b_x, b_v, \sigma, \sigma_x, \sigma_v$, and $B(t) = B(t, x_t, y_t, z_t, u_t)$ for $B = f, f_x, f_v, g, g_x, g_v, h, h_x, h_v$.

4 Adjoint equation and maximum principle for optimality

In this section, we derive the variational inequality from (12). To this end, we introduce the following two adjoint equations:
$$
\begin{cases}
-dp_t = \big[b_x(t)\,p_t + f_x(t)\,q_t + \sigma_x(t)\,P_t + g_x(t)\,R_t + h_x(t)\big]\,dt - P_t\,dW_t,\\
-dq_t = \big[f_y(t)\,q_t + g_y(t)\,R_t + h_y(t)\big]\,dt + R_t\,dB_t + \big[f_z(t)\,q_t + g_z(t)\,R_t + h_z(t)\big]\,dW_t,\\
p_T = \Phi_x(x_T), \qquad q_0 = \Psi_y(y(0)),
\end{cases}
\tag{13}
$$

where $(p, R, P) \in \big(L^2_{\mathcal{F}}([0, T]; \mathbb{R}^n)\big)^2 \times L^2_{\mathcal{F}}([0, T]; \mathbb{R}^{n\times d})$ and $q \in L^2_{\mathcal{F}}([0, T]; \mathbb{R}^m)$. Applying Itô's formula to $p_t X_t$ and to $q_t Y_t$, respectively, and taking expectations, we have

$$
E(p_T X_T) = E(p_0 X_0) + E\int_0^T \big[p_t\,b_v(t) + P_t\,\sigma_v(t)\big] v_t\,dt - E\int_0^T \big[f_x(t)\,q_t + g_x(t)\,R_t + h_x(t)\big] X_t\,dt, \tag{14}
$$
$$
E(q_0 Y_0) = E(q_T Y_T) - E\int_0^T h_y(t)\,Y_t\,dt + E\int_0^T \big[f_x(t)\,q_t + g_x(t)\,R_t\big] X_t\,dt + E\int_0^T \big[q_t\,f_v(t) + R_t\,g_v(t)\big] v_t\,dt. \tag{15}
$$

We remark that $X_0 = 0$, $Y_T = 0$, $p_T = \Phi_x(x_T)$, and $q_0 = \Psi_y(y(0))$. Then (14) and (15) become
$$
E\big(\Phi_x(x_T)\,X_T\big) = E\int_0^T \big[p_t\,b_v(t) + P_t\,\sigma_v(t)\big] v_t\,dt - E\int_0^T \big[f_x(t)\,q_t + g_x(t)\,R_t + h_x(t)\big] X_t\,dt,
$$
$$
E\big(\Psi_y(y(0))\,Y_0\big) = -E\int_0^T h_y(t)\,Y_t\,dt + E\int_0^T \big[f_x(t)\,q_t + g_x(t)\,R_t\big] X_t\,dt + E\int_0^T \big[q_t\,f_v(t) + R_t\,g_v(t)\big] v_t\,dt.
$$


Finally, we can rewrite (12) as
$$
0 \le E\int_0^T H_v(t, x_t, y_t, z_t, p_t, q_t, P_t, R_t, u_t)\,v_t\,dt, \tag{16}
$$

where the Hamiltonian $H$ is defined from $[0, T]\times\mathbb{R}^n\times\mathbb{R}^m\times\mathcal{M}_{m\times d}(\mathbb{R})\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\times\mathbb{R}^m\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\times U$ into $\mathbb{R}$ by
$$
H(t, x_t, y_t, z_t, p_t, q_t, P_t, R_t, u_t) = b(t)\,p_t + \sigma(t)\,P_t + f(t)\,q_t + g(t)\,R_t + h(t). \tag{17}
$$
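Written out, (17) is simply a sum of pairings between coefficients and adjoint processes. A scalar sketch, with placeholder coefficient functions chosen only for illustration:

```python
def hamiltonian(b, sigma, f, g, h, t, x, y, z, p, q, P, R, v):
    """Scalar instance of (17): H = b*p + sigma*P + f*q + g*R + h,
    where b, sigma depend on (t, x, v) and f, g, h on (t, x, y, z, v).
    All coefficient functions are caller-supplied placeholders."""
    return (b(t, x, v) * p + sigma(t, x, v) * P
            + f(t, x, y, z, v) * q + g(t, x, y, z, v) * R
            + h(t, x, y, z, v))

# Hypothetical linear-quadratic data: b = v, sigma = 1, f = y, g = z, h = v^2.
H = hamiltonian(
    b=lambda t, x, v: v,
    sigma=lambda t, x, v: 1.0,
    f=lambda t, x, y, z, v: y,
    g=lambda t, x, y, z, v: z,
    h=lambda t, x, y, z, v: v * v,
    t=0.0, x=0.0, y=2.0, z=3.0, p=1.0, q=1.0, P=1.0, R=1.0, v=2.0,
)
print(H)  # 2*1 + 1*1 + 2*1 + 3*1 + 4 = 12.0
```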

From the above variational inequality, we can easily derive the necessary conditions of optimality.
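To make an inequality of the form (16) concrete, consider a toy problem with assumed data (not from the paper) whose cost is available in closed form: dx_t = v_t dt + dW_t with x_0 = 0, running cost h = v², terminal cost Φ(x) = x² and Ψ ≡ 0, so that J(v) = (∫₀ᵀ v_t dt)² + T + ∫₀ᵀ v_t² dt and u ≡ 0 is optimal. Moving from u toward any admissible control can then only increase the cost, which is what the variational inequality expresses infinitesimally.

```python
def cost(v, T=1.0, n=1000):
    """Exact cost of the toy problem dx_t = v_t dt + dW_t, x_0 = 0:
    J(v) = E[x_T^2] + int_0^T v_t^2 dt = (int_0^T v_t dt)^2 + T + int_0^T v_t^2 dt,
    with the time integrals evaluated by left Riemann sums on n steps."""
    dt = T / n
    iv = sum(v(k * dt) for k in range(n)) * dt         # int_0^T v_t dt
    iv2 = sum(v(k * dt) ** 2 for k in range(n)) * dt   # int_0^T v_t^2 dt
    return iv * iv + T + iv2

u = lambda t: 0.0        # the optimal control of this toy problem
w = lambda t: 1.0 - t    # an arbitrary admissible control

# Convex perturbations of u toward w never decrease the cost.
for theta in (0.0, 0.25, 0.5, 1.0):
    perturbed = lambda t, th=theta: u(t) + th * (w(t) - u(t))
    assert cost(perturbed) >= cost(u)
```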

Theorem 1 (Necessary condition of optimality) Let $u$ be an optimal control minimizing the functional $J$ over $\mathcal{U}$ and let $(x_t, y_t, z_t)$ denote the corresponding optimal trajectory. Then there exist two unique adapted processes $p \in L^2_{\mathcal{F}}([0, T]; \mathbb{R}^n)$ and $q \in L^2_{\mathcal{F}}([0, T]; \mathbb{R}^m)$, which are respectively solutions of the stochastic differential equations (13), such that, for every $v \in U$, a.e., a.s., we have
$$
0 \le E\int_0^T H_v(t, x_t, y_t, z_t, p_t, q_t, P_t, R_t, u_t)\,(v - u_t)\,dt. \tag{18}
$$

Proof The proof follows immediately from (16), replacing $v_t$ by $v - u_t$. $\square$

Acknowledgments The author would like to thank the anonymous referees for their careful reading and helpful suggestions on the original version of this paper.

References

1. Pardoux, E., Peng, S.: Backward doubly stochastic differential equations and systems of quasilinear SPDEs. Probab. Theory Related Fields 98(2), 209–227 (1994)

2. Peng, S.: Probabilistic interpretation for systems of quasilinear parabolic partial differential equations. Stoch. Stoch. Rep. 37, 61–74 (1991)

3. El Karoui, N., Peng, S., Quenez, M.C.: Backward stochastic differential equations in finance. Math. Finance 7(1), 1–71 (1997)

4. Hamadène, S., Lepeltier, J.P.: Backward equations, stochastic control and zero-sum stochastic differential games. Stoch. Stoch. Rep. 54, 221–231 (1995)

5. Bahlali, S., Chala, A.: A general optimality conditions for stochastic control problems of jump diffusions. Appl. Math. Optim. 65(1), 15–29 (2012)

6. Chala, A.: The relaxed optimal control problem of forward–backward stochastic doubly systems with Poisson jumps and its application to LQ problem. Random Oper. Stoch. Equ. 20, 255–282 (2012)

7. Chala, A.: Necessary and sufficient condition for optimality of a backward non-Markovian system. J. Numer. Math. Stoch. 5(1), 1–13 (2013)

8. El Karoui, N., Huang, S.J.: A general result of existence and uniqueness of backward stochastic differential equations. In: El Karoui, N., Mazliak, L. (eds.) Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series, vol. 364, pp. 27–36. Longman, London (1997)

9. Bahlali, K., Elouaflin, A., N’zi, M.: Backward stochastic differential equations with stochastic monotone coefficients. J. Appl. Math. Stoch. Anal. 4, 317–335 (2004)

10. Pardoux, E., Peng, S.: Adapted solutions of backward stochastic differential equations. Syst. Control Lett. 14, 55–61 (1990)

11. Shi, Y., Gu, Y., Liu, K.: Comparison theorem of backward doubly stochastic differential equations and applications. Stoch. Anal. Appl. 23(1), 1–14 (2005)

12. N’zi, M., Owo, J.M.: Backward doubly stochastic differential equations with discontinuous coefficients. Stat. Probab. Lett. (2008)


13. Peng, S., Shi, Y.: A type of time-symmetric forward–backward stochastic differential equations. C. R. Acad. Sci. Paris, Ser. I 336(1), 773–778 (2003)

14. N’zi, M., Owo, J.M.: Backward doubly stochastic differential equations with non-Lipschitz coefficients. Random Oper. Stoch. Equ. 16, 307–324 (2008)

15. Peng, S.: Backward stochastic differential equations and application to optimal control. Appl. Math. Optim. 27, 125–144 (1993)

16. Xu, W.: Stochastic maximum principle for optimal control problem of forward–backward system. J. Aust. Math. Soc. Ser. B 37, 172–185 (1995)

17. Wu, Z.: Maximum principle for optimal control problem of fully coupled forward–backward stochastic control system. Syst. Sci. Math. Sci. 11(3), 249–259 (1998)

18. Shi, J.T., Wu, Z.: The maximum principle for fully coupled forward–backward stochastic control system. Acta Automatica Sinica 32(2), 161–169 (2006)

19. Ji, S., Zhou, X.Y.: A maximum principle for stochastic optimal control with terminal state constraints, and its applications. Commun. Inf. Syst. 6(4), 321–338 (2006)

20. Bahlali, S., Labed, B.: Necessary and sufficient conditions of optimality for optimal problem with initial and terminal costs. Random Oper. Stoch. Equ. 14(3), 291–301 (2006)

21. Bensoussan, A.: Lectures on stochastic control. In: Nonlinear Filtering and Stochastic Control, Proceedings, Cortona (1981)

22. Bismut, J.M.: An introductory approach to duality in optimal stochastic control. SIAM Rev. 20(1), 62–78 (1978)

23. Haussmann, U.G.: General necessary conditions for optimal control of stochastic systems. Math. Program. Stud. 6, 34–48 (1976)

24. Peng, S.: A general stochastic maximum principle for optimal control problems. SIAM J. Control Optim. 28(4), 966–979 (1990)
