Fleming WH, Stein J.


8/11/2019 Fleming WH, Stein J.

Optimal Investment-Consumption Models in International Finance

Wendell H. Fleming & Jerome L. Stein

    Division of Applied Mathematics

Brown University, Providence, RI 02912

    Abstract

We consider some multi-stage stochastic optimization models which arise in international finance. The evolution over time of national capital and debt is affected by controls, in the form of investment and consumption rates. The goal is to choose controls that maximize total discounted HARA utility of consumption, subject to constraints on investment and consumption rates as well as the debt-to-capital ratio. The methods are based on dynamic programming. Totally risk-sensitive limits are considered. In this analysis, ordinary expectations are replaced by totally risk-averse expectations, which are additive under max-plus addition rather than ordinary addition.

1 Introduction.

In this paper we consider some multi-stage stochastic optimization models which arise in international finance.

In these models an economic unit has productive capital and also liabilities in the form of debt. In an international finance context, the unit is a nation and debt refers to that owed to foreigners. The evolution over time of capital and debt is affected by controls, in the form of investment and consumption rates. The fluctuations of debt are also influenced by changing interest rates and productivity of capital, which are modelled as stochastic processes. The goal is to choose controls which maximize total discounted utility of consumption, subject to constraints which are imposed. Some of these constraints are evident. For example, capital and consumption rates cannot be negative.

Other constraints are a matter of choice in the model. A crucial modelling issue is how debt is to be constrained, to avoid unreasonable free lunch solutions to the problem which allow unlimited consumption. One such constraint is that the total wealth must be positive, where wealth is capital minus debt. This constraint will be used in Sections 2 and 4. Another possible constraint, considered in Section 3, is an upper bound on the ratio of debt to capital.

We begin in Section 2 with a simple discrete time model, in which the capital K_j in each time period j can be chosen arbitrarily. Thus no constraints on rates of investment or disinvestment are imposed. In this model, the controls at time j are capital K_j and



consumption C_j. The state at time j is the wealth X_j. The interest rates r_j and productivities of capital b_j are independent across different time steps j. The utility function is HARA. Dynamic programming reduces the problem to finding a positive constant A which satisfies

the nonlinear equation (2.9). The optimal controls keep the ratios k* of capital-to-wealth and c* of consumption-to-wealth constant. See (2.12). If interest rates are constant, this problem is mathematically equivalent to a classical discrete time Merton-type portfolio optimization problem. Capital K_j has the role of a risky asset with the no-short-selling constraint K_j ≥ 0.

In Sections 3.1 and 3.2 we consider a linear investment-consumption model in which capital K_j and debt L_j are state variables. The controls are investment I_j and consumption C_j, subject to linear constraints (3.4). In addition, condition (3.3) imposes an upper bound λ̄ on the debt-to-capital ratio. When λ̄ = 1, this is equivalent to assuming that wealth X_j = K_j − L_j cannot be negative. However, the bound must hold for all possible interest rates r_j and productivities b_j, including worst case scenarios. This requires a more stringent condition λ̄ < ℓ_max < 1, where the constant ℓ_max is defined in formula (3.9). This stochastic investment-consumption control problem is studied by dynamic programming. No explicit solution is available. However, for HARA utility the problem is equivalent to one with a single state variable ℓ_j = K_j⁻¹ L_j. See (3.14) and (3.15). In [9] we considered a similar two-period model and found an explicit solution in a special case. The empirical paper [13] examines data on default risk by countries, in the context of this two-period model.

In Section 4 we consider a continuous time version of the discrete time model in Section 2. In this continuous time model, the wealth X_t at time t fluctuates according to the linear stochastic differential equation (4.6). Random fluctuations in interest rates and productivity of capital are incorporated in the model through the Brownian motion terms in (4.6). Dynamic programming leads to explicit formulas for optimal capital-to-wealth

and consumption-to-wealth ratios k* and c*. See (4.11) and (4.12). In the international finance/debt interpretation of this model, it is difficult to measure a country's capital K_t. However, data for the gross domestic product (GDP) Y_t are widely available. In Remark 4.1 we describe a mathematically equivalent model in which K_t is replaced by Y_t. In [8] we explored this model in greater detail and pointed out some of its economic implications.

At the end of Section 4 we mention some work in progress on variants of this continuous time model. If bounds are imposed on the investment rate I_t, then no fixed upper bound K_t⁻¹ L_t ≤ λ̄ can be enforced with probability 1 when fluctuations in interest rates and productivity are modelled via Brownian motions, as in Section 4. Some possible modifications in the criterion J to be maximized are suggested. Moreover, large investment-to-capital ratios may be less efficient. A nonlinear model in which this is taken into account is mentioned.

The HARA parameter α is a measure of risk sensitivity. In Section 5 we consider totally risk-sensitive limits as α → −∞. For simplicity, only the models in Sections 2 and 4 with no bounds on investment rates are considered. If the interest rate and productivity probabilities

p(r, b) in Section 2 do not depend on α, then in the totally risk-averse limit controls are chosen to protect against worst case interest rates and productivities. Worst case scenarios may have positive, but quite small probabilities. In the theory of large deviations for stochastic processes [2][10] such worst case scenarios are rare events. A more interesting totally


X₁ = x (1 + r + (b − r)k − c),

where the max is subject to the constraint (k, c) ∈ Δ. From (2.2) and (2.6), W(λx) = λ^α W(x) for any λ > 0. Hence, for some A > 0

(2.8)  W(x) = α⁻¹ A x^α.

Let us first consider 0 < α < 1. Then (2.7) becomes

(2.9)  1 = max_{(k,c)} [ A⁻¹ c^α + β Φ(k, c, α) ],

Φ(k, c, α) = Σ_{r,b} p(r, b) (1 + r + (b − r)k − c)^α.

    Let

(2.10)  Λ(α) = max_{(k,c)} Φ(k, c, α)

    and assume that

(2.11)  β Λ(α) < 1.

Theorem 2.1 If 0 < α < 1 and βΛ(α) < 1, then equation (2.9) has a solution for a unique A > 0. The maximum in (2.9) is attained at a unique (k*, c*) which is either interior to Δ or on the vertical segment of the boundary ∂Δ where k* = 0, 0 < c* < 1 + r⁻.

Proof. Let Ψ(A) denote the right side of (2.9). Then Ψ is continuous. Moreover,

lim_{A→0+} Ψ(A) = ∞,   lim_{A→∞} Ψ(A) = βΛ(α).

Since βΛ(α) < 1, Ψ(A) = 1 for some A > 0. The maximum in (2.9) is attained at a unique (k*, c*) ∈ Δ, since A⁻¹ c^α + β Φ(k, c, α) is a

    strictly concave function of (k, c). Moreover

∂/∂c [ A⁻¹ c^α + β Φ(k, c, α) ] = +∞

when c = 0. This excludes the possibility that c* = 0. Since p(r, b⁻) > 0, this partial

derivative is −∞ on the segments of ∂Δ where 1 + r⁺ + (b⁻ − r⁺)k − c = 0 or 1 + r⁻ + (b⁻ − r⁻)k − c = 0. Hence either (k*, c*) is interior to Δ or k* = 0, 0 < c* < 1. Finally, since c* > 0 it is easy to show that Ψ(A) is strictly decreasing as A increases. Hence, the solution of Ψ(A) = 1 is unique.

Once a solution A to (2.9) is found, a standard verification argument in stochastic control shows that W(x) in (2.8) is indeed the value function. Moreover, the constant controls

(2.12)  k_j = k*,   c_j = c*,   j = 0, 1, 2, …


are optimal. See [7, p. 174] for the continuous time Merton problem.
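As a purely illustrative sketch of this construction: the right side of (2.9), as a function of A, is strictly decreasing, so once βΛ(α) < 1 holds a simple bisection locates the fixed point, with the max over (k, c) approximated by grid search. The two-point distributions for r_j and b_j and all parameter values below are hypothetical.

```python
# Illustrative sketch only: solve (2.9) for A by bisection, with the max over
# (k, c) in Delta approximated by a grid.  All data here are hypothetical.
alpha, beta = 0.5, 0.9                       # HARA parameter, discount factor

# independent (r, b) pairs with probabilities p(r, b)
scenarios = [(r, b, pr * pb)
             for r, pr in [(0.02, 0.5), (0.06, 0.5)]
             for b, pb in [(0.01, 0.3), (0.10, 0.7)]]

def phi(k, c):
    # Phi(k, c, alpha) = sum_{r,b} p(r,b) (1 + r + (b - r)k - c)^alpha
    return sum(p * (1 + r + (b - r) * k - c) ** alpha for r, b, p in scenarios)

def in_delta(k, c):
    # (k, c) in Delta: the wealth factor 1 + r + (b - r)k - c stays positive
    return all(1 + r + (b - r) * k - c > 0 for r, b, _ in scenarios)

grid = [(k / 20, c / 50) for k in range(0, 61) for c in range(1, 51)]
feasible = [(k, c) for k, c in grid if in_delta(k, c)]

def psi(A):
    # right side of (2.9); strictly decreasing in A
    return max(c ** alpha / A + beta * phi(k, c) for k, c in feasible)

lo, hi = 1e-6, 1e6                           # bracket: psi(lo) > 1 > psi(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if psi(mid) > 1 else (lo, mid)
A = 0.5 * (lo + hi)
k_star, c_star = max(feasible, key=lambda kc: kc[1] ** alpha / A + beta * phi(*kc))
print(A, k_star, c_star)
```

The resulting (k*, c*) are the constant optimal ratios of (2.12); if βΛ(α) ≥ 1 the bracket fails and no A > 0 exists, mirroring the hypothesis of Theorem 2.1.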

Remark 2.1. If the interest rate r_j = r is constant, this problem is equivalent to the discrete-time Merton optimal portfolio problem with no short selling. The control k_j corresponds to the fraction of wealth X_j in a risky asset, and r is the riskless interest rate. If r is constant, then k* > 0 provided r < Eb. For α > 0,

Λ(α) ≥ Φ(0, 0, α) ≥ (1 + r⁻)^α > 1.

The maximum in (2.10) occurs at some (k*, c*). Then

Λ(α) = Σ_{r,b} p(r, b) g(r, b),   g(r, b) = (1 + r + (b − r)k* − c*)^α.

The sum on the right side is an expectation Eg. Let ᾱ > α and θ = ᾱ/α. Since (Eg)^θ ≤ E(g^θ) and θ > 1,

Λ(α) ≤ [Λ(ᾱ)]^{1/θ}.


3 Linear investment-consumption model.

    In this section we consider the following discrete time stochastic control model. An economic

entity has productive capital and also liabilities in the form of debt. Let K_j denote the capital and L_j the debt at time period j = 0, 1, 2, …. In the context of international finance, the economic entity is a country. K_j is the nation's capital and L_j the debt owed to foreigners. See [8][9]. If L_j < 0, then −L_j represents net financial assets. Capital and debt change from one period to the next according to

(3.1)  K_{j+1} = K_j + I_j

(3.2)  L_{j+1} = (1 + r_j) L_j + I_j + C_j − b_j K_j,

where I_j is the investment and C_j the consumption during period j. The constraints imposed are

(3.3)  K_j > 0,   L_j ≤ λ̄ K_j,   λ̄ < 1

(3.4)  C_j ≥ 0,   −i̲ K_j ≤ I_j ≤ ī K_j,

where 0 ≤ i̲ < 1 and 0 < ī < ∞. Consider the following stochastic control problem. The state variables are K_j and L_j.

The control variables are I_j and C_j. In choosing I_j, C_j, the initial state (K₀, L₀) and the

pairs (r_ν, b_ν) for ν < j are known. However, the current interest rate r_j and productivity b_j are unknown when I_j and C_j are chosen.

Let U(C) be an increasing, concave utility function. Later in the section we take U(C) to be HARA, as in Section 2. The goal is to maximize total expected discounted utility of consumption:

(3.5)  J = E [ Σ_{j=0}^∞ β^j U(C_j) ].

The constraints imply a further restriction on the upper limit λ̄. Since λ̄ < 1, X_j > 0, where X_j = K_j − L_j ≥ (1 − λ̄) K_j. Let

(3.6)  ℓ_j = L_j / K_j,   i_j = I_j / K_j,   c_j = C_j / X_j = C_j / ((1 − ℓ_j) K_j).

From (3.1) and (3.2),

(3.7)  ℓ_{j+1} = [ ℓ_j + r_j ℓ_j + i_j + c_j (1 − ℓ_j) − b_j ] / (1 + i_j).


From (3.4), c_j ≥ 0 and −i̲ ≤ i_j ≤ ī. Since (3.3) must hold at time j + 1, ℓ_{j+1} ≤ λ̄ is required for all possible r_j, b_j. If L_j ≤ 0, there are certainly controls I_j, C_j such that ℓ_{j+1} ≤ λ̄, for instance I_j = C_j = 0. For debt L_j > 0, the worst case is r_j = r⁺, b_j = b⁻. In this case, it must be

possible to find I_j, C_j such that

(3.8)  λ̄ ≥ [ ℓ_j + r⁺ ℓ_j + i_j + c_j (1 − ℓ_j) − b⁻ ] / (1 + i_j).

We take i_j = −i̲, c_j = 0. Then (3.8) holds provided ℓ_j ≤ λ̄ and

r⁺ ℓ_j − i̲ (1 − ℓ_j) − b⁻ ≤ 0,

(3.9)  ℓ_j ≤ (b⁻ + i̲) / (r⁺ + i̲) = ℓ_max.

(This defines ℓ_max.) Note that ℓ_max < 1 since b⁻ < r⁺. For the upper bound λ̄ on the debt-to-capital ratio ℓ_j in (3.3), we require that 0 < λ̄ < ℓ_max.

The above discussion shows that (i_j, c_j) ∈ Γ(ℓ_j), where Γ(ℓ) is defined for ℓ ≤ λ̄ as the set of all (i, c) satisfying the following inequalities: −i̲ ≤ i ≤ ī, c ≥ 0 and

(3.10)  [ ℓ + rℓ + i + c (1 − ℓ) − b⁻ ] / (1 + i) ≤ λ̄.

In (3.10), r = r⁻ if ℓ ≤ 0 and r = r⁺ if ℓ > 0. Since λ̄ < ℓ_max, Γ(ℓ) is not empty. Let V(K, L) denote the value function. It is the supremum of J over all admissible

control sequences (I_j, C_j), j = 0, 1, …, considered as a function of the initial capital and debt K = K₀, L = L₀.

Proposition 3.1 Assume that V(K, L) is finite for all K > 0 and L ≤ λ̄ K. Then
(a) V(K, L) is a concave function;
(b) V(K, L) is a nondecreasing function of K;
(c) V(K, L) is a nonincreasing function of L.

Proof. Concavity of V follows by a standard argument from concavity of U(C) together with linearity of the state dynamics (3.1)-(3.2) and of the constraints (3.3)-(3.4).

To prove (b), let K̃ > K and let K̃_j, L̃_j be the solution to (3.1)-(3.2) with K̃₀ = K̃, L̃₀ = L and the same controls I_j, C_j. Then K_j < K̃_j and L̃_j ≤ L_j, which implies L̃_j ≤ λ̄ K̃_j when L_j ≤ λ̄ K_j. Moreover, the constraint

−i̲ K̃_j ≤ I_j ≤ ī K̃_j

holds when I_j satisfies (3.4). Thus, the class of admissible controls is enlarged when the initial capital K is increased. This implies (b).

Similarly, the class of admissible controls is enlarged when the initial debt L is decreased. This implies (c).

    From now on we consider HARA utility:

U(C) = α⁻¹ C^α,   α < 1  (α ≠ 0).


Proposition 3.2 Let α < ᾱ, where ᾱ is as in Corollary 2.1. Then

(3.11)  V(K, L) ≤ α⁻¹ A (K − L)^α,

    where A is as in Theorem 2.1.

Proof. We subtract (3.2) from (3.1) to obtain

(3.12)  X_{j+1} = X_j − r_j L_j + b_j K_j − C_j.

This is the same as (2.1) since L_j = −S_j. Then k_j, c_j defined by (2.3) satisfy the condition (k_j, c_j) ∈ Δ. Otherwise, X_{j+1} would become negative by (2.2). Therefore,

J ≤ W(K − L),

where W(x) is the value function in Section 2. Then (3.11) follows from (2.8).

Remark 3.1. Since k_j = (1 − ℓ_j)⁻¹, the bound (3.9) is equivalent to k_j ≤ k_max, where

k_max = (r⁺ + i̲) / (r⁺ − b⁻).

As i̲ → 1, k_max tends to the largest value of k in the quadrilateral Δ in Section 2. If one took i̲ = 1, ī = ∞, then there would be no investment control constraint except i_j ≥ −1 to ensure that K_{j+1} ≥ 0. Even when investment control constraints are omitted, the investment-consumption model in this section is not equivalent to the Merton-type model in Section

2. In the Merton-type model, the wealth X_j is freely divided into K_j and S_j = −L_j at the start of time period j. The ratio k_j = K_j/X_j can be taken as a control rather than I_j. However, in the present section, the interest and productivity coefficients r_j, b_j apply to the current states K_j, L_j. Investment I_j affects the proportion of wealth in capital only in the next time period j + 1. In continuous time, this distinction disappears. In Section 4, we will find that if investment control constraints are ignored, then a continuous time version of the investment-consumption problem in this section reduces to a continuous time Merton-type problem.

The value function satisfies the dynamic programming equation

(3.13)  V(K, L) = max_{(I,C)} { U(C) + β Σ_{r,b} p(r, b) V(K₁, L₁) },

where K₁, L₁ satisfy (3.1), (3.2) with j = 0, K₀ = K, L₀ = L, (I₀, C₀) = (I, C). In (3.13), (I, C) must satisfy the constraints (3.4) and also (K⁻¹I, K⁻¹C) ∈ Γ(K⁻¹L). For HARA utility U(C), V(K, L) is homogeneous of degree α and hence

(3.14)  V(K, L) = α⁻¹ K^α Ṽ(K⁻¹ L),


where Ṽ(ℓ) = α V(1, ℓ). For 0 < α < ᾱ, ℓ ≤ λ̄, (3.13) becomes

(3.15)  Ṽ(ℓ) = max_{(i,c)} { c^α + β Σ_{r,b} p(r, b) Ṽ(ℓ₁) },

where (i, c) ∈ Γ(ℓ) and ℓ₁ satisfies (3.7) with j = 0 and ℓ₀ = ℓ, (i₀, c₀) = (i, c). Thus, for HARA utility, the stochastic control problem reduces to one with a single state ℓ_j which is updated according to (3.7), and with controls (i_j, c_j) ∈ Γ(ℓ_j). [For α < 0, the max in (3.15) becomes a min.]
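The one-dimensional problem can be approximated by value iteration on a grid for the state. The following sketch is illustrative only: all parameters are hypothetical, the expectation is evaluated by linear interpolation (np.interp clamps at the grid edges), and admissibility is enforced through the worst-case condition (3.10).

```python
import numpy as np

alpha, beta = 0.5, 0.9
i_lo, i_hi = 0.05, 0.25                       # investment bounds as in (3.4)
scenarios = [(0.02, 0.04, 0.25), (0.02, 0.09, 0.25),
             (0.08, 0.04, 0.25), (0.08, 0.09, 0.25)]   # (r, b, p(r, b))
r_plus, r_minus, b_minus = 0.08, 0.02, 0.04
l_max = (b_minus + i_lo) / (r_plus + i_lo)    # (3.9)
lam = 0.9 * l_max                             # debt/capital bound, 0 < lam < l_max

L = np.linspace(-0.5, lam, 60)                # grid for the single state l
controls = [(i, c) for i in np.linspace(-i_lo, i_hi, 9)
            for c in np.linspace(0.01, 0.5, 15)]

def step(l, i, c, r, b):
    # (3.7): l_1 = (l + r l + i + c (1 - l) - b) / (1 + i)
    return (l + r * l + i + c * (1 - l) - b) / (1 + i)

V = np.zeros_like(L)
for _ in range(80):                           # iterate (3.15) toward its fixed point
    best = np.full_like(L, -np.inf)
    for i, c in controls:
        r_worst = np.where(L > 0, r_plus, r_minus)
        ok = step(L, i, c, r_worst, b_minus) <= lam        # (3.10)
        ev = sum(p * np.interp(step(L, i, c, r, b), L, V)
                 for r, b, p in scenarios)
        best = np.where(ok, np.maximum(best, c ** alpha + beta * ev), best)
    V = best
print(V[0], V[-1])
```

As expected for a value that is nonincreasing in the debt ratio, the computed Ṽ is largest at the most negative ℓ (net creditor) and smallest near the bound λ̄.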

Remark 3.2. We have assumed that the pairs (r_j, b_j) are independent across different time periods. One could assume instead that (r_j, b_j) evolve according to a finite state Markov chain with stationary transition probabilities P(r_j, b_j; r_{j+1}, b_{j+1}). If r_j and b_j are assumed known before I_j and C_j are chosen, then dynamic programming can again be used. In this case the value function V(K, L, r, b) depends on r = r₀ and b = b₀ as well as on the initial capital and debt K = K₀, L = L₀.

4 Continuous-time investment-consumption models.

Let us now consider continuous-time versions of the discrete-time models in Sections 2 and 3. Let K_t denote capital, L_t debt, I_t the investment rate and C_t the consumption rate at time t. Then capital and debt satisfy (at least formally) the differential equations

(4.1)  dK_t = I_t dt

(4.2)  dL_t = (r_t L_t + I_t + C_t − b_t K_t) dt,

where r_t and b_t are the interest rate and productivity of capital. Let us ignore possible constraints on I_t. (See comments later in the section if the investment rate I_t is constrained, or if (4.1) depends nonlinearly on the ratio of investment to capital.) The only constraints imposed are

(4.3)  K_t ≥ 0,   C_t ≥ 0,   X_t > 0,

where X_t = K_t − L_t. If there are no constraints on I_t, then wealth X_t can be shifted instantaneously and frictionlessly between capital K_t and net financial assets S_t = −L_t. We can take X_t as the state, and K_t, C_t as controls rather than I_t, C_t. By subtracting (4.2) from (4.1) we obtain the dynamics of X_t. Equivalently, we will take as controls k_t, c_t, where

(4.4)  K_t = k_t X_t,   C_t = c_t X_t.

    Then

(4.5)  dX_t = X_t [ r_t + (b_t − r_t) k_t − c_t ] dt.


However, equation (4.5) is merely formal. We wish the pairs (r_t, b_t) to be independent for different times t. To make this mathematically precise, we replace r_t dt and b_t dt in (4.2), (4.5) by dR_t and dB_t, where R_t, B_t are processes with stationary independent increments.

    In fact, we assume as in [8] that

R_t = r t + σ₁ w_{1t},
B_t = b t + σ₂ w_{2t},

where w_{1t}, w_{2t} are Brownian motions which may be correlated. Let ρ denote the correlation coefficient:

E(w_{1t} w_{2t}) = ρ t,   −1 < ρ < 1.


The condition for existence of the required homogeneous solution V(x) to (4.7) with A > 0 is

(4.10)  β > α λ_α.

The maximum in (4.9) occurs when k = k*, where

(4.11)  k* = χ k*_m + χ ζ (ζ − ρ),

provided that the right side of (4.11) is positive. Otherwise, k* = 0. In (4.11)

k*_m = (b − r) / (σ₂² (1 − α)),   ζ = σ₁ / σ₂,   χ⁻¹ = 1 − 2ρζ + ζ².

Note that k*_m is the Merton solution (σ₁ = 0). The maximum on the right side of (4.7) occurs when k = k* and c = c*, where

(4.12)  c* = (β − α λ_α) / (1 − α).

The constant controls k*, c* are the optimal ratios of capital to wealth and consumption to wealth.
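A numerical reading of these formulas for the uncorrelated case σ₁ = 0, in which the problem reduces to the classical Merton problem. All parameter values are hypothetical, and the explicit expression used for λ_α is an assumption: the standard Merton rate r + (b − r)²/(2σ₂²(1 − α)), which is consistent with the limit (5.16) in Section 5.

```python
# Hypothetical parameters; sigma1 = 0 gives the classical Merton special case.
alpha, beta = -1.0, 0.05                 # HARA parameter, discount rate
r, b = 0.03, 0.08
sigma1, sigma2, rho = 0.0, 0.2, 0.0

zeta = sigma1 / sigma2
chi = 1.0 / (1.0 - 2.0 * rho * zeta + zeta ** 2)
k_m = (b - r) / (sigma2 ** 2 * (1 - alpha))          # Merton ratio in (4.11)
k_star = max(0.0, chi * k_m + chi * zeta * (zeta - rho))   # (4.11)

# assumed reading of lambda_alpha (standard Merton rate)
lam_alpha = r + (b - r) ** 2 / (2 * sigma2 ** 2 * (1 - alpha))
assert beta > alpha * lam_alpha                      # existence condition (4.10)
c_star = (beta - alpha * lam_alpha) / (1 - alpha)    # (4.12)
print(k_star, c_star)
```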

Remark 4.1 In the international finance/debt interpretation of this model, it is difficult to measure a country's capital K_t. However, data for the gross domestic product (GDP) Y_t = b_t K_t are widely available. The following model is mathematically equivalent to the one above. Instead of K_t, L_t, take Y_t and L_t as the state variables. Assume that the GDP Y_t satisfies

(4.13)  dY_t = b I_t dt + σ₂ Y_t dw_{2t},   b > 0,

and that L_t again satisfies (4.2), or equivalently (4.20) below. Let

K_t = b⁻¹ Y_t,   X_t = K_t − L_t,   k_t = X_t⁻¹ K_t.

Then X_t again satisfies (4.6) and the solution to the stochastic control problem is the same as before.

Variants of the model. The simple continuous-time investment/consumption model just considered has special features which permit an explicit solution. We now mention some variants of this simple model which are currently being investigated.

A) Bounds on investment rates and debt-to-capital ratios. Suppose that finite upper and lower bounds

−i̲ K_t ≤ I_t ≤ ī K_t

are imposed, like those in (3.4). The pair (K_t, L_t) is the state of the continuous-time stochastic system being controlled. The controls are I_t, C_t, or equivalently i_t, c_t, where i_t = K_t⁻¹ I_t and c_t = X_t⁻¹ C_t. With the model chosen for R_t, B_t, equation (4.2) becomes

(4.20)  dL_t = (r L_t + I_t + C_t − b K_t) dt + σ₁ L_t dw_{1t} − σ₂ K_t dw_{2t}.


With positive probability, the Brownian motions w_{1t}, w_{2t} can undergo arbitrarily large excursions in any time interval. Hence, any fixed upper bound λ̄ for the debt-to-capital ratio K_t⁻¹ L_t will be exceeded with positive probability. The criterion J considered above must be modified to account for this fact. The choice of an appropriate criterion to be optimized is an interesting modelling issue. The following criteria J₁ and J₂ are two possibilities.

1. Given initial state (K, L) with L < λ̄ K, let τ denote the first time t such that L_t = λ̄ K_t. Consider the problem of maximizing

J₁ = α⁻¹ E [ ∫₀^τ e^{−βt} C_t^α dt − Θ e^{−βτ} L_τ^α ],

where Θ ≥ 0 is a penalty imposed when the upper debt-to-capital ratio bound λ̄ is reached.

    2. Consider the problem of maximizing

J₂ = α⁻¹ E [ ∫₀^∞ e^{−βt} ( C_t^α − K_t^α Π(ℓ_t) ) dt ],

where ℓ_t = K_t⁻¹ L_t and Π(ℓ) is a penalty function which increases as ℓ increases. The function Π(ℓ) must increase rapidly enough for large ℓ to ensure that J₂ cannot be made arbitrarily large by suitable choice of the controls I_t, C_t.

Yet another possibility would be to require that the constant mean interest rate r in (4.20) is replaced by r(ℓ_t), where r(ℓ) increases at a suitable rate as ℓ increases.

B) Nonlinear dependence of capital growth on investment rates. Large values of |i_t|, where i_t = K_t⁻¹ I_t, may be (in some sense) less efficient. Instead of (4.1), suppose that

dK_t = g(i_t) K_t dt,

where g(i) ≤ i, g(0) = 0, g″(i) < 0. The linear case g(i) = i is (4.1). As in variant A) above, one could consider J₁ or J₂ as the criterion to be optimized. The following heuristic argument suggests (but does not prove) that this form of nonlinearity of g(i) should imply bounds for optimal investment ratios i*_t. The investment control variable i appears in the dynamic programming equation for the value function V(K, L) through a term

K max_i [ g(i) V_K + i V_L ],

where V_K, V_L are the first-order partial derivatives. The max would occur at

i* = (g′)⁻¹( −V_K⁻¹ V_L ),

where the assumption g″(i) < 0 ensures that g′ has an inverse. A positive lower bound for V_K and an upper bound for |V_L| would imply a corresponding bound for the optimal investment ratio i*.
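A worked instance of this heuristic, with the hypothetical concave choice g(i) = log(1 + i) (so g(i) ≤ i, g(0) = 0, g″ < 0, and g′(i) = 1/(1 + i)) and hypothetical values V_K > 0, V_L < 0, consistent with Proposition 3.1:

```python
# Hypothetical worked example of the first-order condition g'(i) V_K + V_L = 0
# for g(i) = log(1 + i); then i* = (g')^{-1}(-V_L / V_K) = -V_K / V_L - 1.
import math

V_K, V_L = 2.0, -1.5             # hypothetical values with V_K > 0, V_L < 0
i_star = -V_K / V_L - 1.0        # closed form for this choice of g

# sanity check: i* maximizes g(i) V_K + i V_L on a fine grid
grid = [n / 1000.0 - 0.5 for n in range(2001)]
best = max(grid, key=lambda i: math.log(1.0 + i) * V_K + i * V_L)
assert abs(best - i_star) < 1e-2
print(i_star)
```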


    5 Totally risk-averse limits.

The HARA parameter α is a measure of sensitivity to risk. For a utility function U(C), risk sensitivity is defined as |U″(C)| / U′(C), which for HARA utility equals (1 − α) C⁻¹. In this section we consider the totally risk-averse limit α → −∞. For simplicity, we consider only problems without investment control constraints, similar to those in Sections 2 and 4.

Let us begin with a finite-time horizon version of the model in Section 2. The discount factor β plays no role in the totally risk-averse limit. Hence we take β = 1. For the control problem with N steps, we wish to choose controls (k_j, c_j) ∈ Δ for j = 0, 1, …, N − 1 to maximize

(5.1)  J_N(α) = α⁻¹ E [ Σ_{j=0}^{N−1} C_j^α + A₀(α) X_N^α ],   A₀(α) ≥ 0.

The last term in (5.1) is a measure of the utility of the final wealth X_N. If A₀(α) = 0 and the probabilities p(r, b) do not depend on α, then the main contribution to (5.1) comes from the smallest consumption C_j in the α → −∞ limit. Note that C_j = c_j X_j and X_j depends on controls (k_ν, c_ν) and also on (r_ν, b_ν) for all ν < j. See (2.2). In the totally risk-averse limit, the optimal controls (k_j, c_j) are chosen to maximize minimum consumption under the worst case scenarios r_ν = r⁺, b_ν = b⁻.

A more interesting totally risk-averse limit problem is obtained by introducing ideas from the theory of large deviations for stochastic processes [2]. Suppose that p(r, b) = p_α(r, b) depends on α in the following way:

(5.2)  lim_{α→−∞} (p_α(r, b))^{1/α} = q(r, b).

Since 0 < p_α(r, b) < 1 and these probabilities sum to 1, q(r, b) ≥ 1 and q(r, b) = 1 for at least one of the finitely many possible pairs (r, b). If p_α(r, b) = p(r, b) does not depend on α, then q(r, b) = 1 for all (r, b). As a simple example, suppose that the interest rate r_j = r is constant and the productivity of capital is either b_j = b⁺ (good) or b_j = b⁻ (bad). See Example 5.1 below. If q(b⁺) = 1 and q(b⁻) > 1, then the undesirable event b_j = b⁻ becomes rare as α → −∞. We also assume that (A₀(α))^{1/α} tends to a limit B₀ > 0 as α → −∞. Let

(5.3)  J_N = lim_{α→−∞} (α J_N(α))^{1/α}.

The finite horizon, totally risk-averse control problem is to choose a sequence of controls (k_j, c_j), j = 0, 1, …, N − 1, to maximize J_N. We recall from Section 2 that (k_j, c_j) ∈ Δ and that k_j and c_j are functions of r̃_j, b̃_j, where

r̃_j = (r₀, r₁, …, r_{j−1}),   b̃_j = (b₀, b₁, …, b_{j−1}).

It is convenient to express J_N as a totally risk-averse expectation, which is defined as follows. The totally risk-averse expectation of a positive function φ(r̃_N, b̃_N) is defined as


(5.4)

(a)  E^∞(φ) = min_{(r̃_N, b̃_N)} q_N(r̃_N, b̃_N) φ(r̃_N, b̃_N),

(b)  q_j(r̃_j, b̃_j) = Π_{ν=0}^{j−1} q(r_ν, b_ν) = lim_{α→−∞} [ Π_{ν=0}^{j−1} p_α(r_ν, b_ν) ]^{1/α},   j = 1, …, N.

From the definition and the assumption that the (r_j, b_j) are independent over different time periods, it is easily shown (see Appendix) that

(5.5)  E^∞(φ) = lim_{α→−∞} [ E(φ^α) ]^{1/α}.

Let a ⊕ b = min(a, b) denote the min-plus sum of two numbers a, b. Then E^∞ is additive with respect to min-plus addition (see the Appendix):

(5.6)  E^∞(φ₁ ⊕ ⋯ ⊕ φ_m) = E^∞(φ₁) ⊕ ⋯ ⊕ E^∞(φ_m) = lim_{α→−∞} [ E( Σ_{ℓ=1}^m φ_ℓ^α ) ]^{1/α}.

If we take m = N + 1, φ_{j+1} = C_j = c_j X_j for 0 ≤ j ≤ N − 1, and φ_{N+1} = B₀ X_N, then

(5.7)  J_N = E^∞ [ ( ⊕_{j=0}^{N−1} C_j ) ⊕ B₀ X_N ].

In other words, J_N is the totally risk-averse expectation of the smaller of the minimum consumption over N time periods and B₀ X_N, where X_N is the final wealth.

This totally risk-averse limit control problem can be solved by straightforward modifications of standard dynamic programming methods. Let Z_N(x) denote the maximum of J_N, considered as a function of the initial wealth X₀ = x. It satisfies the finite time dynamic programming equation

(5.8)  Z_N(x) = max_{(k,c)} { (cx) ⊕ E^∞[ Z_{N−1}(X₁) ] },

where, from (2.2) with r = r₀, b = b₀,

X₁ = x (1 + r + (b − r)k − c).

The totally risk-averse value function is linear: Z_N(x) = B_N x. From (5.8) the constants B_N satisfy the recursive formula

(5.9)  B_N = max_{(k,c)} { c ⊕ B_{N−1} min_{(r,b)} q(r, b)(1 + r + (b − r)k − c) }.

Let (k*_N, c*_N) give the maximum in (5.9) among all (k, c) ∈ Δ. The optimal controls for the totally risk-averse control problem are

(5.10)  k_j = k*_{N−j},   c_j = c*_{N−j},   j = 0, 1, …, N − 1.


Steady-state solution. Of particular interest is the case when B₀ is chosen such that B_N = B₀ for all N = 1, 2, …. In this case, the optimal controls k*, c* do not depend on j. The following lemma states that a steady state solution exists and is unique.

Lemma 5.1 There exists a unique B > 0 such that

(5.11)  1 = max_{(k,c)} { B⁻¹ c ⊕ min_{(r,b)} q(r, b)(1 + r + (b − r)k − c) }.

Proof. We proceed as in the proof of Theorem 2.1. Let Φ(B) denote the right side of (5.11). Then Φ(B) is a continuous, strictly decreasing function of B, and Φ(B) → 0 as B → ∞. For each fixed c̃,

max_k min_{(r,b)} q(r, b)(1 + r + (b − r)k − c̃) ≥ 1 + b⁻ − c̃,

where we have taken k = 1 on the right side and have used the fact that q(r, b) ≥ 1. Choose c̃ such that 0 < c̃ < b⁻. If B < c̃, then

Φ(B) ≥ min{ B⁻¹ c̃, 1 + b⁻ − c̃ } > 1.

Hence, there exists a solution B to Φ(B) = 1 with B ≥ b⁻. Since Φ is strictly decreasing, B is unique.

From now on, we take B₀ = B, and hence from (5.9) B_N = B for all N. The steady state totally risk-sensitive value function is Z(x) = Bx.

Remark 5.1. Z(x) can also be obtained from the discounted infinite horizon control problem in Section 2, as follows. In (2.8) assume that A = A(α) = B^α, and write W(x) = W(x, α). Then

1 = max_{(k,c)} [ (B⁻¹ c)^α + β Φ(k, c, α) ]^{1/α},

and the right side tends to the right side of (5.11) as α → −∞. Thus

Z(x) = lim_{α→−∞} (α W(x, α))^{1/α}.

Lemma 5.2 B = c* and

(5.12)  min_{(r,b)} q(r, b)(1 + r + (b − r)k* − c*) = 1.

Proof. We first note that either (k*, c*) is interior to Δ or k* = 0, 0 < c* < 1 + r⁻. Otherwise, the right side of (5.11) would be 0. We must have

B⁻¹ c* = min_{(r,b)} q(r, b)(1 + r + (b − r)k* − c*).

Otherwise, the max in (5.11) could be increased by keeping k = k* fixed and slightly increasing or decreasing c (thus c = c* ± ε for small ε > 0). By (5.11), B = c*.

We summarize these results in the following theorem.


Theorem 5.1 Let (k*, c*) give the maximum in (5.11). Then:

(a) k*, c* are the optimal capital-to-wealth and consumption-to-wealth ratios for the steady state, totally risk-averse optimal control problem, and Z(x) = c* x is the steady state value function.

(b) Let C*_j = c* X*_j, where X*_j satisfies (2.2) with k_j = k*, c_j = c* and X*₀ = x. Then E^∞(C*_j) = c* x does not depend on j.

Proof (outline). For part (a), a standard verification argument shows that the maximum of J_N over all admissible control sequences k_j, c_j is Z(x) = Bx, and that the maximum is attained by the controls k*, c*.

We prove part (b) by induction on j. By using the product form (5.4b) of q_j(r̃_j, b̃_j), we obtain

E^∞(X*_{j+1}) = min_{(r_j, b_j)} q(r_j, b_j)(1 + r_j + (b_j − r_j)k* − c*) E^∞(X*_j).

By (5.12), E^∞(X*_{j+1}) = E^∞(X*_j). By induction, E^∞(X*_j) = x for j = 1, 2, …. Since C*_j = c* X*_j, E^∞(C*_j) = c* E^∞(X*_j) = c* x. Since

⊕_{j=0}^{N−1} C*_j = min_{0≤j≤N−1} C*_j,

we have by (5.6):

Corollary 5.1 For every N = 1, 2, …

(5.13)  E^∞( min_{0≤j≤N−1} C*_j ) = c* x.

Example 5.1 Let r⁺ = r⁻ = r and b_j = b⁺ or b⁻, where b⁻ < r < b⁺. Let q(b⁺) = 1, q(b⁻) > 1. In large deviations terminology, the undesirable productivity b_j = b⁻ is rare.


    Note thatk! does not depend on the interest rate r in this example.

Continuous time model. We turn now to the continuous time model in Section 4. For simplicity, we take r = constant (σ₁ = 0 in equation (4.6)). The problem is then mathematically equivalent to the Merton portfolio optimization problem with the no-short-selling constraint K_t ≥ 0. We let the volatility coefficient σ₂ = σ₂(α) depend on α in such a way that

(5.15)  lim_{α→−∞} [σ₂(α)]² (1 − α) = σ∞² > 0.

From (4.9), (4.10), (4.12), the optimal controls k*(α), c*(α) for the infinite horizon problem in Section 4 tend as α → −∞ to k*, c*, where

(5.16)

(a)  k* = (b − r) / σ∞²,   c* = λ,

(b)  λ = (b − r)² / (2σ∞²) + r.

We write V(x) = V(x, α) for the value function in Section 4. Then

α V(x, α) = A(α) x^α,

where, from the last term in the dynamic programming equation (4.7),

A(α) = [c*(α)]^{α−1},   lim_{α→−∞} [A(α)]^{1/α} = λ.

    Thus

(5.17)  lim_{α→−∞} [α V(x, α)]^{1/α} = Z(x) = λ x.

We will interpret Z(x) as the steady state value function for a totally risk-averse limit control problem, and k*, c* as optimal controls for this problem. The optimal controls turn out to be constants. Hence in (4.6) let us consider only controls k_t, c_t which are not random. Since σ₁ = 0, (4.6) becomes

(5.18)  dX_t = X_t [ (r + (b − r) k_t − c_t) dt + k_t σ₂(α) dw_{2t} ].

Since σ₂(α) → 0 as α → −∞, this is a small random perturbation of the corresponding linear differential equation with σ₂ = 0. The asymptotic behavior of E(X_t^α) as α → −∞ is obtained from the Freidlin-Wentzell theory of small random perturbations [10]. Let k_t, c_t be continuous functions of t, and let v_t denote an arbitrary deterministic disturbance function. Let x_t be the solution to the differential equation

(5.19)  dx_t/dt = x_t [ r + (b − r) k_t − c_t + σ∞ k_t v_t ]


with the same initial data x₀ = x as for (5.18). Then

(5.20)  log E^∞(x_t) = log lim_{α→−∞} [E(X_t^α)]^{1/α} = E⁻[log x_t],

where for g(x) = log(ax) with any constant a > 0

(5.21)  E⁻[g(x_t)] = inf_{v·} { g(x_t) + ½ ∫₀ᵗ v_s² ds }.

If k_t = k, c_t = c are constants, then (5.20) can easily be derived directly. See the Appendix. E⁻ is sometimes called a min-plus expectation [5].

As in (5.1), let

    (5.22) J(T, !) =!"1E"Z T

    0

    e"$tC!tdt + A(!)e$TX!t # .

Note that $C_t = c_t X_t$, where $c_t$ is not random. Let

(5.23)    J(T) = \lim_{\gamma \to -\infty} [\gamma J(T, \gamma)]^{1/\gamma}.

By (5.20)

(5.24)    \log \lim_{\gamma \to -\infty} [E(C_t^\gamma)]^{1/\gamma} = \mathcal{E}^-[\log (c_t x_t)] = \log c_t + \mathcal{E}^-[\log x_t].

From this it can be seen that

(5.25)    J(T) = \left( \min_{0 \le t \le T} E^-(C_t) \right) \wedge \Lambda E^-(x_T),

where $C_t = c_t x_t$ and for any constant $a > 0$

(5.26)    E^-(a x_t) = a E^-(x_t) = \exp \left( \mathcal{E}^-[\log (a x_t)] \right).

Formula (5.25) is a continuous time analogue of (5.7). The steady state, totally risk-averse continuous time optimal control problem is to find control functions $k_\cdot$, $c_\cdot$ which maximize $J(T)$ on any finite time interval $0 \le t \le T$.

Theorem 5.2. Let $k^*$, $c^*$, $\Lambda$ be as in (5.16). Then

(a) $k^*$, $c^*$ are optimal controls and $Z(x) = \Lambda x$ is the steady state value function.

(b) Let $C_t^* = c^* x_t^*$, where $x_t^*$ satisfies (5.19) with $k_t = k^*$, $c_t = c^*$ and $x_0^* = x$. Then $E^-(C_t^*) = c^* x = Z(x)$ does not depend on $t$.


Proof. The dynamic programming principle in stochastic control implies that $J(T, \gamma) \le V(x, \gamma)$ for all $k_\cdot$, $c_\cdot$. Therefore $J(T) \le Z(x)$ for all such controls $k_\cdot$, $c_\cdot$. Let $k_t = k^*$, $c_t = c^* = \Lambda$ for $0 \le t \le T$. Then a direct calculation shows that $E^-(x_t^*) = x$ for all $t$ (see Appendix). Since $C_t^* = c^* x_t^*$,

J(T) = E^-(C_t^*) = \Lambda E^-(x_T^*) = \Lambda x \quad \text{for } 0 \le t \le T.

Remark 5.2. The optimal capital-to-wealth ratio $k^*$ in (5.16) depends on the interest rate $r$, as well as on $b$ and $\sigma_\infty$. In contrast, the corresponding $k^*$ for the discrete time problem in Example 5.1 does not depend on $r$; see (5.14a). These results are not comparable, since the large deviations scaling of parameters is different for the discrete time and continuous time models.

Remark 5.3. If consumption were omitted from the model ($c_t = 0$ in (5.19)), then $\mathcal{E}^-[\log x_t]$ is maximized by choosing $k_s = k^*$ for $0 \le s \le t$. With this choice of investment control,

E^-(x_t) = x \, e^{\Lambda t}.

Thus $\Lambda$ is the optimal totally risk-averse growth rate in the absence of consumption. When the consumption-to-wealth ratio is a constant $c > 0$, this growth rate becomes $\Lambda - c$. For the steady state, totally risk-averse model with consumption, Theorem 5.2 implies that the optimal choice $c^* = \Lambda$ makes the totally risk-averse growth rate equal to 0.
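To make this remark concrete, a small sketch (with the same illustrative parameter values as before, which are assumptions rather than values from the paper) tabulates the totally risk-averse growth rate $\Lambda - c$ for a few consumption ratios:

```python
# Totally risk-averse growth rate Lambda - c of E^-(x_t) when k = k*.
# Parameter values b, r, sigma_inf are illustrative assumptions.
b, r, sigma_inf = 0.07, 0.03, 0.2
Lam = (b - r) ** 2 / (2 * sigma_inf**2) + r  # Lambda from (5.16)

cs = [0.0, Lam / 2, Lam]
rates = [Lam - c for c in cs]
for c, g in zip(cs, rates):
    print(f"c = {c:.3f}  growth rate = {g:.3f}")
# at c* = Lambda the growth rate is exactly 0
```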


Appendix

Consider the totally risk-averse expectation $E^-(\psi)$ defined by (5.4). Formula (5.5) follows readily from (5.2) and

E(\psi^\gamma) = \sum_{(\tilde r_N, \tilde b_N)} \left[ \prod_{\ell = 0}^{N-1} [p_\ell(r_\ell, b_\ell)]^{1/\gamma} \, \psi_N(\tilde r_N, \tilde b_N) \right]^\gamma.

Similarly, given positive functions $\psi_\ell(\tilde r_N, \tilde b_N)$, $\ell = 1, \dots, m$,

(A.1)    \lim_{\gamma \to -\infty} \left[ E \left( \sum_{\ell = 1}^m \psi_\ell^\gamma \right) \right]^{1/\gamma} = \min_{(\tilde r_N, \tilde b_N)} q_N(\tilde r_N, \tilde b_N) \min_\ell \psi_\ell(\tilde r_N, \tilde b_N) = \min_\ell \min_{(\tilde r_N, \tilde b_N)} q_N(\tilde r_N, \tilde b_N) \, \psi_\ell(\tilde r_N, \tilde b_N).

This gives (5.6). We also note that if $\psi = \psi(\tilde r_j, \tilde b_j)$ does not depend on $(r_\ell, b_\ell)$ for $\ell \ge j$, then

(A.2)    E^-(\psi) = \min_{(\tilde r_j, \tilde b_j)} q_j(\tilde r_j, \tilde b_j) \, \psi(\tilde r_j, \tilde b_j).

This is seen by choosing $(r_\ell, b_\ell)$ such that $q(r_\ell, b_\ell) = 1$ when $\ell \ge j$. From (A.2) it is also immediate that

E^-(a \psi) = a E^-(\psi) \quad \text{for any constant } a > 0,
E^-(\psi) \le E^-(\phi) \quad \text{if } \psi \le \phi.
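The limit mechanism behind (A.1)–(A.2) is the elementary fact that a power mean of order $\gamma$ tends to the minimum as $\gamma \to -\infty$. The sketch below illustrates this for a finite distribution with fixed probabilities (so that, in the notation above, $q \equiv 1$ and $E^-(\psi)$ is simply the worst outcome); the probabilities and values are invented for illustration.

```python
# Power mean of order gamma, [E(psi^gamma)]^(1/gamma), for a finite distribution.
# With the probabilities held fixed it tends to min(psi) as gamma -> -infinity,
# i.e. the totally risk-averse expectation with q = 1. Values are invented.
def power_mean(probs, vals, gamma):
    return sum(p * v**gamma for p, v in zip(probs, vals)) ** (1.0 / gamma)

probs = [0.5, 0.3, 0.2]
vals = [2.0, 1.5, 3.0]
for gamma in (-1, -10, -100, -400):
    print(gamma, power_mean(probs, vals, gamma))  # decreases toward min(vals) = 1.5
```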

Derivation of (5.20). Let us next derive formula (5.20) in the special case when $k_t = k$ and $c_t = c$ are constants. Let $\mu = r + (b - r) k - c$. Then (5.18) becomes

dX_t = X_t \left( \mu \, dt + k \sigma_2(\gamma) \, dw_{2t} \right).

A direct calculation using the Ito differential rule gives

E(X_t^\gamma) = x^\gamma \exp \left\{ \left[ \mu - \frac{(1 - \gamma) k^2 \sigma_2(\gamma)^2}{2} \right] \gamma t \right\}.
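This moment formula can be sanity-checked by Monte Carlo at a moderate value of $\gamma$; in the sketch below the parameter values and the choice $\gamma = -2$ are illustrative assumptions, and since $X_t$ is lognormal we sample $\log X_t$ directly.

```python
import math
import random

# Monte Carlo check of E(X_t^gamma) = x^gamma exp{[mu - (1-gamma)k^2 sig2^2/2] gamma t}
# at gamma = -2; all parameter values are illustrative assumptions.
random.seed(0)
x, mu, k, sig2, t, gamma = 1.0, 0.05, 1.0, 0.1, 1.0, -2.0
m = math.log(x) + (mu - 0.5 * (k * sig2) ** 2) * t  # mean of log X_t
s = k * sig2 * math.sqrt(t)                          # std dev of log X_t
n = 200_000
mc = sum(math.exp(gamma * (m + s * random.gauss(0.0, 1.0))) for _ in range(n)) / n
exact = x**gamma * math.exp((mu - 0.5 * (1 - gamma) * (k * sig2) ** 2) * gamma * t)
print(mc, exact)  # the two agree to within Monte Carlo error
```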

By (5.15),

(A.3)    \log \lim_{\gamma \to -\infty} [E(X_t^\gamma)]^{1/\gamma} = \log x + \left( \mu - \frac{k^2 \sigma_\infty^2}{2} \right) t.

Let $y_t = \log x_t$. From (5.19),

\frac{dy_t}{dt} = \mu + k \sigma_\infty v_t, \qquad y_0 = y = \log x.


Consider the following control problem: choose $v_s$ for $0 \le s \le t$ to minimize

\Phi(v_\cdot) = y_t + \frac{1}{2} \int_0^t v_s^2 \, ds.

The minimum is the same as among constant controls ($v_s = v$). By elementary calculus,

\mathcal{E}^-(y_t) = \inf_{v_\cdot} \left\{ y_t + \frac{1}{2} \int_0^t v_s^2 \, ds \right\} = \inf_v \left\{ y + \left( \mu + k \sigma_\infty v + \frac{1}{2} v^2 \right) t \right\} = y + \left( \mu - \frac{k^2 \sigma_\infty^2}{2} \right) t.

Since this is the same as (A.3), we get (5.20). If $g(x) = \log(ax)$ as in (5.21), the constant $\log a$ is added to both sides.
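The constant-control infimum above can be checked numerically. The grid and parameter values in the sketch below are illustrative assumptions; the minimizer is $v = -k\sigma_\infty$.

```python
# Grid search over constant disturbances v for the cost y + (mu + k*sig*v + v^2/2)*t,
# compared with the closed form y + (mu - (k*sig)^2/2)*t attained at v = -k*sig.
# Parameter values are illustrative assumptions.
mu, k, sig, y, t = 0.05, 1.0, 0.2, 0.0, 1.0
grid = [-1.0 + i * 0.001 for i in range(2001)]  # v in [-1, 1], step 0.001
numeric = min(y + (mu + k * sig * v + 0.5 * v * v) * t for v in grid)
closed = y + (mu - 0.5 * (k * sig) ** 2) * t
print(numeric, closed)  # both approximately 0.03
```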

Completion of proof of Theorem 5.2. Let us take $k = k^*$, $c = c^* = \Lambda$ as in (5.16). In the above calculation $\mu = \frac{1}{2} (k^*)^2 \sigma_\infty^2$ in this case. Therefore $\mathcal{E}^-(\log x_t^*) = \log x$, which by (5.26) is equivalent to $E^-(x_t^*) = x$, as required.

Remark A.1. When $k = k^*$, $c = c^*$ and $y_t = \log x_t$,

y_t = y + \int_0^t \left( k^* \sigma_\infty v_s + \frac{(k^* \sigma_\infty)^2}{2} \right) ds.

In the terminology of [5], $-y_t$ is a max-plus martingale.

Remark A.2. The computation above also provides a direct, elementary proof of the optimality of $k^*$, $c^*$ if only constant controls $k_t = k$, $c_t = c$ are admitted. For such controls

(A.4)    E^-(C_t) = c x \exp \left\{ \left( r + (b - r) k - c - \frac{1}{2} k^2 \sigma_\infty^2 \right) t \right\}.

For fixed $c$, the right side is maximized by taking $k = k^*$. When $k = k^*$, $C_t = c x_t$ and

E^-(C_t) = c x \exp[(\Lambda - c) t] = c \, E^-(x_t),

E^- \left( \min_{0 \le t \le T} C_t \right) = \min_{0 \le t \le T} E^-(C_t) = \begin{cases} c x & \text{if } 0 < c \le \Lambda \\ c x \exp[(\Lambda - c) T] & \text{if } c \ge \Lambda. \end{cases}

If $0 < c \le \Lambda$, the first term is no more than $\Lambda E^-(x_T)$. If $\Lambda \le c$, then the second term is no less than $\Lambda E^-(x_T)$. Thus

J(T) = \begin{cases} c x & \text{if } 0 < c \le \Lambda \\ \Lambda E^-(x_T) & \text{if } c \ge \Lambda. \end{cases}

The maximum of $J(T)$ over all $c > 0$ occurs at $c^* = \Lambda$, and equals $Z(x) = \Lambda x = c^* x$.
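A quick numeric check of this remark (the values of $\Lambda$, $x$, $T$ below are illustrative assumptions): evaluating the piecewise formula for $J(T)$ over a grid of consumption ratios $c$ confirms that the maximum sits at $c^* = \Lambda$ with value $Z(x) = \Lambda x$.

```python
import math

# Piecewise formula for J(T) from Remark A.2: J = c*x for c <= Lambda,
# J = Lambda * x * exp((Lambda - c) * T) for c >= Lambda.
# Lambda, x, T below are illustrative assumptions.
Lam, x, T = 0.05, 1.0, 10.0

def J(c):
    return c * x if c <= Lam else Lam * x * math.exp((Lam - c) * T)

grid = [0.001 * i for i in range(1, 201)]  # c in (0, 0.2]
best_c = max(grid, key=J)
print(best_c, J(best_c))  # maximum near c* = Lambda = 0.05, value Z(x) = Lambda*x
```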


References

1. D.P. Bertsekas, Dynamic Programming and Optimal Control, Vols. I and II, Athena Scientific Press, Belmont, MA, 1995.

2. P. Dupuis and R.S. Ellis, A Weak Convergence Approach to the Theory of Large Deviations, Wiley, New York, 1997.

3. W.H. Fleming, Controlled Markov processes and mathematical finance, in Nonlinear Analysis, Differential Equations and Control (F.H. Clarke and R.J. Stern, eds.), 407–446, Kluwer Academic Publishers, 1999.

4. W.H. Fleming, Stochastic control models of optimal investment and consumption, Aportaciones Matemáticas, Sociedad Matemática Mexicana, No. 16, 2001, 159–204.

5. W.H. Fleming, Max-plus stochastic processes and control, preprint.

6. W.H. Fleming and R.W. Rishel, Deterministic and Stochastic Optimal Control, Springer-Verlag, New York, 1975.

7. W.H. Fleming and H.M. Soner, Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, New York, 1992.

8. W.H. Fleming and J.L. Stein, A stochastic control approach to international finance and debt, CESifo Working Paper #204, Munich, 1999. Available at the CESifo site, http://www.CESifo.de

9. W.H. Fleming and J.L. Stein, Stochastic inter-temporal optimization in discrete time, in Economic Theory, Dynamics and Markets (T. Negishi, R.V. Ramachandran and K. Mino, eds.), 325–339, Kluwer Academic Publishers, 2001.

10. M.I. Freidlin and A.D. Wentzell, Random Perturbations of Dynamical Systems, Springer-Verlag, New York, 1984.

11. M. Obstfeld and K. Rogoff, Foundations of International Macroeconomics, MIT Press, 1996.

12. A.A. Puhalskii, Large Deviations and Idempotent Probability, Chapman and Hall/CRC Press, 2001.

13. J.L. Stein and G. Paladino, Country default risk: an empirical assessment, Monash University Conference on Growth, Performance and Concentration in International Financial Markets, Prato, Italy, November 2000; Australian Economic Papers 40 (2001), 417–436.

14. N.L. Stokey and R.E. Lucas, Jr., Recursive Methods in Economic Dynamics, Harvard University Press, 1989.

15. S.J. Turnovsky, International Macroeconomic Dynamics, MIT Press, 1997.
