Optimal Reinsurance under VaR and CTE Risk Measures
Jun Cai†, Ken Seng Tan†, Chengguo Weng†, Yi Zhang‡∗
†Department of Statistics and Actuarial Science,
University of Waterloo, Waterloo, N2L 3G1, Canada
‡Department of Mathematics,
Zhejiang University, Hangzhou, 310027, China
Revised version: July 18, 2007
Abstract
Let X denote the loss initially assumed by an insurer. In a reinsurance design, the insurer cedes part of its loss, say f(X), to a reinsurer, and thus the insurer retains a loss If(X) = X − f(X). In return, the insurer is obligated to compensate the reinsurer for undertaking the risk by paying the reinsurance premium. Hence, the sum of the retained loss and the reinsurance premium can be interpreted as the total cost of managing the risk in the presence of reinsurance. Based on a technique used in Muller and Stoyan (2002) and motivated by Cai and Tan (2007) on using the value-at-risk (VaR) and the conditional tail expectation (CTE) of an insurer's total cost as the criteria for determining the optimal reinsurance, this paper derives the optimal ceded loss functions in a class of increasing convex ceded loss functions. The results indicate that, depending on the risk measure's level of confidence and the safety loading for the reinsurance premium, the optimal reinsurance can be in the form of stop-loss, quota-share, or change-loss.
Keywords: Value-at-risk (VaR), conditional tail expectation (CTE), ceded loss, retained
loss, increasing convex function, expectation premium principle, stop-loss reinsurance, quota-
share reinsurance, change-loss reinsurance.
JEL classification: C02, C61. SIBC classification: IM52, IE10, IB90
∗Corresponding author. Postal address: Department of Mathematics, Zhejiang University, Hangzhou, 310027, China. Email address: [email protected]
1 Introduction
Reinsurance is a commonly employed risk management strategy to ensure the insurer’s
earnings remain relatively stable or to protect the insurer against potentially large losses.
Let X be the (aggregate) loss initially assumed by an insurer. Suppose that X is a nonneg-
ative random variable with cumulative distribution function FX(x) = Pr{X ≤ x}, survival
function SX(x) = 1− FX(x) = Pr{X > x}, and mean 0 < E[X] < ∞. Under a reinsurance
arrangement, the insurer cedes part of its loss, say f(X) with 0 ≤ f(X) ≤ X, to a reinsurer,
and thus the insurer retains a loss If (X) = X − f(X), where the function f(x), satisfying
0 ≤ f(x) ≤ x, is known as a ceded loss function and the function If (x) = x − f(x) is
called a retained loss function. By transferring part of the risk exposure to a reinsurer, the
insurer also incurs an additional cost in the form of reinsurance premium that is payable to
the reinsurer. Let δf (X) denote the reinsurance premium which corresponds to a ceded loss
function f(x) and let Tf (X) represent the resulting total cost or the total risk exposure of
the insurer in the presence of reinsurance. Then we obtain the following relationship:
Tf (X) = If (X) + δf (X). (1.1)
This demonstrates that in the presence of reinsurance, the insurer is now concerned with the
risk exposure Tf (X), instead of the ground up loss X. This also suggests that an appropriate
choice of the ceded loss function can provide an effective way of reducing the risk exposure
of an insurer.
In practice, there is a variety of reinsurance designs from which an insurer can choose. These include quota-share reinsurance with f(x) = ax and I_f(x) = (1 − a)x, 0 < a ≤ 1; stop-loss reinsurance with f(x) = (x − d)_+ = max{0, x − d} and I_f(x) = min{x, d}, d ≥ 0; and change-loss reinsurance with f(x) = a(x − d)_+ and I_f(x) = (1 − a)x + a min{x, d}. Various
optimization criteria have been proposed for the determination of the optimal reinsurance.
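As an illustration, the three standard designs listed above can be sketched numerically. This is a minimal sketch; the parameter values a = 0.4 and d = 8.0 are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch of the three standard ceded-loss functions; the
# parameter choices a = 0.4 and d = 8.0 are assumptions for this example.
def stop_loss(x, d):      # f(x) = (x - d)_+
    return max(x - d, 0.0)

def quota_share(x, a):    # f(x) = a x
    return a * x

def change_loss(x, a, d): # f(x) = a (x - d)_+
    return a * max(x - d, 0.0)

for x in [0.0, 5.0, 12.0]:
    for f in (stop_loss(x, 8.0), quota_share(x, 0.4), change_loss(x, 0.4, 8.0)):
        retained = x - f               # I_f(x) = x - f(x)
        assert 0.0 <= f <= x and retained >= 0.0
print("all three designs satisfy 0 <= f(x) <= x")
```

In each case the retained loss I_f(x) = x − f(x) is nonnegative, consistent with the requirement 0 ≤ f(x) ≤ x.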
For example, it is well known that stop-loss reinsurance is optimal in the sense of minimizing the variance of the retained loss among all ceded loss functions with the same expectation; see, for example, Bowers, et al. (1997), Kaas, et al. (2001) and Gerber
(1979). By using the criterion of minimizing some specific convex risk measures, Gajek and
Zagrodny (2004) demonstrated that change-loss design is the optimal reinsurance. In a series
of published papers, Kaluszka (2001, 2004a, 2004b, 2005) also makes important contributions
to the optimal reinsurance models.
More recently, using risk measures such as the value-at-risk (VaR) and the conditional
tail expectation (CTE), Cai and Tan (2007) derived explicitly the optimal retention level
of a stop-loss reinsurance under the expectation premium principle. Their work, in part,
was sparked by an unprecedented surge in the usage of these measures as risk management
tools among banks, financial institutions, and insurance companies in recent years. See, for
example, Artzner, et al. (1999), Basak and Shapiro (2001), McNeil, et al. (2005), Cai and Li
(2005), Inui and Kijima (2005) and Yamai and Yoshiba (2005).
The VaR of a nonnegative random variable X at a confidence level 1− α, 0 < α < 1, is
defined as
VaRX(α) = inf{x : Pr{X > x} ≤ α}. (1.2)
Note that if α ≥ S_X(0), then VaR_X(α) = 0, and if X has a continuous strictly increasing distribution function on (0,∞), then for 0 < α < S_X(0), VaR_X(α) = S_X^{-1}(α), where S_X^{-1} is the inverse function of S_X. The conditional tail expectation (CTE) of a random variable X
at its VaR is defined as
CTEX(α) = E[X|X ≥ VaRX(α)]. (1.3)
See Proposition A.1 in Appendix A for important properties associated with VaR and CTE
that are relevant to our subsequent discussion.
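For a concrete feel for definitions (1.2) and (1.3), the following sketch evaluates both measures for an exponential loss, for which VaR_X(α) = −θ ln α and, by memorylessness, CTE_X(α) = VaR_X(α) + θ. The exponential model and parameter values are illustrative assumptions made only for this example:

```python
import math, random

# VaR and CTE per (1.2)-(1.3) under the illustrative assumption of an
# exponential loss X with mean theta: S_X(x) = exp(-x/theta), so
# VaR_X(alpha) = -theta*ln(alpha) and, by memorylessness, CTE = VaR + theta.
theta, alpha = 10.0, 0.05
var_exact = -theta * math.log(alpha)             # = 10*ln(20), about 29.96
cte_exact = var_exact + theta

random.seed(1)
xs = sorted(random.expovariate(1.0 / theta) for _ in range(200_000))
var_mc = xs[int((1 - alpha) * len(xs))]          # empirical (1-alpha)-quantile
tail = xs[int((1 - alpha) * len(xs)):]
cte_mc = sum(tail) / len(tail)                   # empirical E[X | X >= VaR]
print(var_exact, var_mc)
print(cte_exact, cte_mc)
```

The Monte Carlo estimates should land close to the closed-form values, illustrating that CTE always sits above VaR at the same confidence level.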
Since T_f(X) defined in (1.1) captures the overall cost of insuring a loss for a ceded loss function f, prudent risk management for an insurer therefore requires that the risk measures associated with T_f(X) be as small as possible. Motivated by Cai and Tan (2007)
and by assuming the expectation premium principle1 for setting the reinsurance premium
δf (X), i.e.
δf (X) = (1 + ρ)E[f(X)], (1.4)
where ρ > 0 is the safety loading, this paper strives to determine the optimal ceded loss
functions that, respectively, minimize VaR and CTE of the total cost Tf (X) in some classes
of ceded loss functions. More specifically, our objective is to seek the optimal reinsurance in
the following class of ceded loss functions:
Definition 1.1. Let F denote the class of ceded loss functions consisting of all increasing convex functions f(x) defined on [0,∞) that satisfy 0 ≤ f(x) ≤ x for x ≥ 0, excluding f(x) ≡ 0.
1 There are many other principles that have been proposed in connection with determining the premium. The expectation premium principle has remained the most fundamental and widely used, and hence we continue to use it in this paper.
Mathematically, our optimal reinsurance model can be formulated as follows, depending on
the chosen risk measure:
VaR-optimization : VaR_{T_{f*}(X)}(α) = min_{f∈F} VaR_{T_f(X)}(α), (1.5)

and

CTE-optimization : CTE_{T_{f*}(X)}(α) = min_{f∈F} CTE_{T_f(X)}(α), (1.6)

where VaR_{T_f(X)}(α) and CTE_{T_f(X)}(α) are defined analogously to (1.2) and (1.3) for the total risk random variable T_f(X).
To facilitate the discussion of the optimal ceded loss function in F of the above optimiza-
tion problems, we introduce the following class of ceded loss functions:
Definition 1.2. Let H denote the class of ceded loss functions consisting of all nonnegative functions h(x) defined on [0,∞) of the form

h(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+, x ≥ 0, n = 1, 2, · · · , (1.7)

where c_{n,j} > 0 and d_{n,j} ≥ 0 are constants such that

0 < Σ_{j=1}^n c_{n,j} ≤ 1 and 0 ≤ d_{n,1} ≤ d_{n,2} ≤ · · · ≤ d_{n,n} for all n = 1, 2, · · · .
The importance of analyzing the optimal reinsurance in H can be argued as follows.
First, note that H is a subclass of F and all functions in F are continuous on [0,∞) since
increasing convex functions are continuous. Second, by adopting a technique similar to the
proof of Theorem 1.5.7 in Muller and Stoyan (2002, p18) on increasing convex order, we
formally show that any function in F is the limit of a sequence of functions in H, namely the
class H is a dense subclass of F (see Lemma 3.1). Consequently, by using some convergence results on VaR and CTE, we prove that the optimal functions in H, which minimize the VaR and CTE of the total cost T_h(X) over h ∈ H, also minimize the VaR and CTE of the total cost T_f(X) over f ∈ F. The above arguments imply that we can deduce the solutions
to the optimal reinsurance models (1.5) and (1.6) by first confining the optimal ceded loss
functions within the class H. Following this strategy, we derive the optimal reinsurance
analytically as presented in Theorem 3.1 and Theorem 4.1. These results indicate that
optimal reinsurance can be in the forms of stop-loss, quota-share, or change-loss, depending
on the risk measure’s level of confidence and the safety loading for the reinsurance premium.
Throughout this paper, the terms “increasing” and “decreasing” mean “non-decreasing”
and “non-increasing”, respectively. To simplify our discussions, we assume that X has a
continuous strictly increasing distribution function on (0,∞) with a possible jump at 0,
which allows X to be a random sum Σ_{i=1}^N X_i, an important special case in actuarial loss models. See Cai and Tan (2007) for details. Furthermore, to avoid discussing trivial
cases, we further assume that the parameter α associated with the definitions of VaR and
CTE satisfies
0 < α < SX(0). (1.8)
The rest of the paper is organized as follows. Section 2 provides additional notations and
preliminary results on the VaR of Th(X) for h ∈ H. Sections 3 and 4 analyze the solutions to
the optimal reinsurance models (1.5) and (1.6), respectively. Section 5 concludes our paper
and Appendix A presents some of the technical propositions, lemmas and proofs.
2 Preliminaries
Define VaR_{I_f(X)}(α) and CTE_{I_f(X)}(α) as, respectively, the VaR and CTE of the retained loss random variable I_f(X). It then follows from the translation invariance property of VaR and CTE that:
VaRTf (X)(α) = VaRIf (X)(α) + δf (X), (2.1)
and
CTETf (X)(α) = CTEIf (X)(α) + δf (X). (2.2)
The above relations can also be justified directly: (2.1) follows from (A.1) and (1.1), and (2.2) in turn follows from (2.1) together with the definition of CTE.
Under our assumption that the reinsurance premium is determined by the expectation premium principle, for any function h(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ ∈ H the reinsurance premium on the ceded loss h(X) can be written as

δ_h(X) = (1 + ρ)E[h(X)] = (1 + ρ) Σ_{j=1}^n c_{n,j} ∫_{d_{n,j}}^∞ S_X(x)dx. (2.3)
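Formula (2.3) can be checked numerically. The sketch below assumes an illustrative exponential loss with survival function S_X(x) = e^{−x/θ} (for which ∫_d^∞ S_X(x)dx = θe^{−d/θ} in closed form) and compares a trapezoidal evaluation of the tail integrals against that closed form; all parameter values are arbitrary illustrative choices:

```python
import math

# Numerical check of (2.3) under the illustrative assumption of an
# exponential loss, S_X(x) = exp(-x/theta); the coefficients c and
# retentions d below are arbitrary illustrative choices.
theta, rho = 10.0, 0.2
c, d = [0.3, 0.5], [2.0, 6.0]

def S(x):
    return math.exp(-x / theta)

def tail_integral(dj, upper=400.0, steps=50_000):
    # trapezoidal approximation of \int_{dj}^{upper} S_X(x) dx
    # (the remaining mass beyond `upper` is negligible here)
    h = (upper - dj) / steps
    total = 0.5 * (S(dj) + S(upper))
    total += sum(S(dj + k * h) for k in range(1, steps))
    return total * h

premium_numeric = (1 + rho) * sum(cj * tail_integral(dj) for cj, dj in zip(c, d))
premium_exact = (1 + rho) * sum(cj * theta * math.exp(-dj / theta) for cj, dj in zip(c, d))
print(premium_numeric, premium_exact)   # the two evaluations of (2.3) agree
```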
Furthermore, by defining A_{n,i} = 1 − Σ_{j=1}^i c_{n,j} and B_{n,i} = Σ_{j=1}^i c_{n,j} d_{n,j}, i = 1, · · · , n, it is
easy to show that the retained loss is
I_h(X) = X − h(X) = X − Σ_{j=1}^n c_{n,j}(X − d_{n,j})_+

=
  X,                      X ≤ d_{n,1},
  A_{n,i} X + B_{n,i},    d_{n,i} ≤ X ≤ d_{n,i+1}, i = 1, · · · , n−1,
  A_{n,n} X + B_{n,n},    X ≥ d_{n,n}.
(2.4)
We point out that Ih(x) in the above expression is an increasing continuous concave
function of x and the coefficients of Ih(x) satisfy
An,i dn,i+1 + Bn,i = An,i+1 dn,i+1 + Bn,i+1 for all i = 1, · · · , n− 1.
It also follows from (2.4) that the survival function of I_h(X) can be represented as

S_{I_h(X)}(x) =
  S_X(x),                          x ≤ d_{n,1},
  S_X((x − B_{n,i})/A_{n,i}),      A_{n,i} d_{n,i} + B_{n,i} ≤ x ≤ A_{n,i} d_{n,i+1} + B_{n,i}, i = 1, · · · , n−1,
  S_X((x − B_{n,n})/A_{n,n}),      x ≥ A_{n,n} d_{n,n} + B_{n,n},
(2.5)

where S_X((x − B_{n,i})/A_{n,i}) = S_X(∞) = 0 when A_{n,i} = 0 for some i = 1, ..., n. Moreover, by (2.5), we derive the VaR of I_h(X) at a confidence level 1 − α as

VaR_{I_h(X)}(α) =
  S_X^{-1}(α),                     S_X^{-1}(α) ≤ d_{n,1},
  A_{n,i} S_X^{-1}(α) + B_{n,i},   d_{n,i} ≤ S_X^{-1}(α) ≤ d_{n,i+1}, i = 1, · · · , n−1,
  A_{n,n} S_X^{-1}(α) + B_{n,n},   d_{n,n} ≤ S_X^{-1}(α),
(2.6)

which, together with (2.1), leads to the following expression for the VaR of T_h(X):

VaR_{T_h(X)}(α) =
  S_X^{-1}(α) + δ_h(X),                     S_X^{-1}(α) ≤ d_{n,1},
  A_{n,i} S_X^{-1}(α) + B_{n,i} + δ_h(X),   d_{n,i} ≤ S_X^{-1}(α) ≤ d_{n,i+1}, i = 1, · · · , n−1,
  A_{n,n} S_X^{-1}(α) + B_{n,n} + δ_h(X),   d_{n,n} ≤ S_X^{-1}(α).
(2.7)
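The piecewise representation (2.4) can be verified directly against the definition I_h(x) = x − Σ_j c_{n,j}(x − d_{n,j})_+. The coefficients below (n = 2) are illustrative choices for this sketch only:

```python
# Direct verification of the piecewise form (2.4) of the retained loss
# against I_h(x) = x - sum_j c_{n,j}(x - d_{n,j})_+ ; the coefficients
# below (n = 2) are illustrative.
c = [0.3, 0.5]
d = [2.0, 6.0]
A = [1 - sum(c[:i + 1]) for i in range(len(c))]           # A_{n,i}
B = [sum(c[j] * d[j] for j in range(i + 1)) for i in range(len(c))]  # B_{n,i}

def retained_direct(x):
    return x - sum(cj * max(x - dj, 0.0) for cj, dj in zip(c, d))

def retained_piecewise(x):
    if x <= d[0]:
        return x
    for i in range(len(d) - 1):
        if d[i] <= x <= d[i + 1]:
            return A[i] * x + B[i]
    return A[-1] * x + B[-1]

for x in [0.0, 1.5, 4.0, 6.0, 9.0, 25.0]:
    assert abs(retained_direct(x) - retained_piecewise(x)) < 1e-12
print("piecewise representation (2.4) matches the direct definition")
```

The matching slopes and intercepts at the breakpoints also illustrate the continuity relation A_{n,i} d_{n,i+1} + B_{n,i} = A_{n,i+1} d_{n,i+1} + B_{n,i+1} noted above.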
We conclude this section by introducing the following notation:

ρ* = 1/(1 + ρ), d* = S_X^{-1}(ρ*), (2.8)

g(x) = x + (1/ρ*) ∫_x^∞ S_X(t)dt, x ≥ 0, (2.9)

u(x) = S_X^{-1}(x) + (1/ρ*) ∫_{S_X^{-1}(x)}^∞ S_X(t)dt, x ≥ 0. (2.10)

See Proposition A.2 in the appendix for some useful properties of the functions g(·) and u(·).
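For an exponential loss the tail integral in (2.9)–(2.10) is available in closed form, so ρ*, d*, g and u can be evaluated directly. The following sketch, with illustrative θ and ρ (assumptions made only for this example), also confirms the identity u(ρ*) = g(d*) that is used repeatedly later:

```python
import math

# Evaluation of (2.8)-(2.10) under an illustrative exponential loss with
# mean theta, where \int_x^infty S_X(t)dt = theta*exp(-x/theta) in closed form.
theta, rho = 10.0, 0.2
rho_star = 1.0 / (1.0 + rho)                  # (2.8)
d_star = -theta * math.log(rho_star)          # d* = S_X^{-1}(rho*) = theta*ln(1+rho)

def g(x):                                     # (2.9)
    return x + (1.0 / rho_star) * theta * math.exp(-x / theta)

def u(p):                                     # (2.10) at probability level p
    s_inv = -theta * math.log(p)              # S_X^{-1}(p)
    return s_inv + (1.0 / rho_star) * theta * math.exp(-s_inv / theta)

print(rho_star, d_star)
print(g(d_star), u(rho_star))                 # u(rho*) = g(d*), as used later
```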
3 Optimal Reinsurance under VaR Risk Measure
In this section, we derive the optimal ceded loss function f* in the class F under the VaR criterion, i.e., the solutions to the VaR-optimization problem (1.5). As mentioned earlier, we achieve this by first establishing that the optimal ceded loss functions in the class H are also optimal in the class F (see Lemma 3.2). This result implies that it suffices to seek the solutions to the following optimal reinsurance model:
VaR_{T_{h*}(X)}(α) = min_{h∈H} VaR_{T_h(X)}(α). (3.1)
We now present two lemmas. The proof of the former, which is deferred to the appendix, is needed for proving the latter.
Lemma 3.1. For any f ∈ F , there exists a sequence of functions {h_n, n = 1, 2, · · · } in H such that lim_{n→∞} h_n(x) = f(x) for all x ≥ 0 and

h_n(x) ≤ f(x) ≤ x for all x ≥ 0 and n = 1, 2, · · · .
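The approximation behind Lemma 3.1 can be visualized with tangent lines: for an increasing convex f, the upper envelope of finitely many tangents lies below f, is piecewise linear of the hinge form (1.7), and converges to f as the tangent grid refines. This is only a heuristic sketch of that idea; the function f(x) = x²/(1+x) below is an illustrative member of F, not an example from the paper:

```python
# Tangent-envelope sketch of the approximation behind Lemma 3.1, using the
# illustrative function f(x) = x^2/(1+x), an increasing convex member of F.
def f(x):
    return x * x / (1.0 + x)

def fprime(x):
    return 1.0 - 1.0 / (1.0 + x) ** 2

def envelope(x, n, width=20.0):
    # upper envelope of n+1 tangent lines of f; convexity keeps it below f,
    # and refining the tangent grid improves the approximation
    ts = [width * j / n for j in range(n + 1)]
    return max(f(t) + fprime(t) * (x - t) for t in ts)

for x in [0.5, 3.0, 10.0]:
    assert envelope(x, 5) <= envelope(x, 50) + 1e-12 <= f(x) + 1e-9
    assert abs(envelope(x, 500) - f(x)) < 1e-3    # refinement converges to f
print("tangent envelopes increase to f from below")
```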
Lemma 3.2. Optimal ceded loss functions which minimize the VaR of insurer’s total risk
in the class H are also optimal in the class F .
Proof. Let h* be any optimal ceded loss function in the class H under the VaR criterion, i.e., a solution to the optimization problem (3.1). We need to show that

VaR_{T_f(X)}(α) ≥ VaR_{T_{h*}(X)}(α) for any f ∈ F . (3.2)
By Lemma 3.1, we can take a sequence of functions {h_n, n = 1, 2, · · · } in H satisfying

lim_{n→∞} h_n(x) = f(x) for all x ≥ 0 (3.3)

and

h_n(x) ≤ f(x) ≤ x for all x ≥ 0 and n = 1, 2, .... (3.4)
Then it follows from the dominated convergence theorem and (3.3) that

lim_{n→∞} E[h_n(X)] = E[f(X)], (3.5)

and the optimality of h* in the class H implies

VaR_{T_{h_n}(X)}(α) ≥ VaR_{T_{h*}(X)}(α), (3.6)
for any n = 1, 2, · · · . Note that the retained loss functions I_{h_n}(x) = x − h_n(x) and I_f(x) = x − f(x) are increasing and continuous. Thus, by (3.3) and (3.5), T_{h_n}(x) = I_{h_n}(x) + (1 + ρ)E[h_n(X)] → I_f(x) + (1 + ρ)E[f(X)] = T_f(x) as n → ∞ for every x ≥ 0. It then follows from part (b) of Proposition A.1 that lim_{n→∞} VaR_{T_{h_n}(X)}(α) = VaR_{T_f(X)}(α), which, together with (3.6), implies (3.2) and completes the proof. □
We proceed to derive the solutions to the optimal reinsurance model (3.1). For each n = 1, 2, · · · and a given 0 < α < 1, we first define the following sets of (d_{n,1}, ..., d_{n,n}):

D_n = {(d_{n,1}, ..., d_{n,n}) : 0 ≤ d_{n,1} ≤ · · · ≤ d_{n,n}},

D_n^0 = {(d_{n,1}, ..., d_{n,n}) : S_X^{-1}(α) ≤ d_{n,1} ≤ · · · ≤ d_{n,n}},

D_n^i = {(d_{n,1}, ..., d_{n,n}) : 0 ≤ d_{n,1} ≤ · · · ≤ d_{n,i} ≤ S_X^{-1}(α) ≤ d_{n,i+1} ≤ · · · ≤ d_{n,n}}, i = 1, ..., n−1,

D_n^n = {(d_{n,1}, ..., d_{n,n}) : 0 ≤ d_{n,1} ≤ · · · ≤ d_{n,n} ≤ S_X^{-1}(α)}. (3.7)
Note that D_n^0, D_n^1, ..., D_n^n form a partition of D_n, i.e., D_n = ∪_{i=0}^n D_n^i. For any h(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ ∈ H, we use the notation VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) to denote the VaR of the insurer's total cost associated with the ceded loss h(X), viewed as a function of d_{n,j}, j = 1, · · · , n. Combining (2.3) and (2.7), we obtain the following lemma giving explicit expressions for VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) in terms of d_{n,1}, ..., d_{n,n}:
Lemma 3.3. For any h(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ ∈ H and a given confidence level 1 − α with 0 < α < S_X(0):

(a) When S_X^{-1}(α) ≤ d_{n,1}, i.e., (d_{n,1}, ..., d_{n,n}) ∈ D_n^0,

VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = S_X^{-1}(α) + (1/ρ*) Σ_{j=1}^n c_{n,j} ∫_{d_{n,j}}^∞ S_X(x)dx. (3.8)

(b) When d_{n,i} ≤ S_X^{-1}(α) ≤ d_{n,i+1}, i.e., (d_{n,1}, ..., d_{n,n}) ∈ D_n^i, for i = 1, · · · , n−1,

VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = (1 − Σ_{j=1}^i c_{n,j}) S_X^{-1}(α) + Σ_{j=1}^i c_{n,j} g(d_{n,j}) + (1/ρ*) Σ_{j=i+1}^n c_{n,j} ∫_{d_{n,j}}^∞ S_X(x)dx. (3.9)

(c) When d_{n,n} ≤ S_X^{-1}(α), i.e., (d_{n,1}, ..., d_{n,n}) ∈ D_n^n,

VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = (1 − Σ_{j=1}^n c_{n,j}) S_X^{-1}(α) + Σ_{j=1}^n c_{n,j} g(d_{n,j}). (3.10)
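The case-(b) expression (3.9) can be cross-checked by Monte Carlo. The sketch below does so for an illustrative exponential loss (an assumption made only for this example), with retentions chosen so that d_{2,1} ≤ S_X^{-1}(α) ≤ d_{2,2}:

```python
import math, random

# Monte Carlo cross-check of the case-(b) formula (3.9), assuming an
# illustrative exponential loss; parameters chosen so that
# d_{2,1} <= S_X^{-1}(alpha) <= d_{2,2} (case (b) with n = 2, i = 1).
theta, rho, alpha = 10.0, 0.2, 0.05
c, d = [0.3, 0.5], [2.0, 40.0]
var_x = theta * math.log(1 / alpha)            # S_X^{-1}(alpha), about 29.96

def g(x):                                      # (2.9) in closed form
    return x + (1 + rho) * theta * math.exp(-x / theta)

def tail(dj):                                  # \int_{dj}^infty S_X(x) dx
    return theta * math.exp(-dj / theta)

formula = ((1 - c[0]) * var_x + c[0] * g(d[0])
           + (1 + rho) * c[1] * tail(d[1]))    # right-hand side of (3.9)

premium = (1 + rho) * (c[0] * tail(d[0]) + c[1] * tail(d[1]))
def total(x):                                  # T_h(x) = I_h(x) + delta_h(X)
    return x - sum(cj * max(x - dj, 0.0) for cj, dj in zip(c, d)) + premium

random.seed(3)
ts = sorted(total(random.expovariate(1 / theta)) for _ in range(400_000))
var_mc = ts[int((1 - alpha) * len(ts))]
print(formula, var_mc)                         # agree up to Monte Carlo error
```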
Based on the above expressions for VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α), we analyze its minimum on the set D_n for n = 1, 2, · · · by discussing its infimum on D_n^i for each i = 0, 1, 2, · · · , n. The results are summarized in the following lemma (see Appendix A for the proof):

Lemma 3.4. Given a confidence level 1 − α with 0 < α < S_X(0), consider any function h(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ ∈ H with given coefficients c_{n,j}, j = 1, ..., n.
(a) If

ρ* < S_X(0) and S_X^{-1}(α) ≥ u(ρ*), (3.11)

then

min_{D_n} VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = S_X^{-1}(α) + Σ_{j=1}^n c_{n,j} [u(ρ*) − S_X^{-1}(α)] (3.12)

and the minimum VaR is attained at d_{n,1} = · · · = d_{n,n} = d*, i.e., at

h*(x) = Σ_{j=1}^n c_{n,j}(x − d*)_+ = Σ_{j=1}^n c_{n,j}(x − S_X^{-1}(ρ*))_+. (3.13)

(b) If

ρ* ≥ S_X(0) and S_X^{-1}(α) ≥ g(0), (3.14)

then

min_{D_n} VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = S_X^{-1}(α) + Σ_{j=1}^n c_{n,j} [g(0) − S_X^{-1}(α)] (3.15)

and the minimum VaR is attained at d_{n,1} = · · · = d_{n,n} = 0, i.e., at

h*(x) = Σ_{j=1}^n c_{n,j} x. (3.16)

(c) For all other cases, min_{D_n} VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) does not exist.
We are now ready to present the key results of this section, stated in Theorem 3.1. The above lemma is used to obtain the solution to (3.1) by comparing the minima of VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) on D_n over n = 1, 2, · · · . Lemma 3.2, in turn, asserts that these solutions are also solutions to the proposed optimal reinsurance model (1.5).
Theorem 3.1. For a given confidence level 1 − α with 0 < α < S_X(0):

(a) If ρ* < S_X(0) and S_X^{-1}(α) > u(ρ*), then min_{f∈F} VaR_{T_f(X)}(α) = u(ρ*) and the minimum VaR is attained at

f*(x) = (x − d*)_+. (3.17)

(b) If ρ* < S_X(0) and S_X^{-1}(α) = u(ρ*), then min_{f∈F} VaR_{T_f(X)}(α) = S_X^{-1}(α) and the minimum VaR is attained at

f*(x) = c (x − d*)_+ (3.18)

for any constant c such that 0 < c ≤ 1.

(c) If ρ* ≥ S_X(0) and S_X^{-1}(α) > g(0), then min_{f∈F} VaR_{T_f(X)}(α) = g(0) and the minimum VaR is attained at

f*(x) = x. (3.19)

(d) If ρ* ≥ S_X(0) and S_X^{-1}(α) = g(0), then min_{f∈F} VaR_{T_f(X)}(α) = S_X^{-1}(α) and the minimum VaR is attained at

f*(x) = cx (3.20)

for any constant c such that 0 < c ≤ 1.
Proof. Note that all ceded loss functions given in the theorem are included in H. Hence, by Lemma 3.2, we only need to show the optimality of these ceded loss functions in the class H; i.e., for any h(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ ∈ H,

VaR_{T_h(X)}(α) ≥ VaR_{T_{f*}(X)}(α). (3.21)

(a) By (3.12) of Lemma 3.4 and the assumption S_X^{-1}(α) > u(ρ*), we have

VaR_{T_h(X)}(α) ≥ min_{D_n} VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = S_X^{-1}(α) + Σ_{j=1}^n c_{n,j} [u(ρ*) − S_X^{-1}(α)] ≥ S_X^{-1}(α) + u(ρ*) − S_X^{-1}(α) = u(ρ*). (3.22)

Furthermore, by (3.12) and (3.13), VaR_{T_{f*}(X)}(α) = u(ρ*) when Σ_{j=1}^n c_{n,j} = 1 in (3.13), i.e., when f*(x) = (x − d*)_+. Hence, (3.22) implies (3.21).

(b) By (3.12) of Lemma 3.4 and the assumption S_X^{-1}(α) = u(ρ*), we have

VaR_{T_h(X)}(α) ≥ min_{D_n} VaR_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = S_X^{-1}(α) + Σ_{j=1}^n c_{n,j} [u(ρ*) − S_X^{-1}(α)] ≥ S_X^{-1}(α). (3.23)
Furthermore, by (3.12) and (3.13), VaR_{T_{f*}(X)}(α) = S_X^{-1}(α) when Σ_{j=1}^n c_{n,j} = c in (3.13), i.e., when f*(x) = c(x − d*)_+ for any 0 < c ≤ 1. Hence, (3.23) implies (3.21).

Parts (c) and (d) are proved similarly to Parts (a) and (b), using (3.15) of Lemma 3.4 and the assumptions S_X^{-1}(α) > g(0) and S_X^{-1}(α) = g(0), respectively. □

Remark 3.1. Theorem 3.1 establishes that for our proposed optimal reinsurance model, the optimal reinsurance is a stop-loss reinsurance in case (a), a change-loss reinsurance in case (b), and a quota-share reinsurance in cases (c) and (d).
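Theorem 3.1(a) can be illustrated numerically. Under the assumption, made only for this sketch, of an exponential loss (for which d* = θ ln(1+ρ), u(ρ*) = d* + θ and S_X^{-1}(α) = θ ln(1/α) in closed form), a grid search over stop-loss retentions recovers d* as the minimizer and u(ρ*) as the minimum VaR:

```python
import math

# Grid-search illustration of Theorem 3.1(a) under an illustrative
# exponential loss; closed forms: d* = theta*ln(1+rho), u(rho*) = d* + theta,
# S_X^{-1}(alpha) = theta*ln(1/alpha).
theta, rho, alpha = 10.0, 0.2, 0.01
d_star = theta * math.log(1 + rho)
u_rho_star = d_star + theta
var_x = theta * math.log(1 / alpha)
assert var_x > u_rho_star                     # so case (a) of Theorem 3.1 applies

def var_total_stop_loss(d):
    # VaR of T_{f_d}(X) = min(X, d) + (1+rho)E[(X-d)_+] for f_d(x) = (x-d)_+
    return min(d, var_x) + (1 + rho) * theta * math.exp(-d / theta)

grid = [0.01 * k for k in range(1, 5000)]     # retentions 0.01, ..., 49.99
best_d = min(grid, key=var_total_stop_loss)
print(best_d, d_star)                         # grid minimizer sits near d*
print(var_total_stop_loss(d_star), u_rho_star)
```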
We conclude this section by examining several special functions in F , as illustrated in
the following examples:
Example 3.1. Let f(X) = X, so that an insurer cedes all of its losses to a reinsurer. In this case, the retained loss is I_f(X) = 0 and the total cost for the insurer is T_f(X) = (1 + ρ)E[X]. Since the VaR of a constant is the constant itself, VaR_{T_f(X)}(α) = (1 + ρ)E[X] = g(0). It is easy to verify that

g(0) = (1 + ρ)E[X] > u(ρ*). (3.24)

Indeed, noticing that S_X(x) is decreasing in x and S_X(d*) = S_X(S_X^{-1}(ρ*)) = ρ*, we have

(1 + ρ)E[X] = (1 + ρ)(∫_0^{d*} S_X(x)dx + ∫_{d*}^∞ S_X(x)dx) > S_X^{-1}(ρ*) + (1/ρ*) ∫_{S_X^{-1}(ρ*)}^∞ S_X(x)dx = u(ρ*).

Hence, VaR_{T_f(X)}(α) is larger than the minimum VaRs in cases (a) and (b) of Theorem 3.1. However, f is optimal in cases (c) and (d). □

Example 3.2. Let f_a(X) = aX for 0 < a ≤ 1, so that an insurer uses a quota-share reinsurance. In this case, the retained loss is I_{f_a}(X) = (1 − a)X and the total cost for the insurer is T_{f_a}(X) = (1 − a)X + a(1 + ρ)E[X]. By (A.1), we have

VaR_{T_{f_a}(X)}(α) = (1 − a)VaR_X(α) + a(1 + ρ)E[X] = (1 − a)S_X^{-1}(α) + a(1 + ρ)E[X].

It follows from (3.24) that in cases (a) and (b) of Theorem 3.1, VaR_{T_{f_a}(X)}(α) > (1 − a)u(ρ*) + a u(ρ*) = u(ρ*). In case (c), VaR_{T_{f_a}(X)}(α) > (1 − a)g(0) + a g(0) = g(0). However, f_a is optimal in case (d). □

Example 3.3. Let f_d(X) = (X − d)_+ for d > 0, namely, an insurer uses a stop-loss reinsurance. In this case, the retained loss is I_{f_d}(X) = min{X, d} and the total cost for the insurer is T_{f_d}(X) = min{X, d} + (1 + ρ)E[(X − d)_+]. It was proved in Cai and Tan (2007), under the conditions of case (a) of Theorem 3.1, that min_{d>0} VaR_{T_{f_d}(X)}(α) = u(ρ*), with the minimum VaR attained at d* = S_X^{-1}(ρ*). □
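The comparisons in Examples 3.1–3.3 can be reproduced for an illustrative exponential loss in case (a) of Theorem 3.1 (the distribution, θ, ρ, α and the quota share a = 0.5 are all assumptions made only for this sketch): full cession and quota-share both exceed the minimum u(ρ*) attained by the stop-loss treaty at d*:

```python
import math

# The designs of Examples 3.1-3.3 compared under an illustrative exponential
# loss in case (a) of Theorem 3.1 (quota share a = 0.5 is an arbitrary choice).
theta, rho, alpha = 10.0, 0.2, 0.01
var_x = theta * math.log(1 / alpha)            # S_X^{-1}(alpha)
d_star = theta * math.log(1 + rho)
u_star = d_star + theta                        # minimum VaR u(rho*)

var_full = (1 + rho) * theta                   # Example 3.1: g(0) = (1+rho)E[X]
a = 0.5
var_quota = (1 - a) * var_x + a * (1 + rho) * theta   # Example 3.2
var_stop = d_star + (1 + rho) * theta * math.exp(-d_star / theta)  # Example 3.3

print(var_stop, u_star)     # the stop-loss at d* attains u(rho*)
print(var_full, var_quota)  # both strictly exceed u(rho*)
```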
4 Optimal Reinsurance under CTE Risk Measure
The previous section derived explicitly the optimal reinsurance that minimizes the VaR of the total cost of an insurer for reinsuring its losses. In this section, we extend the analysis by adopting CTE as the relevant risk measure. More specifically, we are interested in the optimal ceded loss functions for the CTE-optimization formulated in (1.6). Our procedure for identifying the optimal ceded loss functions under the CTE criterion parallels that for the VaR criterion in the previous section. We first establish that optimal ceded loss functions in the class H, which minimize the CTE of the insurer's total cost, are also optimal in the larger class F (see Lemma 4.2), and then consider the following optimization problem over the class of ceded loss functions in H:

CTE_{T_{h*}(X)}(α) = min_{h∈H} CTE_{T_h(X)}(α). (4.1)
It should be emphasized that discussing the optimal ceded loss functions under the CTE criterion is considerably more complicated than under the VaR criterion. The difficulty lies mainly in the proof of the following Lemma 4.1, which is crucial to the proof of Lemma 4.2. We defer the proof of Lemma 4.1 to the appendix.
Lemma 4.1. For any f(x) ∈ F , there exists a sequence of functions h_n(x) ∈ H such that

lim_{n→∞} h_n(x) = f(x) for all x ≥ 0 (4.2)

and, for α < S_X(0),

Pr{I_{h_n}(X) ≥ VaR_{I_{h_n}(X)}(α)} → Pr{I_f(X) ≥ VaR_{I_f(X)}(α)} as n → ∞. (4.3)
Lemma 4.2. Optimal ceded loss functions which minimize the CTE of insurer’s total risk
in class H are also optimal in class F .
Proof. Let h* be an optimal ceded loss function in the class H and let f be any ceded loss function in the class F . We need to show that

CTE_{T_f(X)}(α) ≥ CTE_{T_{h*}(X)}(α). (4.4)
Recall that, by combining (A.3) of Proposition A.1 with (2.2), we have

CTE_{T_{h_n}(X)}(α) = VaR_{I_{h_n}(X)}(α) + [∫_{VaR_{I_{h_n}(X)}(α)}^∞ S_{I_{h_n}(X)}(x)dx] / Pr{I_{h_n}(X) ≥ VaR_{I_{h_n}(X)}(α)} + (1 + ρ)E[h_n(X)], (4.5)

and

CTE_{T_f(X)}(α) = VaR_{I_f(X)}(α) + [∫_{VaR_{I_f(X)}(α)}^∞ S_{I_f(X)}(x)dx] / Pr{I_f(X) ≥ VaR_{I_f(X)}(α)} + (1 + ρ)E[f(X)]. (4.6)
By Lemma 4.1, there exists a sequence of functions h_n(x) ∈ H such that lim_{n→∞} h_n(x) = f(x) with 0 ≤ h_n(x) ≤ f(x) ≤ x for any x ≥ 0 and

Pr{I_{h_n}(X) ≥ VaR_{I_{h_n}(X)}(α)} → Pr{I_f(X) ≥ VaR_{I_f(X)}(α)} as n → ∞. (4.7)

Thus, since 0 < E[X] < ∞, the dominated convergence theorem yields

lim_{n→∞} E[h_n(X)] = E[f(X)]. (4.8)

Recall that I_f(x) = x − f(x) is increasing and continuous in x ≥ 0, and hence it follows from part (b) of Proposition A.1 that

lim_{n→∞} VaR_{I_{h_n}(X)}(α) = VaR_{I_f(X)}(α). (4.9)
Now, denote Y_{h_n} = I_{h_n}(X) − VaR_{I_{h_n}(X)}(α) and Y_f = I_f(X) − VaR_{I_f(X)}(α). By (4.9) and the continuity of the function κ(y) = y I_{{y≥0}}, we have κ(Y_{h_n}) → κ(Y_f) a.s. Hence, based on the fact that

0 ≤ κ(Y_{h_n}) = [I_{h_n}(X) − VaR_{I_{h_n}(X)}(α)] I_{{I_{h_n}(X) − VaR_{I_{h_n}(X)}(α) ≥ 0}} ≤ X a.s.,

we conclude, using the dominated convergence theorem, that

lim_{n→∞} ∫_{VaR_{I_{h_n}(X)}(α)}^∞ S_{I_{h_n}(X)}(x)dx = lim_{n→∞} E[κ(Y_{h_n})] = E[κ(Y_f)] = ∫_{VaR_{I_f(X)}(α)}^∞ S_{I_f(X)}(x)dx. (4.10)

Consequently, by (4.5)–(4.10), we obtain

lim_{n→∞} CTE_{T_{h_n}(X)}(α) = CTE_{T_f(X)}(α). (4.11)

On the other hand, the optimality of h* in H implies

CTE_{T_{h_n}(X)}(α) ≥ CTE_{T_{h*}(X)}(α), (4.12)

for n = 1, 2, · · · , which together with (4.11) implies (4.4). □
To proceed with the solution of the optimal reinsurance model (4.1), it is convenient to first determine explicit expressions for the corresponding CTE of the total cost. Recall that from (A.3) of Proposition A.1, together with (2.2), we obtain

CTE_{T_f(X)}(α) = VaR_{I_f(X)}(α) + [∫_{VaR_{I_f(X)}(α)}^∞ S_{I_f(X)}(x)dx] / Pr{I_f(X) ≥ VaR_{I_f(X)}(α)} + δ_f(X). (4.13)

Similar to the VaR-optimization, for any function h(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ ∈ H, we express the CTE of the insurer's total cost associated with the ceded loss h(X) as a function of d_{n,1}, ..., d_{n,n}, denoted CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α). Then, by defining the function

v(x) = S_X^{-1}(x) + (1/α) ∫_{S_X^{-1}(x)}^∞ S_X(t)dt, x ≥ 0, (4.14)

we obtain explicit expressions for CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α), as shown in Lemma 4.3. Capitalizing on this lemma, we determine the minimum of CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) on the set D_n by examining its infimum on D_n^i for each i = 0, 1, 2, · · · , n and each given integer n. The results are given in Lemma 4.4. The proofs of these two lemmas are presented in Appendix A.
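The function v in (4.14) differs from u in (2.10) only in the factor (1/α versus 1/ρ*) multiplying the tail integral and the level at which S_X^{-1} is evaluated. The sketch below evaluates both for an illustrative exponential loss (an assumption made only for this example) with α < ρ*, the regime in which the proof of Theorem 4.1(a) later establishes u(ρ*) < v(α):

```python
import math

# Comparison of u (2.10) and v (4.14) under an illustrative exponential loss
# with mean theta; here alpha < rho*, the regime of inequality (4.28).
theta, rho, alpha = 10.0, 0.2, 0.05
rho_star = 1 / (1 + rho)

def tail(x):                                   # \int_x^infty S_X(t) dt
    return theta * math.exp(-x / theta)

def u(p):
    q = -theta * math.log(p)                   # S_X^{-1}(p)
    return q + tail(q) / rho_star

def v(p):
    q = -theta * math.log(p)
    return q + tail(q) / alpha

print(u(rho_star), v(alpha))                   # u(rho*) < v(alpha) here
```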
Lemma 4.3. Given a confidence level 1 − α with 0 < α < S_X(0), and any h(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ ∈ H:

(a) When S_X^{-1}(α) ≤ d_{n,1}, i.e., (d_{n,1}, ..., d_{n,n}) ∈ D_n^0,

CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = v(α) + [1/ρ* − 1/α] Σ_{j=1}^n c_{n,j} ∫_{d_{n,j}}^∞ S_X(x)dx. (4.15)

(b) When d_{n,i} ≤ S_X^{-1}(α) ≤ d_{n,i+1}, i.e., (d_{n,1}, ..., d_{n,n}) ∈ D_n^i, for i = 1, ..., n−1,

CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = (1 − Σ_{j=1}^i c_{n,j}) v(α) + Σ_{j=1}^i c_{n,j} g(d_{n,j}) + [1/ρ* − 1/α] Σ_{j=i+1}^n c_{n,j} ∫_{d_{n,j}}^∞ S_X(x)dx. (4.16)

(c) When d_{n,n} ≤ S_X^{-1}(α), i.e., (d_{n,1}, ..., d_{n,n}) ∈ D_n^n,

CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = (1 − Σ_{j=1}^n c_{n,j}) v(α) + Σ_{j=1}^n c_{n,j} g(d_{n,j}). (4.17)
Lemma 4.4. Consider any function h(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ ∈ H with given coefficients c_{n,j}, j = 1, ..., n, and a confidence level 1 − α such that 0 < α < S_X(0).

(a) If α < ρ* < S_X(0), then

min_{D_n} CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = v(α) + Σ_{j=1}^n c_{n,j} [u(ρ*) − v(α)] (4.18)

and the minimum CTE is attained at d_{n,1} = · · · = d_{n,n} = d*, i.e., at

h*(x) = Σ_{j=1}^n c_{n,j}(x − d*)_+. (4.19)

(b) If α = ρ* < S_X(0), then

min_{D_n} CTE_{T_h(X)}(d_{n,1}, · · · , d_{n,n}, α) = v(α) (4.20)

and the minimum CTE is attained at any

h*(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ (4.21)

provided that (d_{n,1}, · · · , d_{n,n}) satisfies d* ≤ d_{n,1} ≤ · · · ≤ d_{n,n}. In particular, the minimum CTE is attained at (d*, · · · , d*), i.e., at h*(x) = Σ_{j=1}^n c_{n,j}(x − d*)_+.

(c) If α < S_X(0) ≤ ρ*, then

min_{D_n} CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = v(α) + Σ_{j=1}^n c_{n,j} [g(0) − v(α)] (4.22)

and the minimum CTE is attained at d_{n,1} = · · · = d_{n,n} = 0, i.e., at

h*(x) = Σ_{j=1}^n c_{n,j} x. (4.23)

(d) For all other cases, min_{D_n} CTE_{T_h(X)}(d_{n,1}, · · · , d_{n,n}, α) does not exist.
Finally, by comparing the minimum of CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) for each n, we obtain the solutions to the optimization problem (4.1). Lemma 4.2, in turn, asserts that these solutions are also the optimal ceded loss functions in the class F and hence the solutions to our proposed optimal reinsurance model (1.6). The results are summarized in the following theorem.

Theorem 4.1. For a given confidence level 1 − α with 0 < α < S_X(0):

(a) If α < ρ* < S_X(0), then min_{f∈F} CTE_{T_f(X)}(α) = u(ρ*) and the minimum CTE is attained at

f*(x) = (x − d*)_+. (4.24)

(b) If α = ρ* < S_X(0), then min_{f∈F} CTE_{T_f(X)}(α) = u(ρ*) and the minimum CTE is attained at any

f*(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ ∈ H (4.25)

such that d* ≤ d_{n,1} ≤ · · · ≤ d_{n,n} and n = 1, 2, · · · .

(c) If α < S_X(0) ≤ ρ*, then min_{f∈F} CTE_{T_f(X)}(α) = g(0) and the minimum CTE is attained at

f*(x) = x. (4.26)
Proof. The proof is based on Lemmas 4.2 and 4.4 and parallels that of Theorem 3.1. Note that all ceded loss functions given in the theorem are included in the class H. Hence, by Lemma 4.2, we only need to show the optimality of these ceded loss functions in the class H; i.e., for any h(x) = Σ_{j=1}^n c_{n,j}(x − d_{n,j})_+ ∈ H,

CTE_{T_h(X)}(α) ≥ CTE_{T_{f*}(X)}(α). (4.27)

(a) First, note that when α < ρ* < S_X(0), we have

u(ρ*) < v(α). (4.28)

Indeed, in this case, S_X^{-1}(α) > S_X^{-1}(ρ*) ≡ d*, which, together with (A.5), implies g(S_X^{-1}(α)) ≥ g(d*). Moreover, it follows from the definitions of the functions u(·), v(·), and g(·) that v(α) > u(α) = g(S_X^{-1}(α)) and g(d*) = u(ρ*). Consequently, we obtain (4.28), which, together with (4.18) of Lemma 4.4, implies that for any function h ∈ H,

CTE_{T_h(X)}(α) ≥ min_{D_n} CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = v(α) + Σ_{j=1}^n c_{n,j}[u(ρ*) − v(α)] ≥ u(ρ*). (4.29)

Furthermore, (4.18) and (4.19) lead to CTE_{T_{f*}(X)}(α) = u(ρ*) when Σ_{j=1}^n c_{n,j} = 1 in (4.19), i.e., when f*(x) = (x − d*)_+. Hence, (4.29) implies (4.27).

(b) By (4.20) of Lemma 4.4, we have for any function h ∈ H,

CTE_{T_h(X)}(α) ≥ min_{D_n} CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = v(α). (4.30)

Furthermore, by (4.20) and (4.21), CTE_{T_{f*}(X)}(α) = v(α) when f* is of the form (4.25) with d* ≤ d_{n,1} ≤ · · · ≤ d_{n,n} and n = 1, 2, · · · . Hence, (4.30) implies (4.27).

(c) By the definitions of v(·) and g(·), we have g(S_X^{-1}(α)) < v(α), and by (A.6) of Proposition A.2, we have g(0) ≤ g(S_X^{-1}(α)). Thus g(0) < v(α), which, together with (4.22) of Lemma 4.4, leads to

CTE_{T_h(X)}(α) ≥ min_{D_n} CTE_{T_h(X)}(d_{n,1}, ..., d_{n,n}, α) = v(α) + Σ_{j=1}^n c_{n,j}[g(0) − v(α)] ≥ g(0), (4.31)

for any function h ∈ H. Furthermore, by (4.22) and (4.23), CTE_{T_{f*}(X)}(α) = g(0) when Σ_{j=1}^n c_{n,j} = 1 in (4.23), i.e., when f*(x) = x. Hence, (4.31) implies (4.27). □
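Theorem 4.1(a) can also be checked by simulation. Under the assumption, made only for this sketch, of an exponential loss with α < ρ*, the CTE of the total cost under the stop-loss treaty with retention d* concentrates at u(ρ*):

```python
import math, random

# Simulation sketch of Theorem 4.1(a) under an illustrative exponential loss:
# with alpha < rho*, the stop-loss treaty with retention d* yields a CTE of
# the total cost equal to u(rho*).
theta, rho, alpha = 10.0, 0.2, 0.01
d_star = theta * math.log(1 + rho)               # d* = S_X^{-1}(rho*)
u_star = d_star + theta                          # u(rho*) for this loss model
premium = (1 + rho) * theta * math.exp(-d_star / theta)   # (1+rho)E[(X-d*)_+]

random.seed(7)
n = 400_000
totals = sorted(min(random.expovariate(1 / theta), d_star) + premium
                for _ in range(n))
k = int((1 - alpha) * n)
cte_mc = sum(totals[k:]) / (n - k)               # empirical CTE at level alpha
print(cte_mc, u_star)                            # the two agree closely
```

Here the total cost is capped at d* + premium, so the empirical tail average coincides with u(ρ*) almost exactly; the interesting content is that no other ceded loss function in F can do better.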
5 Conclusion
It is well known that reinsurance can be an effective risk management technique for insurers to transfer part of their risk to a reinsurer. The key, however, hinges on an optimal choice of reinsurance contracts. By formulating the problem as one of minimizing the VaR (or CTE) of the total cost of the insurer, this paper establishes conditions for optimal reinsurance designs. In particular, depending on the confidence level 1 − α of the risk measure and the safety loading ρ for the reinsurance premium, we formally justify that in some cases a stop-loss reinsurance is optimal, while in other cases a quota-share or a change-loss reinsurance is optimal. It is also of interest to note that the conditions for the optimal solutions under CTE are less restrictive than those under VaR.

It should be pointed out that a basic assumption of our proposed optimal reinsurance model is the adoption of the expectation premium principle for setting the reinsurance premium. In practice, a number of other premium principles exist. It would be of interest to investigate the impact of these premium principles on optimal reinsurance designs. We leave this for future research.
Appendix A
This appendix collects the proofs to some of the results stated in the previous sections. Other
necessary propositions and lemmas are also presented and proved in this appendix.
Proposition A.1. (a) If a function f is increasing and continuous, then

VaR_{f(X)}(α) = f(VaR_X(α)). (A.1)

(b) If functions f_n and f are increasing and continuous and satisfy lim_{n→∞} f_n(x) = f(x) for all x ≥ 0, then

lim_{n→∞} VaR_{f_n(X)}(α) = VaR_{f(X)}(α). (A.2)

(c) The CTE and VaR of X are related by

CTE_X(α) = E[X | X ≥ VaR_X(α)] = VaR_X(α) + [∫_{VaR_X(α)}^∞ S_X(x)dx] / Pr{X ≥ VaR_X(α)}. (A.3)

Proof. Parts (a) and (b) follow immediately from the definition of VaR; for part (c), see Cai and Tan (2007). □
Proposition A.2. (a) If ρ* < S_X(0), then the continuous function g(x) defined in (2.9) is decreasing on (0, d*), increasing on (d*, ∞), and satisfies

min_{0≤x≤a} g(x) = g(a) for 0 ≤ a ≤ d*, (A.4)

min_{0≤x≤a} g(x) = g(d*) = u(ρ*) for a ≥ d*. (A.5)

(b) If ρ* ≥ S_X(0), then the continuous function g(x) defined in (2.9) is increasing on (0, ∞) and satisfies

min_{0≤x≤a} g(x) = g(0) for a ≥ 0. (A.6)

Proof. (a) The claim follows from the fact that if ρ* < S_X(0), then g′(x) < 0 for 0 < x < d* and g′(x) > 0 for x > d*.

(b) The claim follows from g′(x) > 0 for x > 0 when ρ* ≥ S_X(0). □
Proof of Lemma 3.1. It is known that for any nonnegative increasing convex function $f$ defined on $[0,\infty)$, there exists a sequence of nonnegative functions $\{h_n, n = 1, 2, \ldots\}$ defined on $[0,\infty)$ such that $h_n(x) = \sum_{j=1}^{n} c_{n,j}(x - d_{n,j})_+$ for some constants $c_{n,j} \ge 0$ and $d_{n,j} \ge 0$, and $\lim_{n\to\infty} h_n(x) = f(x)$ from below for any $x \ge 0$, which implies that
\[
h_n(x) \le f(x) \quad \text{for all } x \ge 0 \text{ and } n = 1, 2, \ldots. \tag{A.7}
\]
See, for example, Case 1 of the proof of Theorem 1.5.7 of Muller and Stoyan (2002, p. 18).

For any $f \in \mathcal{F}$, we have $0 \le f(x) \le x$, which, together with (A.7), implies (3.4). Furthermore, it follows from (3.4) that for any $x > 0$ and $n = 1, 2, \ldots$,
\[
0 \le \frac{h_n(x)}{x} = \frac{\sum_{j=1}^{n} c_{n,j}(x - d_{n,j})_+}{x} \le 1. \tag{A.8}
\]
Thus, letting $x \to \infty$ in (A.8) yields $0 \le \sum_{j=1}^{n} c_{n,j} \le 1$ for all $n = 1, 2, \ldots$.

Finally, to show $h_n \in \mathcal{H}$, we just need to verify that $c_{n,1} > 0, \ldots, c_{n,n} > 0$ and $0 \le d_{n,1} \le \cdots \le d_{n,n}$ for all $n = 1, 2, \ldots$. Since $f \in \mathcal{F}$ and $f(x) \equiv 0$ does not hold, there exists a positive integer $n_0$ such that for every $n \ge n_0$ at least one of $c_{n,1}, \ldots, c_{n,n}$ is positive. Hence, for any $n \ge n_0$, we delete the term $c_{n,j}(x - d_{n,j})_+$ from $h_n(x)$ whenever $c_{n,j} = 0$, and relabel the remaining coefficients $\{(c_{n,j}, d_{n,j})\}$ so that $\{d_{n,j}\}$ is in increasing order. The sequence of functions $\{h_n, n \ge n_0\}$ then satisfies the requirements of the lemma, and this completes the proof. ∎
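The approximation used in this proof can be visualized numerically. The sketch below builds $h_n$ as the maximum of tangent lines of a convex $f$ at $n$ grid knots, which is an increasing piecewise-linear minorant expressible in the form $\sum_j c_j (x - d_j)_+$; the particular choice $f(x) = \sqrt{1+x^2} - 1$ (which satisfies $0 \le f(x) \le x$ and $0 \le f' < 1$) and the grid are illustrative assumptions.

```python
import numpy as np

# Illustrative convex ceded-loss function with 0 <= f(x) <= x and 0 <= f' < 1.
f  = lambda t: np.sqrt(1.0 + t**2) - 1.0
fp = lambda t: t / np.sqrt(1.0 + t**2)

def h_n(x, n):
    """Max of tangent lines of f at n grid knots: an increasing piecewise-linear
    minorant of f of the form sum_j c_j (x - d_j)_+, approximating f from below
    (cf. (A.7))."""
    knots = np.linspace(0.0, 10.0, n)
    a = fp(knots)                       # slopes in [0, 1)
    b = f(knots) - a * knots            # intercepts of the tangent lines
    lines = a * x[:, None] + b          # each tangent line evaluated at x
    return np.maximum(0.0, lines.max(axis=1))

x = np.linspace(0.0, 10.0, 501)
for n in (2, 8, 64):
    hn = h_n(x, n)
    assert np.all(hn <= f(x) + 1e-12)   # h_n(x) <= f(x), as in (A.7)
    print(n, np.max(f(x) - hn))         # sup gap shrinks as n grows
```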
Proof of Lemma 3.4. We first establish the fact that $\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)$ has a minimum on $D_n$ if and only if
\[
\min_{D_n^n} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\} \le \min_{i=0,1,\ldots,n-1} \Bigl\{\inf_{D_n^i} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}\Bigr\}. \tag{A.9}
\]
To this end, we consider the value of $\inf_{D_n^i} \{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\}$ and whether it is attainable on the indicated set $D_n^i$ for $i = 0, 1, 2, \ldots, n$.

Note that $\int_d^{\infty} S_X(x)\,dx$ is a decreasing function of $d \ge 0$ and $\int_d^{\infty} S_X(x)\,dx \to 0$ as $d \to \infty$. Thus, it follows from (3.8) that
\[
\inf_{D_n^0} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\} = \mathrm{VaR}_{T_h(X)}(\infty, \ldots, \infty, \alpha) = S_X^{-1}(\alpha). \tag{A.10}
\]
For $D_n^i$, $i = 1, 2, \ldots, n$, we consider the following three possible situations:

(i) When $\rho^* < S_X(0)$ and $d^* = S_X^{-1}(\rho^*) \le S_X^{-1}(\alpha)$, then from (A.5) of Proposition A.2, (3.9) and (3.10), we have, for $i = 1, \ldots, n-1$,
\[
\begin{aligned}
\inf_{D_n^i} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{VaR}_{T_h(X)}(\underbrace{d^*, \ldots, d^*}_{i\ \text{entries}}, \infty, \ldots, \infty, \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{i} c_{n,j}\Bigr) S_X^{-1}(\alpha) + \sum_{j=1}^{i} c_{n,j}\, g(d^*) \\
&= S_X^{-1}(\alpha) + \bigl[u(\rho^*) - S_X^{-1}(\alpha)\bigr] \sum_{j=1}^{i} c_{n,j},
\end{aligned} \tag{A.11}
\]
and, for $i = n$,
\[
\begin{aligned}
\min_{D_n^n} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{VaR}_{T_h(X)}(d^*, \ldots, d^*, \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{n} c_{n,j}\Bigr) S_X^{-1}(\alpha) + \sum_{j=1}^{n} c_{n,j}\, g(d^*) \\
&= S_X^{-1}(\alpha) + \bigl[u(\rho^*) - S_X^{-1}(\alpha)\bigr] \sum_{j=1}^{n} c_{n,j}.
\end{aligned} \tag{A.12}
\]
(ii) If $\rho^* < S_X(0)$ and $d^* = S_X^{-1}(\rho^*) > S_X^{-1}(\alpha)$, then from (A.4) of Proposition A.2, (3.9) and (3.10), we have, for $i = 1, \ldots, n-1$,
\[
\begin{aligned}
\inf_{D_n^i} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{VaR}_{T_h(X)}(\underbrace{S_X^{-1}(\alpha), \ldots, S_X^{-1}(\alpha)}_{i\ \text{entries}}, \infty, \ldots, \infty, \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{i} c_{n,j}\Bigr) S_X^{-1}(\alpha) + \sum_{j=1}^{i} c_{n,j}\, g\bigl(S_X^{-1}(\alpha)\bigr) \\
&= S_X^{-1}(\alpha) + \bigl[u(\alpha) - S_X^{-1}(\alpha)\bigr] \sum_{j=1}^{i} c_{n,j},
\end{aligned} \tag{A.13}
\]
and, for $i = n$,
\[
\begin{aligned}
\min_{D_n^n} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{VaR}_{T_h(X)}\bigl(S_X^{-1}(\alpha), \ldots, S_X^{-1}(\alpha), \alpha\bigr) \\
&= \Bigl(1 - \sum_{j=1}^{n} c_{n,j}\Bigr) S_X^{-1}(\alpha) + \sum_{j=1}^{n} c_{n,j}\, g\bigl(S_X^{-1}(\alpha)\bigr) \\
&= S_X^{-1}(\alpha) + \bigl[u(\alpha) - S_X^{-1}(\alpha)\bigr] \sum_{j=1}^{n} c_{n,j}.
\end{aligned} \tag{A.14}
\]
(iii) When $\rho^* \ge S_X(0)$, then by Proposition A.2 (b), (3.9) and (3.10), we have, for $i = 1, \ldots, n-1$,
\[
\begin{aligned}
\inf_{D_n^i} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{VaR}_{T_h(X)}(\underbrace{0, \ldots, 0}_{i\ \text{entries}}, \infty, \ldots, \infty, \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{i} c_{n,j}\Bigr) S_X^{-1}(\alpha) + \sum_{j=1}^{i} c_{n,j}\, g(0) \\
&= S_X^{-1}(\alpha) + \bigl[g(0) - S_X^{-1}(\alpha)\bigr] \sum_{j=1}^{i} c_{n,j},
\end{aligned} \tag{A.15}
\]
and, for $i = n$,
\[
\begin{aligned}
\min_{D_n^n} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{VaR}_{T_h(X)}(0, \ldots, 0, \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{n} c_{n,j}\Bigr) S_X^{-1}(\alpha) + \sum_{j=1}^{n} c_{n,j}\, g(0) \\
&= S_X^{-1}(\alpha) + \bigl[g(0) - S_X^{-1}(\alpha)\bigr] \sum_{j=1}^{n} c_{n,j}.
\end{aligned} \tag{A.16}
\]
Combining all the above, we immediately see that for any $h(x) = \sum_{j=1}^{n} c_{n,j}(x - d_{n,j})_+ \in \mathcal{H}$ with fixed coefficients $c_{n,j}$, $j = 1, \ldots, n$, $\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)$ has a minimum only on $D_n^n$ and has an infimum but no minimum on each of the other sets $D_n^0, D_n^1, \ldots, D_n^{n-1}$. This implies that $\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)$ has a minimum on $D_n$ if and only if (A.9) holds.

Note that when inequality (A.9) holds, then
\[
\min_{D_n} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\} = \min_{D_n^n} \bigl\{\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\} \tag{A.17}
\]
since $D_n = \bigcup_{i=1}^{n} D_n^i$. In addition, we point out that
\[
g(0) = \frac{1}{\rho^*}\, E[X] > \frac{1}{\rho^*} \int_0^{S_X^{-1}(\alpha)} S_X(x)\,dx > \frac{\alpha}{\rho^*}\, S_X^{-1}(\alpha)
\]
and $u(\rho^*) > S_X^{-1}(\rho^*)$. Hence, if $\alpha \ge \rho^*$, then $g(0) > S_X^{-1}(\alpha)$ and $u(\rho^*) > S_X^{-1}(\alpha)$. These observations are useful in proving the results of the lemma, which we now do.
(a) Together with (2.10), the condition $S_X^{-1}(\alpha) \ge u(\rho^*)$ implies the inequality $d^* = S_X^{-1}(\rho^*) < S_X^{-1}(\alpha)$. Thus, if (3.11) holds, then, noticing that $0 < \sum_{j=1}^{i} c_{n,j} < \sum_{j=1}^{n} c_{n,j} \le 1$ for $i = 1, \ldots, n-1$, we see that the minimum in (A.12) is less than or equal to each of the infima in (A.10) and (A.11), since $u(\rho^*) - S_X^{-1}(\alpha) \le 0$. This implies that inequality (A.9) holds, and hence so does (A.17). Consequently, it follows from (A.12) that
\[
\begin{aligned}
\min_{D_n} \mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)
&= \min_{D_n^n} \mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha) \\
&= \mathrm{VaR}_{T_h(X)}(d^*, \ldots, d^*, \alpha) \\
&= S_X^{-1}(\alpha) + \sum_{j=1}^{n} c_{n,j} \bigl[u(\rho^*) - S_X^{-1}(\alpha)\bigr],
\end{aligned}
\]
where $\mathrm{VaR}_{T_h(X)}(d^*, \ldots, d^*, \alpha)$ means that the minimum VaR is attained at $d_{n,1} = \cdots = d_{n,n} = d^*$. This proves (3.13).
(b) Condition (3.14) implies $g(0) - S_X^{-1}(\alpha) \le 0$. Then, by comparing (A.16) with (A.15) and (A.10) and using an argument similar to that in the proof of part (a), it is easy to conclude that
\[
\begin{aligned}
\min_{D_n} \mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)
&= \min_{D_n^n} \mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha) \\
&= \mathrm{VaR}_{T_h(X)}(0, \ldots, 0, \alpha) \\
&= S_X^{-1}(\alpha) + \sum_{j=1}^{n} c_{n,j} \bigl[g(0) - S_X^{-1}(\alpha)\bigr],
\end{aligned}
\]
where $\mathrm{VaR}_{T_h(X)}(0, \ldots, 0, \alpha)$ means that the minimum VaR is attained at $d_{n,1} = \cdots = d_{n,n} = 0$. Hence, (3.16) holds.

(c) In all other cases, it is easy to check that (A.9) does not hold, and hence $\mathrm{VaR}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)$ has no minimum on $D_n$. ∎
Proof of Lemma 4.3. (a) When $S_X^{-1}(\alpha) \le d_{n,1}$, it follows from (2.6) that $\mathrm{VaR}_{I_h(X)}(\alpha) = S_X^{-1}(\alpha) \le d_{n,1}$. Thus, by (2.5) and after some algebra, it is not difficult to show that
\[
\begin{aligned}
\int_{\mathrm{VaR}_{I_h(X)}(\alpha)}^{\infty} S_{I_h(X)}(x)\,dx
&= \int_{S_X^{-1}(\alpha)}^{\infty} S_{I_h(X)}(x)\,dx \\
&= \int_{S_X^{-1}(\alpha)}^{d_{n,1}} S_X(x)\,dx + \sum_{i=1}^{n-1} \int_{A_{n,i} d_{n,i} + B_{n,i}}^{A_{n,i} d_{n,i+1} + B_{n,i}} S_X\Bigl(\frac{x - B_{n,i}}{A_{n,i}}\Bigr) dx + \int_{A_{n,n} d_{n,n} + B_{n,n}}^{\infty} S_X\Bigl(\frac{x - B_{n,n}}{A_{n,n}}\Bigr) dx \\
&= \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx - \sum_{j=1}^{n} c_{n,j} \int_{d_{n,j}}^{\infty} S_X(x)\,dx.
\end{aligned}
\]
Moreover, by (2.4) and noticing that $I_h(X)$ is increasing in $X$, we have
\[
\Pr\{I_h(X) \ge \mathrm{VaR}_{I_h(X)}(\alpha)\} = \Pr\{I_h(X) \ge S_X^{-1}(\alpha)\} = \Pr\{X \ge S_X^{-1}(\alpha)\} = \alpha.
\]
Together with (4.13), (2.3) and (2.6), we obtain (4.15).
(b) When $d_{n,i} \le S_X^{-1}(\alpha) \le d_{n,i+1}$, it follows from (2.6) that
\[
\mathrm{VaR}_{I_h(X)}(\alpha) = A_{n,i}\, S_X^{-1}(\alpha) + B_{n,i} \tag{A.18}
\]
and $A_{n,i} d_{n,i} + B_{n,i} \le A_{n,i} S_X^{-1}(\alpha) + B_{n,i} \le A_{n,i} d_{n,i+1} + B_{n,i}$. Hence, by (2.5), we have
\[
\begin{aligned}
\int_{\mathrm{VaR}_{I_h(X)}(\alpha)}^{\infty} S_{I_h(X)}(x)\,dx
&= \int_{A_{n,i} S_X^{-1}(\alpha) + B_{n,i}}^{\infty} S_{I_h(X)}(x)\,dx \\
&= \int_{A_{n,i} S_X^{-1}(\alpha) + B_{n,i}}^{A_{n,i} d_{n,i+1} + B_{n,i}} S_X\Bigl(\frac{x - B_{n,i}}{A_{n,i}}\Bigr) dx + \sum_{j=i+1}^{n-1} \int_{A_{n,j} d_{n,j} + B_{n,j}}^{A_{n,j} d_{n,j+1} + B_{n,j}} S_X\Bigl(\frac{x - B_{n,j}}{A_{n,j}}\Bigr) dx \\
&\qquad + \int_{A_{n,n} d_{n,n} + B_{n,n}}^{\infty} S_X\Bigl(\frac{x - B_{n,n}}{A_{n,n}}\Bigr) dx \\
&= A_{n,i} \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx - \sum_{j=i+1}^{n} c_{n,j} \int_{d_{n,j}}^{\infty} S_X(x)\,dx. 
\end{aligned} \tag{A.19}
\]
Note that $0 < A_{n,i} < 1$ for $i = 1, \ldots, n-1$. Furthermore, note that
\[
\Pr\{I_h(X) \ge \mathrm{VaR}_{I_h(X)}(\alpha)\} = \Pr\{I_h(X) \ge A_{n,i} S_X^{-1}(\alpha) + B_{n,i}\} = \Pr\{A_{n,i} X + B_{n,i} \ge A_{n,i} S_X^{-1}(\alpha) + B_{n,i}\} = \Pr\{X \ge S_X^{-1}(\alpha)\} = \alpha.
\]
Substituting (2.3), (A.18), (A.19) and this result into (4.13), we obtain the required expression (4.16).
(c) When $d_{n,n} \le S_X^{-1}(\alpha)$, it follows from (2.6) that
\[
\mathrm{VaR}_{I_h(X)}(\alpha) = A_{n,n}\, S_X^{-1}(\alpha) + B_{n,n} \tag{A.20}
\]
and $A_{n,n} S_X^{-1}(\alpha) + B_{n,n} \ge A_{n,n} d_{n,n} + B_{n,n}$. Hence, by (2.5),
\[
\int_{\mathrm{VaR}_{I_h(X)}(\alpha)}^{\infty} S_{I_h(X)}(x)\,dx = \int_{A_{n,n} S_X^{-1}(\alpha) + B_{n,n}}^{\infty} S_X\Bigl(\frac{x - B_{n,n}}{A_{n,n}}\Bigr) dx = A_{n,n} \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx.
\]
Note that $0 \le A_{n,n} < 1$. If $A_{n,n} = 0$, then $S_X\bigl(\frac{x - B_{n,n}}{A_{n,n}}\bigr) = 0$, so that
\[
\int_{\mathrm{VaR}_{I_h(X)}(\alpha)}^{\infty} S_{I_h(X)}(x)\,dx = 0. \tag{A.21}
\]
Moreover, it follows from (2.4) and (2.6) that
\[
\begin{aligned}
\Pr\{I_h(X) \ge \mathrm{VaR}_{I_h(X)}(\alpha)\}
&= \begin{cases} \Pr\{A_{n,n} X + B_{n,n} \ge A_{n,n} S_X^{-1}(\alpha) + B_{n,n}\}, & \text{if } 0 < A_{n,n} < 1, \\ \Pr\{I_h(X) \ge B_{n,n}\}, & \text{if } A_{n,n} = 0, \end{cases} \\
&= \begin{cases} \Pr\{X \ge S_X^{-1}(\alpha)\} = \alpha, & \text{if } 0 < A_{n,n} < 1, \\ \Pr\{X \ge d_{n,n}\}, & \text{if } A_{n,n} = 0. \end{cases}
\end{aligned} \tag{A.22}
\]
Substituting (2.3), (A.20), (A.21) and (A.22) into (4.13) for the cases $0 < A_{n,n} < 1$ and $A_{n,n} = 0$, respectively, we obtain (4.17). ∎
Proof of Lemma 4.4. The proof is similar to that of Lemma 3.4, noticing that $\int_d^{\infty} S_X(x)\,dx$ is a decreasing function of $d \ge 0$ with $\int_d^{\infty} S_X(x)\,dx \to 0$ as $d \to \infty$, and applying Proposition A.2 to the CTEs in Lemma 4.3. The details are as follows.

(a) When $\alpha < \rho^*$, we have $\frac{1}{\rho^*} - \frac{1}{\alpha} < 0$ and $d^* = S_X^{-1}(\rho^*) < S_X^{-1}(\alpha)$. Thus, it follows from (4.15) that
\[
\begin{aligned}
\min_{D_n^0} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{CTE}_{T_h(X)}\bigl(S_X^{-1}(\alpha), \ldots, S_X^{-1}(\alpha), \alpha\bigr) \\
&= v(\alpha) + \Bigl[\frac{1}{\rho^*} - \frac{1}{\alpha}\Bigr] \sum_{j=1}^{n} c_{n,j} \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx.
\end{aligned} \tag{A.23}
\]
Furthermore, by (A.5), (4.16) and (4.17), we have, for $i = 1, \ldots, n-1$,
\[
\begin{aligned}
\min_{D_n^i} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{CTE}_{T_h(X)}(\underbrace{d^*, \ldots, d^*}_{i\ \text{entries}}, S_X^{-1}(\alpha), \ldots, S_X^{-1}(\alpha), \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{i} c_{n,j}\Bigr) v(\alpha) + \sum_{j=1}^{i} c_{n,j}\, g(d^*) + \Bigl[\frac{1}{\rho^*} - \frac{1}{\alpha}\Bigr] \sum_{j=i+1}^{n} c_{n,j} \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx \\
&= v(\alpha) + \bigl[u(\rho^*) - v(\alpha)\bigr] \sum_{j=1}^{i} c_{n,j} + \Bigl[\frac{1}{\rho^*} - \frac{1}{\alpha}\Bigr] \sum_{j=i+1}^{n} c_{n,j} \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx,
\end{aligned} \tag{A.24}
\]
and, for $i = n$,
\[
\begin{aligned}
\min_{D_n^n} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{CTE}_{T_h(X)}(d^*, \ldots, d^*, \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{n} c_{n,j}\Bigr) v(\alpha) + \sum_{j=1}^{n} c_{n,j}\, g(d^*) = v(\alpha) + \bigl[u(\rho^*) - v(\alpha)\bigr] \sum_{j=1}^{n} c_{n,j}.
\end{aligned} \tag{A.25}
\]
Note that the condition $\alpha < \rho^*$ implies
\[
u(\rho^*) - v(\alpha) = S_X^{-1}(\rho^*) - S_X^{-1}(\alpha) + \Bigl[\frac{1}{\rho^*} - \frac{1}{\alpha}\Bigr] \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx < 0. \tag{A.26}
\]
It is therefore easy to see that the minimum in (A.25) is less than each of the minimums in (A.23) and (A.24), which leads to
\[
\min_{D_n} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\} = \min_{D_n^n} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}.
\]
This justifies both (4.18) and (4.19).
(b) When $\alpha = \rho^*$, we have $d^* = S_X^{-1}(\rho^*) = S_X^{-1}(\alpha)$ and $u(\rho^*) = g(d^*) = v(\alpha)$, so that in this case (4.15) reduces to
\[
\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha) = v(\alpha) \tag{A.27}
\]
for any $(d_{n,1}, \ldots, d_{n,n}) \in D_n^0$. Furthermore, by (A.5), (4.16) and (4.17), we have, for $i = 1, \ldots, n-1$,
\[
\begin{aligned}
\min_{D_n^i} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{CTE}_{T_h(X)}(\underbrace{d^*, \ldots, d^*}_{i\ \text{entries}}, d_{n,i+1}, \ldots, d_{n,n}, \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{i} c_{n,j}\Bigr) v(\alpha) + \sum_{j=1}^{i} c_{n,j}\, g(d^*) = v(\alpha),
\end{aligned} \tag{A.28}
\]
and, for $i = n$,
\[
\begin{aligned}
\min_{D_n^n} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{CTE}_{T_h(X)}(d^*, \ldots, d^*, \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{n} c_{n,j}\Bigr) v(\alpha) + \sum_{j=1}^{n} c_{n,j}\, g(d^*) = v(\alpha),
\end{aligned} \tag{A.29}
\]
where (A.28) holds for any $d_{n,i+1}, \ldots, d_{n,n}$ such that $d^* = S_X^{-1}(\alpha) \le d_{n,i+1} \le \cdots \le d_{n,n}$.

Thus, $\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)$ has the same minimum $v(\alpha)$ on all the sets $D_n^i$, $i = 0, 1, \ldots, n$. Hence, $\min_{D_n} \mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha) = v(\alpha)$, and the minimum CTE is attained at any
\[
(d_{n,1}, \ldots, d_{n,n}) \in D_n^0 \cup \Bigl(\bigcup_{i=1}^{n-1} \bigl\{(d^*, \ldots, d^*, d_{n,i+1}, \ldots, d_{n,n}) : d^* \le d_{n,i+1} \le \cdots \le d_{n,n}\bigr\}\Bigr) \cup \{(d^*, \ldots, d^*)\}.
\]
Because $d^* = S_X^{-1}(\alpha)$ in this case, we obtain
\[
D_n^0 \cup \Bigl(\bigcup_{i=1}^{n-1} \bigl\{(d^*, \ldots, d^*, d_{n,i+1}, \ldots, d_{n,n}) : d^* \le d_{n,i+1} \le \cdots \le d_{n,n}\bigr\}\Bigr) \cup \{(d^*, \ldots, d^*)\} = D_n^0,
\]
which consists of all $(d_{n,1}, \ldots, d_{n,n})$ such that $d^* \le d_{n,1} \le \cdots \le d_{n,n}$. This completes the proof of case (b).
(c) In this case, $\alpha < \rho^*$ implies $\frac{1}{\rho^*} - \frac{1}{\alpha} < 0$ and $d^* = S_X^{-1}(\rho^*) < S_X^{-1}(\alpha)$. Thus, it follows from (4.15) that
\[
\begin{aligned}
\min_{D_n^0} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{CTE}_{T_h(X)}\bigl(S_X^{-1}(\alpha), \ldots, S_X^{-1}(\alpha), \alpha\bigr) \\
&= v(\alpha) + \Bigl[\frac{1}{\rho^*} - \frac{1}{\alpha}\Bigr] \sum_{j=1}^{n} c_{n,j} \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx.
\end{aligned} \tag{A.30}
\]
Furthermore, by (A.6), (4.16) and (4.17), we have, for $i = 1, \ldots, n-1$,
\[
\begin{aligned}
\min_{D_n^i} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{CTE}_{T_h(X)}(\underbrace{0, \ldots, 0}_{i\ \text{entries}}, S_X^{-1}(\alpha), \ldots, S_X^{-1}(\alpha), \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{i} c_{n,j}\Bigr) v(\alpha) + \sum_{j=1}^{i} c_{n,j}\, g(0) + \Bigl[\frac{1}{\rho^*} - \frac{1}{\alpha}\Bigr] \sum_{j=i+1}^{n} c_{n,j} \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx \\
&= v(\alpha) + \bigl[g(0) - v(\alpha)\bigr] \sum_{j=1}^{i} c_{n,j} + \Bigl[\frac{1}{\rho^*} - \frac{1}{\alpha}\Bigr] \sum_{j=i+1}^{n} c_{n,j} \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx,
\end{aligned} \tag{A.31}
\]
and, for $i = n$,
\[
\begin{aligned}
\min_{D_n^n} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}
&= \mathrm{CTE}_{T_h(X)}(0, \ldots, 0, \alpha) \\
&= \Bigl(1 - \sum_{j=1}^{n} c_{n,j}\Bigr) v(\alpha) + \sum_{j=1}^{n} c_{n,j}\, g(0) = v(\alpha) + \sum_{j=1}^{n} c_{n,j} \bigl[g(0) - v(\alpha)\bigr].
\end{aligned} \tag{A.32}
\]
Now, from the condition $\alpha < S_X(0) \le \rho^*$, we have
\[
\frac{1}{\rho^*} \int_0^{S_X^{-1}(\alpha)} S_X(x)\,dx - S_X^{-1}(\alpha) < \frac{1}{\rho^*}\, S_X^{-1}(\alpha)\, S_X(0) - S_X^{-1}(\alpha) \le 0
\quad \text{and} \quad \frac{1}{\rho^*} - \frac{1}{\alpha} < 0.
\]
This leads to
\[
\begin{aligned}
g(0) - v(\alpha) &= (1 + \rho) E[X] - S_X^{-1}(\alpha) - \frac{1}{\alpha} \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx \\
&= \frac{1}{\rho^*} \int_0^{S_X^{-1}(\alpha)} S_X(x)\,dx - S_X^{-1}(\alpha) + \Bigl[\frac{1}{\rho^*} - \frac{1}{\alpha}\Bigr] \int_{S_X^{-1}(\alpha)}^{\infty} S_X(x)\,dx < 0,
\end{aligned}
\]
so the minimum in (A.32) is less than each of the minimums in (A.30) and (A.31). Consequently, we have
\[
\min_{D_n} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\} = \min_{D_n^n} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\},
\]
which confirms both (4.22) and (4.23).

(d) It is easy to verify that $\min_{D_n} \bigl\{\mathrm{CTE}_{T_h(X)}(d_{n,1}, \ldots, d_{n,n}, \alpha)\bigr\}$ does not exist in all other cases. ∎
Lemma A.1. For any $f(x) \in \mathcal{F}$, $I_f(x) = x - f(x)$ is increasing and concave in $x$.

Proof. The concavity of $I_f(x)$ follows immediately from the convexity of $f(x)$. Now suppose there exist two points $x_1$ and $x_2$ with $0 \le x_1 < x_2$ such that $I_f(x_1) - I_f(x_2) > 0$, i.e.,
\[
\frac{f(x_2) - f(x_1)}{x_2 - x_1} > 1. \tag{A.33}
\]
Moreover, it follows from the convexity of $f(x)$ that $f(x_2) \le \frac{x - x_2}{x - x_1} f(x_1) + \frac{x_2 - x_1}{x - x_1} f(x)$ for $x > x_2$, or, equivalently, $f(x) \ge \frac{f(x_2) - f(x_1)}{x_2 - x_1}\, x + \frac{x_2 f(x_1) - x_1 f(x_2)}{x_2 - x_1}$. Hence, it follows from (A.33) that there exists a constant $x_0$ such that $f(x_0) > x_0$, which contradicts the assumption that $f(x) \le x$ for all $x \ge 0$. Therefore, we conclude that $I_f(x)$ is increasing. ∎
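As a quick numerical check of Lemma A.1 (illustrative only), the sketch below takes $f(x) = \sqrt{1+x^2} - 1$ as an assumed example of a convex function with $0 \le f(x) \le x$, and verifies on a grid that $I_f(x) = x - f(x)$ is increasing and concave.

```python
import numpy as np

f = lambda t: np.sqrt(1.0 + t**2) - 1.0      # convex, 0 <= f(t) <= t (assumed example)
x = np.linspace(0.0, 50.0, 2001)
retained = x - f(x)                          # I_f(x) = x - f(x)

first_diff = np.diff(retained)
assert np.all(first_diff > 0)                # I_f increasing (Lemma A.1)
assert np.all(np.diff(first_diff) < 1e-12)   # concavity: successive slopes decrease
print("I_f is increasing and concave on the grid")
```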
Lemma A.2. If $I_f(x)$ is an increasing concave function, then the distribution function of $I_f(X)$, $F_{I_f(X)}(x)$, has at most one discontinuity on $(0, \infty)$. If such a discontinuity exists, then
\[
I_f(x) = \begin{cases} g(x), & 0 < x \le e_0, \\ g(e_0), & x > e_0, \end{cases} \tag{A.34}
\]
for some strictly increasing function $g(x)$ and some constant $e_0 \in (0, \infty)$, and $g(e_0)$ is the only discontinuity of $F_{I_f(X)}(x)$.

Proof. Since $I_f(x)$ is increasing and concave, $I_f(x)$ must be either strictly increasing on $[0, \infty)$ or of the form (A.34) for some strictly increasing function $g$ and some constant $e_0 \in (0, \infty)$. Otherwise, there would exist an interval $[x_1, x_2]$ on which $I_f(x)$ is constant while being strictly increasing on $[x_2, x_2 + \varepsilon)$ for some $\varepsilon > 0$; then, for any real number $0 < \delta < \varepsilon$,
\[
\begin{aligned}
\frac{\delta}{\delta + x_2 - x_1} I_f(x_1) + \frac{x_2 - x_1}{\delta + x_2 - x_1} I_f(\delta + x_2)
&= \frac{\delta}{\delta + x_2 - x_1} I_f(x_2) + \frac{x_2 - x_1}{\delta + x_2 - x_1} I_f(\delta + x_2) \\
&> \frac{\delta}{\delta + x_2 - x_1} I_f(x_2) + \frac{x_2 - x_1}{\delta + x_2 - x_1} I_f(x_2) = I_f(x_2),
\end{aligned}
\]
which violates the concavity of $I_f(x)$.

If $I_f(x)$ is strictly increasing globally, then, by the assumption that $F_X(x)$ is a one-to-one distribution function on $(0, \infty)$, $F_{I_f(X)}(x)$ has no discontinuity. On the other hand, if $I_f(x)$ takes the form (A.34), then
\[
F_{I_f(X)}(x) = \begin{cases} F_X(0), & x = 0, \\ F_X(g^{-1}(x)), & 0 < x \le g(e_0), \\ 1, & x > g(e_0). \end{cases}
\]
Therefore, $g(e_0)$ is the only discontinuity on $(0, \infty)$. ∎
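Lemma A.2 can be illustrated with stop-loss reinsurance, where the retained loss $I_f(x) = \min(x, d)$ is increasing, concave, and constant beyond $d$, so the distribution of $I_f(X)$ has its single jump at $d$. The exponential loss and the retention below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=1_000_000)   # illustrative losses
d = 1.5                                          # illustrative retention

retained = np.minimum(x, d)                      # I_f(x) = min(x, d): flat beyond d

atom = np.mean(retained == d)                    # probability mass at the jump point d
print(atom, np.exp(-d))                          # atom is close to Pr{X >= d} = S_X(d)
```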
Proof of Lemma 4.1. We verify the result by considering two cases. First, suppose the distribution function of $I_f(X)$, denoted by $F_{I_f(X)}(x)$, is continuous at $\mathrm{VaR}_{I_f(X)}(\alpha)$. By Lemma 3.1, there exists a sequence of functions $h_n(x) \in \mathcal{H}$ such that $\lim_{n\to\infty} h_n(x) = f(x)$ for all $x \ge 0$, and hence it follows from Proposition A.1 (b) and the fact that $h_n(x)$ and $f(x)$ are increasing and continuous that
\[
\lim_{n\to\infty} \mathrm{VaR}_{I_{h_n}(X)}(\alpha) = \mathrm{VaR}_{I_f(X)}(\alpha).
\]
Consequently,
\[
I_{h_n}(X) - \mathrm{VaR}_{I_{h_n}(X)}(\alpha) \xrightarrow{\ \mathrm{a.s.}\ } I_f(X) - \mathrm{VaR}_{I_f(X)}(\alpha),
\]
which, by the dominated convergence theorem, immediately leads to (4.3).

Next, suppose $\mathrm{VaR}_{I_f(X)}(\alpha)$ is a discontinuity of $F_{I_f(X)}(x)$. By Lemma A.1, $I_f(x)$ is increasing and concave in $x$, and hence it follows from Lemma A.2 that $\mathrm{VaR}_{I_f(X)}(\alpha)$ is the only discontinuity of $F_{I_f(X)}(x)$, and
\[
I_f(x) = \begin{cases} g(x), & 0 < x \le e_0, \\ g(e_0), & x > e_0, \end{cases} \tag{A.35}
\]
for some strictly increasing and continuous function $g(x)$ and a constant $e_0$ such that $g(e_0) = \mathrm{VaR}_{I_f(X)}(\alpha)$. Consequently,
\[
\Pr\{I_f(X) \ge \mathrm{VaR}_{I_f(X)}(\alpha)\} = \Pr\{I_f(X) \ge g(e_0)\} = \Pr\{X \ge e_0\}.
\]
To verify the result in this case, we construct a sequence of functions $h_n(x)$ in the class $\mathcal{H}$ satisfying (4.2) and (4.3) as follows.

Denote $x'_{2^n, i} = i\,e_0/2^n$ and $a'_{2^n, i} = f'_+(x'_{2^n, i})$, and let $l_i(x) = a'_{2^n, i}(x - b'_{2^n, i})$, with $b'_{2^n, i} \in \mathbb{R}$, be the tangent line of $f(x)$ at $x'_{2^n, i}$, $i = 0, 1, 2, \ldots, 2^n$. Then
\[
b'_{2^n, i} = x'_{2^n, i} - \frac{f(x'_{2^n, i})}{f'_+(x'_{2^n, i})} = \frac{i}{2^n}\, e_0 - \frac{f(\frac{i}{2^n} e_0)}{f'_+(\frac{i}{2^n} e_0)},
\]
and $l_{2^n}(x) = x - g(e_0) = f(x)$ for $x \ge e_0$, i.e., $a'_{2^n, 2^n} = f'_+(e_0) = 1$ and $b'_{2^n, 2^n} = g(e_0) = e_0 - f(e_0)$. Note that $a'_{2^n, i-1} < a'_{2^n, i} \le 1$, $i = 1, 2, \ldots, 2^n$, by the assumption that $f(x)$ is strictly increasing and convex on the interval $(0, e_0]$ and the fact that $0 \le f'_+(x) \le 1$. Take
\[
h_n(x) = \max\{0, l_1(x), l_2(x), \ldots, l_{2^n - 1}(x), l_{2^n}(x)\}.
\]
Then, for any given $\varepsilon > 0$, it follows from the continuity of $f(x)$ and the construction of $h_n(x)$ that there exists a large enough integer $n$ such that $|f(x) - l_i(x)| \le \frac{\varepsilon}{2}$ and $|h_n(x) - l_i(x)| \le \frac{\varepsilon}{2}$ for $x \in [x'_{2^n, i}, x'_{2^n, i+1})$, $i = 0, 1, 2, \ldots, 2^n - 1$, and $h_n(x) = l_{2^n}(x) = f(x)$ for $x \in [e_0, \infty)$. Hence,
\[
\lim_{n\to\infty} h_n(x) = f(x) \quad \text{for all } x \ge 0.
\]
Furthermore, $h_n(x)$ can be rewritten as follows:
\[
h_n(x) = \sum_{i=1}^{2^n} \bigl(l_i(x) - l_{i-1}(x)\bigr)_+ = \sum_{i=1}^{2^n} (a'_{2^n, i} - a'_{2^n, i-1}) \Bigl(x - \frac{b'_{2^n, i}\, a'_{2^n, i} - b'_{2^n, i-1}\, a'_{2^n, i-1}}{a'_{2^n, i} - a'_{2^n, i-1}}\Bigr)_+ = \sum_{i=1}^{2^n} c_{2^n, i}\, (x - d_{2^n, i})_+,
\]
where
\[
c_{2^n, i} = a'_{2^n, i} - a'_{2^n, i-1}, \qquad d_{2^n, i} = \frac{b'_{2^n, i}\, a'_{2^n, i} - b'_{2^n, i-1}\, a'_{2^n, i-1}}{a'_{2^n, i} - a'_{2^n, i-1}},
\]
for $i = 1, 2, \ldots, 2^n$. Obviously, $\sum_{i=1}^{2^n} c_{2^n, i} = \sum_{i=1}^{2^n} (a'_{2^n, i} - a'_{2^n, i-1}) = a'_{2^n, 2^n} = 1$ with $c_{2^n, i} > 0$. Moreover, by the convexity of $f(x)$, we have
\[
\begin{aligned}
b'_{2^n, i}\, a'_{2^n, i} - b'_{2^n, i-1}\, a'_{2^n, i-1}
&= \Bigl[\frac{i}{2^n} e_0 - \frac{f(\frac{i}{2^n} e_0)}{f'_+(\frac{i}{2^n} e_0)}\Bigr] f'_+\Bigl(\frac{i}{2^n} e_0\Bigr) - \Bigl[\frac{i-1}{2^n} e_0 - \frac{f(\frac{i-1}{2^n} e_0)}{f'_+(\frac{i-1}{2^n} e_0)}\Bigr] f'_+\Bigl(\frac{i-1}{2^n} e_0\Bigr) \\
&= \frac{i-1}{2^n} e_0 \Bigl[f'_+\Bigl(\frac{i}{2^n} e_0\Bigr) - f'_+\Bigl(\frac{i-1}{2^n} e_0\Bigr)\Bigr] + \frac{e_0}{2^n} \Bigl[f'_+\Bigl(\frac{i}{2^n} e_0\Bigr) - \frac{f(\frac{i}{2^n} e_0) - f(\frac{i-1}{2^n} e_0)}{e_0/2^n}\Bigr] > 0,
\end{aligned}
\]
which means $d_{2^n, i} > 0$ for $i = 1, 2, \ldots, 2^n$. Therefore, $h_n(x) \in \mathcal{H}$.
We now examine $d_{2^n, i}$, which can be expressed as
\[
d_{2^n, i} = \frac{i}{2^n}\, e_0 + \frac{\frac{e_0}{2^n}\, f'_+(\frac{i-1}{2^n} e_0) - \bigl[f(\frac{i}{2^n} e_0) - f(\frac{i-1}{2^n} e_0)\bigr]}{f'_+(\frac{i}{2^n} e_0) - f'_+(\frac{i-1}{2^n} e_0)}. \tag{A.36}
\]
By the convexity of $f(x)$, we have
\[
\begin{aligned}
d_{2^n, i} - d_{2^n, i-1}
&= \frac{e_0}{2^n} + \frac{\frac{e_0}{2^n}\bigl[f'_+(\frac{i-1}{2^n} e_0) - \frac{f(\frac{i}{2^n} e_0) - f(\frac{i-1}{2^n} e_0)}{e_0/2^n}\bigr]}{f'_+(\frac{i}{2^n} e_0) - f'_+(\frac{i-1}{2^n} e_0)} + \frac{\frac{e_0}{2^n}\bigl[\frac{f(\frac{i-1}{2^n} e_0) - f(\frac{i-2}{2^n} e_0)}{e_0/2^n} - f'_+(\frac{i-2}{2^n} e_0)\bigr]}{f'_+(\frac{i-1}{2^n} e_0) - f'_+(\frac{i-2}{2^n} e_0)} \\
&= \frac{\frac{e_0}{2^n}\bigl[f'_+(\frac{i}{2^n} e_0) - \frac{f(\frac{i}{2^n} e_0) - f(\frac{i-1}{2^n} e_0)}{e_0/2^n}\bigr]}{f'_+(\frac{i}{2^n} e_0) - f'_+(\frac{i-1}{2^n} e_0)} + \frac{\frac{e_0}{2^n}\bigl[\frac{f(\frac{i-1}{2^n} e_0) - f(\frac{i-2}{2^n} e_0)}{e_0/2^n} - f'_+(\frac{i-2}{2^n} e_0)\bigr]}{f'_+(\frac{i-1}{2^n} e_0) - f'_+(\frac{i-2}{2^n} e_0)} > 0,
\end{aligned}
\]
which means $d_{2^n, i}$ is increasing in $i$, and hence $d_{2^n, 2^n}$ is the maximum among $\{d_{2^n, i},\ i = 1, 2, \ldots, 2^n\}$. As a result, $I_{h_n}$ can be represented in the form of (A.35) with $e_0$ replaced by $d_{2^n, 2^n}$, i.e.,
\[
I_{h_n}(x) = \begin{cases} k(x), & 0 < x \le d_{2^n, 2^n}, \\ k(d_{2^n, 2^n}), & x > d_{2^n, 2^n}, \end{cases} \tag{A.37}
\]
for some strictly increasing and continuous function k(x). Moreover, it follows from (A.36)
and the convexity of f(x) that
d2n,2n = e0 +
e02n f ′+(2n−1
2n e0)−[f(e0)− f(2n−1
2n e0)]
1− f ′+(2n−12n e0)
≤ e0, (A.38)
and
d2n,2n = e0 +
e02n f ′+(2n−1
2n e0)−[f(e0)− f(2n−1
2n e0)]
1− f ′+(2n−12n e0)
= e0 −e02n
[1− f ′+(2n−1
2n e0)]
1− f ′+(2n−12n e0)
+
e02n f ′+(e0)−
[f(e0)− f(2n−1
2n e0)]
1− f ′+(2n−12n e0)
≥ e0 −e0
2n. (A.39)
28
Combining (A.38) and (A.39) immediately yields
limn→∞
d2n,2n = e0.
Now turn to investigate Pr{Ihn(X) ≥ VaRIhn (X)(α)}. From (A.37) and (A.38) we have
Pr{Ihn(X) ≥ Ihn(d2n,2n)} = Pr{X ≥ d2n,2n} ≥ Pr{X ≥ e0} ≥ α,
and
Pr{Ihn(X) > Ihn(d2n,2n)} = 0.
These results in turn imply VaRIhn (X)(α) = Ihn(d2n,2n), and consequently,
limn→∞
Pr{Ihn(X) ≥ VaRIhn (X)(α)} = limn→∞
Pr{Ihn(X) ≥ Ihn(d2n,2n)} = limn→∞
Pr{X ≥ d2n,2n}
= Pr{X ≥ e0} = Pr{If (X) ≥ VaRIf (X)(α)}.
This completes the proof. !
Acknowledgment
Cai acknowledges research support from the Natural Sciences and Engineering Research Council of Canada (NSERC). Tan thanks the funding from the Cheung Kong Scholar Program (China), the Canada Research Chairs Program and NSERC. Weng acknowledges the research support from the Institute of Quantitative Finance and Insurance and the CAS/SOA Ph.D. grant. The authors also thank the anonymous referee for the constructive comments and suggestions, which have significantly improved the paper.
References
[1] Artzner, P., Delbaen, F., Eber, J.M. and Heath, D., 1999. Coherent measures of risk.
Mathematical Finance 9, 203-228.
[2] Basak, S., Shapiro, A., 2001. Value-at-Risk-Based Risk Management: Optimal Policies
and Asset Prices. The Review of Financial Studies 14(2), 371-405.
[3] Bowers, N.L., Gerber, H.U., Hickman, J.C., Jones, D.A. and Nesbitt, C.J., 1997.
Actuarial Mathematics. Second Edition. The Society of Actuaries, Schaumburg.
[4] Cai, J. and Li, H., 2005. Conditional tail expectations for multivariate phase type dis-
tributions. Journal of Applied Probability 42, 810-825.
[5] Cai, J. and Tan, K.S., 2007. Optimal retention for a stop-loss reinsurance under the
VaR and CTE risk measures. ASTIN Bulletin 37(1), 93-112.
[6] Gajek, L., Zagrodny, D., 2004. Optimal reinsurance under general risk measures. Insur-
ance: Mathematics and Economics 34, 227-240.
[7] Gerber, H.U., 1979. An Introduction to Mathematical Risk Theory. S.S. Huebner Foun-
dation for Insurance Education, Wharton School, University of Pennsylvania, Philadel-
phia.
[8] Inui, K. and Kijima, M., 2005. On the significance of expected shortfall as a coherent
risk measure. Journal of Banking and Finance 29, 853-864.
[9] Kaluszka, M., 2001. Optimal reinsurance under mean-variance premium principles. In-
surance: Mathematics and Economics 28, 61-67.
[10] Kaluszka, M., 2004a. An extension of Arrow’s result on optimality of a stop loss contract.
Insurance: Mathematics and Economics 35, 527-536.
[11] Kaluszka, M., 2004b. Mean-variance optimal reinsurance arrangements. Scandinavian
Actuarial Journal 2004(1), 28-41.
[12] Kaluszka, M., 2005. Optimal reinsurance under convex principles of premium calcula-
tion. Insurance: Mathematics and Economics 36, 375-398.
[13] Kaas, R., Goovaerts, M., Dhaene, J. and Denuit, M., 2001. Modern Actuarial Risk
Theory. Kluwer Academic Publishers, Boston.
[14] McNeil, A.J., Frey, R. and Embrechts, P., 2005. Quantitative Risk Management: Concepts,
Techniques, and Tools. Princeton University Press, Princeton.
[15] Muller, A. and Stoyan, D., 2002. Comparison Methods for Stochastic Models and Risks.
Wiley Series in Probability and Statistics.
[16] Yamai, Y. and Yoshiba, T. 2005. Value-at-risk versus expected shortfall: A practical
perspective. Journal of Banking and Finance 29, 997-1015.